fix: race condition of the lastMatcherEvent #6080
Conversation
Walkthrough

The change modifies the TemplateExecuter's Execute flow so that the lastMatcherEvent mutex is released via a deferred Unlock instead of an explicit Unlock call, ensuring the lock is released exactly once on every exit path.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Caller
    participant Executor as TemplateExecuter
    participant Matcher as lastMatcherEvent
    Caller->>Executor: Call Execute()
    Executor->>Matcher: Lock()
    Note over Executor: Processing logic and error handling
    Executor-->>Matcher: defer Unlock() (automatically called on exit)
    Executor->>Caller: Return result
```
The race scenario was pretty inconsistent when I tried to reproduce it, but the unit test below triggers it when run with the -race flag:

```
go test -race -run TestUnlockRaceLastMatcherEvent -v -timeout 30m
```

Note: this also surfaces some other warnings that might indicate races; those are not part of this PR.

```go
package tmplexec_test
import (
"context"
"fmt"
"runtime"
"strings"
"sync"
"testing"
"time"
"github.com/logrusorgru/aurora"
"github.com/projectdiscovery/nuclei/v3/pkg/model"
"github.com/projectdiscovery/nuclei/v3/pkg/operators"
"github.com/projectdiscovery/nuclei/v3/pkg/operators/extractors"
"github.com/projectdiscovery/nuclei/v3/pkg/operators/matchers"
"github.com/projectdiscovery/nuclei/v3/pkg/output"
"github.com/projectdiscovery/nuclei/v3/pkg/protocols"
"github.com/projectdiscovery/nuclei/v3/pkg/protocols/common/contextargs"
"github.com/projectdiscovery/nuclei/v3/pkg/scan"
templateTypes "github.com/projectdiscovery/nuclei/v3/pkg/templates/types"
"github.com/projectdiscovery/nuclei/v3/pkg/tmplexec"
pkgTypes "github.com/projectdiscovery/nuclei/v3/pkg/types"
)
// MockRaceRequest is a special request implementation designed to trigger the race condition
type MockRaceRequest struct{}
func (m *MockRaceRequest) Compile(options *protocols.ExecutorOptions) error { return nil }
func (m *MockRaceRequest) Requests() int { return 1 }
func (m *MockRaceRequest) Type() templateTypes.ProtocolType { return templateTypes.HTTPProtocol }
func (m *MockRaceRequest) GetCompiledOperators() []*operators.Operators { return nil }
func (m *MockRaceRequest) GetID() string { return "" }
func (m *MockRaceRequest) Extract(data map[string]interface{}, extractor *extractors.Extractor) map[string]struct{} {
return nil
}
func (m *MockRaceRequest) Match(data map[string]interface{}, matcher *matchers.Matcher) (bool, []string) {
return false, nil
}
func (m *MockRaceRequest) MakeResultEvent(wrapped *output.InternalWrappedEvent) []*output.ResultEvent {
return nil
}
func (m *MockRaceRequest) MakeResultEventItem(wrapped *output.InternalWrappedEvent) *output.ResultEvent {
return &output.ResultEvent{}
}
func (m *MockRaceRequest) ExecuteWithResults(ctx *contextargs.Context, input output.InternalEvent, payloads output.InternalEvent, callback protocols.OutputEventCallback) error {
event := &output.InternalWrappedEvent{
InternalEvent: map[string]interface{}{
"test": "value",
"host": "test-host",
},
Results: []*output.ResultEvent{
{
TemplateID: "test-template",
Type: "http",
Host: "test-host",
},
},
// Deliberately NOT setting OperatorsResult to ensure it goes into lastMatcherEvent path
}
// Signal to the context that we're calling the callback (this will trigger context cancellation in the test)
select {
case <-ctx.Context().Done():
// Context already cancelled, still call callback to increase race potential
callback(event)
return ctx.Context().Err()
default:
// Call the callback which will process the event
callback(event)
return nil
}
}
type mockOutput struct{}
func (m *mockOutput) Write(event *output.ResultEvent) error { return nil }
func (m *mockOutput) WriteFailure(event *output.InternalWrappedEvent) error { return nil }
func (m *mockOutput) Request(templateID, url, requestType string, err error) {}
func (m *mockOutput) WriteStoreDebugData(host, templateID, eventType string, data string) {}
func (m *mockOutput) Close() {}
func (m *mockOutput) Colorizer() aurora.Aurora { return aurora.NewAurora(false) }
func (m *mockOutput) RequestStatsLog(statusCode, response string) {}
type mockProgress struct{}
func (m *mockProgress) Stop() {}
func (m *mockProgress) Init(hostCount int64, rulesCount int, requestCount int64) {}
func (m *mockProgress) AddToTotal(delta int64) {}
func (m *mockProgress) IncrementRequests() {}
func (m *mockProgress) SetRequests(count uint64) {}
func (m *mockProgress) IncrementMatched() {}
func (m *mockProgress) IncrementErrorsBy(count int64) {}
func (m *mockProgress) IncrementFailedRequestsBy(count int64) {}
func (m *mockProgress) IncrementTotal() {}
// createRaceExecutorOptions creates options specifically configured to trigger the race
func createRaceExecutorOptions() *protocols.ExecutorOptions {
return &protocols.ExecutorOptions{
Options: &pkgTypes.Options{
TemplateThreads: 100, // High concurrency
Timeout: 1, // Short timeout to increase cancellations
MatcherStatus: true, // Required for the lastMatcherEvent handling path
},
TemplateID: "test-race-template",
TemplateInfo: model.Info{
Name: "Race Test Template",
},
Output: &mockOutput{},
Progress: &mockProgress{},
}
}
func TestUnlockRaceLastMatcherEvent(t *testing.T) {
// Create a mock request designed to trigger the race
mockReq := &MockRaceRequest{}
// Create executor options with high concurrency settings
opts := createRaceExecutorOptions()
opts.CreateTemplateCtxStore()
// Channel to collect success signal
successChan := make(chan bool, 1)
// Run many iterations to increase chance of hitting the race
numIterations := 5000
// Run multiple parallel test instances to increase chances
var outerWg sync.WaitGroup
numParallel := 10
outerWg.Add(numParallel)
for p := 0; p < numParallel; p++ {
go func(parallelID int) {
defer outerWg.Done()
for i := 0; i < numIterations/numParallel; i++ {
// Create unique iteration ID
iterID := parallelID*numIterations/numParallel + i
// Create executor for each iteration to avoid shared state
executor, err := tmplexec.NewTemplateExecuter([]protocols.Request{mockReq}, opts)
if err != nil {
continue
}
if err := executor.Compile(); err != nil {
continue
}
// Use an extremely short timeout to increase likelihood of interruption
ctx, cancel := context.WithTimeout(context.Background(), time.Microsecond*5)
// Create metaInput and scan context
metaInput := contextargs.NewMetaInput()
metaInput.Input = fmt.Sprintf("test-input-%d", iterID)
ctxArgs := contextargs.New(ctx)
ctxArgs.MetaInput = metaInput
scanCtx := scan.NewScanContext(ctx, ctxArgs)
// Add errors to the scan context to ensure GenerateErrorMessage returns something
scanCtx.LogError(fmt.Errorf("test error %d", iterID))
scanCtx.LogError(fmt.Errorf("another test error %d", iterID))
// Launch multiple goroutines that will race to execute, cancel, and cleanup
var wg sync.WaitGroup
wg.Add(5)
// Key technique: Launch all goroutines in tight sequence with minimal delays
// to maximize chance they'll interfere with each other
// Goroutine 1: Execute
go func() {
defer wg.Done()
_, err := executor.Execute(scanCtx)
if err != nil {
if strings.Contains(err.Error(), "sync: Unlock of unlocked RWMutex") {
// Found the race condition!
select {
case successChan <- true:
t.Logf("SUCCESS! Found race at iteration %d: %v", iterID, err)
default:
}
cancel() // Cancel to stop other goroutines
}
}
}()
// Goroutine 2: Execute again immediately with same context
go func() {
defer wg.Done()
// Execute again immediately, don't wait
_, err := executor.Execute(scanCtx)
if err != nil {
if strings.Contains(err.Error(), "sync: Unlock of unlocked RWMutex") {
select {
case successChan <- true:
t.Logf("SUCCESS! Found race in duplicate execute: %v", err)
default:
}
}
}
}()
// Goroutine 3: Cancel context during execution but still try to execute again
go func() {
defer wg.Done()
// Cancel immediately
runtime.Gosched() // Force thread switch
cancel()
// Even though cancelled, try to execute again - this is critical for the race
_, err := executor.Execute(scanCtx)
if err != nil {
if strings.Contains(err.Error(), "sync: Unlock of unlocked RWMutex") {
select {
case successChan <- true:
t.Logf("SUCCESS! Found race after context cancel: %v", err)
default:
}
}
}
}()
// Goroutine 4: Remove template context during execution and execute again
go func() {
defer wg.Done()
runtime.Gosched() // Force thread switch
opts.RemoveTemplateCtx(metaInput)
// Direct attack on the race: try to execute right after removal
_, err := executor.Execute(scanCtx)
if err != nil {
if strings.Contains(err.Error(), "sync: Unlock of unlocked RWMutex") {
select {
case successChan <- true:
t.Logf("SUCCESS! Found race after RemoveTemplateCtx: %v", err)
default:
}
}
}
}()
// Goroutine 5: Manipulate the context to cause more chaos
go func() {
defer wg.Done()
// Add more errors while other goroutines are processing
scanCtx.LogError(fmt.Errorf("dynamic error during execution %d", iterID))
_, err := executor.Execute(scanCtx)
if err != nil {
if strings.Contains(err.Error(), "sync: Unlock of unlocked RWMutex") {
select {
case successChan <- true:
t.Logf("SUCCESS! Found race in chaos goroutine: %v", err)
default:
}
}
}
}()
// Create a tight timeout to prevent test from hanging
// but also to increase pressure
timeoutDuration := time.Millisecond * 10
if iterID%3 == 0 {
// Sometimes use an even shorter timeout for more chaos
timeoutDuration = time.Microsecond * 100
}
// Wait with timeout
done := make(chan struct{})
go func() {
wg.Wait()
close(done)
}()
select {
case <-done:
// Goroutines completed
case <-time.After(timeoutDuration):
// Timed out - continue to next iteration
}
// Cleanup
cancel()
// Force GC more frequently
if iterID%50 == 0 {
runtime.GC()
}
// Check if race was found
select {
case <-successChan:
// Race found!
return
default:
// Continue
}
}
}(p)
}
// Wait for all parallel tests with timeout
waitDone := make(chan struct{})
go func() {
outerWg.Wait()
close(waitDone)
}()
select {
case <-waitDone:
// All tests completed
case <-time.After(time.Second * 10):
// Global timeout - skip remaining tests
t.Log("Test timed out after 10 seconds")
case <-successChan:
// Race found in one of the parallel tests
t.Log("Successfully reproduced the race condition!")
return
}
// If we get here, we failed to reproduce the race
t.Log("Failed to reproduce the race condition after", numIterations, "iterations across", numParallel, "parallel tests")
}
```
PR Overview
This PR fixes a race condition in the template executor by improving the locking mechanism around lastMatcherEvent.
- Introduces a defer statement to ensure the mutex is always unlocked.
- Removes the explicit unlock call to prevent the "sync: Unlock of unlocked RWMutex" panic.
Reviewed Changes
| File | Description |
| --- | --- |
| pkg/tmplexec/exec.go | Uses a proper defer pattern for unlocking the lastMatcherEvent to address a race condition that caused a panic |
Copilot reviewed 1 out of 1 changed files in this pull request and generated no comments.
Comments suppressed due to low confidence (1)

pkg/tmplexec/exec.go:218

- Review whether writeFailureCallback might require the mutex to be unlocked prior to being called. If it doesn't rely on the lock, consider unlocking explicitly before calling writeFailureCallback to reduce the lock hold time and lower the risk of potential deadlocks (a sketch of that alternative follows below).

```go
defer lastMatcherEvent.Unlock()
```
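For illustration only, a minimal self-contained sketch of the alternative the reviewer describes, assuming the callback does not itself need the lock held (not verified here); `wrappedEvent`, `reportFailure`, and `handleFailure` are hypothetical stand-ins for output.InternalWrappedEvent, writeFailureCallback, and the exec.go failure path, not the actual code:

```go
package main

import (
	"fmt"
	"sync"
)

// wrappedEvent is a hypothetical stand-in for the event type, which embeds
// a sync.RWMutex guarding its fields.
type wrappedEvent struct {
	sync.RWMutex
	InternalEvent map[string]interface{}
}

// reportFailure is a hypothetical stand-in for the failure callback.
func reportFailure(ev *wrappedEvent, msg string) {
	fmt.Println("matcher-status failure:", msg)
}

func handleFailure(ev *wrappedEvent, errMessage string) {
	ev.Lock()
	ev.InternalEvent["error"] = errMessage
	ev.Unlock() // released before the callback so the critical section stays short
	reportFailure(ev, errMessage)
}

func main() {
	handleFailure(&wrappedEvent{InternalEvent: map[string]interface{}{}}, "no results found")
}
```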
Thanks for your contribution @knakul853 ! :)
Proposed changes
This PR fixes a race condition that causes the "sync: Unlock of unlocked RWMutex" panic in the template executor, reported in issue #6062.
Problem
The issue occurs in pkg/tmplexec/exec.go, where lastMatcherEvent is modified and then passed to writeFailureCallback. Multiple goroutines can access this event simultaneously, leading to a race condition in which one goroutine tries to unlock a mutex that was never locked or was already unlocked by another goroutine.
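For reference, a minimal standalone program (not nuclei code) that produces the same runtime error by unlocking an RWMutex that is not write-locked:

```go
package main

import "sync"

func main() {
	var mu sync.RWMutex
	mu.Lock()
	mu.Unlock()
	// A second Unlock without a matching Lock aborts the program with:
	// fatal error: sync: Unlock of unlocked RWMutex
	mu.Unlock()
}
```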
Solution
Improve the locking mechanism with a proper defer pattern to ensure the mutex is always unlocked after being locked:
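A minimal self-contained sketch of the pattern, not the actual exec.go diff; `matcherEvent`, `writeFailure`, and `handleLastMatcherEvent` are hypothetical stand-ins for the lastMatcherEvent value, writeFailureCallback, and the surrounding executor logic:

```go
package main

import (
	"fmt"
	"sync"
)

// matcherEvent is a hypothetical stand-in for the event, which embeds a
// mutex guarding its internal map.
type matcherEvent struct {
	sync.RWMutex
	InternalEvent map[string]interface{}
}

// writeFailure stands in for the failure callback invoked on non-matches.
func writeFailure(ev *matcherEvent, msg string) {
	fmt.Println("failure:", msg)
}

func handleLastMatcherEvent(ev *matcherEvent, errMessage string) {
	ev.Lock()
	// The deferred Unlock runs exactly once on every return path (including
	// early returns and panics), so no path can skip the unlock and no path
	// can unlock a mutex that was never locked.
	defer ev.Unlock()

	ev.InternalEvent["error"] = errMessage
	writeFailure(ev, errMessage)
}

func main() {
	ev := &matcherEvent{InternalEvent: map[string]interface{}{}}
	handleLastMatcherEvent(ev, "no results found")
}
```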
I tried multiple solutions, but this one seems simple and makes sure the resource is unlocked per goroutine.