Memory leak in v8go.(*Value).String #367

Open · kevburnsjr opened this issue Jan 29, 2023 · 0 comments

Calling String() on the values returned by info.Args() in a callback causes v8go@v0.8.0 to leak memory.

```go
// Callback that stringifies every argument; per the profiles below,
// the strings produced by vals[i].String() are what accumulate.
func (vm *v8vm) callback(info *v8go.FunctionCallbackInfo) (val *v8go.Value) {
	defer info.Release()
	var vals = info.Args()
	var args = make([]string, len(vals))
	for i := range vals {
		args[i] = vals[i].String()
	}
	// ...
}
```
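
For reference, a minimal standalone harness along these lines shows the same growth (a sketch assuming the v0.8.0 constructor signatures; the script body and iteration count are illustrative, not our production code):

```go
package main

import (
	"fmt"
	"runtime"

	"rogchap.com/v8go"
)

func main() {
	iso := v8go.NewIsolate()
	defer iso.Dispose()

	// Same shape as the callback above: stringify every argument.
	cb := v8go.NewFunctionTemplate(iso, func(info *v8go.FunctionCallbackInfo) *v8go.Value {
		defer info.Release()
		for _, v := range info.Args() {
			_ = v.String() // allocates on the Go heap via C.GoStringN
		}
		return nil
	})

	global := v8go.NewObjectTemplate(iso)
	if err := global.Set("cb", cb); err != nil {
		panic(err)
	}

	ctx := v8go.NewContext(iso, global)
	defer ctx.Close()

	// Drive the callback repeatedly and watch heap usage climb.
	for i := 0; i < 1_000_000; i++ {
		if _, err := ctx.RunScript(`cb("some argument string")`, "leak.js"); err != nil {
			panic(err)
		}
	}

	runtime.GC()
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("heap in use after GC: %dMB\n", m.HeapInuse>>20)
}
```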

pprof output:

```
Type: inuse_space
Time: Jan 29, 2023 at 9:20am (-05)
Showing nodes accounting for 2064.64MB, 100% of 2064.64MB total
----------------------------------------------------------+-------------
      flat  flat%   sum%        cum   cum%   calls calls% + context
----------------------------------------------------------+-------------
                                         1952.78MB   100% |   rogchap.com/v8go.(*Value).String \go\pkg\mod\rogchap.com\v8go@v0.8.0\value.go:244 (inline)
 1952.78MB 94.58% 94.58%  1952.78MB 94.58%                | rogchap.com/v8go._Cfunc_GoStringN _cgo_gotypes.go:572
----------------------------------------------------------+-------------
```
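
(For anyone reproducing: a heap profile like this can be collected with the standard net/http/pprof endpoint; the listen address below is a placeholder.)

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* on the default mux
)

func main() {
	// Then inspect live heap usage with:
	//   go tool pprof -inuse_space http://localhost:6060/debug/pprof/heap
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```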

Call graph: *(image omitted)*

- We have 12 nodes, each running a pool of 128 isolates.
- The service uses v8go to process events at ~40 req/s per node.
- We reliably close contexts after every event.
- We reliably dispose of each isolate after it processes ~100 events or once its heap exceeds 20MB.
- We call runtime.GC() manually after every isolate disposal.
- The new GOMEMLIMIT parameter has no effect on memory growth.

A sketch of this isolate lifecycle is below.
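
(Simplified sketch of the disposal policy just described; Event, handleEvent, and the field names are illustrative stand-ins for our real event-processing code.)

```go
package main

import (
	"runtime"

	"rogchap.com/v8go"
)

// Event is an illustrative placeholder for the events the service processes.
type Event struct{}

const (
	maxEvents   = 100      // recycle the isolate after ~100 events...
	maxHeapSize = 20 << 20 // ...or once its heap exceeds 20MB
)

type v8vm struct {
	iso        *v8go.Isolate
	eventCount int
}

func (vm *v8vm) process(ev Event) {
	ctx := v8go.NewContext(vm.iso)
	handleEvent(ctx, ev) // runs scripts that invoke the callback above
	ctx.Close()          // contexts are reliably closed after every event

	vm.eventCount++
	if vm.eventCount >= maxEvents ||
		vm.iso.GetHeapStatistics().TotalHeapSize > maxHeapSize {
		vm.iso.Dispose() // dispose and replace the isolate
		vm.iso = v8go.NewIsolate()
		vm.eventCount = 0
		runtime.GC() // manual GC after every disposal
	}
}

func handleEvent(ctx *v8go.Context, ev Event) { /* run scripts against ctx */ }
```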

Memory growth remains unbounded, leading nodes to be OOMKilled by Kubernetes.

*(memory usage graph omitted)*

Downgrading to v8go@v0.7.0 does not solve the issue.

```
Type: inuse_space
Time: Jan 29, 2023 at 10:01am (-05)
Showing nodes accounting for 605.62MB, 100% of 605.62MB total
----------------------------------------------------------+-------------
      flat  flat%   sum%        cum   cum%   calls calls% + context
----------------------------------------------------------+-------------
                                          565.05MB   100% |   rogchap.com/v8go.(*Value).String \go\pkg\mod\rogchap.com\v8go@v0.7.0\value.go:246 (inline)
  565.05MB 93.30% 93.30%   565.05MB 93.30%                | rogchap.com/v8go._Cfunc_GoString _cgo_gotypes.go:546
----------------------------------------------------------+-------------
```


Has anyone else experienced this behavior before?
