6.8.0 is significantly slower than 6.6.0 and memory grows with every request, eventually crashing with a JavaScript heap out of memory error #1119
Comments
```
<--- Last few GCs --->

[2193:0x103005000]  1287260 ms: Scavenge 2034.7 (2044.9) -> 2030.9 (2045.1) MB, 3.4 / 0.0 ms  (average mu = 0.146, current mu = 0.113) allocation failure

<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0x3a2f185408d1
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
```
Significantly slower is an understatement. When I switched to 6.8.0 my requests went from taking 100ms to taking 1-2 seconds. When running inspect / debugging in VSCode, requests were taking 7-10 seconds. serverless version: v1.83.0
Yeah, this needs to get fixed; it slows down development by a lot. It's more worthwhile to just mock data and run functions using serverless invoke, because this is too slow.
Still no movement on this? I actually had to revert back to 6.5 because 6.6 is also running extremely slow when debugging in VSCode.
How is the search going for a fix on this?
Oh, I thought they had released a newer version since 6.8. Yeah, this is still an issue; basically stuck on 6.5 until this is resolved.
Same here, see my comment at serverless/serverless#6503 (comment). I rolled back to 6.5.0 and it works without the memory leak. WORKAROUND: lock to 6.5.0.
Yep, switched back to 6.5.0 as well.
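For anyone applying that workaround, a minimal sketch of the pin, assuming an exact version in the `package.json` devDependencies (no caret, so npm/yarn won't float to a newer release):

```json
{
  "devDependencies": {
    "serverless-offline": "6.5.0"
  }
}
```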
Does anyone have an example Serverless project (and some bash script which triggers repeated requests) which replicates this issue? I'd be happy to start investigating a solution in my free time. |
@james-relyea The main issue is when you are running it with the VSCode debugger attached. I have multiple projects that run this way and they all exhibit the same behavior. This is a typical setup when debugging projects in VSCode.
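A sketch of that kind of setup, assuming a conventional `.vscode/launch.json` entry that launches `serverless offline` under the debugger (the program path and names are placeholders and may differ per project):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "serverless offline (debug)",
      "program": "${workspaceFolder}/node_modules/serverless/bin/serverless.js",
      "args": ["offline", "start"],
      "cwd": "${workspaceFolder}"
    }
  ]
}
```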
|
Still no traction on this issue? Would be really nice to be able to upgrade to the latest version. |
I can confirm @bryantbiggs' suspicion - the memory leak was introduced by #1050/#1091 (which is also in violation of the MIT license for the clear-module project, so we need to update the serverless-offline license to attribute the code copied from the original author). Regarding the leak, it looks to be caused by the module cache being cleared on every invocation. I suppose as a temporary workaround you can use the child process / worker thread options. Does anyone know the reasoning behind clearing out the require cache? If I alter the parameters for the invocation so the cache is left intact, the memory growth stops.
I believe that was the case, but you cannot reliably remove modules from the cache; memory will always be allocated. You can search the issues, it has been mentioned and discussed ad nauseam. It works reliably without a memory leak only if one starts a new process or child process, or better: worker threads. I added those some time ago; the only thing missing was any HMR file watching + reloading functionality.
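A minimal sketch (not serverless-offline code; the `./handler.js` path is a placeholder) of why evicting entries from `require.cache` does not guarantee the memory comes back:

```js
const modulePath = require.resolve('./handler.js') // placeholder module

const retained = []

for (let i = 0; i < 1000; i++) {
  const handler = require(modulePath)  // loads a fresh copy once the cache entry is gone
  retained.push(handler)               // any lingering reference (closure, timer, listener)
  delete require.cache[modulePath]     // evicts the cache entry, but `handler` above still
                                       // pins the old module's memory, so the heap grows
}
```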
@dnalborczyk Gotcha - that makes sense. Maybe we could just have this behavior behind an opt-in flag?
👍 I'd be willing to spend some time looking into adding that in. |
Any updates on this? Not sure about everyone else, but this is basically preventing us from updating to Node 14. Not sure how this isn't a higher priority, maybe it's just me. 🤷 |
I honestly just switched away from it for local development.
We did the same for local development, and it strongly enhances the developer experience. We kept serverless-offline only for very specific use cases (connection reuse, etc.), and in those cases the memory leak is not a huge issue.
We were on serverless-offline version 6.3.2 but needed to make the jump to 6.9.0 or higher to take advantage of the nodejs14.x support. Doing so made local development incredibly slow and we were forced to downgrade and stick with nodejs12.x. After reading this thread, I suspected this is what was happening to us. I can confirm that adding the workaround flag mentioned above to our start script resolved the slowness for us.
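A sketch of that kind of start script, assuming serverless-offline's documented `--useChildProcesses` option and a conventional `package.json` scripts entry (everything else is a placeholder):

```json
{
  "scripts": {
    "start": "serverless offline start --useChildProcesses"
  }
}
```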
Any chance the fix could be prioritized? Or could the workaround be made the default? I'm not sure exactly what the flag actually does.
Slightly off topic, but in case it's helpful to anyone struggling with this: what I did is package up my core app logic as a separate module so it can be run and exercised without going through serverless-offline for local development. Obviously not ideal, though. I hope this gets resolved soon. I know maintaining is not easy, so thanks to everyone who is helping to pick up the pace on this project recently, too.
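A hedged sketch of that kind of split, with every file and function name assumed rather than taken from the comment above: the core logic lives in its own module and the Lambda handler is a thin wrapper, so the core can be run or tested directly without serverless-offline:

```js
// core/users.js - framework-agnostic application logic (illustrative names)
exports.getUser = async (id) => ({ id, name: 'example user' })

// handlers/getUser.js - thin Lambda wrapper that serverless/serverless-offline invokes
const users = require('../core/users')

exports.handler = async (event) => {
  const user = await users.getUser(event.pathParameters.id)
  return { statusCode: 200, body: JSON.stringify(user) }
}
```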
I haven't had much free time to look into this stuff as I have been thrust back into server-based work in the past year. I imagine this is still an issue to be resolved in the latest version, 8.x?
@james-relyea Yes, confirmed it is still happening on the latest 8.x version.
Same here. Anyhow, this should be fixed once and for all in v9: worker threads will be turned on by default, and running handlers in-process (with potential memory leaks on reload) will be opt-in. I'm also thinking of making the handler-reload-on-access an opt-in, at least until we have implemented HMR. v9 should be going out in the coming days; in fact, the only missing piece is the above-mentioned opt-in flag.
If there are still problems, or if it doesn't meet expectations, please open a new issue.
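Until v9 lands, the same behavior can be opted into explicitly; a sketch assuming serverless-offline's documented options set under the plugin's `custom` block in `serverless.yml` (values and comments are assumptions):

```yaml
# serverless.yml (sketch; option names per serverless-offline v6+)
custom:
  serverless-offline:
    useWorkerThreads: true     # run handlers in worker threads instead of in-process
    # useChildProcesses: true  # alternative: separate node processes per handler
```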
Bug Report
Current Behavior
With serverless-offline v6.8.0 my httpApi requests take 3-4x longer to execute and memory usage continues to grow until it eventually crashes. With v6.6.0 all of my handlers execute quickly and memory usage does not grow. v6.7.0 doesn't seem to work at all, so I've reverted to v6.6.0 for now.
I've tried different versions of serverless, serverless-webpack, and serverless-offline. The issue is consistently present with serverless-offline v6.8.0.
Sample Code
It doesn't seem to matter what handler I call, the behavior is the same.
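A minimal sketch of the kind of handler involved (the function name and route are assumptions; any handler wired to an `httpApi` event reproduces the behavior):

```js
// handler.js - trivial httpApi handler (illustrative)
module.exports.hello = async () => ({
  statusCode: 200,
  body: JSON.stringify({ message: 'hello' }),
})
```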
Expected behavior/code
Requests execute as quickly as they do under v6.6.0, and memory usage stays flat across repeated requests.
Environment
- serverless version: v1.72.0
- serverless-offline version: v6.8.0
- node.js version: v12.9
- OS: macOS 10.15.7

Possible Solution
Revert to v6.6.0
Additional context/Screenshots