SIGSEGV SEGV_ACCERR on node-chakracore #1633
Thanks for the report. Enabling ChakraCore is on our radar, but we haven't had a chance to investigate all of the issues. Can you please confirm which part of the repro steps is failing? I saw that one of the steps was running the unit tests, and we know there are failures there.
Sorry, I should have been clearer on the repro instead of just linking. I didn't even run the unit tests; I had no illusions about those failing, at least in part. The repro is just:
You don't even need to build it; just download yesterday's nightly. Mostly I was hoping the
Thanks, I was able to get a local repro and have some leads on the issue. I will report back when I have narrowed down the cause.
The SIGSEGV above was caused by WSL preventing explicit reads of an execute-only memory region during demand commit. After fixing that up I was able to run the example, and the fix should be out to insider builds soon. If you want to work around this until the fix makes it out, you can adjust the overcommit setting (`/proc/sys/vm/overcommit_memory`). Also hit another issue here where very large PROT_NONE memory regions are being handled differently; I'll look at addressing that separately.
Well that was quick. 🏆
Right; that's still in there. Said memory region is "very large" if you have a 2GB Atom notebook. It's somewhat large if you have a 64GB consumer desktop, like me. And it is not all that large at all if you are the #1115 guy. Either way, it's a bug. Maybe mention that to the chakra guys if you bump into them in the hall.
Thanks for the follow-up, Ken. We've been in contact with the chakra folks, working to get that scenario enabled. Initial investigation shows that PROT_NONE has special handling on native Linux, though it is not well documented (e.g. see the OpenJDK discussion here: https://lwn.net/Articles/627557/). Basically, PROT_NONE just reserves address space, which typically has low resource consumption compared to providing backing for the memory. On the NT side this is similar to using MEM_RESERVE followed by MEM_COMMIT, and we need to translate this correctly inside of WSL.
No longer curious. Heuristic (i.e. '0') means whatever you want a "seriously wild allocation" to mean. Got it. If there is any consolation in all this, I can now use "Rusty Russell score of -4" in casual conversation.
Are you running as root? If not, you need to do `sudo sh -c "echo '2' >> /proc/sys/vm/overcommit_memory"` (e.g. http://stackoverflow.com/questions/84882/sudo-echo-something-etc-privilegedfile-doesnt-work-is-there-an-alterna).
Yeah; I was just being an idiot re: root. Finger muscle memory failed me for some reason. I also have to apologize for dissing the chakra guys. I only just figured out allocating flat memory space up front is kind of standard operating procedure. V8 just happens to be more dynamic about it (or at least less aggressive about it). |
@therealkenc - it seems to be a common pattern in Linux to have absurdly large sparse memory allocations. We don't handle these very well currently but better support is on our backlog. |
@stehufntdev - Might as well ping all of these. Also confirmed working in 15046, but is not noted in the 15042 release notes. Really appreciate all these being addressed! |
Thanks for the confirmation, will close this out! |
I caught the first anniversary of ChakraCore blog post (time flies), and took another look at how it fares on WSL since last time. I think this might be related to #286; if so, dup away. But there's no `PROT_GROWSDOWN` here, so since there is no `EINVAL` I am wondering whether their use-case is expected (assumed?) to work. Repro steps here are easy. Confirmed it's okay on native. Failing sequence on the offending thread is: