Speed up parsing by storing intermediate tokens #42

Closed

earwig opened this issue Aug 19, 2013 · 1 comment


earwig commented Aug 19, 2013

There are certain situations where we rapidly construct, destroy, and then have to reconstruct the same token stack when hitting bad routes. These are rare, but they act as a major slowdown when they do occur, potentially hitting the parse cycle limit. Ideally, when failing a route, we would store the tokenization of complete nodes in some kind of cache keyed by the starting/ending head locations and the context. This cache could then be popped from if we reach that head location again with the same context, allowing us to bypass regular parsing.
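A minimal sketch of what such a cache might look like, assuming a `(head, context)` key and a tokenizer that can skip its head forward when a cached span is reused (the names `StackCache`, `fetch`, `store`, and the usage hooks are illustrative, not the actual tokenizer API):

```python
# Illustrative sketch only -- StackCache and the usage hooks below are
# assumptions for discussion, not mwparserfromhell's real internals.

class StackCache:
    """Cache of already-tokenized spans, keyed by (head, context).

    When a route fails, the tokens produced for any *complete* nodes are
    saved together with the head position they ended at. If parsing later
    reaches the same head with the same context, the cached tokens can be
    emitted directly and the head skipped forward, bypassing regular
    parsing of that span.
    """

    def __init__(self):
        self._entries = {}

    def store(self, start_head, context, tokens, end_head):
        # Record the tokens for a completed span and where it ended.
        self._entries[(start_head, context)] = (tokens, end_head)

    def fetch(self, head, context):
        # Return (tokens, end_head) if this span was cached, else None.
        # Popping keeps the cache from growing without bound.
        return self._entries.pop((head, context), None)


# Hypothetical use inside the parse loop:
#
#   cached = cache.fetch(self._head, self._context)
#   if cached is not None:
#       tokens, self._head = cached
#       emit_all(tokens)          # reuse the cached tokenization
#   else:
#       ...tokenize normally; when a route fails, store the tokens of
#       any complete nodes via cache.store(...)...
```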


earwig commented Jun 23, 2017

🎉 💯
