Initial Implementation #1
Conversation
We could theoretically write the negotiate function as a factory where you set the supportedValues once, so it operates on a fixed supportedValues array. Then we could apply caching, which could yield a performance gain. But I am not sure whether this is actually a smart idea, as encoding headers can come in a lot of variations.
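A minimal sketch of the idea described above, assuming a factory that closes over a fixed supportedValues array and memoizes results per header string (the names `buildNegotiator` and the simplified `;q=` parsing are illustrative, not the module's actual API):

```javascript
// Hypothetical sketch: a negotiate factory with a fixed supportedValues
// array, so results can be memoized per raw header string.
function buildNegotiator (supportedValues) {
  const cache = new Map()

  return function negotiate (header) {
    if (cache.has(header)) return cache.get(header)

    // Parse e.g. "deflate;q=0.9, br" into { value, quality } pairs,
    // highest quality first (simplified; ignores wildcards and params).
    const candidates = header.split(',').map(part => {
      const [value, q] = part.trim().split(';q=')
      return { value, quality: q ? parseFloat(q) : 1 }
    }).sort((a, b) => b.quality - a.quality)

    // Pick the highest-quality candidate we support.
    const match = candidates.find(c => supportedValues.includes(c.value))
    const result = match ? match.value : null
    cache.set(header, result)
    return result
  }
}

const negotiate = buildNegotiator(['gzip', 'br'])
console.log(negotiate('deflate;q=0.9, br;q=0.8, gzip')) // prints "gzip"
```

The concern about header variations applies here: the cache key is the raw header string, so every distinct client ordering or q-value produces a new cache entry, which is why a bounded (LRU) store matters.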
@mcollina
Does this pass the same tests as the more popular module?
Yes, I used the same tests, plus some tests from the encoding package by dougwilson.
Code looks good.
How is the performance against the original?
Could you fill in the README too?
I also implemented a Negotiator class, which seems to be slightly faster. You can also pass a cache store, such as the LRU cache from mnemonist, which of course makes it incredibly fast. Here are the latest benchmarks:
Benchmarks of the Negotiator class
Benchmarks of the Negotiator class with the LRUCache from mnemonist, with a size of 1000
The same benchmarks against the encoding-negotiator package
The same benchmarks against the negotiator package, which is used by Express
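To make the cache-store idea concrete, here is a hedged sketch of a Negotiator class that accepts any store exposing `get`/`set` (mnemonist's `LRUCache` fits that shape); a plain `Map` stands in below so the example has no external dependency, and the class shape is illustrative rather than the module's actual API:

```javascript
// Hypothetical sketch: a Negotiator class with a pluggable cache store.
// Any object with get(key)/set(key, value) works, e.g. mnemonist's
// LRUCache; a Map is used here to keep the example dependency-free.
class Negotiator {
  constructor ({ supportedValues, cache = new Map() } = {}) {
    this.supportedValues = supportedValues
    this.cache = cache
  }

  negotiate (header) {
    const cached = this.cache.get(header)
    if (cached !== undefined) return cached

    // Simplified parsing of "value;q=0.8" pairs, highest quality first.
    const candidates = header.split(',').map(part => {
      const [value, q] = part.trim().split(';q=')
      return { value, quality: q ? parseFloat(q) : 1 }
    }).sort((a, b) => b.quality - a.quality)

    const match = candidates.find(c => this.supportedValues.includes(c.value))
    const result = match ? match.value : null
    this.cache.set(header, result)
    return result
  }
}

const negotiator = new Negotiator({ supportedValues: ['gzip', 'br'] })
console.log(negotiator.negotiate('br;q=0.9, gzip;q=0.5')) // prints "br"
```

With a bounded store like an LRU of size 1000, repeated headers (the common case, since most clients send one of a handful of Accept-Encoding values) become a single cache lookup.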
I guess we can also use this for fastify-accepts.
lgtm
@mcollina
@kibertoad
This is the promised faster implementation.
I tried to extract the processMatch function, but it was slightly slower. Or maybe I did not have the right approach.