Refactor: Use filter and map instead of reduce #948
Conversation
(force-pushed from 103f5f8 to d7406a4)
It was a consideration, sure.
(force-pushed from d7406a4 to 4020934)
@aduth I bring it up because it seems like we're accepting a loss of control by eagerly converting to a string and adding to the accumulator, whereas if we keep the transformation stages separate and then combine them, it's clearer to read and to intercept, and it eliminates the off-by-one error. put another way: instead of coupling the side effects (conversion to a string) along the course of the process, push them to the end. thoughts?
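For concreteness, a minimal sketch of the separated pipeline described above; the `realAttributes` object and its values are invented here for illustration, borrowing the name from the snippet later in this thread:

```js
// Each stage stays independent: filter out undefined values, map the
// rest to `key="value"` strings, and join with a space exactly once at
// the end, so no trailing space is left behind.
const realAttributes = { id: 'intro', class: undefined, role: 'main' };

const serialized = Object.keys( realAttributes )
	.filter( ( key ) => realAttributes[ key ] !== undefined )
	.map( ( key ) => `${ key }="${ realAttributes[ key ] }"` )
	.join( ' ' );

console.log( serialized ); // -> id="intro" role="main"
```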
Just curious on the choice of the reduce. Seems like a filter and map are more appropriate when joined with a space.
(force-pushed from 4020934 to 7b43ce8)
I think using reduce instead of chained operators is usually a good thing. The performance benefit is often negligible, but reduce has other benefits. Rather than refactoring out the reduce, you could create a composed function via flow.
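A minimal sketch of one reading of that suggestion, assuming lodash's `flow` and the invented `realAttributes` shape used elsewhere in this thread; the helper names are hypothetical:

```js
// Compose named, individually testable stages with lodash's `flow`,
// keeping a reduce as the final combining step.
import { flow } from 'lodash';

const keepDefined = ( attributes ) => ( keys ) =>
	keys.filter( ( key ) => attributes[ key ] !== undefined );

const toPairs = ( attributes ) => ( keys ) =>
	keys.map( ( key ) => `${ key }="${ attributes[ key ] }"` );

// Reduce-based join that never produces a trailing space.
const joinWithSpace = ( pairs ) =>
	pairs.reduce( ( memo, pair ) => ( memo ? `${ memo } ${ pair }` : pair ), '' );

const serializeAttributes = ( attributes ) =>
	flow(
		keepDefined( attributes ),
		toPairs( attributes ),
		joinWithSpace
	)( Object.keys( attributes ) );
```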
@BE-Webdesign could you elaborate on what you find more beneficial? this PR isn't meant, per se, to request a change to the code, but to better understand the design choices being made here. I'm asking not because I think that map/filter is just better, but because …
I don't know what dangling combiners are, so if this somehow improves some of the functionality of the parser that is great. All I was trying to point out was that most likely you can achieve the same result and benefits using function composition (via flow) and reduce. Like I said above though, I don't know enough about parsing really to offer a good enough judgement, so I will defer to someone else 😄 .
haha, nothing fancy, just a trailing space.
yep. computationally the same 😄 see transducers for robust libraries doing just this…

```js
transduce(
	filterer( key => realAttributes[ key ] !== undefined ),
	mapper( key => `${ key }="${ realAttributes[ key ] }"` )
)
```
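Those helpers aren't named from any specific library; a minimal, assumption-heavy implementation consistent with the call above might look like this, with `realAttributes` again invented for illustration:

```js
// A transducer wraps a reducer and returns a reducer, so filtering and
// mapping fuse into a single pass over the collection.
const filterer = ( predicate ) => ( next ) => ( acc, value ) =>
	predicate( value ) ? next( acc, value ) : acc;

const mapper = ( transform ) => ( next ) => ( acc, value ) =>
	next( acc, transform( value ) );

// Thread the base reducer through the transformers right-to-left so the
// leftmost transformer sees each element first.
const transduce = ( ...transformers ) => ( reducer, initial, collection ) =>
	collection.reduce(
		transformers.reduceRight( ( next, transformer ) => transformer( next ), reducer ),
		initial
	);

const realAttributes = { id: 'intro', class: undefined, role: 'main' };

const serialized = transduce(
	filterer( ( key ) => realAttributes[ key ] !== undefined ),
	mapper( ( key ) => `${ key }="${ realAttributes[ key ] }"` )
)(
	( memo, pair ) => ( memo ? `${ memo } ${ pair }` : pair ),
	'',
	Object.keys( realAttributes )
);

console.log( serialized ); // -> id="intro" role="main"
```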
don't think it does. doesn't matter much at all. I would consider it a win for code maintainability in the sense that it communicates more clearly the intent of what we want to do, and also in that it provides easier hooks into the "transformation pipeline," if you will, whereas the single reduce does not.
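To make the "easier hooks" point concrete, here is a sketch with a hypothetical escaping stage (invented for this example, not part of the PR) slotted into the pipeline without touching the other stages:

```js
// A new transformation drops into the chain between existing stages;
// the filter and the final join are untouched.
const escapeValue = ( value ) => String( value ).replace( /"/g, '&quot;' );

const realAttributes = { id: 'intro', class: undefined, role: 'main' };

const serialized = Object.keys( realAttributes )
	.filter( ( key ) => realAttributes[ key ] !== undefined )
	.map( ( key ) => `${ key }="${ escapeValue( realAttributes[ key ] ) }"` )
	.join( ' ' );
```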
Yup, transducers and Clojure for the win.
Before was fine and trivially more performant. This is still fine and trivially more functionally explicit. I don't see a net change in objective "better"-ness here, and am therefore indifferent to it. The reduce could be changed to accumulate into an array and join( ' ' ) at the end as well.
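A sketch of that array-accumulating variant, again with an invented `realAttributes`:

```js
// Keep the single reduce pass, but collect pairs into an array and join
// once at the end, which also avoids the trailing space.
const realAttributes = { id: 'intro', class: undefined, role: 'main' };

const serialized = Object.keys( realAttributes ).reduce( ( memo, key ) => {
	if ( realAttributes[ key ] !== undefined ) {
		memo.push( `${ key }="${ realAttributes[ key ] }"` );
	}
	return memo;
}, [] ).join( ' ' );

console.log( serialized ); // -> id="intro" role="main"
```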
I'm closing this out because the discussion was helpful and served its intended purpose. Thanks y'all!
@aduth any particular reason for the reduce approach? seemed to stick out when I saw it, mainly because of the extra space it leaves at the end. granted, the reduce only has a single iteration while this has two, but I assumed you hadn't chosen what you did for speed reasons.
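For reference, a hypothetical reconstruction (not the actual PR code) of the eager-reduce pattern being questioned, showing the extra trailing space:

```js
// Each iteration appends `key="value" ` with its own trailing space, so
// one extra space is left dangling at the end of the result.
const realAttributes = { id: 'intro', class: undefined, role: 'main' };

const serialized = Object.keys( realAttributes ).reduce( ( memo, key ) => {
	const value = realAttributes[ key ];
	return value === undefined ? memo : `${ memo }${ key }="${ value }" `;
}, '' );

console.log( JSON.stringify( serialized ) ); // -> "id=\"intro\" role=\"main\" "
```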