refactor(locale): normalize animal data #2791
Conversation
✅ Deploy Preview for fakerjs ready!
The test snapshots need to be updated, but otherwise this looks good to me.
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##              next    #2791      +/-  ##
==========================================
- Coverage   99.96%   99.95%   -0.01%
==========================================
  Files        2971     2971
  Lines      212696   212619      -77
  Branches      949      945       -4
==========================================
- Hits       212617   212526      -91
- Misses         79       93      +14
Is there anything we can do to speed up the normalisation code? It seems to add several seconds to every preflight run even when only running for a couple of modules.
Guess we can parallelize the processing of all methods in a module, as well as the module processing in general (see the sketch below). This would not fix the data normalization in general, though. Any ideas on that?
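A minimal sketch of the parallelization idea, assuming the work is exposed per method; `normalizeMethod`, `normalizeModule`, and `normalizeModules` are hypothetical stand-ins, not the actual faker scripts API:

```ts
// Hypothetical placeholder for the real per-file work
// (read, sort, dedupe, and rewrite one method's dataset).
async function normalizeMethod(module: string, method: string): Promise<void> {
  // ...actual normalization would go here
}

// Run all methods of one module concurrently instead of sequentially.
async function normalizeModule(module: string, methods: string[]): Promise<void> {
  await Promise.all(methods.map((method) => normalizeMethod(module, method)));
}

// Run the modules themselves concurrently as well.
async function normalizeModules(modules: ReadonlyMap<string, string[]>): Promise<void> {
  await Promise.all(
    [...modules].map(([module, methods]) => normalizeModule(module, methods)),
  );
}
```

Note that `Promise.all` only helps where the per-file work is I/O-bound; if the time is spent in CPU-bound sorting, a single Node.js process would need worker threads (`node:worker_threads`) to see a real speedup.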
Force-pushed from bd505fe to 3cd809c
Description
Follow-on to #2265
Normalize the entries in the `animal` module definitions in all locales. I chose this module because only one file (`src/locales/fr/animal/bird.ts`) exceeded the entry limit. The rest of the changes come from sorting and deduping the datasets.
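For reference, a minimal sketch of the sort-and-dedupe step described above; `normalizeEntries` is a hypothetical name, and the real script's ordering rules may differ:

```ts
// Drop duplicates via a Set, then sort for a stable, diff-friendly order.
function normalizeEntries(entries: string[]): string[] {
  return [...new Set(entries)].sort((a, b) => a.localeCompare(b));
}

// Example: duplicates removed, order made deterministic.
console.log(normalizeEntries(['merle', 'aigle', 'merle', 'canard']));
// => ['aigle', 'canard', 'merle']
```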