For corrections that use string input fields extensively, string comparison could have a significant performance impact. The strings may be relatively static (e.g. constant for an entire dataset or a subset of one), so one would expect branch prediction to do a good job of amortizing the expense. Nevertheless, some profiling would be useful to understand the extent of the issue.
There are a few improvements we could make to reduce string comparisons:
- Add an API that allows the user to pre-create an integer token representing the string and pass that token as an argument to the `Correction::evaluate` call, which would then internally do a faster lookup (see the sketch below).
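A minimal sketch of what such a token API could look like, not the actual correctionlib implementation: the names `StringTokenizer`, `Token`, and `evaluate_tokenized` are hypothetical and only illustrate how pre-created integer tokens could replace per-call string comparisons in the hot loop.

```cpp
// Hypothetical sketch: intern categorical string values once, then evaluate
// with integer tokens only. Names are illustrative, not the library API.
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

class StringTokenizer {
public:
  using Token = std::size_t;

  // Called once per string value (e.g. when a dataset or subset is opened).
  Token tokenize(const std::string& value) {
    auto it = index_.find(value);
    if (it != index_.end()) return it->second;
    Token t = values_.size();
    values_.push_back(value);
    index_.emplace(value, t);
    return t;
  }

  const std::string& lookup(Token t) const { return values_.at(t); }

private:
  std::unordered_map<std::string, Token> index_;
  std::vector<std::string> values_;
};

// Hypothetical evaluate overload: the category branch is selected by an
// integer comparison instead of a string comparison on every call.
double evaluate_tokenized(StringTokenizer::Token systematic,
                          StringTokenizer::Token nominal_token,
                          double nominal, double shift) {
  return systematic == nominal_token ? nominal : nominal + shift;
}

int main() {
  StringTokenizer tok;
  // Pre-create tokens outside the event loop; the strings are static
  // for the whole dataset, so this cost is paid once.
  auto nominal = tok.tokenize("nominal");
  auto up = tok.tokenize("up");

  double total = 0.0;
  for (int event = 0; event < 1000000; ++event) {
    // Hot loop: no std::string hashing or comparison per event.
    total += evaluate_tokenized(up, nominal, 1.0, 0.05);
  }
  return total > 0 ? 0 : 1;
}
```

Internally the token could simply be an index into the category node's branch table, so the "faster lookup" in `evaluate` becomes an array index rather than a hash or string compare.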