Multiple ways of computing some label measurements #340

Is it normal that some label measurements (e.g. the centroid of a label) are computed in different ways depending on which function is used? The user can get this information from the `statistics of label` approach and from `centroids of labels`, which rely on different code for computing the coordinates. Most likely the results are the same (hopefully), but this increases code redundancy. Wouldn't it be better to rely entirely on the statistics function for this computation? We might lose a bit of speed, because we would also compute measurements that were not requested, but it would make more sense, I guess.

Asking this for the `pyclesperanto` implementation development.
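
A quick way to sanity-check that the two code paths agree could look like the sketch below. It assumes pyclesperanto_prototype's `centroids_of_labels` and `statistics_of_labelled_pixels`; the exact names, argument order, and output layout are assumptions and may need adjusting.

```python
import numpy as np
import pyclesperanto_prototype as cle

# small synthetic label image with two rectangular labels
labels_np = np.zeros((64, 64), dtype=np.uint32)
labels_np[5:15, 5:15] = 1
labels_np[30:50, 20:40] = 2
labels = cle.push(labels_np)

# path 1: the dedicated centroid function (returns a point list)
pointlist = cle.pull(cle.centroids_of_labels(labels))

# path 2: centroid columns taken from the full per-label statistics
stats = cle.statistics_of_labelled_pixels(labels, labels)

# print both so the coordinates can be compared by eye
# (the point-list layout / axis order may differ between the two)
print("centroids_of_labels:\n", pointlist)
print("from statistics: x =", np.asarray(stats["centroid_x"]),
      "y =", np.asarray(stats["centroid_y"]))
```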

Comments

Can you give some insight performance-wise? I could imagine that one function is much slower than the other.

Idk if it is slower or not tbh, but we can easily imagine that statistics is slower. Very dirty speed test using the prototype on a label blob:
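
For reference, a minimal sketch of what such a dirty speed test could look like, assuming pyclesperanto_prototype's `centroids_of_labels` and `statistics_of_labelled_pixels` (names and signatures are assumptions; no timing numbers are implied here):

```python
import time
import numpy as np
import pyclesperanto_prototype as cle

# synthetic stand-in for a label blob: a grid of labelled squares
labels_np = np.zeros((1024, 1024), dtype=np.uint32)
label_id = 1
for y in range(0, 1024, 64):
    for x in range(0, 1024, 64):
        labels_np[y:y + 32, x:x + 32] = label_id
        label_id += 1
labels = cle.push(labels_np)

def best_time(fn, *args, repeats=10):
    """Best wall-clock time over a few repeats (the first call may include kernel compilation)."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - start)
    return min(times)

print("centroids_of_labels:           %.2f ms" % (1e3 * best_time(cle.centroids_of_labels, labels)))
print("statistics_of_labelled_pixels: %.2f ms" % (1e3 * best_time(cle.statistics_of_labelled_pixels, labels, labels)))
```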

We would clearly lose some speed (maybe a bit less in C++, but I am not certain), but we would consolidate the code and leave less stuff for me to develop ...

Edit: Same test but using …

I am on my M2 (not sure it's relevant).

How about putting the algorithm that currently calculates the centroid from …

I think the algorithms are strongly related anyway. @StRigaud, you should decide what's most important: code length or speed/performance. Both are reasonable development goals, and you usually cannot achieve both at the same time.
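
The kind of consolidation being discussed could look roughly like the sketch below: one shared centroid routine that both the dedicated function and the statistics function reuse. All names and the plain-NumPy implementation are hypothetical and only illustrate the structure, not clesperanto's actual code.

```python
import numpy as np

def _centroids_kernel(label_image: np.ndarray) -> np.ndarray:
    """Shared routine: per-label centroid coordinates (labels counted from 1).
    Illustrative only, not optimized."""
    n_labels = int(label_image.max())
    coords = np.indices(label_image.shape)      # one coordinate array per axis
    centroids = np.zeros((label_image.ndim, n_labels))
    for label in range(1, n_labels + 1):
        mask = label_image == label
        count = mask.sum()
        if count == 0:
            continue
        for d in range(label_image.ndim):
            centroids[d, label - 1] = coords[d][mask].sum() / count
    return centroids

def centroids_of_labels(label_image):
    """Fast path: only the centroids, via the shared routine."""
    return _centroids_kernel(label_image)

def statistics_of_labels(label_image):
    """Full statistics reuse the same centroid code instead of re-implementing it."""
    stats = {"centroid": _centroids_kernel(label_image)}
    # ... other measurements (area, bounding box, intensity stats, ...) would go here
    return stats
```

With this layout the dedicated function keeps its speed, while the statistics code no longer duplicates the centroid logic.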

That's a way, but:

I am aiming at implementing the missing function for the assistant, so I will focus on making things work and hence reuse the statistics code, as that is simple for me right now (not too much extra code and tests, no modification of the statistics function, which is complicated, etc.). Nothing stops me later on (e.g. if an issue is raised) from optimising it again, most likely the way @thawn proposed.
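
The "reuse the statistics code" route could, for example, look like the sketch below, which derives the centroid point list from the per-label statistics output. The function and key names (`statistics_of_labelled_pixels`, `centroid_x`, …) follow pyclesperanto_prototype and are assumptions here.

```python
import numpy as np
import pyclesperanto_prototype as cle

def centroids_from_statistics(label_image):
    """Centroids derived from the full statistics table: simple to maintain,
    but it computes more measurements than strictly needed."""
    stats = cle.statistics_of_labelled_pixels(label_image, label_image)
    return np.stack([np.asarray(stats["centroid_x"]),
                     np.asarray(stats["centroid_y"]),
                     np.asarray(stats["centroid_z"])])
```

A dedicated, faster implementation could still replace this internally later without changing what the function returns.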