fix: Handle compression level correctly for different algorithms #13434
Conversation
nice
I still see it as restrictive to have these four levels named to follow only the deflate packages. For example, klauspost/zstd supports only 4 of the 29 levels, and none of those 4 includes 'None', so every option will compress. Another package, for lz4, uses 9 levels, and snappy doesn't have a level at all. Expanding this list will be hard as well. Imagine we added snappy and a user picked it thinking it supports all the levels we listed here, only to find out it doesn't. Why cause that confusion?
I used those four as an example. If you like, we can also convert this to nine levels. All I want is a standardized set of values, because we cannot ask a user to read through each package to find out what to type into the config file. Furthermore, how often do you need the fine-grained 22-level setting in practice? If we really need those (which I doubt), we can still add another option, e.g.
Which confusion? In the
While the numbers chosen are package-specific, they are inherently generic levels used across all implementations of a compression algorithm (i.e. 1 will always be faster than 9, but 9 will produce smaller output). If someone doesn't care about these numbers, they can ignore them; if they do care, they should know what they mean. These numbers are actually standards in themselves within each compression algorithm.
After all, it's just my opinion, and I can go with any implementation and work around it, so it shouldn't be a blocker.
@zekena2 yeah, I guess the nine levels are a quasi-standard. So what is the mapping for levels 1-9? Can we just pass that into the libs? As I said, the new type was just an example... For algorithms that do not support certain levels, we can document this and issue an error...
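To illustrate the mapping being discussed, here is a minimal sketch of how the quasi-standard levels 1-9 could be folded onto the four encoder speeds that klauspost/zstd exposes. The `zstdSpeed` constants and the bucket boundaries are assumptions for illustration, not the PR's actual mapping:

```go
package main

import "fmt"

// zstdSpeed stands in for the four encoder speeds exposed by
// klauspost/compress/zstd (fastest, default, better, best).
type zstdSpeed int

const (
	speedFastest zstdSpeed = iota + 1
	speedDefault
	speedBetter
	speedBest
)

// zstdSpeedFor buckets the quasi-standard levels 1..9 into the four
// supported zstd speeds; out-of-range levels yield an error, as
// suggested in the discussion above.
func zstdSpeedFor(level int) (zstdSpeed, error) {
	switch {
	case level < 1 || level > 9:
		return 0, fmt.Errorf("invalid compression level %d, only 1..9 are supported", level)
	case level <= 2:
		return speedFastest, nil
	case level <= 5:
		return speedDefault, nil
	case level <= 7:
		return speedBetter, nil
	default:
		return speedBest, nil
	}
}
```

Documenting such a table per algorithm would let users keep a single 1-9 mental model while each encoder picks its nearest supported setting.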
@zekena2 is this what you had in mind?
I think that's fine as well, but WDYT, maybe my approach is more DRY? I provide one function for checking the levels instead of each algorithm checking its own.
@zekena2 the issue is that some plugin might decide not to call the checking function and instead instantiate the encoder directly... Furthermore, my approach is more modular, as each encoder is responsible for checking its own inputs, so you only have to change things in one place when an encoder changes or is added, instead of maintaining a global function...
Download PR build artifacts for linux_amd64.tar.gz, darwin_amd64.tar.gz, and windows_amd64.zip. 📦 Click here to get additional PR build artifacts.
@powersj can you please give it another check?
(cherry picked from commit a1c0642)
case zlib.NoCompression, zlib.DefaultCompression, zlib.BestSpeed, zlib.BestCompression:
	// Do nothing as those are valid levels
default:
	return nil, fmt.Errorf("invalid compression level, only 0, 1 and 9 are supported")
Inconsistent use of fmt.Errorf vs errors.New.
replaces #13424
This PR maps the given, standardized compression level to the individual values of each compression algorithm. Furthermore, it adds a function to map a string configuration value to the defined levels.
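The string-to-level mapping mentioned in the description could look like the sketch below. The accepted strings ("none", "fastest", "default", "best") and the function name are assumptions for illustration, not the PR's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// levelFromString maps a config-file value to an integer compression
// level; unknown strings are rejected so typos surface at startup.
func levelFromString(s string) (int, error) {
	switch strings.ToLower(s) {
	case "none":
		return 0, nil
	case "fastest":
		return 1, nil
	case "", "default":
		return -1, nil // let the algorithm pick its default
	case "best":
		return 9, nil
	default:
		return 0, fmt.Errorf("unknown compression level %q", s)
	}
}
```

A helper like this keeps the config surface identical across algorithms while each encoder translates the result to its own supported range.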