
extra_trait_impls: Extra trait implementations #230

Closed
wants to merge 48 commits

Conversation

ycscaly
Contributor

@ycscaly ycscaly commented May 7, 2023

Resolves #225, #220, #209.
Replaces #224, #226, #219.

@ycscaly
Contributor Author

ycscaly commented May 8, 2023

This looks ok aside from the failing tests

Sorry, saw the failing CI when it was too late yesterday, but should be good to merge now

@tarcieri
Member

tarcieri commented May 8, 2023

As a general note there's a lot of copied and pasted code that should be DRYed out.

Really it's code that probably shouldn't have been in macros to begin with. But it definitely shouldn't be copied and pasted in nearly identical form into a second macro.

Try to find a way to compose the macros, or failing that, extract the code into a named function the macro-written code can call.
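To illustrate the suggestion (a minimal sketch with toy types and hypothetical names, not crypto-bigint's actual internals): the shared logic lives in one named function, and each macro invocation only generates the thin trait glue that calls it.

```rust
// Toy stand-ins for Uint type aliases; names here are hypothetical.
#[derive(Debug, PartialEq)]
struct U128([u64; 2]);
#[derive(Debug, PartialEq)]
struct U256([u64; 4]);
#[derive(Debug, PartialEq)]
struct U384([u64; 6]);

trait Concat<Rhs> {
    type Output;
    fn concat(&self, hi: &Rhs) -> Self::Output;
}

// The shared body lives in exactly one named function...
fn concat_words(lo: &[u64], hi: &[u64], out: &mut [u64]) {
    out[..lo.len()].copy_from_slice(lo);
    out[lo.len()..].copy_from_slice(hi);
}

// ...and the macro only generates the thin trait glue that calls it.
macro_rules! impl_concat {
    ($lo:ty, $hi:ty, $out:ident, $n:expr) => {
        impl Concat<$hi> for $lo {
            type Output = $out;
            fn concat(&self, hi: &$hi) -> $out {
                let mut words = [0u64; $n];
                concat_words(&self.0, &hi.0, &mut words);
                $out(words)
            }
        }
    };
}

impl_concat!(U128, U128, U256, 4);
impl_concat!(U128, U256, U384, 6);

fn main() {
    assert_eq!(U128([1, 2]).concat(&U128([3, 4])), U256([1, 2, 3, 4]));
    println!("ok");
}
```

This leaves one place to make changes or optimizations, which is the property asked for here.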

@ycscaly
Contributor Author

ycscaly commented May 8, 2023

DRYed out.

Honestly, I thought that about the previous macros in the first place; I just went with your convention. If we had switched to proc macros, we might have been able to avoid all these hundreds of macro invocations. I imagined code that simply iterates through the range (64..8192).step_by(64) in a double loop and implements all the traits this way.

But if we change that, I think it'd be better to change the original macros as well. Is this too much?

@tarcieri
Member

tarcieri commented May 8, 2023

If we had switched to proc macros

I don't want to add a hard dependency on e.g. syn.

Also I don't see how a proc macro would even help here. You can't use custom derive, since types like U256 are type aliases, not structs.

@tarcieri
Member

tarcieri commented May 8, 2023

I imagined code that simply iterates through the range (64..8192).step_by(64) in a double loop and implements all the traits this way.

The real solution to all of this is to better leverage const generics when the relevant functionality, namely generic_const_exprs, is stable.

Then we can have an actual generic implementation, rather than having to use macros and hardcoding to specific sizes. All of that is a workaround so we can support stable Rust.

But this PR and #229 are pushing the scalability of that approach.
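For reference, the kind of generic implementation generic_const_exprs would eventually allow looks roughly like this (a nightly-only sketch that will not compile on stable Rust; the bounds and names are illustrative, not crypto-bigint's actual code):

```rust
// Nightly-only sketch; requires the unstable generic_const_exprs feature.
#![feature(generic_const_exprs)]

struct Uint<const LIMBS: usize>([u64; LIMBS]);

impl<const L: usize, const R: usize> Uint<L>
where
    [(); L + R]:,
{
    /// One generic concat covering every size pair — no macros,
    /// no hardcoding to specific sizes.
    fn concat(&self, hi: &Uint<R>) -> Uint<{ L + R }> {
        let mut out = [0u64; L + R];
        out[..L].copy_from_slice(&self.0);
        out[L..].copy_from_slice(&hi.0);
        Uint(out)
    }
}
```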

@ycscaly
Contributor Author

ycscaly commented May 8, 2023

I imagined code that simply iterates through the range (64..8192).step_by(64) in a double loop and implements all the traits this way.

The real solution to all of this is to better leverage const generics when the relevant functionality, namely generic_const_exprs, is stable.

Then we can have an actual generic implementation, rather than having to use macros and hardcoding to specific sizes. All of that is a workaround so we can support stable Rust.

But this PR and #229 are pushing the scalability of that approach.

So, how to proceed?

What if we added a nightly feature, and have my changes require this feature?
And in the future, when generic_const_exprs becomes stable, remove the requirement of nightly for extra_trait_impls?

If not, should we continue on with the current implementation or do you have another suggestion?

@tarcieri
Member

tarcieri commented May 8, 2023

The code will require significant refactoring to make use of generic_const_exprs and generally I don't want to deal with the maintenance headache of supporting nightly. Many of the features we need are still buggy/unstable.

I think what you're doing mostly works if you can find a way to DRY out the impls.

I can also take a shot at it. It would probably be worth refactoring the code prior to trying to expand it.

@ycscaly
Contributor Author

ycscaly commented May 8, 2023

The code will require significant refactoring to make use of generic_const_exprs and generally I don't want to deal with the maintenance headache of supporting nightly. Many of the features we need are still buggy/unstable.

I think what you're doing mostly works if you can find a way to DRY out the impls.

I can also take a shot at it. It would probably be worth refactoring the code prior to trying to expand it.

If what you're aiming at is simply to reduce code duplication, I think I could do this refactoring. What I'm thinking of:

  1. create a single impl_extra_traits macro that internally calls all of the above macros. This would reduce the macro instantiation for all of the types to a single place.
  2. refactor the internal logic of concat into a function that will be called by both impl_concat and impl_concat_cross_sizes.
  3. Optionally, use the same function for convert by simply concatenating the first type with the zero value of the "difference type". The alternative is to refactor the convert code into a single function that will be called from both From trait impls.
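Step 1 of that plan can be sketched as follows (toy trait and hypothetical macro names; the point is only that the full type list appears in exactly one place):

```rust
// Toy trait standing in for the real trait impls under discussion.
trait Describe {
    fn describe() -> String;
}

// One sub-macro per trait family; hypothetical names throughout.
macro_rules! impl_describe {
    ($t:ty) => {
        impl Describe for $t {
            fn describe() -> String {
                format!("{} bits", 8 * std::mem::size_of::<$t>())
            }
        }
    };
}

// The umbrella macro fans out to every sub-macro, so adding a type
// means touching a single invocation list.
macro_rules! impl_extra_traits {
    ($($t:ty),+ $(,)?) => {
        $( impl_describe!($t); )+
    };
}

impl_extra_traits!(u16, u32, u64);

fn main() {
    assert_eq!(u32::describe(), "32 bits");
    println!("ok");
}
```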

@tarcieri
Member

tarcieri commented May 8, 2023

Optionally, use the same function for convert, by simply concatenating the first type with the zero-value from the "difference type"

That seems like it would perform needless work? Perhaps LLVM could optimize it away.

Otherwise that sounds fine. The main important thing is to avoid duplicating the implementation and having one place to make changes/optimizations to it.

@ycscaly
Contributor Author

ycscaly commented May 8, 2023

That seems like it would perform needless work? Perhaps LLVM could optimize it away.

It does sound like that, which I guess is why I didn't do it in the first place. I'll avoid it; I don't know if we want to rely on compiler optimizations, although this is really negligible work compared to big-int operations.

@ycscaly
Contributor Author

ycscaly commented May 9, 2023

The code will require significant refactoring to make use of generic_const_exprs and generally I don't want to deal with the maintenance headache of supporting nightly. Many of the features we need are still buggy/unstable.
I think what you're doing mostly works if you can find a way to DRY out the impls.
I can also take a shot at it. It would probably be worth refactoring the code prior to trying to expand it.

If what you're aiming at is simply to reduce code duplication, I think I could do this refactoring. What I'm thinking of:

  1. create a single impl_extra_traits macro that internally calls all of the above macros. This would reduce the macro instantiation for all of the types to a single place.
  2. refactor the internal logic of concat into a function that will be called by both impl_concat and impl_concat_cross_sizes.
  3. Optionally, use the same function for convert by simply concatenating the first type with the zero value of the "difference type". The alternative is to refactor the convert code into a single function that will be called from both From trait impls.

All done. However, this introduced a new problem I'm not sure how to overcome: enabling both extra-sizes and extra-trait-impls really becomes a combinatorial explosion. The build is now not even finishing. I commented out most of it, and it finishes in about 4 minutes for around 4 sizes; pushed just so you could see (the remaining ~8,000 lines of code are commented out).
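A back-of-the-envelope count shows why the combination explodes (a sketch assuming 64-bit size steps up to 8192, per the earlier discussion; the exact bounds are illustrative):

```rust
// Counting the impls a double loop over all sizes would generate:
// every (lo, hi) size pair needs its own concat/convert impls.
fn size_count() -> usize {
    (64..=8192).step_by(64).count()
}

fn main() {
    let sizes = size_count();
    println!("{} sizes -> {} (lo, hi) pairs of impls", sizes, sizes * sizes);
    assert_eq!(sizes, 128);
    assert_eq!(sizes * sizes, 16_384);
}
```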

I can really only think of two options to solve this:

  1. instead of extra-sizes and extra-trait-impls, have a per-size feature: e.g. a u4352 feature that you could enable to get that specific size with all associated traits implemented. I like this approach most, although it would make the Cargo.toml file less easy on the eye.
  2. export the macros and let users call them for the specific types they need. I mentioned this before but you rejected it; I wasn't convinced why this is impossible.

@tarcieri I'd love your thoughts on this, maybe you'll have a better option.

@tarcieri
Member

tarcieri commented May 9, 2023

These sorts of problems are exactly what I was worrying about with compile times, and why I wanted you to feature gate them.

We may need to pick one or the other feature.

have a per-size feature: e.g. a u4352 feature that you could toggle to have this specific size and all associated traits implemented for.

Ugh, I guess. This is adding a lot of complexity to the crate's API surface and will make the other features harder to discover for being buried in a bunch of size-specific features.

export the macros and let users call them for the specific types they need. I mentioned this before but you rejected it, I wasn't convinced why this is impossible?

The macros add trait impls to struct Uint, which is defined in crypto_bigint. You can't add trait impls to crypto_bigint::Uint from 3rd party crates (orphan rules disallow it), so just re-exporting the macros doesn't work.

While it would be possible to add macros that define a newtype and add trait impls to that, you won't be able to use them with other Uint types without redefining all of the trait impls on Uint for each size you add. Sounds like a mess. I think you'll quickly run into the same combinatorial explosion problem.

Another option is to (ab)use Deref to get to the inner Uint, which would require you add an * to any value to get the inner Uint in order to perform operations on it. I would not recommend this approach.

If you think it will work, try it, but I think you'll quickly discover it's a problematic approach.
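The Deref workaround mentioned above can be sketched like this (using u64 as a stand-in for a Uint alias and a hypothetical newtype name); note the `*` required at every use site, which is exactly why it is not recommended:

```rust
use std::ops::Deref;

// Hypothetical newtype wrapping the foreign type; Deref exposes the inner
// value so the orphan rule is sidestepped without new trait impls.
struct MyUint(u64);

impl Deref for MyUint {
    type Target = u64;
    fn deref(&self) -> &u64 {
        &self.0
    }
}

fn main() {
    let a = MyUint(20);
    let b = MyUint(22);
    // Every operation on the inner type needs an explicit dereference:
    let sum = *a + *b;
    assert_eq!(sum, 42);
    println!("ok");
}
```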

@ycscaly
Contributor Author

ycscaly commented May 9, 2023

These sorts of problems are exactly what I was worrying about with compile times, and why I wanted you to feature gate them.

We may need to pick one or the other feature.

have a per-size feature: e.g. a u4352 feature that you could toggle to have this specific size and all associated traits implemented for.

Ugh, I guess. This is adding a lot of complexity to the crate's API surface and will make the other features harder to discover for being buried in a bunch of size-specific features.

export the macros and let users call them for the specific types they need. I mentioned this before but you rejected it, I wasn't convinced why this is impossible?

The macros add trait impls to struct Uint, which is defined in crypto_bigint. You can't add trait impls to crypto_bigint::Uint from 3rd party crates (orphan rules disallow it), so just re-exporting the macros doesn't work.

While it would be possible to add macros that define a newtype and add trait impls to that, you won't be able to use them with other Uint types without redefining all of the trait impls on Uint for each size you add. Sounds like a mess. I think you'll quickly run into the same combinatorial explosion problem.

Another option is to (ab)use Deref to get to the inner Uint, which would require you add an * to any value to get the inner Uint in order to perform operations on it. I would not recommend this approach.

If you think it will work, try it, but I think you'll quickly discover it's a problematic approach.

Yeah, I thought that was the reason, and I don’t see a way we could overcome it (for option 2).

We could have extra_sizes as is, and do extra_trait_impls for powers-of-two only.

This would allow me to e.g. multiply U4096 and U256 and get the result as a concrete type U4352 while avoiding combinatorial madness.

And this isn't limited to me; I'd argue that most use cases arise from multiplying powers of two. Maybe some are using e.g. U192 or U384, so I could implement the traits for all existing types instead of just powers of two, but it should cover most. And then, if someone wants to add just one type for the extra traits in some specific use case, they could.

The alternative is to do per-size configs, which for the above-mentioned reasons is not recommended. I guess if this crate has survived so far by adding sizes on a per-need basis, we could do that; I just tried to solve the problem once and for all.

@ycscaly ycscaly force-pushed the extra_trait_impls branch from 55de29f to 480a89c on May 10, 2023 09:25
@ycscaly
Contributor Author

ycscaly commented May 10, 2023

@tarcieri I reverted the last commit that tried to have both extra-sizes and extra-trait-impls together. Now this code should be ready to be merged; it works for the pre-existing types only.

I have tried to take the more generic path here with extra-sizes, but I failed. Instead, let's merge this for the existing types and I'll open a new PR to add the types I need, namely U4352 and U4224. Would that be OK?

(In that case, I wouldn't actually need extra-sizes, and we can decide whether we want to keep them or not. I guess some other use cases could use extra-sizes without extra-trait-impls for these sizes, but I need both.)

@ycscaly ycscaly requested a review from tarcieri May 10, 2023 09:36
@@ -0,0 +1,48 @@
macro_rules! convert_internal {
Contributor


I think this is added to address issue #220, but resize() should suffice for the conversion.

@ycscaly
Contributor Author

ycscaly commented Jun 26, 2023

I managed to reduce my diffs from crypto-bigint's main repo drastically, and now that #251 has been merged, the only thing I need to upstream is #255, after which I could depend on origin/master.

Therefore, this cumbersome PR can now be closed.

@ycscaly ycscaly closed this Jun 26, 2023
Successfully merging this pull request may close these issues.

Concatenation of different sizes Uint<>s