QKeras updates #30
Conversation
… csim_integration
…_init__ for easier import elsewhere
…Splitting hls_model.py into hls_model.py and hls_layers.py was necessary to remove the circular import dependency: the optimizers import Layers and utilities, while hls_model now needs to import the optimizer.
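A rough sketch of the resulting import structure (the stub classes and the optimize_model entry point are illustrative, not the actual file contents):

```python
# hls_layers.py -- layer/variable definitions only; imports no optimizer code
class Layer:
    """Base class for HLS layers (illustrative stub)."""

class WeightVariable:
    """Holds layer weights (illustrative stub)."""

# optimizers.py -- passes import the layers module, not the model module
from hls_layers import Layer, WeightVariable

# hls_model.py -- imports both, so no module ever imports hls_model back
from hls_layers import Layer, WeightVariable
from optimizers import optimize_model  # hypothetical entry point
```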
…quantized_bits(4,0).max() is 1.0, whereas it would be 0.5 with ap_fixed. So, add 1 bit to the integer for ap_fixed types
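A small Python sketch of the range mismatch and the resulting bit-width adjustment (the conversion helper is hypothetical, not hls4ml API):

```python
from qkeras import quantized_bits

# QKeras: quantized_bits(bits, integer) covers roughly [-2**integer, 2**integer],
# so quantized_bits(4, 0).max() is 1.0 (as noted in the commit above).
q = quantized_bits(4, 0)
print(q.max())  # 1.0

# Vivado ap_fixed<W, I> covers roughly [-2**(I-1), 2**(I-1)), so mapping
# integer=0 directly to I=0 would halve the range (max ~0.5).
# Illustrative fix: add 1 to the integer width when building the ap_fixed type.
def qkeras_to_ap_fixed(bits, integer):
    return bits, integer + 1  # (W, I) for ap_fixed<W, I>

print(qkeras_to_ap_fixed(4, 0))  # (4, 1)
```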
… modes (required the qkeras converter split because of a circular import), and hooks for rounding mode in IntegerPrecisionType and FixedPrecisionType.
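A hedged sketch of what such a hook could look like: a fixed-point precision type that carries rounding/saturation modes and renders them into the ap_fixed template parameters (the class below is illustrative, not the actual hls4ml implementation):

```python
class FixedPrecisionType:
    """Illustrative fixed-point type with optional rounding/saturation modes."""

    def __init__(self, width=16, integer=6, signed=True,
                 rounding_mode=None, saturation_mode=None):
        self.width = width
        self.integer = integer
        self.signed = signed
        self.rounding_mode = rounding_mode      # e.g. 'AP_RND', 'AP_RND_CONV'
        self.saturation_mode = saturation_mode  # e.g. 'AP_SAT'

    def __str__(self):
        # Render to a Vivado HLS ap_fixed/ap_ufixed declaration
        args = [str(self.width), str(self.integer)]
        if self.rounding_mode is not None:
            args.append(self.rounding_mode)
            if self.saturation_mode is not None:
                args.append(self.saturation_mode)
        typename = 'ap_fixed' if self.signed else 'ap_ufixed'
        return '{}<{}>'.format(typename, ','.join(args))

print(FixedPrecisionType(16, 7, rounding_mode='AP_RND_CONV',
                         saturation_mode='AP_SAT'))
# ap_fixed<16,7,AP_RND_CONV,AP_SAT>
```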
…imizer pass to factorize out alpha scale and insert new 'ApplyAlpha' (BatchNormalization) layer to apply it back. Attach data_unquantized to WeightVariables to retain access to them later (used in QKerasFactorizeAlpha pass)
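A simplified numpy sketch of the idea behind factorizing out the alpha scale (the scale computation here is purely illustrative; the actual pass takes the scale from the QKeras quantizer and builds the ApplyAlpha layer inside the model graph):

```python
import numpy as np

def factorize_alpha(quantized_weights):
    """Split weights into an unscaled part and a per-channel scale (alpha)."""
    # Illustrative per-output-channel scale: largest magnitude in each column
    alpha = np.max(np.abs(quantized_weights), axis=0)
    alpha[alpha == 0] = 1.0                  # avoid division by zero
    unscaled = quantized_weights / alpha     # weights with alpha removed
    return unscaled, alpha

w = np.array([[0.5, -1.0],
              [0.25, 0.5]])
unscaled, alpha = factorize_alpha(w)

# 'ApplyAlpha' behaves like a BatchNormalization with scale=alpha and bias=0,
# inserted after the layer that now uses `unscaled`, so the product is unchanged.
assert np.allclose(unscaled * alpha, w)
```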
…m optimizer. Fix other passes to prevent multiple matches.
…rom Integer type. Fix match in rounding, saturation optimizer pass, add layer name to match API.
I added the new softmax layer (with its two tables) to the other converters. I tested the ONNX and TF models (successfully) but couldn't try the PyTorch one yet due to (I think) some issue with that model being from a much older PyTorch version.
…on properly. Propagate new Softmax to keras_to_hls.
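For context, a numpy sketch of a two-table softmax of the kind referred to above: one lookup table for exp(x) and one for 1/x (table sizes, ranges, and indexing below are illustrative, not the generated HLS code):

```python
import numpy as np

TABLE_SIZE = 1024
EXP_RANGE = (-8.0, 0.0)    # input range covered by the exp table
INV_RANGE = (1e-3, 64.0)   # input range covered by the inversion table

# Table 1: exp(x) over EXP_RANGE; Table 2: 1/x over INV_RANGE
exp_table = np.exp(np.linspace(*EXP_RANGE, TABLE_SIZE))
inv_table = 1.0 / np.linspace(*INV_RANGE, TABLE_SIZE)

def _lookup(table, lo, hi, x):
    """Map x linearly onto table indices and return the stored values."""
    idx = np.clip(((x - lo) / (hi - lo) * (TABLE_SIZE - 1)).astype(int),
                  0, TABLE_SIZE - 1)
    return table[idx]

def table_softmax(x):
    x = x - np.max(x)                               # stabilize: inputs <= 0
    e = _lookup(exp_table, *EXP_RANGE, x)           # table 1: exp(x)
    s = _lookup(inv_table, *INV_RANGE, np.sum(e))   # table 2: 1/sum(exp)
    return e * s

print(table_softmax(np.array([1.0, 2.0, 3.0])))
```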
New specific things: