feat: Wrap dynamic size handling in a compilation flag #1851
Conversation
Code conforms to C++ style guidelines
Code conforms to Python style guidelines
Code conforms to Python style guidelines
Code conforms to C++ style guidelines
LGTM
core/conversion/evaluators/prim.cpp
Outdated
@@ -40,6 +40,8 @@ auto prim_registrations =
   // Outputs is an IValue which has list of tensors which can be found in ctx->evaluated_value_map
   const torch::jit::IValue* outputs = args.at(n->input()).IValue();
   auto outputVec = outputs->toList().vec();
+  LOG_DEBUG("==== OUTPUT VEC: " << outputVec);
+  TORCHTRT_THROW_ERROR("========= FORCED ERROR ");
These 2 lines need to be removed
core/conversion/evaluators/prim.cpp
Outdated
@@ -40,6 +40,8 @@ auto prim_registrations =
   // Outputs is an IValue which has list of tensors which can be found in ctx->evaluated_value_map
   const torch::jit::IValue* outputs = args.at(n->input()).IValue();
   auto outputVec = outputs->toList().vec();
+  LOG_DEBUG("==== OUTPUT VEC: " << outputVec);
remove this line
…en::size
Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

chore: Add allow-shape-tensors option to torchtrtc, preserve static dimensions in the aten::size layer
Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

chore: Linter fixes
Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

chore: remove debug throw error
Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>
Code conforms to Python style guidelines
Code conforms to C++ style guidelines
core/conversion/evaluators/aten.cpp
Outdated
if (ctx->settings.allow_shape_tensors) {
  return dynamic_size_layer(ctx, n, args);
} else {
  LOG_WARNING("There may be undefined behavior using dynamic shape and aten::size ");
Is this undefined behavior due to using aten::size with dynamic shape without shape tensors, or the other way around? The message might need rewording.
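One possible rewording, sketched under the assumption that the warning is meant to fire when the input is dynamic but shape tensors are disabled (the message text below is illustrative, not from the PR):

// Illustrative clarified warning: spells out which combination is risky and what the fallback is.
LOG_WARNING(
    "Input is dynamic but allow_shape_tensors is disabled; falling back to static aten::size evaluation, "
    "which may produce incorrect results for dimensions that vary at runtime");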
@@ -270,7 +270,11 @@ auto aten_registrations TORCHTRT_UNUSED =
   if (tensor_var.isITensor()) {
     auto tensor = tensor_var.ITensor();
     if (ctx->input_is_dynamic) {
-      return dynamic_size_layer(ctx, n, args);
+      if (ctx->settings.allow_shape_tensors) {
We should be checking whether the input is dynamic, shape tensors are enabled, and the input contains a placeholder dimension before using the shape tensor code path. Otherwise, static aten::size is sufficient and will cause fewer errors.
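A minimal sketch of the suggested gating, assuming a hypothetical has_dynamic_dim helper that looks for the -1 placeholder in the ITensor's dimensions (the helper name and its placement are illustrative, not part of the PR):

// Hypothetical helper: returns true if any dimension of the ITensor's shape is the
// dynamic placeholder (-1).
static bool has_dynamic_dim(nvinfer1::ITensor* tensor) {
  auto dims = tensor->getDimensions();
  for (int i = 0; i < dims.nbDims; i++) {
    if (dims.d[i] == -1) {
      return true;
    }
  }
  return false;
}

// Suggested condition: only take the shape-tensor path when the input is dynamic,
// shape tensors are enabled, and the input actually contains a placeholder dimension.
// Otherwise fall through to the static aten::size evaluation.
if (ctx->input_is_dynamic && ctx->settings.allow_shape_tensors && has_dynamic_dim(tensor)) {
  return dynamic_size_layer(ctx, n, args);
}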
@@ -286,7 +290,11 @@ auto aten_registrations TORCHTRT_UNUSED =
   auto dim = args.at(n->input(1)).unwrapToInt();
   if (tensor_var.isITensor()) {
     if (ctx->input_is_dynamic) {
-      return dynamic_size_layer(ctx, n, args);
+      if (ctx->settings.allow_shape_tensors) {
Same comments as above apply here regarding the conditions and warnings.
Code conforms to Python style guidelines
Code conforms to C++ style guidelines
Description
This PR does the following:

- The aten::size dynamic shape handling is now disabled by default. Setting allow_shape_tensors=True will enable it.
- Modifications to aten::size include preserving the static dimensions in the input, thereby not converting any known types (e.g. Int) into ITensors.
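A hedged usage sketch with the C++ API, assuming the new setting is surfaced on the TorchScript compile spec under the same allow_shape_tensors name (the PR text only confirms the Python allow_shape_tensors=True setting and the torchtrtc --allow-shape-tensors option, so the exact C++ field is an assumption):

#include <torch/script.h>
#include "torch_tensorrt/torch_tensorrt.h"

int main() {
  auto module = torch::jit::load("model.ts"); // model path is a placeholder

  // Dynamic batch dimension: min/opt/max shapes differ in dim 0.
  auto input = torch_tensorrt::Input(
      /*min_shape=*/std::vector<int64_t>{1, 3, 224, 224},
      /*opt_shape=*/std::vector<int64_t>{8, 3, 224, 224},
      /*max_shape=*/std::vector<int64_t>{32, 3, 224, 224});

  torch_tensorrt::ts::CompileSpec spec({input});
  spec.allow_shape_tensors = true; // assumed field name; opts in to the aten::size shape-tensor path

  auto trt_module = torch_tensorrt::ts::compile(module, spec);
  return 0;
}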