
[Tensornet] Allow setting scratch size #2754

Merged · 3 commits · Apr 1, 2025

Conversation

@1tnguyen (Collaborator) commented Mar 23, 2025

Description

  • Add CUDAQ_TENSORNET_SCRATCH_SIZE_PERCENTAGE to control the scratch-space allocation. The default remains 50% of the available GPU memory (the existing behavior).

  • Minor code refactoring: move implementations to a .cpp file.

  • Update docs.

Related to #2748
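Based on the description, the new environment variable would be set before launching a CUDA-Q program on the tensornet backend. A minimal sketch (the variable name comes from this PR; the 25% value and the program invocation are illustrative placeholders):

```shell
# Cap the tensornet scratch allocation at 25% of available GPU memory
# instead of the default 50%. The variable name is taken from this PR;
# the invocation below is a placeholder.
export CUDAQ_TENSORNET_SCRATCH_SIZE_PERCENTAGE=25

# Placeholder: run your CUDA-Q executable against the tensornet target,
# e.g. after compiling with: nvq++ --target tensornet program.cpp
# ./a.out
echo "scratch size set to ${CUDAQ_TENSORNET_SCRATCH_SIZE_PERCENTAGE}%"
```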

Signed-off-by: Thien Nguyen <thiennguyen@nvidia.com>

CUDA Quantum Docs Bot: A preview of the documentation can be found here.

github-actions bot pushed a commit that referenced this pull request Mar 24, 2025
@1tnguyen 1tnguyen requested review from bmhowe23 and mitchdz March 24, 2025 05:02
@bmhowe23 bmhowe23 linked an issue Mar 31, 2025 that may be closed by this pull request
@bmhowe23 (Collaborator) left a comment


LGTM

@1tnguyen 1tnguyen merged commit 1af54f9 into NVIDIA:main Apr 1, 2025
197 checks passed
github-actions bot pushed a commit that referenced this pull request Apr 1, 2025
Successfully merging this pull request may close these issues.

CUDA-Q program always uses 50% of the available GPU memory.