
Possibility for GPU-backed vector computation? #8

Open
Shellcat-Zero opened this issue Aug 1, 2021 · 1 comment

Comments

@Shellcat-Zero

I was wondering whether any GPU-based performance optimization has been explored for this project. In particular, I was thinking that Python's JAX library would be very well suited to this. I'm still familiarizing myself with the source/algorithm used here for the computation, but in addition to providing an optional GPU-backed NumPy replacement, JAX also provides:

jit(), for speeding up your code
grad(), for taking derivatives
vmap(), for automatic vectorization or batching.

in case any of these would be helpful here. A demo of computing the Greeks by differentiating the Black-Scholes formula with grad() is shown here, although I've found that the demo is further improved by applying jit() to the functions there.
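To make the idea concrete, here is a minimal sketch of what that demo-style approach looks like. The Black-Scholes call formula below is the textbook version, not this repository's implementation, and the function names are my own:

```python
import jax
import jax.numpy as jnp
from jax.scipy.stats import norm


def bs_call(S, K, T, r, sigma):
    """Textbook Black-Scholes price of a European call option."""
    d1 = (jnp.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * jnp.sqrt(T))
    d2 = d1 - sigma * jnp.sqrt(T)
    return S * norm.cdf(d1) - K * jnp.exp(-r * T) * norm.cdf(d2)


# Delta = dPrice/dS via automatic differentiation; jit() compiles the
# differentiated function to XLA (CPU or GPU, whichever backend is active).
delta = jax.jit(jax.grad(bs_call, argnums=0))

price = bs_call(100.0, 100.0, 1.0, 0.05, 0.2)
d = delta(100.0, 100.0, 1.0, 0.05, 0.2)

# vmap() batches the same scalar function over an array of strikes
# without rewriting it in vectorized form.
strikes = jnp.linspace(80.0, 120.0, 5)
deltas = jax.vmap(delta, in_axes=(None, 0, None, None, None))(
    100.0, strikes, 1.0, 0.05, 0.2
)
```

The same grad()/jit()/vmap() composition would apply to whatever pricing function this package exposes, assuming it can be expressed in jax.numpy operations.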

@erkandem
Owner

erkandem commented Aug 3, 2021

Hey @Shellcat-Zero, thanks for the general interest in the code. I hope it solves something for you.

GPU acceleration sounds interesting and very well suited to matrix computation, but I run the apps that use this package on regular CPUs.
If you know how to do it and it benefits you, feel free to fork it.

Quite interesting, but not worth it for my usage.
