I was wondering if any performance optimization has been explored on the GPU for this. In particular, I was thinking that Python JAX would be very well suited to this optimization. I'm still familiarizing myself with the source/algorithm used here for the computation, but in addition to providing optional GPU-backed numpy for computing, JAX also provides:
- `jit()`, for speeding up your code
- `grad()`, for taking derivatives
- `vmap()`, for automatic vectorization or batching
in case any of these would be helpful here. A demo of computing the greeks via differentiation of Black-Scholes with `grad()` is shown here, although I've found that the demo is further improved by applying `jit()` to the functions there.
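As a minimal sketch of the idea (not code from this repository): `grad()` differentiates a plain Black-Scholes pricing function to get greeks like delta and vega, `jit()` compiles it, and `vmap()` batches it over many spot prices. The function and parameter names below are illustrative.

```python
import jax
import jax.numpy as jnp
from jax.scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    # Standard Black-Scholes European call price.
    d1 = (jnp.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * jnp.sqrt(T))
    d2 = d1 - sigma * jnp.sqrt(T)
    return S * norm.cdf(d1) - K * jnp.exp(-r * T) * norm.cdf(d2)

# Delta: derivative of price w.r.t. spot S (argnums=0); jit compiles it.
delta = jax.jit(jax.grad(black_scholes_call, argnums=0))
# Vega: derivative w.r.t. sigma (argnums=4).
vega = jax.jit(jax.grad(black_scholes_call, argnums=4))

# vmap evaluates delta over a batch of spots without a Python loop.
spots = jnp.linspace(80.0, 120.0, 5)
deltas = jax.vmap(lambda s: delta(s, 100.0, 1.0, 0.05, 0.2))(spots)
```

With a GPU backend installed, the same code runs on the GPU unchanged, which is the appeal here.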
Hey @Shellcat-Zero, thanks for the general interest in the code. Hope it solves something for you.
The GPU stuff sounds interesting and very well suited to matrix work, but I run the apps that use this package on regular chips.
If you know how to do it and it benefits you, feel free to fork it.