<a href="https://www.github.com/eduardoleao052/">
<img src="https://img.shields.io/badge/GitHub-%23121011.svg?style=flat-square&logo=github&logoColor=white">
</a>
<a href="https://www.linkedin.com/in/eduardoleao052/">
<img src="https://img.shields.io/badge/-LinkedIn-blue?style=flat-square&logo=linkedin">
</a>

# Welcome to Js-PyTorch's documentation

For access to the source code, visit <a href="https://github.com/eduardoleao052" target="_blank">the GitHub repo</a>.

## About

- JS-PyTorch is a Deep Learning **JavaScript library** built from scratch, designed to closely follow PyTorch's syntax.
- This means you can use the library to train, test, and deploy Neural Networks with Node.js or in a web browser.

## Installation

This is a **node** package and can be installed with **npm** (Node Package Manager). It fully supports node 20.15.1, the latest LTS (Long-Term Support) node version.

On most operating systems, it should also work with **more recent** versions.

### MacOS

* First, install **node** from the command line, as described on the <a href="https://nodejs.org/en/download/package-manager" target="_blank">node website</a>:

```
# installs nvm (Node Version Manager)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
# download and install Node.js (you may need to restart the terminal)
nvm install 20
# verifies the right Node.js version is in the environment
node -v # should print `v20.15.1`
# verifies the right npm version is in the environment
npm -v # should print `10.7.0`
```

* Now, use **npm** to install Js-PyTorch locally:

```
# installs js-pytorch
npm install js-pytorch
# if needed, installs an older version of js-pytorch
npm install js-pytorch@0.1.0
```

* Finally, **require** the package in your JavaScript file:

```javascript
const { torch } = require("js-pytorch");
const nn = torch.nn;
const optim = torch.optim;
```

### Linux

* First, install **node** from the command line, as described on the <a href="https://nodejs.org/en/download/package-manager" target="_blank">node website</a>:

```
# installs nvm (Node Version Manager)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
# download and install Node.js (you may need to restart the terminal)
nvm install 20
# verifies the right Node.js version is in the environment
node -v # should print `v20.15.1`
# verifies the right npm version is in the environment
npm -v # should print `10.7.0`
```

* Now, use **npm** to install Js-PyTorch locally:

```
# installs js-pytorch
npm install js-pytorch
# if needed, installs an older version of js-pytorch
npm install js-pytorch@0.1.0
```

* Finally, **require** the package in your JavaScript file:

```javascript
const { torch } = require("js-pytorch");
const nn = torch.nn;
const optim = torch.optim;
```

### Windows

* First, download **node** from the prebuilt installer on the <a href="https://nodejs.org/en/download/prebuilt-installer" target="_blank">node website</a>.

* Now, use **npm** to install Js-PyTorch locally:

```
# installs js-pytorch
npm install js-pytorch
# if needed, installs an older version of js-pytorch
npm install js-pytorch@0.1.0
```

> **Note:** If this throws an error, you might need to install the latest version of [Visual Studio](https://visualstudio.microsoft.com/downloads/?cid=learn-navbar-download-cta), including the "Desktop development with C++" workload.

* Finally, **require** the package in your JavaScript file:

```javascript
const { torch } = require("js-pytorch");
const nn = torch.nn;
const optim = torch.optim;
```
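
With everything installed, a minimal first script could look like the sketch below. It only uses calls documented in the Layers section of these docs (`nn.Linear`, `torch.randn`, `forward`, `shape`); the sizes and device are illustrative:

```javascript
const { torch } = require("js-pytorch");
const nn = torch.nn;

// A Linear layer mapping 4 input features to 2 outputs, on the CPU:
const linear = new nn.Linear(4, 2, 'cpu');

// A random [8, 4] input tensor, with gradient tracking enabled:
const x = torch.randn([8, 4], true, 'cpu');

// Forward pass:
const y = linear.forward(x);
console.log(y.shape); // should print [8, 2]
```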

## Contributing

- If you have **detected a bug** in the library, please file a <a href="https://github.com/eduardoleao052/js-pytorch/issues/new?assignees=&labels=02+Bug+Report&projects=&template=bug-report.yml" target="_blank">Bug Report</a> as a GitHub issue, and feel free to reach out to me on my LinkedIn or email.
- If you would like to see a **new feature** in Js-PyTorch, file a <a href="https://github.com/eduardoleao052/js-pytorch/issues/new?assignees=&labels=enhancement&projects=&template=feature-request.yml" target="_blank">New Feature</a> issue.
- Finally, if you would like to contribute, open a pull request to the `develop` branch. I will try to answer as soon as possible. All help is really appreciated! Here is a list of the **developer tools**:
    * **Build for Distribution** by running `npm run build`. CJS and ESM modules and `index.d.ts` will be output in the `dist/` folder.
    * **Check the Code** with ESLint at any time by running `npm run lint`.
    * **Run tests** by running `npm test`.
    * **Improve Code Formatting** with Prettier by running `npm run prettier`.
    * **Performance Benchmarks** are also included in the `tests/benchmarks/` directory. Run all benchmarks with `npm run bench` and save new benchmarks with `npm run bench:update`.

<a href="https://www.github.com/eduardoleao052/">
<img src="https://img.shields.io/badge/GitHub-%23121011.svg?style=flat-square&logo=github&logoColor=white">
</a>
<a href="https://www.linkedin.com/in/eduardoleao052/">
<img src="https://img.shields.io/badge/-LinkedIn-blue?style=flat-square&logo=linkedin">
</a>

# Layers

This section lists all of the **Layers** and **Modules**.

## nn.Linear

```
new nn.Linear(in_size,
              out_size,
              device,
              bias,
              xavier) → Tensor
```

Applies a linear transformation to the input tensor.
The input is matrix-multiplied by a `w` tensor, and a `b` tensor is added to the result.

#### Parameters
* **in_size (number)** - Size of the last dimension of the input data.
* **out_size (number)** - Size of the last dimension of the output data.
* **device (string)** - Device on which the model's calculations will run. Either `'cpu'` or `'gpu'`.
* **bias (boolean)** - Whether to use a bias term `b`.
* **xavier (boolean)** - Whether to use Xavier initialization on the weights.

#### Learnable Variables
* **w** - *[input_size, output_size]* Tensor.
* **b** - *[output_size]* Tensor.

#### Example

```javascript
// Create a Linear layer that runs on the GPU:
let linear = new nn.Linear(10, 15, 'gpu');
let x = torch.randn([100, 50, 10], true, 'gpu');
let y = linear.forward(x);
y.shape; // [100, 50, 15]
```

<br/>

## nn.MultiHeadSelfAttention

```
new nn.MultiHeadSelfAttention(in_size,
                              out_size,
                              n_heads,
                              n_timesteps,
                              dropout_prob,
                              device) → Tensor
```

Applies a self-attention layer to the input tensor. The forward pass, sketched after this list, works as follows:
* Matrix-multiplies the input by `Wk`, `Wq`, `Wv`, producing Key, Query, and Value tensors.
* Computes attention scores by multiplying the Query with the transposed Key.
* Applies Mask, Dropout, and Softmax to the attention activations.
* Multiplies the result by the Values.
* Multiplies the result by `residual_proj`.
* Applies a final Dropout.
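
A rough sketch of those steps, in the style of the `forward` listings used elsewhere on this page. The member names `Wk`, `Wq`, `Wv`, and `residual_proj` come from the Learnable Variables below, but everything else is an assumption: the tensor methods (`matmul`, `transpose`, `masked_fill`) mirror PyTorch naming, the `mask`, `softmax`, and dropout members are hypothetical placeholders, the mask → softmax → dropout ordering is the conventional one, and the per-head split and merge of the data is omitted for brevity:

```javascript
// Pseudocode sketch, not the library's exact implementation:
forward(x: Tensor): Tensor {
  // Project the input into Key, Query and Value tensors:
  let k = x.matmul(this.Wk);
  let q = x.matmul(this.Wq);
  let v = x.matmul(this.Wv);
  // Attention scores: Query times transposed Key, then Mask, Softmax, Dropout:
  let att = q.matmul(k.transpose(-2, -1));
  att = this.softmax.forward(att.masked_fill(this.mask, -Infinity));
  att = this.att_dropout.forward(att);
  // Weigh the Values, project with residual_proj, and apply a final Dropout:
  let z = att.matmul(v).matmul(this.residual_proj);
  return this.residual_dropout.forward(z);
}
```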

#### Parameters
* **in_size (number)** - Size of the last dimension of the input data.
* **out_size (number)** - Size of the last dimension of the output data.
* **n_heads (number)** - Number of parallel attention heads the data is divided into. `in_size` must be evenly divisible by `n_heads`.
* **n_timesteps (number)** - Number of timesteps computed in parallel by the transformer.
* **dropout_prob (number)** - Probability of randomly dropping an activation during training (to improve regularization).
* **device (string)** - Device on which the model's calculations will run. Either `'cpu'` or `'gpu'`.

#### Learnable Variables
* **Wk** - *[input_size, input_size]* Tensor.
* **Wq** - *[input_size, input_size]* Tensor.
* **Wv** - *[input_size, input_size]* Tensor.
* **residual_proj** - *[input_size, output_size]* Tensor.

#### Example

```javascript
// Create a self-attention layer with 2 heads and 32 timesteps on the GPU:
let att = new nn.MultiHeadSelfAttention(10, 15, 2, 32, 0.2, 'gpu');
let x = torch.randn([100, 50, 10], true, 'gpu');
let y = att.forward(x);
y.shape; // [100, 50, 15]
```

<br/>

## nn.FullyConnected

```
new nn.FullyConnected(in_size,
                      out_size,
                      dropout_prob,
                      device,
                      bias) → Tensor
```

Applies a fully-connected layer to the input tensor, as in the `forward` method below:

* Matrix-multiplies the input by Linear layer `l1`, upscaling it.
* Passes the tensor through ReLU.
* Matrix-multiplies the tensor by Linear layer `l2`, downscaling it back.
* Passes the tensor through Dropout.

```javascript
forward(x: Tensor): Tensor {
  // Upscale with the first Linear layer, then apply ReLU:
  let z = this.l1.forward(x);
  z = this.relu.forward(z);
  // Downscale with the second Linear layer, then apply Dropout:
  z = this.l2.forward(z);
  z = this.dropout.forward(z);
  return z;
}
```

#### Parameters
* **in_size (number)** - Size of the last dimension of the input data.
* **out_size (number)** - Size of the last dimension of the output data.
* **dropout_prob (number)** - Probability of randomly dropping an activation during training (to improve regularization).
* **device (string)** - Device on which the model's calculations will run. Either `'cpu'` or `'gpu'`.
* **bias (boolean)** - Whether to use a bias term `b`.

#### Learnable Variables
* **l1** - *[input_size, 4*input_size]* Tensor.
* **l2** - *[4*input_size, input_size]* Tensor.

#### Example

```javascript
// Create a fully-connected layer with 0.2 dropout on the GPU:
let fc = new nn.FullyConnected(10, 15, 0.2, 'gpu');
let x = torch.randn([100, 50, 10], true, 'gpu');
let y = fc.forward(x);
y.shape; // [100, 50, 15]
```

<br/>

## nn.Block

```
new nn.Block(in_size,
             out_size,
             n_heads,
             n_timesteps,
             dropout_prob,
             device) → Tensor
```

Applies a transformer Block layer to the input tensor, adding a residual connection around each of its two sub-layers:

```javascript
forward(x: Tensor): Tensor {
  // Pass through Layer Norm and Self Attention:
  let z = x.add(this.att.forward(this.ln1.forward(x)));
  // Pass through Layer Norm and Fully Connected:
  z = z.add(this.fcc.forward(this.ln2.forward(z)));
  return z;
}
```

#### Parameters
* **in_size (number)** - Size of the last dimension of the input data.
* **out_size (number)** - Size of the last dimension of the output data.
* **n_heads (number)** - Number of parallel attention heads the data is divided into. `in_size` must be evenly divisible by `n_heads`.
* **n_timesteps (number)** - Number of timesteps computed in parallel by the transformer.
* **dropout_prob (number)** - Probability of randomly dropping an activation during training (to improve regularization).
* **device (string)** - Device on which the model's calculations will run. Either `'cpu'` or `'gpu'`.

#### Learnable Modules
* **nn.MultiHeadSelfAttention** - `Wk`, `Wq`, `Wv`, `residual_proj`.
* **nn.LayerNorm** - `gamma`, `beta`.
* **nn.FullyConnected** - `l1`, `l2`.
* **nn.LayerNorm** - `gamma`, `beta`.
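
#### Example

A usage sketch in the style of the examples above. Because `forward` adds the input to each sub-layer's output (the residual connections), this sketch keeps `in_size` equal to `out_size`, and it sizes the input's timestep dimension to match `n_timesteps`; treat the exact shapes as illustrative rather than verified:

```javascript
// Create a transformer Block with 2 heads and 32 timesteps on the GPU:
let block = new nn.Block(10, 10, 2, 32, 0.2, 'gpu');
let x = torch.randn([100, 32, 10], true, 'gpu');
let y = block.forward(x);
y.shape; // [100, 32, 10]
```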