Validators need enough processing power to run inference on multiple models. A GPU (at least an NVIDIA RTX 4090) with a minimum of 16 GB of VRAM is required (24 GB is recommended).
Make sure that your server provider supports systemd (RunPod doesn't). Otherwise the Ollama service won't restart automatically and you'll have to restart it yourself from time to time.
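To check whether systemd is actually in use on your server, you can inspect PID 1 (a quick sanity check; it prints "systemd" on systemd-based systems):
# Prints the command name of PID 1; "systemd" means systemd is available
ps -p 1 -o comm=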
- Clone the repo
apt update && apt upgrade -y
git clone https://github.com/It-s-AI/llm-detection
- Set up your Python virtual environment or Conda environment.
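For example, a minimal virtual environment with Python's built-in venv module (the .venv directory name is just a convention):
python3 -m venv .venv
source .venv/bin/activate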
- Install the requirements. From your virtual environment, run
cd llm-detection
python3 -m pip install -e .
python3 -m pip uninstall mathgenerator -y
python3 -m pip install git+https://github.com/synapse-alpha/mathgenerator.git
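As a quick sanity check that the replacement package installed correctly (assuming the fork keeps the standard mathgenerator import name):
python3 -c "import mathgenerator; print('mathgenerator OK')"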
- Make sure you've created a Wallet and registered a hotkey.
btcli w new_coldkey
btcli w new_hotkey
btcli s register --netuid 32 --wallet.name YOUR_COLDKEY --wallet.hotkey YOUR_HOTKEY
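You can then confirm the hotkey is registered with a wallet overview (subcommand names can vary slightly between bittensor releases):
btcli wallet overview --wallet.name YOUR_COLDKEY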
- Install PM2 and the jq package on your system.
sudo apt update && sudo apt install jq && sudo apt install npm && sudo npm install pm2 -g && pm2 update
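A quick check that both tools landed on your PATH:
jq --version
pm2 --version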
- Make the run.sh file executable.
chmod +x run.sh
- Install lshw so that Ollama can detect GPUs on your system.
apt update
apt install lshw -y
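To verify the GPU is visible, you can list display adapters with lshw and query the NVIDIA driver (nvidia-smi ships with the driver):
lshw -C display
nvidia-smi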
- Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
- Run the Ollama service in the background (make sure you don't have any running instances of Ollama before running this command).
pm2 start --name ollama "ollama serve"
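To confirm the service is up, query Ollama's HTTP API on its default port 11434; it returns a JSON list of the locally pulled models:
curl -s http://localhost:11434/api/tags | jq .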
- If you want to update your pulled models, run this:
ollama list | tail -n +2 | awk '{print $1}' | while read -r model; do
  ollama pull "$model"
done
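If you'd rather automate this, one option is to save the loop above as a script (a hypothetical /root/update_models.sh; pick any path you like) and schedule it with cron, e.g. nightly at 03:00:
# Append a nightly cron entry that re-pulls all local models
chmod +x /root/update_models.sh
(crontab -l 2>/dev/null; echo "0 3 * * * /root/update_models.sh") | crontab -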
- Install cc_net
sudo apt-get install build-essential libboost-system-dev libboost-thread-dev libboost-program-options-dev libboost-test-dev zip unzip -y
pip install -e .
- Finally, start the validator with PM2. Replace YOUR_PORT with an open TCP port on your machine (valid port numbers go up to 65535):
pm2 start run.sh --name llm_detection_validators_autoupdate -- --wallet.name YOUR_COLDKEY --wallet.hotkey YOUR_HOTKEY --axon.port YOUR_PORT --neuron.device cuda:0
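Once started, you can monitor the validator with PM2's standard tooling:
pm2 status
pm2 logs llm_detection_validators_autoupdate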