Version 3.7 update (#1167)
* Update README.md

* version update + bug fix for server

---------

Co-authored-by: Unconst <32490803+unconst@users.noreply.github.com>
Eugene-hu and unconst authored Mar 19, 2023
1 parent bec0081 commit 8c19e9e
Showing 3 changed files with 11 additions and 8 deletions.
13 changes: 9 additions & 4 deletions README.md
@@ -13,13 +13,18 @@

 </div>

-This repository contains Bittensor's python API which can be used to 1) Query the Bittensor network as a [client](#31-client) 2) Run and build Bittensor miners & validators for [mining TAO](#43-running-a-template-miner), 3) Pull network [state information](#3-using-bittensor) and 4) Manage [TAO wallets](#41-cli), balances, transfers etc.
+This repository contains Bittensor's Python API, which can be used for the following purposes:

-Bittensor is a mining network (like Bitcoin) with inbaked incentives which are designed to drive miners to provide value; which, in our network, is achieved by hosting trained or training machine learning models, which can be queried by clients seeking inference over inputs (i.e. text-generation, or numerical embeddings from a large foundation model like GPT-NeoX-20B).
+1. Querying the Bittensor network as a [client](https://github.com/opentensor/bittensor#31-client).
+2. Running and building Bittensor miners and validators for [mining TAO](https://github.com/opentensor/bittensor#43-running-a-template-miner).
+3. Pulling network [state information](https://github.com/opentensor/bittensor#3-using-bittensor).
+4. Managing [TAO wallets](https://github.com/opentensor/bittensor#41-cli), balances, transfers, etc.

-The use of token based incentives is by design, built-in to drive the network's size and as a means of distributing the value generated by the network directly to the individuals producing that value without intermediary. The network is open to those who participate and no individual or group has full power of what it learns, who can profit from it, or access it.
+Bittensor is a mining network, similar to Bitcoin, that includes built-in incentives designed to encourage miners to provide value by hosting trained or training machine learning models. These models can be queried by clients seeking inference over inputs, such as token-based text generations or numerical embeddings from a large foundation model like GPT-NeoX-20B.

-To learn more about Bittensor read our [paper].(https://drive.google.com/file/d/1VnsobL6lIAAqcA1_Tbm8AYIQscfJV4KU/view).
+Token-based incentives are designed to drive the network's growth and distribute the value generated by the network directly to the individuals producing that value, without intermediaries. The network is open to all participants, and no individual or group has full control over what is learned, who can profit from it, or who can access it.

+To learn more about Bittensor, please read our [paper](https://drive.google.com/file/d/1VnsobL6lIAAqcA1_Tbm8AYIQscfJV4KU/view).

 - [1. Documentation](#1-documentation)
 - [2. Install](#2-install)
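As a rough illustration of uses 3 and 4 in the updated README overview (pulling network state and checking a wallet's TAO balance), the sketch below assumes the 3.x-era Python API names `bittensor.wallet`, `bittensor.subtensor`, and `bittensor.metagraph`; the wallet and hotkey names are hypothetical placeholders, and exact signatures may differ between versions, so treat this as a sketch rather than the repository's documented usage:

```python
import bittensor

# Load an existing wallet (the names here are hypothetical placeholders).
wallet = bittensor.wallet( name = 'my_wallet', hotkey = 'my_hotkey' )

# Connect to the chain and read the wallet's TAO balance.
subtensor = bittensor.subtensor( network = 'nakamoto' )
balance = subtensor.get_balance( wallet.coldkeypub.ss58_address )
print( 'balance:', balance )

# Pull network state: the metagraph holds the hotkeys and stake of all neurons.
metagraph = bittensor.metagraph( network = 'nakamoto' )
metagraph.sync()
print( 'neurons:', len( metagraph.hotkeys ), 'total stake:', metagraph.S.sum() )
```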
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
-3.6.3
+3.7.0
4 changes: 1 addition & 3 deletions bittensor/_neuron/text/core_server/__init__.py
@@ -426,28 +426,26 @@ def synapse_check(self, synapse, hotkey, inputs_x=None):
         """
         ## Uid that sent the request
         incoming_uid = self.metagraph.hotkeys.index(hotkey)
+        batch_size, sequence_len = inputs_x[0].size()
         if synapse.synapse_type == bittensor.proto.Synapse.SynapseType.TEXT_LAST_HIDDEN_STATE:
             if self.metagraph.S[incoming_uid] < self.config.neuron.lasthidden_stake \
                 or (batch_size > self.config.neuron.max_batch_size) \
                 or (sequence_len > self.config.neuron.max_sequence_len):
                 return False

         elif synapse.synapse_type == bittensor.proto.Synapse.SynapseType.TEXT_CAUSAL_LM:
-            batch_size, sequence_len = inputs_x[0].size()
             if (self.metagraph.S[incoming_uid] < self.config.neuron.causallm_stake) \
                 or (batch_size > self.config.neuron.max_batch_size) \
                 or (sequence_len > self.config.neuron.max_sequence_len):
                 return False

         elif synapse.synapse_type == bittensor.proto.Synapse.SynapseType.TEXT_CAUSAL_LM_NEXT:
-            batch_size, sequence_len = inputs_x[0].size()
             if (self.metagraph.S[incoming_uid] < self.config.neuron.causallmnext_stake) \
                 or (batch_size > self.config.neuron.max_batch_size) \
                 or (sequence_len > self.config.neuron.max_sequence_len):
                 return False

         elif synapse.synapse_type == bittensor.proto.Synapse.SynapseType.TEXT_SEQ_2_SEQ:
-            batch_size, sequence_len = inputs_x[0].size()
             if (self.metagraph.S[incoming_uid] < self.config.neuron.seq2seq_stake) \
                 or (batch_size > self.config.neuron.max_batch_size) \
                 or (sequence_len > self.config.neuron.max_sequence_len) \
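The server fix above hoists the shared `inputs_x[0].size()` call out of the individual synapse branches, so `batch_size` and `sequence_len` are assigned before any branch reads them; in the old code shown in the hunk, the `TEXT_LAST_HIDDEN_STATE` branch checked both names without a preceding assignment. A minimal sketch of the same hoisting pattern, using hypothetical names rather than the repository's own code:

```python
import torch

def request_allowed(stake: float, inputs: torch.Tensor, min_stake: float,
                    max_batch: int, max_len: int) -> bool:
    # Compute the shared quantities once, before any branch reads them,
    # mirroring the hoisted inputs_x[0].size() call in the diff above.
    batch_size, sequence_len = inputs.size()
    return (stake >= min_stake
            and batch_size <= max_batch
            and sequence_len <= max_len)

# Example: a 2 x 64 input from a caller with enough stake passes the check.
print(request_allowed(10.0, torch.zeros(2, 64), min_stake=1.0, max_batch=10, max_len=256))
```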
