Commit 2aae70e
Make microchain agent runnable with local models via Ollama (#98)
* Add notes on different model performances of the microchain agent
* Merge main
evangriffiths authored Apr 22, 2024
1 parent 5800c66 commit 2aae70e
Showing 3 changed files with 91 additions and 29 deletions.
64 changes: 37 additions & 27 deletions prediction_market_agent/agents/microchain_agent/microchain_agent.py
@@ -1,3 +1,4 @@
import typer
from functions import MARKET_FUNCTIONS, MISC_FUNCTIONS
from microchain import LLM, Agent, Engine, OpenAIChatGenerator
from microchain.functions import Reasoning, Stop
@@ -8,32 +9,41 @@
)
from prediction_market_agent.utils import APIKeys

engine = Engine()
engine.register(Reasoning())
engine.register(Stop())
for function in MISC_FUNCTIONS:
    engine.register(function())
for function in MARKET_FUNCTIONS:
    engine.register(function(market_type=MarketType.OMEN))
for function in OMEN_FUNCTIONS:
    engine.register(function())

generator = OpenAIChatGenerator(
    model="gpt-4-turbo-preview",
    api_key=APIKeys().openai_api_key.get_secret_value(),
    api_base="https://api.openai.com/v1",
    temperature=0.7,
)
agent = Agent(llm=LLM(generator=generator), engine=engine)
agent.prompt = f"""Act as a agent to maximise your profit. You can use the following functions:
{engine.help}
Only output valid Python function calls.
"""
def main(
    api_base: str = "https://api.openai.com/v1",
    model: str = "gpt-4-turbo-preview",
) -> None:
    engine = Engine()
    engine.register(Reasoning())
    engine.register(Stop())
    for function in MISC_FUNCTIONS:
        engine.register(function())
    for function in MARKET_FUNCTIONS:
        engine.register(function(market_type=MarketType.OMEN))
    for function in OMEN_FUNCTIONS:
        engine.register(function())

    generator = OpenAIChatGenerator(
        model=model,
        api_key=APIKeys().openai_api_key.get_secret_value(),
        api_base=api_base,
        temperature=0.7,
    )
    agent = Agent(llm=LLM(generator=generator), engine=engine)
    agent.prompt = f"""Act as a agent to maximise your profit. You can use the following functions:
{engine.help}
Only output valid Python function calls.
"""

    agent.bootstrap = ['Reasoning("I need to reason step-by-step")']
    agent.run(iterations=10)
    # generator.print_usage()  # Waiting for microchain release


agent.bootstrap = ['Reasoning("I need to reason step-by-step")']
agent.run(iterations=10)
generator.print_usage()
if __name__ == "__main__":
typer.run(main)
52 changes: 52 additions & 0 deletions prediction_market_agent/agents/microchain_agent/model_notes.md
@@ -0,0 +1,52 @@
# Microchain Agent Model Behaviour Diary

## Proprietary models

### GPT4

- Makes many reasoning steps, with coherent and sensible reasoning w.r.t. betting strategy
- Almost always gets function calls correct
- Seems keen on betting large amounts, even when instructed not to!
- Seems keen to `Stop` the program after at most a couple of bets. Doesn't use some of the functions (selling, getting existing positions)

## Local models

### Setup

- Instructions are for Ollama, but you can use any library that allows you to set up a local OpenAI-compatible server.
- Download Ollama [here](https://ollama.com/download/mac)
- In a separate terminal, run `ollama serve` to start the server. You can set the address and port with the `OLLAMA_HOST` env var, e.g.:

```bash
OLLAMA_HOST=127.0.0.1:11435 ollama serve
```

- Run the script, passing in the API address and model name as arguments. Note that you must have downloaded the model weights in advance via `ollama run <model_name>`:

```bash
python prediction_market_agent/agents/microchain_agent/microchain_agent.py --api-base "http://localhost:11435/v1" --model "nous-hermes2:latest"
```

Note that the first call to the model will be slow, as the model weights are loaded from disk into GPU memory/RAM.
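Under the hood, `--api-base` simply repoints the OpenAI-compatible client at the local server. As an illustration (not part of the repo), a raw equivalent of a single chat call to the server from the command above might look like this — the endpoint shape follows the OpenAI chat-completions API, and actually sending the request assumes `ollama serve` is running with `nous-hermes2:latest` pulled:

```python
import json
import urllib.request


def build_chat_request(api_base: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request against an OpenAI-compatible /chat/completions endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        url=f"{api_base}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


# Point at the local Ollama server instead of api.openai.com:
req = build_chat_request("http://localhost:11435/v1", "nous-hermes2:latest", "Hello")
# urllib.request.urlopen(req) would send it -- requires `ollama serve` to be running.
```

Any other OpenAI-compatible server (vLLM, llama.cpp's server, etc.) works the same way: only `api_base` and `model` change.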

### [mixtral:8x7b-instruct-v0.1-q3_K_S](https://ollama.com/library/mixtral:8x7b-instruct-v0.1-q3_K_S)

- Promising! Outputs some coherent reasoning. Chains several function calls together before starting to lose its way.
- Made several bad function calls, but followed up with reasoning to fix the function call, then made the correct one.
- Didn't have a good view of all the available functions, e.g.:
  - tried to use `EndChat()` instead of `Stop()`
  - iterated through all markets to get `market.p_yes`, but didn't try to call a mech to predict its own `p_yes`.
- Questionable reasoning w.r.t. betting strategy: stated that a market with a large `|p_yes - 0.5|` was more likely to be mis-priced.

### [llama3:latest](https://ollama.com/library/llama3)

- Couldn't get any useful function calls from it.
- Often replied with an empty string, aborting the program
- Couldn't recover after an incorrect function call

### [nous-hermes2:latest](https://ollama.com/library/nous-hermes2)

- Took some system-prompt massaging to get it going
- Was able to recover from some bad function calls
- Correctly called a mech to predict its own `p_yes`, but its reasoning about what to do with the result unravelled
- Always fell over before placing a bet
@@ -3,7 +3,7 @@
redeem_from_all_user_positions,
)

from prediction_market_agent.agents.microchain_agent.utils import MicrochainAPIKeys
from prediction_market_agent.utils import APIKeys


class RedeemWinningBets(Function):
@@ -16,7 +16,7 @@ def example_args(self) -> list[str]:
        return []

    def __call__(self) -> None:
        redeem_from_all_user_positions(MicrochainAPIKeys().bet_from_private_key)
        redeem_from_all_user_positions(APIKeys().bet_from_private_key)


# Functions that interact exclusively with Omen prediction markets
