The command `llm --extract --model llama3.2:1b "Say hello in a markdown code block."` raises an error, but all of these work:

- `llm --no-log --extract --model llama3.2:1b "Say hello in a markdown code block."`
- `llm --extract --model gpt-4o-mini "Say hello in a markdown code block."`
- `llm --model llama3.2:1b "Say hello in a markdown code block."`

So I think there's some weird interaction between:

- Logging responses and extracting code blocks
- Talking to Ollama models

Changing `self.json()` to `dict(self.json())` in `llm/models.py` seems to fix it, but breaks when you don't pass `--extract`. That's why I'm opening this issue.
```
$ llm --extract --model llama3.2:1b "Say hello in a markdown code block."
Hello
Traceback (most recent call last):
  File "/Users/julia/.local/bin/llm", line 8, in <module>
    sys.exit(cli())
    ~~~^^
  File "/Users/julia/.local/pipx/venvs/llm/lib/python3.13/site-packages/click/core.py", line 1161, in __call__
    return self.main(*args, **kwargs)
           ~~~~~~~~~^^^^^^^^^^^^^^^^^
  File "/Users/julia/.local/pipx/venvs/llm/lib/python3.13/site-packages/click/core.py", line 1082, in main
    rv = self.invoke(ctx)
  File "/Users/julia/.local/pipx/venvs/llm/lib/python3.13/site-packages/click/core.py", line 1697, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
           ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^
  File "/Users/julia/.local/pipx/venvs/llm/lib/python3.13/site-packages/click/core.py", line 1443, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/julia/.local/pipx/venvs/llm/lib/python3.13/site-packages/click/core.py", line 788, in invoke
    return __callback(*args, **kwargs)
  File "/Users/julia/.local/pipx/venvs/llm/lib/python3.13/site-packages/llm/cli.py", line 493, in prompt
    response.log_to_db(db)
    ~~~~~~~~~~~~~~~~~~^^^^
  File "/Users/julia/.local/pipx/venvs/llm/lib/python3.13/site-packages/llm/models.py", line 315, in log_to_db
    db["responses"].insert(response)
    ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^
  File "/Users/julia/.local/pipx/venvs/llm/lib/python3.13/site-packages/sqlite_utils/db.py", line 3219, in insert
    return self.insert_all(
           ~~~~~~~~~~~~~~~^
        [record],
        ^^^^^^^^^
        ...<13 lines>...
        strict=strict,
        ^^^^^^^^^^^^^^
    )
    ^
  File "/Users/julia/.local/pipx/venvs/llm/lib/python3.13/site-packages/sqlite_utils/db.py", line 3351, in insert_all
    self.insert_chunk(
    ~~~~~~~~~~~~~~~~~^
        alter,
        ^^^^^^
        ...<11 lines>...
        ignore,
        ^^^^^^^
    )
    ^
  File "/Users/julia/.local/pipx/venvs/llm/lib/python3.13/site-packages/sqlite_utils/db.py", line 3109, in insert_chunk
    result = self.db.execute(query, params)
  File "/Users/julia/.local/pipx/venvs/llm/lib/python3.13/site-packages/sqlite_utils/db.py", line 533, in execute
    return self.conn.execute(sql, parameters)
           ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
sqlite3.ProgrammingError: Error binding parameter 12: type 'ChatResponse' is not supported
```
heyajulia changed the title from "A fix for sqlite3.ProgrammingError: Error binding parameter 12: type 'ChatResponse' is not supported when chatting with Ollama models" to "sqlite3.ProgrammingError: Error binding parameter 12: type 'ChatResponse' is not supported when chatting with Ollama models" on Feb 6, 2025.