Merge branch preview into preview-fork
Note: the list below was generated using Claude Opus from the more verbose original squashed commit message, so some items are missing and some may be hallucinated. A quick eyeball check indicated it is very close to what a human would create manually.

🐛 fix merge mistake

Merge branch 'main' into preview

🐛 fix language undefined bug

🚧 update redirect

Revert "📝 remove redundant docs redirect"

📝 remove redundant docs redirect

Merge branch 'main' of https://github.com/continuedev/continue

📝 restore /docs base url

🍱 update continue logo

Update README image and description (continuedev#1169)

Merge branch 'main' of https://github.com/continuedev/continue

📝 update docs base path

🔥 remove disable indexing

💄 add llama3 to UI

📝 remove note about parallel ollama models

fix type for gemma of groq (continuedev#1142)

📝 update embeddings docs

📝 update custom commands docs

📝 fix shortcut in docs

📝 gemini 1.5 pro in changelog

📝 update changelog

Preview (continuedev#1114)

- ✨ shared indexing

- 🎨 indexing

- 🧑‍💻 npm i --no-save in prepackage.js

- 🚚 rename addLogs to addPromptCompletionPair

- 🩹 add filter for midline imports/top-level keywords and encoding header

- 🩹 add .t. to stop words

- 🔥 Improved Ctrl/Cmd+I (continuedev#1023)

- ⚡️ improved diff streaming algo

- 🎨 better messaging/formatting for further cmd/ctrl+I instructions

- ⚡️ more reliably filter out unwanted explanations

- 🚸 better follow up edits

- 💄 accept/reject diffs block-by-block

- ✨ cmd/ctrl+z to reject diff

- 🚚 rename variables

- 💄 allow switching files when inline diff still visible

- 🚸 don't show quick pick if no ctx providers exist

- 🚧 (sort of) allow switching editors while streaming diff

- 💄 show model being used for cmd/ctrl+I

- 💄 don't add undo stops when generating diff

- 🐛 fix shortcuts for accept/reject diff blocks

- ✨ improved GPT edit prompt, taking prefix/suffix into account

- ✨ improved prompting for empty selection ctrl/cmd+I

- ⚡️ immediately refresh codelens

- 🐛 use first model if default undefined

- ⚡️ refresh codelens after diff cleared

- 💄 update keyboard shortcuts

- ⚡️ Improved edit prompts for OS models (continuedev#1029)

- 💄 refresh codelens more frequently

- ⚡️ improved codellama edit prompt

- ⚡️ better codellama prompt

- ⚡️ use same improved prompt for most OS models

- 🎨 refactor chat templates

- 🎨 refactor llama2 prompt to allow ending assistant message

- ⚡️ separate os models prompt when no prefix/suffix

- 🎨 refactor to allow putting words in the model's mouth

- ⚡️ prune code around cmd/ctrl+I

- 🚚 rename to cmd/ctrl+I

- 🎨 make raw a base completion option

- 🩹 small improvements

- 🩹 use different prompt when completions not supported

- Keep the same statusBar item when updating it to prevent flickering of the status bar. (continuedev#1022)

- 🎨 add getRepoName to IDE, use for indexing

- 🎨 implement server client interface

- 📌 pin to vectordb=0.4.12

- 🧑‍💻 mark xhr-sync-worker.js as external in esbuild

- 🎨 break out ignore defaults into core

- 🎨 update getRepoName

- 🐛 fix import error

- 🩹 fix chat.jsonl logging

- ⚡️ improved OpenAI autocomplete support

- 🐛 fix bug causing part of completions to be skipped

- 🔥 remove URLContextProvider

- ✨ Add Groq as an official provider

- 🩹 make sure autocomplete works with claude

- 💄 update positioning of code block toolbar to not cover code

- ✨ Run in terminal button

- ✨ insert at cursor button

- ✨ Regenerate and copy buttons

- ✨ Button to force re-indexing

- 🐛 make sure tooltip IDs are unique

- ✨ Button to continue truncated response

- 🚧 WIP on inline edit browser embedding

- 🚧 inline TipTapEditor

- 🚧 WIP on inline TipTapEditor

- 🔥 remove unused test component

- 🚧 native inline edit

- 💄 nicer looking input box

- ✨ Diff Streaming in JetBrains

- 💄 line highlighting

- 💄 arial font

- ✨ Retry with further instructions

- 🚧 drop shadow

- ✨ accept/reject diffs

- ✨ accept/reject diffs

- 🐛 fix off-by-one errors

- 🚧 swap out button on enter

- 💄 styling and auto-resize

- 💄 box shadow

- 🚧 fix keyboard shortcuts to accept/reject diff

- 💄 improve small interactions

- 💄 loading icon, cancellation logic

- 🐛 handle next.value being undefined

- ✨ latex support

- Bug Fix: Add ternary operator to prevent nonexistent value error (continuedev#1052)

- Update completionProvider.ts

  Add \r\n\r\n stop to tab completion

- 📌 update package-locks

- 🐛 fix bug in edit prompt

- 🔊 print when SSL verification disabled

- 📌 pin esbuild version to match our hosted binary

- 🔥 remove unused package folder

- 👷 add note about pinning esbuild

- 🚚 rename pkg to binary

- ⚡️ update an important stop word for starcoder2, improve dev data

- 🐛 fix autocomplete bug

- Update completionProvider.ts

  as @rootedbox suggested

- ⏪ revert back to esbuild ^0.17.19 to solve no backend found error with onnxruntime

- 🩹 set default autocomplete temp to 0.01 to be strictly positive

- make the useCopyBuffer option effective (continuedev#1062)

- Con-1037: Toggle full screen bug (continuedev#1065)

- Resolve conflict, accept branch being merged in (continuedev#1076)

- continuedev#1073: update outdated documentation (continuedev#1074)

- 🩹 small tweaks to stop words

- Add abstraction for fetch to easily allow using request options (continuedev#1059)

- Add a new slash command to review code. (continuedev#1071)

- 🩹 add new starcoder artifact as stopword

- 💄 slight improvements to inline edit UI

- 🔖 update default models, bump gradle version

- 📝 recommend starcoder2

- 🐛 fix jetbrains encoding issue

- 🩹 don't index site-packages

- 🩹 error handling in JetBrains

- 🐛 fix copy to clipboard in jetbrains

- fix: cursor focus issue causing unwanted return to text area (continuedev#1086)

- 📝 mention autocomplete in jetbrains

- 📝 Tab-autocomplete README

- 🔥 remove note about custom ctx providers only being on VS Code

- 📝 docs about http context provider

- 👥 pull request template

- Update from Claude 2 to Claude 3 (continuedev#1078)

- 📝 add FAQ about single-line completions

- 📝 update autocomplete docs

- fix cursor focus issue causing unwanted return to text area

- 🔧 option to disable autocomplete from config.json

- ✨ option to disable streaming with anthropic

- ✅ Test to verify that files are packaged

- Add FIM template for CodeGemma (continuedev#1097)

  Also pass stop tokens to llama.cpp.

- ✨ customizable rerankers (continuedev#1088)
pzaback committed Apr 23, 2024
1 parent 93fbc10 commit 5c5b168
Showing 31 changed files with 314 additions and 146 deletions.
7 changes: 7 additions & 0 deletions .changes/extensions/intellij/0.0.42.md
@@ -0,0 +1,7 @@
## 0.0.42 - 2024-04-12
### Added
* Inline cmd/ctrl+I in JetBrains
### Fixed
* Fixed character encoding error causing display issues
* Fixed error causing input to constantly demand focus
* Fixed automatic reloading of config.json
5 changes: 5 additions & 0 deletions .changes/extensions/vscode/0.8.24.md
@@ -0,0 +1,5 @@
## 0.8.24 - 2024-04-12
### Added
* Support for improved retrieval models (Voyage embeddings/reranking)
* New @code context provider
* Personal usage analytics
4 changes: 4 additions & 0 deletions .changes/unreleased/Added-20240412-160513.yaml
@@ -0,0 +1,4 @@
project: extensions/vscode
kind: Added
body: Support for Gemini 1.5 Pro
time: 2024-04-12T16:05:13.251485-07:00
3 changes: 1 addition & 2 deletions CHANGELOG.md
@@ -1,7 +1,6 @@
# Changelog

Separate changelogs are kept for each part of the Continue repository:
Separate changelogs are kept for each extension:

- [VS Code Extension](./extensions/vscode/CHANGELOG.md)
- [Intellij Extension](./extensions/intellij/CHANGELOG.md)
- [Continue Server](./server/CHANGELOG.md)
8 changes: 6 additions & 2 deletions README.md
@@ -1,10 +1,14 @@
![Continue logo](media/c_d.png)
<div align="center">

![Continue logo](media/readme.png)

</div>

<h1 align="center">Continue</h1>

<div align="center">

**[Continue](https://continue.dev/docs) is an open-source autopilot for [VS Code](https://marketplace.visualstudio.com/items?itemName=Continue.continue) and [JetBrains](https://plugins.jetbrains.com/plugin/22707-continue-extension)—the easiest way to code with any LLM**
**[Continue](https://continue.dev/docs) keeps developers in flow. Our open-source [VS Code](https://marketplace.visualstudio.com/items?itemName=Continue.continue) and [JetBrains](https://plugins.jetbrains.com/plugin/22707-continue-extension) extensions enable you to easily create your own modular AI software development system that you can improve.**

</div>

7 changes: 6 additions & 1 deletion core/index.d.ts
@@ -523,16 +523,21 @@ export type ModelName =
| "gpt-4-turbo"
| "gpt-4-turbo-preview"
| "gpt-4-vision-preview"
// Open Source
// Mistral
| "mistral-7b"
| "mistral-8x7b"
// Llama 2
| "llama2-7b"
| "llama2-13b"
| "llama2-70b"
| "codellama-7b"
| "codellama-13b"
| "codellama-34b"
| "codellama-70b"
// Llama 3
| "llama3-8b"
| "llama3-70b"
// Other Open-source
| "phi2"
| "phind-codellama-34b"
| "wizardcoder-7b"
2 changes: 2 additions & 0 deletions core/llm/llms/Groq.ts
@@ -11,6 +11,8 @@ class Groq extends OpenAI {
"llama2-70b": "llama2-70b-4096",
"mistral-8x7b": "mixtral-8x7b-32768",
gemma: "gemma-7b-it",
"llama3-8b": "llama3-8b-8192",
"llama3-70b": "llama3-70b-8192",
};
protected _convertModelName(model: string): string {
return Groq.modelConversion[model] ?? model;
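The Groq, Ollama, Replicate, and Together diffs in this commit all extend the same pattern: a static table mapping Continue's generic model names to the provider's own IDs, with unknown names passed through unchanged. A minimal sketch (table entries abridged from the Groq diff; not the full upstream class):

```typescript
// Provider-side model-name mapping, as used by Groq._convertModelName.
// Entries abridged; the nullish coalescing fallback lets users pass a
// provider-native ID directly.
const groqModelConversion: Record<string, string> = {
  "llama2-70b": "llama2-70b-4096",
  "mistral-8x7b": "mixtral-8x7b-32768",
  "llama3-8b": "llama3-8b-8192",
  "llama3-70b": "llama3-70b-8192",
};

function convertModelName(model: string): string {
  // Known generic name -> provider ID; anything else passes through.
  return groqModelConversion[model] ?? model;
}

console.log(convertModelName("llama3-8b")); // "llama3-8b-8192"
console.log(convertModelName("custom-model")); // "custom-model" (pass-through)
```

The pass-through fallback is why adding llama3 only required two new table entries per provider.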
2 changes: 2 additions & 0 deletions core/llm/llms/Ollama.ts
@@ -76,6 +76,8 @@ class Ollama extends BaseLLM {
"codellama-13b": "codellama:13b",
"codellama-34b": "codellama:34b",
"codellama-70b": "codellama:70b",
"llama3-8b": "llama3:8b",
"llama3-70b": "llama3:70b",
"phi-2": "phi:2.7b",
"phind-codellama-34b": "phind-codellama:34b-v2",
"wizardcoder-7b": "wizardcoder:7b-python",
2 changes: 2 additions & 0 deletions core/llm/llms/Replicate.ts
@@ -16,6 +16,8 @@ class Replicate extends BaseLLM {
"meta/codellama-70b-instruct:a279116fe47a0f65701a8817188601e2fe8f4b9e04a518789655ea7b995851bf",
"llama2-7b": "meta/llama-2-7b-chat" as any,
"llama2-13b": "meta/llama-2-13b-chat" as any,
"llama3-8b": "meta/meta-llama-3-8b-instruct" as any,
"llama3-70b": "meta/meta-llama-3-70b-instruct" as any,
"zephyr-7b":
"nateraw/zephyr-7b-beta:b79f33de5c6c4e34087d44eaea4a9d98ce5d3f3a09522f7328eea0685003a931",
"mistral-7b":
2 changes: 2 additions & 0 deletions core/llm/llms/Together.ts
@@ -18,6 +18,8 @@ class Together extends OpenAI {
"codellama-13b": "togethercomputer/CodeLlama-13b-Instruct",
"codellama-34b": "togethercomputer/CodeLlama-34b-Instruct",
"codellama-70b": "codellama/CodeLlama-70b-Instruct-hf",
"llama3-8b": "meta-llama/Llama-3-8b-chat-hf",
"llama3-70b": "meta-llama/Llama-3-70b-chat-hf",
"llama2-7b": "togethercomputer/llama-2-7b-chat",
"llama2-13b": "togethercomputer/llama-2-13b-chat",
"llama2-70b": "togethercomputer/llama-2-70b-chat",
192 changes: 96 additions & 96 deletions core/llm/llms/index.ts
@@ -1,15 +1,15 @@
import Handlebars from "handlebars";
import { BaseLLM } from "..";
import {
BaseCompletionOptions,
ILLM,
LLMOptions,
ModelDescription,
BaseCompletionOptions,
ILLM,
LLMOptions,
ModelDescription,
} from "../..";
import { DEFAULT_MAX_TOKENS } from "../constants";
import Anthropic from "./Anthropic";
import Cohere from "./Cohere";
import Bedrock from "./Bedrock";
import Cohere from "./Cohere";
import DeepInfra from "./DeepInfra";
import Flowise from "./Flowise";
import FreeTrial from "./FreeTrial";
@@ -29,122 +29,122 @@ import TextGenWebUI from "./TextGenWebUI";
import Together from "./Together";

function convertToLetter(num: number): string {
let result = "";
while (num > 0) {
const remainder = (num - 1) % 26;
result = String.fromCharCode(97 + remainder) + result;
num = Math.floor((num - 1) / 26);
}
return result;
let result = "";
while (num > 0) {
const remainder = (num - 1) % 26;
result = String.fromCharCode(97 + remainder) + result;
num = Math.floor((num - 1) / 26);
}
return result;
}

const getHandlebarsVars = (
value: string,
value: string,
): [string, { [key: string]: string }] => {
const ast = Handlebars.parse(value);
const ast = Handlebars.parse(value);

let keysToFilepath: { [key: string]: string } = {};
let keyIndex = 1;
for (let i in ast.body) {
if (ast.body[i].type === "MustacheStatement") {
const letter = convertToLetter(keyIndex);
keysToFilepath[letter] = (ast.body[i] as any).path.original;
value = value.replace(
new RegExp("{{\\s*" + (ast.body[i] as any).path.original + "\\s*}}"),
`{{${letter}}}`,
);
keyIndex++;
}
}
return [value, keysToFilepath];
let keysToFilepath: { [key: string]: string } = {};
let keyIndex = 1;
for (let i in ast.body) {
if (ast.body[i].type === "MustacheStatement") {
const letter = convertToLetter(keyIndex);
keysToFilepath[letter] = (ast.body[i] as any).path.original;
value = value.replace(
new RegExp("{{\\s*" + (ast.body[i] as any).path.original + "\\s*}}"),
`{{${letter}}}`,
);
keyIndex++;
}
}
return [value, keysToFilepath];
};
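The `convertToLetter`/`getHandlebarsVars` pair above renames each mustache variable to a short bijective base-26 letter key (`a`, `b`, …, `z`, `aa`, …) while recording which key stands for which path. A self-contained sketch of the same idea, using a plain regex instead of the Handlebars AST (assumption: simple double-mustache `{{ name }}` variables only, unlike the AST-based original):

```typescript
// Bijective base-26: 1 -> "a", 26 -> "z", 27 -> "aa" (same as the diff above).
function convertToLetter(num: number): string {
  let result = "";
  while (num > 0) {
    const remainder = (num - 1) % 26;
    result = String.fromCharCode(97 + remainder) + result;
    num = Math.floor((num - 1) / 26);
  }
  return result;
}

// Regex-based sketch of getHandlebarsVars: rewrite each {{ path }} to a
// letter key and remember the key -> path mapping for later file reads.
function renameVars(template: string): [string, Record<string, string>] {
  const keysToPath: Record<string, string> = {};
  let keyIndex = 1;
  const renamed = template.replace(/\{\{\s*([^{}\s]+)\s*\}\}/g, (_m, path) => {
    const letter = convertToLetter(keyIndex++);
    keysToPath[letter] = path;
    return `{{${letter}}}`;
  });
  return [renamed, keysToPath];
}

console.log(renameVars("Hi {{ name }}, see {{ docs/intro.md }}"));
// -> ["Hi {{a}}, see {{b}}", { a: "name", b: "docs/intro.md" }]
```

The renaming matters because raw file paths like `docs/intro.md` are not valid Handlebars identifiers; short letter keys are.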

export async function renderTemplatedString(
template: string,
readFile: (filepath: string) => Promise<string>,
inputData: any,
template: string,
readFile: (filepath: string) => Promise<string>,
inputData: any,
): Promise<string> {
const [newTemplate, vars] = getHandlebarsVars(template);
const data: any = { ...inputData };
for (const key in vars) {
const fileContents = await readFile(vars[key]);
data[key] = fileContents || (inputData[vars[key]] ?? vars[key]);
}
const templateFn = Handlebars.compile(newTemplate);
const final = templateFn(data);
return final;
const [newTemplate, vars] = getHandlebarsVars(template);
const data: any = { ...inputData };
for (const key in vars) {
const fileContents = await readFile(vars[key]);
data[key] = fileContents || (inputData[vars[key]] ?? vars[key]);
}
const templateFn = Handlebars.compile(newTemplate);
const final = templateFn(data);
return final;
}

const LLMs = [
Anthropic,
Cohere,
FreeTrial,
Gemini,
Llamafile,
Ollama,
Replicate,
TextGenWebUI,
Together,
HuggingFaceTGI,
HuggingFaceInferenceAPI,
LlamaCpp,
OpenAI,
LMStudio,
Mistral,
Bedrock,
DeepInfra,
OpenAIFreeTrial,
Flowise,
Groq,
Anthropic,
Cohere,
FreeTrial,
Gemini,
Llamafile,
Ollama,
Replicate,
TextGenWebUI,
Together,
HuggingFaceTGI,
HuggingFaceInferenceAPI,
LlamaCpp,
OpenAI,
LMStudio,
Mistral,
Bedrock,
DeepInfra,
OpenAIFreeTrial,
Flowise,
Groq,
];

export async function llmFromDescription(
desc: ModelDescription,
readFile: (filepath: string) => Promise<string>,
completionOptions?: BaseCompletionOptions,
systemMessage?: string,
desc: ModelDescription,
readFile: (filepath: string) => Promise<string>,
completionOptions?: BaseCompletionOptions,
systemMessage?: string,
): Promise<BaseLLM | undefined> {
const cls = LLMs.find((llm) => llm.providerName === desc.provider);
const cls = LLMs.find((llm) => llm.providerName === desc.provider);

if (!cls) {
return undefined;
}
if (!cls) {
return undefined;
}

const finalCompletionOptions = {
...completionOptions,
...desc.completionOptions,
};
const finalCompletionOptions = {
...completionOptions,
...desc.completionOptions,
};

systemMessage = desc.systemMessage ?? systemMessage;
if (systemMessage !== undefined) {
systemMessage = await renderTemplatedString(systemMessage, readFile, {});
}
systemMessage = desc.systemMessage ?? systemMessage;
if (systemMessage !== undefined) {
systemMessage = await renderTemplatedString(systemMessage, readFile, {});
}

const options: LLMOptions = {
...desc,
completionOptions: {
...finalCompletionOptions,
model: (desc.model || cls.defaultOptions?.model) ?? "codellama-7b",
maxTokens:
finalCompletionOptions.maxTokens ??
cls.defaultOptions?.completionOptions?.maxTokens ??
DEFAULT_MAX_TOKENS,
},
systemMessage,
};
const options: LLMOptions = {
...desc,
completionOptions: {
...finalCompletionOptions,
model: (desc.model || cls.defaultOptions?.model) ?? "codellama-7b",
maxTokens:
finalCompletionOptions.maxTokens ??
cls.defaultOptions?.completionOptions?.maxTokens ??
DEFAULT_MAX_TOKENS,
},
systemMessage,
};

return new cls(options);
return new cls(options);
}
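The option handling in `llmFromDescription` above encodes a precedence rule: per-model `completionOptions` from the config override globally passed defaults, and `maxTokens` falls back through the provider class's defaults to a constant. A minimal sketch of just that merge logic (the `DEFAULT_MAX_TOKENS` value here is an assumption for illustration, not the real constant):

```typescript
interface CompletionOptions {
  temperature?: number;
  maxTokens?: number;
}

const DEFAULT_MAX_TOKENS = 1024; // assumed value for illustration

function mergeOptions(
  global: CompletionOptions | undefined,
  perModel: CompletionOptions | undefined,
  classDefaultMaxTokens?: number,
): CompletionOptions {
  // Later spread wins, so per-model options override the global ones.
  const merged = { ...global, ...perModel };
  return {
    ...merged,
    // maxTokens: explicit -> provider class default -> hard-coded constant.
    maxTokens: merged.maxTokens ?? classDefaultMaxTokens ?? DEFAULT_MAX_TOKENS,
  };
}

console.log(mergeOptions({ temperature: 0.2, maxTokens: 512 }, { temperature: 0.7 }));
// -> { temperature: 0.7, maxTokens: 512 }
```

Spread order plus `??` chains keep the precedence explicit without any conditional branching.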

export function llmFromProviderAndOptions(
providerName: string,
llmOptions: LLMOptions,
providerName: string,
llmOptions: LLMOptions,
): ILLM {
const cls = LLMs.find((llm) => llm.providerName === providerName);
const cls = LLMs.find((llm) => llm.providerName === providerName);

if (!cls) {
throw new Error(`Unknown LLM provider type "${providerName}"`);
}
if (!cls) {
throw new Error(`Unknown LLM provider type "${providerName}"`);
}

return new cls(llmOptions);
return new cls(llmOptions);
}
15 changes: 11 additions & 4 deletions docs/docs/customization/slash-commands.md
@@ -16,7 +16,7 @@ To use any of the built-in slash commands, open `~/.continue/config.json` and ad

### `/edit`

Select code with ctrl/cmd + M (VS Code) or ctrl/cmd + J (JetBrains), and then type "/edit", followed by instructions for the edit. Continue will stream the changes into a side-by-side diff editor.
Select code with ctrl/cmd + L (VS Code) or ctrl/cmd + J (JetBrains), and then type "/edit", followed by instructions for the edit. Continue will stream the changes into a side-by-side diff editor.

```json
{
@@ -118,18 +118,25 @@ You can add custom slash commands by adding to the `customCommands` property in

- `name`: the name of the command, which will be invoked with `/name`
- `description`: a short description of the command, which will appear in the dropdown
- `prompt`: a set of instructions to the LLM, which will be shown in the prompt
- `prompt`: a templated prompt to send to the LLM

Custom commands are great when you are frequently reusing a prompt. For example, if you've crafted a great prompt and frequently ask the LLM to check for mistakes in your code, you could add a command like this:

```json title="~/.continue/config.json"
customCommands=[{
"name": "check",
"description": "Check for mistakes in my code",
"prompt": "Please read the highlighted code and check for any mistakes. You should look for the following, and be extremely vigilant:\n- Syntax errors\n- Logic errors\n- Security vulnerabilities\n- Performance issues\n- Anything else that looks wrong\n\nOnce you find an error, please explain it as clearly as possible, but without using extra words. For example, instead of saying 'I think there is a syntax error on line 5', you should say 'Syntax error on line 5'. Give your answer as one bullet point per mistake found."
"prompt": "{{{ input }}}\n\nPlease read the highlighted code and check for any mistakes. You should look for the following, and be extremely vigilant:\n- Syntax errors\n- Logic errors\n- Security vulnerabilities\n- Performance issues\n- Anything else that looks wrong\n\nOnce you find an error, please explain it as clearly as possible, but without using extra words. For example, instead of saying 'I think there is a syntax error on line 5', you should say 'Syntax error on line 5'. Give your answer as one bullet point per mistake found."
}]
```

#### Templating

The `prompt` property supports templating with Handlebars syntax. You can use the following variables:

- `input` (used in the example above): any additional input entered with the slash command. For example, if you type `/test only write one test`, `input` will be `only write one test`. This will also include highlighted code blocks.
- File names: You can reference any file by providing an absolute path or a path relative to the current working directory.
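As a sketch of combining both variables (the style-guide path here is hypothetical), a custom command could inject the typed input alongside the contents of a project file:

```json title="~/.continue/config.json"
{
  "customCommands": [
    {
      "name": "review-style",
      "description": "Review code against our style guide",
      "prompt": "{{{ input }}}\n\nReview the highlighted code against the rules in the following guide, listing one bullet point per violation:\n\n{{{ docs/style-guide.md }}}"
    }
  ]
}
```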

### Custom Slash Commands

If you want to go a step further than writing custom commands with natural language, you can write a custom function that returns the response. This requires using `config.ts` instead of `config.json`.
@@ -147,7 +154,7 @@ export function modifyConfig(config: Config): Config {
`${diff}\n\nWrite a commit message for the above changes. Use no more than 20 tokens to give a brief description in the imperative mood (e.g. 'Add feature' not 'Added feature'):`,
{
maxTokens: 20,
}
},
)) {
yield message;
}
