
Commit

Leizhenpeng committed Feb 16, 2025
2 parents 8bd0d6a + 12863f5 commit 377480b
Showing 46 changed files with 1,327 additions and 338 deletions.
7 changes: 6 additions & 1 deletion .env.template
@@ -73,6 +73,11 @@ ANTHROPIC_API_VERSION=
### anthropic claude Api url (optional)
ANTHROPIC_URL=


### white-listed WebDAV endpoints (optional)
WHITE_WEBDAV_ENDPOINTS=

### siliconflow Api key (optional)
SILICONFLOW_API_KEY=

### siliconflow Api url (optional)
SILICONFLOW_URL=
2 changes: 1 addition & 1 deletion LICENSE
@@ -1,6 +1,6 @@
MIT License

Copyright (c) 2023-2024 Zhang Yifei
Copyright (c) 2023-2025 NextChat

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
18 changes: 13 additions & 5 deletions README.md
@@ -1,27 +1,28 @@
<div align="center">

<a href='https://nextchat.club'>
<img src="https://github.com/user-attachments/assets/287c510f-f508-478e-ade3-54d30453dc18" width="1000" alt="icon"/>
<img src="https://github.com/user-attachments/assets/83bdcc07-ae5e-4954-a53a-ac151ba6ccf3" width="1000" alt="icon"/>
</a>




<h1 align="center">NextChat (ChatGPT Next Web)</h1>

English / [简体中文](./README_CN.md)

<a href="https://trendshift.io/repositories/5973" target="_blank"><img src="https://trendshift.io/api/badge/repositories/5973" alt="ChatGPTNextWeb%2FChatGPT-Next-Web | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>



One-Click to get a well-designed cross-platform ChatGPT web UI, with Claude, GPT4 & Gemini Pro support.
✨ Light and Fast AI Assistant, with Claude, DeepSeek, GPT4 & Gemini Pro support.

[![Saas][Saas-image]][saas-url]
[![Web][Web-image]][web-url]
[![Windows][Windows-image]][download-url]
[![MacOS][MacOS-image]][download-url]
[![Linux][Linux-image]][download-url]

[NextChatAI](https://nextchat.dev/chat?utm_source=readme) / [Web App Demo](https://app.nextchat.dev) / [Desktop App](https://github.com/Yidadaa/ChatGPT-Next-Web/releases)
[NextChatAI](https://nextchat.club?utm_source=readme) / [Web App Demo](https://app.nextchat.dev) / [Desktop App](https://github.com/Yidadaa/ChatGPT-Next-Web/releases) / [Discord](https://discord.gg/YCkeafCafC) / [Enterprise Edition](#enterprise-edition) / [Twitter](https://twitter.com/NextChatDev)


@@ -34,9 +35,9 @@ One-Click to get a well-designed cross-platform ChatGPT web UI, with Claude, GPT
[MacOS-image]: https://img.shields.io/badge/-MacOS-black?logo=apple
[Linux-image]: https://img.shields.io/badge/-Linux-333?logo=ubuntu

[<img src="https://vercel.com/button" alt="Deploy on Vercel" height="30">](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FChatGPTNextWeb%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&project-name=nextchat&repository-name=NextChat) [<img src="https://zeabur.com/button.svg" alt="Deploy on Zeabur" height="30">](https://zeabur.com/templates/ZBUEFA) [<img src="https://gitpod.io/button/open-in-gitpod.svg" alt="Open in Gitpod" height="30">](https://gitpod.io/#https://github.com/Yidadaa/ChatGPT-Next-Web) [<img src="https://img.shields.io/badge/BT_Deploy-Install-20a53a" alt="BT Deply Install" height="30">](https://www.bt.cn/new/download.html)
[<img src="https://zeabur.com/button.svg" alt="Deploy on Zeabur" height="30">](https://zeabur.com/templates/ZBUEFA) [<img src="https://vercel.com/button" alt="Deploy on Vercel" height="30">](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FChatGPTNextWeb%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&project-name=nextchat&repository-name=NextChat) [<img src="https://gitpod.io/button/open-in-gitpod.svg" alt="Open in Gitpod" height="30">](https://gitpod.io/#https://github.com/ChatGPTNextWeb/NextChat)

[<img src="https://github.com/user-attachments/assets/903482d4-3e87-4134-9af1-f2588fa90659" height="60" width="288" >](https://monica.im/?utm=nxcrp)
[<img src="https://github.com/user-attachments/assets/903482d4-3e87-4134-9af1-f2588fa90659" height="50" width="" >](https://monica.im/?utm=nxcrp)

</div>

@@ -352,6 +353,13 @@ Customize Stability API url.

Enable MCP (Model Context Protocol) feature

### `SILICONFLOW_API_KEY` (optional)

SiliconFlow API Key.

### `SILICONFLOW_URL` (optional)

SiliconFlow API URL.
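
A minimal example of the two variables together. The key is a placeholder and the URL is assumed to be SiliconFlow's public endpoint, not a value taken from this commit; adjust both to your own setup:

```
SILICONFLOW_API_KEY=sk-xxxxxxxxxxxxxxxx
SILICONFLOW_URL=https://api.siliconflow.cn
```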

## Requirements

9 changes: 8 additions & 1 deletion README_CN.md
@@ -89,7 +89,7 @@ code1,code2,code3
### `OPENAI_API_KEY` (Required)

OpanAI key, the api key you applied for on your openai account page. Separate multiple keys with commas so that they can be polled at random.
OpenAI key, the api key you applied for on your openai account page. Separate multiple keys with commas so that they can be polled at random.

### `CODE` (Optional)

@@ -267,6 +267,13 @@ Stability API key

Enable MCP (Model Context Protocol) feature

### `SILICONFLOW_API_KEY` (optional)

SiliconFlow API Key.

### `SILICONFLOW_URL` (optional)

SiliconFlow API URL.

## Development

3 changes: 3 additions & 0 deletions app/api/[provider]/[...path]/route.ts
@@ -11,6 +11,7 @@ import { handle as moonshotHandler } from "../../moonshot";
import { handle as stabilityHandler } from "../../stability";
import { handle as iflytekHandler } from "../../iflytek";
import { handle as deepseekHandler } from "../../deepseek";
import { handle as siliconflowHandler } from "../../siliconflow";
import { handle as xaiHandler } from "../../xai";
import { handle as chatglmHandler } from "../../glm";
import { handle as proxyHandler } from "../../proxy";
@@ -47,6 +48,8 @@ async function handle(
return xaiHandler(req, { params });
case ApiPath.ChatGLM:
return chatglmHandler(req, { params });
case ApiPath.SiliconFlow:
return siliconflowHandler(req, { params });
case ApiPath.OpenAI:
return openaiHandler(req, { params });
default:
3 changes: 3 additions & 0 deletions app/api/auth.ts
@@ -101,6 +101,9 @@ export function auth(req: NextRequest, modelProvider: ModelProvider) {
case ModelProvider.ChatGLM:
systemApiKey = serverConfig.chatglmApiKey;
break;
case ModelProvider.SiliconFlow:
systemApiKey = serverConfig.siliconFlowApiKey;
break;
case ModelProvider.GPT:
default:
if (req.nextUrl.pathname.includes("azure/deployments")) {
8 changes: 6 additions & 2 deletions app/api/openai.ts
@@ -14,8 +14,12 @@ function getModels(remoteModelRes: OpenAIListModelResponse) {
if (config.disableGPT4) {
remoteModelRes.data = remoteModelRes.data.filter(
(m) =>
!(m.id.startsWith("gpt-4") || m.id.startsWith("chatgpt-4o") || m.id.startsWith("o1")) ||
m.id.startsWith("gpt-4o-mini"),
!(
m.id.startsWith("gpt-4") ||
m.id.startsWith("chatgpt-4o") ||
m.id.startsWith("o1") ||
m.id.startsWith("o3")
) || m.id.startsWith("gpt-4o-mini"),
);
}

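For reference, a small standalone sketch of how that predicate behaves on a few sample model IDs. The IDs are illustrative, not taken from this commit:

```ts
// Same filter expression as above, applied to assumed sample IDs.
const ids = [
  "gpt-4",
  "gpt-4o-mini",
  "chatgpt-4o-latest",
  "o1-preview",
  "o3-mini",
  "gpt-3.5-turbo",
];
const kept = ids.filter(
  (id) =>
    !(
      id.startsWith("gpt-4") ||
      id.startsWith("chatgpt-4o") ||
      id.startsWith("o1") ||
      id.startsWith("o3")
    ) || id.startsWith("gpt-4o-mini"),
);
// kept === ["gpt-4o-mini", "gpt-3.5-turbo"]; only gpt-4o-mini survives the GPT-4/o1/o3 block.
```
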
128 changes: 128 additions & 0 deletions app/api/siliconflow.ts
@@ -0,0 +1,128 @@
import { getServerSideConfig } from "@/app/config/server";
import {
SILICONFLOW_BASE_URL,
ApiPath,
ModelProvider,
ServiceProvider,
} from "@/app/constant";
import { prettyObject } from "@/app/utils/format";
import { NextRequest, NextResponse } from "next/server";
import { auth } from "@/app/api/auth";
import { isModelNotavailableInServer } from "@/app/utils/model";

const serverConfig = getServerSideConfig();

export async function handle(
req: NextRequest,
{ params }: { params: { path: string[] } },
) {
console.log("[SiliconFlow Route] params ", params);

if (req.method === "OPTIONS") {
return NextResponse.json({ body: "OK" }, { status: 200 });
}

const authResult = auth(req, ModelProvider.SiliconFlow);
if (authResult.error) {
return NextResponse.json(authResult, {
status: 401,
});
}

try {
const response = await request(req);
return response;
} catch (e) {
console.error("[SiliconFlow] ", e);
return NextResponse.json(prettyObject(e));
}
}

async function request(req: NextRequest) {
const controller = new AbortController();

// strip the ApiPath.SiliconFlow prefix so only the upstream API path remains
let path = `${req.nextUrl.pathname}`.replaceAll(ApiPath.SiliconFlow, "");

let baseUrl = serverConfig.siliconFlowUrl || SILICONFLOW_BASE_URL;

if (!baseUrl.startsWith("http")) {
baseUrl = `https://${baseUrl}`;
}

if (baseUrl.endsWith("/")) {
baseUrl = baseUrl.slice(0, -1);
}

console.log("[Proxy] ", path);
console.log("[Base Url]", baseUrl);

const timeoutId = setTimeout(
() => {
controller.abort();
},
10 * 60 * 1000,
);

const fetchUrl = `${baseUrl}${path}`;
const fetchOptions: RequestInit = {
headers: {
"Content-Type": "application/json",
Authorization: req.headers.get("Authorization") ?? "",
},
method: req.method,
body: req.body,
redirect: "manual",
// @ts-ignore
duplex: "half",
signal: controller.signal,
};

// #1815 try to refuse requests for models that are not allowed on this server
if (serverConfig.customModels && req.body) {
try {
const clonedBody = await req.text();
fetchOptions.body = clonedBody;

const jsonBody = JSON.parse(clonedBody) as { model?: string };

// block the request if the server's custom model list marks this model as unavailable for SiliconFlow
if (
isModelNotavailableInServer(
serverConfig.customModels,
jsonBody?.model as string,
ServiceProvider.SiliconFlow as string,
)
) {
return NextResponse.json(
{
error: true,
message: `you are not allowed to use ${jsonBody?.model} model`,
},
{
status: 403,
},
);
}
} catch (e) {
console.error(`[SiliconFlow] filter`, e);
}
}
try {
const res = await fetch(fetchUrl, fetchOptions);

// to prevent browser prompt for credentials
const newHeaders = new Headers(res.headers);
newHeaders.delete("www-authenticate");
// to disable nginx buffering
newHeaders.set("X-Accel-Buffering", "no");

return new Response(res.body, {
status: res.status,
statusText: res.statusText,
headers: newHeaders,
});
} finally {
clearTimeout(timeoutId);
}
}
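
To illustrate how this route is reached, here is a hedged client-side sketch. It assumes `ApiPath.SiliconFlow` resolves to `/api/siliconflow` and that SiliconFlow exposes an OpenAI-compatible `/v1/chat/completions` endpoint; the model name is only an example:

```ts
// Sketch: calling the proxy added above from the browser (assumptions noted in the lead-in).
async function askSiliconFlow(apiKey: string, prompt: string) {
  const res = await fetch("/api/siliconflow/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Bearer key checked by the app's auth() helper before the request is proxied.
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "deepseek-ai/DeepSeek-V3", // illustrative model id
      messages: [{ role: "user", content: prompt }],
      stream: false,
    }),
  });
  if (!res.ok) throw new Error(`SiliconFlow proxy error: ${res.status}`);
  return res.json();
}
```

In practice the in-app `SiliconflowApi` client builds this request; the sketch only shows the shape of what the proxy forwards.
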
12 changes: 12 additions & 0 deletions app/client/api.ts
@@ -23,6 +23,7 @@ import { SparkApi } from "./platforms/iflytek";
import { DeepSeekApi } from "./platforms/deepseek";
import { XAIApi } from "./platforms/xai";
import { ChatGLMApi } from "./platforms/glm";
import { SiliconflowApi } from "./platforms/siliconflow";

export const ROLES = ["system", "user", "assistant"] as const;
export type MessageRole = (typeof ROLES)[number];
@@ -164,6 +165,9 @@ export class ClientApi {
case ModelProvider.ChatGLM:
this.llm = new ChatGLMApi();
break;
case ModelProvider.SiliconFlow:
this.llm = new SiliconflowApi();
break;
default:
this.llm = new ChatGPTApi();
}
@@ -254,6 +258,8 @@ export function getHeaders(ignoreHeaders: boolean = false) {
const isDeepSeek = modelConfig.providerName === ServiceProvider.DeepSeek;
const isXAI = modelConfig.providerName === ServiceProvider.XAI;
const isChatGLM = modelConfig.providerName === ServiceProvider.ChatGLM;
const isSiliconFlow =
modelConfig.providerName === ServiceProvider.SiliconFlow;
const isEnabledAccessControl = accessStore.enabledAccessControl();
const apiKey = isGoogle
? accessStore.googleApiKey
@@ -273,6 +279,8 @@ export function getHeaders(ignoreHeaders: boolean = false) {
? accessStore.deepseekApiKey
: isChatGLM
? accessStore.chatglmApiKey
: isSiliconFlow
? accessStore.siliconflowApiKey
: isIflytek
? accessStore.iflytekApiKey && accessStore.iflytekApiSecret
? accessStore.iflytekApiKey + ":" + accessStore.iflytekApiSecret
@@ -290,6 +298,7 @@ export function getHeaders(ignoreHeaders: boolean = false) {
isDeepSeek,
isXAI,
isChatGLM,
isSiliconFlow,
apiKey,
isEnabledAccessControl,
};
@@ -317,6 +326,7 @@ export function getHeaders(ignoreHeaders: boolean = false) {
isDeepSeek,
isXAI,
isChatGLM,
isSiliconFlow,
apiKey,
isEnabledAccessControl,
} = getConfig();
@@ -365,6 +375,8 @@ export function getClientApi(provider: ServiceProvider): ClientApi {
return new ClientApi(ModelProvider.XAI);
case ServiceProvider.ChatGLM:
return new ClientApi(ModelProvider.ChatGLM);
case ServiceProvider.SiliconFlow:
return new ClientApi(ModelProvider.SiliconFlow);
default:
return new ClientApi(ModelProvider.GPT);
}
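
A minimal sketch of how the client side picks up the new provider, using only the functions shown in this diff. The import paths follow the `@/app/...` alias used elsewhere in this commit and should be treated as assumptions:

```ts
import { getClientApi } from "@/app/client/api";
import { ServiceProvider } from "@/app/constant";

// Per the switch above, SiliconFlow maps to ClientApi(ModelProvider.SiliconFlow),
// whose .llm is a SiliconflowApi instance.
const api = getClientApi(ServiceProvider.SiliconFlow);
// api.llm.chat(...) should then go through the /api/siliconflow proxy route added
// in this commit (the exact request shape depends on platforms/siliconflow.ts,
// which is not shown here).
```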

