How to call handleSubmit()
and render the returned messages concurrently
#1641
Replies: 3 comments 1 reply
-
What does your API route look like? Can you share the code?
-
My initial thought would be that if you're using the pages router and want the same input to go to each model, you could have N different API routes (one for each model/API endpoint), and one
-
@mcavaliere Hi, I'm using one API route for all models. Here is the code for the backend logic; the frontend sends the model name in the request body:

```ts
import { StreamingTextResponse, streamText } from "ai";
import { openai } from "@ai-sdk/openai";

// TODO: IMPORTANT! Set the runtime to edge
export const runtime = "nodejs";

export interface ChatRequestBody {
  model: string;
  messages: any;
}

export async function POST(req: Request) {
  const { messages, model } = (await req.json()) as ChatRequestBody;
  return await generateResponseByModel(model, messages);
}

async function generateResponseByModel(model: string, messages: any) {
  let stream: ReadableStream<any>;
  if (model === "gpt-3.5") {
    const result = await streamText({
      model: openai("gpt-3.5-turbo"),
      messages,
    });
    stream = result.toAIStream();
  } else if (model === "gemini") {
    // TODO: not implemented yet; throw so `stream` is never used unassigned
    throw new Error("gemini is not supported yet");
  } else if (model === "claude") {
    // TODO: not implemented yet; throw so `stream` is never used unassigned
    throw new Error("claude is not supported yet");
  } else {
    const result = await streamText({
      model: openai("gpt-4o-2024-05-13"),
      messages,
    });
    stream = result.toAIStream();
  }
  return new StreamingTextResponse(stream);
}
```
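The empty `gemini` and `claude` branches above leave `stream` unassigned, so those models currently fail. One way to make the routing explicit is a small lookup table. This is only a sketch: the `MODEL_IDS` map, the `resolveModelId` helper, and the specific provider model id strings are illustrative assumptions, not part of the original code, so check the provider docs for the exact ids you want to target.

```typescript
// Hypothetical mapping from the short model names sent by the frontend
// to concrete provider model ids. The gemini/claude ids below are
// assumptions; verify them against the provider documentation.
const MODEL_IDS: Record<string, string> = {
  "gpt-3.5": "gpt-3.5-turbo",
  gemini: "gemini-1.5-pro",
  claude: "claude-3-5-sonnet-20240620",
};

// Resolve a request's model name, falling back to a default instead of
// leaving the stream unassigned for unknown names.
function resolveModelId(model: string): string {
  return MODEL_IDS[model] ?? "gpt-4o-2024-05-13";
}
```

Each resolved id would then be handed to the matching provider factory (e.g. `openai(...)`, or the Google/Anthropic equivalents from their `@ai-sdk` packages) before calling `streamText`.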
-
I have a list of models (gpt-4, claude, etc.) and I want to get the responses from all submitted models at the same time. The backend can stream the responses back to the frontend, but it seems like only the last message was rendered on the frontend and added to `messages`. I noticed that two chats were created. I also tried adding `sleep(1000)` after the `handleSubmit()` call, but it didn't help.