
redis.set randomly stops working #977

Open
jacobtt21 opened this issue Mar 22, 2024 · 10 comments

Comments

@jacobtt21
It's seemingly random: sometimes when I set some data in the Redis db, it just doesn't save. I can see the key in the database, but not the value.

await redis.set(email, { name: name }, {
  ex: 60 * 60 * 24,
  nx: true
})

When I check the data browser in my dashboard, I can find the key, but the value looks like this:

{}
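For reference, @upstash/redis resolves set to "OK" when the write is applied and to null when nx: true skips it because the key already exists, so logging the status is a quick way to rule that out. A minimal sketch, assuming a Redis.fromEnv() client:

import { Redis } from "@upstash/redis";

const redis = Redis.fromEnv();

// "OK" means the value was written; null means nx skipped the write
// because the key already existed.
const status = await redis.set(email, { name }, { ex: 60 * 60 * 24, nx: true });
console.log("set status:", status);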
@jacobtt21
Author

It happens randomly, and I cannot predict when or why.

@ogzhanolguncu
Contributor

In Next.js, if you don't await correctly in your serverless functions, sometimes the lambdas don't work properly; maybe that's the case? If it persists, contact us via [email protected].

@jacobtt21
Author

How do you correctly use await?

@ogzhanolguncu
Contributor

What I meant was that sometimes floating promises (promises that you don't await) behave differently in serverless functions. Perhaps that was the issue?
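To illustrate the difference, using the set call from the original report: the serverless runtime can freeze or recycle the function as soon as the response is returned, so an in-flight write can be dropped.

// Floating promise: the function may be frozen before Redis
// acknowledges the write, so the data can be lost.
redis.set(email, { name }, { ex: 60 * 60 * 24, nx: true });

// Awaited: the function cannot return until Redis has acknowledged
// the write.
await redis.set(email, { name }, { ex: 60 * 60 * 24, nx: true });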

@myhendry

I'm encountering a similar issue to the others.

When I use the Next.js runtime = "nodejs", my chatbot works properly. But when I use runtime = "edge", it works fine in the dev environment; however, when pushed to Vercel, the outcome is very unpredictable: sometimes the new chat is written into Upstash Redis and sometimes it isn't. Does anyone know how I can resolve this issue? Below is my api/chat/route.ts code...

import { NextRequest, NextResponse } from "next/server";
import {
  Message as VercelChatMessage,
  LangChainStream,
  StreamingTextResponse,
} from "ai";
import {
  ChatPromptTemplate,
  SystemMessagePromptTemplate,
  HumanMessagePromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { BufferMemory } from "langchain/memory";
import { UpstashRedisChatMessageHistory } from "@langchain/community/stores/message/upstash_redis";
import { Redis } from "@upstash/redis";
import { ChatOpenAI } from "@langchain/openai";
import { ConversationChain } from "langchain/chains";

export const runtime = "nodejs";

interface NextExtendedRequest extends NextRequest {
  json: () => Promise<{
    messages: VercelChatMessage[];
    sessionId: string;
    loadMessages: boolean;
  }>;
}

export async function POST(req: NextExtendedRequest) {
  try {
    const { messages, sessionId } = await req.json();

    const { stream, handlers } = LangChainStream();

    const prompt = ChatPromptTemplate.fromMessages([
      SystemMessagePromptTemplate.fromTemplate(
        "You are a professor and your reply will be short and concise."
      ),
      new MessagesPlaceholder("history"),
      HumanMessagePromptTemplate.fromTemplate("{input}"),
    ]);

    const memory = new BufferMemory({
      chatHistory: new UpstashRedisChatMessageHistory({
        sessionId: sessionId,
        client: Redis.fromEnv(),
      }),
      aiPrefix: "assistant",
      humanPrefix: "user",
      memoryKey: "history",
      returnMessages: true,
    });

    const model = new ChatOpenAI({
      model: "gpt-3.5-turbo",
      temperature: 0,
      streaming: true,
      apiKey: process.env.OPENAI_API_KEY,
    });

    const chain = new ConversationChain({ llm: model, memory, prompt });

    const latestMessage = messages[messages.length - 1].content;

    // Note: chain.call is not awaited here (a floating promise); the
    // replies below point to this as the cause of the intermittent writes.
    chain.call({
      input: latestMessage,
      callbacks: [handlers],
    });

    return new StreamingTextResponse(stream);
  } catch (e: any) {
    console.log("error", e);
    return NextResponse.json({ error: e.message }, { status: e.status ?? 500 });
  }
}

@ogzhanolguncu
Contributor

@myhendry can you try converting chain.call to chain.invoke and then awaiting it? Sometimes floating promises in edge environments cause this kind of issue.

@myhendry

> @myhendry can you try converting chain.call to chain.invoke and then awaiting it? Sometimes floating promises in edge environments cause this kind of issue.

@ogzhanolguncu Hi, I'm using the runtime = "nodejs" environment instead of runtime = "edge". Let me try using invoke; the code will be as below. I don't need the "await" keyword before chain.invoke, right? Thanks

chain.invoke({
  input: latestMessage,
  callbacks: [handlers],
});

@ogzhanolguncu
Contributor

No, if you are streaming you don't have to. If you are not streaming, you have to await until you get a result back.
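Put differently, a sketch using the route above (streaming here is a hypothetical flag, and note that invoke takes callbacks in its second options argument):

if (streaming) {
  // The LangChainStream handlers drive the response to the client, so
  // the invoke call itself can be left un-awaited.
  chain.invoke({ input: latestMessage }, { callbacks: [handlers] });
  return new StreamingTextResponse(stream);
} else {
  // Without streaming, await the result; otherwise the function may be
  // frozen before the chain (and any Redis writes) finish.
  const result = await chain.invoke({ input: latestMessage });
  return NextResponse.json(result);
}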


This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 30 days.

@ElectricCodeGuy

Not sure if this is still relevant, but I would suggest you update to the newest version of the ai package from Vercel and try using the callback function onFinal. Like this:

const outputStreamChain = ChatPromptTemplate.fromMessages([
  ['system', systemPromptTemplate],
  new MessagesPlaceholder('chat_history'),
  ['human', '{question}']
]);

const mainModel = new ChatAnthropic({
  model: 'claude-3-opus-20240229',
  maxTokens: 4000,
  temperature: 0,
  streaming: true
});

const rewriteArticleChain = outputStreamChain.pipe(mainModel);

const fullInput = {
  question: messages[messages.length - 1].content,
  chat_history: formattedChatHistory
};

const stream = await rewriteArticleChain.stream(fullInput);

let partialCompletion = '';

const aiStream = LangChainAdapter.toAIStream(stream, {
  onToken: (token: string) => {
    partialCompletion += token;
  },
  onFinal: () => {
    try {
      saveChatToRedis(
        chatSessionId,
        userId,
        messages[messages.length - 1].content,
        partialCompletion,
        Array.from(uniqueReferences)
      );
    } catch (error) {
      console.error('Error saving chat to database:', error);
    }
  }
});

return new StreamingTextResponse(aiStream, {
  headers: {
    'Content-Type': 'text/plain; charset=utf-8'
  }
});

This should work just fine; I have used it for many months. Also, I would not use the LangChain buffer memory. In general I would avoid using LangChain overall and just stick with the AI package from Vercel; it contains everything you need :)

The function:

saveChatToRedis(
  chatSessionId,
  userId,
  messages[messages.length - 1].content,
  partialCompletion,
  Array.from(uniqueReferences)
);

is just a basic function that takes these inputs and stores the values in Upstash Redis.

The partialCompletion variable means that if the stream is suddenly cut off, or the user stops it, whatever has been generated so far is still stored in the chat memory.
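The helper itself isn't shown in the thread; here is a minimal sketch of what it might look like, assuming @upstash/redis and a hypothetical chat:<sessionId> key layout:

import { Redis } from "@upstash/redis";

const redis = Redis.fromEnv();

// Hypothetical implementation; the real saveChatToRedis is not shown above.
async function saveChatToRedis(
  chatSessionId: string,
  userId: string,
  userMessage: string,
  aiCompletion: string,
  references: string[]
): Promise<void> {
  const key = `chat:${chatSessionId}`;
  // Append both turns of the exchange to a per-session list...
  await redis.rpush(
    key,
    JSON.stringify({ role: "user", content: userMessage, userId }),
    JSON.stringify({ role: "assistant", content: aiCompletion, references })
  );
  // ...and expire the session after 24 hours.
  await redis.expire(key, 60 * 60 * 24);
}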
