
[Bug] Messages are truncated when using the llama3 API provided by NVIDIA #4577

Open
1 of 3 tasks
EucalyZ opened this issue Apr 27, 2024 · 3 comments
Labels
bug Something isn't working

Comments


EucalyZ commented Apr 27, 2024

Bug Description

When I use the llama3 API provided by NVIDIA in ChatGPT-Next-Web, the reply gets truncated. I have already tried setting max_tokens to 4000 and to 39999.
[image]
The API behaves well when tested directly.
[image]
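
For reference, a minimal sketch of how the endpoint can be exercised directly, outside ChatGPT-Next-Web. The base URL, the NVIDIA_API_KEY variable, and the prompt are assumptions for illustration; adjust them to the actual setup. A non-streaming call rules out SSE handling as the source of the truncation:

```ts
// Minimal direct test of the llama3 endpoint (Node 18+, global fetch).
// ASSUMPTIONS: the base URL and the NVIDIA_API_KEY env var are placeholders
// for whatever endpoint/key the NVIDIA integration actually uses.
async function testLlama3(): Promise<void> {
  const response = await fetch("https://integrate.api.nvidia.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.NVIDIA_API_KEY}`,
    },
    body: JSON.stringify({
      model: "meta/llama3-70b-instruct",
      messages: [{ role: "user", content: "What is a computer?" }],
      max_tokens: 4000,
      stream: false, // non-streaming: any truncation here cannot come from SSE parsing
    }),
  });
  const data = await response.json();
  // If this prints the complete answer, the truncation happens on the client side.
  console.log(data.choices[0].message.content, data.choices[0].finish_reason);
}

testLlama3().catch(console.error);
```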

Steps to Reproduce

As described above.

Expected Behavior

The complete message should be received.

Screenshots

No response

Deployment Method

  • Docker
  • Vercel
  • Server

Desktop OS

No response

Desktop Browser

No response

Desktop Browser Version

No response

Smartphone Device

No response

Smartphone OS

No response

Smartphone Browser

No response

Smartphone Browser Version

No response

Additional Logs

data: {"id":"chatcmpl-a77268e0-c815-4f0b-8a7e-3452ea838046","object":"chat.completion.chunk","created":1714240413,"model":"meta/llama3-70b-instruct","choices":[{"index":0,"delta":{"role":"assistant","content":""},"finish_reason":null,"logprobs":null}]}

data: {"id":"chatcmpl-a77268e0-c815-4f0b-8a7e-3452ea838046","object":"chat.completion.chunk","created":1714240413,"model":"meta/llama3-70b-instruct","choices":[{"index":0,"delta":{"role":"assistant","content":"**"},"finish_reason":null,"logprobs":{"content":[{"token":null,"logprob":0.0,"bytes":null,"top_logprobs":null}]}}]}

data: {"id":"chatcmpl-a77268e0-c815-4f0b-8a7e-3452ea838046","object":"chat.completion.chunk","created":1714240413,"model":"meta/llama3-70b-instruct","choices":[{"index":0,"delta":{"role":"assistant","content":"What is a Computer?**\n\nA computer is an electronic device that can perform"},"finish_reason":null,"logprobs":{"content":[{"token":null,"logprob":0.0,"bytes":null,"top_logprobs":null}]}}]}

data: {"id":"chatcmpl-a77268e0-c815-4f0b-8a7e-3452ea838046","object":"chat.completion.chunk","created":1714240413,"model":"meta/llama3-70b-instruct","choices":[{"index":0,"delta":{"role":"assistant","content":""},"finish_reason":"stop","logprobs":null}]}

data: [DONE]
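
Reading the chunks above: the only content deltas are "**" and "What is a Computer?**\n\nA computer is an electronic device that can perform", after which a chunk with finish_reason "stop" and then [DONE] arrive. If this log is the raw upstream response, the cutoff is already present in the stream itself rather than introduced while concatenating deltas. Below is a minimal sketch of how an OpenAI-style SSE stream is typically accumulated; it is generic parsing for illustration, not the actual ChatGPT-Next-Web code:

```ts
// Generic accumulation of OpenAI-style chat.completion.chunk SSE lines.
// Illustrative only; not the ChatGPT-Next-Web implementation.
function accumulate(sseLines: string[]): string {
  let text = "";
  for (const line of sseLines) {
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice("data: ".length).trim();
    if (payload === "[DONE]") break; // terminator sentinel
    const choice = JSON.parse(payload).choices?.[0];
    // Each chunk contributes its delta.content; an empty delta with
    // finish_reason "stop" marks the end of the generated message.
    text += choice?.delta?.content ?? "";
    if (choice?.finish_reason === "stop") break;
  }
  return text;
}

// Abbreviated versions of the chunks captured above:
const sample = [
  'data: {"choices":[{"index":0,"delta":{"role":"assistant","content":"**"},"finish_reason":null}]}',
  'data: {"choices":[{"index":0,"delta":{"content":"What is a Computer?**\\n\\nA computer is an electronic device that can perform"},"finish_reason":null}]}',
  'data: {"choices":[{"index":0,"delta":{"content":""},"finish_reason":"stop"}]}',
  "data: [DONE]",
];
// Prints only the truncated fragment, matching what the UI shows.
console.log(accumulate(sample));
```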

EucalyZ added the bug label on Apr 27, 2024
@Issues-translate-bot

Bot detected that the issue body's language is not English and translated it automatically.


Title: [Bug] Using the llama3 interface provided by nvidia, the message is truncated


yanye99 commented May 15, 2024

How do you use NVIDIA's API?

@Issues-translate-bot

Bot detected that the comment's language is not English and translated it automatically.


How do you use nvidia's interface?
