A problem with ffmpeg cache? #11

Open
lordmax20000 opened this issue Mar 24, 2023 · 7 comments

Comments

@lordmax20000

lordmax20000 commented Mar 24, 2023

I am a total beginner. The program was working fine, but lately, after the first few messages (or sometimes even at the first one), I get this message and the program crashes:

libavutil 58. 2.100 / 58. 2.100
libavcodec 60. 3.100 / 60. 3.100
libavformat 60. 3.100 / 60. 3.100
libavdevice 60. 1.100 / 60. 1.100
libavfilter 9. 3.100 / 9. 3.100
libswscale 7. 1.100 / 7. 1.100
libswresample 4. 10.100 / 4. 10.100
libpostproc 57. 1.100 / 57. 1.100
[cache @ 000001c2b6453340] Inner protocol failed to seekback end : -40
Last message repeated 1 times
[mp3 @ 000001c2b6455ac0] Failed to read frame size: Could not seek to 1239.
[cache @ 000001c2b6453340] Statistics, cache hits:2 cache misses:1
cache:pipe:0: Invalid argument

I have tried updating ffmpeg, but it doesn't help, and when I search online there seem to be several different possible causes, so I don't really know what to do.
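For reference, the ffmpeg messages above ("Failed to read frame size", "cache:pipe:0: Invalid argument") usually mean that the bytes handed to the MP3 decoder are not valid audio, for example because the ElevenLabs API returned a JSON error body (exhausted quota, bad key) instead of speech. A minimal diagnostic sketch along these lines, using placeholder URL, key, and text values rather than the project's real ones, would surface the actual API error instead of crashing inside ffmpeg:

import io

import requests
from pydub import AudioSegment
from pydub.playback import play

# Placeholder values for illustration only -- substitute the real voice id,
# API key, and message used by the project.
url = 'https://api.elevenlabs.io/v1/text-to-speech/<voice-id>'
headers = {
    'accept': 'audio/mpeg',
    'xi-api-key': '<api-key>',
    'Content-Type': 'application/json'
}
data = {'text': 'test message'}

response = requests.post(url, headers=headers, json=data)
if response.status_code != 200:
    # The API answered with an error (e.g. quota exhausted), so don't feed it to ffmpeg
    print('ElevenLabs request failed:', response.status_code, response.text)
else:
    play(AudioSegment.from_file(io.BytesIO(response.content), format='mp3'))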

@lordmax20000
Author

Never mind, I think the problem was that I ran out of ElevenLabs characters without even noticing.

@lordmax20000
Author

I was wondering whether it would be possible to change the TTS part of the code and use MaryTTS or eSpeak instead. Wouldn't we only have to change this part of the code?

import io

import requests
from pydub import AudioSegment
from pydub.playback import play

def EL_TTS(message):
    # Request synthesized speech from the ElevenLabs API
    url = f'https://api.elevenlabs.io/v1/text-to-speech/{EL.voice}'
    headers = {
        'accept': 'audio/mpeg',
        'xi-api-key': EL.key,
        'Content-Type': 'application/json'
    }
    data = {
        'text': message,
        'voice_settings': {
            'stability': 0.75,
            'similarity_boost': 0.75
        }
    }

    response = requests.post(url, headers=headers, json=data, stream=True)
    # Decode the returned MP3 bytes and play them aloud
    audio_content = AudioSegment.from_file(io.BytesIO(response.content), format="mp3")
    play(audio_content)

If we did that it would be free, right? I mean, we would still pay for ChatGPT but not for ElevenLabs.
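For what it's worth, something along these lines could work as a free, offline replacement for that function. This is only a rough sketch, assuming the pyttsx3 package (which drives eSpeak on Linux and SAPI5 on Windows) and keeping the EL_TTS name so the rest of the script does not need to change; the rate setting is illustrative.

import pyttsx3

# Local TTS engine; pyttsx3 uses eSpeak on Linux and SAPI5 on Windows
engine = pyttsx3.init()
engine.setProperty('rate', 170)  # speaking rate in words per minute, illustrative value

def EL_TTS(message):
    # Speak the text locally instead of calling the ElevenLabs API
    engine.say(message)
    engine.runAndWait()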

@Kopbabakop

> I was wondering whether it would be possible to change the TTS part of the code and use MaryTTS or eSpeak instead. Wouldn't we only have to change this part of the code? [...]

Bro, how did you do that? I can't do it because of a pyproject.toml-based error.

@lordmax20000
Author

I honestly don't remember; I probably asked ChatGPT, though.

@lordmax20000
Author

In the end it didn't work, though, so I used the default voice instead.

@Fadlay

Fadlay commented Jun 25, 2023

I have the same issue, please help me. How did you fix it? @Kopbabakop

@HiPach

HiPach commented Mar 19, 2024

I have the same problem and I don't know what to do. Can anyone help?

E:\Neuro_net\AI-Vtuber\myenv\Lib\site-packages\pydub\utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work
warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)
Running!

2024-03-20 05:08:28 [LGOFBL]- hi

Traceback (most recent call last):
  File "E:\Neuro_net\AI-Vtuber\run.py", line 149, in <module>
    read_chat()
  File "E:\Neuro_net\AI-Vtuber\run.py", line 113, in read_chat
    response = llm(message)
               ^^^^^^^^^^^^
  File "E:\Neuro_net\AI-Vtuber\run.py", line 130, in llm
    response = openai.Completion.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\Neuro_net\AI-Vtuber\myenv\Lib\site-packages\openai\api_resources\completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\Neuro_net\AI-Vtuber\myenv\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "E:\Neuro_net\AI-Vtuber\myenv\Lib\site-packages\openai\api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\Neuro_net\AI-Vtuber\myenv\Lib\site-packages\openai\api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "E:\Neuro_net\AI-Vtuber\myenv\Lib\site-packages\openai\api_requestor.py", line 765, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: The model text-davinci-003 has been deprecated, learn more here: https://platform.openai.com/docs/deprecations
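The last line is the actual failure: run.py calls openai.Completion.create with text-davinci-003, which has been retired. A possible fix, assuming the llm() helper in run.py looks roughly the way the traceback suggests and that the project still uses the pre-1.0 openai package (as the api_resources paths indicate), is to switch to the Chat Completions endpoint with a current model, for example:

import openai

def llm(message):
    # Replace the deprecated Completion/text-davinci-003 call with Chat Completions
    response = openai.ChatCompletion.create(
        model='gpt-3.5-turbo',
        messages=[{'role': 'user', 'content': message}],
        max_tokens=150,
    )
    return response['choices'][0]['message']['content']

Separately, the pydub RuntimeWarning at the top of the log means ffmpeg was not found; installing ffmpeg and adding it to PATH (or pointing pydub's AudioSegment.converter at the ffmpeg executable) should take care of that part.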
