
[Bug]: While creating a new brain in localhost instance of Quivr, I am getting error #2547

Open
GSrinivasKishanLal opened this issue May 6, 2024 · 16 comments
Labels
bug Something isn't working

Comments

@GSrinivasKishanLal

What happened?

An error occurred while creating a brain.

The response I am getting is:
Internal Server Error

Kindly suggest a solution to resolve this.

Relevant log output

500
Internal Server Error.

CORS missing allow origin.

Twitter / LinkedIn details

No response

@GSrinivasKishanLal GSrinivasKishanLal added the bug Something isn't working label May 6, 2024
@GSrinivasKishanLal GSrinivasKishanLal changed the title from "[Bug]: While creating a new brain in localhost instance of Quivr, I am getting" to "[Bug]: While creating a new brain in localhost instance of Quivr, I am getting error" on May 6, 2024

dosubot bot commented May 6, 2024

The error you're encountering, "Internal Server Error" with a "CORS missing allow origin" message, suggests a CORS configuration issue in your Quivr backend. To resolve this, ensure your CORS settings allow requests from your frontend's origin. Here's how you can address the problem:

  1. Check and Update CORS Origins: In your backend/middlewares/cors.py file, ensure the origins list includes the origin from which you're trying to access the Quivr backend. If your frontend runs on a port or domain not listed, add it to this list.

  2. Middleware Configuration: Make sure the add_cors_middleware function is correctly called with your FastAPI app instance. This should be done in your main application file (e.g., app.py or main.py). A minimal sketch of this setup is shown after this list.

  3. Review Backend Error Logs: For more detailed insights, check your backend server's error logs. This could reveal additional information on why the CORS headers might not be correctly applied.

  4. Middleware Order: The order in which middleware is added can impact functionality. Ensure the CORS middleware is added before any middleware that could modify the response in a way that affects CORS headers.

  5. Restart the Server: After making any changes to your CORS configuration or backend code, restart your backend server to apply these changes.

By following these steps and ensuring your CORS configuration includes the origin of your frontend application, you should be able to resolve the "Internal Server Error" related to CORS issues.
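
As a rough illustration of points 1 and 2, a minimal CORS setup with FastAPI's built-in CORSMiddleware could look like the sketch below. This is a generic example, not Quivr's actual backend/middlewares/cors.py, and the origins listed are assumptions based on the ports mentioned in this thread.

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

# Assumed origins: the frontend on port 3000 and the backend itself on 5050.
origins = [
    "http://localhost:3000",
    "http://localhost:5050",
]

def add_cors_middleware(app: FastAPI) -> None:
    # Register the CORS middleware before any middleware that rewrites responses.
    app.add_middleware(
        CORSMiddleware,
        allow_origins=origins,
        allow_credentials=True,
        allow_methods=["*"],
        allow_headers=["*"],
    )

app = FastAPI()
add_cors_middleware(app)  # called from main.py / app.py at startup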

About Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

To continue the conversation, mention @dosu.

@GSrinivasKishanLal
Author

GSrinivasKishanLal commented May 6, 2024

@dosu Thank you for your kind reply, but the issue still exists; the same CORS error also occurs for:

http://localhost:5050/user

@GSrinivasKishanLal
Author

A solution for this is also needed; kindly suggest one.


dosubot bot commented May 6, 2024

To further address the CORS issue for http://localhost:5050/user, after ensuring http://localhost:5050 is included in your CORS configuration, it's also important to verify that your CORS middleware is correctly set up to handle all routes, including /user. Sometimes, specific routes might be inadvertently left out or mishandled due to the order in which middlewares are applied or due to specific route configurations.

For the persistent 500 Internal Server Error, diving deeper into the server logs is crucial. This error is a broad indication of server-side issues, which could range from problems in your application logic, issues with database connections or queries, to misconfigurations in your server environment. The exact nature of the error should be logged in more detail on your server, providing vital clues for troubleshooting.

Ensure that your server and application configurations in .env.example, server.js, or equivalent files are correctly set up. Missing or incorrect configurations can lead to various issues, including the 500 error. Double-check that all environment variables, database connections, API keys, and other configurations are correctly applied and match the requirements of your local development environment.

If these steps do not resolve the issue, it would be helpful to share more detailed error messages or logs. This additional information can offer more specific insights into what might be causing the problem, enabling more targeted troubleshooting steps.
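
If it helps, a quick programmatic sanity check of the backend's environment variables can be run inside the backend container. The variable names below are only examples; substitute the keys your own .env file defines.

import os

# Example names only; replace with the keys your .env actually requires.
required = ["SUPABASE_URL", "SUPABASE_SERVICE_KEY", "OPENAI_API_KEY"]

missing = [name for name in required if not os.getenv(name)]
if missing:
    print("Missing environment variables:", ", ".join(missing))
else:
    print("All required environment variables are set.")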


@GSrinivasKishanLal
Author

@dosu
The CORS issue is now fixed for almost all the other endpoints and ports, except for the one given below:

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost:5050/chat/b55ab619-9838-4d33-be80-1d80d7226f58/question/stream?brain_id=40ba47d7-51b2-4b2a-9247-89e29619efb0. (Reason: CORS request did not succeed). Status code: (null).

The HTTP method used is POST, and I am facing this issue in the application.

The web page shows "TypeError: NetworkError when attempting to fetch resource." along with "NS_ERROR_NET_PARTIAL_TRANSFER".

Kindly suggest how this issue could be fixed.


dosubot bot commented May 8, 2024

To tackle the CORS issue you're facing with the http://localhost:5050/chat/b55ab619-9838-4d33-be80-1d80d7226f58/question/stream?brain_id=40ba47d7-51b2-4b2a-9247-89e29619efb0 endpoint, especially considering the "NS_ERROR_NET_PARTIAL_TRANSFER" and the network error messages, let's focus on a few critical areas for troubleshooting and resolution:

  1. Ensure Accurate CORS Configuration: It's crucial to double-check that your CORS settings are correctly configured to include the origin of the request. Make sure allow_credentials is set to True, and that you're allowing all necessary methods and headers. The Access-Control-Allow-Origin header must match the origin of your request accurately.

  2. Network and Proxy Settings Inspection: The "NS_ERROR_NET_PARTIAL_TRANSFER" error points towards possible network issues, such as interference from a proxy server. It's important to verify that there are no network configurations or issues that could be impacting the request's success.

  3. Server Logs Review: Delving into the server logs can provide specific insights or warnings related to this request. This step is key in identifying whether the problem is related to CORS, network issues, or something else entirely.

  4. Response Headers and HTTP Response Validation: Utilizing tools like Postman or your browser's developer tools to check the response headers can be very informative. Confirm that the Access-Control-Allow-Origin header is present and correctly configured. Also, ensure that the server is returning a complete and valid HTTP response for the POST request. (A script for replaying the preflight request is sketched after this list.)

  5. Simplify CORS Origins Configuration: If your origins list in the CORS configuration includes a wildcard "*" along with specific origins, consider simplifying this setup. Removing the wildcard, if not necessary, can help avoid potential confusion and ensure your CORS policy is applied correctly.

By following these steps, you should be closer to identifying and resolving the issue with the specific endpoint. If the problem persists, sharing more detailed information about the request and how the server handles it could offer further insights for diagnosis and resolution.
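
One way to perform the header check from step 4 outside the browser is to replay the CORS preflight with a small script. The sketch below uses the Python requests library; the chat and brain IDs are the ones from your error message, and the frontend origin on port 3000 is an assumption.

import requests

BACKEND = "http://localhost:5050"
URL = (f"{BACKEND}/chat/b55ab619-9838-4d33-be80-1d80d7226f58/question/stream"
       "?brain_id=40ba47d7-51b2-4b2a-9247-89e29619efb0")

# Replay the browser's preflight request for the failing POST.
resp = requests.options(
    URL,
    headers={
        "Origin": "http://localhost:3000",  # assumed frontend origin
        "Access-Control-Request-Method": "POST",
        "Access-Control-Request-Headers": "authorization,content-type",
    },
    timeout=10,
)
print(resp.status_code)
print("Allow-Origin:", resp.headers.get("Access-Control-Allow-Origin"))
print("Allow-Methods:", resp.headers.get("Access-Control-Allow-Methods"))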


dosubot bot commented May 13, 2024

If the issue persists after trying the previously suggested steps, it might be beneficial to take a deeper look into a few more technical aspects and potential solutions:

  1. Debugging the Preflight Request: The CORS error you're encountering, especially with the "NS_ERROR_NET_PARTIAL_TRANSFER" message, suggests there might be an issue with how the preflight OPTIONS request is handled. Ensure your server is configured to correctly respond to OPTIONS requests with the appropriate CORS headers, without requiring authentication or other preconditions that might reject the preflight.

  2. Content-Length Header Check: The "NS_ERROR_NET_PARTIAL_TRANSFER" error can sometimes be related to the Content-Length header not matching the actual size of the response body. Verify that your server is correctly calculating and setting the Content-Length header for all responses.

  3. Streaming Responses: If your endpoint is streaming responses (as suggested by /question/stream in the URL), ensure that the streaming is correctly implemented. Streaming responses can sometimes cause issues with CORS and HTTP headers if not handled properly. Verify that CORS headers are correctly applied to the initial response before any streamed content. (A minimal streaming sketch follows this list.)

  4. Server Configuration and Environment: Since the issue is isolated to a specific endpoint, review the server configuration and code handling for that endpoint. There might be specific logic or configurations that inadvertently affect the CORS headers or the response's network handling.

  5. Client-Side Request Inspection: On the client side, ensure that your fetch or XMLHttpRequest is correctly configured, including the mode set to cors if necessary. Inspect the request headers to ensure they match what the server expects and that the request is correctly formed.

  6. Cross-Reference with Working Endpoints: Compare the configuration and handling of the problematic endpoint with those that work correctly. This comparison might reveal discrepancies or configurations that are specific to the problematic endpoint.

Given the complexity of CORS and network-related issues, these steps aim to cover a broad spectrum of potential causes. If the problem remains unresolved, it might be helpful to isolate the issue further by creating a minimal reproducible example or consulting with a network specialist to delve deeper into the network error aspects.
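
To make point 3 concrete, here is a generic FastAPI sketch of a streaming endpoint behind CORS middleware. It is not Quivr's actual route handler; the origin, route shape, and token generator are placeholders.

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import StreamingResponse

app = FastAPI()

# Because the middleware wraps the whole app, the CORS headers are attached to
# the initial response of the streamed endpoint before any chunks are sent.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],  # assumed frontend origin
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

async def token_stream():
    # Placeholder for the model's token-by-token output.
    for token in ("This ", "is ", "a ", "streamed ", "answer."):
        yield token

@app.post("/chat/{chat_id}/question/stream")
async def stream_question(chat_id: str, brain_id: str):
    # If the generator raises midway, the browser can report a partial transfer
    # (e.g. NS_ERROR_NET_PARTIAL_TRANSFER) even when CORS itself is configured.
    return StreamingResponse(token_stream(), media_type="text/event-stream")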


@GSrinivasKishanLal
Author

@dosu and others, kindly suggest a solution. This is the error I am getting before the add_cors_middleware(app) function is called in the main.py file:

backend-core | raise errors_.ConfigError(f'unable to infer type for attribute "{self.name}"')
backend-core | pydantic.v1.errors.ConfigError: unable to infer type for attribute "api_key"


dosubot bot commented May 14, 2024

The error you're encountering, unable to infer type for attribute "api_key", indicates that Pydantic requires an explicit type definition for the api_key attribute in your model. To fix this, ensure you define the api_key with an explicit type annotation in your Pydantic model. For instance, if api_key is a string, your model should look like this:

from pydantic import BaseModel

class MyModel(BaseModel):
    api_key: str  # Explicitly type-annotated

This definition tells Pydantic that api_key is expected to be a string, which should resolve the configuration error you're seeing. Make sure this adjustment is made in the part of your code where the api_key attribute is defined or used within your Pydantic models.
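
Since your traceback goes through pydantic.v1 and the error appears during startup, before add_cors_middleware(app) is even reached, the missing annotation is more likely in a settings-style class than in a request model. The same fix applies; the class name below is hypothetical.

from pydantic.v1 import BaseSettings  # matches the pydantic.v1 path in your traceback

class LLMSettings(BaseSettings):  # hypothetical name; use your actual settings class
    # Without the ": str" annotation, pydantic v1 cannot infer the field type and
    # raises: ConfigError: unable to infer type for attribute "api_key"
    api_key: str = ""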


@GSrinivasKishanLal
Author

GSrinivasKishanLal commented May 14, 2024

@dosu,
This is the error I get when I try to change the language model of an existing brain:

backend-core | ValueError: Error raised by inference endpoint: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f14651a3990>: Failed to establish a new connection: [Errno 111] Connection refused'))

backend-core | INFO: 172.19.0.1:36272 - "GET /knowledge?brain_id=40ba47d7-51b2-4b2a-9247-89e29619efb0 HTTP/1.1" 200 OK

backend-core | INFO: 127.0.0.1:36282 - "GET /healthz HTTP/1.1" 200 OK

Also, how can I skip the OpenAI API key and use local Ollama instead?

Kindly suggest.

@GSrinivasKishanLal
Author

I am getting a urllib3 connection failure error.

@dosu, kindly suggest a fix.


dosubot bot commented May 14, 2024

To address the urllib3.connection.HTTPConnection failure error when attempting to switch the language model in your existing brain and to configure your system to use a local Ollama model instead of an OpenAI API Key, follow these steps:

  1. Ensure Ollama is Installed and Running: Download and install Ollama from their official site. Start it locally using the command ollama run llama2 to activate the Llama2 model.

  2. Configure Ollama API Base URL: In your backend configuration, specifically within the BrainSettings class, set the ollama_api_base_url to your local Ollama instance URL, typically "http://localhost:11434" (Ollama's default port, which also matches the port shown in your connection-refused log above).

  3. Adjust the get_embeddings Function: Modify this function to utilize OllamaEmbeddings when the ollama_api_base_url is configured, ensuring the system uses the local Ollama model. (A rough sketch of this logic follows the list.)

  4. Run Necessary Migrations for Ollama: If using Supabase, execute migrations to integrate Ollama properly. This might involve moving migration files and resetting your database as per the Quivr documentation.

  5. Update Dependencies: Confirm that urllib3 and other related dependencies are current to prevent connection issues.

  6. Implement Robust Error Handling: Enhance error handling around network requests to manage HTTPConnection failures more effectively.

  7. Add Ollama Model to Quivr: Update the user_settings table in Supabase to include "ollama/llama2" and "ollama/mistral" as model options.

  8. Restart Your Backend Application: Apply all changes by restarting your backend to ensure the new configurations take effect.

By following these steps, you should be able to switch to using a local Ollama model, bypassing the need for an OpenAI API Key and resolving the connection issues you've encountered.
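
As a rough sketch of step 3 (not Quivr's actual backend/packages/embeddings/vectors.py, and assuming the langchain OllamaEmbeddings and OpenAIEmbeddings classes), the selection logic could look like this:

import os

from langchain_community.embeddings import OllamaEmbeddings
from langchain_openai import OpenAIEmbeddings


def get_embeddings():
    # OLLAMA_API_BASE_URL is an assumed env var name; Quivr's BrainSettings may
    # expose the same value as ollama_api_base_url instead.
    ollama_api_base_url = os.getenv("OLLAMA_API_BASE_URL")
    if ollama_api_base_url:
        # Send embedding requests to the local Ollama server instead of OpenAI.
        return OllamaEmbeddings(base_url=ollama_api_base_url, model="llama2")
    return OpenAIEmbeddings(openai_api_key=os.environ["OPENAI_API_KEY"])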


@GSrinivasKishanLal
Author

@dosu, kindly suggest where I can find this "get_embeddings" function.


dosubot bot commented May 14, 2024

The "get_embeddings" function you're looking for is typically found within the Quivr application's codebase, specifically in the backend/packages/embeddings/vectors.py file. This is where you can adjust the function to utilize OllamaEmbeddings based on your configuration, especially if you're setting up to use a local Ollama model instead of relying on an external API like OpenAI. Make sure your ollama_api_base_url is correctly set in your application's settings to point to your local Ollama instance.


@GSrinivasKishanLal
Author

@dosu, I kindly request your help with the issue below; it occurs when I change the model of an existing brain.

{"message":"Network Error","name":"AxiosError","stack":"U@http://localhost:3000/_next/static/chunks/511-fa8e22a6a59a624c.js:6:86672\n7908/tT.xhr</</c.onerror@http://localhost:3000/_next/static/chunks/511-fa8e22a6a59a624c.js:6:101022\nEventHandlerNonNull*7908/tT.xhr</<@http://localhost:3000/_next/static/chunks/511-fa8e22a6a59a624c.js:6:100999\n7908/tT.xhr<@http://localhost:3000/_next/static/chunks/511-fa8e22a6a59a624c.js:6:99480\ntR@http://localhost:3000/_next/static/chunks/511-fa8e22a6a59a624c.js:6:103493\npromise callback*_request@http://localhost:3000/_next/static/chunks/511-fa8e22a6a59a624c.js:6:107164\nrequest@http://localhost:3000/_next/static/chunks/511-fa8e22a6a59a624c.js:6:105865\n7908/</e/<@http://localhost:3000/_next/static/chunks/511-fa8e22a6a59a624c.js:6:107825\n7908/a/<@http://localhost:3000/next/static/chunks/511-fa8e22a6a59a624c.js:6:81381\n@http://localhost:3000/_next/static/chunks/4957-275cd92484b2807d.js:1:10686\nupdateBrain@http://localhost:3000/_next/static/chunks/4957-275cd92484b2807d.js:1:11275\nx@http://localhost:3000/_next/static/chunks/app/studio/%5BbrainId%5D/page-713d49bd3beb26d9.js:1:33926\nonChange@http://localhost:3000/_next/static/chunks/app/studio/%5BbrainId%5D/page-713d49bd3beb26d9.js:1:27566\nw@http://localhost:3000/_next/static/chunks/app/studio/%5BbrainId%5D/page-713d49bd3beb26d9.js:1:44837\nonClick@http://localhost:3000/_next/static/chunks/app/studio/%5BbrainId%5D/page-713d49bd3beb26d9.js:1:45712\naS@http://localhost:3000/_next/static/chunks/fd9d1056-0033f815a828ffe8.js:1:71761\naN@http://localhost:3000/_next/static/chunks/fd9d1056-0033f815a828ffe8.js:1:71915\n38790/sw/<@http://localhost:3000/_next/static/chunks/fd9d1056-0033f815a828ffe8.js:1:138576\nsw@http://localhost:3000/_next/static/chunks/fd9d1056-0033f815a828ffe8.js:1:138675\nsS@http://localhost:3000/_next/static/chunks/fd9d1056-0033f815a828ffe8.js:1:139091\n38790/sN/<@http://localhost:3000/_next/static/chunks/fd9d1056-0033f815a828ffe8.js:1:145234\noK@http://localhost:3000/_next/static/chunks/fd9d1056-0033f815a828ffe8.js:1:92812\niL@http://localhost:3000/_next/static/chunks/fd9d1056-0033f815a828ffe8.js:1:122042\nsN@http://localhost:3000/_next/static/chunks/fd9d1056-0033f815a828ffe8.js:1:140399\nuE@http://localhost:3000/_next/static/chunks/fd9d1056-0033f815a828ffe8.js:1:129850\nuS@http://localhost:3000/_next/static/chunks/fd9d1056-0033f815a828ffe8.js:1:129638\nEventListener.handleEvent*sP@http://localhost:3000/_next/static/chunks/fd9d1056-0033f815a828ffe8.js:1:139888\nsE@http://localhost:3000/_next/static/chunks/fd9d1056-0033f815a828ffe8.js:1:139287\n38790/sz/<@http://localhost:3000/_next/static/chunks/fd9d1056-0033f815a828ffe8.js:1:139453\nsz@http://localhost:3000/_next/static/chunks/fd9d1056-0033f815a828ffe8.js:1:139398\n38790/t.hydrateRoot@http://localhost:3000/_next/static/chunks/fd9d1056-0033f815a828ffe8.js:1:169528\n70945/C/<@http://localhost:3000/_next/static/chunks/8069-cb25729d0aba5a55.js:1:5292\n45060/t.startTransition@http://localhost:3000/_next/static/chunks/8069-cb25729d0aba5a55.js:1:95613\nC@http://localhost:3000/_next/static/chunks/8069-cb25729d0aba5a55.js:1:5262\n35317/<@http://localhost:3000/_next/static/chunks/8069-cb25729d0aba5a55.js:1:5710\nr@http://localhost:3000/_next/static/chunks/8069-cb25729d0aba5a55.js:1:1962\nr@http://localhost:3000/_next/static/chunks/8069-cb25729d0aba5a55.js:1:2323\n35317@http://localhost:3000/_next/static/chunks/8069-cb25729d0aba5a55.js:1:5663\nl@http://localhost:3000/_next/static/chunks/webpack-ce3f0d52084a8e75.js:1:151\nn@http://localhost:3000/_next/
static/chunks/main-app-4d1564f0311960d9.js:1:395\n@http://localhost:3000/_next/static/chunks/main-app-4d1564f0311960d9.js:1:442\nl.O@http://localhost:3000/_next/static/chunks/webpack-ce3f0d52084a8e75.js:1:551\nc@http://localhost:3000/_next/static/chunks/webpack-ce3f0d52084a8e75.js:1:4290\n@http://localhost:3000/_next/static/chunks/fd9d1056-0033f815a828ffe8.js:1:64\n","config":{"transitional":{"silentJSONParsing":true,"forcedJSONParsing":true,"clarifyTimeoutError":false},"adapter":["xhr","http"],"transformRequest":[null],"transformResponse":[null],"timeout":0,"xsrfCookieName":"XSRF-TOKEN","xsrfHeaderName":"X-XSRF-TOKEN","maxContentLength":-1,"maxBodyLength":-1,"env":{},"headers":{"Accept":"application/json, text/plain, /","Content-Type":"application/json","Authorization":"Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJhdXRoZW50aWNhdGVkIiwiZXhwIjoxNzE1NjkyMTYyLCJpYXQiOjE3MTU2ODg1NjIsImlzcyI6Imh0dHA6Ly8xMjcuMC4wLjE6NTQzMjEvYXV0aC92MSIsInN1YiI6IjM5NDE4ZTNiLTAyNTgtNDQ1Mi1hZjYwLTdhY2ZjYzEyNjNmZiIsImVtYWlsIjoiYWRtaW5AcXVpdnIuYXBwIiwicGhvbmUiOiIiLCJhcHBfbWV0YWRhdGEiOnsicHJvdmlkZXIiOiJlbWFpbCIsInByb3ZpZGVycyI6WyJlbWFpbCJdfSwidXNlcl9tZXRhZGF0YSI6e30sInJvbGUiOiJhdXRoZW50aWNhdGVkIiwiYWFsIjoiYWFsMSIsImFtciI6W3sibWV0aG9kIjoicGFzc3dvcmQiLCJ0aW1lc3RhbXAiOjE3MTU2ODg1NjJ9XSwic2Vzc2lvbl9pZCI6ImJhMTVkYWQ3LTY5ZGMtNGU4Yy1hYWUzLTk3MDM4NzdmMjFmMCIsImlzX2Fub255bW91cyI6ZmFsc2V9.5x8LtEnOXG-O9s3t-G-YqIVIiQccotuf0UbPJ9s0ses"},"baseURL":"http://localhost:5050","method":"put","url":"/brains/40ba47d7-51b2-4b2a-9247-89e29619efb0/","data":"{\"name\":\"Default brain","description":"This is a description","model":"ollama-phi3:latest","prompt":{"title":"","content":""},"brain_id":"40ba47d7-51b2-4b2a-9247-89e29619efb0","status":"private","last_update":"2024-04-01T18:03:45.348384","brain_type":"doc","isApiDefinitionReadOnly":false,"isUpdatingApiDefinition":true,"max_tokens":1000}"},"code":"ERR_NETWORK","status":null}


dosubot bot commented May 14, 2024

To address the "Network Error" you're encountering when changing the model for an existing brain in your application, consider the following targeted steps:

  1. Verify Server Accessibility: Ensure the backend server is running and accessible at the specified URL and port. (A quick reachability check is sketched after this list.)

  2. Check CORS Configuration: If your frontend and backend are on different ports, ensure CORS is correctly configured on the backend to allow requests from your frontend.

  3. Review Axios Configuration: Double-check the Axios request configuration, especially the Content-Type header and the URL format. Ensure the request method and headers are correctly set.

  4. Inspect Network Issues: Use network debugging tools like Chrome DevTools' Network tab to inspect the request and identify any discrepancies or errors.

  5. Local Server Protocol: Confirm if your server is using HTTP or HTTPS and ensure your Axios request matches this protocol.

  6. Error Handling: Implement error handling in your Axios request to catch and log any errors, providing more insight into the issue.

  7. Update Dependencies: Make sure you're using the latest version of Axios to avoid bugs present in older versions.

  8. Localhost Resolution: Try using 127.0.0.1 instead of localhost in your Axios base URL to see if it resolves the issue.

Ensure that "ollama-phi3:latest" is a valid model identifier and that the JWT token used in the Authorization header is valid and correctly formatted. Additionally, verify your local environment setup, particularly if using Docker, to ensure network bridging or port forwarding is correctly configured.

These steps should help you diagnose and potentially resolve the network error you're facing. If the problem persists, providing more details about your setup and the steps leading to the error could offer further insights.
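
For steps 1 and 8, a quick reachability check from the host can help rule out hostname-resolution problems. The /healthz endpoint is the one visible in your backend logs above, and the port is the one used throughout this thread.

import requests

# Try both localhost and 127.0.0.1 to rule out a hostname-resolution issue.
for host in ("http://localhost:5050", "http://127.0.0.1:5050"):
    try:
        resp = requests.get(f"{host}/healthz", timeout=5)
        print(host, "->", resp.status_code)
    except requests.RequestException as exc:
        print(host, "->", exc)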
