
bug: Differences between OpenAi and Gemini #900

Open
3 of 4 tasks
vrige opened this issue Dec 10, 2024 · 0 comments
Labels
bug (Something isn't working), status: needs triage (New issues that have not yet been reviewed or categorized)

vrige commented Dec 10, 2024

Did you check docs and existing issues?

  • I have read all the NeMo-Guardrails docs
  • I have updated the package to the latest version before submitting this issue
  • (optional) I have used the develop branch
  • I have searched the existing issues of NeMo-Guardrails

Python version (python --version)

3.10.15

Operating system/version

MacOs 14.6.1

NeMo-Guardrails version (if you must use a specific version and not the latest)

No response

Describe the bug

Hi, thank you for your work.
I noticed that I obtain different results from the value generation operator (`...` in Colang 2.x) depending on the provider.
With OpenAI I obtain the desired response, while with Gemini I receive a syntax error, after changing ONLY the config file.
I am testing this with two different NeMo servers (one with actions, one without); both work fine otherwise, and I reproduce the same problem locally.
These are the config files I used:

colang_version: 2.x
models:
  - type: main
    engine: vertexai
    model: gemini-1.5-pro

colang_version: 2.x
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct

Using prints and logs to debug, I traced the error to the line $output = ..."'{$ref_use}'"
I have also tried variations of how the input is passed (for instance $ref_use.transcript), but they did not work.
This is the error with gemini:

# This is the current conversation between the user and the bot:                                                                                                                                           
user action: user said "Hi! Can you do the spelling of the following name <PERSON>? Thanks"                                                                                                                
                                                                                                                                                                                                           
# 'Hi! Can you do the spelling of the following name <PERSON>? Thanks'                                                                                                                                     
$output =                                                                                                                                                                                                  

/Users/vrige/Library/Caches/pypoetry/virtualenvs/llm-gateway-C7tEgGBA-py3.10/lib/python3.10/site-packages/proto/message.py:389: DeprecationWarning: The argument `including_default_value_fields` has been removed from
                Protobuf 5.x. Please use `always_print_fields_with_no_presence` instead.
                
  warnings.warn(

LLM Completion (d9093..)
user intent: user asked for spelling of a name                                                                                                                                                             

17:47:27.685 | Output Stats None
17:47:27.685 | LLM call took 2.36 seconds
WARNING:nemoguardrails.actions.action_dispatcher:Error while execution 'GenerateValueAction' with parameters '{'var_name': 'output', 'instructions': "'Hi! Can you do the spelling of the following name <PERSON>? Thanks'"}': Invalid LLM response: `user intent: user asked for spelling of a name`
ERROR:nemoguardrails.actions.action_dispatcher:Invalid LLM response: `user intent: user asked for spelling of a name`
Traceback (most recent call last):
  File "/Users/vrige/Library/Caches/pypoetry/virtualenvs/llm-gateway-C7tEgGBA-py3.10/lib/python3.10/site-packages/nemoguardrails/actions/v2_x/generation.py", line 813, in generate_value
    return literal_eval(value)
  File "/Users/vrige/.pyenv/versions/3.10.15/lib/python3.10/ast.py", line 64, in literal_eval
    node_or_string = parse(node_or_string.lstrip(" \t"), mode='eval')
  File "/Users/vrige/.pyenv/versions/3.10.15/lib/python3.10/ast.py", line 50, in parse
    return compile(source, filename, mode, flags,
  File "<unknown>", line 1
    user intent: user asked for spelling of a name
         ^^^^^^
SyntaxError: invalid syntax

The input message was the following:
'Hi! Can you do the spelling of the following name <PERSON>? Thanks'
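The traceback is reproducible outside NeMo-Guardrails: `ast.literal_eval` only accepts Python literal syntax, so a quoted reply (as OpenAI returns it) parses, while an unquoted intent line like the one Gemini returned does not. A minimal sketch, with an illustrative reply string:

```python
from ast import literal_eval

# A quoted reply is a valid Python string literal and parses fine:
ok = literal_eval("'Sure, here is the spelling you asked for.'")
print(ok)  # Sure, here is the spelling you asked for.

# An unquoted intent line is not a Python literal, so parsing fails
# exactly as in the traceback above:
try:
    literal_eval("user intent: user asked for spelling of a name")
except SyntaxError as exc:
    print("SyntaxError:", exc)
```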

Finally, simpleAction performs a custom validation of the output, and it works fine.

I need to work with Gemini; is there a way to overcome this problem?
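Pending a fix, one possible client-side mitigation is to tolerate non-literal replies by falling back to the raw string instead of failing. This is a hypothetical helper of my own (parse_llm_value is not a NeMo-Guardrails API), sketching the fallback in plain Python:

```python
from ast import literal_eval


def parse_llm_value(raw: str) -> str:
    """Parse an LLM reply as a Python literal if possible, else keep it verbatim."""
    text = raw.strip()
    try:
        # Quoted replies (OpenAI-style) are valid literals and parse cleanly.
        return literal_eval(text)
    except (SyntaxError, ValueError):
        # Unquoted replies (Gemini-style "user intent: ...") are not literals;
        # return the raw text instead of raising.
        return text
```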

Steps To Reproduce

import core
import llm

flow main
    user said something as $ref_use
    $output = ..."'{$ref_use}'"
    await analyse_output(ref_use=$output) as $ref_act_out

flow analyse_output $ref_use
    $result = await simpleAction(inputs=$ref_use)
    if "Error: " in $result
        bot say "I do not like it. Change your input"
    else
        bot say $result

UPDATE: I also tried the following simplified flow, but I got the same error.

import core
import llm

flow main
    user said something as $ref_use
    $output = ..."'{$ref_use}'"

Expected Behavior

The OpenAI response is the following:
Sure, the spelling of the name << PERSON >> is <US_ITIN>. Is there anything else I can assist you with?

Actual Behavior

The same Gemini error shown above under Describe the bug: GenerateValueAction fails with `Invalid LLM response: user intent: user asked for spelling of a name`, and literal_eval raises `SyntaxError: invalid syntax`.
vrige added the bug and status: needs triage labels on Dec 10, 2024
vrige changed the title from "bug:" to "bug: Differences between OpenAi and Gemini" on Dec 11, 2024