Remove function calling #102
Conversation
…ocess_json_response()
gpt-3_5 returns technologies such as "Backend: Node.js with Mongo database (Mongoose)"; codellama throws an error due to missing `choices`.
@@ -51,7 +51,7 @@ https://github.com/Pythagora-io/gpt-pilot/assets/10895136/0495631b-511e-451b-93d

 # 🔌 Requirements

-- **Python**
+- **Python >= 3.10**
Why Python 3.10? I am using 3.9.6 on my current MacBook and it works fine.
I ran into a (minor) issue with typing, and since Python 3.10 came out in 2021, I figured that older versions are unlikely to still be in use:
utils/function_calling.py:5: in <module>
JsonType = str | int | float | bool | None | list["JsonType"] | dict[str, "JsonType"]
E TypeError: unsupported operand type(s) for |: 'type' and 'type'
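The traceback comes from PEP 604's `X | Y` union syntax, which only works on plain types at runtime from Python 3.10 onward. A sketch of a 3.9-compatible equivalent, assuming the alias is only used for type hints:

```python
# 3.9-compatible version of the failing alias in utils/function_calling.py.
# typing.Union works on every supported Python version, while
# `str | int | ...` raises TypeError before 3.10 because plain types
# don't implement the `|` operator.
from typing import Dict, List, Union

JsonType = Union[str, int, float, bool, None, List["JsonType"], Dict[str, "JsonType"]]
```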
# TODO: I don't think llm_response would ever be 'DONE'?
You are right, once it is done it will return 'INSTALLED', which is defined in 'development/env_setup/install_next_technology.prompt'. This should be updated, and maybe the best way would be to put it in const/llm.py with the other LLM constants.
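A minimal sketch of that suggestion; the constant names come from the statuses mentioned in this thread, but the exact module layout is an assumption:

```python
# const/llm.py (sketch): collect the magic status strings returned by
# prompts in one place instead of comparing raw string literals inline.
# Names mirror the statuses discussed in this thread; these are not
# the project's actual constants.
INSTALLED = 'INSTALLED'
NOT_INSTALLED = 'NOT_INSTALLED'
DONE = 'DONE'
NEEDS_DEBUGGING = 'NEEDS_DEBUGGING'
```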
But also keep in mind that this feature is disabled at the moment (return on line 312), so it is not important to fix it now.
cli_response, response = execute_command(convo.agent.project, command, timeout)
if response is None:
    response = convo.send_message('dev_ops/ran_command.prompt',
# TODO: Prompt mentions `command` could be `INSTALLED` or `NOT_INSTALLED`, where is this handled?
For the devops agent, while installing tech dependencies, the response will be INSTALLED or NOT_INSTALLED. But for the developer agent, in other words commands executed while coding (e.g. `npm run start`), it can return DONE or NEEDS_DEBUGGING.
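The split described above could be sketched like this; the helper name and role strings are hypothetical, just to make the mapping explicit:

```python
def command_succeeded(agent_role: str, status: str) -> bool:
    # Hypothetical helper illustrating the status mapping described above.
    if agent_role == 'devops':
        # installing tech dependencies
        return status == 'INSTALLED'  # failure case: 'NOT_INSTALLED'
    # developer agent: commands run while coding, e.g. `npm run start`
    return status == 'DONE'  # failure case: 'NEEDS_DEBUGGING'
```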
pilot/utils/function_calling.py
Outdated
return

model: str = gpt_data['model']
is_llama = 'llama' in model or 'anthropic' in model
Why is it different for llama compared to all other LLMs? Or do we have to add an implementation for each LLM individually?
I need to rename this to `is_instruct`. Wrapping in `[INST]` seemed to have a negative effect for some models.
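A sketch of what the renamed check could look like, assuming the intent is to wrap the prompt in `[INST]` tags only for instruct-style models (the helper name is hypothetical):

```python
def build_prompt(model: str, prompt: str) -> str:
    # Hypothetical helper: `is_instruct` replaces the old `is_llama` flag.
    # Only instruct-style models get the [INST] wrapping, since it seemed
    # to hurt output quality for other models.
    is_instruct = 'llama' in model or 'anthropic' in model
    if is_instruct:
        return f'[INST] {prompt} [/INST]'
    return prompt
```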
pilot/utils/function_calling.py
Outdated
model: str = gpt_data['model']
is_llama = 'llama' in model or 'anthropic' in model

# if model == 'gpt-4':
Why is this commented-out part here?
I still need to fix the prompts for GPT-4.
See also notes in #99