Update the UI for the prompt optimizer #1583

Conversation

curiosityz

Update the UI for the prompt optimizer in the notebook `gemini/prompts/prompt_optimizer/vertex_ai_prompt_optimizer_ui.ipynb`.

* **Notebook Updates:**
  - Add installation commands for required packages.
  - Simplify user input and configuration steps.
  - Remove unnecessary manual configurations.

* **Utility Functions:**
  - Add functions to interact with LLM via API for intelligent defaults.
  - Refactor utility functions to reduce complexity and improve readability.
  - Remove redundant functions and classes that are no longer needed.

* **Notebook `vertex_ai_prompt_optimizer_sdk_custom_metric.ipynb`:**
  - Update notebook to leverage LLM via API for intelligent defaults.
  - Simplify user input and configuration steps.
  - Remove unnecessary manual configurations.
@curiosityz requested a review from a team as a code owner on December 26, 2024 at 06:54

google-cla bot commented Dec 26, 2024

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.


Contributor

@code-review-assist code-review-assist bot left a comment


Hi team,

I'm currently reviewing this pull request and will have a full review ready in a few minutes. In the meantime, here's a quick summary to get everyone up to speed:

This pull request updates the UI for the prompt optimizer, aiming to simplify the user experience and leverage LLMs for intelligent defaults. The changes are focused on three key areas:

  • Improved User Experience: The notebooks (vertex_ai_prompt_optimizer_ui.ipynb and vertex_ai_prompt_optimizer_sdk_custom_metric.ipynb) have been streamlined. Unnecessary manual configurations have been removed, and user input steps have been simplified. Installation commands for required packages have also been added to vertex_ai_prompt_optimizer_ui.ipynb.
  • Enhanced Functionality: New utility functions in vapo_lib.py (lines 1053-1072) now interact with an LLM via an API to provide intelligent default settings. This reduces the need for manual configuration and improves the overall efficiency of the prompt optimizer.
  • Code Refactoring: Existing utility functions in vapo_lib.py have been refactored to improve readability and reduce complexity. Redundant functions and classes have been removed.

Specifically:

  • vapo_lib.py: Added functions get_intelligent_defaults, refactor_code_for_readability, and remove_redundant_code to utilize an LLM for intelligent defaults and code optimization (lines 1053-1072).
  • vertex_ai_prompt_optimizer_sdk_custom_metric.ipynb: Updated to use the new LLM-powered defaults, simplifying user input and removing unnecessary configurations (lines 431-437 and 1229-1236).
  • vertex_ai_prompt_optimizer_ui.ipynb: Added installation commands for necessary packages and simplified user interaction (lines 150-157).

I'll provide a more detailed review shortly, but this should give you a good overview of the changes.


And now, a little haiku to brighten your day:

Prompt's gentle guide,
LLM whispers, code refines,
UI shines anew.

Best,
Gemini

@curiosityz closed this on Dec 26, 2024
Contributor

@code-review-assist code-review-assist bot left a comment


This pull request focuses on enhancing the UI and backend functionality of the prompt optimizer, specifically targeting the vertex_ai_prompt_optimizer_ui.ipynb and vertex_ai_prompt_optimizer_sdk_custom_metric.ipynb notebooks. The inclusion of installation commands, simplification of user inputs, and leveraging the LLM for intelligent defaults are valuable additions. However, there are areas for improvement, especially in error handling, input validation, and documentation within the vapo_lib.py utility functions. Additionally, maintaining consistent %pip vs. !pip usage in notebook cells and addressing the redundant source array in vertex_ai_prompt_optimizer_sdk_custom_metric.ipynb will further enhance code quality and clarity.

Style Guide Elements Referenced:

  • PEP 8 (Python): Adherence to function and variable naming conventions, docstring style, and import order.
  • Google Python Style Guide: General code style and best practices, including error handling and documentation.

I've made several suggestions below to address these points and ensure the code is robust, maintainable, and user-friendly.

Comment on lines +1053 to +1056
```python
def get_intelligent_defaults(prompt: str, model: GenerativeModel) -> dict[str, Any]:
    """Get intelligent defaults from the LLM."""
    response = model.generate_content(prompt)
    return json.loads(response.text)
```

Severity: medium

The get_intelligent_defaults function could benefit from more robust error handling. While the current implementation fetches defaults from the LLM, it doesn't handle potential errors during the API call or JSON parsing. Consider adding a try-except block to catch JSONDecodeError and other potential exceptions. Returning a dictionary with an error message upon failure, as suggested below, aligns with the Google Python Style Guide and improves the function's robustness.

Suggested change:

```python
def get_intelligent_defaults(prompt: str, model: GenerativeModel) -> dict[str, Any]:
    """Get intelligent defaults from the LLM."""
    try:
        response = model.generate_content(prompt)
        return json.loads(response.text)
    except Exception as e:  # covers json.JSONDecodeError and API-call failures
        return {"error": f"Error getting intelligent defaults: {e}"}
```
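As a quick sanity check, the error path of an error-handled `get_intelligent_defaults` can be exercised with a stub model. `StubModel` below is a hypothetical stand-in for `GenerativeModel` (it only mimics the `.text` attribute of the real SDK's response object), so this is a sketch, not the real API:

```python
import json
from types import SimpleNamespace
from typing import Any


class StubModel:
    """Hypothetical stand-in for GenerativeModel that returns canned text."""

    def __init__(self, text: str) -> None:
        self._text = text

    def generate_content(self, prompt: str) -> SimpleNamespace:
        # The real SDK returns a response object exposing `.text`.
        return SimpleNamespace(text=self._text)


def get_intelligent_defaults(prompt: str, model: Any) -> dict[str, Any]:
    """Get intelligent defaults from the LLM, with error handling."""
    try:
        response = model.generate_content(prompt)
        return json.loads(response.text)
    except Exception as e:  # covers json.JSONDecodeError and API-call failures
        return {"error": f"Error getting intelligent defaults: {e}"}


print(get_intelligent_defaults("p", StubModel('{"temperature": 0.2}')))
# → {'temperature': 0.2}
print(get_intelligent_defaults("p", StubModel("not json")))
# → {'error': 'Error getting intelligent defaults: ...'}
```

With the guard in place, malformed LLM output degrades to an error dictionary instead of raising into the caller.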

Comment on lines +1059 to +1064
```python
def refactor_code_for_readability(source_code: str) -> str:
    """Refactor code for readability using LLM."""
    prompt = f"Refactor the following code for readability:\n\n{source_code}"
    model = init_new_model("text-bison@001")
    response = model.generate_content(prompt)
    return response.text
```

Severity: low

The docstring for refactor_code_for_readability could be improved by providing more context. While it mentions using an LLM, it doesn't specify which LLM or how it's used. Adding details about the prompt structure and expected output, as demonstrated below, enhances clarity and aligns with the Google Python Style Guide.

Suggested change:

```python
def refactor_code_for_readability(source_code: str) -> str:
    """Refactor code for readability using LLM.

    Sends a prompt to the specified LLM requesting code refactoring for
    improved readability. The prompt includes the input `source_code` and
    instructions for refactoring. The LLM's response, containing the
    refactored code, is returned.

    Args:
        source_code: The source code to be refactored.

    Returns:
        The refactored source code as a string.
    """
    prompt = f"Refactor the following code for readability:\n\n{source_code}"
    model = init_new_model("text-bison@001")
    response = model.generate_content(prompt)
    return response.text
```

Comment on lines +1067 to +1072
```python
def remove_redundant_code(source_code: str) -> str:
    """Remove redundant code using LLM."""
    prompt = f"Remove redundant code from the following:\n\n{source_code}"
    model = init_new_model("text-bison@001")
    response = model.generate_content(prompt)
    return response.text
```

Severity: low

The function name remove_redundant_code doesn't clearly convey its purpose, especially given the use of an LLM. A more descriptive name like simplify_code_with_llm or remove_redundancy_with_llm would be better, per PEP 8. Also, consider adding input validation to ensure source_code is not empty or None. This prevents unexpected behavior and improves code robustness.

Suggested change:

```python
def simplify_code_with_llm(source_code: str) -> str:
    """Remove redundant code using LLM."""
    if not source_code:
        return "Input source code is empty or None."
    prompt = f"Remove redundant code from the following:\n\n{source_code}"
    model = init_new_model("text-bison@001")
    response = model.generate_content(prompt)
    return response.text
```

```diff
@@ -431,6 +431,7 @@
       "id": "wqOHg5aid6HP"
     },
     "outputs": [],
     "source": [],
```

Severity: low

The empty source array at line 434 is redundant and could be removed. This improves the notebook's structure and avoids potential confusion.
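For context, an empty code cell like the one flagged here looks roughly as follows in the raw `.ipynb` JSON (a sketch reconstructed from the diff hunk; surrounding fields such as `execution_count` are abbreviated):

```json
{
  "cell_type": "code",
  "metadata": {
    "id": "wqOHg5aid6HP"
  },
  "outputs": [],
  "source": []
}
```

Deleting the entire cell object, rather than just the `source` array, keeps the notebook JSON valid.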

```diff
@@ -1229,7 +1230,7 @@
     "cell_type": "markdown",
     "metadata": {
       "id": "RlNQ_UrFH9ne"
```

The closing curly brace } should be moved to the next line to be consistent with the JSON structure and improve readability.

Suggested change:

```json
      "id": "RlNQ_UrFH9ne"
    }
```

Comment on lines +153 to +154
```json
"!pip install --upgrade --quiet 'google-cloud-aiplatform[evaluation]'\n",
"!pip install --upgrade --quiet 'plotly' 'asyncio' 'tqdm' 'tenacity' 'etils' 'importlib_resources' 'fsspec' 'gcsfs' 'nbformat>=4.2.0'\n",
```

Severity: low

For installing packages within a Jupyter notebook environment, using %pip is generally preferred over !pip. %pip installs packages within the notebook's kernel, ensuring they're available in the correct environment and avoiding potential conflicts. Please update the installation commands to use %pip install.

Suggested change:

```json
"%pip install --upgrade --quiet 'google-cloud-aiplatform[evaluation]'\n",
"%pip install --upgrade --quiet 'plotly' 'asyncio' 'tqdm' 'tenacity' 'etils' 'importlib_resources' 'fsspec' 'gcsfs' 'nbformat>=4.2.0'\n",
```
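The distinction matters because `!pip` shells out to whichever `pip` binary is first on `PATH`, while `%pip` installs into the environment of the running kernel. Outside IPython, the same guarantee comes from invoking pip as a module of the current interpreter (illustrative sketch; the actual install call is left commented out):

```python
import sys

# Equivalent of `%pip install --upgrade --quiet plotly`: run pip via the
# interpreter executing this code, not whichever `pip` is on PATH.
cmd = [sys.executable, "-m", "pip", "install", "--upgrade", "--quiet", "plotly"]
print(cmd[0])  # the kernel's own interpreter path
# import subprocess; subprocess.check_call(cmd)  # uncomment to actually install
```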
