Codespace stunning space umbrella jjg7w544957fq7vq #1156

Status: Open. Wants to merge 63 commits into base branch jhills20-patch-1.

Changes from all commits (63):
6bc6630
fix: URL to langchain agents page (#942)
shauryr Dec 20, 2023
810e2d7
Added HoneyHive to related resources (#936)
codehruv Dec 20, 2023
fb077e5
Adds Tembo VectorDB to list of vector databases (#940)
ChuckHend Dec 20, 2023
12b161c
Update registry.yaml (#944)
shyamal-anadkat Dec 20, 2023
c6e6e0d
Fix completions tags and authors
simonpfish Dec 20, 2023
d3f79a2
Fix colin-openai profile url
simonpfish Dec 20, 2023
1abc529
updates to coloring, and intro paragraph (#945)
jhills20 Dec 20, 2023
228cde1
Fix styling for using_logprobs cookbook (#947)
shyamal-anadkat Dec 21, 2023
f6b0cb1
Logprobs suggestions (#948)
enochcheung Dec 21, 2023
3f8d3f3
Fix syntax error in W&B weave notebook (#964)
zanieb Dec 29, 2023
fa9fd04
Update Reproducible_outputs_with_the_seed_parameter.ipynb (#972)
shyamal-anadkat Jan 4, 2024
b7316a1
Update registry.yaml (#982)
colin-openai Jan 4, 2024
d891437
Fixed link to "techniques to improve reliability" (#974)
rissois Jan 9, 2024
2c441ab
Migrate all notebooks to API V1 (#914)
gaborcselle Jan 25, 2024
f1e13cf
Misc updates (#1022)
logankilpatrick Jan 25, 2024
4d37365
Update function calling examples to Python SDK (#1025)
teomusatoiu Jan 27, 2024
d75f2dd
Fix cookbook author from Joe to Colin (#1031)
ibigio Jan 29, 2024
4b35217
Update Azure Cognitive Search to latest stable/GA Python SDK version …
farzad528 Feb 1, 2024
c1bd61f
updated semantic search notebook (#1033)
jamescalam Feb 6, 2024
d4a3e14
Create How to combine GPTV with RAG - Create a Clothing Matchmaker Ap…
teomusatoiu Feb 16, 2024
c945a61
Update registry.yaml (#1053)
teomusatoiu Feb 16, 2024
ea43830
Update README.md (#1028)
shivamsupr Feb 22, 2024
3f5c076
docs: add tip (#1044)
himself65 Feb 22, 2024
e622fba
Update link in How_to_handle_rate_limits.ipynb
logankilpatrick Feb 22, 2024
8eeb249
Update Search_reranking_with_cross-encoders.ipynb (#1060)
ElmiraGhorbani Feb 23, 2024
c565262
Update How_to_handle_rate_limits.ipynb (#1065)
glojain Feb 23, 2024
1e1dd5a
Fix syntax error in DALL-E notebook (#1036)
zanieb Feb 26, 2024
d25d32e
Update chat finetune data prep notebook (#1074)
Andrew-peng Feb 27, 2024
4918721
Update docs to reflect the Python SDK changes (#1030)
teomusatoiu Feb 27, 2024
28f7e88
Update registry.yaml (#1076)
teomusatoiu Feb 27, 2024
88051d4
fix hyperlink (#1052)
jhills20 Feb 27, 2024
e92df85
Added a new notebook: "Parse PDF docs for RAG applications" (#1080)
katia-openai Feb 29, 2024
56b633b
New tag & caption with GPT-4V notebook (#1079)
katia-openai Feb 29, 2024
b8f79a9
Update registry.yaml
logankilpatrick Feb 29, 2024
b6aeae9
Updated registry (#1085)
katia-openai Mar 4, 2024
5818b81
updates authors.yaml (#1087)
shyamal-anadkat Mar 5, 2024
0d53977
Update registry.yaml (#1088)
shyamal-anadkat Mar 5, 2024
1b487d7
Fix rate limit increase link in How_to_handle_rate_limits.ipynb (#1089)
yoandresaav Mar 6, 2024
6b6f1c5
fix spelling (#1081)
MorganMarshall Mar 6, 2024
4f41695
Fix typo in Fine_tuning_for_function_calling.ipynb (#1054)
eltociear Mar 6, 2024
d00e9a4
Update How_to_finetune_chat_models.ipynb (#1011)
TirendazAcademy Mar 6, 2024
76ed3e4
Update gptv terminology to gpt-4v (#1093)
teomusatoiu Mar 8, 2024
ed6194e
Add perplexity example to the `logprobs` user guide (#1071)
ankur-oai Mar 9, 2024
e423a0e
Add moderation cookbook (#1078)
teomusatoiu Mar 11, 2024
1dad5f9
Update gpt-4v to gpt-4V and gpt-4 with Vision (#1096)
teomusatoiu Mar 11, 2024
bed4110
Update How_to_use_moderation.ipynb (#1097)
teomusatoiu Mar 11, 2024
6333678
WIP: Evals starter (#1107)
shyamal-anadkat Mar 25, 2024
27f7f36
fixed styling (#1119)
shyamal-anadkat Mar 27, 2024
60998aa
Instructions for OPENAI_API_KEY as env var and in IDEs (#1042)
gaborcselle Mar 28, 2024
fc2b61c
Add Vellum to Related Resources Page (#1098)
noanflaherty Mar 28, 2024
52b6198
Add Baserun to Related Resources (#1117)
erik-megarad Mar 28, 2024
a1dd606
Update Creating_slides_with_Assistants_API_and_DALL-E3.ipynb (#1121)
eltociear Mar 28, 2024
ac7f655
Update deprecated comment to reflect use of gpt-4 in Clustering.ipynb…
alpsencer Mar 28, 2024
7c3aaa8
Updating the "Using embeddings" cookbook to reflect the latest SDK (#…
jbeutler-openai Apr 1, 2024
a405468
updating for latest embedding models (#1129)
jbeutler-openai Apr 2, 2024
3c4e4bd
Joe at openai/summarize with controllable detail (#1128)
joe-at-openai Apr 8, 2024
df56d84
Dylanra/clip based rag (#1110)
dylanra-openai Apr 10, 2024
1d82e43
Dylanra/synthetic data gen (#1109)
dylanra-openai Apr 10, 2024
0c010b2
Adds notebook for GPTV with function calling (#1139)
shyamal-anadkat Apr 10, 2024
43bd32a
Update Summarizing_with_controllable_detail.ipynb (#1140)
shyamal-anadkat Apr 10, 2024
a92a1d2
Clean up streaming code (#1133)
SethHWeidman Apr 10, 2024
555bbc0
Fix broken link (#1141)
deining Apr 11, 2024
1a4d9e0
Initial commit
sarbazvatanatan Apr 18, 2024
3 changes: 3 additions & 0 deletions .gitignore
@@ -137,3 +137,6 @@ dmypy.json
*.DS_Store
tmp_*
examples/fine-tuned_qa/local_cache/*
+
+# PyCharm files
+.idea/
2 changes: 1 addition & 1 deletion README.md
@@ -9,7 +9,7 @@

> ✨ Navigate at [cookbook.openai.com](https://cookbook.openai.com)
-Example code and guides for accomplishing common tasks with the [OpenAI API](https://platform.openai.com/docs/introduction). To run these examples, you'll need an OpenAI account and associated API key ([create a free account here](https://beta.openai.com/signup)).
+Example code and guides for accomplishing common tasks with the [OpenAI API](https://platform.openai.com/docs/introduction). To run these examples, you'll need an OpenAI account and associated API key ([create a free account here](https://beta.openai.com/signup)). Set an environment variable called `OPENAI_API_KEY` with your API key. Alternatively, in most IDEs such as Visual Studio Code, you can create an `.env` file at the root of your repo containing `OPENAI_API_KEY=<your API key>`, which will be picked up by the notebooks.

Most code examples are written in Python, though the concepts can be applied in any language.

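As a rough sketch of the `OPENAI_API_KEY` setup described above (assuming the `python-dotenv` package and the v1 `openai` SDK, which the notebooks in this PR migrate to), a notebook might pick up the key like this:

```python
# Minimal sketch: load OPENAI_API_KEY from the environment, or from a
# .env file at the repo root (requires the python-dotenv package).
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # reads .env if present; real environment variables win
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
```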
47 changes: 23 additions & 24 deletions articles/how_to_work_with_large_language_models.md
@@ -6,14 +6,14 @@

The magic of large language models is that by being trained to minimize this prediction error over vast quantities of text, the models end up learning concepts useful for these predictions. For example, they learn:

-* how to spell
-* how grammar works
-* how to paraphrase
-* how to answer questions
-* how to hold a conversation
-* how to write in many languages
-* how to code
-* etc.
+- how to spell
+- how grammar works
+- how to paraphrase
+- how to answer questions
+- how to hold a conversation
+- how to write in many languages
+- how to code
+- etc.

They do this by “reading” a large amount of existing text and learning how words tend to appear in context with other words; the model then uses what it has learned to predict the next most likely word in response to a user request, and each subsequent word after that.

@@ -25,12 +25,12 @@ Of all the inputs to a large language model, by far the most influential is the

Large language models can be prompted to produce output in a few ways:

-* **Instruction**: Tell the model what you want
-* **Completion**: Induce the model to complete the beginning of what you want
-* **Scenario**: Give the model a situation to play out
-* **Demonstration**: Show the model what you want, with either:
-  * A few examples in the prompt
-  * Many hundreds or thousands of examples in a fine-tuning training dataset
+- **Instruction**: Tell the model what you want
+- **Completion**: Induce the model to complete the beginning of what you want
+- **Scenario**: Give the model a situation to play out
+- **Demonstration**: Show the model what you want, with either:
+  - A few examples in the prompt
+  - Many hundreds or thousands of examples in a fine-tuning training dataset

An example of each is shown below.
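Before those, a rough sketch in code of the first two styles applied to the same author-extraction task (the quotation and prompt wording here are illustrative placeholders, not from the article):

```python
# Sketch: instruction-style vs. completion-style prompts for the same task.
QUOTE = '"It was the best of times, it was the worst of times." - Charles Dickens, A Tale of Two Cities'

# Instruction: tell the model what you want.
instruction_prompt = f"Extract the name of the author from the quotation below.\n\n{QUOTE}"

# Completion: start the text you want and let the model finish it.
completion_prompt = f"{QUOTE}\n\nThe author of this quotation is"
```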

@@ -77,6 +77,7 @@ Output:
Giving the model a scenario to follow or role to play out can be helpful for complex queries or when seeking imaginative responses. When using a hypothetical prompt, you set up a situation, problem, or story, and then ask the model to respond as if it were a character in that scenario or an expert on the topic.

Example scenario prompt:
+
```text
Your role is to extract the name of the author from any given text

@@ -141,24 +141,22 @@ Large language models aren't only great at text - they can be great at code too.

GPT-4 powers [numerous innovative products][OpenAI Customer Stories], including:

-* [GitHub Copilot] (autocompletes code in Visual Studio and other IDEs)
-* [Replit](https://replit.com/) (can complete, explain, edit and generate code)
-* [Cursor](https://cursor.sh/) (build software faster in an editor designed for pair-programming with AI)
+- [GitHub Copilot] (autocompletes code in Visual Studio and other IDEs)
+- [Replit](https://replit.com/) (can complete, explain, edit and generate code)
+- [Cursor](https://cursor.sh/) (build software faster in an editor designed for pair-programming with AI)

-GPT-4 is more advanced than previous models like `text-davinci-002`. But, to get the best out of GPT-4 for coding tasks, it's still important to give clear and specific instructions. As a result, designing good prompts can take more care.
+GPT-4 is more advanced than previous models like `gpt-3.5-turbo-instruct`. But, to get the best out of GPT-4 for coding tasks, it's still important to give clear and specific instructions. As a result, designing good prompts can take more care.

### More prompt advice

For more prompt examples, visit [OpenAI Examples][OpenAI Examples].

In general, the input prompt is the best lever for improving model outputs. You can try tricks like:

-* **Be more specific** E.g., if you want the output to be a comma separated list, ask it to return a comma separated list. If you want it to say "I don't know" when it doesn't know the answer, tell it 'Say "I don't know" if you do not know the answer.' The more specific your instructions, the better the model can respond.
-* **Provide Context**: Help the model understand the bigger picture of your request. This could be background information, examples/demonstrations of what you want or explaining the purpose of your task.
-* **Ask the model to answer as if it was an expert.** Explicitly asking the model to produce high quality output or output as if it was written by an expert can induce the model to give higher quality answers that it thinks an expert would write. Phrases like "Explain in detail" or "Describe step-by-step" can be effective.
-* **Prompt the model to write down the series of steps explaining its reasoning.** If understanding the 'why' behind an answer is important, prompt the model to include its reasoning. This can be done by simply adding a line like "[Let's think step by step](https://arxiv.org/abs/2205.11916)" before each answer.
-
-
+- **Be more specific** E.g., if you want the output to be a comma separated list, ask it to return a comma separated list. If you want it to say "I don't know" when it doesn't know the answer, tell it 'Say "I don't know" if you do not know the answer.' The more specific your instructions, the better the model can respond.
+- **Provide Context**: Help the model understand the bigger picture of your request. This could be background information, examples/demonstrations of what you want or explaining the purpose of your task.
+- **Ask the model to answer as if it was an expert.** Explicitly asking the model to produce high quality output or output as if it was written by an expert can induce the model to give higher quality answers that it thinks an expert would write. Phrases like "Explain in detail" or "Describe step-by-step" can be effective.
+- **Prompt the model to write down the series of steps explaining its reasoning.** If understanding the 'why' behind an answer is important, prompt the model to include its reasoning. This can be done by simply adding a line like "[Let's think step by step](https://arxiv.org/abs/2205.11916)" before each answer.
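
Putting these tips together, a minimal sketch of a chat request that combines specificity, context, expert framing, and step-by-step reasoning (the model name, task, and wording are illustrative assumptions, not from the article):

```python
# Sketch: combine "be specific", "provide context", "answer as an expert",
# and "think step by step" in one chat completion request.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You are an expert calendar mathematician. Think step by step, "
                "then end with a comma separated list of answers. "
                'Say "I don\'t know" if you are unsure.'
            ),
        },
        {
            "role": "user",
            "content": "Which of these years are leap years: 1900, 2000, 2024?",
        },
    ],
)
print(response.choices[0].message.content)
```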

[Fine Tuning Docs]: https://platform.openai.com/docs/guides/fine-tuning
[OpenAI Customer Stories]: https://openai.com/customer-stories
3 changes: 3 additions & 0 deletions articles/related_resources.md
@@ -5,12 +5,14 @@ People are writing great tools and papers for improving outputs from GPT. Here a
## Prompting libraries & tools (in alphabetical order)

- [Arthur Shield](https://www.arthur.ai/get-started): A paid product for detecting toxicity, hallucination, prompt injection, etc.
+- [Baserun](https://baserun.ai/): A paid product for testing, debugging, and monitoring LLM-based apps
- [Chainlit](https://docs.chainlit.io/overview): A Python library for making chatbot interfaces.
- [Embedchain](https://github.com/embedchain/embedchain): A Python library for managing and syncing unstructured data with LLMs.
- [FLAML (A Fast Library for Automated Machine Learning & Tuning)](https://microsoft.github.io/FLAML/docs/Getting-Started/): A Python library for automating selection of models, hyperparameters, and other tunable choices.
- [Guardrails.ai](https://shreyar.github.io/guardrails/): A Python library for validating outputs and retrying failures. Still in alpha, so expect sharp edges and bugs.
- [Guidance](https://github.com/microsoft/guidance): A handy looking Python library from Microsoft that uses Handlebars templating to interleave generation, prompting, and logical control.
- [Haystack](https://github.com/deepset-ai/haystack): Open-source LLM orchestration framework to build customizable, production-ready LLM applications in Python.
+- [HoneyHive](https://honeyhive.ai): An enterprise platform to evaluate, debug, and monitor LLM apps.
- [LangChain](https://github.com/hwchase17/langchain): A popular Python/JavaScript library for chaining sequences of language model prompts.
- [LiteLLM](https://github.com/BerriAI/litellm): A minimal Python library for calling LLM APIs with a consistent format.
- [LlamaIndex](https://github.com/jerryjliu/llama_index): A Python library for augmenting LLM apps with data.
@@ -24,6 +26,7 @@ People are writing great tools and papers for improving outputs from GPT. Here a
- [Prompttools](https://github.com/hegelai/prompttools): Open-source Python tools for testing and evaluating models, vector DBs, and prompts.
- [Scale Spellbook](https://scale.com/spellbook): A paid product for building, comparing, and shipping language model apps.
- [Semantic Kernel](https://github.com/microsoft/semantic-kernel): A Python/C#/Java library from Microsoft that supports prompt templating, function chaining, vectorized memory, and intelligent planning.
+- [Vellum](https://www.vellum.ai/): A paid AI product development platform to experiment with, evaluate, and deploy advanced LLM apps.
- [Weights & Biases](https://wandb.ai/site/solutions/llmops): A paid product for tracking model training and prompt engineering experiments.
- [YiVal](https://github.com/YiVal/YiVal): An open-source GenAI-Ops tool for tuning and evaluating prompts, retrieval configurations, and model parameters using customizable datasets, evaluation methods, and evolution strategies.

34 changes: 17 additions & 17 deletions articles/techniques_to_improve_reliability.md
@@ -14,25 +14,25 @@ If you were asked to multiply 13 by 17, would the answer pop immediately into yo

Similarly, if you give GPT-3 a task that's too complex to do in the time it takes to calculate its next token, it may confabulate an incorrect guess. Yet, akin to humans, that doesn't necessarily mean the model is incapable of the task. With some time and space to reason things out, the model still may be able to answer reliably.
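
The prompt/response examples below can be reproduced programmatically; here is a minimal sketch, assuming the v1 Python SDK, which still exposes the legacy completions endpoint that `gpt-3.5-turbo-instruct` uses:

```python
# Sketch: send one of the prompts below to the legacy completions endpoint.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Q: A juggler has 16 balls. Half of the balls are golf balls and half "
    "of the golf balls are blue. How many blue golf balls are there?\n"
    "A: Let's think step by step."
)

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt=prompt,
    max_tokens=200,
)
print(response.choices[0].text)
```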

-As an example, if you ask `text-davinci-002` the following math problem about juggling balls, it answers incorrectly:
+As an example, if you ask `gpt-3.5-turbo-instruct` the following math problem about juggling balls, it answers incorrectly:

-```text-davinci-002
+```gpt-3.5-turbo-instruct
Q: A juggler has 16 balls. Half of the balls are golf balls and half of the golf balls are blue. How many blue golf balls are there?
A:
```

-```text-davinci-002
+```gpt-3.5-turbo-instruct
There are 8 blue golf balls.
```

Does this mean that GPT-3 cannot do simple math problems? No; in fact, it turns out that by prompting the model with `Let's think step by step`, the model solves the problem reliably:

-```text-davinci-002
+```gpt-3.5-turbo-instruct
Q: A juggler has 16 balls. Half of the balls are golf balls and half of the golf balls are blue. How many blue golf balls are there?
A: Let's think step by step.
```

-```text-davinci-002
+```gpt-3.5-turbo-instruct
There are 16 balls in total.
Half of the balls are golf balls.
That means that there are 8 golf balls.
@@ -64,9 +64,9 @@ The rest of this article shares techniques for improving reliability of large la

One way to give a model more time and space to think is to break tasks into simpler pieces.

-As an example, consider a task where we ask the model a multiple-choice question about some text - in this case, a game of Clue. When asked directly, `text-davinci-002` isn't able to put clues 3 & 5 together, and answers incorrectly:
+As an example, consider a task where we ask the model a multiple-choice question about some text - in this case, a game of Clue. When asked directly, `gpt-3.5-turbo-instruct` isn't able to put clues 3 & 5 together, and answers incorrectly:

-```text-davinci-002
+```gpt-3.5-turbo-instruct
Use the following clues to answer the following multiple-choice question.

Clues:
@@ -84,7 +84,7 @@ Question: Was Colonel Mustard in the observatory with the candlestick?
Solution:
```

-```text-davinci-002
+```gpt-3.5-turbo-instruct
(c) Unknown; there is not enough information to determine whether Colonel Mustard was in the observatory with the candlestick
```

@@ -96,7 +96,7 @@ However, instead of asking for the answer directly, we can split the task into t
- Second, combine the relevant clues to reason out the answer to the question
- Third, write the final answer: either (a), (b), or (c)

-```text-davinci-002
+```gpt-3.5-turbo-instruct
Use the following clues to answer the following multiple-choice question, using the following procedure:
(1) First, go through the clues one by one and consider whether the clue is potentially relevant
(2) Second, combine the relevant clues to reason out the answer to the question
@@ -118,7 +118,7 @@ Solution:
(1) First, go through the clues one by one and consider whether the clue is potentially relevant:
```

-```text-davinci-002
+```gpt-3.5-turbo-instruct
Clue 1: Miss Scarlett was the only person in the lounge. This clue is not relevant.
Clue 2: The person with the pipe was in the kitchen. This clue is not relevant.
Clue 3: Colonel Mustard was the only person in the observatory. This clue is relevant.
@@ -136,9 +136,9 @@ By giving the model more time and space to think, and guiding it along a reasoni

Another benefit of splitting complex instructions into smaller subtasks is that it can help keep the model focused on each subtask.

-For example, if we ask `text-davinci-002` to summarize a text in its original language, the model can lapse back into English:
+For example, if we ask `gpt-3.5-turbo-instruct` to summarize a text in its original language, the model can lapse back into English:

-```text-davinci-002
+```gpt-3.5-turbo-instruct
Summarize the text using the original language of the text. The summary should be one sentence long.

Text:
Expand All @@ -149,13 +149,13 @@ La estadística (la forma femenina del término alemán Statistik, derivado a su
Summary:
```

-```text-davinci-002
+```gpt-3.5-turbo-instruct
The text explains that statistics is a science that studies the variability, collection, organization, analysis, interpretation, and presentation of data, as well as the random process that generates them following the laws of probability.
```

However, if we first ask the model to identify the language of the text, and then summarize the text, it becomes more reliable:

-```text-davinci-002
+```gpt-3.5-turbo-instruct
First, identify the language of the text. Second, summarize the text using the original language of the text. The summary should be one sentence long.

Text:
Expand All @@ -166,7 +166,7 @@ La estadística (la forma femenina del término alemán Statistik, derivado a su
Language:
```

-```text-davinci-002
+```gpt-3.5-turbo-instruct
Spanish

La estadística es una ciencia que estudia la variabilidad, colección, organización, análisis, interpretación, y presentación de los datos, así como el proceso aleatorio que los genera siguiendo las leyes de la probabilidad.
@@ -203,7 +203,7 @@ To learn more, read the [full paper](https://arxiv.org/abs/2205.11916).

If you apply this technique to your own tasks, don't be afraid to experiment with customizing the instruction. `Let's think step by step` is rather generic, so you may find better performance with instructions that hew to a stricter format customized to your use case. For example, you can try more structured variants like `First, think step by step about why X might be true. Second, think step by step about why Y might be true. Third, think step by step about whether X or Y makes more sense.`. And you can even give the model an example format to help keep it on track, e.g.:

-```text-davinci-002
+```gpt-3.5-turbo-instruct
Using the IRS guidance below, answer the following questions using this format:
(1) For each criterion, determine whether it is met by the vehicle purchase
- {Criterion} Let's think step by step. {explanation} {yes or no, or if the question does not apply then N/A}.
@@ -229,7 +229,7 @@ Solution:
- Does the vehicle have at least four wheels? Let's think step by step.
```

-```text-davinci-002
+```gpt-3.5-turbo-instruct
The Toyota Prius Prime has four wheels, so the answer is yes.
- Does the vehicle weigh less than 14,000 pounds? Let's think step by step. The Toyota Prius Prime weighs less than 14,000 pounds, so the answer is yes.
- Does the vehicle draw energy from a battery with at least 4 kilowatt hours that may be recharged from an external source? Let's think step by step. The Toyota Prius Prime has a battery with at least 4 kilowatt hours that may be recharged from an external source, so the answer is yes.
20 changes: 10 additions & 10 deletions articles/text_comparison_examples.md
@@ -8,8 +8,8 @@ Embeddings can be used for semantic search, recommendations, cluster analysis, n

For more information, read OpenAI's blog post announcements:

-* [Introducing Text and Code Embeddings (Jan 2022)](https://openai.com/blog/introducing-text-and-code-embeddings/)
-* [New and Improved Embedding Model (Dec 2022)](https://openai.com/blog/new-and-improved-embedding-model/)
+- [Introducing Text and Code Embeddings (Jan 2022)](https://openai.com/blog/introducing-text-and-code-embeddings/)
+- [New and Improved Embedding Model (Dec 2022)](https://openai.com/blog/new-and-improved-embedding-model/)

For comparison with other embedding models, see [Massive Text Embedding Benchmark (MTEB) Leaderboard](https://huggingface.co/spaces/mteb/leaderboard)

@@ -19,14 +19,14 @@ Embeddings can be used for search either by themselves or as a feature in a larg

The simplest way to use embeddings for search is as follows:

-* Before the search (precompute):
-  * Split your text corpus into chunks smaller than the token limit (8,191 tokens for `text-embedding-ada-002`)
-  * Embed each chunk of text
-  * Store those embeddings in your own database or in a vector search provider like [Pinecone](https://www.pinecone.io), [Weaviate](https://weaviate.io) or [Qdrant](https://qdrant.tech)
-* At the time of the search (live compute):
-  * Embed the search query
-  * Find the closest embeddings in your database
-  * Return the top results
+- Before the search (precompute):
+  - Split your text corpus into chunks smaller than the token limit (8,191 tokens for `text-embedding-3-small`)
+  - Embed each chunk of text
+  - Store those embeddings in your own database or in a vector search provider like [Pinecone](https://www.pinecone.io), [Weaviate](https://weaviate.io) or [Qdrant](https://qdrant.tech)
+- At the time of the search (live compute):
+  - Embed the search query
+  - Find the closest embeddings in your database
+  - Return the top results

An example of how to use embeddings for search is shown in [Semantic_text_search_using_embeddings.ipynb](../examples/Semantic_text_search_using_embeddings.ipynb).
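
As a compact sketch of that precompute/live-compute split (the corpus, query, and in-memory "index" here are illustrative assumptions; a production system would use one of the vector stores listed above):

```python
# Sketch: embeddings-based search with an in-memory index.
import numpy as np
from openai import OpenAI

client = OpenAI()

corpus = [
    "Cats sleep for most of the day.",
    "The stock market fell sharply on Tuesday.",
]

def embed(texts: list[str]) -> np.ndarray:
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

doc_vectors = embed(corpus)                        # before the search: precompute
query_vector = embed(["Why did shares drop?"])[0]  # at search time: live compute

# OpenAI embeddings are unit-normalized, so cosine similarity is a dot product.
scores = doc_vectors @ query_vector
print(corpus[int(np.argmax(scores))])              # best-matching chunk
```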

2 changes: 1 addition & 1 deletion articles/what_is_new_with_dalle_3.mdx
@@ -96,7 +96,7 @@ Have you ever struggled to find the perfect icon for your website or app? It wou

![icon_set](/images/dalle_3/icon_set.jpg)

-In this case, I used Potrace to convert the images to SVGs, which you can download [here](http://potrace.sourceforge.net/). This is what I used to convert the images:
+In this case, I used Potrace to convert the images to SVGs, which you can download [here](https://potrace.sourceforge.net/). This is what I used to convert the images:

```bash
potrace -s cat.jpg -o cat.svg
```