Fix/misc #142

Merged
merged 14 commits on Apr 29, 2024
more complete endpoint documentation
pedro-devv committed Apr 24, 2024
commit 91133c3b2d60a367ce6a254c33deaf1b928de12c
25 changes: 25 additions & 0 deletions docs/src/app/api-reference/audio/page.mdx
@@ -43,6 +43,31 @@ Discover how to convert audio to text or text to audio. OpenAI compliant. {{ cla

### Optional attributes

<Properties>
<Property name="language" type="string">
The language of the input audio. Supplying the input language in ISO-639-1 format will improve accuracy and latency.
</Property>
</Properties>

<Properties>
<Property name="prompt" type="string">
Optional text to guide the model's style or to continue a previous audio segment. The prompt should match the language of the audio.
</Property>
</Properties>

<Properties>
<Property name="response_format" type="string">
The format of the transcript output. One of: `json`, `text`, `srt`, `verbose_json`, or `vtt`.
</Property>
</Properties>

<Properties>
<Property name="temperature" type="float">
The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.
</Property>
</Properties>

<Properties>
<Property name="create_session" type="bool">
If present and true, a new audio session is created and used for the transcription, and the session's UUID is returned in the response object. A session keeps track of past inferences, which may be useful for things like live transcription where continuous audio is submitted across several requests.
</Property>
</Properties>
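
For illustration, a transcription request exercising several of these optional attributes might look like the sketch below, using Python with `requests`. The base URL, port, and model name are assumptions for a local Edgen instance; adjust them to your deployment. Multipart form fields are sent as strings.

```python
import requests

BASE_URL = "http://localhost:33322/v1"  # assumed local Edgen instance

with open("meeting.wav", "rb") as audio:
    response = requests.post(
        f"{BASE_URL}/audio/transcriptions",
        files={"file": ("meeting.wav", audio, "audio/wav")},
        data={
            "model": "default",        # placeholder model name
            "language": "en",          # ISO-639-1 code improves accuracy and latency
            "prompt": "Weekly planning meeting.",
            "response_format": "json",
            "temperature": "0.2",
            "create_session": "true",  # ask the server to open a session for follow-up requests
        },
    )

result = response.json()
print(result["text"])
```

If `create_session` is honoured, the response object also carries the new session's UUID, as described above.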
111 changes: 111 additions & 0 deletions docs/src/app/api-reference/chat/page.mdx
@@ -39,9 +39,120 @@ Generate text from text. {{ className: 'lead' }}
</li>
</ul>


</Property>
</Properties>

### Optional attributes

<Properties>
<Property name="frequency_penalty" type="float">
A number in `[-2.0, 2.0]`. A higher number decreases the likelihood that the model repeats itself.
</Property>
</Properties>

<Properties>
<Property name="logit_bias" type="map">
A map from token IDs to bias values in `[-100.0, +100.0]`. A percentage bias is added to those tokens before sampling; a value of `-100.0` prevents the token from being selected at all.
You could use this, for example, to prevent the model from emitting profanity.
</Property>
</Properties>
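
As a sketch of the expected shape, the relevant fragment of a request body might look like the following; the token IDs are made up for illustration and depend entirely on the model's tokenizer.

```python
# Hypothetical token IDs; real values depend on the model's tokenizer.
logit_bias = {
    "50256": -100.0,  # never allow this token to be selected
    "15043": 30.0,    # bias sampling toward this token
}
```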

<Properties>
<Property name="max_tokens" type="integer">
The maximum number of tokens to generate. If `None`, generation terminates at the first stop token or at the end of the sentence.
</Property>
</Properties>

<Properties>
<Property name="n" type="integer">
How many completion choices to generate for the prompt. `1` by default. You can use this to generate several sets of completions for the same prompt.
</Property>
</Properties>

<Properties>
<Property name="presence_penalty" type="float">
A number in `[-2.0, 2.0]`. Positive values "increase the model's likelihood to talk about new topics."
</Property>
</Properties>

<Properties>
<Property name="seed" type="integer">
The random number generator seed for the session. Random by default.
</Property>
</Properties>

<Properties>
<Property name="stop" type="string or array">
A stop phrase or set of stop phrases.
The server will pause emitting completions if it appears to be generating a stop phrase, and will terminate completions if a full stop phrase is detected.
Stop phrases are never emitted to the client.
</Property>
</Properties>

<Properties>
<Property name="stream" type="bool">
If true, stream the output as it is computed by the server, instead of returning the whole completion at the end.
You can use this to live-stream completions to a client.
</Property>
</Properties>
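
A sketch of consuming a streamed completion, assuming Edgen emits the usual OpenAI-style server-sent events (`data:` lines terminated by `[DONE]`); the base URL and model name are placeholders.

```python
import json
import requests

BASE_URL = "http://localhost:33322/v1"  # assumed local Edgen instance

payload = {
    "model": "default",  # placeholder model name
    "messages": [{"role": "user", "content": "Write a haiku about rain."}],
    "stream": True,
}

with requests.post(f"{BASE_URL}/chat/completions", json=payload, stream=True) as response:
    for line in response.iter_lines():
        if not line or not line.startswith(b"data: "):
            continue
        chunk = line[len(b"data: "):]
        if chunk == b"[DONE]":
            break
        delta = json.loads(chunk)["choices"][0]["delta"]
        print(delta.get("content") or "", end="", flush=True)
```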

<Properties>
<Property name="response_format" type="string">
The format of the response stream.
This is always assumed to be JSON, which is non-conformant with the OpenAI spec.
</Property>
</Properties>

<Properties>
<Property name="temperature" type="float">
The sampling temperature, in `[0.0, 2.0]`. Higher values make the output more random.
</Property>
</Properties>

<Properties>
<Property name="top_p" type="float">
Nucleus sampling. A value of `0.1`, for example, means only the tokens comprising the top 10% of probability mass are considered for sampling, preventing the selection of very low-probability tokens.
</Property>
</Properties>

<Properties>
<Property name="tools" type="array">
A list of tools made available to the model.
</Property>
</Properties>

<Properties>
<Property name="tool_choice" type="string">
If present, the tool that the user has chosen to use.
OpenAI states:
- `none` prevents any tool from being used,
- `auto` allows any tool to be used, or
- instead of a name, you can provide the tool's full description.
</Property>
</Properties>
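
For example, a request exposing a single tool and letting the model decide whether to call it could look like this sketch; the tool itself, the base URL, and the model name are illustrative assumptions, and the schema follows the OpenAI function-tool format.

```python
import requests

BASE_URL = "http://localhost:33322/v1"  # assumed local Edgen instance

payload = {
    "model": "default",  # placeholder model name
    "messages": [{"role": "user", "content": "What's the weather in Lisbon?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

response = requests.post(f"{BASE_URL}/chat/completions", json=payload)
print(response.json()["choices"][0])
```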

<Properties>
<Property name="user" type="string">
A unique identifier for the _end user_ creating this request. This is used for telemetry and user tracking, and is unused within Edgen.
</Property>
</Properties>

<Properties>
<Property name="one_shot" type="bool">
Indicates whether this is an isolated request, with no associated past or future context. This may allow for optimisations in some implementations.
Default: `false`
</Property>
</Properties>

<Properties>
<Property name="context_hint" type="integer">
A hint for how big the context will be.
**Warning:** an unsound hint may severely degrade performance and/or inference quality, and in some cases even cause Edgen to crash. Do not set this value unless you know what you are doing.
</Property>
</Properties>
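
Putting several of the plainer attributes together, a non-streaming request could look like the following sketch; as before, the base URL and model name are placeholders.

```python
import requests

BASE_URL = "http://localhost:33322/v1"  # assumed local Edgen instance

payload = {
    "model": "default",  # placeholder model name
    "messages": [{"role": "user", "content": "Summarise the plot of Hamlet in two sentences."}],
    "max_tokens": 128,
    "temperature": 0.7,
    "top_p": 0.9,
    "frequency_penalty": 0.5,
    "seed": 42,        # fixed seed for more reproducible output
    "stop": ["\n\n"],  # stop at the first blank line
    "one_shot": True,  # isolated request with no session context
}

response = requests.post(f"{BASE_URL}/chat/completions", json=payload)
print(response.json()["choices"][0]["message"]["content"])
```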

</Col>
<Col sticky>

14 changes: 14 additions & 0 deletions docs/src/app/api-reference/embeddings/page.mdx
@@ -40,6 +40,20 @@ Generate embeddings from text. {{ className: 'lead' }}
</Property>
</Properties>

### Optional attributes

<Properties>
<Property name="response_format" type="string">
The format to return the embeddings in. Can be either `float` or `base64`.
</Property>
</Properties>

<Properties>
<Property name="dimensions" type="integer">
The number of dimensions the resulting output embeddings should have. Only supported in some models.
</Property>
</Properties>
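
A minimal embeddings request using both optional attributes might look like this sketch; the base URL, model name, and dimension count are assumptions.

```python
import requests

BASE_URL = "http://localhost:33322/v1"  # assumed local Edgen instance

payload = {
    "model": "default",          # placeholder model name
    "input": ["Edgen runs models locally."],
    "response_format": "float",  # or "base64"
    "dimensions": 256,           # only honoured by models that support it
}

response = requests.post(f"{BASE_URL}/embeddings", json=payload)
embedding = response.json()["data"][0]["embedding"]
print(len(embedding))
```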

</Col>
<Col sticky>

55 changes: 55 additions & 0 deletions docs/src/app/api-reference/image/page.mdx
@@ -39,6 +39,61 @@ Generate images from text. {{ className: 'lead' }}
</Property>
</Properties>

### Optional attributes

<Properties>
<Property name="width" type="integer">
The width of the generated image.
</Property>
</Properties>

<Properties>
<Property name="height" type="integer">
The height of the generated image.
</Property>
</Properties>

<Properties>
<Property name="uncond_prompt" type="string">
The optional unconditional (negative) prompt, describing what the generated image should steer away from.
</Property>
</Properties>

<Properties>
<Property name="steps" type="integer">
The number of steps to be used in the diffusion process.
</Property>
</Properties>

<Properties>
<Property name="images" type="integer">
The number of images to generate.
Default: 1
</Property>
</Properties>

<Properties>
<Property name="seed" type="integer">
The random number generator seed to use for the generation.
By default, a random seed is used.
</Property>
</Properties>

<Properties>
<Property name="guidance_scale" type="float">
The guidance scale to use for generation, that is, how closely the model should follow the prompt.
Values below 1 disable guidance (the prompt is ignored).
</Property>
</Properties>

<Properties>
<Property name="vae_scale" type="float">
The Variational Auto-Encoder scale to use for generation.
Required if `model` is not a pre-made descriptor name.
This value should probably not be set if `model` is a pre-made descriptor name.
</Property>
</Properties>
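
A sketch of an image generation request using these attributes; the endpoint path, base URL, and model name are assumptions modelled on OpenAI-style routes, so confirm them against this page's endpoint definition.

```python
import requests

BASE_URL = "http://localhost:33322/v1"  # assumed local Edgen instance

payload = {
    "model": "default",                      # placeholder model or descriptor name
    "prompt": "A watercolor lighthouse at dusk",
    "uncond_prompt": "blurry, low quality",  # concepts to steer away from
    "width": 512,
    "height": 512,
    "steps": 30,
    "images": 1,
    "seed": 1234,           # fixed seed for reproducibility
    "guidance_scale": 7.5,  # how strongly to follow the prompt
}

# The path below is an assumption; confirm it against the endpoint shown above.
response = requests.post(f"{BASE_URL}/image/generations", json=payload)
print(response.status_code)
```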

</Col>
<Col sticky>