Merge pull request #32 from dabumana/developer
v.0.2.5
dabumana committed Jul 12, 2023
2 parents 62a977f + ac41942 commit 18fdfc3
Showing 5 changed files with 381 additions and 84 deletions.
88 changes: 45 additions & 43 deletions README.md

### Description :notebook:

Our conversational assistant is designed to support a wide range of OpenAI services. It features advanced modes that let you customize the contextual information for specific use cases, including modifying the engine, the number of results, and the probabilities. By adjusting the number of words and the predefined values, you can achieve the highest possible accuracy, producing tokenized strings with contextualized information that use real-time online results to validate the responses.

Whether you need to fine-tune the performance of your language model or optimize your AI-powered chatbot, our conversational assistant provides you with the flexibility and control you need to achieve your goals. With its user-friendly interface and powerful features, you can easily configure the assistant to meet your needs and get the most out of your OpenAI services.

![console.gif](docs%2Fmedia%2Fcaos.gif)

### Build :wrench:

Installation steps:

- Clone the repository: `git clone https://github.com/dabumana/caos`
- Install the dependencies:
  - `go-gpt3`
  - `tview`
  - `tcell`
- Add your OpenAI API key to the `.env` file to use it with Docker, or export the value locally in your environment
- Run `./clean.sh`
- If you have Docker installed, execute `./run.sh`; otherwise run `./build.sh` (see the example session below)
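
A typical setup session could look like the following. This is a minimal sketch: the `API_KEY` variable name is an assumption, so check the `.env` template shipped with the repository for the exact name it expects.

```bash
# Clone the repository and enter it
git clone https://github.com/dabumana/caos
cd caos

# Provide your OpenAI API key; the variable name below is an assumption —
# check the .env template in the repository for the exact name it expects.
echo "API_KEY=sk-..." >> .env
# Alternatively, export it in your current shell session:
export API_KEY="sk-..."

# Clean previous artifacts, then build and run
./clean.sh
./run.sh     # with Docker installed
# ./build.sh # without Docker
```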

### Features :sparkles:

- Test all the available models for **code/text/insert/similarity**

#### Modes:

- **Training mode**: Prepare your own training sets based on the interaction
- **Edit mode**: The first input is contextual, the second one instructional
- **Template**: Developer mode prompt context
- **More than 165 templates defined as characters and roles**: you can refer to **[Awesome ChatGPT Prompts](https://github.com/f/awesome-chatgpt-prompts/blob/main/prompts.csv)**

### Advanced parameters like :dizzy:

#### Completion:

- Temperature
- Topp
- Penalty
- Frequency penalty
- Max tokens
- Engine
- Template
- Context
- Historical
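
To illustrate how these parameters map onto a raw completion request, here is a minimal sketch that calls the OpenAI completions endpoint directly with `curl`. The model name and parameter values are illustrative assumptions, not the defaults used by caos.

```bash
# Minimal completion request showing the advanced parameters; values are illustrative.
# "Penalty" in the list above is assumed to correspond to presence_penalty here.
curl https://api.openai.com/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "text-davinci-003",
    "prompt": "Summarize the benefits of terminal-based assistants.",
    "max_tokens": 128,
    "temperature": 0.2,
    "top_p": 0.9,
    "presence_penalty": 0.5,
    "frequency_penalty": 0.5,
    "n": 1,
    "logprobs": 3
  }'
```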

#### Edit:

- Contextual input
![console.gif](docs%2Fmedia%2Fedit.gif)
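
Edit mode pairs a contextual input with an instruction, which mirrors the shape of the OpenAI edits endpoint. The raw request below is a sketch of that shape (note that the edits endpoint has since been deprecated by OpenAI); caos wires this up internally.

```bash
# Edit request: "input" carries the contextual text, "instruction" the change to apply.
curl https://api.openai.com/v1/edits \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "text-davinci-edit-001",
    "input": "The quick brown fox jump over the lazy dog.",
    "instruction": "Fix the grammar."
  }'
```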

#### Embedded:

- Nested input to analyze embeddings
![console.gif](docs%2Fmedia%2Fembedded.gif)
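
For the embedded mode, the nested input is turned into an embeddings request. A minimal raw equivalent, assuming the commonly used `text-embedding-ada-002` model, looks like this:

```bash
# Embedding request: the response contains a vector for each input string.
curl https://api.openai.com/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "text-embedding-ada-002",
    "input": "Analyze the semantic similarity of this sentence."
  }'
```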

#### Predict:

- Nested input to analyze text (Powered by GPTZero)
![console.gif](docs%2Fmedia%2Fzero.gif)
- Multiple results and probabilities
- Detailed log with UTC timestamps
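
The predict mode forwards the nested input to GPTZero. The endpoint, header, and field names in the sketch below are assumptions about GPTZero's public API at the time of writing; verify them against the current GPTZero API reference before relying on them.

```bash
# Hypothetical GPTZero prediction request — endpoint and field names are assumptions.
curl https://api.gptzero.me/v2/predict/text \
  -H "Content-Type: application/json" \
  -H "x-api-key: $GPTZERO_API_KEY" \
  -d '{
    "document": "Paste the text you want to analyze here."
  }'
```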

### How to use :question:

![console.gif](docs%2Fmedia%2Fgeneral.gif)

- OpenAI API is a set of tools and services that allows developers to create applications that use artificial intelligence (AI) technology.
- The OpenAI API provides access to a range of AI-powered services, including natural language processing (NLP), computer vision, and reinforcement learning.
- To use the OpenAI API, developers must first register for an API key; a quick way to verify the key is shown below.
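
Once you have registered and obtained an API key, a simple way to confirm it works is to list the models available to your account; the environment variable name below is just a convention:

```bash
# List the models your key can access — a quick sanity check for the API key.
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```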

The terminal app provides a conversational assistant designed to work with OpenAI services, able to understand natural language queries and provide accurate results.
It also includes advanced modes that allow users to modify the contextual information for specific uses. For example, users can adjust the engine, results, and probabilities according to the number of words used in the query, which allows for more accurate results with longer queries.

![details.png](docs%2Fmedia%2Fdetails.png)

- **Mode**: Modify the current mode, selecting between **(TEXT/EDIT/CODE)**
- **Engine**: Modify the model that you want to test
- **Results**: Modify the number of results displayed for each prompt
- **Probabilities**: Depending on your temperature and Topp setup, you may need this field to produce a more accurate response based on the range of possible results
- **Temperature**: If you are working with temperature, try to keep Topp at a higher value than the temperature
- **Topp**: Applies the same concept as temperature; when you modify this value, apply a higher value for temperature (see the example payloads after this list)
- **Penalty**: Penalty applied to character repetition and redundancy in a completion result
- **Frequency Penalty**: Establishes the frequency threshold for the defined penalty
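
To make the Temperature/Topp guidance concrete, here are two illustrative raw requests: one that steers sampling with temperature while leaving top_p high, and one that steers with top_p while leaving temperature high. The model name and values are assumptions for demonstration only.

```bash
# Temperature-driven sampling: keep top_p higher than temperature.
curl https://api.openai.com/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{"model": "text-davinci-003", "prompt": "Hello", "temperature": 0.3, "top_p": 1.0, "max_tokens": 64}'

# Top_p-driven sampling: keep temperature higher than top_p.
curl https://api.openai.com/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{"model": "text-davinci-003", "prompt": "Hello", "temperature": 1.0, "top_p": 0.4, "max_tokens": 64}'
```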

### Disclaimer :bangbang:
