This repository has been archived by the owner on Jan 21, 2024. It is now read-only.

Commit

Merge pull request #44 from dabumana/developer
v1.0.2
dabumana committed Jul 21, 2023
2 parents 2368600 + d2e1d7b commit d2357be
Showing 9 changed files with 157 additions and 41 deletions.
9 changes: 4 additions & 5 deletions .github/workflows/integration.yml
Original file line number Diff line number Diff line change
Expand Up @@ -2,15 +2,14 @@ name: Integration

on:
push:
branches: [ "main", "developer" ]
branches: ["main", "developer"]
pull_request:
branches: [ "main", "developer" ]
branches: ["main", "developer"]

permissions:
actions: read

jobs:

build-coverage:
name: Coverage
runs-on: ${{'ubuntu-latest' || 'macos-latest' || 'windows-latest' }}
Expand All @@ -27,10 +26,10 @@ jobs:
persist-credentials: false

- name: Dependencies
run: sudo apt-get install libcurl4-openssl-dev
run: sudo apt-get update && sudo apt-get install libcurl4-openssl-dev --fix-missing

- name: Build
run: make build APP=caos

- name: Coverage
run: make test
148 changes: 122 additions & 26 deletions README.md
Original file line number Diff line number Diff line change
Expand Up @@ -5,57 +5,145 @@
[![Codacy Badge](https://app.codacy.com/project/badge/Grade/ce2f44761a6e486999eddd05b749c1be)](https://app.codacy.com/gh/dabumana/caos/dashboard?utm_source=gh&utm_medium=referral&utm_content=&utm_campaign=Badge_grade)
[![Maintainability](https://api.codeclimate.com/v1/badges/9bf177949db99d4b2f15/maintainability)](https://codeclimate.com/github/dabumana/caos/maintainability)


[![Acceptance](https://github.com/dabumana/caos/actions/workflows/acceptance.yml/badge.svg)](https://github.com/dabumana/caos/actions/workflows/acceptance.yml)
[![Integrity](https://github.com/dabumana/caos/actions/workflows/integration.yml/badge.svg)](https://github.com/dabumana/caos/actions/workflows/integration.yml)
[![Release](https://github.com/dabumana/caos/actions/workflows/release.yml/badge.svg)](https://github.com/dabumana/caos/actions/workflows/release.yml)

### Description :notebook:

Our conversational assistant is designed to support a wide range of OpenAI services. It features advanced modes that allow you to customize the contextual information for specific use cases, including modifying the engine, results, and probabilities. With the ability to adjust the amount of words and predefined values, you can achieve the highest level of accuracy possible, tokenized strings with contextualized information using real-time online results to validate the responses.
Our conversational assistant is designed to support a wide range of OpenAI services. It features advanced modes that allow you to customize the contextual information for specific use cases, including modifying the engine, results, and probabilities, along with a search engine to scrape web results.

Whether you need to fine-tune the performance of your language model or optimize your AI-powered chatbot, our conversational assistant provides you with the flexibility and control you need to achieve your goals. With its user-friendly interface and powerful features, you can easily configure the assistant to meet your needs and get the most out of your OpenAI services.
You can achieve the highest level of accuracy possible, tokenized strings with contextualized information using **real-time online results** to validate the responses.

![console.gif](docs%2Fmedia%2Fcaos.gif)

Whether you need to fine-tune the performance of your language model or optimize your AI-powered chatbot, our conversational assistant provides the flexibility and control you need to test and recreate new historical prompts based on JSON files that can be used for further training.

Contains a simplified schema to store the content in collections that can be integrated with some other API services:

```json
[
  {
    "id": "",
    "session": [
      {
        "timestamp": "1689910077911",
        "event": {
          "prompt": "",
          "completion": ""
        }
      }
    ]
  }
]
```

Each new conversation can be exported once you have finished your prompt requests. Keep in mind that a conversation can hold multiple prompts and completions, depending on the actual token limit size.

***16K Models*** can process larger training sessions, but it depends on how much context can be found once the results are filtered and processed.

### Build :wrench:

Installation steps:
First, download the repository. The installation can be completed from source using ***make*** after installing the required dependencies:

- docker (optional)
- libcurl
- golang
- make
- gcc

##### From source

Once you have the requirements ready in your environment, you need to add the API key used in the build process. This can be done in two ways:

---
###### Using environment variables:

- Create an environment file called **.env** and add the following variables:

```
API_KEY=<YOUR-API-KEY>
ZERO_API_KEY=<YOUR-API-KEY>
```

*Don't include the angle brackets < > around your key.*

###### Using profile resources:

- Inside ***caos/src/resources/template*** you can find a file called **template.csv**

```
"API_KEY","YOUR-API-KEY"
"ZERO_API_KEY","YOUR-API-KEY"
```
---

Now that the variables are defined, we can build a version that includes our current API key. To accomplish this, just run:

```
make build
```
Then copy the resulting binary to your system binaries folder:
```
cp caos/src/bin/caos/caos /bin
```
Or you can run locally with:
```
make run
```

- Download the following repository `git clone github.com/dabumana/caos`
- Install dependencies:
- `go-gpt3`
- `tview`
- `tcell`
- Add your API key provided from OpenAI into the `.env` file to use it with docker or export the value locally in your environment
- Run `./clean.sh`
- If you have Docker installed execute `./run.sh`, in any other case `./build.sh`
##### Using Docker

If you want a virtualized environment with the service ready to use, you can run the following command:

```
make run-pod KEY=<YOUR-OPENAI-API-KEY> ZKEY=<YOUR-ZEROGPT-API-KEY> CPU=<CPUS-ASSIGNED>
```

### Features :sparkles:

- Test all the available models for **code/text/insert/similarity**
- Test all the available models for **code/text/insert/similarity/turbo/predict/embedding**
- Validate online results easily with your current requirements
- Use dorks to grab more accurate results from a particular site
- Train and prepare contextual sessions for further training
- Predict results and validate if a text was generated using GPT models
- Contextualize information based on web-scrapping
- Conversational assistant that can be used with more than **165 templates**
- Prepare your own set of interactions based on previous responses

#### Modes:

- **Training mode**: Prepare your own sets based on the interaction
- **Edit mode**: First input will be contextual the second one instructional
- **Template**: Developer mode prompt context
- **Streaming mode**: Streams responses with online results, based on a general role with turbo models.
- **Edit mode**: Follows up previous prompts as contextual information, for general use with all models.

---
- **More than 165 templates defined as characters and roles** you can refer to **[Awesome ChatGPT Prompts](https://github.com/f/awesome-chatgpt-prompts/blob/main/prompts.csv)**
---

### Advanced parameters like :dizzy:

#### Completion:

For prompt request:
- Results
- Probabilities
- Temperature
- Topp
- Penalty
- Frequency penalty

For engine use:
- Max tokens
- Engine
- Template
- Context
- Historical

#### Dork:

- Combine with multiple online results using google dorks:
- ##### Ex. Elaborate a top 10 list of vulnerabilities for IOT in 2023 intext:iot site:cve.mitre.org
![console.gif](docs%2Fmedia%2Fdork.png)
- Search engine added
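A dork query is plain string composition: operators like `intext:` and `site:` are appended to the natural-language prompt before it is sent to the search engine. A hypothetical helper (not part of the project) could look like:

```go
package main

import "fmt"

// buildDork appends Google dork operators to a natural-language prompt.
// Empty operators are skipped.
func buildDork(prompt, intext, site string) string {
	q := prompt
	if intext != "" {
		q += " intext:" + intext
	}
	if site != "" {
		q += " site:" + site
	}
	return q
}

func main() {
	fmt.Println(buildDork("Elaborate a top 10 list of vulnerabilities for IOT in 2023", "iot", "cve.mitre.org"))
}
```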

#### Edit:

- Contextual input
Expand All @@ -75,30 +163,38 @@ Installation steps:

### How to use :question:

![console.gif](docs%2Fmedia%2Fgeneral.gif)

The OpenAI API provides access to a range of AI-powered services, including natural language processing (NLP), computer vision, and reinforcement learning.

- OpenAI API is a set of tools and services that allow developers to create applications that use artificial intelligence (AI) technology.
- The OpenAI API provides access to a range of AI-powered services, including natural language processing (NLP), computer vision, and reinforcement learning.
- To use the OpenAI API, developers must first register for an API key.

The terminal app have a conversational assistant that is designed to work with OpenAI services, able to understand natural language queries and provide accurate results,
also includes advanced modes that allow users to modify the contextual information for specific uses for example, users can adjust the engine, results, probabilities according to the amount of words used in the query, this allows for more accurate results when using longer queries.
![console.gif](docs%2Fmedia%2Fgeneral.gif)

#### General parameters:
The terminal app has a conversational assistant designed to work with OpenAI services; it understands natural-language queries and provides accurate results based on contextualized information. It also includes advanced modes that let users modify the contextual information for specific uses: for example, users can adjust the engine, results, and probabilities, and tokens are calculated according to the number of words used in the query, which allows for more accurate results with longer queries.

#### Menu:

- **Mode**: Shows the actual model type selected **(TEXT/EDIT/CODE/PREDICT/EMBEDDING/TURBO)**
- **Engine**: Select the model that you want to use
- **Role**: Role definition you can use **User / Assistant / System**
- **Template**: Select a role template for a contextualized prompt according to your request, **it doesn't work with turbo** models.

![details.png](docs%2Fmedia%2Fdetails.png)
![console.gif](docs%2Fmedia%2Fmenu.png)

#### General parameters:

- **Mode**: Modify the actual mode, select between **(TEXT/EDIT/CODE)**
- **Engine**: Modify the model that you want to test
- **Results**: Modify the amount of results displayed for each prompt
- **Probabilities**: Depending on your temperature and top-p setup, you may need this field to produce a more accurate response according to the range of possible results
- **Temperature**: If you are working with temperature, try to keep top-p at a higher value than temperature
- **Topp**: Applies the same concept as temperature; when you modify this value, apply a higher value for temperature
- **Penalty**: Penalty applied to characters and redundancy in a result completion
- **Frequency Penalty**: Establishes the frequency threshold for the defined penalty

![console.gif](docs%2Fmedia%2Fpreferences.png)

---

### Disclaimer :bangbang:

This software is provided "as is" and any expressed or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. In no event shall the author or contributors be liable for any direct, indirect, incidental, special, exemplary, or consequential damages.
27 changes: 19 additions & 8 deletions ci/service/Dockerfile
Original file line number Diff line number Diff line change
@@ -1,12 +1,23 @@
FROM golang:1.19

WORKDIR /usr/local/bin
FROM alpine:latest
# Define local binary folder
ENV APP_HOME /usr/local/bin/caos

# Update local package manager and install required dependencies
RUN apk update && apk upgrade
RUN apk add git musl-dev make pkgconfig gcc go
RUN apk add curl=7.79.1-r1 curl-dev=7.79.1-r1 --repository=http://dl-cdn.alpinelinux.org/alpine/v3.12/main
# Create source folder and clone the repository
RUN mkdir -p ${APP_HOME}
RUN git clone https://github.com/dabumana/caos ${APP_HOME}

RUN cp ${APP_HOME}/src/bin/caos/* /usr/local/bin
RUN git clone https://github.com/dabumana/caos ${APP_HOME}/source
WORKDIR ${APP_HOME}/source
# Build the application and delete the source folder
RUN make build
RUN cp ${APP_HOME}/source/src/bin/caos/caos /bin
RUN rm -rf ${APP_HOME}

# Setup argument variables
ARG KEY
ARG ZKEY
# Initialize environment variables
ENV API_KEY $KEY
ENV ZERO_API_KEY $ZKEY
# Define entrypoint
ENTRYPOINT [ "caos" ]
Binary file removed docs/media/details.png
Binary file not shown.
Binary file added docs/media/dork.png
Binary file added docs/media/menu.png
Binary file added docs/media/preferences.png
12 changes: 10 additions & 2 deletions makefile
Original file line number Diff line number Diff line change
@@ -1,4 +1,9 @@
APP=caos
# Enter your credentials for OpenAI && ZeroGPT
KEY="<YOUR-API-KEY>"
ZKEY="<YOUR-API-KEY>"
# Assign resources for service pod
CPU=2
# Configuration path
CONFIG_PATH=./ci/service

Expand All @@ -13,6 +18,9 @@ clean:
coverage:
make -C ./src coverage

install: build
make -C ./src install

run: build
make -C ./src run

Expand All @@ -26,8 +34,8 @@ vendor:
make -C ./src vendor

build-pod:
docker build --no-cache -t ${APP} ${CONFIG_PATH}
docker build --build-arg KEY=${KEY} --build-arg ZKEY=${ZKEY} --pull --rm -f "ci/service/Dockerfile" -t ${APP}:latest ${CONFIG_PATH}

run-pod: build-pod

docker run ${APP}
docker run -it --cpus=${CPU} ${APP}:latest
2 changes: 2 additions & 0 deletions src/test/template/profile.csv
Original file line number Diff line number Diff line change
@@ -0,0 +1,2 @@
"API_KEY",""
"ZERO_API_KEY",""
