Support/anyscale finetuning #2

Open · wants to merge 3 commits into base: main
10 changes: 10 additions & 0 deletions CHANGELOG.md
@@ -0,0 +1,10 @@
# Changelog

[5/2] Support for Together AI and Bedrock models. Parity with the [Python release](https://github.com/Tanuki/tanuki.py).

[25/1] Initial TypeScript release in line with the [Python release](https://github.com/Tanuki/tanuki.py).

[27/11] Renamed MonkeyPatch to Tanuki; support for [embeddings](https://github.com/monkeypatch/tanuki.py/blob/update_docs/docs/embeddings_support.md) and [function configurability](https://github.com/monkeypatch/tanuki.py/blob/update_docs/docs/function_configurability.md) is released!
* Use embeddings to integrate Tanuki with downstream RAG implementations via the OpenAI ada-002 model.
* Function configurability lets you configure Tanuki function executions to ignore certain implemented aspects (finetuning, data-storage communications) for improved latency and serverless integrations.

7 changes: 0 additions & 7 deletions README.md
@@ -4,13 +4,6 @@ Build LLM-powered apps that get cheaper and faster over time.

---

## Release
[25/1] Initial Typescript Release in line with the [Python release](https://github.com/Tanuki/tanuki.py).

[27/11] Renamed MonkeyPatch to Tanuki, support for [embeddings](https://github.com/monkeypatch/tanuki.py/blob/update_docs/docs/embeddings_support.md) and [function configurability](https://github.com/monkeypatch/tanuki.py/blob/update_docs/docs/function_configurability.md) is released!
* Use embeddings to integrate Tanuki with downstream RAG implementations using OpenAI Ada-2 model.
* Function configurability allows to configure Tanuki function executions to ignore certain implemented aspects (finetuning, data-storage communications) for improved latency and serverless integrations.

Join us on [Discord](https://discord.gg/uUzX5DYctk)

## Contents
117 changes: 0 additions & 117 deletions README_OLD.md

This file was deleted.

43 changes: 43 additions & 0 deletions docs/anyscale.md
@@ -0,0 +1,43 @@
# Anyscale models

Tanuki now supports finetuning all models accessible through the Anyscale API as student models. Out of the box, the following hosted models are supported for finetuning:
* Llama-2-7b-chat-hf
* Llama-2-13b-chat-hf
* Llama-2-70b-chat-hf
* Mistral-7B-Instruct-v0.1


Anyscale models use the OpenAI package, so there is no need to install any extra packages.

To specify a custom student model, pass a `student_model` configuration flag to the `@tanuki.patch` decorator, as shown in the Examples section below.

If no `student_model` is specified, OpenAI's gpt-3.5-turbo-1106 is used as the default student model.

## Setup

Set your Anyscale API key using:

```bash
export ANYSCALE_API_KEY=...
```
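The key is read from the environment at runtime. As a minimal sketch (assuming the Anyscale provider reads `ANYSCALE_API_KEY` from `process.env`, as the other providers do for their keys), you can verify the key is visible to your Node process before invoking any patched functions:

```typescript
// Minimal sketch: check that the Anyscale key is visible to the Node process.
// Assumption (not from the source): the provider reads ANYSCALE_API_KEY from process.env.
const anyscaleKey: string = process.env.ANYSCALE_API_KEY ?? "";
const isConfigured: boolean = anyscaleKey.length > 0;
console.log(isConfigured ? "ANYSCALE_API_KEY is set" : "ANYSCALE_API_KEY is missing");
```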

## Examples

### Using Llama-2-7b-chat-hf as the student model
```python
@tanuki.patch(student_model="Llama-2-7b-chat-hf")
def example_function(input: TypedInput) -> TypedOutput:
    """(Optional) Include the description of how your function will be used."""

@tanuki.align
def test_example_function():
    assert example_function(example_typed_input) == example_typed_output
```

To use the other supported student models, pass the corresponding string to the `student_model` attribute of the `@tanuki.patch` decorator:
* To use meta-llama/Llama-2-7b-chat-hf as a student model, set `student_model = "Llama-2-7b-chat-hf"`
* To use meta-llama/Llama-2-13b-chat-hf as a student model, set `student_model = "Llama-2-13b-chat-hf"`
* To use meta-llama/Llama-2-70b-chat-hf as a student model, set `student_model = "Llama-2-70b-chat-hf"`
* To use mistralai/Mistral-7B-Instruct-v0.1 as a student model, set `student_model = "Mistral-7B-Instruct-v0.1"`
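These short handles resolve to fully qualified Anyscale-hosted base models, mirroring the `AnyscaleConfig` entries this PR adds to `DEFAULT_STUDENT_MODELS` in `src/constants.ts`. A sketch of that mapping:

```typescript
// Handle → Anyscale-hosted base model, mirroring the AnyscaleConfig entries
// (baseModelForSft) added to DEFAULT_STUDENT_MODELS in src/constants.ts.
const anyscaleStudentModels: Record<string, string> = {
  "Llama-2-7b-chat-hf": "meta-llama/Llama-2-7b-chat-hf",
  "Llama-2-13b-chat-hf": "meta-llama/Llama-2-13b-chat-hf",
  "Llama-2-70b-chat-hf": "meta-llama/Llama-2-70b-chat-hf",
  "Mistral-7B-Instruct-v0.1": "mistralai/Mistral-7B-Instruct-v0.1",
};
console.log(Object.keys(anyscaleStudentModels).length); // 4
```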
2 changes: 1 addition & 1 deletion package.json
@@ -1,6 +1,6 @@
{
"name": "tanuki.ts",
"version": "0.1.4-rc-1",
"version": "0.2.0",
"description": "TypeScript client for building LLM-powered applications",
"main": "./lib/index.js",
"type": "module",
46 changes: 39 additions & 7 deletions src/APIManager.ts
@@ -3,19 +3,34 @@ import {
LLAMA_BEDROCK_PROVIDER,
TITAN_BEDROCK_PROVIDER,
TOGETHER_AI_PROVIDER,
ANYSCALE_PROVIDER,
} from './constants';
import { FinetuneJob } from './models/finetuneJob';
import { Embedding } from './models/embedding';
import { BaseModelConfig } from './languageModels/llmConfigs/baseModelConfig';
import { OpenAIConfig } from './languageModels/llmConfigs/openAIConfig';
import Buffer from 'buffer';

export interface Finetunable {
listFinetuned: (limit: number) => Promise<FinetuneJob[]>;
getFinetuned: (jobId: string) => Promise<FinetuneJob>;
finetune: (fileBuffer: Buffer, suffix: string) => Promise<FinetuneJob>;
listFinetuned: (
modelConfig: OpenAIConfig,
limit: number,
...args: any[]
) => Promise<FinetuneJob[]>;
getFinetuned: (
jobId: string,
modelConfig: OpenAIConfig
) => Promise<FinetuneJob>;
finetune: (
fileBuffer: Buffer,
suffix: string,
modelConfig: OpenAIConfig
) => Promise<FinetuneJob>;
}

export interface Inferable {
generate: (
model: BaseModelConfig | string,
model: BaseModelConfig,
systemMessage: string,
prompt: any,
kwargs: any
@@ -24,7 +39,7 @@
export interface Embeddable {
embed: (
texts: string[],
model: BaseModelConfig | string,
model: BaseModelConfig,
kwargs: any
) => Promise<Embedding<any>[]>;
}
@@ -50,9 +65,26 @@ class APIManager {
}

private async addApiProvider(provider: string): Promise<void> {
if (provider === OPENAI_PROVIDER) {
if (provider === ANYSCALE_PROVIDER) {
const { AnyscaleAPI } = await import('./languageModels/anyscaleAPI');
try {
this.apiProviders[provider] = new AnyscaleAPI();
} catch (e) {
throw new Error(
`You need to install the openai package to use the Anyscale API provider.
Please install it with \`npm install openai\``
);
}
} else if (provider === OPENAI_PROVIDER) {
const { OpenAIAPI } = await import('./languageModels/openAIAPI');
this.apiProviders[provider] = new OpenAIAPI();
try {
this.apiProviders[provider] = new OpenAIAPI();
} catch (e) {
throw new Error(
`You need to install the openai package to use the OpenAI API provider.
Please install it with \`npm install openai\``
);
}
} else if (provider === LLAMA_BEDROCK_PROVIDER) {
const { LLamaBedrockAPI } = await import(
'./languageModels/llamaBedrockAPI'
25 changes: 25 additions & 0 deletions src/constants.ts
@@ -3,6 +3,7 @@ import { ClaudeConfig } from './languageModels/llmConfigs/claudeConfig';
import { LlamaBedrockConfig } from './languageModels/llmConfigs/llamaConfig';
import { TitanBedrockConfig } from './languageModels/llmConfigs/titanConfig';
import { TogetherAIConfig } from './languageModels/llmConfigs/togetherAIConfig';
import { AnyscaleConfig } from './languageModels/llmConfigs/anyscaleConfig';
export const EXAMPLE_ELEMENT_LIMIT = 1000;

// These represent the file extensions for the symbolic patch and alignment datasets
@@ -36,6 +37,7 @@ export const ENVVAR = 'TANUKI_LOG_DIR';

// default models
export const DEFAULT_TEACHER_MODEL_NAMES = ['gpt-4', 'gpt-4-32k'];

export const DEFAULT_DISTILLED_MODEL_NAME = 'gpt-3.5-turbo-1106';
export const DEFAULT_EMBEDDING_MODEL_NAME = 'ada-002';

@@ -45,6 +47,8 @@ export const BEDROCK_PROVIDER = 'bedrock';
export const LLAMA_BEDROCK_PROVIDER = 'llama_bedrock';
export const TITAN_BEDROCK_PROVIDER = 'aws_titan_bedrock';
export const TOGETHER_AI_PROVIDER = 'together_ai';
export const ANYSCALE_PROVIDER = 'anyscale';

// model type strings
export const TEACHER_MODEL = 'teacher';
export const DISTILLED_MODEL = 'distillation';
@@ -123,6 +127,27 @@ export const DEFAULT_STUDENT_MODELS = {
'gpt-3.5-turbo-1106': new OpenAIConfig({
modelName: '',
contextLength: 14000,
baseModelForSft: 'gpt-3.5-turbo-1106',
}),
'Llama-2-7b-chat-hf': new AnyscaleConfig({
modelName: '',
contextLength: 3000,
baseModelForSft: 'meta-llama/Llama-2-7b-chat-hf',
}),
'Llama-2-13b-chat-hf': new AnyscaleConfig({
modelName: '',
contextLength: 3000,
baseModelForSft: 'meta-llama/Llama-2-13b-chat-hf',
}),
'Llama-2-70b-chat-hf': new AnyscaleConfig({
modelName: '',
contextLength: 3000,
baseModelForSft: 'meta-llama/Llama-2-70b-chat-hf',
}),
'Mistral-7B-Instruct-v0.1': new AnyscaleConfig({
modelName: '',
contextLength: 3000,
baseModelForSft: 'mistralai/Mistral-7B-Instruct-v0.1',
}),
};
