v0.2.0
LLaMA, Baichuan, and GPT-NeoX are now supported!
LLaMA 2 is also supported:
openllm start llama --model-id meta-llama/Llama-2-13b-hf
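The other newly supported model families can be served the same way. A minimal sketch, assuming the model names follow the same `openllm start <model>` pattern and using illustrative Hugging Face model IDs (substitute any compatible checkpoint):

```shell
# Serve GPT-NeoX; model ID is an assumption, any GPT-NeoX checkpoint works.
openllm start gpt_neox --model-id eleutherai/gpt-neox-20b

# Serve Baichuan; model ID is an assumption, any Baichuan checkpoint works.
openllm start baichuan --model-id baichuan-inc/Baichuan-7B
```

Each command starts a local OpenLLM server for the chosen model; the `--model-id` flag selects which pretrained weights to load.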
What's Changed
- feat: GPTNeoX by @aarnphm in #106
- feat(test): snapshot testing by @aarnphm in #107
- fix(resource): correctly parse CUDA_VISIBLE_DEVICES by @aarnphm in #114
- feat(models): Baichuan by @hetaoBackend in #115
- fix: add the requirements for baichuan by @hetaoBackend in #117
- fix: build isolation by @aarnphm in #116
- ci: pre-commit autoupdate [pre-commit.ci] by @pre-commit-ci in #119
- feat: GPTQ + vLLM and LlaMA by @aarnphm in #113
New Contributors
- @hetaoBackend made their first contribution in #115
Full Changelog: v0.1.20...v0.2.0