Releases: guidance-ai/guidance
0.1.7
What's Changed
- Temperature controls now work for remote models (see the sketch after this list).
- When optimistically running remote models, grammar violations now produce much better error messages and are easier to debug.
- Huggingface transformers models now fully respect the training tokenization patterns.
- DOC Update notebooks by @riedgar-ms in #491
- Update tutorial for Azure OpenAI by @riedgar-ms in #468
- move llama.cpp batch object allocation into a separately allocated co… by @paulbkoch in #508
- Drop unused `nest_asyncio` by @evelkey in #510
- Fixed compile error when using Microsoft toolchain. by @wmiller in #511
- Using `Model` in a loop by @riedgar-ms in #512
- Set `base_url` default as None and remove redundant setting by @tshu-w in #519
- Adding outputs for Azure notebook by @riedgar-ms in #533
- Further notebook updates by @riedgar-ms in #522
- Update tutorial.ipynb by @mwieler in #534
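Below is a minimal sketch of the temperature fix from the first bullet. It assumes the 0.1.x `models.OpenAI` class and the `temperature` parameter of `gen`; the model name and prompt are illustrative, not part of this release.

```python
from guidance import models, gen

# Assumption: an OpenAI completion-style model; swap in your own deployment.
lm = models.OpenAI("gpt-3.5-turbo-instruct")

# The temperature argument is now forwarded to the remote endpoint
# instead of being silently ignored.
lm += "Write a one-line haiku about the sea: "
lm += gen("haiku", max_tokens=30, temperature=0.8)

print(lm["haiku"])
```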
New Contributors
- @evelkey made their first contribution in #510
- @wmiller made their first contribution in #511
- @tshu-w made their first contribution in #519
- @mwieler made their first contribution in #534
Full Changelog: 0.1.6...0.1.7
0.1.6
What's Changed
- Preliminary support for log probs using the `compute_log_probs=True` model constructor arg (see the sketch after this list).
- A new `guidance.cpp` module that allows us to have high-speed implementations of key objects.
- Reduce memory requirements (and probably improve speed) for llama.cpp tests by @paulbkoch in #485
- Many bug fixes :)
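A minimal sketch of the `compute_log_probs=True` constructor arg mentioned above, assuming the 0.1.x `models.Transformers` wrapper; the model name is illustrative, and the API for reading the log probs back out is not shown here.

```python
from guidance import models, gen

# Assumption: compute_log_probs is accepted at construction time, as the
# release note describes; "gpt2" is just a small example model.
lm = models.Transformers("gpt2", compute_log_probs=True)

# Generation proceeds as usual; log probs are tracked alongside the tokens.
lm += "The capital of France is " + gen("city", max_tokens=3)
print(lm["city"])
```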
Full Changelog: 0.1.5...0.1.6
0.1.5
What's Changed
- Bug fixes for remote streaming models like OpenAI.
- Support for LiteLLM, which should enable many more models (Cohere already added with direct support).
- Minor copyedit by @rpdelaney in #473
- Remove scipy dependency by @kddubey in #469
- Updated regex to account for fine-tuned models by @marawan1805 in #424
New Contributors
- @rpdelaney made their first contribution in #473
- @kddubey made their first contribution in #469
- @marawan1805 made their first contribution in #424
Full Changelog: 0.1.4...0.1.5
0.1.4
Fixes a numpy probability-normalization error that could throw exceptions for non-zero temperatures.
Full Changelog: 0.1.3...0.1.4
0.1.3
What's Changed
- We now match llama.cpp's non-greedy SentencePiece tokenization.
- Move from PyTorch to numpy and scipy for basic scoring needs.
- Update LLama.cpp to use their newest API by @paulbkoch in #455
- Use len(tkz) instead of tkz.vocab_size to estimate vocabulary size by @EgorBu in #460
- Fix test warnings by @riedgar-ms in #461
New Contributors
- @EgorBu made their first contribution in #460
- @riedgar-ms made their first contribution in #461
Full Changelog: 0.1.2...0.1.3
0.1.2
0.1.1
0.1
This release represents a dramatically new and improved version of guidance :) We will release a more detailed summary of the changes later but briefly:
- All guidance programs are now pure Python programs. No more worrying about a distinction between "user code" in Python and "template code" in Handlebars.
- In addition to now being simple Python functions, guidance programs are also a superset of regular expressions and context-free grammars, allowing extremely powerful specifications to be built up incrementally.
- A new immutable model object sits at the core of all guidance programs and manages all the state. You can essentially add any grammar to a model object and get back a new model object representing that model's state extended by executing that grammar (see the sketch after this list).
- And a lot more!
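A minimal sketch of the pure-Python, immutable-model style described above, assuming the 0.1.x `models` module and the `gen` and `select` helpers; the backend and model name are illustrative.

```python
from guidance import models, gen, select

# Assumption: a local transformers backend; any supported model works.
lm = models.Transformers("gpt2")

# Adding strings and grammar functions returns a *new* model object whose
# state is the previous state extended by executing that grammar.
lm2 = lm + "Q: Is the sky blue on a clear day? A: " + select(["yes", "no"], name="answer")
lm3 = lm2 + "\nExplain briefly: " + gen("why", max_tokens=30, stop="\n")

print(lm3["answer"], "-", lm3["why"])
```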
0.0.65
0.0.64
What's Changed
- Fixes some streaming issues introduced in 0.0.63
- Remove the last empty assistant message for ChatGPT API call by @bhy in #222
- Adding support for OpenAI functions by @nc and @slundberg in #239
- Remove unused imports by @riya-amemiya in #251
- Complete program execution on exceptions in async_mode=True by @jprafael in #231
New Contributors
- @bhy made their first contribution in #222
- @nc made their first contribution in #239
- @jprafael made their first contribution in #231
Full Changelog: 0.0.63...0.0.64