Fix fine-tuned replicate models with faster cold boot (langchain-ai#10512)

With the latest support for faster cold boots in Replicate (https://replicate.com/blog/fine-tune-cold-boots), the Replicate LLM support in LangChain is broken, because some internal Replicate inputs are now returned by the model's schema. The screenshot below illustrates the problem:

<img width="1917" alt="image" src="https://github.com/langchain-ai/langchain/assets/749277/d28c27cc-40fb-4258-8710-844c00d3c2b0">

As you can see, the new `replicate_weights` param is returned with `x-order = 0`, which causes LangChain to use that param as the input instead of `prompt`, which has `x-order = 1`.

FYI @baskaryan, this requires a fix; otherwise Replicate is broken for these models. I have pinged Replicate to ask whether they want to fix it on their end by changing the `x-order` they return.

Update: per suggestion, I updated the PR to allow manually setting the `prompt_key`, which callers can set to `"prompt"` in this case. This should also be faster than dynamically querying the model every time, if you already know the prompt key for your model.

---------

Co-authored-by: Taqi Jaffri <[email protected]>
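A minimal sketch of how a caller would use the `prompt_key` override described above, assuming the field added by this PR on LangChain's `Replicate` LLM; the model identifier is hypothetical and should be replaced with your own fine-tuned model's `owner/name:version` string:

```python
from langchain.llms import Replicate

# Hypothetical fine-tuned model reference; substitute your own
# "owner/model-name:version-id" string from Replicate.
llm = Replicate(
    model="yourname/your-finetune:0123456789abcdef",
    # Explicitly name the prompt input so LangChain does not pick the
    # internal replicate_weights input (which now has x-order = 0).
    prompt_key="prompt",
)

print(llm("Write a haiku about cold boots."))
```

Setting `prompt_key` skips the per-call schema lookup entirely, which is why it can be faster even for models that were never affected by the `x-order` change.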