Added OLMo support to builder.py #1061
base: main
Conversation
…y fake layernorm Co-authored-by: Tim Costigan <[email protected]>
… and set them in our override Co-authored-by: Tim Costigan <[email protected]>
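(Context for these commits: OLMo uses a non-parametric LayerNorm, i.e. one with no learnable weight or bias, which is presumably why the commits above add "fake" parameters. A minimal standalone sketch of that idea, assuming the problem is export code that reads `layernorm.weight` / `layernorm.bias`; all names here are illustrative, not the actual builder.py diff:)

```python
# Sketch of the "fake layernorm" workaround described in these commits.
# OLMo-style models build LayerNorm with elementwise_affine=False, which
# leaves weight and bias as None; code that reads those attributes fails.
import torch

def add_fake_layernorm_params(layernorm: torch.nn.LayerNorm) -> torch.nn.LayerNorm:
    """Attach identity weight/bias so code that expects them keeps working."""
    hidden_size = layernorm.normalized_shape[0]
    if layernorm.weight is None:
        # Identity scale: multiplying by ones leaves the output unchanged.
        layernorm.weight = torch.nn.Parameter(torch.ones(hidden_size))
    if layernorm.bias is None:
        # Identity shift: adding zeros leaves the output unchanged.
        layernorm.bias = torch.nn.Parameter(torch.zeros(hidden_size))
    return layernorm

# Non-parametric LayerNorm: weight/bias start out as None.
ln = add_fake_layernorm_params(torch.nn.LayerNorm(2048, elementwise_affine=False))
assert ln.weight is not None and ln.bias is not None
```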
@microsoft-github-policy-service agree company="AMD"
@kunal-vaishnavi PTAL, thanks!
Thanks for the contribution! Does OLMo run end-to-end with the ONNX Runtime GenAI tokenizer? Can you also update the following places?
onnxruntime-genai/test/python/_test_utils.py, lines 55 to 77 at 0f59a90
The models in this list are run in the CIs. You can add it to the list.
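(A hypothetical sketch of the kind of change requested, extending the CI model list in test/python/_test_utils.py; the function name, existing entries, and the OLMo checkpoint ID below are assumptions, not the file's actual contents:)

```python
# Hypothetical shape of the CI model list in test/python/_test_utils.py.
def get_model_paths():
    # Short name -> Hugging Face repo ID for each model exercised in CI.
    hf_paths = {
        "phi-2": "microsoft/phi-2",             # assumed existing entry
        "llama-2": "meta-llama/Llama-2-7b-hf",  # assumed existing entry
        # New entry for this PR; any OLMo checkpoint in HF format works.
        "olmo": "allenai/OLMo-1B-hf",
    }
    return hf_paths
```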
# Conflicts:
#   src/python/py/models/builder.py
…exist, which caused errors in model_qa.py
That has been updated as requested now. It runs end-to-end, and I've also added Qwen to the CI list.
Thank you for adding the changes. The end-to-end tests in the CIs appear to be failing due to the tokenizer.
This should be good to go!
After some further investigation, it appears that the tokenizer CI failure is happening because the tokenizer for OLMo is not currently supported in ONNX Runtime Extensions. Once the support is added, the main branch of ONNX Runtime GenAI can be merged into this PR to integrate the changes.
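(For anyone hitting the same failure, a quick way to reproduce the tokenizer issue in isolation with the ONNX Runtime GenAI Python API; the model folder path is a placeholder for builder.py's output:)

```python
# Round-trip the tokenizer on the exported model folder. Constructing
# og.Tokenizer is where an unsupported tokenizer surfaces, since it is
# backed by ONNX Runtime Extensions.
import onnxruntime_genai as og

model = og.Model("path/to/olmo-onnx")  # placeholder output folder
tokenizer = og.Tokenizer(model)

tokens = tokenizer.encode("Hello, OLMo!")
print(tokens)
print(tokenizer.decode(tokens))
```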