Issues: ASC-Competition/ASC24-LLM-inference-optimization
Do we need to use all 70B parameters at inference time? For example, could we use a technique such as distillation to make the model smaller, use a smaller model to assist the big model's inference, or split the whole large model into smaller parts and use a Mixture-of-Experts (MoE) style approach to make it run faster?
#1 opened Dec 16, 2023 by Kevin-shihello-world
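The "use another small model to help its inference" idea in the question above resembles speculative decoding: a cheap draft model proposes several tokens, and the large model then verifies them, accepting the longest agreeing prefix. Below is a minimal toy sketch of that control flow. The functions `small_model`, `big_model`, and `speculative_decode`, and the character vocabulary `VOCAB`, are all hypothetical stand-ins invented for illustration, not part of any real LLM library; a real implementation would verify all drafted tokens in a single batched forward pass of the large model.

```python
# Toy vocabulary of single-character "tokens" (hypothetical, for illustration).
VOCAB = list("abcde")

def small_model(prefix):
    # Cheap "draft" model: deterministically proposes the next token.
    return VOCAB[sum(map(ord, prefix)) % len(VOCAB)]

def big_model(prefix):
    # Expensive "target" model: gives the authoritative next token.
    return VOCAB[(sum(map(ord, prefix)) * 3 + 1) % len(VOCAB)]

def speculative_decode(prompt, steps=8, draft_len=4):
    """Draft `draft_len` tokens with the small model, then check them
    against the big model; keep the longest agreeing prefix and replace
    the first disagreeing token with the big model's choice."""
    out = prompt
    big_evals = 0  # big-model token evaluations (one batched pass per round in practice)
    while len(out) - len(prompt) < steps:
        # 1) Draft tokens cheaply with the small model.
        draft, ctx = [], out
        for _ in range(draft_len):
            t = small_model(ctx)
            draft.append(t)
            ctx += t
        # 2) Verify the drafted tokens with the big model.
        ctx = out
        for t in draft:
            big_evals += 1
            expected = big_model(ctx)
            if t == expected:
                ctx += t          # draft token accepted
            else:
                ctx += expected   # rejected: take the big model's token and stop
                break
        out = ctx  # each round makes progress by at least one token
    return out[len(prompt):len(prompt) + steps], big_evals
```

When the draft model often agrees with the target model, several tokens are accepted per expensive verification round, which is where the speed-up comes from; the output distribution still follows the big model, since every emitted token is either confirmed or supplied by it.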