[Wave] Fun projects for beginners #278
Comments
Would this be open to someone who has almost no knowledge of GPU programming beyond a basic understanding?
Hi @NoumanAmir657, thanks for reaching out! There are plenty of tasks to go around; depending on your experience level, we would probably assign easier tasks to start with.
Hi, yes, I would love any pointers to a task that would help me get started. You can assign me whichever you see fit and I can work towards it.
@NoumanAmir657 I think a good starter task is to implement. LMK what you think. :) Disclaimer: if this (or any task assigned to an external collaborator) becomes high priority, someone internal may take it on to get it out ASAP.
Yes, this seems like a good starting point. I shall make a PR soon. Thanks!
Thanks @NoumanAmir657 for the awesome work with tkw.abs! Let me know if you're interested in picking up other issues. The next important thing for us is adding support for tkw.minimum (elementwise) and extending its support to tkw.min (reduction), but feel free to pick other things more aligned with your interests, of course.
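For readers following the thread, the distinction between the two ops is elementwise versus reduction semantics. The sketch below is not Wave (tkw) code; it uses NumPy purely to illustrate, under assumed shapes, what tkw.minimum (elementwise) and tkw.min (reduction along a dimension) are expected to compute.

```python
import numpy as np

# Illustration only: NumPy stand-ins for the intended semantics,
# not the Wave (tkw) API itself. Shapes are hypothetical.
a = np.random.rand(4, 8).astype(np.float32)
b = np.random.rand(4, 8).astype(np.float32)

# tkw.minimum: elementwise minimum of two same-shaped operands.
elementwise_min = np.minimum(a, b)   # shape (4, 8)

# tkw.min: reduction that collapses one dimension to its minimum.
reduction_min = np.min(a, axis=-1)   # shape (4,)

assert elementwise_min.shape == a.shape
assert reduction_min.shape == (4,)
```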
Thanks for getting me started. I want to contribute more. Over the weekend, I will go over the issues and decide on one. Thanks!
@raikonenfnu Hi. You can assign me the tkw.minimum issue.
@NoumanAmir657 SG! Thanks :)
Hi, can I volunteer for the tkw.round_to_even task?
Sounds great! :)
@egebeysel Here are some sample PRs that do something similar to what you'd be doing: (82852d1, 71eb1c8). But in this case, you'd want to lower tkw.round_to_even to math.roundeven from the standard math dialect in MLIR: https://mlir.llvm.org/docs/Dialects/MathOps/#mathroundeven-mathroundevenop. Do reach out if you have any questions! :)
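For context, math.roundeven rounds to the nearest integer and breaks ties toward the even neighbor (round half to even). The snippet below is only a NumPy illustration of that tie-breaking behavior, not the Wave lowering itself.

```python
import numpy as np

# np.rint uses round-half-to-even, the same tie-breaking rule
# that MLIR's math.roundeven specifies.
x = np.array([0.5, 1.5, 2.5, -0.5, -1.5, 2.4, 2.6], dtype=np.float32)
print(np.rint(x))  # [ 0.  2.  2. -0. -2.  2.  3.]
```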
@raikonenfnu Hello, I have some questions regarding "Support more architectures in codegen aside from CDNA." Currently, the codegen directly emits ops from the AMDGPU dialect, like barriers, MMA ops, etc. What is the plan for supporting GPUs from different vendors? Are these ops going to be turned into a higher-level dialect, for instance a vector.contract instead of an amdgpu.mma, and let IREE handle the lowering? Or is the plan to directly emit the ops for the target architecture in codegen.py?
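As background for the question above, both amdgpu.mma and vector.contract describe the same matrix-multiply-accumulate computation at different abstraction levels. The NumPy sketch below (an illustration only, with hypothetical tile sizes) shows that computation, which any target-specific lowering ultimately has to produce.

```python
import numpy as np

# Matrix-multiply-accumulate on a small tile (hypothetical 16x16x16 shape):
# the computation expressed by both amdgpu.mma (hardware-level) and
# vector.contract (target-agnostic).
M, N, K = 16, 16, 16
a = np.random.rand(M, K).astype(np.float16)
b = np.random.rand(K, N).astype(np.float16)
c = np.zeros((M, N), dtype=np.float32)

# D = A @ B + C, accumulating in f32 as MMA units typically do.
d = a.astype(np.float32) @ b.astype(np.float32) + c
```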
Are you interested in learning more about GPU programming and developing cool optimizations? Do you want to help build next-generation, state-of-the-art machine learning models and layers? Do you want to define the future programming paradigm of machine learning and GPU layers? Look no further: come join us in building "Wave"!
Here are some fun starter tasks to look at:
- Core infrastructure
- Useful integration/deployment work on LLM and GenAI models
- Useful operations for quantized LLM and GenAI workloads