Matrix Multiplication using CUDA
Updated Jan 30, 2023 - C++
CUDA® is a parallel computing platform and programming model developed by NVIDIA for general-purpose computing on graphics processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.
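Given the topic of this page, a minimal sketch of dense matrix multiplication on the GPU may help: one thread computes one element of C = A * B. The square N x N layout and the function names are illustrative assumptions, not any particular repo's API.

```cuda
#include <cuda_runtime.h>

// Naive dense matrix multiply: each thread computes one element C[row][col].
__global__ void matMulKernel(const float* A, const float* B, float* C, int N) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < N && col < N) {
        float sum = 0.0f;
        for (int k = 0; k < N; ++k)
            sum += A[row * N + k] * B[k * N + col];  // dot product of row and column
        C[row * N + col] = sum;
    }
}

// Host-side launch helper: cover the N x N output with 16x16 thread blocks.
// A, B, C are assumed to already be device pointers.
void matMul(const float* A, const float* B, float* C, int N) {
    dim3 block(16, 16);
    dim3 grid((N + block.x - 1) / block.x, (N + block.y - 1) / block.y);
    matMulKernel<<<grid, block>>>(A, B, C, N);
}
```

Tiled variants that stage sub-blocks of A and B in `__shared__` memory are the usual next step, since the naive kernel re-reads every operand from global memory N times.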
Terminal programs applying the parallel programming model with the CUDA architecture
Traveling salesman problem solved with different programming models
Parallel Heterogeneous CPU/GPU computing
CUDA code for DE+PSO for Gene Regulatory Network inference
Fast and accurate protein substructure searching with simulated annealing and GPUs
Simple image processing filters for both CPU and NVIDIA GPUs
Real-time CUDA army simulation using GPU resources.
This CMake-based project contains some wrappers around the CUDA functions I use frequently. The wrappers are mainly concerned with throwing an exception with a meaningful error message when a call fails, and with ensuring that the GPU is always shut down properly and all allocated resources are released. Some utility functions are also available.
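A sketch of the kind of wrapper described above, assuming a check macro plus an RAII guard; the names `CUDA_CHECK`, `CudaError`, and `DeviceGuard` are illustrative assumptions, not that repo's actual API.

```cuda
#include <cuda_runtime.h>
#include <stdexcept>
#include <string>

// Exception carrying a meaningful message: file, line, and CUDA's own
// description of the error code.
class CudaError : public std::runtime_error {
public:
    CudaError(cudaError_t err, const char* file, int line)
        : std::runtime_error(std::string(file) + ":" + std::to_string(line) +
                             ": " + cudaGetErrorString(err)) {}
};

// Wrap any cudaError_t-returning call; throw on anything but cudaSuccess.
#define CUDA_CHECK(call)                               \
    do {                                               \
        cudaError_t err_ = (call);                     \
        if (err_ != cudaSuccess)                       \
            throw CudaError(err_, __FILE__, __LINE__); \
    } while (0)

// RAII guard: resets the device when it goes out of scope, so the GPU is
// shut down cleanly even if an exception unwinds the stack.
struct DeviceGuard {
    ~DeviceGuard() { cudaDeviceReset(); }
};
```

Typical use: construct a `DeviceGuard` at the top of `main`, then wrap allocations and copies, e.g. `CUDA_CHECK(cudaMalloc(&ptr, bytes));`.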
Small Scale Parallel Programming, Sparse Matrix multiplication with CUDA
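For the sparse case, a common starting point is sparse matrix-vector multiply with the matrix in CSR format; the sketch below assigns one thread per row. The CSR array names (`rowPtr`, `colIdx`, `vals`) follow a common convention and are an assumption here.

```cuda
#include <cuda_runtime.h>

// y = A * x, with A stored in CSR form:
//   rowPtr[r]..rowPtr[r+1] delimit row r's entries in colIdx/vals.
__global__ void spmvCsr(const int* rowPtr, const int* colIdx,
                        const float* vals, const float* x,
                        float* y, int rows) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < rows) {
        float sum = 0.0f;
        for (int j = rowPtr[row]; j < rowPtr[row + 1]; ++j)
            sum += vals[j] * x[colIdx[j]];  // accumulate nonzeros of this row
        y[row] = sum;
    }
}
```

One thread per row is simple but load-imbalanced when row lengths vary widely; warp-per-row variants or library routines (e.g. cuSPARSE) handle that case better.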
Repository for parallel programming course.
Created by Nvidia
Released June 23, 2007