SkyPilot: Run LLMs, AI, and Batch jobs on any cloud. Get maximum savings, highest GPU availability, and managed execution—all with a simple interface.
PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference
JetStream is a throughput- and memory-optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in the future -- PRs welcome).
Boost hardware utilization for ML training workloads via Inter-model Horizontal Fusion
Benchmarking suite to evaluate 🤖 robotics computing performance. Vendor-neutral. ⚪Grey-box and ⚫Black-box approaches.
DECIMER: Deep Learning for Chemical Image Recognition using Efficient-Net V2 + Transformer
Testing framework for Deep Learning models (Tensorflow and PyTorch) on Google Cloud hardware accelerators (TPU and GPU)
Google Coral TPU DKMS Driver package for Fedora, RHEL, OpenSUSE, and OpenMandriva
Solana TpuClient Typescript Implementation
Jax/Flax implementation of DeiT and DeiT-III (ViT)
Everything we actually know about the Apple Neural Engine (ANE)
Automated KRAI X workflows for Google Cloud Platform
Code repository for the Korean edition of Deep Learning with Python, Second Edition (by the creator of Keras)
Train Gemma on TPU/GPU! (Codebase for training the Gemma-Ko series)