Add jit compilation to the Torch connector with thunder #789

Open
edoaltamura opened this issue Mar 21, 2024 · 0 comments
Labels
Connector: PyTorch 🔦 Relevant to optional packages, such as external connectors · Performance ⚡ Improve code performance (memory or speed) · type: enhancement ✨ Features or aspects to improve

Comments

@edoaltamura
Collaborator

What should we add?

Thunder could be integrated into upcoming versions of the Torch connector to speed up the PyTorch parts of training and inference. This JIT compiler is newly released and reports promising results; a hedged sketch of how it might wrap the connector follows the minimal example below. From their repo:

Thunder is a source-to-source compiler for PyTorch. It makes PyTorch programs faster by combining and using different hardware executors at once (i.e. nvFuser, torch.compile, cuDNN, and TransformerEngine FP8).

Works on single accelerators and in multi-GPU settings. Thunder aims to be usable, understandable, and extensible.
Thunder achieves a 40% speedup in training throughput compared to eager code on H100 using a combination of executors including nvFuser, torch.compile, cuDNN, and TransformerEngine FP8.

Link: https://github.com/Lightning-AI/lightning-thunder

Minimal example:

import torch
import thunder


def foo(a, b):
    return a + b


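# Compile foo with Thunder; jfoo behaves like foo but executes the compiled trace.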
jfoo = thunder.jit(foo)

a = torch.full((2, 2), 1)
b = torch.full((2, 2), 3)

result = jfoo(a, b)

print(result)

# prints
# tensor([[4, 4],
#         [4, 4]])
edoaltamura added the Performance ⚡, type: enhancement ✨, and Connector: PyTorch 🔦 labels on Mar 21, 2024