
Optimize / Parallelize InstructionLookups::polynomialize #293

Open
sragss opened this issue Apr 15, 2024 · 2 comments · May be fixed by #522
Labels: good first issue, help wanted

Comments

sragss (Collaborator) commented Apr 15, 2024

For a 64-core machine at a cycle count of ~16M, Jolt spends ~4% of its time on a segment called InstructionLookups::polynomialize. This segment allocates and computes the offline memory checking (a, v, t) polynomials and other inputs for the Jolt instruction execution.

Currently, much of this time is spent serially. We should parallelize this step so that it speeds up as the core count increases.

Similar to #292.

It may be helpful to review the tracing strategy for performance testing.
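Below is a minimal sketch of one possible approach, not Jolt's actual implementation: the struct names (DensePolynomial, LookupStep), their fields, and the use of u64 in place of field elements are illustrative assumptions. The idea is that the independent (a, v, t) column scans can be forked with rayon::join, and each scan can itself be a parallel map over the trace.

```rust
// Minimal sketch, assuming hypothetical types; not Jolt's actual code.
use rayon::prelude::*;

/// Placeholder for a dense polynomial over the trace (u64 stands in for a field element).
struct DensePolynomial {
    evals: Vec<u64>,
}

impl DensePolynomial {
    fn new(evals: Vec<u64>) -> Self {
        Self { evals }
    }
}

/// Placeholder for one cycle of the instruction-lookup trace.
struct LookupStep {
    address: u64,
    value: u64,
    timestamp: u64,
}

/// Serial baseline: the (a, v, t) columns are materialized one after another,
/// so only a single core is busy regardless of machine size.
fn polynomialize_serial(trace: &[LookupStep]) -> (DensePolynomial, DensePolynomial, DensePolynomial) {
    let a = DensePolynomial::new(trace.iter().map(|s| s.address).collect());
    let v = DensePolynomial::new(trace.iter().map(|s| s.value).collect());
    let t = DensePolynomial::new(trace.iter().map(|s| s.timestamp).collect());
    (a, v, t)
}

/// Parallel version: the three independent column scans are forked with
/// `rayon::join`, and each scan is itself a parallel map over the trace.
fn polynomialize_parallel(trace: &[LookupStep]) -> (DensePolynomial, DensePolynomial, DensePolynomial) {
    let ((a, v), t) = rayon::join(
        || {
            rayon::join(
                || DensePolynomial::new(trace.par_iter().map(|s| s.address).collect()),
                || DensePolynomial::new(trace.par_iter().map(|s| s.value).collect()),
            )
        },
        || DensePolynomial::new(trace.par_iter().map(|s| s.timestamp).collect()),
    );
    (a, v, t)
}
```

One design note: nesting `rayon::join` with `par_iter` keeps all cores busy both across and within columns; for very large traces it may also be worth pre-allocating the output vectors and filling them with `par_chunks_mut` to avoid intermediate collections, but that is an optimization to verify against the tracing output above.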

sragss added the good first issue and help wanted labels on Apr 15, 2024
sragss changed the title from "Parallelize InstructionLookups::polynomialize" to "Optimize / Parallelize InstructionLookups::polynomialize" on Apr 15, 2024
YashBit commented Aug 7, 2024

@sragss I am interested. Could you assign it to me?

mahmudsudo commented

Hi, can I take on this issue?

mahmudsudo linked a pull request on Dec 13, 2024 that will close this issue
3 participants