As more useful deep-learned representations such as ColPali appear, the universal dot product sees more use, and we should see if there is anything we can do to accelerate it further.
For ColPali with multi-page representations, where a PDF document is represented by screenshots of all its pages, we have two tensors:
```
doc:   tensor<float>(page{}, patch{}, v[128])
query: tensor<float>(token{}, v[128])
```
Typically there are 5-20 pages per document, about 1030 patches per page, and around 20 query tokens.
We want to find the score per page with a late-interaction (MaxSim) style expression, sketched below:
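A minimal sketch of what that per-page scoring could look like as a Vespa rank profile, assuming the document tensor is an attribute named `doc` and the query tensor arrives as the input `query(qt)` (the profile, function, and input names are illustrative): dot products are taken over `v`, the best patch is selected per token and page, and the per-token maxima are summed to give one score per page.

```
rank-profile per_page_maxsim {
    inputs {
        query(qt) tensor<float>(token{}, v[128])
    }
    # MaxSim per page: dot product over v, max over patches, sum over query tokens,
    # producing a tensor<float>(page{}) with one score per page.
    function page_scores() {
        expression {
            reduce(
                reduce(
                    sum(query(qt) * attribute(doc), v),
                    max, patch
                ),
                sum, token
            )
        }
    }
}
```

The result keeps the mapped `page` dimension, so a first-phase expression could then reduce it further, for example taking the max over pages for a document-level score.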
This is obviously compute intensive, and it currently triggers the universal dot product optimization.
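To put the cost in perspective: with the upper-end sizes above (20 pages, 1030 patches per page, 20 query tokens, 128-dimensional vectors), scoring a single document against a single query amounts to roughly 20 × 1030 × 20 × 128 ≈ 53 million multiply-add operations.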
Playground link