I would like to ask how your hypotheses are generated. I have read the paper several times but still find this hard to follow. The initial hypothesis x^(0) is produced by feeding the true embedding directly into the trained inversion model. Then, to obtain x^(1), we need e, \hat{e}^(0), and x^(0). How are these obtained and combined? We only seem to have one inversion model (i.e., the decoder plus the target embedding model), and the decoder's input dimension should match that of e. How can e and \hat{e}^(0) be fed in together?
Thanks
Hi! Thanks for the questions. If you have any feedback on how to make the paper clearer I'm happy to make changes.
There are two models. The first, which we call the inverter, outputs hypothesis text given only the true embedding. The second, which we call the corrector, outputs new hypothesis text given three things: the true embedding, the current hypothesis text, and the embedding of that hypothesis. The corrector's input is the concatenation of the token embeddings of the hypothesis text with the 'unrolled' true embedding and the 'unrolled' hypothesis embedding; together they form one long input sequence.
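A minimal sketch of how that concatenation can be built, just to make the shapes concrete. All dimensions and names here (EMB_DIM, HIDDEN_DIM, NUM_UNROLL, EmbeddingUnroller, build_corrector_inputs) are illustrative assumptions, not taken from the paper or the repository:

```python
import torch
import torch.nn as nn

# Hypothetical dimensions for illustration only:
EMB_DIM = 768      # dimension of the true embedding e and hypothesis embedding e_hat
HIDDEN_DIM = 512   # hidden size of the corrector's seq2seq encoder
NUM_UNROLL = 16    # number of pseudo-tokens each embedding is "unrolled" into

class EmbeddingUnroller(nn.Module):
    """Projects a single embedding vector into a short sequence of
    encoder-sized vectors so it can sit alongside token embeddings."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(EMB_DIM, NUM_UNROLL * HIDDEN_DIM)

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        # emb: (batch, EMB_DIM) -> (batch, NUM_UNROLL, HIDDEN_DIM)
        return self.proj(emb).view(emb.size(0), NUM_UNROLL, HIDDEN_DIM)

def build_corrector_inputs(true_emb, hyp_emb, hyp_token_embs, unroller):
    """Concatenate the unrolled true embedding, the unrolled hypothesis
    embedding, and the hypothesis token embeddings into one long sequence
    that is fed to the corrector's encoder."""
    # true_emb, hyp_emb: (batch, EMB_DIM)
    # hyp_token_embs:    (batch, seq_len, HIDDEN_DIM)
    unrolled_true = unroller(true_emb)  # (batch, NUM_UNROLL, HIDDEN_DIM)
    unrolled_hyp = unroller(hyp_emb)    # (batch, NUM_UNROLL, HIDDEN_DIM)
    return torch.cat([unrolled_true, unrolled_hyp, hyp_token_embs], dim=1)
```

So at step t, x^(t) is re-embedded with the frozen target embedder to get \hat{e}^(t), the sequence above is built from e, \hat{e}^(t), and x^(t), and the corrector decodes x^(t+1) from it; this can be repeated for as many correction steps as desired.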