
Relation Matching for MetaQA #70

Open
XLR81999 opened this issue Mar 25, 2021 · 6 comments
Comments

@XLR81999
Hi,
I've been following this approach for a while, and I noticed that a Relation Matching module has been added. However, I couldn't find it for the MetaQA/LSTM approach. Will it be added later, or is it not yet implemented?

@apoorvumang
Collaborator

Hi, I will try to add it ASAP.

@namadjidku

Hi, in the paper you mention that relation matching was performed for the larger dataset to boost its performance, and that for relatively small KGs like MetaQA the answer is typically selected based on the highest score alone. Using only the highest score, it is not possible to achieve the reported results for MetaQA; as mentioned in #31, the best result for 3-hop questions is around 70. Did you also use relation matching for MetaQA/LSTM? If yes, it would be highly appreciated if you could upload the code. Thank you.

@apoorvumang
Collaborator

@namadjidku Relation matching was needed only for the 3-hop full setting in MetaQA, since the paths are too long for the base model to handle properly. For 3-hop half, relation matching was not used (it doesn't work well on incomplete graphs).

The code requires a bit of cleaning up before upload. I will upload it ASAP.
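While the official code isn't uploaded yet, the idea described above can be sketched roughly: candidate answers scored by the embedding model are re-ranked by how many relations on the KG path from the topic entity to each candidate overlap with the relations predicted as relevant for the question. This is a minimal illustrative sketch, not the authors' implementation; the function name, data layout, and the `gamma` weight are all assumptions.

```python
# Hedged sketch of relation-matching re-ranking (illustrative only).
# candidates: embedding-model scores per candidate answer
# path_relations: relations appearing on the KG path to each candidate
# predicted_relations: relations predicted relevant for the question

def rerank_with_relation_matching(candidates, path_relations,
                                  predicted_relations, gamma=1.0):
    """Combine each candidate's embedding score with a bonus for every
    path relation that matches a predicted question relation, and return
    candidates sorted best-first by the combined score."""
    combined = {}
    for answer, emb_score in candidates.items():
        overlap = len(path_relations.get(answer, set()) & predicted_relations)
        combined[answer] = emb_score + gamma * overlap
    return sorted(combined, key=combined.get, reverse=True)

# Example: a 3-hop-style near-tie where relation overlap decides the answer.
candidates = {"MovieA": 0.91, "MovieB": 0.89}
path_relations = {
    "MovieA": {"directed_by"},                     # no overlap with the question
    "MovieB": {"starred_actors", "release_year"},  # two matching relations
}
predicted = {"starred_actors", "release_year"}
print(rerank_with_relation_matching(candidates, path_relations, predicted))
# → ['MovieB', 'MovieA']
```

On a full KG the long 3-hop paths produce many spurious high-scoring candidates, which is why a pruning signal like this helps there; on an incomplete (half) KG the paths themselves are missing, so the overlap signal becomes unreliable.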

@okanvk

okanvk commented May 13, 2021

Hello, are there any updates on relation matching, for reproducing the state-of-the-art 3-hop model?

@okanvk

okanvk commented Mar 9, 2022

Hi, are there any updates on the Relation Matching code?

@apoorvumang
Collaborator

Not yet, unfortunately. I will pin this issue until it gets added. However, based on more recent work in this area and on the MetaQA dataset, I would recommend against using EmbedKGQA in the full KG setting; semantic parsing models should be used in such a setting, IMO.

If needed, you can use the 0.728 number mentioned in #31 for reporting.

@apoorvumang apoorvumang pinned this issue Mar 9, 2022