
Proofs and Theorem Prover #1

Open
AdamIshay opened this issue Jun 27, 2020 · 4 comments

Comments

@AdamIshay

Hello,

When I run reasoning.py, `acceptedProof` is always empty. Is this expected? How do I see the generated proofs?

Additionally, I don't see the neural theorem prover learning in reasoning.py. Am I missing something, or is there a separate instruction for running the theorem prover training?

Thank you!

@ForoughA
Owner

Hello!

The accepted proof will be non-empty only when you successfully fill CORGI's missing knowledge through user dialog. To help further, can you tell me which statement you tried and what your responses to CORGI's prompts were?

You are right, thanks for bringing this up! It looks like I only released the pre-trained model. I will include the training scripts and ping you when the code is up.

Thanks!

@AdamIshay
Author

Thanks for your response. Here are four statements and the responses I gave:

Statement: If I have an early morning meeting then wake me up early because I want to be ontime.
Prompt: How do I know ‘I be ontime’?
User answer: If I wake up early
Output:
Empty proof
Sorry, I do not know how to do what you asked me :(

Statement: If there are thunderstorms in the forecast within a few hours then remind me to close the windows because I want to keep my home dry.
Prompt: How do I know if “I keep my home dry.”?
User answer: If I close the windows
Output:
Empty proof
Sorry, I do not know how to do what you asked me :(

Statement: If I schedule an appointment that overlaps with another appointment then notify me immediately because I want to let my colleagues know of the conflict.
Prompt: How do I know if “I let my colleagues know of the conflict.”?
Answer: If I email them
Output:
Empty proof
Sorry, I do not know how to do what you asked me :(

Statement: If I search for a gas station in the navigation app and there is a cheaper gas station that is not too much further away then ask me immediately whether I want to switch the destination to the new gas station because I want to save money.
Prompt: How do I know if “I save money.”?
User answer: If I switch my destination to a cheaper gas station
Prompt: How do I know if “If I switch my destination to a cheaper gas station”?
User answer: If I am asked whether I want to switch to a cheaper gas station
Prompt: How do I know if “If I am asked whether I switch to a cheaper gas station”?
User answer: If there is a cheaper gas station that is not too much further away
Empty proof
Sorry, I do not know how to do what you asked me :(

@ForoughA
Owner

ForoughA commented Jul 14, 2020

Hello and sorry for the slow response!
For the first utterance, can you try this answer: "if i wake up at 8"?
This should produce a proof given the current knowledge base. If it does not, let me know and I will help fix that.

In general (assuming there are no installation issues or missing files), whether a proof can be generated depends on whether the user interacting with the reasoning engine supplies the knowledge needed to complete the proof. Another factor is how complete the knowledge base is in the first place. The current knowledge base is intentionally sparse and has a lot of missing knowledge, because we wanted to see whether it is feasible to rely on human interaction for commonsense reasoning. Let me know if this description is not clear.
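To illustrate the idea described above, here is a minimal, hypothetical sketch (not CORGI's actual code; the predicate names and the `prove` helper are made up for illustration) of backward chaining over a sparse knowledge base. A goal is provable only when a chain of rules connects it to a known fact, so a proof stays empty until the user's answer supplies the missing rule:

```python
# Hypothetical sketch: a tiny backward-chaining prover over a sparse
# knowledge base. Rules are (head, body) pairs read as "head :- body".
# A goal proves only if a rule chain reaches a known fact; user dialog
# is what adds the missing links.

def prove(goal, facts, rules, depth=0, max_depth=10):
    """Return a nested proof tuple for `goal`, or None if the chain breaks."""
    if depth > max_depth:
        return None
    if goal in facts:
        return (goal, "fact")
    for head, body in rules:
        if head == goal:
            sub = prove(body, facts, rules, depth + 1, max_depth)
            if sub is not None:
                return (goal, sub)
    return None  # missing knowledge -> empty proof

facts = {"wake_up(i, at_8)"}                       # what the engine already knows
rules = [("be(i, on_time)", "wake_up(i, early)")]  # intentionally sparse KB

# Without the user's answer, nothing proves wake_up(i, early): empty proof.
assert prove("be(i, on_time)", facts, rules) is None

# The user answers "if i wake up at 8", adding the missing rule.
rules.append(("wake_up(i, early)", "wake_up(i, at_8)"))
proof = prove("be(i, on_time)", facts, rules)
print(proof)  # nested tuple: the completed proof chain down to the known fact
```

The point of the sketch is only that the same query fails or succeeds depending entirely on whether the dialog fills the gap in the knowledge base, which matches the behavior described above.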

@jainkhyati

Hello,

I got the same results as @AdamIshay. As you suggested, answering "if i wake up at 8" to the first question does complete the proof. But I was wondering if I was doing something wrong when I got the following responses:


statement : If there are thunderstorms in the forecast within a few hours then remind me to close the windows because I want to keep my home dry.
Prompt: How do I know if "I keep my home dry."?
User: If the windows are closed
matchedGoal2: close_new(the_windows)

0 : sorry, i do not know how to do what you asked me :(
--
statement: If it's going to rain in the afternoon then remind me to bring an umbrella because I want to remain dry.
Prompt: How do I know if "I remain dry."?
User: If I have my umbrella
matchedGoal2: have(i,my_umbrella)
0 : sorry, i do not know how to do what you asked me :(

In the two cases above, I am giving a statement that directly maps the goal to the query, and each is an example from the paper. CORGI is able to "match the goal", so the proof should have been completed, but it fails in both cases.

I also frequently noticed that CORGI doesn't ask me a second query about the subgoal; instead it fails outright after it is unable to complete the proof using the first input. What could be the reason for that?

Thank you so much!
