
Crash during training #8

Open
jurukode opened this issue May 10, 2017 · 5 comments

Comments

@jurukode

Hi @ganeshjawahar,

I've noticed that the script always gets killed automatically after seven iterations. I'm not sure what's happening. Do you have any idea why?

[screenshot: error]

@joeybose

joeybose commented May 10, 2017

In my experience there is a resource-exhaustion error, and that's why the process gets killed. There seems to be a bug in this code that uses a lot more GPU RAM than the original MemN2N, but I'm not sure where the bug is.

@jurukode
Author

Hi @0220joey,

Yeah. By the way, I'm only using the CPU right now, and the training process eats so much RAM that my laptop hangs. Thanks for the info!

@joeybose

I've managed to fix the issue: it's in all the tf.assign calls, which add more nodes to the graph on every step. So comment out lines 156-170 and 211-225 in model.py. The accuracy is about 10% lower than the paper, though.
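
For anyone hitting this later, here is a minimal sketch (TensorFlow 1.x, with made-up variable names, not code from this repo's model.py) of why calling tf.assign inside the training loop exhausts memory, and the usual workaround of building the assign op once and reusing it:

```python
import tensorflow as tf  # TensorFlow 1.x-style API

# Hypothetical variable standing in for a model weight matrix.
w = tf.Variable(tf.zeros([256, 256]), name="w")
clipped = tf.clip_by_norm(w, 40.0)

# Leaky pattern: each call to tf.assign inside the loop creates a NEW op,
# so the graph (and the RAM backing it) grows on every training step:
#
#     for step in range(num_steps):
#         sess.run(tf.assign(w, clipped))   # graph grows each iteration
#
# Workaround: build the assign op once, outside the loop, and reuse it.
assign_op = tf.assign(w, clipped)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(10):
        sess.run(assign_op)  # no new graph nodes are added here
```

Calling sess.graph.finalize() after building the model is also a handy way to catch this class of bug, since it raises an error as soon as anything tries to add new nodes.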

@jurukode
Author

Thanks @0220joey,

It works well and runs faster. I'm wondering whether the commented-out code is essential for higher accuracy or not.

@joeybose

I don't think so, but the paper uses a few more tricks that this code doesn't, and I can't be sure whether those account for the accuracy gap.
