
Random outputs (predictions) #2

Open · render2020 opened this issue Jan 2, 2021 · 3 comments

Comments

@render2020 commented Jan 2, 2021

Hi,

Sadly, your script doesn't seem to be consistent in its output (prediction).
Just run it a few times (press F5 to refresh the Chrome console) and you will see different results.
Roughly every fifth run the output is random and unstable, no matter how many training iterations are used.

Run it yourself.
```html
<script type="text/javascript" src="/js/UNN.js"></script>
<script type="text/javascript" src="/js/UNN.util.js"></script>
<script>
var net = UNN.Create([
  ["inpt","line",2,1,1],
  ["full","sigm",2,1,1],
  ["full","sigm",1,1,1]
], 0.5);

var O  = [];
var In = [[0,0],[0,1],[1,0],[1,1]], Ou = [[0],[1],[1],[0]];
var prm = { method:"sgd", batch_size:1 };

for(var i = 0; i < 50000; i++) UNN.Train(net, In, Ou, prm);

UNN.GetOutput(net, [0,0], O); console.log(O[2][0]); // should be approx. 0.0xxx
UNN.GetOutput(net, [1,0], O); console.log(O[2][0]); // should be approx. 0.9xxx
UNN.GetOutput(net, [0,1], O); console.log(O[2][0]); // should be approx. 0.9xxx
</script>
```

FIREFOX:
[screenshot: console output from one run]

CHROME:
[screenshots: console output from four separate runs]

IE EDGE:
[screenshots: console output from three separate runs]

@photopea (Owner) commented Jan 3, 2021

Hi, I am aware of this. The back-propagation method is not guaranteed to reach a given error level every time. This is not a problem specific to UNN.js; the same thing happens in any implementation of back-propagation learning.

You could improve it by trying different initial weights, a different learning method ("adadelta" instead of "sgd"), or a different learning rate. Even the order of the training inputs matters.

If training the same network is more stable in another program, it is probably because you are using different parameters (or that program's defaults differ from ours). For other cases, our parameters could work better.
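One way to put this advice into practice is to restart training from fresh random weights whenever a run fails to converge. The sketch below is only illustrative: it reuses the UNN.js calls from the snippet above (`UNN.Create`, `UNN.Train`, `UNN.GetOutput`) plus the "adadelta" method mentioned here; the `isLearned` helper, the retry count, and the 0.2 tolerance are assumptions made up for this example.

```js
// Illustrative sketch: retrain XOR from fresh random weights until the
// outputs look converged. Only UNN.Create / UNN.Train / UNN.GetOutput
// from the snippet above are assumed; the helper name, retry count,
// and error threshold are invented for this example.
var In = [[0,0],[0,1],[1,0],[1,1]], Ou = [[0],[1],[1],[0]];

// Returns true when every training example is predicted within `tol`.
function isLearned(net, tol) {
  for (var j = 0; j < In.length; j++) {
    var O = [];
    UNN.GetOutput(net, In[j], O);
    if (Math.abs(O[2][0] - Ou[j][0]) > tol) return false;
  }
  return true;
}

var net = null;
for (var attempt = 0; attempt < 10 && net == null; attempt++) {
  // Fresh random initial weights on every attempt.
  var candidate = UNN.Create([
    ["inpt","line",2,1,1],
    ["full","sigm",2,1,1],
    ["full","sigm",1,1,1]
  ], 0.5);
  // "adadelta" was suggested above as a more robust alternative to "sgd".
  var prm = { method:"adadelta", batch_size:1 };
  for (var i = 0; i < 50000; i++) UNN.Train(candidate, In, Ou, prm);
  if (isLearned(candidate, 0.2)) net = candidate;
}
if (net == null) console.log("no run converged; try other parameters");
```

Restart-on-failure is a common workaround for XOR-style toy problems, where an unlucky random initialization can settle into a poor region that no amount of further iterations will fix.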

@render2020 (Author) commented

Hi again,

Thank you for your extensive explanation, I see it's my lack of knowledge.
I will try again and report back.
Have a good day and stay safe from Covid.
👌

@TrevorBlythe commented

> Hi again
> Thank you for your extensive explanation, I see it's my lack of knowledge. I will try again and report back. Have a good day and stay safe from Covid. 👌

get good kid LMAOOOOOOOPOOOOOOO
