NOTE: IF YOU WISH TO REPORT A NEW BUG, PLEASE POST A NEW QUESTION AND TAG AS "BUG REPORT". THANK YOU.

Bug: Learning rate of Neural Network, during parameters optimization

AlphaPi · Member · Posts: 10 · Contributor II
Hello everyone,

I am trying to optimize the parameters of the Neural Network algorithm. Using the Optimize Parameters (Grid) operator, I select Neural Net.learning_rate, from 0.01 to 1, with 10 steps. When I run the process, I get the following error message:
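For readers outside RapidMiner, the setup above can be sketched as an equivalent grid search in Python with scikit-learn (a hypothetical analogue, not the RapidMiner operator itself; the data set and estimator settings here are illustrative assumptions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# Illustrative data set standing in for the (unposted) example set.
X, y = make_classification(n_samples=200, random_state=42)

# 10 learning-rate candidates between 0.01 and 1, mirroring the Grid operator.
grid = {"learning_rate_init": np.linspace(0.01, 1.0, 10)}

search = GridSearchCV(
    MLPClassifier(max_iter=200, random_state=42),
    param_grid=grid,
    cv=3,
)
search.fit(X, y)
print(search.best_params_["learning_rate_init"])
```

In scikit-learn each grid point trains a fresh network, so the "Cannot reset network to a smaller learning rate" failure mode does not arise there.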

Process failed abnormally
Ooops. Seems like you have found a bug. Please report it in our community at https://community.www.turtlecreekpls.com. Reason: Cannot reset network to a smaller learning rate.

Is there any solution about this?

Thanks in advance!

Needs Info · Last Updated

Comments

  • varunm1 · Moderator, Member · Posts: 1,207 · Unicorn
    Hello @AlphaPi

    Did you try turning the "decay" parameter on? Generally, this resolves the error.

    Do let us know if this doesn't work.
    Regards,
    Varun
    https://www.varunmandalapu.com/

    Be Safe. Follow precautions and Maintain Social Distancing

  • AlphaPi · Member · Posts: 10 · Contributor II
    Hello @varunm1,

    Thank you for answering!

    I just tried it and got the same error message. Any other suggestions?

    BR
  • varunm1 · Moderator, Member · Posts: 1,207 · Unicorn
    Hmm, I remember this resolving my issue earlier. If it doesn't work for you, I guess @pschlunder or @IngoRM might help.
    Regards,
    Varun
    https://www.varunmandalapu.com/

    Be Safe. Follow precautions and Maintain Social Distancing

  • jacobcybulski · Member, University Professor · Posts: 391 · Unicorn
    I have seen this error several times. It is a bug in the neural network maths, I think to do with floating-point operations: somewhere within your network you generate very small or very large weights. Fixing this properly needs regularization, which the simple Neural Net operator does not support. The best way out is to swap the Neural Network for a Deep Learning model. Otherwise, try stopping the learning process earlier, e.g. by reducing the training cycles, increasing the epsilon, or reducing the number of layers.
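    The two remedies above (regularization and early stopping) can be sketched outside RapidMiner with scikit-learn's MLPClassifier; the parameter values below are illustrative assumptions, not a tuned configuration:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier

    # Illustrative data set in place of the original example set.
    X, y = make_classification(n_samples=200, random_state=0)

    clf = MLPClassifier(
        hidden_layer_sizes=(10,),
        alpha=1e-2,           # L2 penalty: the regularization a simple Neural Net lacks
        early_stopping=True,  # stop once the validation score stops improving
        n_iter_no_change=5,
        max_iter=500,
        random_state=0,
    )
    clf.fit(X, y)

    # With the penalty and early stop, weights stay in a sane range.
    assert all(np.all(np.isfinite(w)) for w in clf.coefs_)
    ```

    The same idea applies in RapidMiner: fewer training cycles or a larger epsilon cut training off before the weights drift to extreme values.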
  • sgenzer · Administrator, Moderator, Employee, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator · Posts: 2,959 · Community Manager
    @AlphaPi can you please post your process XML and a sample data set so we can replicate the issue?

    Scott