Search for answers in this community or in the Academy. Finally, Google is your best friend. Try searching until you find something you can understand, because we cannot know which resource is best for you. Read different things and you will learn easily. As our time is limited, we recommend you try hard first and then ask us questions in case you have any. This is the way we learn as well.
Answers
You are a perfect teacher.
I will try all the points that you mentioned.
It works, but cross-validation doesn't show accuracy or kappa.
Please help me to solve it.
Thank you
Regarding "getting a label with a single sample": this is possible when predicting cancer, because a cancer cell is unique among cells.
Please look at the screenshot; it doesn't calculate kappa or accuracy.
Please help me to solve that.
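If the performance is not appearing inside the cross-validation step, it can help to see how the two metrics mentioned here are computed from out-of-fold predictions. Below is a minimal sketch (assuming scikit-learn is installed, with a synthetic stand-in dataset rather than the poster's Excel data) that produces both accuracy and Cohen's kappa from a 10-fold cross-validation:

```python
# Sketch: accuracy and Cohen's kappa from cross-validation predictions.
# The dataset is a toy stand-in, not the data discussed in the thread.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
model = DecisionTreeClassifier(random_state=0)

# Out-of-fold predictions: each sample is predicted by a model that
# never saw that sample during training.
pred = cross_val_predict(model, X, y, cv=10)

acc = accuracy_score(y, pred)
kappa = cohen_kappa_score(y, pred)
print("accuracy:", acc)
print("kappa:", kappa)
```

In RapidMiner the equivalent is to place a Performance operator inside the testing subprocess of the Cross Validation operator and enable the accuracy and kappa criteria.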
I will try it now
Yes, it works. Thank you.
I did all the points that you told me about the data,
but the results are funny: some of the algorithms' results changed for the better and some did not. Logically, the results are funny.
any way thank you very much my kind friend
I agree with you, but I removed all the single-labeled data.
Thank you
Regards
mbs
Where can I read about cross-validation?
Hi
Thank you for your link, but sorry, it is too fast.
Varun
https://www.varunmandalapu.com/
Be Safe. Follow precautions and Maintain Social Distancing
according to this link:
https://community.www.turtlecreekpls.com/discussion/55112/cross-validation-and-its-outputs-in-rm-studio
because of the 2000 records in the Excel file that I have (large data), Split Data works better than cross-validation.
During testing, I found that if I combine 3 or 4 algorithms and use cross-validation, the result is better than with Split Data.
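The trade-off discussed here (a single hold-out split versus cross-validation) can be sketched directly. This is an illustrative comparison assuming scikit-learn and a synthetic 2000-row dataset, not the poster's actual data or RapidMiner process:

```python
# Sketch: one train/test split vs. 10-fold cross-validation on the same
# data. Cross-validation averages 10 estimates, so it is more stable,
# at the cost of training the model 10 times.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0)

# Single split: one accuracy estimate, higher variance.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
split_acc = model.fit(X_tr, y_tr).score(X_te, y_te)

# Cross-validation: mean of 10 accuracy estimates, more stable.
cv_acc = cross_val_score(model, X, y, cv=10).mean()

print(f"split accuracy: {split_acc:.3f}, CV accuracy: {cv_acc:.3f}")
```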
Regards
mbs
Thank you for all the points that you mentioned.
With your perfect suggestions, my thesis doesn't have any problems, and I'm sure that I will pass it easily.
Regards
mbs
For reason 2, you need to start with smaller networks and then build more complex networks based on the data and test the performance. There is no use in building networks with more hidden layers when a simple neural network can achieve your task. For reason 3, use AUC values as the performance metric instead of accuracy.
Reason 2: Complex algorithms sometimes overfit (it depends on the data). A deep learning algorithm is one that has more hidden layers. In my statement, I am saying to train and test a model with a single hidden layer first and note performance parameters like accuracy, kappa, etc. Then you can build another model with more hidden layers and compare the performances. If your simple model gives the best performance, there is no need to use a complex model with multiple hidden layers.
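The advice above (start with one hidden layer, only add depth if it helps, and compare by AUC rather than accuracy) can be sketched as follows. This assumes scikit-learn and uses a small synthetic dataset; the layer sizes are arbitrary choices for illustration:

```python
# Sketch: cross-validate a one-hidden-layer network, then a deeper one,
# and compare them by mean AUC. Keep the simpler model if it matches
# the deeper one's performance.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

aucs = []
for layers in [(16,), (16, 16, 16)]:   # simple model first, then deeper
    net = MLPClassifier(hidden_layer_sizes=layers, max_iter=500,
                        random_state=0)
    auc = cross_val_score(net, X, y, cv=5, scoring="roc_auc").mean()
    aucs.append(auc)
    print(f"{len(layers)} hidden layer(s): mean AUC = {auc:.3f}")
```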
@varunm1
These are your suggestions, but I couldn't understand them, and they are important. So please make an example with them and share your XML.
Thank you very much
mbs
@varunm1
According to your previous help, please tell me: how can I use more than one algorithm, combine them, and then use cross-validation without using a group model?
According to the points that @varunm1 made, if we have data with a label, we don't need to separate the dataset into training and testing parts. Also, RapidMiner with cross-validation is able to separate it automatically into train and test parts, and for the testing part it will not use the label the way the training part does.
Are these points correct?
Thank you
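The two points in the question (cross-validation splits labeled data automatically, and the held-out labels are used only for scoring, never for training) can be made concrete with an explicit k-fold loop. A minimal sketch, assuming scikit-learn and the built-in iris dataset:

```python
# Sketch: 10-fold cross-validation splits the labeled data into
# train/test parts automatically. Held-out labels are never shown to
# the model; they are only used afterwards to score its predictions.
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

scores = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True,
                                 random_state=0).split(X):
    model = DecisionTreeClassifier(random_state=0)
    model.fit(X[train_idx], y[train_idx])   # labels used: training fold only
    pred = model.predict(X[test_idx])       # no labels passed in here
    scores.append(accuracy_score(y[test_idx], pred))  # labels used to score

print("mean accuracy:", sum(scores) / len(scores))
```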
Thank you for your great answer again.
the algorithms are:
1. deep learning
2. j48
3. random forest
4. knn
5. gradient boosted tree
6. neural network
7. svm
Thank you for the time that you spent on my questions.
Are you trying to combine all these models into a single model, or are you trying to get the cross-validation performance of each model separately?
I never tried combining this many models into a single model. You can try using group models, but I am not sure how that works.
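Outside RapidMiner's Group Models operator, one common way to combine several of the listed algorithms into a single model is a voting ensemble, which can then be cross-validated as one unit. This is an illustrative scikit-learn sketch (DecisionTree standing in for J48), not the poster's process:

```python
# Sketch: combine three of the listed algorithm families into one
# majority-vote model and cross-validate the combined model as a unit.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

combo = VotingClassifier([
    ("tree", DecisionTreeClassifier(random_state=0)),  # stand-in for J48
    ("rf", RandomForestClassifier(n_estimators=25, random_state=0)),
    ("knn", KNeighborsClassifier()),
])

acc = cross_val_score(combo, X, y, cv=5).mean()
print("combined CV accuracy:", acc)
```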
Varun
The results of them are perfect; their accuracy is around 99.5%. This is "ensemble learning".
Ensemble methods are meta-algorithms that combine several machine learning techniques into one predictive model in order to decrease variance (bagging), bias (boosting), or improve predictions (stacking).
look at this link please.
https://en.wikipedia.org/wiki/Ensemble_learning
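The three ensemble flavors named in the definition above each have a direct counterpart in common ML libraries. A minimal sketch, assuming scikit-learn and a synthetic dataset:

```python
# Sketch: bagging (variance reduction), boosting (bias reduction), and
# stacking (combining predictions via a meta-learner), cross-validated
# on the same toy data.
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              StackingClassifier, RandomForestClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

ensembles = {
    "bagging": BaggingClassifier(random_state=0),
    "boosting": GradientBoostingClassifier(random_state=0),
    "stacking": StackingClassifier(
        [("rf", RandomForestClassifier(n_estimators=25, random_state=0)),
         ("lr", LogisticRegression(max_iter=1000))]),
}

results = {}
for name, model in ensembles.items():
    results[name] = cross_val_score(model, X, y, cv=3).mean()
    print(name, results[name])
```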
Please explain more.