If you have such a small number of observations (and a much higher-dimensional feature space), why do you think you can accurately train not just a single MLP, but an ensemble of them, without overfitting dramatically?

On Sat, Jan 7, 2017 at 2:26 PM, Thomas Evangelidis <tevang3@gmail.com> wrote:
Regarding the evaluation, I use the leave-20%-out cross-validation method. I cannot leave more out because my data sets are very small, between 30 and 40 observations, each one with 600 features. Is there a limit to the number of MLPRegressors I can combine with stacking, considering my small data sets?
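For reference, the leave-20%-out evaluation described above can be expressed with scikit-learn's `ShuffleSplit` and `cross_val_score`. This is only a sketch of the evaluation scheme on synthetic data of the same shape (about 35 observations, 600 features); it uses a single `MLPRegressor` rather than a stacked ensemble, and the hidden-layer size and split count are illustrative choices, not recommendations.

```python
import numpy as np
from sklearn.model_selection import ShuffleSplit, cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the data set described in the thread:
# ~35 observations, 600 features, with signal only in the first feature.
rng = np.random.RandomState(0)
X = rng.randn(35, 600)
y = X[:, 0] + 0.1 * rng.randn(35)

# Repeated leave-20%-out splits: each split holds out 20% for testing.
cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=0)

# Scaling matters for MLPs; the tiny hidden layer is an arbitrary example.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(5,), max_iter=500, random_state=0),
)

scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
print(scores)
```

With so few observations, the per-split scores will vary a lot, which is itself useful information: a wide spread across splits is a sign that the evaluation, not just the model, is starved for data.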

On Jan 7, 2017 23:04, "Joel Nothman" <joel.nothman@gmail.com> wrote:
There is no problem, in general, with overfitting, as long as your evaluation of an estimator's performance isn't biased towards the training set. We've not talked about evaluation.
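The point about evaluation bias can be made concrete with a small sketch (synthetic data, not from the thread): a high-capacity MLP fit on pure-noise targets can score well on its own training set, while a held-out split reveals that there is no real signal. Only the held-out score is an unbiased estimate of performance.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Pure noise: y has no relationship to X, so true predictive power is zero.
rng = np.random.RandomState(0)
X = rng.randn(40, 600)
y = rng.randn(40)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# A high-capacity net can memorize 32 training points easily.
model = MLPRegressor(hidden_layer_sizes=(50,), solver="lbfgs",
                     max_iter=1000, random_state=0).fit(X_tr, y_tr)

train_score = model.score(X_tr, y_tr)  # high: fit is biased toward training data
test_score = model.score(X_te, y_te)   # low or negative: no signal to recover
print(train_score, test_score)
```

The training score alone would suggest an excellent model; the gap between the two numbers is exactly the bias Joel is warning about.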


_______________________________________________
scikit-learn mailing list
scikit-learn@python.org
https://mail.python.org/mailman/listinfo/scikit-learn

