If you keep everything at their default values, it seems to work -
```py
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

clf = MLPClassifier(max_iter=1000)
clf.fit(X, y)
res = clf.predict([[0, 0], [0, 1], [1, 0], [1, 1]])
print(res)
```
The default is 100 units in the hidden layer, but theoretically it should work with 2 hidden logistic units (I think that's the typical textbook/classroom example). I think what happens is that it gets stuck in a local minimum depending on the random weight initialization. E.g., the following works just fine:

```py
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

clf = MLPClassifier(solver='lbfgs', activation='logistic', alpha=0.0,
                    hidden_layer_sizes=(2,), learning_rate_init=0.1,
                    max_iter=1000, random_state=20)
clf.fit(X, y)
res = clf.predict([[0, 0], [0, 1], [1, 0], [1, 1]])
print(res)
print(clf.loss_)
```

but changing the random seed to 1 leads to:

```
[0 1 1 1]
0.34660921283
```

For comparison, I used a more vanilla MLP (1 hidden layer with 2 units and logistic activation as well; https://github.com/rasbt/python-machine-learning-book/blob/master/code/ch12/...), essentially resulting in the same problem:
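To make the seed-dependence above concrete, here is a small sketch (not from the thread; it only reuses the MLPClassifier setup shown earlier) that scans a range of random seeds and records which ones let the minimal 2-unit logistic network solve XOR:

```python
# Sketch: scan random seeds to see how often the minimal 2-unit
# logistic MLP lands in a solution for XOR. Whether a given seed
# converges depends on the random weight initialization.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

solved = []
for seed in range(30):
    clf = MLPClassifier(solver='lbfgs', activation='logistic', alpha=0.0,
                        hidden_layer_sizes=(2,), max_iter=1000,
                        random_state=seed)
    clf.fit(X, y)
    # a seed "solves" XOR if all four training points are predicted correctly
    if list(clf.predict(X)) == y:
        solved.append(seed)

print(len(solved), "of 30 seeds solved XOR:", solved)
```

With a larger hidden layer (e.g. the default 100 units) far more seeds succeed, which is presumably why the default configuration appears to "just work".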
On Nov 23, 2016, at 6:26 AM, linjia@ruijie.com.cn wrote:
Yes, you are right @Raghav R V, thanks!
However, I found the key parameter is 'hidden_layer_sizes=[2]'. I wonder if I misunderstand the meaning of the hidden_layer_sizes parameter?
Is it related to this topic: http://stackoverflow.com/questions/36819287/mlp-classifier-of-scikit-neuraln...
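For what it's worth, hidden_layer_sizes takes one entry per hidden layer, so [2] (or the tuple (2,)) means a single hidden layer with 2 units. A quick sketch (not from the thread) shows this via the shapes of the fitted weight matrices:

```python
# hidden_layer_sizes has one entry per hidden layer; (2,) means one
# hidden layer of 2 units. The fitted weight matrices make this concrete.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

clf = MLPClassifier(hidden_layer_sizes=(2,), max_iter=50, random_state=0)
clf.fit(X, y)

# coefs_ holds one weight matrix per layer transition:
# input (2 features) -> hidden (2 units) -> output (1 logit)
shapes = [W.shape for W in clf.coefs_]
print(shapes)  # [(2, 2), (2, 1)]
```

So hidden_layer_sizes=[2, 2] would instead build two hidden layers of 2 units each.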
From: scikit-learn [mailto:scikit-learn-bounces+linjia=ruijie.com.cn@python.org] On Behalf Of Raghav R V
Sent: November 23, 2016 19:04
To: Scikit-learn user and developer mailing list
Subject: Re: [scikit-learn] question about using sklearn.neural_network.MLPClassifier?
Hi,
If you keep everything at their default values, it seems to work -
```py
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

clf = MLPClassifier(max_iter=1000)
clf.fit(X, y)
res = clf.predict([[0, 0], [0, 1], [1, 0], [1, 1]])
print(res)
```
On Wed, Nov 23, 2016 at 10:27 AM, <linjia@ruijie.com.cn> wrote:

Hi everyone,
I tried to use sklearn.neural_network.MLPClassifier to test the XOR operation, but I found the result is not satisfactory. The following is the code; can you tell me if I am using the library incorrectly?
```py
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

clf = MLPClassifier(solver='adam', activation='logistic', alpha=1e-3,
                    hidden_layer_sizes=(2,), max_iter=1000)
clf.fit(X, y)
res = clf.predict([[0, 0], [0, 1], [1, 0], [1, 1]])
print(res)

# result is [0 0 0 0], score is 0.5
```
_______________________________________________
scikit-learn mailing list
scikit-learn@python.org
https://mail.python.org/mailman/listinfo/scikit-learn
-- Raghav RV https://github.com/raghavrv