[scikit-learn] Are sample weights normalized?
Abhishek Raj
abhishekraj10 at yahoo.com
Fri Jul 28 13:01:25 EDT 2017
Hi,
I am using a one-class SVM for binary classification and was curious: what is
the range/scale for sample weights? Are they normalized internally?
For example -
Sample 1, weight - 1
Sample 2, weight - 10
Sample 3, weight - 100
Does this mean Sample 3 will always be predicted as positive and Sample 1
will never be predicted as positive? What about Sample 2?
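For context, here is a rough sketch of what I mean (toy one-feature data,
using sklearn.svm.OneClassSVM, which as far as I can tell accepts a
sample_weight argument in fit()):

import numpy as np
from sklearn.svm import OneClassSVM

# Toy data: one feature per sample, weights as in the example above.
X = np.array([[0.0], [1.0], [2.0]])      # Sample 1, Sample 2, Sample 3
weights = np.array([1.0, 10.0, 100.0])   # weights 1, 10, 100

clf = OneClassSVM(nu=0.5)
clf.fit(X, sample_weight=weights)        # per-sample weights passed to fit()
print(clf.predict(X))                    # +1 = inlier, -1 = outlier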
Also, what would happen if I assigned a high weight to the majority of the
samples and low weights to the rest? E.g., if 80% of my samples were weighted
1000 and 20% were weighted 1.
A clarification or a link to read up on how exactly weights affect the
training process would be really helpful.
Thanks,
Abhishek