[scikit-learn] ANN Scikit-learn 0.18 released
Piotr Bialecki
piotr.bialecki at hotmail.de
Tue Oct 11 08:47:28 EDT 2016
Hi Maciek,
thank you very much! Numpy and opencv were indeed the conflicting packages.
Apparently my version of opencv required numpy 1.10, so I uninstalled opencv, updated numpy, and updated scikit-learn to 0.18.
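For anyone who hits the same conflict, the sequence was roughly the following (reconstructed from memory, so the exact invocations and output may have differed):

~$ conda remove opencv
~$ conda update numpy
~$ conda update scikit-learn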
Thanks for the fast help!
Best regards,
Piotr
On 11.10.2016 14:39, Maciek Wójcikowski wrote:
Hi Piotr,
I've been there - most probably some package is blocking the update through its numpy dependency. Try updating numpy first and the conflicting package should pop up: "conda update numpy=1.11"
----
Pozdrawiam, | Best regards,
Maciek Wójcikowski
maciek at wojcikowski.pl
2016-10-11 14:32 GMT+02:00 Piotr Bialecki <piotr.bialecki at hotmail.de>:
Congratulations to all contributors!
I would like to update to the new version using conda, but apparently it is not available:
~$ conda update scikit-learn
Fetching package metadata .......
Solving package specifications: ..........
# All requested packages already installed.
# packages in environment at /home/pbialecki/anaconda2:
#
scikit-learn 0.17.1 np110py27_2
Should I reinstall scikit-learn?
Best regards,
Piotr
On 03.10.2016 18:23, Raghav R V wrote:
Hi Brown,
Thanks for the email. There is a working PR here: https://github.com/scikit-learn/scikit-learn/pull/7388
Would you be so kind as to take a look at it and comment on how helpful the proposed API is for your use case?
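To give a rough feel for the direction, a call could look something like the sketch below (purely illustrative; the exact names and signatures are what the PR is still deciding):

# Illustrative sketch only -- the real interface is what's under review in the PR.
from sklearn.datasets import load_iris
from sklearn.svm import SVC
from sklearn.model_selection import cross_validate  # proposed helper, not in 0.18

X, y = load_iris(return_X_y=True)
scores = cross_validate(SVC(kernel="linear"), X, y,
                        scoring={"acc": "accuracy", "rec": "recall_macro"})
print(scores["test_acc"], scores["test_rec"])  # one array of CV scores per metric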
Thanks
On Mon, Oct 3, 2016 at 6:05 AM, Brown J.B. <jbbrown at kuhp.kyoto-u.ac.jp> wrote:
Hello community,
Congratulations on the release of 0.18!
While I'm merely a casual user and wish I could contribute more often, I thank everyone for their time and efforts!
2016-10-01 1:58 GMT+09:00 Andreas Mueller <t3kcit at gmail.com>:
We've got a lot in the works already for 0.19.
* multiple metrics for cross-validation (#7388 et al.)
I've done something like this in my internal model building and selection libraries.
My solution has been to have:
- each metric object able to report a "distance from optimal"
- a metric collection object, which can be built either by explicit instantiation or by calculation from data
- a pareto curve calculation object
- a ranker for the points on the pareto curve, with the ability to select the N best points.
While there are certainly smarter interfaces and implementations, here is an example of one of my doctests that may help get this PR started.
My apologies that my old docstring argument notation doesn't match the commonly used standards.
Hope this helps,
J.B. Brown
Kyoto University
class TrialRanker(object):
    """An object for handling the generic mechanism of selecting optimal
    trials from a collection of trials."""

    # ... (unrelated methods elided) ...

    def SelectBest(self, metricSets, paretoAlg, preProcessor=None):
        """Select the best [metricSets] by using the
        [paretoAlg] pareto selection object. Note that it is actually
        the [paretoAlg] that specifies how many optimal [metricSets] to
        select.

        Data may be pre-processed into a form necessary for the [paretoAlg]
        by using the [preProcessor] that is a MetricSetConverter.

        Return: an EvaluatedMetricSet if [paretoAlg] selects only one
        metric set, otherwise a list of EvaluatedMetricSet objects.

        >>> from pareto.paretoDecorators import MinNormSelector
        >>> from pareto import OriginBasePareto
        >>> pAlg = MinNormSelector(OriginBasePareto())

        >>> from metrics.TwoClassMetrics import Accuracy, Sensitivity
        >>> from metrics.metricSet import EvaluatedMetricSet
        >>> met1 = EvaluatedMetricSet.BuildByExplicitValue(
        ...     [(Accuracy, 0.7), (Sensitivity, 0.9)])
        >>> met1.SetTitle("Example1")
        >>> met1.associatedData = range(5)  # property set/get
        >>> met2 = EvaluatedMetricSet.BuildByExplicitValue(
        ...     [(Accuracy, 0.8), (Sensitivity, 0.6)])
        >>> met2.SetTitle("Example2")
        >>> met2.SetAssociatedData("abcdef")  # explicit method call
        >>> met3 = EvaluatedMetricSet.BuildByExplicitValue(
        ...     [(Accuracy, 0.5), (Sensitivity, 0.5)])
        >>> met3.SetTitle("Example3")
        >>> met3.associatedData = float

        >>> from metrics.metricSet.converters import OptDistConverter

        >>> ranker = TrialRanker()  # pAlg selects met1
        >>> best = ranker.SelectBest((met1, met2, met3),
        ...                          pAlg, OptDistConverter())
        >>> best.VerboseDescription(True)
        >>> str(best)
        'Example1: 2 metrics; Accuracy=0.700; Sensitivity=0.900'
        >>> best.associatedData
        [0, 1, 2, 3, 4]

        >>> pAlg = MinNormSelector(OriginBasePareto(), nSelect=2)
        >>> best = ranker.SelectBest((met1, met2, met3),
        ...                          pAlg, OptDistConverter())
        >>> for metSet in best:
        ...     metSet.VerboseDescription(True)
        ...     str(metSet)
        ...     str(metSet.associatedData)
        'Example1: 2 metrics; Accuracy=0.700; Sensitivity=0.900'
        '[0, 1, 2, 3, 4]'
        'Example2: 2 metrics; Accuracy=0.800; Sensitivity=0.600'
        'abcdef'

        >>> from metrics.TwoClassMetrics import PositivePredictiveValue
        >>> met4 = EvaluatedMetricSet.BuildByExplicitValue(
        ...     [(Accuracy, 0.7), (PositivePredictiveValue, 0.5)])
        >>> best = ranker.SelectBest((met1, met2, met3, met4),
        ...                          pAlg, OptDistConverter())
        Traceback (most recent call last):
        ...
        ValueError: Metric sets contain differing Metrics.
        """
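If it helps to see the selection logic itself, here is a minimal, self-contained sketch of the idea behind the pareto pieces (greatly simplified, with made-up names; my actual OriginBasePareto and MinNormSelector do more than this):

# Simplified illustration of pareto-front selection over metric sets.
# All metrics are assumed to be "higher is better" with an optimum of 1.0.
from math import sqrt

def pareto_front(candidates):
    """Keep only the candidates not dominated by another candidate."""
    front = []
    for title, metrics in candidates:
        dominated = any(
            all(other[m] >= metrics[m] for m in metrics) and
            any(other[m] > metrics[m] for m in metrics)
            for _, other in candidates)
        if not dominated:
            front.append((title, metrics))
    return front

def select_best(candidates, n_select=1):
    """Rank the pareto front by Euclidean distance from the optimum."""
    def dist_from_optimal(item):
        return sqrt(sum((1.0 - v) ** 2 for v in item[1].values()))
    return sorted(pareto_front(candidates), key=dist_from_optimal)[:n_select]

candidates = [("Example1", {"Accuracy": 0.7, "Sensitivity": 0.9}),
              ("Example2", {"Accuracy": 0.8, "Sensitivity": 0.6}),
              ("Example3", {"Accuracy": 0.5, "Sensitivity": 0.5})]
print(select_best(candidates, n_select=2))

Example3 is dominated by both of the others, and Example1 lies closer to the ideal corner (1, 1) than Example2, so the selection order matches the doctest above.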
_______________________________________________
scikit-learn mailing list
scikit-learn at python.org
https://mail.python.org/mailman/listinfo/scikit-learn