[scikit-learn] Any plans on generalizing Pipeline and transformers?

Manuel Castejón Limas manuel.castejon at gmail.com
Mon Jan 8 18:58:07 EST 2018


Just a quick ping to share that I've kept playing with this PipeGraph toy.
The following example reflects its current state.

* As you can see, scikit-learn models can be used as steps in the nodes of
the graph simply by declaring them, for example:

'Gaussian_Mixture':
        {'step': GaussianMixture,
         'kargs': {'n_components': 3},
         'connections': {'X': ('Concatenate_Xy', 'Xy')},
         'use_for': ['fit'],
         },

* Custom steps need only succinct declarations with very little code

* The graph description is easy to read, in my humble opinion.

* Steps can optionally play 'fit' and/or 'run' roles via 'use_for' (note in
the traces below that steps declared only for 'fit' are skipped during run)

* TO-DO: using the memory option to cache intermediate results and making it
compatible with GridSearchCV (see the sketch right after this list). So far I
was too busy playing with template methods in order to simplify its use.
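
For the GridSearchCV item, here is only a hypothetical sketch of what I am
aiming for, assuming PipeGraph eventually exposes the standard estimator
interface (get_params / set_params and fit(X, y)) plus the usual
'step__parameter' naming; none of it works yet:

from sklearn.model_selection import GridSearchCV

# Hypothetical parameter names: they assume PipeGraph adopts scikit-learn's
# 'step__parameter' convention for the steps declared in the graph below.
param_grid = {'Dbscan__eps': [0.01, 0.05, 0.1],
              'Gaussian_Mixture__n_components': [2, 3, 4]}

search = GridSearchCV(pipegraph, param_grid, cv=3)
# search.fit(X, y)  # still a TO-DO: needs the standard fit(X, y) signature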

I have convinced some nice colleagues at my university to team up with me
and write some proper documentation.

Best wishes
Manolo


import pandas as pd
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LinearRegression

# work in progress library: https://github.com/mcasl/PAELLA/
from pipeGraph import (PipeGraph,
                       FirstStep,
                       LastStep,
                       CustomStep)

from paella import Paella

URL = "
https://raw.githubusercontent.com/mcasl/PAELLA/master/data/sin_60_percent_noise.csv
"
data = pd.read_csv(URL, usecols=['V1', 'V2'])
X, y = data[['V1']], data[['V2']]

class CustomConcatenationStep(CustomStep):
    # Concatenate the incoming X and y dataframes column-wise into a single 'Xy' output
    def _post_fit(self):
        self.output['Xy'] = pd.concat(self.input, axis=1)


class CustomCombinationStep(CustomStep):
    # Keep DBSCAN's noise label (negative) where present, otherwise use the
    # GaussianMixture assignment
    def _post_fit(self):
        self.output['classification'] = np.where(self.input['dominant'] < 0,
                                                 self.input['dominant'],
                                                 self.input['other'])


class CustomPaellaStep(CustomStep):
    def _pre_fit(self):
        self.sklearn_object = Paella(**self.kargs)

    def _fit(self):
        self.sklearn_object.fit(**self.input)

    def _post_fit(self):
        self.output['prediction'] = self.sklearn_object.transform(self.input['X'],
                                                                   self.input['y'])



graph_description = {
    'First':
        {'step': FirstStep,
         'connections': {'X': X,
                         'y': y},
         'use_for': ['fit', 'run'],
         },

    'Concatenate_Xy':
        {'step': CustomConcatenationStep,
         'connections': {'df1': ('First', 'X'),
                         'df2': ('First', 'y')},
         'use_for': ['fit'],
         },

    'Gaussian_Mixture':
        {'step': GaussianMixture,
         'kargs': {'n_components': 3},
         'connections': {'X': ('Concatenate_Xy', 'Xy')},
         'use_for': ['fit'],
         },

    'Dbscan':
        {'step': DBSCAN,
         'kargs': {'eps': 0.05},
         'connections': {'X': ('Concatenate_Xy', 'Xy')},
         'use_for': ['fit'],
         },

    'Combine_Clustering':
        {'step': CustomCombinationStep,
         'connections': {'dominant': ('Dbscan', 'prediction'),
                         'other': ('Gaussian_Mixture', 'prediction')},
         'use_for': ['fit'],
         },

    'Paella':
        {'step': CustomPaellaStep,
         'kargs': {'noise_label': -1,
                   'max_it': 20,
                   'regular_size': 400,
                   'minimum_size': 100,
                   'width_r': 0.99,
                   'n_neighbors': 5,
                   'power': 30,
                   'random_state': None},

         'connections': {'X': ('First', 'X'),
                         'y': ('First', 'y'),
                         'classification': ('Combine_Clustering',
                                            'classification')},
         'use_for': ['fit'],
         },

    'Regressor':
        {'step': LinearRegression,
         'kargs': {},
         'connections': {'X': ('First', 'X'),
                         'y': ('First', 'y'),
                         'sample_weight': ('Paella', 'prediction')},
         'use_for': ['fit', 'run'],
         },

    'Last':
        {'step': LastStep,
         'connections': {'prediction': ('Regressor', 'prediction'),
                         },
         'use_for': ['fit', 'run'],
         },
}

pipegraph = PipeGraph(graph_description)
pipegraph.fit()

#Fitting:  First
#Fitting:  Concatenate_Xy
#Fitting:  Dbscan
#Fitting:  Gaussian_Mixture
#Fitting:  Combine_Clustering
#Fitting:  Paella
#0  ,
#1  ,
#2  ,
#3  ,
#4  ,
#5  ,
#6  ,
#7  ,
#8  ,
#9  ,
#10  ,
#11  ,
#12  ,
#13  ,
#14  ,
#15  ,
#16  ,
#17  ,
#18  ,
#19  ,
#Fitting:  Regressor
#Fitting:  Last

pipegraph.run()
#Running:  First
#Running:  Regressor
#Running:  Last
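
For contrast with the sample_weight part of the message quoted below: as far
as I know, the closest a plain Pipeline gets today is forwarding a
precomputed weight vector to its final step through fit params. A minimal
sketch, assuming the final step is registered under the (hypothetical) name
'regressor':

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

pipe = Pipeline([('scaler', StandardScaler()),
                 ('regressor', LinearRegression())])

# The weights must be computed beforehand, outside the pipeline;
# the 'regressor__' prefix routes them to LinearRegression.fit only.
sample_weight = np.ones(len(X))
pipe.fit(X, y, regressor__sample_weight=sample_weight)

PipeGraph instead computes those weights inside the graph (the 'Paella' step)
and feeds them to 'Regressor' as just another connection.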

2017-12-19 13:44 GMT+01:00 Manuel Castejón Limas <manuel.castejon at gmail.com>:

> Dear all,
>
> Kudos to scikit-learn! Having said that, Pipeline is killing me by not
> being able to transform anything other than X.
>
> My current case study would need:
> - Transformers being able to handle both X and y, e.g. clustering X and y
> concatenated
> - Pipeline being able to change other params, e.g. sample_weight
>
> Currently, I'm augmenting X through every step with the extra information,
> which seems to work OK for my_pipe.fit_transform(X_train, y_train) but
> breaks on my_pipe.transform(X_test) for lack of the y parameter. OK, I can
> inherit from the Pipeline class and modify a descendant to allow the y
> parameter, which is not ideal, but I guess it is an option. The gritty part
> comes when having to adapt every regressor at the end of the ladder in
> order to split the extra information from the raw data in X, and not being
> able to generate more than one subproduct from each preprocessing step.
>
> My current research involves clustering the data and using that
> classification along with X in order to predict outliers, which generates
> sample_weight info, and I would love to use that in the final regressor.
> Currently there seems to be no option other than pasting that info onto X.
>
> All in all, I'm stuck with this API limitation and I would love to learn
> some tricks from you if you could enlighten me.
>
> Thanks in advance!
>
> Manuel Castejón-Limas
>
>