
Error when using preprocessing per case in model selection #145

Open
bastian-f opened this issue Oct 11, 2022 · 2 comments


bastian-f commented Oct 11, 2022

Hello,

I followed this notebook to create an ensemble.

First, I tune the base learners with a preprocessing pipeline:

preprocessing = {'sc': [StandardScaler()]}
# Fit the base learners
evl.fit(
    x_train, y_train,
    estimators=base_learners,
    param_dicts=param_dicts,
    preprocessing=preprocessing,
    n_iter=1
)
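For context, the `estimators_per_case` and `preprocessing_cases` objects used below follow mlens's per-case convention: both are dicts keyed by case name, mapping to a list of transformers and a list of (name, estimator) pairs respectively. A minimal sketch of that shape (the case names and concrete estimators here are illustrative assumptions, not taken from the notebook):

```python
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# One entry per "case": a named preprocessing pipeline...
preprocessing_cases = {
    'sc': [StandardScaler()],   # standardized features
    'raw': [],                  # untouched features
}

# ...and, per case, the estimators to fit on that case's output.
estimators_per_case = {
    'sc': [('svc', SVC(probability=True))],
    'raw': [('rf', RandomForestClassifier())],
}
```

Each case name in the estimator dict must match a case name in the preprocessing dict, which is what lets `SuperLearner.add` pair them up.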

In the notebook, the model selection is done like this:

in_layer_proba = SuperLearner(model_selection=True).add(base_learners,
                                                        proba=True)
in_layer_class = SuperLearner(model_selection=True).add(base_learners,
                                                        proba=False)

This works fine, but I think the preprocessing step (the standard scaler) is missing here, isn't it? So I did the following:

in_layer_proba = SuperLearner(model_selection=True).add(
    estimators_per_case, preprocessing_cases, proba=True
)
in_layer_class = SuperLearner(model_selection=True).add(
    estimators_per_case, preprocessing_cases, proba=False
)
preprocess = {'proba': [('layer-1', in_layer_proba)],
              'class': [('layer-1', in_layer_class)]}
evl.fit(
    x_train, y_train,
    meta_learners,
    param_dicts=param_dicts,
    preprocessing=preprocess,
    n_iter=1
)

With this I get the error below. I am not sure whether I am doing something wrong or whether this is a bug.
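For reference, mlens dispatches preprocessing transforms through a helper that first tries the X-only transformer signature and, on `TypeError`, falls back to the `(X, y)` signature that model-selection-mode ensembles require. A simplified reconstruction of that pattern (based on the `mlens/parallel/_base_functions.py` frames in the traceback, not the exact library code):

```python
def transform(tr, x, y):
    """Try transforming with X only; on TypeError, retry with (X, y)."""
    try:
        x = tr.transform(x)
    except TypeError:
        # An ensemble in model selection mode lands here: its transform()
        # raises TypeError unless y is passed, which is the first
        # exception visible in the traceback below.
        x, y = tr.transform(x, y)
    return x, y

class NeedsY:
    """Stand-in for a model-selection-mode ensemble used as a transformer."""
    def transform(self, x, y=None):
        if y is None:
            raise TypeError("In model selection mode, y is a required argument.")
        return [v * 2 for v in x], y

x, y = transform(NeedsY(), [1, 2, 3], [0, 1, 0])
# x == [2, 4, 6]; y is passed through unchanged.
```

The fallback succeeds in this sketch; in the issue, the nested `SuperLearner` then fails further down when it cannot find its cached per-case preprocessing pipeline.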

bastian-f (Author) commented:

This is the error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/_base_functions.py in transform(tr, x, y)
    232     try:
--> 233         x = tr.transform(x)
    234     except TypeError:

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/ensemble/base.py in transform(self, X, y, **kwargs)
    549                 raise TypeError(
--> 550                     "In model selection mode, y is a required argument.")
    551 

TypeError: In model selection mode, y is a required argument.

During handling of the above exception, another exception occurred:

TransportableException                    Traceback (most recent call last)
~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in retrieve(self)
    702                 if getattr(self._backend, 'supports_timeout', False):
--> 703                     self._output.extend(job.get(timeout=self.timeout))
    704                 else:

~/.conda/envs/machine_learning/lib/python3.7/multiprocessing/pool.py in get(self, timeout)
    656         else:
--> 657             raise self._value
    658 

~/.conda/envs/machine_learning/lib/python3.7/multiprocessing/pool.py in worker(inqueue, outqueue, initializer, initargs, maxtasks, wrap_exception)
    120         try:
--> 121             result = (True, func(*args, **kwds))
    122         except Exception as e:

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/_parallel_backends.py in __call__(self, *args, **kwargs)
    358             text = format_exc(e_type, e_value, e_tb, context=10, tb_offset=1)
--> 359             raise TransportableException(text, e_type)
    360 

TransportableException: TransportableException
___________________________________________________________________________
ValueError                                         Tue Oct 11 15:54:29 2022
PID: 88346Python 3.7.12: /home/bastian/.conda/envs/machine_learning/bin/python
...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in __call__(self=<mlens.externals.joblib.parallel.BatchedCalls object>)
    130     def __init__(self, iterator_slice):
    131         self.items = list(iterator_slice)
    132         self._size = len(self.items)
    133 
    134     def __call__(self):
--> 135         return [func(*args, **kwargs) for func, args, kwargs in self.items]
        self.items = [(<mlens.parallel.learner.SubLearner object>, (), {})]
    136 
    137     def __len__(self):
    138         return self._size
    139 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in <listcomp>(.0=<list_iterator object>)
    130     def __init__(self, iterator_slice):
    131         self.items = list(iterator_slice)
    132         self._size = len(self.items)
    133 
    134     def __call__(self):
--> 135         return [func(*args, **kwargs) for func, args, kwargs in self.items]
        func = <mlens.parallel.learner.SubLearner object>
        args = ()
        kwargs = {}
    136 
    137     def __len__(self):
    138         return self._size
    139 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in __call__(self=<mlens.parallel.learner.SubLearner object>)
    119         else:
    120             self.processing_index = ''
    121 
    122     def __call__(self):
    123         """Launch job"""
--> 124         return getattr(self, self.job)()
        self = <mlens.parallel.learner.SubLearner object>
        self.job = 'transform'
    125 
    126     def fit(self, path=None):
    127         """Fit sub-learner"""
    128         if path is None:

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in transform(self=<mlens.parallel.learner.SubLearner object>, path=None)
    162             f = "stdout" if self.verbose < 10 - 3 else "stderr"
    163             print_time(t0, msg, file=f)
    164 
    165     def transform(self, path=None):
    166         """Predict with sublearner"""
--> 167         return self.predict(path)
        self.predict = <bound method SubLearner.predict of <mlens.parallel.learner.SubLearner object>>
        path = None
    168 
    169     def _fit(self, transformers):
    170         """Sub-routine to fit sub-learner"""
    171         xtemp, ytemp = slice_array(self.in_array, self.targets, self.in_index)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in predict(self=<mlens.parallel.learner.SubLearner object>, path=[('sc.0.0', <mlens.parallel.learner.IndexedEstimator object>)])
    152     def predict(self, path=None):
    153         """Predict with sublearner"""
    154         if path is None:
    155             path = self.path
    156         t0 = time()
--> 157         transformers = self._load_preprocess(path)
        transformers = undefined
        self._load_preprocess = <bound method SubLearner._load_preprocess of <mlens.parallel.learner.SubLearner object>>
        path = [('sc.0.0', <mlens.parallel.learner.IndexedEstimator object>)]
    158 
    159         self._predict(transformers, False)
    160         if self.verbose:
    161             msg = "{:<30} {}".format(self.name_index, "done")

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in _load_preprocess(self=<mlens.parallel.learner.SubLearner object>, path=[('sc.0.0', <mlens.parallel.learner.IndexedEstimator object>)])
    180         self.fit_time_ = time() - t0
    181 
    182     def _load_preprocess(self, path):
    183         """Load preprocessing pipeline"""
    184         if self.preprocess is not None:
--> 185             obj = load(path, self.preprocess_index, self.raise_on_exception)
        obj = undefined
        path = [('sc.0.0', <mlens.parallel.learner.IndexedEstimator object>)]
        self.preprocess_index = 'sc.0.2'
        self.raise_on_exception = True
    186             return obj.estimator
    187         return
    188 
    189     def _predict(self, transformers, score_preds):

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/_base_functions.py in load(path=[('sc.0.0', <mlens.parallel.learner.IndexedEstimator object>)], name='sc.0.2', raise_on_exception=True)
     24         obj = _load(f, raise_on_exception)
     25     elif isinstance(path, list):
     26         obj = [tup[1] for tup in path if tup[0] == name]
     27         if not obj:
     28             raise ValueError(
---> 29                 "No preprocessing pipeline in cache. Auxiliary Transformer "
     30                 "have not cached pipelines, or cached to another sub-cache.")
     31         elif not len(obj) == 1:
     32             raise ValueError(
     33                 "Could not load unique preprocessing pipeline. "

ValueError: No preprocessing pipeline in cache. Auxiliary Transformer have not cached pipelines, or cached to another sub-cache.
___________________________________________________________________________

During handling of the above exception, another exception occurred:

JoblibValueError                          Traceback (most recent call last)
~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/_parallel_backends.py in __call__(self, *args, **kwargs)
    349         try:
--> 350             return self.func(*args, **kwargs)
    351         except KeyboardInterrupt:

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in __call__(self)
    134     def __call__(self):
--> 135         return [func(*args, **kwargs) for func, args, kwargs in self.items]
    136 

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in <listcomp>(.0)
    134     def __call__(self):
--> 135         return [func(*args, **kwargs) for func, args, kwargs in self.items]
    136 

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in __call__(self)
    123         """Launch job"""
--> 124         return getattr(self, self.job)()
    125 

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in fit(self, path)
    336         self._fit(transformers)
--> 337         self._predict(transformers)
    338 

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in _predict(self, transformers, score_preds)
    355         self.train_score_, self.train_pred_time_ = self._score_preds(
--> 356             transformers, self.in_index)
    357 

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in _score_preds(self, transformers, index)
    365         if transformers:
--> 366             xtemp, ytemp = transformers.transform(xtemp, ytemp)
    367 

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/handles.py in transform(self, X, y)
    133         """
--> 134         return self._run(False, True, X, y)
    135 

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/handles.py in _run(self, fit, process, X, y)
     68             if len(self._pipeline) > 1 or process:
---> 69                 X, y = transform(tr, X, y)
     70 

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/_base_functions.py in transform(tr, x, y)
    234     except TypeError:
--> 235         x, y = tr.transform(x, y)
    236 

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/ensemble/base.py in transform(self, X, y, **kwargs)
    559             # blend ensemble will cut X in observation size so need to adjust y
--> 560             X = self._backend.transform(X, **kwargs)
    561             if X.shape[0] != y.shape[0]:

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/ensemble/base.py in transform(self, X, **kwargs)
    236 
--> 237         out = self._predict(X, 'transform', **kwargs)
    238 

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/ensemble/base.py in _predict(self, X, job, **kwargs)
    265                                 max(self.verbose - 4, 0)) as manager:
--> 266             out = manager.stack(self, job, X, return_preds=r, **kwargs)
    267 

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/backend.py in stack(self, caller, job, X, y, path, return_preds, warm_start, split, **kwargs)
    672             return_preds=return_preds, split=split, stack=True)
--> 673         return self.process(caller=caller, out=out, **kwargs)
    674 

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/backend.py in process(self, caller, out, **kwargs)
    717 
--> 718                 self._partial_process(task, parallel, **kwargs)
    719 

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/backend.py in _partial_process(self, task, parallel, **kwargs)
    738 
--> 739         task(self.job.args(**kwargs), parallel=parallel)
    740 

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/layer.py in __call__(self, args, parallel)
    151         parallel(delayed(sublearner, not _threading)()
--> 152                  for learner in self.learners
    153                  for sublearner in learner(args, 'main'))

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in __call__(self, iterable)
    792                 self._iterating = False
--> 793             self.retrieve()
    794             # Make sure that we get a last message telling us we are done

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in retrieve(self)
    743 
--> 744                     raise exception
    745 

JoblibValueError: JoblibValueError
___________________________________________________________________________
Multiprocessing exception:
...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/threading.py in _bootstrap(self=<DummyProcess(Thread-404, started daemon 140600733968128)>)
    885         # indeed has already been destroyed, so that exceptions in
    886         # _bootstrap_inner() during normal business hours are properly
    887         # reported.  Also, we only suppress them for daemonic threads;
    888         # if a non-daemonic encounters this, something else is wrong.
    889         try:
--> 890             self._bootstrap_inner()
        self._bootstrap_inner = <bound method Thread._bootstrap_inner of <DummyProcess(Thread-404, started daemon 140600733968128)>>
    891         except:
    892             if self._daemonic and _sys is None:
    893                 return
    894             raise

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/threading.py in _bootstrap_inner(self=<DummyProcess(Thread-404, started daemon 140600733968128)>)
    921                 _sys.settrace(_trace_hook)
    922             if _profile_hook:
    923                 _sys.setprofile(_profile_hook)
    924 
    925             try:
--> 926                 self.run()
        self.run = <bound method Thread.run of <DummyProcess(Thread-404, started daemon 140600733968128)>>
    927             except SystemExit:
    928                 pass
    929             except:
    930                 # If sys.stderr is no more (most likely from interpreter

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/threading.py in run(self=<DummyProcess(Thread-404, started daemon 140600733968128)>)
    865         from the args and kwargs arguments, respectively.
    866 
    867         """
    868         try:
    869             if self._target:
--> 870                 self._target(*self._args, **self._kwargs)
        self._target = <function worker>
        self._args = (<_queue.SimpleQueue object>, <_queue.SimpleQueue object>, None, (), None, False)
        self._kwargs = {}
    871         finally:
    872             # Avoid a refcycle if the thread is running a function with
    873             # an argument that has a member that points to the thread.
    874             del self._target, self._args, self._kwargs

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/multiprocessing/pool.py in worker(inqueue=<_queue.SimpleQueue object>, outqueue=<_queue.SimpleQueue object>, initializer=None, initargs=(), maxtasks=None, wrap_exception=False)
    116             util.debug('worker got sentinel -- exiting')
    117             break
    118 
    119         job, i, func, args, kwds = task
    120         try:
--> 121             result = (True, func(*args, **kwds))
        result = None
        func = <mlens.externals.joblib._parallel_backends.SafeFunction object>
        args = ()
        kwds = {}
    122         except Exception as e:
    123             if wrap_exception and func is not _helper_reraises_exception:
    124                 e = ExceptionWithTraceback(e, e.__traceback__)
    125             result = (False, e)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/_parallel_backends.py in __call__(self=<mlens.externals.joblib._parallel_backends.SafeFunction object>, *args=(), **kwargs={})
    345     def __init__(self, func):
    346         self.func = func
    347 
    348     def __call__(self, *args, **kwargs):
    349         try:
--> 350             return self.func(*args, **kwargs)
        self.func = <mlens.externals.joblib.parallel.BatchedCalls object>
        args = ()
        kwargs = {}
    351         except KeyboardInterrupt:
    352             # We capture the KeyboardInterrupt and reraise it as
    353             # something different, as multiprocessing does not
    354             # interrupt processing for a KeyboardInterrupt

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in __call__(self=<mlens.externals.joblib.parallel.BatchedCalls object>)
    130     def __init__(self, iterator_slice):
    131         self.items = list(iterator_slice)
    132         self._size = len(self.items)
    133 
    134     def __call__(self):
--> 135         return [func(*args, **kwargs) for func, args, kwargs in self.items]
        self.items = [(<mlens.parallel.learner.EvalSubLearner object>, (), {})]
    136 
    137     def __len__(self):
    138         return self._size
    139 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in <listcomp>(.0=<list_iterator object>)
    130     def __init__(self, iterator_slice):
    131         self.items = list(iterator_slice)
    132         self._size = len(self.items)
    133 
    134     def __call__(self):
--> 135         return [func(*args, **kwargs) for func, args, kwargs in self.items]
        func = <mlens.parallel.learner.EvalSubLearner object>
        args = ()
        kwargs = {}
    136 
    137     def __len__(self):
    138         return self._size
    139 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in __call__(self=<mlens.parallel.learner.EvalSubLearner object>)
    119         else:
    120             self.processing_index = ''
    121 
    122     def __call__(self):
    123         """Launch job"""
--> 124         return getattr(self, self.job)()
        self = <mlens.parallel.learner.EvalSubLearner object>
        self.job = 'fit'
    125 
    126     def fit(self, path=None):
    127         """Fit sub-learner"""
    128         if path is None:

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in fit(self=<mlens.parallel.learner.EvalSubLearner object>, path=[('class.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.svc.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.svc.0.0.2', <mlens.parallel.learner.IndexedEstimator object>)])
    332         if self.scorer is None:
    333             raise ValueError("Cannot generate CV-scores without a scorer")
    334         t0 = time()
    335         transformers = self._load_preprocess(path)
    336         self._fit(transformers)
--> 337         self._predict(transformers)
        self._predict = <bound method EvalSubLearner._predict of <mlens.parallel.learner.EvalSubLearner object>>
        transformers = Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True)
    338 
    339         o = IndexedEstimator(estimator=self.estimator,
    340                              name=self.name_index,
    341                              index=self.index,

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in _predict(self=<mlens.parallel.learner.EvalSubLearner object>, transformers=Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True), score_preds=None)
    351 
    352     def _predict(self, transformers, score_preds=None):
    353         """Sub-routine to with sublearner"""
    354         # Train set
    355         self.train_score_, self.train_pred_time_ = self._score_preds(
--> 356             transformers, self.in_index)
        transformers = Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True)
        self.in_index = ((0, 1899),)
    357 
    358         # Validation set
    359         self.test_score_, self.test_pred_time_ = self._score_preds(
    360             transformers, self.out_index)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in _score_preds(self=<mlens.parallel.learner.EvalSubLearner object>, transformers=Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True), index=((0, 1899),))
    361 
    362     def _score_preds(self, transformers, index):
    363         # Train scores
    364         xtemp, ytemp = slice_array(self.in_array, self.targets, index)
    365         if transformers:
--> 366             xtemp, ytemp = transformers.transform(xtemp, ytemp)
        xtemp = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        ytemp = array([2, 2, 1, ..., 2, 0, 3])
        transformers.transform = <bound method Pipeline.transform of Pipeline(nam...se,
       verbose=False))],
     return_y=True)>
    367 
    368         t0 = time()
    369 
    370         if self.error_score is not None:

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/handles.py in transform(self=Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True), X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=array([2, 2, 1, ..., 2, 0, 3]))
    129             Preprocessed input data
    130 
    131         y : array-like of shape [n_samples, ], optional
    132             Original or preprocessed targets, depending on the transformers.
    133         """
--> 134         return self._run(False, True, X, y)
        self._run = <bound method Pipeline._run of Pipeline(name='pi...se,
       verbose=False))],
     return_y=True)>
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        y = array([2, 2, 1, ..., 2, 0, 3])
    135 
    136     def fit_transform(self, X, y=None):
    137         """Fit and transform pipeline.
    138 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/handles.py in _run(self=Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True), fit=False, process=True, X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=array([2, 2, 1, ..., 2, 0, 3]))
     64         for tr_name, tr in self._pipeline:
     65             if fit:
     66                 tr.fit(X, y)
     67 
     68             if len(self._pipeline) > 1 or process:
---> 69                 X, y = transform(tr, X, y)
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        y = array([2, 2, 1, ..., 2, 0, 3])
        tr = SuperLearner(array_check=None, backend=None, fol...scorer=None, shuffle=False,
       verbose=False)
     70 
     71         if process:
     72             if self.return_y:
     73                 return X, y

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/_base_functions.py in transform(tr=SuperLearner(array_check=None, backend=None, fol...scorer=None, shuffle=False,
       verbose=False), x=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=array([2, 2, 1, ..., 2, 0, 3]))
    230 def transform(tr, x, y):
    231     """Try transforming with X and y. Else, transform with only X."""
    232     try:
    233         x = tr.transform(x)
    234     except TypeError:
--> 235         x, y = tr.transform(x, y)
        x = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        y = array([2, 2, 1, ..., 2, 0, 3])
        tr.transform = <bound method BaseEnsemble.transform of SuperLea...corer=None, shuffle=False,
       verbose=False)>
    236 
    237     return x, y
    238 
    239 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/ensemble/base.py in transform(self=SuperLearner(array_check=None, backend=None, fol...scorer=None, shuffle=False,
       verbose=False), X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=array([2, 2, 1, ..., 2, 0, 3]), **kwargs={})
    555                 return self.predict(X, **kwargs), y
    556 
    557             # Asked to reproduce predictions during fit, here we need to
    558             # account for that in model selection mode,
    559             # blend ensemble will cut X in observation size so need to adjust y
--> 560             X = self._backend.transform(X, **kwargs)
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        self._backend.transform = <bound method Sequential.transform of Sequential...n=True)])],
   verbose=0)],
      verbose=False)>
        kwargs = {}
    561             if X.shape[0] != y.shape[0]:
    562                 r = y.shape[0] - X.shape[0]
    563                 y = y[r:]
    564             return X, y

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/ensemble/base.py in transform(self=Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False), X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), **kwargs={})
    232         if not self.__fitted__:
    233             NotFittedError("Instance not fitted.")
    234 
    235         f, t0 = print_job(self, "Transforming")
    236 
--> 237         out = self._predict(X, 'transform', **kwargs)
        out = undefined
        self._predict = <bound method Sequential._predict of Sequential(...n=True)])],
   verbose=0)],
      verbose=False)>
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        kwargs = {}
    238 
    239         if self.verbose:
    240             print_time(t0, "{:<35}".format("Transform complete"),
    241                        file=f, flush=True)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/ensemble/base.py in _predict(self=Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False), X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), job='transform', **kwargs={})
    261             data.
    262         """
    263         r = kwargs.pop('return_preds', True)
    264         with ParallelProcessing(self.backend, self.n_jobs,
    265                                 max(self.verbose - 4, 0)) as manager:
--> 266             out = manager.stack(self, job, X, return_preds=r, **kwargs)
        out = undefined
        manager.stack = <bound method ParallelProcessing.stack of <mlens.parallel.backend.ParallelProcessing object>>
        self = Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False)
        job = 'transform'
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        r = True
        kwargs = {}
    267 
    268         if not isinstance(out, list):
    269             out = [out]
    270         out = [p.squeeze() for p in out]

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/backend.py in stack(self=<mlens.parallel.backend.ParallelProcessing object>, caller=Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False), job='transform', X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=None, path=None, return_preds=True, warm_start=False, split=True, **kwargs={})
    668             Prediction array(s).
    669         """
    670         out = self.initialize(
    671             job=job, X=X, y=y, path=path, warm_start=warm_start,
    672             return_preds=return_preds, split=split, stack=True)
--> 673         return self.process(caller=caller, out=out, **kwargs)
        self.process = <bound method ParallelProcessing.process of <mlens.parallel.backend.ParallelProcessing object>>
        caller = Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False)
        out = {}
        kwargs = {}
    674 
    675     def process(self, caller, out, **kwargs):
    676         """Process job.
    677 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/backend.py in process(self=<mlens.parallel.backend.ParallelProcessing object>, caller=Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False), out=None, **kwargs={})
    713                       backend=self.backend) as parallel:
    714 
    715             for task in caller:
    716                 self.job.clear()
    717 
--> 718                 self._partial_process(task, parallel, **kwargs)
        self._partial_process = <bound method ParallelProcessing._partial_proces...lens.parallel.backend.ParallelProcessing object>>
        task = Layer(backend='threading', dtype=<class 'numpy.f...='sc', raise_on_exception=True)])],
   verbose=0)
        parallel = Parallel(n_jobs=-1)
        kwargs = {}
    719 
    720                 if task.name in return_names:
    721                     out.append(self.get_preds(dtype=_dtype(task)))
    722 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/backend.py in _partial_process(self=<mlens.parallel.backend.ParallelProcessing object>, task=Layer(backend='threading', dtype=<class 'numpy.f...='sc', raise_on_exception=True)])],
   verbose=0), parallel=Parallel(n_jobs=-1), **kwargs={})
    734         task.setup(self.job.predict_in, self.job.targets, self.job.job)
    735 
    736         if not task.__no_output__:
    737             self._gen_prediction_array(task, self.job.job, self.__threading__)
    738 
--> 739         task(self.job.args(**kwargs), parallel=parallel)
        task = Layer(backend='threading', dtype=<class 'numpy.f...='sc', raise_on_exception=True)])],
   verbose=0)
        self.job.args = <bound method Job.args of <mlens.parallel.backend.Job object>>
        kwargs = {}
        parallel = Parallel(n_jobs=-1)
    740 
    741         if not task.__no_output__ and getattr(task, 'n_feature_prop', 0):
    742             self._propagate_features(task)
    743 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/layer.py in __call__(self=Layer(backend='threading', dtype=<class 'numpy.f...='sc', raise_on_exception=True)])],
   verbose=0), args={'auxiliary': {'P': None, 'X': array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])}, 'dir': [('sc.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('sc.0.2', <mlens.parallel.learner.IndexedEstimator object>)], 'job': 'transform', 'main': {'P': array([[2.60052562e-01, 1.77754706e-03, 5.693393... 1.20693236e-04, 9.98786032e-01]], dtype=float32), 'X': array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])}}, parallel=Parallel(n_jobs=-1))
    147         if self.verbose >= 2:
    148             safe_print(msg.format('Learners ...'), file=f, end=e2)
    149             t1 = time()
    150 
    151         parallel(delayed(sublearner, not _threading)()
--> 152                  for learner in self.learners
        self.learners = [Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None)]
    153                  for sublearner in learner(args, 'main'))
    154 
    155         if self.verbose >= 2:
    156             print_time(t1, 'done', file=f)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in __call__(self=Parallel(n_jobs=-1), iterable=<generator object Layer.__call__.<locals>.<genexpr>>)
    788             if pre_dispatch == "all" or n_jobs == 1:
    789                 # The iterable was consumed all at once by the above for loop.
    790                 # No need to wait for async callbacks to trigger to
    791                 # consumption.
    792                 self._iterating = False
--> 793             self.retrieve()
        self.retrieve = <bound method Parallel.retrieve of Parallel(n_jobs=-1)>
    794             # Make sure that we get a last message telling us we are done
    795             elapsed_time = time.time() - self._start_time
    796             self._print('Done %3i out of %3i | elapsed: %s finished',
    797                         (len(self._output), len(self._output),

---------------------------------------------------------------------------
Sub-process traceback:
---------------------------------------------------------------------------
ValueError                                         Tue Oct 11 15:54:29 2022
PID: 88346    Python 3.7.12: /home/bastian/.conda/envs/machine_learning/bin/python
...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in __call__(self=<mlens.externals.joblib.parallel.BatchedCalls object>)
    130     def __init__(self, iterator_slice):
    131         self.items = list(iterator_slice)
    132         self._size = len(self.items)
    133 
    134     def __call__(self):
--> 135         return [func(*args, **kwargs) for func, args, kwargs in self.items]
        self.items = [(<mlens.parallel.learner.SubLearner object>, (), {})]
    136 
    137     def __len__(self):
    138         return self._size
    139 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in <listcomp>(.0=<list_iterator object>)
    130     def __init__(self, iterator_slice):
    131         self.items = list(iterator_slice)
    132         self._size = len(self.items)
    133 
    134     def __call__(self):
--> 135         return [func(*args, **kwargs) for func, args, kwargs in self.items]
        func = <mlens.parallel.learner.SubLearner object>
        args = ()
        kwargs = {}
    136 
    137     def __len__(self):
    138         return self._size
    139 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in __call__(self=<mlens.parallel.learner.SubLearner object>)
    119         else:
    120             self.processing_index = ''
    121 
    122     def __call__(self):
    123         """Launch job"""
--> 124         return getattr(self, self.job)()
        self = <mlens.parallel.learner.SubLearner object>
        self.job = 'transform'
    125 
    126     def fit(self, path=None):
    127         """Fit sub-learner"""
    128         if path is None:

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in transform(self=<mlens.parallel.learner.SubLearner object>, path=None)
    162             f = "stdout" if self.verbose < 10 - 3 else "stderr"
    163             print_time(t0, msg, file=f)
    164 
    165     def transform(self, path=None):
    166         """Predict with sublearner"""
--> 167         return self.predict(path)
        self.predict = <bound method SubLearner.predict of <mlens.parallel.learner.SubLearner object>>
        path = None
    168 
    169     def _fit(self, transformers):
    170         """Sub-routine to fit sub-learner"""
    171         xtemp, ytemp = slice_array(self.in_array, self.targets, self.in_index)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in predict(self=<mlens.parallel.learner.SubLearner object>, path=[('sc.0.0', <mlens.parallel.learner.IndexedEstimator object>)])
    152     def predict(self, path=None):
    153         """Predict with sublearner"""
    154         if path is None:
    155             path = self.path
    156         t0 = time()
--> 157         transformers = self._load_preprocess(path)
        transformers = undefined
        self._load_preprocess = <bound method SubLearner._load_preprocess of <mlens.parallel.learner.SubLearner object>>
        path = [('sc.0.0', <mlens.parallel.learner.IndexedEstimator object>)]
    158 
    159         self._predict(transformers, False)
    160         if self.verbose:
    161             msg = "{:<30} {}".format(self.name_index, "done")

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in _load_preprocess(self=<mlens.parallel.learner.SubLearner object>, path=[('sc.0.0', <mlens.parallel.learner.IndexedEstimator object>)])
    180         self.fit_time_ = time() - t0
    181 
    182     def _load_preprocess(self, path):
    183         """Load preprocessing pipeline"""
    184         if self.preprocess is not None:
--> 185             obj = load(path, self.preprocess_index, self.raise_on_exception)
        obj = undefined
        path = [('sc.0.0', <mlens.parallel.learner.IndexedEstimator object>)]
        self.preprocess_index = 'sc.0.2'
        self.raise_on_exception = True
    186             return obj.estimator
    187         return
    188 
    189     def _predict(self, transformers, score_preds):

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/_base_functions.py in load(path=[('sc.0.0', <mlens.parallel.learner.IndexedEstimator object>)], name='sc.0.2', raise_on_exception=True)
     24         obj = _load(f, raise_on_exception)
     25     elif isinstance(path, list):
     26         obj = [tup[1] for tup in path if tup[0] == name]
     27         if not obj:
     28             raise ValueError(
---> 29                 "No preprocessing pipeline in cache. Auxiliary Transformer "
     30                 "have not cached pipelines, or cached to another sub-cache.")
     31         elif not len(obj) == 1:
     32             raise ValueError(
     33                 "Could not load unique preprocessing pipeline. "

ValueError: No preprocessing pipeline in cache. Auxiliary Transformer have not cached pipelines, or cached to another sub-cache.
___________________________________________________________________________

During handling of the above exception, another exception occurred:

TransportableException                    Traceback (most recent call last)
~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in retrieve(self)
    702                 if getattr(self._backend, 'supports_timeout', False):
--> 703                     self._output.extend(job.get(timeout=self.timeout))
    704                 else:

~/.conda/envs/machine_learning/lib/python3.7/multiprocessing/pool.py in get(self, timeout)
    656         else:
--> 657             raise self._value
    658 

~/.conda/envs/machine_learning/lib/python3.7/multiprocessing/pool.py in worker(inqueue, outqueue, initializer, initargs, maxtasks, wrap_exception)
    120         try:
--> 121             result = (True, func(*args, **kwds))
    122         except Exception as e:

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/_parallel_backends.py in __call__(self, *args, **kwargs)
    358             text = format_exc(e_type, e_value, e_tb, context=10, tb_offset=1)
--> 359             raise TransportableException(text, e_type)
    360 

TransportableException: TransportableException
___________________________________________________________________________
JoblibValueError                                   Tue Oct 11 15:54:32 2022
PID: 88346    Python 3.7.12: /home/bastian/.conda/envs/machine_learning/bin/python
...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in __call__(self=<mlens.externals.joblib.parallel.BatchedCalls object>)
    130     def __init__(self, iterator_slice):
    131         self.items = list(iterator_slice)
    132         self._size = len(self.items)
    133 
    134     def __call__(self):
--> 135         return [func(*args, **kwargs) for func, args, kwargs in self.items]
        self.items = [(<mlens.parallel.learner.EvalSubLearner object>, (), {})]
    136 
    137     def __len__(self):
    138         return self._size
    139 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in <listcomp>(.0=<list_iterator object>)
    130     def __init__(self, iterator_slice):
    131         self.items = list(iterator_slice)
    132         self._size = len(self.items)
    133 
    134     def __call__(self):
--> 135         return [func(*args, **kwargs) for func, args, kwargs in self.items]
        func = <mlens.parallel.learner.EvalSubLearner object>
        args = ()
        kwargs = {}
    136 
    137     def __len__(self):
    138         return self._size
    139 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in __call__(self=<mlens.parallel.learner.EvalSubLearner object>)
    119         else:
    120             self.processing_index = ''
    121 
    122     def __call__(self):
    123         """Launch job"""
--> 124         return getattr(self, self.job)()
        self = <mlens.parallel.learner.EvalSubLearner object>
        self.job = 'fit'
    125 
    126     def fit(self, path=None):
    127         """Fit sub-learner"""
    128         if path is None:

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in fit(self=<mlens.parallel.learner.EvalSubLearner object>, path=[('class.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.svc.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.svc.0.0.2', <mlens.parallel.learner.IndexedEstimator object>)])
    332         if self.scorer is None:
    333             raise ValueError("Cannot generate CV-scores without a scorer")
    334         t0 = time()
    335         transformers = self._load_preprocess(path)
    336         self._fit(transformers)
--> 337         self._predict(transformers)
        self._predict = <bound method EvalSubLearner._predict of <mlens.parallel.learner.EvalSubLearner object>>
        transformers = Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True)
    338 
    339         o = IndexedEstimator(estimator=self.estimator,
    340                              name=self.name_index,
    341                              index=self.index,

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in _predict(self=<mlens.parallel.learner.EvalSubLearner object>, transformers=Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True), score_preds=None)
    351 
    352     def _predict(self, transformers, score_preds=None):
    353         """Sub-routine to with sublearner"""
    354         # Train set
    355         self.train_score_, self.train_pred_time_ = self._score_preds(
--> 356             transformers, self.in_index)
        transformers = Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True)
        self.in_index = ((0, 1899),)
    357 
    358         # Validation set
    359         self.test_score_, self.test_pred_time_ = self._score_preds(
    360             transformers, self.out_index)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in _score_preds(self=<mlens.parallel.learner.EvalSubLearner object>, transformers=Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True), index=((0, 1899),))
    361 
    362     def _score_preds(self, transformers, index):
    363         # Train scores
    364         xtemp, ytemp = slice_array(self.in_array, self.targets, index)
    365         if transformers:
--> 366             xtemp, ytemp = transformers.transform(xtemp, ytemp)
        xtemp = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        ytemp = array([2, 2, 1, ..., 2, 0, 3])
        transformers.transform = <bound method Pipeline.transform of Pipeline(nam...se,
       verbose=False))],
     return_y=True)>
    367 
    368         t0 = time()
    369 
    370         if self.error_score is not None:

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/handles.py in transform(self=Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True), X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=array([2, 2, 1, ..., 2, 0, 3]))
    129             Preprocessed input data
    130 
    131         y : array-like of shape [n_samples, ], optional
    132             Original or preprocessed targets, depending on the transformers.
    133         """
--> 134         return self._run(False, True, X, y)
        self._run = <bound method Pipeline._run of Pipeline(name='pi...se,
       verbose=False))],
     return_y=True)>
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        y = array([2, 2, 1, ..., 2, 0, 3])
    135 
    136     def fit_transform(self, X, y=None):
    137         """Fit and transform pipeline.
    138 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/handles.py in _run(self=Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True), fit=False, process=True, X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=array([2, 2, 1, ..., 2, 0, 3]))
     64         for tr_name, tr in self._pipeline:
     65             if fit:
     66                 tr.fit(X, y)
     67 
     68             if len(self._pipeline) > 1 or process:
---> 69                 X, y = transform(tr, X, y)
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        y = array([2, 2, 1, ..., 2, 0, 3])
        tr = SuperLearner(array_check=None, backend=None, fol...scorer=None, shuffle=False,
       verbose=False)
     70 
     71         if process:
     72             if self.return_y:
     73                 return X, y

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/_base_functions.py in transform(tr=SuperLearner(array_check=None, backend=None, fol...scorer=None, shuffle=False,
       verbose=False), x=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=array([2, 2, 1, ..., 2, 0, 3]))
    230 def transform(tr, x, y):
    231     """Try transforming with X and y. Else, transform with only X."""
    232     try:
    233         x = tr.transform(x)
    234     except TypeError:
--> 235         x, y = tr.transform(x, y)
        x = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        y = array([2, 2, 1, ..., 2, 0, 3])
        tr.transform = <bound method BaseEnsemble.transform of SuperLea...corer=None, shuffle=False,
       verbose=False)>
    236 
    237     return x, y
    238 
    239 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/ensemble/base.py in transform(self=SuperLearner(array_check=None, backend=None, fol...scorer=None, shuffle=False,
       verbose=False), X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=array([2, 2, 1, ..., 2, 0, 3]), **kwargs={})
    555                 return self.predict(X, **kwargs), y
    556 
    557             # Asked to reproduce predictions during fit, here we need to
    558             # account for that in model selection mode,
    559             # blend ensemble will cut X in observation size so need to adjust y
--> 560             X = self._backend.transform(X, **kwargs)
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        self._backend.transform = <bound method Sequential.transform of Sequential...n=True)])],
   verbose=0)],
      verbose=False)>
        kwargs = {}
    561             if X.shape[0] != y.shape[0]:
    562                 r = y.shape[0] - X.shape[0]
    563                 y = y[r:]
    564             return X, y

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/ensemble/base.py in transform(self=Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False), X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), **kwargs={})
    232         if not self.__fitted__:
    233             NotFittedError("Instance not fitted.")
    234 
    235         f, t0 = print_job(self, "Transforming")
    236 
--> 237         out = self._predict(X, 'transform', **kwargs)
        out = undefined
        self._predict = <bound method Sequential._predict of Sequential(...n=True)])],
   verbose=0)],
      verbose=False)>
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        kwargs = {}
    238 
    239         if self.verbose:
    240             print_time(t0, "{:<35}".format("Transform complete"),
    241                        file=f, flush=True)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/ensemble/base.py in _predict(self=Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False), X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), job='transform', **kwargs={})
    261             data.
    262         """
    263         r = kwargs.pop('return_preds', True)
    264         with ParallelProcessing(self.backend, self.n_jobs,
    265                                 max(self.verbose - 4, 0)) as manager:
--> 266             out = manager.stack(self, job, X, return_preds=r, **kwargs)
        out = undefined
        manager.stack = <bound method ParallelProcessing.stack of <mlens.parallel.backend.ParallelProcessing object>>
        self = Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False)
        job = 'transform'
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        r = True
        kwargs = {}
    267 
    268         if not isinstance(out, list):
    269             out = [out]
    270         out = [p.squeeze() for p in out]

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/backend.py in stack(self=<mlens.parallel.backend.ParallelProcessing object>, caller=Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False), job='transform', X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=None, path=None, return_preds=True, warm_start=False, split=True, **kwargs={})
    668             Prediction array(s).
    669         """
    670         out = self.initialize(
    671             job=job, X=X, y=y, path=path, warm_start=warm_start,
    672             return_preds=return_preds, split=split, stack=True)
--> 673         return self.process(caller=caller, out=out, **kwargs)
        self.process = <bound method ParallelProcessing.process of <mlens.parallel.backend.ParallelProcessing object>>
        caller = Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False)
        out = {}
        kwargs = {}
    674 
    675     def process(self, caller, out, **kwargs):
    676         """Process job.
    677 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/backend.py in process(self=<mlens.parallel.backend.ParallelProcessing object>, caller=Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False), out=None, **kwargs={})
    713                       backend=self.backend) as parallel:
    714 
    715             for task in caller:
    716                 self.job.clear()
    717 
--> 718                 self._partial_process(task, parallel, **kwargs)
        self._partial_process = <bound method ParallelProcessing._partial_proces...lens.parallel.backend.ParallelProcessing object>>
        task = Layer(backend='threading', dtype=<class 'numpy.f...='sc', raise_on_exception=True)])],
   verbose=0)
        parallel = Parallel(n_jobs=-1)
        kwargs = {}
    719 
    720                 if task.name in return_names:
    721                     out.append(self.get_preds(dtype=_dtype(task)))
    722 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/backend.py in _partial_process(self=<mlens.parallel.backend.ParallelProcessing object>, task=Layer(backend='threading', dtype=<class 'numpy.f...='sc', raise_on_exception=True)])],
   verbose=0), parallel=Parallel(n_jobs=-1), **kwargs={})
    734         task.setup(self.job.predict_in, self.job.targets, self.job.job)
    735 
    736         if not task.__no_output__:
    737             self._gen_prediction_array(task, self.job.job, self.__threading__)
    738 
--> 739         task(self.job.args(**kwargs), parallel=parallel)
        task = Layer(backend='threading', dtype=<class 'numpy.f...='sc', raise_on_exception=True)])],
   verbose=0)
        self.job.args = undefined
        kwargs = {}
        parallel = Parallel(n_jobs=-1)
    740 
    741         if not task.__no_output__ and getattr(task, 'n_feature_prop', 0):
    742             self._propagate_features(task)
    743 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/layer.py in __call__(self=Layer(backend='threading', dtype=<class 'numpy.f...='sc', raise_on_exception=True)])],
   verbose=0), args={'auxiliary': {'P': None, 'X': array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])}, 'dir': [('sc.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('sc.0.2', <mlens.parallel.learner.IndexedEstimator object>)], 'job': 'transform', 'main': {'P': array([[2.60052562e-01, 1.77754706e-03, 5.693393... 1.20693236e-04, 9.98786032e-01]], dtype=float32), 'X': array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])}}, parallel=Parallel(n_jobs=-1))
    147         if self.verbose >= 2:
    148             safe_print(msg.format('Learners ...'), file=f, end=e2)
    149             t1 = time()
    150 
    151         parallel(delayed(sublearner, not _threading)()
--> 152                  for learner in self.learners
        self.learners = [Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None)]
    153                  for sublearner in learner(args, 'main'))
    154 
    155         if self.verbose >= 2:
    156             print_time(t1, 'done', file=f)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in __call__(self=Parallel(n_jobs=-1), iterable=<generator object Layer.__call__.<locals>.<genexpr>>)
    788             if pre_dispatch == "all" or n_jobs == 1:
    789                 # The iterable was consumed all at once by the above for loop.
    790                 # No need to wait for async callbacks to trigger to
    791                 # consumption.
    792                 self._iterating = False
--> 793             self.retrieve()
        self.retrieve = <bound method Parallel.retrieve of Parallel(n_jobs=-1)>
    794             # Make sure that we get a last message telling us we are done
    795             elapsed_time = time.time() - self._start_time
    796             self._print('Done %3i out of %3i | elapsed: %s finished',
    797                         (len(self._output), len(self._output),

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in retrieve(self=Parallel(n_jobs=-1))
    739 %s""" % (this_report, exception.message)
    740                     # Convert this to a JoblibException
    741                     exception_type = _mk_exception(exception.etype)[0]
    742                     exception = exception_type(report)
    743 
--> 744                     raise exception
        exception = undefined
    745 
    746     def __call__(self, iterable):
    747         if self._jobs:
    748             raise ValueError('This Parallel instance is already running')

JoblibValueError: JoblibValueError
___________________________________________________________________________
Multiprocessing exception:
...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/threading.py in _bootstrap(self=<DummyProcess(Thread-404, started daemon 140600733968128)>)
    885         # indeed has already been destroyed, so that exceptions in
    886         # _bootstrap_inner() during normal business hours are properly
    887         # reported.  Also, we only suppress them for daemonic threads;
    888         # if a non-daemonic encounters this, something else is wrong.
    889         try:
--> 890             self._bootstrap_inner()
        self._bootstrap_inner = <bound method Thread._bootstrap_inner of <DummyProcess(Thread-404, started daemon 140600733968128)>>
    891         except:
    892             if self._daemonic and _sys is None:
    893                 return
    894             raise

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/threading.py in _bootstrap_inner(self=<DummyProcess(Thread-404, started daemon 140600733968128)>)
    921                 _sys.settrace(_trace_hook)
    922             if _profile_hook:
    923                 _sys.setprofile(_profile_hook)
    924 
    925             try:
--> 926                 self.run()
        self.run = <bound method Thread.run of <DummyProcess(Thread-404, started daemon 140600733968128)>>
    927             except SystemExit:
    928                 pass
    929             except:
    930                 # If sys.stderr is no more (most likely from interpreter

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/threading.py in run(self=<DummyProcess(Thread-404, started daemon 140600733968128)>)
    865         from the args and kwargs arguments, respectively.
    866 
    867         """
    868         try:
    869             if self._target:
--> 870                 self._target(*self._args, **self._kwargs)
        self._target = <function worker>
        self._args = (<_queue.SimpleQueue object>, <_queue.SimpleQueue object>, None, (), None, False)
        self._kwargs = {}
    871         finally:
    872             # Avoid a refcycle if the thread is running a function with
    873             # an argument that has a member that points to the thread.
    874             del self._target, self._args, self._kwargs

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/multiprocessing/pool.py in worker(inqueue=<_queue.SimpleQueue object>, outqueue=<_queue.SimpleQueue object>, initializer=None, initargs=(), maxtasks=None, wrap_exception=False)
    116             util.debug('worker got sentinel -- exiting')
    117             break
    118 
    119         job, i, func, args, kwds = task
    120         try:
--> 121             result = (True, func(*args, **kwds))
        result = None
        func = <mlens.externals.joblib._parallel_backends.SafeFunction object>
        args = ()
        kwds = {}
    122         except Exception as e:
    123             if wrap_exception and func is not _helper_reraises_exception:
    124                 e = ExceptionWithTraceback(e, e.__traceback__)
    125             result = (False, e)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/_parallel_backends.py in __call__(self=<mlens.externals.joblib._parallel_backends.SafeFunction object>, *args=(), **kwargs={})
    345     def __init__(self, func):
    346         self.func = func
    347 
    348     def __call__(self, *args, **kwargs):
    349         try:
--> 350             return self.func(*args, **kwargs)
        self.func = <mlens.externals.joblib.parallel.BatchedCalls object>
        args = ()
        kwargs = {}
    351         except KeyboardInterrupt:
    352             # We capture the KeyboardInterrupt and reraise it as
    353             # something different, as multiprocessing does not
    354             # interrupt processing for a KeyboardInterrupt

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in __call__(self=<mlens.externals.joblib.parallel.BatchedCalls object>)
    130     def __init__(self, iterator_slice):
    131         self.items = list(iterator_slice)
    132         self._size = len(self.items)
    133 
    134     def __call__(self):
--> 135         return [func(*args, **kwargs) for func, args, kwargs in self.items]
        self.items = [(<mlens.parallel.learner.EvalSubLearner object>, (), {})]
    136 
    137     def __len__(self):
    138         return self._size
    139 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in <listcomp>(.0=<list_iterator object>)
    130     def __init__(self, iterator_slice):
    131         self.items = list(iterator_slice)
    132         self._size = len(self.items)
    133 
    134     def __call__(self):
--> 135         return [func(*args, **kwargs) for func, args, kwargs in self.items]
        func = <mlens.parallel.learner.EvalSubLearner object>
        args = ()
        kwargs = {}
    136 
    137     def __len__(self):
    138         return self._size
    139 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in __call__(self=<mlens.parallel.learner.EvalSubLearner object>)
    119         else:
    120             self.processing_index = ''
    121 
    122     def __call__(self):
    123         """Launch job"""
--> 124         return getattr(self, self.job)()
        self = <mlens.parallel.learner.EvalSubLearner object>
        self.job = 'fit'
    125 
    126     def fit(self, path=None):
    127         """Fit sub-learner"""
    128         if path is None:

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in fit(self=<mlens.parallel.learner.EvalSubLearner object>, path=[('class.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.svc.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.svc.0.0.2', <mlens.parallel.learner.IndexedEstimator object>)])
    332         if self.scorer is None:
    333             raise ValueError("Cannot generate CV-scores without a scorer")
    334         t0 = time()
    335         transformers = self._load_preprocess(path)
    336         self._fit(transformers)
--> 337         self._predict(transformers)
        self._predict = <bound method EvalSubLearner._predict of <mlens.parallel.learner.EvalSubLearner object>>
        transformers = Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True)
    338 
    339         o = IndexedEstimator(estimator=self.estimator,
    340                              name=self.name_index,
    341                              index=self.index,

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in _predict(self=<mlens.parallel.learner.EvalSubLearner object>, transformers=Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True), score_preds=None)
    351 
    352     def _predict(self, transformers, score_preds=None):
    353         """Sub-routine to with sublearner"""
    354         # Train set
    355         self.train_score_, self.train_pred_time_ = self._score_preds(
--> 356             transformers, self.in_index)
        transformers = Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True)
        self.in_index = ((0, 1899),)
    357 
    358         # Validation set
    359         self.test_score_, self.test_pred_time_ = self._score_preds(
    360             transformers, self.out_index)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in _score_preds(self=<mlens.parallel.learner.EvalSubLearner object>, transformers=Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True), index=((0, 1899),))
    361 
    362     def _score_preds(self, transformers, index):
    363         # Train scores
    364         xtemp, ytemp = slice_array(self.in_array, self.targets, index)
    365         if transformers:
--> 366             xtemp, ytemp = transformers.transform(xtemp, ytemp)
        xtemp = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        ytemp = array([2, 2, 1, ..., 2, 0, 3])
        transformers.transform = <bound method Pipeline.transform of Pipeline(nam...se,
       verbose=False))],
     return_y=True)>
    367 
    368         t0 = time()
    369 
    370         if self.error_score is not None:

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/handles.py in transform(self=Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True), X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=array([2, 2, 1, ..., 2, 0, 3]))
    129             Preprocessed input data
    130 
    131         y : array-like of shape [n_samples, ], optional
    132             Original or preprocessed targets, depending on the transformers.
    133         """
--> 134         return self._run(False, True, X, y)
        self._run = <bound method Pipeline._run of Pipeline(name='pi...se,
       verbose=False))],
     return_y=True)>
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        y = array([2, 2, 1, ..., 2, 0, 3])
    135 
    136     def fit_transform(self, X, y=None):
    137         """Fit and transform pipeline.
    138 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/handles.py in _run(self=Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True), fit=False, process=True, X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=array([2, 2, 1, ..., 2, 0, 3]))
     64         for tr_name, tr in self._pipeline:
     65             if fit:
     66                 tr.fit(X, y)
     67 
     68             if len(self._pipeline) > 1 or process:
---> 69                 X, y = transform(tr, X, y)
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        y = array([2, 2, 1, ..., 2, 0, 3])
        tr = SuperLearner(array_check=None, backend=None, fol...scorer=None, shuffle=False,
       verbose=False)
     70 
     71         if process:
     72             if self.return_y:
     73                 return X, y

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/_base_functions.py in transform(tr=SuperLearner(array_check=None, backend=None, fol...scorer=None, shuffle=False,
       verbose=False), x=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=array([2, 2, 1, ..., 2, 0, 3]))
    230 def transform(tr, x, y):
    231     """Try transforming with X and y. Else, transform with only X."""
    232     try:
    233         x = tr.transform(x)
    234     except TypeError:
--> 235         x, y = tr.transform(x, y)
        x = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        y = array([2, 2, 1, ..., 2, 0, 3])
        tr.transform = <bound method BaseEnsemble.transform of SuperLea...corer=None, shuffle=False,
       verbose=False)>
    236 
    237     return x, y
    238 
    239 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/ensemble/base.py in transform(self=SuperLearner(array_check=None, backend=None, fol...scorer=None, shuffle=False,
       verbose=False), X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=array([2, 2, 1, ..., 2, 0, 3]), **kwargs={})
    555                 return self.predict(X, **kwargs), y
    556 
    557             # Asked to reproduce predictions during fit, here we need to
    558             # account for that in model selection mode,
    559             # blend ensemble will cut X in observation size so need to adjust y
--> 560             X = self._backend.transform(X, **kwargs)
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        self._backend.transform = <bound method Sequential.transform of Sequential...n=True)])],
   verbose=0)],
      verbose=False)>
        kwargs = {}
    561             if X.shape[0] != y.shape[0]:
    562                 r = y.shape[0] - X.shape[0]
    563                 y = y[r:]
    564             return X, y

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/ensemble/base.py in transform(self=Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False), X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), **kwargs={})
    232         if not self.__fitted__:
    233             NotFittedError("Instance not fitted.")
    234 
    235         f, t0 = print_job(self, "Transforming")
    236 
--> 237         out = self._predict(X, 'transform', **kwargs)
        out = undefined
        self._predict = <bound method Sequential._predict of Sequential(...n=True)])],
   verbose=0)],
      verbose=False)>
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        kwargs = {}
    238 
    239         if self.verbose:
    240             print_time(t0, "{:<35}".format("Transform complete"),
    241                        file=f, flush=True)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/ensemble/base.py in _predict(self=Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False), X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), job='transform', **kwargs={})
    261             data.
    262         """
    263         r = kwargs.pop('return_preds', True)
    264         with ParallelProcessing(self.backend, self.n_jobs,
    265                                 max(self.verbose - 4, 0)) as manager:
--> 266             out = manager.stack(self, job, X, return_preds=r, **kwargs)
        out = undefined
        manager.stack = <bound method ParallelProcessing.stack of <mlens.parallel.backend.ParallelProcessing object>>
        self = Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False)
        job = 'transform'
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        r = True
        kwargs = {}
    267 
    268         if not isinstance(out, list):
    269             out = [out]
    270         out = [p.squeeze() for p in out]

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/backend.py in stack(self=<mlens.parallel.backend.ParallelProcessing object>, caller=Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False), job='transform', X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=None, path=None, return_preds=True, warm_start=False, split=True, **kwargs={})
    668             Prediction array(s).
    669         """
    670         out = self.initialize(
    671             job=job, X=X, y=y, path=path, warm_start=warm_start,
    672             return_preds=return_preds, split=split, stack=True)
--> 673         return self.process(caller=caller, out=out, **kwargs)
        self.process = <bound method ParallelProcessing.process of <mlens.parallel.backend.ParallelProcessing object>>
        caller = Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False)
        out = {}
        kwargs = {}
    674 
    675     def process(self, caller, out, **kwargs):
    676         """Process job.
    677 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/backend.py in process(self=<mlens.parallel.backend.ParallelProcessing object>, caller=Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False), out=None, **kwargs={})
    713                       backend=self.backend) as parallel:
    714 
    715             for task in caller:
    716                 self.job.clear()
    717 
--> 718                 self._partial_process(task, parallel, **kwargs)
        self._partial_process = <bound method ParallelProcessing._partial_proces...lens.parallel.backend.ParallelProcessing object>>
        task = Layer(backend='threading', dtype=<class 'numpy.f...='sc', raise_on_exception=True)])],
   verbose=0)
        parallel = Parallel(n_jobs=-1)
        kwargs = {}
    719 
    720                 if task.name in return_names:
    721                     out.append(self.get_preds(dtype=_dtype(task)))
    722 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/backend.py in _partial_process(self=<mlens.parallel.backend.ParallelProcessing object>, task=Layer(backend='threading', dtype=<class 'numpy.f...='sc', raise_on_exception=True)])],
   verbose=0), parallel=Parallel(n_jobs=-1), **kwargs={})
    734         task.setup(self.job.predict_in, self.job.targets, self.job.job)
    735 
    736         if not task.__no_output__:
    737             self._gen_prediction_array(task, self.job.job, self.__threading__)
    738 
--> 739         task(self.job.args(**kwargs), parallel=parallel)
        task = Layer(backend='threading', dtype=<class 'numpy.f...='sc', raise_on_exception=True)])],
   verbose=0)
        self.job.args = <bound method Job.args of <mlens.parallel.backend.Job object>>
        kwargs = {}
        parallel = Parallel(n_jobs=-1)
    740 
    741         if not task.__no_output__ and getattr(task, 'n_feature_prop', 0):
    742             self._propagate_features(task)
    743 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/layer.py in __call__(self=Layer(backend='threading', dtype=<class 'numpy.f...='sc', raise_on_exception=True)])],
   verbose=0), args={'auxiliary': {'P': None, 'X': array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])}, 'dir': [('sc.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('sc.0.2', <mlens.parallel.learner.IndexedEstimator object>)], 'job': 'transform', 'main': {'P': array([[2.60052562e-01, 1.77754706e-03, 5.693393... 1.20693236e-04, 9.98786032e-01]], dtype=float32), 'X': array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])}}, parallel=Parallel(n_jobs=-1))
    147         if self.verbose >= 2:
    148             safe_print(msg.format('Learners ...'), file=f, end=e2)
    149             t1 = time()
    150 
    151         parallel(delayed(sublearner, not _threading)()
--> 152                  for learner in self.learners
        self.learners = [Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None)]
    153                  for sublearner in learner(args, 'main'))
    154 
    155         if self.verbose >= 2:
    156             print_time(t1, 'done', file=f)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in __call__(self=Parallel(n_jobs=-1), iterable=<generator object Layer.__call__.<locals>.<genexpr>>)
    788             if pre_dispatch == "all" or n_jobs == 1:
    789                 # The iterable was consumed all at once by the above for loop.
    790                 # No need to wait for async callbacks to trigger to
    791                 # consumption.
    792                 self._iterating = False
--> 793             self.retrieve()
        self.retrieve = <bound method Parallel.retrieve of Parallel(n_jobs=-1)>
    794             # Make sure that we get a last message telling us we are done
    795             elapsed_time = time.time() - self._start_time
    796             self._print('Done %3i out of %3i | elapsed: %s finished',
    797                         (len(self._output), len(self._output),

---------------------------------------------------------------------------
Sub-process traceback:
---------------------------------------------------------------------------
ValueError                                         Tue Oct 11 15:54:29 2022
PID: 88346    Python 3.7.12: /home/bastian/.conda/envs/machine_learning/bin/python
...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in __call__(self=<mlens.externals.joblib.parallel.BatchedCalls object>)
    130     def __init__(self, iterator_slice):
    131         self.items = list(iterator_slice)
    132         self._size = len(self.items)
    133 
    134     def __call__(self):
--> 135         return [func(*args, **kwargs) for func, args, kwargs in self.items]
        self.items = [(<mlens.parallel.learner.SubLearner object>, (), {})]
    136 
    137     def __len__(self):
    138         return self._size
    139 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in <listcomp>(.0=<list_iterator object>)
    130     def __init__(self, iterator_slice):
    131         self.items = list(iterator_slice)
    132         self._size = len(self.items)
    133 
    134     def __call__(self):
--> 135         return [func(*args, **kwargs) for func, args, kwargs in self.items]
        func = <mlens.parallel.learner.SubLearner object>
        args = ()
        kwargs = {}
    136 
    137     def __len__(self):
    138         return self._size
    139 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in __call__(self=<mlens.parallel.learner.SubLearner object>)
    119         else:
    120             self.processing_index = ''
    121 
    122     def __call__(self):
    123         """Launch job"""
--> 124         return getattr(self, self.job)()
        self = <mlens.parallel.learner.SubLearner object>
        self.job = 'transform'
    125 
    126     def fit(self, path=None):
    127         """Fit sub-learner"""
    128         if path is None:

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in transform(self=<mlens.parallel.learner.SubLearner object>, path=None)
    162             f = "stdout" if self.verbose < 10 - 3 else "stderr"
    163             print_time(t0, msg, file=f)
    164 
    165     def transform(self, path=None):
    166         """Predict with sublearner"""
--> 167         return self.predict(path)
        self.predict = <bound method SubLearner.predict of <mlens.parallel.learner.SubLearner object>>
        path = None
    168 
    169     def _fit(self, transformers):
    170         """Sub-routine to fit sub-learner"""
    171         xtemp, ytemp = slice_array(self.in_array, self.targets, self.in_index)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in predict(self=<mlens.parallel.learner.SubLearner object>, path=[('sc.0.0', <mlens.parallel.learner.IndexedEstimator object>)])
    152     def predict(self, path=None):
    153         """Predict with sublearner"""
    154         if path is None:
    155             path = self.path
    156         t0 = time()
--> 157         transformers = self._load_preprocess(path)
        transformers = undefined
        self._load_preprocess = <bound method SubLearner._load_preprocess of <mlens.parallel.learner.SubLearner object>>
        path = [('sc.0.0', <mlens.parallel.learner.IndexedEstimator object>)]
    158 
    159         self._predict(transformers, False)
    160         if self.verbose:
    161             msg = "{:<30} {}".format(self.name_index, "done")

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in _load_preprocess(self=<mlens.parallel.learner.SubLearner object>, path=[('sc.0.0', <mlens.parallel.learner.IndexedEstimator object>)])
    180         self.fit_time_ = time() - t0
    181 
    182     def _load_preprocess(self, path):
    183         """Load preprocessing pipeline"""
    184         if self.preprocess is not None:
--> 185             obj = load(path, self.preprocess_index, self.raise_on_exception)
        obj = undefined
        path = [('sc.0.0', <mlens.parallel.learner.IndexedEstimator object>)]
        self.preprocess_index = 'sc.0.2'
        self.raise_on_exception = True
    186             return obj.estimator
    187         return
    188 
    189     def _predict(self, transformers, score_preds):

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/_base_functions.py in load(path=[('sc.0.0', <mlens.parallel.learner.IndexedEstimator object>)], name='sc.0.2', raise_on_exception=True)
     24         obj = _load(f, raise_on_exception)
     25     elif isinstance(path, list):
     26         obj = [tup[1] for tup in path if tup[0] == name]
     27         if not obj:
     28             raise ValueError(
---> 29                 "No preprocessing pipeline in cache. Auxiliary Transformer "
     30                 "have not cached pipelines, or cached to another sub-cache.")
     31         elif not len(obj) == 1:
     32             raise ValueError(
     33                 "Could not load unique preprocessing pipeline. "

ValueError: No preprocessing pipeline in cache. Auxiliary Transformer have not cached pipelines, or cached to another sub-cache.
___________________________________________________________________________
___________________________________________________________________________

During handling of the above exception, another exception occurred:

JoblibException                           Traceback (most recent call last)
/tmp/ipykernel_88346/411782330.py in <module>
----> 1 ens = mlens_opt.fit(x_train, y_train, x_test, y_test)

~/git/python/libs/training/ensemble/mlens_classifier.py in fit(self, x_train, y_train, x_test, y_test)
    177             preprocessing=preprocess,
    178             # TODO use parameter
--> 179             n_iter=2  # bump this up to do a larger grid search
    180         )
    181         print(pd.DataFrame(evl.results))

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/model_selection/model_selection.py in fit(self, X, y, estimators, param_dicts, n_iter, preprocessing)
    490         job = set_job(estimators, preprocessing)
    491         self._initialize(job, estimators, preprocessing, param_dicts, n_iter)
--> 492         self._fit(X, y, job)
    493         self._get_results()
    494         return self

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/model_selection/model_selection.py in _fit(self, X, y, job)
    178     def _fit(self, X, y, job):
    179         with ParallelEvaluation(self.backend, self.n_jobs) as manager:
--> 180             manager.process(self, job, X, y)
    181 
    182     def collect(self, path, case):

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/backend.py in process(self, caller, case, X, y, path, **kwargs)
    854 
    855             caller.indexer.fit(self.job.predict_in, self.job.targets, self.job.job)
--> 856             caller(parallel, self.job.args(**kwargs), case)

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/model_selection/model_selection.py in __call__(self, parallel, args, case)
    152                 t1 = time()
    153 
--> 154             self._run('estimators', parallel, args)
    155             self.collect(args['dir'], 'estimators')
    156 

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/model_selection/model_selection.py in _run(self, case, parallel, args)
    174 
    175         parallel(delayed(subtask, not _threading)()
--> 176                  for task in generator for subtask in task(args, inp))
    177 
    178     def _fit(self, X, y, job):

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in __call__(self, iterable)
    791                 # consumption.
    792                 self._iterating = False
--> 793             self.retrieve()
    794             # Make sure that we get a last message telling us we are done
    795             elapsed_time = time.time() - self._start_time

~/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in retrieve(self)
    742                     exception = exception_type(report)
    743 
--> 744                     raise exception
    745 
    746     def __call__(self, iterable):

JoblibException: JoblibException
___________________________________________________________________________
Multiprocessing exception:
...........................................................................
[... IPython kernel, tornado/asyncio event-loop, and IPython shell frames elided ...]
/tmp/ipykernel_88346/411782330.py in <module>()
----> 1 ens = mlens_opt.fit(x_train, y_train, x_test, y_test)

...........................................................................
/home/bastian/git/python/libs/training/ensemble/mlens_classifier.py in fit(self=MlensClassifier(tune_base_learners=False, tune_meta_learners=True, random_seed=1), x_train=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...8.847113  ,
         14.94697   ,   0.05878035]]), y_train=array([2, 2, 1, ..., 2, 2, 0]), x_test=array([[  7.1236587 ,   8.4514475 ,  -4.9251547 ...3.888027  ,
        -12.220552  ,   0.        ]]), y_test=array([3, 3, 2, 0, 3, 0, 3, 2, 0, 0, 1, 2, 0, 1,...1, 0, 1, 0,
       2, 3, 1, 1, 1, 1, 3, 2, 3, 1]))
    174             x_train, y_train,
    175             meta_learners,
    176             param_dicts=param_dicts,
    177             preprocessing=preprocess,
    178             # TODO use parameter
--> 179             n_iter=2  # bump this up to do a larger grid search
    180         )
    181         print(pd.DataFrame(evl.results))
    182         ens = self._generate_super_learner(evl, in_layer_proba, in_layer_class)
    183         ens.fit(x_train, y_train)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/model_selection/model_selection.py in fit(self=<mlens.model_selection.model_selection.Evaluator object>, X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...8.847113  ,
         14.94697   ,   0.05878035]]), y=array([2, 2, 1, ..., 2, 2, 0]), estimators=[('rf', RandomForestClassifier(random_state=1)), ('svc', SVC())], param_dicts={'rf': {'max_depth': <scipy.stats._distn_infrastructure.rv_frozen object>, 'max_features': <scipy.stats._distn_infrastructure.rv_frozen object>}, 'svc': {'C': <scipy.stats._distn_infrastructure.rv_frozen object>}}, n_iter=2, preprocessing={'class': [('layer-1', SuperLearner(array_check=None, backend=None, fol...scorer=None, shuffle=False,
       verbose=False))], 'proba': [('layer-1', SuperLearner(array_check=None, backend=None, fol...scorer=None, shuffle=False,
       verbose=False))]})
    487             class instance with stored estimator evaluation results in
    488             the ``results`` attribute.
    489         """
    490         job = set_job(estimators, preprocessing)
    491         self._initialize(job, estimators, preprocessing, param_dicts, n_iter)
--> 492         self._fit(X, y, job)
        self._fit = <bound method BaseEval._fit of <mlens.model_selection.model_selection.Evaluator object>>
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...8.847113  ,
         14.94697   ,   0.05878035]])
        y = array([2, 2, 1, ..., 2, 2, 0])
        job = 'preprocess-evaluate'
    493         self._get_results()
    494         return self
    495 
    496     def _initialize(self, job, estimators, preprocessing, param_dicts, n_iter):

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/model_selection/model_selection.py in _fit(self=<mlens.model_selection.model_selection.Evaluator object>, X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...8.847113  ,
         14.94697   ,   0.05878035]]), y=array([2, 2, 1, ..., 2, 2, 0]), job='preprocess-evaluate')
    175         parallel(delayed(subtask, not _threading)()
    176                  for task in generator for subtask in task(args, inp))
    177 
    178     def _fit(self, X, y, job):
    179         with ParallelEvaluation(self.backend, self.n_jobs) as manager:
--> 180             manager.process(self, job, X, y)
        manager.process = <bound method ParallelEvaluation.process of <mlens.parallel.backend.ParallelEvaluation object>>
        self = <mlens.model_selection.model_selection.Evaluator object>
        job = 'preprocess-evaluate'
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...8.847113  ,
         14.94697   ,   0.05878035]])
        y = array([2, 2, 1, ..., 2, 2, 0])
    181 
    182     def collect(self, path, case):
    183         """Collect cache estimators"""
    184         if case == 'transformers':

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/backend.py in process(self=<mlens.parallel.backend.ParallelEvaluation object>, caller=<mlens.model_selection.model_selection.Evaluator object>, case='preprocess-evaluate', X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...8.847113  ,
         14.94697   ,   0.05878035]]), y=array([2, 2, 1, ..., 2, 2, 0]), path=None, **kwargs={})
    851         with Parallel(n_jobs=self.n_jobs, temp_folder=tf, max_nbytes=None,
    852                       mmap_mode='w+', verbose=self.verbose,
    853                       backend=self.backend) as parallel:
    854 
    855             caller.indexer.fit(self.job.predict_in, self.job.targets, self.job.job)
--> 856             caller(parallel, self.job.args(**kwargs), case)
        caller = <mlens.model_selection.model_selection.Evaluator object>
        parallel = Parallel(n_jobs=-1)
        self.job.args = <bound method Job.args of <mlens.parallel.backend.Job object>>
        kwargs = {}
        case = 'preprocess-evaluate'

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/model_selection/model_selection.py in __call__(self=<mlens.model_selection.model_selection.Evaluator object>, parallel=Parallel(n_jobs=-1), args={'auxiliary': {'P': None, 'X': array([[ -0.7725007 ,   0.13594785,  -5.245197  ...8.847113  ,
         14.94697   ,   0.05878035]]), 'y': array([2, 2, 1, ..., 2, 2, 0])}, 'dir': [('class.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.svc.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.svc.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.svc.1.0.1', <mlens.parallel.learner.IndexedEstimator object>)], 'job': 'fit', 'main': {'P': None, 'X': array([[ -0.7725007 ,   0.13594785,  -5.245197  ...8.847113  ,
         14.94697   ,   0.05878035]]), 'y': array([2, 2, 1, ..., 2, 2, 0])}}, case='preprocess-evaluate')
    149         if 'evaluate' in case:
    150             if self.verbose >= 2:
    151                 safe_print(self._print_eval_start(), file=f)
    152                 t1 = time()
    153 
--> 154             self._run('estimators', parallel, args)
        self._run = <bound method BaseEval._run of <mlens.model_selection.model_selection.Evaluator object>>
        parallel = Parallel(n_jobs=-1)
        args = {'auxiliary': {'P': None, 'X': array([[ -0.7725007 ,   0.13594785,  -5.245197  ...8.847113  ,
         14.94697   ,   0.05878035]]), 'y': array([2, 2, 1, ..., 2, 2, 0])}, 'dir': [('class.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.svc.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.svc.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.svc.1.0.1', <mlens.parallel.learner.IndexedEstimator object>)], 'job': 'fit', 'main': {'P': None, 'X': array([[ -0.7725007 ,   0.13594785,  -5.245197  ...8.847113  ,
         14.94697   ,   0.05878035]]), 'y': array([2, 2, 1, ..., 2, 2, 0])}}
    155             self.collect(args['dir'], 'estimators')
    156 
    157             if self.verbose >= 2:
    158                 print_time(t1, '{:<13} done'.format('Evaluation'), file=f)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/model_selection/model_selection.py in _run(self=<mlens.model_selection.model_selection.Evaluator object>, case='estimators', parallel=Parallel(n_jobs=-1), args={'auxiliary': {'P': None, 'X': array([[ -0.7725007 ,   0.13594785,  -5.245197  ...8.847113  ,
         14.94697   ,   0.05878035]]), 'y': array([2, 2, 1, ..., 2, 2, 0])}, 'dir': [('class.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.svc.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.svc.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.svc.1.0.1', <mlens.parallel.learner.IndexedEstimator object>)], 'job': 'fit', 'main': {'P': None, 'X': array([[ -0.7725007 ,   0.13594785,  -5.245197  ...8.847113  ,
         14.94697   ,   0.05878035]]), 'y': array([2, 2, 1, ..., 2, 2, 0])}})
    171         else:
    172             generator = self._learners
    173             inp = 'main'
    174 
    175         parallel(delayed(subtask, not _threading)()
--> 176                  for task in generator for subtask in task(args, inp))
        generator = [EvalLearner(attr='predict', backend='threading',...   scorer=make_scorer(accuracy_score), verbose=6), EvalLearner(attr='predict', backend='threading',...   scorer=make_scorer(accuracy_score), verbose=6), EvalLearner(attr='predict', backend='threading',...   scorer=make_scorer(accuracy_score), verbose=6), EvalLearner(attr='predict', backend='threading',...   scorer=make_scorer(accuracy_score), verbose=6), EvalLearner(attr='predict', backend='threading',...   scorer=make_scorer(accuracy_score), verbose=6), EvalLearner(attr='predict', backend='threading',...   scorer=make_scorer(accuracy_score), verbose=6), EvalLearner(attr='predict', backend='threading',...   scorer=make_scorer(accuracy_score), verbose=6), EvalLearner(attr='predict', backend='threading',...   scorer=make_scorer(accuracy_score), verbose=6)]
        args = {'auxiliary': {'P': None, 'X': array([[ -0.7725007 ,   0.13594785,  -5.245197  ...8.847113  ,
         14.94697   ,   0.05878035]]), 'y': array([2, 2, 1, ..., 2, 2, 0])}, 'dir': [('class.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.svc.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.svc.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.svc.1.0.1', <mlens.parallel.learner.IndexedEstimator object>)], 'job': 'fit', 'main': {'P': None, 'X': array([[ -0.7725007 ,   0.13594785,  -5.245197  ...8.847113  ,
         14.94697   ,   0.05878035]]), 'y': array([2, 2, 1, ..., 2, 2, 0])}}
    177 
    178     def _fit(self, X, y, job):
    179         with ParallelEvaluation(self.backend, self.n_jobs) as manager:
    180             manager.process(self, job, X, y)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in __call__(self=Parallel(n_jobs=-1), iterable=<generator object BaseEval._run.<locals>.<genexpr>>)
    788             if pre_dispatch == "all" or n_jobs == 1:
    789                 # The iterable was consumed all at once by the above for loop.
    790                 # No need to wait for async callbacks to trigger to
    791                 # consumption.
    792                 self._iterating = False
--> 793             self.retrieve()
        self.retrieve = <bound method Parallel.retrieve of Parallel(n_jobs=-1)>
    794             # Make sure that we get a last message telling us we are done
    795             elapsed_time = time.time() - self._start_time
    796             self._print('Done %3i out of %3i | elapsed: %s finished',
    797                         (len(self._output), len(self._output),

---------------------------------------------------------------------------
Sub-process traceback:
---------------------------------------------------------------------------
JoblibValueError                                   Tue Oct 11 15:54:32 2022
PID: 88346    Python 3.7.12: /home/bastian/.conda/envs/machine_learning/bin/python
...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in __call__(self=<mlens.externals.joblib.parallel.BatchedCalls object>)
    130     def __init__(self, iterator_slice):
    131         self.items = list(iterator_slice)
    132         self._size = len(self.items)
    133 
    134     def __call__(self):
--> 135         return [func(*args, **kwargs) for func, args, kwargs in self.items]
        self.items = [(<mlens.parallel.learner.EvalSubLearner object>, (), {})]
    136 
    137     def __len__(self):
    138         return self._size
    139 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in <listcomp>(.0=<list_iterator object>)
    130     def __init__(self, iterator_slice):
    131         self.items = list(iterator_slice)
    132         self._size = len(self.items)
    133 
    134     def __call__(self):
--> 135         return [func(*args, **kwargs) for func, args, kwargs in self.items]
        func = <mlens.parallel.learner.EvalSubLearner object>
        args = ()
        kwargs = {}
    136 
    137     def __len__(self):
    138         return self._size
    139 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in __call__(self=<mlens.parallel.learner.EvalSubLearner object>)
    119         else:
    120             self.processing_index = ''
    121 
    122     def __call__(self):
    123         """Launch job"""
--> 124         return getattr(self, self.job)()
        self = <mlens.parallel.learner.EvalSubLearner object>
        self.job = 'fit'
    125 
    126     def fit(self, path=None):
    127         """Fit sub-learner"""
    128         if path is None:

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in fit(self=<mlens.parallel.learner.EvalSubLearner object>, path=[('class.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.svc.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.svc.0.0.2', <mlens.parallel.learner.IndexedEstimator object>)])
    332         if self.scorer is None:
    333             raise ValueError("Cannot generate CV-scores without a scorer")
    334         t0 = time()
    335         transformers = self._load_preprocess(path)
    336         self._fit(transformers)
--> 337         self._predict(transformers)
        self._predict = <bound method EvalSubLearner._predict of <mlens.parallel.learner.EvalSubLearner object>>
        transformers = Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True)
    338 
    339         o = IndexedEstimator(estimator=self.estimator,
    340                              name=self.name_index,
    341                              index=self.index,

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in _predict(self=<mlens.parallel.learner.EvalSubLearner object>, transformers=Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True), score_preds=None)
    351 
    352     def _predict(self, transformers, score_preds=None):
    353         """Sub-routine to with sublearner"""
    354         # Train set
    355         self.train_score_, self.train_pred_time_ = self._score_preds(
--> 356             transformers, self.in_index)
        transformers = Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True)
        self.in_index = ((0, 1899),)
    357 
    358         # Validation set
    359         self.test_score_, self.test_pred_time_ = self._score_preds(
    360             transformers, self.out_index)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in _score_preds(self=<mlens.parallel.learner.EvalSubLearner object>, transformers=Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True), index=((0, 1899),))
    361 
    362     def _score_preds(self, transformers, index):
    363         # Train scores
    364         xtemp, ytemp = slice_array(self.in_array, self.targets, index)
    365         if transformers:
--> 366             xtemp, ytemp = transformers.transform(xtemp, ytemp)
        xtemp = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        ytemp = array([2, 2, 1, ..., 2, 0, 3])
        transformers.transform = <bound method Pipeline.transform of Pipeline(nam...se,
       verbose=False))],
     return_y=True)>
    367 
    368         t0 = time()
    369 
    370         if self.error_score is not None:

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/handles.py in transform(self=Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True), X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=array([2, 2, 1, ..., 2, 0, 3]))
    129             Preprocessed input data
    130 
    131         y : array-like of shape [n_samples, ], optional
    132             Original or preprocessed targets, depending on the transformers.
    133         """
--> 134         return self._run(False, True, X, y)
        self._run = <bound method Pipeline._run of Pipeline(name='pi...se,
       verbose=False))],
     return_y=True)>
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        y = array([2, 2, 1, ..., 2, 0, 3])
    135 
    136     def fit_transform(self, X, y=None):
    137         """Fit and transform pipeline.
    138 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/handles.py in _run(self=Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True), fit=False, process=True, X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=array([2, 2, 1, ..., 2, 0, 3]))
     64         for tr_name, tr in self._pipeline:
     65             if fit:
     66                 tr.fit(X, y)
     67 
     68             if len(self._pipeline) > 1 or process:
---> 69                 X, y = transform(tr, X, y)
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        y = array([2, 2, 1, ..., 2, 0, 3])
        tr = SuperLearner(array_check=None, backend=None, fol...scorer=None, shuffle=False,
       verbose=False)
     70 
     71         if process:
     72             if self.return_y:
     73                 return X, y

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/_base_functions.py in transform(tr=SuperLearner(array_check=None, backend=None, fol...scorer=None, shuffle=False,
       verbose=False), x=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=array([2, 2, 1, ..., 2, 0, 3]))
    230 def transform(tr, x, y):
    231     """Try transforming with X and y. Else, transform with only X."""
    232     try:
    233         x = tr.transform(x)
    234     except TypeError:
--> 235         x, y = tr.transform(x, y)
        x = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        y = array([2, 2, 1, ..., 2, 0, 3])
        tr.transform = <bound method BaseEnsemble.transform of SuperLea...corer=None, shuffle=False,
       verbose=False)>
    236 
    237     return x, y
    238 
    239 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/ensemble/base.py in transform(self=SuperLearner(array_check=None, backend=None, fol...scorer=None, shuffle=False,
       verbose=False), X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=array([2, 2, 1, ..., 2, 0, 3]), **kwargs={})
    555                 return self.predict(X, **kwargs), y
    556 
    557             # Asked to reproduce predictions during fit, here we need to
    558             # account for that in model selection mode,
    559             # blend ensemble will cut X in observation size so need to adjust y
--> 560             X = self._backend.transform(X, **kwargs)
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        self._backend.transform = <bound method Sequential.transform of Sequential...n=True)])],
   verbose=0)],
      verbose=False)>
        kwargs = {}
    561             if X.shape[0] != y.shape[0]:
    562                 r = y.shape[0] - X.shape[0]
    563                 y = y[r:]
    564             return X, y

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/ensemble/base.py in transform(self=Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False), X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), **kwargs={})
    232         if not self.__fitted__:
    233             NotFittedError("Instance not fitted.")
    234 
    235         f, t0 = print_job(self, "Transforming")
    236 
--> 237         out = self._predict(X, 'transform', **kwargs)
        out = undefined
        self._predict = <bound method Sequential._predict of Sequential(...n=True)])],
   verbose=0)],
      verbose=False)>
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        kwargs = {}
    238 
    239         if self.verbose:
    240             print_time(t0, "{:<35}".format("Transform complete"),
    241                        file=f, flush=True)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/ensemble/base.py in _predict(self=Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False), X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), job='transform', **kwargs={})
    261             data.
    262         """
    263         r = kwargs.pop('return_preds', True)
    264         with ParallelProcessing(self.backend, self.n_jobs,
    265                                 max(self.verbose - 4, 0)) as manager:
--> 266             out = manager.stack(self, job, X, return_preds=r, **kwargs)
        out = undefined
        manager.stack = <bound method ParallelProcessing.stack of <mlens.parallel.backend.ParallelProcessing object>>
        self = Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False)
        job = 'transform'
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        r = True
        kwargs = {}
    267 
    268         if not isinstance(out, list):
    269             out = [out]
    270         out = [p.squeeze() for p in out]

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/backend.py in stack(self=<mlens.parallel.backend.ParallelProcessing object>, caller=Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False), job='transform', X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=None, path=None, return_preds=True, warm_start=False, split=True, **kwargs={})
    668             Prediction array(s).
    669         """
    670         out = self.initialize(
    671             job=job, X=X, y=y, path=path, warm_start=warm_start,
    672             return_preds=return_preds, split=split, stack=True)
--> 673         return self.process(caller=caller, out=out, **kwargs)
        self.process = <bound method ParallelProcessing.process of <mlens.parallel.backend.ParallelProcessing object>>
        caller = Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False)
        out = {}
        kwargs = {}
    674 
    675     def process(self, caller, out, **kwargs):
    676         """Process job.
    677 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/backend.py in process(self=<mlens.parallel.backend.ParallelProcessing object>, caller=Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False), out=None, **kwargs={})
    713                       backend=self.backend) as parallel:
    714 
    715             for task in caller:
    716                 self.job.clear()
    717 
--> 718                 self._partial_process(task, parallel, **kwargs)
        self._partial_process = <bound method ParallelProcessing._partial_proces...lens.parallel.backend.ParallelProcessing object>>
        task = Layer(backend='threading', dtype=<class 'numpy.f...='sc', raise_on_exception=True)])],
   verbose=0)
        parallel = Parallel(n_jobs=-1)
        kwargs = {}
    719 
    720                 if task.name in return_names:
    721                     out.append(self.get_preds(dtype=_dtype(task)))
    722 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/backend.py in _partial_process(self=<mlens.parallel.backend.ParallelProcessing object>, task=Layer(backend='threading', dtype=<class 'numpy.f...='sc', raise_on_exception=True)])],
   verbose=0), parallel=Parallel(n_jobs=-1), **kwargs={})
    734         task.setup(self.job.predict_in, self.job.targets, self.job.job)
    735 
    736         if not task.__no_output__:
    737             self._gen_prediction_array(task, self.job.job, self.__threading__)
    738 
--> 739         task(self.job.args(**kwargs), parallel=parallel)
        task = Layer(backend='threading', dtype=<class 'numpy.f...='sc', raise_on_exception=True)])],
   verbose=0)
        self.job.args = undefined
        kwargs = {}
        parallel = Parallel(n_jobs=-1)
    740 
    741         if not task.__no_output__ and getattr(task, 'n_feature_prop', 0):
    742             self._propagate_features(task)
    743 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/layer.py in __call__(self=Layer(backend='threading', dtype=<class 'numpy.f...='sc', raise_on_exception=True)])],
   verbose=0), args={'auxiliary': {'P': None, 'X': array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])}, 'dir': [('sc.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('sc.0.2', <mlens.parallel.learner.IndexedEstimator object>)], 'job': 'transform', 'main': {'P': array([[2.60052562e-01, 1.77754706e-03, 5.693393... 1.20693236e-04, 9.98786032e-01]], dtype=float32), 'X': array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])}}, parallel=Parallel(n_jobs=-1))
    147         if self.verbose >= 2:
    148             safe_print(msg.format('Learners ...'), file=f, end=e2)
    149             t1 = time()
    150 
    151         parallel(delayed(sublearner, not _threading)()
--> 152                  for learner in self.learners
        self.learners = [Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None)]
    153                  for sublearner in learner(args, 'main'))
    154 
    155         if self.verbose >= 2:
    156             print_time(t1, 'done', file=f)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in __call__(self=Parallel(n_jobs=-1), iterable=<generator object Layer.__call__.<locals>.<genexpr>>)
    788             if pre_dispatch == "all" or n_jobs == 1:
    789                 # The iterable was consumed all at once by the above for loop.
    790                 # No need to wait for async callbacks to trigger to
    791                 # consumption.
    792                 self._iterating = False
--> 793             self.retrieve()
        self.retrieve = <bound method Parallel.retrieve of Parallel(n_jobs=-1)>
    794             # Make sure that we get a last message telling us we are done
    795             elapsed_time = time.time() - self._start_time
    796             self._print('Done %3i out of %3i | elapsed: %s finished',
    797                         (len(self._output), len(self._output),

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in retrieve(self=Parallel(n_jobs=-1))
    739 %s""" % (this_report, exception.message)
    740                     # Convert this to a JoblibException
    741                     exception_type = _mk_exception(exception.etype)[0]
    742                     exception = exception_type(report)
    743 
--> 744                     raise exception
        exception = undefined
    745 
    746     def __call__(self, iterable):
    747         if self._jobs:
    748             raise ValueError('This Parallel instance is already running')

JoblibValueError: JoblibValueError
___________________________________________________________________________
Multiprocessing exception:
...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/threading.py in _bootstrap(self=<DummyProcess(Thread-404, started daemon 140600733968128)>)
    885         # indeed has already been destroyed, so that exceptions in
    886         # _bootstrap_inner() during normal business hours are properly
    887         # reported.  Also, we only suppress them for daemonic threads;
    888         # if a non-daemonic encounters this, something else is wrong.
    889         try:
--> 890             self._bootstrap_inner()
        self._bootstrap_inner = <bound method Thread._bootstrap_inner of <DummyProcess(Thread-404, started daemon 140600733968128)>>
    891         except:
    892             if self._daemonic and _sys is None:
    893                 return
    894             raise

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/threading.py in _bootstrap_inner(self=<DummyProcess(Thread-404, started daemon 140600733968128)>)
    921                 _sys.settrace(_trace_hook)
    922             if _profile_hook:
    923                 _sys.setprofile(_profile_hook)
    924 
    925             try:
--> 926                 self.run()
        self.run = <bound method Thread.run of <DummyProcess(Thread-404, started daemon 140600733968128)>>
    927             except SystemExit:
    928                 pass
    929             except:
    930                 # If sys.stderr is no more (most likely from interpreter

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/threading.py in run(self=<DummyProcess(Thread-404, started daemon 140600733968128)>)
    865         from the args and kwargs arguments, respectively.
    866 
    867         """
    868         try:
    869             if self._target:
--> 870                 self._target(*self._args, **self._kwargs)
        self._target = <function worker>
        self._args = (<_queue.SimpleQueue object>, <_queue.SimpleQueue object>, None, (), None, False)
        self._kwargs = {}
    871         finally:
    872             # Avoid a refcycle if the thread is running a function with
    873             # an argument that has a member that points to the thread.
    874             del self._target, self._args, self._kwargs

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/multiprocessing/pool.py in worker(inqueue=<_queue.SimpleQueue object>, outqueue=<_queue.SimpleQueue object>, initializer=None, initargs=(), maxtasks=None, wrap_exception=False)
    116             util.debug('worker got sentinel -- exiting')
    117             break
    118 
    119         job, i, func, args, kwds = task
    120         try:
--> 121             result = (True, func(*args, **kwds))
        result = None
        func = <mlens.externals.joblib._parallel_backends.SafeFunction object>
        args = ()
        kwds = {}
    122         except Exception as e:
    123             if wrap_exception and func is not _helper_reraises_exception:
    124                 e = ExceptionWithTraceback(e, e.__traceback__)
    125             result = (False, e)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/_parallel_backends.py in __call__(self=<mlens.externals.joblib._parallel_backends.SafeFunction object>, *args=(), **kwargs={})
    345     def __init__(self, func):
    346         self.func = func
    347 
    348     def __call__(self, *args, **kwargs):
    349         try:
--> 350             return self.func(*args, **kwargs)
        self.func = <mlens.externals.joblib.parallel.BatchedCalls object>
        args = ()
        kwargs = {}
    351         except KeyboardInterrupt:
    352             # We capture the KeyboardInterrupt and reraise it as
    353             # something different, as multiprocessing does not
    354             # interrupt processing for a KeyboardInterrupt

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in __call__(self=<mlens.externals.joblib.parallel.BatchedCalls object>)
    130     def __init__(self, iterator_slice):
    131         self.items = list(iterator_slice)
    132         self._size = len(self.items)
    133 
    134     def __call__(self):
--> 135         return [func(*args, **kwargs) for func, args, kwargs in self.items]
        self.items = [(<mlens.parallel.learner.EvalSubLearner object>, (), {})]
    136 
    137     def __len__(self):
    138         return self._size
    139 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in <listcomp>(.0=<list_iterator object>)
    130     def __init__(self, iterator_slice):
    131         self.items = list(iterator_slice)
    132         self._size = len(self.items)
    133 
    134     def __call__(self):
--> 135         return [func(*args, **kwargs) for func, args, kwargs in self.items]
        func = <mlens.parallel.learner.EvalSubLearner object>
        args = ()
        kwargs = {}
    136 
    137     def __len__(self):
    138         return self._size
    139 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in __call__(self=<mlens.parallel.learner.EvalSubLearner object>)
    119         else:
    120             self.processing_index = ''
    121 
    122     def __call__(self):
    123         """Launch job"""
--> 124         return getattr(self, self.job)()
        self = <mlens.parallel.learner.EvalSubLearner object>
        self.job = 'fit'
    125 
    126     def fit(self, path=None):
    127         """Fit sub-learner"""
    128         if path is None:

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in fit(self=<mlens.parallel.learner.EvalSubLearner object>, path=[('class.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.rf.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('class.svc.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.1.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.1.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.rf.0.0.2', <mlens.parallel.learner.IndexedEstimator object>), ('proba.svc.0.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('proba.svc.0.0.2', <mlens.parallel.learner.IndexedEstimator object>)])
    332         if self.scorer is None:
    333             raise ValueError("Cannot generate CV-scores without a scorer")
    334         t0 = time()
    335         transformers = self._load_preprocess(path)
    336         self._fit(transformers)
--> 337         self._predict(transformers)
        self._predict = <bound method EvalSubLearner._predict of <mlens.parallel.learner.EvalSubLearner object>>
        transformers = Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True)
    338 
    339         o = IndexedEstimator(estimator=self.estimator,
    340                              name=self.name_index,
    341                              index=self.index,

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in _predict(self=<mlens.parallel.learner.EvalSubLearner object>, transformers=Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True), score_preds=None)
    351 
    352     def _predict(self, transformers, score_preds=None):
    353         """Sub-routine to with sublearner"""
    354         # Train set
    355         self.train_score_, self.train_pred_time_ = self._score_preds(
--> 356             transformers, self.in_index)
        transformers = Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True)
        self.in_index = ((0, 1899),)
    357 
    358         # Validation set
    359         self.test_score_, self.test_pred_time_ = self._score_preds(
    360             transformers, self.out_index)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in _score_preds(self=<mlens.parallel.learner.EvalSubLearner object>, transformers=Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True), index=((0, 1899),))
    361 
    362     def _score_preds(self, transformers, index):
    363         # Train scores
    364         xtemp, ytemp = slice_array(self.in_array, self.targets, index)
    365         if transformers:
--> 366             xtemp, ytemp = transformers.transform(xtemp, ytemp)
        xtemp = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        ytemp = array([2, 2, 1, ..., 2, 0, 3])
        transformers.transform = <bound method Pipeline.transform of Pipeline(nam...se,
       verbose=False))],
     return_y=True)>
    367 
    368         t0 = time()
    369 
    370         if self.error_score is not None:

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/handles.py in transform(self=Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True), X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=array([2, 2, 1, ..., 2, 0, 3]))
    129             Preprocessed input data
    130 
    131         y : array-like of shape [n_samples, ], optional
    132             Original or preprocessed targets, depending on the transformers.
    133         """
--> 134         return self._run(False, True, X, y)
        self._run = <bound method Pipeline._run of Pipeline(name='pi...se,
       verbose=False))],
     return_y=True)>
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        y = array([2, 2, 1, ..., 2, 0, 3])
    135 
    136     def fit_transform(self, X, y=None):
    137         """Fit and transform pipeline.
    138 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/handles.py in _run(self=Pipeline(name='pipeline-11',
     pipeline=[('la...lse,
       verbose=False))],
     return_y=True), fit=False, process=True, X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=array([2, 2, 1, ..., 2, 0, 3]))
     64         for tr_name, tr in self._pipeline:
     65             if fit:
     66                 tr.fit(X, y)
     67 
     68             if len(self._pipeline) > 1 or process:
---> 69                 X, y = transform(tr, X, y)
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        y = array([2, 2, 1, ..., 2, 0, 3])
        tr = SuperLearner(array_check=None, backend=None, fol...scorer=None, shuffle=False,
       verbose=False)
     70 
     71         if process:
     72             if self.return_y:
     73                 return X, y

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/_base_functions.py in transform(tr=SuperLearner(array_check=None, backend=None, fol...scorer=None, shuffle=False,
       verbose=False), x=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=array([2, 2, 1, ..., 2, 0, 3]))
    230 def transform(tr, x, y):
    231     """Try transforming with X and y. Else, transform with only X."""
    232     try:
    233         x = tr.transform(x)
    234     except TypeError:
--> 235         x, y = tr.transform(x, y)
        x = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        y = array([2, 2, 1, ..., 2, 0, 3])
        tr.transform = <bound method BaseEnsemble.transform of SuperLea...corer=None, shuffle=False,
       verbose=False)>
    236 
    237     return x, y
    238 
    239 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/ensemble/base.py in transform(self=SuperLearner(array_check=None, backend=None, fol...scorer=None, shuffle=False,
       verbose=False), X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=array([2, 2, 1, ..., 2, 0, 3]), **kwargs={})
    555                 return self.predict(X, **kwargs), y
    556 
    557             # Asked to reproduce predictions during fit, here we need to
    558             # account for that in model selection mode,
    559             # blend ensemble will cut X in observation size so need to adjust y
--> 560             X = self._backend.transform(X, **kwargs)
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        self._backend.transform = <bound method Sequential.transform of Sequential...n=True)])],
   verbose=0)],
      verbose=False)>
        kwargs = {}
    561             if X.shape[0] != y.shape[0]:
    562                 r = y.shape[0] - X.shape[0]
    563                 y = y[r:]
    564             return X, y

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/ensemble/base.py in transform(self=Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False), X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), **kwargs={})
    232         if not self.__fitted__:
    233             NotFittedError("Instance not fitted.")
    234 
    235         f, t0 = print_job(self, "Transforming")
    236 
--> 237         out = self._predict(X, 'transform', **kwargs)
        out = undefined
        self._predict = <bound method Sequential._predict of Sequential(...n=True)])],
   verbose=0)],
      verbose=False)>
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        kwargs = {}
    238 
    239         if self.verbose:
    240             print_time(t0, "{:<35}".format("Transform complete"),
    241                        file=f, flush=True)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/ensemble/base.py in _predict(self=Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False), X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), job='transform', **kwargs={})
    261             data.
    262         """
    263         r = kwargs.pop('return_preds', True)
    264         with ParallelProcessing(self.backend, self.n_jobs,
    265                                 max(self.verbose - 4, 0)) as manager:
--> 266             out = manager.stack(self, job, X, return_preds=r, **kwargs)
        out = undefined
        manager.stack = <bound method ParallelProcessing.stack of <mlens.parallel.backend.ParallelProcessing object>>
        self = Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False)
        job = 'transform'
        X = array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])
        r = True
        kwargs = {}
    267 
    268         if not isinstance(out, list):
    269             out = [out]
    270         out = [p.squeeze() for p in out]

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/backend.py in stack(self=<mlens.parallel.backend.ParallelProcessing object>, caller=Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False), job='transform', X=array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]]), y=None, path=None, return_preds=True, warm_start=False, split=True, **kwargs={})
    668             Prediction array(s).
    669         """
    670         out = self.initialize(
    671             job=job, X=X, y=y, path=path, warm_start=warm_start,
    672             return_preds=return_preds, split=split, stack=True)
--> 673         return self.process(caller=caller, out=out, **kwargs)
        self.process = <bound method ParallelProcessing.process of <mlens.parallel.backend.ParallelProcessing object>>
        caller = Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False)
        out = {}
        kwargs = {}
    674 
    675     def process(self, caller, out, **kwargs):
    676         """Process job.
    677 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/backend.py in process(self=<mlens.parallel.backend.ParallelProcessing object>, caller=Sequential(backend='threading', dtype=<class 'nu...on=True)])],
   verbose=0)],
      verbose=False), out=None, **kwargs={})
    713                       backend=self.backend) as parallel:
    714 
    715             for task in caller:
    716                 self.job.clear()
    717 
--> 718                 self._partial_process(task, parallel, **kwargs)
        self._partial_process = <bound method ParallelProcessing._partial_proces...lens.parallel.backend.ParallelProcessing object>>
        task = Layer(backend='threading', dtype=<class 'numpy.f...='sc', raise_on_exception=True)])],
   verbose=0)
        parallel = Parallel(n_jobs=-1)
        kwargs = {}
    719 
    720                 if task.name in return_names:
    721                     out.append(self.get_preds(dtype=_dtype(task)))
    722 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/backend.py in _partial_process(self=<mlens.parallel.backend.ParallelProcessing object>, task=Layer(backend='threading', dtype=<class 'numpy.f...='sc', raise_on_exception=True)])],
   verbose=0), parallel=Parallel(n_jobs=-1), **kwargs={})
    734         task.setup(self.job.predict_in, self.job.targets, self.job.job)
    735 
    736         if not task.__no_output__:
    737             self._gen_prediction_array(task, self.job.job, self.__threading__)
    738 
--> 739         task(self.job.args(**kwargs), parallel=parallel)
        task = Layer(backend='threading', dtype=<class 'numpy.f...='sc', raise_on_exception=True)])],
   verbose=0)
        self.job.args = <bound method Job.args of <mlens.parallel.backend.Job object>>
        kwargs = {}
        parallel = Parallel(n_jobs=-1)
    740 
    741         if not task.__no_output__ and getattr(task, 'n_feature_prop', 0):
    742             self._propagate_features(task)
    743 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/layer.py in __call__(self=Layer(backend='threading', dtype=<class 'numpy.f...='sc', raise_on_exception=True)])],
   verbose=0), args={'auxiliary': {'P': None, 'X': array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])}, 'dir': [('sc.0.1', <mlens.parallel.learner.IndexedEstimator object>), ('sc.0.2', <mlens.parallel.learner.IndexedEstimator object>)], 'job': 'transform', 'main': {'P': array([[2.60052562e-01, 1.77754706e-03, 5.693393... 1.20693236e-04, 9.98786032e-01]], dtype=float32), 'X': array([[ -0.7725007 ,   0.13594785,  -5.245197  ...5.025976  ,
        -21.679056  ,   0.        ]])}}, parallel=Parallel(n_jobs=-1))
    147         if self.verbose >= 2:
    148             safe_print(msg.format('Learners ...'), file=f, end=e2)
    149             t1 = time()
    150 
    151         parallel(delayed(sublearner, not _threading)()
--> 152                  for learner in self.learners
        self.learners = [Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None), Learner(attr='predict_proba', backend='threading...a=True, raise_on_exception=True,
    scorer=None)]
    153                  for sublearner in learner(args, 'main'))
    154 
    155         if self.verbose >= 2:
    156             print_time(t1, 'done', file=f)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in __call__(self=Parallel(n_jobs=-1), iterable=<generator object Layer.__call__.<locals>.<genexpr>>)
    788             if pre_dispatch == "all" or n_jobs == 1:
    789                 # The iterable was consumed all at once by the above for loop.
    790                 # No need to wait for async callbacks to trigger to
    791                 # consumption.
    792                 self._iterating = False
--> 793             self.retrieve()
        self.retrieve = <bound method Parallel.retrieve of Parallel(n_jobs=-1)>
    794             # Make sure that we get a last message telling us we are done
    795             elapsed_time = time.time() - self._start_time
    796             self._print('Done %3i out of %3i | elapsed: %s finished',
    797                         (len(self._output), len(self._output),

---------------------------------------------------------------------------
Sub-process traceback:
---------------------------------------------------------------------------
ValueError                                         Tue Oct 11 15:54:29 2022
PID: 88346Python 3.7.12: /home/bastian/.conda/envs/machine_learning/bin/python
...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in __call__(self=<mlens.externals.joblib.parallel.BatchedCalls object>)
    130     def __init__(self, iterator_slice):
    131         self.items = list(iterator_slice)
    132         self._size = len(self.items)
    133 
    134     def __call__(self):
--> 135         return [func(*args, **kwargs) for func, args, kwargs in self.items]
        self.items = [(<mlens.parallel.learner.SubLearner object>, (), {})]
    136 
    137     def __len__(self):
    138         return self._size
    139 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/externals/joblib/parallel.py in <listcomp>(.0=<list_iterator object>)
    130     def __init__(self, iterator_slice):
    131         self.items = list(iterator_slice)
    132         self._size = len(self.items)
    133 
    134     def __call__(self):
--> 135         return [func(*args, **kwargs) for func, args, kwargs in self.items]
        func = <mlens.parallel.learner.SubLearner object>
        args = ()
        kwargs = {}
    136 
    137     def __len__(self):
    138         return self._size
    139 

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in __call__(self=<mlens.parallel.learner.SubLearner object>)
    119         else:
    120             self.processing_index = ''
    121 
    122     def __call__(self):
    123         """Launch job"""
--> 124         return getattr(self, self.job)()
        self = <mlens.parallel.learner.SubLearner object>
        self.job = 'transform'
    125 
    126     def fit(self, path=None):
    127         """Fit sub-learner"""
    128         if path is None:

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in transform(self=<mlens.parallel.learner.SubLearner object>, path=None)
    162             f = "stdout" if self.verbose < 10 - 3 else "stderr"
    163             print_time(t0, msg, file=f)
    164 
    165     def transform(self, path=None):
    166         """Predict with sublearner"""
--> 167         return self.predict(path)
        self.predict = <bound method SubLearner.predict of <mlens.parallel.learner.SubLearner object>>
        path = None
    168 
    169     def _fit(self, transformers):
    170         """Sub-routine to fit sub-learner"""
    171         xtemp, ytemp = slice_array(self.in_array, self.targets, self.in_index)

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in predict(self=<mlens.parallel.learner.SubLearner object>, path=[('sc.0.0', <mlens.parallel.learner.IndexedEstimator object>)])
    152     def predict(self, path=None):
    153         """Predict with sublearner"""
    154         if path is None:
    155             path = self.path
    156         t0 = time()
--> 157         transformers = self._load_preprocess(path)
        transformers = undefined
        self._load_preprocess = <bound method SubLearner._load_preprocess of <mlens.parallel.learner.SubLearner object>>
        path = [('sc.0.0', <mlens.parallel.learner.IndexedEstimator object>)]
    158 
    159         self._predict(transformers, False)
    160         if self.verbose:
    161             msg = "{:<30} {}".format(self.name_index, "done")

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/learner.py in _load_preprocess(self=<mlens.parallel.learner.SubLearner object>, path=[('sc.0.0', <mlens.parallel.learner.IndexedEstimator object>)])
    180         self.fit_time_ = time() - t0
    181 
    182     def _load_preprocess(self, path):
    183         """Load preprocessing pipeline"""
    184         if self.preprocess is not None:
--> 185             obj = load(path, self.preprocess_index, self.raise_on_exception)
        obj = undefined
        path = [('sc.0.0', <mlens.parallel.learner.IndexedEstimator object>)]
        self.preprocess_index = 'sc.0.2'
        self.raise_on_exception = True
    186             return obj.estimator
    187         return
    188 
    189     def _predict(self, transformers, score_preds):

...........................................................................
/home/bastian/.conda/envs/machine_learning/lib/python3.7/site-packages/mlens/parallel/_base_functions.py in load(path=[('sc.0.0', <mlens.parallel.learner.IndexedEstimator object>)], name='sc.0.2', raise_on_exception=True)
     24         obj = _load(f, raise_on_exception)
     25     elif isinstance(path, list):
     26         obj = [tup[1] for tup in path if tup[0] == name]
     27         if not obj:
     28             raise ValueError(
---> 29                 "No preprocessing pipeline in cache. Auxiliary Transformer "
     30                 "have not cached pipelines, or cached to another sub-cache.")
     31         elif not len(obj) == 1:
     32             raise ValueError(
     33                 "Could not load unique preprocessing pipeline. "

ValueError: No preprocessing pipeline in cache. Auxiliary Transformer have not cached pipelines, or cached to another sub-cache.
___________________________________________________________________________
___________________________________________________________________________

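For readers skimming the traceback: the final `ValueError` comes from mlens's cache lookup in `_base_functions.load`. The sub-learner asks for the preprocessing pipeline named `sc.0.2` (its `preprocess_index`), but the cache passed in only contains the entry `sc.0.0`, so the lookup comes up empty. A simplified, self-contained sketch of that lookup (the function name `load_from_cache` and the dummy cache values are illustrative, not mlens's actual internals) reproduces the failure:

```python
def load_from_cache(path, name):
    """Simplified sketch of the lookup in
    mlens/parallel/_base_functions.py::load: find a cached
    preprocessing pipeline by name in a list of (name, obj) tuples."""
    matches = [obj for key, obj in path if key == name]
    if not matches:
        raise ValueError(
            "No preprocessing pipeline in cache. Auxiliary Transformer "
            "have not cached pipelines, or cached to another sub-cache.")
    return matches[0]


# As in the traceback above: the cache holds only 'sc.0.0', but the
# sub-learner's preprocess_index is 'sc.0.2', so the lookup fails.
cache = [('sc.0.0', object())]
try:
    load_from_cache(cache, 'sc.0.2')
except ValueError as err:
    print(err)
```

So the error indicates a mismatch between the fold index the sub-learner expects and the fold indices under which the preprocessing pipelines were actually cached, rather than a problem with the estimators themselves.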
@bastian-f (Author)
I don't know if it helps, but if I pass an empty dictionary as param_dicts, I get warnings like this:

UserWarning: No valid parameters found for class.svc. Will fit and score once with given parameter settings.

But then it runs. Of course, the meta-learners are not tuned in that case.
