[scikit-learn] Tikhonov regularization

Michael Eickenberg michael.eickenberg at gmail.com
Tue Aug 11 11:23:14 EDT 2020


Hi David,

I am assuming you mean that T acts on w.
If T is invertible, you can absorb it into the design matrix by making a
change of variables v = Tw, w = T^-1 v, and using standard ridge regression for v.
If T is not invertible (e.g. when T is a standard finite-difference derivative
operator), this trick won't work.
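
For example, a minimal sketch of the change-of-variables route could look like
the following (the random X, y and the invertible T here are just placeholders
for illustration):

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.RandomState(0)
    X = rng.randn(50, 5)
    y = rng.randn(50)
    T = np.eye(5) + 0.1 * rng.randn(5, 5)  # illustrative square, invertible T

    # v = T w, so X w = (X T^-1) v and ||T w||^2 = ||v||^2
    T_inv = np.linalg.inv(T)
    ridge = Ridge(alpha=1.0, fit_intercept=False).fit(X @ T_inv, y)
    w = T_inv @ ridge.coef_  # map the ridge solution for v back to w
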
A second thing you can do is to fit standard linear regression on the
augmented data matrix vstack([X, factor * T]) and the augmented target
concatenate([y, np.zeros(T.shape[0])]).
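
A rough sketch of that augmentation, again with made-up X and y and a
finite-difference T (the factor just sets the strength of the penalty,
playing the role of sqrt(alpha)):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.RandomState(0)
    X = rng.randn(50, 5)
    y = rng.randn(50)
    T = np.diff(np.eye(5), axis=0)  # first-order finite differences on w
    factor = 1.0

    X_aug = np.vstack([X, factor * T])
    y_aug = np.concatenate([y, np.zeros(T.shape[0])])
    # fit_intercept=False keeps the intercept out of the penalty rows
    w = LinearRegression(fit_intercept=False).fit(X_aug, y_aug).coef_

The augmented least-squares problem then minimizes
||y - Xw||^2 + factor^2 * ||Tw||^2.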

At worst, you can compute the gradient of your loss function, X^T(Xw - y) +
T^T T w, and run gradient descent yourself, or solve the normal equations
directly: w = (X^T X + T^T T)^{-1} X^T y.
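
In code the closed-form version is just one linear solve (same illustrative
X, y and T as above):

    import numpy as np

    rng = np.random.RandomState(0)
    X = rng.randn(50, 5)
    y = rng.randn(50)
    T = np.diff(np.eye(5), axis=0)

    # normal equations for ||y - X w||^2 + ||T w||^2
    w = np.linalg.solve(X.T @ X + T.T @ T, X.T @ y)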

Hope this helps

Michael

On Mon, Aug 10, 2020 at 11:39 PM David Kleiven <davidkleiven446 at gmail.com>
wrote:

> Hi,
>
> I was looking at docs for Ridge regression and it states that it minimizes
>
> ||y - Xw||^2 + alpha*||w||^2
>
> I would like to minimize the function
>
> ||y-Xw||^2 + ||Tx||^2, where T is a matrix, in order to impose certain
> properties on the solution vectors, but I haven't found any way to achieve
> that in scikit-learn. Is this type of regularisation supported in
> scikit-learn?
>
> More details on the ||Tx||^2 regularisation can be found here
>
> https://en.wikipedia.org/wiki/Tikhonov_regularization
>
> Best,
> David