Given a source and a target probability measure supported on $\mathbb{R}^d$, the Monge problem aims for the most efficient way to map one distribution to the other.
This efficiency is quantified by defining a cost function between source and target data.
Such a cost is often set by default in the machine learning literature to the squared-Euclidean distance, $\ell_2^2(x,y)=\tfrac{1}{2}\|x-y\|_2^2$.
The benefits of using elastic costs, defined through a regularizer $\tau$ as $c(x,y)=\ell_2^2(x,y)+\tau(x-y)$, were…
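
Since the sentence above is cut off, the following is only a minimal sketch of the two costs it defines. It uses a scaled $\ell_1$ norm as a hypothetical choice of regularizer $\tau$; that choice, the weight `gamma`, and all names are illustrative assumptions, not taken from the article.

```python
import numpy as np

def sq_euclidean(x, y):
    """Squared-Euclidean cost: l2^2(x, y) = 0.5 * ||x - y||_2^2."""
    return 0.5 * np.sum((x - y) ** 2)

def elastic_cost(x, y, tau):
    """Elastic cost: c(x, y) = l2^2(x, y) + tau(x - y),
    where tau is a regularizer applied to the displacement x - y."""
    return sq_euclidean(x, y) + tau(x - y)

# Hypothetical regularizer: a scaled l1 norm on the displacement,
# a common choice for encouraging sparse displacements.
gamma = 0.5
tau_l1 = lambda z: gamma * np.sum(np.abs(z))

x = np.array([1.0, 0.0, 2.0])
y = np.array([0.0, 0.0, 2.5])
print(elastic_cost(x, y, tau_l1))  # l2^2 term plus gamma * ||x - y||_1
```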
