Estimation of Non-Normalized Statistical Models by Score Matching


Hyv"{a}rinen A. Estimation of Non-Normalized Statistical Models by Score Matching. Journal of Machine Learning Research, 2005.

We often want to model probability distributions of the following form:

\[p(\xi;\theta) = \frac{1}{Z(\theta)} q(\xi; \theta). \]

Energy-based models are a typical example.
In general, such models are hard to fit by maximum likelihood, because the normalizing constant

\[Z(\theta) = \int_{\xi} q(\xi;\theta) \mathrm{d}\xi, \]

is often hard to estimate (especially in the high-dimensional case, where even MCMC becomes prohibitively expensive).
It would therefore be ideal to estimate the parameters without dealing with \(Z(\theta)\) at all, and this paper proposes exactly such a method (although it requires second derivatives, so solving with gradient-based methods involves third-order partial derivatives).

I notice that this author is also one of the authors of noise-contrastive estimation (negative sampling).

Main Content

Method

Define the score function of the model as

\[\psi(\xi;\theta) = \left( \begin{array}{c} \frac{\partial \log p(\xi;\theta)}{\partial \xi_1} \\ \vdots \\ \frac{\partial \log p(\xi;\theta)}{\partial \xi_n} \end{array} \right) = \left( \begin{array}{c} \psi_1(\xi;\theta) \\ \vdots \\ \psi_n(\xi;\theta) \end{array} \right) = \nabla_{\xi} \log p(\xi;\theta), \]

and let

\[\psi_x(\xi) = \nabla_{\xi} \log p_x(\xi), \]

where \(p_x(\xi)\) denotes the true distribution of the data.

Minimizing the following loss guarantees that \(p(\xi;\theta)\) approaches \(p_x(\xi)\):

\[J(\theta) = \frac{1}{2} \int_{\xi \in \mathbb{R}^n} p_x(\xi) \| \psi(\xi;\theta) - \psi_{x}(\xi) \|^2 \mathrm{d}\xi. \]
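For intuition, \(J(\theta)\) can be estimated by Monte Carlo whenever both scores are known in closed form. Below is a minimal sketch under a toy setup of my own (not from the paper): data from \(N(0,1)\), model \(N(0, 1/\theta)\), so \(\psi_x(\xi) = -\xi\) and \(\psi(\xi;\theta) = -\theta\xi\); the names `psi_data` and `psi_model` are mine.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)   # samples from the true distribution N(0, 1)

def psi_data(xi):
    return -xi                 # score of N(0, 1)

def psi_model(xi, theta):
    return -theta * xi         # score of the model N(0, 1/theta)

# Monte Carlo estimate of J(theta) = 1/2 * E_{p_x} || psi_model - psi_data ||^2.
for theta in (0.5, 1.0, 2.0):
    J = 0.5 * np.mean((psi_model(x, theta) - psi_data(x)) ** 2)
    print(f"theta={theta}: J ~ {J:.4f}")
```

Analytically \(J(\theta) = \frac{1}{2}(\theta - 1)^2\) in this setup, so the estimate is minimized at \(\theta = 1\), i.e. exactly when the model matches \(p_x\).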

Transforming the Loss Function

Clearly,

\[\psi_x(\xi) = \nabla_{\xi} \log p_x(\xi), \]

involves the true distribution and is therefore not directly computable; however, after transforming the loss function, we find that it has no essential dependence on the true distribution. Note that

\[\nabla_{\xi} \log p_x(\xi) = \frac{\nabla_{\xi} p_x(\xi)}{p_x(\xi)}, \\ \psi(\xi;\theta) = \nabla_{\xi} \log p(\xi;\theta) = \nabla_{\xi} \log q (\xi;\theta). \]

Expanding the squared norm,

\[\| \psi(\xi;\theta) - \psi_{x}(\xi) \|^2 =\|\psi(\xi;\theta)\|^2 - 2\psi^T(\xi;\theta) \psi_x(\xi) + \|\psi_x(\xi)\|^2. \]

The first term does not involve \(p_x\), and the last term does not depend on \(\theta\), so only the cross term needs to be considered:

\[\psi^T(\xi;\theta)\psi_x(\xi) = \sum_{i=1}^n \psi_{i}\psi_{x,i} = \sum_{i=1}^n \psi_{i}\frac{1}{p_x(\xi)} \frac{\partial p_x(\xi)}{\partial \xi_i}, \]

\[\begin{array}{ll} \int p_x(\xi) \psi^T(\xi;\theta)\psi_x(\xi) \mathrm{d}\xi &=\int \sum_{i=1}^n \psi_{i}\frac{\partial p_x(\xi)}{\partial \xi_i} \mathrm{d}\xi \\ &=\sum_{i=1}^n \int \psi_{i}\frac{\partial p_x(\xi)}{\partial \xi_i} \mathrm{d}\xi \\ &=\sum_{i=1}^n \int \psi_{i}p_x(\xi)\Big|_{\xi_i=-\infty}^{\xi_i=+\infty} \mathrm{d}\xi_{\setminus i} - \int p_x(\xi) \frac{\partial \psi_i}{\partial \xi_i} \mathrm{d}\xi \\ &=-\sum_{i=1}^n \int p_x(\xi) \frac{\partial \psi_i}{\partial \xi_i} \mathrm{d}\xi, \end{array} \]

where the third step is integration by parts in \(\xi_i\), and the boundary term vanishes by condition 2 below.

Hence:

\[J(\theta) = \int_{\xi} p_x(\xi) \sum_{i=1}^n \left[\frac{1}{2}\left(\frac{\partial \log q(\xi;\theta)}{\partial \xi_i}\right)^2+ \frac{\partial^2 \log q(\xi;\theta)}{\partial \xi_i^2}\right] \mathrm{d}\xi + \text{ const }. \]

Thus, given samples \(x(1), \dots, x(T)\), we can approximate \(J\) with the following empirical loss (a numerical sketch appears after the note below):

\[\hat{J}(\theta) = \frac{1}{T}\sum_{t=1}^T \sum_{i=1}^n \Big[\partial_i \psi_i(x(t); \theta) + \frac{1}{2} \psi_i(x(t);\theta)^2\Big]. \]

Note: the derivation above relies on the following conditions:

  1. \(p_x(\xi)\) and \(\psi(\xi;\theta)\) are differentiable;
  2. \(p_x(\xi) \psi(\xi;\theta) \rightarrow 0\) as \(\|\xi\| \rightarrow +\infty\).
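As a numerical sanity check of \(\hat{J}\) (my own sketch, not code from the paper), consider the unnormalized 1-D model \(q(\xi;\theta) = \exp(-\theta \xi^2 / 2)\) with \(\theta > 0\), for which \(\psi(\xi;\theta) = -\theta\xi\) and \(\partial_1 \psi_1(\xi;\theta) = -\theta\). The normalizer \(Z(\theta)\) never appears:

```python
import numpy as np

rng = np.random.default_rng(0)

# Data from N(0, 4), so the true precision is theta = 1/4.
x = rng.normal(scale=2.0, size=50_000)

def J_hat(theta, x):
    """Empirical score-matching loss for q(xi; theta) = exp(-theta*xi^2/2)."""
    psi = -theta * x                      # psi(x(t); theta)
    dpsi = -theta * np.ones_like(x)       # d psi / d xi, constant here
    return np.mean(dpsi + 0.5 * psi ** 2)

# Minimize hat{J} over a grid; no partition function is ever computed.
thetas = np.linspace(0.05, 1.0, 200)
theta_star = thetas[np.argmin([J_hat(t, x) for t in thetas])]
print(theta_star, 1.0 / np.mean(x ** 2))  # both close to 0.25
```

Here \(\hat{J}(\theta) = -\theta + \frac{1}{2}\theta^2 \cdot \frac{1}{T}\sum_t x(t)^2\), whose minimizer is exactly the sample precision \(1 / \big(\frac{1}{T}\sum_t x(t)^2\big)\).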

An Example

Consider the multivariate normal distribution:

\[p(x;\mu, M) = \frac{1}{Z(\mu, M)} \exp \left(-\frac{1}{2}(x-\mu)^T M(x-\mu)\right), \]

In this case \(\hat{J}\) has a closed-form solution, which is exactly:

\[\mu^* = \frac{1}{T}\sum_{t=1}^T x(t), \\ M^* = [\frac{1}{T}\sum_{t=1}^T (x(t) - \mu^*) (x(t) - \mu^*)^T]^{-1}, \]

which coincides with the maximum likelihood solution.
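The closed-form solution is easy to verify numerically. A minimal sketch (the ground-truth parameters below are made up for illustration) draws samples from a 2-D Gaussian and checks that \(\mu^*\) and \(M^*\) recover the true mean and precision matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up ground truth: mean mu and precision matrix M for a 2-D Gaussian.
mu_true = np.array([1.0, -2.0])
M_true = np.array([[2.0, 0.5],
                   [0.5, 1.0]])
x = rng.multivariate_normal(mu_true, np.linalg.inv(M_true), size=100_000)

# Score-matching estimates from the closed-form solution above
# (identical to the maximum likelihood estimates).
mu_star = x.mean(axis=0)
centered = x - mu_star
M_star = np.linalg.inv(centered.T @ centered / len(x))

print(mu_star)   # ~ [ 1, -2]
print(M_star)    # ~ M_true
```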
