Random feature

From Wikipedia, the free encyclopedia

Random features (RF) are a technique used in machine learning to approximate kernel methods. They were introduced in Ali Rahimi and Ben Recht's 2007 paper "Random Features for Large-Scale Kernel Machines",[1] and later extended.[2][3] RF approximates kernel functions by a Monte Carlo average over randomly sampled feature maps. It is used for datasets that are too large for traditional kernel methods such as support vector machines, kernel ridge regression, and Gaussian processes.

Mathematics

Kernel method

Given a feature map $\phi : \mathbb{R}^d \to V$, where $V$ is a Hilbert space (more specifically, a reproducing kernel Hilbert space), the kernel trick replaces inner products in feature space $\langle \phi(x_i), \phi(x_j) \rangle_V$ by a kernel function

$$k(x_i, x_j) : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}.$$

Kernel methods replace linear operations in the high-dimensional space by operations on the kernel matrix

$$K_X := [k(x_i, x_j)]_{i,j \in 1:N},$$

where $N$ is the number of data points.

Random kernel method

The problem with kernel methods is that the kernel matrix $K_X$ has size $N \times N$. This becomes computationally infeasible when $N$ reaches the order of a million. The random kernel method replaces the kernel function $k$ by an inner product in a low-dimensional feature space $\mathbb{R}^D$:

$$k(x, y) \approx \langle z(x), z(y) \rangle,$$

where $z : \mathbb{R}^d \to \mathbb{R}^D$ is a randomly sampled feature map.

This converts kernel linear regression into linear regression in feature space, kernel SVM into SVM in feature space, and so on. Since $K_X \approx Z_X^T Z_X$, where $Z_X = [z(x_1), \dots, z(x_N)]$, these methods no longer involve matrices of size $O(N^2)$, but only random feature matrices of size $O(DN)$.
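As a sketch of the computational gain, the following NumPy example replaces an $N \times N$ kernel solve for ridge regression by a $2D \times 2D$ solve in feature space. All sizes, the bandwidth $\sigma = 1$, and the ridge parameter are illustrative assumptions; the feature map used is the random Fourier map constructed in the next section.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: N data points in d dimensions (sizes are illustrative).
N, d, D = 500, 5, 200
X = rng.normal(size=(N, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=N)

# A randomly sampled feature map z: here the random Fourier features for
# an RBF kernel with sigma = 1, as constructed later in this article.
omega = rng.normal(size=(D, d))

def z(X):
    proj = X @ omega.T  # inner products <omega_i, x> for each sample
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1) / np.sqrt(D)

ZX = z(X)  # N x 2D feature matrix Z_X

# Kernel ridge regression becomes ordinary ridge regression on Z_X:
# solve a (2D x 2D) system instead of an (N x N) kernel system.
lam = 1e-3
w = np.linalg.solve(ZX.T @ ZX + lam * np.eye(2 * D), ZX.T @ y)
pred = ZX @ w
```

The dominant cost drops from $O(N^3)$ for the exact kernel solve to $O(N D^2 + D^3)$ here, which is the point of the approximation when $D \ll N$.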

Random Fourier feature

Radial basis function kernel

The radial basis function (RBF) kernel on two samples $x_i, x_j \in \mathbb{R}^d$ is defined as[4]

$$k(x_i, x_j) = \exp\left(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\right),$$

where $\|x_i - x_j\|^2$ is the squared Euclidean distance and $\sigma$ is a free parameter defining the shape of the kernel. It can be approximated by a random Fourier feature map $z : \mathbb{R}^d \to \mathbb{R}^{2D}$:

$$z(x) := \frac{1}{\sqrt{D}}\left[\cos\langle\omega_1, x\rangle, \sin\langle\omega_1, x\rangle, \dots, \cos\langle\omega_D, x\rangle, \sin\langle\omega_D, x\rangle\right]^T,$$

where $\omega_1, \dots, \omega_D$ are IID samples from the multidimensional normal distribution $N(0, \sigma^{-2} I)$.
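A minimal NumPy check of this construction ($d$, $D$, and $\sigma = 1$ are arbitrary choices), comparing the random feature inner product to the exact RBF kernel value:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, sigma = 3, 5000, 1.0

# omega_i ~ N(0, sigma^{-2} I): random frequencies for the RBF kernel.
omega = rng.normal(scale=1.0 / sigma, size=(D, d))

def z(x):
    proj = omega @ x  # <omega_i, x> for i = 1..D
    return np.concatenate([np.cos(proj), np.sin(proj)]) / np.sqrt(D)

x1, x2 = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-np.sum((x1 - x2) ** 2) / (2 * sigma**2))
approx = z(x1) @ z(x2)  # Monte Carlo estimate of k(x1, x2)
```

With $D = 5000$ the Monte Carlo error is typically on the order of $1/\sqrt{D} \approx 0.014$.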

Theorem:

  1. (Unbiased estimation) $\mathbb{E}[\langle z(x), z(y)\rangle] = e^{-\|x - y\|^2/(2\sigma^2)}$.
  2. (Variance bound) $\mathrm{Var}[\langle z(x), z(y)\rangle] = O(D^{-1})$.
  3. (Convergence) As $D \to \infty$, the approximation converges in probability to the true kernel.
Proof

(Unbiased estimation) By independence of $\omega_1, \dots, \omega_D$, it suffices to prove the case $D = 1$. By the trigonometric identity $\cos(a - b) = \cos(a)\cos(b) + \sin(a)\sin(b)$,

$$\langle z(x), z(y)\rangle = \frac{1}{D}\sum_{i=1}^{D} \cos\langle\omega_i, x - y\rangle.$$

Apply the spherical symmetry of the normal distribution, then evaluate the integral

$$\int \cos(kx)\,\frac{e^{-x^2/2}}{\sqrt{2\pi}}\,dx = e^{-k^2/2}.$$

(Variance bound) Since $\omega_1, \dots, \omega_D$ are IID, it suffices to prove that the variance of $\cos\langle\omega_1, x - y\rangle$ is finite, which is true since it is bounded within $[-1, +1]$.

(Convergence) By Chebyshev's inequality.

Since $\cos$ and $\sin$ are bounded, there is a stronger convergence guarantee by Hoeffding's inequality.[1]: Claim 1
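The $O(D^{-1})$ variance bound can be checked empirically. The sketch below (sizes and repetition counts are arbitrary) uses the identity from the proof, estimating the kernel by an average of $\cos\langle\omega_i, x - y\rangle$, and compares the spread of the estimator at two values of $D$:

```python
import numpy as np

rng = np.random.default_rng(1)
d, sigma = 2, 1.0
x1, x2 = rng.normal(size=d), rng.normal(size=d)
delta = x1 - x2

def rff_estimate(D, seed):
    # By the trigonometric identity, <z(x), z(y)> = mean_i cos<omega_i, x - y>.
    omega = np.random.default_rng(seed).normal(scale=1.0 / sigma, size=(D, d))
    return np.mean(np.cos(omega @ delta))

# The standard deviation over independent draws of omega should shrink
# roughly like D^{-1/2}, matching the O(1/D) variance bound.
std_small = np.std([rff_estimate(10, s) for s in range(200)])
std_large = np.std([rff_estimate(1000, s) for s in range(200)])
```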

Random Fourier features

By Bochner's theorem, the above construction can be generalized to an arbitrary positive-definite shift-invariant kernel $k(x, y) = k(x - y)$.

Define its Fourier transform

$$p(\omega) = \frac{1}{(2\pi)^d}\int e^{-j\langle\omega, \Delta\rangle} k(\Delta)\,d\Delta,$$

then $\omega_1, \dots, \omega_D$ are sampled IID from the probability distribution with density $p$. This applies to other kernels such as the Laplacian kernel and the Cauchy kernel.
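For instance, the Laplacian kernel $k(x, y) = e^{-\|x - y\|_1}$ has a product-of-Cauchy spectral density, so its random frequencies can be drawn coordinate-wise from a standard Cauchy distribution. A small NumPy sketch (the dimensions, sample count, and test points are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 2, 20000

# For the Laplacian kernel exp(-||x - y||_1), Bochner's theorem gives a
# product-of-Cauchy spectral density, so each coordinate of omega is a
# standard Cauchy sample.
omega = rng.standard_cauchy(size=(D, d))

def z(x):
    proj = omega @ x
    return np.concatenate([np.cos(proj), np.sin(proj)]) / np.sqrt(D)

x1, x2 = np.array([0.3, -0.2]), np.array([0.1, 0.4])
exact = np.exp(-np.sum(np.abs(x1 - x2)))  # exp(-0.8)
approx = z(x1) @ z(x2)
```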

Neural network interpretation

Given a random Fourier feature map $z$, training the feature on a dataset by featurized linear regression is equivalent to fitting complex parameters $\theta_1, \dots, \theta_D$ such that

$$f_\theta(x) = \mathrm{Re}\left(\sum_k \theta_k e^{i\langle\omega_k, x\rangle}\right),$$

which is a neural network with a single hidden layer, with activation function $t \mapsto e^{it}$, zero bias, and the parameters in the first layer frozen.

In the overparameterized case, when $2D \geq N$, the network can linearly interpolate the dataset $\{(x_i, y_i)\}_{i \in 1:N}$, and the network parameters are the least-norm solution:

$$\hat\theta = \operatorname{argmin}_{\theta \in \mathbb{C}^D,\; f_\theta(x_k) = y_k \,\forall k \in 1:N} \|\theta\|.$$

In the limit $D \to \infty$, the $L^2$ norm $\|\hat\theta\|$ converges to $\|f_K\|_H$, where $f_K$ is the interpolating function obtained by kernel regression with the original kernel, and $\|\cdot\|_H$ is the norm of the reproducing kernel Hilbert space for the kernel.[5]

Other examples

Random binning features

A random binning feature map partitions the input space using randomly shifted grids at randomly chosen resolutions and assigns to an input point a binary bit string that corresponds to the bins in which it falls. The grids are constructed so that the probability that two points $x_i, x_j \in \mathbb{R}^d$ are assigned to the same bin is proportional to $k(x_i, x_j)$. The inner product between a pair of transformed points is proportional to the number of times the two points are binned together, and is therefore an unbiased estimate of $k(x_i, x_j)$. Since this mapping is not smooth and is sensitive to the proximity between input points, random binning features work well for approximating kernels that depend only on the $L^1$ distance between data points.
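A one-dimensional sketch of the key property (the grid pitch $\delta$ and the number of grids are fixed here for simplicity; the full construction also randomizes the pitch to match a target kernel): under a uniformly random shift, two points fall in the same bin with probability $\max(0, 1 - |x - y|/\delta)$, the "hat" kernel.

```python
import numpy as np

rng = np.random.default_rng(0)
P, delta = 20000, 1.0  # number of random grids, fixed grid pitch
x1, x2 = 0.2, 0.5

# Each grid is shifted by u ~ Uniform(0, delta); a point x falls in bin
# floor((x - u) / delta). Averaging bin-collision indicators over many
# grids estimates the hat kernel max(0, 1 - |x1 - x2| / delta).
u = rng.uniform(0, delta, size=P)
same_bin = np.floor((x1 - u) / delta) == np.floor((x2 - u) / delta)
estimate = same_bin.mean()
exact = max(0.0, 1.0 - abs(x1 - x2) / delta)
```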

Orthogonal random features

Orthogonal random features[6] use a random orthogonal matrix instead of the unstructured random Gaussian matrix used in random Fourier features, which reduces the approximation error.
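A sketch of the construction in the square case (the size $d = 8$ is arbitrary; the row rescaling follows my understanding of the orthogonal random features paper): orthogonalize a Gaussian matrix by QR decomposition, then rescale the rows by chi-distributed norms so their lengths match those of Gaussian frequency vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

G = rng.normal(size=(d, d))
Q, _ = np.linalg.qr(G)                    # Q has orthonormal rows/columns
S = np.sqrt(rng.chisquare(df=d, size=d))  # norms distributed like Gaussian rows
W = S[:, None] * Q                        # d mutually orthogonal frequencies

# The rows of W replace the IID Gaussian frequencies omega_i in the
# random Fourier feature map z.
```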

Historical context

At NIPS 2006, deep learning had just become competitive with linear models like PCA and linear SVMs on large datasets, and researchers speculated about whether it could also compete with kernel SVMs. However, there was no efficient way to train kernel SVMs on large datasets, so the two authors developed the random feature method to make this possible.

It was then found that the $O(D^{-1})$ variance bound did not match practice: the bound predicts that approximation to within 0.01 requires $D \approx 10^4$, but in practice $D \approx 10^2$ sufficed. Attempting to discover what caused this discrepancy led to the two subsequent papers.[2][3][7]

References

  1. ^ a b Rahimi, Ali; Recht, Benjamin (2007). "Random Features for Large-Scale Kernel Machines". Advances in Neural Information Processing Systems 20.
  2. ^ a b Rahimi, Ali; Recht, Benjamin (2008). "Uniform Approximation of Functions with Random Bases". 46th Annual Allerton Conference on Communication, Control, and Computing.
  3. ^ a b Rahimi, Ali; Recht, Benjamin (2008). "Weighted Sums of Random Kitchen Sinks: Replacing Minimization with Randomization in Learning". Advances in Neural Information Processing Systems 21.
  4. ^ Vert, Jean-Philippe; Tsuda, Koji; Schölkopf, Bernhard (2004). "A Primer on Kernel Methods". Kernel Methods in Computational Biology.
  5. ^ (citation unavailable)
  6. ^ Yu, Felix Xinnan; Suresh, Ananda Theertha; Choromanski, Krzysztof; Holtmann-Rice, Daniel; Kumar, Sanjiv (2016). "Orthogonal Random Features". Advances in Neural Information Processing Systems 29.
  7. ^ (citation unavailable)