Block Wiedemann algorithm


The block Wiedemann algorithm for computing kernel vectors of a matrix over a finite field is a generalization by Don Coppersmith of an algorithm due to Doug Wiedemann.

Wiedemann's algorithm


Let $M$ be an $n \times n$ square matrix over some finite field $F$, let $x_{\mathrm{base}}$ be a random vector of length $n$, and let $x = M x_{\mathrm{base}}$. Consider the sequence of vectors $S = [x, Mx, M^2 x, \ldots]$ obtained by repeatedly multiplying the vector by the matrix $M$; let $y$ be any other vector of length $n$, and consider the sequence of finite-field elements $S_y = [y \cdot x,\; y \cdot Mx,\; y \cdot M^2 x, \ldots]$.
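As a concrete illustration, the construction of $S$ and $S_y$ can be sketched over a small prime field. The field size, matrix, and vectors below are arbitrary illustrative choices, and the helper names (`mat_vec`, `dot`, `x_base`) are not from any particular library:

```python
p = 101  # a small prime, so Z/pZ is a finite field

def mat_vec(M, v):
    """Multiply matrix M by vector v over F_p."""
    return [sum(Mij * vj for Mij, vj in zip(row, v)) % p for row in M]

def dot(y, v):
    """Inner product over F_p."""
    return sum(yi * vi for yi, vi in zip(y, v)) % p

M = [[2, 1, 0], [0, 3, 1], [1, 0, 4]]
x_base = [1, 2, 3]            # stands in for a random vector
x = mat_vec(M, x_base)        # x = M x_base

# S = [x, Mx, M^2 x, ...]; S_y projects each term onto a fixed y.
y = [5, 0, 7]
S, v = [], x
for _ in range(2 * len(M)):   # 2n terms suffice for the later steps
    S.append(v)
    v = mat_vec(M, v)
S_y = [dot(y, u) for u in S]
```

Each additional term of $S$ costs one sparse matrix-vector product, which is the dominant operation of the whole algorithm.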

We know that the matrix $M$ has a minimal polynomial; by the Cayley–Hamilton theorem we know that this polynomial is of degree (which we will call $n_0$) no more than $n$. Say $\sum_{r=0}^{n_0} p_r M^r = 0$. Then $\sum_{r=0}^{n_0} y \cdot (p_r (M^r x)) = 0$; so the minimal polynomial of the matrix annihilates the sequence $S$ and hence $S_y$.
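For instance, working over $\mathbb{F}_7$ with a toy matrix whose minimal polynomial is known in closed form (all values below are illustrative), the annihilation can be checked directly:

```python
# Over F_7, M = diag(2, 3) has minimal polynomial (x-2)(x-3) = x^2 - 5x + 6,
# so the coefficients (constant term first) are p_0 = 6, p_1 = -5 = 2, p_2 = 1.
q = 7
M = [[2, 0], [0, 3]]
coeffs = [6, 2, 1]

def mat_vec(M, v):
    return [sum(a * b for a, b in zip(row, v)) % q for row in M]

x = [1, 1]
y = [1, 1]

# Projected sequence S_y[t] = y . M^t x
S_y, v = [], x
for _ in range(8):
    S_y.append(sum(a * b for a, b in zip(y, v)) % q)
    v = mat_vec(M, v)

# The minimal polynomial annihilates every shift of S_y:
for t in range(len(S_y) - 2):
    assert sum(c * S_y[t + i] for i, c in enumerate(coeffs)) % q == 0
```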

But the Berlekamp–Massey algorithm allows us to calculate relatively efficiently some sequence $q_0, \ldots, q_L$ with $\sum_{i=0}^{L} q_i S_y[i+r] = 0$ for all $r$. Our hope is that this sequence, which by construction annihilates $S_y$, actually annihilates $S$; so we have $\sum_{i=0}^{L} q_i M^i x = 0$. We then take advantage of the initial definition of $x$ to say $M \sum_{i=0}^{L} q_i M^i x_{\mathrm{base}} = 0$, and so $\sum_{i=0}^{L} q_i M^i x_{\mathrm{base}}$ is a hopefully non-zero kernel vector of $M$.
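Putting the pieces together, the whole scalar algorithm can be sketched end to end. The field, the singular matrix, and the projection are toy choices, and `berlekamp_massey` is a standard textbook implementation over $\mathbb{F}_p$, not code from any particular library:

```python
p = 101

def mat_vec(M, v):
    return [sum(a * b for a, b in zip(row, v)) % p for row in M]

def berlekamp_massey(s):
    """Connection polynomial C (constant term first) with
    s[n] + C[1] s[n-1] + ... + C[L] s[n-L] = 0 for all n >= L."""
    C, B, L, m, b = [1], [1], 0, 1, 1
    for n in range(len(s)):
        d = s[n]
        for i in range(1, L + 1):
            d = (d + C[i] * s[n - i]) % p
        if d == 0:
            m += 1
            continue
        coef = d * pow(b, p - 2, p) % p       # d / b in F_p
        update = C[:]
        if len(update) < len(B) + m:
            update += [0] * (len(B) + m - len(update))
        for i, Bi in enumerate(B):
            update[i + m] = (update[i + m] - coef * Bi) % p
        if 2 * L <= n:                        # length change needed
            C, B, L, b, m = update, C, n + 1 - L, d, 1
        else:
            C, m = update, m + 1
    return C

M = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]   # singular: row 2 = 2 * row 1
x_base = [1, 0, 0]
x = mat_vec(M, x_base)                   # x = M x_base

# Projected sequence S_y[t] = y . M^t x, with y the first unit vector.
S_y, v = [], x
for _ in range(2 * len(M)):
    S_y.append(v[0])
    v = mat_vec(M, v)

C = berlekamp_massey(S_y)
while C[-1] == 0:                        # drop trailing zeros, if any
    C.pop()
L = len(C) - 1                           # C reversed gives the q_i above

# Kernel vector: w = sum_i C[i] M^(L-i) x_base, so that M w = 0.
powers, u = [], x_base
for _ in range(L + 1):
    powers.append(u)
    u = mat_vec(M, u)
w = [sum(C[i] * powers[L - i][k] for i in range(L + 1)) % p
     for k in range(len(M))]

assert mat_vec(M, w) == [0, 0, 0] and any(w)   # non-zero kernel vector
```

If the annihilator of $S_y$ fails to annihilate $S$, or $w$ comes out zero, one simply retries with fresh random $x_{\mathrm{base}}$ and $y$; with random projections this succeeds with high probability.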

The block Wiedemann (or Coppersmith–Wiedemann) algorithm


The natural implementation of sparse matrix arithmetic on a computer makes it easy to compute the sequence $S$ in parallel for a number of vectors equal to the width of a machine word; indeed, it will normally take no longer to compute for that many vectors than for one. If you have several processors, you can compute the sequence $S$ for a different set of random vectors in parallel on each of them.
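The word-parallel trick can be sketched over GF(2): pack one bit per right-hand vector into an integer, so a single sparse-matrix pass advances a whole word's worth of vectors at once. The matrix, `W`, and the function names below are illustrative (Python integers are arbitrary-precision, so `W = 64` merely mimics a machine word):

```python
import random

W = 64                                   # vectors advanced per pass

# Sparse 4x4 matrix over GF(2), stored as the column indices of each row.
rows = [[0, 2], [1, 3], [0, 1, 2], [3]]

# X[j] holds, in bit i, entry j of the i-th random vector.
random.seed(1)
X = [random.getrandbits(W) for _ in range(len(rows))]

def step(rows, X):
    """One product M*X for all W packed vectors: over GF(2), each output
    entry is the XOR of the selected input entries, word-wide at once."""
    out = []
    for cols in rows:
        acc = 0
        for j in cols:
            acc ^= X[j]
        out.append(acc)
    return out

MX = step(rows, X)
```

Each call to `step` costs one pass over the sparse matrix yet multiplies all `W` vectors, which is why the per-vector cost is essentially free.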

It turns out, by a generalization of the Berlekamp–Massey algorithm to provide a sequence of small matrices, that you can take the sequence produced for a large number of vectors and generate a kernel vector of the original large matrix. You need to compute $y_i \cdot M^t x_j$ for some $i = 0, \ldots, i_{\max}$, $j = 0, \ldots, j_{\max}$, $t = 0, \ldots, t_{\max}$, where $i_{\max}$, $j_{\max}$ and $t_{\max}$ need to satisfy $t_{\max} > \frac{d}{i_{\max}} + \frac{d}{j_{\max}} + O(1)$ and the $y_i$ are a series of vectors of length $n$; but in practice you can take the $y_i$ as a sequence of unit vectors and simply write out the first $i_{\max}$ entries in your vectors at each time $t$.
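Collecting the small-matrix sequence $a_t[i][j] = y_i \cdot M^t x_j$ with unit-vector $y_i$, as suggested above, can be sketched as follows; the matrix and block sizes are arbitrary illustrative choices, and the subsequent matrix Berlekamp–Massey step is omitted:

```python
p = 2
M = [[1, 1, 0, 0], [0, 0, 1, 0], [1, 0, 0, 1], [0, 1, 1, 0]]
i_max, j_max, t_max = 2, 2, 8            # illustrative block sizes

X = [[1, 0, 1, 1], [0, 1, 1, 0]]         # j_max starting vectors x_j

def mat_vec(M, v):
    return [sum(a * b for a, b in zip(row, v)) % p for row in M]

# With y_i the unit vectors, y_i . M^t x_j is just entry i of M^t x_j:
# write out the first i_max entries of each vector at each time t.
seq = []                                 # seq[t] is an i_max x j_max matrix
cols = [x[:] for x in X]
for t in range(t_max):
    seq.append([[cols[j][i] for j in range(j_max)] for i in range(i_max)])
    cols = [mat_vec(M, c) for c in cols]
```

The matrix generalization of Berlekamp–Massey would then be run on `seq` to recover a vector generating polynomial, from which kernel vectors of $M$ are assembled.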

Invariant Factor Calculation


The block Wiedemann algorithm can be used to calculate the leading invariant factors of the matrix, i.e., the largest blocks of the Frobenius normal form. Given $M \in \mathbb{F}_q^{n \times n}$ and $U, V \in \mathbb{F}_q^{b \times n}$, where $\mathbb{F}_q$ is a finite field of size $q$, the probability $p$ that the leading $k < b$ invariant factors of $M$ are preserved in $\sum_{i=0}^{2n-1} U M^i V^T x^i$ is

$$p \geq \begin{cases} 1/64, & \text{if } b = k + 1 \text{ and } q = 2 \\[4pt] \left(1 - \dfrac{3}{2^{b-k}}\right)^{2} \geq 1/16, & \text{if } b \geq k + 2 \text{ and } q = 2 \\[4pt] \left(1 - \dfrac{2}{q^{b-k}}\right)^{2} \geq 1/9, & \text{if } b \geq k + 1 \text{ and } q > 2. \end{cases}$$[1]
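A small helper (an illustrative function, not from any library) makes the case analysis executable; the constants come directly from the stated bound:

```python
def leading_factor_bound(b, k, q):
    """Lower bound on the probability that the leading k < b invariant
    factors of M survive in the projected block sequence."""
    if q == 2 and b == k + 1:
        return 1 / 64
    if q == 2 and b >= k + 2:
        return (1 - 3 / 2 ** (b - k)) ** 2    # always >= 1/16
    if q > 2 and b >= k + 1:
        return (1 - 2 / q ** (b - k)) ** 2    # always >= 1/9
    raise ValueError("bound requires k < b")
```

Note how quickly the bound approaches 1 as the block size `b` grows past `k`, which is why modest oversizing of the blocks suffices in practice.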

References

  1. ^ Harrison, Gavin; Johnson, Jeremy; Saunders, B. David, "Probabilistic analysis of block Wiedemann for leading invariant factors", Journal of Symbolic Computation, 2022.
  • Wiedemann, D., "Solving sparse linear equations over finite fields", IEEE Trans. Inf. Theory IT-32, pp. 54–62, 1986.
  • Coppersmith, D., "Solving homogeneous linear equations over GF(2) via block Wiedemann algorithm", Math. Comp. 62, pp. 333–350, 1994.