
Hilbert matrix


In linear algebra, a Hilbert matrix, introduced by Hilbert (1894), is a square matrix with entries being the unit fractions

$$H_{ij} = \frac{1}{i+j-1}.$$

For example, this is the 5 × 5 Hilbert matrix:

$$H_5 = \begin{pmatrix}
1 & \tfrac{1}{2} & \tfrac{1}{3} & \tfrac{1}{4} & \tfrac{1}{5} \\
\tfrac{1}{2} & \tfrac{1}{3} & \tfrac{1}{4} & \tfrac{1}{5} & \tfrac{1}{6} \\
\tfrac{1}{3} & \tfrac{1}{4} & \tfrac{1}{5} & \tfrac{1}{6} & \tfrac{1}{7} \\
\tfrac{1}{4} & \tfrac{1}{5} & \tfrac{1}{6} & \tfrac{1}{7} & \tfrac{1}{8} \\
\tfrac{1}{5} & \tfrac{1}{6} & \tfrac{1}{7} & \tfrac{1}{8} & \tfrac{1}{9}
\end{pmatrix}.$$

The entries can also be defined by the integral

$$H_{ij} = \int_0^1 x^{i+j-2}\, dx,$$

that is, as a Gramian matrix for powers of x. It arises in the least squares approximation of arbitrary functions by polynomials.
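
For illustration, the matrix can be generated directly from the entry formula; the following is a minimal Python sketch (only NumPy is assumed, and the helper name `hilbert` is illustrative, not a library function), using 0-based indices so that H[i, j] = 1/(i + j + 1):

```python
import numpy as np

def hilbert(n: int) -> np.ndarray:
    """n x n Hilbert matrix; with 0-based indices the entries are 1/(i + j + 1)."""
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

H = hilbert(5)
print(H)

# Gramian interpretation: H[i, j] equals the integral of x**(i + j) over [0, 1]
# (the inner product of x**i and x**j in L^2[0, 1]); checked here by a midpoint rule.
i, j = 2, 3
x = (np.arange(100_000) + 0.5) / 100_000     # midpoints of a uniform grid on [0, 1]
assert np.isclose(H[i, j], np.mean(x ** (i + j)))
```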

The Hilbert matrices are canonical examples of ill-conditioned matrices, being notoriously difficult to use in numerical computation. For example, the 2-norm condition number of the matrix above is about 4.8 × 10⁵.
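
This ill-conditioning is easy to observe numerically. The sketch below (NumPy only; the construction repeats the entry formula above) computes the 2-norm condition number of H₅ and shows how a tiny perturbation of the right-hand side of H₅x = b is amplified in the solution:

```python
import numpy as np

n = 5
i, j = np.indices((n, n))
H = 1.0 / (i + j + 1)                       # the 5 x 5 Hilbert matrix

print(np.linalg.cond(H))                    # 2-norm condition number, about 4.8e5

# A perturbation of size ~1e-8 in b can change the solution of H x = b by
# several orders of magnitude more, as predicted by the condition number.
b = H @ np.ones(n)                          # exact solution: the all-ones vector
delta = 1e-8 * np.random.default_rng(0).standard_normal(n)
x = np.linalg.solve(H, b)
x_pert = np.linalg.solve(H, b + delta)
print(np.linalg.norm(x - np.ones(n)))       # tiny (rounding error only)
print(np.linalg.norm(x_pert - np.ones(n)))  # far larger than 1e-8
```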

Historical note


Hilbert (1894) introduced the Hilbert matrix to study the following question in approximation theory: "Assume that I = [a, b] is a real interval. Is it then possible to find a non-zero polynomial P with integer coefficients, such that the integral

$$\int_a^b P(x)^2\, dx$$

is smaller than any given bound ε > 0, taken arbitrarily small?" To answer this question, Hilbert derives an exact formula for the determinant of the Hilbert matrices and investigates their asymptotics. He concludes that the answer to his question is positive if the length b − a of the interval is smaller than 4.

Properties


The Hilbert matrix is symmetric and positive definite. The Hilbert matrix is also totally positive (meaning that the determinant of every submatrix is positive).

The Hilbert matrix is an example of a Hankel matrix. It is also a specific example of a Cauchy matrix.
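
These structural properties can be verified numerically for small orders; a minimal Python sketch (NumPy only), with checks that mirror the definitions of symmetry, positive definiteness, Hankel structure and Cauchy structure:

```python
import numpy as np

n = 6
i, j = np.indices((n, n))
H = 1.0 / (i + j + 1)                        # n x n Hilbert matrix, 0-based indices

# Symmetric ...
assert np.allclose(H, H.T)

# ... and positive definite: all eigenvalues are positive,
# equivalently the Cholesky factorization exists.
assert np.all(np.linalg.eigvalsh(H) > 0)
np.linalg.cholesky(H)                        # would raise LinAlgError otherwise

# Hankel structure: entries are constant along each anti-diagonal
# (they depend only on i + j).
for s in range(2 * n - 1):
    anti_diag = [H[a, s - a] for a in range(n) if 0 <= s - a < n]
    assert len(set(anti_diag)) == 1

# Cauchy structure: H[i, j] = 1 / (x_i + y_j) with x_i = i + 1 and y_j = j.
x, y = np.arange(1, n + 1), np.arange(n)
assert np.allclose(H, 1.0 / (x[:, None] + y[None, :]))
```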

The determinant can be expressed in closed form, as a special case of the Cauchy determinant. The determinant of the n × n Hilbert matrix is

$$\det(H) = \frac{c_n^{\,4}}{c_{2n}},$$

where

$$c_n = \prod_{i=1}^{n-1} i^{n-i} = \prod_{i=1}^{n-1} i!\,.$$

Hilbert already mentioned the curious fact that the determinant of the Hilbert matrix is the reciprocal of an integer (see sequence A005249 in the OEIS), which also follows from the identity

$$\frac{1}{\det(H)} = n!\, \prod_{i=1}^{2n-1} \binom{i}{\lfloor i/2 \rfloor}.$$
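
Both the closed form and the reciprocal-integer identity can be checked with exact rational arithmetic; the following Python sketch (standard library only; the helper names `hilbert_det_exact` and `c` are illustrative) computes the determinant by fraction-valued Gaussian elimination and compares it with the two formulas:

```python
from fractions import Fraction
from math import comb, factorial, prod

def hilbert_det_exact(n: int) -> Fraction:
    """Exact determinant of the n x n Hilbert matrix via Gaussian elimination over the rationals."""
    A = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]
    det = Fraction(1)
    for k in range(n):
        det *= A[k][k]                      # no pivoting needed: H is positive definite
        for r in range(k + 1, n):
            factor = A[r][k] / A[k][k]
            for c_ in range(k, n):
                A[r][c_] -= factor * A[k][c_]
    return det

def c(n: int) -> int:
    """c_n = 1! * 2! * ... * (n - 1)!"""
    return prod(factorial(i) for i in range(1, n))

for n in range(1, 7):
    d = hilbert_det_exact(n)
    assert d == Fraction(c(n) ** 4, c(2 * n))                                       # closed form
    assert 1 / d == factorial(n) * prod(comb(i, i // 2) for i in range(1, 2 * n))   # identity
    print(n, d)    # 1, 1/12, 1/2160, 1/6048000, 1/266716800000, ...
```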

Using Stirling's approximation of the factorial, one can establish the following asymptotic result:

$$\det(H) = a_n\, n^{-1/4}\,(2\pi)^n\, 4^{-n^2},$$

where $a_n$ converges to the constant $e^{1/4}\, 2^{1/12}\, A^{-3} \approx 0.6450$ as $n \to \infty$, where A is the Glaisher–Kinkelin constant.
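
The asymptotic behaviour can be illustrated in log-space (to avoid underflow of det(Hₙ) and overflow of 4^(n²)); this sketch (standard library only, helper names illustrative) evaluates the closed form for the determinant and prints aₙ for increasing n, which should approach ≈ 0.6450:

```python
from math import exp, lgamma, log, pi

def log_c(n: int) -> float:
    """log of c_n = 1! * 2! * ... * (n - 1)!, using lgamma(i + 1) = log(i!)."""
    return sum(lgamma(i + 1) for i in range(1, n))

def log_det_hilbert(n: int) -> float:
    """log det(H_n) from the closed form det = c_n**4 / c_{2n}."""
    return 4.0 * log_c(n) - log_c(2 * n)

# a_n = det(H_n) * n**(1/4) * (2*pi)**(-n) * 4**(n**2)
for n in (1, 2, 5, 10, 20, 50, 100):
    log_a = log_det_hilbert(n) + 0.25 * log(n) - n * log(2 * pi) + n * n * log(4)
    print(n, exp(log_a))        # tends to e**(1/4) * 2**(1/12) / A**3 ~ 0.6450
```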

The inverse of the Hilbert matrix can be expressed in closed form using binomial coefficients; its entries are

$$(H^{-1})_{ij} = (-1)^{i+j}\,(i+j-1)\binom{n+i-1}{n-j}\binom{n+j-1}{n-i}\binom{i+j-2}{i-1}^{2},$$

where n is the order of the matrix.[1] It follows that the entries of the inverse matrix are all integers, and that the signs form a checkerboard pattern, being positive on the principal diagonal. For example,

$$H_5^{-1} = \begin{pmatrix}
25 & -300 & 1050 & -1400 & 630 \\
-300 & 4800 & -18900 & 26880 & -12600 \\
1050 & -18900 & 79380 & -117600 & 56700 \\
-1400 & 26880 & -117600 & 179200 & -88200 \\
630 & -12600 & 56700 & -88200 & 44100
\end{pmatrix}.$$
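
The formula and the exact-inverse property can be checked directly; a minimal Python sketch (standard library only; the helper name `hilbert_inverse` is illustrative) builds the integer inverse from the binomial coefficients and verifies H·H⁻¹ = I in exact arithmetic:

```python
from fractions import Fraction
from math import comb

def hilbert_inverse(n: int) -> list[list[int]]:
    """Exact (integer) inverse of the n x n Hilbert matrix, with 1-based i and j as in the formula."""
    return [[(-1) ** (i + j) * (i + j - 1)
             * comb(n + i - 1, n - j) * comb(n + j - 1, n - i)
             * comb(i + j - 2, i - 1) ** 2
             for j in range(1, n + 1)]
            for i in range(1, n + 1)]

n = 5
H = [[Fraction(1, i + j - 1) for j in range(1, n + 1)] for i in range(1, n + 1)]
Hinv = hilbert_inverse(n)
print(Hinv[0])                               # first row: [25, -300, 1050, -1400, 630]

# Exact check that H * Hinv is the identity matrix.
product = [[sum(H[i][k] * Hinv[k][j] for k in range(n)) for j in range(n)]
           for i in range(n)]
assert product == [[int(i == j) for j in range(n)] for i in range(n)]
```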

The condition number of the n × n Hilbert matrix grows as $O\!\left((1+\sqrt{2}\,)^{4n} / \sqrt{n}\right)$.
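
This growth rate can be seen for moderate n (beyond roughly n = 12, double-precision arithmetic can no longer resolve the condition number); the sketch below prints successive 2-norm condition numbers, whose ratios approach (1 + √2)⁴ ≈ 33.97:

```python
import numpy as np

prev = None
for n in range(2, 11):
    i, j = np.indices((n, n))
    cond = np.linalg.cond(1.0 / (i + j + 1))       # 2-norm condition number of H_n
    print(f"n={n:2d}  cond={cond:.3e}" + (f"  ratio={cond / prev:.1f}" if prev else ""))
    prev = cond

print((1 + np.sqrt(2)) ** 4)                       # ~ 33.97, the asymptotic growth factor
```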

Applications


The method of moments applied to polynomial distributions results in a Hankel matrix, which, in the special case of approximating a probability distribution on the interval [0, 1], is a Hilbert matrix. This matrix needs to be inverted to obtain the weight parameters of the polynomial distribution approximation.[2]
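
As a rough illustration of this application (a sketch of the general idea only, not the exact procedure of the cited paper; the Beta(2, 5) sample and the degree are arbitrary choices): matching the first n moments of a polynomial density p(x) = Σⱼ aⱼ xʲ on [0, 1] gives the linear system H a = m with H the n × n Hilbert matrix, so the weight parameters follow by solving with (the inverse of) H.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.beta(2, 5, size=100_000)       # data from an "unknown" distribution on [0, 1]

n = 6                                        # number of moments = polynomial degree + 1
m = np.array([np.mean(samples ** k) for k in range(n)])   # empirical moments m_k = E[X**k]

# Moment matching for p(x) = sum_j a_j x**j on [0, 1]:
#   integral of x**k * p(x) dx = sum_j a_j / (k + j + 1) = m_k,  i.e.  H a = m.
i, j = np.indices((n, n))
H = 1.0 / (i + j + 1)                        # n x n Hilbert matrix
a = np.linalg.solve(H, m)                    # weight parameters of the polynomial approximation

x = np.linspace(0.0, 1.0, 5)
print(np.polyval(a[::-1], x))                # approximate density values at a few points
```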

References

  1. ^ Choi, Man-Duen (1983). "Tricks or Treats with the Hilbert Matrix". The American Mathematical Monthly. 90 (5): 301–312. doi:10.2307/2975779. JSTOR 2975779.
  2. ^ Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper (2017). "Polynomial probability distribution estimation using the method of moments". PLOS ONE. 12 (4): e0174573. Bibcode:2017PLoSO..1274573M. doi:10.1371/journal.pone.0174573. PMC 5386244. PMID 28394949.

Further reading
