LAPACK is a freely available package which provides high-quality functions
for solving linear algebra problems such as linear equations, least-squares
problems and eigenvalue problems. It relies on the BLAS (Basic Linear Algebra
Subprograms) for low-level operations. For more information, please refer to the "LAPACK
Users' Guide", 3rd ed., Anderson, E. *et al.*,
Society for Industrial and Applied Mathematics, Philadelphia (USA), 1999,
ISBN 0-89871-447-8. You can download the source code of LAPACK from
http://www.netlib.org.

LAPACK functions are not integrated in LME, but rather provided as replacements for and additions to the LME core functions. While this does not change the way you use the functions, this approach offers more flexibility for future improvements and makes it possible to keep LME light for applications where memory is limited. Currently, depending on the platform, the LME functions based on LAPACK weigh between 700 and 800 kilobytes, more than the core of LME itself; if new functions are implemented, or if better versions become available, it will be possible to replace the LAPACK extension without changing LME itself or the whole application.

Currently, only a subset of LAPACK is available for LME. The functions have been chosen from the set of double-precision subroutines and usually apply to general real or complex matrices.

Matrix multiplication.

x * y
M1 * M2
M * x

`x*y` multiplies the operands together. Operands can be
scalars (plain arithmetic product), matrices (matrix product), or mixed
scalar and matrix.
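
For a sense of what the underlying BLAS routine computes, here is a minimal pure-Python sketch of the matrix product (an illustration only; `dgemm` and `zgemm` use blocked, highly optimized kernels):

```python
def matmul(a, b):
    """Multiply matrix a (m-by-k) by matrix b (k-by-n), given as lists of rows."""
    m, k, n = len(a), len(b), len(b[0])
    assert all(len(row) == k for row in a), "inner dimensions must agree"
    c = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            for p in range(k):
                c[i][j] += a[i][p] * b[p][j]
    return c

print(matmul([[1, 2], [3, 4]], [[3], [5]]))  # [[13.0], [29.0]]
```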

2 * 3
  6
[1,2;3,4] * [3;5]
  13
  29
[3 4] * 2
  6 8

`dgemm`, `zgemm`

operator `/`,
operator `\`,
operator `*` (LME)

Matrix left division.

a \ b
a \ B
A \ B

`a\b`, where `a` is a scalar, divides the second operand by
the first operand. If the second
operand is a matrix, each element is divided by `a`.

In the case where the left operand `A` is a matrix, `A\B`
solves the set of linear equations `A*x=B`. If `B` is an
m-by-n matrix, the n columns are handled separately and the solution `x` has
also n columns. The solution `x` is `inv(A)*B` if `A`
is square; otherwise, it is a least-squares solution.
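
For the square case, the following pure-Python sketch illustrates the idea behind `A\B`: Gaussian elimination with partial pivoting. This is an illustration only; the actual LAPACK routine `dgesv` performs an LU factorization and is far more efficient and robust.

```python
def solve(A, B):
    """Solve A*x = B for square A by Gaussian elimination with partial pivoting."""
    n = len(A)
    # Build the augmented matrix [A | B]
    M = [list(A[i]) + list(B[i]) for i in range(n)]
    w = len(M[0])
    for col in range(n):
        # Partial pivoting: bring the largest remaining entry to the diagonal
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, w):
                M[r][c] -= f * M[col][c]
    # Back substitution, one right-hand-side column at a time
    x = [[0.0] * (w - n) for _ in range(n)]
    for j in range(w - n):
        for i in reversed(range(n)):
            s = M[i][n + j] - sum(M[i][k] * x[k][j] for k in range(i + 1, n))
            x[i][j] = s / M[i][i]
    return x

print(solve([[1, 2], [3, 4]], [[2], [6]]))  # [[2.0], [0.0]]
```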

[1,2;3,4] \ [2;6]
  2
  0
[1;2] \ [1.1;2.1]
  1.06
[1,2] \ 1
  0.2
  0.4

`dgesv`, `zgesv` (square matrices);
`dgelss`, `zgelss` (non-square matrices)

operator `/`,
`inv`,
`pinv`,
operator `\` (LME)

Matrix right division.

a / b
A / b
A / B

`a/b`, where `b` is a scalar, divides the first operand by
the second operand. If the first
operand is a matrix, each element is divided by `b`.

In the case where the right operand `B` is a matrix, `A/B`
solves the set of linear equations `A=x*B`. If `A` is an
m-by-n matrix, the m rows are handled separately and the solution `x` has
also m rows. The solution `x` is `A*inv(B)` if `B`
is square; otherwise, it is a least-squares solution.

[2,6] / [1,3;2,4]
  2 0
[1.1,2.1] / [1,2]
  1.06
1 / [1;2]
  0.2 0.4

`dgesv`, `zgesv` (square matrices);
`dgelss`, `zgelss` (non-square matrices)

operator `\`,
`inv`,
`pinv`,
operator `/` (LME)

Diagonal similarity transform for balancing a matrix.

B = balance(X)
(T, B) = balance(X)

`balance(X)` applies a diagonal similarity transform to the
square matrix `X` to make the rows and columns as close in norm
as possible. Balancing may reduce the 1-norm of the matrix and improve the
accuracy of the computed eigenvalues and/or eigenvectors.

`balance` returns the balanced matrix
`B` which has the same eigenvalues and singular values as
`X`, and optionally the scaling matrix `T` such that
`T\X*T=B`.

A = [1,2e6;3e-6,4];
(T,B) = balance(A)
  T =
    1e6 0
    0   1
  B =
    1 2
    3 4
eig(A)-eig(B)
  0
  0

`dgebal`, `zgebal`

`balance` (LME)

Cholesky decomposition.

C = chol(X)

If a square matrix `X` is symmetric (or hermitian) and positive definite, it can be decomposed into the product `X = C'*C`, where `C` is an upper triangular matrix.

The part of `X` below the main diagonal
is not used, because `X` is assumed to be symmetric or hermitian. An error occurs
if `X` is not positive definite.
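
The decomposition can be computed directly with the Cholesky-Banachiewicz recurrence. The sketch below is a pure-Python illustration, not LAPACK's `dpotrf`; it returns the upper triangular factor `C` so that the product of its transpose with itself reproduces `X`:

```python
import math

def chol(X):
    """Upper triangular C with C^T * C == X, for symmetric positive definite X."""
    n = len(X)
    L = [[0.0] * n for _ in range(n)]  # lower factor, L * L^T == X
    for i in range(n):
        for j in range(i + 1):
            s = X[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                if s <= 0:
                    raise ValueError("matrix is not positive definite")
                L[i][j] = math.sqrt(s)
            else:
                L[i][j] = s / L[j][j]
    # Return the transpose: the upper triangular factor C = L^T
    return [[L[j][i] for j in range(n)] for i in range(n)]

C = chol([[5, 3], [3, 8]])
# C[0][0] ≈ 2.2361, C[0][1] ≈ 1.3416, C[1][1] ≈ 2.49
```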

C = chol([5,3;3,8])
  C =
    2.2361 1.3416
    0      2.4900
C'*C
  5 3
  3 8

`dpotrf`, `zpotrf`

Determinant.

d = det(X)

`det(X)` is the determinant of the square matrix `X`, which is
0 (up to rounding errors) if `X` is singular. The function `rank`
is a numerically more robust test for singularity.
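
The determinant can be obtained as the product of the pivots of an LU factorization, the same approach as `dgetrf`. A pure-Python sketch, for illustration only:

```python
def det(X):
    """Determinant via Gaussian elimination with partial pivoting."""
    n = len(X)
    M = [list(row) for row in X]  # work on a copy
    d = 1.0
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        if M[piv][col] == 0:
            return 0.0  # singular matrix
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            d = -d  # each row swap flips the sign
        d *= M[col][col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
    return d

print(det([[1, 2], [3, 4]]))  # ≈ -2.0
print(det([[1, 2], [1, 2]]))  # 0.0
```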

det([1,2;3,4])
  -2
det([1,2;1,2])
  0

`dgetrf`, `zgetrf`

`det` (LME)

Eigenvalues and eigenvectors.

e = eig(A)
(V,D) = eig(A)
e = eig(A,B)
(V,D) = eig(A,B)

`eig(A)` returns the vector of eigenvalues of the square matrix `A`.

`(V,D) = eig(A)` returns a diagonal matrix `D` of eigenvalues
and a matrix `V` whose columns are the corresponding eigenvectors. They are
such that `A*V = V*D`.

`eig(A,B)` returns the generalized eigenvalues and eigenvectors of square
matrices `A` and `B`, such that `A*V = B*V*D`.
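
LAPACK computes eigenvalues by QR iteration on a Hessenberg form (`dgeev`). A much simpler method, power iteration, illustrates what an eigenvalue is: the factor by which `A` scales an eigenvector. The pure-Python sketch below finds only the eigenvalue of largest magnitude and may converge slowly or fail for some matrices; it is not how LME computes `eig`:

```python
def dominant_eig(A, iters=100):
    """Power iteration: approximate the eigenvalue of A largest in magnitude."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]  # w = A*v
        lam = max(w, key=abs)      # estimate of the scaling factor
        v = [x / lam for x in w]   # normalize to avoid overflow
    return lam

print(dominant_eig([[1, 2], [3, 4]]))  # ≈ 5.3723
```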

A = [1,2;3,4];
B = [2,1;3,3];
eig(A)
  5.3723
  -0.3723
(V,D) = eig(A,B)
  V =
    -0.5486 -1
    -1      0.8229
  D =
    1.2153 0
    0      -0.5486
A*V, B*V*D
  ans =
    -2.5486 0.6458
    -5.6458 0.2915
  ans =
    -2.5486 0.6458
    -5.6458 0.2915

`dgeev`, `zgeev` for eigenvalues and eigenvectors;
`dgegv`, `zgegv` for generalized eigenvalues and eigenvectors

`eig` (LME)

Hessenberg reduction.

(P,H) = hess(A)
H = hess(A)

`hess(A)` reduces the square matrix `A` to the upper
Hessenberg form `H` using an orthogonal similarity transformation
`P'*H*P=A`.
The result `H` is zero below the first subdiagonal and has the same
eigenvalues as `A`.

(P,H) = hess([1,2,3;4,5,6;7,8,9])
  P =
    1 0       0
    0 -0.4961 -0.8682
    0 -0.8682 0.4961
  H =
    1       -3.597  -0.2481
    -8.0623 14.0462 2.8308
    0       0.8308  -4.6154e-2

`dgehrd`, `zgehrd`;
`dorghr`, `zunghr` for computing `P`

Matrix inverse.

R = inv(X)

`inv(X)` returns the inverse of the square matrix
`X`. `X` must not be singular; otherwise, the elements
of the result are infinite.

To solve a set of linear equations, the operator `\` is more efficient.

inv([1,2;3,4])
  -2  1
  1.5 -0.5

`dgesv`, `zgesv`

operator `/`,
operator `\`,
`lu`,
`pinv`,
`inv` (LME)

Matrix logarithm.

L = logm(X)

`logm(A)` returns the matrix logarithm of the square matrix `A`, the inverse
of the matrix exponential. The matrix logarithm does not always exist.

L = logm([1,2;3,4])
  L =
    -0.3504+2.3911j 0.9294-1.0938j
    1.394-1.6406j   1.0436+0.7505j
expm(L)
  1-4.4409e-16j 2-6.1062e-16j
  3-9.4369e-16j 4

`zgees`

LU factorization.

(L,U,P) = lu(X)
(L2,U) = lu(X)
M = lu(X)

`(L,U,P)=lu(X)` factorizes the square matrix `X` such
that `P*X=L*U`, where `L` is a lower-triangular square matrix,
`U` is an upper-triangular square matrix, and `P` is a
permutation square matrix (a real matrix where each row and each column has
a single 1 element and zeros elsewhere).

`(L2,U)=lu(X)` factorizes the square matrix `X` such
that `X=L2*U`, where `L2=P\L`.

`M=lu(X)` yields a square matrix M whose upper triangular part is
`U` and whose lower triangular part (below the main diagonal) is `L`
without the diagonal.
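
The factorization can be sketched in pure Python with the Doolittle scheme and partial pivoting (an illustration only; LAPACK's `dgetrf` is a blocked, optimized implementation):

```python
def lu(X):
    """Factor square X so that P*X = L*U (Doolittle scheme, partial pivoting)."""
    n = len(X)
    U = [list(row) for row in X]
    L = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    perm = list(range(n))  # row permutation, kept as a vector
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(U[r][col]))
        if piv != col:
            U[col], U[piv] = U[piv], U[col]
            perm[col], perm[piv] = perm[piv], perm[col]
            # swap the already-computed multipliers as well
            for k in range(col):
                L[col][k], L[piv][k] = L[piv][k], L[col][k]
        for r in range(col + 1, n):
            L[r][col] = U[r][col] / U[col][col]
            for c in range(col, n):
                U[r][c] -= L[r][col] * U[col][c]
    # Turn the permutation vector into a permutation matrix P
    P = [[float(perm[i] == j) for j in range(n)] for i in range(n)]
    return L, U, P

# L has a unit diagonal, U is upper triangular, and P*X equals L*U
L, U, P = lu([[1, 2, 3], [4, 5, 6], [7, 8, 8]])
```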

X = [1,2,3;4,5,6;7,8,8];
(L,U,P) = lu(X)
  L =
    1     0   0
    0.143 1   0
    0.571 0.5 1
  U =
    7 8     8
    0 0.857 1.857
    0 0     0.5
  P =
    0 0 1
    1 0 0
    0 1 0
P*X-L*U
  ans =
    0 0 0
    0 0 0
    0 0 0

`dgetrf`, `zgetrf`

Null space.

Z = null(A)

`null(A)` returns a matrix `Z` whose columns are an orthonormal
basis for the null space of the m-by-n matrix `A`. `Z` has
`n-rank(A)` columns, which are the last right singular vectors of
`A` (that is, those corresponding to the negligible singular values).

Without input argument, `null` gives the null value (the unique
value of the special null type, not related to linear algebra).

null([1,2,3;1,2,4;1,2,5])
  -0.8944
  0.4472
  8.0581e-17

`dgesvd`, `zgesvd`

Orthogonalization.

Q = orth(A)

`orth(A)` returns a matrix `Q` whose columns are
an orthonormal basis for the range of matrix `A`.
`Q` has `rank(A)` columns, which are the first left
singular vectors of `A` (that is, those corresponding to the
largest singular values).

orth([1,2,3;1,2,4;1,2,5])
  -0.4609 0.788
  -0.5704 8.9369e-2
  -0.6798 -0.6092

`dgesvd`, `zgesvd`

Matrix pseudo-inverse.

R = pinv(A)
R = pinv(A, tol)

`pinv(A)` returns the pseudo-inverse of matrix `A`, i.e.
a matrix `B` such that `B*A*B=B` and `A*B*A=A`.
For a full-rank square matrix `A`, `pinv(A)` is the same
as `inv(A)`.

`pinv` is based on the singular value decomposition; all values below
the tolerance times the largest singular value are neglected. The default
tolerance is the maximum dimension times `eps`; another value may
be supplied to `pinv` as a second parameter.

pinv([1,2;3,4])
  -2  1
  1.5 -0.5
B = pinv([1;2])
  B =
    0.2 0.4
[1;2] * B * [1;2]
  1
  2
B * [1;2] * B
  0.2 0.4

`dgelss`, `zgelss`

QR decomposition.

(Q, R, E) = qr(A)
(Q, R) = qr(A)
(Qe, Re, e) = qr(A, false)
(Qe, Re) = qr(A, false)

With three output arguments, `qr(A)` computes the QR decomposition of
matrix `A` with column pivoting,
i.e. a square unitary matrix `Q` and an upper triangular matrix `R`
such that `A*E=Q*R`. With two output arguments, `qr(A)` computes
the QR decomposition without pivoting, such that `A=Q*R`.

With a second input argument with the value `false`,
if `A` has `m` rows and `n` columns with `m>n`,
`qr` produces an m-by-n `Q` and an n-by-n `R`. Bottom rows
of zeros of `R`, and the corresponding columns of `Q`, are discarded.
With column pivoting, the third output argument `e` is a permutation vector:
`A(:,e)=Q*R`.
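
LAPACK computes the decomposition with Householder reflections (`dgeqrf`). A simpler way to see what `A=Q*R` means is modified Gram-Schmidt, sketched below in pure Python for the thin (economy) case; note that the signs of the columns of `Q` and rows of `R` obtained this way may differ from the Householder result:

```python
import math

def qr_mgs(A):
    """Thin QR of m-by-n A (m >= n) by modified Gram-Schmidt: A = Q*R."""
    m, n = len(A), len(A[0])
    V = [[A[i][j] for i in range(m)] for j in range(n)]  # work on columns
    R = [[0.0] * n for _ in range(n)]
    cols = []
    for j in range(n):
        R[j][j] = math.sqrt(sum(x * x for x in V[j]))
        q = [x / R[j][j] for x in V[j]]   # normalize column j
        cols.append(q)
        for k in range(j + 1, n):
            R[j][k] = sum(q[i] * V[k][i] for i in range(m))
            V[k] = [V[k][i] - R[j][k] * q[i] for i in range(m)]  # orthogonalize
    # transpose the list of columns back to a list of rows
    return [[cols[j][i] for j in range(n)] for i in range(m)], R

Q, R = qr_mgs([[1, 2], [3, 4], [5, 6]])
# R is upper triangular with positive diagonal; Q has orthonormal columns
```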

(Q,R) = qr([1,2;3,4;5,6])
  Q =
    -0.169  0.8971 0.4082
    -0.5071 0.276  -0.8165
    -0.8452 -0.345 0.4082
  R =
    -5.9161 -7.4374
    0       0.8281
    0       0
(Qe,Re) = qr([1,2;3,4;5,6],false)
  Qe =
    -0.169  0.8971
    -0.5071 0.276
    -0.8452 -0.345
  Re =
    -5.9161 -7.4374
    0       0.8281

`dgeqrf`, `zgeqrf` for decomposition without pivoting;
`dgeqpf`, `zgeqpf` for decomposition with pivoting;
`dorgqr`, `zung2r` for computing `Q`

Generalized Schur factorization.

(S, T, Q, Z) = qz(A, B)

`qz(A,B)` computes the generalized
Schur factorization for square matrices `A` and `B`, such
that `Q*S*Z=A` and `Q*T*Z=B`. For real matrices, the result
is real with `S` block upper-triangular with 1x1 and 2x2 diagonal blocks, and
`T` upper-triangular. For complex matrices, the result is complex
with both `S` and `T` upper-triangular.

(S, T, Q, Z) = qz([1,2;3,4], [5,6;7,8])
  S =
    5.3043  0.5927
    -1.2062 0.2423
  T =
    13.19 0
    0     0.1516
  Q =
    -0.5921 -0.8059
    -0.8059 0.5921
  Z =
    -0.6521 -0.7581
    0.7581  -0.6521
Q*S*Z
  1 2
  3 4
Q*T*Z
  5 6
  7 8

`dgges`, `zgges`

Rank of a matrix.

n = rank(X)
n = rank(X, tol)

`rank(X)` returns the rank of matrix `X`, i.e. the
number of linearly independent rows or columns. To obtain it, the singular
values are computed and the number of values significantly larger than 0 is
counted. The threshold below which they are considered to be 0 can be specified
with the optional second argument.
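
A cruder alternative to the singular-value approach is to count the pivots of a Gaussian elimination, as in this pure-Python sketch (an illustration only; it is less robust numerically than the SVD-based method LME uses):

```python
def rank(X, tol=1e-10):
    """Rank by Gaussian elimination: count pivots larger than tol."""
    M = [list(row) for row in X]
    m, n = len(M), len(M[0])
    r = 0  # index of the current pivot row
    for col in range(n):
        piv = max(range(r, m), key=lambda i: abs(M[i][col]), default=None)
        if piv is None or abs(M[piv][col]) <= tol:
            continue  # no usable pivot in this column
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, m):
            f = M[i][col] / M[r][col]
            for c in range(col, n):
                M[i][c] -= f * M[r][c]
        r += 1
        if r == m:
            break
    return r

print(rank([[1, 1], [0, 0]]))  # 1
```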

rank([1,1;0,0])
  1

`dgesvd`, `zgesvd`

`svd`,
`cond`,
`orth`,
`null`,
`det`,
`rank` (LME)

Reciprocal condition number.

r = rcond(A)

`rcond(A)` computes the reciprocal condition number of matrix
`A`, i.e. the number `r=1/(norm(A,1)*norm(inv(A),1))`.
The reciprocal condition number is near 1 for well-conditioned matrices
and near 0 for ill-conditioned ones.
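
For a small matrix the quantity can be computed exactly, as in the pure-Python sketch below for the 2-by-2 case, where the inverse has a closed form. Note that `dgecon` only *estimates* `norm(inv(A),1)` without forming the inverse, so the value returned by `rcond` may differ slightly from this exact computation:

```python
def norm1(M):
    """Matrix 1-norm: maximum absolute column sum."""
    return max(sum(abs(M[i][j]) for i in range(len(M))) for j in range(len(M[0])))

def rcond2(A):
    """Exact reciprocal condition number of a 2-by-2 matrix in the 1-norm."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        return 0.0  # singular matrix: rcond is 0
    inv = [[d / det, -b / det], [-c / det, a / det]]
    return 1.0 / (norm1(A) * norm1(inv))

print(rcond2([[1, 1], [0, 1]]))  # 0.25 exactly; LAPACK's estimate may differ
```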

rcond([1,1;0,1])
  0.3
rcond([1,1e6;2,1e6])
  5e-7

`dlange`, `zlange` for the 1-norm of `X`;
`dgecon`, `zgecon` for the rcond of `X` using
its 1-norm

`lu`,
`inv`,
`cond` (LME)

Schur factorization.

(U,T) = schur(A)
T = schur(A)

`schur(A)` computes the Schur factorization of square matrix `A`,
i.e. a unitary matrix `U` and a square matrix `T` (the *Schur
matrix*) such that `A=U*T*U'`. If `A` is complex, the
Schur matrix is upper triangular, and its diagonal contains the eigenvalues of
`A`; if `A` is real, the Schur matrix is
real upper triangular, except that there may be 2-by-2 blocks on the main diagonal
which correspond to the complex eigenvalues of `A`.

(U,T) = schur([1,2;3,4])
  U =
    0.416  -0.9094
    0.9094 0.416
  T =
    5.3723 -1
    0      -0.3723
eig([1,2;3,4])
  5.3723
  -0.3723

`dgees`, `zgees`

`lu`,
`hess`,
`qr`,
`logm`,
`sqrtm`,
`eig`

Matrix square root.

S = sqrtm(X)

`sqrtm(A)` returns the square root of the square matrix `A`,
i.e. a matrix `S` such that `S*S=A`.
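
The matrix square root can also be computed iteratively. The pure-Python sketch below uses the Denman-Beavers iteration on a 2-by-2 matrix; this is an illustration only, not the Schur-based method of `zgees` that LME uses, and it works in real arithmetic only when `A` has no eigenvalues on the negative real axis (whereas `sqrtm` returns a complex result in that case):

```python
def inv2(M):
    """Closed-form inverse of a 2-by-2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def avg2(A, B):
    """Elementwise average of two 2-by-2 matrices."""
    return [[(A[i][j] + B[i][j]) * 0.5 for j in range(2)] for i in range(2)]

def sqrtm2(A, iters=50):
    """Denman-Beavers iteration: Y converges to sqrtm(A)."""
    Y, Z = [list(r) for r in A], [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(iters):
        # both updates use the previous Y and Z (tuple assignment)
        Y, Z = avg2(Y, inv2(Z)), avg2(Z, inv2(Y))
    return Y

S = sqrtm2([[5.0, 3.0], [3.0, 8.0]])  # S*S reproduces the input matrix
```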

S = sqrtm([1,2;3,4])
  S =
    0.5537+0.4644j 0.807-0.2124j
    1.2104-0.3186j 1.7641+0.1458j
S*S
  1 2
  3 4

`zgees`

Singular value decomposition.

s = svd(X)
(U,S,V) = svd(X)
(U,S,V) = svd(X, false)

The singular value decomposition `(U,S,V) = svd(X)`
decomposes the m-by-n matrix `X` such that
`X = U*S*V'`, where `S` is an m-by-n diagonal
matrix with non-increasing, nonnegative diagonal elements (the singular values of
`X`), `U` is an m-by-m unitary matrix, and `V` is an
n-by-n unitary matrix. The number of non-zero diagonal elements of `S` (up to
rounding errors) gives the rank of `X`.

When `m>n`, `(U,S,V) = svd(X,false)` produces
an n-by-n diagonal matrix `S` and an m-by-n matrix `U`.
The relationship `X = U*S*V'` still holds.

With one output argument,
`s = svd(X)` returns the vector of singular values.

(U,S,V) = svd([1,2,3;4,5,6])
  U =
    -0.3863 -0.9224
    -0.9224 0.3863
  S =
    9.508 0      0
    0     0.7729 0
  V =
    -0.4287 0.806   0.4082
    -0.5663 0.1124  -0.8165
    -0.7039 -0.5812 0.4082
U*S*V'
  1 2 3
  4 5 6
(U,S,V) = svd([1,2,3;4,5,6],false)
  U =
    -0.3863 -0.9224
    -0.9224 0.3863
  S =
    9.508 0
    0     0.7729
  V =
    -0.4287 0.806
    -0.5663 0.1124
    -0.7039 -0.5812
U*S*V
  1.4944 -2.4586 2.4944
  3.7929 -7.2784 4.7929

`dgesvd`, `zgesvd`

Copyright 2000-2013, Calerga.

All rights reserved.