Copyright © The Octave Project Developers
Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies.
Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one.
Permission is granted to copy and distribute translations of this manual into another language, under the above conditions for modified versions.
The linear-algebra package must be installed and then loaded before it can be used.
It can be installed in GNU Octave directly from Octave Forge.
If running on Windows, the package may already be installed; to check, run:
pkg list linear-algebra
With an internet connection available, the linear-algebra package can be installed from packages.octave.org using the following command within GNU Octave:
pkg install linear-algebra
The latest released version of the package will be downloaded and installed.
Otherwise, if the package file has already been downloaded it can be installed using the following command in GNU Octave:
pkg install linear-algebra-2.2.4.tar.gz
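Once installed, the package must be loaded into each Octave session before its functions can be used:
pkg load linear-algebra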
out = vec_projection (x, y, tol)
Compute the vector projection of a 3-vector onto another.
x : size 1 x 3, y : size 1 x 3, tol : size 1 x 1 (optional tolerance).
vec_projection ([1,0,0], [0.5,0.5,0])
⇒ 0.7071
Computes the vector projection of x onto y; both are 3-vectors. Returns the length of x along y. The function uses the dot product, the Euclidean norm, and the angle between the vectors to compute the proper length along y.
cartprod (varargin)
Computes the Cartesian product of given column vectors (or row vectors). The vector elements are assumed to be numbers.
Alternatively, the vectors can be specified as the columns of a matrix.
The Cartesian product of vectors is P = A x B x C x D .... A, B, C, D must be column vectors. The algorithm iteratively calculates the products (((A x B) x C) x D) x etc.
cartprod (1:2, 3:4, 0:1)
ans =
   1   3   0
   2   3   0
   1   4   0
   2   4   0
   1   3   1
   2   3   1
   1   4   1
   2   4   1
See also: kron.
[q, r, z] = cod (a)
[q, r, z, p] = cod (a)
[…] = cod (a, '0')
Computes the complete orthogonal decomposition (COD) of the matrix a:
a = q*r*z'
Let a be an M-by-N matrix, and let K = min(M, N).
Then q is M-by-M orthogonal, z is N-by-N orthogonal,
and r is M-by-N such that r(:,1:K) is upper
trapezoidal and r(:,K+1:N) is zero.
The additional p output argument specifies that pivoting should be used in
the first step (QR decomposition). In this case,
a*p = q*r*z'
If a second argument of '0' is given, an economy-sized factorization is returned so that r is K-by-K.
NOTE: This is currently implemented by double QR factorization plus some tricky manipulations, and is not as efficient as using xRZTZF from LAPACK.
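A minimal usage sketch, verifying the factorization on a random matrix:
a = rand (5, 3);
[q, r, z] = cod (a);
norm (a - q*r*z', 'fro')   # of order eps, up to roundoff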
See also: qr.
B = funm (A, F)
Compute the matrix equivalent of the function F; F can be a function name or a function handle and A must be a square matrix.
For trigonometric and hyperbolic functions, thfm is automatically invoked, as it is based on expm and avoids diagonalization.
For other functions diagonalization is used, which implies that, depending on the properties of the input matrix A, the results can be very inaccurate without any warning. For easily diagonalizable and stable matrices the results of funm will be sufficiently accurate.
Note that you should not use funm for 'sqrt', 'log' or 'exp'; instead use sqrtm, logm and expm, as these are more robust.
Examples:
B = funm (A, @sin);
(Compute the matrix equivalent of sin().)
function bk1 = besselk1 (x)
  bk1 = besselk (1, x);
endfunction
B = funm (A, @besselk1);
(Compute the matrix equivalent of the Bessel function K1(); a helper function is needed here to convey the extra argument for besselk().)
Note that a much improved funm.m function has been implemented in Octave 11.1.0, so funm.m will be removed from the linear-algebra package when the package is installed in Octave 11 or later.
See also: thfm, expm, logm, sqrtm.
[blockVectorX, lambda] = lobpcg (blockVectorX, operatorA)
[blockVectorX, lambda, failureFlag] = lobpcg (blockVectorX, operatorA)
[blockVectorX, lambda, failureFlag, lambdaHistory, residualNormsHistory] = lobpcg (blockVectorX, operatorA, operatorB, operatorT, blockVectorY, residualTolerance, maxIterations, verbosityLevel)
Solves Hermitian partial eigenproblems using preconditioning.
The first form outputs the array of the algebraically smallest eigenvalues lambda and the corresponding matrix of orthonormalized eigenvectors blockVectorX of the Hermitian (full or sparse) operator operatorA, using the input matrix blockVectorX as an initial guess, without preconditioning, somewhat similar to:
# for real symmetric operator operatorA
opts.issym = 1;
opts.isreal = 1;
K = size (blockVectorX, 2);
[blockVectorX, lambda] = eigs (operatorA, K, 'SR', opts);
# for Hermitian operator operatorA
K = size (blockVectorX, 2);
[blockVectorX, lambda] = eigs (operatorA, K, 'SR');
The second form returns a convergence flag. If failureFlag is 0 then all the eigenvalues converged; otherwise not all converged.
The third form computes the smallest eigenvalues lambda and the corresponding eigenvectors blockVectorX of the generalized eigenproblem Ax = lambda Bx, where the Hermitian operators operatorA and operatorB are given as functions, as well as a preconditioner, operatorT. The operators operatorB and operatorT must in addition be positive definite. To compute the largest eigenpairs of operatorA, simply apply the code to operatorA multiplied by -1. The code does not involve any matrix factorizations of operatorA and operatorB; thus, e.g., it preserves the sparsity and the structure of operatorA and operatorB.
residualTolerance and maxIterations control the tolerance and the maximum number of steps, and verbosityLevel = 0, 1, or 2 controls the amount of printed information. lambdaHistory is a matrix with all iterative lambdas, and residualNormsHistory is a matrix of the history of the 2-norms of the residuals.
Required input:
  blockVectorX : the initial approximation to the eigenvectors, an n-by-blockSize matrix.
  operatorA : the operator of the eigenproblem; a matrix or a function.
Optional function input:
  operatorB : the second operator of a generalized eigenproblem; by default, or if empty, operatorB = I.
  operatorT : the preconditioner; by default operatorT(blockVectorX) = blockVectorX.
Optional constraints input:
  blockVectorY : a matrix of constraints; the computed eigenvectors are kept orthogonal to its columns.
Optional scalar input parameters, with their defaults:
  residualTolerance = n * sqrt (eps)
  maxIterations = min (n, 20)
  verbosityLevel = 0
Required output:
  blockVectorX and lambda, both of block size
  blockSize = size (blockVectorX, 2)
  for the initial guess blockVectorX if it is full rank.
Optional output:
  failureFlag, lambdaHistory and residualNormsHistory, as described above.
Functions operatorA(blockVectorX), operatorB(blockVectorX) and
operatorT(blockVectorX) must support blockVectorX being a matrix, not
just a column vector.
Every iteration involves one application of operatorA and operatorB, and one of operatorT.
Main memory requirements: 6 matrices (9 if operatorB is present) of the same size as blockVectorX, 2 matrices of the same size as blockVectorY (if present), and two square matrices of size 3*blockSize.
In all examples below, we use the Laplacian operator in a 20x20 square with the mesh size 1 which can be generated in MATLAB by running:
A = delsq (numgrid ('S', 21));
n = size (A, 1);
or in MATLAB and Octave by:
[~,~,A] = laplacian ([19, 19]); n = size (A, 1);
Note that laplacian is a function of the specfun octave-forge package.
The following example:
[blockVectorX, lambda, failureFlag] = lobpcg (randn (n, 8), A, 1e-5, 50, 2);
attempts to compute the first 8 eigenpairs without preconditioning, but not all eigenpairs converge after 50 steps, so failureFlag = 1.
The next example:
blockVectorY = [];
lambda_all = [];
for j = 1:4
  [blockVectorX, lambda] = lobpcg (randn (n, 2), A, blockVectorY, 1e-5, 200, 2);
  blockVectorY = [blockVectorY, blockVectorX];
  lambda_all = [lambda_all' lambda']';
  pause;
end
attempts to compute the same 8 eigenpairs by calling the code 4 times with blockSize = 2, using orthogonalization to the previously found eigenvectors.
The following example:
R = ichol (A, struct('michol', 'on'));
precfun = @(x)R\(R'\x);
[blockVectorX, lambda, failureFlag] = lobpcg (randn (n, 8), A, [], @(x)precfun(x), 1e-5, 60, 2);
computes the same eigenpairs in fewer than 25 steps, so that failureFlag = 0, using the preconditioner function precfun defined inline. If precfun is defined as an Octave function in a file, the function handle @(x)precfun(x) can be equivalently replaced by the function name precfun. Running:
[blockVectorX, lambda, failureFlag] = lobpcg (randn (n, 8), A, speye (n), @(x)precfun(x), 1e-5, 50, 2);
produces similar answers, but is somewhat slower and needs more memory as technically a generalized eigenproblem with B=I is solved here.
The following example for a mostly diagonally dominant sparse matrix A demonstrates different types of preconditioning, compared to the standard use of the main diagonal of A:
clear all; close all;
n = 1000;
M = spdiags ([1:n]', 0, n, n);
precfun = @(x)M\x;
A = M + sprandsym (n, .1);
Xini = randn (n, 5);
maxiter = 15;
tol = 1e-5;
[~,~,~,~,rnp] = lobpcg (Xini, A, tol, maxiter, 1);
[~,~,~,~,r] = lobpcg (Xini, A, [], @(x)precfun(x), tol, maxiter, 1);
subplot (2,2,1), semilogy (r'); hold on;
semilogy (rnp', ':>');
title ('No preconditioning (top)'); axis tight;
M(1,2) = 2;
precfun = @(x)M\x; % M is no longer symmetric
[~,~,~,~,rns] = lobpcg (Xini, A, [], @(x)precfun(x), tol, maxiter, 1);
subplot (2,2,2), semilogy (r'); hold on;
semilogy (rns', '--s');
title ('Nonsymmetric preconditioning (square)'); axis tight;
M(1,2) = 0;
precfun = @(x)M\(x+10*sin(x)); % nonlinear preconditioning
[~,~,~,~,rnl] = lobpcg (Xini, A, [], @(x)precfun(x), tol, maxiter, 1);
subplot (2,2,3), semilogy (r'); hold on;
semilogy (rnl', '-.*');
title ('Nonlinear preconditioning (star)'); axis tight;
M = abs (M - 3.5 * speye (n, n));
precfun = @(x)M\x;
[~,~,~,~,rs] = lobpcg (Xini, A, [], @(x)precfun(x), tol, maxiter, 1);
subplot (2,2,4), semilogy (r'); hold on;
semilogy (rs', '-d');
title ('Selective preconditioning (diamond)'); axis tight;
This main function lobpcg is a version of the preconditioned conjugate
gradient method (Algorithm 5.1) described in A. V. Knyazev, Toward the Optimal
Preconditioned Eigensolver:
Locally Optimal Block Preconditioned Conjugate Gradient Method,
SIAM Journal on Scientific Computing 23 (2001), no. 2, pp. 517-541.
http://dx.doi.org/10.1137/S1064827500366124
operatorA = diag ([1 1.99 2:99]);
[blockVectorX, lambda] = lobpcg (randn (100, 1), operatorA, 1e-10, 80, 2);
[blockVectorX, lambda] = lobpcg (randn (100, 2), operatorA, 1e-10, 80, 2);
[blockVectorX, lambda] = lobpcg (randn (100, 3), operatorA, 1e-10, 80, 2);
The main distribution site: http://math.ucdenver.edu/~aknyazev/
A C-version of this code is a part of the http://code.google.com/p/blopex/ package and is directly available, e.g., in PETSc and HYPRE.
y = ndcovlt (x, t1, t2, …)
Computes an n-dimensional covariant linear transform of an n-d tensor, given a transformation matrix for each dimension. The number of columns of each transformation matrix must match the corresponding extent of x, and the number of rows determines the corresponding extent of y. For example:
size (x, 2) == columns (t2)
size (y, 2) == rows (t2)
The element y(i1, i2, …) is defined as a sum of
x(j1, j2, ...) * t1(i1, j1) * t2(i2, j2) * ...
over all j1, j2, …. For two dimensions, this reduces to
y = t1 * x * t2.'
[] passed as a transformation matrix is converted to identity matrix for the corresponding dimension.
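A minimal sketch of the two-dimensional case, checking the reduction y = t1 * x * t2.' on random data:
x = rand (3, 4);
t1 = rand (2, 3);                 # 2 rows, so size (y, 1) == 2
t2 = rand (5, 4);                 # 5 rows, so size (y, 2) == 5
y = ndcovlt (x, t1, t2);
norm (y - t1 * x * t2.', 'fro')   # ~ 0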
[vstacked, astacked] = rotparams (rstacked)
The inverse of rotv(): w = rotparams (r) is such that rotv(w)*r' == eye(3).
If used as [v, a] = rotparams (r), the same holds, with v (1 x 3) such that w == a*v and
0 <= norm(w) == a <= pi
Warning: does not check whether r is a rotation matrix.
Ignores matrices with zero rows or with NaNs (returns 0 for them).
See also: rotv.
r = rotv (v, ang)
The function rotv calculates the matrix of rotation about v with angle |v|:
r = rotv (v [, ang])
Returns the rotation matrix with axis v and angle, in radians, norm(v), or ang if present.
rotv(v) == w'*w + cos(a) * (eye(3)-w'*w) - sin(a) * crossmat(w)
where a = norm (v) and w = v/a.
v and ang may be vertically stacked: if v is 2x3, then rotv(v) == [rotv(v(1,:)); rotv(v(2,:))]
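A minimal round-trip sketch for rotv and rotparams, assuming a rotation of pi/2 about the z-axis:
v = [0, 0, pi/2];    # axis along z, angle pi/2
r = rotv (v);        # 3x3 rotation matrix
w = rotparams (r);   # recovers v, up to roundoff
norm (w - v)         # ~ 0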
See also: rotparams, rota, rot.
x = smwsolve (a, u, v, b)
x = smwsolve (solver, u, v, b)
Solves the square system (A + U*V')*X == B, where u and v are matrices with several columns, using the Sherman-Morrison-Woodbury formula, so that a system with a as the left-hand side is actually solved. This is especially advantageous if a is diagonal, sparse, triangular or positive definite.
a can be sparse or full, the other matrices are expected to be full.
Instead of a matrix a, a user may alternatively provide a function
solver that performs the left division operation.
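A minimal sketch with a sparse diagonal a and a rank-5 correction:
n = 1000;
k = 5;
a = spdiags ((1:n)', 0, n, n);   # sparse diagonal left-hand side
u = randn (n, k);
v = randn (n, k);
b = randn (n, 2);
x = smwsolve (a, u, v, b);
norm ((a + u*v') * x - b)        # small residual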
y = thfm (x, mode)
Trigonometric/hyperbolic functions of a square matrix x.
mode must be the name of a function. Valid functions are 'sin', 'cos', 'tan', 'sec', 'csc', 'cot' and all their inverses and/or hyperbolic variants, and 'sqrt', 'log' and 'exp'.
The code thfm (x, 'cos') calculates the matrix cosine even if the input matrix x is not diagonalizable.
Important note:
This algorithm does not use an eigensystem similarity transformation. It maps the mode functions to functions of expm, logm and sqrtm, which are known to be robust with respect to non-diagonalizable ('defective') x.
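A minimal sketch, checking the identity cos(X)^2 + sin(X)^2 == I on a small matrix:
x = [0, 1; -1, 0];
C = thfm (x, 'cos');
S = thfm (x, 'sin');
norm (C^2 + S^2 - eye (2))   # ~ 0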
See also: funm.
[W, H, iter, HIS] = nmf_bpas (A, k)
Nonnegative Matrix Factorization by Alternating Nonnegativity-Constrained Least Squares using the Block Principal Pivoting/Active Set method.
This function solves one of the following problems: given A and k, find W and H such that
(1) minimize 1/2 * || A-WH ||_F^2
(2) minimize 1/2 * ( || A-WH ||_F^2 + alpha * || W ||_F^2 + beta * || H ||_F^2 )
(3) minimize 1/2 * ( || A-WH ||_F^2 + alpha * || W ||_F^2 + beta * (sum_(i=1)^n || H(:,i) ||_1^2 ) )
where W >= 0 and H >= 0 elementwise. The input arguments are A : input data matrix (m x n) and k : target low rank.
Optional Inputs
'type' : Default is 'regularized', which is recommended for quick application testing unless 'sparse' or 'plain' is explicitly needed. If sparsity is needed for the 'W' factor, then apply this function to the transpose of 'A' with formulation (3), then exchange 'W' and 'H' and take their transposes. Imposing sparsity on both factors is not recommended and thus not included in this software.
  'plain' to use formulation (1)
  'regularized' to use formulation (2)
  'sparse' to use formulation (3)
'nnlssolver' : Default is 'bp', which is in general faster.
  'bp' to use the algorithm in [1]
  'as' to use the algorithm in [2]
'alpha' : Parameter alpha in formulation (2) or (3). Default is the average of all elements in A. There is no good justification for this default value, and you might want to try other values.
'beta' : Parameter beta in formulation (2) or (3). Default is the average of all elements in A. There is no good justification for this default value, and you might want to try other values.
'max_iter' : Maximum number of iterations. Default is 100.
'min_iter' : Minimum number of iterations. Default is 20.
'max_time' : Maximum amount of time in seconds. Default is 100,000.
'Winit' : (m x k) initial value for W.
'Hinit' : (k x n) initial value for H.
'tol' : Stopping tolerance. Default is 1e-3. If you want to obtain a more accurate solution, decrease 'tol' and increase 'max_iter' at the same time.
'verbose' : If present, the function shows information during the calculations.
Outputs
W : obtained basis matrix (m x k)
H : obtained coefficients matrix (k x n)
iter : number of iterations
HIS : if present, the history of the computation is returned.
Usage Examples:
nmf_bpas (A, 10)
nmf_bpas (A, 20, 'verbose')
nmf_bpas (A, 30, 'verbose', 'nnlssolver', 'as')
nmf_bpas (A, 5, 'verbose', 'type', 'sparse')
nmf_bpas (A, 60, 'verbose', 'type', 'plain', 'Winit', rand (size (A, 1), 60))
nmf_bpas (A, 70, 'verbose', 'type', 'sparse', 'nnlssolver', 'bp', 'alpha', 1.1, 'beta', 1.3)
References:
[1] For using this software, please cite:
Jingu Kim and Haesun Park, Toward Faster Nonnegative Matrix Factorization: A New Algorithm and Comparisons,
In Proceedings of the 2008 Eighth IEEE International Conference on Data Mining (ICDM'08), 353-362, 2008.
[2] If you use 'nnlssolver'='as' (see above), please cite:
Hyunsoo Kim and Haesun Park, Nonnegative Matrix Factorization Based
on Alternating Nonnegativity Constrained Least Squares and Active Set Method,
SIAM Journal on Matrix Analysis and Applications, 2008, 30, 713-730
Check original code at http://www.cc.gatech.edu/~jingu
See also: nmf_pg.
[W, H] = nmf_pg (V, Winit, Hinit, tol, timelimit, maxiter)
Non-negative matrix factorization by alternating non-negative least squares using projected gradients.
The matrix V is factorized into two positive matrices W and H such that V = W*H + U, where U is a matrix of residuals that can be negative or positive. When the matrix V is positive, the magnitude of the elements of U is bounded by the optional named argument tol (default value 1e-9).
The factorization is not unique and depends on the initial guess for the matrices W and H. You can pass these initializations using the optional named arguments Winit and Hinit.
timelimit, maxiter: limit of time and iterations
Examples:
A = rand (10, 5);
[W, H] = nmf_pg (A, tol = 1e-3);
U = W*H - A;
disp (max (abs (U)));
s = blksparse (i, j, sv)
s = blksparse (i, j, sv, m, n)
s = blksparse (…, mode)
Construct a block sparse matrix. The meaning of the arguments is analogous to the built-in sparse function, except that i, j are indices of blocks rather than elements, and sv is a 3-dimensional array whose first two dimensions determine the block size. Optionally, m and n can be specified as the true block dimensions; if not, the maximum values of i, j are taken instead. The resulting sparse matrix has the size
[m*p, n*q]
where
p = size (sv, 1)
q = size (sv, 2)
The blocks are located so that
s(i(k):i(k)+p-1, j(k):j(k)+q-1) = sv(:,:,k)
Multiple blocks corresponding to the same pair of indices are summed, unless mode is "unique", in which case the last of them is used.
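A minimal sketch, assuming i and j may be given as vectors, building a 4x4 matrix from two 2x2 diagonal blocks:
sv = zeros (2, 2, 2);
sv(:,:,1) = [1, 2; 3, 4];
sv(:,:,2) = [5, 6; 7, 8];
s = blksparse ([1; 2], [1; 2], sv);   # blocks at block positions (1,1) and (2,2)
full (sparse (s))                     # the assembled 4x4 block-diagonal matrix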
ctranspose (x)
Returns the conjugate transpose of a block sparse matrix x.
ismatrix (s)
Returns true (a blksparse object is a matrix).
issparse (s)
Returns true, since a blksparse object is sparse by definition.
mldivide (x, y)
Performs left division with a block sparse matrix. If x is a block sparse matrix, it must be either diagonal or triangular, and y must be full. If x is built-in sparse or full, y is converted accordingly, then the built-in division is used.
mrdivide (x, y)
Performs right division with a block sparse matrix. If y is a block sparse matrix, it must be either diagonal or triangular, and x must be full. If y is built-in sparse or full, x is converted accordingly, then the built-in division is used.
mtimes (x, y)
Multiplies a block sparse matrix with a full matrix, or two block sparse matrices. Multiplication of block sparse * sparse is not implemented. If one of the arguments is a scalar, it is a scalar multiplication.
sparse (x)
Converts a block sparse matrix to (built-in) sparse.
columns (KP)
Return the number of columns in the Kronecker product KP.
See also: @kronprod/rows, @kronprod/size, @kronprod/numel.
ctranspose (KP)
The complex conjugate transpose of a Kronecker product. This is equivalent to
KP'
See also: ctranspose, @kronprod/transpose.
det (KP)
Compute the determinant of a Kronecker product.
If KP is the Kronecker product of the n-by-n matrix A and the q-by-q matrix B, then the determinant is computed as
det (A)^q * det (B)^n
If KP is not a Kronecker product of square matrices the determinant is computed by forming the full matrix and then computing the determinant.
See also: det, @kronprod/trace, @kronprod/rank, @kronprod/full.
disp (KP)
Show the content of the Kronecker product KP. To avoid evaluating the
Kronecker product, this function displays the two matrices defining the product.
To display the actual values of KP, use disp (full (KP)).
This function is equivalent to @kronprod/display.
See also: @kronprod/display, @kronprod/full.
display (KP)
Show the content of the Kronecker product KP. To avoid evaluating the
Kronecker product, this function displays the two matrices defining the product.
To display the actual values of KP, use display (full (KP)).
See also: @kronprod/disp, @kronprod/full.
full (KP)
Return the full matrix representation of the Kronecker product KP.
If KP is the Kronecker product of an n-by-m matrix and a q-by-r matrix, then the result is an nq-by-mr matrix. Thus, the result can require a vast amount of memory, so this function should be avoided whenever possible.
See also: full, @kronprod/sparse.
inv (KP)
Return the inverse of the Kronecker product KP.
If KP is the Kronecker product of two square matrices A and B, the inverse will be computed as the Kronecker product of the inverses of A and B.
If KP is square but not a Kronecker product of square matrices, the inverse will be computed using the SVD.
See also: @kronprod/sparse.
iscomplex (KP)
Return true if the Kronecker product KP contains any complex values.
See also: iscomplex, @kronprod/isreal.
ismatrix (KP)
Return true, to indicate that the Kronecker product KP is always a matrix.
isreal (KP)
Return true if the Kronecker product KP is real, i.e., has no imaginary components.
See also: isreal, @kronprod/iscomplex.
issparse (KP)
Return true if one of the matrices in the Kronecker product KP is sparse.
See also: @kronprod/sparse.
issquare (KP)
Return true if the Kronecker product KP is a square matrix.
See also: @kronprod/size.
kronprod (A, B)
kronprod (A, B, P)
Construct a Kronecker product object. XXX: Write proper documentation.
With two input arguments, the following matrix is represented: kron (A, B).
With three input arguments, the following matrix is represented: P * kron (A, B) * P' (P must be a permutation matrix).
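A minimal usage sketch, comparing the implicit representation with the explicitly formed product:
A = rand (3, 3);
B = rand (4, 4);
KP = kronprod (A, B);                   # represents kron (A, B) without forming it
norm (full (KP) - kron (A, B), 'fro')   # ~ 0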
minus (a, b)
Return the difference between a Kronecker product and another matrix. This is performed by forming the full matrix of both inputs and is as such a potentially expensive operation.
See also: minus, @kronprod/plus.
numel (KP)
Return the number of elements in the Kronecker product KP.
See also: numel, @kronprod/rows, @kronprod/columns, @kronprod/size.
plus (a, b)
Return the sum of a Kronecker product and another matrix. This is performed by forming the full matrix of both inputs and is as such a potentially expensive operation.
See also: plus, @kronprod/minus.
rank (KP)
Return the rank of the Kronecker product KP. This is computed as the product of the ranks of the matrices forming the product.
See also: rank, @kronprod/det, @kronprod/trace.
rows (KP)
Return the number of rows in the Kronecker product KP.
See also: rows, @kronprod/size, @kronprod/columns, @kronprod/numel.
size (KP)
size (KP, dim)
Return the size of the Kronecker product KP as a vector.
See also: size, @kronprod/rows, @kronprod/columns, @kronprod/numel.
sparse (KP)
Return the Kronecker product KP represented as a sparse matrix.
See also: sparse, @kronprod/issparse, @kronprod/full.
trace (KP)
Returns the trace of the Kronecker product KP.
If KP is a Kronecker product of two square matrices, the trace is computed as the product of the traces of these two matrices. Otherwise the trace is computed by forming the full matrix.
See also: @kronprod/det, @kronprod/rank, @kronprod/full.
transpose (KP)
Returns the transpose of the Kronecker product KP. This is equivalent to
KP.'
See also: transpose, @kronprod/ctranspose.
lambda = circulant_eig (v)
[vs, lambda] = circulant_eig (v)
Fast, compact calculation of the eigenvalues and eigenvectors of a circulant matrix.
Given an n*1 vector v, return the eigenvalues lambda and optionally the eigenvectors vs of the n*n circulant matrix C that has v as its first column.
Theoretically the same as eig (circulant_make_matrix (v)), but with many fewer computations; does not form C explicitly.
Reference: Robert M. Gray, Toeplitz and Circulant Matrices: A Review, Now Publishers, http://ee.stanford.edu/~gray/toeplitz.pdf, Chapter 3.
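A minimal sketch on a symmetric circulant (so the eigenvalues are real up to roundoff), assuming the single-output form returns the eigenvalues as a vector:
v = [4; 1; 0; 1];                                 # first column of the circulant
lambda = circulant_eig (v);
C = gallery ('circul', v)';                       # same matrix, formed explicitly
norm (sort (real (lambda(:))) - sort (eig (C)))   # ~ 0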
See also: gallery, circulant_matrix_vector_product, circulant_inv.
c = circulant_inv (v)
Fast, compact calculation of the inverse of a circulant matrix.
Given an n*1 vector v, return the inverse c of the n*n circulant matrix C that has v as its first column.
The returned c is the first column of the inverse, which is also circulant; to get the full matrix, use circulant_make_matrix (c).
Theoretically the same as inv (circulant_make_matrix (v))(:, 1), but requires many fewer computations and does not form matrices explicitly.
Roundoff may induce a small imaginary component in c even if v is real; use real (c) to remedy this.
Reference: Robert M. Gray, Toeplitz and Circulant Matrices: A Review, Now Publishers, http://ee.stanford.edu/~gray/toeplitz.pdf, Chapter 3
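A minimal sketch, comparing against the explicitly formed inverse:
v = [4; 1; 0; 1];
c = circulant_inv (v);            # first column of inv (C)
C = gallery ('circul', v)';       # explicit circulant with v as first column
norm (real (c) - inv (C)(:, 1))   # ~ 0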
See also: gallery, circulant_matrix_vector_product, circulant_eig.
C = circulant_make_matrix (v)
Produce a full circulant matrix given the first column.
Note: this function has been deprecated and will be removed in the future. Instead, use gallery with the circul option. To obtain exactly the same matrix, transpose the result, i.e., replace circulant_make_matrix (v) with gallery ("circul", v)'.
Given an n*1 vector v, returns the n*n circulant matrix C where v is the first column and all other columns are downshifted versions of v.
Note: If the first row r of a circulant matrix is given, the first
column v can be obtained as v = r([1 end:-1:2]).
Reference: Gene H. Golub and Charles F. Van Loan, Matrix Computations, 3rd Ed., Section 4.7.7
See also: gallery, circulant_matrix_vector_product, circulant_eig, circulant_inv.
y = circulant_matrix_vector_product (v, x)
Fast, compact calculation of the product of a circulant matrix with a vector.
Given n*1 vectors v and x, return the matrix-vector product y = Cx, where C is the n*n circulant matrix that has v as its first column.
Theoretically the same as circulant_make_matrix (v) * x, but does not form C explicitly; uses the discrete Fourier transform.
Because of roundoff, the returned y may have a small imaginary component even if v and x are real (use real (y) to remedy this).
Reference: Gene H. Golub and Charles F. Van Loan, Matrix Computations, 3rd Ed., Section 4.7.7
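A minimal sketch, checking against the explicit product:
v = randn (8, 1);
x = randn (8, 1);
y = circulant_matrix_vector_product (v, x);
C = gallery ('circul', v)';   # explicit circulant with v as first column
norm (real (y) - C*x)         # ~ 0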
See also: gallery, circulant_eig, circulant_inv.