Matrix Calculations Guide

A complete reference for all matrix operations — from basic arithmetic to advanced decompositions. Each section includes the rule, worked examples, and key properties.

1. What is a Matrix?

A matrix is a rectangular array of numbers arranged in rows and columns. An m×n matrix has m rows and n columns. Each entry is identified by its row index i and column index j, written as A[i][j] or A_{ij}.

Example of a 2×3 matrix (2 rows, 3 columns):

[ 1  2  3 ]
[ 4  5  6 ]

Special cases: a 1×n matrix is a row vector; an m×1 matrix is a column vector; an n×n matrix is a square matrix.

[ 3  1  4 ]

The above is a 1×3 row vector. Below is a 3×1 column vector:

[ 2 ]
[ 7 ]
[ 1 ]

2. Matrix Addition & Subtraction

Two matrices can be added or subtracted only if they have the same dimensions. Each entry of the result is the sum (or difference) of the corresponding entries.

Worked Example

Let A and B be:

A =
[ 1  2 ]
[ 3  4 ]

B =
[ 5  6 ]
[ 7  8 ]

Then A + B =

[  6   8 ]
[ 10  12 ]

Properties: Addition is commutative (A + B = B + A) and associative ((A + B) + C = A + (B + C)).

Dimensions must match exactly. You cannot add a 2×3 matrix to a 3×2 matrix.
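As a minimal sketch in plain Python (the helper name mat_add is my own, and matrices are plain lists of lists), the rule above is a nested comprehension:

```python
def mat_add(A, B):
    # Entry-wise sum; requires identical dimensions.
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "dimensions must match"
    return [[a + b for a, b in zip(row_a, row_b)] for row_a, row_b in zip(A, B)]

print(mat_add([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[6, 8], [10, 12]]
```

Subtraction is the same with `a - b`.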

3. Scalar Operations

Multiplying a matrix by a scalar k means multiplying every element by k. Similarly, adding a scalar k means adding k to every element.

Example: 3 × A

If A is:

[ 1  2 ]
[ 3  4 ]

Then 3A =

[ 3   6 ]
[ 9  12 ]

Key property: det(kA) = k^n · det(A) for an n×n matrix.
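A quick sketch of scalar multiplication in Python (scalar_mul is a hypothetical helper name):

```python
def scalar_mul(k, A):
    # Multiply every entry of A by the scalar k.
    return [[k * x for x in row] for row in A]

print(scalar_mul(3, [[1, 2], [3, 4]]))  # [[3, 6], [9, 12]]
```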

4. Matrix Multiplication

To multiply A (m×n) by B (n×p), the number of columns in A must equal the number of rows in B. The result is an m×p matrix. Each entry (AB)[i][j] is the dot product of the i-th row of A and the j-th column of B.

Worked 2×2 Example

Let A and B be:

A =
[ 1  2 ]
[ 3  4 ]

B =
[ 5  6 ]
[ 7  8 ]

Computing AB:

  1. (AB)[1][1] = 1·5 + 2·7 = 5 + 14 = 19
  2. (AB)[1][2] = 1·6 + 2·8 = 6 + 16 = 22
  3. (AB)[2][1] = 3·5 + 4·7 = 15 + 28 = 43
  4. (AB)[2][2] = 3·6 + 4·8 = 18 + 32 = 50

Result AB =

[ 19  22 ]
[ 43  50 ]

Matrix multiplication is NOT commutative. AB ≠ BA in general. Order matters.

Properties: Associative — (AB)C = A(BC). Distributive — A(B+C) = AB + AC.
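The row-times-column rule can be sketched in Python (mat_mul is my own helper name); the second print shows that order matters:

```python
def mat_mul(A, B):
    # (AB)[i][j] is the dot product of row i of A with column j of B.
    assert len(A[0]) == len(B), "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
print(mat_mul(A, B))  # [[19, 22], [43, 50]]
print(mat_mul(B, A))  # [[23, 34], [31, 46]] (note AB != BA)
```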

5. Hadamard (Element-wise) Product

The Hadamard product A ⊙ B multiplies corresponding elements of two matrices of the same size. Also called the Schur product.

Example with A and B:

A =
[ 1  2 ]
[ 3  4 ]

B =
[ 5  0 ]
[ 2  3 ]

A ⊙ B =

[ 5   0 ]
[ 6  12 ]

Unlike standard matrix multiplication, the Hadamard product is commutative: A ⊙ B = B ⊙ A.
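A minimal Python sketch (hadamard is my own helper name):

```python
def hadamard(A, B):
    # Entry-wise product; both matrices must have the same shape.
    return [[a * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A, B = [[1, 2], [3, 4]], [[5, 0], [2, 3]]
print(hadamard(A, B))  # [[5, 0], [6, 12]]
assert hadamard(A, B) == hadamard(B, A)  # commutative, unlike mat mul
```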

6. Kronecker Product

The Kronecker product A ⊗ B replaces each element A[i][j] with the scaled block A[i][j]·B. If A is m×n and B is p×q, the result is (mp)×(nq).

Example: Let A = [[1,2],[3,4]] and B = [[0,5],[6,7]]. Then A ⊗ B is the 4×4 matrix:

[  0   5   0  10 ]
[  6   7  12  14 ]
[  0  15   0  20 ]
[ 18  21  24  28 ]

Keep matrices small — a 4×4 ⊗ 4×4 Kronecker product produces a 16×16 matrix.
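The block structure translates directly into index arithmetic; a sketch in Python (kron is my own helper name):

```python
def kron(A, B):
    # A is m x n, B is p x q; the result is (m*p) x (n*q).
    # Entry (i, j) of the result is A[i // p][j // q] * B[i % p][j % q].
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)]
            for i in range(len(A) * p)]

K = kron([[1, 2], [3, 4]], [[0, 5], [6, 7]])
print(K)  # [[0, 5, 0, 10], [6, 7, 12, 14], [0, 15, 0, 20], [18, 21, 24, 28]]
```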

7. Matrix Transpose

The transpose of A is written A^T. Rows become columns: entry (i,j) of A becomes entry (j,i) of A^T. A 2×3 matrix becomes a 3×2 matrix.

Example

Original 2×3 matrix A:

[ 1  2  3 ]
[ 4  5  6 ]

Transpose A^T (3×2):

[ 1  4 ]
[ 2  5 ]
[ 3  6 ]

Properties: (A^T)^T = A. (AB)^T = B^T A^T. A matrix satisfying A = A^T is called symmetric.
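In Python, zip(*A) does exactly this row/column swap:

```python
def transpose(A):
    # zip(*A) pairs up the i-th entries of every row, i.e. the columns of A.
    return [list(col) for col in zip(*A)]

print(transpose([[1, 2, 3], [4, 5, 6]]))  # [[1, 4], [2, 5], [3, 6]]
```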

8. Matrix Determinant

The determinant is a scalar associated with a square matrix. Geometrically, it represents the signed scaling factor of the linear transformation: the signed volume of the parallelepiped spanned by the row vectors. If det(A) = 0, the rows are linearly dependent and the transformation collapses space — the matrix is singular and has no inverse.

8.1 Determinant of a 2×2 Matrix

For a 2×2 matrix [[a,b],[c,d]], the determinant is ad − bc:

Example — find det of [[3,8],[4,6]]:

[ 3  8 ]
[ 4  6 ]

det = (3)(6) − (8)(4) = 18 − 32 = −14

8.2 Determinant of a 3×3 Matrix — Laplace (Cofactor) Expansion

Expand along the first row. Each term uses a 2×2 minor (delete the row and column of that entry) with alternating signs (+, −, +):

det(A) = a_{11}·M_{11} − a_{12}·M_{12} + a_{13}·M_{13}

where M_{ij} is the determinant of the 2×2 submatrix obtained by deleting row i and column j.

Worked Example: Compute det of a 3×3 Matrix

Let A be:

[ 2  3  1 ]
[ 0  4  5 ]
[ 1  2  3 ]

Expand along row 1. The three cofactors use the 2×2 minors:

M_{11} — delete row 1, col 1 — gives [[4,5],[2,3]]:

[ 4  5 ]
[ 2  3 ]

M_{11} = (4)(3) − (5)(2) = 12 − 10 = 2

M_{12} — delete row 1, col 2 — gives [[0,5],[1,3]]:

[ 0  5 ]
[ 1  3 ]

M_{12} = (0)(3) − (5)(1) = 0 − 5 = −5

M_{13} — delete row 1, col 3 — gives [[0,4],[1,2]]:

[ 0  4 ]
[ 1  2 ]

M_{13} = (0)(2) − (4)(1) = 0 − 4 = −4

Applying the signs (+, −, +):

det(A) = 2·(2) − 3·(−5) + 1·(−4) = 4 + 15 − 4 = 15
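The cofactor expansion above generalizes to a short recursive sketch in Python (det is my own helper name; the recursion is factorial-time, so it is for illustration on small matrices only):

```python
def det(A):
    # Laplace expansion along the first row: alternate signs, multiply each
    # entry by the determinant of its minor, and recurse.
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[3, 8], [4, 6]]))                   # -14
print(det([[2, 3, 1], [0, 4, 5], [1, 2, 3]]))  # 15
```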

8.3 Sarrus' Rule for 3×3 Matrices

Write the matrix and repeat the first two columns to the right. Sum the three downward diagonals and subtract the three upward diagonals. For [[a,b,c],[d,e,f],[g,h,i]]:

det = aei + bfg + cdh − ceg − afh − bdi
Sarrus' rule works ONLY for 3×3 matrices. Do not apply it to larger matrices.

8.4 Properties of the Determinant

Property            Formula / Rule
Product rule        det(AB) = det(A) · det(B)
Transpose           det(A^T) = det(A)
Scalar scaling      det(kA) = k^n · det(A) for an n×n matrix
Row swap            Swapping two rows negates the determinant
Row scale           Multiplying one row by k multiplies det by k
Row add             Adding a multiple of one row to another leaves det unchanged
Identity            det(I) = 1
Triangular matrix   det = product of diagonal entries
Singular matrix     det = 0 (no inverse exists)

💡 Triangular shortcut: for upper or lower triangular matrices, the determinant is simply the product of the main diagonal entries.

9. Matrix Trace

The trace of a square matrix is the sum of its main diagonal elements.

Example:

[ 5  2  1 ]
[ 0  3  7 ]
[ 4  0  9 ]

tr(A) = 5 + 3 + 9 = 17

Key properties: tr(A) equals the sum of all eigenvalues of A. tr(AB) = tr(BA) (cyclic permutation property). tr(A + B) = tr(A) + tr(B).
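A one-line sketch in Python (trace is my own helper name):

```python
def trace(A):
    # Sum of the main-diagonal entries of a square matrix.
    return sum(A[i][i] for i in range(len(A)))

print(trace([[5, 2, 1], [0, 3, 7], [4, 0, 9]]))  # 17
```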

10. Matrix Rank

The rank of a matrix is the number of linearly independent rows (equivalently, columns). It equals the number of non-zero rows in the RREF of the matrix.

To find rank, row-reduce to echelon form and count pivots. Example — find rank of:

[ 1  2  3 ]
[ 2  4  6 ]
[ 0  1  2 ]

Step 1: R2 → R2 − 2·R1 gives [0, 0, 0]. Step 2: swap R2 ↔ R3 to move the zero row to the bottom. Step 3: R1 → R1 − 2·R2. The resulting RREF is:

[ 1  0  -1 ]
[ 0  1   2 ]
[ 0  0   0 ]

Two non-zero rows → rank = 2.

💡 A square n×n matrix has full rank (rank = n) if and only if det(A) ≠ 0 — i.e., it is invertible.
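Rank can be sketched as pivot counting during forward elimination; a minimal Python version (rank is my own helper name, using a small tolerance for float comparisons):

```python
def rank(A, tol=1e-9):
    # Row-reduce to echelon form and count the pivots found.
    M = [[float(x) for x in row] for row in A]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if abs(M[i][c]) > tol), None)
        if pivot is None:
            continue  # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

print(rank([[1, 2, 3], [2, 4, 6], [0, 1, 2]]))  # 2
```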

11. Matrix Norm

A matrix norm measures the 'size' of a matrix. Common choices:

Norm        Formula            Description
Frobenius   √(Σᵢⱼ aᵢⱼ²)        Square root of the sum of all squared entries
1-norm      maxⱼ Σᵢ |aᵢⱼ|      Maximum absolute column sum
∞-norm      maxᵢ Σⱼ |aᵢⱼ|      Maximum absolute row sum

Example — Frobenius norm of [[1,2],[3,4]]: √(1+4+9+16) = √30 ≈ 5.477.

The Frobenius norm satisfies ‖A‖_F = √tr(A^T A). It is the most commonly used matrix norm in practice.
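A sketch of the Frobenius norm in Python (frobenius is my own helper name):

```python
import math

def frobenius(A):
    # Square root of the sum of all squared entries.
    return math.sqrt(sum(x * x for row in A for x in row))

print(frobenius([[1, 2], [3, 4]]))  # ≈ 5.4772 (sqrt(30))
```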

12. Matrix Inverse

The inverse A⁻¹ of a square matrix A satisfies A·A⁻¹ = A⁻¹·A = I. A matrix is invertible (non-singular) if and only if det(A) ≠ 0.

12.1 Direct Formula for 2×2 Inverse

For a 2×2 matrix [[a,b],[c,d]] with det = ad − bc ≠ 0:

A⁻¹ = (1 / (ad − bc)) · [[d, −b], [−c, a]]

Example: find the inverse of [[2,1],[5,3]]. det = 2·3 − 1·5 = 6 − 5 = 1.

[ 2  1 ]
[ 5  3 ]

A⁻¹ = (1/1)·[[3,−1],[−5,2]] =

[  3  -1 ]
[ -5   2 ]

12.2 Gauss-Jordan Method for Larger Matrices

Form the augmented matrix [A|I] and row-reduce until the left side becomes I. The right side is then A⁻¹.

Example: find the inverse of [[1,2],[3,4]].

[ 1  2 | 1  0 ]
[ 3  4 | 0  1 ]

R2 → R2 − 3·R1:

[ 1   2 |  1  0 ]
[ 0  -2 | -3  1 ]

R2 → R2 / (−2):

[ 1  2 |  1     0  ]
[ 0  1 | 3/2  -1/2 ]

R1 → R1 − 2·R2:

[ 1  0 |  -2     1  ]
[ 0  1 |  3/2  -1/2 ]

So A⁻¹ = [[-2,1],[3/2,−1/2]]. Verify: A·A⁻¹ = I.

If at any point during row reduction you get an all-zero row on the left side, the matrix is singular — no inverse exists.

Properties: (AB)⁻¹ = B⁻¹ A⁻¹. (A^T)⁻¹ = (A⁻¹)^T. (A⁻¹)⁻¹ = A.
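The Gauss-Jordan procedure above can be sketched in Python (inverse is my own helper name; like the worked example, it takes the first usable pivot in each column):

```python
def inverse(A):
    # Gauss-Jordan: row-reduce the augmented matrix [A | I] until the
    # left half is I; the right half is then the inverse.
    n = len(A)
    M = [[float(x) for x in row] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        pivot = next((i for i in range(c, n) if abs(M[i][c]) > 1e-12), None)
        if pivot is None:
            raise ValueError("matrix is singular")
        M[c], M[pivot] = M[pivot], M[c]
        M[c] = [x / M[c][c] for x in M[c]]   # scale pivot row to a leading 1
        for i in range(n):
            if i != c:                        # clear the rest of the column
                f = M[i][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[c])]
    return [row[n:] for row in M]

print(inverse([[1, 2], [3, 4]]))  # [[-2.0, 1.0], [1.5, -0.5]]
```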

13. Cofactor Matrix & Adjugate

The minor M_{ij} is the determinant of the submatrix formed by deleting row i and column j. The cofactor C_{ij} is the signed minor:

C_{ij} = (−1)^{i+j} · M_{ij}

The cofactor matrix (or comatrix) is the matrix of all cofactors. The adjugate (classical adjoint) is the transpose of the cofactor matrix:

adj(A) = C^T, which gives the inverse via A⁻¹ = adj(A) / det(A) when det(A) ≠ 0.

The sign pattern of the cofactor matrix for a 3×3 matrix is:

[ +  −  + ]
[ −  +  − ]
[ +  −  + ]

14. Reduced Row Echelon Form (RREF)

A matrix is in RREF if: (1) all zero rows are at the bottom, (2) the leading entry (pivot) of each non-zero row is 1, (3) each pivot lies to the right of the pivot above it, and (4) all other entries in each pivot column are 0.

Step-by-Step Example

Starting matrix:

[ 2  4  2 ]
[ 1  2  3 ]
[ 3  1  1 ]

R1 ↔ R2 (swap to get a 1 in the pivot position):

[ 1  2  3 ]
[ 2  4  2 ]
[ 3  1  1 ]

R2 → R2 − 2·R1, R3 → R3 − 3·R1:

[ 1   2   3 ]
[ 0   0  -4 ]
[ 0  -5  -8 ]

R2 ↔ R3:

[ 1   2   3 ]
[ 0  -5  -8 ]
[ 0   0  -4 ]

Scale rows to get leading 1s and eliminate above each pivot to reach RREF; here the result is the 3×3 identity matrix, since the starting matrix is invertible.

💡 RREF is unique for every matrix. It is the standard form for solving linear systems, finding rank, and computing the null space.
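The whole procedure can be sketched in Python (rref is my own helper name):

```python
def rref(A, tol=1e-12):
    # Gauss-Jordan reduction: find a pivot in each column, scale it to a
    # leading 1, and zero out every other entry in that column.
    M = [[float(x) for x in row] for row in A]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        if r == rows:
            break
        pivot = next((i for i in range(r, rows) if abs(M[i][c]) > tol), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]   # leading 1
        for i in range(rows):
            if i != r:                        # zero out the column
                f = M[i][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return M

print(rref([[2, 4, 2], [1, 2, 3], [3, 1, 1]]))  # the 3x3 identity
```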

15. LU Decomposition

LU decomposition factors a square matrix as PA = LU, where P is a permutation matrix (records row swaps), L is unit lower triangular (1s on diagonal), and U is upper triangular.

Example: for A = [[2,1,1],[4,3,3],[8,7,9]], one possible factorization gives:

L (lower triangular with 1s on diagonal):

[ 1  0  0 ]
[ 2  1  0 ]
[ 4  3  1 ]

U (upper triangular):

[ 2  1  1 ]
[ 0  1  1 ]
[ 0  0  2 ]

Applications: det(A) = ±(product of diagonal of U). Solve Ax = b via forward then back substitution. Each additional right-hand side costs only O(n²) after the O(n³) factorization.
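A sketch of Doolittle's method in Python (lu is my own helper name; it omits pivoting, so it assumes the pivots stay nonzero, as they do in this example):

```python
def lu(A):
    # Doolittle factorization A = L U without row swaps:
    # build row i of U, then column i of L, one i at a time.
    A = [[float(x) for x in row] for row in A]
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):      # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):  # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

L, U = lu([[2, 1, 1], [4, 3, 3], [8, 7, 9]])
print(L)  # [[1.0, 0.0, 0.0], [2.0, 1.0, 0.0], [4.0, 3.0, 1.0]]
print(U)  # [[2.0, 1.0, 1.0], [0.0, 1.0, 1.0], [0.0, 0.0, 2.0]]
```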

16. Eigenvalues & Eigenvectors

A scalar λ is an eigenvalue of A, and nonzero vector v is the corresponding eigenvector, if Av = λv. Eigenvalues are the roots of the characteristic polynomial det(A − λI) = 0.

Worked 2×2 Example

Find eigenvalues of A = [[4,1],[2,3]].

Form A − λI:

[ 4−λ    1  ]
[  2    3−λ ]

Characteristic polynomial: (4−λ)(3−λ) − (1)(2) = λ² − 7λ + 10 = (λ − 5)(λ − 2) = 0

Eigenvalues: λ₁ = 5, λ₂ = 2.

Find eigenvector for λ₁ = 5: solve (A − 5I)v = 0, i.e., [[-1,1],[2,-2]]v = 0 → v = [1,1] (up to scale).

Find eigenvector for λ₂ = 2: solve (A − 2I)v = 0, i.e., [[2,1],[2,1]]v = 0 → v = [1,−2] (up to scale).

Key properties: sum of eigenvalues = tr(A) = 4+3 = 7 = 5+2 ✓. Product of eigenvalues = det(A) = 12−2 = 10 = 5·2 ✓.

Real symmetric matrices always have real eigenvalues. Non-symmetric matrices may have complex eigenvalues appearing as conjugate pairs.
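For the 2×2 case, the quadratic formula applied to the characteristic polynomial gives the eigenvalues directly; a sketch in Python (eig2 is my own helper name, assuming real roots):

```python
import math

def eig2(A):
    # Roots of the 2x2 characteristic polynomial
    #   lambda^2 - tr(A)*lambda + det(A) = 0
    # (assumes a non-negative discriminant, i.e. real eigenvalues).
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

print(eig2([[4, 1], [2, 3]]))  # (5.0, 2.0)
```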

17. QR Decomposition

QR decomposition factors A = QR where Q has orthonormal columns (Q^T Q = I) and R is upper triangular. Computed via Gram-Schmidt orthogonalization.

Example: for A = [[1,1],[1,0],[0,1]], the QR decomposition gives Q (3×2) with orthonormal columns and R (2×2) upper triangular:

Q =
[ 1/√2    1/√6 ]
[ 1/√2   -1/√6 ]
[   0     2/√6 ]

R =
[ √2    1/√2   ]
[  0   √(3/2)  ]

Applications: Solving least-squares problems (Ax ≈ b when m > n). QR algorithm for computing eigenvalues. Numerically more stable than Gaussian elimination.
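Classical Gram-Schmidt can be sketched in Python (qr is my own helper name; it assumes A has full column rank):

```python
import math

def qr(A):
    # Classical Gram-Schmidt on the columns of A: subtract from each column
    # its projections onto the earlier orthonormal columns, then normalize.
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]
    q_rows, R = [], [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for i, q in enumerate(q_rows):
            R[i][j] = sum(qi * ai for qi, ai in zip(q, cols[j]))
            v = [x - R[i][j] * qi for x, qi in zip(v, q)]
        R[j][j] = math.sqrt(sum(x * x for x in v))
        q_rows.append([x / R[j][j] for x in v])
    Q = [list(row) for row in zip(*q_rows)]  # back to m x n column layout
    return Q, R

Q, R = qr([[1, 1], [1, 0], [0, 1]])
```

A quick sanity check is that Q·R reproduces A and that R[0][0] = √2, matching the worked example.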

18. Singular Value Decomposition (SVD)

Every real m×n matrix A has an SVD: A = UΣV^T. In the compact form, with k = min(m, n), U (m×k) has orthonormal columns, Σ (k×k) is diagonal with non-negative singular values σ₁ ≥ σ₂ ≥ ... ≥ 0, and V (n×k) has orthonormal columns.

Singular values σᵢ = √(eigenvalues of A^T A). The number of non-zero singular values equals rank(A).

Example: A = [[1,2],[3,4],[5,6]] has singular values approximately σ₁ ≈ 9.525, σ₂ ≈ 0.514. U is 3×2, Σ is 2×2, V^T is 2×2.

Concept           Formula / Fact
Singular values   σᵢ = √(λᵢ of A^T A)
Rank              Number of non-zero σᵢ
Pseudo-inverse    A⁺ = V Σ⁺ U^T
Frobenius norm    ‖A‖_F = √(σ₁² + ... + σₖ²)

Applications: Principal Component Analysis (PCA). Image and data compression (low-rank approximation). Solving under/over-determined systems via pseudo-inverse.
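For a tall matrix with two columns, the singular values follow by hand from the eigenvalues of the 2×2 matrix A^T A, exactly as the table says (singular_values_tall is my own helper name, a sketch for this special case only):

```python
import math

def singular_values_tall(A):
    # For an m x 2 matrix: sigma_i = sqrt(eigenvalue_i of A^T A),
    # where A^T A is 2x2 so its eigenvalues come from the quadratic formula.
    ata = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(2)]
           for i in range(2)]
    tr = ata[0][0] + ata[1][1]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    disc = math.sqrt(tr * tr - 4 * det)
    return math.sqrt((tr + disc) / 2), math.sqrt((tr - disc) / 2)

s1, s2 = singular_values_tall([[1, 2], [3, 4], [5, 6]])
print(s1, s2)  # approximately 9.5255 and 0.5143
```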

19. Special Matrices

Type                Description                               Key Property
Zero matrix         All entries are 0                         A + 0 = A; 0·A = 0
Identity matrix I   1s on diagonal, 0s elsewhere              AI = IA = A
Diagonal matrix     Non-zero entries only on main diagonal    det = product of diagonal
Symmetric matrix    A = A^T (equals its transpose)            Always has real eigenvalues
Skew-symmetric      A = −A^T (diagonal entries must be 0)     Eigenvalues are purely imaginary or 0
Orthogonal matrix   A^T A = I (columns are orthonormal)       det(A) = ±1; preserves lengths
Upper triangular    All entries below diagonal are 0          det = product of diagonal entries
Lower triangular    All entries above diagonal are 0          det = product of diagonal entries
Singular matrix     det(A) = 0                                No inverse; rows/cols linearly dependent
Idempotent matrix   A² = A                                    Eigenvalues are 0 or 1

Frequently Asked Questions

What is the difference between matrix multiplication and element-wise multiplication?

Matrix multiplication (AB) computes dot products between rows of A and columns of B, requiring the inner dimensions to match. Element-wise (Hadamard) multiplication simply multiplies corresponding entries and requires both matrices to have identical dimensions.

When does a matrix have no inverse?

A matrix has no inverse (it is singular) when its determinant equals zero. This happens when the rows (or columns) are linearly dependent — one row is a linear combination of the others. Geometrically, the transformation collapses space to a lower dimension.

How do I find the determinant of a 4×4 or larger matrix?

Use cofactor expansion recursively — expand along the row or column with the most zeros to minimize arithmetic. Alternatively, row-reduce to upper triangular form (keeping track of sign changes from row swaps and factors from row scaling), then multiply the diagonal entries.

What is the relationship between eigenvalues, trace, and determinant?

For an n×n matrix: the sum of all eigenvalues (with multiplicity) equals the trace, and the product of all eigenvalues equals the determinant. This holds even for complex eigenvalues.

What does it mean for a matrix to have full rank?

A matrix has full rank when its rank equals the smaller of its dimensions — min(m,n). For a square n×n matrix, full rank means rank = n, which is equivalent to the matrix being invertible (det ≠ 0) and having no linearly dependent rows or columns.

Why is matrix multiplication not commutative?

The (i,j) entry of AB involves the i-th row of A and j-th column of B, while BA involves different pairings. Geometrically, applying transformation A then B is generally different from applying B then A. Often BA is not even defined when AB is.

What is the pseudo-inverse and when do I need it?

The Moore-Penrose pseudo-inverse A⁺ generalizes the inverse to non-square and singular matrices. It is computed via SVD as A⁺ = V Σ⁺ U^T. Use it when solving over-determined (more equations than unknowns) or under-determined systems in the least-squares sense.

What is the difference between LU and QR decomposition?

LU decomposition (PA = LU) is optimized for square systems — solving Ax = b efficiently and computing determinants. QR decomposition (A = QR) is more numerically stable and is the go-to method for least-squares problems with rectangular matrices. QR is also the basis of the QR eigenvalue algorithm.

How do singular values relate to eigenvalues?

Singular values of A are the square roots of the eigenvalues of A^T A (or AA^T). They are always real and non-negative. For symmetric positive definite matrices, singular values equal eigenvalues. In general, singular values and eigenvalues of A are different.

What is the rank-nullity theorem?

For an m×n matrix A: rank(A) + nullity(A) = n (the number of columns). The nullity is the dimension of the null space — the set of all vectors x satisfying Ax = 0. The theorem means every column not containing a pivot corresponds to a free variable in the solution.