Antisymmetric matrix (or skew-symmetric matrix)

In this post we explain what antisymmetric matrices are. In addition, you will see several examples, together with the typical form of these matrices, to understand the concept perfectly. We also explain what is special about the determinant of an antisymmetric matrix, as well as all the properties of this type of matrix. Finally, you will see how to decompose any square matrix into the sum of a symmetric matrix and an antisymmetric matrix.

What is an antisymmetric matrix?

The definition of antisymmetric matrix is as follows:

An antisymmetric matrix is a square matrix whose transpose is equal to its negative.

 A^T = -A

Where A^T represents the transpose of matrix A and -A is matrix A with the sign of every element changed.

See: definition of transpose of a matrix.

In mathematics, antisymmetric matrices are also called skew-symmetric or antimetric matrices.
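The definition A^T = -A can be checked numerically. Here is a minimal Python (NumPy) sketch; the helper name `is_antisymmetric` is chosen for this illustration and is not a NumPy function:

```python
import numpy as np

# A sample skew-symmetric matrix: zeros on the diagonal, mirrored signs
A = np.array([[0.0, 2.0, -1.0],
              [-2.0, 0.0, 4.0],
              [1.0, -4.0, 0.0]])

def is_antisymmetric(M, tol=1e-12):
    """Return True when M^T equals -M (within floating-point tolerance)."""
    return np.allclose(M.T, -M, atol=tol)

print(is_antisymmetric(A))          # True
print(is_antisymmetric(np.eye(3)))  # False: the identity is symmetric
```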

Examples of antisymmetric matrices

Once we know the meaning of antisymmetric matrix, let’s see several examples of antisymmetric matrices to fully understand the concept:

Example of an antisymmetric matrix of order 2

 \displaystyle A = \begin{pmatrix} 0 & 2 \\[1.1ex] -2 & 0 \end{pmatrix}

Example of 3×3 dimension antisymmetric matrix

 \displaystyle B = \begin{pmatrix} 0 & 1 & -3 \\[1.1ex] -1 & 0 & 4 \\[1.1ex] 3 & -4 & 0 \end{pmatrix}

Example of 4×4 size antisymmetric matrix

 \displaystyle C = \begin{pmatrix} 0 & 2 & -1 & 5 \\[1.1ex] -2 & 0 & 3 & -4 \\[1.1ex] 1 & -3 & 0 & 6 \\[1.1ex] -5 & 4 & -6 & 0 \end{pmatrix}

Transposing matrices A, B and C shows that they are antisymmetric (or skew-symmetric), because each transposed matrix is equal to its original matrix with the sign changed.

Form of an antisymmetric matrix

For the antisymmetric condition to be fulfilled, these matrices always have the same form: every entry on the main diagonal is zero, and the element in row i, column j is the negative of the element in row j, column i. That is, antisymmetric matrices have the following shape:

  \displaystyle \begin{pmatrix} 0 & a & b & \cdots & c \\[1.1ex] -a & 0 & d & \cdots & e \\[1.1ex] -b & -d & 0 & \cdots & f \\[1.1ex] \vdots & \vdots & \vdots & \ddots & \vdots \\[1.1ex] -c & -e & -f & \cdots & 0 \end{pmatrix}

Therefore, the main diagonal of an antisymmetric matrix acts as axis of antisymmetry. And this is where the name of this peculiar matrix comes from.
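One way to see this form in practice is to build an antisymmetric matrix from just its entries above the main diagonal. The sketch below uses a hypothetical helper, `skew_from_upper`, written for this example; it relies on the identity that M - M^T is always antisymmetric:

```python
import numpy as np

def skew_from_upper(n, upper_entries):
    """Build an n x n skew-symmetric matrix from its strictly
    upper-triangular entries; the diagonal is forced to zero."""
    M = np.zeros((n, n))
    iu = np.triu_indices(n, k=1)   # positions strictly above the diagonal
    M[iu] = upper_entries
    return M - M.T                 # mirror below with the opposite sign

M = skew_from_upper(3, [1.0, 2.0, 3.0])
print(M)
print(np.allclose(M.T, -M))  # True
```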

Determinant of an antisymmetric matrix

The determinant of an antisymmetric matrix depends on the dimension of the matrix. This follows from the properties of determinants:

 \displaystyle \text{det}(A)=\text{det}\left(A^T\right)=\text{det}(-A)=(-1)^n \text{det}(A)

So if the antisymmetric matrix is of odd order, the equation reads det(A) = -det(A), which forces its determinant to be 0. But if the antisymmetric matrix is of even dimension, the equation imposes no restriction; in fact, the determinant of a real antisymmetric matrix of even order is the square of its Pfaffian, so it can take any nonnegative value.

Therefore, an odd-dimensional antisymmetric matrix is always a non-invertible matrix. In contrast, an antisymmetric matrix of even order may or may not be invertible, depending on whether its determinant is zero.
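A quick numerical sketch of this dimension dependence, using the fact that M - M^T is antisymmetric for any square M (the function `random_skew` is a name invented for this example):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_skew(n):
    """Generate a random n x n real antisymmetric matrix."""
    M = rng.standard_normal((n, n))
    return M - M.T   # M - M^T is always antisymmetric

# Odd order: the determinant is (numerically) zero
print(np.linalg.det(random_skew(3)))

# Even order: the determinant is the square of the Pfaffian,
# so for a real antisymmetric matrix it is never negative
print(np.linalg.det(random_skew(4)))
```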

Properties of antisymmetric matrices

The characteristics of the antisymmetric matrices are as follows:

  • The addition (or subtraction) of two antisymmetric matrices results in another antisymmetric matrix, since the transpose of a sum is the sum of the transposes:

 \displaystyle \left(A+B\right)^T=A^T+B^T=-A-B=-\left(A+B\right)

  • Any antisymmetric matrix multiplied by a scalar also results in another antisymmetric matrix.
  • The power of an antisymmetric matrix is either a symmetric or an antisymmetric matrix: if the exponent is an even number, the result of the power is a symmetric matrix; if the exponent is an odd number, the result is an antisymmetric matrix.
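The power property is easy to verify numerically. A minimal NumPy check on a 2×2 example:

```python
import numpy as np

A = np.array([[0.0, 3.0],
              [-3.0, 0.0]])

A2 = np.linalg.matrix_power(A, 2)  # even exponent
A3 = np.linalg.matrix_power(A, 3)  # odd exponent

print(np.allclose(A2.T, A2))    # True: A^2 is symmetric
print(np.allclose(A3.T, -A3))   # True: A^3 is antisymmetric
```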

See: what is a symmetric matrix?

  • The trace of an antisymmetric matrix is always equal to zero.
  • The sum of any antisymmetric matrix plus the identity matrix results in an invertible matrix:

 \displaystyle \text{det} (A + I) \neq 0

  • Every eigenvalue of a real antisymmetric matrix is either zero or purely imaginary. In particular, the only real eigenvalue an antisymmetric matrix can have is 0.

See: properties of eigenvalues.

  • All antisymmetric matrices are normal matrices. Therefore, the spectral theorem applies to them: every antisymmetric matrix is diagonalizable by a unitary matrix.

See: how to diagonalize a matrix.
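Several of the properties above (purely imaginary eigenvalues, zero trace, invertibility of A + I) can be checked together in one short NumPy sketch:

```python
import numpy as np

A = np.array([[0.0, 2.0],
              [-2.0, 0.0]])

vals = np.linalg.eigvals(A)
print(vals)                          # purely imaginary: +2i and -2i
print(np.allclose(vals.real, 0.0))   # True: no nonzero real part
print(np.trace(A))                   # 0.0: the trace is always zero
print(np.linalg.det(A + np.eye(2)))  # nonzero, so A + I is invertible
```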

Decomposition of a square matrix into a symmetric and an antisymmetric matrix

A peculiarity that square matrices have is that they can be decomposed into the sum of a symmetric matrix plus an antisymmetric matrix.

The formula that allows us to do it is the following:

 \displaystyle \begin{array}{c} C = S + A \\[2ex] S = \cfrac{1}{2} \cdot \left(C + C^T\right) \qquad A = \cfrac{1}{2} \cdot \left(C-C^T\right) \end{array}

Where C is the square matrix that we want to decompose, C^T is its transpose, and S and A are, respectively, the symmetric and antisymmetric matrices into which matrix C is decomposed.

Below you have a solved exercise to prove the above formula. We are going to decompose the following matrix:

 \displaystyle C = \begin{pmatrix} 1 & 5 \\[1.1ex] -3 & 2 \end{pmatrix}

We calculate the symmetric and antisymmetric matrices with the formulas:

 \displaystyle S = \cfrac{1}{2} \cdot \left(C+C^T\right) = \begin{pmatrix} 1 & 1 \\[1.1ex] 1 & 2 \end{pmatrix}

 \displaystyle A = \cfrac{1}{2} \cdot \left(C-C^T\right) = \begin{pmatrix} 0 & 4 \\[1.1ex] -4 & 0 \end{pmatrix}

And we can check that the equation is satisfied by adding both matrices:

 \displaystyle C = S + A \quad ?

 \displaystyle \begin{pmatrix} 1 & 1 \\[1.1ex] 1 & 2 \end{pmatrix} + \begin{pmatrix} 0 & 4 \\[1.1ex] -4 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 5 \\[1.1ex] -3 & 2 \end{pmatrix}

 \displaystyle C = S + A
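The same decomposition is a one-liner in NumPy. This sketch reproduces the worked exercise above:

```python
import numpy as np

C = np.array([[1.0, 5.0],
              [-3.0, 2.0]])

S = 0.5 * (C + C.T)   # symmetric part
A = 0.5 * (C - C.T)   # antisymmetric part

print(S)                      # [[1, 1], [1, 2]]
print(A)                      # [[0, 4], [-4, 0]]
print(np.allclose(C, S + A))  # True: C = S + A
```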
