Was attending a workshop on Linear Algebra, and one of the lectures was by Dr. C. R. Pradeep. It was *supposed* to be on positive-definite matrices (whatever that might be!), but finding that no one really understood what a matrix was or stood for, it became a geometry class, to everyone’s benefit.

The standard way to look at a matrix is that it is a set of numbers arranged in some order between some brackets, and that matrices can be added with some effort and multiplied in a completely obscure manner. This much one learns by the time one leaves Pre-University, and it does not help one bit in appreciating the whats and the whys behind the whole thing.

The best way to start off is with an example. Consider the matrix with all its entries equal to 1 multiplying a general vector:

[ 1  1 ] [ x ]     [ x + y ]
[ 1  1 ] [ y ]  =  [ x + y ]

If we take the general vector to be some point on the plane, then note that every point on the right hand side of the above equation has its x-coordinate equal to its y-coordinate. Therefore, if we think of this matrix as a machine that takes in all vectors in the plane and spits out some other vectors also in the plane, then things begin to look very nice.

This is because, if we start feeding this machine all the points on the plane, what we will get as a result is all the points whose coordinates are equal. From elementary geometry, this is a line passing through the origin at an angle of 45 degrees to both the coordinate axes:
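A quick sketch of this "machine" in Python (assuming the example matrix is the all-ones 2×2 matrix, which matches the behaviour described here):

```python
# Assumed example matrix: all entries equal to 1.
A = [[1, 1],
     [1, 1]]

def apply(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# Feed the machine a few points on the plane...
for point in [[3, 0], [0, 3], [2, 5], [-1, 4]]:
    out = apply(A, point)
    # ...and every output has equal coordinates, so it lies
    # on the 45-degree line through the origin.
    print(point, "->", out)
```

Whatever point goes in, the two output coordinates are both x + y, which is why the whole plane lands on that single line.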

How is it managing to do this? Consider two special cases: the vectors (0, 1) and (1, 0). These are the well-known unit vectors on the plane, representing the y axis and the x axis respectively. Substituting these values in the equation, we see that they are both sent to the same point! Therefore, this matrix is collapsing the plane onto a single line, very much like closing a Chinese hand-held paper fan. We can write any point on the plane in terms of the unit vectors, and similarly, we can write any point on the line using the columns of the matrix. In this case the columns are the same, so it is a trivial relationship. But in general, the columns of the matrix are such that any point at the output can be written uniquely in terms of them.

Therefore, this matrix seems to be taking as its input a 2-dimensional ‘space’, i.e., the entire plane, and giving back a 1-dimensional ‘space’ – a line through the origin. Another interesting thing to note is that the points (1, 0) and (0, 1) both end up at the same point (1, 1). This means that given a point at the output, we would not know which point on the plane it came from, i.e., the inverse is not well-defined. (In fact, given any point at the output, there are uncountably many points which could have produced it.) When we say that a matrix is singular or non-invertible, this is what we mean. We normally check this by finding the determinant of the matrix (which is zero in the singular case), but that does not appeal much to intuition.
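This collapse is easy to see in code (again assuming the all-ones matrix as the example): two different inputs produce the same output, so no inverse can exist, and the determinant agrees.

```python
# Assumed example matrix: all entries equal to 1.
A = [[1, 1],
     [1, 1]]

def apply(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# Two different points collapse to the same output, so the machine
# cannot be run backwards:
print(apply(A, [1, 0]))  # [1, 1]
print(apply(A, [0, 1]))  # [1, 1]

# The usual determinant test confirms it: ad - bc = 1*1 - 1*1 = 0.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
print(det)  # 0
```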

To generalise this, any matrix that takes a higher dimensional space to a lower dimensional space is not invertible. The natural next question to ask is: if a matrix keeps the dimension of the output space equal to that of the input, is it invertible? This can in fact be proved to be true, and it can be taken as a simple interpretation of the Rank-Nullity Theorem.
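For our example (assuming the all-ones matrix), the Rank-Nullity Theorem can be checked by hand: the output is a 1-dimensional line (rank 1), and the directions that get collapsed to the origin form another line, y = -x (nullity 1), and 1 + 1 = 2, the dimension of the input plane.

```python
# Assumed example matrix: all entries equal to 1 (rank 1).
A = [[1, 1],
     [1, 1]]

def apply(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# Every point on the line y = -x is sent to the origin: the null space.
for t in [1, 2, -3]:
    assert apply(A, [t, -t]) == [0, 0]

# rank (dimension of the output line) + nullity (dimension of the null space)
# = 1 + 1 = 2 = dimension of the input plane.
print("rank 1 + nullity 1 = 2")
```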

Going back to our example, what if the point/vector at the input was already on the line? For example, the point (1, 1) would end up at (2, 2), which is the same as 2 × (1, 1). This is consistent with our Chinese fan picture: the line in the middle of the fan does not really move when it is closed (it does not get elongated either, but that is a special case of this). In Linear Algebra terminology, such points/vectors are called eigenvectors, and the values by which they are multiplied are called the eigenvalues of the matrix. In the general case, the axes do not collapse into each other, but maintain some angle between them. Even then, there will be some set of points/vectors whose directions do not change, and these are called the eigenvectors.
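The eigenvector claim is a one-liner to verify (assuming the all-ones example matrix): (1, 1) gets scaled by 2, and the perpendicular direction (1, -1) gets scaled by 0 – that is the direction the fan collapses along.

```python
# Assumed example matrix: all entries equal to 1.
A = [[1, 1],
     [1, 1]]

def apply(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# (1, 1) is an eigenvector with eigenvalue 2: the output is 2 times the input.
print(apply(A, [1, 1]))   # [2, 2]

# (1, -1) is also an eigenvector, with eigenvalue 0: it is sent to the origin.
print(apply(A, [1, -1]))  # [0, 0]
```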

Almost all the basic concepts of Linear algebra can be interpreted in this geometric manner. The heart of this whole discussion is the concept of linear transformations, which are represented by matrices for convenience and analysis.

Linear Algebra is an interesting subject, and on this is built almost all of engineering!!

I have some very basic questions pertaining to this subject:

1. What is the physical significance of a determinant? Geometrically, what does the determinant actually imply?

2. What does matrix multiplication imply geometrically? I have tried to picture multiplication as the area of a rectangle or a square whose dimensions are the numbers in question. E.g., 2 × 3 is the area of a rectangle having sides 2 and 3.

For all those who are interested, Prof. Strang’s book Linear Algebra and Its Applications and his course videos on the same subject are very informative and interesting.

Somewhat long, apologies!!

Answers:

1. For 2×2 matrices, the determinant is the area of the parallelogram determined by the columns of the matrix.

For 3×3, it is the volume of the parallelepiped whose sides are the columns of the matrix.

Generally, it is a generalization of volume to n dimensions.

Now, how to interpret singular matrices geometrically: singular means some column can be written as a linear combination of the other columns. For 2×2, this means that one column is a scalar multiple of the other, meaning both lie on a line. Now, a line has zero area => area of the parallelogram = 0 => determinant is zero.

For 3×3, singular means that the columns represent vectors that are coplanar. Now, a plane has zero volume => determinant is zero.

Now you probably see that in n dimensions, a singular matrix will have columns which lie in a subspace of dimension less than n => determinant is zero.
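The area picture is easy to check for 2×2 matrices; a minimal sketch (the matrices below are my own made-up illustrations):

```python
def det2(M):
    """Determinant of a 2x2 matrix: ad - bc."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# Columns (2, 0) and (0, 3) span a 2-by-3 rectangle: area 6.
print(det2([[2, 0], [0, 3]]))  # 6

# A shear: columns (1, 0) and (1, 1) span a slanted parallelogram
# with base 1 and height 1, so the area is still 1.
print(det2([[1, 1], [0, 1]]))  # 1

# Linearly dependent columns (1, 2) and (2, 4) lie on one line:
# zero area, zero determinant, singular matrix.
print(det2([[1, 2], [2, 4]]))  # 0
```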

2. This is a question that I meant to address, but did not since it requires more space :) Matrix multiplication is done in this way because every matrix represents a linear transformation, and the product of two matrices must act on a vector the same way as applying the two transformations one after the other. Was planning to write a more comprehensive set of notes and post it as a pdf, do keep checking once in a while!!
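A sketch of that idea (the rotation and scaling matrices below are my own illustrative choices): multiplying the matrices first and then applying the product gives the same answer as applying the two transformations one after the other, which is exactly what the definition of matrix multiplication is built to guarantee.

```python
def apply(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def matmul(A, B):
    """2x2 matrix product, defined so that (A B) v = A (B v)."""
    return [[A[0][0] * B[0][0] + A[0][1] * B[1][0],
             A[0][0] * B[0][1] + A[0][1] * B[1][1]],
            [A[1][0] * B[0][0] + A[1][1] * B[1][0],
             A[1][0] * B[0][1] + A[1][1] * B[1][1]]]

R = [[0, -1], [1, 0]]   # rotate 90 degrees counter-clockwise
S = [[2, 0], [0, 2]]    # scale everything by 2
v = [1, 0]

# Applying S, then R, one transformation at a time...
print(apply(R, apply(S, v)))   # [0, 2]

# ...gives the same result as applying the single product matrix R S:
print(apply(matmul(R, S), v))  # [0, 2]
```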

Yup, Strang’s book is very good. Would recommend it to anyone myself.

About 1: OK, that makes sense. A matrix with linearly dependent columns would mean that the null space contains more than just the zero vector, and therefore the matrix is not invertible. I guess what you are saying is also the same thing. We used to calculate the inverse as the adjugate (classical adjoint) of the matrix divided by its determinant. So the determinant being zero would imply the matrix is not invertible. Cool :)

About 2: Will be eagerly waiting for those comprehensive notes.

No problems with long replies :)

http://www-math.mit.edu/18.013A/MathML/chapter04/section01.xhtml

Check this out as well.