Please send comments or report wrong links, typos, and other problems to
wsw@math.jhu.edu.
For the period of this grant, most of the videos will be on
linear algebra with the hope of building up a library of linear algebra videos
that will help both our regular linear algebra course and our honors linear
algebra course. In addition, we hope the videos will be useful for those who
need to review the material for other courses they are taking.
By the end, we hope to have videos that cover much of the course. We intend to
focus mostly on computational examples of the material covered rather than
full-fledged lecture-style videos.
We intend to document what is in the videos here on the website so that this
page is searchable and topics of interest can be found quickly.
*********************************************************************************************************************************************************
Table of Contents:
*********************************************************************************************************************************************************
One of the best places to find good computational examples
is in old exams, because many well thought out problems have been produced for
the purpose of exams.
We start with the linear
algebra exams from Spring 2003.
The first midterm was a little unusual because it consisted of 45 true/false questions.
There are 5 videos, one for each page of the exam.
All of the questions on the exam are basic facts (or non-facts) that everyone
should know. Everything can be answered quickly and easily without computation
except problem 35 in part4. That problem requires multiplying a 2x2 matrix by
itself 3 times, but it is a very simple matrix.
Although Exam 1 certainly tests some basic understanding, it does not test
knowledge of how to compute.
Exam 2 moves on to another level of abstraction,
one that students often find confusing. We can often add more structure to a
vector space by putting an inner product on it. This is complicated enough when
we work just in Euclidean space, but we can have inner products in more abstract
places. Inner products allow us to talk about the distance between two vectors
in a vector space.
The series of problems at the beginning of this exam is about function spaces,
in particular, the continuous functions on the interval [-1,1]. Inside it we
look at the polynomials of degree less than or equal to 3, which form a
subspace. We put some more restrictions on them, and question 1 asks us to show
the result is still a subspace, i.e. a vector space. Problem 2 asks us to
compute a basis. These are good solid, but basic, computations. They just
happen to take place at a level of abstraction that students often find
difficult. Consequently, the problems are worth studying. Problems 1 and 2 are
covered in videos part1 and part1a.
Videos parts 2, 3 and 4 cover problems 3-10, which are all about finding a linear approximation to
x squared on the interval [0,1]. This involves finding an orthogonal basis, an
orthonormal basis, a projection, the matrix for the projection, a kernel (null
space), a basis for the kernel, etc. Really understanding this sequence of
problems would go a long way in the course. Video
part4a is a "cultural aside" where we show how to get the linear
approximation to x-squared using multi-variable calculus instead of orthogonal
projections. We get the same answer.
It is this sequence of problems that led to the
video sequence on least squares below.
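For readers who want to check the part4a calculation themselves, here is a
minimal sympy sketch (mine, not taken from the videos) that minimizes the
integral of (x^2 - (a + bx))^2 over [0,1] with calculus:

from sympy import symbols, integrate, solve

x, a, b = symbols('x a b')
# Squared L2 error between x^2 and the line a + b*x on [0, 1]:
err = integrate((x**2 - (a + b*x))**2, (x, 0, 1))
# Minimize with calculus: set both partial derivatives to zero and solve.
print(solve([err.diff(a), err.diff(b)], [a, b]))   # {a: -1/6, b: 1}

So the minimizing line is x - 1/6, the same answer the orthogonal projection
gives.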
The rest of the exam consists of basic questions about determinants, some
definitions, and some really elementary computations.
The final exam for the spring of 2003 starts
off easy with some basic definitions and statements of facts. The only
interesting computation in
part1 is problem 5, finding a basis for the orthogonal complement of a given
subspace.
Problems 6 through 19 of the final exam are all about a given 3x3 matrix. You
have to find the trace, the determinant, solve the homogeneous equation it
gives, solve a non-homogeneous equation, find a basis for the kernel (null
space) of the linear transformation the matrix defines, find a basis for the row
space of the matrix, find the rank of the matrix, find a basis for the image of
the linear transformation the matrix gives, find the characteristic polynomial,
compute the eigenvalues and the eigenvectors, find the matrix in the new basis,
find the change of basis matrix, and find its inverse. All of this is covered in
the videos
part2,
part3, and
part4.
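As a rough illustration of the bookkeeping these problems require, here is a
sympy sketch that runs the same checklist on a hypothetical symmetric 3x3
matrix (the exam's actual matrix is in the pdf linked below):

from sympy import Matrix

A = Matrix([[1, 1, 0],         # hypothetical matrix, chosen singular like a
            [1, 1, 0],         # typical exam matrix with a nontrivial kernel
            [0, 0, 1]])
print(A.trace(), A.det())      # trace and determinant
print(A.rref()[0])             # reduced row echelon form
print(A.nullspace())           # basis for the kernel: here spanned by (1, -1, 0)
print(A.columnspace())         # basis for the image
print(A.charpoly().as_expr())  # characteristic polynomial
print(A.eigenvects())          # eigenvalues with their eigenvectors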
Problems 20-27 are similar (or identical) to those on Exam 2 for function spaces
and inner products. (part5)
The only new and interesting thing on this video is problem 28. This is just a
simple set of 2 equations in 2 unknowns that is inconsistent. The goal here is
to find all of the least squares solutions. This is a good computational problem
and well worth knowing.
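Here is a minimal numpy sketch of that kind of computation on a hypothetical
inconsistent system (not the exam's); the least squares solutions are exactly
the solutions of the normal equations A^T A x = A^T b:

import numpy as np

A = np.array([[1., 1.],     # hypothetical inconsistent system: x + y = 1
              [1., 1.]])    #                                   x + y = 3
b = np.array([1., 3.])
print(A.T @ A, A.T @ b)     # normal equations A^T A x = A^T b, here 2x + 2y = 4
xhat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(xhat)                 # [1. 1.]: the minimum-norm point on the line x + y = 2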
Video part6 of the final exam starts with some basic questions and then moves on
to a sequence, problems 31-34, that analyzes a particular quadratic form,
including finding the principal axes and what the form becomes in a new
coordinate system.
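For a flavor of the principal-axes computation, here is a small numpy sketch
on a hypothetical quadratic form (not the exam's):

import numpy as np

# Hypothetical quadratic form q(x, y) = 3x^2 + 2xy + 3y^2; the cross term is
# split between the off-diagonal entries of its symmetric matrix.
A = np.array([[3., 1.],
              [1., 3.]])
evals, evecs = np.linalg.eigh(A)    # eigh is for symmetric matrices
print(evals)                        # [2. 4.]: q = 2u^2 + 4v^2 in new coordinates
print(evecs)                        # columns are the principal axes (orthonormal)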
Video part7, the last one of the final exam, again starts off with some basics,
but the last 3 problems of the exam, 38-40, require some serious computations
involving one of the many ways you can analyze the matrix of a linear
transformation.
*********************************************************************************************************************************************************
Linear Algebra Exams from Fall 2003
In keeping with the idea of finding good problems to work, we look at exams from
the Fall of 2003.
For the first midterm, Exam 1, we have 4 videos.
Video 1: The first 3
pages (6 problems) of the exam are about projecting 3-space down to a given
line. Only some of the questions require computations. Others just require
understanding what is going on and using computations or knowledge that has come
before.
In particular, we calculate a basis for the image, a basis for the kernel, the
3x3 matrix for the linear transformation, a "basis for the orthogonal complement
of the kernel" (either a trick question or one for conceptual understanding),
and, similarly, "a basis for the orthogonal complement of the image," and,
finally, the rank of the transformation.
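Assuming the projection in question is the orthogonal projection onto the line
(check the exam pdf), a minimal numpy sketch with a hypothetical line looks
like this:

import numpy as np

v = np.array([1., 2., 2.])          # hypothetical direction vector for the line
P = np.outer(v, v) / (v @ v)        # 3x3 matrix of orthogonal projection onto span{v}
print(np.linalg.matrix_rank(P))     # 1: the image is the line, so {v} is a basis for it
w = np.array([2., -1., 0.])         # w is perpendicular to v
print(P @ w)                        # ~[0 0 0]: the kernel is the plane perpendicular to v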
Video 2: This is the next
2 pages (4 problems) of the exam. The first 3 questions are about a specific
linear transformation from 2-space to 3-space. We have to find a matrix for the
map, a basis for the image, and the rank of the transformation. Finally, there
is a simple change of basis problem. It requires no work if you know what you
are doing.
Video 3: Here we work
with one 4x4 matrix. We compute its reduced row echelon form (rref), a basis for
the kernel, a basis for the image, and the rank of the matrix. All basic
straightforward computations.
Video 4: The last four
problems of the exam are intertwined. A simple linear transformation on 2-space
is given and we have to compute the matrix for it. Easy. Then we have to compute
the inverse of a 2x2 matrix which is relevant in the next problem where we have
to find the matrix of our linear transformation with respect to a new basis.
Finally, we need to compute the coordinates of the image of a vector, but the
coordinates are those from the new basis. All basic computations.
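Here is a small numpy sketch of the change of basis computation with
hypothetical data (the exam's matrix and basis are in the pdf):

import numpy as np

A = np.array([[1., 2.],             # hypothetical matrix of T in the standard basis
              [0., 3.]])
S = np.array([[1., 1.],             # columns of S are the new basis vectors
              [1., -1.]])
B = np.linalg.inv(S) @ A @ S        # matrix of T in the new basis: B = S^(-1) A S
print(B)
c = np.array([1., 0.])              # coordinates of a vector in the new basis
print(B @ c)                        # coordinates of its image, also in the new basis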
Exam 2 comes along deeper into the linear
algebra course, in particular, after abstract inner product spaces have been
introduced. So, rather than just work with Euclidean space and the standard
inner product, we can work with more exotic spaces here.
Video 1: We work with an
inner product on 2x2 matrices. We have to be able to compute the length of a
vector, i.e. a 2x2 matrix, and find an orthonormal basis for the subspace
spanned by two given matrices.
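A common inner product on 2x2 matrices is <A, B> = trace(A^T B); assuming that
is the one used on the exam (check the pdf), here is a minimal numpy sketch of
both computations with hypothetical matrices:

import numpy as np

def ip(A, B):
    # the trace inner product on matrices (an assumption; see the exam pdf)
    return np.trace(A.T @ B)

A = np.array([[1., 2.], [3., 4.]])   # hypothetical matrices
B = np.array([[0., 1.], [1., 0.]])
print(np.sqrt(ip(A, A)))             # length of A: sqrt(1+4+9+16) = sqrt(30)
# Gram-Schmidt on {A, B} with respect to this inner product:
u1 = A / np.sqrt(ip(A, A))
w = B - ip(B, u1) * u1
u2 = w / np.sqrt(ip(w, w))
print(ip(u1, u2))                    # ~0: {u1, u2} is an orthonormal pair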
Video 2: We find the
orthogonal projection of a given matrix onto the subspace found in video 1.
Videos 3 and
4: There is a sequence of
6 problems that use degree 2 or less polynomials on the interval [-1,1] with the
usual inner product given by integration. We need to know what that usual inner
product is and then find an orthonormal basis for degree less than or equal to 1
polynomials. We then find the orthogonal projection of x-squared to this, i.e.
find the linear approximation. We extend our orthonormal basis to all of degree
less than or equal to 2 polynomials and find the matrix for the orthogonal
projection with respect to both the new orthonormal basis and with respect to
the standard basis of monomials.
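Here is a small sympy sketch (mine, not the videos') of that projection
computation, using the integration inner product on [-1,1] described above:

from sympy import symbols, integrate, sqrt

x = symbols('x')
ip = lambda f, g: integrate(f * g, (x, -1, 1))   # the integration inner product

e1 = 1 / sqrt(ip(1, 1))          # normalize the constant 1
e2 = x / sqrt(ip(x, x))          # x is already orthogonal to 1 here, so just normalize
# Orthogonal projection of x^2 onto the degree <= 1 polynomials:
print(ip(x**2, e1) * e1 + ip(x**2, e2) * e2)     # 1/3, the linear approximation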
Video 5: Some basic
problems about determinants are solved, such as finding the volume of a
parallelepiped. We need the definition of an orthogonal matrix and what we can
say about its columns. We need to know what its inverse is as well, and also
its determinant. These are all basic definitions. Then we need to know some
basics about determinants.
Video 6: For the last
problem on the test we need to compute an orthonormal basis for the orthogonal
complement of a one-dimensional subspace of 3-space. This is a basic
computation.
The Final
exam seems to be endless. It is 28 pages and has 46 problems. It really
helps to know things cold.
The exam is worked in 13 videos and then there are two additional videos that
develop the math for a few of the problems that might not be in everyone's
linear algebra course.
(1) Part 1
The very first problem introduces a matrix that sticks with the final exam for
well over half the problems. Because of this it is important to get the early
problems right so you don't have to rework things later.
This matrix is symmetric (and real), so problem 1 just asks for the definition
of a symmetric matrix and problem 2 asks the definition of orthogonally
diagonalizable. These are basic definitions students should know.
In problem 3 we compute the rank of the matrix, and in problem 4 we ask for the
definition of rank.
Problem 5 has us compute the square of the matrix, i.e. multiply it by itself.
We do this because we need it later, and it also tests knowledge of matrix
multiplication.
We then compute the trace of the matrix (problem 6) and the determinant (problem
7).
Then we solve the equation Ax=0 for our matrix (problem 8).
(2) Part 2
asks for the least squares solutions to Ax=b for our A and a given b (problem
9), a basis for the kernel of A (problem 10), and a basis for the image of A
(problem 11). We are asked to find the characteristic polynomial for A (problem
12) and the eigenvalues of A (problem 13).
(3) Part 3
starts with finding the eigenvectors of A (problem 14). Problem 15 just asks
for the diagonal matrix for this linear transformation using the basis of
eigenvectors. We then have to find the change of basis matrix for this (problem
16) and its inverse (problem 17).
(4) Part 4
starts by orthogonally diagonalizing A, i.e. giving an orthonormal basis of
eigenvectors (problem 18). We then need the new change of basis matrix (problem
19) and its inverse (problem 20).
(5) Part 5
starts with a 2x3 matrix, and we analyze this linear transformation in a special
way (problems 21-23). This is not always done in linear algebra courses, so the
method is developed in the first appendix to this final. The matrix A shows up
again.
(6) Part 6
finds the singular value decomposition of the 2x3 matrix above (problem 24). The
math for this is developed in appendix 2.
(7) Part 7
analyzes a quadratic form (problems 25-31). Our matrix A comes up again. In the
process we study a quadratic surface.
(8) Part 8
does some elementary stuff with determinants (problems 32-34).
(9) Part 9
moves on to inner products on function spaces. This starts with finding a basis
for the orthogonal complement of degree 1 (or less) polynomials in the space of
polynomials of degree 3 or less (problem 35).
(10) Part 10
finds an orthonormal basis for the degree 1 or less polynomials in this function
space (problem 36).
(11) Part 11
finds the matrix for orthogonal projection to the degree 1 or less polynomials
from the degree 3 or less polynomials (problem 37).
(12) Part 12
continues the study of the last 3 videos. We find a matrix for the orthogonal
projections using a different basis (problem 38) and then project a particular
polynomial (problem 39). We find the least squares linear approximation to
x-cubed on our interval (problem 40) and finally, show what integral is
minimized (problem 41).
(13) Part 13
is the final video of this endless final exam. This is a sequence of problems
(42-46) about a more abstract linear transformation. If you understand this
sequence, you are probably in really good shape.
Appendix 1
shows how the decomposition used in problems 21-23 is done theoretically.
Appendix 2
proves the singular value decomposition theorem used in problem 24.
*********************************************************************************************************************************************************
Linear Algebra Exams from Spring 2004
Now we look at exams from Spring of 2004. Again these contain two midterms
and the final exam.
For the first midterm, Exam 1, we have 5 videos.
Video 1:
The first 9 problems of the exam are about an orthogonal projection of 2-space
down to a given line. The first video goes over the first 3 problems. Problems
1 and 2 (finding a basis for the image and a basis for the kernel) require basic
understanding of orthogonal projection and little computation. Problem 3 is the
standard problem of finding the matrix of a linear transformation. The video
gives an answer based on the formula for projections, and also shows how to
derive the matrix from scratch, which is very helpful.
Video 2:
Discusses the next 4 problems (problems 4 to 7) of the exam. These problems
concentrate on finding the matrix of the orthogonal projection with respect to a
new basis. Again there are two ways: use the formula in the book or figure it
out by inspection. Guess which way is faster in this case. Along the way there
are problems about checking whether two vectors give a basis, writing down the
matrix corresponding to the basis (testing understanding of the definition), as
well as finding the inverse of a 2x2 matrix.
Video 3:
Finishes the sequence of problems (problems 8 and 9) by rewriting the bases of
the image and the kernel with respect to the new basis. These are more about
notions than computation.
Video 4:
The next sequence of problems deals with a particular 3x3 matrix. Video 4
(problems 10 to 15) discusses some simple computations (reduced row echelon
form, rank of A, bases of the image and kernel, rank of A^2, etc.).
Video 5:
Deals with problems 16 to 19. It involves computing bases of
the image and the kernel of A^2, then finishes the exam by studying
the intersection of the image and the kernel.
For the second midterm, Exam 2, we have 8
videos.
Video 1:
The exam starts with the space P3, polynomials of degree less than or
equal to 3. With certain restrictions, it is easy to show that the polynomials
satisfying the restrictions form a subspace (problem 1). It is not so trivial to
find a basis for this subspace (problem 2). The video shows two ways of solving
the problem, provides two bases, and checks that each generates the other.
Video 2:
Discusses problems 3 to 5, using the inner product given by integration on the
interval [-1,1] on P3. Problem 3 checks whether an element is in the orthogonal
complement of the subspace in problem 1. Next we define a map on P3
and show that it is a linear transformation (problem 4), then determine its
matrix in the standard basis of P3. This particular space, with its
inner product, and the linear transformation are not like the commonly used
examples in the book, which makes for some interesting computations. A solid
understanding of the related definitions is useful for problems throughout this
exam.
Video 3:
Discusses just one problem (problem 6): finding the kernel of the linear
transformation defined in problem 4. Again the computation here is
interesting. The video gives two ways of solving the problem.
Video 4:
We go back to the space R3. Problem 7 is to find an orthogonal basis of a
subspace using the Gram-Schmidt process. Problem 8 is to find a basis of the
orthogonal complement of the subspace in problem 7.
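For reference, here is a minimal numpy implementation of the Gram-Schmidt
process, applied to a hypothetical subspace of R3 (the exam's vectors are in
the pdf):

import numpy as np

def gram_schmidt(vectors):
    # Classical Gram-Schmidt: subtract the projections onto the vectors
    # found so far, then normalize what is left.
    basis = []
    for v in vectors:
        w = v - sum((v @ u) * u for u in basis)
        basis.append(w / np.linalg.norm(w))
    return basis

v1 = np.array([1., 1., 0.])     # hypothetical spanning vectors
v2 = np.array([1., 0., 1.])
for u in gram_schmidt([v1, v2]):
    print(u)                    # an orthonormal basis for span{v1, v2}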
Video 5:
Deals with a computation of a determinant using its basic properties (problem
9), two problems (10 and 11) on definitions (Cramer's rule and finding the
inverse using the adjoint), then a problem of finding the area of a
parallelogram (problem 12). Problem 12 can be solved using a nice formula
involving the determinant. We also explain another way, using the method we
used in problem 7 (see video 4 above).
Video 6:
For problem 13 we go back to polynomials, this time to find an orthogonal basis
of a subspace of P2 (polynomials of degree less than or equal to 2). The
difference between this problem and problem 7 (in video 4) is that we are using
the special inner product on P2, so there is a long computation that needs a
whole video to explain the process.
Video 7:
Discusses problem 14, which is to find the orthogonal projection of a function
in P2 to the subspace in problem 13. The video explains the problem in the
standard linear algebra way, then explains how this can be done with calculus
(using the shortest distance).
Video 8:
The last problem in this exam (problem 15) explains problem 14 in a geometric
way. In other words, the answer to problem 14 is the parabola through the
origin that is closest to the constant function 1. Drawing the graph, with an
explanation, finishes the exam nicely.
The Final Exam of Spring 2004 has 39 problems in 27 pages. We go over the exam in 15 videos. There are some interesting problems in this exam that are not normally seen in other exams.
First we are given a real symmetric 3x3 matrix and problems 1 to 9 are related to this matrix.
Video 1 does some basic computations regarding this matrix (problems 1 to 4): compute the determinant and rank, and find the inverse matrix and verify it.
Video 2 computes the characteristic polynomial (problem 5) and finds its factorization to determine the eigenvalues (problem 6). Then it finds an orthonormal eigenbasis for the matrix (problem 7), which involves some computation.
Video 3 computes the change of basis matrix (problem 8) and the inverse of the change of basis matrix (problem 9). Then it continues to the next sequence of problems. After recalling a formula in the book (problem 10), we are given a new matrix. In the computation to determine the singular values of the new matrix (problem 11), we find out the relation between this matrix and the matrix in problems 1 to 9. From there we solve problem 12 (find one set of vectors) by using the formula in problem 10.
Video 4 continues to find another set of vectors for the new matrix (problem 13), then uses the results from problems 12 and 13 to explicitly write out the mapping for the matrix (as a linear transformation). This is problem 14.
Video 5 starts by recalling the definition of the singular value decomposition (problem 15), then computes it for our matrix (problem 16).
Video 6 discusses a new sequence of problems. We work on polynomials of degree at most 1 and 2. First we recall the usual inner product (problem 17). Then we use it to find an orthonormal basis for the polynomials of degree at most 1 (problem 18). The computation is not trivial, since it involves integrals.
Video 7 deals with projections. First we compute the projection of one vector (problem 19), then we find the projection matrix (problem 20). Further, we compute the dimension of the kernel of the projection and find a basis (problems 21 and 22). Problem 23 uses the result from problem 19 to find a least squares integral.
Video 8 discusses problem 24, which is to find the area of the parallelogram generated by two vectors. The video gives two ways to solve the problem. One uses the determinant formula (see 6.3 video 3). The other is a direct computation without using the determinant.
Video 9 computes three determinant problems (problems 25-27) using basic properties of the determinant.
Video 10 solves problem 28, which is to determine an orthonormal basis by finding the orthogonal complement of a subspace.
Video 11 first computes the matrix representing a certain linear transformation (problem 29), then finds a basis of the kernel for this linear transformation (problem 30).
Videos 12 and 13 do the next sequence of problems using a new 4x4 matrix. In problems 31-36 we compute a basis for the kernel, an orthonormal basis for the kernel, a basis for the orthogonal complement of the kernel, an orthonormal basis for the orthogonal complement of the kernel, a basis for the image, and an orthonormal basis for the image.
Video 14 starts with a quadratic equation in three variables. Problem 37 is to find the points that are closest to the origin. We solve the problem using quadratic forms. Problem 38 computes the distance from those points to the origin.
Video 15 discusses the last problem (problem 39). In the space of continuous
functions, we find a length-one vector in the subspace generated by one
element.
********************************************************************************************************************
There are a couple of sequences of videos about
Fibonacci numbers. The first sequence
doesn't really fit with our linear algebra focus except that it does use
induction, something we need in the linear algebra course.
In fact, the proof in the first sequence of videos requires a double induction.
The goal is to prove a particular formula for the 2n-th Fibonacci number in
terms of the k-th Fibonacci numbers for k less than or equal to n. This involves
a lot of binomial coefficients. It requires finding a formula for the odd
Fibonacci numbers and then using the double induction. This came up in an
elementary number theory course I was teaching and the students had trouble with
it, so I made the videos.
In terms of use for linear algebra, it is a good place to see a
double induction.
Our
second sequence of
videos about Fibonacci numbers is a beautiful application of eigenvalues and
eigenvectors in linear algebra. There is a closed formula for the Fibonacci
numbers that involves things like the square root of 5 and appears, at first,
to make no sense.
There are 5 videos. The first
video sets up the problem by stating the formula and what we will do. The
second video proves the
formula (Binet's formula) using induction, but this gives no insight into where
the formula comes from. The
3rd video describes how we will go about using eigenvalues and eigenvectors
to get the formula for Fibonacci numbers. The
4th video does many of the
preliminary computations and the
5th video finally produces
(and proves) the formula of interest.
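For readers who want to experiment, here is a small numpy sketch (mine, not
the videos') that extracts Binet's formula numerically from the eigenvalues of
the Fibonacci matrix:

import numpy as np

A = np.array([[1., 1.],     # the Fibonacci matrix: (F_{n+1}, F_n) = A (F_n, F_{n-1})
              [1., 0.]])
psi, phi = np.linalg.eigh(A)[0]     # eigenvalues (1 - sqrt(5))/2 and (1 + sqrt(5))/2
binet = lambda n: (phi**n - psi**n) / np.sqrt(5)   # Binet's formula
print([round(binet(n)) for n in range(1, 11)])     # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]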
********************************************************************************************************************
The next sequence of videos is about least squares
approximations. This sequence was motivated by the problem in the Spring
2003 exam series about finding the linear approximation to x-squared on [0,1]. I
try to connect this up with the line of best fit for a scatter plot as well.
This sequence could be made more elegant if the adjoint of a matrix is developed
more fully first. However, I didn't do that, so some of this is ad hoc.
The sequence has 9 videos in it. The videos cover things as follows:
(1) Sets up the problem and
describes what we'll do in the videos.
(2) Talks about how least
squares is just the Pythagorean theorem and sets up the problem of line of best
fit for a scatter plot.
(3) Describes the linear
algebra that solves the problem of the line of best fit for a scatter plot and
connects it to least squares.
(4) Generalizes the above
to finding the parabola (i.e. quadratic) of best fit, or higher degree
polynomial of best fit.
(5) Does something a bit
unusual by connecting up the line of best fit for a scatter plot to the linear
approximation of a continuous function. The way we do that is to make a scatter
plot out of points on the curve x-squared and show that in the limit as we put
more and more points on it, we end up minimizing the integral that comes from
the inner product.
(6) This just sets up the
basic math of doing an orthogonal projection.
(7) Here we consider an
inconsistent system of equations Ax=y (such as those we had in the line of best
fit for a scatter plot), and show how to solve it using the transpose of A (a
minimal numerical sketch appears after this list). Since we are dealing only
with real numbers, the transpose is the adjoint. If we developed the adjoint
more, this could be made more elegant.
(8) Since we've set up all
of the necessary mathematics, we actually derive the formulas for the slope
and y-intercept of the line of best fit for a scatter plot. These are formulas
whose derivation should not be done in public.
(9) We don't do the
computations (here in public), but we look at the problem of x-squared where we
space out n+1 points for a scatter plot that lies on it. We show that the
answer, i.e. the line of best fit, has, as its limit, the linear approximation
of x-squared.
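Here is the minimal numerical sketch promised in (7), with a hypothetical
scatter plot (my own data, not from the videos):

import numpy as np

xs = np.array([0., 1., 2., 3.])     # hypothetical scatter plot data
ys = np.array([1., 3., 4., 8.])
A = np.column_stack([np.ones_like(xs), xs])   # columns for intercept and slope
# The system A c = ys is inconsistent; solve the normal equations instead.
c = np.linalg.solve(A.T @ A, A.T @ ys)
print(c)    # [0.7 2.2]: intercept and slope of the line of best fit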
*********************************************************************************************************************************************************
In keeping with the philosophy that we should work some really basic problems that require computations, this single video just checks to see whether 3 vectors in 3-space form a basis.
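A minimal numpy sketch of that check, with hypothetical vectors (the video's
vectors may differ): put the vectors in the columns of a matrix; they form a
basis exactly when the determinant is nonzero.

import numpy as np

v1, v2, v3 = [1., 2., 0.], [0., 1., 1.], [1., 0., 1.]   # hypothetical vectors
M = np.array([v1, v2, v3]).T        # put the vectors in the columns
print(np.linalg.det(M))             # 3.0; nonzero, so the three vectors form a basis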
*********************************************************************************************************************************************************
Quotient spaces are ultimately very important for linear algebra, but you
can do a lot without them so they are not always developed much. In
this sequence of 7 videos we
give the necessary definitions and prove the basic properties of quotient spaces
as follows:
(1) Cosets of a subspace
are defined.
(2) We show how to tell
if two cosets are the same.
(3) Addition and scalar
multiplication are defined on the set of cosets of a subspace.
(4) We show that the set
of all cosets of a subspace forms a vector space we call the quotient space.
(5) Here we show a
connection between bases for the quotient space and the total space as well as
a relation between the dimensions of all the involved vector spaces.
(6) Next we show that
the map V -> V/W of the total space to the quotient space is a linear
transformation that is onto and that its kernel (null space) is just W.
(7) Finally, we show
that any linear transformation factors through the quotient space of the source
space by the kernel.
More could be done, but this covers some of the basics.
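For reference, here is a compact LaTeX summary (my own notation) of the
standard facts the videos establish, keyed to items (1)-(7):

v + W = \{\, v + w : w \in W \,\}, \qquad v_1 + W = v_2 + W \iff v_1 - v_2 \in W

(v_1 + W) + (v_2 + W) = (v_1 + v_2) + W, \qquad c\,(v + W) = (cv) + W

\dim(V/W) = \dim V - \dim W, \qquad \pi\colon V \to V/W,\ \pi(v) = v + W,\ \ker\pi = W

T\colon V \to U \text{ linear} \ \Rightarrow\ \bar{T}\colon V/\ker T \to U,\ \bar{T}(v + \ker T) = T(v),\ \text{so } T = \bar{T}\circ\pi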
*********************************************************************************************************************************************************
Videos for direct sum
The direct sum is an important concept in linear algebra but is not always
pursued in courses. This is a short sequence of videos
to set up the basic properties of the direct sum. It is not exhaustive at all.
(1) We define the direct sum and
show that n-tuples are just the n-fold direct sum of the field.
(2) We give conditions on V and
two subspaces so that V is the direct sum of the two subspaces. There are a
bunch of nit-picky things to state and prove here to get the basic properties
of this situation.
(3) We talk about general products
and sums here, just briefly, for a cultural aside (i.e. optional). But, really,
we just want to show we can construct a new vector space from two vector spaces
in such a way that the new one is the direct sum of the two.
(4) Here we give the relationship
between the dimensions of a vector space and the dimension of two subspaces
whose direct sum is the vector space. (Also, if the sum of two subspaces is V
but it isn't the direct sum, we discuss relationships among dimensions.)
(5) Here we explore the connection
between the bases for subspaces whose direct sum is V and a basis for V.
(6) Just a continuation of (5).
(7) We show that for any subspace
of a finite dimensional vector space, there is a complementary subspace such
that the two have the big vector space as the direct sum.
(8) We show how to project onto
one summand of a direct sum and how to analyze the kernel etc.
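For reference, here is a compact LaTeX summary (my own notation) of the
standard facts from (1), (2), (4) and (8):

V = U \oplus W \iff V = U + W \text{ and } U \cap W = \{0\} \iff \text{every } v \in V \text{ is uniquely } u + w

\dim(U + W) = \dim U + \dim W - \dim(U \cap W), \qquad \dim(U \oplus W) = \dim U + \dim W

p\colon U \oplus W \to U,\ p(u + w) = u, \qquad \operatorname{im} p = U,\ \ker p = W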
*********************************************************************************************************************************************************
Web lecture videos for topics in the textbook (for 110.201 Linear Algebra)
These are the video lectures that discuss topics in Chapters 1 through 8 of the textbook.
Videos for Chapter 1 (topics on linear equations) The first chapter discusses
systems of linear equations. It introduces matrices and Gauss-Jordan
elimination. It also discusses the number of solutions of linear systems, and
it introduces a lot of definitions and notation that will be used throughout
the course.
1.1 and 1.2 These two sections discuss basic notions of linear systems, then define matrices and use matrix notation to do Gauss-Jordan elimination. The first video lecture introduces systems of linear equations. It discusses two different ways to solve a system of linear equations in two variables, then gives a geometric interpretation of the solution (the intersection of two straight lines in the plane). The second video gives geometric interpretations of the three possibilities for solutions in the two-variable case: one solution (two lines intersect), infinitely many solutions (two lines coincide) and no solution (two parallel lines). The third video introduces the notion of matrices, rows and columns, then uses this notation to redo the previous examples. Video 4 defines general linear systems (n equations in m unknowns) in summation notation, as well as the coefficient matrix and the augmented matrix of a linear system. Then it discusses the very important reduced row echelon form (rref), and the row-operation steps used to get a matrix into rref. The next two videos (Video 5 and Video 6) compute row-reduction examples (including a real-life problem). The computations here are key techniques in the early stage of learning linear algebra.
1.3 This section discusses the various cases of solutions of linear systems. Video 7 first defines the rank of a matrix using rref, then discusses when a linear system has no, one, or infinitely many solutions, depending on the rank and the number of equations and unknowns. Video 8 uses the vector form to demonstrate how solutions of a linear system are related to linear combinations of vectors. Video 9 then discusses general cases and gives the formal definition of a linear combination of vectors. It also simplifies the notation of a linear combination into the form of a matrix times a vector. Then it discusses sums of matrices and scalar multiples of matrices. Video 10 explains how to view the product of a matrix and a vector in terms of dot products. Video 11 uses the dot product to explain vectors that are perpendicular, and works a 4-dimensional example by solving a system of linear equations in 4 variables. Video 12 defines the standard basis of n-dimensional space and the identity matrix, and shows how convenient the identity matrix is in matrix and vector products.
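Here is a small sympy sketch of the kind of Gauss-Jordan computation worked in
Videos 5 and 6, on a hypothetical augmented matrix:

from sympy import Matrix

# Hypothetical system: x + 2y + z = 3, 2x + 5y + z = 7, x + y + 3z = 4.
M = Matrix([[1, 2, 1, 3],
            [2, 5, 1, 7],
            [1, 1, 3, 4]])
rref, pivots = M.rref()
print(rref)      # rref of the augmented matrix: the solution is x = -5, y = 3, z = 2
print(pivots)    # (0, 1, 2): three pivots, so the coefficient matrix has rank 3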
Videos for Chapter 5
(topics on orthogonality) This chapter discusses various topics regarding
orthonormal bases, including some important applications (the Gram-Schmidt
process and least squares). This is very useful material, and certain
computations can look complicated to students. These videos are intended to
help students understand the definitions, and they give examples of solving
problems involving orthogonality.
5.1. This section discusses orthonormal bases and orthogonal projections. The first video discusses basic definitions (perpendicularity, length, unit vector), then defines orthogonal vectors and gives two basic examples. The second video explains that orthogonal vectors are linearly independent, defines orthogonal bases and shows that the orthogonal complement is a subspace. The third video discusses orthogonal projections and gives the formula for orthogonal projections. The fourth video uses the Pythagorean theorem to compare the length of a vector and its projections, then discusses the Cauchy-Schwarz inequality. The fifth video discusses the angle between two vectors and the correlation coefficient.
5.2. This discusses the Gram-Schmidt process, a very important way to construct orthonormal bases and one of the most useful tools in linear algebra. There is one video for this section. It discusses the details of the Gram-Schmidt process.
5.3. This section discusses orthogonal transformations and orthogonal matrices. There are six videos for this section. The first and second videos review some basic facts about orthogonality from 5.1 and 5.2. The third, fourth and fifth define orthogonal transformations and explain their basic properties and their relation to orthogonal bases. The sixth video defines the transpose of a matrix and uses it to discuss matrices of orthogonal projections.
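The following numpy sketch (my own, not from the videos) numerically checks two
facts from this section: a rotation matrix is orthogonal, and, for a matrix A
with linearly independent columns, the standard formula P = A (A^T A)^(-1) A^T
gives the matrix of orthogonal projection onto the image of A:

import numpy as np

t = 0.7                                      # any angle; rotations are orthogonal
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
print(np.allclose(Q.T @ Q, np.eye(2)))       # True: the columns are orthonormal
print(np.allclose(np.linalg.inv(Q), Q.T))    # True: the inverse is the transpose

A = np.array([[1., 0.],                      # hypothetical independent columns
              [1., 1.],
              [0., 1.]])
P = A @ np.linalg.inv(A.T @ A) @ A.T         # projection onto the image of A
print(np.allclose(P @ P, P), np.allclose(P.T, P))   # True True: P^2 = P, P symmetric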
5.4. This section applies what we learned in the previous sections to least squares. To understand this application, follow the video sequence on least squares; specifically, follow the order of videos 7, 2, 3, 4 and 8. Since some of these videos use inner product spaces in the discussion, it is better to go over the videos for 5.5 first.
5.5. This section discusses inner product spaces, an important application of orthogonality. Some of the computations here are complicated. There are six videos for this section. The first video defines inner products and gives a couple of examples. The second video defines the trace of a matrix and uses it to give one more example of an inner product space. The third video defines the norm of a vector and the distance between two vectors, continues with important examples like Fourier analysis, then discusses orthogonal projections using inner products, and mentions more examples covered in the least squares videos. The fourth, fifth and sixth videos continue to work on more examples. Some problems in this section are different from other computations. It's useful to study the videos for 5.5 and 5.4 and the problems involving orthogonality in the various exam videos. See:
From: Linear Algebra Exams from Spring 2003.
Exam 2: videos parts 2, 3 and 4, and part4a.
The final exam: video part5.
From: Linear Algebra Exams from Fall 2003.
Exam 2: videos 1, 2, 3, 4, and 6.
Final exam: Part 1, Part 2, Part 4, Part 9, Part 10, Part 11, and Part 12.
From: Linear Algebra Exams from Spring 2004.
Exam 2: Video 2, Video 3, Video 4, Video 6, Video 7 and Video 8.
Videos for Chapter 6 (topics on determinants) This chapter discusses various topics regarding determinants.
6.1 and 6.2. These two sections define determinants of square matrices and describe various properties of determinants. They also give formulas for computing determinants of 2x2 and 3x3 matrices, as well as methods for general cases. The videos use their own approach. The first video introduces basic properties and uses them to define the determinant. The second video discusses elementary row operations on matrices (row reductions) and how they affect the determinant of the matrix. These are used in the third video to show how to compute determinants. Video 4 discusses the determinant formulas for 2x2 and 3x3 matrices. Video 5 discusses the determinants of products and inverses. Video 6 discusses the determinants of transposes of matrices. Video 7 discusses problems involving determinants of matrices with variables. These will be used in Chapter 7 when eigenvalues are discussed. Video 8 discusses the determinants of upper-triangular matrices. Video 9 gives another definition of the determinant (which is given in the book in 6.1).
6.3. This section discusses geometric interpretations of the determinant. The first video shows how to use the determinant to get a formula for the area of a parallelogram. The second video discusses the volume formula using the determinant. The third video shows the determinant as the expansion factor. Video 4 shows that the determinant of the product of transformations is the product of the determinants, and also explains this in the language of multi-variable calculus. Video 5 discusses Cramer's rule, a way to solve linear systems using determinants.
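A minimal numpy sketch of the area formula from the first video, with
hypothetical vectors:

import numpy as np

v1 = np.array([3., 1.])     # hypothetical edge vectors of the parallelogram
v2 = np.array([1., 2.])
area = abs(np.linalg.det(np.column_stack([v1, v2])))
print(area)                 # 5.0: the area is |det| of the matrix with v1, v2 as columns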
Videos for Chapter 7 (topics on eigenvalues and eigenvectors) This chapter discusses various topics regarding eigenvalues and eigenvectors.
7.1 This section defines eigenvalues and eigenvectors of linear transformations and gives some simple examples. Correspondingly, the first video here introduces these definitions via a couple of examples.
7.2 and 7.3 These are the two key sections in this chapter. They discuss various ways to find eigenvalues and eigenvectors of matrices. In the process they also introduce some important terms like the trace, the characteristic polynomial, the multiplicity of an eigenvalue, and eigenbases. The second video uses a simple example to demonstrate how to find eigenvalues and eigenvectors of 2x2 matrices. The third video then continues with the example to illustrate how to find a new basis using the eigenvectors, and the similarity of the corresponding matrices. Video 4 discusses the characteristic polynomial and the eigenspaces associated with the eigenvalues, as well as the definitions of algebraic and geometric multiplicity. Video 5 proves the linear independence of eigenvectors associated with distinct eigenvalues, and gives conditions for when an eigenbasis exists. Video 6 discusses basic properties of similar matrices.
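A minimal sympy sketch of the kind of 2x2 computation demonstrated in the
second video, on a hypothetical matrix:

from sympy import Matrix, symbols

A = Matrix([[4, 1],     # hypothetical 2x2 matrix (not the video's)
            [2, 3]])
lam = symbols('lam')
print((A - lam * Matrix.eye(2)).det().factor())   # characteristic polynomial: (lam - 2)*(lam - 5)
print(A.eigenvects())   # for each eigenvalue: its multiplicity and eigenvectors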
7.4 This section discusses diagonalization. Video 7 shows that when an eigenbasis exists for a linear transformation, the matrix with respect to the eigenbasis is diagonal, and that this is an if-and-only-if condition for the linear transformation to be diagonalizable. Video 8 discusses some techniques to determine whether a linear transformation is diagonalizable. We can use these techniques for some nice applications, like obtaining a formula for the Fibonacci numbers.
7.5 This section discusses complex eigenvalues. Video 9 gives an introduction to this topic.
Videos for Chapter 8 (topics on symmetric matrices and quadratic forms) This chapter discusses symmetric matrices and applies the results to quadratic forms.
8.1 This section discusses various important properties of symmetric matrices. In particular, they are orthogonally diagonalizable and their eigenvalues are real. The first video shows that if a matrix is orthogonally diagonalizable (meaning it has an orthonormal basis of eigenvectors), then it is a symmetric matrix. The video goes on to state that symmetry is also a sufficient condition. The next several videos discuss the proof of this statement (called the spectral theorem). The second video shows that eigenvectors that belong to different eigenvalues of a symmetric matrix are perpendicular to each other. The third video shows that the characteristic polynomial of a real symmetric matrix cannot have an irreducible quadratic factor, so all the eigenvalues are real. What is left is to show that the algebraic multiplicity of any eigenvalue is the same as its geometric multiplicity. Instead of proving that, Video 4 finishes the proof of the spectral theorem via direct computation and mathematical induction. The next several videos (Video 5, Video 6, Video 7, Video 8 and Video 9) give a series of examples with detailed computations, from direct observation to computing the characteristic polynomial, to an example with geometric multiplicity bigger than one.
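A quick numpy check of what the spectral theorem promises, on a hypothetical
symmetric matrix:

import numpy as np

A = np.array([[2., 1., 0.],     # hypothetical symmetric matrix
              [1., 2., 1.],
              [0., 1., 2.]])
evals, Q = np.linalg.eigh(A)    # real eigenvalues; orthonormal eigenvectors in the columns of Q
print(evals)                                       # all real
print(np.allclose(Q.T @ Q, np.eye(3)))             # True: the eigenbasis is orthonormal
print(np.allclose(Q @ np.diag(evals) @ Q.T, A))    # True: A = Q D Q^T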
8.2 This section applies the results from 8.1 to quadratic forms. Video 10 explains how quadratic forms are associated with symmetric matrices and explains that after an orthonormal change of basis they can be rewritten without cross terms (i.e., diagonalized). Video 11 defines positive definite quadratic forms. Then it discusses level sets of quadratic forms and explains the interesting relations between level sets of quadratic forms and ellipses or hyperbolas with various principal axes. Next are several videos of examples. Video 12 works through a concrete example to explain all the definitions involved in the previous video. Video 13 continues the example by asking which points on the level sets are closest to or furthest from the origin. Video 14 computes the same distance-to-origin question for a 3-dimensional example. Video 15 does a complete computation for another 3-dimensional example.
8.3 This section discusses singular values of matrices and the singular value decomposition, using the results from 8.1 and 8.2. Video 16 gives an overview of what to do, then discusses how, from a matrix A, we can get a symmetric matrix A^T A whose eigenvalues are all non-negative. Video 17 defines the singular values of a matrix, then uses the tools we have learned so far to explain the singular value decomposition. The rest of the videos for chapter 8 are examples of computations of singular values and singular value decompositions. Video 18 shows that the image of the unit circle under an invertible linear transformation is an ellipse (similarly, the image of a sphere in the 3-dimensional case must be an ellipsoid, and likewise in higher dimensions). Video 19, Video 20, Video 21, Video 22, Video 23, and Video 24 discuss several concrete examples.
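A minimal numpy sketch of a singular value computation on a hypothetical 2x3
matrix, checking that the singular values are the square roots of the
eigenvalues of A^T A:

import numpy as np

A = np.array([[1., 0., 1.],     # hypothetical 2x3 matrix
              [0., 1., 1.]])
U, s, Vt = np.linalg.svd(A)     # the decomposition A = U S V^T
print(s)                                               # singular values: [sqrt(3), 1]
print(np.sqrt(np.linalg.eigvalsh(A.T @ A))[::-1][:2])  # same: sqrt of eigenvalues of A^T A
S = np.zeros_like(A)
S[:2, :2] = np.diag(s)          # place the singular values in a 2x3 diagonal block
print(np.allclose(U @ S @ Vt, A))                      # True: the factorization reconstructs A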
*********************************************************************************************************************************************************
Exam 1: part1 part2 part3 part4 part5 exam paper (pdf)
Exam 2: part1 part1a part2 part3 part4 part4a part5 exam paper (pdf)
Final Exam:
part1
part2
part3
part4
part5
part6
part7
exam
paper (pdf)
Exam 1: part1 part2 part3 part4 exam paper (pdf)
Exam 2: part1 part2 part3 part4 part5 part6 exam paper (pdf)
Final Exam:
part1
part2
part3
part4
part5
part6
part7
part8
part9
part10
part11
part12
part13
appendix1
appendix2
exam
paper (pdf)
Exam 1: part1 part2 part3 part4 part5 exam paper (pdf)
Exam 2: part1 part2 part3 part4 part5 part6 part7 part8 exam paper (pdf)
Final Exam: part1 part2 part3 part4 part5 part6 part7 part8 part9 part10 part11 part12 part13 part14 part15 exam paper (pdf)
5.1 Web Lecture Videos 1 2 3 4 5
5.3 Web Lecture Videos 1 2 3 4 5 6
5.5 Web Lecture Videos 1 2 3 4 5 6
Funding for this project is provided by the
Provost's Office through a Gateway Science Initiative grant.
Please send comments or report wrong links, typos,
and other problems to wsw@math.jhu.edu.