William Gilbert Strang, born November 27, 1934 in Chicago, is a distinguished American mathematician who has made contributions to finite element theory, the calculus of variations, wavelet analysis, and linear algebra. His contributions to education are especially notable and include seven classic mathematics textbooks and one monograph. Strang has been a professor at the Massachusetts Institute of Technology since 1962; his courses Introduction to Linear Algebra and Computational Science and Engineering are both included in MIT OpenCourseWare and have been widely praised.
Table of Contents:
1 Vectors and Matrices 1
1.1 Vectors and Linear Combinations 2
1.2 Lengths and Angles from Dot Products 9
1.3 Matrices and Their Column Spaces 18
1.4 Matrix Multiplication AB and CR 27
2 Solving Linear Equations Ax = b 39
2.1 Elimination and Back Substitution 40
2.2 Elimination Matrices and Inverse Matrices 49
2.3 Matrix Computations and A = LU 57
2.4 Permutations and Transposes 64
2.5 Derivatives and Finite Difference Matrices 74
3 The Four Fundamental Subspaces 84
3.1 Vector Spaces and Subspaces 85
3.2 Computing the Nullspace by Elimination: A = CR 93
3.3 The Complete Solution to Ax = b 104
3.4 Independence, Basis, and Dimension 115
3.5 Dimensions of the Four Subspaces 129
4 Orthogonality 143
4.1 Orthogonality of Vectors and Subspaces 144
4.2 Projections onto Lines and Subspaces 151
4.3 Least Squares Approximations 163
4.4 Orthonormal Bases and Gram-Schmidt 176
4.5 The Pseudoinverse of a Matrix 190
5 Determinants 198
5.1 3 by 3 Determinants and Cofactors 199
5.2 Computing and Using Determinants 205
5.3 Areas and Volumes by Determinants 211
6 Eigenvalues and Eigenvectors 216
6.1 Introduction to Eigenvalues: Ax = λx 217
6.2 Diagonalizing a Matrix 232
6.3 Symmetric Positive Definite Matrices 246
6.4 Complex Numbers and Vectors and Matrices 262
6.5 Solving Linear Differential Equations 270
7 The Singular Value Decomposition (SVD) 286
7.1 Singular Values and Singular Vectors 287
7.2 Image Processing by Linear Algebra 297
7.3 Principal Component Analysis (PCA by the SVD) 302
8 Linear Transformations 308
8.1 The Idea of a Linear Transformation 309
8.2 The Matrix of a Linear Transformation 318
8.3 The Search for a Good Basis 327
9 Linear Algebra in Optimization 335
9.1 Minimizing a Multivariable Function 336
9.2 Backpropagation and Stochastic Gradient Descent 346
9.3 Constraints, Lagrange Multipliers, Minimum Norms 355
9.4 Linear Programming, Game Theory, and Duality 364
10 Learning from Data 370
10.1 Piecewise Linear Learning Functions 372
10.2 Creating and Experimenting 381
10.3 Mean, Variance, and Covariance 386
Appendix 1 The Ranks of AB and A + B 400
Appendix 2 Matrix Factorizations 401
Appendix 3 Counting Parameters in the Basic Factorizations 403
Appendix 4 Codes and Algorithms for Numerical Linear Algebra 404
Appendix 5 The Jordan Form of a Square Matrix 405
Appendix 6 Tensors 406
Appendix 7 The Condition Number of a Matrix Problem 407
Appendix 8 Markov Matrices and Perron-Frobenius 408
Appendix 9 Elimination and Factorization 410
Appendix 10 Computer Graphics 414
Index of Equations 419
Index of Notations 422
Index 423
Excerpt:
One goal of this Preface can be achieved right away. You need to know about the video lectures for MIT’s Linear Algebra course Math 18.06. Those videos go with this book, and they are part of MIT’s OpenCourseWare. The direct links to linear algebra are
https://ocw.mit.edu/courses/18-06-linear-algebra-spring-2010/
https://ocw.mit.edu/courses/18-06sc-linear-algebra-fall-2011/
On YouTube those lectures are at https://ocw.mit.edu/1806videos and /1806scvideos
The first link brings the original lectures from the dawn of OpenCourseWare. Problem solutions by graduate students (really good) and also a short introduction to linear algebra were added to the new 2011 lectures. And the course today has a new start: the crucial ideas of linear independence and the column space of a matrix have moved near the front.
I would like to tell you about those ideas in this Preface.
Start with two column vectors a1 and a2. They can have three components each, so they correspond to points in 3-dimensional space. The picture needs a center point which locates the zero vector :
     [2]        [1]                [0]
a1 = [3]   a2 = [4]  zero vector = [0]
     [1]        [2]                [0]
The vectors are drawn on this 2-dimensional page. But we all have practice in visualizing three-dimensional pictures. Here are a1, a2, 2a1, and the vector sum a1 + a2.
[Figure: a1, a2, 2a1, and the vector sum a1 + a2 drawn in 3-dimensional space, with a1 = (2, 3, 1), a2 = (1, 4, 2), 2a1 = (4, 6, 2), and a1 + a2 = (3, 7, 3).]
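Those numbers are easy to verify numerically. Here is a minimal sketch in Python with NumPy (the library choice is ours for illustration, not part of the book):

    import numpy as np

    a1 = np.array([2, 3, 1])      # first column vector from the Preface
    a2 = np.array([1, 4, 2])      # second column vector

    print(a1 + a2)                # vector sum      -> [3 7 3]
    print(2 * a1)                 # scalar multiple -> [4 6 2]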
That picture illustrated two basic operations: adding vectors a1 + a2 and multiplying a vector by 2. Combining those operations produced a “linear combination” 2a1 + a2 :
Linear combination = ca1 + da2 for any numbers c and d
Those numbers c and d can be negative. In that case ca1 and da2 will reverse their directions: they go right to left. Also very important, c and d can involve fractions. Here is a picture with a lot more linear combinations. Eventually we want all vectors ca1 + da2.
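Continuing the same illustrative NumPy sketch, a few combinations ca1 + da2 with positive, negative, and fractional coefficients (the particular values of c and d below are our own examples, not from the text):

    import numpy as np

    a1 = np.array([2, 3, 1])
    a2 = np.array([1, 4, 2])

    # a few choices of c and d, including a negative value and fractions
    for c, d in [(2, 1), (-1, 1), (0.5, -0.5)]:
        print(c, d, c * a1 + d * a2)

    #  2*a1 + 1*a2     = [5, 10, 4]
    # -1*a1 + 1*a2     = [-1, 1, 1]
    #  0.5*a1 - 0.5*a2 = [0.5, -0.5, -0.5]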