How would you describe XA? Generalize to similar statements concerning n × n matrices, and their products with unit vectors. Let A, B be the matrices of Exercise 3(a). Do the same for 3(b) and 3(c). Prove the same rule for any two matrices A, B which can be multiplied. Identify a 1 × 1 matrix with a number. Show that the conditions of a scalar product are satisfied, except possibly the condition concerning positivity.
Generalize to 4 × 4 matrices. Let X be the indicated column vector, and A the indicated matrix. Find AX as a column vector. Let A and X be as indicated. What is AX? Let X be a column vector having all its components equal to 0 except the i-th component, which is equal to 1. Let A be an arbitrary matrix whose size is such that we can form the product AX. What is AX? Show that the k-th column C^k can be written as such a product. This will be useful in finding the determinant of a product. Show that A is invertible. What is A^n where n is a positive integer?
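As a quick numerical illustration of the unit-vector exercise above (the matrix here is an invented example, and numpy stands in for the hand computation), multiplying A by the i-th standard unit vector singles out the i-th column of A:

```python
import numpy as np

# A is an arbitrary example matrix; e_i is the column vector with a
# single 1 in the i-th place.  The product A @ e_i is the i-th column.
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

for i in range(3):
    e_i = np.zeros(3)
    e_i[i] = 1.0
    assert np.allclose(A @ e_i, A[:, i])
```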
Show that the matrix A in Exercise 19 has an inverse. What is this inverse? Show that if A, B are n × n matrices which have inverses, then AB has an inverse. Let A be an n × n matrix. Define the trace of A to be the sum of the diagonal elements. Let A, B be the indicated matrices. What is A^2, A^3, A^k for any positive integer k?
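The trace and the powers A^2, A^3, A^k asked for above can be checked on any sample matrix; here is a small numpy sketch with a matrix of my own choosing:

```python
import numpy as np

A = np.array([[2, 1],
              [0, 3]])

print(np.trace(A))                    # sum of the diagonal elements: 2 + 3 = 5
A2 = A @ A                            # A^2
A3 = np.linalg.matrix_power(A, 3)     # A^3, and similarly A^k for any k
assert np.allclose(A3, A2 @ A)
```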
Let A be an invertible n × n matrix. Show that ^t(A^{-1}) = (^tA)^{-1}. We may therefore write ^tA^{-1} without fear of confusion. Show that ^t(^tA) = A. We then write simply ^tA. What is its inverse? Let A be a strictly upper triangular matrix, i.e. one all of whose components on and below the diagonal are 0. Show that A^n = 0 if A is of size n × n; the general case can be done by induction. Let A be a triangular matrix.

Among mappings, the linear mappings are the most important. A good deal of mathematics is devoted to reducing questions concerning arbitrary mappings to linear mappings. For one thing, they are interesting in themselves, and many mappings are linear.
On the other hand, it is often possible to approximate an arbitrary mapping by a linear one, whose study is much easier than the study of the original mapping. This is done in the calculus of several variables. A mapping from S to S' is an association which to every element of S associates an element of S'. A mapping will also be called a map, for the sake of brevity. A function is a special type of mapping, namely a mapping from a set into the set of numbers, i.e. into R or C. We extend to mappings some of the terminology we have used for functions.
We call T(u) the value of T at u, or also the image of u under T. The symbols T(u) are read "T of u". The set of all elements T(u), when u ranges over all elements of S, is called the image of T.
Let S and S' both be equal to R. Then f is a mapping from R into R. According to our terminology, Df is the value of the mapping D at f. Let S be the set of continuous functions on the interval [0, 1] and let S' be the set of differentiable functions on that interval.
Then If is a differentiable function. Just as we did with functions, we describe a mapping by giving its values. This is somewhat incorrect, but is briefer, and does not usually give rise to confusion. Let (x, y) be a point on the circle of radius 1. Therefore the image under F of the circle of radius 1 is a subset of the circle of radius 2. Hence every point on the circle of radius 2 is the image of some point on the circle of radius 1.
We conclude finally that the image of the circle of radius 1 under F is precisely the circle of radius 2. In general, let S, S' be two sets. This is what we did in the preceding argument. Example 7. Let S be a set and let V be a vector space over the field K. Let F, G be mappings of S into V. We also define the product of F by an element c of K to be the map whose value at an element t of S is cF(t).
It is easy to verify that conditions VS 1 through VS 8 are satisfied. Example 8. Let S be a set. For each element t of S, the value of F at t is a vector F(t).
The coordinates of F(t) depend on t. Hence there are functions f_1, ..., f_n such that F(t) = (f_1(t), ..., f_n(t)). Example 9. Let U, V, W be sets, let F be a mapping from U into V, and let G be a mapping from V into W. Then we can form the composite mapping from U into W, denoted by G ∘ F. Let U, V, W, S be sets, and let F: U → V, G: V → W, H: W → S be mappings. Then composition is associative: H ∘ (G ∘ F) = (H ∘ G) ∘ F. Here again, the proof is very simple. We shall discuss inverse mappings, but before that, we need to mention two special properties which a mapping may have.
In other words, f is injective means that f takes on distinct values at distinct elements of S. Example. We shall say that f is surjective if the image of f is all of S'. Thus every number is in the image of our map. A map which is both injective and surjective is defined to be bijective. To have a completely accurate notation, we should write f_{S,S'} or some such symbol which specifies S and S' in the notation, but this becomes too clumsy, and we prefer to use the context to make our meaning clear.
We note that the identity map is both injective and surjective. If we do not need to specify the reference to S (because it is made clear by the context), then we write I instead of I_S. We sometimes denote I_S by id_S or simply id. Finally, we define inverse mappings. Then f has an inverse mapping, which is nothing but the logarithm. This example is particularly important in geometric applications. Let V be a vector space, and let u be a fixed element of V.
In the next picture, we draw a set S and its translation by a vector u. Then f is both injective and surjective, that is, f is bijective. Let x, y ∈ S. To prove that f is surjective, let z ∈ S'.
This proves that f is surjective. Then f has an inverse mapping. Prove the statement about translations in the Example. Let F be as in Exercise 4. In Exercises 8 through 12, refer to Example 6. In each case, to prove that the image is equal to a certain set S, you must prove that the image is contained in S, and also that every element of S is in the image. Describe geometrically the image of the (t, u)-plane under F.
Since we usually deal with a fixed field K, we omit the prefix K, and say simply that F is linear. We define a map by associating to each element v ∈ V its coordinate vector X with respect to the basis.
Thus F(v) = X. We assert that F is a linear map. This proves that F is linear. We leave it to you to check that the conditions LM 1 and LM 2 are satisfied. Then we have a projection mapping, defined by the indicated rule; it is trivially verified that this map is linear. More generally, let K be a field, and A a fixed vector in K^n.
We have a linear map, namely the map X ↦ A·X. We can even generalize this to matrices. Let A be an m × n matrix with components in a field K. We obtain a linear map L_A such that L_A(X) = AX for every column vector X in K^n. Again the linearity follows from properties of multiplication of matrices.
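A minimal numeric sketch of the map L_A described above (the matrix and vectors are invented test data): the two defining conditions of linearity reduce to the distributive rules for matrix multiplication.

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [0., 1., 3.]])        # a 2 x 3 matrix, so L_A maps K^3 into K^2
X = np.array([1., 0., 2.])
Y = np.array([4., -1., 1.])
c = 5.0

assert np.allclose(A @ (X + Y), A @ X + A @ Y)   # L_A(X + Y) = L_A(X) + L_A(Y)
assert np.allclose(A @ (c * X), c * (A @ X))     # L_A(cX) = c L_A(X)
```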
Let V be any vector space. The mapping which associates to any element u of V this element itself is obviously a linear mapping, which is called the identity mapping. We denote it by id or simply I. Let V, V' be any vector spaces over the field K. The mapping which associates the element 0 of V' to any element u of V is called the zero mapping and is obviously linear. It is also denoted by O. Example 6. The space of linear maps. Let V, V' be two vector spaces over the field K. We consider the set of all linear mappings from V into V', and denote this set by ℒ(V, V'),
or simply ℒ if the reference to V, V' is clear. We shall define the addition of linear mappings and their multiplication by numbers in such a way as to make ℒ into a vector space. Indeed, it is easy to verify that the two conditions which define a linear map are satisfied. Then it is easily verified that aT is a linear map. We leave this as an exercise. We have just defined operations of addition and scalar multiplication in our set ℒ.
Finally, we have the zero map, which to every element of V associates the element 0 of V'. In other words, the set of linear maps from V into V' is itself a vector space.
The verification that the rules VS 1 through VS 8 for a vector space are satisfied is easy and left to the reader. Let D be the derivative. Let u, v, w be elements of V. This can be seen stepwise, using the definition of linear mappings.
Similarly, given a sum of more than three elements, an analogous property is satisfied. For instance, let u_1, ..., u_n be elements of V, and let a_1, ..., a_n be numbers. Then T(a_1 u_1 + ... + a_n u_n) = a_1 T(u_1) + ... + a_n T(u_n). The sum on the right can be taken in any order.
A formal proof can easily be given by induction, and we omit it. Let V and W be vector spaces. We shall prove that a linear map T satisfying the required conditions exists. Let v be an element of V, and let x_1, ..., x_n be the unique numbers such that v = x_1 v_1 + ... + x_n v_n. Let c be a number.
We have therefore proved that T is linear, and hence that there exists a linear map as asserted in the theorem. Determine which of the following mappings F are linear. Let V' be the vector space of vector fields on U. Is it linear? For this part (h) we assume you know some calculus.
Let v be an element of V. Show that F is linear. Prove that U is a subspace of V. Show that the image under L of a line segment in V is a line segment in U. Between what points? Show that the image of a line under L is either a line or a point. Let V be a vector space, and let v_1, v_2 be two elements of V which are linearly independent. Let v_1, v_2 be linearly independent elements of V, and assume that F(v_1), F(v_2) are linearly independent. Show that the image under F of the parallelogram spanned by v_1 and v_2 is the parallelogram spanned by F(v_1), F(v_2).
Let S be the square whose corners are at (0, 0), (1, 0), (1, 1), and (0, 1). Show that the image of this square under F is a parallelogram. Describe the image under T of the rectangle whose corners are (0, 1), (3, 0), (0, 0), and (3, 1).
For which vectors u is T_u a linear map? In Exercise 16, show that W is a subspace of V. In each case compute L(1, 0). Find L(0, 1). We denote the kernel of F by Ker F. Of course, this generalizes to n-space. Its kernel can be interpreted as the set of all X which are perpendicular to A.
Then P is a linear map whose kernel consists of all vectors in R^3 whose first two coordinates are equal to 0, i.e. the vectors lying on the third coordinate axis. Let v, w be in the kernel. Hence the kernel is a subspace. We contend that the following two conditions are equivalent: 1. the kernel of F is {0}; 2. if v, w are elements of V and F(v) = F(w), then v = w. In other words, F is injective. Conversely, assume that F is injective.
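For the projection example above, the kernel can also be computed numerically. The sketch below uses the SVD (with the caveat that a general-purpose routine would threshold small singular values) and recovers the third coordinate axis:

```python
import numpy as np

P = np.array([[1., 0., 0.],
              [0., 1., 0.]])        # P(x1, x2, x3) = (x1, x2)

_, s, Vt = np.linalg.svd(P)
kernel = Vt[len(s):]                # rows of V^T past the nonzero singular values
print(kernel)                       # spans (0, 0, 1): the third coordinate axis
assert np.allclose(P @ kernel[0], 0)
```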
The kernel of F is also useful to describe the set of all elements of V which have a given image in W under F. We refer the reader to Exercise 4 for this. The image of F is a subspace of W. Next, suppose that w_1, w_2 are in the image. If c is a number, then cw_1 = cF(v_1) = F(cv_1), where v_1 is an element with F(v_1) = w_1. Hence cw_1 is in the image. This proves that the image is a subspace of W. We denote the image of F by Im F. The next theorem relates the dimensions of the kernel and image of a linear map with the dimension of the space on which the map is defined.
Let n be the dimension of V, q the dimension of the kernel of L, and s the dimension of the image of L. If the image of L consists of 0 only, then our assertion is trivial. This will suffice to prove our assertion. Let v be any element of V. Then there exist numbers x_1, ..., x_s such that L(v) = x_1 w_1 + ... + x_s w_s, where w_1, ..., w_s is a basis of the image of L. We now show that they are linearly independent, and hence that they constitute a basis.
This concludes the proof of our assertion. Example 1 (continued). Thus its image has dimension 1. Hence its kernel has dimension 2. Example 2 (continued). By the formula of Theorem 3, hence L is bijective, as was to be shown. Let A, B be two vectors in R^2 forming a basis of R^2. Let A be a non-zero vector in R^2. Determine the dimension of the subspace of R^4 consisting of all X ∈ R^4 such that the indicated equations hold.
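The relation n = q + s in the theorem above can be tested numerically on a concrete matrix (an invented example; numpy's rank gives the dimension of the image):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])        # a linear map from R^3 to R^3

n = A.shape[1]                      # dimension of the domain V
s = np.linalg.matrix_rank(A)        # dimension of the image
q = n - s                           # dimension of the kernel, by the theorem
print(n, s, q)                      # here 3 = 2 + 1
```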
Let w be an element of W. What is the kernel of D? Let D^2 be the second derivative (i.e. D ∘ D). What is the kernel of D^2? In general, what is the kernel of D^n (the n-th derivative)? Let V be again the vector space of functions which have derivatives of all orders. Determine the dimension of W. Let V be the vector space of all infinitely differentiable functions. Let g be an element of V. Describe how the problem of finding a solution of the differential equation can be interpreted as fitting the abstract situation described in Exercise 4.
What is the kernel of L? For the general definition, cf. Chapter V. Let S be the set of symmetric n × n matrices.
Show that S is a vector space. What is the dimension of S? Let A be a real symmetric n × n matrix. Let M be the space of all n × n matrices. Show that U × W is a vector space with these definitions.
What is the zero element? To be done after you have done the preceding exercise. What is its image? What is its kernel? We can say something additional in the case of linear maps. Let U, V, W be vector spaces over a field K. Then the composite map G ∘ F is also a linear map.
This is very easy to prove. Let u, v be elements of U. This proves that G ∘ F is a linear mapping. The proofs are all simple. We shall just prove the first assertion and leave the others as exercises. Let u be an element of U. Then we may form F ∘ G and G ∘ F. It is not always true that these two composite mappings are equal. One sometimes calls F an operator. Then we can form the composite F ∘ F, which is again a linear map of V into itself. We shall denote the n-fold composite of F with itself by F^n.
Then G is a linear map. Corollary 4. Then F has an inverse linear map. Hence we conclude that F is both injective and surjective, so that an inverse mapping exists, and is linear by Theorem 4.
But the image of F is a subspace of R^2, which also has dimension 2, and hence this image is equal to all of R^2, so that F is surjective. Hence F has an inverse, and this inverse is a linear map by Theorem 4.
Let {v_1, ..., v_n} be a basis for V. The image of L is all of V, because v_1, ..., v_n lie in the image and generate V. By Corollary 4. Remark on notation. We often, and even usually, write FG instead of F ∘ G. If F and G commute, then you can work with the arithmetic of linear maps just as with the arithmetic of numbers. Finish the proof of Theorem 4. Show that L has an inverse linear map. Let F, G be invertible linear maps of a vector space V onto itself.
Show that L is invertible. Show that L is invertible in each case. Show that I − L is invertible. (I is the identity mapping on V.) Show that I − L is invertible. Let V be a vector space, and let P, Q be linear maps of V into itself. Show that V is equal to the direct sum of Im P and Im Q. Notations being as in Exercise 11, show that the image of P is equal to the kernel of Q.
Show that G ∘ F is invertible, and that (G ∘ F)^{-1} = F^{-1} ∘ G^{-1}. Let V, W be two vector spaces over K, of finite dimension n. Show that V and W are isomorphic. Show that A^{-1} exists and is equal to I − A. Generalize (cf. the exercise above). Let A, B be linear maps of a vector space into itself.
Assume that A is surjective and that B is surjective.

Let S be the line segment in V between two points v, w. This line segment is illustrated in the following figure. This is obvious from (2). We shall now generalize this discussion to higher dimensional figures. Let v, w be linearly independent elements of the vector space V.
This definition is clearly justified, since t_1 v is a point of the segment between 0 and v, as in the figure. We obtain the most general parallelogram by translating a parallelogram located at the origin, as in the figure. We begin with triangles located at the origin. Let v, w again be linearly independent. We define the triangle spanned by 0, v, w to be the set of all points

(3)    t_1 v + t_2 w,   with t_1 ≥ 0, t_2 ≥ 0, and t_1 + t_2 ≤ 1.

We must convince ourselves that this is a reasonable definition.
We do this by showing that the triangle defined above coincides with the set of points on all line segments between v and all the points of the segment between 0 and w. From the figure, we see that all points satisfying (4) also satisfy (3). This justifies our definition of a triangle. As with parallelograms, an arbitrary triangle is obtained by translating a triangle located at the origin. In fact, we have the following description of a triangle. Then S is the translation by v_3 of the triangle spanned by 0, v, w.
Figure 8. Proof. Hence our point P is a translation by v_3 of a point satisfying (3). Actually, it is (5) which is the most useful description of a triangle, because the vertices v_1, v_2, v_3 occupy a symmetric position in this definition.
Assume that L(v) and L(w) are also linearly independent. Let S be the triangle spanned by 0, v, w. Indeed, it is the set of all points t_1 L(v) + t_2 L(w) with t_1, t_2 ≥ 0 and t_1 + t_2 ≤ 1. Similarly, let S be the triangle spanned by v_1, v_2, v_3. Then the image of S under L is the triangle spanned by L(v_1), L(v_2), L(v_3) (if these do not lie on a straight line), because it consists of the corresponding set of points. The conditions of (5) are those which generalize to the fruitful concept of convex set, which we now discuss.
Let S be a subset of a vector space V. In Fig. 9, the set on the right is not convex, since the line segment between P and Q is not entirely contained in S. (Figure 9: a convex set, and a set which is not convex.) Theorem 5. Let P_1, ..., P_n be points of S. Then S is convex. From Theorem 5, any convex set S' which contains P_1, ..., P_n also contains every convex combination of them. We prove this by induction: assuming the result for n − 1, we shall prove it for n. Let t_1, ..., t_n be numbers ≥ 0 with t_1 + ... + t_n = 1. But then the point lies in S' by definition of a convex set, as was to be shown. This proves our assertion. For a generalization of this example, see Exercise 6.
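The induction step in the proof above can be made concrete: a convex combination of n points is a convex combination of the n-th point and a convex combination of the first n − 1. A numeric sketch with invented points:

```python
import numpy as np

points = np.array([[0., 0.], [2., 0.], [1., 2.]])   # P1, P2, P3
t = np.array([0.2, 0.3, 0.5])                       # t_i >= 0, sum = 1

direct = t @ points                                 # t1 P1 + t2 P2 + t3 P3
Q = (t[:2] / (1 - t[2])) @ points[:2]               # convex combination of P1, P2
assert np.allclose(direct, (1 - t[2]) * Q + t[2] * points[2])
```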
Show that the image under a linear map of a convex set is convex. Let S_1 and S_2 be convex sets in V. Show that S is convex. Let A be a non-zero vector in R^n and c a number.
If you fumbled around with notation in Exercises 3, 4, 5, then show why these exercises are special cases of Exercise 6, which gives the general principle behind them. The set S in Exercise 6 is called the inverse image of S' under L. Show that a parallelogram is convex. Let S be a convex set in V and let u be an element of V. Let S be a convex set in the vector space V and let c be a number.
Let cS denote the set of all elements cv with v in S. Show that cS is convex. Let v, w be linearly independent elements of a vector space V. Assume that F(v), F(w) are linearly dependent. Show that the image under F of the parallelogram spanned by v and w is either a point or a line segment. We can then associate with A a map L_A by letting L_A(X) = AX for every column vector X in K^n.
That L_A is linear is simply a special case of Theorem 3. We call L_A the linear map associated with the matrix A. In other words, if matrices A, B give rise to the same linear map, then they are equal.
We can give a new interpretation for a system of homogeneous linear equations in terms of the linear map associated with a matrix. In each case, find the vector L_A(X). Let L be a linear map. We shall now generalize this to the case of an arbitrary linear map into K^m, not just into K. Proof. As usual, let E^1, ..., E^n be the standard unit vectors of K^n. We can write any vector X in K^n as a linear combination X = x_1 E^1 + ... + x_n E^n, where x_j is the j-th component of X.
We view E^1, ..., E^n as column vectors. By linearity, we find that L(X) = x_1 L(E^1) + ... + x_n L(E^n), and we can write each L(E^j) in terms of the standard unit vectors e^1, ..., e^m of K^m. We also call A the matrix associated with the linear map L.
We know that this matrix is uniquely determined by Theorem 1. Then the matrix associated with F is the indicated one. Then the matrix associated with I is the unit matrix, having components equal to 1 on the diagonal, and 0 otherwise. According to Theorem 2, we can define a rotation in terms of matrices, namely by the matrix

R(θ) = [ cos θ   −sin θ
         sin θ    cos θ ].

The geometric justification for this definition comes from the figure. Thus our definition corresponds precisely to the picture.
When the matrix of the rotation is as above, we say that the rotation is by an angle θ. Similarly for compositions of mappings. Indeed, let F and G be linear maps, and let A, B be the matrices associated with F and G respectively. Hence the product BA is the matrix associated with the composite linear map G ∘ F.
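The statement that the product BA represents G ∘ F can be tested on rotations, where it says that rotating by φ and then by θ is rotating by θ + φ. A short sketch:

```python
import numpy as np

def R(theta):
    # the rotation matrix defined in the text
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

theta, phi = 0.7, 1.1
assert np.allclose(R(theta) @ R(phi), R(theta + phi))
```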
Let A be an n × n matrix, and let A^1, ..., A^n be its columns. Then A is invertible if and only if A^1, ..., A^n are linearly independent. Suppose A^1, ..., A^n are linearly independent. Thus A is invertible. Conversely, suppose A is invertible. Hence A^1, ..., A^n are linearly independent. This proves the theorem. Find the matrix associated with the following linear maps. The vectors are written horizontally with a transpose sign for typographical reasons. Find the matrix R(θ) associated with the rotation for each of the following values of θ. What is the matrix associated with the rotation by an angle −θ (i.e. in the opposite direction)?
Show that if A is the matrix associated with F, then A^{-1} is the matrix associated with the inverse of F. Let F be a rotation through an angle θ. What is the matrix associated with this linear map? Let F_θ be the rotation by an angle θ. Show that F_θ is invertible, and determine the matrix associated with F_θ^{-1}.
Now let V, W be arbitrary finite dimensional vector spaces over K. Let ℬ and ℬ' be bases of V and W respectively. Then we know that elements of V and W have coordinate vectors with respect to these bases. To use a notation which shows that the coordinate vector X depends on v and on the basis ℬ, we let X_ℬ(v) denote this coordinate vector.
Then the above property can be stated in a formula. Let V be a vector space, and let ℬ, ℬ' be bases of V. The corollary expresses in a succinct way the manner in which the coordinates of a vector change when we change the basis of the vector space. This is immediately verified. Then the matrix associated with the identity mapping of V into itself, relative to these two distinct bases, will not be the unit matrix! Let V, W be vector spaces. Let f, g be two linear maps of V into W. We now pass from the additive properties of the associated matrix to the multiplicative properties.
Let U, V, W be sets. Let F: U → V be a mapping, and let G: V → W be a mapping. Then we can form a composite mapping from U into W as discussed previously, namely G ∘ F.
Let V, W, U be vector spaces. Let F: V → W and G: W → U be linear maps. Then the matrix associated with G ∘ F is the product of the matrices associated with G and F. Note. Relative to our choice of bases, the theorem expresses the fact that composition of mappings corresponds to multiplication of matrices. Let v be an element of V and let X be its column coordinate vector relative to ℬ. By definition, this means that BA is the matrix associated with G ∘ F, and proves our theorem. In many applications, one deals with linear maps of a vector space V into itself.
From the definition, we see that M^ℬ_ℬ(id) = I, where I is the unit matrix. As a direct consequence of Theorem 3, we obtain the following. In particular, M^ℬ_{ℬ'}(id) is invertible. Our assertion then drops out. The general formula of Theorem 3 yields the following result. Let F: V → V be a linear map. Then there exists an invertible matrix N such that M_{ℬ'}(F) = N^{-1} M_ℬ(F) N; in fact, we can take N = M^{ℬ'}_ℬ(id). Proof. Applying Theorem 3 gives the formula. A basis ℬ which makes the matrix of F diagonal is said to diagonalize F. If there exists such a basis which diagonalizes F, then we say that F is diagonalizable. It is not always true that a linear map can be diagonalized. Later in this book, we shall find sufficient conditions under which it can.
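A numeric sketch of the change-of-basis formula M' = N^{-1} M N (both matrices are invented examples; the check uses the fact that similar matrices share trace and determinant):

```python
import numpy as np

M = np.array([[2., 1.],
              [0., 3.]])            # matrix of F relative to the old basis
N = np.array([[1., 1.],
              [0., 1.]])            # invertible change-of-basis matrix

M_new = np.linalg.inv(N) @ M @ N    # matrix of F relative to the new basis
assert np.isclose(np.trace(M_new), np.trace(M))
assert np.isclose(np.linalg.det(M_new), np.linalg.det(M))
```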
If A is an n × n matrix in K, we say that A can be diagonalized in K if the linear map on K^n represented by A can be diagonalized. From Theorem 3, this holds if and only if A is similar to a diagonal matrix. In view of the importance of the map M ↦ N^{-1}MN, we give it a special name. In each one of the following cases, find M^ℬ_{ℬ'}(id). The vector space in each case is R^3. Suppose that there are numbers c_1, ..., c_n such that the indicated relations hold. What is the matrix associated with the identity map, and with rotation of bases by an angle −θ?
In general, let F be the rotation through an angle θ. Let (x, y) be a point of the plane in the standard coordinate system. Let (x', y') be the coordinates of this point in the rotated system. Express x', y' in terms of x, y, and θ. We give a set ℬ of linearly independent functions. These generate a vector space V, and D is a linear map from V into itself.
Find the matrix associated with D relative to the bases ℬ, ℬ. Prove that if N is nilpotent, then I − N is invertible. Let I be the identity mapping. Prove that the following linear maps are invertible: (a) I − D^2.
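Returning to the nilpotency exercise above: when N^k = 0, the inverse of I − N is given by the finite geometric series I + N + ... + N^{k-1}. A check on a sample nilpotent matrix:

```python
import numpy as np

N = np.array([[0., 1., 2.],
              [0., 0., 3.],
              [0., 0., 0.]])        # strictly upper triangular, so N^3 = 0
I = np.eye(3)

series = I + N + N @ N              # I + N + N^2
assert np.allclose((I - N) @ series, I)
assert np.allclose(np.linalg.inv(I - N), series)
```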
The map (X, Y) ↦ X · Y, which to elements X, Y ∈ K^n associates their dot product as we defined it previously, is a scalar product in the present sense. Let V be the space of continuous real-valued functions on the interval [0, 1]. Simple properties of the integral show that this is a scalar product. In both examples the scalar product is non-degenerate. We had pointed this out previously for the dot product of vectors in K^n. In the second example, it is also easily shown from simple properties of the integral. In calculus, we study the second example, which gives rise to the theory of Fourier series. Here we discuss only general properties of scalar products and applications to Euclidean spaces. Let V be a vector space with a scalar product.
If S is a subset of V, we denote by S^⊥ the set of all elements of V which are perpendicular to every element of S. Let U be the subspace of V generated by the elements of S. If w is perpendicular to S, and if v_1, v_2 ∈ S, then ⟨w, v_1 + v_2⟩ = ⟨w, v_1⟩ + ⟨w, v_2⟩ = 0. If c is a scalar, then ⟨w, cv_1⟩ = c⟨w, v_1⟩ = 0. Hence w is perpendicular to linear combinations of elements of S, and hence w is perpendicular to U. Let U be the space consisting of all vectors in K^n perpendicular to A_1, ..., A_m. We shall discuss this dimension at greater length later. Let V again be a vector space over the field K, with a scalar product.
We shall show later that if V is a finite dimensional vector space with a scalar product, then there always exists an orthogonal basis. However, we shall first discuss important special cases over the real and complex numbers. The real positive definite case. Let V be a vector space over R, with a scalar product. The ordinary dot product of vectors in R^n is positive definite, and so is the scalar product of Example 2 above. Let V be a vector space over R, with a positive definite scalar product denoted by ⟨ , ⟩.
Then W has a scalar product defined by the same rule defining the scalar product in V. In other words, if w, w' are elements of W, we may form their product ⟨w, w'⟩. This scalar product on W is obviously positive definite. For instance, if W is the subspace of R^3 generated by the two vectors (1, 2, 2) and (π, −1, 0), then W is a vector space in its own right, and we can take the dot product of vectors lying in W to define a positive definite scalar product on W. We often have to deal with such subspaces, and this is one reason why we develop our theory on arbitrary finite dimensional spaces over R with a given positive definite scalar product, instead of working only on R^n with the dot product.
This definition stems from the Pythagoras theorem. We can also justify our definition of perpendicularity. This is the desired justification. Basic properties which were proved without coordinates can be proved for our more general scalar product.
We shall carry such proofs out, and meet other examples as we go along. If v ∈ V and v ≠ 0, then v/‖v‖ is a unit vector. The following two identities follow directly from the definition of the length. The Pythagoras theorem: if v, w are perpendicular, then ‖v + w‖^2 = ‖v‖^2 + ‖w‖^2. The parallelogram law:
For any v, w we have ‖v + w‖^2 + ‖v − w‖^2 = 2‖v‖^2 + 2‖w‖^2. The proofs are trivial. We give the first, and leave the second as an exercise. This proves Pythagoras. Let w be an element of V such that ‖w‖ ≠ 0. For any v there exists a unique number c such that v − cw is perpendicular to w, namely c = ⟨v, w⟩/⟨w, w⟩. We call c the component of v along w.
We call cw the projection of v along w. Let V be the space of continuous functions on [−π, π]. In the present example of a vector space of functions, the component of g along f is called the Fourier coefficient of g with respect to f. Theorem 1 (Schwarz inequality). For all v, w ∈ V we have |⟨v, w⟩| ≤ ‖v‖ ‖w‖. Triangle inequality: ‖v + w‖ ≤ ‖v‖ + ‖w‖. Proof. Each side of this inequality is positive or 0. Let c_i be the component of v along v_i. This proves the desired inequality, and thus our theorem is proved. The next theorem is known as the Bessel inequality.
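With the ordinary dot product on R^3 (and invented vectors), the component c = ⟨v, w⟩/⟨w, w⟩ and the Pythagoras theorem can be checked directly:

```python
import numpy as np

v = np.array([2., 1., 3.])
w = np.array([1., 1., 0.])

c = (v @ w) / (w @ w)               # component of v along w
r = v - c * w                       # perpendicular part of v
assert np.isclose(r @ w, 0.0)
# Pythagoras: ||v||^2 = ||cw||^2 + ||v - cw||^2
assert np.isclose(v @ v, (c * w) @ (c * w) + r @ r)
```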
Assume that the scalar product is positive definite. Let v_1, ..., v_n be non-zero elements of V which are mutually perpendicular. Show that they are linearly independent. Let M be a square n × n matrix which is equal to its transpose. If X, Y are column n-vectors, then ^tX M Y is a 1 × 1 matrix, which we identify with a number.
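A sketch of the construction above: for a symmetric M, the pairing ⟨X, Y⟩ = ^tX M Y is symmetric in X and Y (the matrix M here is my own example; with this particular M the form happens to be positive definite as well):

```python
import numpy as np

M = np.array([[2., 1.],
              [1., 3.]])
assert np.allclose(M, M.T)                      # M equals its transpose

X = np.array([1., -2.])
Y = np.array([0.5, 4.])

assert np.isclose(X @ M @ Y, Y @ M @ X)         # symmetry of the pairing
assert X @ M @ X > 0 and Y @ M @ Y > 0          # positivity, for this M
```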
Give an example of a 2 × 2 matrix M such that the product is not positive definite. If in addition each element of the basis has norm 1, then the basis is called orthonormal. The standard unit vectors of R^n form an orthonormal basis of R^n, with respect to the ordinary dot product. Let V be a finite dimensional vector space, with a positive definite scalar product.
Of course, it is not an orthogonal basis. This concludes the proof. Corollary 2. Let V be a finite dimensional vector space with a positive definite scalar product. Then V has an orthogonal basis. We let W be the subspace generated by v_1, and apply the theorem to get the desired basis. We wish to orthogonalize it. We proceed as follows. Given an orthogonal basis, we can always obtain an orthonormal basis by dividing each vector by its norm.
Find an orthonormal basis for the vector space generated by the vectors (1, 1, 0, 1), (1, −2, 0, 0), and (1, 0, −1, 2). Let us denote these vectors by A, B, C. Then B' is perpendicular to A. The vectors A, B', C' are non-zero and mutually perpendicular. They lie in the space generated by A, B, C.
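The orthogonalization just described, carried out numerically on the three given vectors (numpy replaces the hand arithmetic):

```python
import numpy as np

A = np.array([1.,  1.,  0., 1.])
B = np.array([1., -2.,  0., 0.])
C = np.array([1.,  0., -1., 2.])

# subtract from each vector its components along the previous ones
Bp = B - (B @ A) / (A @ A) * A                               # B', perpendicular to A
Cp = C - (C @ A) / (A @ A) * A - (C @ Bp) / (Bp @ Bp) * Bp   # C'

for x, y in [(A, Bp), (A, Cp), (Bp, Cp)]:
    assert np.isclose(x @ y, 0.0)

# divide by the norms to pass from an orthogonal to an orthonormal basis
basis = [x / np.linalg.norm(x) for x in (A, Bp, Cp)]
```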
Let V be a vector space over R with a positive definite scalar product, of dimension n. Let W be a subspace of V of dimension r. Let u be an element of W^⊥. Since they are mutually perpendicular, and of norm 1, they form an orthonormal basis of W^⊥, whose dimension is therefore n − r.
Hence V is the direct sum of W and W^⊥. The space W^⊥ is called the orthogonal complement of W. Example 2. Consider R^3. Let A, B be two linearly independent vectors in R^3. Then the space of vectors which are perpendicular to both A and B is a 1-dimensional space.
Again in R^3, let N be a non-zero vector. The space of vectors perpendicular to N is a 2-dimensional space, i.e. a plane. Let V be a finite dimensional vector space over R, with a positive definite scalar product. Let v, w ∈ V. There exist numbers x_1, ..., x_n and y_1, ..., y_n, the coordinates of v and w with respect to an orthonormal basis, and then ⟨v, w⟩ = x_1 y_1 + ... + x_n y_n. This is definitely not the case if we deal with a basis which is not orthonormal.
We wish to preserve the notion of a positive definite scalar product as far as possible. Since the dot product of vectors with complex coordinates may be equal to 0 without the vectors being equal to 0, we must change something in the definition.
Is Serge Lang's Algebra still worth reading? Is this one of the practical, answerable questions based on actual problems that you face? It's answerable in the sense that you can say either "Yes, it's still perfectly up to date" or "No, there are other Algebra books which incorporate newer viewpoints that make it easier to get used to the matter."
Jasper's answer has averted this by simply taking both sides and making a long list of most modern algebra textbooks, thereby appeasing everyone. This does not change the fact that your question is entirely subjective.
Its two-hundred-odd pages fill in many of the gaps and provide much supplementary content. While they're very nicely written, they're almost comically concise.
I don't know how anyone could actually learn algebra from them. As abstract as Lang's book is, at least it has lots of examples. There are almost none in Cohn. If used alongside a more concrete and detailed treatment, such as Vinberg's comprehensive and wonderfully concrete text, their union could certainly be the basis for a strong graduate algebra course.
If I could only have one book, my choice would be Aluffi's beautiful text.