Course Notes for Mathematics 211-212: Multivariable Calculus

(Copyright 2008, 2009, 2010, 2011, 2012, 2013 by Jerry Shurman. Any part of the material protected by this copyright notice may be reproduced in any form for any purpose without the permission of the copyright owner, but only the reasonable costs of reproduction may be charged. Reproduction for profit is prohibited.)

This is the text for a two-semester multivariable calculus course. The setting is n-dimensional Euclidean space, with the material on differentiation culminating in the Inverse Function Theorem and its consequences, and the material on integration culminating in the Generalized Fundamental Theorem of Integral Calculus (often called Stokes's Theorem) and some of its consequences in turn. The prerequisite is a proof-based course in one-variable calculus. Some familiarity with the complex number system and complex mappings is occasionally assumed as well, but the reader can get by without it.

The book's aim is to use multivariable calculus to teach mathematics as a blend of reasoning, computing, and problem-solving, doing justice to the structure, the details, and the scope of the ideas. To this end, I have tried to write in a style that communicates intent early in the discussion of each topic rather than proceeding coyly from opaque definitions. Also, I have tried occasionally to speak to the pedagogy of mathematics and its effect on the process of learning the subject. Most importantly, I have tried to spread the weight of exposition among diagrams, formulas, and words. The premise is that the reader is ready to do mathematics resourcefully by marshaling the complementary skills of

  • geometric intuition (the visual cortex being quickly instinctive),
  • algebraic manipulation (symbol-patterns being precise and robust),
  • and incisive use of natural language (slogans that encapsulate central ideas enabling a large-scale grasp of the subject).
Thinking in these ways renders mathematics coherent, inevitable, and fluent.

In my own student days, I learned this material from books by Apostol, Buck, Rudin, and Spivak, books that thrilled me. My debt to those sources pervades these pages. There are many other fine books on the subject as well, such as the more recent one by Hubbard and Hubbard. Indeed, nothing in these notes is claimed as new, not even their neuroses.

By way of a warm-up, chapter 1 reviews some ideas from one-variable calculus, and then covers the one-variable Taylor's Theorem in detail.
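
The version of Taylor's Theorem at issue says, roughly, that a suitably differentiable function f is well approximated near a point a by its degree-n Taylor polynomial,

  f(x) ≈ f(a) + f'(a)(x-a) + (f''(a)/2!)(x-a)^2 + ... + (f^(n)(a)/n!)(x-a)^n,

and the chapter quantifies the error of this approximation.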

Chapters 2 and 3 cover what might be called multivariable pre-calculus, introducing the requisite algebra, geometry, analysis, and topology of Euclidean space, and the requisite linear algebra, for the calculus to follow. A pedagogical theme of these chapters is that mathematical objects can be better understood from their characterizations than from their constructions. Vector geometry follows from the intrinsic (coordinate-free) algebraic properties of the vector inner product, with no reference to the inner product formula. The fact that passing a closed and bounded subset of Euclidean space through a continuous mapping gives another such set is clear once such sets are characterized in terms of sequences. The multiplicativity of the determinant, and the fact that the determinant indicates whether a linear mapping is invertible, are consequences of the determinant's characterizing properties. The geometry of the cross product follows from its intrinsic algebraic characterization. Furthermore, the only possible formula for the (suitably normalized) inner product, or for the determinant, or for the cross product, is dictated by the relevant properties. As far as the theory is concerned, the only role of the formula is to show that an object with the desired properties exists at all. The intent here is that the student who is introduced to mathematical objects via their characterizations will see quickly how the objects work, and that how they work makes their constructions inevitable.
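
To take the determinant as an illustration: viewed as a function of the rows of a square matrix, the determinant is characterized by three properties,

  • it is linear in each row separately,
  • it vanishes whenever two rows are equal,
  • and it takes the value 1 at the identity matrix.
Multiplicativity, the invertibility criterion, and eventually the familiar formula itself all follow from these three properties.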

In the same vein, chapter 4 characterizes the multivariable derivative as a well-approximating linear mapping. The chapter then solves some multivariable problems that have one-variable counterparts. Specifically, the multivariable chain rule helps with change of variable in partial differential equations, a multivariable analogue of the max/min test helps with optimization, and the multivariable derivative of a scalar-valued function helps to find tangent planes and trajectories.
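
In symbols, the idea is roughly that the derivative of a mapping f at a point a is a linear mapping Df(a) satisfying

  f(a+h) ≈ f(a) + Df(a)h    for small h,

with the error of the approximation shrinking faster than |h| as h goes to 0; the chapter makes this precise.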

Chapter 5 uses the results of the three chapters preceding it to prove the Inverse Function Theorem, then the Implicit Function Theorem as a corollary, and finally the Lagrange Multiplier Criterion as a consequence of the Implicit Function Theorem. Lagrange multipliers help with a type of multivariable optimization problem that has no one-variable analogue, optimization with constraints. For example, given two curves in space, which pair of points---one on each curve---lies closest together? Not only does this problem have six variables (the three coordinates of each point), but furthermore they are not fully independent: the first three variables must specify a point on the first curve, and similarly for the second three. In this problem, the coordinates can be conceived of as varying through a subset of six-dimensional space, conceptually a two-dimensional subset (one degree of freedom for each curve) that is bending around in the ambient six dimensions. That is, optimization with constraints can be viewed as a beginning example of calculus on curved spaces.
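
In its simplest form, with a single constraint, the Lagrange Multiplier Criterion says that at a point where a function f is optimized subject to a constraint g = 0 (under hypotheses made precise in the chapter), the gradient of f is a scalar multiple of the gradient of g,

  ∇f = λ∇g,

so that candidate points are found by solving this equation together with the constraint equation.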

Moving to integral calculus, chapter 6 introduces the integral of a scalar-valued function of many variables, taken over a domain of its inputs. When the domain is a box, the definitions and the basic results are essentially the same as for one variable. However, in multivariable calculus we want to integrate over regions other than boxes, and ensuring that we can do so takes a little work. After this is done, the chapter proceeds to two main tools for multivariable integration, Fubini's Theorem and the Change of Variable Theorem. Fubini's Theorem reduces one n-dimensional integral to n one-dimensional integrals, and the Change of Variable Theorem replaces one n-dimensional integral with another that may be easier to evaluate.
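
Schematically, and under hypotheses made precise in the chapter, the two theorems take the forms

  ∫_B f = ∫ ··· ∫ f(x_1, ..., x_n) dx_n ··· dx_1    (Fubini: an integral over a box as an iterated integral),

  ∫_{Φ(K)} f = ∫_K (f ∘ Φ) |det Φ'|    (Change of Variable: substitution x = Φ(u)).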

Chapter 7 discusses the fact that continuous functions, or once-differentiable functions, or twice-differentiable functions, are well approximated by smooth functions, meaning functions that can be differentiated endlessly. The approximation technology is an integral called the convolution. With approximation by convolution in hand, we feel free to assume in the sequel that functions are smooth.
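
Roughly, the convolution of a function f with a smooth function φ is

  (f * φ)(x) = ∫ f(y) φ(x - y) dy,

a moving weighted average of f; taking φ to be a smooth bump concentrated near 0 produces a smooth function that approximates f closely.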

Chapter 8 introduces parameterized curves as a warm-up for chapter 9 to follow. The subject of chapter 9 is integration over k-dimensional parameterized surfaces in n-dimensional space, and parameterized curves are the special case k=1. Aside from being one-dimensional surfaces, parameterized curves are interesting in their own right.

Chapter 9 presents the integration of differential forms. This subject poses the pedagogical dilemma that fully describing its structure requires an investment in machinery untenable for students who are seeing it for the first time, whereas describing it purely operationally is unmotivated. The approach here begins with the integration of functions over k-dimensional surfaces in n-dimensional space, a natural thing to want to do, with a natural definition of how to do it suggesting itself. For certain such integrals, called flow and flux integrals, the integrand takes a particularly workable form consisting of sums of determinants of derivatives. It is easy to see what other integrands---including integrands suitable for n-dimensional integration in the sense of chapter 6, and including functions in the usual sense---have similar features. These integrands can be uniformly described in algebraic terms as objects called differential forms. That is, differential forms assemble the smallest coherent algebraic structure encompassing the various integrands of interest to us. The fact that differential forms are algebraic makes them easy to study without thinking directly about integration. The algebra leads to a general version of the Fundamental Theorem of Integral Calculus that is rich in geometry. The theorem subsumes the three classical vector integration theorems, Green's Theorem, Stokes's Theorem, and Gauss's Theorem, also called the Divergence Theorem.
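
Written compactly, the general theorem reads

  ∫_{∂S} ω = ∫_S dω,

saying that the integral of a differential form ω over the boundary of a surface S equals the integral of its derivative dω over S itself; the theorems of Green, Stokes, and Gauss are special cases in two and three dimensions.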

Comments and corrections should be sent to jerry@reed.edu.
