Introduction to Partial Derivatives

Functions of Several Variables

Multivariable calculus is the extension of calculus in one variable to calculus in more than one variable.


Key Takeaways

Key Points

- Multivariable calculus can be applied to analyze deterministic systems that have multiple degrees of freedom.
- Unlike a single-variable function f(x), for which the limits and continuity of the function need to be checked only as x varies along a line (the x-axis), multivariable functions have an infinite number of paths approaching a single point.
- In multivariable calculus, the gradient, Stokes', divergence, and Green's theorems are specific incarnations of a more general theorem: the generalized Stokes' theorem.

Key Terms

- deterministic: having exactly predictable time evolution
- divergence: a vector operator that measures the magnitude of a vector field's source or sink at a given point, in terms of a signed scalar

Multivariable calculus (also known as multivariate calculus) is the extension of calculus in one variable to calculus in more than one variable: the differentiated and integrated functions involve multiple variables, rather than just one. Multivariable calculus can be applied to analyze deterministic systems that have multiple degrees of freedom. Functions with independent variables corresponding to each of the degrees of freedom are often used to model these systems, and multivariable calculus provides tools for characterizing the system dynamics.

A Scalar Field: A scalar field shown as a function of (x,y). Extensions of concepts used for single variable functions may require caution.

Multivariable calculus is used in many fields of natural and social science and engineering to model and study high-dimensional systems that exhibit deterministic behavior. Non-deterministic, or stochastic, systems can be studied using a different kind of mathematics, such as stochastic calculus. Quantitative analysts in finance also often use multivariate calculus to predict future trends in the stock market.

As we will see, multivariable functions may yield counter-intuitive results when applied to limits and continuity. Unlike a single-variable function f(x), for which the limits and continuity of the function need to be checked only as x varies along a line (the x-axis), multivariable functions have an infinite number of paths approaching a single point. Likewise, the path taken to evaluate a derivative or integral should always be specified when multivariable functions are involved.

We have also studied theorems linking derivatives and integrals of single-variable functions. In multivariable calculus, the analogous theorems are the gradient theorem, Stokes' theorem, the divergence theorem, and Green's theorem. In a more advanced study of multivariable calculus, it is seen that these four theorems are specific incarnations of a more general theorem, the generalized Stokes' theorem, which applies to the integration of differential forms over manifolds.

Limits and Continuity

A study of limits and continuity in multivariable calculus yields counter-intuitive results not demonstrated by single-variable functions.

Learning Objectives

Describe the relationship between multivariate continuity and continuity in each argument

Key Takeaways

Key Points

- The function f(x,y) = \frac{x^2y}{x^4+y^2} has different limit values at the origin, depending on the path taken for the evaluation.
- Continuity in each argument does not imply multivariate continuity.
- When taking different paths toward the same point yields different values for the limit, the limit does not exist.

Key Terms

- continuity: lack of interruption or disconnection; the quality of being continuous in space or time
- limit: a value to which a sequence or function converges
- scalar function: any function whose domain is a vector space and whose value is its scalar field

A study of limits and continuity in multivariable calculus yields many counter-intuitive results not demonstrated by single-variable functions. For example, there are scalar functions of two variables with points in their domain which give a particular limit when approached along any arbitrary line, yet give a different limit when approached along a parabola. The function f(x,y) = \frac{x^2y}{x^4+y^2} approaches zero along any line through the origin. However, when the origin is approached along the parabola y = x^2, it has a limit of 0.5. Since taking different paths toward the same point yields different values for the limit, the limit does not exist.

Continuity in each argument does not imply multivariate continuity. For instance, in the case of a real-valued function with two real-valued parameters, f(x,y), continuity of f in x for fixed y and continuity of f in y for fixed x does not imply continuity of f. As an example, consider

f(x,y)= \begin{cases} \displaystyle{\frac{y}{x}}-y & \text{if } 1 \geq x > y \geq 0 \\ \displaystyle{\frac{x}{y}}-x & \text{if } 1 \geq y > x \geq 0 \\ 1-x & \text{if } x=y>0 \\ 0 & \text{else}. \end{cases}

It is easy to check that all real-valued functions (with one real-valued argument) given by f_y(x)= f(x,y) are continuous in x (for any fixed y). Similarly, all f_x are continuous, as f is symmetric with respect to x and y. However, f itself is not continuous, as can be seen by considering the sequence f\left(\frac{1}{n},\frac{1}{n}\right) (for natural n), which should converge to f(0,0) = 0 if f were continuous. However, \lim_{n \rightarrow \infty} f\left(\frac{1}{n},\frac{1}{n}\right) = 1.
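The path dependence described above is easy to check numerically. The following sketch (an illustration of my own, not part of the original text) evaluates f(x,y) = \frac{x^2y}{x^4+y^2} near the origin along straight lines and along the parabola y = x^2:

```python
# Numeric check that the limit of f(x, y) = x^2 y / (x^4 + y^2) at the
# origin depends on the path taken toward it.

def f(x, y):
    return x**2 * y / (x**4 + y**2)

# Approach the origin along the line y = m*x: the values tend to 0.
for m in (1.0, 2.0, -3.0):
    t = 1e-6
    assert abs(f(t, m * t)) < 1e-5

# Approach along the parabola y = x^2: f(t, t^2) = t^4 / (2 t^4) = 0.5 exactly.
t = 1e-3
assert abs(f(t, t**2) - 0.5) < 1e-12

print("line limit ~ 0, parabola limit = 0.5")
```

Along any line the numerator shrinks faster than the denominator, while on the parabola the two terms of the denominator balance, so the ratio is identically 1/2.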

Partial Derivatives

A partial derivative of a function of several variables is its derivative with respect to a single variable, with the others held constant.

Learning Objectives

Identify proper ways to express the partial derivative

Key Takeaways

Key Points

- The partial derivative of a function f with respect to the variable x is variously denoted by f^\prime_x, f_{,x}, \partial_x f, or \frac{\partial f}{\partial x}.
- To every point on the surface describing a multivariable function, there is an infinite number of tangent lines. Partial differentiation is the act of choosing one of these lines and finding its slope.
- Like an ordinary derivative, a partial derivative is defined as a limit: \frac{\partial}{\partial a_i}f(\mathbf{a}) = \lim_{h \rightarrow 0}\frac{f(a_1, \dots, a_{i-1}, a_i+h, a_{i+1}, \dots, a_n) - f(a_1, \dots, a_i, \dots, a_n)}{h}.

Key Terms

- differential geometry: the study of geometry using differential calculus
- Euclidean: adhering to the principles of traditional geometry, in which parallel lines are equidistant

A partial derivative of a function of several variables is its derivative with respect to one of those variables, with the others held constant (as opposed to the total derivative, in which all variables are allowed to vary). Partial derivatives are used in vector calculus and differential geometry. The partial derivative of a function f with respect to the variable x is variously denoted by f^\prime_x, f_{,x}, \partial_x f, or \frac{\partial f}{\partial x}.

Suppose that f is a function of more than one variable. For instance, z = f(x, y) = x^2 + xy + y^2. The graph of this function defines a surface in Euclidean space. To every point on this surface, there is an infinite number of tangent lines. Partial differentiation is the act of choosing one of these lines and finding its slope. Usually, the lines of most interest are those which are parallel to the xz-plane and those which are parallel to the yz-plane (which result from holding either y or x constant, respectively).

To find the slope of the line tangent to the function at P(1, 1, 3) that is parallel to the xz-plane, the y variable is treated as constant. By finding the derivative of the equation while assuming that y is a constant, the slope of f at the point (x, y, z) is found to be:

\displaystyle{\frac{\partial z}{\partial x} = 2x+y}

So at (1, 1, 3), by substitution, the slope is 3. Therefore,


\displaystyle{\frac{\partial z}{\partial x} = 3}

at the point (1, 1, 3). That is to say, the partial derivative of z with respect to x at (1, 1, 3) is 3.
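The worked example can be confirmed symbolically. The following sketch (my own illustration, assuming the SymPy library) differentiates z = x^2 + xy + y^2 while holding the other variable fixed:

```python
# Symbolic check of the worked example: for z = x^2 + x*y + y^2, the
# partial derivative with respect to x is 2x + y, which is 3 at (1, 1).
import sympy as sp

x, y = sp.symbols("x y")
z = x**2 + x*y + y**2

dz_dx = sp.diff(z, x)          # treat y as a constant
assert dz_dx == 2*x + y
assert dz_dx.subs({x: 1, y: 1}) == 3

dz_dy = sp.diff(z, y)          # likewise, holding x fixed
assert dz_dy == x + 2*y
print("dz/dx =", dz_dx)
```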

Formal Definition

Like ordinary derivatives, the partial derivative is defined as a limit. Let U be an open subset of \mathbb{R}^n and f:U \rightarrow \mathbb{R} a function. The partial derivative of f at the point \mathbf{a} = (a_1, \cdots, a_n) \in U with respect to the i-th variable is defined as:

\displaystyle{\frac{\partial}{\partial a_i}f(\mathbf{a}) = \lim_{h \rightarrow 0}{\frac{f(a_1, \cdots, a_{i-1}, a_i+h, a_{i+1}, \cdots, a_n) - f(a_1, \cdots, a_i, \cdots, a_n)}{h}}}

Tangent Planes and Linear Approximations

The tangent plane to a surface at a given point is the plane that “just touches” the surface at that point.

Learning Objectives

Explain why the tangent plane can be used to approximate the surface near the point

Key Takeaways

Key Points

- For a surface given by a differentiable multivariable function z=f(x,y), the equation of the tangent plane at (x_0,y_0,z_0) is given as f_x(x_0,y_0)(x-x_0) + f_y(x_0,y_0)(y-y_0) - (z-z_0) = 0.
- Since a tangent plane is the best approximation of the surface near the point where the two meet, the tangent plane can be used to approximate the surface near that point.
- The plane describing the linear approximation for a surface described by z=f(x,y) is given as z = z_0 + f_x(x_0,y_0)(x-x_0) + f_y(x_0,y_0)(y-y_0).

Key Terms

- differentiable: having a derivative, said of a function whose domain and co-domain are manifolds
- differential geometry: the study of geometry using differential calculus
- slope: also called gradient; the slope or gradient of a line describes its steepness

The tangent line (or simply the tangent) to a plane curve at a given point is the straight line that “just touches” the curve at that point. Similarly, the tangent plane to a surface at a given point is the plane that “just touches” the surface at that point. The concept of a tangent is one of the most fundamental notions in differential geometry and has been extensively generalized.


Equations

When the curve is given by y = f(x), the slope of the tangent is \frac{dy}{dx}, so by the point-slope formula the equation of the tangent line at (x_0, y_0) is:

\frac{dy}{dx}(x_0,y_0) \cdot (x-x_0) - (y-y_0) = 0

where (x, y) are the coordinates of any point on the tangent line, and where the derivative is evaluated at x=x_0.

The tangent plane to a surface at a given point p is defined in an analogous way to the tangent line in the case of curves. It is the best approximation of the surface by a plane at p, and can be obtained as the limiting position of the planes passing through 3 distinct points on the surface close to p as these points converge to p. For a surface given by a differentiable multivariable function z=f(x,y), the equation of the tangent plane at (x_0,y_0,z_0) is given as:

f_x(x_0,y_0)(x-x_0) + f_y(x_0,y_0)(y-y_0) - (z-z_0) = 0

where (x_0,y_0,z_0) is a point on the surface. Note the similarity of the equations for tangent line and tangent plane.

Linear Approximation

Since a tangent plane is the best approximation of the surface near the point where the two meet, the tangent plane can be used to approximate the surface near that point. The approximation works well as long as the point (x,y,z) under consideration is close enough to (x_0,y_0,z_0), where the tangent plane touches the surface. The plane describing the linear approximation for a surface described by z=f(x,y) is given as:

z = z_0 + f_x(x_0,y_0) (x-x_0) + f_y(x_0,y_0) (y-y_0).
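The linear approximation can be tried out numerically. This sketch (my own, reusing the example surface z = x^2 + xy + y^2 from the previous atom) builds the tangent plane at a point and compares it with the surface nearby:

```python
# Use the tangent plane at (x0, y0) to approximate z = f(x, y) nearby,
# with f(x, y) = x^2 + x*y + y^2, f_x = 2x + y, f_y = x + 2y.

def f(x, y):
    return x**2 + x*y + y**2

def tangent_plane(x, y, x0, y0):
    # z = z0 + f_x(x0, y0)*(x - x0) + f_y(x0, y0)*(y - y0)
    z0 = f(x0, y0)
    fx = 2*x0 + y0      # partial derivative in x at (x0, y0)
    fy = x0 + 2*y0      # partial derivative in y at (x0, y0)
    return z0 + fx * (x - x0) + fy * (y - y0)

# Near (1, 1) the plane and the surface agree closely.
approx = tangent_plane(1.01, 0.99, 1.0, 1.0)
exact = f(1.01, 0.99)
assert abs(approx - exact) < 1e-3
print(exact, approx)
```

Farther from (x_0, y_0) the error grows, which is exactly the sense in which the tangent plane is only a local approximation.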

The Chain Rule

For a function U with two variables x and y, the chain rule is given as \frac{dU}{dt} = \frac{\partial U}{\partial x} \cdot \frac{dx}{dt} + \frac{\partial U}{\partial y} \cdot \frac{dy}{dt}.

Learning Objectives

Express a chain rule for a function with two variables

Key Takeaways

Key Points

- The chain rule can be easily generalized to functions with more than two variables.
- For single-variable functions, the chain rule is a formula for computing the derivative of the composition of two or more functions. For example, the chain rule for f \circ g(x) \equiv f(g(x)) is \frac{df}{dx} = \frac{df}{dg} \cdot \frac{dg}{dx}.
- The chain rule can be used when we want to calculate the rate of change of the function U(x,y) as a function of time t, where x=x(t) and y=y(t).

Key Terms

- potential energy: the energy possessed by an object because of its position (in a gravitational or electric field), or its condition (as a stretched or compressed spring, as a chemical reactant, or by having rest mass)

The chain rule is a formula for computing the derivative of the composition of two or more functions. That is, if f is a function and g is a function, then the chain rule expresses the derivative of the composite function f \circ g(x) \equiv f(g(x)) in terms of the derivatives of f and g. For example, the chain rule for f \circ g is \frac{df}{dx} = \frac{df}{dg} \cdot \frac{dg}{dx}.

The chain rule above is for single variable functions f(x) and g(x). However, the chain rule can be generalized to functions with multiple variables. For example, consider a function U with two variables x and y: U=U(x,y). U could be electric potential energy at a location (x,y). The motion of a test charge on the xy-plane can be described by x=x(t), y=y(t) where t is a parameter representing time t. What we want to calculate is the rate of change of the potential energy U as a function of time t. Assuming x=x(t), y=y(t), and U=U(x,y) are all differentiable at t and (x,y), the chain rule is given as:

\displaystyle{\frac{dU}{dt} = \frac{\partial U}{\partial x} \cdot \frac{dx}{dt} + \frac{\partial U}{\partial y} \cdot \frac{dy}{dt}}

This relation can be easily generalized for functions with more than two variables.
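The multivariable chain rule can be verified symbolically. In this sketch (my own illustration, assuming SymPy; the choices of U and of the trajectory x(t), y(t) are hypothetical), differentiating U(x(t), y(t)) directly gives the same result as assembling the chain rule term by term:

```python
# Verify dU/dt = (dU/dx)(dx/dt) + (dU/dy)(dy/dt)
# for U(x, y) = x^2 + y^2 along the path x(t) = cos t, y(t) = t^2.
import sympy as sp

t = sp.symbols("t")
x_sym, y_sym = sp.symbols("x y")
U = x_sym**2 + y_sym**2           # a hypothetical potential energy
x_t, y_t = sp.cos(t), t**2        # a hypothetical trajectory

# Left side: differentiate U(x(t), y(t)) directly.
direct = sp.diff(U.subs({x_sym: x_t, y_sym: y_t}), t)

# Right side: partial derivatives evaluated on the path, times path derivatives.
chain = (sp.diff(U, x_sym).subs({x_sym: x_t, y_sym: y_t}) * sp.diff(x_t, t)
         + sp.diff(U, y_sym).subs({x_sym: x_t, y_sym: y_t}) * sp.diff(y_t, t))

assert sp.simplify(direct - chain) == 0
print("chain rule verified:", sp.simplify(chain))
```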

Directional Derivatives

Key Takeaways

Key Points

- The directional derivative is defined by the limit \nabla_{\mathbf{v}}f(\mathbf{x}) = \lim_{h \rightarrow 0}\frac{f(\mathbf{x} + h\mathbf{v}) - f(\mathbf{x})}{h}.
- If the function f is differentiable at \mathbf{x}, then the directional derivative exists along any vector \mathbf{v}, and one gets \nabla_{\mathbf{v}}f(\mathbf{x}) = \nabla f(\mathbf{x}) \cdot \mathbf{v}.
- Many of the familiar properties of the ordinary derivative hold for the directional derivative.

Key Terms

- chain rule: a formula for computing the derivative of the composition of two or more functions
- gradient: of a function y=f(x) or the graph of such a function, the rate of change of y with respect to x; that is, the amount by which y changes for a certain (often unit) change in x

The directional derivative of a multivariate differentiable function along a given vector mathbf{v} at a given point mathbf{x} intuitively represents the instantaneous rate of change of the function, moving through mathbf{x} with a velocity specified by mathbf{v}. It therefore generalizes the notion of a partial derivative, in which the rate of change is taken along one of the coordinate curves, all other coordinates being constant.

Definition

The directional derivative of a scalar function f(\mathbf{x}) = f(x_1, x_2, \ldots, x_n) along a vector \mathbf{v} = (v_1, \ldots, v_n) is the function defined by the limit:

\displaystyle{\nabla_{\mathbf{v}}f(\mathbf{x}) = \lim_{h \rightarrow 0}{\frac{f(\mathbf{x} + h\mathbf{v}) - f(\mathbf{x})}{h}}}

If the function f is differentiable at \mathbf{x}, then the directional derivative exists along any vector \mathbf{v}, and one has \nabla_{\mathbf{v}}f(\mathbf{x}) = \nabla f(\mathbf{x}) \cdot \mathbf{v}, where \nabla f(\mathbf{x}) is the gradient vector and \cdot is the dot product. At any point \mathbf{x}, the directional derivative of f intuitively represents the rate of change of f with respect to time when it is moving at a speed and direction given by \mathbf{v} at the point \mathbf{x}. The name "directional derivative" is a bit misleading, since it depends on both the length and the direction of \mathbf{v}.


We can imagine the directional derivative \nabla_{\mathbf{v}}f(\mathbf{x}) as the slope of the tangent line to the 2-dimensional slice of the graph of f that lies parallel to the vector \mathbf{v}. However, this slice will be stretched or compressed horizontally unless \lVert\mathbf{v}\rVert=1.
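The relation \nabla_{\mathbf{v}}f = \nabla f \cdot \mathbf{v} can be checked numerically against the limit definition. This sketch is my own illustration; the function f(x,y) = x^2 y and the unit vector are arbitrary choices:

```python
# Check numerically that the directional derivative along v equals
# grad(f) . v, for the differentiable function f(x, y) = x^2 * y.

def f(x, y):
    return x**2 * y

def grad_f(x, y):
    return (2*x*y, x**2)          # (df/dx, df/dy)

def directional_derivative(x, y, v, h=1e-6):
    # limit definition: (f(x + h v) - f(x)) / h, for small h
    return (f(x + h*v[0], y + h*v[1]) - f(x, y)) / h

x0, y0 = 1.5, -2.0
v = (0.6, 0.8)                    # a unit vector: 0.6^2 + 0.8^2 = 1
gx, gy = grad_f(x0, y0)
dot = gx*v[0] + gy*v[1]

assert abs(directional_derivative(x0, y0, v) - dot) < 1e-4
print("grad . v =", dot)
```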

Properties

Many of the familiar properties of the ordinary derivative hold for the directional derivative.


The Sum Rule


\nabla_{\mathbf{v}}(f + g) = \nabla_{\mathbf{v}} f + \nabla_{\mathbf{v}} g

The Constant Factor Rule

For any constant c, \nabla_{\mathbf{v}}(cf) = c\,\nabla_{\mathbf{v}} f.

The Product Rule (or Leibniz Rule)


\nabla_{\mathbf{v}}(fg) = g\,\nabla_{\mathbf{v}} f + f\,\nabla_{\mathbf{v}} g

The Chain Rule

If g is differentiable at p and h is differentiable at g(p), then \nabla_{\mathbf{v}}(h \circ g)(p) = h^\prime(g(p))\,\nabla_{\mathbf{v}} g(p).

Maximum and Minimum Values

The second partial derivative test is a method used to determine whether a critical point is a local minimum, maximum, or saddle point.

Learning Objectives

Apply the second partial derivative test to determine whether a critical point is a local minimum, maximum, or saddle point

Key Takeaways

Key Points

- For a function of two variables, the second partial derivative test is based on the sign of M(x,y)= f_{xx}(x,y)\,f_{yy}(x,y) - \left(f_{xy}(x,y)\right)^2 and of f_{xx}(a,b), where (a,b) is a critical point.
- There are substantial differences between functions of one variable and functions of more than one variable in the identification of global extrema.
- The maximum and minimum of a function, known collectively as extrema, are the largest and smallest values that the function takes at a point either within a given neighborhood (local or relative extremum) or on the function domain in its entirety (global or absolute extremum).

Key Terms

- critical point: a maximum, minimum, or point of inflection on a curve; a point at which the derivative of a function is zero or undefined
- intermediate value theorem: a statement that claims that, for each value between the least upper bound and greatest lower bound of the image of a continuous function, there is a corresponding point in its domain that the function maps to that value
- Rolle's theorem: a theorem stating that a differentiable function which attains equal values at two distinct points must have a point somewhere between them where the first derivative (the slope of the tangent line to the graph of the function) is zero

The maximum and minimum of a function, known collectively as extrema, are the largest and smallest values that the function takes at a point either within a given neighborhood (local or relative extremum) or on the function domain in its entirety (global or absolute extremum).

Finding Maxima and Minima of Multivariable Functions

The second partial derivative test is a method in multivariable calculus used to determine whether a critical point (a,b,\cdots) of a function f(x,y,\cdots) is a local minimum, maximum, or saddle point.

For a function of two variables, suppose that:

M(x,y)= f_{xx}(x,y)\,f_{yy}(x,y) - \left(f_{xy}(x,y)\right)^2

- If M(a,b)>0 and f_{xx}(a,b)>0, then (a,b) is a local minimum of f.
- If M(a,b)>0 and f_{xx}(a,b)<0, then (a,b) is a local maximum of f.
- If M(a,b)<0, then (a,b) is a saddle point of f.
- If M(a,b)=0, then the second derivative test is inconclusive.

There are substantial differences between functions of one variable and functions of more than one variable in the identification of global extrema. For example, if a bounded differentiable function f defined on a closed interval in the real line has a single critical point, which is a local minimum, then it is also a global minimum (use the intermediate value theorem and Rolle's theorem). In two and more dimensions, this argument fails, as the function f(x,y)= x^2+y^2(1-x)^3, \; x,y \in \mathbb{R}, shows. Its only critical point is at (0,0), which is a local minimum with f(0,0) = 0. However, it cannot be a global one, because f(4,1) = -11.
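The counterexample can be verified symbolically. This sketch (my own, assuming SymPy) confirms that (0,0) is a critical point of f(x,y) = x^2 + y^2(1-x)^3, that the second partial derivative test classifies it as a local minimum, and that f nevertheless takes smaller values elsewhere:

```python
# Check the counterexample f(x, y) = x^2 + y^2 * (1 - x)^3:
# (0, 0) is a local minimum by the second derivative test, yet f(4, 1) < f(0, 0).
import sympy as sp

x, y = sp.symbols("x y", real=True)
f = x**2 + y**2 * (1 - x)**3

fx, fy = sp.diff(f, x), sp.diff(f, y)
fxx, fyy, fxy = sp.diff(fx, x), sp.diff(fy, y), sp.diff(fx, y)

# (0, 0) is a critical point ...
assert fx.subs({x: 0, y: 0}) == 0 and fy.subs({x: 0, y: 0}) == 0

# ... and M > 0 with f_xx > 0 there, so it is a local minimum.
M = (fxx * fyy - fxy**2).subs({x: 0, y: 0})
assert M > 0 and fxx.subs({x: 0, y: 0}) > 0

# Yet it is not a global minimum:
assert f.subs({x: 4, y: 1}) == -11
print("local min at (0,0), but f(4,1) =", f.subs({x: 4, y: 1}))
```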

Lagrange Multipliers

The method of Lagrange multipliers is a strategy for finding the local maxima and minima of a function subject to equality constraints.

Key Takeaways

Key Points

- To maximize f(x,y) subject to g(x,y)=c, we introduce a new variable \lambda, called a Lagrange multiplier, and study the Lagrange function (or Lagrangian) defined by \Lambda(x,y,\lambda) = f(x,y) + \lambda \cdot \left(g(x,y)-c\right).
- Only when the contour line for g = c meets the contour lines of f tangentially do we neither increase nor decrease the value of f; that is, when the contour lines touch but do not cross. This will often be the situation where a solution to the constrained maximum problem above exists.
- Solving \nabla_{x,y,\lambda}\Lambda(x,y,\lambda)=0 yields a necessary condition for extrema under the given constraint.

Key Terms

- gradient: of a function y = f(x) or the graph of such a function, the rate of change of y with respect to x; that is, the amount by which y changes for a certain (often unit) change in x
- contour: a line on a map or chart delineating those points which have the same altitude or other plotted quantity: a contour line or isopleth

In mathematical optimization, the method of Lagrange multipliers (named after Joseph Louis Lagrange) is a strategy for finding the local maxima and minima of a function subject to equality constraints.

For instance, consider the following optimization problem: maximize f(x,y) subject to g(x,y)=c. We need both f and g to have continuous first partial derivatives. We introduce a new variable \lambda, called a Lagrange multiplier, and study the Lagrange function (or Lagrangian) defined by:

\Lambda(x,y,\lambda) = f(x,y) + \lambda \cdot \left(g(x,y)-c\right)

where the \lambda term may be either added or subtracted. If f(x_0, y_0) is a maximum of f(x,y) for the original constrained problem, then there exists \lambda_0 such that (x_0,y_0,\lambda_0) is a stationary point for the Lagrange function (stationary points are those points where the partial derivatives of \Lambda are zero, i.e., \nabla\Lambda = 0). However, not all stationary points yield a solution of the original problem. Thus, the method of Lagrange multipliers yields a necessary condition for optimality in constrained problems. Sufficient conditions for a minimum or maximum also exist.

Introduction

One of the most common problems in calculus is that of finding maxima or minima (in general, “extrema”) of a function, but it is often difficult to find a closed form for the function being extremized. Such difficulties often arise when one wishes to maximize or minimize a function subject to fixed outside conditions or constraints. The method of Lagrange multipliers is a powerful tool for solving this class of problems without the need to explicitly solve the conditions and use them to eliminate extra variables.

Consider the two-dimensional problem introduced above. Maximize f(x,y) subject to g(x,y)=c. We can visualize contours of f given by f(x, y)=d for various values of d, and the contour of g given by g (x, y) = c. Suppose we walk along the contour line with g = c. In general, the contour lines of f and g may be distinct, so following the contour line for g = c, one could intersect with or cross the contour lines of f. This is equivalent to saying that while moving along the contour line for g = c, the value of f can vary. When the contour line for g = c meets contour lines of f tangentially we neither increase nor decrease the value of f—that is, when the contour lines touch but do not cross.


The contour lines of f and g touch when the tangent vectors of the contour lines are parallel. Since the gradient of a function is perpendicular to the contour lines, this is the same as saying that the gradients of f and g are parallel. Thus, we want points:

(x,y) where g(x,y)=c

and


\nabla_{x,y} f = -\lambda\,\nabla_{x,y} g, where \nabla_{x,y} f = \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}\right) and \nabla_{x,y} g = \left(\frac{\partial g}{\partial x}, \frac{\partial g}{\partial y}\right) are the respective gradients.

The constant \lambda is required because, although the two gradient vectors are parallel, their magnitudes are generally not equal. Note that \lambda \neq 0; otherwise we cannot assert that the two gradients are parallel.

To incorporate these conditions into one equation, we introduce an auxiliary function, \Lambda(x,y,\lambda) = f(x,y) + \lambda \cdot \left(g(x,y)-c\right), and solve \nabla_{x,y,\lambda}\Lambda(x,y,\lambda)=0. This is the method of Lagrange multipliers. Note that \nabla_{\lambda}\Lambda(x,y,\lambda)=0 implies g(x,y)=c.
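The procedure can be carried out symbolically. The following sketch is an example of my own (not from the text): it extremizes f(x,y) = x + y on the unit circle g(x,y) = x^2 + y^2 = 1 by solving \nabla\Lambda = 0, assuming the SymPy library:

```python
# Find candidate extrema of f(x, y) = x + y on the circle x^2 + y^2 = 1
# by solving grad(Lambda) = 0 for Lambda = f + lambda * (g - 1).
import sympy as sp

x, y, lam = sp.symbols("x y lambda", real=True)
f = x + y
g = x**2 + y**2
Lam = f + lam * (g - 1)

# Stationary points of the Lagrangian in all three variables.
solutions = sp.solve([sp.diff(Lam, v) for v in (x, y, lam)], [x, y, lam], dict=True)
values = sorted(sp.simplify(f.subs(s)) for s in solutions)

# The minimum -sqrt(2) and maximum sqrt(2) occur at (±1/√2, ±1/√2).
assert values == [-sp.sqrt(2), sp.sqrt(2)]
print(solutions)
```

Note that solving \nabla\Lambda = 0 only produces candidates; classifying each stationary point as a maximum or minimum requires a further check, as the text says.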

When the Lagrange multiplier \lambda=0, we can have a local extremum even though the two contours cross instead of meeting tangentially. Consider the following example.

Minimize f(x,y) = \sin(x), given that g(x,y) = x^2 + y^2 = 9. Every point of the form \left(-\frac{\pi}{2}, y\right) is a global minimum of f, with value -1. Therefore, wherever the constraint g=9 crosses the contour line f=-1 is a local minimum of f on the constraint. The trace and the contour f=-1 cross at the minimum, as we can see in the figure. It is easy to verify that f_x=0 and f_y=0 when x = -\frac{\pi}{2}. Since both g_x \neq 0 and g_y \neq 0 there, the Lagrange multiplier \lambda = 0 at the minimum.

Optimization in Several Variables

Key Takeaways

Key Points

- Mathematical optimization is the selection of a best element (with regard to some criteria) from some set of available alternatives.
- An optimization process that involves only a single variable is rather straightforward. After finding the function f(x) to be optimized, local maxima or minima at critical points can easily be found. End points may have maximum/minimum values as well.
- For a rectangular cuboid shape with a fixed volume, a cube is the geometric shape that minimizes the surface area.

Key Terms

- optimization: the design and operation of a system or process to make it as good as possible in some defined sense
- cuboid: a parallelepiped having six rectangular faces

Mathematical optimization is the selection of a best element (with regard to some criteria) from some set of available alternatives. An optimization process that involves only a single variable is rather straightforward. After finding out the function f(x) to be optimized, local maxima or minima at critical points can be easily found. (Of course, end points may have maximum/minimum values as well.) The same strategy applies for optimization with several variables. In this atom, we will solve a simple example to see how optimization involving several variables can be achieved.

Cardboard Box with a Fixed Volume

A packaging company needs cardboard boxes in rectangular cuboid shape with a given volume of 1000 cubic centimeters and would like to minimize the material cost for the boxes. What should be the dimensions x, y, z of a box?

First of all, the material cost would be proportional to the surface area S of the cuboid. Therefore, the goal of the optimization is to minimize the function S(x,y,z) = 2(xy + yz + zx). The constraint in this case is that the volume is fixed: V = xyz = 1000.

We will first remove z from S(x,y,z). We can do this by using the constraint z = \frac{1000}{xy}. Inserting the expression for z into S yields:

\displaystyle{S(x,y) = 2\left(xy + \frac{1000}{x} + \frac{1000}{y}\right)}

To find the critical points:

\displaystyle{\frac{\partial S}{\partial x} = 2\left(y - \frac{1000}{x^2}\right) = 0 \quad \therefore \quad y = \frac{1000}{x^2}}

and

\displaystyle{\frac{\partial S}{\partial y} = 2\left(x - \frac{1000}{y^2}\right) = 0 \quad \therefore \quad x = \frac{1000}{y^2}}

Then, substituting the expression found for y above into x = \frac{1000}{y^2} yields:

x^3 = 1000

Therefore, we find that:

x=y=z=10

That is to say, the box that minimizes the cost of materials while maintaining the desired volume should be a 10-by-10-by-10 cube.
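The calculation above can be reproduced symbolically. This sketch (my own, assuming SymPy) minimizes the reduced surface-area function and recovers x = y = 10, from which z = 10 follows via the volume constraint:

```python
# Minimize S(x, y) = 2(xy + 1000/x + 1000/y), the surface area after
# eliminating z with the constraint x*y*z = 1000.
import sympy as sp

x, y = sp.symbols("x y", positive=True)
S = 2 * (x*y + 1000/x + 1000/y)

# Critical points: both first partial derivatives vanish.
sols = sp.solve([sp.diff(S, x), sp.diff(S, y)], [x, y], dict=True)
assert len(sols) == 1 and sols[0][x] == 10 and sols[0][y] == 10

z = 1000 / (10 * 10)   # back out z from the volume constraint
assert z == 10
print("optimal box:", 10, 10, z)
```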

Applications of Minima and Maxima in Functions of Two Variables

Finding extrema can be a challenge with regard to multivariable functions, requiring careful calculation.

Learning Objectives

Identify steps necessary to find the minimum and maximum in multivariable functions

Key Takeaways

Key Points

- The second derivative test is a criterion for determining whether a given critical point of a real function of one variable is a local maximum or a local minimum using the value of the second derivative at the point.
- To find minima/maxima for functions with two variables, we must first find the first partial derivatives with respect to x and y of the function.
- The function z = f(x, y) = (x+y)(xy + xy^2) has saddle points at (0,-1) and (1,-1) and a local maximum at \left(\frac{3}{8}, -\frac{3}{4}\right).

Key Terms

- multivariable: concerning more than one variable
- critical point: a maximum, minimum, or point of inflection on a curve; a point at which the derivative of a function is zero or undefined

We have learned how to find the minimum and maximum in multivariable functions. As previously mentioned, finding extrema can be a challenge with regard to multivariable functions. In particular, we learned about the second derivative test, which is a criterion for determining whether a given critical point of a real function of one variable is a local maximum or a local minimum, using the value of the second derivative at the point. In this atom, we will find extrema for a function with two variables.

Example

Find and label the critical points of the following function:

z = f(x, y) = (x+y)(xy + xy^2)

Plot of z = (x+y)(xy+xy^2): The maxima and minima of this plot cannot be found without extensive calculation.

To solve this problem we must first find the first partial derivatives of the function with respect to x and y:

\displaystyle{\frac{\partial z}{\partial x} = y(2x+y)(y+1)}

\displaystyle{\frac{\partial z}{\partial y} = x\left(3y^2 + 2y(x+1) + x\right)}

Looking at \frac{\partial z}{\partial x} = 0, we see that y must equal 0, -1, or -2x.

We plug the first solution, y=0, into the next equation, and get:

\displaystyle{\frac{\partial z}{\partial y} = x\left(3(0)^2 + 2(0)(x+1) + x\right) = x^2}

So x must equal 0, giving the critical point (0,0).

There were other possibilities for y, so for y=-1 we have:

\displaystyle{\frac{\partial z}{\partial y} = x\left(3 - 2(x+1) + x\right) = x(1-x) = 0}

So x must be equal to 1 or 0. Finally, for y=-2x:

\displaystyle{\frac{\partial z}{\partial y} = x\left(3(-2x)^2 + 2(-2x)(x+1) + x\right) = x^2(8x-3) = 0}

So x must equal 0 or \frac{3}{8}, for y = 0 and y = -\frac{3}{4}, respectively.


Let’s list all the critical values now:

\displaystyle{(x,y) \in \left\{(0,0),\ (0,-1),\ (1,-1),\ \left(\frac{3}{8}, -\frac{3}{4}\right)\right\}}

Now we have to label the critical values using the second derivative test. Plugging in all the different critical values we found to label them, we have:

- D(0, 0) = 0
- D(0, -1) = -1
- D(1, -1) = -1
- D\left(\frac{3}{8}, -\frac{3}{4}\right) = 0.210938

We can now label some of the points:

- At (0, -1), f(x, y) has a saddle point.
- At (1, -1), f(x, y) has a saddle point.
- At \left(\frac{3}{8}, -\frac{3}{4}\right), f(x, y) has a local maximum, since f_{xx} = -\frac{3}{8} < 0.

At the remaining point we need higher-order tests to find out what exactly the function is doing.
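The whole classification can be checked symbolically. This sketch (my own, assuming SymPy) recomputes the discriminant D = f_{xx}f_{yy} - f_{xy}^2 at each critical point found above:

```python
# Run the second partial derivative test on z = (x + y)(x*y + x*y^2)
# at the critical points found in the worked example.
import sympy as sp

x, y = sp.symbols("x y", real=True)
z = (x + y) * (x*y + x*y**2)

zx, zy = sp.diff(z, x), sp.diff(z, y)
D = sp.diff(z, x, 2) * sp.diff(z, y, 2) - sp.diff(z, x, y)**2

points = [(0, 0), (0, -1), (1, -1), (sp.Rational(3, 8), sp.Rational(-3, 4))]
for px, py in points:
    # every listed point really is critical
    assert zx.subs({x: px, y: py}) == 0 and zy.subs({x: px, y: py}) == 0

# saddle points: D < 0
assert D.subs({x: 0, y: -1}) < 0 and D.subs({x: 1, y: -1}) < 0
# local maximum: D > 0 and f_xx < 0
assert D.subs({x: sp.Rational(3, 8), y: sp.Rational(-3, 4)}) > 0
assert sp.diff(z, x, 2).subs({x: sp.Rational(3, 8), y: sp.Rational(-3, 4)}) < 0
# inconclusive at the origin: D = 0
assert D.subs({x: 0, y: 0}) == 0
print("classification matches the text")
```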
