In calculus, the **second derivative**, or the **second-order derivative**, of a function *f* is the derivative of the derivative of *f*. Roughly speaking, the second derivative measures how the rate of change of a quantity is itself changing; for example, the second derivative of the position of an object with respect to time is the instantaneous acceleration of the object, or the rate at which the velocity of the object is changing with respect to time. In Leibniz notation:

$$a = \frac{dv}{dt} = \frac{d^2x}{dt^2},$$

where *a* is acceleration, *v* is velocity, *t* is time, *x* is position, and d is the instantaneous "delta" or change. The last expression is the second derivative of position (x) with respect to time.

On the graph of a function, the second derivative corresponds to the curvature or concavity of the graph. The graph of a function with a positive second derivative is concave up, while the graph of a function with a negative second derivative is concave down.

## Second derivative power rule

The power rule for the first derivative, if applied twice, will produce the second derivative power rule as follows:

$$\frac{d^2}{dx^2}\left[x^n\right] = \frac{d}{dx}\frac{d}{dx}\left[x^n\right] = \frac{d}{dx}\left[nx^{n-1}\right] = n\frac{d}{dx}\left[x^{n-1}\right] = n(n-1)x^{n-2}.$$
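The double application can be sketched in code. The following is a minimal illustration (not part of the original text) that models a monomial $cx^n$ as a coefficient–exponent pair and applies the first-derivative power rule twice:

```python
# A toy model of the power rule: the monomial c*x^n is represented as the
# pair (c, n). Applying the first-derivative rule twice reproduces the
# second-derivative power rule n*(n-1)*x^(n-2).

def d(mono):
    """First-derivative power rule: d/dx [c*x^n] = (c*n)*x^(n-1)."""
    c, n = mono
    return (c * n, n - 1)

# Second derivative of x^5: apply the rule twice to get 20*x^3.
second = d(d((1, 5)))
```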

## Notation

The second derivative of a function *f* is usually denoted $f''$.^{[1]}^{[2]}^{[3]} That is:

$$f'' = \left(f'\right)'.$$

When using Leibniz's notation for derivatives, the second derivative of a dependent variable *y* with respect to an independent variable *x* is written

$$\frac{d^2y}{dx^2}.$$

This notation is derived from the following formula:

$$\frac{d^2y}{dx^2} = \frac{d}{dx}\left(\frac{dy}{dx}\right).$$

## Alternative notation

As the previous section notes, the standard Leibniz notation for the second derivative is $\frac{d^2y}{dx^2}$. However, this form is not algebraically manipulable: although it looks like a fraction of differentials, the fraction cannot be split apart into pieces and its terms cannot be cancelled. This limitation can be remedied by using an alternative formula for the second derivative, derived by applying the quotient rule to the first derivative.^{[4]} Doing so yields the formula:

$$f''(x) = \frac{d\left(\frac{df}{dx}\right)}{dx} = \frac{d^2f}{dx^2} - \frac{df}{dx}\frac{d^2x}{dx^2}.$$

In this formula, $du$ represents the differential operator applied to $u$, i.e., $d(u)$; $d^2u$ represents applying the differential operator twice, i.e., $d(d(u))$; and $du^2$ refers to the square of the differential of $u$, i.e., $(d(u))^2$.

When written this way (and taking into account the meaning of the notation given above), the terms of the second derivative can be freely manipulated as any other algebraic term. For instance, the inverse function formula for the second derivative can be deduced from algebraic manipulations of the above formula, as well as the chain rule for the second derivative. Whether making such a change to the notation is sufficiently helpful to be worth the trouble is still under debate.^{[5]}

## Example

Given the function

$$f(x) = x^3,$$

the derivative of *f* is the function

$$f'(x) = 3x^2.$$

The second derivative of *f* is the derivative of $f'$, namely

$$f''(x) = 6x.$$

## Relation to the graph

### Concavity

The second derivative of a function *f* can be used to determine the **concavity** of the graph of *f*.^{[3]} A function whose second derivative is positive will be concave up (also referred to as convex), meaning that the tangent line will lie below the graph of the function. Similarly, a function whose second derivative is negative will be concave down (also simply called concave), and its tangent lines will lie above the graph of the function.

### Inflection points

If the second derivative of a function changes sign, the graph of the function will switch from concave down to concave up, or vice versa. A point where this occurs is called an **inflection point**. Assuming the second derivative is continuous, it must take a value of zero at any inflection point, although not every point where the second derivative is zero is necessarily a point of inflection.
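The necessary-but-not-sufficient nature of the zero condition can be checked concretely. The following sketch (with illustrative functions chosen here, not taken from the text) contrasts $f(x) = x^3$, whose second derivative changes sign at 0, with $g(x) = x^4$, whose second derivative is zero at 0 but never changes sign:

```python
# f(x) = x^3 has f''(x) = 6x: the sign changes at 0, so 0 is an
# inflection point. g(x) = x^4 has g''(x) = 12x^2: it is zero at 0
# but nonnegative everywhere, so 0 is NOT an inflection point.

fpp = lambda x: 6 * x
gpp = lambda x: 12 * x ** 2

f_changes_sign = fpp(-0.1) < 0 < fpp(0.1)   # negative before 0, positive after
g_changes_sign = gpp(-0.1) < 0 < gpp(0.1)   # gpp(-0.1) > 0, so no sign change
```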

### Second derivative test

The relation between the second derivative and the graph can be used to test whether a stationary point of a function *f* (i.e., a point where $f'(x) = 0$) is a local maximum or a local minimum. Specifically,

- If $f''(x) < 0$, then $f$ has a local maximum at $x$.
- If $f''(x) > 0$, then $f$ has a local minimum at $x$.
- If $f''(x) = 0$, the second derivative test says nothing about the point $x$, a possible inflection point.

The reason the second derivative produces these results can be seen by way of a real-world analogy. Consider a vehicle that at first is moving forward at a great velocity, but with a negative acceleration. Clearly, the position of the vehicle at the point where the velocity reaches zero will be the maximum distance from the starting position – after this time, the velocity will become negative and the vehicle will reverse. The same is true for the minimum, with a vehicle that at first has a very negative velocity but positive acceleration.
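The test can be sketched on a concrete function. The example below uses $f(x) = x^3 - 3x$ (a hypothetical choice for illustration): its derivative $f'(x) = 3x^2 - 3$ vanishes at $x = -1$ and $x = 1$, and $f''(x) = 6x$ classifies each stationary point:

```python
# Second-derivative test, sketched on f(x) = x^3 - 3x.
# Stationary points: f'(x) = 3x^2 - 3 = 0 at x = -1 and x = 1.

def classify(fpp_at_x0):
    """Classify a stationary point from the sign of f'' there."""
    if fpp_at_x0 < 0:
        return "local maximum"
    if fpp_at_x0 > 0:
        return "local minimum"
    return "inconclusive"

fpp = lambda x: 6 * x   # second derivative of x^3 - 3x
results = {x0: classify(fpp(x0)) for x0 in (-1.0, 1.0)}
```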

## Limit

It is possible to write a single limit for the second derivative:

$$f''(x) = \lim_{h \to 0} \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}.$$

The limit is called the second symmetric derivative.^{[6]}^{[7]} Note that the second symmetric derivative may exist even when the (usual) second derivative does not.

The expression on the right can be written as a difference quotient of difference quotients:

$$f''(x) = \lim_{h \to 0} \frac{\dfrac{f(x+h) - f(x)}{h} - \dfrac{f(x) - f(x-h)}{h}}{h}.$$

This limit can be viewed as a continuous version of the second difference for sequences.
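For small $h$, the quotient inside the limit already approximates the second derivative numerically. A minimal sketch, using $f = \sin$ (an illustrative choice, so that $f'' = -\sin$):

```python
import math

# Symmetric second-difference quotient: for small h it approximates f''(x).
def second_difference(f, x, h):
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

# sin''(x) = -sin(x); compare the quotient at x = 1 with the exact value.
approx = second_difference(math.sin, 1.0, 1e-4)
exact = -math.sin(1.0)
```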

However, the existence of the above limit does not mean that the function has a second derivative. The limit above just gives a possibility for calculating the second derivative—but does not provide a definition. A counterexample is the sign function $\operatorname{sgn}(x)$, which is defined as:^{[1]}

$$\operatorname{sgn}(x) = \begin{cases} -1, & x < 0 \\ 0, & x = 0 \\ 1, & x > 0. \end{cases}$$

The sign function is not continuous at zero, and therefore the second derivative for $x = 0$ does not exist. But the above limit exists for $x = 0$:

$$\lim_{h \to 0} \frac{\operatorname{sgn}(0+h) - 2\operatorname{sgn}(0) + \operatorname{sgn}(0-h)}{h^2} = \lim_{h \to 0} \frac{\operatorname{sgn}(h) + \operatorname{sgn}(-h)}{h^2} = \lim_{h \to 0} \frac{1 + (-1)}{h^2} = 0.$$
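This computation can be confirmed directly: for every $h \neq 0$ the numerator $\operatorname{sgn}(h) - 2\operatorname{sgn}(0) + \operatorname{sgn}(-h)$ vanishes, so the quotient is identically zero. A short sketch:

```python
# For any h != 0 the numerator sgn(h) - 2*sgn(0) + sgn(-h) is
# 1 - 0 + (-1) = 0, so the symmetric quotient is 0 for every h,
# even though sgn has no ordinary second derivative at 0.

def sgn(x):
    return (x > 0) - (x < 0)

def quotient(h):
    return (sgn(0 + h) - 2 * sgn(0) + sgn(0 - h)) / (h * h)

values = [quotient(10.0 ** -k) for k in range(1, 8)]
```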

## Quadratic approximation

Just as the first derivative is related to linear approximations, the second derivative is related to the best quadratic approximation for a function *f*. This is the quadratic function whose first and second derivatives are the same as those of *f* at a given point. The formula for the best quadratic approximation to a function *f* around the point *x* = *a* is

$$f(x) \approx f(a) + f'(a)(x - a) + \tfrac{1}{2} f''(a)(x - a)^2.$$

This quadratic approximation is the second-order Taylor polynomial for the function centered at *x* = *a*.
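As a numerical illustration (the function $e^x$ is a hypothetical choice, not from the text), the quadratic approximation about $a = 0$ is $1 + x + x^2/2$, since $f(0) = f'(0) = f''(0) = 1$:

```python
import math

# Second-order Taylor polynomial of f(x) = e^x about a = 0:
# f(x) ≈ f(0) + f'(0)*x + (1/2)*f''(0)*x^2 = 1 + x + x^2/2.
def quad_approx(x):
    return 1.0 + x + 0.5 * x ** 2

# Near 0 the error is dominated by the cubic term x^3/6, so it is tiny.
err = abs(math.exp(0.1) - quad_approx(0.1))
```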

## Eigenvalues and eigenvectors of the second derivative

For many combinations of boundary conditions, explicit formulas for eigenvalues and eigenvectors of the second derivative can be obtained. For example, assuming $x \in [0, L]$ and homogeneous Dirichlet boundary conditions (i.e., $v(0) = v(L) = 0$), the eigenvalues are $\lambda_j = -\frac{j^2 \pi^2}{L^2}$ and the corresponding eigenvectors (also called eigenfunctions) are $v_j(x) = \sqrt{2/L} \sin\left(\frac{j \pi x}{L}\right)$. Here, $v_j''(x) = \lambda_j v_j(x)$ for $j = 1, \ldots, \infty$.
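The eigenvalue relation can be checked numerically with a symmetric second difference; a sketch (normalization constants do not affect the relation, so plain $\sin$ is used):

```python
import math

# Check that v_j(x) = sin(j*pi*x/L) satisfies v_j'' = lambda_j * v_j
# with lambda_j = -(j*pi/L)^2, and the Dirichlet conditions v(0) = v(L) = 0.
L, j = 1.0, 3
lam = -(j * math.pi / L) ** 2

def v(x):
    return math.sin(j * math.pi * x / L)

def second_difference(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

x0 = 0.3
residual = abs(second_difference(v, x0) - lam * v(x0))
boundary = (abs(v(0.0)), abs(v(L)))
```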

For other well-known cases, see Eigenvalues and eigenvectors of the second derivative.

## Generalization to higher dimensions

### The Hessian

The second derivative generalizes to higher dimensions through the notion of second partial derivatives. For a function *f*: **R**^{3} → **R**, these include the three second-order partials

$$\frac{\partial^2 f}{\partial x^2}, \quad \frac{\partial^2 f}{\partial y^2}, \quad \text{and} \quad \frac{\partial^2 f}{\partial z^2},$$

and the mixed partials

$$\frac{\partial^2 f}{\partial x \, \partial y}, \quad \frac{\partial^2 f}{\partial x \, \partial z}, \quad \text{and} \quad \frac{\partial^2 f}{\partial y \, \partial z}.$$

If the function's second partial derivatives are all continuous, then the mixed partials are equal (by the symmetry of second derivatives), and these derivatives fit together into a symmetric matrix known as the **Hessian**. The eigenvalues of this matrix can be used to implement a multivariable analogue of the second derivative test. (See also the second partial derivative test.)
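For concreteness, the Hessian can be written out by hand for an illustrative function, here $f(x, y, z) = x^2 y + y z^3$ (a hypothetical example, not from the text):

```python
# Hand-computed Hessian of f(x, y, z) = x^2*y + y*z^3.
# First partials: f_x = 2xy, f_y = x^2 + z^3, f_z = 3yz^2.
# Since f is smooth, the mixed second partials agree (symmetry).

def hessian(x, y, z):
    return [
        [2 * y,  2 * x,      0         ],  # f_xx, f_xy, f_xz
        [2 * x,  0,          3 * z ** 2],  # f_yx, f_yy, f_yz
        [0,      3 * z ** 2, 6 * y * z ],  # f_zx, f_zy, f_zz
    ]

H = hessian(1, 2, 3)
symmetric = all(H[i][j] == H[j][i] for i in range(3) for j in range(3))
```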

### The Laplacian

Another common generalization of the second derivative is the **Laplacian**. This is the differential operator $\nabla^2$ (or $\Delta$^{[1]}) defined by

$$\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2}.$$

The Laplacian of a function is equal to the divergence of the gradient, and the trace of the Hessian matrix.
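A discrete version of the Laplacian sums symmetric second differences in each coordinate. A sketch, checked on the illustrative function $f(x, y, z) = x^2 + y^2 + z^2$, whose Laplacian is $2 + 2 + 2 = 6$ everywhere (also the trace of its Hessian, $\operatorname{diag}(2, 2, 2)$):

```python
# Discrete Laplacian: sum of symmetric second differences per coordinate.
def laplacian(f, p, h=1e-4):
    total = 0.0
    for i in range(len(p)):
        up = list(p); up[i] += h   # shift coordinate i up by h
        dn = list(p); dn[i] -= h   # shift coordinate i down by h
        total += (f(*up) - 2 * f(*p) + f(*dn)) / (h * h)
    return total

f = lambda x, y, z: x ** 2 + y ** 2 + z ** 2
value = laplacian(f, [1.0, 2.0, 3.0])   # should be close to 6
```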

## See also

- Chirpyness, second derivative of instantaneous phase
- Finite difference, used to approximate second derivative
- Second partial derivative test
- Symmetry of second derivatives

## References

1. "List of Calculus and Analysis Symbols". *Math Vault*. 2020-05-11. Retrieved 2020-09-16.
2. "Content - The second derivative". *amsi.org.au*. Retrieved 2020-09-16.
3. "Second Derivatives". *Math24*. Retrieved 2020-09-16.
4. Bartlett, Jonathan; Khurshudyan, Asatur Zh (2019). "Extending the Algebraic Manipulability of Differentials". *Dynamics of Continuous, Discrete and Impulsive Systems, Series A: Mathematical Analysis*. **26** (3): 217–230. arXiv:1801.09553.
5. Editors (December 20, 2019). "Reviews". *Mathematics Magazine*. **92** (5): 396–397. doi:10.1080/0025570X.2019.1673628. S2CID 218542586.
6. Zygmund, A. (2002). *Trigonometric Series*. Cambridge University Press. pp. 22–23. ISBN 978-0-521-89053-3.
7. Thomson, Brian S. (1994). *Symmetric Properties of Real Functions*. Marcel Dekker. p. 1. ISBN 0-8247-9230-0.

## Further reading

- Anton, Howard; Bivens, Irl; Davis, Stephen (February 2, 2005), *Calculus: Early Transcendentals Single and Multivariable* (8th ed.), New York: Wiley, ISBN 978-0-471-47244-5
- Apostol, Tom M. (June 1967), *Calculus, Vol. 1: One-Variable Calculus with an Introduction to Linear Algebra* (2nd ed.), Wiley, ISBN 978-0-471-00005-1
- Apostol, Tom M. (June 1969), *Calculus, Vol. 2: Multi-Variable Calculus and Linear Algebra with Applications* (2nd ed.), Wiley, ISBN 978-0-471-00007-5
- Eves, Howard (January 2, 1990), *An Introduction to the History of Mathematics* (6th ed.), Brooks Cole, ISBN 978-0-03-029558-4
- Larson, Ron; Hostetler, Robert P.; Edwards, Bruce H. (February 28, 2006), *Calculus: Early Transcendental Functions* (4th ed.), Houghton Mifflin Company, ISBN 978-0-618-60624-5
- Spivak, Michael (September 1994), *Calculus* (3rd ed.), Publish or Perish, ISBN 978-0-914098-89-8
- Stewart, James (December 24, 2002), *Calculus* (5th ed.), Brooks Cole, ISBN 978-0-534-39339-7
- Thompson, Silvanus P. (September 8, 1998), *Calculus Made Easy* (Revised, Updated, Expanded ed.), New York: St. Martin's Press, ISBN 978-0-312-18548-0

### Online books

- Crowell, Benjamin (2003), *Calculus*
- Garrett, Paul (2004), *Notes on First-Year Calculus*
- Hussain, Faraz (2006), *Understanding Calculus*
- Keisler, H. Jerome (2000), *Elementary Calculus: An Approach Using Infinitesimals*
- Mauch, Sean (2004), *Unabridged Version of Sean's Applied Math Book*, archived from the original on 2006-04-15
- Sloughter, Dan (2000), *Difference Equations to Differential Equations*
- Strang, Gilbert (1991), *Calculus*
- Stroyan, Keith D. (1997), *A Brief Introduction to Infinitesimal Calculus*, archived from the original on 2005-09-11
- Wikibooks, *Calculus*