A real valued function is a function whose range is a subset of the set of real numbers.
A relation $f$ from a set $A$ to a set $B$ is said to be a function if every element of set $A$ has one and only one image in set $B$.
The notation $f : A \rightarrow B$ means that $f$ is a function from $A$ to $B$.
Given an element $a \in A$, the unique element of $B$ that $f$ assigns to $a$ is called the image of $a$ under $f$, written $f(a)$.
The set of all values of $f(a)$, for $a$ in the domain, is called the range of $f$; it is a subset of the codomain $B$.
One-One (Injective) Function
A function $f : X \rightarrow Y$ is defined to be one-one (or injective) if the images of distinct elements of $X$ under $f$ are distinct, i.e., for any $x_1, x_2 \in X$, if $f(x_1) = f(x_2)$, then $x_1 = x_2$.
Onto (Surjective) Function
A function $f : X \rightarrow Y$ is said to be onto (or surjective) if every element of $Y$ is the image of some element of $X$ under $f$, i.e., for every $y \in Y$, there exists an element $x \in X$ such that $f(x) = y$.
One-One and Onto (Bijective) Function
A function $f : X \rightarrow Y$ is said to be one-one and onto (or bijective) if it is both one-one and onto.
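For maps between finite sets, these definitions can be checked directly by brute force. A minimal sketch (the sets `X`, `Y` and the map `double` are illustrative assumptions, not from the text):

```python
# Check one-one (injective) and onto (surjective) for a map between finite sets.

def is_injective(f, X):
    # One-one: distinct elements of X have distinct images.
    images = [f(x) for x in X]
    return len(images) == len(set(images))

def is_surjective(f, X, Y):
    # Onto: every element of Y is the image of some element of X.
    return {f(x) for x in X} == set(Y)

X, Y = {1, 2, 3}, {2, 4, 6}
double = lambda x: 2 * x            # f(x) = 2x

print(is_injective(double, X))      # True
print(is_surjective(double, X, Y))  # True: double is a bijection from X onto Y
```

A map that fails injectivity, such as $x \mapsto x^2$ on $\{-1, 0, 1\}$, is caught because $-1$ and $1$ share the image $1$.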
Explicit Functions
Explicit functions are functions where the dependent variable (usually denoted $y$) is expressed explicitly in terms of the independent variable (usually denoted $x$), as $y = f(x)$.
Example: $y = f(x) = 2x + 3$
Implicit Functions
Implicit functions are functions where the relationship between the dependent and independent variables is defined implicitly by an equation involving both variables, such as $x^2 + y^2 = 1$.
Composite Functions
Composite functions are formed by combining two or more functions to create a new function: if $f(x)$ and $g(x)$ are functions, the composite functions are $h(x) = f(g(x))$ and $h(x) = g(f(x))$.
Example: Let $f(x) = 2x$ and $g(x) = x^2$. Then the composite function is $h(x) = f(g(x)) = 2x^2$.
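The example above can be mirrored in code; `compose` is a small helper written for this sketch, not a standard library function:

```python
# h(x) = f(g(x)) for f(x) = 2x and g(x) = x^2, as in the example above.

def compose(f, g):
    """Return the composite function x -> f(g(x))."""
    return lambda x: f(g(x))

f = lambda x: 2 * x
g = lambda x: x ** 2

h = compose(f, g)   # h(x) = f(g(x)) = 2x^2
k = compose(g, f)   # k(x) = g(f(x)) = (2x)^2 = 4x^2

print(h(3), k(3))   # 18 36 -- composition is not commutative in general
```

Note that $f \circ g \neq g \circ f$ here, which is why the order of composition matters.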
Polynomial Functions
Polynomial functions are algebraic functions of the form
$f(x) = a_nx^n + a_{n-1}x^{n-1} + \ldots + a_1x + a_0$, where the $a_i$ are constants and $n$ is a non-negative integer.
Example: $f(x) = 3x^3 - 2x^2 + 5x - 1$ is a polynomial function in $x$ with degree 3.
Note: A polynomial function of degree 0 is called a constant polynomial function (or simply a constant function).
-
Rational Functions
Rational functions are functions of the form$f(x) = \frac{p(x)}{q(x)}$ , where$p(x)$ and$q(x)$ are both polynomial functions.
Example:$f(x) = \frac{2x^2−3x+1}{x^2+4x+4}$ -
Algebraic Functions
Algebraic functions are functions that can be defined by algebraic equations involving polynomial, rational, and root functions.
Example:$f(x) = \sqrt{3x^3 − 2x^2 + 5x − 1}$
If a relation arises from performing a finite number of fundamental operations (addition, subtraction, multiplication, division, root extraction, etc.) on polynomial functions, then such a relation is also called an algebraic function.
- All polynomial functions are algebraic, but not conversely.
- A function that is not algebraic is called a transcendental function.
-
Logarithmic Functions
Logarithmic functions are functions of the form $f(x) = \log_b(x)$, where $b$ is the base of the logarithm.
Example: $f(x) = \log_{10}(x)$
Even and Odd Functions
Even functions are symmetric about the y-axis, and odd functions are symmetric about the origin. For even functions, $f(-x) = f(x)$; for odd functions, $f(-x) = -f(x)$.
Example: Even Function: $f(x) = x^2$ (symmetric about the y-axis)
Odd Function:
$f(x) = x^3$ (Symmetric about the origin) -
Exponential Functions
Exponential functions are functions of the form $f(x) = a^x$, where $a$ is a positive constant. Example: $f(x) = 2^x$
Modulus Functions
Modulus functions, often denoted as$f(x) = |x|$ , return the absolute value of$x$ , making it always non-negative. Example:$f(x) = |x|$ -
Signum (Sign) Functions
The signum (sign) function is defined as $f(x) = \text{sgn}(x)$, where:
$$ \text{sgn}(x) = \begin{cases} -1 & \text{if } x < 0 \\ 0 & \text{if } x = 0 \\ 1 & \text{if } x > 0 \end{cases} $$
-
Let $f : A \rightarrow B$ and $g : B \rightarrow C$ be two functions. Then, the composition of $f$ and $g$, denoted by $g \circ f$, is defined as the function $g \circ f : A \rightarrow C$ given by $$ (g \circ f)(x) = g(f(x)), \text{ for all } x \in A $$
- If $f : A \rightarrow B$ and $g : B \rightarrow C$ are one-one, then $g \circ f : A \rightarrow C$ is also one-one.
- If $f : A \rightarrow B$ and $g : B \rightarrow C$ are onto, then $g \circ f : A \rightarrow C$ is also onto.
- Let $f : A \rightarrow B$ and $g : B \rightarrow C$ be the given functions such that $g \circ f$ is one-one. Then $f$ is one-one.
- Let $f : A \rightarrow B$ and $g : B \rightarrow C$ be the given functions such that $g \circ f$ is onto. Then $g$ is onto.
- A function $f : X \rightarrow Y$ is defined to be invertible if there exists a function $g : Y \rightarrow X$ such that $g \circ f = I_X$ and $f \circ g = I_Y$. The function $g$ is called the inverse of $f$ and is denoted by $f^{-1}$.
- A function $f : X \rightarrow Y$ is invertible if and only if $f$ is a bijective function.
- If $f : X \rightarrow Y$, $g : Y \rightarrow Z$, and $h : Z \rightarrow S$ are functions, then $h \circ (g \circ f) = (h \circ g) \circ f$.
- Let $f : X \rightarrow Y$ and $g : Y \rightarrow Z$ be two invertible functions. Then $g \circ f$ is also invertible with $(g \circ f)^{-1} = f^{-1} \circ g^{-1}$.
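The last property, $(g \circ f)^{-1} = f^{-1} \circ g^{-1}$, can be checked numerically; the invertible functions below are illustrative choices, not from the text:

```python
# Numerical check of (g o f)^{-1} = f^{-1} o g^{-1} for f(x) = 2x + 3, g(x) = x^3.

f     = lambda x: 2 * x + 3
f_inv = lambda y: (y - 3) / 2
g     = lambda x: x ** 3
g_inv = lambda y: y ** (1 / 3) if y >= 0 else -((-y) ** (1 / 3))

gof     = lambda x: g(f(x))
gof_inv = lambda y: f_inv(g_inv(y))   # candidate inverse: f^{-1} o g^{-1}

for x in (-2.0, 0.5, 1.7):
    # Applying the candidate inverse after g o f should recover x.
    assert abs(gof_inv(gof(x)) - x) < 1e-9
print("(g o f)^{-1} = f^{-1} o g^{-1} verified at sample points")
```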
In calculus, the concept of a limit is fundamental to understanding the behavior of functions as they approach specific points. A limit represents the value that a function approaches as its input (independent variable) gets arbitrarily close to a certain value. We denote the limit of a function $f(x)$ as $x$ approaches a limit point $c$ as follows: $$ \lim_{x \to c} f(x) = L $$
This means that as $x$ gets very close to $c$, the values of $f(x)$ get arbitrarily close to $L$.
The function $f(x)$ is said to tend to the limit $\ell$ as $x$ approaches $a$ if the values of $f(x)$ get arbitrarily close to $\ell$ as $x$ gets close to $a$. Symbolically we write $\lim_{x \to a} f(x) = \ell$.
Let $f$ be defined near $a$. If $f(x)$ approaches $\ell_1$ as $x$ approaches $a$ through values less than $a$, then $\ell_1$ is called the left-hand limit: $\lim_{x \to a^{-}} f(x) = \ell_1$.
Let $f$ be defined near $a$. If $f(x)$ approaches $\ell_2$ as $x$ approaches $a$ through values greater than $a$, then $\ell_2$ is called the right-hand limit: $\lim_{x \to a^{+}} f(x) = \ell_2$.
If $\ell_1 = \ell_2$ then $\lim_{x \to a} f(x)$ exists. If the limit exists then it is unique.
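A limit can be explored numerically by approaching the point from both sides; the example $\lim_{x \to 2} \frac{x^2 - 4}{x - 2} = 4$ below is an illustrative choice, not from the text:

```python
# Approach x = 2 from the left and from the right; both one-sided values tend to 4
# even though f is undefined at x = 2 itself.
f = lambda x: (x**2 - 4) / (x - 2)

for h in (0.1, 0.01, 0.001):
    left, right = f(2 - h), f(2 + h)
    print(left, right)   # left = 4 - h, right = 4 + h, so both sides approach 4
```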
There are several basic rules that help us evaluate limits:
-
The Limit of a Constant: $$ \lim_{x \rightarrow c} k = k $$ where k is a constant.
-
The Limit of a Sum or Difference:
$$ \lim_{x \to c} [f(x) \pm g(x)] = \lim_{x \to c} f(x) \pm \lim_{x \to c} g(x) $$
-
The Limit of a Product:
$$ \lim_{x \to c} [f(x) \cdot g(x)] = \lim_{x \to c} f(x) \cdot \lim_{x \to c} g(x) $$
-
The Limit of a Quotient:
$$ \lim_{{x \to c}} \frac{f(x)}{g(x)} = \frac{\lim_{{x \to c}} f(x)}{\lim_{{x \to c}} g(x)}, \text{ if } \lim_{{x \to c}} g(x) \neq 0 $$
-
$\quad \lim_{{x \to 0}} \sin x = 0$ -
$\quad \lim_{{x \to 0}} \cos x = 1$ -
$\quad \lim_{{x \to 0}} \frac{\tan x}{x} = 1$ -
$\quad \lim_{{x \to 0}} \frac{\sin x}{x} = 1$ -
$\quad \lim_{{x \to 0}} \frac{\sin^{-1} x}{x} = 1$ -
$\quad \lim_{{x \to 0}} \frac{\tan^{-1} x}{x} = 1$ -
$\quad \lim_{{x \to \infty}} \frac{\sin x}{x} = 0$ -
$\quad \lim_{{x \to 0}} \left(\cos x + a \sin b x\right)^{\frac{1}{x}} = e^{ab}$ -
$\quad \lim_{{x \to 0}} \left(\frac{1 - \cos (ax)}{x^2}\right) = \frac{a^2}{2}$
-
$\quad \lim_{{x \to 0}} \left(1 + x\right)^{\frac{1}{x}} = e$ -
$\quad \lim_{{x \to 0}} \left(1 + ax\right)^{\frac{1}{x}} = e^a$ -
$\quad \lim_{{x \to \infty}} \left(1 + \frac{1}{x}\right)^x = e$ -
$\quad \lim_{{x \to \infty}} \left(1 + \frac{a}{x}\right)^x = e^a$
-
$\quad \lim_{{x \to 0}} e^x = 1$ -
$\quad \lim_{{x \to 0}} \frac{e^x - 1}{x} = 1$ -
$\quad \lim_{{x \to 0}} \frac{e^{mx} - 1}{x} = m$ -
$\quad \lim_{{x \to 0}} \frac{a^x - 1}{x} = \log_e a$ -
$\quad \lim_{{x \to 0}} \frac{\log(1 + x)}{x} = 1$ -
$\quad \lim_{{x \to a}} \frac{x^n - a^n}{x - a} = na^{n-1}$ -
$\quad \lim_{{x \to 0}} \left( \frac{a^x + b^x}{2} \right)^{\frac{1}{x}} = \sqrt{ab}$
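A few of these standard limits can be spot-checked numerically by evaluating at a small $x$:

```python
import math

# Spot-check some standard limits with x close to 0.
x = 1e-6
assert abs(math.sin(x) / x - 1) < 1e-9          # lim sin(x)/x = 1
assert abs(math.tan(x) / x - 1) < 1e-9          # lim tan(x)/x = 1
assert abs((math.exp(x) - 1) / x - 1) < 1e-5    # lim (e^x - 1)/x = 1
assert abs((1 + x) ** (1 / x) - math.e) < 1e-4  # lim (1 + x)^(1/x) = e
print("standard limits verified numerically")
```

The tolerances are loose because the finite $x$ leaves a small residual of order $x$.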
We apply L'Hospital's Rule to a limit if it takes one of the following indeterminate forms: $\frac{0}{0}$, $\frac{\infty}{\infty}$, $0 \cdot \infty$, $\infty - \infty$, $0^0$, $1^{\infty}$, $\infty^0$.
Try to convert all indeterminate forms into the $\frac{0}{0}$ or $\frac{\infty}{\infty}$ form.
If $\lim_{x \to a} \frac{f(x)}{g(x)}$ takes the form $\frac{0}{0}$ or $\frac{\infty}{\infty}$, then $\lim_{x \to a} \frac{f(x)}{g(x)} = \lim_{x \to a} \frac{f'(x)}{g'(x)}$, provided the limit on the right exists.
Note: If $\lim_{x \to a} \frac{f'(x)}{g'(x)}$ is again indeterminate, the rule can be applied repeatedly.
-
Continuity of a function at a point:
A function $f(x)$ is said to be continuous at $a$ if it satisfies the following conditions:
- $f(a)$ is defined
- $\lim_{x \to a} f(x)$ exists, i.e. $\lim_{x \to a^{-}} f(x) = \lim_{x \to a^{+}} f(x)$
- $\lim_{x \to a} f(x) = f(a)$
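These conditions can be tested numerically for specific functions; the two functions below are illustrative assumptions:

```python
# f(x) = |x| is continuous at 0; the step function g is not, because its
# one-sided limits at 0 disagree.
f = lambda x: abs(x)
g = lambda x: 1.0 if x >= 0 else -1.0

def one_sided_limits(func, a, h=1e-7):
    # Crude approximations to the left-hand and right-hand limits at a.
    return func(a - h), func(a + h)

left, right = one_sided_limits(f, 0)
print(left, right, f(0))   # both near 0 and equal to f(0): continuous at 0

left, right = one_sided_limits(g, 0)
print(left, right, g(0))   # -1 vs 1: the limit does not exist, so g is
                           # discontinuous at 0 even though g(0) is defined
```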
-
-
Left continuous (or) continuity from the left at a point:
A function$f(x)$ is said to be continuous from the left (or) left continuous at$x = a$ if-
$f(a)$ is defined $\lim_{x \to a^{-}} f(x) = f(a)$
-
-
Right continuous (or) continuity from the right at a point:
A function$f(x)$ is said to be continuous from the right (or) right continuous at$x = a$ if-
$f(a)$ is defined $\lim_{x \to a^{+}} f(x) = f(a)$
-
-
Continuity of a function in an open interval:
A function $f(x)$ is said to be continuous in an open interval $(a, b)$ if $f(x)$ is continuous $\forall x \in (a, b)$, i.e. $\lim_{x \to c} f(x) = f(c) \; \forall c \in (a, b)$.
Continuity of a function on a closed interval:
A function $f(x)$ is said to be continuous on the closed interval $[a, b]$ if:
- $f(x)$ is continuous $\forall x \in (a, b)$
- $\lim_{x \to a^{+}} f(x) = f(a)$
- $\lim_{x \to b^{-}} f(x) = f(b)$
-
Important Points:
- If $f(x)$ and $g(x)$ are two continuous functions, then $f(x) + g(x)$, $f(x) - g(x)$, $f(x) \cdot g(x)$ and $\frac{f(x)}{g(x)}$ (provided $g(x) \neq 0$) are also continuous.
- Polynomial functions, exponential functions, sine and cosine functions, and the modulus function are continuous everywhere.
- Logarithmic functions are continuous in $(0, \infty)$.
- Let the functions $f$ and $g$ be continuous at a point $x = x_0$. Then:
  - $cf$, $f \pm g$ and $f \cdot g$ are continuous at $x = x_0$, where $c$ is any constant.
  - $\frac{f}{g}$ is continuous at $x = x_0$, if $g(x_0) \neq 0$.
- If $f$ is continuous at $x = x_0$ and $g$ is continuous at $f(x_0)$, then the composite function $g(f(x))$ is continuous at $x = x_0$.
- If $f$ is continuous at an interior point $c$ of a closed interval $[a, b]$ and $f(c) \neq 0$, then there exists a neighbourhood of $c$ throughout which $f(x)$ has the same sign as $f(c)$.
- If $f$ is continuous in a closed interval $[a, b]$, then it is bounded there and attains its bounds at least once in $[a, b]$.
- If $f$ is continuous in a closed interval $[a, b]$, and if $f(a)$ and $f(b)$ are of opposite signs, then there exists at least one point $c \in [a, b]$ such that $f(c) = 0$.
- If $f$ is continuous in a closed interval $[a, b]$ and $f(a) \neq f(b)$, then it assumes every value between $f(a)$ and $f(b)$.
Important Note:
- If the derivative of $f(x)$ exists at $x = a$, then $f(x)$ is said to be differentiable at $x = a$.
- $f'(a)$ exists $\iff Lf'(a) = Rf'(a)$, i.e. the left and right derivatives at $a$ exist and agree.
- If $f(x)$ and $g(x)$ are two differentiable functions, then $f(x) + g(x)$, $f(x) - g(x)$, $f(x) \cdot g(x)$ and $\frac{f(x)}{g(x)}$ (provided $g(x) \neq 0$) are also differentiable.
- Polynomial functions, exponential functions, sine and cosine functions are differentiable everywhere.
- Every differentiable function is continuous, but a continuous function need not be differentiable.
Derivability of a function in an open interval:
A function $f(x)$ is said to be derivable in an open interval $(a, b)$ if $f'(x)$ exists $\forall x \in (a, b)$.
Derivability of a function on a closed interval:
A function $f(x)$ is said to be derivable on the closed interval $[a, b]$ if:
- $f'(x)$ exists $\forall x \in (a, b)$
- $Rf'(a)$ exists
- $Lf'(b)$ exists.
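Left and right derivatives can be estimated with one-sided difference quotients; $f(x) = |x|$ at $x = 0$ is the classic continuous-but-not-differentiable example:

```python
# Left and right derivatives of f(x) = |x| at 0 via one-sided difference quotients.
f = lambda x: abs(x)

def right_deriv(f, a, h=1e-7):
    # Approximation to Rf'(a).
    return (f(a + h) - f(a)) / h

def left_deriv(f, a, h=1e-7):
    # Approximation to Lf'(a).
    return (f(a) - f(a - h)) / h

print(left_deriv(f, 0), right_deriv(f, 0))  # -1.0 and 1.0: Lf'(0) != Rf'(0),
                                            # so |x| is not differentiable at 0
```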
Let $f(x)$ be infinitely differentiable at $x = a$. Then the Taylor series of $f$ about $a$ is $$ f(x) = f(a) + f'(a)(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \frac{f'''(a)}{3!}(x - a)^3 + \cdots $$
If the Taylor series is centred at 0, then the series is known as the Maclaurin series. It means that, if $a = 0$, the series becomes $$ f(x) = f(0) + f'(0)x + \frac{f''(0)}{2!}x^2 + \frac{f'''(0)}{3!}x^3 + \cdots $$
This is known as the Maclaurin series.
-
$(1 - x)^{-1} = 1 + x + x^2 + x^3 + \cdots \quad \text{for } |x| < 1$ -
$(1 + x)^{-1} = 1 - x + x^2 - x^3 + \cdots \quad \text{for } |x| < 1$ -
$(1 - x)^{-2} = 1 + 2x + 3x^2 + 4x^3 + \cdots \quad \text{for } |x| < 1$ -
$(1 + x)^{-2} = 1 - 2x + 3x^2 - 4x^3 + \cdots \quad \text{for } |x| < 1$ -
$(1 - x)^{-3} = 1 + 3x + 6x^2 + 10x^3 + \cdots \quad \text{for } |x| < 1$ -
$\quad (1 + x)^{-3} = 1 - 3x + 6x^2 - 10x^3 + \ldots \quad [-1 < x < 1]$ -
$\quad (1 + x)^n = 1 + nx + \frac{n(n-1)}{2!}x^2 + \frac{n(n-1)(n-2)}{3!}x^3 + \ldots \quad [-1 < x < 1]$ -
$\quad (1 - x)^{-n} = 1 + nx + \frac{n(n+1)}{2!}x^2 + \frac{n(n+1)(n+2)}{3!}x^3 + \ldots \quad [-1 < x < 1]$ -
$\quad e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \ldots \quad [-\infty < x < \infty]$ -
$\quad e^{-x} = 1 - x + \frac{x^2}{2!} - \frac{x^3}{3!} + \ldots \quad [-\infty < x < \infty]$ -
$\quad \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \ldots \quad [-\infty < x < \infty]$ -
$\quad \sinh x = x + \frac{x^3}{3!} + \frac{x^5}{5!} + \frac{x^7}{7!} + \ldots \quad [-\infty < x < \infty]$ -
$\quad \sin^{-1} x = x + \frac{1}{2} \frac{x^3}{3} + \frac{1 \cdot 3}{2 \cdot 4} \frac{x^5}{5} + \frac{1 \cdot 3 \cdot 5}{2 \cdot 4 \cdot 6} \frac{x^7}{7} + \ldots \quad [-1 < x < 1]$ -
$\quad \cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \ldots \quad [-\infty < x < \infty]$ -
$\quad \cosh x = 1 + \frac{x^2}{2!} + \frac{x^4}{4!} + \frac{x^6}{6!} + \ldots \quad [-\infty < x < \infty]$ -
$\quad \cos^{-1} x = \frac{\pi}{2} - \sin^{-1} x = \frac{\pi}{2} - \left( x + \frac{1}{2} \frac{x^3}{3} + \frac{1 \cdot 3}{2 \cdot 4} \frac{x^5}{5} + \ldots \right) \quad [-1 < x < 1]$ -
$\quad \tan x = x + \frac{x^3}{3} + \frac{2x^5}{15} + \ldots \quad \left[ -\frac{\pi}{2} < x < \frac{\pi}{2} \right]$ -
$\quad \tanh x = x - \frac{x^3}{3} + \frac{2x^5}{15} - \ldots \quad \left[ -\frac{\pi}{2} < x < \frac{\pi}{2} \right]$ -
$$ \quad \tan^{-1} x = \begin{cases} x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \ldots & \text{for } -1 < x < 1 \\ \frac{\pi}{2} - \frac{1}{x} + \frac{1}{3x^3} - \frac{1}{5x^5} + \ldots & \text{for } x \geq 1 \\ -\frac{\pi}{2} + \frac{1}{x} - \frac{1}{3x^3} + \frac{1}{5x^5} - \ldots & \text{for } x \leq -1 \end{cases} $$
-
$\quad \cot x = \frac{1}{x} - \frac{x}{3} - \frac{x^3}{45} + \ldots \quad \left[0 < x < \pi\right]$ -
$\quad \coth x = \frac{1}{x} + \frac{x}{3} - \frac{x^3}{45} + \ldots \quad \left[0 < |x| < \pi\right]$ -
$$ \quad \cot^{-1} x = \frac{\pi}{2} - \tan^{-1} x = \begin{cases} \frac{\pi}{2} - \left(x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \ldots\right) & \text{for } -1 < x < 1 \\ \frac{1}{x} - \frac{1}{3x^3} + \frac{1}{5x^5} - \ldots & \text{for } x \geq 1 \\ \pi + \frac{1}{x} - \frac{1}{3x^3} + \frac{1}{5x^5} - \ldots & \text{for } x \leq -1 \end{cases} $$
-
$\quad \sec x = 1 + \frac{x^2}{2} + \frac{5x^4}{24} + \ldots \quad \left[-\frac{\pi}{2} < x < \frac{\pi}{2}\right]$ -
$\quad \csc x = \frac{1}{x} + \frac{x}{6} + \frac{7x^3}{360} + \ldots \quad \left[0 < x < \pi\right]$ -
$\quad \ln(1 + x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \ldots \quad \left[-1 < x < 1\right]$
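Partial sums of these series converge quickly inside the stated intervals. A small sketch comparing the Maclaurin partial sums of $e^x$ and $\sin x$ with the library functions (term counts are illustrative choices):

```python
import math

# Partial sums of the Maclaurin series for e^x and sin x.
def exp_series(x, n=20):
    # e^x = sum of x^k / k! for k = 0, 1, 2, ...
    return sum(x**k / math.factorial(k) for k in range(n))

def sin_series(x, n=10):
    # sin x = x - x^3/3! + x^5/5! - ...
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1) for k in range(n))

assert abs(exp_series(1.5) - math.exp(1.5)) < 1e-12
assert abs(sin_series(1.2) - math.sin(1.2)) < 1e-12
print(exp_series(1.5), math.exp(1.5))
```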
Differentiation is the process of finding the derivative of a function. The derivative represents the rate of change of the function's value with respect to a change in its input value.
If $y = f(x)$, then the derivative of $f$ at $x$ is defined as $$ f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}, $$ provided this limit exists.
Note: The derivative represents the slope of the tangent line to the graph of the function at a given point.
-
Power Rule:
$$ \frac{d}{dx} [x^n] = nx^{n-1} $$
-
Constant Rule:
$$ \frac{d}{dx} [c] = 0, \text{ where } c \text{ is a constant} $$
-
Sum Rule:
$$ \frac{d}{dx} [f(x) + g(x)] = f'(x) + g'(x) $$
-
Difference Rule:
$$ \frac{d}{dx} [f(x) - g(x)] = f'(x) - g'(x) $$
-
Product Rule:
$$ \frac{d}{dx} [f(x) \cdot g(x)] = f'(x) \cdot g(x) + f(x) \cdot g'(x) $$
-
Quotient Rule:
$$ \frac{d}{dx} \left[\frac{f(x)}{g(x)}\right] = \frac{f'(x) \cdot g(x) - f(x) \cdot g'(x)}{[g(x)]^2} $$
-
Chain Rule: $$ \frac{d}{dx} [f(g(x))] = f'(g(x)) \cdot g'(x) $$
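The product and chain rules can be verified against a central-difference approximation at a sample point; the functions and the point are illustrative assumptions:

```python
import math

def num_deriv(F, x, h=1e-6):
    # Central-difference approximation to F'(x).
    return (F(x + h) - F(x - h)) / (2 * h)

x0 = 1.3
f, df = math.sin, math.cos          # f(x) = sin x, f'(x) = cos x
g, dg = (lambda x: x**2), (lambda x: 2*x)

# Product rule: (f g)' = f' g + f g'
assert abs(num_deriv(lambda x: f(x) * g(x), x0)
           - (df(x0) * g(x0) + f(x0) * dg(x0))) < 1e-6

# Chain rule: (f o g)'(x) = f'(g(x)) g'(x)
assert abs(num_deriv(lambda x: f(g(x)), x0) - df(g(x0)) * dg(x0)) < 1e-6
print("product and chain rules verified numerically")
```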
-
Second Derivative: The derivative of the derivative, denoted as
$f''(x)$ or$\frac{d^2f}{dx^2}$ . -
nth Derivative: The derivative taken n times, denoted as
$f^{(n)}(x)$ or$\frac{d^nf}{dx^n}$ .
Critical Points and Inflection Points
-
Critical Points: Points where
$f'(x) = 0$ or$f'(x)$ is undefined. Used to determine local maxima and minima. -
Inflection Points: Points where the concavity of the function changes, identified by
$f''(x) = 0$ .
The Mean Value Theorem for Derivatives states that if a function $f$ is continuous on $[a, b]$ and differentiable on $(a, b)$, then there exists at least one point $c$ in $(a, b)$ where the derivative equals the average rate of change of the function over the interval.
Conditions:
-
Continuity:
$f$ must be continuous on the closed interval$[a, b]$ . -
Differentiability:
$f$ must be differentiable on the open interval$(a, b)$ .
Interpretation:
The theorem essentially states that there is at least one point $c$ in $(a, b)$ where the tangent to the curve is parallel to the secant line joining the endpoints $(a, f(a))$ and $(b, f(b))$.
Rolle's Theorem is a special case of the Mean Value Theorem. It states that if a function $f$ satisfies the following conditions:
-
$f$ is continuous on the closed interval$[a, b]$ . -
$f$ is differentiable on the open interval$(a, b)$ . -
$f(a) = f(b)$ .
Then, there exists at least one point $c$ in $(a, b)$ such that $f'(c) = 0$.
In simpler terms, if a function starts and ends at the same value on a certain interval and meets the continuity and differentiability conditions, there is at least one point in that interval where the derivative (slope of the tangent) is zero. This means the function has a horizontal tangent line at some point in the interval.
The Mean Value Theorem (MVT) generalizes Rolle's Theorem. It states that if a function $f$ satisfies the following conditions:
-
$f$ is continuous on the closed interval$[a, b]$ . -
$f$ is differentiable on the open interval$(a, b)$ .
Then, there exists at least one point $c$ in $(a, b)$ such that $$ f'(c) = \frac{f(b) - f(a)}{b - a} $$
This means that there is at least one point in the interval where the instantaneous rate of change (the derivative) is equal to the average rate of change over the interval. In other words, the slope of the tangent at some point is equal to the slope of the secant line connecting the endpoints $(a, f(a))$ and $(b, f(b))$.
Formulas Recap
-
Rolle's Theorem: If
$f(a) = f(b)$ , then$f'(c) = 0$ . -
Lagrange's Mean Value Theorem:
$f'(c) = \frac{f(b) - f(a)}{b - a}$ .
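A concrete instance of Lagrange's theorem (the function and interval are illustrative choices): for $f(x) = x^2$ on $[1, 3]$, the secant slope is $\frac{9 - 1}{3 - 1} = 4$, and $f'(c) = 2c = 4$ gives $c = 2$, which lies in $(1, 3)$:

```python
# Find the MVT point c for f(x) = x^2 on [1, 3].
a, b = 1.0, 3.0
f = lambda x: x ** 2

secant_slope = (f(b) - f(a)) / (b - a)   # (9 - 1) / 2 = 4.0
c = secant_slope / 2                     # solve f'(c) = 2c = secant_slope
assert a < c < b                         # c = 2.0 lies strictly inside (1, 3)
print("c =", c)
```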
The Mean Value Theorem for Integrals states that if $f$ is continuous on $[a, b]$, then there exists at least one point $c$ in $[a, b]$ such that $$ f(c) = \frac{1}{b - a} \int_a^b f(x)\,dx $$
Conditions:
-
Continuity:
$f$ must be continuous on the closed interval$[a, b]$ .
Interpretation:
The theorem states that there is at least one point $c$ in the interval at which the value of the function equals the average value of the function over the interval.
Example:
Consider the function $f(x) = x^2$ on the interval $[1, 3]$.
-
Check conditions:
-
$f(x) = x^2$ is continuous on$[1, 3]$ .
-
-
Apply the Mean Value Theorem for Integrals:
$$ \frac{1}{3 - 1} \int_1^3 x^2 \, dx = \frac{1}{2} \left[ \frac{x^3}{3} \right]_1^3 = \frac{1}{2} \left( \frac{27}{3} - \frac{1}{3} \right) = \frac{1}{2} \times \frac{26}{3} = \frac{13}{3} $$
So, there exists a point
$c$ in$(1, 3)$ such that$f(c) = \frac{13}{3}$ .
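The worked example can be confirmed with a midpoint Riemann sum and by solving $f(c) = \frac{13}{3}$ directly:

```python
import math

# Average value of f(x) = x^2 on [1, 3], and the point c where f(c) equals it.
f = lambda x: x ** 2
a, b, n = 1.0, 3.0, 100000

h = (b - a) / n
# Midpoint-rule approximation of (1/(b-a)) * integral_a^b f(x) dx.
avg = sum(f(a + (i + 0.5) * h) for i in range(n)) * h / (b - a)
assert abs(avg - 13 / 3) < 1e-6        # matches the 13/3 computed above

c = math.sqrt(13 / 3)                  # solve c^2 = 13/3
assert a < c < b                       # c is inside (1, 3), as the theorem asserts
print("average value:", avg, " c =", c)
```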
- Mean Value Theorem for Derivatives: Guarantees that there is at least one point where the derivative (slope of the tangent) equals the average rate of change over an interval.
- Mean Value Theorem for Integrals: Guarantees that there is at least one point where the function's value equals its average value over an interval.
Implicit differentiation is a technique used in calculus to find the derivative of a function when it is not explicitly given in the form $y = f(x)$.
-
Implicit Function: An implicit function is one where
$y$ is not isolated on one side of the equation. For example, in the equation$x^2 + y^2 = 1$ ,$y$ is defined implicitly in terms of$x$ . -
Differentiation Process:
- Treat
$y$ as a function of$x$ , even though it is not explicitly written as$y = f(x)$ . - Differentiate both sides of the equation with respect to
$x$ . Remember to apply the chain rule when differentiating terms involving$y$ , because$y$ is a function of$x$ .
- Treat
-
Chain Rule: The chain rule is used when differentiating a composite function. When you differentiate
$y$ with respect to$x$ , you get$\frac{dy}{dx}$ . For example, if you differentiate$y^2$ with respect to$x$ , you apply the chain rule: $$ \frac{d}{dx}(y^2) = 2y \cdot \frac{dy}{dx} $$
Example:
Let's find $\frac{dy}{dx}$ for the circle $x^2 + y^2 = 1$:
-
Start with the equation:
$$ x^2 + y^2 = 1 $$
-
Differentiate both sides with respect to
$x$ :$$ \frac{d}{dx}(x^2) + \frac{d}{dx}(y^2) = \frac{d}{dx}(1) $$
-
Apply the differentiation:
- The derivative of
$x^2$ with respect to$x$ is$2x$ . - The derivative of
$y^2$ with respect to$x$ is$2y \cdot \frac{dy}{dx}$ (using the chain rule). - The derivative of a constant (1) with respect to
$x$ is 0.
So, we get:
$$ 2x + 2y \cdot \frac{dy}{dx} = 0 $$
- The derivative of
-
Solve for
$\frac{dy}{dx}$ :- Isolate
$\frac{dy}{dx}$ on one side of the equation: $$ 2y \cdot \frac{dy}{dx} = -2x $$ - Divide both sides by
$2y$ (assuming$y \neq 0$ ): $$ \frac{dy}{dx} = -\frac{x}{y} $$
- Isolate
So, the derivative of $y$ with respect to $x$ for the circle $x^2 + y^2 = 1$ is $\frac{dy}{dx} = -\frac{x}{y}$.
Implicit differentiation allows us to find the derivative of a variable that is defined implicitly by a relation involving another variable. By differentiating both sides of the relation with respect to $x$ and applying the chain rule to terms involving $y$, we can solve for $\frac{dy}{dx}$.
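On the upper half of the circle we can compare the implicit result $\frac{dy}{dx} = -\frac{x}{y}$ with a direct numerical derivative of $y = \sqrt{1 - x^2}$ (the sample point is an illustrative choice):

```python
import math

# dy/dx on x^2 + y^2 = 1 (upper semicircle): implicit formula vs finite difference.
y = lambda x: math.sqrt(1 - x ** 2)

x0, h = 0.6, 1e-6
numeric  = (y(x0 + h) - y(x0 - h)) / (2 * h)   # central-difference derivative
implicit = -x0 / y(x0)                         # -x/y = -0.6/0.8 = -0.75
assert abs(numeric - implicit) < 1e-6
print(numeric, implicit)
```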
Logarithmic differentiation is a technique used in calculus to differentiate functions that are products, quotients, or powers of other functions. This method leverages the properties of logarithms to simplify the differentiation process, especially when dealing with complex expressions.
-
Logarithm Properties: The key properties of logarithms that make logarithmic differentiation useful are:
$\log(ab) = \log(a) + \log(b)$ $\log\left(\frac{a}{b}\right) = \log(a) - \log(b)$ $\log(a^b) = b \log(a)$
-
Steps for Logarithmic Differentiation:
-
Step 1: Take the natural logarithm ($\ln$) of both sides of the function
$y = f(x)$ . This step transforms the function into a form that is easier to differentiate. - Step 2: Use the properties of logarithms to simplify the expression.
-
Step 3: Differentiate both sides of the equation with respect to
$x$ . Remember to apply the chain rule when differentiating the logarithm of$y$ (since$y$ is a function of$x$ ). -
Step 4: Solve for
$\frac{dy}{dx}$ by isolating it on one side of the equation.
-
Step 1: Take the natural logarithm ((\ln)) of both sides of the function
Example:
Let's differentiate the function $y = x^x$:
-
Start with the function:
$$ y = x^x $$
-
Take the natural logarithm of both sides:
$$ \ln(y) = \ln(x^x) $$
-
Simplify using logarithm properties:
$$ \ln(y) = x \ln(x) $$
-
Differentiate both sides with respect to
$x$ :- For the left side, use the chain rule: $$ \frac{d}{dx}[\ln(y)] = \frac{1}{y} \cdot \frac{dy}{dx} $$
- For the right side, apply the product rule (since
$x \ln(x)$ is a product of$x$ and $\ln(x)$): $$ \frac{d}{dx}[x \ln(x)] = \ln(x) + x \cdot \frac{1}{x} = \ln(x) + 1 $$
So, we get:
$$ \frac{1}{y} \cdot \frac{dy}{dx} = \ln(x) + 1 $$
-
Solve for
$\frac{dy}{dx}$ :- Multiply both sides by
$y$ : $$ \frac{dy}{dx} = y (\ln(x) + 1) $$ - Recall that
$y = x^x$ : $$ \frac{dy}{dx} = x^x (\ln(x) + 1) $$
- Multiply both sides by
So, the derivative of $y = x^x$ is $\frac{dy}{dx} = x^x (\ln(x) + 1)$.
Logarithmic differentiation is a powerful technique for differentiating functions that involve products, quotients, or powers. By taking the natural logarithm of the function, we can simplify the differentiation process using the properties of logarithms. This method is particularly useful for complex expressions where traditional differentiation rules would be cumbersome.
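The result $\frac{d}{dx} x^x = x^x(\ln x + 1)$ can be checked against a central-difference derivative at a sample point (the point $x = 2$ is an illustrative choice):

```python
import math

# Compare the logarithmic-differentiation formula with a numerical derivative.
f = lambda x: x ** x

x0, h = 2.0, 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)   # central-difference derivative
formula = f(x0) * (math.log(x0) + 1)          # x^x (ln x + 1) at x = 2
assert abs(numeric - formula) < 1e-5
print(numeric, formula)
```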
-
$\frac{d}{dx} (x^n) = n \cdot x^{n-1}\quad\quad$ $\frac{d}{dx} \left( \frac{1}{x^n} \right) = \frac{-n}{x^{n+1}}\quad\quad$ $\frac{d}{dx} (\sqrt{x}) = \frac{1}{2\sqrt{x}} ;\quad x \neq 0$ -
$\frac{d}{dx} [ax^n + b] = an \cdot x^{n-1}$ -
$\frac{d}{dx} [ax + b]^n = n \cdot a (ax + b)^{n-1}$ -
$\frac{d}{dx} [e^{ax}] = a \cdot e^{ax}$ -
$\frac{d}{dx} [\log x] = \frac{1}{x} ; \quad\quad x > 0$ -
$\frac{d}{dx} [a^x] = a^x \log a; \quad\quad a > 0$ -
$\frac{d}{dx} [\sin x] = \cos x\quad\quad$ $\frac{d}{dx} [\cos x] = -\sin x\quad\quad$ $\frac{d}{dx} [\tan x] = \sec^2 x\quad\quad$ $\frac{d}{dx} [\cot x] = -\csc^2 x\quad\quad$ $\frac{d}{dx} [\sec x] = \sec x \cdot \tan x\quad\quad$ $\frac{d}{dx} [\csc x] = -\csc x \cdot \cot x\quad\quad$
Inverse Rule
If $y = f(x)$ has the inverse $x = f^{-1}(y)$, then $\frac{dx}{dy} = \frac{1}{dy/dx}$, provided $\frac{dy}{dx} \neq 0$.
-
Local Maximum/Minimum: A point
$(x_0, y_0)$ where the function reaches its highest or lowest value in a small surrounding neighborhood. -
Global Maximum/Minimum: A point
$(x_0, y_0)$ where the function reaches its highest or lowest value over its entire domain.
Local or relative maximum
A function $f(x)$ is said to have a local (or relative) maximum at $x = a$ if there is a neighbourhood of $a$ in which $f(a) \geq f(x)$ for every $x$.
Local or relative minimum
A function $f(x)$ is said to have a local (or relative) minimum at $x = a$ if there is a neighbourhood of $a$ in which $f(a) \leq f(x)$ for every $x$.
Stationary points
The values of $x$ for which $f'(x) = 0$ are called stationary points.
Stationary values
A function $f(x)$ is said to have a stationary value at $x = a$ if $f'(a) = 0$; the value $f(a)$ is then called a stationary value.
Extreme point
The point at which the function has a maximum or a minimum is called an extreme point.
Extreme values
The values of the function at extreme points are called extreme values (Extrema).
Point of inflection:
The point at which a curve crosses its tangent is called a point of inflection.
The function has neither a maximum nor a minimum at a point of inflection; there $f''(x) = 0$ and $f''(x)$ changes sign.
Note:
- A necessary condition for a function to have an extreme value at
$x = a$ is$f'(a) = 0$ . -
$f'(a) = 0$ is only a necessary condition but not a sufficient condition for$f(a)$ to be an extreme value of$f(x)$ . - Every extreme point is a stationary point but every stationary point need not be an extreme point.
Rule to find maxima and minima:
Let $y = f(x)$ be the given function.
Step 1: Find $f'(x)$ and $f''(x)$.
Step 2: Equate $f'(x)$ to zero and solve for $x$. Let $x = x_0$ be one of the roots (a stationary point).
Step 3: Find $f''(x_0)$.
- If
$f''(x_0) > 0$ then$f(x)$ has a minimum at$x = x_0$ - If
$f''(x_0) < 0$ then$f(x)$ has a maximum at$x = x_0$ - If
$f''(x_0) = 0$ then$f(x)$ may (or) may not have extremum.
In this case, check for maxima and minima using the changes in sign of $f'(x)$ around $x = x_0$:
- If $f'(x) < 0$ for $x < x_0$ and $f'(x) > 0$ for $x > x_0$, then $f(x_0)$ is a minimum value of $f(x)$.
- If $f'(x) > 0$ for $x < x_0$ and $f'(x) < 0$ for $x > x_0$, then $f(x_0)$ is a maximum value of $f(x)$.
- If $f'(x)$ keeps the same sign for $x < x_0$ and $x > x_0$ (either $f'(x) > 0$ or $f'(x) < 0$ throughout), then $f(x_0)$ is not an extremum.
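Applying the rule to $f(x) = x^3 - 3x$ (an illustrative example): $f'(x) = 3x^2 - 3 = 0$ gives $x = \pm 1$, and $f''(x) = 6x$ classifies each stationary point:

```python
# Second-derivative test for f(x) = x^3 - 3x.
f  = lambda x: x**3 - 3*x
f2 = lambda x: 6*x            # f''(x)

for x0 in (-1.0, 1.0):        # roots of f'(x) = 3x^2 - 3
    kind = "maximum" if f2(x0) < 0 else "minimum"
    print("x =", x0, "-> local", kind, ", f(x0) =", f(x0))
# x = -1 is a local maximum with f(-1) = 2;
# x =  1 is a local minimum with f(1) = -2.
```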
-
Function and Partial Derivatives: Let
$f(x, y)$ be the function. Compute the first-order partial derivatives $f_x = \frac{\partial f}{\partial x}$ and $f_y = \frac{\partial f}{\partial y}$.
-
Critical Points: Solve the system of equations
$f_x = 0$ and$f_y = 0$ to find the critical points$(x_0, y_0)$ . -
Second-Order Partial Derivatives: Compute the second-order partial derivatives:
-
Hessian Determinant: Calculate the Hessian determinant
$H = f_{xx} f_{yy} - (f_{xy})^2$ at each critical point:
-
Classification of Critical Points:
- If
$H > 0$ and$f_{xx} > 0$ ,$(x_0, y_0)$ is a local minimum. - If
$H > 0$ and$f_{xx} < 0$ ,$(x_0, y_0)$ is a local maximum. - If
$H < 0$ ,$(x_0, y_0)$ is a saddle point (neither a maximum nor a minimum). - If
$H = 0$ , the test is inconclusive.
- If
Example
Let's go through an example to illustrate these steps:
Consider the function:
- First-Order Partial Derivatives:
-
Critical Points:
Set
$f_x = 0$ and$f_y = 0$ :
So, the critical point is
- Second-Order Partial Derivatives:
- Hessian Determinant:
-
Classification:
Since
$H > 0$ and$f_{xx} = 2 > 0$ ,$(2, 3)$ is a local minimum.
To verify that it is a global minimum, we would need to check the behavior of the function over its entire domain; for a quadratic function with a positive-definite Hessian, such as this one, the unique local minimum is also the global minimum.
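To make the steps concrete in code, assume for illustration (the example's original function was not preserved in this copy) $f(x, y) = x^2 + y^2 - 4x - 6y$, which is consistent with the stated critical point $(2, 3)$ and $f_{xx} = 2$:

```python
# Hessian classification for the assumed example f(x, y) = x^2 + y^2 - 4x - 6y.
fx = lambda x, y: 2 * x - 4       # first-order partials
fy = lambda x, y: 2 * y - 6
fxx, fyy, fxy = 2.0, 2.0, 0.0     # second-order partials (constant here)

x0, y0 = 2.0, 3.0                 # solves fx = 0 and fy = 0
assert fx(x0, y0) == 0 and fy(x0, y0) == 0

H = fxx * fyy - fxy ** 2          # Hessian determinant
assert H > 0 and fxx > 0          # => (2, 3) is a local minimum
print("H =", H, "-> local minimum at (2, 3)")
```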
The absolute maximum/minimum values of the function $f$ on a closed interval $[a, b]$ are given by:
- Absolute maximum value $= \max(f(a), f(b), \text{all local maximum values of } f)$, the greatest value of $f(x)$ in $[a, b]$.
- Absolute minimum value $= \min(f(a), f(b), \text{all local minimum values of } f)$, the least value of $f(x)$ in $[a, b]$.
In calculus, a partial derivative of a function of multiple variables is its derivative with respect to one of those variables, with the other variables held constant. This is different from an ordinary derivative, where the function depends on a single variable.
Partial derivatives are crucial in multivariable calculus because they help us understand how a function changes as each of its variables changes. They are widely used in various fields such as physics, engineering, economics, and machine learning.
Let $f(x, y)$ be a function of two variables. The partial derivative of $f$ with respect to $x$ is $$ f_x = \frac{\partial f}{\partial x} = \lim_{h \to 0} \frac{f(x + h, y) - f(x, y)}{h}, $$ and the partial derivative with respect to $y$ is defined analogously.
In simple terms, $f_x$ measures how $f$ changes as $x$ varies while $y$ is held constant, and $f_y$ measures how $f$ changes as $y$ varies while $x$ is held constant.
Notation
-
$f_x$ or$\frac{\partial f}{\partial x}$ : Partial derivative with respect to$x$ . -
$f_y$ or$\frac{\partial f}{\partial y}$ : Partial derivative with respect to$y$ .
Example
Consider the function
-
Partial Derivative with Respect to
$x$ :$$ f_x = \frac{\partial}{\partial x} (x^2y + 3xy + y^3) $$
$$ f_x = 2xy + 3y $$
-
Partial Derivative with Respect to
$y$ : $$ f_y = \frac{\partial}{\partial y} (x^2y + 3xy + y^3) $$ $$ f_y = x^2 + 3x + 3y^2 $$
Partial derivatives give us the slope of the tangent line to the curve obtained by intersecting the surface
$z = f(x, y)$ with a plane parallel to the respective coordinate plane.
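The computed partials $f_x = 2xy + 3y$ and $f_y = x^2 + 3x + 3y^2$ can be checked against central differences at a sample point (the point is an illustrative choice):

```python
# Finite-difference check of the partial derivatives of f(x, y) = x^2 y + 3xy + y^3.
f = lambda x, y: x**2 * y + 3*x*y + y**3

def fx_num(x, y, h=1e-6):
    # Central difference in x with y held constant.
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def fy_num(x, y, h=1e-6):
    # Central difference in y with x held constant.
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

x0, y0 = 1.5, -0.7
assert abs(fx_num(x0, y0) - (2*x0*y0 + 3*y0)) < 1e-6          # f_x = 2xy + 3y
assert abs(fy_num(x0, y0) - (x0**2 + 3*x0 + 3*y0**2)) < 1e-6  # f_y = x^2 + 3x + 3y^2
print("partial derivatives verified numerically")
```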
Just like ordinary derivatives, we can take higher-order partial derivatives. For a function $f(x, y)$, the second-order partial derivatives are:
-
$f_{xx} = \frac{\partial^2 f}{\partial x^2}$ : Second partial derivative with respect to$x$ . -
$f_{yy} = \frac{\partial^2 f}{\partial y^2}$ : Second partial derivative with respect to$y$ . -
$f_{xy} = \frac{\partial^2 f}{\partial y \partial x}$ or$f_{yx} = \frac{\partial^2 f}{\partial x \partial y}$ : Mixed partial derivatives.
As mentioned earlier, for functions with continuous second-order partial derivatives, the mixed partial derivatives are equal: $f_{xy} = f_{yx}$.
Partial derivatives are used in numerous applications, including:
-
Gradient: The gradient of a function
$f(x, y)$ is a vector that points in the direction of the steepest ascent. It is denoted by$\nabla f$ and is given by:$$ \nabla f = \left( \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y} \right) $$
-
Tangent Planes: The equation of the tangent plane to the surface
$z = f(x, y)$ at the point $(x_0, y_0, z_0)$ is:$$ z - z_0 = f_x(x_0, y_0)(x - x_0) + f_y(x_0, y_0)(y - y_0) $$
-
Optimization: In optimization problems, partial derivatives are used to find local maxima, minima, and saddle points of functions with several variables.
-
Find the Partial Derivatives: For the function
$g(x, y) = \sin(x) \cos(y)$ :$$ g_x = \frac{\partial g}{\partial x} = \cos(x) \cos(y) $$
$$ g_y = \frac{\partial g}{\partial y} = -\sin(x) \sin(y) $$
-
Compute the Gradient: For
$h(x, y) = x^2 + y^2$ : $$ \nabla h = \left( \frac{\partial h}{\partial x}, \frac{\partial h}{\partial y} \right) = (2x, 2y) $$
Let's visualize partial derivatives with a simple 3D surface $z = f(x, y)$:
-
Slice Parallel to
$yz$ -Plane: Fix$x = x_0$ . The curve is$z = f(x_0, y)$ . The slope of this curve at$y = y_0$ is$f_y(x_0, y_0)$ . -
Slice Parallel to
$xz$ -Plane: Fix$y = y_0$ . The curve is$z = f(x, y_0)$ . The slope of this curve at$x = x_0$ is$f_x(x_0, y_0)$ .
In summary, partial derivatives help us understand how functions of multiple variables change with respect to each variable independently. They are the cornerstone of multivariable calculus.
If $\frac{d}{dx} F(x) = f(x)$, then $F(x)$ is called an integral (or antiderivative) of $f(x)$.
The process of computing an integral of a function is called Integration and the function to be integrated is called integrand.
An integral of a function is not unique. If $F(x)$ is an integral of $f(x)$, then $F(x) + C$ is also an integral of $f(x)$ for any constant $C$.
The difference in the values of an integral of a function $f$ at the two ends of an interval $[a, b]$, namely $F(b) - F(a)$, is called the definite integral of $f$ over $[a, b]$ and is written $\int_a^b f(x)\,dx$.
The number $a$ is called the lower limit and the number $b$ is the upper limit of integration.
If $f$ is continuous on $[a, b]$, then $$ \int_a^b f(x)\,dx = F(b) - F(a), $$
where $F$ is any antiderivative of $f$, i.e. $F'(x) = f(x)$ on $[a, b]$.
- If $f(x)$ is a continuous function of $x$ over $[a, b]$, and $c$ belongs to $[a, b]$, then $\int_{a}^{b} f(x)dx = \int_{a}^{c} f(x)dx + \int_{c}^{b} f(x)dx.$
- If $f(x)$ is a continuous function of $x$ over $[a, b]$, then $\int_{a}^{b} Kf(x)dx = K \int_{a}^{b} f(x)dx.$
- If $f(x)$ is a continuous function of $x$ over $[a, b]$, then $\int_a^b f(x)dx = -\int_b^a f(x)dx.$
- If $f(x)$ is continuous in some neighbourhood of $a$, then $\int_a^a f(x)dx = 0.$
- If $f(x)$ and $g(x)$ are continuous in $[a, b]$, then $\int_a^b [f(x) + g(x)]dx = \int_a^b f(x)dx + \int_a^b g(x)dx.$
- $\int_a^b f(x)dx = \int_a^b f(z)dz = \int_a^b f(t)dt$ (the value does not depend on the variable of integration).
- $\int_0^a f(x)dx = \int_0^a f(a - x)dx.$
- $\int_{-a}^a f(x)dx = 0$, if $f(x)$ is odd.
- $\int_{-a}^a f(x)dx = 2 \int_0^a f(x)dx$, if $f(x)$ is even.
- $\int_0^{2a} f(x)dx = 2 \int_0^a f(x)dx$ if $f(2a - x) = f(x)$, and $\int_0^{2a} f(x)dx = 0$ if $f(2a - x) = -f(x).$
- $\int_0^{na} f(x)dx = n \int_0^a f(x)dx$, if $f(a + x) = f(x).$