Problem. Let \( a, b, c \) be positive real numbers such that \[ a^2 + b^2 + c^2 + ab + bc + ca = 6. \] Prove that: \[ \frac{1}{a^2 + 5} + \frac{1}{b^2 + 5} + \frac{1}{c^2 + 5} \leq \frac{1}{2}. \]
Solution.
Let us define the function \( f(x) = \frac{1}{x + 5} \), which is convex for \( x > 0 \), since: \[ f''(x) = \frac{2}{(x + 5)^3} > 0. \]
The key step is the averaged bound: \[ \sum_{\text{cyc}} \frac{1}{a^2 + 5} \leq \frac{9}{a^2 + b^2 + c^2 + 15}. \] (Caution: since \( f \) is convex, Jensen's inequality gives \( \sum_{\text{cyc}} f(a^2) \geq 3 f\!\left( \frac{a^2 + b^2 + c^2}{3} \right) = \frac{9}{a^2+b^2+c^2+15} \), which is the reverse direction; this upper bound therefore does not follow from Jensen and requires a separate justification.)
From the given condition: \[ a^2 + b^2 + c^2 + ab + bc + ca = 6, \] we can substitute \( a^2 + b^2 + c^2 = 6 - (ab + bc + ca) \). Therefore, \[ \sum_{\text{cyc}} \frac{1}{a^2 + 5} \leq \frac{9}{21 - (ab + bc + ca)}. \]
It now suffices to prove that: \[ ab + bc + ca \leq 3, \] because then: \[ \frac{9}{21 - (ab + bc + ca)} \leq \frac{9}{18} = \frac{1}{2}. \]
To prove this, observe the identity: \[ a^2 + b^2 + c^2 \geq ab + bc + ca, \] which holds for all real numbers \( a, b, c \). Applying this to our condition: \[ a^2 + b^2 + c^2 + ab + bc + ca = 6, \] we get: \[ 2(ab + bc + ca) \leq 6 \Rightarrow ab + bc + ca \leq 3. \]
Thus, \[ \sum_{\text{cyc}} \frac{1}{a^2 + 5} \leq \frac{9}{21 - (ab + bc + ca)} \leq \frac{9}{18} = \frac{1}{2}, \] as desired.
Equality holds when \( a = b = c = 1 \), since: \[ a^2 + b^2 + c^2 + ab + bc + ca = 3 + 3 = 6, \quad \text{and} \quad \sum \frac{1}{1^2 + 5} = \frac{3}{6} = \frac{1}{2}. \]
\(\blacksquare\)
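The statement itself can be spot-checked numerically: sample positive triples on the constraint surface by solving the quadratic for \( c \), and confirm the cyclic sum never exceeds \( \frac{1}{2} \). A minimal sketch in Python (the grid and step size are arbitrary choices):

```python
import math

def lhs(a, b, c):
    # sum of 1/(x^2 + 5) over the three variables
    return sum(1.0 / (x * x + 5.0) for x in (a, b, c))

def solve_c(a, b):
    # constraint: c^2 + (a+b)c + (a^2 + b^2 + ab - 6) = 0; take the positive root
    disc = (a + b) ** 2 - 4.0 * (a * a + b * b + a * b - 6.0)
    if disc <= 0:
        return None
    c = (-(a + b) + math.sqrt(disc)) / 2.0
    return c if c > 0 else None

# equality case a = b = c = 1
assert abs(lhs(1.0, 1.0, 1.0) - 0.5) < 1e-9

# grid of feasible points: the inequality should hold everywhere
worst = 0.0
for i in range(1, 60):
    for j in range(1, 60):
        a, b = 0.04 * i, 0.04 * j
        c = solve_c(a, b)
        if c is not None:
            worst = max(worst, lhs(a, b, c))
assert worst <= 0.5 + 1e-12
print(f"max over grid: {worst:.6f}")  # the grid maximum stays at (or just below) 0.5
```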
Physics 227: Practice Exam 2
---
**Problem 1: Divergence Theorem in Cylindrical Coordinates**
Let \( \mathbf{F} = (r^2, z, r) \) and let \( V \) be the cylindrical region defined by \( 0 \leq r \leq a \), \( 0 \leq z \leq h \), and \( 0 \leq \theta \leq 2\pi \). Verify the Divergence Theorem by computing both:
(a) The volume integral \( \iiint_V \nabla \cdot \mathbf{F} \, dV \)
(b) The flux integral \( \iint_S \mathbf{F} \cdot d\mathbf{S} \) over the closed surface of the cylinder.
---
**Problem 2: Diagonalization of a Matrix**
Consider the matrix:
\[
A = \begin{bmatrix} 4 & -2 \\ 1 & 1 \end{bmatrix}
\]
(a) Compute the eigenvalues of \( A \).
(b) Find a basis of eigenvectors and construct the diagonalization \( A = PDP^{-1} \).
---
**Problem 3: Vector Calculus and Stokes’ Theorem**
Let \( \mathbf{F} = (-y, x, z^2) \), and let \( S \) be the upper hemisphere of the sphere \( x^2 + y^2 + z^2 = 1 \) with the boundary curve \( C \) being the equator.
(a) Compute \( \oint_C \mathbf{F} \cdot d\mathbf{r} \).
(b) Compute \( \iint_S (\nabla \times \mathbf{F}) \cdot d\mathbf{S} \) and verify Stokes’ Theorem.
---
**Problem 4: Infinite Series and Convergence**
Determine whether the series \[ \sum_{n=1}^{\infty} \frac{(-1)^n}{n^p} \] converges for different values of \( p \), and justify your answer using the Alternating Series Test and the \( p \)-series test.
---
**Problem 5: Complex Numbers and Matrices**
(a) Express \( (1 + i)^{10} \) in polar form.
(b) Compute the determinant and inverse (if it exists) of the matrix:
\[ B = \begin{bmatrix} 2 & 3 \\ 1 & 4 \end{bmatrix} \]
Physics 227: Vector Operators and Vector Differentiation
1. Gradients as Vector Operators
The gradient operator, denoted \(\nabla\), acts on scalar fields (functions \(f(x,y,z)\)) and produces a vector field.
\[ \nabla = \frac{\partial}{\partial x} \mathbf{\hat{i}} + \frac{\partial}{\partial y} \mathbf{\hat{j}} + \frac{\partial}{\partial z} \mathbf{\hat{k}} \]
For a scalar field \(f(x, y, z)\), the gradient is: \[ \nabla f = \frac{\partial f}{\partial x} \mathbf{\hat{i}} + \frac{\partial f}{\partial y} \mathbf{\hat{j}} + \frac{\partial f}{\partial z} \mathbf{\hat{k}} \] The gradient points in the direction of greatest increase of \(f\), and its magnitude gives the rate of increase in that direction.
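A quick numerical illustration, using the arbitrary sample field \( f(x, y) = x^2 + 3y^2 \): central finite differences recover the partial derivatives, and the gradient direction maximizes the directional derivative over sampled unit vectors.

```python
import math

def f(x, y):
    # an arbitrary sample scalar field, chosen for illustration
    return x * x + 3.0 * y * y

def grad_f(x, y):
    # analytic gradient of f: (2x, 6y)
    return (2.0 * x, 6.0 * y)

def grad_fd(x, y, h=1e-6):
    # central finite differences approximate the partial derivatives
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return (fx, fy)

x0, y0 = 1.0, -2.0
gx, gy = grad_f(x0, y0)
ax, ay = grad_fd(x0, y0)
assert abs(gx - ax) < 1e-4 and abs(gy - ay) < 1e-4

# the gradient direction maximizes the directional derivative over sampled unit vectors
gnorm = math.hypot(gx, gy)
best = max(gx * math.cos(t) + gy * math.sin(t)
           for t in (2 * math.pi * k / 360 for k in range(360)))
assert best <= gnorm + 1e-9 and best > 0.999 * gnorm
```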
2. Divergence Operator
The divergence operator acts on vector fields, producing a scalar field that measures how much the vector field spreads out from a point.
\[ \nabla \cdot \mathbf{F} = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z} \]
This is a scalar field indicating the net "outflow" from a point — useful in fluid flow, electromagnetism, and other physical systems.
Example
For \(\mathbf{F} = (x^2, y^2, z^2)\): \[ \nabla \cdot \mathbf{F} = \frac{\partial x^2}{\partial x} + \frac{\partial y^2}{\partial y} + \frac{\partial z^2}{\partial z} = 2x + 2y + 2z \]
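This example can be confirmed with a central-difference approximation of each partial derivative (the evaluation point is an arbitrary choice):

```python
def F(x, y, z):
    # the example field F = (x^2, y^2, z^2)
    return (x * x, y * y, z * z)

def div_fd(x, y, z, h=1e-6):
    # central differences for dFx/dx + dFy/dy + dFz/dz
    return ((F(x + h, y, z)[0] - F(x - h, y, z)[0])
            + (F(x, y + h, z)[1] - F(x, y - h, z)[1])
            + (F(x, y, z + h)[2] - F(x, y, z - h)[2])) / (2 * h)

x, y, z = 0.7, -1.2, 2.0
assert abs(div_fd(x, y, z) - (2 * x + 2 * y + 2 * z)) < 1e-4
print(div_fd(x, y, z))  # ≈ 2x + 2y + 2z = 3.0 at this point
```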
3. Curl Operator
The curl operator acts on vector fields, producing another vector field that describes the rotation or "circulation" of the field at each point.
\[ \nabla \times \mathbf{F} = \begin{vmatrix} \mathbf{\hat{i}} & \mathbf{\hat{j}} & \mathbf{\hat{k}} \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ F_x & F_y & F_z \end{vmatrix} \]
This determinant expands to: \[ \nabla \times \mathbf{F} = \left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}\right)\mathbf{\hat{i}} - \left(\frac{\partial F_z}{\partial x} - \frac{\partial F_x}{\partial z}\right)\mathbf{\hat{j}} + \left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right)\mathbf{\hat{k}} \]
Example
For \(\mathbf{F} = (0, 0, xy)\): \[ \nabla \times \mathbf{F} = \begin{bmatrix} \frac{\partial}{\partial x} \\ \frac{\partial}{\partial y} \\ \frac{\partial}{\partial z} \end{bmatrix} \times \begin{bmatrix} 0 \\ 0 \\ xy \end{bmatrix} = \begin{bmatrix} \frac{\partial xy}{\partial y} \\ -\frac{\partial xy}{\partial x} \\ 0 \end{bmatrix} = \begin{bmatrix} x \\ -y \\ 0 \end{bmatrix} \]
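A finite-difference check of this curl at an arbitrary point, comparing against the result \( (x, -y, 0) \):

```python
def F(x, y, z):
    # the example field F = (0, 0, xy)
    return (0.0, 0.0, x * y)

def curl_fd(x, y, z, h=1e-6):
    # each partial derivative by central differences
    dFz_dy = (F(x, y + h, z)[2] - F(x, y - h, z)[2]) / (2 * h)
    dFy_dz = (F(x, y, z + h)[1] - F(x, y, z - h)[1]) / (2 * h)
    dFx_dz = (F(x, y, z + h)[0] - F(x, y, z - h)[0]) / (2 * h)
    dFz_dx = (F(x + h, y, z)[2] - F(x - h, y, z)[2]) / (2 * h)
    dFy_dx = (F(x + h, y, z)[1] - F(x - h, y, z)[1]) / (2 * h)
    dFx_dy = (F(x, y + h, z)[0] - F(x, y - h, z)[0]) / (2 * h)
    # components of the determinant expansion of curl F
    return (dFz_dy - dFy_dz, dFx_dz - dFz_dx, dFy_dx - dFx_dy)

cx, cy, cz = curl_fd(1.5, 2.0, -0.3)
assert abs(cx - 1.5) < 1e-4   # x-component equals x
assert abs(cy + 2.0) < 1e-4   # y-component equals -y
assert abs(cz) < 1e-4         # z-component vanishes
```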
4. Laplacian Operator
The Laplacian is a scalar operator that acts on scalar or vector fields. For a scalar field \(f\), it is the divergence of the gradient:
\[ \nabla^2 f = \nabla \cdot \nabla f \]
In Cartesian coordinates: \[ \nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2} \]
This operator shows up in the heat equation, wave equation, and Schrödinger equation.
Example
For \(f = x^2 + y^2\): \[ \nabla^2 f = \frac{\partial^2}{\partial x^2}(x^2 + y^2) + \frac{\partial^2}{\partial y^2}(x^2 + y^2) = 2 + 2 = 4 \]
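Since the Laplacian of \( x^2 + y^2 \) is constant, a second-difference stencil reproduces the value \(4\) at any point (the evaluation point below is arbitrary):

```python
def f(x, y):
    # the example field f = x^2 + y^2
    return x * x + y * y

def laplacian_fd(x, y, h=1e-4):
    # second partials via the central second-difference stencil
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / (h * h)
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / (h * h)
    return fxx + fyy

assert abs(laplacian_fd(0.3, -1.7) - 4.0) < 1e-4
```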
5. Vector Differentiation
Vector differentiation refers to differentiating vector-valued functions with respect to scalars or other vectors.
Derivative of Vector Functions
For a vector-valued function \(\mathbf{r}(t)\): \[ \mathbf{r}(t) = x(t)\mathbf{\hat{i}} + y(t)\mathbf{\hat{j}} + z(t)\mathbf{\hat{k}} \] The derivative is: \[ \frac{d\mathbf{r}}{dt} = \frac{dx}{dt}\mathbf{\hat{i}} + \frac{dy}{dt}\mathbf{\hat{j}} + \frac{dz}{dt}\mathbf{\hat{k}} \]
Directional Derivatives
The directional derivative of a scalar field \(f\) in the direction of a unit vector \(\mathbf{u}\) is: \[ \frac{\partial f}{\partial \mathbf{u}} = \nabla f \cdot \mathbf{u} \] This gives the rate of change of \(f\) in the direction of \(\mathbf{u}\).
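A numerical check that \( \nabla f \cdot \mathbf{u} \) matches a difference quotient taken along \( \mathbf{u} \), using the arbitrary sample field \( f(x, y) = x^2 y \) and the unit vector \( (3/5, 4/5) \):

```python
def f(x, y):
    # an arbitrary sample field for illustration
    return x * x * y

def grad_f(x, y):
    # analytic gradient of f: (2xy, x^2)
    return (2 * x * y, x * x)

x0, y0 = 1.0, 2.0
u = (3 / 5, 4 / 5)  # a unit vector

# directional derivative = grad(f) . u
analytic = grad_f(x0, y0)[0] * u[0] + grad_f(x0, y0)[1] * u[1]

# difference quotient along u approximates the same quantity
h = 1e-6
numeric = (f(x0 + h * u[0], y0 + h * u[1])
           - f(x0 - h * u[0], y0 - h * u[1])) / (2 * h)
assert abs(analytic - numeric) < 1e-4
print(analytic)  # ≈ 3.2 at this point
```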
Gradient and Tangent Vectors
The gradient is perpendicular to level surfaces (contours) of \(f\). Tangent vectors lie in the plane tangent to those surfaces.
Physics 227: Gradients, Lagrange Multipliers, and Hessians
1. Gradients
The gradient of a scalar function \(f\) is a vector that points in the direction of the steepest increase of \(f\). It contains the partial derivatives with respect to each variable.
For a function \(f(x, y, z)\), the gradient is:
\[ \nabla f = \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z}\right) \]
The gradient is used to find critical points (where \(\nabla f = 0\)) and in defining directional derivatives, flux, and many other physical quantities.
2. Lagrange Multipliers
Lagrange multipliers are used to find extrema of a scalar function subject to one or more constraints.
The method works by enforcing that at the constrained extrema, the gradient of the objective function must be parallel to the gradient of the constraint function.
For maximizing \(f(x, y, z)\) subject to a constraint \(g(x, y, z) = 0\), the condition is:
\[ \nabla f = \lambda \nabla g \]
This produces a system of equations to solve for the variables \(x, y, z\) and the Lagrange multiplier \(\lambda\). Combined with the constraint \(g(x, y, z) = 0\), this system fully determines the extrema.
This method generalizes to multiple constraints \(g_1, g_2, \ldots\), where:
\[ \nabla f = \lambda_1 \nabla g_1 + \lambda_2 \nabla g_2 + \cdots \]
Example
Maximize \(f(x, y) = x^2 + y^2\) subject to the constraint \(x^2 + y^2 = 1\).
The gradient of the objective: \[ \nabla f = (2x, 2y) \] The gradient of the constraint: \[ \nabla g = (2x, 2y) \] The condition \( \nabla f = \lambda \nabla g \) gives: \[ 2x = 2\lambda x, \quad 2y = 2\lambda y \] so \( \lambda = 1 \) wherever \( x \) and \( y \) are nonzero. Because the objective and the constraint have identical gradients here, every point of the circle satisfies the condition: \( f \equiv 1 \) on the constraint set, so this example is degenerate, and every feasible point is simultaneously a maximum and a minimum. It illustrates that the multiplier condition only produces candidates, which must still be compared.
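Because the example above is degenerate, a non-degenerate variant may be clearer; the objective \( f = x + y \) on the same circle is an assumption chosen for illustration. The Lagrange system forces \( x = y \), and the candidate \( (1/\sqrt{2}, 1/\sqrt{2}) \) can be verified directly:

```python
import math

# Illustrative variant (not the example above): maximize f = x + y subject to
# g = x^2 + y^2 - 1 = 0.  The Lagrange system 1 = 2*lam*x, 1 = 2*lam*y
# forces x = y, so the constraint gives x = y = ±1/sqrt(2).
x = y = 1 / math.sqrt(2)
lam = 1 / (2 * x)

# grad f = lam * grad g at the candidate point
assert abs(1 - lam * 2 * x) < 1e-12 and abs(1 - lam * 2 * y) < 1e-12
# the constraint is satisfied
assert abs(x * x + y * y - 1) < 1e-12

# f at the candidate beats f at sampled points on the circle
fmax = x + y
for k in range(720):
    t = math.pi * k / 360
    assert math.cos(t) + math.sin(t) <= fmax + 1e-12
print(fmax)  # ≈ sqrt(2)
```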
3. Hessian Matrices
The Hessian matrix describes the second-order partial derivatives of a scalar function. It is used to assess the local curvature near a critical point, helping classify whether the point is a minimum, maximum, or saddle point.
For a scalar function \(f(x, y)\), the Hessian matrix is:
\[ H = \begin{bmatrix} \frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \partial y} \\ \frac{\partial^2 f}{\partial y \partial x} & \frac{\partial^2 f}{\partial y^2} \end{bmatrix} \]
In general for \(f(x_1, x_2, \ldots, x_n)\), the Hessian is:
\[ H_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j} \]
Classification Using the Hessian
- If the Hessian is positive definite (all eigenvalues positive), the point is a local minimum.
- If the Hessian is negative definite (all eigenvalues negative), the point is a local maximum.
- If the Hessian has mixed-sign eigenvalues, the point is a saddle point.
Example
For \(f(x, y) = x^2 + y^2\), the Hessian is:
\[ H = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix} \]
The eigenvalues are both \(2\), so the Hessian is positive definite, confirming a local minimum at \((0,0)\).
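The classification rules can be exercised numerically: build the Hessian by finite differences and use the closed-form eigenvalues of a symmetric \( 2 \times 2 \) matrix. The three sample functions are arbitrary illustrations of the three cases:

```python
import math

def hessian_fd(f, x, y, h=1e-4):
    # finite-difference Hessian of a scalar function of two variables
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / (h * h)
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / (h * h)
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h * h)
    return [[fxx, fxy], [fxy, fyy]]

def classify(H):
    # eigenvalues of a symmetric 2x2 matrix in closed form
    a, b, d = H[0][0], H[0][1], H[1][1]
    m = (a + d) / 2
    r = math.sqrt(((a - d) / 2) ** 2 + b * b)
    lo, hi = m - r, m + r
    if lo > 0:
        return "local minimum"
    if hi < 0:
        return "local maximum"
    return "saddle point"

assert classify(hessian_fd(lambda x, y: x * x + y * y, 0.0, 0.0)) == "local minimum"
assert classify(hessian_fd(lambda x, y: -x * x - y * y, 0.0, 0.0)) == "local maximum"
assert classify(hessian_fd(lambda x, y: x * x - y * y, 0.0, 0.0)) == "saddle point"
```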
Mega Mega Review for Every Single Topic on the UW Math 207 Midterm 2 Because I Slept Through Most of That Class And Now I Need to Cram Everything In a Few Days

Section 3.1: Second-Order Linear Differential Equations
The general form of a second-order linear differential equation is:
\[ a y'' + b y' + c y = 0 \]
We solve this by finding the characteristic equation:
\[ ar^2 + br + c = 0 \]
Example: Solve \( y'' - 3y' + 2y = 0 \)
Characteristic equation: \( r^2 - 3r + 2 = 0 \), roots are \( r = 1,2 \), so:
\[ y(t) = C_1 e^t + C_2 e^{2t} \]
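A residual check of this example (with the arbitrary choice \( C_1 = C_2 = 1 \)), approximating the derivatives by central differences:

```python
import math

def y(t):
    # one member of the general solution family (C1 = C2 = 1)
    return math.exp(t) + math.exp(2 * t)

def residual(t, h=1e-4):
    # y'' - 3y' + 2y via central differences; should vanish
    y1 = (y(t + h) - y(t - h)) / (2 * h)
    y2 = (y(t + h) - 2 * y(t) + y(t - h)) / (h * h)
    return y2 - 3 * y1 + 2 * y(t)

for t in (-1.0, 0.0, 0.5, 1.0):
    assert abs(residual(t)) < 1e-4
```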
Section 3.3: Complex Roots
If the characteristic equation has complex roots \( r = \lambda \pm i\mu \), the general solution is:
\[ y(t) = e^{\lambda t} (C_1 \cos(\mu t) + C_2 \sin(\mu t)) \]
Example: Solve \( y'' + y = 0 \).
The characteristic equation is:
\[ r^2 + 1 = 0 \]
Solving for \( r \):
\[ r = \pm i \]
Since the roots are purely imaginary (\( \lambda = 0, \mu = 1 \)), the solution is:
\[ y(t) = C_1 \cos t + C_2 \sin t \]
Interpretation:
- The solution represents oscillatory motion.
- The frequency of oscillation is given by \( \omega = \mu = 1 \).
- If \( \lambda \neq 0 \), the function would have an exponential growth or decay component.
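The oscillatory behavior can be verified the same way, with \( C_1 = C_2 = 1 \) as an arbitrary choice: the residual of \( y'' + y \) vanishes, and the solution repeats with period \( 2\pi \), consistent with \( \omega = 1 \).

```python
import math

def y(t):
    # C1 = C2 = 1: y = cos t + sin t
    return math.cos(t) + math.sin(t)

def residual(t, h=1e-4):
    # y'' + y via a central second difference; should vanish
    y2 = (y(t + h) - 2 * y(t) + y(t - h)) / (h * h)
    return y2 + y(t)

for t in (0.0, 0.7, 2.0, 5.0):
    assert abs(residual(t)) < 1e-4

# the solution repeats with period 2*pi, consistent with omega = 1
assert abs(y(1.3 + 2 * math.pi) - y(1.3)) < 1e-9
```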
Section 3.4: Reduction of Order
Reduction of order is useful when we already have one known solution \( y_1(t) \) to a second-order differential equation and need to find a second, linearly independent solution \( y_2(t) \).
We assume the second solution has the form:
\[ y_2(t) = v(t) y_1(t) \]
Substituting this into the differential equation and simplifying leads to a first-order equation for \( v'(t) \), which we can solve.
Example: Solve \( y'' - y = 0 \) given that \( y_1 = e^t \).
- Let \( y_2 = v e^t \), differentiate and substitute into the equation.
- After simplification, solve for \( v(t) \), then determine \( y_2(t) \).
- Final solution: \( y(t) = C_1 e^t + C_2 e^{-t} \).
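A sketch verifying the outcome of this example: substituting \( y_2 = v e^t \) into \( y'' - y = 0 \) leaves \( v'' + 2v' = 0 \), so \( v \) is proportional to \( e^{-2t} \) and \( y_2 \) is proportional to \( e^{-t} \). The code checks that \( e^{-t} \) solves the equation and that the Wronskian with \( y_1 = e^t \) is nonzero:

```python
import math

def y1(t):
    return math.exp(t)

def y2(t):
    # second solution produced by reduction of order
    return math.exp(-t)

def residual(y, t, h=1e-4):
    # y'' - y via central differences; should vanish for a solution
    return (y(t + h) - 2 * y(t) + y(t - h)) / (h * h) - y(t)

for t in (-1.0, 0.0, 1.0):
    assert abs(residual(y2, t)) < 1e-4

# Wronskian y1*y2' - y1'*y2 = -2 != 0, so the solutions are independent
# (using y2'(t) = -e^{-t}, i.e. y2'(0) = -y2(0))
W = y1(0.0) * (-y2(0.0)) - y1(0.0) * y2(0.0)
assert abs(W + 2.0) < 1e-12
```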
Section 3.5: Undetermined Coefficients
The method of undetermined coefficients is used to find particular solutions to nonhomogeneous differential equations:
\[ a y'' + b y' + c y = g(t) \]
We guess a form for \( y_p(t) \) that mirrors \( g(t) \), with unknown coefficients, substitute it into the equation, and solve for those coefficients.
| Form of \( g(t) \) | Guess for \( y_p(t) \) |
|---|---|
| \( P_n(t) \) (polynomial of degree \( n \)) | \( A_n t^n + A_{n-1} t^{n-1} + \dots + A_0 \) |
| \( e^{at} \) | \( A e^{at} \) |
| \( P_n(t) e^{at} \) | \( (A_n t^n + \dots + A_0) e^{at} \) |
| \( \cos(bt) \) or \( \sin(bt) \) | \( A \cos(bt) + B \sin(bt) \) |

If the guess duplicates a solution of the homogeneous equation, multiply it by \( t \) (or by \( t^2 \) for a repeated root) before matching coefficients.
Example: Solve \( y'' - 3y' + 2y = e^t \).
- Since \( e^t \) already solves the homogeneous equation (\( r = 1 \) is a root of \( r^2 - 3r + 2 = 0 \)), the guess \( A e^t \) fails; guess \( y_p = A t e^t \) instead.
- Substituting gives \( y_p'' - 3y_p' + 2y_p = -A e^t \), so \( A = -1 \).
- Final solution: \( y(t) = C_1 e^t + C_2 e^{2t} - t e^t \).
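A numerical residual check makes the resonance visible: applying the operator to \( e^t \) returns zero (it solves the homogeneous equation, so \( A e^t \) can never produce \( e^t \)), while the guess \( y_p = -t e^t \) produces exactly \( e^t \):

```python
import math

def L(y, t, h=1e-4):
    # the operator y'' - 3y' + 2y via central differences
    y1 = (y(t + h) - y(t - h)) / (2 * h)
    y2 = (y(t + h) - 2 * y(t) + y(t - h)) / (h * h)
    return y2 - 3 * y1 + 2 * y(t)

# e^t is a homogeneous solution: the operator annihilates it
assert abs(L(math.exp, 1.0)) < 1e-3

# the resonant guess y_p = -t*e^t satisfies L[y_p] = e^t
yp = lambda t: -t * math.exp(t)
for t in (-1.0, 0.0, 1.0):
    assert abs(L(yp, t) - math.exp(t)) < 1e-3
```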
Section 3.7: Mechanical Vibrations
For \( m y'' + \gamma y' + k y = 0 \), the roots of the characteristic equation \( m r^2 + \gamma r + k = 0 \) determine the behavior: oscillations occur when \( \gamma^2 < 4mk \) (underdamping).
Example: \( 2y'' + 2y' + 8y = 0 \) has characteristic roots \( r = -\frac{1}{2} \pm \frac{\sqrt{15}}{2} i \), giving damped oscillations:
\[ y(t) = e^{-t/2} \left( C_1 \cos\left(\frac{\sqrt{15}}{2} t\right) + C_2 \sin\left(\frac{\sqrt{15}}{2} t\right) \right) \]
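A final check of this damped solution (taking \( C_1 = 1, C_2 = 0 \) as an arbitrary choice): the residual of \( 2y'' + 2y' + 8y \) vanishes, and over one period \( T \) of the cosine factor the amplitude scales by \( e^{-T/2} \):

```python
import math

W = math.sqrt(15) / 2  # damped angular frequency

def y(t):
    # C1 = 1, C2 = 0
    return math.exp(-t / 2) * math.cos(W * t)

def residual(t, h=1e-4):
    # 2y'' + 2y' + 8y via central differences; should vanish
    y1 = (y(t + h) - y(t - h)) / (2 * h)
    y2 = (y(t + h) - 2 * y(t) + y(t - h)) / (h * h)
    return 2 * y2 + 2 * y1 + 8 * y(t)

for t in (0.0, 1.0, 3.0):
    assert abs(residual(t)) < 1e-3

# after one period of the cosine factor, the amplitude scales by exp(-T/2)
T = 2 * math.pi / W
assert abs(y(T) / y(0.0) - math.exp(-T / 2)) < 1e-9
```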