For a scalar field \(f(x, y, z)\), the gradient is:
\[
\nabla f = \frac{\partial f}{\partial x} \mathbf{\hat{i}} + \frac{\partial f}{\partial y} \mathbf{\hat{j}} + \frac{\partial f}{\partial z} \mathbf{\hat{k}}
\]
The gradient points in the direction of greatest increase of \(f\), and its magnitude gives the rate of change in that direction.
2. Divergence Operator
The divergence operator acts on vector fields, producing a scalar field that measures how much the vector field spreads out from a point.
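In Cartesian coordinates, for a vector field \(\mathbf{F} = F_x \mathbf{\hat{i}} + F_y \mathbf{\hat{j}} + F_z \mathbf{\hat{k}}\):
\[
\nabla \cdot \mathbf{F} = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z}
\]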
Vector Differentiation
Vector differentiation refers to differentiating vector-valued functions with respect to scalars or other vectors.
Derivative of Vector Functions
For a vector-valued function \(\mathbf{r}(t)\):
\[
\mathbf{r}(t) = x(t)\mathbf{\hat{i}} + y(t)\mathbf{\hat{j}} + z(t)\mathbf{\hat{k}}
\]
The derivative is:
\[
\frac{d\mathbf{r}}{dt} = \frac{dx}{dt}\mathbf{\hat{i}} + \frac{dy}{dt}\mathbf{\hat{j}} + \frac{dz}{dt}\mathbf{\hat{k}}
\]
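For example, for the helix \( \mathbf{r}(t) = \cos t\, \mathbf{\hat{i}} + \sin t\, \mathbf{\hat{j}} + t\, \mathbf{\hat{k}} \):
\[
\frac{d\mathbf{r}}{dt} = -\sin t\, \mathbf{\hat{i}} + \cos t\, \mathbf{\hat{j}} + \mathbf{\hat{k}}
\]
which is the tangent vector to the curve at each point.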
Directional Derivatives
The directional derivative of a scalar field \(f\) in the direction of a unit vector \(\mathbf{u}\) is:
\[
\frac{\partial f}{\partial \mathbf{u}} = \nabla f \cdot \mathbf{u}
\]
This gives the rate of change of \(f\) in the direction of \(\mathbf{u}\).
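For example, with \(f(x, y) = x^2 + y^2\) and \(\mathbf{u} = \left(\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}\right)\) at the point \((1, 0)\):
\[
\nabla f = (2x, 2y) = (2, 0), \qquad \frac{\partial f}{\partial \mathbf{u}} = (2, 0) \cdot \left(\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}\right) = \sqrt{2}
\]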
Gradient and Tangent Vectors
The gradient is perpendicular to level surfaces (contours) of \(f\). Tangent vectors lie in the plane tangent to those surfaces.
Physics 227: Gradients, Lagrange Multipliers, and Hessians
1. Gradients
The gradient of a scalar function \(f\) is a vector that points in the direction of the steepest increase of \(f\). It contains the partial derivatives with respect to each variable.
The gradient is used to find critical points (where \(\nabla f = 0\)) and in defining directional derivatives, flux, and many other physical quantities.
2. Lagrange Multipliers
Lagrange multipliers are used to find extrema of a scalar function subject to one or more constraints.
The method works by enforcing that at the constrained extrema, the gradient of the objective function must be parallel to the gradient of the constraint function.
For maximizing \(f(x, y, z)\) subject to a constraint \(g(x, y, z) = 0\), the condition is:
\[
\nabla f = \lambda \nabla g
\]
This produces a system of equations to solve for the variables \(x, y, z\) and the Lagrange multiplier \(\lambda\). Combined with the constraint \(g(x, y, z) = 0\), this system fully determines the extrema.
This method generalizes to multiple constraints \(g_1, g_2, \ldots\), where:
\[
\nabla f = \lambda_1 \nabla g_1 + \lambda_2 \nabla g_2 + \cdots
\]
Example: Maximize \(f(x, y) = x^2 + y^2\) subject to the constraint \(x^2 + y^2 = 1\).
The gradient of the objective:
\[
\nabla f = (2x, 2y)
\]
The gradient of the constraint:
\[
\nabla g = (2x, 2y)
\]
The condition:
\[
\nabla f = \lambda \nabla g
\]
This leads to:
\[
2x = \lambda 2x,\quad 2y = \lambda 2y
\]
If \(x\) and \(y\) are not both zero, \(\lambda = 1\) and the condition holds at every point of the circle. Since \(f = x^2 + y^2 = 1\) everywhere on the constraint, every point of \(x^2 + y^2 = 1\) is an extremum: the example is degenerate because the objective coincides with the constraint.
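For a less degenerate illustration, consider maximizing \( f(x, y) = x + y \) on the same circle. The condition \( \nabla f = \lambda \nabla g \) gives \( (1, 1) = \lambda(2x, 2y) \), so \( x = y = \frac{1}{2\lambda} \); substituting into \( x^2 + y^2 = 1 \) yields \( x = y = \pm\frac{1}{\sqrt{2}} \), and the maximum is \( f = \sqrt{2} \).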
3. Hessian Matrices
The Hessian matrix describes the second-order partial derivatives of a scalar function. It is used to assess the local curvature near a critical point, helping classify whether the point is a minimum, maximum, or saddle point.
For a scalar function \(f(x, y)\), the Hessian matrix is:
\[
H =
\begin{bmatrix}
\frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \partial y} \\
\frac{\partial^2 f}{\partial y \partial x} & \frac{\partial^2 f}{\partial y^2}
\end{bmatrix}
\]
In general, for \(f(x_1, x_2, \ldots, x_n)\), the Hessian is the \(n \times n\) matrix with entries
\[
H_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j}
\]
Example: for \(f(x, y) = x^2 + y^2\), the Hessian is
\[
H = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}
\]
The eigenvalues are both \(2\), so the Hessian is positive definite, confirming a local minimum at \((0,0)\). More generally, all positive eigenvalues indicate a local minimum, all negative eigenvalues a local maximum, and mixed signs a saddle point.
Mega Mega Review for Every Single Topic on the UW Math 207 Midterm 2 Because I Slept Through Most of That Class And Now I Need to Cram Everything In a Few Days
Section 3.1: Second-Order Linear Differential Equations
The general form of a second-order linear homogeneous differential equation with constant coefficients is:
\[ a y'' + b y' + c y = 0 \]
We solve this by substituting \( y = e^{rt} \), which gives the characteristic equation:
\[ ar^2 + br + c = 0 \]
Example: Solve \( y'' - 3y' + 2y = 0 \)
Characteristic equation: \( r^2 - 3r + 2 = 0 \), roots are \( r = 1,2 \), so:
\[ y(t) = C_1 e^t + C_2 e^{2t} \]
Section 3.3: Complex Roots
If the characteristic equation has complex roots \( r = \lambda \pm i\mu \), the general solution is:
\[ y(t) = e^{\lambda t}\left(C_1 \cos(\mu t) + C_2 \sin(\mu t)\right) \]
Example: Solve \( y'' + y = 0 \). The characteristic equation is \( r^2 + 1 = 0 \), with roots \( r = \pm i \). Since the roots are purely imaginary (\( \lambda = 0, \mu = 1 \)), the solution is:
\[ y(t) = C_1 \cos t + C_2 \sin t \]
Interpretation:
The solution represents oscillatory motion.
The frequency of oscillation is given by \( \omega = \mu = 1 \).
If \( \lambda \neq 0 \), the function would have an exponential growth or decay component.
Section 3.4: Reduction of Order
Reduction of order is useful when we already have one known solution \( y_1(t) \) to a second-order differential equation and need to find a second, linearly independent solution \( y_2(t) \).
We assume the second solution has the form:
\[ y_2(t) = v(t) y_1(t) \]
Substituting this into the differential equation and simplifying leads to a first-order equation for \( v'(t) \), which we can solve.
Example: Solve \( y'' - y = 0 \) given that \( y_1 = e^t \).
Let \( y_2 = v e^t \), differentiate and substitute into the equation.
After simplification, solve for \( v(t) \), then determine \( y_2(t) \).
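Carrying the example through: with \( y_2 = v e^t \), we get \( y_2'' - y_2 = (v'' + 2v')e^t \), so the equation reduces to:
\[
v'' + 2v' = 0 \;\Rightarrow\; v' = C e^{-2t} \;\Rightarrow\; v(t) = -\tfrac{C}{2} e^{-2t}
\]
Choosing the constant so that \( v = e^{-2t} \) gives \( y_2(t) = e^{-t} \), which is linearly independent of \( y_1 = e^t \).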
Section 3.5: Method of Undetermined Coefficients
The method of undetermined coefficients is used to find particular solutions to nonhomogeneous differential equations:
\[ a y'' + b y' + c y = g(t) \]
We guess a form for \( y_p \) based on \( g(t) \), with unknown coefficients, plug it into the equation, and solve for those coefficients. Common guesses:
| Form of \( g(t) \) | Guess for \( y_p(t) \) |
| --- | --- |
| \( P_n(t) \) (polynomial of degree \( n \)) | \( A_n t^n + A_{n-1} t^{n-1} + \dots + A_0 \) |
| \( e^{at} \) | \( A e^{at} \) |
| \( P_n(t) e^{at} \) | \( (A_n t^n + \dots + A_0) e^{at} \) |
| \( \cos(bt) \) or \( \sin(bt) \) | \( A \cos(bt) + B \sin(bt) \) |
Example: Solve \( y'' - 3y' + 2y = e^t \).
Since \( e^t \) already solves the homogeneous equation (the characteristic roots are \( r = 1, 2 \)), the naive guess \( y_p = A e^t \) fails; multiply by \( t \) and guess \( y_p = A t e^t \) instead, then substitute into the equation.
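With \( y_p = A t e^t \), we have \( y_p' = A(t+1)e^t \) and \( y_p'' = A(t+2)e^t \), so:
\[
y_p'' - 3y_p' + 2y_p = A e^t \big[ (t+2) - 3(t+1) + 2t \big] = -A e^t = e^t
\]
giving \( A = -1 \) and \( y_p(t) = -t e^t \).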
here's a practice exam i made based on previous exams i could find and the homework questions from this unit, since there aren't many review materials available. solutions are under the arrow below.
Linear Algebra & Vector Spaces
1. Linear Functions: Let \( f(x, y, z) = ax + by + cz + d \). For what values of \( a, b, c, d \) is \( f \) a linear function? Prove your answer.
2. Linear Independence: Consider the vectors in \( \mathbb{R}^5 \):
\[
v_1 = (1,2,3,4,5), \quad v_2 = (2,1,0,-1,-2), \quad v_3 = (3,2,1,0,-1), \quad v_4 = (1,1,1,1,1)
\]
Determine how many of them are linearly independent.
3. Orthonormal Basis: Given the vectors
\[
a_1 = (1, 1, 0), \quad a_2 = (-1, 2, 1), \quad a_3 = (2, -1, 3)
\]
use the Gram-Schmidt process to construct an orthonormal basis.
Matrix Transformations & Rotations
4. Rotation Matrix: Derive the \( 3 \times 3 \) matrix that represents a counterclockwise rotation by \( \theta \) around the z-axis.
5. Matrix Operations: Let \( M \) be the \( 3 \times 3 \) matrix
\[
M = \begin{bmatrix} 1 & 2 & 0 \\ -1 & 1 & 3 \\ 2 & -2 & 4 \end{bmatrix}
\]
Find a matrix \( R \) such that \( RM \) swaps the second and third rows of \( M \).
Inner Products & Norms
6. Inner Product Proof: Given the inner product for matrices:
\[
\langle A| B \rangle = \text{tr}(A^T B)
\]
Show that \( \langle A| A \rangle \geq 0 \) and that \( \langle A| A \rangle = 0 \) if and only if \( A \) is the zero matrix.
7. Abstract Angle: Let
\[
A = \begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}
\]
Find the "abstract angle" between these matrices using the inner product definition.
Vector Calculus & Geometry
8. Parallelogram Diagonal Theorem: Prove, using only vector addition, subtraction, and scalar multiplication, that the diagonals of a parallelogram bisect each other.
9. Distance from a Point to a Line: A line is given parametrically as
\[
\mathbf{r} = (1,2,3) + \lambda (4,-2,1)
\]
Find the shortest distance from the point \( P = (0, 1, 2) \) to this line.
Determinants, Eigenvalues, and Diagonalization
10. Determinant Calculation: Compute the determinant of the \( 4 \times 4 \) matrix
\[
\begin{bmatrix} 1 & 0 & 2 & 3 \\ 4 & 5 & 6 & 0 \\ 7 & 8 & 9 & 1 \\ 0 & 2 & 1 & 4 \end{bmatrix}
\]
11. Diagonalization: Let
\[
A = \begin{bmatrix} 3 & 1 \\ 1 & 3 \end{bmatrix}
\]
Find its eigenvalues and an orthonormal basis of eigenvectors.
Commutators & Special Matrices
12. Matrix Commutator: Let
\[
A = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}
\]
Compute \( [A, B] = AB - BA \).
13. Reflection & Rotation: Find the \( 2 \times 2 \) matrix that first rotates a vector by \( \frac{\pi}{6} \) counterclockwise and then reflects it across the x-axis.
Pauli Matrices & Quantum Mechanics
14. Pauli Matrices: Show that the Pauli matrices
\[
\sigma_1 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \quad \sigma_2 = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}, \quad \sigma_3 = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}
\]
satisfy \( \sigma_i^2 = I \) and that their commutators satisfy
\[
[\sigma_i, \sigma_j] = 2i \varepsilon_{ijk} \sigma_k
\]
where \( \varepsilon_{ijk} \) is the Levi-Civita symbol.
15. Exponential of a Matrix: Compute \( e^{i \theta \sigma_3} \) in terms of sine and cosine functions.
☆ How to find the solution for second-order ODEs
For example, consider the equation:
\[
y'' - 11y' + 30y = 0
\]
Step 1: Characteristic Equation
\[
r^2 - 11r + 30 = 0
\]
Factoring:
\[
(r - 5)(r - 6) = 0
\]
Solving for \( r \):
\[
r = 5, \quad r = 6
\]
Thus, the general solution is:
\[
y(t) = C_1 e^{5t} + C_2 e^{6t}
\]
Step 2: Apply Initial Conditions
Given \( y(0) = 0 \) and \( y'(0) = 2 \), we solve for \( C_1 \) and \( C_2 \):
\[
C_1 + C_2 = 0
\]
\[
5C_1 + 6C_2 = 2
\]
Solving, we get:
\[
C_1 = -2, \quad C_2 = 2
\]
Thus, the unique solution is:
\[
y(t) = -2e^{5t} + 2e^{6t}
\]
How to Characterize Any Type of ODE
1. Order of the ODE
The order of a differential equation is the order of the highest derivative of \(y(x)\) or \(y(t)\) that appears in the equation.
First-order: The highest derivative is \(y'\) (or \(dy/dx\)).
Example: \( y' + 2y = 0 \)
Counterexample: \( y'' + y = 0 \) (This is second-order).
Second-order: The highest derivative is \(y''\) (or \(d^2y/dx^2\)).
Example: \( y'' + 5y' - 3 = 0 \)
Counterexample: \( y' + y = 0 \) (This is first-order).
tldr: Look at the highest derivative. If it’s \(y''\), it’s second-order. If it’s just \(y'\), it’s first-order.
2. Linearity
A differential equation is linear if the dependent variable \(y\) and its derivatives appear to the first power and are not multiplied or divided by each other.
Linear ODE: The equation can be written in the form:
\[
a_n(x) y^{(n)} + a_{n-1}(x) y^{(n-1)} + \dots + a_1(x) y' + a_0(x) y = g(x)
\]
where the coefficients \(a_i(x)\) are functions of \(x\) alone, and \(y\) and its derivatives are not raised to any powers or multiplied together.
Example: \( y'' + 2y' - 3y = 0 \)
Counterexample: \( y' + y^2 = 0 \) (Here, \(y^2\) makes it nonlinear).
tldr: If \(y\) and its derivatives are only raised to the first power, and they aren't multiplied/divided by each other, it's linear. If there’s any term like \(y^2\), \(y' \cdot y\), etc., it's nonlinear.
3. Homogeneity
A differential equation is homogeneous if every term contains the dependent variable \(y\) or its derivatives. If the equation has a term without \(y\), it's non-homogeneous.
Homogeneous ODE: All terms involve \(y\) or its derivatives, and the equation equals zero.
Example: \( y'' + 2y' - 3y = 0 \)
Counterexample: \( y'' + 2y' - 3y = e^x \) (The \(e^x\) term makes it non-homogeneous).
tldr: If there’s a constant term (like a number) or any term without \(y\) or its derivatives, it’s non-homogeneous. If everything involves \(y\), it’s homogeneous.
4. Separable Variables
An ODE is separable if you can separate the variables (i.e., \(y\) and \(x\)) onto opposite sides of the equation.
Separable ODE: You can rewrite the equation so that one side has only terms with \(y\) and the other side only has terms with \(x\).
Example: \( \frac{dy}{dx} = \frac{1}{x} \cdot y \), which can be rewritten as:
\[
\frac{1}{y} \, dy = \frac{1}{x} \, dx
\]
Integrating both sides gives \( \ln|y| = \ln|x| + C \), so \( y = Cx \) (with a new constant \( C \)).
Counterexample: \( y' = x + y \) (The right-hand side cannot be factored into a function of \(x\) times a function of \(y\)).
tldr: If you can rearrange the equation so all \(y\)-terms are on one side and \(x\)-terms are on the other side, it's separable. If it’s impossible to do so, it’s not.
5. Autonomous ODEs
An ODE is autonomous if the independent variable \(x\) does not explicitly appear in the equation (it only involves \(y\) and its derivatives).
Autonomous ODE: The equation has no explicit \(x\)-term.
Example: \( y'' = y \) (Only involves \(y\) and its derivatives, not \(x\)).
Counterexample: \( y'' + y = x \) (Here, the \(x\)-term is present explicitly).
tldr: If the equation only involves \(y\) (and possibly its derivatives) but not \(x\), it's autonomous.
Summary of Key Characteristics

| Characteristic | What it means | Example | Counterexample |
| --- | --- | --- | --- |
| Order | Highest derivative | \( y' + 2y = 0 \) (first-order) | \( y'' + y = 0 \) (second-order) |
| Linear vs Nonlinear | Linear = no powers or products of \(y\), \(y'\) | \( y'' + 10y' + 106y = 0 \) (linear) | \( y' + y^2 = 0 \) (nonlinear) |
| Homogeneous vs Nonhomogeneous | Homogeneous = no term without \(y\) or its derivatives | \( y'' + 2y' - 3y = 0 \) (homogeneous) | \( y'' + 2y' - 3y = e^x \) (nonhomogeneous) |
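Putting it all together, classify \( y'' + t y' = \sin t \): the highest derivative is \( y'' \), so it is second-order; \( y \) and its derivatives appear only to the first power, so it is linear; the \( \sin t \) term contains no \( y \), so it is nonhomogeneous; and \( t \) appears explicitly, so it is not autonomous.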
Conway's Game of Life!
One of my favorite examples of cellular automata is Conway's Game of Life. Four simple rules on a grid create complex systems (oscillators, glider guns, even a general-purpose computer). It's a zero-player game, meaning that the gameplay is determined solely by its initial state.
The four rules are as follows:
1. A live cell dies if it has fewer than two live neighbors.
2. A live cell with two or three live neighbors lives on to the next generation.
3. A live cell with more than three live neighbors dies.
4. A dead cell will be brought back to life if it has exactly three live neighbors.
Here is a quick simulation I made in JavaScript. You can manually click some of the boxes to create the starting conditions, or you can just click randomize.
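The heart of a simulation like this is the update step. Here is a minimal sketch of that logic (not the embedded demo's actual code; the 2D-array grid representation is an assumption):

```javascript
// Compute one generation of Conway's Game of Life.
// `grid` is assumed to be a 2D array of 0s (dead) and 1s (alive).
function step(grid) {
  const rows = grid.length;
  const cols = grid[0].length;
  const next = grid.map(row => row.slice()); // copy, so counting uses the old state

  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      // Count the 8 neighbors, skipping cells off the edge of the grid.
      let neighbors = 0;
      for (let dr = -1; dr <= 1; dr++) {
        for (let dc = -1; dc <= 1; dc++) {
          if (dr === 0 && dc === 0) continue;
          const nr = r + dr, nc = c + dc;
          if (nr >= 0 && nr < rows && nc >= 0 && nc < cols) {
            neighbors += grid[nr][nc];
          }
        }
      }
      if (grid[r][c] === 1) {
        // Rules 1-3: a live cell survives only with two or three live neighbors.
        next[r][c] = (neighbors === 2 || neighbors === 3) ? 1 : 0;
      } else {
        // Rule 4: a dead cell with exactly three live neighbors comes alive.
        next[r][c] = (neighbors === 3) ? 1 : 0;
      }
    }
  }
  return next;
}
```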