Problem. Let be positive real numbers such that
Prove that:
Solution.
Let us define the function , which is convex for , since:
Applying Jensen's inequality for the convex function , we get:
From the given condition:
we can substitute . Therefore,
It now suffices to prove that:
because then:
To prove this, observe the identity:
which holds for all real numbers . Applying this to our condition:
we get:
Thus,
as desired.
Equality holds when , since:
Phys 227 Practice Exam 2
---
**Problem 1: Divergence Theorem in Cylindrical Coordinates**
Let and let be the cylindrical region defined by , , and . Verify the Divergence Theorem by computing both:
(a) The volume integral
(b) The flux integral over the closed surface of the cylinder.
---
**Problem 2: Diagonalization of a Matrix**
Consider the matrix:
(a) Compute the eigenvalues of .
(b) Find a basis of eigenvectors and construct the diagonalization .
---
**Problem 3: Vector Calculus and Stokes’ Theorem**
Let , and let be the upper hemisphere of the sphere with the boundary curve being the equator.
(a) Compute .
(b) Compute and verify Stokes’ Theorem.
---
**Problem 4: Infinite Series and Convergence**
Determine whether the series
converges for different values of $p$, and justify your answer using the Alternating Series Test and the $p$-series test.
---
**Problem 5: Complex Numbers and Matrices**
(a) Express in polar form.
(b) Compute the determinant and inverse (if it exists) of the matrix:
Physics 227: Vector Operators and Vector Differentiation
1. Gradients as Vector Operators
The gradient operator, denoted $\nabla$, acts on scalar fields (functions of position) and produces a vector field.
For a scalar field $f(x, y, z)$, the gradient is:
$$\nabla f = \frac{\partial f}{\partial x}\,\hat{x} + \frac{\partial f}{\partial y}\,\hat{y} + \frac{\partial f}{\partial z}\,\hat{z}$$
This points in the direction of greatest increase of $f$, and its magnitude gives the slope in that direction.
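As a quick numeric sanity check, here is a minimal sketch (my own example field, not one from the notes) that approximates the gradient with central differences:

```python
# Numerical gradient of a sample scalar field via central differences.
# The field f(x, y, z) = x**2 + 3*y + z is an illustrative choice.
def grad(f, p, h=1e-6):
    """Approximate the gradient of f at point p."""
    g = []
    for i in range(len(p)):
        fwd = list(p); fwd[i] += h
        bwd = list(p); bwd[i] -= h
        g.append((f(*fwd) - f(*bwd)) / (2 * h))
    return g

f = lambda x, y, z: x**2 + 3*y + z
print(grad(f, (1.0, 0.0, 0.0)))  # analytically (2x, 3, 1) = (2, 3, 1)
```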
2. Divergence Operator
The divergence operator $\nabla\cdot$ acts on vector fields, producing a scalar field that measures how much the vector field spreads out from a point. For $\vec{F} = F_x\hat{x} + F_y\hat{y} + F_z\hat{z}$:
$$\nabla \cdot \vec{F} = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z}$$
This is a scalar field indicating the net "outflow" from a point, which is useful in fluid flow, electromagnetism, and other physical systems.
Example
For :
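A small numeric sketch of the divergence, using a hypothetical radial field of my own choosing:

```python
# Numerical divergence of the sample vector field F(x, y, z) = (x, y, z);
# its divergence is exactly 3 everywhere.
def divergence(F, p, h=1e-6):
    """Central-difference estimate of dFx/dx + dFy/dy + dFz/dz at p."""
    total = 0.0
    for i in range(3):
        fwd = list(p); fwd[i] += h
        bwd = list(p); bwd[i] -= h
        total += (F(*fwd)[i] - F(*bwd)[i]) / (2 * h)
    return total

F = lambda x, y, z: (x, y, z)
print(divergence(F, (1.0, 2.0, 3.0)))  # ≈ 3: the field "flows outward" everywhere
```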
3. Curl Operator
The curl operator $\nabla\times$ acts on vector fields, producing another vector field that describes the rotation or "circulation" of the field at each point. It can be written as a formal determinant:
$$\nabla \times \vec{F} = \begin{vmatrix} \hat{x} & \hat{y} & \hat{z} \\ \partial/\partial x & \partial/\partial y & \partial/\partial z \\ F_x & F_y & F_z \end{vmatrix}$$
This determinant expands to:
$$\nabla \times \vec{F} = \left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}\right)\hat{x} + \left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x}\right)\hat{y} + \left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right)\hat{z}$$
Example
For :
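A numeric sketch, using the classic rotating field $\vec{F} = (-y, x, 0)$ as a hypothetical example (it circulates around the z-axis, with curl $(0, 0, 2)$):

```python
# Numerical curl of the sample rotating field F(x, y, z) = (-y, x, 0).
def curl(F, p, h=1e-6):
    """Central-difference estimate of curl F at p = (x, y, z)."""
    def d(i, j):  # partial of component i with respect to coordinate j
        fwd = list(p); fwd[j] += h
        bwd = list(p); bwd[j] -= h
        return (F(*fwd)[i] - F(*bwd)[i]) / (2 * h)
    return (d(2, 1) - d(1, 2),   # dFz/dy - dFy/dz
            d(0, 2) - d(2, 0),   # dFx/dz - dFz/dx
            d(1, 0) - d(0, 1))   # dFy/dx - dFx/dy

F = lambda x, y, z: (-y, x, 0.0)
print(curl(F, (1.0, 2.0, 0.5)))  # ≈ (0, 0, 2)
```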
4. Laplacian Operator
The Laplacian $\nabla^2$ is a scalar operator that acts on scalar or vector fields. For a scalar field $f$, it is the divergence of the gradient: $\nabla^2 f = \nabla \cdot (\nabla f)$.
In Cartesian coordinates:
$$\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2}$$
This operator shows up in the heat equation, wave equation, and Schrödinger equation.
Example
For :
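A numeric check for a hypothetical field of my choosing, $f = x^2 + y^2 + z^2$, whose Laplacian is exactly 6 everywhere:

```python
# Numerical Laplacian via second differences along each axis.
def laplacian(f, p, h=1e-4):
    """Second-difference estimate of div(grad f) at p."""
    total = 0.0
    for i in range(len(p)):
        fwd = list(p); fwd[i] += h
        bwd = list(p); bwd[i] -= h
        total += (f(*fwd) - 2 * f(*p) + f(*bwd)) / h**2
    return total

f = lambda x, y, z: x**2 + y**2 + z**2
print(laplacian(f, (1.0, -2.0, 0.5)))  # ≈ 6
```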
5. Vector Differentiation
Vector differentiation refers to differentiating vector-valued functions with respect to scalars or other vectors.
Derivative of Vector Functions
For a vector-valued function $\vec{r}(t) = x(t)\,\hat{x} + y(t)\,\hat{y} + z(t)\,\hat{z}$, the derivative is:
$$\frac{d\vec{r}}{dt} = \frac{dx}{dt}\,\hat{x} + \frac{dy}{dt}\,\hat{y} + \frac{dz}{dt}\,\hat{z}$$
Directional Derivatives
The directional derivative of a scalar field $f$ in the direction of a unit vector $\hat{u}$ is:
$$D_{\hat{u}} f = \nabla f \cdot \hat{u}$$
This gives the rate of change of $f$ in the direction of $\hat{u}$.
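A short numeric sketch of the dot-product formula, on a hypothetical field $f = x^2 + y^2$ in the diagonal direction:

```python
# Directional derivative D_u f = grad f . u, estimated by central differences.
import math

def directional_derivative(f, p, u, h=1e-6):
    """Estimate grad f at p, then dot it with the unit vector u."""
    dot = 0.0
    for i in range(len(p)):
        fwd = list(p); fwd[i] += h
        bwd = list(p); bwd[i] -= h
        dot += u[i] * (f(*fwd) - f(*bwd)) / (2 * h)
    return dot

f = lambda x, y: x**2 + y**2
u = (1 / math.sqrt(2), 1 / math.sqrt(2))
# At (1, 1), grad f = (2, 2); dotting with u gives 4/sqrt(2) = 2*sqrt(2).
print(directional_derivative(f, (1.0, 1.0), u))
```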
Gradient and Tangent Vectors
The gradient $\nabla f$ is perpendicular to the level surfaces (contours) of $f$. Tangent vectors lie in the plane tangent to those surfaces.
Physics 227: Gradients, Lagrange Multipliers, and Hessians
1. Gradients
The gradient of a scalar function is a vector that points in the direction of the steepest increase of the function. It contains the partial derivatives with respect to each variable.
For a function $f(x_1, \dots, x_n)$, the gradient is:
$$\nabla f = \left(\frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n}\right)$$
The gradient is used to find critical points (where $\nabla f = 0$) and in defining directional derivatives, flux, and many other physical quantities.
2. Lagrange Multipliers
Lagrange multipliers are used to find extrema of a scalar function subject to one or more constraints.
The method works by enforcing that at the constrained extrema, the gradient of the objective function must be parallel to the gradient of the constraint function.
For maximizing $f$ subject to a constraint $g = 0$, the condition is:
$$\nabla f = \lambda \nabla g$$
This produces a system of equations to solve for the variables and the Lagrange multiplier $\lambda$. Combined with the constraint $g = 0$, this system fully determines the extrema.
This method generalizes to multiple constraints $g_1 = 0, \dots, g_k = 0$, where:
$$\nabla f = \sum_{i=1}^{k} \lambda_i \nabla g_i$$
Example
Maximize subject to the constraint .
The gradient of the objective:
The gradient of the constraint:
The condition:
This leads to:
If and are non-zero, . Combined with the constraint , this describes the full boundary.
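As a concrete illustration (a hypothetical problem of my own, not necessarily the one from lecture): maximize $f(x, y) = xy$ subject to $x^2 + y^2 = 1$. The multiplier condition $(y, x) = \lambda(2x, 2y)$ forces $x^2 = y^2$, so the maximum $1/2$ occurs at $x = y = \pm 1/\sqrt{2}$. The sketch below verifies this by brute force:

```python
# Brute-force check of the Lagrange prediction for max of x*y on the unit circle.
import math

best_val, best_pt = -float("inf"), None
for k in range(100000):
    t = 2 * math.pi * k / 100000
    x, y = math.cos(t), math.sin(t)   # points satisfying the constraint exactly
    if x * y > best_val:
        best_val, best_pt = x * y, (x, y)

print(best_val, best_pt)  # ≈ 0.5 at (±1/sqrt(2), ±1/sqrt(2))
```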
3. Hessian Matrices
The Hessian matrix describes the second-order partial derivatives of a scalar function. It is used to assess the local curvature near a critical point, helping classify whether the point is a minimum, maximum, or saddle point.
For a scalar function $f(x, y)$, the Hessian matrix is:
$$H = \begin{pmatrix} \dfrac{\partial^2 f}{\partial x^2} & \dfrac{\partial^2 f}{\partial x\,\partial y} \\[4pt] \dfrac{\partial^2 f}{\partial y\,\partial x} & \dfrac{\partial^2 f}{\partial y^2} \end{pmatrix}$$
In general, for $f(x_1, \dots, x_n)$, the Hessian is the $n \times n$ matrix with entries $H_{ij} = \dfrac{\partial^2 f}{\partial x_i\,\partial x_j}$.
Classification Using the Hessian
- If the Hessian is positive definite (all eigenvalues positive), the point is a local minimum.
- If the Hessian is negative definite (all eigenvalues negative), the point is a local maximum.
- If the Hessian has mixed-sign eigenvalues, the point is a saddle point.
Example
For $f(x, y) = x^2 + y^2$, the Hessian is:
$$H = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}$$
The eigenvalues are both $2$, so the Hessian is positive definite, confirming a local minimum at the origin.
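A quick numerical version of this classification, sketched for the sample function $f(x, y) = x^2 + y^2$ (my choice of example). For a symmetric $2\times 2$ Hessian the eigenvalues follow from the trace and determinant:

```python
# Classify a critical point from a finite-difference 2x2 Hessian.
import math

def hessian_eigenvalues(f, p, h=1e-4):
    """Finite-difference Hessian of f at p, then its two eigenvalues."""
    x, y = p
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    tr, det = fxx + fyy, fxx * fyy - fxy**2
    disc = math.sqrt(max(tr**2 - 4 * det, 0.0))  # clamp tiny negative roundoff
    return ((tr - disc) / 2, (tr + disc) / 2)

f = lambda x, y: x**2 + y**2
lo, hi = hessian_eigenvalues(f, (0.0, 0.0))
print(lo, hi)  # both ≈ 2, so positive definite: a local minimum
```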
Mega Mega Review for Every Single Topic on the UW Math 207 Midterm 2 Because I Slept Through Most of That Class And Now I Need to Cram Everything In a Few Days
Section 3.1: Second-Order Linear Differential Equations
The general form of a constant-coefficient, homogeneous second-order linear differential equation is:
$$a y'' + b y' + c y = 0$$
We solve this by finding the roots of the characteristic equation:
$$a r^2 + b r + c = 0$$
Example: Solve
Characteristic equation: , roots are , so:
Section 3.3: Complex Roots
If the characteristic equation has complex roots $r = \lambda \pm i\mu$, the general solution is:
$$y(t) = e^{\lambda t}\left(c_1 \cos(\mu t) + c_2 \sin(\mu t)\right)$$
Example: Solve .
The characteristic equation is:
Solving for :
Since the roots are purely imaginary ($\lambda = 0$), the solution is:
Interpretation:
- The solution represents oscillatory motion.
- The frequency of oscillation is given by $\mu$.
- If $\lambda \neq 0$, the solution would also have an exponential growth or decay factor $e^{\lambda t}$.
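For a self-contained illustration, here is a hypothetical equation of this type worked through (my example, with $\mu = 3$):

```latex
% Hypothetical worked example: solve y'' + 9y = 0.
\[
r^2 + 9 = 0 \quad\Longrightarrow\quad r = \pm 3i \quad (\lambda = 0,\ \mu = 3),
\]
\[
y(t) = c_1 \cos(3t) + c_2 \sin(3t),
\]
% purely imaginary roots, so the motion is purely oscillatory with frequency 3.
```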
Section 3.4: Reduction of Order
Reduction of order is useful when we already have one known solution $y_1$ to a second-order differential equation and need to find a second, linearly independent solution $y_2$.
We assume the second solution has the form:
$$y_2(t) = v(t)\, y_1(t)$$
Substituting this into the differential equation and simplifying leads to a first-order equation for $v'$, which we can solve.
Example: Solve given that .
- Let , differentiate and substitute into the equation.
- After simplification, solve for , then determine .
- Final solution: .
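As a complete hypothetical illustration of the substitution (a different equation than the one above):

```latex
% Hypothetical worked example: y'' - 2y' + y = 0 with known solution y_1 = e^t.
\[
y_2 = v(t)\, e^t
\;\Longrightarrow\;
y_2'' - 2y_2' + y_2 = e^t\, v'' = 0
\;\Longrightarrow\; v'' = 0 .
\]
\[
v(t) = t \quad\text{(dropping the redundant constant solution)},
\qquad y_2(t) = t\, e^t .
\]
```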
Section 3.5: Undetermined Coefficients
The method of undetermined coefficients is used to find particular solutions to nonhomogeneous differential equations:
$$a y'' + b y' + c y = g(t)$$
We guess a form for the particular solution $Y(t)$ with the same general shape as $g(t)$ but with unknown coefficients, plug it into the equation, and solve for those coefficients.
| Form of $g(t)$ | Guess for $Y(t)$ |
| --- | --- |
| $P_n(t)$ (polynomial of degree $n$) | $A_n t^n + \dots + A_1 t + A_0$ |
| $e^{\alpha t}$ | $A e^{\alpha t}$ |
| $\sin(\beta t)$ or $\cos(\beta t)$ | $A \cos(\beta t) + B \sin(\beta t)$ |
| $P_n(t)\, e^{\alpha t}$ | $(A_n t^n + \dots + A_0)\, e^{\alpha t}$ |
| $e^{\alpha t} \sin(\beta t)$ or $e^{\alpha t} \cos(\beta t)$ | $e^{\alpha t} (A \cos(\beta t) + B \sin(\beta t))$ |

If any term of the guess already solves the homogeneous equation, multiply the whole guess by $t$ (and by $t^2$ for a repeated root).
Example: Solve .
- Guess , substitute into the equation.
- Solve for , giving .
- Final solution: .
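Another worked instance, with a hypothetical right-hand side $g(t) = t$:

```latex
% Hypothetical worked example: find a particular solution of y'' - y = t.
\[
Y(t) = At + B \;\Longrightarrow\; Y'' - Y = -(At + B) = t
\;\Longrightarrow\; A = -1,\; B = 0,
\]
\[
Y(t) = -t, \qquad
y(t) = c_1 e^{t} + c_2 e^{-t} - t .
\]
```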
Section 3.7: Mechanical Vibrations
For $m y'' + \gamma y' + k y = 0$ (mass, damping, and spring constants $m$, $\gamma$, $k$), solving the characteristic equation determines the oscillatory behavior.
Example: gives damped oscillations:
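To see the damped behavior numerically, here is a rough Euler integration of a hypothetical oscillator $y'' + y' + 4y = 0$ (my choice of coefficients, not the example from class); the envelope decays roughly like $e^{-t/2}$:

```python
# Euler integration of the damped oscillator y'' + y' + 4y = 0.
dt, t_end = 0.0005, 10.0
y, v = 1.0, 0.0          # initial displacement and velocity
trace = []
t = 0.0
while t < t_end:
    a = -v - 4 * y       # acceleration from y'' = -y' - 4y
    y += dt * v
    v += dt * a
    trace.append(y)
    t += dt

early = max(abs(s) for s in trace[:4000])   # first 2 seconds
late = max(abs(s) for s in trace[-4000:])   # last 2 seconds
print(early, late)  # the oscillation amplitude clearly decays
```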
Practice Exam for Physics 227 Midterm 2
here's a practice exam i made based on previous exams i could find and the homework questions from this unit, since there aren't many review materials available. solutions are underneath, under the arrow.
Linear Algebra & Vector Spaces
1. Linear Functions: Let . For what values of is a linear function? Prove your answer.
2. Linear Independence: Consider the vectors in :
Determine how many of them are linearly independent.
3. Orthonormal Basis: Given the vectors
use the Gram-Schmidt process to construct an orthonormal basis.
Matrix Transformations & Rotations
4. Rotation Matrix: Derive the matrix that represents a counterclockwise rotation by around the z-axis.
5. Matrix Operations: Let be the matrix
Find a matrix such that swaps the second and third rows of .
Inner Products & Norms
6. Inner Product Proof: Given the inner product for matrices:
Show that and that if and only if is the zero matrix.
7. Abstract Angle: Let
Find the "abstract angle" between these matrices using the inner product definition.
Vector Calculus & Geometry
8. Parallelogram Diagonal Theorem: Prove, using only vector addition, subtraction, and scalar multiplication, that the diagonals of a parallelogram bisect each other.
9. Distance from a Point to a Line: A line is given parametrically as
Find the shortest distance from the point to this line.
Determinants, Eigenvalues, and Diagonalization
10. Determinant Calculation: Compute the determinant of the matrix
11. Diagonalization: Let
Find its eigenvalues and an orthonormal basis of eigenvectors.
Commutators & Special Matrices
12. Matrix Commutator: Let
Compute .
13. Reflection & Rotation: Find the matrix that first rotates a vector by counterclockwise and then reflects it across the x-axis.
Pauli Matrices & Quantum Mechanics
14. Pauli Matrices: Show that the Pauli matrices
satisfy and that their commutators satisfy
where is the Levi-Civita symbol.
15. Exponential of a Matrix: Compute in terms of sine and cosine functions.
☆ How to find the solution for second-order ODEs
For example, consider the equation:
Step 1: Characteristic Equation
Factoring:
Solving for :
Thus, the general solution is:
Step 2: Apply Initial Conditions
Given and , we solve for and :
Solving, we get:
Thus, the unique solution is:
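The same two steps on a hypothetical initial value problem (my example, since it is useful to see the whole procedure in one place):

```latex
% Hypothetical example: y'' - 3y' + 2y = 0 with y(0) = 1, y'(0) = 0.
\[
r^2 - 3r + 2 = (r - 1)(r - 2) = 0 \;\Longrightarrow\; r = 1,\ 2,
\qquad y(t) = c_1 e^{t} + c_2 e^{2t}.
\]
\[
c_1 + c_2 = 1, \quad c_1 + 2c_2 = 0
\;\Longrightarrow\; c_1 = 2,\ c_2 = -1,
\qquad y(t) = 2e^{t} - e^{2t}.
\]
```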
How to Characterize Any Type of ODE
1. Order of the ODE
The order of a differential equation is the order of the highest derivative of the unknown function that appears in the equation.
tldr: Look at the highest derivative. If it's $y''$, it's second-order. If it's just $y'$, it's first-order.
2. Linearity
A differential equation is linear if the dependent variable and its derivatives appear to the first power and are not multiplied or divided by each other.
tldr: If $y$ and its derivatives are only raised to the first power, and they aren't multiplied/divided by each other, it's linear. If there's any term like $y^2$, $y\,y'$, etc., it's nonlinear.
3. Homogeneity
A differential equation is homogeneous if every term contains the dependent variable $y$ or its derivatives. If the equation has a term without $y$, it's non-homogeneous.
tldr: If there's a constant term (like a plain number) or any term without $y$ or its derivatives, it's non-homogeneous. If everything involves $y$, it's homogeneous.
4. Separable Variables
An ODE is separable if you can separate the variables (i.e., $x$ and $y$) onto opposite sides of the equation.
tldr: If you can rearrange the equation so all $y$-terms are on one side and all $x$-terms are on the other, it's separable. If it's impossible to do so, it's not.
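A small hypothetical example of the separation step:

```latex
% Hypothetical example of separating variables: dy/dx = xy.
\[
\frac{dy}{y} = x\,dx
\;\Longrightarrow\;
\ln|y| = \tfrac{x^2}{2} + C
\;\Longrightarrow\;
y = A e^{x^2/2}.
\]
```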
5. Autonomous ODEs
An ODE is autonomous if the independent variable $t$ does not explicitly appear in the equation (it only involves $y$ and its derivatives).
- Autonomous ODE: the equation has no explicit $t$-term.
Example: $y' = y(1 - y)$ (only involves $y$ and its derivatives, not $t$).
Counterexample: $y' = y + t$ (here, the $t$-term is present explicitly).
tldr: If the equation only involves $y$ (and possibly its derivatives) but not $t$, it's autonomous.
Summary of Key Characteristics
| Characteristic | What it means | Example | Counterexample |
| --- | --- | --- | --- |
| Order | Highest derivative that appears | $y' = y$ (first-order) | $y'' + y = 0$ (second-order) |
| Linear vs nonlinear | Linear = no powers or products of $y$, $y'$ | $y'' + 3y' + 2y = 0$ (linear) | $y'' + y^2 = 0$ (nonlinear) |
| Homogeneous vs nonhomogeneous | Homogeneous = no term without $y$ or its derivatives | $y'' + y = 0$ (homogeneous) | $y'' + y = \sin t$ (nonhomogeneous) |
Conway's Game of Life!
One of my favorite examples of cellular automata is Conway's Game of Life. Four simple rules on a grid create complex systems (oscillators, glider guns, even a general-purpose computer). It's a zero-player game, meaning that the gameplay is determined solely by its initial state.
The four rules are as follows:
1. A live cell dies if it has fewer than two live neighbors.
2. A live cell with two or three live neighbors lives on to the next generation.
3. A live cell with more than three live neighbors dies.
4. A dead cell is brought back to life if it has exactly three live neighbors.
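The four rules above can be sketched in a few lines of Python (a minimal sparse-set version, separate from the JavaScript simulation; cells are `(row, col)` pairs):

```python
# One Game of Life generation on a set of live cells.
from collections import Counter

def step(live):
    """Return the next generation for a set of live (r, c) cells."""
    # Count how many live neighbors every candidate cell has.
    counts = Counter((r + dr, c + dc)
                     for r, c in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3 (rules 1-4 above).
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate between horizontal and vertical.
blinker = {(1, 0), (1, 1), (1, 2)}
print(step(step(blinker)) == blinker)  # True: a period-2 oscillator
```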
Here is a quick simulation I made in JavaScript. You can manually click some of the boxes to create the starting conditions, or you can just click randomize.