
Physics 227: Vector Operators and Vector Differentiation

1. Gradients as Vector Operators

The gradient operator, denoted \(\nabla\), acts on scalar fields (functions \(f(x,y,z)\)) and produces a vector field.

\[ \nabla = \frac{\partial}{\partial x} \mathbf{\hat{i}} + \frac{\partial}{\partial y} \mathbf{\hat{j}} + \frac{\partial}{\partial z} \mathbf{\hat{k}} \]

For a scalar field \(f(x, y, z)\), the gradient is: \[ \nabla f = \frac{\partial f}{\partial x} \mathbf{\hat{i}} + \frac{\partial f}{\partial y} \mathbf{\hat{j}} + \frac{\partial f}{\partial z} \mathbf{\hat{k}} \] This points in the direction of greatest increase of \(f\) and gives the slope in that direction.
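Example

For \(f = x^2 y + z\): \[ \nabla f = 2xy\,\mathbf{\hat{i}} + x^2\,\mathbf{\hat{j}} + \mathbf{\hat{k}} \] so at the point \((1, 1, 0)\) the field increases fastest in the direction \((2, 1, 1)\).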

2. Divergence Operator

The divergence operator acts on vector fields, producing a scalar field that measures how much the vector field spreads out from a point.

\[ \nabla \cdot \mathbf{F} = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z} \]

This is a scalar field indicating the net "outflow" from a point — useful in fluid flow, electromagnetism, and other physical systems.

Example

For \(\mathbf{F} = (x^2, y^2, z^2)\): \[ \nabla \cdot \mathbf{F} = \frac{\partial x^2}{\partial x} + \frac{\partial y^2}{\partial y} + \frac{\partial z^2}{\partial z} = 2x + 2y + 2z \]

3. Curl Operator

The curl operator acts on vector fields, producing another vector field that describes the rotation or "circulation" of the field at each point.

\[ \nabla \times \mathbf{F} = \begin{vmatrix} \mathbf{\hat{i}} & \mathbf{\hat{j}} & \mathbf{\hat{k}} \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ F_x & F_y & F_z \end{vmatrix} \]

This determinant expands to: \[ \nabla \times \mathbf{F} = \left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}\right)\mathbf{\hat{i}} - \left(\frac{\partial F_z}{\partial x} - \frac{\partial F_x}{\partial z}\right)\mathbf{\hat{j}} + \left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right)\mathbf{\hat{k}} \]

Example

For \(\mathbf{F} = (0, 0, xy)\): \[ \nabla \times \mathbf{F} = \begin{bmatrix} \frac{\partial}{\partial x} \\ \frac{\partial}{\partial y} \\ \frac{\partial}{\partial z} \end{bmatrix} \times \begin{bmatrix} 0 \\ 0 \\ xy \end{bmatrix} = \begin{bmatrix} \frac{\partial xy}{\partial y} \\ -\frac{\partial xy}{\partial x} \\ 0 \end{bmatrix} = \begin{bmatrix} x \\ -y \\ 0 \end{bmatrix} \]

4. Laplacian Operator

The Laplacian is a scalar operator that acts on scalar or vector fields. For a scalar field \(f\), it is the divergence of the gradient:

\[ \nabla^2 f = \nabla \cdot \nabla f \]

In Cartesian coordinates: \[ \nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2} \]

This operator shows up in the heat equation, wave equation, and Schrödinger equation.

Example

For \(f = x^2 + y^2\): \[ \nabla^2 f = \frac{\partial^2}{\partial x^2}(x^2 + y^2) + \frac{\partial^2}{\partial y^2}(x^2 + y^2) = 2 + 2 = 4 \]

5. Vector Differentiation

Vector differentiation refers to differentiating vector-valued functions with respect to scalars or other vectors.

Derivative of Vector Functions

For a vector-valued function \(\mathbf{r}(t)\): \[ \mathbf{r}(t) = x(t)\mathbf{\hat{i}} + y(t)\mathbf{\hat{j}} + z(t)\mathbf{\hat{k}} \] The derivative is: \[ \frac{d\mathbf{r}}{dt} = \frac{dx}{dt}\mathbf{\hat{i}} + \frac{dy}{dt}\mathbf{\hat{j}} + \frac{dz}{dt}\mathbf{\hat{k}} \]
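For example, for the helix \(\mathbf{r}(t) = \cos t\,\mathbf{\hat{i}} + \sin t\,\mathbf{\hat{j}} + t\,\mathbf{\hat{k}}\): \[ \frac{d\mathbf{r}}{dt} = -\sin t\,\mathbf{\hat{i}} + \cos t\,\mathbf{\hat{j}} + \mathbf{\hat{k}} \] which is the velocity vector tangent to the curve.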

Directional Derivatives

The directional derivative of a scalar field \(f\) in the direction of a unit vector \(\mathbf{u}\) is: \[ \frac{\partial f}{\partial \mathbf{u}} = \nabla f \cdot \mathbf{u} \] This gives the rate of change of \(f\) in the direction of \(\mathbf{u}\).
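For example, for \(f = x^2 + y^2\) and \(\mathbf{u} = \tfrac{1}{\sqrt{2}}(1, 1, 0)\): \[ \nabla f \cdot \mathbf{u} = (2x, 2y, 0) \cdot \tfrac{1}{\sqrt{2}}(1, 1, 0) = \sqrt{2}\,(x + y) \]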

Gradient and Tangent Vectors

The gradient is perpendicular to level surfaces (contours) of \(f\). Tangent vectors lie in the plane tangent to those surfaces.

Physics 227: Gradients, Lagrange Multipliers, and Hessians

1. Gradients

The gradient of a scalar function \(f\) is a vector that points in the direction of the steepest increase of \(f\). It contains the partial derivatives with respect to each variable.

For a function \(f(x, y, z)\), the gradient is:

\[ \nabla f = \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z}\right) \]

The gradient is used to find critical points (where \(\nabla f = 0\)) and in defining directional derivatives, flux, and many other physical quantities.

2. Lagrange Multipliers

Lagrange multipliers are used to find extrema of a scalar function subject to one or more constraints.

The method works by enforcing that at the constrained extrema, the gradient of the objective function must be parallel to the gradient of the constraint function.

For maximizing \(f(x, y, z)\) subject to a constraint \(g(x, y, z) = 0\), the condition is:

\[ \nabla f = \lambda \nabla g \]

This produces a system of equations to solve for the variables \(x, y, z\) and the Lagrange multiplier \(\lambda\). Combined with the constraint \(g(x, y, z) = 0\), this system fully determines the extrema.

This method generalizes to multiple constraints \(g_1, g_2, \ldots\), where:

\[ \nabla f = \lambda_1 \nabla g_1 + \lambda_2 \nabla g_2 + \cdots \]

Example

Maximize \(f(x, y) = x^2 + y^2\) subject to the constraint \(x^2 + y^2 = 1\).

The gradient of the objective: \[ \nabla f = (2x, 2y) \] The gradient of the constraint: \[ \nabla g = (2x, 2y) \] The condition: \[ \nabla f = \lambda \nabla g \] This leads to: \[ 2x = \lambda 2x,\quad 2y = \lambda 2y \] If \(x\) and \(y\) are non-zero, \(\lambda = 1\) and both equations are satisfied automatically. Combined with the constraint \(x^2 + y^2 = 1\), this says every point of the unit circle is a constrained extremum, which makes sense here: \(f\) is constant (equal to 1) on the constraint circle.
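Because the objective and the constraint coincide in that example, here is a slightly less degenerate illustration: maximize \( f(x, y) = x + y \) subject to \( x^2 + y^2 = 1 \). Then \( \nabla f = (1, 1) \) and \( \nabla g = (2x, 2y) \), so the conditions \( 1 = 2\lambda x \) and \( 1 = 2\lambda y \) force \( x = y \). The constraint then gives \( x = y = \pm\tfrac{1}{\sqrt{2}} \), and the maximum value \( f = \sqrt{2} \) occurs at \( \left(\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}\right) \).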

3. Hessian Matrices

The Hessian matrix describes the second-order partial derivatives of a scalar function. It is used to assess the local curvature near a critical point, helping classify whether the point is a minimum, maximum, or saddle point.

For a scalar function \(f(x, y)\), the Hessian matrix is:

\[ H = \begin{bmatrix} \frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \partial y} \\ \frac{\partial^2 f}{\partial y \partial x} & \frac{\partial^2 f}{\partial y^2} \end{bmatrix} \]

In general for \(f(x_1, x_2, \ldots, x_n)\), the Hessian is:

\[ H_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j} \]

Classification Using the Hessian

  • If the Hessian is positive definite (all eigenvalues positive), the point is a local minimum.
  • If the Hessian is negative definite (all eigenvalues negative), the point is a local maximum.
  • If the Hessian has mixed-sign eigenvalues, the point is a saddle point.

Example

For \(f(x, y) = x^2 + y^2\), the Hessian is:

\[ H = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix} \]

The eigenvalues are both \(2\), so the Hessian is positive definite, confirming a local minimum at \((0,0)\).
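If you want to check definiteness numerically, here is a minimal sketch using NumPy (assuming it's available), computing the eigenvalues of that Hessian:

```python
import numpy as np

# Hessian of f(x, y) = x^2 + y^2 (constant, so the same at every point)
H = np.array([[2.0, 0.0],
              [0.0, 2.0]])

eigenvalues = np.linalg.eigvalsh(H)  # eigenvalues of a symmetric matrix
print(eigenvalues)                   # [2. 2.] -> all positive, so positive definite
```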

Mega Mega Review for Every Single Topic on the UW Math 207 Midterm 2 Because I Slept Through Most of That Class And Now I Need to Cram Everything In a Few Days

Section 3.1: Second-Order Linear Differential Equations

The general form of a second-order linear differential equation is:

\[ a y'' + b y' + c y = 0 \]

We solve this by finding the characteristic equation:

\[ ar^2 + br + c = 0 \]

Example: Solve \( y'' - 3y' + 2y = 0 \)

Characteristic equation: \( r^2 - 3r + 2 = 0 \), roots are \( r = 1,2 \), so:

\[ y(t) = C_1 e^t + C_2 e^{2t} \]

Section 3.3: Complex Roots

If the characteristic equation has complex roots \( r = \lambda \pm i\mu \), the general solution is:

\[ y(t) = e^{\lambda t} (C_1 \cos(\mu t) + C_2 \sin(\mu t)) \]

Example: Solve \( y'' + y = 0 \).

The characteristic equation is:

\[ r^2 + 1 = 0 \]

Solving for \( r \):

\[ r = \pm i \]

Since the roots are purely imaginary (\( \lambda = 0, \mu = 1 \)), the solution is:

\[ y(t) = C_1 \cos t + C_2 \sin t \]

Interpretation:

  • The solution represents oscillatory motion.
  • The frequency of oscillation is given by \( \omega = \mu = 1 \).
  • If \( \lambda \neq 0 \), the function would have an exponential growth or decay component.

Section 3.4: Reduction of Order

Reduction of order is useful when we already have one known solution \( y_1(t) \) to a second-order differential equation and need to find a second, linearly independent solution \( y_2(t) \).

We assume the second solution has the form:

\[ y_2(t) = v(t) y_1(t) \]

Substituting this into the differential equation and simplifying leads to a first-order equation for \( v'(t) \), which we can solve.

Example: Solve \( y'' - y = 0 \) given that \( y_1 = e^t \).

  • Let \( y_2 = v e^t \), differentiate and substitute into the equation.
  • After simplification, solve for \( v(t) \), then determine \( y_2(t) \).
  • Final solution: \( y(t) = C_1 e^t + C_2 e^{-t} \).
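Filling in the algebra from those bullet points: with \( y_2 = v e^t \),

\[ y_2' = (v' + v) e^t, \qquad y_2'' = (v'' + 2v' + v) e^t \]

so \( y_2'' - y_2 = (v'' + 2v') e^t = 0 \), i.e. \( v'' + 2v' = 0 \). Setting \( w = v' \) gives \( w' = -2w \), so \( w = C e^{-2t} \) and \( v = -\tfrac{C}{2} e^{-2t} + D \). Multiplying by \( y_1 = e^t \) and dropping constants, the new, linearly independent piece is \( y_2 = e^{-t} \), which gives the general solution above.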

Section 3.5: Undetermined Coefficients

The method of undetermined coefficients is used to find particular solutions to nonhomogeneous differential equations:

\[ a y'' + b y' + c y = g(t) \]

We guess a form for \( y_p \) based on \( g(t) \), plug it into the equation, and solve for the unknown coefficients.

We can find the particular solution using undetermined coefficients--assume \( y_p(t) \) has the same form as \( g(t) \), with unknown coefficients to determine.

Form of \( g(t) \) | Guess for \( y_p(t) \)
\( P_n(t) \) (polynomial of degree \( n \)) | \( A_n t^n + A_{n-1} t^{n-1} + \dots + A_0 \)
\( e^{at} \) | \( A e^{at} \)
\( P_n(t) e^{at} \) | \( (A_n t^n + \dots + A_0) e^{at} \)
\( \cos(bt) \) or \( \sin(bt) \) | \( A \cos(bt) + B \sin(bt) \)

Example: Solve \( y'' - 3y' + 2y = e^t \).

  • Here \( e^t \) already solves the homogeneous equation (\( r = 1 \) is a root), so the guess \( A e^t \) would give zero on the left side. In this resonant case, multiply the guess by \( t \): take \( y_p = A t e^t \).
  • Substituting gives \( y_p'' - 3y_p' + 2y_p = -A e^t \), so \( A = -1 \).
  • Final solution: \( y(t) = C_1 e^t + C_2 e^{2t} - t e^t \).

Section 3.7: Mechanical Vibrations

For \( m y'' + \gamma y' + k y = 0 \), solving the characteristic equation determines the oscillatory behavior.

Example: \( 2y'' + 2y' + 8y = 0 \) gives damped oscillations:

\[ y(t) = e^{-t/2} \left( C_1 \cos\!\left(\tfrac{\sqrt{15}}{2} t\right) + C_2 \sin\!\left(\tfrac{\sqrt{15}}{2} t\right) \right) \]

Practice Exam for Physics 227 Midterm 2

Here's a practice exam I made based off previous exams I could find and the homework questions from this unit, since there aren't many review materials available. Solutions are underneath, under the arrow.

Linear Algebra & Vector Spaces

1. Linear Functions: Let \( f(x, y, z) = ax + by + cz + d \). For what values of \( a, b, c, d \) is \( f \) a linear function? Prove your answer.
2. Linear Independence: Consider the vectors in \( \mathbb{R}^5 \): \[ v_1 = (1,2,3,4,5), \quad v_2 = (2,1,0,-1,-2), \quad v_3 = (3,2,1,0,-1), \quad v_4 = (1,1,1,1,1) \] Determine how many of them are linearly independent.
3. Orthonormal Basis: Given the vectors \[ a_1 = (1, 1, 0), \quad a_2 = (-1, 2, 1), \quad a_3 = (2, -1, 3) \] use the Gram-Schmidt process to construct an orthonormal basis.

Matrix Transformations & Rotations

4. Rotation Matrix: Derive the \( 3 \times 3 \) matrix that represents a counterclockwise rotation by \( \theta \) around the z-axis.
5. Matrix Operations: Let \( M \) be the \( 3 \times 3 \) matrix \[ M = \begin{bmatrix} 1 & 2 & 0 \\ -1 & 1 & 3 \\ 2 & -2 & 4 \end{bmatrix} \] Find a matrix \( R \) such that \( RM \) swaps the second and third rows of \( M \).

Inner Products & Norms

6. Inner Product Proof: Given the inner product for matrices: \[ \langle A| B \rangle = \text{tr}(A^T B) \] Show that \( \langle A| A \rangle \geq 0 \) and that \( \langle A| A \rangle = 0 \) if and only if \( A \) is the zero matrix.
7. Abstract Angle: Let \[ A = \begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \] Find the "abstract angle" between these matrices using the inner product definition.

Vector Calculus & Geometry

8. Parallelogram Diagonal Theorem: Prove, using only vector addition, subtraction, and scalar multiplication, that the diagonals of a parallelogram bisect each other.
9. Distance from a Point to a Line: A line is given parametrically as \[ \mathbf{r} = (1,2,3) + \lambda (4,-2,1) \] Find the shortest distance from the point \( P = (0, 1, 2) \) to this line.

Determinants, Eigenvalues, and Diagonalization

10. Determinant Calculation: Compute the determinant of the \( 4 \times 4 \) matrix \[ \begin{bmatrix} 1 & 0 & 2 & 3 \\ 4 & 5 & 6 & 0 \\ 7 & 8 & 9 & 1 \\ 0 & 2 & 1 & 4 \end{bmatrix} \]
11. Diagonalization: Let \[ A = \begin{bmatrix} 3 & 1 \\ 1 & 3 \end{bmatrix} \] Find its eigenvalues and an orthonormal basis of eigenvectors.

Commutators & Special Matrices

12. Matrix Commutator: Let \[ A = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix} \] Compute \( [A, B] = AB - BA \).
13. Reflection & Rotation: Find the \( 2 \times 2 \) matrix that first rotates a vector by \( \frac{\pi}{6} \) counterclockwise and then reflects it across the x-axis.

Pauli Matrices & Quantum Mechanics

14. Pauli Matrices: Show that the Pauli matrices \[ \sigma_1 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \quad \sigma_2 = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}, \quad \sigma_3 = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \] satisfy \( \sigma_i^2 = I \) and that their commutators satisfy \[ [\sigma_i, \sigma_j] = 2i \varepsilon_{ijk} \sigma_k \] where \( \varepsilon_{ijk} \) is the Levi-Civita symbol.
15. Exponential of a Matrix: Compute \( e^{i \theta \sigma_3} \) in terms of sine and cosine functions.

☆ How to find the solution for second-order ODEs

For example, consider the equation:

\[ y'' - 11y' + 30y = 0 \]

Step 1: Characteristic Equation

\[ r^2 - 11r + 30 = 0 \]

Factoring:

\[ (r - 5)(r - 6) = 0 \]

Solving for \( r \):

\[ r = 5, \quad r = 6 \]

Thus, the general solution is:

\[ y(t) = C_1 e^{5t} + C_2 e^{6t} \]

Step 2: Apply Initial Conditions

Given \( y(0) = 0 \) and \( y'(0) = 2 \), we solve for \( C_1 \) and \( C_2 \):

\[ C_1 + C_2 = 0 \] \[ 5C_1 + 6C_2 = 2 \]

Solving, we get:

\[ C_1 = -2, \quad C_2 = 2 \]

Thus, the unique solution is:

\[ y(t) = -2e^{5t} + 2e^{6t} \]
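If you like, you can sanity-check an answer like this with SymPy; here is a minimal sketch (assuming SymPy is installed; it isn't part of the original notes):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# y'' - 11 y' + 30 y = 0 with y(0) = 0, y'(0) = 2
ode = sp.Eq(y(t).diff(t, 2) - 11 * y(t).diff(t) + 30 * y(t), 0)
sol = sp.dsolve(ode, y(t), ics={y(0): 0, y(t).diff(t).subs(t, 0): 2})
print(sol)  # expected: Eq(y(t), -2*exp(5*t) + 2*exp(6*t))
```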

How to Characterize Any Type of ODE

1. Order of the ODE

The order of a differential equation refers to the highest derivative of the function \(y(x)\) or \(y(t)\) in the equation.

  • First-order: The highest derivative is \(y'\) (or \(dy/dx\)).

    Example: \( y' + 2y = 0 \)

    Counterexample: \( y'' + y = 0 \) (This is second-order).

  • Second-order: The highest derivative is \(y''\) (or \(d^2y/dx^2\)).

    Example: \( y'' + 5y' - 3 = 0 \)

    Counterexample: \( y' + y = 0 \) (This is first-order).

tldr: Look at the highest derivative. If it’s \(y''\), it’s second-order. If it’s just \(y'\), it’s first-order.

2. Linearity

A differential equation is linear if the dependent variable \(y\) and its derivatives appear to the first power and are not multiplied or divided by each other.

  • Linear ODE: The equation can be written in the form: \[ a_n(x) y^{(n)} + a_{n-1}(x) y^{(n-1)} + \dots + a_1(x) y' + a_0(x) y = g(x) \] where \(a_n(x)\) are functions of \(x\), but \(y\) and its derivatives are not raised to any powers or multiplied together.

    Example: \( y'' + 2y' - 3y = 0 \)

    Counterexample: \( y' + y^2 = 0 \) (Here, \(y^2\) makes it nonlinear).

tldr: If \(y\) and its derivatives are only raised to the first power, and they aren't multiplied/divided by each other, it's linear. If there’s any term like \(y^2\), \(y' \cdot y\), etc., it's nonlinear.

3. Homogeneity

A differential equation is homogeneous if every term contains the dependent variable \(y\) or its derivatives. If the equation has a term without \(y\), it's non-homogeneous.

  • Homogeneous ODE: All terms involve \(y\) or its derivatives, and the equation equals zero.

    Example: \( y'' + 2y' - 3y = 0 \)

    Counterexample: \( y'' + 2y' - 3y = e^x \) (The \(e^x\) term makes it non-homogeneous).

tldr: If there’s a constant term (like a number) or any term without \(y\) or its derivatives, it’s non-homogeneous. If everything involves \(y\), it’s homogeneous.

4. Separable Variables

An ODE is separable if you can separate the variables (i.e., \(y\) and \(x\)) onto opposite sides of the equation.

  • Separable ODE: You can rewrite the equation so that one side has only terms with \(y\) and the other side only has terms with \(x\).

    Example: \( \frac{dy}{dx} = \frac{1}{x} \cdot y \), which can be rewritten as: \[ \frac{1}{y} \, dy = \frac{1}{x} \, dx \]

    Counterexample: \( y' = x + y \) (the sum \( x + y \) can't be written as a function of \(x\) times a function of \(y\), so the variables can't be separated).

tldr: If you can rearrange the equation so all \(y\)-terms are on one side and \(x\)-terms are on the other side, it's separable. If it’s impossible to do so, it’s not.

5. Autonomous ODEs

An ODE is autonomous if the independent variable \(x\) does not explicitly appear in the equation (it only involves \(y\) and its derivatives).

  • Autonomous ODE: The equation has no explicit \(x\)-term.

    Example: \( y'' = y \) (Only involves \(y\) and its derivatives, not \(x\)).

    Counterexample: \( y'' + y = x \) (Here, the \(x\)-term is present explicitly).

tldr: If the equation only involves \(y\) (and possibly its derivatives) but not \(x\), it's autonomous.

Summary of Key Characteristics

Characteristic | What it means | Example | Counterexample
Order | Highest derivative present | \( y' + 2y = 0 \) (first-order) | \( y'' + y = 0 \) (second-order)
Linear vs Nonlinear | Linear = no powers or products of \( y \), \( y' \) | \( y'' + 10y' + 106y = 0 \) (linear) | \( y' + y^2 = 0 \) (nonlinear)
Homogeneous vs Nonhomogeneous | Homogeneous = no term without \( y \) or its derivatives | \( y'' + 2y' - 3y = 0 \) (homogeneous) | \( y'' + 2y' - 3y = e^x \) (nonhomogeneous)
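SymPy can also classify an ODE for you, which is a handy way to double-check a by-hand characterization. A minimal sketch (again assuming SymPy is installed; the example equation is just an illustration):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# First-order, linear, and separable: y' = y / x
eq = sp.Eq(y(x).diff(x), y(x) / x)
print(sp.classify_ode(eq, y(x)))
# prints a tuple of applicable solution methods, e.g. ('separable', '1st_linear', ...)
```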

Conway's Game of Life!

One of my favorite examples of cellular automata is Conway's Game of Life. Four simple rules on a grid create complex systems (oscillators, glider guns, even a general-purpose computer). It's a zero-player game, meaning that the gameplay is determined solely by its initial state.

The four rules are as follows:

1. A live cell dies if it has fewer than two live neighbors.

2. A live cell with two or three live neighbors lives on to the next generation.

3. A live cell with more than three live neighbors dies.

4. A dead cell will be brought back to live if it has exactly three live neighbors.
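To make the four rules concrete, here is a minimal sketch of one generation's update in Python (an illustration only, not the JavaScript behind the simulation below; it assumes the grid is a 2D list of 0s and 1s and that cells beyond the edges are dead):

```python
def step(grid):
    """Compute one Game of Life generation for a 2D list of 0/1 cells."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbors, treating off-grid cells as dead.
            live = sum(
                grid[rr][cc]
                for rr in range(max(0, r - 1), min(rows, r + 2))
                for cc in range(max(0, c - 1), min(cols, c + 2))
                if (rr, cc) != (r, c)
            )
            if grid[r][c] == 1:
                # Rules 1-3: a live cell survives only with two or three live neighbors.
                new[r][c] = 1 if live in (2, 3) else 0
            else:
                # Rule 4: a dead cell comes to life with exactly three live neighbors.
                new[r][c] = 1 if live == 3 else 0
    return new
```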

Here is a quick simulation I made in JavaScript: you can manually click some of the boxes to create the starting conditions, or you can just click randomize.

Conway's Game of Life is Turing complete, meaning that with a large enough grid you could perform any operation on it that you could perform on any other computer. This is really well demonstrated by Phillip Bradbury on GitHub, who created a Game of Life pattern so large that it simulates Conway's Game of Life itself. Theoretically, this could keep going: you could create a Life game that simulates a Life game that simulates a Life game that simulates a Life game….

The simulation I made is only 30 x 30, so you won't be able to make anything too complex. But if you're interested, I would recommend checking out Golly, which has a built-in script to generate metapixel grids that let you do some wacky things with Conway's Game of Life.

(If you ever want the code for how I make these simulations just email me!!!!!!!)

If you've ever read The Time Traveler's Wife, you might remember the quote “Everything seems simple until you think about it”. Four rules can create a computer so complex it can run itself over and over and over again...

There is so much to the Game of Life. For example, there are patterns that don't change from one generation to the next; Conway called these still lifes, things like the four-celled block, the six-celled beehive, or the eight-celled pond.

Patterns that take many generations to stabilize are called methuselahs. A good example of a methuselah is the acorn, developed by Charles Corderman, which takes 5,206 generations to stabilize.

But this isn't even close to the longest-lasting methuselah. On January 16, 2021, Dylan Chen discovered one that takes 52,513 generations to stabilize. It was found using apgsearch (ash pattern generator search), which generates soups (random initial patterns); “ash” is the stable, oscillating, or flying debris that a soup eventually settles into.

Different patterns in the game have different classifications and names, for example spaceships are finite patterns that return to the initial state after a certain number of generations but in a different location.

The classes for patterns are as follows:

- Class I: still lifes

- Class II: oscillators

- Class III: spaceships

- Class IV: guns

- Class V: unstable, predictable patterns

- Class VI: unstable, unpredictable patterns


Gliders are the smallest spaceships known to exist; they were first discovered in 1969 by Richard Guy. Then in 1970, Bill Gosper invented the Gosper glider gun, a pattern that spawns a new glider every 30 generations. That was the first Class IV object to be discovered.

Another interesting aspect of the game is the Garden of Eden theorem, proved by Edward Moore and John Myhill in the 1960s, before Conway's Game of Life was even invented. Gardens of Eden are patterns that have no predecessors: no configuration evolves into them under the rules, so they can only ever appear as starting conditions. The Garden of Eden theorem states that a cellular automaton is surjective (every pattern is mapped to by some other pattern) if and only if it is injective over finite configurations (no two distinct finite patterns map into the same pattern). So a cellular automaton contains Gardens of Eden if and only if it is not surjective.

Basically, a cellular automaton has a Garden of Eden if and only if it has two different finite configurations that evolve into the same configuration in one step.



The point of all this is that simple rules can be extremely sensitive to initial conditions, with outcomes ranging from complete annihilation to a static universe to pure chaos. Stephen Wolfram called the game “the purest example I know of the dynamics of collective human innovation.”

Brian Eno said of the game “Complexity arises from simplicity! That is such a revelation; we are used to the idea that anything complex must arise out of something more complex. Human brains design airplanes, not the other way around. Life shows us complex virtual “organisms” arising out of the interaction of a few simple rule”

I would also like to make it abundantly clear I'm not one of those wackos who thinks Conway's Game of Life is a genuine reflection of our universe, or that it proves the existence of god. Yes, I have actually heard BYU math students say that the game's dependence on starting conditions proves the existence of god. Obviously the rules are very different from the universal laws we have, and most importantly there are no conservation laws, so something can arise from nothing.

Conway's Game of Life is just an interesting example of mathematical chaos, because it is not readily predictable, but it is NOT random. There are patterns, laws, and emergent phenomena. Kind of like our universe: a few universal laws and a handful of particles, set in motion by unknown starting conditions 13.8 billion years ago… and now I can write a blog post distributed to the World Wide Web on a 10” by 13” screen!




Langton's Ant!

I have been on a big cellular automata kick recently. Mostly because I think the patterns they make are pretty, but also because the math behind some of the papers is really interesting. If you haven't played Conway's Game of Life or Langton's ant, I would highly recommend checking them out. Langton's ant is one of my favorites because it is really simple but technically a Turing-complete machine, despite only having two rules:

1. If the ant is on a white square: The ant turns 90 degrees right, flips the square to black, and moves forward one step.

2. If the ant is on a black square: The ant turns 90 degrees left, flips the square to white, and moves forward one step.
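To make those two rules concrete, here is a minimal sketch in Python (an illustration only, not the JavaScript behind the simulation below; the coordinate and direction conventions are my own assumptions):

```python
def run_ant(steps):
    """Run Langton's ant for a number of steps; returns the set of black cells."""
    black = set()              # coordinates of black cells; every other cell is white
    x, y = 0, 0                # ant position
    dx, dy = 0, 1              # facing "up" (y increases upward)
    for _ in range(steps):
        if (x, y) in black:
            dx, dy = -dy, dx       # black square: turn 90 degrees left...
            black.remove((x, y))   # ...and flip the square to white
        else:
            dx, dy = dy, -dx       # white square: turn 90 degrees right...
            black.add((x, y))      # ...and flip the square to black
        x, y = x + dx, y + dy  # move forward one step
    return black
```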

Here's a quick simulation of it I made in JavaScript. If you let it run for long enough without refreshing the page, you should see a pattern emerge.

Langton's ant has three main stages: simplicity, where it makes simple, symmetric moves; then chaos, where it traces a fairly random and irregular path; and then out of the chaos it creates the last stage, emergent order, a recurrent highway of repeatable steps that it gets locked into. I'm sure someone much more poetic than me could make that into a metaphor for life or love or something.

I think these types of games are fun, so I decided to make my own, except with shapes. I call it Silly Billy Polygon Game.

The rules are as follows:

A grid of cells has one of the following initial states in each cell:

1. Empty (no polygon)

2. Square

3. Triangle

4. Hexagon

And will abide by the following growth rules:

1. Empty cells turn into Squares if surrounded by two or more Squares.

2. Squares can grow into Triangles if adjacent to more than two Squares or other Triangles.

3. Triangles can evolve into Hexagons if surrounded by triangles and squares.

4. Hexagons remain Hexagons unless surrounded by another specific combination of shapes that triggers a new, even larger polygon.

Here's a simulation of it (also made with JS):

Another cellular automaton that I think is arguably more interesting than Langton's ant is Paterson's worms. I'm not going to make a simulation for it because it involves an isometric grid and, to be honest, I have no idea how to code one of those. But Tomasso Marziu made a great simulation of it that I will link here.

Basically the rules of the game are as follows:

Starting:

Start with an empty, isometric grid

Make a line segment connecting the origin to any adjacent grid point.

There are 6 cardinal directions:

0, which is straight ahead

1, which is a 60° right turn

2, which is a 120° right turn

3, which is a 180° turn (NOT ALLOWED)

4, which is a 120° left turn

5, which is a 60° left turn

You cannot retrace an already occupied path, which is why option 3 is not allowed after any path is made.

So, 0 recurring over and over would make a straight line (BO-RINGG); 1 recurring over and over would make a hexagon; 2 recurring over and over would make a triangle. You can stack the rules on top of each other (like 5, 0, 1, 2, 5 or something) to create the rules (in order) that the worm will have to follow.

Methods for Finding Solids of Revolution

Given a cylindrical shell with inner radius \( r_1 \), outer radius \( r_2 \), and height \( h \), its volume \( V \) can be calculated by subtracting the volume of the inner cylinder, \( V_1 = \pi r_1^2 h \), from the volume of the outer cylinder, \( V_2 = \pi r_2^2 h \).

Where:

\[ r = \frac{r_2 + r_1}{2} \quad \text{and} \quad \Delta r = r_2 - r_1 \]

The volume of the shell is:

\[ V = 2\pi r h \Delta r \]

This essentially means:

\[ V = \text{circumference} \times \text{height} \times \text{thickness} \]

Shell Method

Now consider a solid of revolution about the y-axis, generated by the region under \( y = f(x) \) between \( x = a \) and \( x = b \). If we divide that region into \( n \) vertical strips of equal width \( \Delta x \) and height \( f(x_i) \) before revolving, each strip sweeps out a cylindrical shell.

Therefore, the approximation of the volume \( V \) can be written as a sum of the volume of these shells:

\[ V \approx \sum_{i=1}^{n} 2\pi x_i f(x_i) \Delta x \]

Taking the limit as \( n \to \infty \), we get the integral:

\[ V = \int_{a}^{b} 2\pi x f(x) \, dx \]
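For example, revolving the region under \( y = x^2 \) from \( x = 0 \) to \( x = 1 \) around the y-axis gives

\[ V = \int_0^1 2\pi x \cdot x^2 \, dx = 2\pi \left[ \frac{x^4}{4} \right]_0^1 = \frac{\pi}{2} \]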


Method of Disks and Washers

You can also estimate the volume of a solid by using increasing numbers of circular slabs called disks, each with cross-sectional area \( A(x_i) = \pi [f(x_i)]^2 \):

\[ V \approx \sum_{i=1}^{n} A(x_i) \Delta x \]

Taking the integral:

\[ V = \int_{a}^{b} A(x) \, dx \]

In some cases, the solid of revolution does not join together at the axis it is being revolved around, forming an annular ring or washer.

The area of a washer is found by subtracting the area of the inner circle (radius \( r \)) from the area of the outer circle (radius \( R \)):

\[ A(x) = \pi \left( R^2 - r^2 \right) \]
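For example, revolving the region between \( y = x \) and \( y = x^2 \) (for \( 0 \le x \le 1 \)) around the x-axis gives \( R = x \) and \( r = x^2 \), so

\[ V = \int_0^1 \pi \left( x^2 - x^4 \right) dx = \pi \left( \frac{1}{3} - \frac{1}{5} \right) = \frac{2\pi}{15} \]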
