Oscillators and Absorption

The linear and nonlinear story of classical absorption in 1D

It’s a goal of mine to eventually understand why water is blue, which has to do with the fact that $\mathrm{H_2O}$ preferentially absorbs red light. This post is one in a collection of related ideas working toward that.

A previous post in this vein tackled the beautiful linear theory of molecular vibration. Model a molecule as point masses sitting at a minimum of a potential $V$, approximate $V$ by its quadratic part, and the equations of motion become a coupled system of simple harmonic oscillators which decouple to a finite list of normal frequencies $\omega_1, \omega_2, \ldots$. These are the frequencies that molecule can vibrate at. But which frequencies of incoming light can it absorb?

The goal of this post is to set up a model of absorption from scratch — what it means for a system to absorb light at a given frequency, and how to compute it from the system’s internal dynamics.

We do this in two stages. First, the linear case: a single oscillator in a quadratic potential, where the absorption spectrum can be worked out in closed form. Second, the simplest nonlinear correction: a cubic term in the potential, where closed forms vanish and we explore numerically. The absorption spectra of these two cases are qualitatively quite different, which reinforces what I’ve heard: small nonlinearities are actually quite important for the overall absorption properties.

Modeling absorption

We will work with the simplest classical toy model of a material: a single point mass on a line, moving under a potential energy $V \colon \mathbb R \to \mathbb R$ with a non-degenerate minimum. There is no electromagnetic field, no quantum mechanics, no thermal bath — just a particle in a one-dimensional potential well. This is a less-than-honest model of any real material, but it is enough to set up the question we want to ask: what does it mean for such a system to absorb light?

Newton’s second law gives

$$\ddot x \;=\; -V'(x).$$

(We work in dimensionless units throughout: $m = 1$, equilibrium at $x = 0$ with $V(0) = V'(0) = 0$ and $V''(0) = 1$.)

The dynamics near equilibrium are governed by this single ODE; everything that follows is built on top of it.

Light as a forcing term

To ask whether the system absorbs light at frequency $\omega$, we need to model what “shining light” means. Our toy model is too simple to derive this from electromagnetism, so we simply posit that light enters the equation of motion as a time-dependent external force at frequency $\omega$:

$$\ddot x \;=\; -V'(x) \;+\; A \cos(\omega t).$$

The crucial new feature is that the right-hand side now depends explicitly on $t$. The system is no longer autonomous. The amplitude $A$ is set by the strength of the incoming wave; the frequency $\omega$ is the parameter we get to vary. Absorption becomes a question about the driven system: how does it respond as $\omega$ changes?

Dissipation

Our equation $\ddot x = -V'(x) + A\cos(\omega t)$ is incomplete as a model of a real material. A real molecule is never isolated: it collides with neighbors, it reradiates into the electromagnetic field, and it couples to its own internal degrees of freedom that the “point mass” abstraction hides. Each of these is a channel through which energy leaks away from our oscillator.

A fully honest model would couple our coordinate $x$ to every variable describing every one of these channels. This is hopeless. We would be modeling the entire environment to ask a single question about a single oscillator. So we adopt an effective description: the cumulative effect of the unmodeled world is summarized by a single phenomenological term $f(x, \dot x)$ that drains energy from our oscillator. Two structural constraints narrow down the form of $f$ before we have to choose anything specific.

$f$ depends on $\dot x$ but not on $x$. Two arguments converge here. First, a force depending only on $x$ is conservative — it reshapes the potential well rather than draining any energy. Second, a homogeneous environment imposes the same dissipative effects wherever the particle sits, so $f$ should not depend on $x$ at all.

$f$ must extract energy. The work done by $-f(\dot x)$ on the oscillator over a time $dt$ is $-f(\dot x)\,\dot x\, dt$. For the term to remove energy at every instant — the meaning we want for “dissipative” — we require

$$f(\dot x)\,\dot x \;\geq\; 0 \qquad \text{for all } \dot x,$$

with equality only at $\dot x = 0$. This forces $f(0) = 0$ and $f$ to have the same sign as $\dot x$.

Taylor expanding $f$ around zero,

$$f(\dot x) \;=\; a_1 \dot x \,+\, a_2\dot x^2 \,+\, a_3 \dot x^3 \,+\, \cdots,$$

the dissipativity inequality near $\dot x = 0$ forces the leading nonzero term to be at an odd power with positive coefficient: were the leading term $a_2 \dot x^2$, we would have $f(\dot x)\dot x \sim a_2 \dot x^3$ near zero, sign-indefinite. So at leading order,

$$f(\dot x) \;\approx\; 2\gamma\, \dot x, \qquad \gamma > 0.$$

Higher-order corrections to $f$ are present in general, but matter only at large amplitudes. For small-amplitude motion every dissipative mechanism contributes the same leading $2\gamma \dot x$. We keep only this leading term. The equation of motion becomes

$$\boxed{ \ddot x \;+\; 2\gamma\, \dot x \;=\; -V'(x) \;+\; A \cos(\omega t).}$$
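This boxed equation is straightforward to integrate numerically. Here is a minimal self-contained sketch (the function names and step sizes are my own, not from this post's demos): the second-order equation rewritten as a first-order system in $(x, \dot x)$, stepped with classical RK4.

```python
import math

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta (RK4) step for y' = f(t, y), with y = (x, v)."""
    k1 = f(t, y)
    k2 = f(t + h/2, (y[0] + h/2*k1[0], y[1] + h/2*k1[1]))
    k3 = f(t + h/2, (y[0] + h/2*k2[0], y[1] + h/2*k2[1]))
    k4 = f(t + h,   (y[0] + h*k3[0],   y[1] + h*k3[1]))
    return (y[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def driven_damped(Vprime, gamma, A, omega):
    """x'' + 2*gamma*x' = -V'(x) + A*cos(omega*t) as a first-order system in (x, v)."""
    def f(t, y):
        x, v = y
        return (v, -Vprime(x) - 2*gamma*v + A*math.cos(omega*t))
    return f
```

With $A = 0$ and $V'(x) = x$ this reduces to the damped free oscillator: starting from $(x, \dot x) = (1, 0)$ and stepping for a long time, the state spirals into the origin — exactly the transient decay the dissipation term was built to produce.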

The absorption spectrum

We are now ready to define what we mean by absorption. The total mechanical energy of the oscillator at any moment is

$$E(t) \;=\; \tfrac12 \dot x^2 \,+\, V(x).$$

Differentiating along a solution and using $\ddot x + V'(x) = -2\gamma\dot x + A\cos(\omega t)$,

$$\frac{dE}{dt} \;=\; \dot x\,\bigl(\ddot x + V'(x)\bigr) \;=\; -2\gamma\,\dot x^2 \,+\, A\cos(\omega t)\cdot \dot x.$$

The two terms on the right have natural physical interpretations: $-2\gamma\dot x^2$ is the rate at which damping does work on the oscillator (always non-positive, by construction), and $A\cos(\omega t)\cdot\dot x$ is the rate at which the driver does work.

We want a single absorption rate per drive frequency $\omega$. The instantaneous rates fluctuate, but if the system reaches a steady state — a long-time behavior in which the average energy is constant — then time-averaging $dE/dt$ over the steady state gives zero, and the two rates balance.

For any bounded function $f \colon \mathbb R \to \mathbb R$ admitting a long-time average, write

$$\langle f \rangle \;:=\; \lim_{T \to \infty} \frac{1}{T}\int_0^T f(t)\, dt.$$

When $f$ is periodic with period $T_0$ this reduces to a single-period integral, $\langle f \rangle = \frac{1}{T_0}\int_0^{T_0} f(t)\,dt$. For our periodic steady states $T_0 = 2\pi/\omega$, and this single-period form is what we actually compute.
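As a quick numerical sanity check of the single-period form (a throwaway sketch; `period_average` is a hypothetical helper, not something from this post):

```python
import math

def period_average(f, T0, n=10_000):
    """Average a function over one period [0, T0] by the midpoint rule."""
    h = T0 / n
    return sum(f((k + 0.5) * h) for k in range(n)) * h / T0

# For a pure sinusoid at any frequency, <cos^2> over one full period is 1/2.
omega = 1.3
avg = period_average(lambda t: math.cos(omega * t)**2, 2 * math.pi / omega)
```

Here `avg` comes out at $1/2$ to machine precision — the value of $\langle\cos^2\rangle$ (and $\langle\sin^2\rangle$) that gets used repeatedly below.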

Whether such a steady state exists is a real question. The linear oscillator always reaches one — every solution converges exponentially to a unique periodic attractor, because $\gamma > 0$. For nonlinear potentials this is no longer automatic. For now we assume the steady state exists, with the understanding that we will revisit the assumption when it breaks.

In steady state, $\langle dE/dt\rangle = 0$, so

$$\bigl\langle 2\gamma \dot x^2 \bigr\rangle \;=\; \bigl\langle A\cos(\omega t)\cdot\dot x \bigr\rangle.$$

Both sides equal the rate at which energy flows through the system in steady state — in from the driver, out through the dissipation channel. This common rate, viewed as a function of drive frequency, is the central object of the post:

The absorption spectrum at drive frequency $\omega$ is the long-time-averaged rate of energy throughput, $$P_{\text{abs}}(\omega) \;:=\; \lim_{T \to \infty} \frac{1}{T}\int_0^T 2\gamma\, \dot x(t)^2 \, dt \;=\; \lim_{T \to \infty} \frac{1}{T}\int_0^T A\cos(\omega t)\,\dot x(t)\, dt,$$ with $\dot x$ the velocity in the steady-state solution at drive frequency $\omega$.

The first integrand is the rate at which damping removes energy; the second is the rate at which the driver supplies it. In steady state they balance and the two long-time averages coincide. The first form is closer to the conceptual definition (energy out through the dissipation channel); the second is closer to what an experimentalist would measure (time-averaged power from the source). We will use whichever fits.

The framework

The setup, in full, is

$$\boxed{ \begin{gathered} \ddot x \;+\; 2\gamma \dot x \;=\; -V'(x) \;+\; A \cos(\omega t) \\ P_{\text{abs}}(\omega) \;=\; \bigl\langle 2\gamma \dot x^2 \bigr\rangle \end{gathered} }$$

Three ingredients in: a potential $V$ encoding the system’s internal dynamics, a dissipation rate $\gamma > 0$ encoding the strength of coupling to the unmodeled environment, and a driving force at frequency $\omega$ and amplitude $A$. One function out: the absorption spectrum, computed from the steady-state solution. The framework is general: it does not assume $V$ is quadratic or smooth, it does not assume small amplitudes, and it does not assume any particular relationship between drive and natural frequencies.

The rest of this post applies the recipe in two cases — first $V(x) = \tfrac12 x^2$, where everything can be computed in closed form, and then a nonlinear modification, where the picture becomes qualitatively richer.

The linear oscillator

The simplest non-trivial choice of potential is the quadratic one, $V(x) = \tfrac12 x^2$, for which the equation of motion reads

$$\ddot x \,+\, 2\gamma\, \dot x \,+\, x \;=\; A \cos(\omega t).$$

A particle in the parabolic well, kicked by $A\cos(\omega t)$. Drag the sliders to vary the damping $\gamma$, drive frequency $\omega$, and amplitude $A$, and watch how the steady motion changes — the response is small far from $\omega = 1$ and large near it.

This is the system the rest of the section analyzes.

Solving the equation

Writing $L := \partial_t^2 + 2\gamma\, \partial_t + 1$, the equation is

$$L\,x \;=\; A\cos(\omega t).$$

The operator $L$ is a polynomial in $\partial_t$, so it is diagonalized by exponentials: for any $\lambda \in \mathbb C$,

$$L\, e^{\lambda t} \;=\; p(\lambda)\, e^{\lambda t}, \qquad p(\lambda) \;:=\; \lambda^2 + 2\gamma\lambda + 1.$$

The kernel of $L$ and a particular solution both follow from this single fact.

The kernel. The kernel consists of $e^{\lambda t}$ with $p(\lambda) = 0$, i.e., $\lambda_\pm = -\gamma \pm \sqrt{\gamma^2 - 1}$. The qualitative regime depends on $\gamma$: for $\gamma < 1$ the roots are complex and the transients are decaying oscillations (underdamped), for $\gamma > 1$ both roots are real and negative (overdamped), and $\gamma = 1$ is the critical case.

Since $\gamma > 0$, every kernel element has $\mathrm{Re}\,\lambda < 0$ and decays exponentially. These are transients — solution components that depend on initial conditions but are washed out at long times.

A particular solution. To invert $L$ on the drive $A\cos(\omega t)$, decompose into exponentials: $\cos(\omega t) = \tfrac12(e^{i\omega t} + e^{-i\omega t})$. These are eigenfunctions with $\lambda = \pm i\omega$ and eigenvalues $p(\pm i\omega) = (1-\omega^2) \pm 2i\gamma\omega$, so

$$x_p(t) \;=\; \tfrac{A}{2}\,\frac{e^{i\omega t}}{p(i\omega)} \,+\, \tfrac{A}{2}\,\frac{e^{-i\omega t}}{p(-i\omega)} \;=\; A\,\mathrm{Re}\!\left(\frac{e^{i\omega t}}{p(i\omega)}\right).$$

The steady state. The full solution is $x_p$ plus a transient. The transient decays; the periodic part survives. Whatever initial conditions we start from, the long-time behavior is the same particular solution. Multiplying numerator and denominator by $\overline{p(i\omega)} = (1-\omega^2) - 2i\gamma\omega$ and taking the real part,

$$x_{\text{ss}}(t) \;=\; \frac{A}{(1-\omega^2)^2 + 4\gamma^2\omega^2}\,\Bigl[\,(1-\omega^2)\,\cos\omega t \;+\; 2\gamma\omega\,\sin\omega t\,\Bigr].$$
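This closed form can be checked against brute-force integration. The sketch below (my own loop; the parameter values are arbitrary) starts from rest, integrates well past the transient of size $\sim e^{-\gamma t}$, and compares the final state with the formula.

```python
import math

gamma, A, omega = 0.2, 1.0, 0.8

def rhs(t, x, v):
    # x'' + 2*gamma*x' + x = A*cos(omega*t)
    return v, -x - 2*gamma*v + A*math.cos(omega*t)

def rk4(t, x, v, h):
    k1x, k1v = rhs(t, x, v)
    k2x, k2v = rhs(t + h/2, x + h/2*k1x, v + h/2*k1v)
    k3x, k3v = rhs(t + h/2, x + h/2*k2x, v + h/2*k2v)
    k4x, k4v = rhs(t + h, x + h*k3x, v + h*k3v)
    return (x + h/6*(k1x + 2*k2x + 2*k3x + k4x),
            v + h/6*(k1v + 2*k2v + 2*k3v + k4v))

# Integrate from rest; by t = 80 the transient is down by a factor e^(-16).
t, x, v, h = 0.0, 0.0, 0.0, 0.005
while t < 80.0:
    x, v = rk4(t, x, v, h)
    t += h

den = (1 - omega**2)**2 + 4*gamma**2*omega**2
x_ss = A/den * ((1 - omega**2)*math.cos(omega*t) + 2*gamma*omega*math.sin(omega*t))
# x and x_ss now agree to well within RK4 accuracy.
```

Any other initial condition converges to the same comparison, which is the "transients wash out" claim made concrete.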

Click and drag inside the phase-space box to drop a new initial condition $(x_0, \dot x_0)$; the trajectory is drawn live against a dashed reference for the steady state. Released ICs leave light-gray trails — and you can let the autoplay cycle through ICs to see how every starting point funnels onto the same attractor.

Whatever IC we pick, the trajectory collapses onto the same periodic attractor. This attractor has the same period as the drive but is not in phase with it: combining the $\cos$ and $\sin$ pieces gives the polar form

$$x_{\text{ss}}(t) \;=\; \frac{A}{\sqrt{(1-\omega^2)^2 + 4\gamma^2\omega^2}}\,\cos(\omega t - \varphi), \qquad \tan\varphi(\omega) \;=\; \frac{2\gamma\omega}{1-\omega^2},$$

a phase lag running from $0$ at $\omega = 0$, through $\pi/2$ at resonance ($\omega = 1$), to $\pi$ as $\omega \to \infty$.

The absorption spectrum

We want the time-averaged dissipation rate

$$P_{\text{abs}}(\omega) \;=\; \langle 2\gamma\,\dot x_{\text{ss}}^2 \rangle.$$

Top trace is $x_{\text{ss}}(t)$; the shaded curve below is the instantaneous dissipation rate $2\gamma\,\dot x^2(t)$, with its period average drawn as a dashed line. The bar to the right has the same height as the dashed line — that’s $P_{\text{abs}}(\omega)$ at the current drive. Slide $\omega$ across resonance and watch the bar climb and fall.

The bar height is the absorption: $P_{\text{abs}}(\omega)$ is the time-averaged height of the dissipation curve. We can read it off the demo at any $\omega$.

To compute it explicitly, differentiate the polar form of the steady state from §2:

$$\dot x_{\text{ss}}(t) \;=\; -\frac{A\,\omega}{\sqrt{(1-\omega^2)^2 + 4\gamma^2\omega^2}}\,\sin(\omega t - \varphi).$$

Squaring and using $\langle\sin^2\rangle = \tfrac12$,

$$\langle \dot x_{\text{ss}}^2\rangle \;=\; \frac{A^2\,\omega^2}{2\bigl[(1-\omega^2)^2 + 4\gamma^2\omega^2\bigr]}.$$

Multiplying by $2\gamma$,

$$\boxed{ P_{\text{abs}}(\omega) \;=\; \frac{A^2\,\gamma\,\omega^2}{(1-\omega^2)^2 + 4\gamma^2\omega^2}. }$$

A single peak at the natural frequency $\omega = 1$, vanishing at $\omega = 0$ and decaying as $\omega^{-2}$ for $\omega \to \infty$. The peak sits exactly at the resonance from §2 — the frequency where the phase lag is $\pi/2$ and the response is in pure quadrature with the drive.
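A quick consistency check of the boxed spectrum against its definition $\langle 2\gamma \dot x_{\text{ss}}^2\rangle$, using the closed-form steady-state velocity (a sketch; the parameter values are arbitrary):

```python
import math

gamma, A, omega = 0.15, 1.0, 0.9

def P_abs(w):
    """The boxed closed-form absorption spectrum of the linear oscillator."""
    return A**2 * gamma * w**2 / ((1 - w**2)**2 + 4*gamma**2*w**2)

# Steady-state velocity in polar form: a pure sinusoid of amplitude A*w/sqrt(den).
den = (1 - omega**2)**2 + 4*gamma**2*omega**2
phi = math.atan2(2*gamma*omega, 1 - omega**2)
def v_ss(t):
    return -A*omega/math.sqrt(den) * math.sin(omega*t - phi)

# Midpoint average of 2*gamma*v^2 over one drive period.
T0, n = 2*math.pi/omega, 4000
avg = sum(2*gamma*v_ss((k + 0.5)*T0/n)**2 for k in range(n)) / n
```

Here `avg` matches `P_abs(omega)`, and at resonance the formula collapses to the peak height $P_{\text{abs}}(1) = A^2/(4\gamma)$.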

The rust curve is $P_{\text{abs}}(\omega)$ vs $\omega$; the thin column to the right has fixed height $\pi A^2/4$, the value of $\int_0^\infty P_{\text{abs}}\,d\omega$. Drag $\gamma$ (or let it autoplay): as $\gamma$ shrinks, the peak narrows and grows tall — but the column stays put.

The bar’s stubbornness is a genuine conservation law. Direct integration (with some courage) gives

$$\int_0^\infty P_{\text{abs}}(\omega)\, d\omega \;=\; \frac{\pi A^2}{4},$$

independent of $\gamma$. The peak’s three features come from three independent ingredients of the model: the peak position from the conservative dynamics ($V'' = 1$), the peak width from the dissipation rate $\gamma$, and the integrated area from the drive amplitude $A$.
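The sum rule can also be checked by numerical quadrature (a sketch; the cutoff $W$ truncates a tail of size roughly $\gamma A^2/W$, which limits the achievable accuracy):

```python
import math

gamma, A = 0.1, 1.0

def P_abs(w):
    return A**2 * gamma * w**2 / ((1 - w**2)**2 + 4*gamma**2*w**2)

# Midpoint rule on [0, W]. P_abs decays like gamma*A^2/w^2 at large w,
# so the neglected tail beyond W = 500 contributes only about 2e-4.
W, n = 500.0, 500_000
h = W / n
total = sum(P_abs((k + 0.5) * h) for k in range(n)) * h
```

The value `total` lands within a few parts in $10^4$ of $\pi A^2/4 \approx 0.785$, and the same holds for other moderate values of $\gamma$.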

What we have learned: in the linear theory, a single oscillator absorbs light in a single peak centered at the natural frequency $\omega = 1$.

A nonlinear modification

The linear theory rests on a strong assumption: $V$ is purely quadratic. In a real system, that’s only the leading-order behavior near the minimum. Taylor expanding,

$$V(x) \;=\; \tfrac12 x^2 \,+\, \tfrac{\epsilon}{3}\, x^3 \,+\, O(x^4),$$

the next correction is cubic, and the simplest nonlinear modification is to keep just that one extra term:

$$V(x) \;=\; \tfrac12 x^2 \,+\, \tfrac{\epsilon}{3}\, x^3, \qquad \epsilon \geq 0.$$

Slide $\epsilon$ to bend the well. The cubic term pulls the left side down, the well shallows, and for any $\epsilon > 0$ a local maximum forms at $x = -1/\epsilon$ with barrier height $1/(6\epsilon^2)$ — the energy a particle would need to escape.

This $V$ is unbounded below: as $x \to -\infty$, $V \to -\infty$. A particle with enough energy can climb over the local maximum and escape, and we’ll occasionally see this happen in simulations — affected frequencies are flagged as having no steady state.
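The barrier numbers quoted above follow from setting $V'(x) = x + \epsilon x^2 = 0$; here is a two-line check (with an arbitrarily chosen $\epsilon$):

```python
eps = 0.5

def V(x):
    # V(x) = x^2/2 + (eps/3) x^3
    return 0.5*x*x + (eps/3)*x**3

def Vprime(x):
    return x + eps*x*x

x_top = -1.0/eps             # the non-zero critical point of V
barrier = 1.0/(6*eps**2)     # claimed barrier height
# Vprime(x_top) vanishes and V(x_top) equals barrier, confirming both formulas.
```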

The model

The equation of motion picks up a quadratic nonlinearity:

$$\ddot x \,+\, 2\gamma\, \dot x \,+\, x \,+\, \epsilon\, x^2 \;=\; A\cos(\omega t).$$

At $\epsilon = 0$ we recover the linear theory exactly. For $\epsilon > 0$ the equation has no closed-form solution, so we go numerical.

The same setup as the linear demo, now in the asymmetric well. Sweep $\epsilon$ up gradually: at small values you can see the anharmonic distortion of the response, and for large enough $\epsilon$ at the right $\omega$, the particle eventually finds enough energy to climb the barrier and escape.

Steady states and absorption

The linear damping term $2\gamma\dot x$ is unaffected by the nonlinearity, so transients still decay exponentially. We expect — and observe — that initial conditions wash out, leaving a unique periodic attractor, at least for trajectories whose energy stays below the escape barrier.

Drop ICs in the phase-space box as before. Most still funnel onto a common attractor — but ICs with enough energy now escape over the barrier and run off; those are the trajectories that don’t settle.

What the attractor itself looks like is best seen directly:

The plotted trace is $x_{\text{ss}}(t)$, computed by integrating from rest with RK4 until transients fall below numerical noise. Drag $\epsilon$ up to see the waveform deviate from a sinusoid — the asymmetric well biases the equilibrium and distorts the shape, severely so for larger $\epsilon$.

The recipe for absorption is unchanged from §3:

$$P_{\text{abs}}(\omega) \;=\; \tfrac{1}{T_0}\int_0^{T_0} 2\gamma\, \dot x_{\text{ss}}(t)^2\, dt, \qquad T_0 \;=\; 2\pi/\omega,$$

with $\dot x_{\text{ss}}$ now obtained numerically rather than in closed form. Each value of $P_{\text{abs}}(\omega)$ is the result of integrating the equation of motion until transients decay, then averaging $2\gamma\dot x^2$ over a single steady-state period.
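Here is a self-contained sketch of that recipe (my own minimal RK4 loop, not the demo code behind this post; the step size and settling time are ad hoc). With $\epsilon = 0$ it can be validated against the closed-form linear spectrum from §3:

```python
import math

gamma, A, eps = 0.1, 0.3, 0.0   # eps = 0 recovers the linear oscillator

def rhs(t, x, v, omega):
    # x'' + 2*gamma*x' + x + eps*x^2 = A*cos(omega*t)
    return v, -x - eps*x*x - 2*gamma*v + A*math.cos(omega*t)

def rk4(t, x, v, h, omega):
    k1x, k1v = rhs(t, x, v, omega)
    k2x, k2v = rhs(t + h/2, x + h/2*k1x, v + h/2*k1v, omega)
    k3x, k3v = rhs(t + h/2, x + h/2*k2x, v + h/2*k2v, omega)
    k4x, k4v = rhs(t + h, x + h*k3x, v + h*k3v, omega)
    return (x + h/6*(k1x + 2*k2x + 2*k3x + k4x),
            v + h/6*(k1v + 2*k2v + 2*k3v + k4v))

def P_abs_numeric(omega, settle=100.0, h=0.002):
    """Integrate from rest past the transients, then average 2*gamma*v^2 over one period."""
    t, x, v = 0.0, 0.0, 0.0
    while t < settle:
        x, v = rk4(t, x, v, h, omega)
        t += h
    n = int(2*math.pi/omega/h)      # steps covering one drive period
    acc = 0.0
    for _ in range(n):
        acc += 2*gamma*v*v
        x, v = rk4(t, x, v, h, omega)
        t += h
    return acc / n
```

For $\epsilon > 0$ the same loop traces out the nonlinear spectrum one $\omega$ at a time; escaping trajectories show up as $x$ running off toward $-\infty$ and should be caught and flagged rather than averaged.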

The solid curve is the full nonlinear $P_{\text{abs}}(\omega)$, built up live one frequency at a time on a log y-axis; the dashed curve is the linear ($\epsilon = 0$) spectrum from §3 for comparison. Vertical reference lines mark $\omega = 1/n$, and gray bands flag frequency ranges where trajectories escape (no steady state). Slide $\epsilon$ up and watch new sub-resonance peaks emerge near $\omega = 1/2, 1/3, \ldots$.

The departure from the linear theory is clear. The peak at $\omega = 1$ survives — distorted, slightly shifted — and as $\epsilon$ grows, new peaks appear below the natural frequency. There’s at least one clear one, appearing around $\omega \approx 1/2$. For the right choices of conditions, more can be seen, roughly aligned with $\omega \approx 1/3$ and $\omega \approx 1/4$ (for reference, the lines $\omega = 1/n$ are drawn in the plot). This is a huge departure from the linear story: nonlinearity allows new peaks.


The linear theory predicted a single absorption peak at the natural frequency. The simplest nonlinear correction — a cubic term in $V$, the leading anharmonicity — gives more: a distorted version of the linear peak, plus a sequence of additional peaks at smaller frequencies. Water is certainly a more complicated system, with both more complicated nonlinearities and coupling between different oscillatory modes.

So it will take some time to build up the right model, but this already makes it much clearer why absorption spectra can have so many scattered little peaks: slight nonlinearities have big effects!
