Modelling Temperature Dissipation in the Mask Lysis Unit

April 2022

Abstract

We aim to model the effects of heating under different conditions in the lysis unit, to determine whether heating is uniform and fast enough, and how scale affects it. We use iterative thermal methods on a discretised space in three dimensions. Code is written in Python with the NumPy, Raster Geometry and Matplotlib libraries. Electronics information was provided in liaison with Gautam Krishna. Thanks to Sophie Elena - co-writer of the code and modelling expert - and Jian-Hui Mo.


Equations

For a continuous scalar field \(T:\mathbb{R}^3 \to \mathbb{R}\) representing temperature in a three-dimensional volume, its evolution is governed by the heat equation:

\[\frac{\partial T}{\partial t}=\alpha \nabla^2 T\] ... where \(\alpha:\mathbb{R}^3 \to \mathbb{R}\) is the thermal diffusivity (as a scalar field).

Our model also involves a heating element, modelled as a resistor, which requires more thought. The power \(P=\frac{V^2}{R}\) can be substituted into the differential form of the specific heat equation.

\[dE = mc\cdot dT\]

\[dT = \frac{dE}{dt}\frac{dt}{mc}= dt \frac{P}{mc}\]
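As a quick numeric check of the relation above, with hypothetical element values (these are illustrative, not the project's real parameters):

```python
# Temperature rise per time step for a constant-resistance element.
# All values are hypothetical, for illustration only.
V = 5.0       # supply voltage (V)
R = 10.0      # element resistance (ohm)
m = 0.5e-3    # element mass (kg)
c = 450.0     # specific heat capacity (J/(kg K))
dt = 0.01     # time step (s)

P = V**2 / R             # dissipated power: 2.5 W
dT = P / (m * c) * dt    # temperature rise per step (K)
print(round(dT, 4))      # -> 0.1111
```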

However, the resistance of the element changes with temperature, modelled by... \[R = R_{ref}[1+\alpha(T-T_{ref})]\]

where \(R_{ref}\) is the resistance of the wire at the reference temperature \(T_{ref}\) at which it was measured, and \(\alpha\) here denotes the wire's temperature coefficient of resistance (distinct from the thermal diffusivity). Combining these gives, for the wire:

\[dT = \frac{V^2}{mc}\frac{1}{R_{ref}[1+\alpha(T-T_{ref})]}dt\]

This is - in accordance with the superposition theorem - added to the differential from the heat equation (which applies globally)¹:

\[dT = \alpha \nabla^2T\cdot dt\]
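The wire term can be sketched as a minimal Euler loop, showing the heating rate fall as the resistance rises with temperature. All values are hypothetical:

```python
# Euler iteration of dT = (V^2 / mc) dt / (R_ref [1 + alpha_R (T - T_ref)]),
# the wire equation above, with hypothetical values. alpha_R is the temperature
# coefficient of resistance, not the thermal diffusivity.
alpha_R = 3.9e-3           # 1/K, copper-like
R_ref, T_ref = 10.0, 293.15
V, m, c = 5.0, 0.5e-3, 450.0
dt = 0.01

T = T_ref
for _ in range(1000):                        # 10 s of heating, no losses modelled
    R = R_ref * (1 + alpha_R * (T - T_ref))  # resistance grows with temperature...
    T += V**2 / (m * c * R) * dt             # ...so the heating rate falls
print(round(T - T_ref, 1))                   # a little under the constant-R rise of ~111 K
```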

Discretisation

The modelled volume must be divided into discrete voxels to implement an iterated approximation. Each voxel is assigned an array index \(i,j,k \in \mathbb{Z}_{\geq 0}\). The volume modelled is a cuboid formed of uniform cubes of size \(\langle \Delta x,\Delta x,\Delta x\rangle\). The voxels are identically sized in the \(x\), \(y\) and \(z\) dimensions, so the equations below refer only to \(i\) and \(x\) but apply to all.

\[x = i\Delta x\]

\[i = \left\lfloor \frac{x}{\Delta x} \right\rceil\]

\[\Delta x = \frac{L}{s}\]

Here \(s\) is the number of spatial samples and \(\lfloor x \rceil\) denotes the rounding of \(x\) to an integer. The temperature and thermal diffusivity fields have now also become arrays; I will represent the entry at voxel \(i,j,k\) as \(T_{i,j,k}\) and \(\alpha_{i,j,k}\) respectively. Next, the Laplacian \(\nabla^2\), usually defined as an operator on continuous fields in 3D Cartesian coordinates as

\[\nabla^2 = \frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}\]

must now be defined discretely (central differencing). In one dimension:

\[\begin{split} \nabla^2T_{i} = \frac{\Delta}{\Delta x}\left(\frac{\Delta T_{i}}{\Delta x}\right) = \frac{\Delta}{\Delta x}\left(\frac{T_{i+1}-T_{i}}{\Delta x}\right) \\ = \frac{T_{i+1}-T_{i}-(T_{i}-T_{i-1})}{\Delta x^2} = \frac{T_{i+1}+T_{i-1}-2T_{i}}{\Delta x^2} \end{split}\]

And in three dimensions²...

\[(\nabla^2T)_{i,j,k} = \frac{1}{\Delta x^2}[T_{i+1,j,k}+T_{i-1,j,k}+T_{i,j+1,k}+T_{i,j-1,k}+T_{i,j,k+1}+T_{i,j,k-1}-6T_{i,j,k}]\]
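The stencil can be checked numerically on a field with a known Laplacian: for \(T(x)=x^2\) the exact result is the constant 2, which the central difference reproduces exactly on interior samples. A sketch using NumPy's `roll`, mirroring the implementation in matrices.py:

```python
import numpy as np

# Verify the 1D central-difference stencil on T(x) = x^2, whose exact
# Laplacian is the constant 2. A quadratic is differenced exactly
# (up to float rounding) on interior samples.
dx = 0.1
x = np.arange(0.0, 1.0, dx)
T = x**2

lap = (np.roll(T, -1) + np.roll(T, 1) - 2 * T) / dx**2
print(lap[1:-1])   # interior values; the two edge values are polluted by the periodic roll
```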

The iteration equations become (global, then with the wire-only extra term):

\[(\Delta T)_{i,j,k} = \alpha_{i,j,k}(\nabla^2T)_{i,j,k}\cdot\Delta t\]

\[(\Delta T)_{i,j,k} = \left[\alpha_{i,j,k}(\nabla^2T)_{i,j,k} + \frac{V^2}{mc}\frac{1}{R_{ref}[1+\alpha_{i,j,k}(T_{i,j,k}-T_{ref})]}\right]\Delta t\]
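A minimal sketch of one global iteration step on a small array, with a hypothetical uniform diffusivity and periodic ('repeat') boundaries:

```python
import numpy as np

# One explicit step of (ΔT)_{i,j,k} = α (∇²T)_{i,j,k} Δt on a small grid.
# Parameters are hypothetical; boundaries are periodic via np.roll.
dx, dt = 1e-3, 1e-3
alpha = np.full((8, 8, 8), 1.4e-7)   # uniform, water-like diffusivity (m^2/s)
T = np.full((8, 8, 8), 293.0)
T[4, 4, 4] = 353.0                   # one hot voxel

lap = -6.0 * T
for axis in range(3):
    lap += np.roll(T, 1, axis) + np.roll(T, -1, axis)
lap /= dx**2

T += alpha * lap * dt
print(round(float(T[4, 4, 4]), 4))   # the hot voxel has cooled slightly
```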

Model Parameters³

The lysis unit can be modelled as three nested regions: a large outer Reuleaux cylinder of PLA, a smaller inner Reuleaux cylinder of fluid (water), and a small resistive cylinder whose axis is perpendicular to the other two.

The lysis unit can be modelled as being contained within a small simulated volume (padding the lysis unit volume by a few voxels), which itself is exposed to an infinitely absorbing environment of room temperature (this is included in the boundary conditions, see Section 7).

Code

Simulation
simulation.py
```python
import matrices as mt
import numpy as np
import time as t


def find_dt(model) -> float:
    a_max = np.max(model.diffusivity)
    return model.delta_xyz**2 / a_max / np.ndim(model.initial_state) / 8


class ThermalModel:
    def __init__(self, initial_state, thermal_diffusivity, delta_xyz,
                 boundary_conditions='repeat', smallest_rate=-1, void_temp=0) -> None:
        self.dims = np.shape(initial_state)
        self.initial_state = initial_state  # Alias to the static initial state
        self.stop = smallest_rate  # Smallest rate of temperature change (K/s) before the simulation "converges" and stops
        self.void_temp = void_temp  # Temperature of the void boundary condition (if chosen)
        self.reset_sim()  # Resets the simulation
        self.diffusivity = np.copy(thermal_diffusivity)
        self.delta_xyz = delta_xyz
        self.boundary_conditions = boundary_conditions
        self.overrides = []  # List of (mask, tick function) override tuples

    def reset_sim(self) -> None:
        self.frames = []  # List to store frames
        self.frame_times = []  # List of times corresponding to the frames
        self.state = np.copy(self.initial_state)  # Copy of the initial state
        self.time = 0.0  # Resets simulation time
        self.cache_state()  # Saves the first frame

    def voxel_tick_override(self, overriden_voxels, appended_tick_differential) -> None:
        # overriden_voxels is turned into a Boolean mask selecting which voxels to
        # override; appended_tick_differential (a function) should take (self) and
        # return an array of shape self.dims, to be multiplied by dt
        override = ((overriden_voxels != 0), appended_tick_differential)
        self.overrides.append(override)

    def cache_state(self) -> None:
        # Copy the state: it is mutated in place, so appending a reference
        # would make every cached frame identical
        self.frames.append(np.copy(self.state))
        self.frame_times.append(self.time)  # Saves the current time and frame

    def tick(self, delta_time) -> bool:
        # Discretised Laplacian with the given boundary condition
        differential = mt.Laplacian(self.state, self.delta_xyz,
                                    self.boundary_conditions, self.void_temp)
        differential *= self.diffusivity
        for override in self.overrides:
            # Local (masked by override[0]) override equation iterations
            differential += override[0] * override[1](self)
        if np.max(differential) < self.stop:
            return False  # "Converged": no rate of change exceeds the smallest allowed
        self.state += differential * delta_time  # Global heat equation iteration
        self.time += delta_time  # Advance time
        return True  # False signals that the simulation has "converged"

    def simulate_to(self, time, cache_fps=None, dt=None, log=False):
        self.reset_sim()
        if dt is None:
            dt = find_dt(self)  # If no dt is given, find a suitable convergent time step
        ticks = int(time / dt)  # How many times to tick
        checks = max(int(ticks / 10), 1)  # How many ticks make 10%
        tic = t.time()  # Start the stopwatch
        if cache_fps:
            # If a selective frame-caching rate (in simulated time) is given,
            # only save a frame every `caches` ticks; otherwise save every frame
            caches = max(int(1 / cache_fps / dt), 1)
        else:
            caches = 1
        for i in range(ticks):
            if log and i % checks == 0:
                percent = round(i / ticks, 1) * 100
                print(percent, "% : t = ", self.time)  # Print every 10% of progress
            if i == 20:
                elapsed = t.time() - tic
                # Overestimated time of completion (seconds)
                print("ETA in seconds: ", round(elapsed / 20 * (ticks - 20), 1))
            if not self.tick(dt):
                print("Simulation converges, stopping further iteration")
                self.cache_state()
                break  # Stop iterating once the simulation "converges"
            if i % caches == 0:
                self.cache_state()
        self.cache_state()  # Save the final state
        # Tuple of the array of frames across time and the corresponding frame times
        return np.array(self.frames), np.array(self.frame_times)
```
Matrices
matrices.py
```python
import numpy as np


def Laplacian(arr, delta, boundary_conditions='repeat', void_temp=None):
    axes = np.ndim(arr)  # Dimensionality of the array
    s = -2 * axes * arr
    for axis in range(axes):
        # Array shifted once forwards and once backwards along each axis
        s += np.roll(arr, 1, axis) + np.roll(arr, -1, axis)
    s *= 1 / delta**2
    if boundary_conditions == 'void':
        # Pad the array with void_temp, take the Laplacian, then shave off the padding
        s = np.pad(arr, 1, 'constant', constant_values=void_temp)
        return shave(Laplacian(s, delta), 1)
    elif boundary_conditions == 'repeat':
        return s
    elif boundary_conditions == 'frozen':
        # Zero the edge pixels of the Laplacian, so the edges never change
        return null_edges(s)
    elif boundary_conditions == 'xi':  # Xi
        return s + np.random.normal(1e+5, 1e+3, np.shape(arr))
    elif boundary_conditions == 'adiabatic':
        # Pad the array with copies of its edge pixels, take the Laplacian,
        # then shave off the padding
        arr = np.pad(arr, 1, 'edge')
        s = Laplacian(arr, delta)
        return shave(s, 1)
    else:
        return s


def shave(arr, number):
    n = np.ndim(arr)
    # List of slices, used as an index, that discards the padding
    indices = [slice(number, -number) for _ in range(n)]
    return arr[tuple(indices)]


def null_edges(arr):
    shape = np.shape(arr)
    shape = tuple(i - 2 for i in shape)
    inner = np.ones(shape)
    mask = np.pad(inner, 1, 'constant')  # Ones padded with a layer of zeros
    return arr * mask  # Array with zeroed edges
```
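As a property check on the boundary handling, here is a self-contained 1D re-implementation of the 'adiabatic' case (values hypothetical): padding with edge copies means no heat crosses the boundary, so an explicit update conserves the total temperature.

```python
import numpy as np

# 1D re-implementation of the 'adiabatic' branch: pad with edge copies,
# take the Laplacian, shave the padding. Values are hypothetical.
def laplacian_1d(arr, delta):
    return (np.roll(arr, 1) + np.roll(arr, -1) - 2 * arr) / delta**2

T = np.array([10.0, 20.0, 40.0, 20.0, 10.0])
delta, alpha, dt = 1.0, 0.1, 0.5

lap = laplacian_1d(np.pad(T, 1, 'edge'), delta)[1:-1]  # adiabatic boundaries
T_next = T + alpha * lap * dt
print(T.sum(), T_next.sum())   # totals agree: no heat lost to the surroundings
```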
Lysis Sim
lysis_sim.py
```python
import numpy as np
import raster_geometry as rg

import simulation as sim
from parameters import *


def pixels(length):
    return length / scale  # Converts a continuous length to its counterpart in pixels


def reuleaux(dims, height, infra_radius, radius):
    # Construct a Reuleaux-triangle raster as the intersection of three
    # equidistant constructor cylinders
    a = np.ones(dims, dtype=bool)
    constructors = [
        [-infra_radius, 0, 0],
        [0.5 * infra_radius, np.sqrt(3) / 2 * infra_radius, 0],
        [0.5 * infra_radius, -np.sqrt(3) / 2 * infra_radius, 0]]
    for i in constructors:
        a *= rg.cylinder(dims, height, radius,
                         position=np.array(i) / np.array(dims) + 0.5)
    return a


initial = np.ones(dims) * room_temp  # Initial temperature state
alpha = np.ones(dims) * alpha_air  # Set the diffusivity everywhere to alpha_air

# Rasterise the Reuleaux regions / wire cylinder as Boolean mask arrays
cyl_bool = reuleaux(dims, pixels(h_cyl), pixels(infra_cyl), pixels(r_cyl))
water_bool = reuleaux(dims, pixels(h_water), pixels(infra_cyl), pixels(r_water))
wire_bool = rg.cylinder(dims, pixels(h_wire), pixels(r_wire), axis=1)

# Fill in the diffusivity data with these region masks
alpha[cyl_bool] = alpha_cyl
alpha[water_bool] = alpha_water
alpha[wire_bool] = alpha_wire

J = voltage**2 / mass_resistor / heat_capacity_resistor / ref_resistance


def Wire_tick(self):
    # Define the wire override tick
    arr = self.state - ref_temp
    arr = np.multiply(arr, self.diffusivity)
    arr += 1
    return J * np.reciprocal(arr)


# Construct a model with the above initial state, diffusivity and delta_x,
# a void boundary, and no convergence cutoff rate
Model = sim.ThermalModel(initial, alpha, scale, 'void', -1, room_temp)
Model.voxel_tick_override(wire_bool, Wire_tick)  # Enact the resistive element simulation override
frames, times = Model.simulate_to(20, cache_fps=60, log=True)  # Simulate the evolution of temperature
np.savez('simulation data', frames=frames, times=times)  # Save the data!
```

The Oscillating Grid Problem

When running iterative models, it is more time-efficient to use a larger \(\Delta t\). Considering the taxing array operations and high spatial sampling rate, optimising with larger time steps was necessary. However, on running the simulation with a large enough value of \(\Delta t\), we encounter unexpected behaviour in the model.

Some voxels begin to oscillate, forming an undesired checkerboard pattern. This is a result of excessively large steps in temperature \(\Delta T\). Since \(\Delta T\) is directly (locally) proportional to \(\Delta t\), there is a limit to how large the time step can be before it induces the observed checkerboard oscillation. Knowing this limit gives us the best value for \(\Delta t\).

Finding this limit starts with a model of the model. Consider an infinite grid of alternating pixels in two dimensions, with temperatures \(T_0\) and \(T_1\), all with uniform \(\alpha\).

This problem is symmetric: all pixels with the same initial temperature behave the same way. We expect the grid to converge to a state where all the pixels have reached a mean temperature.

\[T_\mu = \frac{1}{2}(T_0(0)+T_1(0))\]

The Laplacian for pixel 0, \(\nabla^2 T_0\), is:

\[\nabla^2 T_0 = \frac{2}{\Delta x^2}(2T_1-2T_0)\]

More generally, in \(n\) dimensions:

\[\nabla^2 T_0 = \frac{2n}{\Delta x^2}(T_1-T_0)\]

Similarly, for pixel 1:

\[\nabla^2 T_1 = \frac{2n}{\Delta x^2}(T_0-T_1)\]

Therefore, the heat equation for both pixels yields...

\[\frac{dT_0}{dt} = \alpha\nabla^2 T_0 = \frac{2n\alpha}{\Delta x^2}(T_1-T_0)\]

\[\frac{dT_1}{dt} = \alpha\nabla^2 T_1 = \frac{2n\alpha}{\Delta x^2}(T_0-T_1)\]

These are coupled first-order linear differential equations in the two temperatures, which can be solved analytically:

\[\frac{dT_0}{dt} =-\frac{dT_1}{dt}\]

\[\implies T_0=-T_1+C\]

\[C = 2T_\mu\]

\[\frac{dT_0}{dt} = \frac{2n\alpha}{\Delta x^2}(2T_\mu-2T_0)\]

\[k := \frac{4n\alpha}{\Delta x^2},\quad \frac{dT_0}{dt} = k(T_\mu-T_0)\]

\[\int \frac{1}{T_\mu-T_0}dT_0 = k\int dt\]

\[-\ln|T_\mu-T_0| = kt+D\]

\[T_\mu-T_0 = De^{-kt}\]

\[T_0(t) = T_\mu-De^{-kt}\]

Here I have reassigned the letter \(D\) from the constant of integration to what would be \(\pm e^{-D}\); this does not matter, as it acts as some constant until we find its value, which is \(D=T_\mu-T_0(0)\). Solving for \(T_1\) is similar...

\[\frac{dT_1}{dt} = k(T_\mu-T_1)\]

\[T_1(t) = T_\mu-Be^{-kt}\]

Here \(B=T_\mu-T_1(0)=-D\)

\[T_1(t) = T_\mu+De^{-kt}\]

The iterative solution must be expressed in terms of a sum.

\[\Delta T_0 = \frac{k}{2}(T_1-T_0)\Delta t\]

\[\Delta T_1 = \frac{k}{2}(T_0-T_1)\Delta t\]

\[T_0^{[i+1]} = T_0^{[i]}+\frac{k}{2}(T_1^{[i]}-T_0^{[i]})\Delta t = T_0^{[i]}\left(1-\frac{k}{2}\Delta t\right)+T_1^{[i]}\frac{k}{2}\Delta t\]

\[\begin{split} T_0^{[1]} = T_0^{[0]}+\frac{k}{2}(T_1^{[0]}-T_0^{[0]})\Delta t \\ = T_0^{[0]}\left(1-\frac{k}{2}\Delta t\right)+T_1^{[0]}\frac{k}{2}\Delta t \end{split}\]

\[T_0^{[2]} = T_0^{[1]}\left(1-\frac{k}{2}\Delta t\right)+T_1^{[1]}\frac{k}{2}\Delta t\]

\[T_0^{[2]} = \left[T_0^{[0]}\left(1-\frac{k}{2}\Delta t\right)+T_1^{[0]}\frac{k}{2}\Delta t\right]\left(1-\frac{k}{2}\Delta t\right)+\left[T_1^{[0]}\left(1-\frac{k}{2}\Delta t\right)+T_0^{[0]}\frac{k}{2}\Delta t\right]\frac{k}{2}\Delta t\]

\[T_0^{[2]} = T_0^{[0]}\left[\left(1-\frac{k}{2}\Delta t\right)^2+\left(\frac{k}{2}\Delta t\right)^2\right]+T_1^{[0]}\left[2\left(1-\frac{k}{2}\Delta t\right)\left(\frac{k}{2}\Delta t\right)\right]\]

\[T_0^{[3]} = T_0^{[2]}\left(1-\frac{k}{2}\Delta t\right)+T_1^{[2]}\frac{k}{2}\Delta t\]

\[\begin{split} T_0^{[3]} = & T_0^{[0]}\left[\left(1-\frac{k}{2}\Delta t\right)^3+\left(\frac{k}{2}\Delta t\right)^2\left(1-\frac{k}{2}\Delta t\right)+2\left(1-\frac{k}{2}\Delta t\right)\left(\frac{k}{2}\Delta t\right)^2\right] + \\ & T_1^{[0]}\left[2\left(1-\frac{k}{2}\Delta t\right)^2\left(\frac{k}{2}\Delta t\right)+\left(1-\frac{k}{2}\Delta t\right)^2\left(\frac{k}{2}\Delta t\right)+\left(\frac{k}{2}\Delta t\right)^3\right] \end{split}\]

\[\begin{split} T_0^{[3]} = & T_0^{[0]}\left[\left(1-\frac{k}{2}\Delta t\right)^3+3\left(1-\frac{k}{2}\Delta t\right)\left(\frac{k}{2}\Delta t\right)^2\right] + \\ & T_1^{[0]}\left[3\left(1-\frac{k}{2}\Delta t\right)^2\left(\frac{k}{2}\Delta t\right)+\left(\frac{k}{2}\Delta t\right)^3\right] \end{split}\]

The first three iterates lead us to conjecture that:

\[T_0^{[n]} = \sum_{r=0}^{n}\binom{n}{r}\left(1-\frac{k}{2}\Delta t\right)^r\left(\frac{k}{2}\Delta t\right)^{n-r}\left[D(-1)^{r+n+1}+T_\mu\right]\]

\[T_1^{[n]} = \sum_{r=0}^{n}\binom{n}{r}\left(1-\frac{k}{2}\Delta t\right)^r\left(\frac{k}{2}\Delta t\right)^{n-r}\left[D(-1)^{r+n}+T_\mu\right]\]

Proof by induction follows:

\[\begin{split} T_0^{[1]} = \sum_{r=0}^{1}\binom{1}{r}\left(1-\frac{k}{2}\Delta t\right)^r\left(\frac{k}{2}\Delta t\right)^{1-r}\left[D(-1)^{r+2}+T_\mu\right] \\ = \left(\frac{k}{2}\Delta t\right)\left[D(-1)^2+T_\mu\right] + \left(1-\frac{k}{2}\Delta t\right)\left[D(-1)^3+T_\mu\right] \\ = \left(\frac{k}{2}\Delta t\right)T_1^{[0]} + \left(1-\frac{k}{2}\Delta t\right)T_0^{[0]} \end{split}\]

\[\begin{split} T_0^{[n+1]} = T_0^{[n]}\left(1-\frac{k}{2}\Delta t\right)+T_1^{[n]}\left(\frac{k}{2}\Delta t \right) \\ = \left(1-\frac{k}{2}\Delta t\right) \sum_{r=0}^{n}\binom{n}{r}\left(1-\frac{k}{2}\Delta t\right)^r\left(\frac{k}{2}\Delta t\right)^{n-r}\left[D(-1)^{r+n+1}+T_\mu\right] \\ + \left(\frac{k}{2}\Delta t\right) \sum_{r=0}^{n}\binom{n}{r}\left(1-\frac{k}{2}\Delta t\right)^r\left(\frac{k}{2}\Delta t\right)^{n-r}\left[D(-1)^{r+n}+T_\mu\right] \\ = \sum_{r=0}^{n}\binom{n}{r}\left(1-\frac{k}{2}\Delta t\right)^{r+1}\left(\frac{k}{2}\Delta t\right)^{n-r}\left[D(-1)^{r+n+1}+T_\mu\right] \\ + \sum_{r=0}^{n}\binom{n}{r}\left(1-\frac{k}{2}\Delta t\right)^r\left(\frac{k}{2}\Delta t\right)^{n-r+1}\left[D(-1)^{ r+n}+T_\mu\right] \\ = \sum_{r=1}^{n+1}\binom{n}{r-1}\left(1-\frac{k}{2}\Delta t\right)^{r}\left(\frac{k}{2}\Delta t\right)^{n-r+1}\left[D(-1)^{r+n}+T_\mu\right] \\ + \sum_{r=0}^{n}\binom{n}{r}\left(1-\frac{k}{2}\Delta t\right)^r\left(\frac{k}{2}\Delta t\right)^{n-r+1}\left[D(-1)^{r+n}+T_\mu\right] \\ = \sum_{r=1}^{n}\left[\binom{n}{r-1}+\binom{n}{r}\right]\left(1-\frac{k}{2}\Delta t\right)^{r}\left(\frac{k}{2}\Delta t\right)^{n+1-r}\left[D(-1)^{r+n}+T_\mu\right] \\ + \binom{n}{0}\left(1-\frac{k}{2}\Delta t\right)^0\left(\frac{k}{2}\Delta t\right)^{n+1}\left[D(-1)^{n}+T_\mu\right] \\ + \binom{n}{n}\left(1-\frac{k}{2}\Delta t\right)^{n+1}\left(\frac{k}{2}\Delta t\right)^{0}\left[D(-1)^{2n+1}+T_\mu\right] \end{split}\]

The binomial coefficient:

\[\begin{split} \binom{n}{r-1}+\binom{n}{r} = \frac{n!}{(r-1)!(n-r+1)!}+\frac{n!}{r!(n-r)!} \\= \frac{n!}{(r-1)!(n-r)!}\left[\frac{1}{n-r+1}+\frac{1}{r}\right] \\= \frac{n!}{(r-1)!(n-r)!}\left[\frac{n+1}{(n-r+1)r}\right] \\= \frac{(n+1)!}{r!(n+1-r)!}=\binom{n+1}{r} \end{split}\]

\[\begin{split} T_0^{[n+1]} = \sum_{r=1}^{n}\binom{n+1}{r}\left(1-\frac{k}{2}\Delta t\right)^{r}\left(\frac{k}{2}\Delta t\right)^{n+1-r}\left[D(-1)^{r+(n+1)+1}+T_\mu\right] \\ + \binom{n+1}{0}\left(1-\frac{k}{2}\Delta t\right)^0\left(\frac{k}{2}\Delta t\right)^{n+1}\left[D(-1)^{(n+1)+1}+T_\mu\right] \\ + \binom{n+1}{n+1}\left(1-\frac{k}{2}\Delta t\right)^{n+1}\left(\frac{k}{2}\Delta t\right)^{0}\left[D(-1)^{n+1+(n+1)+1}+T_\mu\right] \\= \sum_{r=0}^{n+1}\binom{n+1}{r}\left(1-\frac{k}{2}\Delta t\right)^{r}\left(\frac{k}{2}\Delta t\right)^{n+1-r}\left[D(-1)^{r+(n+1)+1}+T_\mu\right] \end{split}\]

The algebra of \((-1)^x\) allows us to add any even number to \(x\) while maintaining the same value, which was exploited in the proof. The formula is true for \(n=1\), and if it is true for \(n\), then it is also true for \(n+1\). By the principle of mathematical induction, it must be true for all \(n\in\mathbb{N}\). The formulae can be further simplified:

\[\begin{split} T_0^{[n]}=\sum_{r=0}^{n}\binom{n}{r}\left(1-\frac{k}{2}\Delta t\right)^r\left(\frac{k}{2}\Delta t\right)^{n-r}\left[D(-1)^{r+n+1}+T_\mu\right]\\ = T_\mu\sum_{r=0}^{n}\binom{n}{r}\left(1-\frac{k}{2}\Delta t\right)^r\left(\frac{k}{2}\Delta t\right)^{n-r} + D(-1)^{n+1}\sum_{r=0}^{n}(-1)^r\binom{n}{r}\left(1-\frac{k}{2}\Delta t\right)^r\left(\frac{k}{2}\Delta t\right)^{n-r}\\ = T_\mu\left(1-\frac{k}{2}\Delta t+\frac{k}{2}\Delta t\right)^n + D(-1)^{n+1}\sum_{r=0}^{n}\binom{n}{r}\left(\frac{k}{2}\Delta t-1\right)^r\left(\frac{k}{2}\Delta t\right)^{n-r} \\ = T_\mu+ D(-1)^{n+1}(k\Delta t-1)^n \end{split}\]

\[T_0^{[n]} = T_\mu- D(1-k\Delta t)^n\]

\[T_1^{[n]} = T_\mu+ D(1-k\Delta t)^n\]
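These closed forms can be checked against direct iteration, and they make the regimes explicit: monotone convergence for \(k\Delta t < 1\), damped oscillation for \(1 < k\Delta t < 2\), and divergence for \(k\Delta t > 2\). A sketch with hypothetical \(k\) and initial temperatures:

```python
# Iterate ΔT0 = (k/2)(T1 - T0)Δt directly and compare with the closed form
# T0[n] = Tmu - D (1 - kΔt)^n. k and the initial temperatures are hypothetical.
k = 1.0
T0_init, T1_init = 20.0, 100.0
Tmu = 0.5 * (T0_init + T1_init)
D = Tmu - T0_init

for dt in (0.5, 1.5, 2.5):      # monotone / oscillating / divergent regimes
    a, b = T0_init, T1_init
    for n in range(1, 6):
        a, b = (a + 0.5 * k * (b - a) * dt,
                b + 0.5 * k * (a - b) * dt)   # simultaneous update of both pixels
        closed = Tmu - D * (1 - k * dt)**n
        assert abs(a - closed) < 1e-9          # iteration matches the closed form
    print(dt, round(a, 2))
```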

Here are side-by-side comparisons of the analytic and iterated solutions for certain \(\Delta t\) values; in the following section we impose conditions on the resulting behaviours.

Conditions for Convergent Iteration

We put forward some basic conditions for convergence of the checkerboard grid: conditions to stop the oscillation observed with large enough \(\Delta t\). Using the iteration equation for \(\Delta T_0\), the absolute condition for convergent iteration is...

\[\Delta T_0 < T_1-T_0\]

\[\frac{k}{2}(T_1-T_0)\Delta t < T_1-T_0 \implies \Delta t < \frac{2}{k}\]

\[\Delta t < \frac{\Delta x^2}{2n\alpha_{max}}\]

Note the use of \(\alpha_{max}\) to find the global upper bound for convergence, as \(\Delta t\) is a global constant. The next condition eliminates the possibility of either \(\Delta T\) alternating in sign (the condition for unidirectional convergence):

\[\Delta T_0 < \frac{1}{2}(T_1-T_0)\]

\[\Delta t < \frac{\Delta x^2}{4n\alpha_{max}}\]
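Both bounds are straightforward to compute for concrete grid values; a sketch in three dimensions (\(n=3\)), with hypothetical voxel size and diffusivity:

```python
# Step-size bounds for hypothetical grid parameters in n = 3 dimensions.
dx = 1e-3            # voxel size (m)
n = 3
alpha_max = 1.43e-7  # largest diffusivity in the volume (m^2/s), water-like

dt_absolute = dx**2 / (2 * n * alpha_max)   # below this: no checkerboard oscillation
dt_monotone = dx**2 / (4 * n * alpha_max)   # below this: unidirectional convergence
print(dt_absolute, dt_monotone)             # the second bound is half the first
```

For comparison, find_dt in simulation.py uses \(\Delta x^2/(8n\alpha_{max})\), a further factor-of-two margin below the unidirectional bound.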

Finally, we can impose a third condition in order to get meaningful, accurate convergent iteration. An error function between the analytic and iterated solutions allows us to choose a desired error level and return a suitable value for \(\Delta t\). The absolute error between the iteration and the analytic solution at discrete intervals \(t_i=i\Delta t\), as a function of the parameter \(\Delta t\), is:

\[\varepsilon(\Delta t) = \sum_{i=0}^{\infty} 2\left|[T_\mu -De^{-ik\Delta t}]-[T_\mu-D(1-k\Delta t )^i]\right|\]

\[\varepsilon(\Delta t) = 2|D|\sum_{i=0}^{\infty} \left|e^{-ik\Delta t}-(1-k\Delta t)^i\right|\]

Notice that, due to the symmetry of the solutions, we can account for the \(T_1\) error by multiplying the \(T_0\) error by 2. At this point it becomes easier to convert the sum into an integral, for which we extend the iterated solution’s domain from discrete to continuous using \(t=i\Delta t\).

\[\varepsilon(\Delta t) = 2|D|\int_{0}^{\infty} \left|e^{-kt}-(1-k\Delta t)^{\frac{t}{\Delta t}} \right| dt\]

To remove the modulus, we have to assert that \(e^{-kt}\geq (1-k\Delta t)^{\frac{t}{\Delta t}}\) for \(t\geq0\). This is difficult to prove, though it seems to hold up to large \(t\) for most parameters, and any time this assumption breaks at large \(t\), the difference between the analytic and iterated solutions will be almost negligible, so we can safely work with this condition. The error function becomes:

\[\varepsilon(\Delta t) = 2|D|\int_{0}^{\infty} e^{-kt}-(1-k\Delta t)^{\frac{t}{\Delta t}} dt\]

\[= 2|D|\left[ -\frac{e^{-kt}}{k}-\frac{(1-k\Delta t)^{\frac{t}{\Delta t}}}{\ln(1-k\Delta t)}\Delta t \right]_0^\infty\]

\[\varepsilon(\Delta t) = 2|D|\left[ \frac{1}{k}+\frac{\Delta t}{\ln(1-k\Delta t)} \right]\]

Removing the \(2|D|\) to standardise the error function for all initial parameters, this can be approximated with a Taylor expansion about \(\Delta t = 0\):

\[\varepsilon_{standard}(\Delta t) \approx \frac{1}{2}\Delta t+\frac{k}{12}\Delta t^2+\frac{k^2}{24}\Delta t^3 + \frac{19k^3}{720}\Delta t^4\]

This error function allows us to crudely estimate how accurate any temperature simulation will be, given a value for \(\Delta t\).
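The series can be checked against the exact expression for a hypothetical \(k\) and a safely convergent \(\Delta t\):

```python
import numpy as np

# Compare the exact standardised error 1/k + Δt/ln(1 - kΔt) with its
# fourth-order series approximation; k and Δt are hypothetical (kΔt < 1).
k = 1.0
dt = 0.1

exact = 1 / k + dt / np.log(1 - k * dt)
series = dt / 2 + k * dt**2 / 12 + k**2 * dt**3 / 24 + 19 * k**3 * dt**4 / 720
print(exact, series)   # the two agree to roughly six decimal places
```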

Results

Basic boundary conditions were implemented, including ’repeat’ (modular space), ’void’ (an infinitely absorbing environment at fixed room temperature) and ’adiabatic’ (no heat lost to the surroundings; energy conserved within the volume):

Void boundary condition (#1)
Void boundary condition (#2)
Adiabatic boundary condition (#1)
Adiabatic boundary condition (#2)
Frozen boundary condition (#1)
Frozen boundary condition (#2)
Repeat boundary condition (#1)
Repeat boundary condition (#2)

The simulation was then run with the model parameters and the convergent \(\Delta t\), with the geometry rasterised using the Raster Geometry package. The results follow, with an evolving heatmap showing the temperature in a cross-section of the model volume and a graph tracking the temperature of fixed points in the volume over time.

The graph above shows the temperature sampled from three points over time. In blue: a point in the lysis unit. In orange: a point far from the resistive element. In green: a sampled point outside the PLA container.

Final Heatmap


  1. This is an abuse of notation: \(dT\) is written as a total differential where the derivative is really the partial \(\partial T/\partial t\), but both are converted to finite deltas for iteration.↩︎

  2. \(\Delta x = \Delta y = \Delta z\)↩︎

  3. Thermal diffusivity values were sourced from: https://thermtest.com/thermal-resources/materials-database↩︎