Wednesday, December 14, 2016

Westworld and the mathematical structure of memories

The dominant conceptual paradigm in mathematical neuroscience is to represent the human mind, and prospective artificial intelligence, as a neural network. The patterns of activity in such a network, whether they're realised by the neuronal cells in a human brain, or by artificial semiconductor circuits, provide the capability to represent the external world and to process information. In particular, the mathematical structures instantiated by neural networks enable us to understand what memories are, and thus to understand the foundation upon which personal identity is built.

Intriguingly, however, there is some latitude in the mathematical definition of what a memory is. To understand the significance of this, let's begin by reviewing some of the basic ideas in the field.

On an abstract level, a neural network consists of a set of nodes, and a set of connections between the nodes. The nodes possess activation levels; the connections between nodes possess weights; and the nodes have numerical rules for calculating their next activation level from a combination of the previous activation level, and the weighted inputs from other nodes. A negative weight transmits an inhibitory signal to the receiving node, while a positive weight transmits an excitatory signal.

The nodes are generally divided into three classes: input nodes, hidden/intermediate nodes, and output nodes. The activity levels of input nodes communicate information from the external world, or another neural system; output nodes transmit information to the external world or to other neural systems; and the hidden nodes merely communicate with other nodes inside the network.

In general, any node can possess a connection with any other node. However, there is a directionality to the network in the sense that patterns of activation propagate through it from the input nodes to the output nodes. In a feedforward network, there is a partial ordering relationship defined on the nodes, which prevents downstream nodes from signalling those upstream. In contrast, such feedback circuits are permitted in a recurrent network. Biological neural networks are recurrent networks.

Crucially, the weights in a network are capable of evolving with time. This facilitates learning and memory in both biological and artificial networks. 

The activation levels in a neural network are also referred to as 'firing rates', and in the case of a biological brain, generally correspond to the frequencies of the so-called 'action potentials' which a neuron transmits down its output fibre, the axon. The neurons in a biological brain are joined at synapses, and in this case the weights correspond to the synaptic efficiency. The latter is dependent upon factors such as the pre-synaptic neurotransmitter release rate, the number and efficacy of post-synaptic receptors, and the availability of enzymes in the synaptic cleft. Whilst the weights can vary between inhibitory and excitatory in an artificial network, this doesn't appear to be possible for synaptic connections.

Having defined a neural network, the next step is to introduce the apparatus of dynamical systems theory. Here, the possible states of a system are represented by the points of a differential manifold $\mathcal{M}$, and the possible dynamical histories of that system are represented by a particular set of paths in the manifold. Specifically, they are represented by the integral curves of a vector field defined on the manifold by a system of differential equations. This generates a flow $\phi_t$, which is such that for any point $x(0) \in \mathcal{M}$, representing an initial state, the state after a period of time $t$ corresponds to the point $x(t) = \phi_t(x(0))$.  

In the case of a neural network, a state of the system corresponds to a particular combination of activation levels $x_i$ ('firing rates') for all the nodes in the network, $i = 1,\ldots,n$. The possible dynamical histories are then specified by ordinary differential equations for the $x_i$. A nice example of such a 'firing rate model' for a biological brain network is provided by Curto, Degeratu and Itskov:

$$
\frac{dx_i}{dt} = - \frac{1}{\tau_i}x_i + f \left(\sum_{j=1}^{n}W_{ij}x_j + b_i \right), \,  \text{for } \, i = 1,\ldots,n
$$
$W$ is the matrix of weights, with $W_{ij}$ representing the strength of the connection from the $j$-th neuron to the $i$-th neuron; $b_i$ is the external input to the $i$-th neuron; $\tau_i$ defines the timescale over which the $i$-th neuron would return to its resting state in the absence of any inputs; and $f$ is a non-linear function which, amongst other things, precludes the possibility of negative firing rates.
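To make the dynamics concrete, here is a minimal numerical sketch of a firing rate model of this form, integrated with a forward-Euler step. The three-neuron network, its weights, time constants, inputs, and the choice of a threshold-linear $f$ are all illustrative assumptions, not values taken from Curto, Degeratu and Itskov.

```python
import numpy as np

def simulate_firing_rates(W, b, tau, x0, dt=0.001, T=2.0):
    """Forward-Euler integration of dx_i/dt = -x_i/tau_i + f(sum_j W_ij x_j + b_i).

    f is taken to be threshold-linear (rectification), which keeps firing
    rates non-negative. All parameter values are illustrative.
    """
    f = lambda u: np.maximum(u, 0.0)          # non-linearity precluding negative rates
    x = np.array(x0, dtype=float)
    trajectory = [x.copy()]
    for _ in range(int(T / dt)):
        dx = -x / tau + f(W @ x + b)
        x = x + dt * dx
        trajectory.append(x.copy())
    return np.array(trajectory)

# A toy 3-neuron network: mutual inhibition plus constant external drive.
W = np.array([[ 0.0, -0.6, -0.6],
              [-0.6,  0.0, -0.6],
              [-0.6, -0.6,  0.0]])
b = np.array([1.0, 1.0, 1.0])
tau = np.array([0.05, 0.05, 0.05])

traj = simulate_firing_rates(W, b, tau, x0=[0.2, 0.1, 0.3])
print("final firing rates:", traj[-1])        # this toy network settles towards a fixed point
```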

In the case of a biological brain, one might have $n=10^{11}$ neurons in the entire network. This entails a state-space of dimension $10^{11}$. Within this manifold are submanifolds corresponding to the activities of subsets of neurons. In a sense to be defined below, memories correspond to stable fixed points within these submanifolds.

In dynamical systems theory, a fixed state $x^*$ is defined to be a point $x^* \in \mathcal{M}$ such that $\phi_t(x^*) = x^*$ for all $t \in \mathbb{R}$. 

The concept of a fixed state in the space of possible firing patterns of a neural network captures the persistence of memory. Memories are stored by changes to the synaptic efficiencies in a subnetwork, and the corresponding matrix of weights $W_{ij}$ permits the existence of a fixed state in the activation levels of that subnetwork. 

However, real physical systems cannot be controlled with infinite precision, and therefore cannot be manoeuvred into isolated fixed points in a continuous state space. Hence memory states are better defined in terms of the properties of neighbourhoods of fixed points. In particular, some concept of stability is required to ensure that the state of the system remains within a neighbourhood of a fixed point, under the inevitable perturbations and errors suffered by a system operating in a real physical environment.

There are two possible definitions of stability in this context (Hirsch and Smale, Differential Equations, Dynamical Systems and Linear Algebra, p185-186):

(i) A fixed point $x^*$ is stable if for every neighbourhood $U$ of $x^*$ there is a sub-neighbourhood $U_1 \subseteq U$ such that any initial point $x(0) \in U_1$ remains in $U$, and therefore close to $x^*$, under the action of the flow $\phi_t$.


(ii) A fixed point $x^*$ is asymptotically stable if for every neighbourhood $U$ of $x^*$ there is a sub-neighbourhood $U_1 \subseteq U$ such that any initial point $x(0) \in U_1$ not only remains in $U$, but also satisfies $\lim_{t \rightarrow \infty} x(t) = x^*$.


The first condition seems more consistent with the nature of human memory: memories are not perfect, retaining some aspects of the original experience, but fluctuating with time (and ultimately becoming hazy as the synaptic weights drift away from their original values). The second condition is much stricter. In conjunction with an ability to fix the weights of a subnetwork on a long-term basis, it seems consistent with the long-term fidelity of memory.
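As a complement to these definitions, the following hedged sketch classifies a fixed point of the toy firing-rate network above by linearising about it and inspecting the eigenvalues of the Jacobian: eigenvalues with negative real parts indicate asymptotic stability. It assumes the fixed point sits in the region where the threshold-linear $f$ acts as the identity; the parameters are again invented for illustration.

```python
import numpy as np

# Illustrative parameters (the same toy network as in the earlier sketch).
W = np.array([[ 0.0, -0.6, -0.6],
              [-0.6,  0.0, -0.6],
              [-0.6, -0.6,  0.0]])
b = np.array([1.0, 1.0, 1.0])
tau = np.array([0.05, 0.05, 0.05])

# In the regime where f(u) = u (all arguments positive), the fixed point solves
#   0 = -x*/tau + W x* + b,   i.e.   (diag(1/tau) - W) x* = b.
A = np.diag(1.0 / tau) - W
x_star = np.linalg.solve(A, b)

# The Jacobian of the flow at x* in this regime is J = -diag(1/tau) + W.
J = -np.diag(1.0 / tau) + W
eigenvalues = np.linalg.eigvals(J)

print("fixed point:", x_star)
print("eigenvalues:", eigenvalues)
print("asymptotically stable:", bool(np.all(eigenvalues.real < 0)))
```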

At first sight, one might wish to design an artificial intelligence so that its memories are asymptotically stable fixed points in the possible firing rate patterns within an artificial neural network. However, doing so could well entail that those memories become as vivid and realistic to the host systems as their present-day experiences. It might become impossible to distinguish past from present experience. 

And that might not turn out so well...


Saturday, November 19, 2016

Neural networks and spatial topology

Neuro-mathematician Carina Curto has recently published a fascinating paper, 'What can topology tell us about the neural code?' The centrepiece of the paper is a simple and profound exposition of the method by which the neural networks in animal brains can represent the topology of space.

As Curto reports, neuroscientists have discovered that there are so-called place cells in the hippocampus of rodents which "act as position sensors in space. When an animal is exploring a particular environment, a place cell increases its firing rate as the animal passes through its corresponding place field - that is, the localized region to which the neuron preferentially responds." Furthermore, a network of place cells, each representing a different position, is collectively capable of representing the topology of the environment.

Rather than beginning with the full topological structure of an environmental space X, the approach of such research is to represent the collection of place fields as an open covering, i.e., a collection of open sets $\mathcal{U} = \{U_1,...,U_n \}$ such that $X  = \bigcup_{i=1}^n U_i$. A covering is referred to as a good cover if every non-empty intersection $\bigcap_{i \in \sigma} U_i$ for $\sigma \subseteq \{1,...,n \}$ is contractible, i.e., if it can be continuously deformed to a point.

The elements of the covering, and the finite intersections between them, define the so-called 'nerve' $\mathcal{N(U)}$ of the cover, (the mathematical terminology is coincidental!):

$\mathcal{N(U)} = \{\sigma \subseteq \{1,...,n \}: \bigcap_{i \in \sigma} U_i \neq \emptyset \}$.

The nerve of a covering satisfies the conditions to be a simplicial complex, with each subset $U_i$ corresponding to a vertex, and each non-empty intersection of $k+1$ subsets defining a $k$-simplex of the complex. A simplicial complex inherits a topological structure from the imbedding of the simplices into $\mathbb{R}^n$, hence the covering defines a topology. And crucially, the following lemma applies:

Nerve lemma: Let $\mathcal{U}$ be a good cover of X. Then $\mathcal{N(U)}$ is homotopy equivalent to X. In particular, $\mathcal{N(U)}$ and X have exactly the same homology groups.

The homology (and homotopy) of a topological space provides a group-theoretic means of characterising the topology. Homology, however, provides a weaker, more coarse-grained level of classification than topology as such. Homeomorphic topologies must possess the same homology (thus, spaces with different homology must be topologically distinct), but conversely, a pair of topologies with the same homology need not be homeomorphic. 

Now, different firing patterns of the neurons in a network of hippocampal place cells correspond to different elements of the nerve which represents the corresponding place field. The simultaneous firing of $k$ neurons, $\sigma \subseteq \{1,...,n \}$, corresponds to the non-empty intersection $\bigcap_{i \in \sigma} U_i \neq \emptyset$ between the corresponding $k$ elements of the covering. Hence, the homological topology of a region of space is represented by the different possible firing patterns of a collection of neurons.

As Curto explains, "if we were eavesdropping on the activity of a population of place cells as the animal fully explored its environment, then by finding which subsets of neurons co-fire, we could, in principle, estimate $\mathcal{N(U)}$, even if the place fields themselves were unknown. [The nerve lemma] tells us that the homology of the simplicial complex $\mathcal{N(U)}$ precisely matches the homology of the environment X. The place cell code thus naturally reflects the topology of the represented space."
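To make the construction concrete, here is a small, hedged sketch of how one might estimate the nerve from co-firing data: each observation records which cells fired together in a time window, and the estimated nerve is the collection of these co-firing sets together with all of their subsets (a simplicial complex must be closed under taking faces). The observations and cell labels below are invented for illustration.

```python
from itertools import combinations

# Hypothetical co-firing observations: each set lists the place cells
# observed firing together in some time window.
cofiring_observations = [
    {0, 1},
    {1, 2},
    {2, 3},
    {0, 3},
    {0, 1, 2},
]

def nerve_from_cofiring(observations):
    """Return the simplicial complex generated by the observed co-firing sets.

    Each observed set contributes a simplex; closing under subsets makes the
    result a genuine simplicial complex (the estimated nerve N(U))."""
    simplices = set()
    for obs in observations:
        for k in range(1, len(obs) + 1):
            for face in combinations(sorted(obs), k):
                simplices.add(face)
    return sorted(simplices, key=lambda s: (len(s), s))

for simplex in nerve_from_cofiring(cofiring_observations):
    print(simplex)
# Single cells are 0-simplices, co-firing pairs are edges, and the triple
# {0, 1, 2} contributes a filled 2-simplex. The cycle through cells 0, 3, 2
# has no filling 2-simplex, so the estimated nerve has a one-dimensional
# 'hole' - exactly the kind of homological information the nerve lemma says
# is shared with the environment.
```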

This entails the need to issue a qualification to a subsection of my 2005 paper, 'Universe creation on a computer'. This paper was concerned with computer representations of the physical world, and attempted to place these in context with the following general definition:

A representation is a mapping $f$ which specifies a correspondence between a represented thing and the thing which represents it. An object, or the state of an object, can be represented in two different ways:

$1$. A structured object/state $M$ serves as the domain of a mapping $f: M \rightarrow f(M)$ which defines the representation. The range of the mapping, $f(M)$, is also a structured entity, and the mapping $f$ is a homomorphism with respect to some level of structure possessed by $M$ and $f(M)$.

$2$. An object/state serves as an element $x \in M$ in the domain of a mapping $f: M \rightarrow f(M)$ which defines the representation. 

The representation of a Formula One car by a wind-tunnel model is an example of type-$1$ representation: there is an approximate homothetic isomorphism, (a transformation which changes only the scale factor), from the exterior surface of the model to the exterior surface of a Formula One car. As an alternative example, the famous map of the London Underground preserves the topology, but not the geometry, of the semi-subterranean public transport network. Hence in this case, there is a homeomorphic isomorphism.

Type-$2$ representation has two sub-classes: the mapping $f: M \rightarrow f(M)$ can be defined by either ($2$a) an objective, causal physical process, or by ($2$b) the decisions of cognitive systems.

As an example of type-$2$b representation, in computer engineering there are different conventions, such as ASCII and EBCDIC, for representing linguistic characters with the states of the bytes in computer memory. In the ASCII convention, 01000000 represents the symbol '@', whereas in EBCDIC it represents a space ' '. Neither relationship between linguistic characters and the states of computer memory exists objectively. In particular, the relationship does not exist independently of the interpretative decisions made by the operating system of a computer.

In 2005, I wrote that "the primary example of type-$2$a representation is the representation of the external world by brain states. Taking the example of visual perception, there is no homomorphism between the spatial geometry of an individual's visual field, and the state of the neuronal network in that part of the brain which deals with vision. However, the correspondence between brain states and the external world is not an arbitrary mapping. It is a correspondence defined by a causal physical process involving photons of light, the human eye, the retina, and the human brain. The correspondence exists independently of human decision-making."

The theorems and empirical research expounded in Curto's paper demonstrate very clearly that whilst there might not be a geometrical isometry between the spatial geometry of one's visual field and the state of a subsystem in the brain, there are, at the very least, isomorphisms between the homological topology of regions in one's environment and the state of neural subsystems.

On a cautionary note, this result should be treated as merely illustrative of the representational mechanisms employed by biological brains. One would expect that a cognitive system which has evolved by natural selection will have developed a confusing array of different techniques to represent the geometry and topology of the external world.

Nevertheless, the result is profound because it ultimately explains how you can hold a world inside your own head.

Monday, November 14, 2016

Trump and Brexit

One of the strangest things about most scientists and academics, and, indeed, most educated middle-class people in developed countries, is their inability to adopt a scientific approach to their own political and ethical beliefs.

Such beliefs are not acquired as a consequence of growing rationality or progress. Rather, they are part of what defines the identity of a particular human tribe. A particular bundle of shared ideas is acquired as a result of chance, operating in tandem with the same positive feedback processes which drive all trends and fashions in human society. Alex Pentland, MIT academic and author of 'Social Physics', concisely summarises the situation as follows:

"A community with members who actively engage with each other creates a group with shared, integrated habits and beliefs...most of our public beliefs and habits are learned by observing the attitudes, actions and outcomes of peers, rather than by logic or argument," (p25, Being Human, NewScientistCollection, 2015).

So it continues to be somewhat surprising that so many scientists and academics, not to mention writers, journalists, and the judiciary, continue to regard their own particular bundle of political and ethical ideas, as in some sense, 'progressive', or objectively true.

Never has this been more apparent than in the response to Britain's decision to leave the European Union, and America's decision to elect Donald Trump. Those who voted in favour of these respective decisions have been variously denigrated as stupid people, working class people, angry white men, racists, and sexists.

To take one example of the genre, John Horgan has written an article on the Scientific American website which details the objective statistical indicators of human progress over hundreds of years. At the conclusion of this article he asserts that Trump's election "reveals that many Americans feel threatened by progress, especially rights for women and minorities."

There are three propositions implicit in Horgan's statement: (i) The political and ethical ideas represented by the US Democratic party are those which can be objectively equated with measurable progress; (ii) Those who voted against such ideas are sexist; (iii) Those who voted against such ideas are racist.

The accusation that those who voted for Trump feel threatened by equal rights for women is especially puzzling. As many political analysts have noted, 42% of those who voted for Trump were female, which, if Horgan is to be believed, was equivalent to turkeys voting for Christmas.

It doesn't say much for Horgan's view of women that he thinks so many millions of them could vote against equal rights for women. Unless, of course, people largely tend to form political beliefs, and vote, according to patterns determined by the social groups to which they belong, rather than on the basis of evidence and reason. A principle which would, unfortunately, fatally undermine Horgan's conviction that one of those bundles of ethical and political beliefs represents an objective form of progress.

In the course of his article, Horgan defines a democracy "as a society in which women can vote," and also, as an indicator of progress, points to the fact that homosexuality was a crime when he was a kid. These are two important points to consider when we turn from the issue of Trump to Brexit, and consider the problem of immigration. The past decades have seen the large-scale migration of people into Britain who are enemies of the open society: these are people who reject equal rights for women, and people who consider homosexuality to be a crime.

So the question is as follows: Do you permit the migration of people into your country who oppose the open society, or do you prohibit it?

If you believe that equal rights for women and the non-persecution of homosexuals are objective indicators of progress, then do you permit or prohibit the migration of people into your country who oppose such progress?

It's a well-defined, straightforward question for the academics, the writers, the journalists, the judiciary, and indeed for all those who believe in objective political and ethical progress. It's a question which requires a decision, not merely an admission of complexity or difficulty.

Now combine that question with the following European Union policy: "Access to the European single market requires the free migration of labour between participating countries."

Hence, Brexit.

What unites Brexit and Trump is that both events are a measure of the current relative size of different tribes, under external perturbations such as immigration. It's not about progress, rationality, reactionary forces, conspiracies or conservatism. Those are merely the delusional stories each tribe spins as part of its attempts to maintain internal cohesion and bolster its size. It's more about gaining and retaining membership of particular social groups, and that requires subscription to a bundle of political and ethical ideas.

However, the thing about democracy is that it doesn't require the academics, the writers, the journalists, the judiciary, and other middle-class elites to understand any of this. They just need to lose.

Sunday, September 18, 2016

Cosmological redshift and recession velocities

In a recent BBC4 documentary, 'The Beginning and End of the Universe', nuclear physicist and broadcaster Jim Al Khalili visits the Telescopio Nazionale Galileo (TNG). There, he performs some nifty arithmetic to calculate that the redshift $z$ of a selected galaxy is:
$$
z = \frac{\lambda_o - \lambda_e}{\lambda_e} =
\frac{\lambda_o}{\lambda_e} - 1 \simeq 0.1\,,
$$ where $\lambda_o$ denotes the observed wavelength of light and $\lambda_e$ denotes the emitted wavelength. He then applies the following formula to calculate the recession velocity of the galaxy:
$$
v = c z = 300,000 \; \text{km s}^{-1} \cdot 0.1 \simeq 30,000 \; \text{km s}^{-1} \,,
$$ where $c$ is the speed of light.

After pausing for a moment to digest this fact, Jim triumphantly concludes with an expostulation normally reserved for use by people under the mental age of 15, and F1 trackside engineers:

"Boom.....science!"

It's worth noting, however, that the formula used here to calculate the recession velocity is only an approximation, valid at low redshifts, as Jim undoubtedly explained in a scene which hit the cutting-room floor. So, let's take a deeper look at the concept of cosmological redshift to understand what the real formula should be.

In general relativistic cosmology, the universe is represented by a Friedmann-Robertson-Walker (FRW) spacetime. Geometrically, an FRW model is a $4$-dimensional Lorentzian manifold $\mathcal{M}$ which can be expressed as a 'warped product' (Barrett O'Neill, Semi-Riemannian Geometry with Applications to Relativity, Academic Press, 1983):
$$
I \times_R \Sigma \,.
$$ $I$ is an open interval of the real line equipped with a negative-definite metric (denoted $\mathbb{R}^1_1$ in O'Neill's notation), and $\Sigma$ is a complete and connected $3$-dimensional Riemannian manifold. The warping function $R$ is a smooth, real-valued, non-negative function upon the open interval $I$, otherwise known as the 'scale factor'.

If we denote by $t$ the natural coordinate function upon $I$, and if we denote the metric tensor on $\Sigma$ as $\gamma$, then the Lorentzian metric $g$ on $\mathcal{M}$ can be written as
$$
g = -dt \otimes dt + R(t)^2 \gamma \,.
$$ One can consider the open interval $I$ to be the time axis of the warped product cosmology. The $3$-dimensional manifold $\Sigma$ represents the spatial universe, and the scale factor $R(t)$ determines the time evolution of the spatial geometry.

Now, a Riemannian manifold $(\Sigma,\gamma)$ is equipped with a natural metric space structure $(\Sigma,d)$. In other words, there exists a non-negative real-valued function $d:\Sigma \times \Sigma
\rightarrow \mathbb{R}$ which is such that

$$\eqalign{d(p,q) &= d(q,p) \cr
d(p,q) + d(q,r) &\geq d(p,r) \cr
d(p,q) &= 0 \; \text{iff} \; p =q}$$ The metric tensor $\gamma$ determines the Riemannian distance $d(p,q)$ between any pair of points $p,q \in \Sigma$. The metric tensor $\gamma$ defines the length of all curves in the manifold, and the Riemannian distance is defined as the infimum of the length of all the piecewise smooth curves between $p$ and $q$.

In the warped product space-time $I \times_R \Sigma$, the spatial distance between $(t,p)$ and $(t,q)$ is $R(t)d(p,q)$. Hence, if one projects onto $\Sigma$, one has a time-dependent distance function on the points of space,
$$
d_t(p,q) = R(t)d(p,q) \,.
$$Each hypersurface $\Sigma_t$ is a Riemannian manifold $(\Sigma_t,R(t)^2\gamma)$, and $R(t)d(p,q)$ is the distance between $(t,p)$ and $(t,q)$ due to the metric space structure $(\Sigma_t,d_t)$.

The rate of change of the distance between a pair of points in space, otherwise known as the 'recession velocity' $v$, is given by
$$\eqalign{
v = \frac{d}{dt} (d_t(p,q)) &= \frac{d}{dt} (R(t)d(p,q)) \cr &= R'(t)d(p,q) \cr &=
\frac{R'(t)}{R(t)}R(t)d(p,q) \cr &= H(t)R(t)d(p,q) \cr &=
H(t)d_t(p,q)\,. }
$$ The rate of change of distance between a pair of points is proportional to the spatial separation of those points, and the constant of proportionality is the Hubble parameter $H(t) \equiv R'(t)/R(t)$.

Galaxies are embedded in space, and the distance between galaxies increases as a result of the expansion of space, not as a result of the galaxies moving through space. Where $H_0$ denotes the current value of the Hubble parameter, and $d_0 = R(t_0)d$ denotes the present 'proper' distance between a pair of points, the Hubble law relates recession velocities to proper distance by the simple expression $v = H_0d_0$.

Cosmology texts often introduce what they call 'comoving' spatial coordinates $(\theta,\phi,r)$. In these coordinates, galaxies which are not subject to proper motion due to local inhomogeneities in the distribution of matter, retain the same spatial coordinates at all times.

In effect, comoving spatial coordinates are merely coordinates upon $\Sigma$ which are lifted to $I \times \Sigma$ to provide spatial coordinates upon each hypersurface $\Sigma_t$. The radial coordinate $r$ of a point $q \in \Sigma$ is chosen to coincide with the Riemannian distance in the metric space $(\Sigma,d)$ which separates the point at $r=0$ from the point $q$. Hence, assuming the point $p$ lies at the origin of the comoving coordinate system, the distance between $(t,p)$ and $(t,q)$ can be expressed in terms of the comoving coordinate $r(q)$ as $R(t)r(q)$.

If light is emitted from a point $(t_e,p)$ of a warped product space-time and received at a point $(t_0,q)$, then the integral,
$$
d(t_e) = \int^{t_0}_{t_e}\frac{c}{R(t)} \, dt \, ,
$$ expresses the Riemannian distance $d(p,q)$ in $\Sigma$, (equivalent to the comoving coordinate distance), travelled by the light between the point of emission and the point of reception. The distance $d(t_e)$ is a function of the time of emission, $t_e$, a concept which will become important further below.

The present spatial distance between the point of emission and the point of reception is:
$$
R(t_0)d(p,q) = R(t_0) \int^{t_0}_{t_e}\frac{c}{R(t)} \, dt \,.
$$ The distance which separated the point of emission from the point of reception at the time the light was emitted is:
$$
R(t_e)d(p,q) = R(t_e) \int^{t_0}_{t_e}\frac{c}{R(t)} \, dt \,.
$$ The following integral defines the maximum distance in $(\Sigma,\gamma)$ from which one can receive light by the present time $t_0$:
$$
d_{max}(t_0) = \int^{t_0}_{0}\frac{c}{R(t)} \, dt \,.
$$ From this, cosmologists define something called the 'particle horizon':
$$
R(t_0) d_{max}(t_0) = R(t_0) \int^{t_0}_{0}\frac{c}{R(t)} \, dt
\,.
$$ We can only receive light from sources which are presently separated from us by, at most, $R(t_0) d_{max}(t_0)$. The size of the particle horizon therefore depends upon the time-dependence of the scale factor, $R(t)$.

Under the FRW model which currently has empirical support, (the 'concordance model', with cold dark matter, a cosmological constant $\Lambda$, and a mass-energy density equal to the critical density), the particle horizon is approximately 46 billion light years. This is the conventional definition of the present radius of the observable universe, before the possible effect of inflationary cosmology is introduced...
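As a rough check on that figure, the following sketch evaluates the particle-horizon integral numerically for a flat matter-plus-$\Lambda$ model with illustrative concordance-like parameters ($H_0 \approx 70 \; \text{km s}^{-1}\,\text{Mpc}^{-1}$, $\Omega_m \approx 0.3$, $\Omega_\Lambda \approx 0.7$), neglecting the radiation-dominated era, which only modestly alters the result.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative flat concordance-like parameters (not a precision fit).
H0 = 70.0 * 1000.0 / 3.086e22          # 70 km/s/Mpc expressed in s^-1
omega_m, omega_lambda = 0.3, 0.7
c = 2.998e8                            # speed of light, m/s
metres_per_gly = 9.461e24              # metres in one billion light years

# Particle horizon R(t0) * integral_0^t0 c dt / R(t), rewritten as an integral
# over the scale factor a (with a(t0) = 1), using dt = da / (a H(a)) and
# H(a) = H0 sqrt(omega_m a^-3 + omega_lambda).
integrand = lambda a: c / (a**2 * H0 * np.sqrt(omega_m * a**-3 + omega_lambda))
horizon_metres, _ = quad(integrand, 0.0, 1.0)

print("particle horizon ~ %.0f billion light years" % (horizon_metres / metres_per_gly))
# With these illustrative numbers the result lands in the mid-40s of billions
# of light years, close to the ~46 quoted above.
```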

To obtain an expression which links recession velocity with redshift, let us first return to the Riemannian/ comoving distance travelled by the light that we detect now, as a function of the time of emission $t_e$:
$$
d(t_e) = \int^{t_0}_{t_e}\frac{c}{R(t)} \, dt \,.
$$ We need to replace the time parameter here with redshift, and to do this we first note that the redshift can be expressed as the ratio of the scale factor at the time of reception to the scale factor at the time of emission:
$$
1+ z = \frac{R(t_0)}{R(t)} \,.
$$ Taking the derivative of this with respect to time (Davis and Lineweaver, p19-20), and re-arranging, one obtains:
$$
\frac{dt}{R(t)} = \frac{-dz}{R(t_0) H(z)} \,.
$$ Substituting this in and executing a change of variables in which $t_0 \rightarrow z' = 0$ and $t_{e} \rightarrow z' = z$ (the minus sign reversing the order of the limits), we obtain an expression for the Riemannian/comoving distance as a function of redshift:
$$
d(z) = \frac{c}{R(t_0)} \int_{0}^{z}\frac{dz'}{H(z')} \, .
$$ From our general definition above of the recession velocity between a pair of points $(p,q)$ separated by a Riemannian/comoving distance $d(p,q)$ we know that:
$$
v =  R'(t)d(p,q) \,.
$$ Hence, we obtain the following expression (Davis and Lineweaver Eq. 1) for the recession velocity of a galaxy detected at a redshift of $z$:
$$
v = R'(t) d(z) = \frac{c}{R(t_0)} R'(t) \int_{0}^{z}\frac{dz'}{H(z')} \, .
$$ To obtain the present recession velocity, one merely sets $t = t_0$:
$$
v = R'(t_0) d(z) = \frac{c}{R(t_0)} R'(t_0) \int_{0}^{z}\frac{dz'}{H(z')} \, .
$$ At low redshifts, such as the case of $z \simeq 0.1$, the integral reduces to:
$$
 \int_{0}^{z}\frac{dz'}{H(z')} \approx \frac{z}{H(0)} =  \frac{z}{H(t_0)} \, .
$$ Hence, recalling that $H(t) \equiv R'(t)/R(t)$, at low redshifts one obtains Jim Al Khalili's:
$$
v = cz \,.
$$ Boom...mathematics!
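For anyone who would rather let a computer do the arithmetic, here is a hedged sketch which evaluates the full recession-velocity integral for the same illustrative flat concordance-like model ($H_0 \approx 70 \; \text{km s}^{-1}\,\text{Mpc}^{-1}$, $\Omega_m \approx 0.3$, $\Omega_\Lambda \approx 0.7$), and compares it with the low-redshift approximation $v = cz$ at $z \simeq 0.1$.

```python
import numpy as np
from scipy.integrate import quad

c_km_s = 2.998e5          # speed of light in km/s
H0 = 70.0                 # illustrative Hubble constant, km/s/Mpc
omega_m, omega_lambda = 0.3, 0.7

def H(z):
    """Hubble parameter H(z) for a flat matter + Lambda model, in km/s/Mpc."""
    return H0 * np.sqrt(omega_m * (1.0 + z)**3 + omega_lambda)

def recession_velocity(z):
    """Present recession velocity v = c * H0 * integral_0^z dz'/H(z'), in km/s.

    This is the R'(t_0) d(z) expression above, taking R(t_0) = 1 so that
    comoving and proper distances coincide today."""
    integral, _ = quad(lambda zp: 1.0 / H(zp), 0.0, z)
    return c_km_s * H0 * integral

z = 0.1
print("full integral:", recession_velocity(z), "km/s")   # ~29,300 km/s
print("v = cz       :", c_km_s * z, "km/s")              # ~29,980 km/s
```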

Monday, May 09, 2016

Brain of Britain

BBC Radio 4 has a general knowledge quiz-show modestly titled 'Brain of Britain'. The 2016 final of 'Brain of Britain' was broadcast this week. The four contestants were:

John, a dentist from Southampton.
Ian, a software developer from North Worcestershire.
Mike, a driver from Brechin.
Jane, a teacher and writer from Edinburgh.

After 7 mins, quiz-master Russell Davies poses the following question:

"In science, what name is given to the product of the mass of a particle and its velocity?"

Bit of a tricky one, eh? Science question. Still, at least it's an elementary science question, the type of question that anyone who didn't leave school at the age of 12 should be able to answer, surely?

In fact, this simple question elicited the following responses, in turn, from the contestants. And remember, these are the four finalists on a show entitled 'Brain of Britain':

John: "vector?"
Russell Davies: "No."
Ian: "acceleration."
Russell Davies: "Not that either, no"
Mike: "Force?"
Russell Davies: "No-o."
Jane: "Is it speed?"
Russell Davies: "It's not speed, it's momentum."

Still, it was Radio 4, so a science question does go somewhat outside the usual diet of politics, GCSE economics, and the arts. 

Sunday, April 17, 2016

Williams FW18/19 vs Ferrari F310/B

Mark Hughes has a useful survey of Ferrari's F1 fortunes from 1996 to the present day in the May edition of Motorsport Magazine. At the beginning of the article, it's noted that "The secret to the speed of the [Williams] FW18 in '96 and the following year's FW19 was exploiting a regulation loophole that allowed Newey to take the diffuser over the top of the plank to get a much bigger exit area - and therefore a more powerful diffuser effect...This arrangement made its debut late in '95 on the FW17B but amazingly Ferrari - and everyone else - had not noticed and thus did not incorporate it into their '96 cars."

So let's take a closer look at precisely what this loophole was.

The images below of the FW18's diffuser and its counterpart on the 1997 Ferrari F310B, show that whilst both exploit the greater permitted rearward extension of the central region, they differ in the crucial respect that Newey opened up windows in the vertical walls of the central diffuser. This not only increased the effective exit area of the diffuser, but coupled it to the beam-wing, thereby increasing its mass-flow rate and its capacity to generate downforce.


How glaring was this regulation loophole? Well, let's study the 1997 F1 Technical regulations, which are available, pro bono, at MattSomersF1. The relevant propositions read as follows:

3.10) No bodywork behind the centre line of the rear wheels, and more than 15cm each side of the longitudinal centre line of the car, may be less than 30cm above the reference plane. 

This regulation permitted the central region of the diffuser to be 30cm wide. To give some idea of the relative dimensions here, the central box itself was only 30cm tall. So outside that central region, nothing was permitted to be lower than the roof of the central diffuser.



3.12) Between the rear edge of the complete front wheels and the front edge of the complete rear wheels all sprung parts of the car visible from underneath must form surfaces which lie on one of two parallel planes, the reference plane or the step plane.

This effectively defined the kick-up point of the diffuser to be the leading edge of the rear-wheels. 

The surface formed by all parts lying on the reference plane must extend from the rear edge of the complete front wheels to the centre line of the rear wheels, have minimum and maximum widths of 30cm and 50cm respectively and must be symmetrical about the centre line of the car. 

All parts lying on the reference and step planes, in addition to the transition between the two planes, must produce uniform, solid, hard, continuous, rigid (no degree of freedom in relation to the body/chassis unit), impervious surfaces under all circumstances.

This seems to be the regulation which Ferrari mis-interpreted. Whilst 3.12 required all parts of the car visible from underneath to belong to a pair of parallel surfaces, and for the transition between those surfaces to be continuous and impervious, this applied only between the trailing edge of the front wheels and the leading edge of the rear wheels. Moreover, although the definition of the reference plane extended to the centreline of the rear wheels, there was nothing whatsoever in the regulations which required the vertical surfaces behind the rear-wheel centreline to be continuous or impervious.

(Ferrari F310B diffuser. Photo by Alan Johnstone)
As an observation in passing, another part of regulation 3.10 should cause some puzzlement:

Any bodywork behind the rear wheel centre line which is more than 50cm above the reference plane, when projected to a plane perpendicular to the ground and the centre line of the car, must not occupy a surface greater than 70% of the area of a rectangle whose edges are 50cm either side of the car centre line and 50cm and 80cm above the reference plane.

As written, this regulation is somewhat opaque, not least because it is impossible in 3 dimensions to have a plane which is both perpendicular to the ground and the centreline of the car. A plane which is perpendicular to the centreline is certainly a well-defined concept, but in 3 dimensions such a plane will intersect the ground plane along a transverse line, hence cannot be perpendicular to it...

Saturday, April 09, 2016

Ferrari and thermal tyre modelling

Flavio Farroni, currently Research Fellow at the University of Naples Federico II, has been developing a suite of tyre-performance models for several years in collaboration with both Ferrari GT and the Ferrari Formula 1 team. Flavio has now published some of his work, and it may be of more than a little interest to those outside Maranello.

The snappily-titled Development of a grip and thermodynamics sensitive procedure for the determination of tyre/road interaction curves based on outdoor test sessions, provides an overview of all three of Farroni's models.

TRICK appears to be a tool for inferring tyre performance characteristics from empirical telemetry data; TRT is a thermal tyre model, specifically designed to calculate bulk tyre-temperature in real-time; GrETA is a grip model which takes the output from TRT and incorporates the influence of tyre compound and road-surface roughness on tyre performance.

Farroni reports that "TRICK and TRT have been successfully employed together, constituting an instrument able to provide tyre thermal analysis, useful to identify the range of temperature in which grip performances are maximized, allowing to define optimal tyres and vehicle setup."

Recent work on the thermal tyre model, published as An Evolved version of Thermo Racing Tyre for Real Time Applications, is worth considering in some detail.

Here, Farroni's model calculates bulk and sidewall tyre temperatures by representing: (i) the heat generated by the rolling deformation of the tyre and the tangential stresses at the contact patch between the tread and road surface; (ii) the heat flux between the sidewalls, carcass, bulk and surface layers; (iii) the heat transfer due to conduction between the tyre and the road; (iv) the convective heat transfer from the gas inside the tyre to the inner surface of the sidewall and the 'inner liner' (aka the 'carcass'); and (v) the convective heat transfer from the surface of the tread and the outer surface of the sidewall to the external atmosphere. Farroni neglects radiation as a heat transfer mechanism.
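Purely as an illustration of what a heavily simplified, lumped-node version of such an energy balance might look like (this is not Farroni's model, and every coefficient below is invented rather than fitted to data), one can write a handful of coupled first-order equations for surface, bulk and carcass temperatures and step them forward in time:

```python
import numpy as np

def step_tyre_temps(T, inputs, dt=0.01):
    """One explicit-Euler step of a toy three-node tyre thermal model.

    T = dict with 'surface', 'bulk', 'carcass' temperatures (deg C).
    inputs = dict with 'road', 'ambient', 'gas' temperatures and heat
    generation rates 'q_friction' (at the surface) and 'q_hysteresis'
    (in the bulk). All coefficients are invented for illustration; a real
    model would fit them to telemetry. Radiation is neglected, as above.
    """
    k = {
        'surface_bulk': 0.80,   # conduction, surface <-> bulk
        'bulk_carcass': 0.40,   # conduction, bulk <-> carcass
        'surface_road': 0.50,   # conduction through the contact patch
        'surface_air':  0.30,   # convection to the external airflow
        'carcass_gas':  0.20,   # convection from the inflation gas
    }
    dT_surface = (inputs['q_friction']
                  + k['surface_bulk'] * (T['bulk'] - T['surface'])
                  + k['surface_road'] * (inputs['road'] - T['surface'])
                  + k['surface_air'] * (inputs['ambient'] - T['surface']))
    dT_bulk = (inputs['q_hysteresis']
               + k['surface_bulk'] * (T['surface'] - T['bulk'])
               + k['bulk_carcass'] * (T['carcass'] - T['bulk']))
    dT_carcass = (k['bulk_carcass'] * (T['bulk'] - T['carcass'])
                  + k['carcass_gas'] * (inputs['gas'] - T['carcass']))
    return {
        'surface': T['surface'] + dt * dT_surface,
        'bulk':    T['bulk'] + dt * dT_bulk,
        'carcass': T['carcass'] + dt * dT_carcass,
    }

# Ten simulated seconds alternating heavy cornering and a straight.
T = {'surface': 90.0, 'bulk': 100.0, 'carcass': 110.0}
for t in range(1000):
    cornering = (t % 500) < 200
    inputs = {'road': 40.0, 'ambient': 25.0, 'gas': 105.0,
              'q_friction': 60.0 if cornering else 5.0,
              'q_hysteresis': 20.0 if cornering else 5.0}
    T = step_tyre_temps(T, inputs, dt=0.01)
print(T)
```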

This particular paper reports that the measured surface and carcass temperatures can be reproduced despite resort to a simple model in which the bulk, carcass and sidewalls are replaced by single nodes rather than a full-blown mesh. This simplification enables the model to run in real-time, and Farroni reproduces some interesting graphs (below).



There are four graphs here, one for each corner of the car. The horizontal axes represent time, and the vertical axes represent temperatures, which "are dimensionless because of confidentiality agreements."

Those sufficiently cursed to spend their working lives staring at telemetry in ATLAS will recognise the fluctuating signature of the surface tyre-temperatures, which suffer transient peaks under cornering. The peak surface temps are greater than the bulk and carcass temps, but on average the surface temps are lower than the latter. One can see that the outer sidewall temps are lower than the inner sidewall temps. Also possibly of interest is the fact that the bulk temps are lower than the inner liner temps, which implies there is a net heat flux from the inner liner into the bulk of the tyre.

Now, it's something of a pity that the vertical axes on those diagrams are "dimensionless because of confidentiality agreements." Happily, however, Farroni's PhD thesis is somewhat more forthcoming, printing a pair of fully-dimensionalised temperature plots on p98-99, (below).


The first diagram here plots the measured carcass ('inner liner') temps, the simulated carcass temps, and the calculated bulk temps. Once again, the calculated bulk temps are lower than the carcass temps throughout. The delta seems to be about 10 degrees at the outset, and increases over the course of what appears to be a stint. At 850 seconds long, the segment of data reproduced covers about 10 laps.

Farroni points out that "Proper time ranges have been selected to highlight thermal dynamics characteristic of each layer; in particular, as concerns bulk and inner liner, temperature decreasing trend is due to a vehicle slowdown before a pit stop." 

This is the drop in carcass and bulk temperatures which occur as a tyre loses its ability to generate and/or retain heat over the course of a stint, due to physical wear and/or irreversible thermal degradation. All four corners suffer this temperature reduction, but the effect appears most marked on the left-front and left-rear. The left-rear drops from ~130 degrees to ~110 degrees, while the left-front drops from ~120 degrees to ~100.

All four corners begin in the range 115-130 degrees, so perhaps this was a set of Softs?


The second diagram (above) is "with reference to a different circuit," and once more displays simulated bulk temperatures lower than the carcass temps. In each case, the bulk temp seems to match the carcass temp at the outset, and then swiftly decline. Both front tyre carcass temps start at 100 degrees, whilst the rear carcass temps start at only 80 degrees.

The left-front carcass temp increases to about 110 degrees, the right-front remains fairly constant, the left-rear increases by almost 20 degrees, whilst the right-rear increases by about 10 degrees. All of which might suggest a set of Mediums?

As a final flourish, Farroni also studies the rather alarming effect that exhaust blown diffusers had on tyre temps (below), suggesting that rear bulk temps could have reached ~200 degrees in some regions.

Farroni suggests that this would "bring the tyre to a too fast degradation and to average temperatures not able to maximize the grip." Quite.

Friday, March 25, 2016

The polarization of gravitational waves

In general relativity, a plane gravitational wave, such as that apparently detected by the LIGO apparatus in September 2015, is a type of transverse shear wave in the geometry of space.

To understand this, first consider the concept of a transverse wave in general relativity.

Recall that observers in general relativity are represented by timelike curves, and instantaneous observers correspond to particular points along timelike curves.

For an instantaneous observer, represented by the tangent vector $Z$ to a timelike curve at a point $z$, there is a local version of Euclidean space, dubbed the local rest-space $R = Z^\bot$, and defined as the set of (spacelike) vectors orthogonal to $Z$.

A plane gravitational wave travels in a spatial direction specified by a propagation vector $k \in R = Z^\bot$, and distorts the geometry of space in the two-dimensional plane $T$ orthogonal to $k$ in the observer's local rest-space $R$. It is in this sense that a gravitational wave is a transverse wave.

In particular, a plane gravitational wave is also a shear wave, and understanding this requires an explanation of the polarization of gravitational waves.

In the simplest case, a linearly-polarized gravitational wave alternately stretches space in one direction $e_x \in T$, and compresses it in a direction $e_y \in T$ at right-angles to $e_x$, in a manner which distorts circles into ellipses, but preserves spatial areas.

However, linearly polarized plane gravitational waves are nothing more than very special cases, and the purpose of this post is largely to put linear polarization into context.

But before digging a little deeper, it's worthwhile first to recall the characteristics of an electromagnetic plane wave, and its possible polarizations.

Just like a gravitational wave, an electromagnetic plane wave has a direction of propagation $k$. The electric field $E$ and the magnetic field $B$ are then defined by perpendicular vectors of oscillating magnitude in a plane which is orthogonal to the propagation vector $k$. However, it is the direction in which the electric field vector points which defines the plane of polarization.

In the case of linear polarization, the plane of the electric field vector is constant. The electric field merely oscillates back-and-forth within this plane.

However, the most general case of an electromagnetic plane wave is one which is elliptically polarized. This is a superposition of two perpendicular plane waves, which may differ in either phase or amplitude. The polarization direction of one is separated by 90 degrees from the polarization direction of the other. The net effect is that the tip of the resultant electric field vector will sweep out an ellipse in the plane orthogonal to the direction of propagation.

If the relative phases of the component waves differ by 90 degrees, and the amplitudes of the two components are the same, then this reduces to the special case of circular polarization. In this event, the tip of the resultant electric field vector will sweep out a circle in the plane orthogonal to the direction of propagation.

One important distinction between gravitational waves and electromagnetic waves is that, whilst the most general case of an electromagnetic wave is defined as a linear combination of two components oriented at 90 degrees to each other, the most general case of a plane gravitational wave is defined as a linear combination of two components oriented at 45 degrees to each other.

To understand this, first note that the wave-fronts of a plane gravitational wave are represented by a foliation of space-time into a 1-parameter family of null hypersurfaces, each of which $\mathscr{W}$ is defined by a particular value of the function $\phi = t - z$.

This assumes that the z-coordinate is aligned with the direction of propagation of the wave. In general, one might be interested in surfaces with a constant value of $\omega (t - k \cdot x)$, with $\omega$ being the wave frequency and $k$ being the propagation vector.

Tangent to these null hypersurfaces $\mathscr{W}$ is a null vector field $Y$ which defines the space-time propagation vector of the gravitational wave (Sachs and Wu, General relativity for mathematicians, 1977, p244). The projection of the null vector field $Y$ into an observer's local rest-space at a point provides the spatial propagation vector $k$.

If one imagines space-time as a 2-dimensional plane, with the time axis $t$ as the vertical axis, and the spatial direction $z$ as the horizontal axis, then the null hypersurfaces of constant $\phi$ correspond to diagonal lines running from the bottom left to the top-right. These represent a gravitational wave passing from the left to the right of the diagram. An observer corresponds to a timelike curve, tracing a path from the bottom to the top of the diagram.

In Christian Reisswig's diagram below, (taken from a different application), the null hypersurfaces are those labelled as $u$=constant, and the worldline of an observer corresponds to that labelled as $R_\Gamma$.

As the proper time of the observer elapses, the observer's worldline intersects a sequence of the null hypersurfaces. This corresponds to the different phases of the wave passing through the observer's point-in-space. Hence $\phi$ can be thought of as defining the phase of a plane gravitational wave.

In terms of the metric tensor, a gravitational wave is typically represented as a perturbation $h_{\mu\nu}$ on a background space-time geometry $\bar{g}_{\mu\nu}$: $$ g_{\mu\nu} = \bar{g}_{\mu\nu} + h_{\mu\nu} $$ The perturbation is represented as follows: $$ h_{\mu\nu} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & h_+(\phi) & h_\times(\phi) & 0 \\ 0 & h_\times(\phi) & -h_+(\phi) & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \; . $$

The two components, or polarizations, of the wave are denoted as $h_+(\phi)$ and $h_\times(\phi)$. They form a net polarization tensor $h(\phi)$, which can be extracted from the metric tensor above, and written as follows: $$ h(\phi) = h_+(\phi)(e_x \otimes e_x - e_y \otimes e_y) + h_\times(\phi)(e_x \otimes e_y + e_y \otimes e_x) $$

Now, suppose that the source of a gravitational wave is a gravitationally bound system consisting of two compact objects (i.e., black holes or neutron stars). The plane of that orbital system will be inclined at an angle $\iota$ between 0 and 90 degrees to the line-of-sight of the observer. The case $\iota$ = 0 corresponds to a system which is face-on to the observer, and the case $\iota = \pi/2$ corresponds to a system which is edge-on to the observer.

The time-variation of a plane gravitational wave emitted by such a compact binary system, passing through a distant observer's point-of-view, is effectively specified by the phase-dependence of the two components of the wave: $$ h_+(\phi) = A(1+ \cos^2\iota) \cos (\phi) \\ h_\times(\phi) = -2A \cos \iota \sin \phi $$ $A$ determines the amplitude of the wave.

This is the general case, corresponding to elliptical polarization. The orbital paths of the stars or black holes in the binary system will appear as ellipses. In terms of the basis vectors in which the metric tensor perturbation is expressed, $e_x$ is determined by the long axis of the ellipse, and $e_y$ is perpendicular to $e_x$ in the plane orthogonal to the line-of-sight.

There are two special cases: when the system is face-on, the gravitational wave exhibits circular polarization; and when the system is edge-on, the wave exhibits linear polarization.

To make this explicit, consider first the case where the source of the wave is edge-on to the observer. $\iota = \pi/2$, hence $\cos^2 \iota = \cos \iota = 0$, and it follows that: $$ h_+(\phi) = A(1+ \cos^2\iota) \cos (\phi) = A \cos \phi \\ h_\times(\phi) = -2A \cos \iota \sin \phi = 0 $$ One of the polarization components has vanished altogether, hence from the perspective of the distant observer, space alternately stretches and contracts along a fixed pair of perpendicular axes. One of these axes, $e_x$, is determined by the orientation of the orbital plane of the source system, seen edge-on, and the other, $e_y$, is the axis perpendicular to $e_x$ in the plane orthogonal to the line-of-sight. The polarization tensor reduces to: $$\eqalign{ h(\phi) &= h_+(\phi)(e_x \otimes e_x - e_y \otimes e_y) \cr &= A \cos \phi(e_x \otimes e_x - e_y \otimes e_y)} $$ The negative sign associated with $e_y \otimes e_y$ entails that as space is stretching in direction $e_x$, it is contracting in direction $e_y$. This linear polarization is the simplest special case of a plane gravitational wave, as beautifully demonstrated in the animation below from Markus Possel:

In the other special case, the case of a face-on system, $\iota$ = 0. It follows that $\cos^2 \iota = \cos \iota = 1$, hence: $$ h_+(\phi) = A(1+ \cos^2\iota) \cos (\phi) = A \cos \phi + A \cos \phi = 2A \cos \phi \\ h_\times(\phi) = -2A \cos \iota \sin \phi = -2A \sin \phi $$ In this case, then, the two components have equal amplitude, $2A$, and differ by virtue of the fact that the $h_\times$ component lags 90 degrees behind the $h_+$ component. This is the case of circular polarization. As seen in the Markus Possel animation below, the net effect is to produce a rotation of the shear axes.
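A short sketch, assuming nothing beyond the two expressions for $h_+$ and $h_\times$ quoted above (with an arbitrary amplitude $A$), makes the three cases easy to compare: it evaluates the polarization components over one cycle of phase for edge-on, face-on and intermediate inclinations.

```python
import numpy as np

def polarization_components(phase, inclination, amplitude=1.0):
    """h_plus and h_cross for a compact binary at the given inclination.

    Uses the phase-dependence quoted above:
      h_plus  =  A (1 + cos^2 i) cos(phase)
      h_cross = -2 A cos(i) sin(phase)
    """
    h_plus = amplitude * (1.0 + np.cos(inclination)**2) * np.cos(phase)
    h_cross = -2.0 * amplitude * np.cos(inclination) * np.sin(phase)
    return h_plus, h_cross

phase = np.linspace(0.0, 2.0 * np.pi, 9)
for name, iota in [('edge-on (linear)', np.pi / 2),
                   ('face-on (circular)', 0.0),
                   ('inclined (elliptical)', np.pi / 4)]:
    h_plus, h_cross = polarization_components(phase, iota)
    # For the face-on case h_plus^2 + h_cross^2 stays constant (a circle);
    # for the edge-on case h_cross is zero throughout (a line).
    print(name, np.round(h_plus, 2), np.round(h_cross, 2))
```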

Sunday, February 28, 2016

Formula One and relativity

It would be not inaccurate to say that relativity theory has something of a low profile in Formula One. The recent announcement that gravitational waves have been detected for the first time aroused little more than a grudging blip of interest within the region of the autistic spectrum occupied by F1 vehicle dynamicists, strategists, and aerodynamicists.

It's worth noting, however, that modern F1 operations are heavily dependent upon relativity theory. F1 utilises GPS for its timing systems, and almost all teams use GPS for their trajectory analysis; and GPS, of course, is crucially dependent upon relativity theory.

To accurately establish the position of a car on the surface of the Earth, a GPS receiver must compare the time-stamps on signals it receives from multiple satellites, each one of which is orbiting the Earth at 14,000km/hr. To maintain the desired positional accuracy, the time on each such satellite must be known to within an accuracy of 20-30 nanoseconds.

However, there are two famous relativistic effects which have to be compensated for to maintain such accuracy: (i) special relativistic time dilation; (ii) general relativistic time dilation inside a gravitational well.

Because the satellites are in motion at high speed relative to the reference frame of a car on the surface of the Earth, their clock-ticks are slower by about 7 microseconds per day. Conversely, because a car lies deeper inside a gravitational well than the satellites, its clock-ticks will slow down by about 45 microseconds per day. The net effect is that the clocks on-board the satellites tick faster than those on-board an Earth-bound GPS receiver by about 38 microseconds per day.
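Those two figures are straightforward to reproduce to first order. The sketch below assumes a circular GPS orbit of radius roughly 26,560 km and neglects the Earth's rotation and oblateness; it computes the special-relativistic and gravitational contributions to the daily clock offset.

```python
import math

GM = 3.986004e14          # Earth's gravitational parameter, m^3/s^2
c = 2.998e8               # speed of light, m/s
r_earth = 6.371e6         # mean Earth radius, m (receiver altitude neglected)
r_orbit = 2.656e7         # assumed GPS orbital radius, m (~26,560 km)
seconds_per_day = 86400.0

# Special relativity: orbital speed v = sqrt(GM/r) slows the satellite clock
# by a fractional rate v^2 / (2 c^2).
v = math.sqrt(GM / r_orbit)
sr_rate = v**2 / (2.0 * c**2)

# General relativity: the satellite sits higher in the gravitational well,
# so its clock runs fast by the potential difference over c^2.
gr_rate = GM * (1.0 / r_earth - 1.0 / r_orbit) / c**2

print("SR slowdown : %.1f microseconds/day" % (sr_rate * seconds_per_day * 1e6))
print("GR speed-up : %.1f microseconds/day" % (gr_rate * seconds_per_day * 1e6))
print("net         : %.1f microseconds/day" % ((gr_rate - sr_rate) * seconds_per_day * 1e6))
```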

As Richard W. Pogge points out, "This sounds small, but the high-precision required of the GPS system requires nanosecond accuracy, and 38 microseconds is 38,000 nanoseconds. If these effects were not properly taken into account, a navigational fix based on the GPS constellation would be false after only 2 minutes, and errors in global positions would continue to accumulate at a rate of about 10 kilometers each day! The whole system would be utterly worthless for navigation in a very short time. This kind of accumulated error is akin to measuring my location while standing on my front porch in Columbus, Ohio one day, and then making the same measurement a week later and having my GPS receiver tell me that my porch and I are currently somewhere in the air kilometers away."

Which is worth recalling the next time GPS reveals that Dudley Duoflush is repeatedly missing the apex in Turn 4, or overtook under yellow-flag conditions between Turns 7 and 8.

Saturday, February 27, 2016

Red Bull's T-tray wing

Red Bull appeared at the first pre-season Formula 1 test this week with an interesting wing perched atop the T-tray splitter beneath the chassis. As Craig Scarborough points out on Autosport.com, the tips of this wing act as vortex generators. Craig also points out that the idea has been tried before, on the Brawn 001 in 2009.

The interesting thing about such a device is that it's profiled in the manner of an aircraft wing, generating low-pressure above and high pressure below. The consequence of this is that it generates vortices rotating in the same sense as the Y250 vortex on each side of the chassis.

So, for example, looking from a perspective in front of the car, and focusing on the right-hand-side of the chassis, both the Y250 vortex and the T-tray wing vortex rotate in an anticlockwise direction. On the left-hand-side, they both rotate in a clockwise direction.

Now, this is in contrast with the influence provided by a J-vane vortex. As alluded to in Jonathan Pegrum's 2006 academic work, when a vortex spinning around an axis pointing in the direction of the freestream flow passes close to a solid surface, it tends to pull a counter-rotating vortex off the boundary layer of that surface. Hence, when the Y250 vortex passes the J-vanes hanging from the underside of the raised nose on a Formula 1 car, it creates a pair of counter-rotating vortices on each side of the chassis.

For vortices sharing approximately the same rotation axis, it is a general rule that counter-rotating vortices tend to repel each other, whereas co-rotating vortices tend to attract each other. In fact, for a time, co-rotating vortices will orbit a common center of vorticity. This situation will persist so long as they are separated by a distance large compared to their vortex-core radii. Eventually, however, viscous diffusion will enlarge their respective cores, and they will begin to deform each other, eject arms of vorticity, and finally merge into a single, larger vortex.
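As a hedged toy illustration of that last point, the following sketch treats two vortices as ideal 2D point vortices (a drastic simplification of the real flow around an F1 front end) and advances them under their mutual Biot-Savart induction; a co-rotating pair of equal strength duly orbits its common centre of vorticity.

```python
import numpy as np

def step_point_vortices(positions, strengths, dt=0.01):
    """Advance ideal 2D point vortices one Euler step under mutual induction.

    Each vortex is advected by the velocity the others induce at its location:
    u_i = sum_{j != i} (Gamma_j / (2 pi)) * k_hat x (r_i - r_j) / |r_i - r_j|^2
    """
    new_positions = []
    for i, r_i in enumerate(positions):
        velocity = np.zeros(2)
        for j, r_j in enumerate(positions):
            if i == j:
                continue
            d = r_i - r_j
            # k_hat x d = (-d_y, d_x)
            velocity += strengths[j] / (2.0 * np.pi) * np.array([-d[1], d[0]]) / d.dot(d)
        new_positions.append(r_i + dt * velocity)
    return new_positions

# A co-rotating pair of equal strength: it orbits the midpoint between the
# two vortices, their common centre of vorticity, as described above.
positions = [np.array([-0.5, 0.0]), np.array([0.5, 0.0])]
strengths = [1.0, 1.0]
for _ in range(500):
    positions = step_point_vortices(positions, strengths)
print(positions)   # both vortices have swept part-way around a circle of radius 0.5
```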

Because the J-vane vortex rotates in the opposite sense to the Y250, it tends to repel it. Hence, the J-vane can be used to push the Y250 into the optimal position to fulfil its ultimate purpose, which is to push the front-wheel wake further outboard. 

However, the J-vane vortex can only push the Y250. Fitting a T-tray wing, which presumably generates vortices with the same sense of rotation as the Y250 itself, conceivably provides Red Bull with the ability to push and pull the position of the Y250, from two different downstream locations. That possibly improves their ability to fine-tune the position of the Y250 in both a vertical and lateral direction. Alternatively, of course, it may just be designed to interact with the vorticity generated by the bargeboards et al.

Whilst Brawn tried the same concept in 2009, note that the Brawn wasn't fitted with J-vanes, and the presence of a double-diffuser might have reduced the sensitivity of the diffuser to ingress of the front-wheel wake anyway.

Wednesday, February 10, 2016

Britain braced for -10C winter blast

The Daily Telegraph website has an article about the UK weather-forecast for this weekend. To enhance the restrained and informative nature of the article, which avoids lazy journalistic cliché, I've added my own parenthetical comments in red below:

Britain braced for -10C winter blast. (It's a maritime climate, and it's winter).

Sleet and snow is forecast as far south as Wales and the Midlands by the weekend with wintry showers across the rest of the country. (So not as far South as, say, The South. Not the Isle of Wight, not Bournemouth, but as far South as the Midlands. In a country with a North and a South, the bit roughly halfway between the two is The Midlands. So, the bit which isn't in the South is as far South as the sleet and snow is forecast to reach.)

Storm-hit Britain could be hit with an arctic blast bringing more than three inches of snow towards the end of the week. (It's a maritime climate, and it's winter).

A twist in the jet stream means bitter winds will bring freezing fog and widespread frosts. (A twist? Like when you hold one end of the jet-stream in an aerodynamic clamp, and rotate the other end around its axis? I guess the meanders in a river are colloquially said to be twists, but isn't it really a bend or a kink in the jet-stream?) 

Sleet and snow is forecast as far south as Wales and the Midlands by the weekend while wintry showers are possible across the country. (The headline said the forecast was for wintry showers across the rest of the country, now you're saying they're merely 'possible'. Are they likely or just possible? If you can't tell me, then you've failed in the primary journalistic task of disseminating useful information).

Gareth Harvey, a forecaster with MeteoGroup, said today would be “fairly chilly” and added there would be “widespread frost on Wednesday night, with temperatures between 0C and -3C, which could happen anywhere.” (It's a sad reflection on the modern world if literally 'anywhere' could suffer a frost. In winter, in a maritime climate).

He added: "In the northern part of Scotland, people will wake up to a covering of snow on Thursday morning with accumulations of up to several centimetres. A band of rain and snow will slowly move its way southwards but it will peter out as it reaches central parts." (Not just Scotland, but the 'northern part of Scotland', will wake up to a covering of snow. It's as if the chance of snow in winter increases at higher latitudes).

But the real cold snap will begin on Friday when experts say temperatures could plunge to -10C. (Experts. I didn't realise there were experts involved. These are people who know what they're talking about).

And James Madden, forecaster for Exacta Weather, said cold weather could hold out through the rest of this month. ('Could' or 'likely to'?)

He said: "The colder and wintry theme will begin to take more of a stronghold into the second week of February as the UK becomes locked in an icy and wintry grip." ('Stronghold', 'locked', 'grip'. Sounds like some panic-buying in the supermarkets is in order).


The chilly outlook comes as Britain recovers from the effects of Storm Imogen, which struck on Monday.

Which brings us to the naming of storms. Below the main story we find the following, under the heading 'A-Z of UK storms':

Why do we need to name them? Using a "single authoritative system" helps the media communicate what's happening more effectively, says the Met Office, which in turn increases public awareness.

In what sense, exactly, does the naming of transient patterns in atmospheric airflow constitute an 'authoritative system'? Do anonymous patterns of airflow lack presence in some way? Do anticyclones suffer from poor self-confidence? And how does giving a storm a name help the media to communicate what's happening more effectively? Should we also give economic recessions avuncular names, so that the media can explain more effectively why living standards are falling?

Saturday, February 06, 2016

Chemical adhesion and Formula One tyres

In the early 1980s, John Watson enjoyed what can only be described as a 'spree' of remarkable Grand Prix victories, achieved by overtaking numerous cars from mediocre or lowly grid positions.

Watson's success at Zolder and Detroit in '82, and Long Beach in '83, is commonly ascribed to using a harder compound of tyre, but John himself has commented that "it wasn't so straightforward [as a harder compound] because in those days there were extremely subtle differences between grades, compounds and construction of tyres and Michelin operated with great secrecy anyway," (1982, Christopher Hilton, p126). In particular, John mentions that the tyre he took on the left-hand side at Zolder in '82 was recommended by Michelin's Pierre Dupasquier on the basis of its performance on Bruno Giacomelli's Alfa Romeo at Las Vegas in 1981.

One can hypothesize that John achieved those stunning victories using a Michelin compound which was not only harder, but which generated an unusually high proportion of its grip from chemical adhesion.

In this context, recall that there are two distinct but related mechanisms by which a rubber tyre generates grip: (i) the viscoelastic deformation of the tyre by the 'asperities' in the road surface, ultimately leading to the viscous dissipation of kinetic energy into heat energy; and (ii) chemical adhesion at the interface between the tyre and the road surface.

The viscoelastic mechanism is often dubbed the 'hysteretic friction'. This is not our main concern here, but the interested reader is referred to Tyre friction and self-affine surfaces for an introduction to the representation and role of asperities.

Chemical adhesion is maximised by higher temperatures and higher contact areas. When a tyre gets hotter, it gets softer, and this allows it to deform further into the crenellations in the road surface, increasing the contact area. Hence, adhesion is maximised on smooth, hot surfaces.

Now, let's hypothesise that Las Vegas, Zolder, Detroit and Long Beach shared the following combination of characteristics: the asphalt was very smooth, and, (with the exception of Detroit), somewhere between fairly warm and very hot.

Certainly, Dupasquier has attested to the fact that Long Beach was a low 'severity' surface, (Alpine and Renault, Roy Smith, p148), and it seems likely that Detroit, as another street circuit, would have possessed similar characteristics. Las Vegas was basically just the car-park to a casino, so the same presumably applied there.

Whilst Detroit in '82 was slightly overcast, it was warmer than anticipated at Zolder, and the races at Las Vegas in '81 and Long Beach in '83 were run in high temperatures. On balance, then, Watson's amazing victories were mostly achieved on hot, smooth circuits, and the best tyre on a hot, smooth surface is one which generates a larger proportion of its grip from adhesive friction than hysteretic friction. 


A useful graph in this respect can be found in the latest paper co-authored by rubber-friction expert B.N.J. Persson, concerning the dependency of rubber friction on normal load (hereafter referred to as Fortunato et al). The graph, reproduced above, plots viscoelastic friction and adhesive friction as a function of the sliding velocity of a tyre.

The latter concept requires a brief digression: when a tyre is turned at an angle to the direction in which the car is moving (the so-called slip-angle, $\theta$), the contact patch is deformed at a velocity which has a component parallel to the direction in which the tyre is rolling, and a transverse component, perpendicular to the rolling direction. The latter component is the sliding velocity which generates a cornering force. In the figure above, this sliding velocity is plotted on a logarithmic horizontal scale; in other words, the axis expresses the sliding velocity as a power of 10.

If the car velocity is $v_c$, and the slip-angle is $\theta$, then the transverse slip velocity is $v_y = v_c \sin \theta$. Hence, approximately the same slip velocity can be generated by a large slip-angle in a slow-speed corner, and a smaller slip-angle in a high-speed corner. The actual slip velocities seen by an F1 contact patch, of the order of ~1 m/s, correspond to a value of 0 on the log scale in the figure above.
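To put some invented but plausible numbers on that (these are illustrative figures, not measurements from any particular corner):

```python
import math

# Two hypothetical corners: a slow corner taken with a large slip-angle and a
# fast corner taken with a small one both yield a transverse slip velocity of
# roughly 1 m/s, i.e. about 0 on the log10 scale of the figure.
for v_car, slip_deg in [(20.0, 3.0), (80.0, 0.75)]:
    v_slip = v_car * math.sin(math.radians(slip_deg))
    print(f"v_c = {v_car:5.1f} m/s, theta = {slip_deg:4.2f} deg -> "
          f"v_y = {v_slip:.2f} m/s, log10(v_y) = {math.log10(v_slip):+.2f}")
```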

Now, the friction coefficient generated by a tyre is actually a function of at least two principal variables: (i) the 'bulk' temperature of the tyre tread, and (ii) the sliding velocity. Hence, the coefficient of friction $\mu$ should always be imagined as a 2-dimensional surface.

If one represents bulk temperature along the x-axis, sliding velocity along the y-axis, and the friction coefficient as a vertical function $\mu = f(x,y)$, then peak adhesive and hysteretic friction can each be pictured as diagonal escarpments running from the bottom-left to the top-right of the horizontal plane. At a fixed sliding velocity, one can plot $\mu$ as a function of bulk temperature; and at a fixed bulk temperature, one can plot $\mu$ as a function of the sliding velocity. The figure above from Fortunato et al represents only a slice of the latter type.
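To make the picture concrete, here is a purely qualitative toy $\mu$ surface; the ridge positions, widths and amplitudes are invented for illustration, not fitted to any real tyre data. It simply encodes the idea above: two humps in sliding velocity, one adhesive and one hysteretic, both of which migrate towards higher sliding speeds as the bulk temperature rises. Slicing it at fixed temperature gives a curve of the kind shown in Fortunato et al; slicing it at fixed sliding velocity gives $\mu$ as a function of temperature.

```python
import numpy as np

def mu_surface(temp_c, log_v):
    """Toy friction coefficient as a function of bulk temperature (deg C) and
    log10 of sliding velocity. All constants are invented for illustration."""
    adhesive_peak   = -1.0 + 0.03 * (temp_c - 20.0)   # adhesive ridge shifts right as temperature rises
    hysteretic_peak =  1.0 + 0.03 * (temp_c - 20.0)   # so does the hysteretic ridge
    mu_adh  = 0.8 * np.exp(-((log_v - adhesive_peak)   / 0.8) ** 2)
    mu_hyst = 0.9 * np.exp(-((log_v - hysteretic_peak) / 0.8) ** 2)
    return mu_adh + mu_hyst

# Slice at fixed bulk temperature: mu versus sliding velocity (the Fortunato-style slice).
for lv in np.linspace(-2.0, 2.0, 5):
    print(f"T = 80 C, log10(v) = {lv:+.1f} -> mu = {mu_surface(80.0, lv):.2f}")

# Slice at fixed sliding velocity (~1 m/s): mu versus bulk temperature.
for temp in (40.0, 60.0, 80.0, 100.0):
    print(f"log10(v) = 0.0, T = {temp:5.1f} C -> mu = {mu_surface(temp, 0.0):.2f}")
```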
 
As a tyre ages and wears, it loses the ability to generate and retain heat, and its temperature begins to fall. If a driver continued inducing the same slip-velocities as the tyre temperature dropped, then $\mu$ would follow a track parallel with the x-axis, and the drop in grip would be quite precipitous. It's more likely that as a tyre ages, either the cornering speed will reduce, or the driver will fractionally reduce the slip-angles, thereby reducing the slip velocities, and the grip will follow more of a diagonal path, down the ridge of the escarpment towards the bottom left of the $\mu$ surface.

Fortunato et al make the crucial point that "at room temperature the maximum in the adhesive contribution is located below the typical slip velocities in tire [sic] applications (1 - 10 m/s), while the maximum in the viscoelastic contribution may be located above typical sliding speeds...Increasing the temperature shifts both [the adhesive and hysteretic mu] towards higher sliding speeds, and also increases the area of real contact A, making the adhesive contribution more important. Depending on the relative importance of the adhesive and viscoelastic contribution to the friction, the friction coefficient may increase or decrease with increasing temperatures."

John Watson's victories in the early 80s were achieved on tyres which took some laps to 'come in'. This all adds up, then, to a tyre which generated a higher proportion of its grip from adhesion, and which only generated peak $\mu$ once it had been strained sufficiently to reach a higher temperature. In this case, the greater adhesion at high temperatures more than offset the loss of hysteretic friction.

During the Michelin and Bridgestone tyre war of the early 2000s, Formula One tyres continued to generate a significant proportion of their grip from chemical adhesion, hence a driver was able to 'push' on consecutive laps, without losing grip. The gain in chemical adhesion would offset the loss of hysteretic friction.

In contrast, if we consider the hypothetical case of a tyre which generated only a small proportion of its grip from chemical adhesion, then even before the effects of wear kick in, a racing driver would find such tyres to be constantly balanced on a knife-edge of hysteretic grip. Push too hard for several laps, and as the tyre gets hotter, it would lose hysteretic grip without a compensating gain in adhesion...

Saturday, January 23, 2016

Formula 1 strategy and Nash equilibrium

At first sight, Formula 1 race strategy seems to be an ideal domain for the application of game-theory. There is a collection of non-cooperative agents, each seeking to anticipate the decisions of their competitors, and to choose a strategy which maximizes their pay-off. The immediate pay-off at each Grand Prix is championship points.

However, there's a subtlety of game-theory which needs to be appreciated before its most famous concept, that of Nash equilibrium, can be applied.

Let's begin with the game-theory. John Nash demonstrated that a non-cooperative n-player game, in which each player has a finite set of possible strategies, must have at least one point of equilibrium.

This equilibrium is a state in which each player's choice of strategy cannot be improved, given every other player's choice of strategy. In game-theoretic language, each player's pay-off is maximized, given every other player's choice of strategy.

In formal terms, there must be an n-tuple of strategies $\sigma = (\sigma_1, \ldots, \sigma_n)$ in which the pay-off for each player, $v_i$, is maximized:

$v_i(\sigma) = \max_{\sigma_i} v_i(\sigma_1, \ldots, \sigma_n), \quad \text{for } i = 1, \ldots, n$

where the maximum is taken over all the player-$i$ strategies, $\sigma_i$.

The set of strategies adopted by the teams at each Grand Prix should possess at least one such state of Nash equilibrium, (irrespective of whether the competitors are capable of finding that optimal state). However, it's possible to define a simple and realistic scenario which, at first sight, undermines Nash equilibrium.

Suppose that a Ferrari is ahead of a Mercedes in the early laps of a race, but the Mercedes has a pace advantage. Suppose, however, that the pace delta between the cars is less than the minimum threshold for a non-zero probability of the Mercedes overtaking the Ferrari.

Now, for the sake of argument, suppose that due to aerodynamic interference from the wake of the car ahead, the Mercedes cannot follow closer than 1.5 seconds behind the Ferrari, and suppose that tyre degradation is sufficiently low that new tyres provide a 1-lap undercut worth less than 1 second. Even if Mercedes pit first, Ferrari can respond the next lap, and (assuming an error-free stop) will emerge still in the lead.
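A back-of-the-envelope check, using the illustrative figures above (a 1.5 second minimum following distance, an undercut worth less than 1 second; these are assumed numbers, not real race data):

```python
# Assumed figures matching the scenario above, purely for illustration.
gap_behind_ferrari = 1.5   # seconds: the closest Mercedes can follow, due to the wake
undercut_gain      = 0.9   # seconds: best-case gain from stopping one lap earlier

# If Mercedes pits first and Ferrari responds on the next lap (error-free stop),
# the gap simply shrinks by the undercut gain and Ferrari stays ahead.
gap_after_stops = gap_behind_ferrari - undercut_gain
print(f"Gap after both stops: {gap_after_stops:+.1f} s (positive = Ferrari still leads)")
```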

Clearly, if Mercedes are to beat Ferrari they will need to use a different strategy. Let's make this interesting by postulating that whilst a 1-stop strategy is the fastest 'deterministic' race, a 2-stop strategy is only a second or so slower.

Now, if the Mercedes switches to a 2-stop strategy, it will be out of sync with the Ferrari, will be able to circulate at its true pace, and will be able to beat the Ferrari if the Scuderia remain on a 1-stop. (For the sake of argument, we assume that there are traffic-free gaps into which the Ferrari can pit, without being delayed by other competitors).

However, if Ferrari anticipates this and plans a 2-stop strategy, it will still win the race. If both cars are on the 2-stop strategy, Mercedes cannot utilise its superior pace.

However, if Mercedes in turn anticipates that, it can win the race by sticking to the original 1-stop strategy...which Ferrari, again, can head off by sticking with the 1-stop. And so on, ad infinitum.


Clearly, there is no Nash equilibrium here. Each possible combination of strategies is such that at least one competitor can improve their pay-off by changing strategy, if the other competitor's strategy remains fixed. This structure is depicted graphically above. Each of the four cells represents a possible combination of 1-stop and 2-stop strategies. The pair of numbers in each cell represents the pay-off, in championship points, for Ferrari and Mercedes, respectively.

The coloured arrows indicate how one competitor can always improve their pay-off. For example, the top-left cell represents the case in which Ferrari and Mercedes both pursue a 1-stop strategy. The blue arrow reaching across to the top-right cell indicates that Mercedes can improve their pay-off by switching to a 2-stop strategy, if Ferrari remain wedded to the 1-stop. However, the downward red arrow in the top-right cell indicates that Ferrari can improve their pay-off by switching to a 2-stop if Mercedes remain committed to a 2-stop.
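The cycle of arrows can be checked mechanically. The sketch below assumes a simple 25/18 points split for first and second place (the actual numbers in the diagram may differ): Ferrari scores 25 whenever the two strategies match, Mercedes scores 25 whenever they differ, which gives the game the structure of matching pennies. In every one of the four pure-strategy cells, at least one competitor can improve by deviating.

```python
import itertools

STRATEGIES = ("1-stop", "2-stop")

def payoffs(ferrari, mercedes):
    """(Ferrari points, Mercedes points): assumed 25 for the win, 18 for second.
    Ferrari wins when the strategies match, Mercedes when they differ."""
    return (25, 18) if ferrari == mercedes else (18, 25)

for f, m in itertools.product(STRATEGIES, repeat=2):
    f_pay, m_pay = payoffs(f, m)
    f_improves = any(payoffs(alt, m)[0] > f_pay for alt in STRATEGIES)
    m_improves = any(payoffs(f, alt)[1] > m_pay for alt in STRATEGIES)
    print(f"Ferrari {f}, Mercedes {m}: "
          f"Ferrari can improve: {f_improves}, Mercedes can improve: {m_improves}")
# In every cell at least one of the two flags is True, so there is no
# pure-strategy Nash equilibrium.
```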

The problem here is that the strategies considered are termed 'pure' strategies in game-theoretic terms. Nash's theorem pertains not to pure strategies, but to probabilistic combinations of pure strategies, called 'mixed' strategies. If there are two possible pure strategies, A and B, a mixed strategy is one in which, for example, you resolve to follow strategy-A 30% of the time, and strategy-B 70% of the time. You must also use a random number generator to enforce the probabilistic split.

A mixed strategy, then, is a rather abstract thing, and not necessarily something which represents human strategic thinking. People often have contingency plans, alternative strategies that they will adopt if certain events occur, but they rarely frame their original strategy in terms of probabilistic mixtures.

In terms of the Formula 1 strategy scenario defined above, there is a state of Nash equilibrium: if Ferrari and Mercedes both adopt the mixed strategy of pursuing a 1-stop with 50% probability and a 2-stop with 50% probability, then neither competitor has a mixed strategy which offers an improvement in terms of their average pay-off.
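Again with the assumed 25/18 points split, one can verify the indifference property that makes the 50/50 mixture an equilibrium: against an opponent mixing 50/50, every strategy, pure or mixed, yields the same expected pay-off, so no unilateral deviation helps (and the same holds for Mercedes, by symmetry).

```python
STRATEGIES = ("1-stop", "2-stop")

def payoffs(ferrari, mercedes):
    """(Ferrari points, Mercedes points): assumed 25 for the win, 18 for second."""
    return (25, 18) if ferrari == mercedes else (18, 25)

def expected_ferrari(p_ferrari_1stop, p_mercedes_1stop):
    """Ferrari's expected points when both competitors mix over 1-stop/2-stop."""
    total = 0.0
    for f, pf in zip(STRATEGIES, (p_ferrari_1stop, 1.0 - p_ferrari_1stop)):
        for m, pm in zip(STRATEGIES, (p_mercedes_1stop, 1.0 - p_mercedes_1stop)):
            total += pf * pm * payoffs(f, m)[0]
    return total

# Against a 50/50 Mercedes, Ferrari's expected score is 21.5 however it mixes,
# so Ferrari has no profitable deviation from 50/50 (and vice versa).
for p in (0.0, 0.25, 0.5, 1.0):
    print(f"Ferrari plays 1-stop with probability {p:.2f}: "
          f"expected points = {expected_ferrari(p, 0.5):.1f}")
```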

However, Formula 1 teams are unlikely to adopt such a coin-tossing approach to strategy, so a Grand Prix potentially offers an interesting case study of a non-cooperative n-player game far from Nash equilibrium.

Sunday, January 17, 2016

Pantheism and religion

Sandwiched between articles on human flatulence and the hazard posed by pigeon-droppings to electricity pylons, the 2015 Christmas/New Year edition of New Scientist contained an article by theologian Mary-Jane Rubenstein. The main thrust of the article attempts to draw parallels between some ancient philosophies and modern multiverse proposals in cosmology.

Specifically, Mary-Jane argues that the atomists were proposing a type of spatial multiverse, whilst the stoics were advocating a temporal one. Although it's stretching the point somewhat, the majority of the article is quite interesting.

However, as we reach the final paragraphs, Mary-Jane can be found citing a type of pantheism advocated by Nicholas of Cusa:

"Traditionally, Christian doctrine has taught that humans are made in the image of God. Cusa disrupted this idea by saying that the universe, not man, bears the image of God. And if humans are not particularly godlike, then God is not particularly humanoid. God doesn't look like a patriarch in the sky: he looks like the universe."

Now, pantheism is a rather strange notion. It's as if one has responded to the question 'Do unicorns exist as well as horses?' by replying 'Yes, they do, but they don't have horns, and can be identified with, or considered to resemble horses.'

But that's not the main problem with the article. The main problem comes in the final paragraph, where Mary-Jane concludes that because pantheisms "change what it means to be God...we don't need to choose between God and the multiverse...Is it possible that modern cosmology is asking us, not to abandon religion, but to think differently about what it is that gives life, what it is that's sacred, where it is we come from - and where we'll go?"

Whoa! Hold on a cotton-picking minute there, Mary-Jane. Perhaps there were some readers whose blood-flow was devoted more towards the stomach than the brain over the Christmas period, and under such conditions it might be possible to miss the sleight-of-hand here. Under most other conditions it's not too difficult to spot the sudden jump from the abstract metaphysical concept of pantheism to the introduction of religion.

The term 'religion' doesn't just entail a bundle of metaphysical concepts: it means a human institution; it means scripture, liturgy, a priesthood, a dogmatic moral code, the indoctrination of children, and the amplification of tribal behaviour.

That's rather more than pantheism suggests, I fear, and certainly not the answer to any of the questions posed by multiverse cosmology.