Tuesday, 8 August 2017

What we learn from the learning rate

Cells need to sense their environment in order to survive. For example, some cells measure the concentration of food or the presence of signalling molecules. We are interested in studying the physical limits to sensing with limited resources, to understand the challenges faced by cells and to design synthetic sensors.

We have recently published a paper, 'What we learn from the learning rate' (free version), in which we explore the interpretation of a metric called 'the learning rate' that has been used to measure the quality of a sensor (e.g. here). Our motivation is that a number of metrics (a metric is a number you can calculate from the properties of the sensor that, ideally, tells you how good the sensor is) have been applied in this field to make statements about the quality of sensing, or about limits to sensory performance. A limit of particular interest, for example, is the energy required for sensing. However, it is not always clear how to interpret these metrics. We want to find out what the learning rate means: if one sensor has a higher learning rate than another, what does that tell you?

The learning rate is defined as the rate at which changes in the sensor increase the information the sensor has about the signal. The information the sensor has about the signal is how much your uncertainty about the state of the signal is reduced by knowing the state of the sensor (this is known as the mutual information). From this definition it seems plausible that the learning rate could be a measure of sensing quality, but it is not obvious that it must be. Our approach is a test to destruction: challenge the learning rate in a variety of circumstances, and try to understand how it behaves and why.

To do this we need a framework to model a general signal and sensor system. The signal hops between discrete states and the sensor also hops between discrete states in a way that follows the signal. A simple example is a cell using a surface receptor to detect the concentration of a molecule in its environment.

[Figure: the joint signal-and-sensor system described below, with a two-state signal (high/low concentration) and a two-state receptor (bound/unbound).]
The figure shows such a system. The circles represent the states and the arrows represent transitions between the states. The signal is the concentration of a molecule in the cell’s environment. It can be in two states: high or low, where high is double the concentration of low. The sensor is a single cell-surface receptor, which can be either unbound or bound to a molecule. Therefore, the joint system can be in four different states. The concentration jumps between its states with rates that don’t depend on the state of the sensor. The receptor becomes unbound at a constant rate, and becomes bound at a rate proportional to the molecule’s concentration.
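To make this concrete, here is a minimal Python sketch of the joint signal-and-receptor system as a four-state Markov process. All rate constants are illustrative assumptions rather than values from the paper; the code builds the generator matrix, finds the stationary distribution, and computes the mutual information between signal and sensor.

```python
import numpy as np
from scipy.linalg import null_space

# Illustrative rate constants (assumptions, not values from the paper)
k_flip = 1.0                      # signal switching rate (both directions)
k_off = 1.0                       # receptor unbinding rate
k_on = 1.0                        # binding rate per unit concentration
c = {"low": 1.0, "high": 2.0}     # high is double the low concentration

# Joint states: (signal, receptor)
states = [(s, r) for s in ("low", "high") for r in ("unbound", "bound")]
n = len(states)

# Generator matrix Q: Q[i, j] is the rate of jumping from state i to state j
Q = np.zeros((n, n))
for i, (s, r) in enumerate(states):
    for j, (s2, r2) in enumerate(states):
        if i == j:
            continue
        if r == r2 and s != s2:
            Q[i, j] = k_flip          # signal flips; rate ignores the sensor
        elif s == s2 and (r, r2) == ("unbound", "bound"):
            Q[i, j] = k_on * c[s]     # binding is concentration-dependent
        elif s == s2 and (r, r2) == ("bound", "unbound"):
            Q[i, j] = k_off           # unbinding at a constant rate
    Q[i, i] = -Q[i].sum()

# Stationary distribution pi solves pi @ Q = 0
pi = null_space(Q.T).ravel()
pi /= pi.sum()

# Mutual information (in bits) between signal and sensor at stationarity
p = pi.reshape(2, 2)              # rows: signal state, columns: receptor state
ps, pr = p.sum(axis=1), p.sum(axis=0)
mi = sum(p[a, b] * np.log2(p[a, b] / (ps[a] * pr[b]))
         for a in range(2) for b in range(2) if p[a, b] > 0)
print(f"Stationary mutual information: {mi:.4f} bits")
```

The learning rate of such a system can likewise be computed from the stationary probabilities together with the transition rates, which is what makes comparisons between the two quantities straightforward.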


We calculated the learning rate for several systems, including the one above, and compared it to the mutual information between the signal and the sensor. We found that in the simplest case, shown in the figure, the learning rate essentially reports the correlation between the sensor and the signal, and so it shows you the same thing as the mutual information. In more complicated systems the learning rate and the mutual information show qualitatively different behaviour. This is because the learning rate actually reflects the rate at which the sensor must change in response to the signal, which is not, in general, equivalent to the strength of the correlations between the signal and the sensor. Therefore, we do not think that the learning rate is useful as a general metric for the quality of a sensor. Rory, Nick and Tom

Tuesday, 18 July 2017

Looplessness

Complex (adj.): 1. Consisting of many different and connected parts. ‘A complex network of water channels’.

Oxford English Dictionary

‘Complex systems’ – like cells, the brain or human society – are often defined as those whose interesting behaviour emerges from the interaction of many connected elements. A simple but particularly useful representation of almost any complex system is therefore as a network (aka a graph). When the connections (edges) between elements (nodes) have a direction, this takes the form of a directed network. For example, to describe interactions in an ecosystem, ecologists use directed networks called food webs, in which each species is a node and directed edges (usually drawn as arrows) go from prey to their predators. The last two decades have witnessed a lot of research into the properties of networks, and into how their structure relates to aspects of complex systems, such as their dynamics or robustness. In the case of ecosystems, it has long been thought that their remarkable stability – in the sense that they don’t tend to succumb easily to destructive avalanches of extinctions – must have something to do with their underlying architecture, especially given May’s paradox: mathematical models predict that ecosystems should become more unstable with increasing size and complexity, but this doesn’t seem to happen to, say, rainforests or coral reefs.

Trophic coherence

In 2014 we proposed a solution to May’s paradox: the key structural property of ecosystems was a food-web feature called “trophic coherence”. Ecologists classify species by trophic level in the following way. Plants (nodes with no incoming edges) have level one, herbivores (species whose only incoming edges come from plants) are at level two, and, in general, the level of any species is defined as the average level of its prey, plus one. Thus, if the network in the top left-hand corner of the figure below represented a food web, the nodes at the bottom would be plants (level 1), the next ones up herbivores (level 2), the next primary carnivores (level 3), and so on. In reality, though, food webs are never quite so neatly organised, and many species prey on various levels, making food webs a bit more like the network in the top right-hand corner. Here, most species have a fractional trophic level. In order to measure this degree of order, which we called trophic coherence, we attributed to each directed edge a “trophic difference”: the difference between the levels of the predator and the prey. We then looked at the statistical distribution of differences over all the edges in the whole network. We called the standard deviation of this distribution an “incoherence parameter”, q, because a perfectly coherent network like the one on the left has q=0, while a more incoherent one like that on the right has q>0 – in this case, q=0.7.
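To make the definitions concrete, here is a minimal Python sketch (our own illustration, not code from the paper) that computes trophic levels and the incoherence parameter q from an adjacency matrix. It assumes the network has at least one basal node (a node with no incoming edges), as any food web does.

```python
import numpy as np

def trophic_levels(A):
    # A[i, j] = 1 for a directed edge i -> j (from prey i to predator j).
    # Basal nodes (no incoming edges) get level 1; every other node gets
    # one plus the average level of its prey. This is a linear system.
    n = A.shape[0]
    k_in = A.sum(axis=0)
    M = np.eye(n)
    for j in range(n):
        if k_in[j] > 0:
            M[j] -= A[:, j] / k_in[j]
    return np.linalg.solve(M, np.ones(n))

def incoherence(A):
    # q is the standard deviation of the trophic differences over all edges
    # (their mean is exactly 1 by the definition of trophic level).
    s = trophic_levels(A)
    i, j = np.nonzero(A)
    x = s[j] - s[i]                 # one trophic difference per edge
    return np.sqrt(np.mean(x**2) - np.mean(x)**2)

# A perfectly coherent chain (plant -> herbivore -> carnivore) has q = 0
A = np.zeros((3, 3))
A[0, 1] = A[1, 2] = 1
print(trophic_levels(A))            # [1. 2. 3.]
print(incoherence(A))               # 0.0
```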

It turns out that the trophic coherence of food webs is key to their stability, and when we simulated (i.e. generated in the computer) networks with varying levels of coherence, we found that, for sufficiently coherent ones, the relationship between size and stability is inverted. Although there are plenty of caveats to this result – not least the question of how one should measure stability – this suggests a solution to May’s paradox. Since then, further research has shown that trophic coherence affects other structural and dynamical properties of networks – for instance, whether a cascade of activity will propagate through a neural network (example papers here, here and here!). But all these results were somewhat anecdotal, since we didn’t have a mathematical theory relating trophic coherence to other network features. This is what we set out to do in our most recent paper.


Figure. Four directed networks, plotted so that the height of each node on the vertical axis is proportional in each case to its trophic level. The top two are synthetic networks, generated in a computer with the ‘preferential preying model’, which allows the user to tune trophic coherence [1,3]. Thus, they both have the same numbers of nodes and edges, but the one on the left is perfectly coherent (q=0) while the one on the right is more incoherent (q=0.7). The bottom two are empirically derived: the one on the left is the Ythan Estuary food web, which is significantly coherent (it has q=0.42, which is about 15% of its expected q) and belongs to the loopless regime; the one on the right is a representation of the Chlamydia pneumoniae metabolic network, which is significantly incoherent (q=8.98, or about 162% of the random expectation) and sits in the loopful regime. The top two networks are reproduced from the SI of Johnson et al., PNAS, 2014 [1], while the bottom two are from the SI of Johnson & Jones, PNAS, 2017 [5].




Looplessness

In statistical physics one thinks about systems in terms of ensembles – the sets of all possible systems which satisfy certain constraints – and this method has also been used in graph theory. For example, the Erdős-Rényi ensemble comprises all possible networks with given numbers of nodes N and edges L, while the configuration ensemble also specifies the degree sequence (the degree of a node being its number of neighbours). We defined the “coherence ensemble” as the set of all possible directed networks which not only have given N, L and degree sequences (each node has two degrees in directed networks, one in and one out) but also specified trophic coherence. This allows us to derive equations for the expected values of various network properties as a function of trophic coherence; in other words, these are the values we should expect to measure in a network given its trophic coherence (and other specified constraints) if we had no other knowledge about its structure.

Many network properties are heavily influenced by cycles – that is, paths through a network which begin and end at the same node. For example, in a food web you might find that eagles eat snakes, which eat squirrels, which eat eagles (probably in egg form), thus forming a cycle of length three. These cycles (properly called ‘directed cycles’ in directed networks), or loops, are related to various structural and dynamical features of complex systems. For example, feedback loops can destabilise ecosystems, mediate self-regulation of genes, or maintain neural activity in the brain. Furthermore, it had been reported that certain kinds of network – in particular, food webs and gene regulatory networks – often had either no cycles at all, or only a small number of quite short cycles. This was surprising, because in (arbitrarily large) random networks the number of cycles of length l grows exponentially with l, so it was assumed that there must be some evolutionary reason for this “looplessness”.

We were able to use our coherence-ensemble approach to derive the probability with which a randomly chosen path would be a cycle, as a function of q. From there we could obtain expected values for the number of cycles of length l, and for other quantities related to stability (in particular, for the adjacency matrix eigenspectrum, which captures the total extent of feedback in a system). It turns out that the number of cycles does indeed depend on length exponentially, but via a factor τ which is a function of trophic coherence. For sufficiently coherent networks, τ is negative, and hence the expected number of cycles of length l falls rapidly to zero. In fact, such networks have a high chance of being completely acyclic. Thus, our theory predicts that networks can belong to either of two regimes, depending on the “loop exponent” τ: a loopful one with lots of feedback, or a loopless one in which networks are either acyclic or have just a few short cycles. A comparison with a large set of networks from the real world – including networks of species, genes, metabolites, neurons, trading nations and English words – shows that this is indeed so, and almost all of them are very close to our expectations given their trophic coherence.
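The link between cycles and the adjacency matrix eigenspectrum can be seen with the standard closed-walk counting argument. Here is a small Python sketch (our own illustration, not the paper's derivation): trace(A^l) counts the closed walks of length l, it grows asymptotically like the leading eigenvalue raised to the power l, and for an acyclic (perfectly coherent) network every such count, and every eigenvalue, is zero.

```python
import numpy as np

def closed_walk_counts(A, lmax=8):
    # trace(A^l) counts closed walks of length l; each directed cycle of
    # length l contributes l of them, so this is a proxy for cycle counts.
    counts, Al = [], np.eye(A.shape[0])
    for _ in range(lmax):
        Al = Al @ A
        counts.append(np.trace(Al))
    return counts

# A perfectly coherent (hence acyclic) network: no feedback at all
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)
print(closed_walk_counts(A, 4))                    # [0.0, 0.0, 0.0, 0.0]
print(np.abs(np.linalg.eigvals(A)).max())          # 0.0: the loopless regime
```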

Our theory can also be used to see how close quantities such as trophic coherence, or mean trophic level, are to their random expectations, given just N, L and the degree sequences, for any real directed network. We found, for example, that in our dataset the food webs tended to be very coherent, while networks derived from metabolic reactions were significantly incoherent (see the bottom two networks in the figure: the one on the left is a food web and the one on the right is a metabolic network). Our gene regulatory networks are interesting in that, while often quite coherent in absolute terms, they are in fact very close to their random expectation.

Open questions

This work leaves open many new questions. Why are some networks significantly coherent, and others incoherent? We can guess at the mechanism behind food-web coherence: the adaptations which allow a given predator, say a wolf, to hunt deer are also useful for catching prey like goats or elk, which have similar characteristics because they, in turn, have similar diets – i.e. trophic levels. This correlation between trophic levels and node function might be more general. For example, we have shown that in a network of words which are concatenated in a text, trophic level serves to identify syntactic function, and something similar may occur in networks of genes or metabolites. If edges tend to form primarily between nodes with certain functions, this might induce coherence or incoherence. Some networks, like the artificial neural networks used for “deep learning”, are deliberately coherent, which suggests another question: how does coherence affect the performance of different kinds of system? Might there be an optimal level of trophic coherence for neural networks? And how might it affect financial, trade, or social networks, which can, in some sense, be considered human ecosystems? We hope topics such as these will attract the curiosity of other researchers who can make further inroads. You can read our paper “Looplessness in networks is linked to trophic coherence” for free here and also in the journal PNAS. Sam and Nick.

Thursday, 13 July 2017

Mitochondrial heterogeneity, metabolic scaling and cell death


Juvid Aryaman, Hanne Hoitzing, Joerg P. Burgstaller, Iain G. Johnston and Nick S. Jones


Cells need energy to produce functional machinery, deal with challenges, and continue to grow and divide -- these activities and others are collectively referred to as "cell physiology". Mitochondria are the dominant energy sources in most of our cells, so we'd expect a strong link between how well mitochondria perform and cell physiology. Indeed, when mitochondrial energy production is compromised, deadly diseases can result -- as we've written about before.

The details of this link -- how cells with different mitochondrial populations may differ physiologically -- are not well understood. A recent article shed new light on this link by looking at a measure of mitochondrial functionality in cells of different sizes. They found what we'll call the "mitopeak": mitochondrial functionality peaks at intermediate cell sizes, with larger and smaller cells having less functional mitochondria. The subsequent interpretation was that there is an "optimal", intermediate size for cells. For cells above this size, it was suggested that a proposed universal relationship between the energy demands of organisms (from microorganisms to elephants) and their size predicts the reduction in the function of mitochondria. Smaller cells, which result from a large cell having divided, were suggested to have inherited their parent's low mitochondrial functionality. Cells were predicted to "reset" their mitochondrial activity as they initially grow and reach an "optimal" size.

We were interested in the mitopeak, and wondered whether scientifically simpler hypotheses could account for it. Using mathematical modelling, our idea was to use the observation that as a cell becomes larger in volume, the size of its mitochondrial population (and hence its power supply) increases in concert. We considered that a cell has power demands which also track its volume, as well as demands which are proportional to its surface area and demands which do not depend on cell size at all (such as the energetic cost of replicating the genome at cell division, since the size of a cell's genome does not depend on how big the cell is). If power supply equals demand in a cell, then bigger cells may more easily satisfy e.g. the constant power demands: the number of mitochondria increases with cell volume, yet the constant demands remain the same regardless of cell size. In other words, if a cell has more mitochondria as it gets larger, then each mitochondrion has to work less hard to satisfy power demand.

To explain why the smallest cells also have mitochondria which do not appear to work hard, we suggested that some smaller cells could be in the process of dying. If smaller cells are more likely to die, and if dying cells have low mitochondrial functionality (both of these ideas are biologically supported), then, by combining this with the power supply/demand picture above, the observed mitopeak naturally emerges from our mathematical model.
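Here is a minimal Python sketch of how these two ingredients can combine to produce a peak. All parameter values and functional forms are our own illustrative assumptions, not quantities fitted in the paper: per-mitochondrion workload falls with cell volume, dying cells (assumed more common at small sizes) have low functionality, and the population average then peaks at an intermediate volume.

```python
import numpy as np

# Illustrative parameters (assumptions, not fitted values from the paper)
alpha, beta, gamma = 1.0, 1.0, 2.0  # volume, surface-area and constant demands
rho = 1.0                           # mitochondria per unit cell volume

def work_per_mitochondrion(V):
    # Supply = demand: the rho*V mitochondria share a demand with volume
    # (alpha*V), surface-area (beta*V^(2/3)) and constant (gamma) parts.
    demand = alpha * V + beta * V**(2 / 3) + gamma
    return demand / (rho * V)       # decreases as the cell grows

def p_dying(V, V0=0.5, k=8.0):
    # Assumed logistic drop: smaller cells are more likely to be dying.
    return 1.0 / (1.0 + np.exp(k * (V - V0)))

def mean_functionality(V, dying_fraction_of_healthy=0.2):
    # Population average over healthy and dying cells of volume V, with
    # dying cells at a fixed fraction of healthy functionality.
    f = work_per_mitochondrion(V)
    return (1 - p_dying(V)) * f + p_dying(V) * dying_fraction_of_healthy * f

V = np.linspace(0.2, 5.0, 200)
f = mean_functionality(V)
print(f"Functionality peaks at V = {V[np.argmax(f)]:.2f} (the 'mitopeak')")
```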

As an alternative model, we also suggested that the mitopeak could come entirely from a nonlinear relationship between cell size and cell death, with mitochondrial functionality as a passive indicator of how healthy a cell is. This indicates the existence of multiple hypotheses which could explain this new dataset.

Interestingly, we also found that the mitopeak could be an alternative to one aspect of a model we used some time ago to explain a different dataset, looking at the physiological influence of mitochondrial variability. Then, we modelled the activity of mitochondria as a quantity that is inherited identically by each daughter cell from its parent, plus some noise -- noting that this was a guess at the true behaviour, because we didn't have the data to make a firm statement. We needed this relationship because observed functionality varied comparatively little between sister cells but substantially across a population. The mitopeak induces this variability without needing random inheritance of functionality, and may thus be the refined picture we've been looking for. These ideas, and suggestions for future strategies to explore the link between mitochondria and cell physiology in more detail, are freely available in our new BioEssays article "Mitochondrial heterogeneity, metabolic scaling and cell death" here. Juvid, Nick, and Iain.

Thursday, 19 January 2017

Using (mutual) information as a chemical battery

Biological systems at many scales exploit information to extract energy from their environment. In chemotaxis, single-celled organisms use the location of food molecules to navigate their way to more food; humans use the fact that food is typically found in the cafeteria. Although the general idea is clear, the fundamental physical connection between information and energy is not yet well-understood. In particular, whilst energy is inherently physical, information appears to be an abstract concept, and relating the two consistently is challenging. To overcome this problem, we have designed two microscopic machines that can be assembled out of naturally-occurring biological molecules and exploit information in the environment to perform chemical work.
Figure: Using chemical correlations as a battery.

The basic idea behind the machines is simple, and makes use of pre-existing biology. We use an enzyme that can take a small phosphate group from one molecule and attach it to another – a process known as phosphorylation. Phosphorylation is the principal signalling mechanism within a cell: enzymes called kinases attach phosphate groups to other proteins in order to activate them. In addition to signalling, phosphates are one of the cell’s main stores of energy; chains of phosphate bonds in ATP (the cell’s fuel molecule) act as batteries. By ‘recharging’ ADP back into ATP through phosphorylation, we store energy in a useful format; this is effectively what mitochondria do via a long series of biochemical reactions. In reality, energy is stored both in the newly formed phosphate bond and in the fact that the concentration of ATP has changed. We are only interested in the effects due to concentration, so we set up the model to ignore the contribution from bond formation. This can trivially be put back in, as we explain in the Supplementary Material.

The machines we consider have three main components: the enzyme, the ‘fuel’ molecule that acts as a source of phosphates to charge ATP, and an activator for the enzyme, all of which are sitting in a solution of ATP and its dephosphorylated form ADP. Fuel molecules can either be charged (i.e. have a phosphate attached) or uncharged (without phosphate). When the enzyme is bound to an activator, it allows transfer of a phosphate from a charged fuel molecule to an ADP, resulting in an uncharged fuel molecule and ATP. The reverse reaction is also possible.

In order to systematically store energy in ATP, we want to activate the enzyme when a charged fuel molecule is nearby. This is possible if we have an excess of charged fuel molecules, or if charged fuel molecules are usually located near activators. In the second case, we are making use of information: the presence of an activator is informative about the possible presence of a charged fuel molecule. This is a very simple analogue of the way that cells and humans use information, as outlined above. Indeed, mathematically, the 'mutual information' between the fuel and activator molecules is simply how well the presence of an activator indicates the presence of a charged fuel molecule. This mutual information acts as an additional power supply that we can use to charge our ATP-batteries. We analyse the behaviour of our machines in environments containing information, and find that they can indeed exploit this information, or expend chemical energy in order to generate more information.
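As a rough quantitative illustration (a toy joint distribution of our own, not the model in the paper), the mutual information between activator and fuel sets a second-law-style ceiling on the chemical work that consuming those correlations can yield: at most kT per nat of mutual information.

```python
import numpy as np

kT = 1.0  # measure work in units of k_B * T

# Assumed joint probabilities for (activator absent/present,
# fuel uncharged/charged); the positive correlation means activators
# tend to be found near charged fuel molecules
p = np.array([[0.4, 0.1],
              [0.1, 0.4]])

pa, pf = p.sum(axis=1), p.sum(axis=0)           # marginal distributions
I = sum(p[a, f] * np.log(p[a, f] / (pa[a] * pf[f]))
        for a in range(2) for f in range(2))    # mutual information in nats

# Bound on the chemical work extractable by consuming these correlations
print(f"I = {I:.3f} nats -> at most {kT * I:.3f} kT of work per interaction")
```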

A nice feature of our designs is that they are completely free-running, or autonomous. Like living systems, they can operate without any external manipulation, happily converting between chemical energy and information on their own. There’s still a lot to do on this subject; we have only analysed the simplest kind of information structure possible, and have yet to look at more complex spatial or temporal correlations. In addition, our system doesn’t learn, but relies on ‘hard-coded’ knowledge about the relation between fuel and activators. It would be very interesting to see how machines that can learn and harness more complex correlation structures would behave. You can read the paper for free here, or in Physical Review Letters under the title 'Biochemical machines for the interconversion of mutual information and work'. Tom, Nick, Pieter Rein and Tom
