Introduction to Temporal Dynamics and Change

Introduction

This post provides a brief introduction to the core concepts of temporal dynamics and change. The focus is on complex systems, in which researchers investigate the components of a system and where complexity can emerge from a simple set of rules (Butner, 2018). We will be using simulated data from a model that explores a predator-prey ecosystem and its stability (Wilensky, 1997). Here you can find the GitHub repository with the dataset and code.

Key concepts such as emergence, temporal signatures, and orders of change are explained and visualised with examples throughout this post. Furthermore, we conduct tests for stationarity and stability on our predator-prey dataset to illustrate those concepts as well.

The goal of this post is to explain the concepts needed to understand temporal dynamics and change, and to link the theory to practice through visualisation. Each concept receives a brief introduction, after which code and examples based on the predator-prey dataset help the reader understand it. Let us start with temporal patterns and types of change.

Temporal patterns and types of change

In this section we will elaborate on what a temporal pattern is and what different types of change exist. This is necessary for understanding how dynamical systems behave and how information can be extracted from them.

Temporal pattern

Temporal patterns can be found in any time-series. Fundamentally, a temporal pattern is the behaviour of a system over time. In the phenomena of interest to us, complex systems, the temporal pattern is much more than simply the combined output of the individual system components. The non-linear interactions between these lower-level components give rise to the behaviour of the system as a whole, a phenomenon called emergence. The temporal pattern is thus a by-product of emergence: it is how you can see the system develop through these lower-level and higher-level interactions. The temporal pattern of our simulated dataset can be observed in the following graph.

Figure 1: Predator-prey full dataset

Figure 1 shows the time-series of the predator-prey dataset. What is important to note here is that the temporal pattern is the whole behaviour of the system, within which you can observe different types of change; the combination of these types of change makes up the temporal pattern as a whole. We will therefore now briefly elaborate on the types of change you can observe, based on the explanations given by Butner (2018).

First-order change

First-order change is the simplest notion of change. The following are the three types of first-order change:

  • a constant value (no change)
  • a constant rate of change (linear change)
  • a non-constant rate of change
Figure 2: First-order change plot

In Figure 2 the different types of first-order change are visualised. First-order change can easily be exemplified by the time spent learning to program. If one doesn’t practice programming, there will be no change in skill. If one practices daily, one might expect a linear improvement, proportional to the amount of time spent practicing. Usually, however, the return on daily practice is non-linear: there isn’t a one-to-one return on the time invested.

First-order change is present in the predator-prey dataset only at certain moments: before the simulation starts, at the beginning, or at the end. Before starting the simulation there is no change, which is the constant value of no change. When the simulation starts, the amount of prey increases linearly and the amount of grass decreases linearly, which is constant change. If either the predators or the prey were to go extinct, the other species would show a non-linear (exponential) increase, which is the third and last type of first-order change.
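These three types of first-order change can be sketched in a few lines of Python. This is a minimal, self-contained illustration and is not tied to the simulation data:

```python
import numpy as np

t = np.arange(50)
no_change = np.full(50, 10.0)    # constant value: no change at all
linear_change = 2.0 * t          # constant rate of change (linear)
nonlinear_change = 1.1 ** t      # non-constant rate of change (accelerating)

# The first differences make the type of change visible:
# zero everywhere, the same value everywhere, or growing over time.
d_none = np.diff(no_change)
d_linear = np.diff(linear_change)
d_nonlinear = np.diff(nonlinear_change)
```

Plotting the three series against `t` reproduces the qualitative shapes shown in Figure 2.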

Second-order change

In Figure 3, the different types of second-order change are plotted. Second-order change has everything to do with oscillations: rhythmic up-and-down movements. The oscillations in a system can be of three different types:

  • oscillations at a constant rate
  • oscillations at a non-constant rate
  • oscillations which oscillate themselves
Figure 3: Second-order change plot
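The three oscillation types can be illustrated with simple sine waves; a minimal sketch (synthetic signals, not based on the predator-prey data):

```python
import numpy as np

t = np.linspace(0, 10, 2000)

# Oscillation at a constant rate: fixed frequency and amplitude.
constant_osc = np.sin(2 * np.pi * t)

# Oscillation at a non-constant rate: the frequency increases over time (a chirp).
chirp_osc = np.sin(2 * np.pi * t ** 2 / 10)

# An oscillation which itself oscillates: the amplitude is modulated
# by a second, slower oscillation.
modulated_osc = np.sin(2 * np.pi * t) * np.sin(2 * np.pi * t / 10)
```

Plotting each series against `t` yields the three panels of Figure 3.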

Third-order change

Third-order change is unlike the aforementioned types of change. The official term is deterministic chaos: a non-repeating pattern that is sensitive to the initial conditions from which the system starts. Chaos makes a system very difficult to predict, since the cost of forecasting increases exponentially the further into the future you go. The Lorenz system is a set of coupled differential equations that exhibits this chaotic behaviour.

Figure 4: Third-order chaotic change plot

Figure 4 shows the X value of the Lorenz system, which does not exhibit a clear overall pattern. The trajectory also depends on the starting point of the data and is therefore sensitive to initial conditions: changing these initial conditions can lead the system to exhibit completely different behaviour.

If you would like to determine whether the pattern you observe is, statistically speaking, constant over time, you need to test for stationarity. This is the topic of the next section.

Stationarity

The easiest setting for predictability and forecasting would be data with a clear and definite structure or set of rules. This set of rules governs the dynamics of the data, and in this context it is called determinism. For time-series modelling, the properties of these rules should not change over time.

In complex dynamical systems, stationarity coincides with this deterministic view: the notion that the measurements we observe correspond to properties of the object or component in question over time (Kantz & Schreiber, 2004). Conversely, non-stationarity implies that the properties of the system do not stay the same over time. For information on non-stationarity, we refer you to the section “Regime Shifts” in our post “Properties of Dynamical Systems”.

Thus, stationarity is a requirement for the use of some statistical tools in time-series analysis; “…that the parameters of the system remain constant and that the phenomenon is sufficiently sampled” (Kantz & Schreiber, 2004, p. 15).

In practice, this means that before using certain tools, we have to make sure the data we are working with are stationary. There are a number of methods to test for stationarity in time-series. The easiest is to check the mean and the variance for N segments of the data, which gives a more in-depth view of how the mean and variance behave across the series.
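This segment check is easy to do by hand; here is a minimal sketch on synthetic data (the segment count and the two example series are illustrative assumptions, not our dataset):

```python
import numpy as np

def segment_stats(series, n_segments=4):
    """Split a series into n_segments chunks and return (mean, variance) per chunk."""
    chunks = np.array_split(np.asarray(series, dtype=float), n_segments)
    return [(chunk.mean(), chunk.var()) for chunk in chunks]

rng = np.random.default_rng(42)
# A stationary white-noise series vs. the same series with a trend added.
stationary = rng.normal(loc=0.0, scale=1.0, size=1000)
trending = stationary + np.linspace(0, 10, 1000)

stats_stationary = segment_stats(stationary)  # segment means all hover near 0
stats_trending = segment_stats(trending)      # segment means climb steadily
```

When the segment means (or variances) drift systematically, as in `stats_trending`, that is a first warning sign of non-stationarity.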

Other methods include the KPSS test, short for Kwiatkowski-Phillips-Schmidt-Shin (“KPSS test for stationarity,” 2019), whose null hypothesis is that the data are stationary. The ADF (Augmented Dickey-Fuller) test (“Augmented Dickey-Fuller (ADF) test — Must read guide — ML+,” 2019), on the other hand, is a unit root test whose null hypothesis is that the data are non-stationary. A full explanation of these two tests is beyond the scope of this post, but can be found here and here.

We use these tests on our data and we get the following outputs:

Figure 5: Augmented Dickey Fuller & KPSS test

In Figure 5 we plot the results of the Augmented Dickey Fuller & KPSS test. Based on our explanations in the previous paragraph, we reject the null hypotheses that the data are non-stationary for all three Augmented Dickey Fuller tests with an alpha level of 0.05. Furthermore, we fail to reject the null hypotheses that the data are stationary for all three KPSS tests with an alpha level of 0.05.

The outcomes of these tests therefore give us some evidence that the data are stationary. In other words, similar temporal patterns recur throughout the whole dataset, which means we can observe the true temporal pattern of the system. This leads to our next question: how stable is the system that we observed?

Stability and measurement error

Stability in complex systems means that variation in the temporal signature is constantly brought back to the original constant value (Butner, 2018, p. 31). In an unstable situation, we would see the temporal pattern change continually, as in a random walk: since there is no ‘force’ bringing the system back to its original state, the system drifts away to different values. More on these so-called stable states in our post “Attractor Dynamics”.

Figure 6: Plot of random-walk of a system

Figure 6 shows what a random walk looks like. By contrast, a state in which we constantly observe the same temporal signature (the real value, despite measurement error) is called a stable state. In this state, “..the system is constantly working to overcome perturbations” (Butner, 2018, p. 31).
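A random walk like the one in Figure 6 is easy to simulate. The sketch below contrasts it with a mean-reverting (stable) process; the AR(1) coefficient of 0.5 is our own illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(7)
steps = rng.normal(size=1000)

# Random walk: each shock is kept forever, so the series drifts without bound.
random_walk = np.cumsum(steps)

# Stable (mean-reverting) process: each value is pulled back toward 0,
# so perturbations are overcome rather than accumulated.
stable = np.zeros(1000)
for t in range(1, 1000):
    stable[t] = 0.5 * stable[t - 1] + steps[t]
```

The variance of the random walk keeps growing with time, while the stable process stays within a fixed band around its level.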

A more helpful way to visualise such a stable state is through a density plot, which shows the values around which most observations reside (x-axis) against their density (y-axis). We use our predator-prey ecosystem data to show the stability of the system.
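A density plot can be computed with a kernel density estimate; the sketch below uses a synthetic bimodal series as a stand-in (the two levels and sample sizes are illustrative assumptions):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic bimodal series: a mixture of observations around two levels.
rng = np.random.default_rng(3)
series = np.concatenate([rng.normal(200, 20, 500), rng.normal(420, 20, 500)])

# Kernel density estimate evaluated on a grid over the observed range.
kde = gaussian_kde(series)
grid = np.linspace(series.min(), series.max(), 500)
density = kde(grid)

# Peaks of the density curve mark candidate stable states.
highest_peak = grid[np.argmax(density)]
```

Two clear peaks in such a plot, with a dip between them, are what a multi-stable system looks like in density form.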

Figure 7: Sheep density plot
Figure 8: Wolves density plot
Figure 9: Grass density plot

In Figures 7 to 9 the density plots are displayed for the three time-series. As you can see, the sheep and wolves series each seem to have one stable level: around 85 sheep and around 35 wolves, respectively. The peak indicates that most of the values lie around that number of predators or prey. For grass we see an interesting feature: there appear to be two peaks, one around 200 pieces of grass and the other around 420. This could imply multi-stability, i.e. the system has multiple stable states. In terms of our data, these stable states could correspond to periods with proportionally more sheep or proportionally more wolves in the system.

In assessing a complex system’s temporal signature at time point t, we want to know whether that value is the true value of change, or whether the variability is due to measurement error. Formally, measurement error (ME) can be defined as the difference between the observed value and the true, unobserved, value. ME can be systematic or random and is an indicator of bias and extra variability in the data (“Measurement error,” 2019). This is something we want to avoid when testing stability.

As Butner puts it: “…stability is only defined with the existence of perturbations” (Butner, 2018, p. 39). In practice, we therefore add noise to the (sufficiently sampled) data. If the system still manages to demonstrate stability, we can assume that some powerful ‘force’ is keeping the system in its current stable state.

In this example, we add some artificial measurement error to the predator-prey data and ask: how does the temporal signature hold up despite this noise? We test four different levels of noise, where SD = {1, 10, 30, 50}.
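Adding noise at these levels is straightforward; the sketch below uses a synthetic oscillation around a level of 85 as a hypothetical stand-in for the sheep series:

```python
import numpy as np

rng = np.random.default_rng(11)
t = np.arange(1000)
# Hypothetical stand-in for the sheep series: oscillation around a stable level of 85.
sheep = 85 + 10 * np.sin(2 * np.pi * t / 100)

# Add Gaussian measurement error at each of the four noise levels.
noise_levels = [1, 10, 30, 50]
noisy = {sd: sheep + rng.normal(0, sd, size=sheep.size) for sd in noise_levels}

# With small noise the stable level is still visible; with large noise the
# spread of the observations swamps the underlying signal.
spreads = {sd: noisy[sd].std() for sd in noise_levels}
```

Density plots of `noisy[1]` through `noisy[50]` reproduce the qualitative pattern discussed below: a sharp peak at low noise, flattening out as SD grows.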

Sheep:

Figure 10: Measurement error plot Sheep

Figure 10 shows what happens to the stability of the sheep time-series when adding random noise. At reasonably small levels of noise (SD(sheep) = {1, 10}), the temporal signature is relatively stable. At higher levels of noise (SD(sheep) = {30, 50}), however, there is no clear stable point anymore and the system begins to shift.

Wolves:

Figure 11: Measurement error plot Wolves

Figure 11 shows what happens to the stability of the wolves time-series when adding random noise. The wolves tell a very similar story to the sheep; the main difference is that the wolves’ pattern is relatively more stable at lower levels of noise (a higher density sits around 35 wolves in the system).

Grass:

Figure 12: Measurement error plot Grass

Figure 12 shows what happens to the stability of the grass time-series when adding random noise. As stated before, the temporal pattern for grass is already relatively less stable, although it does show some signs of multi-stability. Its temporal pattern decays just as quickly with noise as the two patterns above.

Sampling & smoothing

Assessing the temporal pattern in the data is not always straightforward. Sometimes a bird’s-eye view of the raw data makes it look quite noisy and messy, which is why we sample or smooth the data. Our predator-prey dataset consists of discrete data, and sampling it will smooth our time-series. Smoothing involves taking a certain window of observations and applying a mathematical method to them; in this post we take the rolling mean of an observational window.
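The rolling mean is a one-liner in pandas; the sketch below applies it to a synthetic noisy oscillation (a hypothetical stand-in for one of our series) with both a modest and an overly large window:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
t = np.arange(500)
# Noisy oscillation with a cycle length of 100 time steps.
series = pd.Series(100 + 20 * np.sin(2 * np.pi * t / 100) + rng.normal(0, 10, 500))

# Window much shorter than the cycle: noise is removed, the pattern survives.
smoothed = series.rolling(window=25).mean()

# Window spanning two full cycles: the oscillation is averaged away entirely.
oversmoothed = series.rolling(window=200).mean()
```

Comparing the range of `smoothed` with that of `oversmoothed` shows how an overly large window flattens the temporal pattern.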

Figure 13: Smoothed predator-prey time-series with rolling mean window of 25

Smoothing removes random variation (noise) and therefore gives a general idea of the relative changes in the data. Figure 13 shows how a simple plot of the smoothed data allows us to capture the important temporal patterns. It is important not to use too large an observational window; otherwise the temporal pattern of the data will shrink or disappear, preventing us from capturing important patterns. You can see in the following graph how the temporal pattern dissipates when the smoothing window is too large. The same effect occurs when not sampling enough. For demonstrative purposes we set the rolling mean at a window of 200.

Figure 14: Oversmoothed predator-prey time-series with rolling mean window of 200

Figure 14 shows a plot when the time-series are over-smoothed. It is clearly visible that the temporal pattern has dissipated. Important features of the system are therefore no longer observable and it would be difficult to properly characterise the system.

Conclusion

In this blog we have supplied the reader with some simple theoretical and coded examples for measuring temporal dynamics and change. Through different tests and assessments of stationarity, stability, and types of change, one can uncover the true temporal pattern of a system. In this analysis, the predator-prey dataset has been used to exemplify the stated tests and concepts. These tools can be used in the future to assess whether a dataset has complex characteristics (which we discuss in our posts “Properties of Dynamical Systems” and “(Multi)-Fractal Analysis”), and how other dynamical systems tools can accurately describe and hopefully forecast the behaviour of a system of interest (we go into some of these tools in our posts “Attractor Dynamics” and “Phase Space Reconstruction”).

This story was written by Malik Rigot, Ayrton Sambo & Niels Meulmeester, originally created as part of Travis J. Wiltshire’s Complex Systems Methods for Cognitive and Data Scientists course at Tilburg University.

References

- Augmented Dickey-Fuller (ADF) Test — Must Read Guide. (2019, November 2). ML+. https://www.machinelearningplus.com/time-series/augmented-dickey-fuller-test/

- Prabhakaran, S. (2019, November 2). KPSS Test for Stationarity. ML+. https://www.machinelearningplus.com/time-series/kpss-test-for-stationarity/

- Grossman, W. (2019, May 8). Measurement error. CROS — European Commission. https://ec.europa.eu/eurostat/cros/content/measurement-error_en

- Kantz, H., & Schreiber, T. (2004). Nonlinear Time Series Analysis (2nd ed.). Cambridge University Press.

- Butner, J. (2018). Quantitative Reasoning Under a Dynamical Social Science (1.02). University of Utah.
