Properties of Dynamical Systems

Introduction

In the post “Introduction to Temporal Dynamics and Change” we discussed temporal signatures, orders of change within these signatures, and how systems behave with respect to measurement error. In the current post we discuss important properties of complex dynamical systems. We try to detect these properties in our predator-prey dataset with methods used in the article by Olthof et al. (2020), such as the Bartels-Rank test, the Autocorrelation Function (ACF), the KPSS test and a Change Point Analysis. This post includes outputs from R and relevant plots to visualise the analysis. You can find all these materials on our GitHub Page.

Before going into the analysis, note that many of the theories and methods used in this post are based on the article by Olthof et al. (2020), in which they examine complex dynamical system properties in a time series of psychological self-ratings. Our assumption is that the methods used by Olthof et al. (2020) can also be applied to a dataset of our own.

Olthof et al. (2020) mention that a complex system should demonstrate the following three characteristics:

  1. Complex systems must have memory
  2. Complex systems exhibit regime shifts between attractor states
  3. Complex systems are sensitive to initial conditions

In this post we go through the first two characteristics: how they can be detected and which tools can be used to uncover and analyse them.

For this post we ran an agent-based model similar to the model used in our other post, “Temporal Dynamics and Change”. In this case, we let the model run until a clear regime shift occurred, i.e. until the predators became extinct. From this dataset, we take a timeframe of 5000 time-steps between time 23000 and 28000, which encapsulates the period in which this regime shift occurs. You can see it visualised in Figure 1.

Figure 1: regime shift in predator-prey dataset, shortened to a timeframe of 5000

Before going through the properties mentioned above, we want to give a short note on interaction-dominant dynamics.

Interaction-Dominant Dynamics

One way of looking at how a system develops is to distinguish between two types of systems:

  1. Interaction-dominant systems (also called soft-assembled systems)
  2. Component-dominant systems

The behaviour of a component-dominant system is the product of a fixed, pre-defined architecture of system modules and component elements or agents, each with pre-arranged objectives. A factory line or a pendulum clock is a commonly used example (Richardson et al., 2014).

System behaviour for soft-assembled systems differs from that of component-dominant systems. The key property of soft-assembled systems is that they exhibit interaction-dominant dynamics: the system behaviour is the outcome of the interaction between the system’s components, agents and situational factors. The interaction between these three in turn affects the structure of the three themselves (Anderson, Richardson & Chemero, 2012; Kello, Beltz, Holden, & Van Orden, 2007; Van Orden, Kloos, & Wallot, 2011). The phenomenon where these smaller lower-level inputs lead to more-than-additive higher-level outcomes is called ‘non-linearity’.

Because of this flexibility, elements or agents at the lower level of an interaction-dominant system modify or influence the macroscopic order of the higher level, while at the same time the lower level is structured by the macroscopic order of the system (Richardson et al., 2014). In “Introduction to Temporal Dynamics and Change” we briefly defined this as emergence.

Although the scope of this post is to show the different properties of soft-assembled systems, we do find it important to distinguish between the two types of systems and their behaviour.

Memory

Memory in its broadest sense is reflected in time-series as dependency on past values (Olthof et al., 2020). In statistical terms this means that complex systems generate time-series data in a non-random fashion. The Bartels-Rank test is a way to test for this (non-)randomness; its null hypothesis is that the data are random. Visually inspecting our series, we would expect some form of non-randomness.

Figure 2: outcome of Bartels-Rank test for the three time-series in our dataset

In Figure 2 the output of the Bartels-Rank test can be found. The Bartels-Rank test also tells us something about the predictability of our system and therefore its nature: random, deterministic (the two extremes) or complex (in between) (Olthof et al., 2020, p. 7). In this case, the test statistics are all approximately -70.6 and the p-values are all much smaller than any reasonable significance level. This means that we have found evidence of interdependence between observations.
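
Our test output comes from R (where the Bartels-Rank test is available through, e.g., the randtests package). As a rough illustration of what the test computes, here is a minimal Python sketch of the rank version of von Neumann’s ratio statistic (Bartels, 1982); the series and seed below are toy examples, not our actual dataset.

```python
import numpy as np
from scipy.stats import rankdata, norm

def bartels_rank_test(x):
    """Rank version of von Neumann's ratio test for randomness.
    H0: the series is random (i.i.d.). Returns the asymptotically
    standard-normal statistic and a two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = rankdata(x)                            # ranks of the observations
    num = np.sum(np.diff(r) ** 2)              # squared successive rank differences
    den = np.sum((r - (n + 1) / 2) ** 2)       # squared deviations from the mean rank
    rvn = num / den                            # ratio; E[RVN] = 2 under H0
    z = (rvn - 2) / np.sqrt(20 / (5 * n + 7))  # asymptotic standardization
    p = 2 * norm.sf(abs(z))
    return z, p

# A smooth oscillation (like a predator-prey cycle) is far from random:
t = np.linspace(0, 10 * np.pi, 1000)
z, p = bartels_rank_test(np.sin(t))

# For comparison, white noise gives a statistic near zero:
rng = np.random.default_rng(0)
z_rand, p_rand = bartels_rank_test(rng.normal(size=500))
```

A strongly negative standardized statistic (as for the sine wave, and as in Figure 2) indicates that successive values are much closer in rank than chance would allow, i.e. non-randomness.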

Memory can also be divided into short-term and long-term memory. Short-term memory can be estimated at a time lag of one or two. Long-term memory is studied by inspecting long-range temporal correlations, which can be examined by plotting the Autocorrelation Function (ACF). The ACF shows the correlation of a time series with a lagged version of itself. For this analysis, we use the Partial Autocorrelation Function (PACF).

In the PACF, the indirect correlations are removed, so each lag reflects only the direct correlation with that specific time-point. For our investigation purposes, we take a maximum lag of 900, which represents approximately 5 oscillations. Our understanding is that if correlations persist through multiple oscillations, there must be some form of long-term memory in the system.

Figure 3: Sheep Partial-Autocorrelation Plot

In Figure 3, the PACF for the ‘sheep’ time-series is plotted. For the ‘sheep’ we find 34 values that surpass the two-tailed z-test threshold (blue lines), which gives us some evidence that there is memory in the time-series. Furthermore, the maximum lag that passes the two-tailed z-test is 561, indicating some long-term memory.

Figure 4: Wolves Partial-Autocorrelation Plot

In Figure 4, the PACF for the ‘wolves’ time-series is plotted. For the ‘wolves’ variable we find 53 values that surpass the two-tailed z-test threshold, which gives us some evidence that there is memory in the time-series. Furthermore, the maximum lag that passes the two-tailed z-test is 883, showing memory up to almost 5 oscillations away from an observation. We see this as strong evidence of long-term memory.

Figure 5: Grass Partial-Autocorrelation Plot

In Figure 5, the PACF for the ‘grass’ time-series is plotted. For the ‘grass’ variable we find 31 values that surpass the two-tailed z-test threshold, which gives us some evidence that there is memory in the time-series. Furthermore, the maximum lag that passes the two-tailed z-test is 379, the least long-term memory of the three series. Still, this is more than 2 oscillations away from a given point and therefore a piece of evidence for long-term memory.

Regime Shifts

As explained in more detail in our post “Attractor Dynamics”, of all the states complex systems can exhibit, certain states are ‘preferred’ by the system. It takes a force to drive the system out of this balance and into a new state. Even though the force may be applied gradually, the qualitative state change of the system is often sudden (Thelen, Ulrich & Wolff, 1991, p. 11). This qualitative state change goes by many different names, but here we stick to regime shifts as defined by Olthof et al. (2020).

Olthof et al. (2020, p. 3) state that “different regimes refer to different attractors … and may be characterised by different mean levels … different variance levels … or differences in any distributional characteristic”. Multiple sources mention that when a system is near a phase shift, a period of instability appears (Kelso, 1995, p. 26; Olthof et al., 2020, p. 3; Thelen, Ulrich & Wolff, 1991, p. 12). In Figure 6, you can see a visual representation of this instability.

Figure 6: visual representation of a regime shift, preceded by a period of instability. By Thelen, Ulrich & Wolff (1991), illustration of the onset of convection rolls in a boiling pot as a ball in a potential well, retrieved on 16 April 2021

These periods of instability can be seen as Early Warning Signals (EWS) (Olthof et al., 2020, p. 3), which can be important for the early recognition of adverse events. In our predator-prey dataset, such early warning signals could help detect a natural ecosystem about to be thrown off balance.

In Figure 1 you can see that around time 27250 the data undergo a radical change and the wolves become extinct. In “Introduction to Temporal Dynamics and Change”, the data we extracted showed evidence of stationarity using the KPSS test. Here, for the purpose of finding evidence against stationarity, we run the same test on our subset between time 23000 and 28000.

Figure 7: KPSS output for the three time-series between time 23000 and 28000.

Figure 7 shows the output from running a KPSS analysis on our three time-series. Since we reject the null hypothesis of stationarity with a p-value < 0.01, we have found evidence that the data are non-stationary. From this we move on to perform a Change Point Analysis (CPA) on the different time-series to investigate whether we can find a window of instability before the occurrence of the regime shift.

For the CPA, we run the e.divisive algorithm from the ecp package. Simply put, this algorithm finds a change point and divides the series in two; it then continues to find additional change points within each segment, so the procedure can be represented as a binary tree (James & Matteson, 2014). We set the parameter R to 500, making it a bit more robust than the default of 199 (R is the maximum number of permutations; we will not go into detail on the parameters). Furthermore, we set the significance level to 0.05, so that only change points passing that level are reported.

The e.divisive algorithm has been shown not to work well on data that follow a trend, and we tried different parameters and adaptations of the dataset until we found an interpretable CPA. For details on this process, you can read our short post on adapting the dataset to fit the e.divisive algorithm: “Trend Data and E-divisive Algorithm”. Here, the only thing you need to know is that we detrended our dataset.
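
The details of our detrending are in the linked post; purely as a generic illustration, removing a least-squares linear trend (one common way to detrend) can be done in Python like so:

```python
import numpy as np
from scipy.signal import detrend

# A toy series: an oscillation riding on a linear trend, standing in
# for trending population counts (not our actual data).
t = np.arange(5000)
series = 0.01 * t + 10 * np.sin(2 * np.pi * t / 1000)

detrended = detrend(series)  # subtracts the least-squares linear fit

residual_slope = np.polyfit(t, detrended, 1)[0]  # remaining slope is ~0
```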

In the outcome plots, the change points are marked by the blue lines. The goal of this CPA was to show the period of instability prior to the regime shift. As stated in the introduction, finding these EWS can be of vital importance in complex systems. The predators ultimately become extinct at timestamp 4217 in the plots.
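
The actual analysis uses e.divisive, which is based on energy statistics and runs in R; as a deliberately simplified stand-in (not the e.divisive statistic), the core idea of the first divisive step — pick the split that best separates the series into two homogeneous segments — can be sketched in Python with a mean-shift criterion:

```python
import numpy as np

def best_split(x):
    """Return the index that splits x into two segments with minimal
    total within-segment sum of squares (one mean-shift change point)."""
    best_i, best_cost = None, np.inf
    for i in range(2, len(x) - 2):
        left, right = x[:i], x[i:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i

# Toy series with one regime shift at index 300 (not our actual data):
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(50, 2, 300), rng.normal(5, 2, 200)])
cp = best_split(x)
```

e.divisive then recurses on each resulting segment and uses a permutation test (the R parameter above) to decide which splits are significant; this sketch only locates the single most prominent split.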

Figure 8: CPA on sheep time-series

Figure 8 plots the CPA for the ‘sheep’ variable. The change point estimates for the ‘sheep’ time-series are at 0, 4223 and 5000, where the middle one is the change point around the regime shift. It takes place 6 time-steps after the predators have become extinct and therefore cannot be useful as an EWS.

Figure 9: CPA on wolves time-series

Figure 9 plots the CPA for the ‘wolves’ variable. The change point estimates for the ‘wolves’ time-series are at 0, 4189 and 5000, again with the middle one around the regime shift. Since it takes place 28 time-steps before the predators become extinct, it could be used as an EWS.

Figure 10: CPA on grass time-series

Figure 10 plots the CPA for the ‘grass’ variable. The change point estimates for the ‘grass’ time-series are at 0, 4206 and 5000. Interestingly, the grass should only be affected once the sheep population grows out of proportion in the absence of wolves, yet the grass shows a change point before the sheep do. This component of the system therefore holds information about the system as a whole, and shows that by measuring different things (wolves or grass), you could come to similar conclusions regarding EWS. More on this can be found in our post about “Phase Space Reconstruction”.

As a final comment, we do not see a real ‘period of instability’ in our dataset. Rather, the data show clear, abrupt points of change in the system. This goes against the theory of a preceding period of instability described above, but is in line with the theory that regime shifts are often sudden and powerful.

Conclusion

In summary, we have shown that the methods for investigating the properties of complex dynamical systems as defined by Olthof et al. (2020) are indeed usable for examining our own wolf-sheep predation data. By clearly explaining the concepts, we were able to link them back to our own dataset.

This soft-assembled system, characterised by interaction-dominant dynamics among its lower-level components, has displayed that it:

  1. Is not completely random
  2. Shows some dependency on past values
  3. Can clearly indicate regime shifts through change points

As for the third property, sensitivity to initial conditions: elaborating on this notion of nonlinearity and divergence is beyond the scope of this post. It is, however, an important characteristic of complex dynamical systems and deserves elaboration in the future.

This story was written by Malik Rigot, Ayrton Sambo & Niels Meulmeester, originally created as part of Travis J. Wiltshire’s Complex Systems Methods for Cognitive and Data Scientists course at Tilburg University.

References

Anderson, M. L., Richardson, M. J., & Chemero, A. (2012). Eroding the Boundaries of Cognition: Implications of Embodiment. Topics in Cognitive Science, 4(4), 717–730. https://doi.org/10.1111/j.1756-8765.2012.01211.x

James, N. A., & Matteson, D. S. (2014). ecp: An R Package for Nonparametric Multiple Change Point Analysis of Multivariate Data. Journal of Statistical Software, 62(7). https://doi.org/10.18637/jss.v062.i07

Kello, C. T., Beltz, B. C., Holden, J. G., & Van Orden, G. C. (2007). The Emergent Coordination of Cognitive Function. Journal of Experimental Psychology: General, 136(4), 551–568. https://doi.org/10.1037/0096-3445.136.4.551

Kelso, J. A. (1995). How Nature Handles Complexity. In Dynamic Patterns: The Self-Organization of Brain And Behavior. The MIT Press.

Olthof, M., Hasselman, F., & Lichtwarck-Aschoff, A. (2020). Complexity in Psychological Self-ratings: Implications for Research and Practice. BMC Medicine, 18(1). https://doi.org/10.1186/s12916-020-01727-2

Richardson, M. J., Marsh, K. L., & Dale, R. (2014). Complex Dynamical Systems in Social and Personality Psychology: Theory, Modeling & Analysis. In Handbook of Research Methods in Social and Personality Psychology (2nd ed., pp. 251–280). Cambridge University Press. https://www.researchgate.net/publication/259892479_Complex_dynamical_systems_in_social_and_personality_psychology_Theory_modeling_and_analysis

Thelen, E., Ulrich, B. D., & Wolff, P. H. (1991). Hidden Skills: A Dynamic Systems Analysis of Treadmill Stepping During the First Year. Monographs of the Society for Research in Child Development, 56(1). https://doi.org/10.2307/1166099

Van Orden, G. C., Kloos, H., & Wallot, S. (2011). Living in the Pink: Intentionality, Wellbeing, and Complexity. In Philosophy of Complex Systems (10th ed., pp. 629–672). Elsevier. https://doi.org/10.1016/B978-0-444-52076-0.50022-5

Wilensky, U. (1997). NetLogo Wolf Sheep Predation model. http://ccl.northwestern.edu/netlogo/models/WolfSheepPredation. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL

Wilensky, U. (1999). NetLogo. http://ccl.northwestern.edu/netlogo/. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL
