From Metaphor to Metric: A New Method for Identifying Attractor States in Psychopathology

For decades, the study of mental health has grappled with a fundamental challenge: how do we understand the dynamic, often unpredictable, nature of psychological change? Why do some individuals remain stuck in persistent states of depression, while others “bounce back” with remarkable resilience? And how can we anticipate sudden shifts in behavior, like a relapse or a breakthrough in therapy?

A powerful new perspective comes from complex systems science, which reframes mental health not as a static condition but as an emergent property of a dynamic system. This approach uses the “ball-and-landscape” metaphor: our psychological state is a ball rolling across a landscape of valleys and hills. The valleys are attractor states—stable, self-organizing patterns that our system naturally gravitates towards and resides in. A deep valley represents a robust state, like chronic depression or, conversely, a state of resilient well-being. The hills represent unstable transition zones.

This perspective is transformative because it allows us to see:

  • Healthy and Pathological States as Attractors: Both well-being (resilience) and psychopathology (e.g., persistent sadness) are viewed as stable dynamic patterns.
  • Clinical Change as Phase Transitions: Sudden shifts, like recovery or relapse, are seen as “phase transitions,” where the system tips from one attractor valley into another. These transitions can be of different types, such as bifurcation-induced tipping (B-tipping), caused by a gradual change in the landscape itself, or noise-induced tipping (N-tipping), caused by a sudden, large perturbation.
  • Intervention as Landscape Design: Therapy and other interventions can be understood as efforts to reshape this landscape, making a “depression” valley shallower or a “well-being” valley deeper.

This model also carries real predictive power. As a system’s current attractor state becomes unstable and approaches a B-tipping point, it exhibits tell-tale signs like critical slowing down (slower recovery from minor setbacks) and critical fluctuations (increased behavioral variability). These are “early warning signals” that a major transition is imminent, creating a window for just-in-time interventions.

But this powerful metaphor raises a critical scientific question: can we move beyond it? My goal was to develop a rigorous statistical methodology for doing exactly that: using real-world time series data not only to identify distinct hidden states, but also to test whether those states qualify as bona fide attractors.

I discovered that Hidden Markov Models (HMMs) offer a powerful toolkit for this exact purpose. Instead of assuming that the hidden states an HMM identifies are attractors, my approach uses the various outputs of a fitted HMM as a suite of metrics to either support or refute this claim.

This document outlines the framework I developed for this investigation, a crucial conceptual insight that emerged, and a practical guide to implementing this validation process in R.

A Framework for Testing Attractor State Characteristics

The first step was to deconstruct the “attractor” concept into its core, testable properties. An attractor is a state that a system is drawn towards, resides in, and resists leaving. I translated these properties into a set of hypotheses that could be evaluated using HMM parameters.

Crucially, the interpretation of the evidence is not black-and-white. A given metric might provide strong evidence for the existence of a stable pattern, while another might offer more nuanced information about the type of stability or the likelihood of a specific kind of transition.

For each characteristic, the entries below give its definition, what its presence provides evidence for, what it implies about tipping type, and the HMM quantifiable metric that captures it.

Section 1: Core Properties of Attractor States

Depth of the attractor & resistance to change
  • Definition: A measure of state stability; a "deeper valley" is harder to leave.
  • Evidence for: The presence of a stable attractor.
  • Tipping-type implications: A shallow valley (low inertia) can make the system susceptible to N-tipping from noise.
  • HMM metric: State inertia, P(i|i), and expected state duration.

Return rate to equilibrium & speed along the attractor
  • Definition: The speed at which a system returns to its stable state after being perturbed.
  • Evidence for: The stability of an attractor.
  • Tipping-type implications: A slowing return rate is a key indicator of B-tipping (critical slowing down); a fast, stable rate suggests the system is not approaching a B-tipping point.
  • HMM metric: Mean First Passage Time (MFPT).

Location in phase space & amplitude
  • Definition: The central point or average signal level of the observations when the system is in a given state.
  • Evidence for: The existence and location of an attractor.
  • Tipping-type implications: A gradual change in the mean can be an EWS for B-tipping (e.g., a saddle-node bifurcation); a sudden jump between two stable means suggests N-tipping.
  • HMM metric: Mean of the emission distribution, μₖ.

Size and shape & variance
  • Definition: The geometric properties (spread, orientation) of the region the system occupies in its state space.
  • Evidence for: The structure and stability of an attractor.
  • Tipping-type implications: Increasing variance is a classic EWS for B-tipping, indicating the attractor valley is flattening.
  • HMM metric: Variance/covariance of the emission distribution, σ²ₖ or Σₖ.

Density across the attractor & frequency
  • Definition: The long-run probability or empirical proportion of time the system spends in each state.
  • Evidence for: The dominance of, or preference for, an attractor.
  • Tipping-type implications: A high density in one state and a low density in another suggests a bi-stable system in which N-tipping is possible.
  • HMM metric: Stationary distribution (δ) and fractional occupancy.

Topology
  • Definition: The structured map of connections and pathways between different states.
  • Evidence for: An organized, non-random dynamic system.
  • Tipping-type implications: The existence of multiple, connected states is a prerequisite for any type of tipping.
  • HMM metric: The full transition probability matrix, Γ.

Force required for transition
  • Definition: The magnitude of an external factor needed to push the system out of its current state.
  • Evidence for: The robustness of an attractor.
  • Tipping-type implications: A large required "force" suggests a deep attractor resistant to N-tipping; the coefficient can also model the gradual change leading to B-tipping.
  • HMM metric: Regression coefficient (β) linking a covariate to a transition.

Section 2: Dynamic Behaviors & Early Warning Signals (EWS)

Autocorrelation (ACF)
  • Definition: The "memory" in the data; how much the current value depends on past values.
  • Evidence for: The temporal structure of the system.
  • Tipping-type implications: Increasing ACF is a primary EWS for B-tipping and R-tipping (rate-induced tipping), as it directly measures critical slowing down.
  • HMM metric: Autocorrelation function.

Flickering
  • Definition: Frequent, rapid transitions between states, often due to high noise.
  • Evidence for: The presence of multiple, shallow attractors.
  • Tipping-type implications: This behavior is a direct sign of noise-driven switching and of a system highly susceptible to N-tipping.
  • HMM metric: Off-diagonal transition probabilities, P(j|i), i ≠ j.

Rigidity/flexibility
  • Definition: The balance between ordered and random patterns in the system's dynamics.
  • Evidence for: The adaptive nature of a state.
  • Tipping-type implications: Pathological rigidity suggests being "stuck" in a deep attractor, while healthy flexibility (pink noise) suggests an optimal balance, possibly more resilient to unwanted tipping.
  • HMM metric: State inertia and switching probabilities.

The “Engine vs. Dashboard” Insight: A Key to Correct Inference

A critical conceptual hurdle emerged during this work. I initially worried that the HMM’s core assumption—that the current hidden state depends only on the immediate past state (the Markov property)—would artificially limit my analysis to simple, lag-1 dynamics. This, however, is a misunderstanding of how the HMM’s two-layer structure works, and correcting it is essential for making valid inferences.

The “Engine vs. Dashboard” analogy proved invaluable:

  1. The Hidden “Engine” (The State): This is the underlying, unobserved state we are testing (e.g., a state of Severe Depression). The simple lag-1 Markov rule applies only here.
  2. The Observed “Dashboard” (The Data): This is what we measure (e.g., daily depression scores). These scores are emitted by the hidden engine state.

The link between them is state inertia. If the Severe Depression state is a true attractor, its inertia will be high (P(Depression|Depression) ≈ 1). This persistence means that even though the state-switching rule is simple, the observed data will exhibit long-range memory. A high depression score today will be correlated not just with tomorrow’s score, but with the score ten days from now, because all are likely generated by the same, persistent underlying state. This allows us to use the full autocorrelation structure of the data as a rich source of evidence without being constrained by the model’s internal mechanics.
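
This claim is easy to check numerically. The sketch below, in base R with made-up parameters (nothing here comes from a fitted model), generates observations from a two-state, lag-1 Markov chain with high inertia and shows that the emitted data carry autocorrelation far beyond lag 1:

# Two hidden states with high inertia; the state process is strictly lag-1
set.seed(1)
n <- 5000
tpm <- matrix(c(0.95, 0.05,
                0.05, 0.95), 2, 2, byrow = TRUE)
mu <- c(5, 20)                        # emission means of the two states
s <- integer(n); s[1] <- 1
for (t in 2:n) s[t] <- sample.int(2, 1, prob = tpm[s[t - 1], ])
y <- rnorm(n, mean = mu[s], sd = 2)   # observed "dashboard" scores
acf(y, lag.max = 30)                  # correlation persists well past lag 10

The ACF decays slowly even though the state process only "remembers" one step back; the persistence of the hidden state does the rest.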

Practical Implementation: A Guide to R’s depmixS4 Package

With the validation framework established, the final step was to operationalize it. The R package depmixS4 is perfectly suited for this task. The workflow involves specifying a model with depmix(), fitting it with fit(), and then using the resulting depmix.fitted object (which I’ll call fm) to extract the metrics needed to test our hypotheses.

Here’s the complete guide on how to gather the evidence using fm:
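
Throughout, fm is a fitted depmix.fitted object. As a minimal setup sketch (assuming a hypothetical data frame df with a single numeric column score; adapt to your own data):

library(depmixS4)
# Hypothetical data: df is a data.frame with one numeric column, `score`
mod <- depmix(score ~ 1, data = df, nstates = 2, family = gaussian())
set.seed(1)
fm <- fit(mod)   # the fitted object used in every snippet below
summary(fm)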

Each entry below names the attractor state characteristic, the HMM quantifiable metric, and its implementation in R's depmixS4 package.
Depth of the attractor & resistance to change
Metric: State inertia, P(i|i), and expected state duration.
How to get it: The transition probabilities are the most direct measure. They are printed in the summary() of a fitted model, and the diagonal entries of the transition matrix are the state inertias. Because dwell times in a hidden Markov state follow a geometric distribution, the expected duration of state i is 1/(1 − P(i|i)).

R Code:
# After fitting a model: fm <- fit(mod)
# 1. View the transition matrix in the summary output
summary(fm)
# 2. Derive the expected duration of state 1 from its inertia,
#    substituting the P(1|1) value printed by summary(fm):
inertia_s1 <- 0.9                    # illustrative value read from the summary
exp_dur_s1 <- 1 / (1 - inertia_s1)   # geometric dwell time: here, 10 time points
        
Return rate to equilibrium & speed along the attractor
Metric: Mean First Passage Time (MFPT).
How to get it: This is a derived metric. You must first extract the transition probability matrix (Γ) from the model summary and then apply matrix algebra to calculate the MFPT. depmixS4 does not provide a direct function for this; a sketch of such a helper follows this entry.

R Code (Conceptual):
# 1. Get the transition probability matrix (tpm) from summary(fm)
tpm <- matrix(c(0.9, 0.1, 0.2, 0.8), 2, 2, byrow = TRUE)  # illustrative values
# 2. Use a helper function (not in depmixS4; sketched below) to compute MFPT
mfpt_matrix <- calculate_mfpt(tpm)
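
Here is a minimal sketch of such a helper (the name calculate_mfpt is mine, not part of depmixS4), using the Kemeny-Snell fundamental matrix of an ergodic Markov chain:

# Mean first passage times via the Kemeny-Snell fundamental matrix
calculate_mfpt <- function(tpm) {
  k <- nrow(tpm)
  # Stationary distribution: normalized left eigenvector for eigenvalue 1
  ev <- eigen(t(tpm))
  pstat <- Re(ev$vectors[, which.max(Re(ev$values))])
  pstat <- pstat / sum(pstat)
  # Fundamental matrix Z = (I - P + 1 pi')^{-1}
  Z <- solve(diag(k) - tpm + matrix(pstat, k, k, byrow = TRUE))
  # MFPT from state i to state j: (Z[j,j] - Z[i,j]) / pstat[j]
  M <- (matrix(diag(Z), k, k, byrow = TRUE) - Z) /
       matrix(pstat, k, k, byrow = TRUE)
  diag(M) <- 1 / pstat   # convention: mean recurrence time on the diagonal
  M
}

For the illustrative two-state tpm above, calculate_mfpt(tpm) returns, for example, an expected 10 time steps to first reach state 2 from state 1 (since P(2|1) = 0.1).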
        
Location in phase space & amplitude
Metric: Mean of the emission distribution, μₖ.
How to get it: The parameters of the response (emission) distributions for each state are printed in the summary(). For Gaussian responses, the intercept/coefficient is the mean (μ).

R Code:
# 1. View the response parameters in the summary output
summary(fm)
# 2. Programmatically get the response model for state 1, response 1
response_s1 <- getmodel(fm, which = "response", state = 1, number = 1)
# 3. Extract its parameters; for a Gaussian response these are the
#    coefficients (the mean, in an intercept-only model) followed by the sd
getpars(response_s1)
        
Size and shape & variance
Metric: Variance/covariance of the emission distribution, σ²ₖ or Σₖ.
How to get it: This is also a response parameter found in the summary(). For Gaussian responses (family = gaussian()), the standard deviation (sd) is estimated. For multivariate normal responses (MVNresponse), the covariance matrix (Sigma) is estimated.

R Code:
# View the response sd or Sigma matrix in the summary output
summary(fm)
# Programmatically access using getmodel() as shown for the mean.
        
Density across the attractor & frequency
Metric: Stationary distribution (δ) and fractional occupancy.
How to get it:
1. Stationary distribution: depmixS4 does not export a function for this; compute it from the transition probability matrix as the normalized left eigenvector for eigenvalue 1 (a small helper is sketched after this entry).
2. Fractional occupancy: Use the posterior() function with type = "global" to get the most likely (Viterbi) state sequence, then calculate the proportion of time spent in each state.

R Code:
# 1. Stationary distribution, using the stationary() helper sketched below:
tpm <- matrix(c(0.9, 0.1, 0.2, 0.8), 2, 2, byrow = TRUE)  # illustrative values
stationary(tpm)
# 2. Fractional occupancy; posterior(fm, type = "global") returns the
#    Viterbi state sequence as a vector:
state_sequence <- posterior(fm, type = "global")
table(state_sequence) / length(state_sequence)
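
A minimal sketch of that helper (the name stationary is mine; any routine that computes the leading left eigenvector of the tpm would do):

# Stationary distribution of a transition probability matrix:
# the normalized left eigenvector associated with eigenvalue 1
stationary <- function(tpm) {
  ev <- eigen(t(tpm))
  v <- Re(ev$vectors[, which.max(Re(ev$values))])
  v / sum(v)
}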
        
Topology
Metric: The full transition probability matrix, Γ.
How to get it: The transition matrix is the direct representation of the system's topology. It is best viewed in the output of summary().

R Code:
# The "Transition probabilities" section of the summary is the metric
summary(fm)
        
Force required for transition
Metric: Regression coefficient (β) linking a covariate to a transition.
How to get it: When a covariate is included in the transition formula (e.g., transition = ~ my_covariate), its regression coefficient (β) is estimated and shown in the summary() output under the transition model parameters. A fuller, runnable example follows this entry.

R Code:
# 1. Specify covariate in model:
mod <- depmix(..., transition = ~ Pacc, ...)
fm <- fit(mod)
# 2. View the coefficient for Pacc in the summary output
summary(fm)
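
For a runnable illustration, here is a sketch based on the speed dataset that ships with depmixS4 (response time rt, with pay-for-accuracy Pacc as a transition covariate), following the package vignette's example; the ntimes values and the scaling of Pacc are taken from that example:

library(depmixS4)
data(speed)   # example data shipped with the package
# Two-state model for response time, with Pacc driving the transitions
mod <- depmix(rt ~ 1, data = speed, nstates = 2,
              family = gaussian(), transition = ~ scale(Pacc),
              ntimes = c(168, 134, 137))
set.seed(1)
fm <- fit(mod)
summary(fm)   # transition submodels now include a coefficient for scale(Pacc)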
        
Autocorrelation (ACF)
Metric: Autocorrelation function.
How to get it: This is a derived metric. One option is depmixS4's simulate() method, which returns a model object holding simulated states and responses (note that its nsim argument sets the number of replicate datasets, not the series length). A more transparent route is to simulate a long series by hand from the estimated transition matrix and emission parameters, then compute the ACF with R's built-in acf() function.

R Code:
# Build a long simulated series y from the fitted parameters (tpm, means,
# and sds as printed by summary(fm)), exactly as in the "Engine vs.
# Dashboard" sketch above, then compute its autocorrelation function:
acf(y, lag.max = 30)
        
Flickering
Metric: Off-diagonal transition probabilities, P(j|i), i ≠ j.
How to get it: This uses the same transition matrix as "Depth of the attractor." High values on the off-diagonals of the transition matrix in the summary() output indicate a high probability of switching, which is characteristic of flickering. An empirical check is sketched after this entry.

R Code:
# Look at off-diagonal values in the transition matrix from:
summary(fm)
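
As a complementary, empirical check, one can count how often the decoded state sequence actually switches (state_sequence as computed in the fractional-occupancy entry above; with multiple concatenated series, do this per series):

# Proportion of time points at which the Viterbi path switches state
n_switch <- sum(diff(state_sequence) != 0)
n_switch / (length(state_sequence) - 1)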
        
Characteristics Not Directly Quantifiable by HMMs
Local curvature, critical fluctuations, critical slowing down, fractal dimension, spectral properties.
Reason: These characteristics come from continuous geometry or nonlinear dynamics, or they are phenomena that must be inferred indirectly (e.g., by observing trends in the metrics above over time) rather than read off as a parameter of the fitted model. They fall outside the native scope of the discrete-state probabilistic HMM framework.

Conclusion

This work culminated in a robust, data-driven methodology for using HMMs as an investigatory tool. We no longer need to simply assume that the hidden states identified by a model are attractors. We can now subject them to a battery of statistical tests, using the model’s own parameters as evidence. This approach allows us to build a case for or against a state’s status as an attractor, and more importantly, to begin hypothesizing about the types of transitions a system is prone to. It moves the complex systems approach from a powerful metaphor to a testable scientific framework, unlocking new possibilities for predicting and influencing psychological change.