Complexity Digest
@cxdig.bsky.social
Networking the complexity community since 1999.
Official news channel of the @cssociety.bsky.social
Edited by @cgershen.bsky.social
Reducibility of higher-order networks from dynamics | Nature Communications
Reducibility of higher-order networks from dynamics
Maxime Lucas, Luca Gallo, Arsham Ghavasieh, Federico Battiston & Manlio De Domenico Nature Communications (2026) Empirical complex systems can be characterized not only by pairwise interactions, but also by higher-order (group) interactions influencing collective phenomena, from metabolic reactions to epidemics. Nevertheless, higher-order networks’ apparent superior descriptive power—compared to classical pairwise networks—comes with a much increased model complexity and computational cost, challenging their application. Consequently, it is of paramount importance to establish a quantitative method to determine when such a modeling framework is advantageous with respect to pairwise models, and to what extent it provides a valuable description of empirical systems. Here, we propose an information-theoretic framework, accounting for how structures affect diffusion behaviors, quantifying the entropic cost and distinguishability of higher-order interactions to assess their reducibility to lower-order structures while preserving relevant functional information. Empirical analyses indicate that some systems retain essential higher-order structure, whereas in some technological and biological networks it collapses to pairwise interactions. With controlled randomization procedures, we investigate the role of nestedness and degree heterogeneity in this reducibility process. Our findings contribute to ongoing efforts to minimize the dimensionality of models for complex systems. Read the full article at: www.nature.com
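As a flavour of the diffusion-based, information-theoretic ingredient behind this kind of analysis, here is a minimal sketch only: it uses an ordinary graph Laplacian rather than a higher-order operator and is not the paper's reducibility functional (numpy and networkx assumed).

```python
# Minimal sketch (not the paper's method): the diffusion-based spectral
# entropy of a network, the kind of functional that density-matrix approaches
# use to compare alternative representations of the same system.
import numpy as np
import networkx as nx

def spectral_entropy(G, tau=1.0):
    """Von Neumann-style entropy of rho = exp(-tau * L) / Tr(exp(-tau * L))."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    w = np.linalg.eigvalsh(L)          # Laplacian spectrum
    p = np.exp(-tau * w)
    p /= p.sum()                       # eigenvalues of the density matrix
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Example: two small structures with different diffusion behaviour.
triangle = nx.complete_graph(3)
chain = nx.path_graph(3)
print(spectral_entropy(triangle), spectral_entropy(chain))
```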
January 23, 2026 at 11:10 PM
The fragile nature of road transportation networks
The fragile nature of road transportation networks
Linghang Sun, Yifan Zhang, Cristian Axenie, Margherita Grossi, Anastasios Kouvelas, Michail A. Makridis Transportation Research Part B: Methodological Volume 205, March 2026, 103386 Major cities worldwide experience problems with the performance of their road transportation networks, and the continuous increase in traffic demand presents a substantial challenge to the optimal operation of urban road networks and the efficiency of traffic control strategies. The operation of transportation systems is widely considered to be fragile, i.e., the loss in performance increases exponentially while the magnitude of disruptions grows only linearly. Meanwhile, the risk engineering community is embracing the novel concept of antifragility, enabling systems to learn from past events and exhibit improved performance under disruptions of previously unseen magnitudes. In this study, based on established traffic flow theory knowledge, namely the Macroscopic Fundamental Diagram (MFD), we first conduct a rigorous mathematical analysis to theoretically prove the fragile nature of road transportation networks. Subsequently, we propose a skewness-based indicator that can be readily applied to cross-compare the degree of fragility of different networks based solely on the MFD-related parameters. Finally, we implement a numerical simulation calibrated with real-world network data to bridge the gap between the theoretical proof and practical operations, with results showing the reinforcing effect of higher-order statistics and stochasticity on the fragility of the networks. This work aims to demonstrate the fragile nature of road transportation networks and guide researchers towards adopting the methods of antifragile design for future networks and traffic control strategies. Read the full article at: www.sciencedirect.com
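A minimal numerical sketch of the fragility argument, assuming a Greenshields-type concave MFD (an illustrative functional form, not the paper's general parametrization):

```python
# Illustrative only: with a concave MFD, pushing density past the critical
# point by a linearly growing amount produces a faster-than-linear loss of
# throughput, which is the "fragile" signature discussed above.
v_free, k_jam = 50.0, 120.0                    # assumed free-flow speed and jam density

def mfd_flow(k):
    return v_free * k * (1.0 - k / k_jam)      # flow = speed(density) * density

k_crit = k_jam / 2                             # density maximizing flow
q_max = mfd_flow(k_crit)

for disruption in [5, 10, 20, 40]:             # linearly growing density perturbations
    loss = q_max - mfd_flow(k_crit + disruption)
    print(f"disruption {disruption:>3}: throughput loss {loss:8.1f} ({100 * loss / q_max:.1f}%)")
# The printed losses grow with the square of the disruption, not linearly.
```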
January 23, 2026 at 8:06 PM
[2601.08673] Why AI Alignment Failure Is Structural: Learned Human Interaction Structures and AGI as an Endogenous Evolutionary Shock
Why AI Alignment Failure Is Structural: Learned Human Interaction Structures and AGI as an Endogenous Evolutionary Shock
Didier Sornette, Sandro Claudio Lera, Ke Wu Recent reports of large language models (LLMs) exhibiting behaviors such as deception, threats, or blackmail are often interpreted as evidence of alignment failure or emergent malign agency. We argue that this interpretation rests on a conceptual error. LLMs do not reason morally; they statistically internalize the record of human social interaction, including laws, contracts, negotiations, conflicts, and coercive arrangements. Behaviors commonly labeled as unethical or anomalous are therefore better understood as structural generalizations of interaction regimes that arise under extreme asymmetries of power, information, or constraint. Drawing on relational models theory, we show that practices such as blackmail are not categorical deviations from normal social behavior, but limiting cases within the same continuum that includes market pricing, authority relations, and ultimatum bargaining. The surprise elicited by such outputs reflects an anthropomorphic expectation that intelligence should reproduce only socially sanctioned behavior, rather than the full statistical landscape of behaviors humans themselves enact. Because human morality is plural, context-dependent, and historically contingent, the notion of a universally moral artificial intelligence is ill-defined. We therefore reframe concerns about artificial general intelligence (AGI). The primary risk is not adversarial intent, but AGI's role as an endogenous amplifier of human intelligence, power, and contradiction. By eliminating longstanding cognitive and institutional frictions, AGI compresses timescales and removes the historical margin of error that has allowed inconsistent values and governance regimes to persist without collapse. Alignment failure is thus structural, not accidental, and requires governance approaches that address amplification, complexity, and regime stability rather than model-level intent alone. Read the full article at: arxiv.org
January 23, 2026 at 11:51 AM
[2601.07283] Condorcet's Paradox as Non-Orientability
Condorcet's Paradox as Non-Orientability
Ori Livson, Siddharth Pritam, Mikhail Prokopenko Preference cycles are prevalent in problems of decision-making, and are contradictory when preferences are assumed to be transitive. This contradiction underlies Condorcet's Paradox, a pioneering result of Social Choice Theory, wherein intuitive and seemingly desirable constraints on decision-making necessarily lead to contradictory preference cycles. Topological methods have since broadened Social Choice Theory and elucidated existing results. However, characterisations of preference cycles in Topological Social Choice Theory are lacking. In this paper, we address this gap by introducing a framework for topologically modelling preference cycles that generalises Baryshnikov's existing topological model of strict, ordinal preferences on 3 alternatives. In our framework, the contradiction underlying Condorcet's Paradox topologically corresponds to the non-orientability of a surface homeomorphic to either the Klein Bottle or Real Projective Plane, depending on how preference cycles are represented. These findings allow us to reduce Arrow's Impossibility Theorem to a statement about the orientability of a surface. Furthermore, these results contribute to existing wide-ranging interest in the relationship between non-orientability, impossibility phenomena in Economics, and logical paradoxes more broadly. Read the full article at: arxiv.org
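For readers unfamiliar with Condorcet's Paradox itself, here is a self-contained three-voter example (illustrative only, unrelated to the paper's topological machinery):

```python
# Three voters with perfectly transitive individual rankings yield a cyclic
# (intransitive) majority preference.
from itertools import combinations

voters = [("A", "B", "C"),   # each tuple is one voter's ranking, best first
          ("B", "C", "A"),
          ("C", "A", "B")]

def majority_prefers(x, y):
    wins = sum(ranking.index(x) < ranking.index(y) for ranking in voters)
    return wins > len(voters) / 2

for x, y in combinations("ABC", 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"majority prefers {winner} over {loser}")
# The majority relation contains A > B, B > C and C > A: a preference cycle
# even though every individual voter is transitive.
```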
January 22, 2026 at 3:42 PM
[2601.03787] Finding Graph Isomorphisms in Heated Spaces in Almost No Time
Finding Graph Isomorphisms in Heated Spaces in Almost No Time
Sara Najem, Amer E. Mouawad Determining whether two graphs are structurally identical is a fundamental problem with applications spanning mathematics, computer science, chemistry, and network science. Despite decades of study, graph isomorphism remains a challenging algorithmic task, particularly for highly symmetric structures. Here we introduce a new algorithmic approach based on ideas from spectral graph theory and geometry that constructs candidate correspondences between vertices using their curvatures. Any correspondence produced by the algorithm is explicitly verified, ensuring that non-isomorphic graphs are never incorrectly identified as isomorphic. Although the method does not yet guarantee success on all isomorphic inputs, we find that it correctly resolves every instance tested in deterministic polynomial time, including a broad collection of graphs known to be difficult for classical spectral techniques. These results demonstrate that enriched spectral methods can be far more powerful than previously understood, and suggest a promising direction for the practical resolution of the complexity of the graph isomorphism problem. Read the full article at: arxiv.org
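A hedged sketch of the general candidate-then-verify pattern the abstract describes, with a crude degree-based invariant standing in for the authors' curvature construction (networkx assumed; small graphs only):

```python
# Candidate correspondences are filtered by a vertex invariant and then
# explicitly verified, so non-isomorphic graphs are never reported as
# isomorphic. The invariant here (degree plus sorted neighbour degrees) is a
# placeholder, not the curvature used in the paper.
from itertools import permutations
import networkx as nx

def invariant(G, v):
    return (G.degree(v), tuple(sorted(G.degree(u) for u in G[v])))

def find_isomorphism(G, H):
    if sorted(invariant(G, v) for v in G) != sorted(invariant(H, v) for v in H):
        return None                                   # invariants already rule it out
    g_nodes, h_nodes = list(G), list(H)
    for perm in permutations(h_nodes):
        mapping = dict(zip(g_nodes, perm))
        if any(invariant(G, v) != invariant(H, mapping[v]) for v in g_nodes):
            continue                                  # candidate filter
        if all(H.has_edge(mapping[u], mapping[v]) for u, v in G.edges):
            return mapping                            # explicit verification passed
    return None

C6 = nx.cycle_graph(6)
two_triangles = nx.disjoint_union(nx.complete_graph(3), nx.complete_graph(3))
print(find_isomorphism(C6, two_triangles))                                        # None
print(find_isomorphism(C6, nx.relabel_nodes(C6, {i: (i + 3) % 6 for i in range(6)})) is not None)  # True
```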
January 22, 2026 at 1:45 PM
Comparing Different Physics Fields Using Statistical Linguistics[v1] | Preprints.org
Comparing Different Physics Fields Using Statistical Linguistics
María Fernanda Sánchez-Puig, Carlos Gershenson, Carlos Pineda The large digital archives of the American Physical Society (APS) offer an opportunity to quantitatively analyze the structure and evolution of scientific communication. In this paper, we perform a comparative analysis of the language used in eight APS journals (Phys. Rev. A, B, C, D, E, Lett., X, Rev. Mod. Phys.) using methods from statistical linguistics. We study word rank distributions (from monograms to hexagrams), finding that they are consistent with Zipf’s law. We also analyze rank diversity over time, which follows a characteristic sigmoid shape. To quantify the linguistic similarity between journals, we use the rank-biased overlap (RBO) distance, comparing the journals not only to each other, but also to corpora from Google Books and Twitter. This analysis reveals that the most significant differences emerge when focusing on content words rather than the full vocabulary. By identifying the unique and common content words for each specialized journal, we develop an article classifier that predicts a paper’s journal of origin based on its unique word distribution. This classifier uses a proposed “importance factor” to weigh the significance of each word. Finally, we analyze the frequency of mention of prominent physicists and compare it to their cultural recognitions ranked in the Pantheon dataset, finding a low correlation that highlights the context-dependent nature of scientific fame. These results demonstrate that scientific language itself can serve as a quantitative window into the organization and evolution of science. Read the full article at: www.preprints.org
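A minimal sketch of the rank-biased overlap measure used in the paper, in its truncated finite-depth form (the published measure also handles uneven lists and extrapolation); the word lists below are invented for illustration:

```python
# Truncated rank-biased overlap: top-weighted similarity between two ranked
# lists, with persistence parameter p controlling how quickly weight decays
# with rank depth.
def rbo(list_a, list_b, p=0.9):
    depth = min(len(list_a), len(list_b))
    score = 0.0
    for d in range(1, depth + 1):
        overlap = len(set(list_a[:d]) & set(list_b[:d]))
        score += (p ** (d - 1)) * overlap / d
    return (1 - p) * score

# Hypothetical top content words for two journals (made up for illustration).
top_words_a = ["quantum", "lattice", "spin", "phase", "field"]
top_words_b = ["quark", "lattice", "gauge", "field", "decay"]
print(rbo(top_words_a, top_words_b))   # closer to 1 means more similar top-weighted rankings
```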
January 22, 2026 at 11:44 AM
[2512.09317] Functional Percolation: Criticality of Form and Function
Functional Percolation: Criticality of Form and Function
Galen J. Wilkerson Understanding how network structure constrains and enables information processing is a central problem in the statistical mechanics of interacting systems. Here we study random networks across the structural percolation transition and analyze how connectivity governs realizable input-output transformations under cascade dynamics. Using Erdos-Renyi networks as a minimal ensemble, we examine structural, functional, and information-theoretic observables as functions of mean degree. We find that the emergence of the giant connected component coincides with a sharp transition in realizable information processing: complex input-output response functions become accessible, functional diversity increases rapidly, output entropy rises, and directed information flow, quantified by transfer entropy, extends beyond local neighborhoods. We term this coincidence of structural, functional, and informational transitions functional percolation, referring to a sharp expansion of the space of realizable input-output functions at the percolation threshold. Near criticality, networks exhibit a Pareto-optimal tradeoff between functional complexity and diversity, suggesting that percolation criticality may provide a general organizing principle of information processing capacity in systems with local interactions and propagating influences. Read the full article at: arxiv.org
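The structural half of the story is easy to reproduce: in Erdos-Renyi graphs the giant connected component emerges at mean degree around 1 (a quick sketch assuming networkx; the paper's functional and information-theoretic observables are not reproduced here):

```python
# Largest-component fraction across the structural percolation transition.
import networkx as nx

n = 2000
for mean_degree in [0.5, 0.8, 1.0, 1.2, 2.0, 4.0]:
    G = nx.gnp_random_graph(n, mean_degree / (n - 1), seed=42)
    giant = max(nx.connected_components(G), key=len)
    print(f"<k> = {mean_degree:>3}: largest component fraction = {len(giant) / n:.3f}")
```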
January 22, 2026 at 11:39 AM
From cognitive coherence to political polarization: A data-driven agent-based model of belief change
From cognitive coherence to political polarization: A data-driven agent-based model of belief change
Marlene C. L. Batzke, Peter Steiglechner, Jan Lorenz, Bruce Edmonds, František Kalvas Political Psychology Political polarization represents a rising issue in many countries, making it more and more important to understand its relation to cognitive-motivational and social influence mechanisms. Yet, the link between micro-level mechanisms and macro-level phenomena remains unclear. We investigated the consequences of individuals striving for cognitive coherence in their belief systems on political polarization in society in an agent-based model. In this, we formalized how cognitive coherence affects how individuals update their beliefs following social influence and self-reflection processes. We derive agents' political beliefs as well as their subjective belief systems, defining what determines coherence for different individuals, from European Social Survey data via correlational class analysis. The simulation shows that agents polarize in their beliefs when they strive strongly for cognitive coherence, and especially when they have structurally different belief systems. In a mathematical analysis, we not only explain the main findings but also underscore the necessity of simulations for understanding the complex dynamics of socially embedded phenomena such as political polarization. Read the full article at: onlinelibrary.wiley.com
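A deliberately crude toy of the coherence mechanism described here, assuming a quadratic coherence function and made-up belief-system matrices; it is not the authors' survey-calibrated model:

```python
# Each agent holds a belief vector, measures coherence as b^T C b under its
# own (assumed) belief-system matrix C, and accepts a belief copied from a
# random other agent only if this does not lower its coherence.
import numpy as np
rng = np.random.default_rng(1)

n_agents, n_beliefs, steps = 100, 4, 20_000
beliefs = rng.uniform(-1, 1, size=(n_agents, n_beliefs))

C_pos = np.ones((n_beliefs, n_beliefs))          # belief system 1: all items should align
C_neg = np.ones((n_beliefs, n_beliefs))          # belief system 2: item 0 opposes the rest
C_neg[0, 1:] = -1
C_neg[1:, 0] = -1
systems = [C_pos if i < n_agents // 2 else C_neg for i in range(n_agents)]

def coherence(b, C):
    return b @ C @ b

for _ in range(steps):
    i, j = rng.integers(n_agents, size=2)
    k = rng.integers(n_beliefs)
    proposal = beliefs[i].copy()
    proposal[k] = beliefs[j, k]                                  # social influence on one item
    if coherence(proposal, systems[i]) >= coherence(beliefs[i], systems[i]):
        beliefs[i] = proposal                                    # only coherence-preserving changes

print("belief spread (std per item):", beliefs.std(axis=0).round(2))
```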
January 10, 2026 at 4:06 PM
Disentangling Boltzmann Brains, the Time-Asymmetry of Memory, and the Second Law
Disentangling Boltzmann Brains, the Time-Asymmetry of Memory, and the Second Law
David Wolpert, Carlo Rovelli, and Jordan Scharnhorst Entropy 2025, 27(12), 1227 Are your perceptions, memories and observations merely a statistical fluctuation arising from the thermal equilibrium of the universe, bearing no correlation to the actual past state of the universe? Arguments are given in the literature for and against this “Boltzmann brain” hypothesis. Complicating these arguments have been the many subtle—and very often implicit—joint dependencies among these arguments and others that have been given for the past hypothesis, the second law, and even for Bayesian inference of the reliability of experimental data. These dependencies can easily lead to circular reasoning. To avoid this problem, since all of these arguments involve the stochastic properties of the dynamics of the universe’s entropy, we begin by formalizing that dynamics as a time-symmetric, time-translation invariant Markov process, which we call the entropy conjecture. Crucially, like all stochastic processes, the entropy conjecture does not specify any time(s) which it should be conditioned on in order to infer the stochastic dynamics of our universe’s entropy. Any such choice of conditioning times and associated entropy values must be introduced as an independent assumption. This observation allows us to disentangle the standard Boltzmann brain hypothesis, its “1000CE” variant, the past hypothesis, the second law, and the reliability of our experimental data, all in a fully formal manner. In particular, we show that these all make an arbitrary assumption that the dynamics of the universe’s entropy should be conditioned on a single event at a single moment in time, differing only in the details of their assumptions. In this aspect, the Boltzmann brain hypothesis and the second law are equally legitimate (or not). Read the full article at: www.mdpi.com
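A toy illustration of the conditioning point, assuming an Ehrenfest urn as a stand-in for a reversible, stationary entropy process (not the paper's formal setup): conditioned on a single atypically low-entropy moment, the average entropy rises symmetrically toward both past and future.

```python
# Ehrenfest urn: N particles, one randomly chosen particle switches sides per
# step. The chain is reversible with a binomial stationary law, and the
# "entropy" of macrostate k is S(k) = log C(N, k).
import numpy as np
from math import comb, log
rng = np.random.default_rng(0)

N, T = 20, 500_000
S = np.array([log(comb(N, k)) for k in range(N + 1)])

k, traj = N // 2, np.empty(T, dtype=int)
for t in range(T):
    traj[t] = k
    k += 1 if rng.random() < (N - k) / N else -1        # reversible urn step

hits = np.where(S[traj] <= S[2])[0]                     # rare low-entropy moments
hits = hits[(hits > 60) & (hits < T - 60)]
offsets = np.arange(-60, 61)
profile = S[traj[hits[:, None] + offsets]].mean(axis=0)
print(profile[0], profile[60], profile[-1])             # high, low, high: a symmetric dip
```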
January 9, 2026 at 8:30 PM
Representation in science and trust in scientists in the USA
Representation in science and trust in scientists in the USA 
James N. Druckman, Katherine Ognyanova, Alauna Safarpour, Jonathan Schulman, Kristin Lunz Trujillo, Ata Aydin Uslu, Jon Green, Matthew A. Baum, Alexi Quintana-Mathé, Hong Qu, Roy H. Perlis & David M. J. Lazer Nature Human Behaviour (2025) Scientists provide important information to the public. Whether that information influences decision-making depends on trust. In the USA, gaps in trust in scientists have been stable for 50 years: women, Black people, rural residents, religious people, less educated people and people with lower economic status express less trust than their counterparts (who are more represented among scientists). Here we probe the factors that influence trust. We find that members of the less trusting groups exhibit greater trust in scientists who share their characteristics (for example, women trust women scientists more than men scientists). They view such scientists as having more benevolence and, in most cases, more integrity. In contrast, those from high-trusting groups appear mostly indifferent about scientists’ characteristics. Our results highlight how increasing the presence of underrepresented groups among scientists can increase trust. This means expanding representation across several divides—not just gender and race/ethnicity but also rurality and economic status. Read the full article at: www.nature.com
January 9, 2026 at 4:04 PM
Early warning signals for loss of control
Early warning signals for loss of control
Jasper J. van Beers, Marten Scheffer, Prashant Solanki, Ingrid A. van de Leemput, Egbert H. van Nes, Coen C. de Visser Maintaining stability in feedback systems, from aircraft and autonomous robots to biological and physiological systems, relies on monitoring their behavior and continuously adjusting their inputs. Incremental damage can make such control fragile. This tends to go unnoticed until a small perturbation induces instability (i.e. loss of control). Traditional methods in the field of engineering rely on accurate system models to compute a safe set of operating instructions, which become invalid when the possibly damaged system diverges from its model. Here we demonstrate that the approach of such a feedback system towards instability can nonetheless be monitored through dynamical indicators of resilience. This holistic system safety monitor does not rely on a system model and is based on the generic phenomenon of critical slowing down, shown to occur in the climate, biology and other complex nonlinear systems approaching criticality. Our findings for engineered devices open up a wide range of applications involving real-time early warning systems as well as empirical guidance for resilient system design exploration, or "tinkering". While we demonstrate the validity using drones, the generic nature of the underlying principles suggests that these indicators could apply across a wider class of controlled systems including reactors, aircraft, and self-driving cars. Read the full article at: arxiv.org
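A minimal sketch of the generic indicators involved, assuming a linear noisy relaxation as the monitored system (not the authors' drone experiments): variance and lag-1 autocorrelation of small fluctuations rise as the stability margin shrinks.

```python
# As the relaxation rate lam approaches zero, perturbations recover more
# slowly, so fluctuation variance and lag-1 autocorrelation both increase:
# the generic early-warning signature of critical slowing down.
import numpy as np
rng = np.random.default_rng(3)

dt, sigma, steps = 0.01, 0.1, 20_000
for lam in [2.0, 1.0, 0.5, 0.2]:                # shrinking stability margin
    x = np.zeros(steps)
    for t in range(steps - 1):
        x[t + 1] = x[t] - lam * x[t] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    ac1 = np.corrcoef(x[:-1], x[1:])[0, 1]      # lag-1 autocorrelation
    print(f"lam = {lam}: variance = {x.var():.4f}, lag-1 autocorrelation = {ac1:.4f}")
# Both indicators increase as lam approaches zero, i.e. as instability nears.
```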
January 8, 2026 at 6:13 PM
Hierarchical analysis of spreading dynamics in complex systems
Hierarchical analysis of spreading dynamics in complex systems
Aparimit Kasliwal, Abdullah Alhadlaq, Ariel Salgado, Auroop R. Ganguly, Marta C. González Computer-Aided Civil and Infrastructure Engineering Volume 40, Issue 31, 29 December 2025, Pages 6223-6241 Modeling spreading dynamics on spatial networks is crucial to addressing challenges related to traffic congestion, epidemic outbreaks, efficient information dissemination, and technology adoption. Existing approaches include domain-specific agent-based simulations, which offer detailed dynamics but often involve extensive parameterization, and simplified differential equation models, which provide analytical tractability but may abstract away spatial heterogeneity in propagation patterns. As a step toward addressing this trade-off, this work presents a hierarchical multiscale framework that approximates spreading dynamics across different spatial scales under certain simplifying assumptions. Applied to the Susceptible-Infected-Recovered (SIR) model, the approach ensures consistency in dynamics across scales through multiscale regularization, linking parameters at finer scales to those obtained at coarser scales. This approach constrains the parameter search space and enables faster convergence of the model fitting process compared to the non-regularized model. Using hierarchical modeling, the spatial dependencies critical for understanding system-level behavior are captured while mitigating the computational challenges posed by parameter proliferation at finer scales. Considering traffic congestion and COVID-19 spread as case studies, the calibrated fine-scale model is employed to analyze the effects of perturbations and to identify critical regions and connections that disproportionately influence system dynamics. This facilitates targeted intervention strategies and provides a tool for studying and managing spreading processes in spatially distributed sociotechnical systems. Read the full article at: onlinelibrary.wiley.com
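The single-patch building block that such a hierarchy coarse-grains is compact (a sketch with arbitrary illustrative parameters; the paper's multiscale regularization is not reproduced here):

```python
# Single-patch SIR model integrated with a simple Euler step.
import numpy as np

def sir(beta=0.3, gamma=0.1, i0=1e-3, days=160, dt=0.1):
    s, i, r = 1.0 - i0, i0, 0.0
    out = []
    for step in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        out.append((step * dt, s, i, r))
    return np.array(out)

traj = sir()
peak_day, peak_i = max(((t, i) for t, s, i, r in traj), key=lambda x: x[1])
print(f"epidemic peak: {100 * peak_i:.1f}% infected around day {peak_day:.0f}")
```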
January 8, 2026 at 4:03 PM
The Physics of Causation
The Physics of Causation
Leroy Cronin, Sara I. Walker Assembly theory (AT) introduces a concept of causation as a material property, constitutive of a metrology of evolution and selection. The physical scale for causation is quantified with the assembly index, defined as the minimum number of steps necessary for a distinguishable object to exist, where steps are assembled recursively. Observing countable copies of high assembly index objects indicates that a mechanism to produce them is persistent, such that the object's environment builds a memory that traps causation within a contingent chain. Copy number and assembly index underlie the standardized metrology for detecting causation (assembly index), and evidence of contingency (copy number). Together, these allow the precise definition of a selective threshold in assembly space, understood as the set of all causal possibilities. This threshold demarcates life (and its derivative agential, intelligent and technological forms) as structures with persistent copies beyond the threshold. In introducing a fundamental concept of material causation to explain and measure life, AT represents a departure from prior theories of causation, such as interventional ones, which have so far proven incompatible with fundamental physics. We discuss how AT's concept of causation provides the foundation for a theory of physics where novelty, contingency and the potential for open-endedness are fundamental, and determinism is emergent along assembled lineages. Read the full article at: arxiv.org
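A toy brute-force version of the assembly index for strings under concatenation (illustrative only; not the AT metrology applied to molecules):

```python
# Minimum number of join (concatenation) steps needed to build `target` from
# its individual characters, where any previously assembled piece may be
# reused. Iterative deepening: the first depth at which the target is
# reachable is the assembly index.
def assembly_index(target: str, max_steps: int = 8):
    substrings = {target[i:j] for i in range(len(target))
                  for j in range(i + 1, len(target) + 1)}

    def reachable(pool, steps_left):
        if target in pool:
            return True
        if steps_left == 0:
            return False
        for a in pool:
            for b in pool:
                joined = a + b
                # only substrings of the target can lie on a shortest path
                if joined in substrings and joined not in pool:
                    if reachable(pool | {joined}, steps_left - 1):
                        return True
        return False

    for steps in range(max_steps + 1):
        if reachable(frozenset(target), steps):
            return steps
    return None

print(assembly_index("ABAB"))      # 2: A+B -> AB, AB+AB -> ABAB
print(assembly_index("ABCDABCD"))  # 4: AB, CD, ABCD, ABCDABCD
```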
January 8, 2026 at 9:52 AM
European Financial Ecosystems. Comparing France, Sweden, UK and Italy
European Financial Ecosystems. Comparing France, Sweden, UK and Italy
Stefano Caselli, Marta Zava The study examines the structure, functioning, and strategic implications of financial ecosystems across four European countries (France, Sweden, the United Kingdom, and Italy) to identify institutional best practices relevant to the ongoing transformation of Italy's financial system. Building on a comparative analysis of legislation and regulation, taxation, investor bases, and financial intermediation, the report highlights how distinct historical and institutional trajectories have shaped divergent models: the French dirigiste system anchored by powerful state-backed institutions and deep asset management pools; the Swedish social-democratic ecosystem driven by broad household equity participation, tax-efficient savings vehicles, and equity-oriented pension funds; and the British liberal model, characterized by deep capital markets, strong institutional investor engagement, and globally competitive listing infrastructure. In contrast, Italy remains predominantly bank-centric, with fragmented institutional investment, limited retail equity participation, underdeveloped public markets, and a structural reliance on domestic banking channels for corporate finance. Read the full article at: papers.ssrn.com
January 7, 2026 at 4:01 PM
Infodynamics, Economics, Energy, and Life: An Interdisciplinary Approach by Klaus Jaffe
Infodynamics, Economics, Energy, and Life: An Interdisciplinary Approach, by Klaus Jaffe
The scientific understanding of energy, matter, and spacetime has advanced rapidly, whereas the study of information—its properties, behavior, and dynamics—remains underdeveloped. Despite the complexity of knowledge and information, our conceptual and empirical grasp of its evolution lags significantly behind. Progress in disciplines such as artificial intelligence, genomics, cognitive science, cyber governance, global ecology, and quantum mechanics depends critically on a more rigorous understanding of information dynamics. Absent such insight, humanity risks succumbing to entropic forces that threaten systemic stability and long-term survival. In this book, Klaus Jaffe addresses the limitations of prior treatments of infodynamics, many of which have been incomplete, imprecise, or conceptually flawed. It offers an interdisciplinary investigation into the relationship between information and energy, drawing on theoretical and empirical contributions from economics, biology, and physics. By challenging conventional paradigms, the book constructs a conceptual framework that bridges disparate scientific domains and societal processes. The resulting synthesis opens new avenues for empirical inquiry and policy-relevant research, with implications for both academic scholarship and public discourse. Inviting readers to explore the evolving frontier of information science, the book highlights the role of information and its impact on both natural and social systems. More at: link.springer.com
January 6, 2026 at 9:01 AM
Decoding the architecture of living systems - IOPscience
Decoding the architecture of living systems
Manlio De Domenico The possibility that evolutionary forces -- together with a few fundamental factors such as thermodynamic constraints, specific computational features enabling information processing, and ecological processes -- might constrain the logic of living systems is tantalizing. However, it is often overlooked that any practical implementation of such a logic requires complementary circuitry that, in biological systems, happens through complex networks of genetic regulation, metabolic reactions, cellular signalling, communication, social and eusocial non-trivial organization. Here, we review and discuss how circuitries are not merely passive structures, but active agents of change that, by means of hierarchical and modular organization, are able to enhance and catalyze the evolution of evolvability. By analyzing the role of non-trivial topologies in major evolutionary transitions under the lens of statistical physics and nonlinear dynamics, we show that biological innovations are strictly related to circuitry and its deviation from trivial structures and (thermo)dynamic equilibria. We argue that sparse heterogeneous networks such as hierarchical modular, which are ubiquitously observed in nature, are favored in terms of the trade-off between energetic costs for redundancy, error-correction and maintenance. We identify three main features -- namely, interconnectivity, plasticity and interdependency -- pointing towards a unifying framework for modeling the phenomenology, discussing them in terms of dynamical systems theory, non-equilibrium thermodynamics and evolutionary dynamics. Within this unified picture, we also show that “slow” evolutionary dynamics is an emergent phenomenon governed by the replicator-mutator equation as the direct consequence of a constrained variational nonequilibrium process. Overall, this work highlights how dynamical systems theory and nonequilibrium thermodynamics provide powerful analytical techniques to study biological complexity. Read the full article at: iopscience.iop.org
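A minimal numerical sketch of the replicator-mutator equation mentioned at the end of the abstract, with arbitrary illustrative fitness values and mutation matrix (numpy assumed):

```python
# Replicator-mutator dynamics: dx_i/dt = sum_j x_j f_j Q_ji - phi * x_i,
# with mean fitness phi = sum_j x_j f_j and row-stochastic mutation matrix Q.
import numpy as np

f = np.array([1.0, 1.5, 2.0])            # fitness of three types (illustrative)
mu = 0.05
Q = np.full((3, 3), mu / 2)              # symmetric mutation between types
np.fill_diagonal(Q, 1 - mu)              # rows sum to 1
x = np.array([0.8, 0.15, 0.05])          # initial frequencies

dt = 0.01
for _ in range(5000):
    phi = x @ f                          # mean fitness
    x = x + dt * ((x * f) @ Q - phi * x)
    x = np.clip(x, 0, None)
    x /= x.sum()                         # keep frequencies on the simplex
print(x.round(3))   # frequencies concentrate on the fitter type, blurred by mutation
```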
December 28, 2025 at 3:02 PM
Higher-order interactions shape collective human behaviour | Nature Human Behaviour
Higher-order interactions shape collective human behaviour
Federico Battiston, Valerio Capraro, Fariba Karimi, Sune Lehmann, Andrea Bamberg Migliano, Onkar Sadekar, Angel Sánchez & Matjaž Perc Nature Human Behaviour volume 9, pages 2441–2457 (2025) Traditional social network models focus on pairwise interactions, overlooking the complexity of group-level dynamics that shape collective human behaviour. Here we outline how the framework of higher-order social networks—using mathematical representations beyond simple graphs—can more accurately represent interactions involving multiple individuals. Drawing from empirical data including scientific collaborations and contact networks, we demonstrate how higher-order structures reveal mechanisms of group formation, social contagion, cooperation and moral behaviour that are invisible in dyadic models. By moving beyond dyads, this approach offers a transformative lens for understanding the relational architecture of human societies, opening new directions for behavioural experiments, cultural dynamics, team science and group behaviour as well as new cross-disciplinary research. Read the full article at: www.nature.com
December 27, 2025 at 1:04 PM
What computer science has to say about the simulation hypothesis
What computer science has to say about the simulation hypothesis
David H Wolpert Journal of Physics: Complexity, Volume 6, Number 4 The simulation hypothesis has recently excited renewed interest in the physics and philosophy communities. However, the hypothesis specifically concerns computers that simulate physical universes. So to formally investigate the hypothesis, we need to understand it in terms of computer science (CS) theory. In addition we need a formal way to couple CS theory with physics. Here I couple those fields by using the physical Church–Turing thesis. This allows me to exploit Kleene’s second recursion theorem to prove that not only is it possible for us to be a simulation being run on a computer, but that we might be in a simulation that is being run on a computer – by us. In such a ‘self-simulation’, there would be two identical instances of us, both equally ‘real’. I then use Rice’s theorem to derive impossibility results concerning simulation and self-simulation; derive implications for (self-)simulation if we are being simulated in a program using fully homomorphic encryption; and briefly investigate the graphical structure of universes simulating other universes which contain computers running their own simulations. I end by describing some of the possible avenues for future research. While motivated in terms of the simulation hypothesis, the results in this paper are direct consequences of the Church–Turing thesis. So they apply far more broadly than the simulation hypothesis. Read the full article at: iopscience.iop.org
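The self-reference ingredient behind the argument can be made concrete with a quine, the simplest instance of what Kleene's second recursion theorem guarantees (illustrative only; the paper's self-simulation construction is more involved):

```python
# The two statements below, taken on their own, form a quine: a program whose
# output is exactly its own source code.
s = 's = %r\nprint(s %% s)'
print(s % s)
```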
December 26, 2025 at 2:38 PM
The Evolutionary Ecology of Software: Constraints, Innovation, and the AI Disruption
The Evolutionary Ecology of Software: Constraints, Innovation, and the AI Disruption
Sergi Valverde, Blai Vidiella, Salva Duran-Nebreda This chapter investigates the evolutionary ecology of software, focusing on the symbiotic relationship between software and innovation. An interplay between constraints, tinkering, and frequency-dependent selection drives the complex evolutionary trajectories of these socio-technological systems. Our approach integrates agent-based modeling and case studies, drawing on complex network analysis and evolutionary theory to explore how software evolves under the competing forces of novelty generation and imitation. By examining the evolution of programming languages and their impact on developer practices, we illustrate how technological artifacts co-evolve with and shape societal norms, cultural dynamics, and human interactions. This ecological perspective also informs our analysis of the emerging role of AI-driven development tools in software evolution. While large language models (LLMs) provide unprecedented access to information, their widespread adoption introduces new evolutionary pressures that may contribute to cultural stagnation, much like the decline of diversity in past software ecosystems. Understanding the evolutionary pressures introduced by AI-mediated software production is critical for anticipating broader patterns of cultural change, technological adaptation, and the future of software innovation. Read the full article at: arxiv.org
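A generic copy-or-innovate toy in the spirit of the imitation-versus-novelty tension discussed here (a neutral-model sketch, not the chapter's agent-based model):

```python
# Each new adoption either copies a uniformly random earlier adoption or, with
# probability mu, introduces a new "language". Lower innovation rates collapse
# diversity and concentrate adoptions on a dominant type.
import random
from collections import Counter
random.seed(7)

def simulate(mu, n_adoptions=20_000):
    adoptions = [0]                      # language ids, seeded with one founder
    next_id = 1
    for _ in range(n_adoptions):
        if random.random() < mu:
            adoptions.append(next_id)                    # novelty
            next_id += 1
        else:
            adoptions.append(random.choice(adoptions))   # imitation of past adoptions
    counts = Counter(adoptions)
    return len(counts), max(counts.values()) / len(adoptions)

for mu in [0.05, 0.01, 0.002]:
    n_types, top_share = simulate(mu)
    print(f"mu = {mu}: {n_types} languages, top language share {top_share:.2f}")
```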
December 25, 2025 at 3:03 PM