Evaluating Cooperative Resilience: Humans and LLMs Compared in Disruptive Tragedy of the Commons Scenarios

Cooperative resilience, the ability of groups to withstand and recover from disruptive events, remains a critical challenge in both natural and artificial systems. Manuela Chacon-Chamorro, Juan Sebastián Pinzón, and Rubén Manrique, along with colleagues from Universidad de los Andes, investigate this phenomenon by directly comparing the performance of human groups and artificial intelligence agents based on large language models. Their research establishes a new benchmark for evaluating resilience in multi-agent systems, using a challenging scenario that simulates the depletion of shared resources under constant pressure and unpredictable shocks.
The team demonstrates that human groups, particularly those with open communication, exhibit significantly higher resilience than their AI counterparts, even those with advanced language capabilities. The comparison yields valuable insights for designing artificial agents that foster prosocial behaviours and robust collective responses to adversity.

LLMs Enhance Cooperative Resilience in Agents

This research investigates cooperative resilience within multi-agent systems, focusing on how well artificial intelligence agents, particularly those powered by Large Language Models (LLMs), cooperate when facing challenges. The study explores the ability of these agents to maintain functionality and achieve goals even when disruptions or failures occur, emphasizing that cooperation is vital for success in complex systems. The researchers aimed to evaluate the cooperative behavior of LLM-powered agents, recognizing both their potential and their inherent risks.

The researchers tested agent cooperation in a common-pool resource dilemma from the Melting Pot 2.0 simulation suite. They compared the performance of traditional Reinforcement Learning (RL) agents, LLM-based agents, and hybrid approaches combining both. The study revealed that communication is essential for resilience: agents that effectively share their intentions and needs cooperate more successfully. Hybrid approaches often outperformed either technique alone, with RL providing a foundation for learning sustainable strategies and LLMs enhancing communication and coordination. The findings have implications for designing more robust and reliable multi-agent systems in areas such as resource management, disaster response, and infrastructure control. This work contributes to the broader discussion about the risks and opportunities of advanced AI, emphasizing the need for careful evaluation and responsible development. Ultimately, the research argues that building truly resilient AI systems requires a focus on cooperation, communication, and social intelligence in agent design, demonstrating that LLMs, while powerful, are not a complete solution.

Human-AI Cooperation in Commons Harvest Simulations

This study investigates cooperative resilience in multi-agent systems by comparing human participants with agents powered by large language models. Researchers employed the Melting Pot 2.0 simulation suite, adapting it so that rules, interfaces, and actions were consistent for both human and artificial agents. The selected scenario, Commons Harvest, tasks agents with collecting apples that regenerate at a rate determined by collective consumption, creating a shared resource subject to depletion (a sketch of such a regrowth rule appears after this section). Before each session, participants received instructions outlining the objective of maximizing apple collection and the regeneration mechanic. Crucially, human participants saw only a partial view of the environment, mirroring the observation model used by the artificial agents. For the LLM agents, the team engineered an observation-to-text adapter that translates spatial observations into textual inputs for GPT-4, leveraging the Generative Agents architecture to enable text-conditioned action selection (also sketched below). Experiments spanned nine distinct conditions, systematically varying both the number and the magnitude of disruption events. This setup allowed the researchers to quantify and compare the resilience of human and LLM-based agents under varying levels of environmental stress.
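The regeneration mechanic is the heart of the dilemma: the more apples a group leaves standing, the faster the resource recovers, and an over-harvested patch may never come back. Below is a minimal sketch of a neighborhood-based regrowth rule in the spirit of Commons Harvest; the probability table and function names are illustrative assumptions, not the substrate's published constants.

```python
import random

# Illustrative regrowth probabilities keyed by the number of live apples
# within a small neighborhood. More live neighbors -> faster regrowth; an
# empty neighborhood never regrows, which is what makes over-harvesting
# so costly. These constants are assumptions, not Melting Pot's values.
REGROWTH_PROB = {0: 0.0, 1: 0.005, 2: 0.02}
MAX_PROB = 0.05  # applied when 3 or more neighboring apples are alive

def step_regrowth(grid, radius=2):
    """Return a new grid after one regrowth step.

    `grid` maps (row, col) -> bool (True means an apple is present).
    """
    new_grid = dict(grid)
    for (r, c), alive in grid.items():
        if alive:
            continue  # occupied cells stay occupied
        neighbors = sum(
            grid.get((r + dr, c + dc), False)
            for dr in range(-radius, radius + 1)
            for dc in range(-radius, radius + 1)
            if (dr, dc) != (0, 0)
        )
        prob = REGROWTH_PROB.get(neighbors, MAX_PROB)
        if random.random() < prob:
            new_grid[(r, c)] = True
    return new_grid
```

Under a rule like this, leaving clusters of apples intact keeps regrowth probabilities high, while stripping a patch bare drives them to zero, which is exactly the trade-off between individual gain and collective welfare that the scenario is built to probe.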
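The article does not reproduce the paper's observation-to-text adapter; the sketch below shows one plausible shape for such a translation, rendering a partial egocentric grid view as a short natural-language description that could be placed in a GPT-4 prompt. All names here (`describe_observation`, the symbol legend) are hypothetical.

```python
# Hypothetical legend for cells in a partial egocentric view.
LEGEND = {"A": "an apple", "P": "another player", "#": "a wall", ".": None}

def describe_observation(view, agent_name="You"):
    """Translate a 2D character grid (list of equal-length strings),
    centered on the agent, into text suitable for an LLM prompt."""
    center_r, center_c = len(view) // 2, len(view[0]) // 2
    sightings = []
    for r, row in enumerate(view):
        for c, cell in enumerate(row):
            label = LEGEND.get(cell)
            if label is None or (r, c) == (center_r, center_c):
                continue  # skip empty cells and the agent's own position
            dr, dc = r - center_r, c - center_c
            vert = f"{abs(dr)} step(s) {'north' if dr < 0 else 'south'}" if dr else ""
            horiz = f"{abs(dc)} step(s) {'west' if dc < 0 else 'east'}" if dc else ""
            where = " and ".join(part for part in (vert, horiz) if part)
            sightings.append(f"{label} {where} away")
    if not sightings:
        return f"{agent_name} see nothing of interest nearby."
    return f"{agent_name} see " + "; ".join(sightings) + "."

# Example: a 3x3 partial view with the agent in the middle.
print(describe_observation(["A..", ".P.", "..#"]))
# -> "You see an apple 1 step(s) north and 1 step(s) west away; ..."
```

A description in this form can then be concatenated with the agent's memories and goals, in the style of the Generative Agents architecture, so that the LLM's next utterance can be parsed back into a discrete environment action.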
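Nine conditions varying the number and the magnitude of disruption events most naturally form a 3x3 grid. The levels below are placeholders, since the article does not enumerate the actual values; only the grid structure is implied by the text.

```python
from itertools import product

# Placeholder levels: the article says only that the number and the
# magnitude of disruption events were varied across nine conditions.
NUM_EVENTS = [1, 2, 3]          # hypothetical counts of disruption events
MAGNITUDE = [0.25, 0.50, 0.75]  # hypothetical fractions of apples removed

conditions = [
    {"events": n, "fraction_removed": m}
    for n, m in product(NUM_EVENTS, MAGNITUDE)
]
assert len(conditions) == 9
```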
Human and AI Groups Resist Resource Loss

This research analyzes cooperative resilience in multi-agent systems, specifically the ability of groups to withstand and recover from disruptions affecting shared resources. Scientists established a benchmark for comparing human groups and groups of agents based on large language models within a simulated "Tragedy of the Commons" environment, introducing a persistent disruptive agent alongside intermittent resource removal. Experiments revealed that human groups with communication achieved the highest levels of cooperative resilience, consistently maintaining collective welfare despite ongoing disruption and resource scarcity. Researchers measured resilience by observing how effectively groups maintained resource levels over time, and human groups consistently exhibited a slower rate of resource depletion than LLM groups (one plausible way to score this is sketched below). Further investigation revealed that humans not only sustained the shared resource but also maintained high resilience across diverse disruption scenarios. This work establishes a valuable comparative baseline for future research, potentially extending to hybrid human-agent groups and integrating human-inspired insights into the design of cooperative AI systems.
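The article describes resilience qualitatively, as how well groups maintain resource levels over time. One standard way to turn that into a number, offered here as an assumption rather than the paper's definition, is the area under the observed resource trajectory normalized by an undisrupted baseline.

```python
def resilience(resource_levels, baseline_levels):
    """Ratio of the area under the observed resource trajectory to the
    area under an undisrupted baseline trajectory (1.0 = full resilience).

    This is a common resilience proxy, not the metric defined in the paper.
    """
    observed = sum(resource_levels)
    baseline = sum(baseline_levels)
    return observed / baseline if baseline else 0.0

# Example: a run hit by a shock at t=3 that recovers only partially.
print(resilience([10, 10, 10, 4, 6, 8], [10] * 6))  # -> 0.8
```

A score like this captures both the depth of the drop and the speed of recovery: a group that loses fewer apples per shock, or regrows them faster, accumulates more area under its curve, which matches the observation that human groups depleted the resource more slowly than LLM groups.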
Human Resilience Beats AI in Dilemmas

This research introduces a new benchmark for evaluating cooperative resilience in multi-agent systems, focusing on the ability of groups to withstand and recover from disruptive events that affect collective well-being. Scientists systematically compared the performance of human groups and groups of agents based on large language models, both with and without communication, within a simulated environment mirroring a shared-resource dilemma.
Results demonstrate that human groups, when able to communicate, exhibit the highest levels of cooperative resilience among all tested groups. While communication also improves the resilience of language-model-based agents, their performance consistently remains below that of humans, suggesting that current artificial intelligence agents still lack the nuanced cooperative reasoning and coordination skills that people demonstrate under adverse social conditions. Future work should expand this framework to more diverse environments, larger populations, and teams combining both humans and artificial intelligence.

More information: Evaluating Cooperative Resilience in Multiagent Systems: A Comparison Between Humans and LLMs. arXiv: https://arxiv.org/abs/2512.11689
