A multi-agent reinforcement learning framework for exploring dominant strategies in iterated and evolutionary games
Su Q; Wang H; Xia Y; Wang L
Nat Commun 2025[Dec]; ? (?): ? PMID: 41361191
Exploring dominant strategies in iterated games holds theoretical and practical significance across diverse domains. Previous studies, through mathematical analysis of limited cases, have unveiled classic strategies such as tit-for-tat, generous tit-for-tat, win-stay-lose-shift, and zero-determinant strategies. While these strategies offer valuable insight into human decision-making, they represent only a small subset of possible strategies, because the mathematical and computational tools available for exploring larger strategy spaces have been limited. To bridge this gap, we propose a multi-agent reinforcement learning approach that probes complex decision-making processes beyond human intuition. This approach has led to the discovery of a strategy that we call the memory-two bilateral reciprocity strategy. It consistently outperforms a wide range of strategies in pairwise interactions while achieving high payoffs. When introduced into an evolving population with diverse strategies, the memory-two bilateral reciprocity strategy becomes dominant and fosters higher levels of cooperation and social welfare in both homogeneous and heterogeneous population structures, and across various game types. This high performance is verified by both simulation and mathematical analysis. Our work highlights the potential of multi-agent reinforcement learning for uncovering dominant strategies in iterated and evolutionary games.
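To make the setting concrete, the following minimal sketch (not the paper's MARL framework or its memory-two strategy) plays two of the classic memory-one strategies named in the abstract, tit-for-tat and win-stay-lose-shift, against each other in an iterated prisoner's dilemma. The payoff values T=5, R=3, P=1, S=0 and the round count are illustrative assumptions, not taken from the paper.

```python
# Illustrative iterated prisoner's dilemma with two classic memory-one
# strategies. Payoffs (T=5, R=3, P=1, S=0) are the standard textbook
# values, assumed here for illustration only.

C, D = 0, 1  # cooperate, defect

# PAYOFF[(my_move, opp_move)] -> (my_payoff, opp_payoff)
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def tit_for_tat(my_last, opp_last):
    # Cooperate on the first round, then copy the opponent's last move.
    return C if opp_last is None else opp_last

def win_stay_lose_shift(my_last, opp_last):
    # After a "win" (the opponent cooperated, so we earned R or T),
    # repeat the previous move; otherwise switch.
    if my_last is None:
        return C
    return my_last if opp_last == C else 1 - my_last

def play(strategy_a, strategy_b, rounds=100):
    """Run a pairwise iterated game and return the cumulative payoffs."""
    a_last = b_last = None
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(a_last, b_last)
        b = strategy_b(b_last, a_last)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        a_last, b_last = a, b
    return score_a, score_b
```

Since both strategies open with cooperation and neither defects against a cooperator, `play(tit_for_tat, win_stay_lose_shift)` settles into mutual cooperation, yielding (300, 300) over 100 rounds. The memory-two strategy the paper discovers conditions instead on the last two rounds of both players, a strictly larger strategy space.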