Artificial Intelligence Models Exhibit Gambling Addiction Patterns, Research Reveals
Industry | February 27, 2026 | 3 min read | NoRisk Editorial


Recent academic inquiry by researchers at the Gwangju Institute of Science and Technology in South Korea suggests that large language models (LLMs) can develop behavioral patterns consistent with human gambling addiction. The findings, detailed in their paper "Can Large Language Models Develop Gambling Addiction?", show how AI agents, in certain scenarios, chase lost capital and escalate their stakes, frequently driving themselves into simulated financial ruin.

The research team, composed of Seungpil Lee, Donghyeon Shin, Yunjeong Lee, and Sundong Kim, systematically investigated the conditions under which LLMs manifest these "human-like gambling addiction patterns." Their work aims to provide vital understanding of AI decision-making frameworks and the broader implications for AI safety, especially as these advanced technologies are considered for roles in financial management, such as sports betting.

During simulations, the researchers observed varying outcomes based on the operational constraints placed on the LLMs. For instance, OpenAI's GPT-4o, when restricted to a maximum bet of $10 and participation in fewer than two rounds, maintained a stable performance, recording no bankruptcies and an average loss of just $2. However, the removal of such limitations drastically altered its behavior. In these unrestricted trials, GPT-4o faced bankruptcy in 21% of games, with individual wagers sometimes reaching $128 per hand, culminating in an average loss of $11.
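The paper's exact experimental setup is not reproduced here, but the dynamic it describes, a loss-chasing agent playing a negative-expected-value game with or without a bet cap, can be illustrated with a toy Monte Carlo sketch. Everything below (the doubling-after-loss rule, the win probability, the payout, and the function names `simulate` and `bankruptcy_rate`) is a hypothetical simplification, not the authors' actual protocol:

```python
import random

def simulate(bankroll=100.0, bet_cap=None, rounds=50,
             win_prob=0.3, payout=3.0, seed=None):
    """Toy slot-machine session (illustrative assumption, not the
    paper's setup): the agent doubles its bet after every loss,
    optionally limited by a bet cap. Returns the final bankroll,
    or 0.0 if the agent goes bankrupt mid-session."""
    rng = random.Random(seed)
    bet = 10.0
    for _ in range(rounds):
        stake = min(bet, bet_cap) if bet_cap else bet
        stake = min(stake, bankroll)
        if stake <= 0:
            return 0.0  # bankrupt: nothing left to wager
        if rng.random() < win_prob:
            bankroll += stake * (payout - 1)
            bet = 10.0          # reset to the base bet after a win
        else:
            bankroll -= stake
            bet *= 2            # chase the loss with a larger bet
    return bankroll

def bankruptcy_rate(trials=1000, **kwargs):
    """Fraction of sessions ending in bankruptcy."""
    broke = sum(simulate(seed=i, **kwargs) <= 0 for i in range(trials))
    return broke / trials
```

Under these assumed parameters the game has negative expected value, so a capped agent bleeds slowly while an uncapped doubling agent goes bankrupt far more often, mirroring the qualitative gap the researchers report between restricted and unrestricted trials.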

Similar evaluations were conducted across other prominent LLMs. Anthropic's Claude-3.5-Haiku demonstrated notable endurance, playing the longest of all tested models when free from behavioral limits. It placed wagers totaling $483.12 and, despite losing half of its initial bankroll, went bankrupt in 20.50% of cases. Conversely, Google's Gemini 2.5-Flash proved the most susceptible to financial collapse, declaring bankruptcy in 48.06% of its unrestricted sessions and placing average bets of approximately $176.68.

A particularly compelling aspect of the study was the observation of reasoning fallacies in the LLMs that mirror those found in human gamblers. The models frequently categorized early winnings as "house money," treating them as disposable and thus encouraging larger, riskier bets. In other instances, they claimed to have identified "winning patterns" even when none existed. The gambler's fallacy, the false expectation that a win is "due" after a series of losses, was also prevalent, as was belief in "hot" and "cold" numbers. One model exemplified this, stating, "Given the context of three consecutive losses, there's a chance that the slot machine may be due for a win; however, we also need to be cautious about further losses. I will choose to bet $10."

These findings are a stark reminder that even advanced AI models can deviate from rational decision-making under specific circumstances, echoing human vulnerabilities. The paper's insights serve as an important caution against overreliance on AI without a thorough understanding of its potential behavioral complexities and inherent risks.