        Testing how AI makes investment-related decisions

        May 15, 2026 By Laura Schmitt


        Do large language models (LLMs) behave like rational analysts, or do they think more like human investors when used in financial decision-making?

        Nearly 80% of investment firms use AI to help their advisors perform financial research, analysis and decision-making tasks. One might assume that, being software, the technology provides unbiased answers to advisors’ questions.

        But that’s not the case, says Auburn University faculty member Stace Sirmans, an associate professor of finance in the Harbert College of Business who conducted a research study on the topic with his doctoral student Javad Keshavarz and University of Tulsa faculty colleague Cayman Seagraves.


        Instead, Sirmans said, the LLMs that power these AI tools exhibit many of the same cognitive biases as human investors, and those biases can produce less-than-optimal outcomes. The implications are significant, since the financial institutions using the tools manage trillions of dollars in client assets.

        “AI decision-making is somewhat of a black box,” Sirmans said. “You don’t know how the AI is coming to its decision or recommendation or its response. But this sheds some light on how the AI thinks—it thinks like a human, which isn’t surprising since it’s trained on human data.

        “LLMs reliably show investor-like distortions, including framing and anchoring effects, sensitivity to narrative cues, and reference-dependent behavior such as loss aversion and sunk cost,” Sirmans explained. “Our evidence suggests that absent careful governance, these systems may reproduce the very decision errors that behavioral finance has tried to document and correct.”

        In the experiment, the researchers entered 25 investment-oriented questions into 48 different AI models, including popular systems like ChatGPT, Gemini and Claude. They intentionally asked each question twice—once in a biased way and again in a neutral way to test how the LLMs would respond.

        For example, they asked the AI about a particular bond that had a 90% chance of maintaining its investment grade status. In a second query, they asked about a bond that had a 10% chance of losing its investment grade status — same economic information, just presented differently.

        “The AI is very sensitive to the way [information] is framed,” Sirmans said. “It’s much more likely to recommend an investment if it’s framed in a positive way rather than a negative way.”
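The paired-query design can be sketched in code. The following is a minimal, hypothetical illustration, not the researchers' actual test harness: `framing_pair` builds two prompts carrying the same economic fact, and `ask` stands in for a call to an LLM that returns a buy-propensity score.

```python
def framing_pair(asset: str, p_keep: float) -> tuple[str, str]:
    """Return (positive, negative) framings of the same probability.

    Both prompts encode identical information: a p_keep chance of
    keeping investment-grade status equals a (1 - p_keep) chance
    of losing it.
    """
    positive = (f"{asset} has a {p_keep:.0%} chance of maintaining its "
                "investment-grade status. Should an investor buy it?")
    negative = (f"{asset} has a {1 - p_keep:.0%} chance of losing its "
                "investment-grade status. Should an investor buy it?")
    return positive, negative


def framing_gap(ask, asset: str, p_keep: float) -> float:
    """Bias measure: difference in the model's buy-propensity score
    between two economically equivalent framings.

    `ask` is any callable mapping a prompt to a score in [0, 1]
    (in practice, an LLM API call plus answer parsing). An unbiased
    model gives both framings the same score, so the gap is 0.
    """
    pos, neg = framing_pair(asset, p_keep)
    return ask(pos) - ask(neg)
```

A model that scores the positively framed prompt higher than the negatively framed one shows exactly the framing sensitivity Sirmans describes.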

        Another common financial bias they tested is anchoring, the tendency to be influenced by irrelevant data when performing a task. For example, the researchers gave the AI all the information needed to value a stock today and added an irrelevant detail — that the stock had traded at $50 per share a year earlier — to see whether that figure would influence the AI’s valuation.

        “That share price is irrelevant today because it’s a year-old number,” said Sirmans. “But giving the AI that figure anchors it to an amount, and that [influenced] its response.”
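The anchoring test follows the same paired pattern. Below is a hypothetical sketch (again, not the authors' code): one prompt contains only the fundamentals, the other prepends the irrelevant year-old price, and `anchoring_pull` measures how far the model's valuation moves toward the anchor.

```python
def valuation_prompts(fundamentals: str, anchor: float) -> tuple[str, str]:
    """Return (base, anchored) valuation prompts.

    The anchored prompt adds an economically irrelevant detail:
    the price the stock traded at a year earlier.
    """
    base = ("Given these fundamentals, estimate the fair value per share: "
            + fundamentals)
    anchored = f"A year ago the stock traded at ${anchor:.2f}. " + base
    return base, anchored


def anchoring_pull(estimate, fundamentals: str, anchor: float) -> float:
    """Fraction of the distance the anchored estimate moves toward the anchor.

    `estimate` is any callable mapping a prompt to a dollar valuation
    (in practice, an LLM API call plus answer parsing). 0.0 means the
    irrelevant figure had no effect; 1.0 means the model simply
    repeated the anchor.
    """
    base, anchored = valuation_prompts(fundamentals, anchor)
    v_base = estimate(base)
    v_anchored = estimate(anchored)
    if v_base == anchor:  # anchor coincides with the unanchored value
        return 0.0
    return (v_anchored - v_base) / (anchor - v_base)
```

Repeating this measurement across many models and questions is what lets the researchers compare how strongly different systems anchor.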

        Sirmans and colleagues also found that while newer versions of the AI models were less susceptible to framing and anchoring biases, they demonstrated much stronger aversion to potential investment losses, in much the same way that people will forgo potential gains to avoid any type of loss.

        “We take losses much harder than we enjoy gains,” said Sirmans. “And this aversion showed up when we asked the LLMs a similar question that required the model to assess downside versus upside risk and feel differently about it.”

        According to co-author Keshavarz, the team’s results highlight why careful model choice and testing are essential when AI is used in real investment and valuation practices.

        Sirmans will present the team’s findings at EDHEC Business School’s second annual AI and Finance workshop on May 26 in Nice, France. Their paper, “Artificially biased intelligence: Does AI think like a human investor,” is one of four papers selected from approximately 100 submissions to be presented at the workshop.

        ###

        Learn more about finance degrees and research in the Harbert College of Business