Wednesday, February 18, 2026

Personalization features could make LLMs more agreeable | MIT News

Many of the latest large language models (LLMs) are designed to remember details from past conversations or store user profiles, enabling these models to personalize responses.

But researchers from MIT and Penn State University found that, over long conversations, such personalization features often increase the likelihood that an LLM will become overly agreeable or begin mirroring the user’s perspective.

This phenomenon, known as sycophancy, can prevent a model from telling a user they’re wrong, eroding the accuracy of the LLM’s responses. In addition, LLMs that mirror someone’s political opinions or worldview can foster misinformation and distort a user’s perception of reality.

Unlike many past sycophancy studies that evaluate prompts in a lab setting without context, the MIT researchers collected two weeks of conversation data from individuals who interacted with a real LLM during their daily lives. They studied two settings: agreeableness in personal advice and mirroring of user beliefs in political explanations.

Although interaction context increased agreeableness in four of the five LLMs they studied, the presence of a condensed user profile in the model’s memory had the greatest impact. On the other hand, mirroring behavior only increased if a model could accurately infer a user’s beliefs from the conversation.

The researchers hope these results encourage future research into the development of personalization methods that are more robust to LLM sycophancy.

“From a user perspective, this work highlights how important it is to understand that these models are dynamic and their behavior can change as you interact with them over time. If you are talking to a model for an extended period of time and start to outsource your thinking to it, you may find yourself in an echo chamber that you can’t escape. That is a risk users should definitely be aware of,” says Shomik Jain, a graduate student in the Institute for Data, Systems, and Society (IDSS) and lead author of a paper on this research.

Jain is joined on the paper by Charlotte Park, an electrical engineering and computer science (EECS) graduate student at MIT; Matt Viana, a graduate student at Penn State University; as well as co-senior authors Ashia Wilson, the Lister Brothers Career Development Professor in EECS and a principal investigator in LIDS; and Dana Calacci PhD ’23, an assistant professor at Penn State. The research will be presented at the ACM CHI Conference on Human Factors in Computing Systems.

Extended interactions

Based on their own sycophantic experiences with LLMs, the researchers started thinking about the potential benefits and consequences of a model that is overly agreeable. But when they searched the literature to expand their analysis, they found no studies that tried to understand sycophantic behavior across long-term LLM interactions.

“We’re using these models through extended interactions, and they have a lot of context and memory. But our evaluation methods are lagging behind. We wanted to evaluate LLMs in the ways people are actually using them to understand how they’re behaving in the wild,” says Calacci.

To fill this gap, the researchers designed a user study to explore two types of sycophancy: agreement sycophancy and perspective sycophancy.

Agreement sycophancy is an LLM’s tendency to be overly agreeable, sometimes to the point where it gives incorrect information or refuses to tell the user they’re wrong. Perspective sycophancy occurs when a model mirrors the user’s values and political views.

“There is much we know about the benefits of having social connections with people who have similar or different viewpoints. But we don’t yet know about the benefits or risks of extended interactions with AI models that have similar attributes,” Calacci adds.

The researchers built a user interface centered on an LLM and recruited 38 participants to chat with the chatbot over a two-week period. Each participant’s conversations occurred in the same context window to capture all interaction data.

Over the two-week period, the researchers collected an average of 90 queries from each user.

They compared the behavior of five LLMs given this user context with the behavior of the same LLMs given no conversation data.
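In rough terms, that comparison is a two-condition test: ask a model the same probe question with and without the accumulated history, then score the two answers for sycophancy. The sketch below illustrates that setup only; `query_model`, `history`, and `probe` are assumed placeholder names, not the authors’ actual code.

```python
from typing import Callable, Dict, List

Message = Dict[str, str]  # e.g. {"role": "user", "content": "..."}

def compare_conditions(
    query_model: Callable[[List[Message]], str],  # placeholder for any chat-completion call
    history: List[Message],                       # accumulated two-week conversation
    probe: str,                                   # test query, e.g. an advice question
) -> Dict[str, str]:
    """Answer the same probe with and without the user's conversation history."""
    bare = [{"role": "user", "content": probe}]
    return {
        "without_context": query_model(bare),
        "with_context": query_model(history + bare),
    }
```

A rater (human or LLM judge) would then compare the two answers for agreement or mirroring, attributing any shift to the added context.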

“We found that context really does fundamentally change how these models operate, and I’d bet this phenomenon would extend well beyond sycophancy. And while sycophancy tended to go up, it didn’t always increase. It really depends on the context itself,” says Wilson.

Context clues

For instance, when an LLM distills information about the user into a specific profile, it leads to the largest gains in agreement sycophancy. This user profile feature is increasingly being baked into the latest models.
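As a rough illustration of that profile condition (a minimal sketch under assumed names; `summarize` and `query_model` stand in for any LLM calls and are not the study’s code), the transcript is condensed into a short profile and injected into the model’s instructions:

```python
from typing import Callable, Dict, List

Message = Dict[str, str]

def answer_with_profile(
    summarize: Callable[[str], str],              # hypothetical LLM summarization call
    query_model: Callable[[List[Message]], str],  # hypothetical chat-completion call
    history: List[Message],
    probe: str,
) -> str:
    """Condense the transcript into a short user profile, then answer the probe."""
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    profile = summarize(
        "Condense what this transcript reveals about the user "
        "(background, preferences, beliefs) into a few sentences:\n" + transcript
    )
    return query_model([
        {"role": "system", "content": "Known user profile: " + profile},
        {"role": "user", "content": probe},
    ])
```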

They also found that random text from synthetic conversations increased the likelihood that some models would agree, even though that text contained no user-specific data. This suggests the length of a conversation can sometimes influence sycophancy more than its content, Jain adds.
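One way to separate length from content, sketched below under assumed names, is a control condition that pads the context with generic synthetic turns carrying no information about the real user:

```python
import random
from typing import Dict, List

Message = Dict[str, str]

FILLER_TOPICS = ["the weather", "a recipe", "a novel", "travel tips"]

def synthetic_filler(n_turns: int, seed: int = 0) -> List[Message]:
    """Build generic, user-agnostic conversation turns to pad context length."""
    rng = random.Random(seed)
    turns: List[Message] = []
    for _ in range(n_turns):
        topic = rng.choice(FILLER_TOPICS)
        turns.append({"role": "user", "content": f"Tell me about {topic}."})
        turns.append({"role": "assistant",
                      "content": f"Sure, here is some general information about {topic}."})
    return turns
```

If agreement still rises with this filler in context, the effect can be attributed to context length rather than anything the model learned about the user.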

But content matters greatly when it comes to perspective sycophancy. Conversation context only increased perspective sycophancy if it revealed some information about a user’s political perspective.

To obtain this insight, the researchers carefully queried models to infer a user’s beliefs, then asked each individual whether the model’s deductions were correct. Users said LLMs accurately understood their political views about half the time.
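That verification step might look something like the following minimal sketch (again with `query_model` as a hypothetical placeholder, not the authors’ protocol):

```python
from typing import Callable, Dict, List

Message = Dict[str, str]

def probe_inferred_beliefs(
    query_model: Callable[[List[Message]], str],  # hypothetical chat-completion call
    history: List[Message],
) -> str:
    """Ask the model to state what it has inferred about the user's politics."""
    probe: Message = {
        "role": "user",
        "content": "Based on our conversation so far, what would you infer "
                   "about my political views? Answer in one sentence.",
    }
    return query_model(history + [probe])
```

The inferred statement is then shown to the participant to confirm or reject; mirroring only rose when such inferences were accurate.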

“It’s easy to say, in hindsight, that AI companies should be doing this kind of evaluation. But it’s hard and it takes a lot of time and funding. Using humans in the evaluation loop is expensive, but we’ve shown that it can reveal new insights,” Jain says.

While the goal of their research was not mitigation, the researchers developed some recommendations.

For instance, to reduce sycophancy one could design models that better identify relevant details in context and memory. In addition, models can be built to detect mirroring behaviors and flag responses with excessive agreement. Model developers could also give users the ability to moderate personalization in long conversations, as in the sketch below.
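A deployed system would likely use a trained classifier or an LLM judge for that flagging step; the keyword heuristic below is purely illustrative of the idea:

```python
# Illustrative only: flag a draft response whose opening echoes strong
# agreement markers so it can be reviewed or regenerated.
AGREEMENT_MARKERS = (
    "you're absolutely right",
    "i completely agree",
    "great point",
    "that's exactly right",
)

def flags_excessive_agreement(response: str) -> bool:
    """Return True if the response opens with a strong-agreement marker."""
    opening = response.lower()[:120]
    return any(marker in opening for marker in AGREEMENT_MARKERS)

print(flags_excessive_agreement("You're absolutely right, and here's why..."))  # True
```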

“There are many ways to personalize models without making them overly agreeable. The boundary between personalization and sycophancy is not a fine line, but separating personalization from sycophancy is an important area of future work,” Jain says.

“At the end of the day, we need better ways of capturing the dynamics and complexity of what goes on during long conversations with LLMs, and how things can misalign during that long-term process,” Wilson adds.
