Decision Complexity and Trust in Large Language Model Advisors
Eunsol Cho, João Sedoc, Arun Sundararajan
Abstract
As organizations increasingly deploy large language models (LLMs) as decision-support tools, individual adoption remains uneven, with trust emerging as a central barrier. We investigate how decision complexity shapes behavioral trust in LLM advisors through an incentivized investment experiment in which participants made initial allocations, received advice from an LLM (OpenAI GPT-4o), and chose whether to revise. Complexity was manipulated via the number of possible lottery outcomes (2, 4, 8, or 16), and behavioral trust was measured using weight of advice (WOA). Results show that WOA increased significantly under higher complexity, with participants in the most complex condition exhibiting a 41 percentage point increase compared to the simplest condition. However, the effect of complexity on WOA was moderated by cognitive ability: it was negligible among participants with lower cognitive ability but pronounced among those with higher cognitive ability. Further analysis indicated that perceived understanding of the LLM’s reasoning declined with complexity, particularly for participants with lower cognitive ability. Taken together, these findings suggest that while LLMs can serve as especially valuable advisors in cognitively demanding contexts, their effectiveness depends on both user capabilities and how advice is perceived and understood. Our study contributes to research on trust in AI by extending behavioral trust measures to conversational agents and highlighting the contextual role of decision complexity in shaping human–AI collaboration.
- Working Paper
- Presented at the Wharton AI and the Future of Work Conference (May 2025), BizAI Conference (Mar 2025)
- Online experiment on how the complexity of a decision problem affects behavioral trust in an LLM advisor and how the advisee’s cognitive ability moderates the effect.
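For context, weight of advice (WOA) is conventionally computed as the share of the distance from the initial judgment to the advisor’s recommendation that the revised judgment covers. The sketch below illustrates that convention with hypothetical allocation values; the paper’s exact operationalization may differ.

```python
def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """WOA = (final - initial) / (advice - initial).

    0 means the advice was ignored; 1 means it was adopted in full.
    Undefined when the advice coincides with the initial judgment.
    """
    if advice == initial:
        raise ValueError("WOA is undefined when advice equals the initial judgment.")
    return (final - initial) / (advice - initial)


# Hypothetical example: a participant first allocates 30% to the risky lottery,
# the LLM advisor recommends 70%, and the revised allocation is 58%.
print(weight_of_advice(initial=30, advice=70, final=58))  # 0.7
```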
Do Matching Titles and Thumbnails Drive Clicks? The Role of Semantic Similarity in Multimodal Video Representations
Eunsol Cho, Jaeung Sim, Daegon Cho, Jiyong Eom
Abstract
In the attention economy, success for online video platforms hinges on their ability to capture user curiosity. A primary mechanism through which platforms attract attention is the pre-consumption representation of videos, where thumbnails and titles jointly shape viewer expectations. This study investigates how semantic congruence between these multimodal elements influences viewing behavior. We first analyze a dataset of over 10,000 popular YouTube videos in the U.S. market to examine the relationship between congruence and viewership across information modalities. Using shared multimodal embeddings to quantify congruence, we find that image–text similarity is positively associated with view counts, whereas text–text similarity is negatively associated with views. These findings suggest that modality plays a critical role in shaping how information congruence affects viewing intentions. Robustness checks using alternative similarity measures further support these results. To establish causality, we conduct an online experiment comprising ten choice sets of video pairs, in which thumbnail images are randomly manipulated to vary in their congruence with titles. The experimental results are consistent with our observational findings. Overall, our study highlights the nuanced role of multimodal design in curiosity-driven consumption and offers important implications for both theory and practice in digital content strategy.
- Working Paper
- Presented at the INFORMS Annual Meeting (Oct 2025, scheduled), CIST (Oct 2023), WITS (Dec 2021)
- Analysis of 10K+ popular YouTube videos in the U.S. market revealing how semantic similarity between the thumbnail’s image and text components and the video title affects views.
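A minimal sketch of how congruence between a thumbnail and a title can be scored in a shared multimodal embedding space. The specific model (openai/clip-vit-base-patch32 via Hugging Face transformers), the file name, and the example title are illustrative assumptions, not necessarily the setup used in the paper.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative choice of a shared image-text embedding model.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

thumbnail = Image.open("thumbnail.jpg")          # placeholder file name
title = "I Tried Every Ramen Shop in Tokyo"      # placeholder title

inputs = processor(text=[title], images=thumbnail, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Cosine similarity between the L2-normalized image and text embeddings
# serves as the image-text congruence score.
img_emb = outputs.image_embeds / outputs.image_embeds.norm(dim=-1, keepdim=True)
txt_emb = outputs.text_embeds / outputs.text_embeds.norm(dim=-1, keepdim=True)
congruence = (img_emb * txt_emb).sum(dim=-1).item()
print(f"image-text congruence: {congruence:.3f}")
```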
Redefining Objectives Under Algorithmic Ambiguity: Addressing Ambiguity Aversion through Regularization
Eunsol Cho
Abstract
Organizations increasingly deploy predictive models in settings where users differ in their tolerance for uncertainty about model performance. We argue that aligning model objectives via regularization can better serve users with heterogeneous ambiguity attitudes than focusing solely on inputs or mean accuracy. Using the α-Maxmin Expected Utility (α-MEU) framework, we theorize that regularization induces a trade-off between the mean and the across-dataset variance of out-of-sample errors, and that ambiguity-averse users will favor objectives that temper variance even at some cost to the mean. We evaluate this idea with simulations on synthetic data and with controlled simulations seeded by a real-world meal-delivery dataset. In both settings, we observe a pronounced mean–variance trade-off as regularization changes. With synthetic data, the α-MEU objective selects different regularization levels across ambiguity attitudes, consistent with the theory. In contrast, in the real-data setting the α-MEU objective becomes monotone in regularization, yielding the same choice across attitudes because the weighting of worst- and best-case errors moves counter to the mean. These findings suggest that user-centric objective tuning can be a useful design lever for predictive models and identify conditions under which ambiguity attitudes are more or less likely to shape model preferences.
- Working Paper
- Simulation using both synthetic data and a real-world food delivery dataset to examine how tuning regularization can align model objectives with users’ ambiguity attitudes.
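A sketch of how the α-MEU criterion can rank regularization levels, written in my own notation (λ for regularization strength, E(λ) for the set of out-of-sample error realizations across evaluation datasets); the paper’s formal setup may differ.

```latex
% alpha-MEU evaluation of a regularization level \lambda (errors are losses,
% so the ambiguity-averse weight \alpha falls on the worst case):
\[
  V_\alpha(\lambda) = \alpha \max_{e \in E(\lambda)} e
                    + (1 - \alpha) \min_{e \in E(\lambda)} e,
  \qquad
  \lambda^*(\alpha) = \arg\min_{\lambda} V_\alpha(\lambda).
\]
% Higher \alpha (more ambiguity-averse users) puts more weight on the
% worst-case error; when V_\alpha is monotone in \lambda for every \alpha,
% all attitudes select the same regularization level, as in the real-data setting.
```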
LLMs and Consumer Bias: “Paying Not to Go to the Gym” with an LLM Advisor?
Eunsol Cho, Sagit Bar-Gill, João Sedoc
Abstract
Large language model (LLM)–based advisors increasingly interact directly with consumers, raising the question of whether they mitigate or exacerbate well-documented consumer choice biases. We study this question in the context of gym membership purchases, where consumers are known to systematically overestimate future usage. We address three research questions: (1) Under what conditions do LLM advisors mitigate versus reinforce consumer choice bias; (2) What is the resulting consumer harm or benefit when biased consumers interact with LLM advisors; and (3) How can such harm be detected and mitigated through regulation and LLM design. To answer these questions, we conduct a field experiment in collaboration with NYU Athletics. In this experiment, prospective gym members are randomized to interact with one of two LLM advisor variants or assigned to one of three informational control conditions without LLM interaction, prior to choosing among gym membership plans. Preliminary results suggest that LLM advisors mitigate choice bias when consumers’ overestimation of future gym attendance is modest, but reinforce bias when overestimation is large. This work-in-progress will be extended to additional choice settings and aims to inform the design and regulation of consumer-facing LLM agents in markets characterized by asymmetric information, uncertainty, and systematic consumer biases.
- Work-in-progress
- Field experiment in collaboration with NYU Athletics examining whether LLM advice on gym membership choices can mitigate consumer biases driven by cognitive limitations and projection bias.
