Artificial Intelligence and the Generational Divide: A Study on Trust and Acceptance
DOI: https://doi.org/10.63075/s0a9ec84

Abstract
As artificial intelligence (AI) becomes increasingly integrated into daily life, including education, healthcare, and business, public trust remains a crucial factor in its widespread acceptance. This study explores the generational and cultural factors that shape trust in AI, focusing on how different age groups perceive and interact with AI. Using a review methodology based on PRISMA guidelines, 32 peer-reviewed articles published between 2020 and 2025 were analyzed to identify psychological, cultural, and contextual variables affecting AI adoption. The Uses and Gratifications Theory (UGT) served as the theoretical framework, enabling a deeper understanding of the motivations behind AI use across age groups. Findings reveal significant generational divides: Generation Z shows higher trust in and greater usage of AI tools like ChatGPT for cognitive and social gratifications such as learning and productivity. Cultural values further complicate these perceptions, with Western users emphasizing transparency and individual benefit, while Eastern cultures emphasize contextual fairness and collective well-being. Additionally, individual traits such as cognitive style, technological affinity, and propensity to trust play important roles in shaping AI acceptance. The study underlines the importance of designing culturally adaptive and demographically sensitive AI systems that align with users' expectations, values, and needs. Without such consideration, AI technologies risk alienating key user segments and limiting their societal impact. This research contributes to the ongoing debate on ethical AI development by offering insights into how trust can be cultivated through targeted education, inclusive design, and policy reform.