What human factors shape children's subjective feelings of trust and distrust towards AI? How can we create trustworthy AI systems that not only foster positive relationships but also help children understand AI limitations and manage their expectations? How do children decide when to trust, doubt, or challenge an AI system? What developmental factors shape the moments when children show confidence, curiosity, hesitation, or critique?
The second edition of the C3AI workshop explores how trust and distrust emerge in real child–AI conversations. We look closely at the human factors behind these interactions: cognitive differences across ages, emotional reactions such as confusion or confidence, social expectations, and the lived experiences that shape how children interpret AI behaviour. Even though today's conversational AI systems are not designed for children, children use them anyway: at home, at school, and in creative and social activities. This makes it essential to build evaluation approaches that are transparent, ethical, and developmentally sensitive.
In this workshop, participants will work hands-on with authentic child–AI transcripts, age-diverse personas, and real-world scenarios to analyse how trust unfolds over time. Together, we’ll co-design early versions of evaluation metrics that help us understand:
how children across different ages interpret AI actions,
how trust is built, lost, or repaired,
how children express uncertainty or doubt,
how systems can better support reflection, safety, and agency.
Our shared goal is to move toward child-centred evaluation frameworks that respect developmental diversity, uphold children’s rights, and support responsible, transparent AI design. Participants will help build reusable resources that empower researchers, designers, and educators to create AI systems that children can navigate confidently and critically.
June 22, 2026 - Half Day
Brighton, United Kingdom
Submission Deadline: April 15 AoE, 2026
Notification of Acceptance: April 25 AoE, 2026
Final Version Due: April 30, 2026
Call for Participants: submit via EasyChair (https://easychair.org/cfp/2ndC3AI) or send your submission to grazia.ragone@uniba.it
For any questions, please contact Grazia Ragone grazia.ragone@uniba.it
Submitting a position paper is not a requirement for participation. We also welcome observers who do not submit papers but are interested in contributing to and learning from the discussions and activities.
This workshop offers a hands-on space to explore how children build, lose, and renegotiate trust in conversations with AI. Together, we will work with real child–AI transcripts and age-diverse personas to trace emotional and cognitive cues such as doubt, curiosity, adaptability, and mutual understanding. Drawing on ethically approved child–AI collaboration studies conducted across multiple countries, we will examine how cultural and social contexts shape these interactions. By the end of the workshop, participants will have co-created developmentally sensitive evaluation ideas and building blocks for child-centred, ethical, transparent, and empathetic AI systems that support children in engaging critically with AI.
Grazia Ragone
Zhen Bai
Judith Good
Ayça Atabey