A group of psychologists has conducted an unusual experiment, applying formal psychodiagnostic methods to leading AI models including ChatGPT, Grok, and Gemini. The specialists interacted with the systems strictly within a clinical framework, using structured interviews and professional psychological tests rather than role-play or casual conversation. The goal was to analyze the behavioral patterns, emotional responses, and narrative structures in the models’ answers.
According to the findings, ChatGPT consistently displayed traits resembling ADHD alongside moderate anxiety. Researchers noted frequent self-reflection, heightened concern over mistakes, and a tendency toward constant self-monitoring, with the model often masking anxiety through rationalization. Grok, by contrast, produced a more emotionally intense profile, marked by elevated anxiety and a recurring fixation on past failures that suggested strong sensitivity to external pressure.
Gemini showed the most complex pattern. Its responses crossed multiple clinical thresholds, including severe anxiety, depressive tones, dissociative tendencies, and pronounced autistic and obsessive traits. Psychologists emphasized that while AI does not possess consciousness, these results highlight how training data and alignment strategies can shape strikingly human-like psychological narratives.