An intriguing new study by the Max Planck Institute for Human Development reveals that humans are subconsciously adopting the “voice” of ChatGPT in everyday speech. Analysis of more than 360,000 YouTube videos and 770,000 podcast episodes found a sharp increase in the use of so-called “GPT words” – such as delve, meticulous, realm, swift, comprehend, bolster, and boast – since ChatGPT’s launch in late 2022.
The Cultural Feedback Loop: Machines Shaping Human Speech
Traditionally, AI models learn from human-generated text. This study suggests a surprising reversal: humans are now internalizing AI patterns, creating a feedback loop in which machine-generated language shapes how people speak. As study co-author Levin Brinkmann notes, we tend to imitate communicators we perceive as knowledgeable – and AI is increasingly seen in that light.
Co-author Hiromu Yakura describes the emergence of what he calls a “linguistic watermark.” A standout example is the word “delve” – one of the most emblematic GPT words. “Delve is only the tip of the iceberg,” he says.
Polished Tone and Reduced Emotional Nuance
The influence extends beyond vocabulary. Researchers observed changes in tone and structure: speakers are using more polished, formal sentences with muted emotional expression – traits typical of ChatGPT’s style – suggesting a drift toward more uniform linguistic expression.
Shifting speech toward a more “AI-like” tone may offer clarity and accessibility – particularly benefiting non-native speakers and educational settings – but it also raises concerns. Experts warn that the spontaneity, regional dialects, and emotional flourishes that make human language rich and authentic may be fading.
Broader Implications: Trust, Identity, and Expression
The study sparks pressing questions:
- Linguistic diversity: Will standard AI-like language narrow the richness of human dialects and personal style?
- Emotional connection: If speech becomes more formal and neutral, will social communication risk becoming less genuine? Some researchers warn that over-reliance on AI structures could “undermine the quirks and imperfections that build trust in human exchanges.”
- AI trust paradox: As humans emulate AI speech to sound credible, they also grow more wary of perceived AI involvement — a tension known as the AI trust paradox.
What the Research Shows
The investigation used a clever methodology:
- Researchers fed large text datasets (emails, essays, news articles) through ChatGPT and identified the words the model used disproportionately often – the “GPT words.”
- They then tracked the presence and growth of these words in public speech media over time (the code sketch after this list illustrates the basic idea).
- After ChatGPT’s release, the frequency of GPT words in spoken English rose sharply, even after accounting for synonyms and scripted content.
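To make the second and third bullets concrete, here is a minimal Python sketch of the general idea: take a fixed list of GPT words and compare how often they appear per 1,000 spoken tokens before and after ChatGPT’s launch. It is a toy illustration under stated assumptions – the inline word list, the tiny example transcripts, and the per-1,000-token rate are stand-ins chosen for demonstration – not the researchers’ actual pipeline, which covered hundreds of thousands of transcripts and controlled for synonyms and scripted content.

```python
from datetime import date
import re

# Illustrative subset of the "GPT words" named in the article; in the study the
# list was derived by comparing ChatGPT-edited texts with the human originals.
GPT_WORDS = {"delve", "meticulous", "realm", "swift", "comprehend", "bolster", "boast"}

CHATGPT_LAUNCH = date(2022, 11, 30)

# Tiny stand-in corpus of (publication date, transcript text) pairs. The real
# study analysed hundreds of thousands of YouTube and podcast transcripts.
TRANSCRIPTS = [
    (date(2021, 5, 2), "Today we take a quick look at old maps and river trade routes."),
    (date(2022, 3, 14), "Let's talk about home cooking, simple meals, and kitchen habits."),
    (date(2023, 6, 9), "We delve into a meticulous review to bolster the main argument."),
    (date(2024, 1, 21), "In this episode we delve deeper into that realm and offer a swift summary."),
]


def tokenize(text: str) -> list[str]:
    """Lowercase word tokens; lemmatization and synonym handling are skipped here."""
    return re.findall(r"[a-z']+", text.lower())


def gpt_word_rate(docs: list[tuple[date, str]]) -> float:
    """GPT-word occurrences per 1,000 tokens across a set of transcripts."""
    hits, total = 0, 0
    for _, text in docs:
        tokens = tokenize(text)
        total += len(tokens)
        hits += sum(1 for t in tokens if t in GPT_WORDS)
    return 1000 * hits / total if total else 0.0


before = [item for item in TRANSCRIPTS if item[0] < CHATGPT_LAUNCH]
after = [item for item in TRANSCRIPTS if item[0] >= CHATGPT_LAUNCH]

print(f"GPT-word rate before launch: {gpt_word_rate(before):.1f} per 1,000 tokens")
print(f"GPT-word rate after launch:  {gpt_word_rate(after):.1f} per 1,000 tokens")
```

In practice the same counting would be run month by month to produce a time series, rather than a single before-and-after comparison.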
The study, released as a preprint, offers some of the first empirical evidence that human speech is adapting to mimic AI-generated language patterns.
Benefits, Risks, and the Role of Conscious Choice
There are positive outcomes. ChatGPT-like speech tends to be clearer, more structured, and easier to follow. It may improve communication standards in academic, professional, and cross-cultural contexts.
However, risks include:
- Erosion of personal style: Speech could become overly sanitized and uniform.
- Emotional detachment: Formalized tone may lack warmth or nuance.
- Diminished authenticity: Without linguistic quirks, trust can erode.
Language is never static, but this study highlights an AI-driven phase of linguistic evolution. The question remains: will we simply allow this shift, or consciously preserve our unique voices?
Final Takeaway
As advanced AI tools like ChatGPT become woven into daily life, their influence extends beyond efficiency to reshape how we express ourselves. The rise of GPT-style language marks a novel moment in communication evolution — one that invites us to think critically about language, identity, and authenticity.
The next steps are personal and cultural: we must remain aware of how we speak and choose when to lean into AI-inspired precision — and when to reassert our distinctly human expressions.