In my perusal of the internet at night as I wait for sleep to take me, I came across a thread on Threads by @drjenwolken that says:
“I just found out that supposedly neurodivergent’s [sic] writing is more likely to get flagged as AI-generated, and I’m about to ask chat GPT [sic] why this is the case, so. There’s no irony here at all.”
This particular thread piqued my interest for a couple of reasons. First, I, myself, am autistic. I had also recently written an article for my own blog called AI Detection as Ableism: Why “Robot-Checking” Hurts Neurodivergent Writers (and Why It’s a Civil Rights Issue). It seemed only fitting to follow up—so let’s dig into why writing by autistic and otherwise neurodivergent folks gets flagged by AI detectors more often, and what that means for disabled writers everywhere.
Why Are Neurodivergent Writers Flagged More by AI Detectors?
The short version? AI detectors are trained on patterns, not on people. They look for what’s “normal,” and by definition, neurodivergent writing often falls outside those norms.
1. Pattern-Matching and “Normalcy”
AI detectors analyze word choices, sentence structures, pacing, and even punctuation. These detectors learn the average patterns of (largely neurotypical) writing from huge datasets of standard essays, articles, and professional prose, and they flag anything that falls outside those patterns.
If your writing style is unusually direct, circuitous, literal, poetic, repetitive, or just different, it can look “non-human” to the detector, even when it’s 100% you.
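To make that mechanism concrete, here is a deliberately tiny, hypothetical sketch of pattern-matching (not any real detector's code): it scores text by how often each word appears in a small "reference corpus," so any vocabulary outside that reference drags the score down, no matter how human the writer is.

```python
from collections import Counter

# Toy "reference corpus" standing in for the huge datasets of
# typical prose a real detector is trained on (illustrative only).
REFERENCE = (
    "the quick report was clear and short . "
    "the team wrote the report and sent it on time . "
    "it was a good report with clear points ."
).split()

REF_FREQ = Counter(REFERENCE)
REF_TOTAL = sum(REF_FREQ.values())

def typicality(text: str) -> float:
    """Average reference-corpus probability of each word.

    Words the reference has never seen get a tiny floor probability,
    so unusual vocabulary drags the score down -- the same mechanism
    that penalizes any voice outside the training distribution.
    """
    words = text.lower().split()
    floor = 1 / (REF_TOTAL * 10)  # arbitrary smoothing floor
    probs = [REF_FREQ.get(w, 0) / REF_TOTAL or floor for w in words]
    return sum(probs) / len(probs)

plain = "the report was clear and short"
distinct = "my circuitous hyperspecific infodump meanders joyfully"

# The distinctive sentence scores lower purely because its words are
# rarer in the reference -- nothing about meaning or authorship.
print(typicality(plain), typicality(distinct))
```

Notice that the score says nothing about who wrote the text; it only measures distance from the reference corpus. That is the whole problem in miniature.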
2. Literal Language and Unconventional Structure
Many autistic and neurodivergent people use language in ways that are refreshingly direct, idiosyncratic, or structurally unique:
- Unexpected word order, longer (or shorter) sentences
- Unusual metaphors or direct statements
- Hyper-specific focus, repetition, or "info-dumping"
- Scripting, formal tone, or “robotic” cadence (especially in masking or academic writing)
AI detectors, trained on the myth of the “average writer,” may interpret these as “generated” even though, for many of us, they’re just our natural voices.
3. Echolalia, Repetition, and Info-Dumping
Many neurodivergent writers (autistic folks especially) use repetition or “info-dumping” (sharing lots of detailed information in one burst) as a way of expressing excitement, expertise, or comfort. AI detectors, however, often associate these patterns with “predictable” or “formulaic” AI-generated text.
4. Masking, Camouflaging, and Over-Correction
Sometimes, neurodivergent writers over-correct to sound more “professional” or “neutral,” trying to avoid social penalties for “sounding weird.” Ironically, this can push our writing closer to what AI detectors think of as “robotic,” even if every word is ours.
5. Bias in the Training Data
Most AI detectors haven’t been trained on a representative sample of human diversity, especially not disabled, autistic, or otherwise neurodivergent voices. The bias is baked in from the start: if you’re not “average,” you’re more likely to get flagged.
So what’s the result? For neurodivergent writers, it means constantly being told your authentic voice is “wrong,” “robotic,” or “fake.” This is not just a misinterpretation; it’s an injustice. For disabled writers in general, it means facing yet another barrier to being seen and believed as real.
Why Hasn’t Classic Literature Been Tested with AI Detectors?
Short answer: it already has been, informally. Individuals online have fed classic works by Jane Austen, Hemingway, and Twain through AI detectors, and the results are laughable. Google’s own tool has flagged Pride and Prejudice, A Farewell to Arms, and other canonized texts as “likely AI-generated.” But these tests rarely go beyond blog posts, Medium articles, or the occasional Reddit rant. There’s little rigorous, scientific analysis; the conversation mostly lives in the land of armchair experimentation and anecdote.
Why Do AI Detectors Flag the Classics?
- Archaic or complex syntax: Sentence structures from earlier eras don’t fit “modern” writing norms baked into detector training data.
- Expansive vocabulary: Literary authors and avid readers tend to use more sophisticated words and structures.
- Unconventional punctuation: Em dashes, semicolons, and other “spicy” punctuation marks are more common in literature than in the average business email.
- Bias in the training data: AI detectors are trained on a limited slice of “contemporary, typical” writing (often scraped from the web), not on centuries of human literary output.
- Statistical patterns > meaning: Detectors aren’t reading for content or intent. They’re matching surface-level patterns and “probabilities.” Anything outside the average, whether it’s an autistic writer’s unique cadence or Austen’s clever dialogue, gets flagged as suspicious.
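As a toy illustration of that last bullet, here is a hypothetical surface-feature check (again, not any real detector's rules): it counts semicolons, em dashes, and average sentence length, and never once looks at meaning. Austen's famous opening line racks up exactly the features such a heuristic treats as suspicious.

```python
import re

def surface_flags(text: str) -> dict:
    """Count surface features a naive pattern-matcher might treat as
    'non-human': punctuation and sentence length, never meaning.
    (Hypothetical heuristic, not any real detector's rules.)
    """
    sentences = [s for s in re.split(r"[.!?]", text) if s.strip()]
    words_per_sentence = [len(s.split()) for s in sentences]
    return {
        "semicolons": text.count(";"),
        "em_dashes": text.count("\u2014"),
        "avg_sentence_len": sum(words_per_sentence) / len(words_per_sentence),
    }

austen = ("It is a truth universally acknowledged, that a single man in "
          "possession of a good fortune, must be in want of a wife.")
print(surface_flags(austen))  # one long, comma-rich sentence: "suspicious"
```

A 23-word literary sentence and an autistic writer's carefully structured paragraph both land on the wrong side of this kind of threshold for the same reason: they deviate from a statistical average, not from humanity.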
So, What’s the Takeaway?
It’s not just you. The fact that even Jane Austen fails the “robot check” says less about your authenticity as a writer, and more about how narrow, prescriptive, and ultimately flawed these tools really are.
And this matters for neurodivergent and disabled writers because the structures, vocabularies, and punctuation of our authentic voices are “different” by design. The same bias that trips up classic literature is the one tripping up anyone who doesn’t write like the imaginary “average.”
What Can Be Done?
If AI detectors routinely flag the classics and routinely penalize neurodivergent writers, then we clearly have a problem with the tool, not the writer. But what do we do with that knowledge? Here’s where we can start: We can advocate for better tools, educate others, and document our process. Together, we can make a difference.
1. Advocate for Better Tools
Push back against lazy gatekeeping. If you’re in a position to give feedback (to employers, platforms, or educators), let them know that AI detectors are not a gold standard: they are deeply flawed, and their results do not prove dishonesty or lack of originality. Request human review and insist that decisions are not made solely by an algorithm.
2. Educate Others
Share your experience. If you’ve been unfairly flagged, talk about it online, with your peers, in advocacy groups. This sharing of experiences is crucial in normalizing the idea that “false positives” are common, and that authenticity can look very different from algorithmic averages. The more people know, the less power these tools have. We’re in this together.
3. Document Your Process
For writers in school, at work, or in publishing, it’s essential to keep drafts and notes. Show your process: outlines, early versions, research steps, revision notes, timestamps. It shouldn’t be on you to “prove you’re human,” but until the system catches up, having receipts can help protect your work and your reputation.
4. Support Neurodivergent Voices
If you’re an editor, teacher, or hiring manager, believe your writers. Encourage authentic voice. Question any system that says difference is the same as deception. Neurodivergent writers, and all marginalized voices, bring the richness that keeps language alive.
5. Lobby for Policy Change
On a larger scale, advocate for policies that protect disabled, neurodivergent, and minority writers from algorithmic discrimination. This is an accessibility and civil rights issue, and it’s time for it to be treated as such.
The solution isn’t for neurodivergent or creative writers to make themselves “smaller” or more average. It’s for the tech and the people using it to learn what real diversity of voice looks like.
Consequences & Future Risks
If we continue to let flawed AI detection tools shape who gets published, hired, or trusted, we risk narrowing the creative landscape to only what algorithms consider “normal.” This doesn’t just harm neurodivergent writers; it erases the nuance and color from our collective voice.
For writers, the cost is constant self-doubt, wasted time, and missed opportunities. For readers and the public, it means a world with fewer authentic stories and less true innovation.
We should never have to shrink our voices or flatten our language to appease a tool. If the system punishes difference, then the system, not the difference, needs changing.
Your turn
Have you ever been flagged by an AI detector, or worried your unique voice would be mistaken for a machine’s? What changes do you want to see? Share your stories, frustrations, and ideas below, or join our forums to keep the conversation going. Every story you share makes the world a little wider for those who come after.