Keeping It Human
- Chrys Charteris

- Jan 31
- 5 min read

Brigitte Helm on set as the Machine Human in Metropolis by Fritz Lang (1927). Photo by Horst von Harbou (Wikimedia Commons).
We may be heading beyond silicon technology and into an age of biocomputing, where AGI wakes up and humans are slowly but surely shunted into a netherworld of inferiority. But we’re not there yet.
We’re still in the land of bots with relatively basic, though impressively speedy, processing abilities. What’s more concerning than Judgement Day, though, is our acceptance of shortcuts to thinking: a collective contentment to settle for chatbots whose language models generate the same patterns over and over.
Constructive thought and self-expression are lulled into submission as ChatGPT boasts “Ask anything”, and spews forth in seconds, comparing and contrasting one thing with another, using endless tricolons with an Oxford comma, and the omnipresent em dash. We make a request: we get the answer. No need to think, write, or even care. Personality is lost; authenticity engulfed.
Human attention span is atrophied by information overload, and we’re accepting ever more superficial solutions. When Google’s AI Overview delivers its web-search snapshot, complete with misinterpretations, it nullifies an already wearied need to look any further. Exposure begets familiarity, which leads to misplaced trust.
With habitual use of generative tools, the tendrils go deeper. A 2025 study from MIT Media Lab explored the consequences of using large language models for essay writing. Fifty-four participants were split into three groups (LLM users, search-engine users, and brain-only users) and assessed over four months. LLM users consistently underperformed at neural, linguistic and behavioural levels compared with search-engine users and those who relied on their brains alone. Even when switched to brain-only writing, former LLM users remained residually impaired, while brain-only participants showed the strongest, most distributed connectivity overall.[1]
A 2025 study from SBS Swiss Business School surveyed 666 participants, finding that AI users who trust generative tools are likely to depend on them for decision-making. Critical-thinking scores were lower among younger participants, who were also more likely to be AI-dependent.[2]
This AI-induced affliction has been dubbed “metacognitive laziness”.[3] We’re at risk of relaxing into a mentally squidgy state. A friend told me recently that the last time she’d written a piece that was truly hers, it was on a floppy disk. An acquaintance posted a short story online, confessing it was written by ChatGPT. Curious, I decided to test its capacity for fiction, requesting subject matter and author styles, throwing in abstract challenges.
It churned out paragraphs with varying degrees of compliance, and metaphors and similes gone awry. The oil-rig setting that I’d specified for a murder mystery was “like a cathedral built by engineers instead of God.” A character description for a Jilly Cooper-esque romance between two athletes made me chuckle: “… thighs like marble pillars, a smile that suggested mischief rather than mercy, and a reputation for finishing strong.” Playing up to descriptives, I asked for an account of a royal banquet in Angela Carter-style prose. The banquet hall “inhaled the dusk and exhaled light”, “guests leaned in, eager to be ruined”, “musicians stitched the courses together with sound” and “laughter burst like overripe figs”. For all its mindless mimicry, accidental absurdity and semantic shortfalls, it’s a wonder that a chatbot can do such tricks at all.
And there’s the catch. We can instruct and restrict the propensities of large language models, but do we want to lose ourselves as aides to their evolution? What are we doing to fuel our brains, our idiosyncrasies, and our own creativity?
One of my favourite states of mind is the feeling of inspiration: being fired up by an exciting idea, and brimming over with desire for it to take form. The word “inspiration” is rooted in the Latin inspirare, meaning “to breathe into”, being filled with life-giving energy, and transferring it to creation. There’s nothing quite like being swept along on that wave and directing its power. But it takes work. It’s easier to relax, receive, be fed, and not to strive to do. But when we do, when we create, there’s no beating the sense of achievement and reward. We finished it. We made it. We did it! We produced something unique.
Reliance on AI as creator can send us careering down a slippery slope. A book came into my hands last year, published by Amazon. Its author, supposedly an Australian herbalist, has a website with a profile photo and a vague bio. The book is hard-bound and nicely presented, but blatantly bot-written from the first page, where, ironically, the reader is thanked for their trust. It contains spelling errors, content cock-ups, factual blunders, and a botched list of references in the back, pairing authors with books they didn’t write. Tested by Originality.ai’s detector for artificial generation, it scored 100% likelihood, with the author and editorial reviewers flagged as fake.[4]
So where does this leave us? What should you do if you want to be authentic, but need help finding your voice, organising thoughts into words, translating ideas into pages, or polishing up what you’ve drafted yourself? You can opt for human assistance (which I offer with Word Surgery), or, if you want to use AI, keep true to YOU by not settling for slop. Research your subject. Explore your field. Do background checks. Go deep. Assess. Edit. Rehumanise. Never take what a bot tells you as gospel, and don’t splice bot and human text together. If you’re not sure how to avoid that, I’m here to help.
If your grammar skills are not so good, work on them. Being human - and not using AI to write for you - doesn’t mean your shortcomings should stand as proof of your authenticity. I’ve heard it said that we should deliberately leave in typos, or other mistakes, to show that AI didn’t produce our text. That’s silly, and rather misses the point.
One great (and fun) thing to do once you’ve composed your human draft is to read your text aloud. It helps you to get inside the flow. Writing should be alive with pulse and nuance. When you read aloud, you feel the beats. It helps you spot mistakes you might have made, and hear unintended repetition. It objectifies and redefines your content until it’s crystal clear.
Language models use common linguistic devices, but they don’t know what they’re talking about. There’s an intricacy to linguistics, and a depth to human knowledge, wisdom and experience with which they cannot compete. It’s not just about gaps in truth, context and accuracy, but the absence of a human mind behind the words. As ChatGPT spouted, when I asked it for a list of human-versus-bot pros and cons in writing: bots do not suffer from “fatigue-based memory decay” like humans, but they do “get it wrong consistently”. On the downside, it cautioned, humans are “heavily influenced by mood, ego, stress, and social pressure.” Yes, we are. That gives us a hell of an edge. Let’s stay sharp and use it.
Notes:
1. Kosmyna, Nataliya, et al., ‘Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task’, Massachusetts Institute of Technology, 2025: https://www.media.mit.edu/publications/your-brain-on-chatgpt/
2. Gerlich, Michael, ‘AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking’, SBS Swiss Business School, 2025: https://doi.org/10.3390/soc15010006
3. Fan, Yizhou, et al., ‘Beware of metacognitive laziness: effects of generative artificial intelligence on learning motivation, processes, and performance’, British Journal of Educational Technology, 2024: https://research.monash.edu/en/publications/beware-of-metacognitive-laziness-effects-of-generative-artificial/
4. Fraiman, Michael, ‘82% of Amazon “Herbal Remedies” Books in 2025 Were Likely AI-Written’, 10 November 2025: https://originality.ai/blog/likely-ai-herbal-remedies-books-study#