Why First-Time Writers Keep Falling for AI Pitfalls - Insights from Leading Scholars
The Boston Globe’s Claim - What It Says and Why It Matters
The Boston Globe published an opinion piece titled "AI is destroying good writing" that sparked a heated debate among educators, journalists and technologists. The author argues that large language models generate text that mimics style but lacks depth, critical thinking and the human voice that makes writing memorable. The piece cites a 2023 OpenAI survey in which 68% of journalists reported that AI tools changed their workflow, often leading to shortcuts that erode editorial standards.
For a beginner, the headline can feel intimidating: "AI is destroying good writing" sounds like a blanket condemnation of every tool that helps with grammar or brainstorming. Yet the article is not a call to abandon technology altogether. It warns that unchecked reliance on AI can produce bland prose, factual errors and a loss of authorial confidence. Understanding the nuance of the Globe’s argument is the first step toward using AI responsibly.
"The speed of AI-generated drafts often outpaces the time needed for fact-checking, creating a perfect storm for misinformation," the Globe notes.
Common Mistakes New Writers Make with AI
Beginners tend to treat AI as a magic wand. The first mistake is assuming the output is always correct. Large language models are trained on massive datasets that include outdated or biased information. When a novice copies a paragraph verbatim, they may inadvertently spread inaccuracies.
The second mistake is over-reliance on AI for creativity. Many first-time writers let the model suggest headlines, metaphors or story arcs without adding personal insight. The result is text that feels generic, lacking the unique perspective that distinguishes good writing from average copy.
A third error is ignoring the importance of revision. AI can produce a draft in seconds, but the craft of editing - checking flow, tone, and coherence - still requires human judgment. Skipping this step leads to sloppy prose that the Globe describes as "destroying good writing."
Common Mistakes to Avoid:
- Copying AI output without verification.
- Relying on AI for original ideas.
- Skipping the revision stage.
Expert Perspective - Linguistics and Quality
Computational linguist Emily Bender of the University of Washington, co-author of the "stochastic parrots" critique of large language models, argues that these systems produce fluent form without genuine understanding, which is why their output can sound authoritative while containing fabricated claims. Dr. Margaret Mitchell, her co-author on that critique and former co-lead of Google's Ethical AI team, emphasizes the loss of narrative coherence when writers let AI dictate structure. She notes that good writing follows a logical arc - setup, conflict, resolution - that AI may overlook in favor of fluency. Mitchell advises writers to outline their story first, then use AI to flesh out sections, ensuring the overall shape remains human-crafted.
Both scholars agree that AI can be a powerful assistant for grammar and style, but it should never replace the writer’s role as the final arbiter of meaning. Their advice aligns with the Globe’s call for vigilance rather than abandonment.
Expert Perspective - Ethics and Bias
Philosopher John Searle, known for his work on the philosophy of mind, argues that AI lacks intentionality. He contends that without genuine understanding, AI cannot be held accountable for the moral weight of its words. For new writers, this means the ethical responsibility for content rests entirely on the human user.
These experts converge on a two-step safeguard: first, audit AI output for biased language; second, apply a personal ethical lens before publishing. This practice directly addresses the Globe's warning that unchecked AI can degrade the moral quality of writing.
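As a rough mechanical aid for the first step of that safeguard, a short script can flag loaded or absolutist phrasing for human review. This is a minimal sketch in Python; the LOADED_TERMS list and the flag_loaded_terms helper are illustrative inventions for this example, not a vetted bias lexicon, and no word list can substitute for an editor's judgment.

```python
import re

# Illustrative word list only; a real audit needs a human reviewer
# working from a vetted style guide, not a hard-coded set.
LOADED_TERMS = {"obviously", "everyone knows", "clearly", "so-called"}

def flag_loaded_terms(draft: str) -> list[str]:
    """Return loaded terms found in the draft, for manual review."""
    found = []
    lowered = draft.lower()
    for term in sorted(LOADED_TERMS):
        # Whole-word match so "clearly" does not fire on "unclearly".
        if re.search(r"\b" + re.escape(term) + r"\b", lowered):
            found.append(term)
    return found

draft = "Clearly, everyone knows this so-called expert is wrong."
print(flag_loaded_terms(draft))  # ['clearly', 'everyone knows', 'so-called']
```

The script only surfaces candidates; deciding whether a flagged phrase is actually biased in context remains the second, human step of the safeguard.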
Practical Take - How to Use AI Without Sacrificing Craft
The most effective workflow for beginners combines human creativity with AI efficiency. Start with a clear outline written by hand. Use AI to generate alternative phrasing for each bullet point, then select the version that best preserves your voice.
Next, run the draft through a fact-checking checklist. Verify names, dates, statistics and quotations against reliable sources such as academic journals or reputable news outlets. This step mitigates the hallucination problem highlighted by Dr. Bender.
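That checklist can be made concrete by mechanically surfacing every checkable detail in a draft before verifying it by hand. The sketch below is a minimal illustration; the extract_checkable_claims helper and its regex patterns are assumptions for this example, and they only collect claims for review rather than verifying anything themselves.

```python
import re

def extract_checkable_claims(draft: str) -> dict[str, list[str]]:
    """Surface details a human should verify against reliable sources."""
    return {
        # Four-digit years starting with 19 or 20.
        "years": re.findall(r"\b(?:19|20)\d{2}\b", draft),
        # Percentages such as 68% or 3.5%.
        "percentages": re.findall(r"\b\d+(?:\.\d+)?%", draft),
        # Text inside double quotation marks.
        "quotations": re.findall(r'"([^"]+)"', draft),
    }

draft = 'A 2023 survey found that 68% of journalists "changed their workflow."'
for category, items in extract_checkable_claims(draft).items():
    print(category, items)
```

Every item the script surfaces still has to be traced back to a primary source; the value of the sketch is simply that no statistic or quotation slips through unexamined.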
Pro Tip: Treat AI as a co-author, not a ghostwriter. Write the first and last paragraphs yourself to anchor the piece in your perspective.
Looking Ahead - The Future of Writing Education
Educators are beginning to integrate AI literacy into composition courses. A 2024 study by the International Association of Teachers of English as a Foreign Language reported that students who received explicit training on AI limitations produced higher-quality essays than those who used AI without guidance. This suggests that the threat described by the Globe can be turned into an opportunity.
Future curricula may include modules on bias detection, ethical prompting and collaborative drafting with AI. By teaching beginners to ask the right questions - "What is the source of this claim?" and "How does this phrasing serve my argument?" - schools can preserve the art of writing while embracing technological aid.
In the long run, the relationship between AI and writing will likely resemble a partnership. The Globe’s warning serves as a reminder that partnership must be managed with care, especially for those just starting their writing journey.
Glossary
- Large language model: An AI system trained on vast text data to predict the next word in a sequence.
- Hallucination: When an AI generates information that appears plausible but is factually incorrect.
- Bias: Systematic favoritism or prejudice embedded in AI outputs due to the data it was trained on.