The debate around AI-generated content and SEO performance has been running for years. Many assumed that as generative AI tools got better, the gap between machine-written and human-written pages would disappear. New data suggests that hasn't happened. And for content teams in competitive, high-trust industries such as cybersecurity, the gap matters more than almost anywhere else.

## What the data says

A Semrush study examining 42,000 blog posts found that human-written pages appeared in Google's number one position 80% of the time, compared to just 9% for purely AI-generated pages. Human content is roughly eight times more likely to reach the top spot. The gap was widest at position one and narrowed further down the page, with AI content appearing more frequently at positions four and below.

Ahrefs ran a wider analysis, examining 600,000 pages across 100,000 keywords. Their findings added detail to the picture. The correlation between AI content percentage and ranking position came out at 0.011, effectively zero. Google does not actively punish AI-generated content, but it doesn't reward it either. Pages with minimal AI use, between 0 and 30% AI-generated text, showed a slight tendency to rank higher, and purely AI-written content rarely climbed to the number one position.

What makes this more striking is the gap between perception and reality. In the Semrush survey, 72% of SEOs said they believed AI content performs as well as or better than human content. The ranking data tells a different story.

## What we noticed working in cybersecurity SEO

A few months ago, we started noticing a pattern we couldn't ignore. Working as SEOs for cybersecurity companies, we ran AI detection scores on content from competing brands that sold similar products, operated in the same niche, and had comparable domain ratings. The conditions were close enough to make a fair comparison.

Pages that scored low on AI content detectors consistently outranked pages that scored high. These weren't companies with massive backlink advantages or wildly different technical setups. Content quality was the factor that separated them.

The cybersecurity space is revealing for this kind of analysis. Buyers are technically sharp and skeptical by nature. Generic, predictable writing doesn't earn their trust. The pages that ranked well shared one thing: they read like they were written by someone who really understood the subject matter, not by an AI content generator that had processed a lot of content about it.

## How LLMs generate content and why detectors can spot it

To create content that reads as genuinely human, it helps to understand what you're working against.

Large language models generate text one token at a time. A token is roughly a word or part of a word. At each step, the model processes the entire context of what came before, calculates a probability distribution spanning its entire vocabulary, and selects from the most likely candidates. The result is text that is statistically smooth, coherent, grammatically clean, and built from the most predictable word sequences available. That predictability is the core problem.

The same principle applies to AI-generated images. Generative AI tools that produce visuals follow a similar logic, sampling from probability distributions trained on existing data, which is why AI-generated images often share the same telltale smoothness and compositional predictability that trained eyes learn to recognize.
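To make the token-by-token loop concrete, here is a deliberately tiny sketch in Python. Everything in it is illustrative: the vocabulary, the probability numbers, and the lookup-table "model" are invented stand-ins, and a real LLM recomputes a distribution over tens of thousands of tokens at every step, conditioned on the full context rather than just the previous word.

```python
import random

# Toy next-token loop. The probabilities below are invented for
# illustration; a real LLM computes a fresh distribution over its
# entire vocabulary at every step, using the full context so far.
NEXT_TOKEN_PROBS = {
    "threat":    {"landscape": 0.6, "actor": 0.3, "surface": 0.1},
    "landscape": {"is": 0.7, "continues": 0.3},
    "is":        {"evolving": 0.5, "changing": 0.4, "shifting": 0.1},
}

def generate(context, steps=3):
    for _ in range(steps):
        dist = NEXT_TOKEN_PROBS.get(context[-1])
        if dist is None:
            break
        tokens, weights = zip(*dist.items())
        # Sampling from the top of the distribution is what keeps the
        # output fluent -- and what makes it statistically predictable.
        context.append(random.choices(tokens, weights=weights, k=1)[0])
    return " ".join(context)

print(generate(["the", "threat"]))  # e.g. "the threat landscape is evolving"
```

Run it a few times and the outputs cluster around the same few high-probability phrasings. That clustering, scaled up to a full vocabulary, is the statistical smoothness detectors pick up on.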
## What AI detectors actually measure

AI text detectors measure how surprising a piece of text is. The technical term is perplexity: how much a language model is "surprised" by each word choice in a passage. Counterintuitively, human writing scores higher on perplexity than AI writing. Humans make unexpected word choices, shift registers, and express ideas in ways that don't follow the most statistically likely path. AI almost always takes the safe route.

Beyond perplexity, detectors measure three other signals:

- Burstiness: how much sentence length varies. Human writing mixes short, punchy sentences with longer, more developed ones. AI writing keeps a steady, medium-length rhythm throughout.
- Structural predictability: how consistently paragraphs follow the same internal logic, typically a topic sentence, supporting points, and a closing summary.
- Transition density: how often predictable connector phrases appear. Words like "furthermore," "additionally," and "it is worth noting" appear in AI text at three to five times the rate of human writing.

Changes that don't address these four signals, such as synonym swaps or added filler sentences, leave the statistical fingerprint largely intact.

## What the research says about AI writing patterns

A peer-reviewed study from Carnegie Mellon University, published in the Proceedings of the National Academy of Sciences, examined the writing styles of GPT-4o and multiple variants of Llama 3 across tens of thousands of texts. The findings are worth sitting with. Instruction-tuned models, the kind behind the AI tools most people actually use, produce text that is measurably less human than the base models they are built on. Training a model to follow instructions and answer questions makes its writing style more distinct and easier to detect, not less.

Specifically, the study found that:

- GPT-4o uses present participial clauses at over five times the rate of human writers
- Nominalizations, nouns formed from verbs or adjectives like "implementation" instead of "implementing," appear at more than twice the human rate
- The overall style reads as noun-heavy and informationally dense, even when the model is explicitly prompted to write more casually

The vocabulary findings are equally telling. GPT-4o and similar models use certain words at more than 100 times the rate of human writers. The word "tapestry" appeared in 23% of GPT-4o outputs across a diverse range of genres. "Amidst" appeared in 27%. Other heavily overrepresented words included "palpable," "camaraderie," "intricate," "fleeting," and "unravel." These are not inherently bad words, but their appearance in news writing, blog posts, or business content is conspicuous to anyone who reads widely.

## The confidence problem in AI writing

Research into AI writing patterns has found that AI-generated text overuses what linguists call booster language, words and phrases that make claims sound definitive:

- "Certainly"
- "Undoubtedly"
- "It is clear that"

At the same time, AI underuses hedging language, the qualifiers that make human writing sound considered and grounded: "almost," "in general," "it appears," "probably." The result is writing that reads as more confident than any individual person actually sounds, which is its own kind of tell.

## How AI content lacks communicative range

Research has found that human writing fulfills multiple communicative purposes within a single piece. Writers move between argumentation, explanation, narrative, and comparison, frequently within the same paragraph.
AI-generated content tends to stay in one mode, usually argumentative or descriptive, and rarely shifts into narrative or personal observation. That narrowness is detectable, and it is what makes much AI content feel empty even when it is technically accurate.

## What this means for how you write

The AuthorMist research paper offers a concrete example of this pattern in action. An original AI-generated sentence read: "Moreover, the study's results clearly show a significant correlation between the variables, contradicting initial expectations." After being rewritten to evade detection, it became: "What the study found, in fact, shows there's a strong link between those variables, a result that turned out quite different from what was expected."

The second version uses a more conversational structure and mixes registers in a way that feels natural rather than optimized. Detectors rated it as human. The rewriting model wasn't given explicit rules. It learned, through trial and error, which patterns detectors respond to, and those patterns overlap almost entirely with what makes writing feel genuine to a human reader.

## The structural changes that actually move the needle

Research from the University of Maryland found that adversarial paraphrasing, meaning rewriting guided by detector feedback, achieves an average detection rate reduction of around 88% across multiple detector types. An analysis from legitwrite found that specific framework changes produce the following score drops on tools like GPTZero, starting from a typical AI-generated document scoring 90 to 95% AI probability:

- Sentence length variation: 20 to 35 percentage points
- Introduction rewrite: 15 to 25 percentage points
- Conclusion rewrite: 10 to 20 percentage points
- Removing predictable transition phrases: 5 to 10 percentage points
- Breaking paragraph uniformity: 10 to 15 percentage points

A full structural pass covering all of these can bring a document down to the 15 to 40% range. A second targeted pass on high-signal sections pushes it below 15% in most cases.

On vocabulary, the fix is not to avoid good words but to avoid the specific cluster of words that AI models reach for excessively. Terms such as "tapestry," "intricate," "palpable," and "unravel" belong in literary contexts, not in cybersecurity blogs or marketing copy. Choosing the more direct, concrete, or slightly unexpected word over the elevated one moves text toward the word distribution that detectors associate with human authorship.

The deeper point, which both the academic research and the practical detection literature agree on, is that the techniques that reduce AI detection scores are the same ones that improve writing: varied rhythm, specific detail, shifts in register, and genuine opinions stated directly. These are not workarounds. They are the baseline of good writing, and the gap between AI-generated content and that baseline is measurable, consistent, and worth closing deliberately.

## How to create content that reads as human

The research is clear on what makes AI content detectable, and it's equally clear on what fixes it. None of it requires throwing out your AI workflow. It requires knowing where human judgment has to step in during the content creation process, and making sure it actually does.

### Use AI for structure, not for voice

The most common mistake content teams make is treating AI output as a finished draft rather than a starting point.
AI is genuinely useful for research, outlining, and generating a first pass at a topic. It falls short on the things that make content worth reading: a specific point of view, a concrete example drawn from real experience, an opinion stated with conviction.

Think of the workflow in two stages. Let the AI tool handle the scaffolding. Then rewrite with your own voice, your own observations, and your own understanding of the audience. The final product should reflect both the efficiency of AI assistance and the judgment of someone who actually knows the subject.

### Fix the four signals detectors look for

If you are editing AI-generated content before publishing, these are the areas that move the needle most:

- Sentence length: Read through any paragraph and count how many sentences land in the same medium-length range. Break that pattern. Cut one sentence to six or seven words. Let another run long with a subordinate clause or additional qualification. Do this across every paragraph, not just the introduction.
- Transitions: Search the document for "furthermore," "additionally," "it is worth noting," "in conclusion," and related phrases. Remove or replace most of them. Human writers use these occasionally. AI uses them constantly, and detectors know it. (A short script for making this sweep repeatable appears at the end of this section.)
- The introduction: AI introductions almost always open with a broad framing statement, define the topic, and preview what the article will cover. Replace this with a specific claim, a concrete scenario, or a direct question. The introduction is where detectors gain the most confidence, so it is where your editing effort pays off most.
- The conclusion: AI conclusions restate what was just said. Replace a summary conclusion with an implication, an open question, or a plain next step. Detectors recognize the summary pattern immediately.

These four edits, done consistently across a document, produce the largest measurable drop in AI detection scores of any changes you can make.

### Cut the AI vocabulary list

Based on the Carnegie Mellon research we covered earlier, certain words appear in AI-generated content at rates that are not normal in human writing. Some appear in over 20% of GPT-4o outputs across unrelated topics and genres. Others signal a literary register that has no place in a cybersecurity blog or a product explainer.

Words to watch for and replace:

- Tapestry, intricate, palpable, camaraderie, fleeting, unravel, amidst
- Certainly, undoubtedly, it is clear that
- Furthermore, moreover, additionally, subsequently, notably
- Comprehensive, multifaceted, robust, innovative, seamless

None of these is wrong in isolation. The problem is frequency and context. When they appear in business content, they register as off to a careful reader and as AI to a detector.

### Write from a specific position

One of the clearest signals of human authorship is a stated opinion. AI defaults to providing information from a neutral third-person perspective. It avoids "we think," "in our experience," and direct judgments. That absence is flagged by detectors and felt by readers.

State your position. Share what you have seen. Reference a specific situation, a client problem, a pattern you noticed across accounts. In the cybersecurity space, this matters more than in most industries. Your audience reads technical content all day. Generic information presented without perspective does not earn their attention.

A useful test before publishing: read the piece and ask whether it could have been written by anyone. If the answer is yes, it needs more of your voice in it.
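The transition sweep and the vocabulary sweep above are easy to make repeatable before a piece goes out. The sketch below is a rough self-check, not a detector: the phrase lists are the ones quoted in this article, the sentence splitting is deliberately naive, and real detectors model far more than word counts and sentence-length spread.

```python
import re
import statistics

# Pre-publish self-check for three of the signals discussed above:
# sentence-length variation (a burstiness proxy), transition density,
# and the overused AI vocabulary. Lists and output are illustrative.
TRANSITIONS = ["furthermore", "moreover", "additionally", "subsequently",
               "it is worth noting", "in conclusion"]
AI_VOCAB = ["tapestry", "intricate", "palpable", "camaraderie", "fleeting",
            "unravel", "amidst", "certainly", "undoubtedly", "multifaceted",
            "seamless"]

def review(text):
    # Naive sentence split: break after ., !, or ? followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if not lengths:
        return

    # Low spread in sentence length is the flat rhythm detectors flag.
    spread = statistics.stdev(lengths) if len(lengths) > 1 else 0.0
    print(f"{len(sentences)} sentences | mean {statistics.mean(lengths):.1f} "
          f"words | length stdev {spread:.1f}")

    lowered = text.lower()
    for phrase in TRANSITIONS + AI_VOCAB:
        hits = lowered.count(phrase)
        if hits:
            print(f"  flag: '{phrase}' x{hits}")

review("Furthermore, the threat landscape is intricate and evolving. "
       "Additionally, our seamless platform provides robust coverage. "
       "It is worth noting that attackers adapt quickly.")
```

None of the numbers it prints are pass/fail thresholds. It is just a fast way to spot a flat rhythm or a cluster of flagged phrases before a human editor does the real pass.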
### Vary how your paragraphs are built

AI paragraphs follow a predictable structure: topic sentence, two or three supporting points, closing summary. Human writing varies. Some paragraphs are a single sentence. Some open with a question. Some move from a specific example to a general principle rather than the other way around.

Make deliberate structural choices during your content creation process. Break a long AI paragraph into two, one of them just a sentence or two. Start a paragraph with a direct question instead of a statement. Let a point stand alone without following it immediately with supporting evidence. These variations raise the burstiness score and, more importantly, they make the piece more interesting to read.

### Bring in what AI cannot generate

The things that reliably separate high-performing content from average content are the things no AI tool can produce on its own. Original data. A specific customer scenario. A number drawn from your own analysis. An observation about a pattern in your industry that hasn't been written about yet.

The Semrush study found that only 19% of content teams say AI improves content quality, even though 70% cite faster production as the main benefit. That gap is where your effort should go. LLMs can get you to a draft quickly. What you add from there, the specificity, the experience, the genuine expertise, is what determines whether the content ranks in a search engine and whether readers trust it.

For cybersecurity brands especially, trust is not a bonus. It's the product.