Google's Spam Policy Now Covers AI-Generated Content (2026)

Google's May 2026 spam-policy update covers generative AI responses. Here's the safe/gray/risky line for AI-assisted content, plus a six-question audit any owner can run this week.

Haley C.R. Button-Smith - Content Creator / Digital Marketing Specialist at Button Block

Published: May 15, 2026 · 14 min read
Editorial workspace with a printed content brief on one side and a laptop showing draft text on the other, representing AI-assisted writing under Google's updated spam policy

There is a moment in every small-business marketing strategy meeting in 2026 where someone says, “Why don't we just have AI write the blog?” Most of the time, what follows is a useful conversation about voice, fact-checking, and editorial judgment. Sometimes it goes the other way — someone hands a content brief to ChatGPT, copies the output into a page, and ships it. Per Search Engine Land's coverage of the May 15, 2026 update, the second pattern just got more explicitly risky.

Google updated the definition in its spam policies to clarify that the rules apply to attempts to manipulate generative AI responses in Google Search — both AI Overviews and AI Mode, in addition to traditional rankings. This is not a brand-new ban on AI content. The scaled-content abuse policy that prohibits “using generative AI tools or other similar tools to generate many pages without adding value” has been in Google's published spam policies for some time. What is new is the policy language now explicitly extending to AI-generated answer placements, not only to organic rankings. That has implications for any small business using AI in its content workflow.

This piece is a plain-language walk-through of what changed, where the safe / gray / risky lines actually sit for an SMB in May 2026, and a six-question internal audit you can run on your own site in about fifteen minutes per page. We are not going to fabricate enforcement numbers — Google did not publish any, and Search Engine Land's report does not include manual-action volume figures. We will be honest about what we know, what we don't, and what the practical operational moves are.

Key Takeaways

  • Per Search Engine Land's May 15, 2026 coverage, Google updated its spam-policy language to explicitly cover attempts to manipulate generative AI responses — including AI Overviews and AI Mode — not only traditional rankings.
  • This is a clarification, not a new prohibition. The scaled-content abuse policy that targets AI-generated pages without added value has been on Google's books; the May 15 documentation update extends the same logic to AI-answer placements.
  • AI-assisted content remains safe when it is substantively edited, fact-checked, and accountable to a named human author. The category that is now explicitly risky is scaled AI generation with thin or fabricated information.
  • The biggest practical risk for SMBs is mass-generated location pages, AI-summarized news posts, and FAQ blocks where no human reviewed the answers for accuracy. These have been gray; they are now closer to the manual-action risk line.
  • A six-question internal audit — covering authorship, fact-check trail, originality, value, scale, and update cadence — gives any owner a defensible position before a Google enforcement wave or a client review.

What exactly did Google clarify on May 15, 2026?

Per Barry Schwartz's Search Engine Land report, Google updated the spam-policy definition to state that spam includes “attempting to manipulate generative AI responses in Google Search.” The previous version of the definition only referenced manipulation of search rankings. The clarified language now explicitly covers AI Overviews, AI Mode, and other AI-generated answer surfaces inside Google Search.

The Search Engine Land coverage is straightforward about what this means: it is a definitional extension. The categories of behavior the spam policy already prohibits — cloaking, doorway abuse, scaled content abuse, link spam, sneaky redirects, and so on — now apply to AI-response placements in addition to traditional ranking placements. Google did not introduce a new category of prohibited behavior. It clarified that existing categories cover a new surface.

For an SMB owner who has been using AI to help draft content, this is genuinely good news in a couple of ways. First, it confirms the rules of the road. Until this week, there was a defensible argument that AI Overviews operated under a different policy regime than organic rankings — and that publishers who wanted to game AI citations might face less scrutiny than those gaming rankings. That argument is now closed. Second, it gives owners a clear policy reference point when an agency or a content vendor pitches a "post 500 AI-generated location pages this month" tactic. The answer is now: that strategy was already against Google's scaled content abuse policy, and the manipulation can also affect AI-answer surfaces — making the surface area for enforcement larger, not smaller.

What Google did not say is also important. The May 15 documentation update Search Engine Land covered does not contain enforcement statistics, a list of recent manual actions, or a specific threshold for "how much AI is too much." We will not invent any of those numbers — and you should be skeptical of any agency that pitches a specific percentage threshold based on this update. The policy says what the policy says. The interpretation comes from how Google has historically enforced the underlying spam categories.

This is consistent with the broader direction of the Google playbook we covered in our agentic engine optimization post: treating AI-generated surfaces as part of the same quality framework as traditional search results, rather than as a separate game.

Two screens side by side showing organic search results on one and an AI overview answer surface on the other, representing the dual policy framework

What's still safe, what's gray, and what is now closer to a manual-action risk?

This is the part most SMB owners are trying to figure out. We are going to lay it out as three categories, with the caveat that Google has not published a hard threshold for any of them — these are operational interpretations based on the published policy and the patterns we have seen in audits.

Category 1: Still safe. AI-assisted drafting where the workflow includes:

  • A named human author who reviewed and substantively edited the content
  • Original analysis, opinion, or first-person experience that the AI could not have produced on its own
  • Fact-checking against named sources, with citations
  • A clear point of view or perspective that distinguishes the piece from generic AI output

This is essentially what Google's helpful content guidance has always asked for: people-first content that demonstrates experience, expertise, and value beyond what a generic source could provide. AI assistance does not change the standard. It changes the workflow but not the bar.

Category 2: Gray zone. Patterns that are not explicitly prohibited but are increasingly hard to defend post-May-15:

  • Long AI-generated FAQ blocks added to existing pages where no one verified the answers are accurate
  • AI-summarized news posts published at high cadence with thin original analysis
  • Mass-generated location pages with city names swapped and otherwise near-identical copy
  • AI-rewritten versions of competitor content presented as original
  • “Topic cluster” content built by feeding an AI a keyword list and publishing the outputs with light editing

These were always a quality problem; the May 15 clarification raises the visibility of the policy risk, because the same content now affects AI-answer surfaces in addition to organic rankings. Two surfaces, one risk. The pattern in Search Engine Land's piece on why content doesn't appear in AI Overviews is consistent: AI systems devalue thin, unoriginal content already; now the same content carries additional spam-policy exposure.

Category 3: Explicitly risky. Patterns the scaled content abuse policy explicitly covers, now extended in scope:

  • Spinning up hundreds of AI-generated pages targeting long-tail keywords
  • AI-generated content with fabricated statistics, fake case studies, or invented quotes
  • Programmatic SEO at scale where each page has minimal original value
  • “Doorway” patterns where AI generates landing pages designed to funnel traffic to a single primary page
  • AI-generated content published under fake author bylines or anonymized “team” attributions on YMYL topics

The risk profile here was already meaningful before May 15; the clarification makes it explicit that AI-answer manipulation falls under the same umbrella. We covered the related risk of site reputation abuse in adjacent posts — that policy targets a different mechanism (publishing third-party content on an established domain to exploit ranking signals) but the underlying principle is the same: scale without value is the trigger, regardless of whether the content reaches AI surfaces or traditional rankings.

The same pattern shows up in our bland tax post: generic AI-generated content does not just risk policy enforcement, it also fails to win citations in AI search even when it is technically allowed. Compliance and visibility point the same direction.

Three labeled card stacks arranged in a row representing safe, gray zone, and risky AI content categories under Google's updated spam policy framework

The six-question internal AI-content audit (run on your own site this week)

This is the practical move every owner can make. Six questions, designed to be answerable in fifteen minutes per page, on the highest-traffic and most recently published content on your site.

1. Can you name the human author of this page? Not “marketing team” or “content staff.” A specific, named, accountable person who reviewed and substantively edited the content. For YMYL topics (medical, legal, financial), is the named author someone with verifiable expertise in the topic? If the answer is “no” or “the AI wrote it and we didn't really review,” that is the first remediation target.

2. Is there a fact-check trail? Specifically: are the statistics, percentages, dollar figures, and named-source claims on the page linked to verifiable external sources? Could a Google quality rater click through and confirm each claim? Fabricated statistics are one of the most common AI-content failure modes, and they are an explicit signal of low-quality content under Google's Search Quality Rater Guidelines.

3. Is there original perspective the AI could not have produced alone? Does the page contain first-person experience, original analysis of public data, a non-obvious recommendation, or a perspective that ties together sources in a way the source articles did not? If the page is essentially a summary of three other articles with no synthesis, the value is thin.

4. What is the page's unique value to the reader? Frame this in terms of “what does the reader learn here that they could not get from the first three results on the same query?” If the answer is “nothing specific,” the page is part of the volume problem the May 15 update is meant to address even if the AI involvement is light.

5. How many similar pages did you publish in the last 90 days? Volume is the trigger Google's scaled content abuse policy explicitly names. If the answer is “we published 60 city-page variants in March,” that is the highest-priority remediation target. If the answer is “we published one substantive piece per week,” the policy risk is much lower regardless of how AI-assisted the drafting was.

6. When was this last reviewed for accuracy? YMYL pages especially need to show ongoing care. A page on tax-deadline rules last updated in 2022 is a quality signal problem regardless of whether AI wrote it. A “Last reviewed: May 2026” stamp plus an actual review for accuracy is the lowest-cost reputation signal you can add to any page.

We recommend running the audit on your top 10 highest-traffic pages first, then on every page published in the last 90 days. Pages that fail multiple questions are candidates for rewrite, depublication, or in some cases consolidation with adjacent pages. The deliverable is a one-page audit summary: which pages passed, which need rewriting, and which should be removed. That is a defensible position to bring to a board meeting or a marketing review — much more defensible than “we have 200 blog posts and we're not sure which ones are safe.”
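If your content team tracks pages in a spreadsheet, the audit above is easy to turn into a repeatable script. The sketch below is a minimal, hypothetical illustration: the field names and the "fails two or more questions" remediation rule are our own operational shorthand from this post, not anything Google publishes.

```python
# Hypothetical sketch of the six-question audit as a scoring script.
# The question keys and the two-failure remediation threshold are
# illustrative conventions from this post, not a Google policy.

AUDIT_QUESTIONS = [
    "named_author",       # Q1: a specific, accountable human author
    "fact_check_trail",   # Q2: claims linked to verifiable sources
    "original_view",      # Q3: perspective the AI could not produce alone
    "unique_value",       # Q4: something the top results don't already offer
    "not_scaled",         # Q5: not part of a high-volume near-duplicate batch
    "recently_reviewed",  # Q6: reviewed for accuracy on a sane cadence
]

def audit_page(answers: dict[str, bool]) -> dict:
    """Return the failed questions and a keep/remediate call for one page."""
    failed = [q for q in AUDIT_QUESTIONS if not answers.get(q, False)]
    return {
        "failed": failed,
        # Pages failing two or more questions are rewrite/removal candidates.
        "action": "remediate" if len(failed) >= 2 else "keep",
    }

# Example: a stale tax page with unsourced claims and no fresh review.
pages = {
    "/blog/tax-deadlines-2022": {
        "named_author": True, "fact_check_trail": False,
        "original_view": False, "unique_value": True,
        "not_scaled": True, "recently_reviewed": False,
    },
}

for url, answers in pages.items():
    result = audit_page(answers)
    print(url, result["action"], result["failed"])
```

The output of a run like this is exactly the one-page deliverable described above: which pages passed, which failed, and on which questions.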

This is the same discipline we covered in our information gain audits post — the question of what original value your page provides is the same question, framed for AI-citation visibility rather than spam-policy compliance. The two frames point the same direction.

Hand checking off items on a printed audit worksheet with a pencil beside an open laptop, representing a six-question AI content audit for a small business site

How should your content workflow change?

If you have been running an AI-assisted content workflow, the May 15 clarification does not mean you have to stop. It means the workflow needs explicit checkpoints confirming that the published content meets the helpful-content standard. Here are five workflow changes that close the most common gaps.

1. AI drafts, humans edit, names attached. Every published piece has a named author who is accountable for accuracy. The AI is a drafting tool, not a publishing tool. The human edit pass is substantive — meaning paragraphs get rewritten, examples get added or replaced, claims get sourced or cut. Light proofreading is not editing.

2. Source-link discipline at the paragraph level. Every specific claim that includes a statistic, a percentage, a dollar figure, or a quoted opinion has an inline link to a verifiable source. This is a policy in our shop and we recommend it to every SMB content team. The exercise of being forced to find the source for each claim usually catches fabricated stats before publication.

3. Originality check before publishing. Read the published piece against the top three results on the same query. If your piece contains the same information in slightly different words, that is a value problem. If your piece adds first-person experience, proprietary data, or a non-obvious synthesis, that is what AI search and traditional ranking both reward. The Search Engine Land coverage of AI Overview citation patterns confirms this — nearly half of AI Overview citations come from pages outside the top organic results, which is only possible because those pages bring something the top-ranked pages didn't.

4. Volume discipline. Pick a sustainable publishing cadence and stick to it. A small business that publishes one substantive post per week, indefinitely, will outperform a small business that publishes 60 thin posts in March and then nothing for two months. The volume problem is the most common reason for both spam-policy exposure and quality-signal devaluation.

5. Annual content review. Once a year, every page on the site gets reviewed for accuracy. Outdated content gets refreshed or retired. The “last reviewed” stamp on each page gets updated only after a real review, not as a no-op date change. This is one of the cheapest reputation-signal investments in your entire marketing budget.
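The source-link discipline in change 2 can be partially automated as a pre-publish check. The sketch below is a rough heuristic we are assuming for illustration, not a tool Google provides: it flags paragraphs that mention a statistic (a percentage or a dollar figure) but contain no inline link, so a human editor knows where to go source-hunting before publication.

```python
import re

# Hypothetical pre-publish check: flag paragraphs that contain a statistic
# but no inline link. The regexes are rough heuristics for illustration,
# not a policy requirement or an exhaustive claim detector.

STAT_PATTERN = re.compile(r"\$\d|\d+(\.\d+)?\s*(%|percent)")
LINK_PATTERN = re.compile(r"https?://|\[[^\]]+\]\([^)]+\)")  # bare URL or Markdown link

def unsourced_stat_paragraphs(markdown_text: str) -> list[str]:
    """Return paragraphs that mention a statistic but carry no source link."""
    paragraphs = [p.strip() for p in markdown_text.split("\n\n") if p.strip()]
    return [
        p for p in paragraphs
        if STAT_PATTERN.search(p) and not LINK_PATTERN.search(p)
    ]

draft = """Our tool cuts editing time by 40%.

Pricing starts at $29/month, per [the pricing page](https://example.com/pricing).
"""
for flagged in unsourced_stat_paragraphs(draft):
    print("NEEDS SOURCE:", flagged[:60])
```

A check like this does not replace the human fact-check pass; it just guarantees every flagged claim gets a human decision — find a source, or cut the number — before the page ships.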

These five workflow changes are mostly about discipline, not new tooling. The most expensive mistake we see Fort Wayne and Allen County small businesses make is paying for AI-content production services without paying for the human editorial layer that makes those tools safe to publish. The cost of one paragraph rewritten by a human is much smaller than the cost of a manual action that drops your traffic 40 percent for six months. We covered the ROI math in detail in our content marketing ROI post.

Small editorial team gathered around a tablet reviewing a content workflow diagram with named author and source check stages for AI-assisted publishing

How does this fit into the broader 2026 Google quality direction?

The May 15 clarification is one piece of a longer arc. Search Engine Land's coverage of the March 2026 core update noted that quality-focused updates have been arriving on a consistent cadence, and the helpful-content framework has been the dominant theme since 2022. The trajectory is not “Google is hostile to AI” — it is “Google rewards content that demonstrates value, regardless of how it was produced, and devalues content that doesn't.”

Three patterns worth watching as the rest of 2026 unfolds:

  • More enforcement of scaled content abuse. Google has not published the cadence at which it expects to take manual actions, but the May 15 clarification reads as preparation for a wave. SMBs running high-volume programmatic content should treat the next 90 days as a window to clean up before enforcement activity intensifies.
  • Tighter integration between AI Overviews and traditional rankings. As the same policy framework applies to both surfaces, the signals that matter for one tend to matter for the other. Investing in E-E-A-T signals — author bios, citation footprints, original data — pays double for content that has to perform in both.
  • More transparency requirements. We expect Google to publish more guidance on what “substantive human review” of AI-assisted content looks like, possibly in the form of updates to the Search Quality Rater Guidelines. That guidance will be the most authoritative interpretation of where the lines sit.

The good news for honest SMB content teams is that all of this is aligned with the kind of content people actually want to read. The owner who writes from experience, sources their claims, names their author, and updates their pages is the same owner who builds an audience over time. The May 15 update is more clarifying than constraining for that pattern. It is constraining mostly for the pattern of “publish at volume, accept the noise, hope the algorithm rewards quantity.” That pattern was already not working in AI search; now it is also explicitly against policy across both surfaces.

Long horizontal road view from a low angle representing the trajectory of Google's 2026 quality direction for AI-generated and helpful content

For SMBs that want this kind of editorial discipline built into their content operation without standing up an internal editorial team, Button Block's Content Marketing services and Answer Engine Optimization services cover the workflow, the audits, and the publishing cadence as a managed package. Every piece we publish under a client byline goes through a human edit, a source-link check, and a Rule-7-style review for any compliance-sensitive industry (legal, healthcare, financial). That is not a marketing claim — it is the workflow that makes the published content defensible under exactly the policy clarification Google just made.

Our Answer Engine Optimization guide is the longer reference on the full AEO discipline; this post is the policy-compliance overlay on top of that work.

Want a second pair of eyes on your content workflow?

If you want to know whether your existing content workflow would survive a Google enforcement wave, our free 30-minute content audit covers a sampled run of the six-question audit above against your highest-traffic pages and the most recent 90 days of publishing. We will tell you honestly what we see, including if everything looks fine and no remediation is needed.

Frequently Asked Questions

Did Google ban AI-generated content on May 15, 2026?
No. Google clarified that its existing spam policies — including scaled content abuse — apply to manipulation of generative AI responses in Google Search, not only to traditional rankings. AI-assisted content with substantive human review, fact-checking, and original value remains permitted. The category that has always been against policy is scaled AI generation with thin or fabricated information; that did not change. The scope of where the policy applies did.
Can my small business get a manual action for using ChatGPT to draft blog posts?
A manual action for AI-assisted drafting is unlikely if the workflow includes substantive human editing, fact-checked claims, named authorship, and a sustainable publishing cadence. The patterns that draw manual actions under Google's scaled-content-abuse policy are high-volume publication of thin AI-generated content, fabricated statistics, doorway-page patterns, and content with no accountable human author. Drafting with AI and editing carefully is not the risk profile the policy targets.
How many AI-generated pages is too many?
Google has not published a specific page-count threshold. The scaled-content-abuse policy uses qualitative language about generating "many pages without adding value." In our experience auditing SMB sites, sustained patterns of publishing 20-plus near-identical AI-generated pages per month, or any volume of pages with fabricated information, are the patterns that raise risk. A small business publishing one well-edited piece per week with original perspective is at low risk regardless of AI involvement.
Do I need to disclose AI involvement in my content?
Google does not require disclosure of AI tool usage. The Search Quality Rater Guidelines focus on whether content demonstrates value and expertise, not on the production tooling used. That said, transparency tends to align with trust signals — and trust is one of the E-E-A-T pillars. Some publishers choose to disclose AI assistance; the choice is editorial, not regulatory.
Should I delete old AI-generated content?
Run the six-question audit first. Old AI-generated pages that pass — named author, sourced claims, original value, not part of a scaled-content pattern — are usually fine to keep. Pages that fail multiple audit questions are candidates for rewrite or removal. Mass deletion is rarely the right move; targeted remediation of the highest-risk pages is.
Does the May 15 update affect AI Overview citations?
It affects the policy framework that governs them, not the technical mechanics of how citations are selected. Content that violates the spam policies risks losing AI Overview citation eligibility in addition to traditional ranking eligibility. Content that complies with the helpful-content framework is more likely to win citations because it is the content AI synthesis tends to favor anyway.
What's the single most important workflow change I should make?
Add a named, accountable human author to every published page, and require a fact-check pass for any specific claim before publication. Those two changes close the largest share of policy risk and quality risk together. They are also cheap to implement — they are workflow discipline, not technology investment.

Sources & Further Reading