Fake Brands in AI Search: A Small-Business Defensive Playbook

A fictional brand earned AI-search visibility in a recent SE Ranking experiment. Here is what it actually proves — and four defensive moats a real small business can audit in an afternoon.

Lucas M. Button - Founder & CEO at Button Block

Published: April 30, 2026 · 15 min read

Introduction

A research team at SE Ranking published an experiment this week that produced exactly the kind of headline that makes small-business owners nervous. They invented a brand — no real product, no real customers, no operating history — published content under it across a small fleet of domains, and watched it pick up visibility in ChatGPT, Perplexity, and Google AI Overviews within a month. The Search Engine Land write-up framed it cleanly: “Can a fake brand win in AI search? New experiment says yes.”

The clickbait read of that finding is “AI search is broken” or “any business can fake their way in.” Both are wrong, and both produce bad strategic responses. The careful read is more useful. The experiment proves AI search visibility follows identifiable, manipulable patterns, which means real small businesses have a defensive playbook available. It also proves that some categories — newer, lower-competition, less-reviewed verticals — are more vulnerable to fake-brand pollution than others, and that owners in those categories should adjust accordingly.

This post does the responsible reframe. We will walk through what the experiment actually did and found, what it does not prove, the four AI-search trust moats a real Allen County or DeKalb County small business already has that a fake brand cannot fake quickly, and a Saturday-morning defensive audit any service business can run. We will also be candid about the categories where the experiment's findings should make you more aggressive, not less.

Key Takeaways

  • The SE Ranking experiment, as reported in Search Engine Land, ran for one month in March 2026 across one new domain plus 11 older domains, tracked 825 prompts and 15,835 AI answers across five AI systems, and showed a fictional brand earning visibility — but 96% of that visibility came from branded searches, not non-branded competitive queries
  • The experiment proves AI search retrieval follows repeatable signals; it does not prove any brand can fake its way to category dominance, and the SE Ranking team explicitly noted topical clusters alone are not sufficient
  • A real small business has four defensive moats a fake brand cannot replicate quickly: verified reviews on multiple platforms, longitudinal NAP consistency, real customer-named case studies and outcomes, and human-author bylines with verifiable credentials
  • The Saturday-morning audit covers all four moats in 90 minutes using only Google Business Profile, Yelp, BBB, your industry directories, and your own site
  • Categories where SMBs are more vulnerable to fake-brand pollution: newer service categories with thin review density, low-competition geographies, and niche commercial intents under-served by established players. In those categories, citation-building and review velocity should be a higher priority, not a lower one
  • The defensive moves are not panic responses. They are quarter-of-work investments that compound, and they are the same investments the brand signal data in Search Engine Land shows already separate AI-search winners from losers

What Did the Experiment Actually Do?


A clear-eyed read of the Search Engine Land report gives the experiment's actual conditions, which are narrower than the headline implies.

Bogdan Babiak, writing about a research project SE Ranking ran over March 2026, describes a fictional brand created in “a real niche with actual competitors.” Content was published across one brand-new domain plus 11 domains older than one year. Seven content formats were tested: deep guides, alternatives listicles, best-of listicles, review articles, comparison pages, how-to content, and clickbait-style articles. The team tracked 825 prompts that generated 15,835 AI answers across five systems: ChatGPT, Google AI Overviews, Google AI Mode, Perplexity, and Gemini.

The headline finding is real: the fictional brand picked up visibility. But the structure of that visibility is the part that matters and is buried below the headline.

Per the SE Ranking report, 96% of the fictional brand's visibility came from branded searches — queries that already contained the brand name. New domains “struggled competing for non-branded topics against established competitors, even in low-competition niches.” In other words, the experiment did not show a fake brand defeating real competitors in head-to-head category queries. It showed that once a query mentions the brand by name, the AI systems can be coaxed into surfacing the fake brand's content as the explanatory source.

A second finding worth slowing down on: queries only the fake brand could answer generated 72% of all visibility, with the fictional brand outperforming authority domains “by up to 32x” in less than 30 days. The interpretation here is also narrower than it sounds. “Queries only the fake brand could answer” means the experiment created proprietary-sounding terminology, then ranked for queries about that terminology — a category effectively without competitors. Real categories have real competitors, and the 32x figure does not generalize past the constructed case.

Other findings reproduced from the report:

  • AI engines behave differently. Google AI Mode showed 90% consistency for branded visibility. Perplexity picked up new content fastest at one to three days but cited supporting domains. ChatGPT grew progressively. Gemini underperformed, with 60% missing citations even for branded queries.
  • Content format mattered. Deep guides averaged about 900 AI answers per page. Review articles averaged 257. Comparison pages averaged 145. How-to articles averaged 22. Clickbait averaged 19. Alternatives listicles averaged 4.
  • Volume compensated for quality in retrieval. Thirty thin pages of 500–750 words generated 1,897 total citations despite weak individual per-page performance — though the SE Ranking team notes this reflects “retrieval likelihood rather than inherent superiority.”
  • Topical clustering alone failed. A hub page with 10 supporting articles generated zero AI citations despite proper indexing and internal linking.

The team explicitly notes the data is one month from one fictional brand in one niche, and that “topic clusters are not useless” — they are “not sufficient alone.”

What Does the Experiment Not Prove?


This is the part that gets skipped on most podcasts covering this story.

It does not prove any fake brand can win. The experiment ran in a constructed niche with proprietary terminology and no real competitor occupying the brand-search slot. In a real category — HVAC contractors in Allen County, dentists in Auburn, personal-injury attorneys in Fort Wayne — the brand-search slot is not empty. It is occupied by a real business with years of reviews, named clients, and operational history. The experiment's 96% branded-search visibility is exactly what an established small business already owns by default in its real category.

It does not prove AI search is broken. AI systems, per the SE Ranking team's own framing, “don't have their own sense of truth, verification processes, or critical thinking.” That has been true of LLMs since they shipped. It is not new. What the experiment shows is that AI search responds to signals — content depth, format, consistency, repetition, retrievability — and that those signals can be manipulated. That is also true of classic Google ranking and has been since 1998. The defensive response is not despair; it is to make sure the signals around your real brand are stronger than what a fake brand could quickly construct.

It does not prove that links, reviews, NAP consistency, and earned mentions stopped mattering. Per Andrea Schultz's Search Engine Land piece, the strongest AI search signals are still brand strength, entity validation, topical authority, reputation signals, and PR signals — and her analysis of approximately 75,000 brands found brands in the top 25% for web mentions average 169 AI Overview citations versus 14 for the next quartile. The fictional-brand experiment did not have time to accumulate any of those signals; it relied on retrieval mechanics inside the AI systems. Those retrieval mechanics are real but they sit on top of the brand-strength layer, not below it.

It does not prove every category is equally vulnerable. The experiment ran in a “low-competition niche.” In a saturated, well-reviewed category, the moats below are difficult to overcome quickly. In a thin-review or newer category, they are not. The honest read of the experiment is that vulnerability is category-dependent.

The Search Engine Land write-up's own warning is the right framing: “If a completely fictional brand can generate consistent citations and favorable recommendations under certain conditions, then brand narratives in AI search may be more flexible than they seem.” Note “under certain conditions.” The conditions matter.

Four AI-Search Trust Moats a Real Small Business Already Has


A real Auburn or Fort Wayne small business has four properties a fake brand cannot fake in 30 days. These are the defensive moats. They also map onto the three signals that, per the bland tax piece in Search Engine Land, determine ordinary visibility: entity authority, information density, and signal alignment.

Moat 1 — Verified Reviews on Multiple Platforms

A fictional brand can publish content. It cannot quickly produce hundreds of timestamped, geographically-distributed, third-party-platform reviews from real customers. Google Business Profile reviews, Yelp reviews, BBB ratings, and industry-directory reviews (Houzz for home services, Avvo for legal, Healthgrades for medical) compound over years and are difficult to fabricate without leaving signals.

Per Schultz's framing in the new authority model piece, reputation signals — reviews, citations, and third-party mentions — are one of the five primary inputs. The longitudinal nature of real reviews is the defensive property. A real five-year-old business has 200 reviews dating back five years. A fake brand operating for 30 days cannot replicate that pattern without resorting to the kind of review fraud platforms now actively detect.

We covered the review-velocity side of this in our reviews and AI visibility post. The defensive update for 2026 is that the same review density that helped you in Google's local pack for the last decade is now an AI-search trust signal as well.

Moat 2 — Longitudinal NAP Consistency

NAP — name, address, phone — consistency across the web is one of the hardest signals to fabricate quickly. A real Allen County HVAC contractor has the same address and phone on Google Business Profile, Yelp, BBB, the Indiana Secretary of State business registry, the local Chamber of Commerce site, multiple supplier directories, an OEM dealer page or two, and old press mentions in the local paper. Those references accumulated over years, and their geographic and temporal distribution is the property AI systems use to confirm the entity is real.

A fictional brand can publish a contact page with a fake address. It cannot retroactively appear in a state business registry, a Chamber directory, a supplier site, and a 2018 newspaper article. The longitudinal NAP layer is the second moat.

We walked through the cleanup version of this work for Northeast Indiana businesses in our post on Fort Wayne NAP consistency for AI bots. For the defensive read, the same audit doubles as a confirmation that your moat is intact.

Moat 3 — Real Customer-Named Case Studies and Outcomes

Per LSEO's piece on verifiable claims, AI systems prioritize quantified, checkable evidence over marketing language. A real small business can publish named case studies — customer names, project specifics, timelines, outcomes, addresses where applicable — that hold up to verification. A fictional brand cannot.

The verifiable-claims layer is the third moat, and it is the one most small businesses underuse. We see service businesses with five years of operating history who have never published a single named case study, while AI systems are increasingly weighting that exact form of evidence. The LSEO piece's example contrast is useful: “fast onboarding” is weak, “average onboarding completed in 11 days across 214 mid-market accounts in 2024” is strong. For a service business, the small-business equivalent is “installed 47 furnaces in DeKalb County in 2024 with an average install time of 6 hours” — quantified, checkable, and impossible to fabricate without leaving a paper trail.

Moat 4 — Human-Author Bylines with Verifiable Credentials

The fourth moat is the byline. A real Auburn HVAC firm has a real owner with a real NATE certification, a real license number from the Indiana Plumbing and HVAC Contractors Examining Board, and real years of experience that cross-check against state licensing records. A real Fort Wayne dental practice has dentists with verifiable Indiana State Board of Dentistry license numbers and ADA membership. A real personal-injury attorney has an Indiana Bar number, court admissions, and a published track record.

These credentials show up on the site, the LinkedIn profile, the state licensing database, the professional association directory, and any expert-witness or speaking history. Per the four signals piece in Search Engine Land, authority signals are one of the four primary determinants of AI search visibility, and the credentialed-byline is the most legible authority signal a small business can produce. A fictional brand can invent an author, but the cross-reference fails.

The four moats stack. A small business with reviews, NAP consistency, named case studies, and credentialed bylines is harder to displace by a fictional brand than a small business with any one of the four. The next section is the audit.

The Saturday-Morning Defensive Audit (90 Minutes)


This is the runnable layer. A Northeast Indiana service business — HVAC, dental, legal, plumbing, electrical, roofing, landscaping, accounting — can audit all four moats in a single Saturday morning using only public tools.

Reviews moat (25 minutes). Open Google Business Profile and pull the last 90 days of reviews. Count total reviews and how many went unanswered. Open Yelp and do the same. Open BBB. Open your industry directory (Houzz, Avvo, Healthgrades, etc.). Tally total review count, average rating, and review velocity (reviews per month) across all four platforms. Flag any platform where your review velocity has dropped meaningfully in the last six months. The defensive priority is answering unanswered reviews on every platform, then closing the slowest-velocity platform's gap.
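If you would rather keep the tally in a script than a spreadsheet, here is a minimal Python sketch. The platform names and counts are hypothetical placeholders you fill in by hand from each platform's review page; the only logic is reviews-per-month math plus two flags, one for unanswered reviews and one for a velocity drop across the last two quarters.

```python
# Hypothetical tallies, filled in by hand from each platform's review page.
# reviews_90d = reviews in the last 90 days; reviews_prior_90d = the 90 days
# before that, so the comparison covers the six-month window from the audit.
platforms = {
    "Google Business Profile": {"reviews_90d": 14, "unanswered": 3, "reviews_prior_90d": 22},
    "Yelp":                    {"reviews_90d": 2,  "unanswered": 1, "reviews_prior_90d": 5},
    "BBB":                     {"reviews_90d": 0,  "unanswered": 0, "reviews_prior_90d": 1},
    "Industry directory":      {"reviews_90d": 4,  "unanswered": 2, "reviews_prior_90d": 3},
}

for name, p in platforms.items():
    velocity = p["reviews_90d"] / 3        # reviews per month, last 90 days
    prior = p["reviews_prior_90d"] / 3     # reviews per month, prior 90 days
    flags = []
    if p["unanswered"]:
        flags.append(f"{p['unanswered']} unanswered")
    if prior > 0 and velocity < 0.5 * prior:
        flags.append("velocity down more than half vs prior quarter")
    print(f"{name}: {velocity:.1f}/mo (was {prior:.1f}/mo) - {'; '.join(flags) or 'OK'}")
```

The output is the same four-row tally the spreadsheet version produces, with the flagged rows telling you where the 25 minutes of follow-up work goes.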

NAP moat (25 minutes). Search Google for "[Your Business Name]" Auburn (or your city). Open the first 20 organic results. Note the address and phone listed on each. Compare to your canonical NAP. Repeat with "[Your Business Name]" Fort Wayne if you serve Allen County, and "[Your Business Name]" Indiana for state-level coverage. Flag any mismatches. The most common are old addresses on supplier directories, old phone numbers on Chamber sites, and inconsistent business name formats (Inc. vs LLC vs no suffix). Open the Indiana Secretary of State Business Search and confirm your registered name matches what is on Google Business Profile. Open your industry licensing board's database and confirm your license number is publicly listed and correct.
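The mismatch check itself is mechanical enough to script. Below is a minimal Python sketch, assuming you paste the name, address, and phone from each of the 20 results into a list by hand; the canonical NAP and both listings shown are hypothetical. Note that the name normalization deliberately keeps the legal suffix, because Inc.-vs-LLC drift is exactly the mismatch you want flagged.

```python
import re

# Your canonical NAP, exactly as registered with the Indiana Secretary of State.
# All values here are hypothetical.
CANONICAL = ("Button Block LLC", "123 Main St, Auburn, IN 46706", "260-555-0101")

# Listings copied by hand from the first 20 organic results.
listings = [
    ("Google Business Profile", "Button Block, LLC", "123 Main Street, Auburn, IN 46706", "(260) 555-0101"),
    ("Old supplier directory",  "Button Block Inc",  "456 Oak Ave, Auburn, IN 46706",     "260-555-0199"),
]

def norm_name(s):
    # Lowercase and strip punctuation, but keep "Inc"/"LLC" so suffix drift flags.
    return re.sub(r"[.,]", "", s.lower()).strip()

def norm_addr(s):
    # Lowercase, strip punctuation, normalize the common street-type spellings.
    s = re.sub(r"[.,]", "", s.lower())
    return re.sub(r"\bstreet\b", "st", re.sub(r"\bavenue\b", "ave", s)).strip()

def norm_phone(s):
    return re.sub(r"\D", "", s)  # digits only, so formatting differences don't flag

for source, name, addr, phone in listings:
    issues = []
    if norm_name(name) != norm_name(CANONICAL[0]):
        issues.append(f"name: {name!r}")
    if norm_addr(addr) != norm_addr(CANONICAL[1]):
        issues.append(f"address: {addr!r}")
    if norm_phone(phone) != norm_phone(CANONICAL[2]):
        issues.append(f"phone: {phone!r}")
    print(f"{source}: {'MISMATCH - ' + '; '.join(issues) if issues else 'consistent'}")
```

In this sketch the Google Business Profile row comes back consistent despite its cosmetic formatting differences, while the stale supplier directory flags on all three fields, which is the typical shape of the cleanup list.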

Case studies moat (20 minutes). Open your site's services or portfolio section. Count named case studies — case studies with actual customer names, project specifics, timelines, and outcomes. Most small businesses we audit have between zero and three. The defensive target is one named case study per primary service line. If you have five service lines, that is five case studies. Plan to add the missing ones over the next quarter; they are some of the highest-leverage content you can produce. We expand on the proprietary-data side of this work in our brand clarity in AI search post.

Bylines moat (20 minutes). Audit every page on your site that has author attribution. For each author, confirm the byline links to a live author page, the author page lists verifiable credentials, the credentials cross-reference against the state licensing database or professional association, and the LinkedIn profile is current and matches. The most common gaps are content published under a generic “Editorial Team” byline that has no verifiable credential, and named authors whose bios do not list license numbers or certifications.
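If the site has more than a handful of authors, a checklist script keeps this audit honest across repeat runs. The sketch below is a hypothetical inventory, not a crawler: the author names, URLs, and license fields are placeholders, you still verify each credential by hand against the licensing database, and the script only flags whatever you left unconfirmed.

```python
# Hypothetical byline inventory: one entry per author appearing on the site.
# Fill each field with what you verified by hand; None or False means
# unverified, and the script flags it as a gap.
authors = [
    {
        "name": "Jane Doe, DDS",
        "author_page": "https://example.com/about/jane-doe",  # live author page
        "license_no": "12345678A",            # listed in the author bio
        "license_db_checked": True,           # cross-referenced against the state board
        "linkedin_current": True,             # profile current and matching
    },
    {
        "name": "Editorial Team",
        "author_page": None,
        "license_no": None,
        "license_db_checked": False,
        "linkedin_current": False,
    },
]

REQUIRED = ["author_page", "license_no", "license_db_checked", "linkedin_current"]

for a in authors:
    gaps = [field for field in REQUIRED if not a[field]]
    verdict = "verifiable" if not gaps else "GAPS: " + ", ".join(gaps)
    print(f"{a['name']}: {verdict}")
```

The generic "Editorial Team" entry fails every check, which is the point: it is the most common byline gap, and the script makes it impossible to overlook on the next quarterly pass.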

The total: about 90 minutes for the full audit. Most owners we work with finish in 60. The output is a one-page document with four columns — Reviews, NAP, Case Studies, Bylines — and a list of the largest gaps in each. That document is the next 90 days of work.

When the Experiment Should Make You More Aggressive, Not Less


The honest counterpoint section.

Some categories are genuinely more vulnerable to fake-brand pollution. The SE Ranking experiment ran in a “low-competition niche,” and the implication for SMBs in similarly-shaped categories is that the four moats above are necessary but possibly not sufficient. There are three category profiles that should adjust their response upward.

Newer service categories with thin review density. A vertical that has only existed for 5–10 years often has lower review density across all platforms. Examples include solar installers, EV charging installers, AI implementation consultants, and home-cleaning subscription services. In these categories, a 30-day-old fictional brand has less of a review gap to overcome. The defensive priority is review velocity — running a structured review-request workflow with every customer, focusing on Google and the highest-traffic industry directory in your category, and not assuming the moats are intact just because you are real.

Low-competition geographies. A small Northeast Indiana town with only one or two real businesses in a service category is a thinner moat than a saturated metro. If you are the only HVAC contractor in Waterloo or Garrett with a Google Business Profile, your moats are real but the AI systems may pick up newer entrants faster than they should. The defensive priority is geographic citation breadth — Chamber memberships, regional supplier directories, state association memberships, and industry-publication mentions that lock down your area-served claim.

Niche commercial intents under-served by established players. A specialty within a service category — say, “geothermal HVAC installation in DeKalb County” — may have no established player owning that exact intent. Real small businesses with the actual capability should claim the niche query before someone else does. The defensive move is information gain on the specialty: a single dense, named-case-study-supported page on the niche, indexed and linked from your service hub.

For these categories, we agree with the implicit takeaway from the SE Ranking experiment: the bar for what produces visibility has dropped, and real small businesses should be more aggressive about claiming the queries they actually serve before fictional brands or out-of-market players claim them first. That is not the same thing as panic. It is treating the moats as a starting position and the citation-and-content layer as the visible result.

How We Approach AI Search Defense for Northeast Indiana Clients


For clients in Allen County, DeKalb County, and the broader Northeast Indiana market, our default first-90-days plan after this kind of experiment lands is the four-moat audit, then a quarter of work on the largest gap, then a quarter on the second-largest gap. Most owners want to do all four at once; the data does not support that pace. Each moat compounds slowly, and concentrating effort on one quarter at a time produces a tighter portfolio than spreading thin across four.

The visible result is the same outcome the brand-signals model in Search Engine Land describes: the AI systems start citing the brand more often because the surrounding signals — reviews, NAP, named outcomes, credentialed bylines — are denser than competitors'. The invisible result is a position that is harder to disturb later when a fictional brand or out-of-market player comes in with content volume.

If the audit produced a list of gaps you do not have time to close yourself, that is the conversation our AEO service is built around. We start with the same audit you just ran, prioritize the gaps by leverage, and walk through a 90-day plan that closes the largest one without trying to fix everything at once. Our Answer Engine Optimization guide covers the broader framework for SMB owners who want to do the work in-house. Either path is workable. The slow version is the winning version.

Ready to Run Your Four-Moat Audit With a Partner?

Button Block helps Northeast Indiana small businesses build the review density, NAP consistency, named case studies, and credentialed bylines that hold up against AI-search experiments. Bring us the audit you just ran and we will turn it into a 90-day plan.

Frequently Asked Questions

Did a fake brand really win in AI search?
Per the Search Engine Land report on the SE Ranking experiment, a fictional brand earned visibility in ChatGPT, Perplexity, Google AI Overviews, AI Mode, and Gemini over a one-month run in March 2026. But 96% of the fictional brand’s visibility came from branded searches — queries that already named the brand. The experiment showed AI systems will surface a fictional brand’s content when the brand is named in the query, not that a fictional brand can defeat real competitors on category-level queries.
Can a fake brand replace my real business in AI search results?
In a saturated category with strong reviews, NAP consistency, named case studies, and credentialed bylines, no — the four moats compound over years and are not replicable in 30 days. In a thin-review or newer category, the answer is closer to "less easily, but possibly." The defensive response is to audit and reinforce the moats now rather than wait for the question to become urgent.
What are the strongest defensive signals against fake-brand pollution in AI search?
Per the bland tax piece in Search Engine Land and the new authority model piece, the strongest signals are entity authority (your canonical brand definition), information density (proprietary data, named case studies, original research), and signal alignment (consistency across reviews, mentions, and customer conversations). The four-moat framework in this post — reviews, NAP, named case studies, credentialed bylines — is the small-business operationalization of those signals.
Is AI search rigged?
No, and we recommend against framing it that way to clients. AI search retrieval responds to signals — content depth, format, consistency, repetition, citation density — and those signals can be optimized for, sometimes by entities the AI systems do not vet for authenticity. That is true of every retrieval system since classic web search and is not unique to AI. The strategic response is to make sure the signals around your real brand are stronger than what a fictional brand could quickly produce.
How long does the four-moat audit take for a Fort Wayne or Allen County small business?
For a typical Northeast Indiana service business, the audit takes about 90 minutes the first time, using only Google Business Profile, Yelp, BBB, your industry directory, the Indiana Secretary of State Business Search, and your industry licensing board’s public database (Indiana Plumbing and HVAC Contractors Examining Board, Indiana State Board of Dentistry, Indiana Bar). Most owners we work with finish in 60 minutes the second time. The output is a one-page list of gaps, prioritized by leverage.
Should I delete content from sites that look like they could be fictional brand competitors?
We recommend against deleting content from your own site as a defensive response to the experiment. The defensive playbook is additive — review velocity, NAP cleanup, named case studies, credentialed bylines — not subtractive. If you have legitimate concerns about a competitor’s content, work through the four moats on your own site first; it is almost always the higher-leverage move.
How does this connect to AI search reputation management?
Closely. The fake-brand experiment is the inverse of the reputation-defense problem we covered in our AI search reputation defense for small business post. Reputation defense is what to do when bad signals exist about your real brand. The four-moat playbook is what to do when no signals exist yet about a fictional brand pretending to be a competitor. Both rely on the same underlying assumption: AI systems weight the surrounding evidence about an entity, and the strongest evidence is verifiable, distributed, and longitudinal.

Sources & Further Reading