
Introduction
For most of 2025, the phrase "answer engine optimization" meant one narrow thing: get your content cited in an AI Overview or a ChatGPT answer so a human reader could see it. The content was still written for people. AI systems were the new distribution channel, but humans were still the audience.
That assumption is breaking in 2026. Autonomous AI agents — systems that read, compare, reason, and transact on behalf of users without a human reading each page — are no longer a thought experiment. They are starting to drive real bookings, real purchases, and real lead capture. And the way those agents consume content is structurally different from the way a person reads a webpage.
A new framework from a senior Google engineer puts a name on the shift. According to reporting in Search Engine Land on April 15, 2026, Addy Osmani — Director of Engineering for AI at Google Cloud — published an "Agentic Engine Optimization" playbook that outlines what agent-ready content actually looks like. The piece is notable for two reasons. First, it is one of the few pieces of first-party guidance we have from inside Google on this topic. Second, it reframes the conversation from "be visible to AI" to "be usable by AI."
This guide unpacks what the framework actually says, how it fits alongside the agentic AI protocols we covered earlier this week, and what concrete changes a small or mid-sized business — including the service companies our team works with across Fort Wayne and Northeast Indiana — should make on their sites in the next 90 days.
Key Takeaways
- Addy Osmani, Google Cloud's Director of Engineering for AI, has published an Agentic Engine Optimization framework defining five properties of agent-ready content
- The five properties are discoverability, parsability, token efficiency, capability signaling, and access control — each one maps to a different failure mode when an agent tries to use a site
- Osmani's specific recommendations include keeping key docs under defined token budgets, front-loading answers in the first 500 tokens, and preferring clean markdown over heavy HTML
- This guidance is separate from Google Search ranking — it is about whether AI agents can consume and act on your content, not about whether Google lists your page
- Service businesses preparing for AI-driven bookings should retrofit service pages, location pages, and FAQs against these five properties before spending more on traditional on-page SEO
- Local Fort Wayne SMBs in HVAC, dental, legal, and home services have a clear 90-day path that pairs this content layer with the agentic protocol layer
What Did Google's AI Director Actually Say?
The Search Engine Land coverage centers on an April 11, 2026 post by Addy Osmani in his role as Director of Engineering for AI at Google Cloud. Osmani framed the issue as a content engineering problem: documentation and marketing sites, he argued, are still being written primarily for human readers while a rapidly growing share of their traffic now comes from autonomous agents running on behalf of users.
To close that gap, Osmani proposed five properties that content needs to have before an agent can reliably use it:
- Discoverability — can an agent find the content without having to crawl your whole site
- Parsability — can an agent extract structured facts from the page without heuristic guessing
- Token efficiency — does the page fit inside a reasonable context window without wasting the agent's budget on boilerplate
- Capability signaling — does the page tell an agent what actions are possible and how to perform them
- Access control — can you authorize, rate-limit, and audit what agents are allowed to do
Each of those words is doing work. If any one property is missing, the agent usually fails silently — it visits your page, can't parse it, and moves to a competitor whose page it can parse. You never see the failure in your analytics because the agent never converts.
A few of Osmani's specific numerical guidelines were especially concrete. Per the Search Engine Land summary, he suggested:
- Quick-start guides under roughly 15,000 tokens
- Conceptual guides under roughly 20,000 tokens
- API references under roughly 25,000 tokens
- Front-loading the actual answer within the first 500 tokens of any document
These are not rules that a marketing SEO team would normally write. They come from the reality that large language models have finite context windows, and an agent using a 200,000-token context will not spend 40,000 of those tokens reading a page of stock photography captions and hero-section copy to extract one phone number.
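The "first 500 tokens" idea lends itself to a rough self-check. The sketch below uses the common ~4-characters-per-token heuristic for English prose; the business name, phone number, and filler copy are invented for illustration, and a real audit would use the target model's own tokenizer rather than this approximation.

```python
# Rough audit: does a page's key fact appear within its first ~500 tokens?
# Assumes ~4 characters per token for English text (a heuristic, not exact).

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def answer_is_front_loaded(page_text: str, key_fact: str,
                           budget_tokens: int = 500) -> bool:
    """True if key_fact begins within the first budget_tokens of page_text."""
    idx = page_text.find(key_fact)
    if idx == -1:
        return False
    # Count only the tokens an agent must read before the fact starts.
    return estimate_tokens(page_text[:idx]) <= budget_tokens

page = ("Acme HVAC repairs furnaces and heat pumps in Fort Wayne. "
        "Call (260) 555-0100 to book same-day service. " + "filler copy " * 2000)
print(answer_is_front_loaded(page, "(260) 555-0100"))  # True: fact is near the top
```

Running this against a real page's rendered text is a quick way to flag openers that bury the answer behind hero copy.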
Osmani also released an open-source audit tool called agentic-seo that scans a site against the framework. We mention it because it is the most tangible piece of first-party tooling we have for this category of work in 2026.
One important caveat from the Search Engine Land piece: this guidance is not about Google Search ranking. An Agentic Engine Optimization audit does not change your organic position in Google's blue-link results. It changes whether AI agents — including agents that ultimately recommend your business inside a Gemini, ChatGPT, or Perplexity experience — can consume your pages at all.

How Is This Different From "Regular" AEO?
A fair question at this point: haven't we been writing about answer engine optimization for a year? What is actually new here?
We covered the broader AEO playbook in our answer engine optimization guide, and the short version is that classical AEO focuses on getting cited in an AI-generated answer for a human reader. The optimization targets are things like question-format headings, concise paragraph answers, FAQPage schema, E-E-A-T signals, and bylined author pages.
Agentic Engine Optimization sits one layer deeper. The audience is not a human reading a cited snippet — it is an autonomous program that is going to take an action based on what it finds. That shifts priorities in three ways.
First, extraction matters more than persuasion. A human reader can tolerate a long intro and still pull the key fact out. An agent with a tight token budget often cannot. Osmani's "first 500 tokens" rule is effectively saying: if the answer is not at the top, the agent may leave before it gets there.
Second, action matters more than citation. The old AEO goal was a sentence in an AI Overview. The agentic goal is often a transaction: a booking, a quote request, a comparison, a product added to a cart. If your content cannot describe what actions are possible — price, availability, geographic scope, constraints — the agent cannot act on your behalf even if it cites you.
Third, machine-readable structure becomes load-bearing, not decorative. The difference between a page that mentions a price and a page that exposes a price inside structured data (Service schema's offers field, Product schema's offers, or similar) is the difference between an agent guessing and an agent being correct. The Google structured data documentation already tells you this matters for humans; the agentic era makes it matter for every non-human consumer, too.
That connects Osmani's content-layer framework to the protocol-layer work we covered recently in the 6 agentic AI protocols every business site should know. Agentic protocols define how an agent talks to your infrastructure — MCP, A2A, NLWeb, agents.json, and so on. Agentic content defines what's on the page once the agent arrives. Both layers have to be in place for an agent to actually complete a job.

What Are the Five Properties Agent-Ready Content Needs?
The framework is worth working through in a little more depth, because each property maps to a specific failure mode we see in real SMB sites — including sites that already rank well in traditional search.
Discoverability
Agents don't start by crawling your full site. They start from a handful of entry points: a homepage, a known llms.txt file, a sitemap, or a direct URL a user hands them. If your most important operational content — hours, service areas, pricing, eligibility — is buried three clicks deep, the agent will often miss it.
Two practical moves help here. One is publishing a proper llms.txt for AI discoverability at your root, pointing to the structured content an agent will actually need. Per the llms.txt proposal, the file should list canonical URLs for your key resources so an agent can fetch them directly instead of crawling the whole site. The other is making sure those URLs are stable — agents cache and re-use URLs in ways humans don't, so breaking a URL breaks more than one request.
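A minimal llms.txt, following the markdown shape described in the llmstxt.org proposal, might look like the sketch below. The business name and URLs are hypothetical:

```markdown
# Acme Heating & Cooling

> HVAC repair and installation in Fort Wayne and Allen County, IN.
> Same-day booking; 24/7 emergency service.

## Services

- [Furnace repair](https://example.com/services/furnace-repair): price range, area served, booking link
- [AC repair](https://example.com/services/ac-repair): price range, area served, booking link

## Booking

- [Book online](https://example.com/book): stable booking entry point
```

The point is the short, canonical list: an agent that fetches this one file knows exactly which URLs matter without crawling anything else.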
Parsability
Parsability is the question of whether an agent can pull structured facts from your page without guessing. In practice, this means two things: clean HTML that doesn't require a headless browser to render, and embedded structured data (JSON-LD) that describes entities, actions, and offers.
Per the Search Engine Land summary, Osmani recommends clean markdown over heavy HTML where possible for agent-targeted documentation. For marketing sites where you still need styled HTML, the closest equivalent is semantic HTML plus comprehensive JSON-LD using schema.org types like LocalBusiness, Service, Offer, and FAQPage.
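For a service business, the parsability layer can be as small as one JSON-LD block per page. The sketch below builds one in Python and prints the payload you would embed in a script tag of type application/ld+json; the business, prices, and URLs are invented for illustration, and HVACBusiness is one of schema.org's LocalBusiness subtypes.

```python
# Build a minimal LocalBusiness + Service + Offer JSON-LD block.
# All names, numbers, and URLs are hypothetical examples.
import json

local_business = {
    "@context": "https://schema.org",
    "@type": "HVACBusiness",  # a schema.org LocalBusiness subtype
    "name": "Acme Heating & Cooling",
    "url": "https://example.com",
    "telephone": "+1-260-555-0100",
    "areaServed": ["Fort Wayne IN", "Allen County IN", "DeKalb County IN"],
    "makesOffer": {
        "@type": "Offer",
        "itemOffered": {"@type": "Service", "name": "Furnace repair"},
        "priceSpecification": {
            "@type": "PriceSpecification",
            "minPrice": 150,
            "maxPrice": 600,
            "priceCurrency": "USD",
        },
    },
}

# Emit the payload for the page's <head>.
print(json.dumps(local_business, indent=2))
```

An agent that parses this block gets the phone number, service area, and price range as structured facts instead of guesses pulled from prose.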
Token efficiency
Tokens are cheap for a single request and expensive at scale. An agent making hundreds of comparisons across dozens of providers has a hard budget. Pages that front-load answers and then expand — rather than pages that bury answers behind hero copy, testimonials, and brand storytelling — get read further.
Osmani's concrete numbers (about 15k tokens for quick starts, 20k for conceptual, 25k for API references, with the actual answer in the first 500) are specific to technical documentation, but the principle generalizes. For a service-business page, the equivalent is: lead with the service, area served, price range, and booking method; put the brand story below that.
Capability signaling
Capability signaling answers the question what can I do on this page? For a SaaS product, it's API methods, rate limits, and auth. For a local service business, it's whether the business can take a booking, generate a quote, or answer an eligibility question, and under what constraints (service area, hours, insurance, deposit policy).
There are several emerging formats for this — Osmani mentions skill.md and AGENTS.md files, and we've written about how MCP servers and AI tool integration work for exposing actions programmatically via the Model Context Protocol. But you don't need a full MCP server to start. A tightly-written service page with structured offers data and a clearly documented booking URL is most of the way there for many small businesses.
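One lightweight way to document a bookable action without an MCP server is schema.org's potentialAction property carrying a ReserveAction that points at a stable booking URL. This is a sketch with hypothetical names and URLs, not a format prescribed by the Osmani framework:

```python
# Signal a bookable action in JSON-LD via schema.org potentialAction.
# Business name and URLs are hypothetical.
import json

service_page = {
    "@context": "https://schema.org",
    "@type": "Service",
    "name": "AC repair",
    "provider": {"@type": "LocalBusiness", "name": "Acme Heating & Cooling"},
    "potentialAction": {
        "@type": "ReserveAction",
        "target": {
            "@type": "EntryPoint",
            # A stable, documented booking entry point an agent can call.
            "urlTemplate": "https://example.com/book?service=ac-repair",
        },
    },
}

print(json.dumps(service_page, indent=2))
```

The structured action tells an agent both that a booking is possible and exactly which URL performs it, which is the core of capability signaling at the content layer.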
Access control
The last property is the one most SMB operators forget about until after something breaks. Access control covers who is allowed to call your agent-facing endpoints, at what rate, and with what authentication. It also covers how you audit what agents have done on your behalf.
Without basic access control, a popular agent-friendly page can get hammered by automated traffic. Simple, boring measures (a WAF rule, per-IP rate limits, bot identification in server logs) keep the agentic era from turning into an unplanned spike in unwanted load.
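Those boring measures can be sketched as a per-client token-bucket limiter. In production this logic usually lives in the WAF or reverse proxy rather than application code, but the mechanics are the same; the client identifier and rates below are illustrative.

```python
# Minimal per-client token-bucket rate limiter (illustrative sketch).
import time
from collections import defaultdict

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec        # tokens refilled per second
        self.burst = burst              # maximum bucket size
        self.tokens = defaultdict(lambda: float(burst))
        self.last = defaultdict(time.monotonic)

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last[client_id]
        self.last[client_id] = now
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens[client_id] = min(
            self.burst, self.tokens[client_id] + elapsed * self.rate
        )
        if self.tokens[client_id] >= 1:
            self.tokens[client_id] -= 1
            return True
        return False

limiter = TokenBucket(rate_per_sec=1.0, burst=5)
results = [limiter.allow("agent-ua-1") for _ in range(7)]
print(results)  # first 5 allowed, then denied until the bucket refills
```

Pairing a limiter like this with per-agent identification in server logs gives you both the throttle and the audit trail the access-control property calls for.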

How Should You Rewrite a Page for Agent-Ready Content?
Turning the framework into concrete rewrites is where most teams get stuck. Here is a practical pattern we've been applying to client work in 2026. It is not the only valid approach, but it is grounded in the Osmani framework rather than speculation.
| Layer | Before (human-first) | After (agent-ready) |
|---|---|---|
| Page opener | Brand story and hero headline | 1-2 sentence fact-plus-context block answering the page's core question |
| Page body | Narrative marketing prose | Narrative prose plus explicit facts sections: service, area, price range, hours, constraints |
| Entities | Implied (business name in paragraphs) | Explicit in JSON-LD LocalBusiness / Service / Offer blocks |
| Actions | Contact form at bottom | Documented booking URL, quote URL, eligibility URL — each with schema |
| Navigation | Human-optimized menu | Human menu + llms.txt pointing agents to canonical resources |
| Trust signals | Testimonials, logos | Testimonials and logos, plus verifiable entity data (LEI, state license, BBB) |
The point of the table is not "strip out the marketing copy." Humans still need narrative, proof, and personality to make a decision. The point is that the agent-readable layer has to exist alongside the human layer. If the only way to extract your price range is to read three paragraphs of copywriter prose, the agent is going to guess — or worse, pick a competitor whose price is in structured data.
For service businesses specifically, our recommended sequence — which we walked through in more depth in how to prepare your service business for AI direct bookings — is:
- Pick the top three service pages by revenue
- Rewrite each opener to a 500-token answer-first block
- Add Service and Offer schema with explicit areaServed, price range, and availableChannel
- Document the booking or quote action in a single stable URL
- Publish an llms.txt pointing to those URLs
- Add access control before you advertise any of it
That sequence is sufficient for most SMBs to get to an 80% agent-ready state without a full platform migration.
Honest Limits: What This Framework Does Not Do
A few caveats are worth stating plainly, because most of the AEO coverage in the industry has been uncritical.
This is one engineer's framework, not Google policy. Addy Osmani is a named, senior Google Cloud engineer, and his guidance is more authoritative than anonymous commentary on Reddit. It is not, however, a formal Google Search ranking signal or a Gemini grounding specification. The Search Engine Land piece is explicit on this point.
Token budgets will move. The specific numbers Osmani cites (15k / 20k / 25k tokens) reflect 2026 model context windows and tooling patterns. Those will shift — probably upward — over the next twelve months. Treat the principle of token efficiency as durable; treat the specific numbers as a snapshot.
Agent-ready is not the same as accessible. There is real overlap between making content parsable by agents and making it usable by people relying on assistive technology. But semantic, well-structured HTML for WCAG 2.2 accessibility is a separate discipline with its own requirements. Do both; don't conflate them.
Measurement is still primitive. Unlike classical SEO, we don't yet have a "Search Console for agents." You won't see a clean dashboard telling you which pages got parsed by which agent and which led to a conversion. Analytics for this channel is largely server-log forensics and referrer inference right now.
For the bigger strategic picture of how agent traffic relates to classical search traffic, we walked through trade-offs in agentic AI vs search marketing strategy for 2026. The short version: neither replaces the other yet, and investing in agent-ready content is additive to — not a substitute for — a working SEO program.

What Does This Mean for Fort Wayne and NE Indiana Service Businesses?
For the SMBs we work with across Fort Wayne, Auburn, Angola, and the rest of Allen and DeKalb Counties, the immediate question is usually: do we need to care about this in April 2026, or can we wait?
Our answer is that you have a narrow window to get ahead of a commodity change. Most of the local service category — HVAC, plumbing, electrical, dental, legal, home remodeling, veterinary — is not yet agent-ready. That means the operators who retrofit their service pages and location pages first will be disproportionately picked by agentic booking flows when those flows scale from a few thousand users to mainstream adoption.
The retrofit is modest. A typical Fort Wayne HVAC site, for example, has about six to ten pages doing real work — the homepage, the about page, a location page, service pages for AC, furnace, heat pump, and emergency service, and a contact page. Rewriting those six to ten pages with a 500-token opener, Service + Offer schema, and a stable booking URL is a one-to-two-week engagement, not a quarter-long project.
For dental practices, add an Insurance-accepted page and a New-Patient page to the list. For law practices in Allen County, add a Practice-area page per matter type with clear geographic scope (for example, "we handle estate planning in Allen, DeKalb, Whitley, and Noble Counties" stated as explicit areaServed in Service schema, not just as a paragraph). For home services, add a Service-area page per county.
Once that content layer is in place, pair it with the protocol layer from the agentic AI protocols post — at minimum an MCP server for booking, or a documented REST endpoint that an agent can call with reasonable auth. That combination — agent-readable content plus a callable action — is what lets a local business actually close a booking inside an agent flow, not just get mentioned in one.

Ready to Retrofit Your Site for the Agentic Era?
If your team wants help auditing an existing site against Osmani's five properties, running the structured-data and llms.txt work, or building the booking endpoint the agent will ultimately call, our AEO services team has been running this exact sequence for Northeast Indiana SMBs since the beginning of 2026.
We'll walk your top revenue pages, score them against the discoverability / parsability / token efficiency / capability / access framework, and ship the rewrites with schema, llms.txt, and access control wired up. For most six-to-ten-page service sites, that is a two-week engagement. Get in touch through our contact page to scope a project.
Start an Agentic Engine Optimization Audit
Pair the content layer with the protocol layer. Button Block runs the full retrofit for Fort Wayne and NE Indiana service businesses.
Book a scoping call
Frequently Asked Questions
What is agentic engine optimization in plain English?
Agentic engine optimization (AEO in the agent sense, not the answer-engine sense) is the practice of structuring your website so autonomous AI agents — programs that read, compare, and take actions on a user's behalf — can actually use it. It focuses on five properties: discoverability, parsability, token efficiency, capability signaling, and access control.
Is agentic engine optimization the same as AEO for AI Overviews?
No. Classic AEO optimizes for citations inside AI-generated answers that a human reads. Agentic engine optimization optimizes for agents that will take actions on behalf of a user without a human reading each page. There is overlap — structured data helps both — but the priorities differ, especially around token efficiency and capability signaling.
Will agentic engine optimization affect my Google Search rankings?
Not directly. Per the Search Engine Land coverage of Addy Osmani's framework, this guidance is about whether AI agents can consume your content, not about your blue-link rankings. Many of the underlying best practices (clean HTML, structured data, semantic markup) also happen to be good classical SEO, but you should treat the two workstreams as separate.
What is the 500-token rule and why does it matter?
Osmani recommends placing the actual answer to a page's core question within the first roughly 500 tokens of the page. Agents have limited patience for preamble; if the answer is buried behind a long hero section and brand narrative, the agent may leave before reaching it. The rule generalizes to: lead with the fact, expand with context.
Should a Fort Wayne or Northeast Indiana SMB do this in 2026?
For most local Fort Wayne or NE Indiana service businesses — HVAC, dental, legal, plumbing, home remodeling — the retrofit is a one-to-two-week project, not a quarter-long initiative. If autonomous agents will plausibly drive bookings in your category in the next twelve months, getting ahead now is cheaper than doing it under deadline pressure later. Operators who move first in the Allen and DeKalb County market are the ones agentic flows will pick when adoption scales.
What tool can I use to audit my site against this framework?
Addy Osmani released an open-source audit tool called agentic-seo that scans a site against the five-property framework. It is a reasonable starting point. For production work, you'll also want to audit your structured data with Google's Rich Results Test, verify your llms.txt file, and stress-test your access-control layer before publicly advertising any agent-facing endpoints.
How is this connected to MCP servers and agentic protocols?
Agentic engine optimization is the content layer. Protocols like the Model Context Protocol are the action layer — they define how an agent actually calls your booking endpoint, queries your inventory, or submits a quote request. You need both. Agent-ready content without a callable action gets cited but can't convert; a callable action without agent-ready content doesn't get discovered in the first place.
Sources & Further Reading
- Search Engine Land: searchengineland.com/agentic-engine-optimization-google-ai-director-474358 — Agentic engine optimization: Google AI director outlines new content playbook (April 15, 2026)
- GitHub / Addy Osmani: github.com/addyosmani/agentic-seo — agentic-seo open-source audit tool
- llmstxt.org: llmstxt.org — The llms.txt proposal for AI discoverability
- Anthropic: modelcontextprotocol.io — Model Context Protocol specification
- Schema.org: schema.org/Service — Service type documentation
- Schema.org: schema.org/LocalBusiness — LocalBusiness type documentation
- Google Search Central: developers.google.com/search/docs/appearance/structured-data/intro-structured-data — Structured data general guidelines
- W3C: w3.org/TR/WCAG22 — Web Content Accessibility Guidelines (WCAG) 2.2
