How OptiAISEO measures your brand's AI search visibility — the query templates, engines, scoring, cadence, and known limitations.
OptiAISEO queries six AI engines across three tiers (Quick Scan, Standard, Deep Audit):
| Engine | Model | Audit Tier | Queries / Run |
|---|---|---|---|
| Google Gemini | gemini-2.0-flash | All tiers | 5 (Quick), 15 (Standard), 30 (Deep) |
| Anthropic Claude | claude-3-5-sonnet-20241022 | Standard + Deep | 10 (Standard), 20 (Deep) |
| OpenAI ChatGPT | gpt-4o-mini | Standard + Deep | 10 (Standard), 20 (Deep) |
| Google AI Overview | Serper.dev SERP parse | Deep only | 10 (Direct SERP) |
| Perplexity AI | pplx-7b-online | Deep only | 5 (Deep) |
| xAI Grok | grok-2 | Deep only | 5 (Deep); limited availability |
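The tier-to-engine fan-out in the table above can be expressed as plain data. This is an illustrative sketch only; the dictionary layout and function names are ours, not OptiAISEO's actual code.

```python
# Hypothetical model of the audit-tier fan-out described in the table above.
# Engine/model identifiers mirror the docs; the structure itself is illustrative.
QUERY_PLAN = {
    "quick":    {"gemini-2.0-flash": 5},
    "standard": {"gemini-2.0-flash": 15, "claude-3-5-sonnet-20241022": 10,
                 "gpt-4o-mini": 10},
    "deep":     {"gemini-2.0-flash": 30, "claude-3-5-sonnet-20241022": 20,
                 "gpt-4o-mini": 20, "serper-ai-overview": 10,
                 "pplx-7b-online": 5, "grok-2": 5},
}

def total_queries(tier: str) -> int:
    """Total engine queries issued for one audit run at the given tier."""
    return sum(QUERY_PLAN[tier].values())
```

Summing the table this way gives 5 queries for a Quick Scan, 35 for Standard, and 90 for a Deep Audit.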
We run 5 intent categories, each with 3 query templates, against your domain and primary keywords:

- **Brand Authority:** "What is [brand]?", "Tell me about [brand]", "Who are [brand]?"
- **Topic Coverage:** "What is [primary service]?", "How does [product category] work?", "Best [industry] tools"
- **FAQ Readiness:** "How to [key action]?", "What is the price of [service]?", "Is [brand] safe/legit?"
- **Competitor Comparison:** "[Brand] vs [Competitor]", "Alternatives to [competitor]", "Best [brand category] for [use case]"
- **How-To Guidance:** "How to do [brand's key outcome]?", "Step-by-step guide for [service]", "Tutorial for [primary feature]"
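Template expansion amounts to substituting your brand, services, and competitors into the placeholders above. A minimal sketch, assuming a simple `str.format`-style substitution (the function and the two sample categories shown are hypothetical, not OptiAISEO's internals):

```python
# Hypothetical sketch: expanding intent-category templates with a brand's details.
# Placeholder fields mirror the [brand]/[competitor] placeholders in the docs.
TEMPLATES = {
    "brand_authority": ["What is {brand}?", "Tell me about {brand}",
                        "Who are {brand}?"],
    "competitor_comparison": ["{brand} vs {competitor}",
                              "Alternatives to {competitor}"],
}

def expand(category: str, **fields: str) -> list[str]:
    """Fill one category's templates with brand-specific values."""
    return [t.format(**fields) for t in TEMPLATES[category]]
```

For example, `expand("competitor_comparison", brand="Acme", competitor="Globex")` yields `["Acme vs Globex", "Alternatives to Globex"]`.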
| Plan | Auto-Refresh Cadence | Manual Scans / Month |
|---|---|---|
| Free | Manual only | 3 |
| Pro | Weekly (Monday 08:00 UTC) | 20 |
| Agency | Weekly (Monday 06:00 UTC) | Unlimited |
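The weekly auto-refresh slots above ("next Monday at a fixed UTC hour") reduce to a small date calculation. A minimal sketch, not OptiAISEO's scheduler:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of the weekly auto-refresh slot (e.g. Pro: Monday 08:00 UTC).
def next_refresh(now: datetime, hour: int = 8) -> datetime:
    """Next Monday at `hour`:00 UTC, strictly after `now`."""
    days_ahead = (0 - now.weekday()) % 7  # Monday == 0
    candidate = (now + timedelta(days=days_ahead)).replace(
        hour=hour, minute=0, second=0, microsecond=0)
    if candidate <= now:  # already past this Monday's slot
        candidate += timedelta(days=7)
    return candidate
```

From a Wednesday, this returns the following Monday; from Monday 09:00 UTC (after the 08:00 slot), it rolls over a full week.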
Quarterly, we run our AEO methodology against 20 known brands (10 well-cited, 10 intentionally uncited) and publish the error rate in our changelog. Our current false-positive rate (brand incorrectly scored as cited) is < 4%. False-negative rate (brand cited but missed by our parser) is < 7%.
Last validated: Q1 2026. Next scheduled validation: Q2 2026.
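The quarterly validation math is straightforward: compare parser output against ground-truth labels for the 20 benchmark brands. A sketch of the rate definitions (illustrative; the benchmark data shown in the test is synthetic):

```python
# Hypothetical sketch of the validation metrics: truth[i] is whether brand i
# really is cited; predicted[i] is what the citation parser reported.
def error_rates(truth: list[bool], predicted: list[bool]) -> tuple[float, float]:
    """Return (false_positive_rate, false_negative_rate).

    FP rate: fraction of truly uncited brands scored as cited.
    FN rate: fraction of truly cited brands the parser missed.
    """
    fp = sum(1 for t, p in zip(truth, predicted) if not t and p)
    fn = sum(1 for t, p in zip(truth, predicted) if t and not p)
    negatives = sum(1 for t in truth if not t)
    positives = sum(1 for t in truth if t)
    return fp / negatives, fn / positives
```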
Grok API constraints
Grok (xAI's model, served via X) has limited API availability and strict rate limits. Results may be missing when the Grok API is rate-limited or down; in that case we skip Grok gracefully and note the gap in your report.
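The "skip gracefully" behavior amounts to catching the rate-limit failure and recording a gap instead of aborting the run. A minimal sketch; `GrokRateLimited` and the `call` hook are stand-ins, not a real SDK:

```python
# Hypothetical sketch of graceful engine skipping. GrokRateLimited is a
# stand-in exception type, not part of any real Grok client library.
class GrokRateLimited(Exception):
    pass

def run_engine(query: str, call) -> dict:
    """Run one engine query; on rate limit, record the gap instead of failing."""
    try:
        return {"status": "ok", "answer": call(query)}
    except GrokRateLimited:
        return {"status": "skipped",
                "note": "Grok rate-limited; excluded from this report"}
```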
Model update lag
When AI models receive significant updates (e.g. a new GPT-4 training cutoff), citation patterns can shift within 24–48 hours. Our scores may temporarily over- or under-state visibility immediately after a major model update.
RAG and real-time search
Some AI engines (Perplexity, Gemini with grounding) use real-time web search to augment responses. Scores for these engines reflect current indexed content and may differ from engines using only training data.
Brand name ambiguity
Brands with generic names (e.g. 'Pipe', 'Beam') may receive false positives when the brand name appears in AI responses for unrelated reasons. We recommend adding a unique brand phrase in site settings to improve detection accuracy.
Geolocation bias
All queries are sent from US-based infrastructure. Brands with primarily local or regional presence may score lower than their actual regional visibility. Country-specific AEO scanning is on our roadmap.