AI Search Intelligence Methodology: How Dabudai Measures Revenue-Driving AI Visibility
Intro:
This page explains how Dabudai measures AI visibility. We measure only outcomes that can drive business results.
We count a brand mention only when the AI answer includes a clickable link to the brand’s website or a specific page.
If the user can click and visit the site, it counts.
If there is no link, we do not count it in Dabudai metrics.
Who this methodology is for:
This methodology is for teams that want measurable outcomes from AI answers.
It is built for tracking traffic, sign-ups, bookings, and revenue.
What you will learn on this page:
What Dabudai counts and doesn’t count
How we run consistent measurements
Which metrics we use (with links to the Glossary)
How to replicate the method
How we turn results into actions
The Dabudai Standard (our measurement rules)
We measure AI visibility (linked only).
We count only outcomes that a user can click.
Scope (what counts)
We count a result only when an AI answer includes a clickable link to your domain or an official page you control.
If a user can click and visit your site, it counts.
Out of scope (what we don’t count)
Brand-name mentions without any link are not included in Dabudai metrics.
We may review them as context, but they are not a KPI.
Observability rule (no guessing)
We only count what is visible in the answer.
If an engine does not display links, linked visibility is not observable for that run.
We do not estimate or guess.
Fixed rules (stable by design)
Recommendation Rate uses a fixed Top 5 list size. Each answer counts as one win or one loss.
Average Position: we always compute an average position across tracked buyer questions. When you appear in the list with a link, we use your visible position (1, 2, 3, 7, and so on).
Keep buyer questions and the output format stable to reduce noise.
Repeat runs when needed to confirm outcomes.
Example: If you appear as #4 with a link, it counts for Top 5 Recommendation Rate.
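To make the two fixed rules concrete, here is a minimal Python sketch of how they can be applied to one batch of answers. It assumes each answer is already reduced to your visible linked position (or none), and it simply excludes "not present" answers from the average; treat that handling as an assumption of this sketch, not the full rule. Function names are illustrative.

```python
# Minimal sketch of the two fixed rules, assuming each answer is summarized
# as the brand's visible linked position (an int) or None when no linked
# appearance was observed. "Not present" answers are excluded from the
# average here; this handling is an assumption for the sketch.

from typing import Optional

def is_top5_win(position: Optional[int]) -> bool:
    """One answer = one win or loss: a win is a linked appearance in the Top 5."""
    return position is not None and position <= 5

def recommendation_rate(positions: list[Optional[int]]) -> float:
    """Share of answers with a linked Top 5 appearance."""
    return sum(is_top5_win(p) for p in positions) / len(positions)

def average_position(positions: list[Optional[int]]) -> Optional[float]:
    """Average visible position across answers where a linked appearance exists."""
    present = [p for p in positions if p is not None]
    return sum(present) / len(present) if present else None

# Example: five tracked buyer questions, one answer each.
runs = [1, 4, None, 7, 2]          # visible linked positions (None = not present)
print(recommendation_rate(runs))    # 0.6 -> positions 1, 4, 2 count as Top 5 wins
print(average_position(runs))       # 3.5 -> average of 1, 4, 7, 2
```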
Buyer questions (the input)
What a buyer question is
A buyer question is a real, decision-focused query a customer asks before choosing a product or service.
We use buyer questions because they reflect what drives pipeline, sign-ups, and revenue.
In Dabudai, we track revenue-driving visibility only when answers include clickable links to your website or pages.
How we build ICP-relevant buyer questions (revenue-first)
We don’t write prompts randomly.
We build a buyer question set from real customer signals.
We combine:
Google long-tail query data (search demand + modifiers)
Call intelligence tools (for example, Gong-like platforms that analyze sales calls)
Company knowledge bases (docs, FAQs, enablement, internal guides)
Website analytics (top pages, conversions, drop-offs, onsite search)
Other customer data (CRM fields, win/loss notes, support tickets, product usage)
This keeps questions relevant to your ICP and tied to revenue intent.
The 4 core buyer question types we track
Best (shortlists and recommendations)
Vs (head-to-head decisions)
Alternatives (replacement and switching intent)
How-to (evaluation, setup, and decision steps)
Plus other buyer questions (for example, repeated questions surfaced by call intelligence tools)
How we run prompts (multi-run, programmatic)
We send the same buyer questions multiple times per day using programmatic runs.
We capture the same kinds of answers users see in:
ChatGPT
Google AI Overviews
Google AI Mode
This helps us track changes over time.
It also reduces the impact of one-off results.
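For readers who want to picture the programmatic side, here is an illustrative scheduling sketch. It is not Dabudai's internal pipeline: run_buyer_question() is a hypothetical placeholder for whatever provider-specific client or automation you use, and the run count is an example.

```python
# Illustrative multi-run schedule. run_buyer_question() is a hypothetical
# placeholder for the provider-specific call (API or browser automation);
# it is not a real Dabudai or provider API.

import datetime as dt

PROVIDERS = ["chatgpt", "google_ai_overviews", "google_ai_mode"]
RUNS_PER_DAY = 3  # example value

def run_buyer_question(provider: str, question: str) -> dict:
    """Hypothetical placeholder: fetch one raw answer payload from one provider."""
    raise NotImplementedError("provider integration goes here")

def daily_runs(questions: list[str]) -> list[dict]:
    """Send the same buyer questions to each provider several times per day."""
    results = []
    for _ in range(RUNS_PER_DAY):
        for provider in PROVIDERS:
            for question in questions:
                results.append({
                    "provider": provider,
                    "question": question,
                    "captured_at": dt.datetime.now(dt.timezone.utc).isoformat(),
                    "raw_answer": run_buyer_question(provider, question),
                })
    return results
```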
What our system collects and normalizes
Our pipeline collects each run and extracts:
The full answer text
The Top 5 recommendations (when present)
Clickable links to brand pages and competitor pages
Visible sources (when available)
Timestamps and engine metadata
We store and normalize the data so results are comparable over time.
If links are not shown, linked visibility is marked as not observable for that run.
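As a rough illustration of what a normalized record can look like, here is a minimal Python data structure. The field names are illustrative, not Dabudai's actual schema.

```python
# Minimal sketch of a normalized run record; field names are illustrative.

from dataclasses import dataclass, field

@dataclass
class RunRecord:
    provider: str                                      # e.g. "chatgpt"
    question_id: str                                   # stable buyer question ID
    captured_at: str                                   # ISO 8601 timestamp
    answer_text: str                                   # full answer text
    top5: list[str] = field(default_factory=list)      # Top 5 recommendations, when present
    links: list[str] = field(default_factory=list)     # all clickable URLs in the answer
    sources: list[str] = field(default_factory=list)   # visible source URLs, when available
    links_observable: bool = True                      # False when the provider showed no links
```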
How we show results in the client dashboard (aggregated views)
We don’t show only raw outputs.
We aggregate results and display breakouts:
Company-wide (overall performance)
By topic (topic-level visibility and links)
By buyer question (each tracked prompt)
This makes trends easy to spot and act on.
Linked pages ranking (your pages vs competitors)
We pull all linked pages in answers:
Your pages
Competitor pages
Then we rank the pages that appear most often.
This shows which pages win links, which pages lose, and which formats dominate.
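Here is a minimal sketch of this ranking step, assuming each run already carries the list of linked URLs (as in the record sketch above). The URL normalization is deliberately simple and is an assumption, not Dabudai's exact logic.

```python
# Minimal sketch: rank the pages linked most often across answers.

from collections import Counter
from urllib.parse import urlparse, urlunparse

def normalize(url: str) -> str:
    """Drop query strings and fragments so the same page counts once."""
    p = urlparse(url)
    return urlunparse((p.scheme, p.netloc.lower(), p.path.rstrip("/"), "", "", ""))

def rank_linked_pages(all_links: list[list[str]], top_n: int = 20) -> list[tuple[str, int]]:
    """all_links: one list of linked URLs per answer run."""
    counts = Counter(normalize(url) for links in all_links for url in links)
    return counts.most_common(top_n)
```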
Third-party sources list (where AI “learns” from)
We also extract third-party sources (media sites, review portals, blogs). We generate a list of external sources that shape answers. This helps teams decide where to publish, pitch, and distribute content to earn citations.
Engines and coverage (where we measure)
We measure AI visibility across the answer engines your buyers actually use. Different providers can show links differently.
That’s why we track observability and normalize outputs.
Primary providers we support
Our core providers are:
ChatGPT
Google AI Overviews
Google AI Mode
Coverage can evolve over time, but the measurement rules remain stable (see The Dabudai Standard).
Locale controls (country + language)
AI answers can change by country and language.
That is why Dabudai lets customers select:
the country for analysis
the language for analysis
This makes outputs closer to what real buyers see in that market.
What we extract from every answer (link intelligence)
For each run, we extract the full answer and all links shown in the response.
Our backend processes these links to identify:
links to your pages
links to competitor pages
third-party sources referenced by the engine
This is how we build page-level rankings and source lists.
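A minimal sketch of this classification step, assuming domain lists for your brand and your tracked competitors. The domains shown are placeholders.

```python
# Minimal sketch of link classification by domain. Domain lists are placeholders.

from urllib.parse import urlparse

YOUR_DOMAINS = {"example.com"}               # your domain(s) - placeholder
COMPETITOR_DOMAINS = {"competitor-a.com"}    # tracked competitors - placeholder

def classify_link(url: str) -> str:
    """Label a linked URL as your page, a competitor page, or a third-party source."""
    host = urlparse(url).netloc.lower().removeprefix("www.")
    if any(host == d or host.endswith("." + d) for d in YOUR_DOMAINS):
        return "your_page"
    if any(host == d or host.endswith("." + d) for d in COMPETITOR_DOMAINS):
        return "competitor_page"
    return "third_party_source"
```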
Observability by provider (table)
Provider | Shows clickable links? | What we record |
ChatGPT | Yes | Linked mentions, average position, linked pages, visible sources, domains, URLs. |
Google AI Overviews | Yes | Linked mentions, average position, linked pages, visible sources, domains, URLs. |
Google AI Mode | Yes | Linked mentions, average position, linked pages, visible sources, domains, URLs. |
Note: When links are not visible, link-based metrics are marked as not observable for that run (see The Dabudai Standard).
Data acquisition: the context layer (“Brand Truth”)
This section explains how we ground analysis in business reality.
It reduces guesswork and makes results more consistent.
What we ingest (sources)
Source | What it contributes to Brand Truth |
Website pages | Public positioning, use cases, and page-level claims |
Knowledge base | Product details, feature truth, FAQs, enablement language |
CRM | ICP fields, segments, and what drives wins and losses |
Call insights | Real buyer language, objections, decision criteria, intent signals |
What “Brand Truth” means
Brand Truth is the set of validated claims AI should repeat accurately about your business.
It typically includes your one-line positioning, ICP, core use cases, proof points, and constraints (what you are not).
Validated means the content is approved by your team, consistent across sources, and kept up to date.
How Brand Truth is used
Build and version Brand Truth (with an update log).
Run buyer questions across providers.
Compare AI output vs Brand Truth and generate actions (what to clarify, what to prove, what to publish).
Glossary: Brand Truth →
Measurement framework (metrics used in Dabudai)
This section is a quick reference for the metrics you see in Dabudai.
Metrics overview table (linked-only)
Metric | What we measure (linked-only) | Where to find the definition |
AI Visibility Score | % buyer questions where AI shows a clickable link to your site or a specific page | Glossary → [link] |
AI Share of Voice (AI SOV) | Your share of linked mentions vs competitors in the same buyer question set | Glossary → [link] |
Average Position | Your average placement across tracked buyer questions, based on linked list positions | Glossary → [link] |
Recommendation Rate (Top 5) | % buyer questions where you appear in the Top 5 recommendations with a link | Glossary → [link] |
Citation / Source Coverage | % answers that link to your pages as sources (when sources are visible) | Glossary → [link] |
Short note: All metrics above are computed using Dabudai’s linked-only standard (see The Dabudai Standard).
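For readers who prefer formulas in code, here is a minimal sketch of three of these linked-only metrics computed from per-question summaries (Recommendation Rate and Average Position are sketched under The Dabudai Standard above). The input shape and field names are assumptions for illustration.

```python
# Minimal sketch of linked-only metrics from per-question summaries.
# Field names are illustrative, not Dabudai's actual schema.

from dataclasses import dataclass
from typing import Optional

@dataclass
class QuestionSummary:
    you_linked: bool                 # any clickable link to your domain/page
    your_linked_mentions: int        # linked mentions of you in this answer
    competitor_linked_mentions: int  # linked mentions of competitors
    sources_visible: bool            # provider showed sources for this run
    cited_as_source: bool            # your page appears among visible sources

def ai_visibility_score(rows: list[QuestionSummary]) -> float:
    """% of buyer questions answered with a clickable link to your site."""
    return sum(r.you_linked for r in rows) / len(rows)

def ai_share_of_voice(rows: list[QuestionSummary]) -> float:
    """Your share of linked mentions vs competitors across the question set."""
    yours = sum(r.your_linked_mentions for r in rows)
    total = yours + sum(r.competitor_linked_mentions for r in rows)
    return yours / total if total else 0.0

def citation_coverage(rows: list[QuestionSummary]) -> Optional[float]:
    """% of answers with visible sources that link to your pages."""
    observable = [r for r in rows if r.sources_visible]
    if not observable:
        return None  # not observable for this set; do not guess
    return sum(r.cited_as_source for r in observable) / len(observable)
```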
“What counts” quick rules
Link required: we count outcomes only when a clickable link to your domain/page is visible in the answer.
Recommendation Rate is strict: it counts only when you are in the Top 5 with a link (win/loss per buyer question).
Average Position is always available: we compute it across all tracked buyer questions using a consistent “not present” handling rule (defined in The Dabudai Standard).
Observability: if links or sources are not visible in a provider UI, link-based metrics are marked as not observable for that run. We do not guess.
Which metric to use for which goal (decision table)
Use this table to choose one primary metric and one next action.
All success signals below refer to linked outcomes (see The Dabudai Standard).
Goal | Primary metric | Next action (one) | Best page type | Success signal |
Drive more revenue traffic from AI | Recommendation Rate (Top 5) | Publish one citable proof page for your main claim | Proof page | Higher Top 5 rate (with link) |
Beat competitor links | AI Share of Voice (AI SOV) | Create one “vs” page with decision criteria | Comparison page | SOV lift vs competitor (linked) |
Improve placement | Average Position | Add a criteria table + trade-offs to your key page | Comparison section | Better average position over time |
Increase citable pages | Citation / Source Coverage | Add Definition + Steps + FAQ to key pages | Glossary / Method page | More answers linking to your pages |
Increase overall linked presence | AI Visibility Score | Add a clear “best for” section on core pages | Core page | More linked mentions across questions |
How to replicate our measurement (manual version)
This is the manual way to replicate Dabudai’s measurement.
It is time-consuming, but it is verifiable.
The 7-step manual workflow
1. Pick 20 buyer questions that match your ICP and revenue intent.
2. Choose your providers (for example: ChatGPT, Google AI Overviews, Google AI Mode).
3. Define a stable output request (Top 5 list + short reasons + links when shown).
4. Run the same 20 questions multiple times per day in each provider.
5. For every run, write down all links shown, including your pages, competitor pages, and third-party sources (media, review sites, blogs).
6. Repeat this for 7 days to capture variability and trends.
7. Calculate your metrics using the full week of answers and link data.
What to log (quick table)
What to log | Why it matters |
Provider + country + language | Results differ by locale |
Timestamp | Answers change over time |
Buyer question ID | Keeps runs comparable |
Top 5 list (if shown) | Used for shortlist metrics |
All visible links | Powers linked visibility and page ranking |
Third-party sources | Shows where influence comes from |
Note: If links are not shown in a provider interface, link-based results should be treated as not observable for that run (see The Dabudai Standard).
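If you keep the log in a CSV rather than a spreadsheet, a small helper like the sketch below keeps rows consistent with the table above. Column names are illustrative.

```python
# Minimal sketch of a manual run log as a CSV, one row per run.
# Column names mirror the "What to log" table and are illustrative.

import csv
import os

FIELDS = [
    "provider", "country", "language", "timestamp", "question_id",
    "top5_list", "visible_links", "third_party_sources", "links_observable",
]

def append_run(path: str, row: dict) -> None:
    """Append one run to the log, writing the header the first time."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(row)
```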
How we analyze results (Explain: the “why”)
Metrics show what happened.
Analysis explains why it happened.
We focus on what drives linked outcomes: the pages and sources engines choose to link to.
The 3 gaps model
Clarity gap: AI can’t describe what you do and who it is for in one sentence.
Proof gap: AI can’t support your claims with verifiable evidence.
Match gap: AI can’t confidently place you for the specific buyer question and constraints.
See detailed definitions in Glossary → [link]
Diagnosis workflow (step-by-step)
Find buyer questions where competitors get visible links and you don’t (in the selected country + language).
Compare the linked pages shown for each brand (your pages vs competitors).
Review third-party sources that appear and label the main gap: clarity, proof, or match.
Identify the strongest source drivers (first-party pages vs third-party pages).
Output a prioritized fix list mapped to a metric (what will move Top 5 rate, average position, or citation coverage).
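A minimal sketch of the first diagnosis step: listing buyer questions where competitors earned visible links and you did not. The per-question flags are assumed to come from the link classification described earlier.

```python
# Minimal sketch: surface buyer questions where competitors are linked and you are not.
# Input shape is illustrative, e.g.
# {"question_id": "q-12", "you_linked": False, "competitors_linked": True}

def link_gap_questions(results: list[dict]) -> list[str]:
    """Return IDs of buyer questions with a competitor link but no link to you."""
    return [
        r["question_id"]
        for r in results
        if r["competitors_linked"] and not r["you_linked"]
    ]
```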
Source attribution (table)
Source type | How it drives links | Best fix |
Homepage / core pages | Sets category and “best for” | Clear structure + intent sections |
Topic hubs / landing pages | Helps engines match intent by topic | Hub page + internal links + FAQ |
Proof pages | Makes claims believable | Case studies + methodology + limits |
Comparison pages | Helps selection and trade-offs | Vs pages + criteria tables |
Third-party sources | Adds coverage and reinforcement | Consistent listings + authoritative mentions |
Verification & accuracy (reducing variance)
AI systems can return different answers even for the same prompt.
That is why Dabudai reports aggregated results, not one-off outputs.
In general, a larger sample size produces a more reliable signal.
Multi-run validation (sample size matters)
We run the same buyer questions multiple times per day and aggregate outcomes. We don’t treat a single response as truth.
We track trends across runs and across the full week.
Stability controls (keep inputs comparable)
We keep the testing setup stable:
Same buyer questions
Same providers
Same output request format
Same country + language (locale)
This reduces prompt drift and makes week-to-week changes comparable.
What creates variance (and how we handle it)
Source of variance | What we do |
Model or ranking updates | Track trends across days, not single runs |
Prompt format changes | Keep a stable template and change log |
Locale differences | Fix country + language per measurement |
Link display differences | Treat link-based metrics as not observable when links aren’t shown |
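To illustrate tracking trends across days rather than single runs, here is a minimal aggregation sketch. The input shape (a date string plus a Top 5 win flag per run) is an assumption for illustration.

```python
# Minimal sketch: group runs by day and compute a daily Top 5 rate,
# so one-off outputs don't drive conclusions. Input shape is illustrative.

from collections import defaultdict

def daily_top5_rate(runs: list[tuple[str, bool]]) -> dict[str, float]:
    """runs: (date_string, top5_win) pairs; returns date -> share of Top 5 wins."""
    by_day: dict[str, list[bool]] = defaultdict(list)
    for day, win in runs:
        by_day[day].append(win)
    return {day: sum(wins) / len(wins) for day, wins in sorted(by_day.items())}
```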
Human review (when needed)
We use human review for edge cases, such as:
Ambiguous links
Mixed brand names
Unclear lists
Complex narrative interpretation
Video: How to set up repeatable AI visibility measurement
This video shows how to configure a repeatable measurement workflow.
It follows the same methodology described on this page.
Measurement principle: competitor-blind prompting
We do not include competitor brand names in prompts. This reduces bias and keeps outputs closer to what real buyers see. Competitor comparisons are done after the run, during analysis.
FAQ (methodology only, linked-only)
Why does Dabudai count only mentions with clickable links?
Because links are actionable.
A clickable link can drive traffic, sign-ups, and revenue.
This keeps measurement focused on outcomes that matter.
What if AI mentions our brand name but does not include a link?
We treat that as out of scope for Dabudai metrics.
We may review it as context, but it is not counted as measurable visibility.
Why do you use a Top 5 format for Recommendation Rate?
Top 5 shortlists match how buyers evaluate options.
A fixed list size keeps results comparable over time.
What is the difference between Average Position and Recommendation Rate (Top 5)?
Average Position shows your typical placement across tracked buyer questions.
Recommendation Rate (Top 5) shows how often you appear in the Top 5 with a link.
Use Average Position for placement trends, and Recommendation Rate for shortlist wins.
Why can the same prompt produce different answers?
Answer engines can vary outputs due to model updates, ranking shifts, and context effects.
That is why we rely on aggregated results, not a single run.
What does “not observable” mean?
“Not observable” means links or sources were not visible in the provider interface for that run.
When this happens, link-based metrics cannot be measured reliably, so we do not guess.
How often should we measure AI visibility?
For meaningful trends, measure continuously and review results weekly.
If you ship changes, re-measure using the same buyer questions to track impact.
What is the first change that usually improves Recommendation Rate (Top 5)?
Start with clarity and proof on one core page.
Add a clear “best for” section and a short proof block that supports your main claim.
Do you include competitor names in prompts to benchmark results?
No. We use competitor-blind prompting to reduce bias.
Competitor comparisons happen after the run, based on the links returned naturally.
What should we do if competitors get links but we don’t?
Look at which pages and sources the engine links to for them. Then publish one targeted asset (proof, comparison, or topic hub) that matches the same buyer intent.