Locale & Market Methodology: How Dabudai Measures AI Visibility by Country and Language

AI answers change by market.

The same buyer question can return different links and sources across countries and languages. That is why Dabudai measures AI visibility per topic locale.

In Dabudai, each topic is configured with one country and one language.

All prompts inside the topic inherit that locale.

We apply locale in runs by sending the selected location in provider API requests and generating prompts in the topic language.

We lock country and language after topic creation because results are aggregated.
Mixing locales would break trend comparability and produce unreliable statistics.

If you need the same topic in two countries, create two topics. This keeps measurement valid and comparable over time.

How Dabudai organizes measurement (org → topics → prompts)

Dabudai uses a simple structure so results stay consistent and comparable over time.

  • Organization: your company workspace. It includes company-wide dashboards.

  • Topics: tracking units inside the organization. Each topic is one measurement dataset with its own aggregated results.

  • Prompts: the buyer questions tracked inside a topic.


This matters because Dabudai aggregates links, positions, and metrics across repeated runs.

A clean structure prevents mixing countries and languages inside the same dataset.

The core rule

Locale (country + language) is set at the topic level and stays fixed. All prompts in the topic inherit that locale.

What a “topic locale” is

A topic locale is the fixed (and locked) combination of:

  • Country (the location used for provider runs)

  • Language (the language used to generate and send prompts)


Every tracked prompt inside the topic runs using that same topic locale. Country is passed in the API request as the selected location, and prompts are sent in the topic language. This prevents mixed inputs in one aggregated dataset and keeps results aligned with what users in that market see in provider interfaces.

How locale is applied in provider API runs

Locale is applied at run time, not treated as a label.
Each topic uses one fixed country + language, and all prompts inherit it.

Prompt language is set automatically

Prompt language is derived from the topic language.
Prompts are generated and sent to providers in that language.
This prevents mixed-language measurement inside one topic dataset.

Location is sent in the API request

For each provider run, Dabudai includes the topic’s country/location in the API request.
This helps align outputs with market-specific contexts.
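The run-time behavior described above can be sketched roughly as follows. This is an illustrative model only: the class, function, and field names are assumptions for the sketch, not Dabudai's actual schema or any provider's real API.

```python
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen: locale cannot change after topic creation
class Topic:
    name: str
    country: str   # sent as the location in provider API requests
    language: str  # prompts are generated and sent in this language


def build_run_request(topic: Topic, prompt_text: str) -> dict:
    """Assemble a provider request for one tracked prompt (hypothetical shape).

    Every prompt inherits the topic's fixed locale: the country is passed
    as the request location, and the prompt is sent in the topic language.
    """
    return {
        "prompt": prompt_text,       # already generated in topic.language
        "location": topic.country,   # anchors the run to one market
        "language": topic.language,
    }


topic = Topic(name="AI Search Intelligence", country="US", language="en")
request = build_run_request(topic, "What is the best AI visibility tool?")
```

Because the locale lives on the topic and the prompt only inherits it, there is no way for two prompts in the same topic to run against different markets.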

Locale settings (quick table)

| Setting | Comes from | Why it matters |
| --- | --- | --- |
| Language | Topic language | Determines the language prompts are sent in |
| Country / location | Topic country | Anchors runs to a specific market context |

What this means for your data

Inside one topic, all results come from:

  • the same country

  • the same language

  • the same topic prompt set

This keeps topic-level dashboards market-consistent and comparable over time.

Why locale cannot be changed after setup

Dabudai aggregates data across many runs within a topic dataset.

We aggregate positions, link occurrences, visible sources, and derived metrics (for example, Share of Voice (SOV) and Recommendation Rate).

If you change country or language after tracking has started, the dataset becomes mixed.

Mixed datasets create misleading trends and incorrect comparisons.

That is why topic locale is locked after creation.
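A toy calculation makes the problem concrete. The SOV definition below is a simplified illustration (this article does not specify Dabudai's exact formulas), and all domains are made up, but it shows why mixing markets in one dataset produces a number that describes neither market.

```python
def share_of_voice(runs, brand_domain):
    """Fraction of answer runs whose visible links include the brand.

    Simplified, illustrative definition of SOV for this sketch only.
    Each element of `runs` is the list of links seen in one provider answer.
    """
    if not runs:
        return 0.0
    hits = sum(1 for links in runs if brand_domain in links)
    return hits / len(runs)


# One clean market dataset: SOV means "share of US answers citing us".
us_runs = [["acme.com", "other.com"], ["other.com"], ["acme.com"]]
us_sov = share_of_voice(us_runs, "acme.com")  # 2 of 3 runs

# Mixing in German-market runs changes the denominator mid-dataset:
# the resulting number reflects neither the US nor the German market.
de_runs = [["acme.de"], ["konkurrent.de"]]
mixed_sov = share_of_voice(us_runs + de_runs, "acme.com")
```

Any trend line drawn through `mixed_sov` would shift simply because the market mix changed, not because visibility changed, which is exactly the distortion locking the locale prevents.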

What goes wrong when locales are mixed (quick table)

| If you change… | You create… | Result |
| --- | --- | --- |
| Country | Mixed market outputs | Links and sources stop being comparable; SOV and rates become misleading |
| Language | Mixed-language prompts and answers | Measurement drift; trend lines lose meaning |
| Both | Corrupted aggregation | Metrics no longer reflect one real market over time |

How to measure the same topic across multiple countries

If you want to track the same topic in two markets, do not reuse one topic.

Create one topic per country + language, because locale is fixed and results are aggregated within a topic dataset.

The recommended setup (example)

  • Topic A: “AI Search Intelligence” — United States · English

  • Topic B: “AI Search Intelligence” — Germany · German


Both topics can use the same buyer questions.

Prompts will be sent in the topic language, and runs will use the topic country in the provider request.

Each topic produces outputs that match its market.
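The recommended setup can be sketched as two independent topic definitions that share one buyer-question set. The structure and the sample questions below are hypothetical illustrations, not Dabudai's real data model.

```python
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen: each topic's locale is locked at creation
class Topic:
    name: str
    country: str
    language: str
    prompts: tuple  # buyer questions tracked in this topic


# The same buyer-question set, reused across both market topics.
# Prompts are then generated and sent in each topic's own language.
BUYER_QUESTIONS = (
    "What is the best AI search intelligence platform?",
    "How do we measure brand visibility in AI answers?",
)

# One topic per country + language; each aggregates its own dataset.
topic_us = Topic("AI Search Intelligence", "US", "en", BUYER_QUESTIONS)
topic_de = Topic("AI Search Intelligence", "DE", "de", BUYER_QUESTIONS)
```

The question set is shared, but every run, link list, and metric stays inside exactly one market dataset.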

Goal → setup (quick table)

| Goal | Correct setup |
| --- | --- |
| Track one topic in two countries | Create two topics (one per locale) |
| Compare performance over time | Compare within the same topic (same locale) |
| Expand to a new market later | Create a new topic for that locale |


Why this improves validity

This approach keeps:

  • clean aggregation per market

  • comparable trends over time (no mixed locale data)

  • accurate page-level link and source lists as shown in provider answers

Video: Organization vs Topic dashboards (avoid mixed-locale data)

This video explains how to use Dabudai dashboards without mixing markets.
It shows the difference between company-wide reporting and topic-level execution.


Key rule: A Topic is one market dataset (country + language locked).

What the video covers

  • The dashboard hierarchy: Organization → Topics → Prompts

  • Why topics are locale-locked (country + language)

  • When to use Organization vs Topic views

  • How to go from “what changed” to “what to do next” (repeatable workflow)

When to use which view (quick table)

| What you want to do | Best view |
| --- | --- |
| Report overall trends to leadership | Organization dashboard |
| Diagnose why performance changed | Topic dashboard |
| Decide what to ship next | Topic dashboard |
| Compare results fairly | Same Topic (same locale) |

Organization-level dashboards (company-wide)

In the Organization dashboard, you see a roll-up of performance across the company.

Best for:

  • leadership updates

  • weekly reporting

  • spotting overall movement in metrics

Key idea:
Organization dashboards summarize multiple topics and prompt sets.

Topic-level dashboards (market-accurate)

In a Topic dashboard, you see detailed results for one measurement dataset:

  • that topic

  • all prompts inside the topic

  • the fixed country + language (locale)

Locale is fixed after topic creation and cannot be edited. This view is best for execution because it reflects one market setup.

Market setup checklist (copy-paste)

Use this checklist when creating a topic.

Key rule: One topic = one market dataset (country + language). Locale is locked after creation.

Topic setup checklist

  1. Pick a topic that matches a real buyer intent area.

  2. Choose one country where you sell or want to grow.

  3. Choose one language your buyers use in that market (prompts will be sent in this language).

  4. Select your providers (ChatGPT, Google AI Overviews, Google AI Mode).

  5. Add 20 buyer questions for that topic (ICP-relevant).

  6. Start tracking and keep the locale fixed for the measurement window (data is aggregated).

  7. If you need another market, create a second topic with a different country/language.
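The checklist above can be expressed as a simple pre-flight check. The rules and the 20-question threshold come from the checklist itself; the function and its behavior are an illustrative sketch, not a Dabudai feature.

```python
def check_topic_setup(country, language, providers, prompts):
    """Return a list of checklist problems for a topic draft (sketch only)."""
    problems = []
    if not country:
        problems.append("choose one country where you sell or want to grow")
    if not language:
        problems.append("choose one language your buyers use in that market")
    if not providers:
        problems.append("select at least one provider")
    if len(prompts) < 20:
        problems.append("add 20 buyer questions for that topic (ICP-relevant)")
    return problems


issues = check_topic_setup(
    country="US",
    language="en",
    providers=["ChatGPT", "Google AI Overviews"],
    prompts=["placeholder question"] * 20,
)
# an empty list means the topic draft passes the checklist
```

If you later need another market, running the same check on a second draft with a different country/language keeps both topics valid on their own terms.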

Common mistakes to avoid (quick table)

| Mistake | Why it breaks measurement | Better approach |
| --- | --- | --- |
| Mixing markets in one topic | Aggregates incomparable outputs | One topic per market |
| Trying to change country or language mid-topic | Creates mixed datasets and unreliable trends | Locale is locked; create a new topic |
| Using different prompt formats over time | Introduces prompt drift | Keep a stable format + change log |
| Comparing topics with different locales | Not an apples-to-apples comparison | Compare within the same locale (or use separate topics per market) |

FAQ

Why can the same buyer question produce different results in different countries or languages?

Providers can return different links and sources by market.
Country and language affect what pages are considered most relevant.


How is locale applied in Dabudai?

Locale is applied at run time.
We send the selected country/location in provider API requests, and prompts are generated and sent in the topic language.


Why is locale set at the topic level, not per prompt?

Because Dabudai aggregates results across prompts and runs within a topic dataset.
Topic-level locale keeps the dataset consistent, and all prompts inherit that locale.


Can we change country or language after a topic is created?

No. Topic locale is locked to protect the aggregated dataset from mixed statistics.
If you need a new locale, create a new topic.


If we want two countries, can we reuse the same prompts?

Yes. You can reuse the same buyer question set across topics.
Just keep each topic tied to one fixed country + language.


Can we compare performance across countries?

Yes, but only by comparing separate topics (one topic per locale).
Avoid mixing locales inside one topic.


Which dashboard should we use for execution?

Use the topic dashboard. It reflects one market setup (fixed country + language) and one prompt set.
