FAQ

Reports and Methodology



A benchmark includes:
• frequency of brand mentions in AI-generated responses
• context (industries, markets, and areas of specialization)
• comparison with up to 5 competitors
• analysis of selected user search intents
No. Sonada does not collaborate directly with OpenAI or Google. We perform controlled visibility tests and comparative analyses to ensure objectivity and repeatability of results.
Yes. Each analysis is based on defined search intents and identical model conditions. This ensures that quarterly reports reflect real changes in how AI perceives and describes your brand.
Share of Model measures how often your brand appears in AI-generated answers compared to competitors. It also takes into account how strong and relevant those mentions are, giving a more complete picture than traditional visibility metrics.
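As a rough illustration, Share of Model can be thought of as a weighted mention share. The scoring scheme below (0.0 for an absent mention, higher for stronger, more relevant ones) and the brand names are invented for this sketch, not Sonada's published formula:

```python
# Illustrative Share of Model sketch. Each brand gets one score per
# AI-generated answer: 0.0 if absent, higher for stronger, more
# relevant mentions. The weighting is an assumption for illustration.

def share_of_model(scores: dict[str, list[float]]) -> dict[str, float]:
    """Return each brand's share of total weighted mentions."""
    totals = {brand: sum(vals) for brand, vals in scores.items()}
    grand_total = sum(totals.values())
    if grand_total == 0:
        return {brand: 0.0 for brand in scores}
    return {brand: total / grand_total for brand, total in totals.items()}

scores = {
    "YourBrand":   [1.0, 0.5, 0.0, 1.0],  # strong, weak, absent, strong
    "CompetitorA": [0.5, 0.5, 1.0, 0.0],
    "CompetitorB": [0.0, 0.0, 0.5, 0.5],
}
shares = share_of_model(scores)
```

Because the shares are normalized against all tracked brands, they always sum to 1, which makes quarter-over-quarter comparison straightforward.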
We use clean testing environments: no personal history, no prior session data, and no hidden bias. The results reflect how AI behaves by default, not how it responds to any individual user.
Yes. We compare AI-generated answers with your verified brand information to find cases where the model gives incorrect or outdated details. Identifying these issues is the first step to fixing them.
Instead of simple categories like informational or transactional, we look at more realistic user questions, for example comparing tools or searching for a solution to a specific problem. This helps us understand whether your brand is actually being recommended in real scenarios.
We analyze what other topics and attributes are connected to your brand in AI responses, for example whether your brand is linked to innovation, quality, or price. This helps adjust your positioning.
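A naive version of that association analysis can be sketched with simple keyword matching. The attribute list, brand names, and answers below are made up for illustration; real pipelines would need far more robust text analysis:

```python
from collections import Counter

# Illustrative attribute list; a real analysis would use a richer taxonomy.
ATTRIBUTES = ["innovation", "quality", "price", "support"]

def attribute_profile(answers: list[str], brand: str) -> Counter:
    """Count which attributes co-occur with the brand across answers.
    Naive case-insensitive keyword matching, for illustration only."""
    profile = Counter()
    for answer in answers:
        text = answer.lower()
        if brand.lower() in text:
            for attr in ATTRIBUTES:
                if attr in text:
                    profile[attr] += 1
    return profile

answers = [
    "Acme stands out for innovation and quality.",
    "Acme is a budget pick on price.",
    "Globex leads on quality.",
]
profile = attribute_profile(answers, "Acme")
```

The resulting counts show which attributes the model tends to attach to a brand, which is the raw material for adjusting positioning.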
Yes. Different versions of the same model can behave differently. We take this into account and show how your visibility changes across them.
Instead of relying on a single response, we run multiple versions of the same query. This allows us to identify consistent patterns and the most likely outcomes, rather than random one-off results.
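The idea can be sketched as follows: sample the same query several times and measure how consistently a brand shows up. The answers here are invented placeholders standing in for real model output:

```python
def mention_rate(answers: list[str], brand: str) -> float:
    """Fraction of sampled answers that mention the brand (case-insensitive)."""
    hits = sum(brand.lower() in answer.lower() for answer in answers)
    return hits / len(answers)

# Five sampled answers to the same query (placeholder text).
answers = [
    "Top options include Acme and Globex.",
    "Many teams choose Acme for this.",
    "Consider Globex or Initech.",
    "Acme is a common recommendation.",
    "Acme and Initech both fit this use case.",
]
rate = mention_rate(answers, "Acme")  # mentioned in 4 of 5 answers -> 0.8
```

A rate computed over many samples is far more stable than any single answer, which is the point of repeated querying.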

Precision Analytics for the Generative Era: Sonada's Methodology

At Sonada, we believe that if you cannot measure something, you cannot improve it. As digital discovery shifts from traditional search results to AI-generated answers, having clear and reliable data becomes essential. Our reporting and methodology are designed to show how large language models actually see, understand, and recommend your brand.

Data driven and practical

Our approach is based on controlled simulation. AI systems like ChatGPT, Claude, and Gemini do not behave in a fixed or predictable way. That is why we create structured testing environments where we can trigger specific user questions and scenarios. This allows us to capture results that are consistent and comparable over time. Instead of guessing, we analyze large volumes of generated responses to understand what actually leads to a brand being recommended.
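A controlled run of this kind can be sketched as a simple harness. `ask_model` is a hypothetical stand-in for a stateless LLM API call; the intents and sample count are illustrative assumptions:

```python
def ask_model(prompt: str) -> str:
    # Placeholder for a stateless LLM API call: in a real run each
    # request would go to the model fresh, with no session history.
    return f"[model answer to: {prompt}]"

def run_benchmark(intents: list[str], samples_per_intent: int = 3) -> dict[str, list[str]]:
    """Trigger each predefined user intent several times and collect
    every generated answer for later comparison."""
    return {intent: [ask_model(intent) for _ in range(samples_per_intent)]
            for intent in intents}

runs = run_benchmark([
    "best invoicing tool for freelancers",
    "alternatives to spreadsheet budgeting",
])
```

Because the same intents are replayed under identical conditions each quarter, the collected answers stay comparable over time.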

The benchmark advantage

A Sonada benchmark report is not just a visibility score. It is a practical map of where your brand stands. We measure something we call Share of Model, which shows how often your brand appears in AI-generated answers compared to your competitors. We also look deeper at how your brand is mentioned, including the context, sentiment, and associated topics. This helps identify where you are strong and where there are gaps in how AI understands your brand.