Methodology
How We Measured AI Visibility
Test Scope
Our Process
1. **Select Key URLs.** We identified 10 strategic pages representing your main services, products, and conversion paths.
2. **Design Test Queries.** We crafted 10 real-world queries your customers actually ask: brand, commercial, and intent-based.
3. **Manual Testing.** Each query was tested manually across all 4 AI platforms, with no automation: real responses only. Together this produces the 10 × 4 test matrix sketched below.
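For concreteness, here is a minimal sketch of that test matrix, assuming one query per target URL. The URLs and queries are hypothetical placeholders, not the actual pages and queries from this audit.

```python
from itertools import product

# Hypothetical placeholders: the audit used 10 real client URLs and
# 10 real customer queries, paired one query per target URL.
urls_and_queries = [
    (f"https://example.com/page-{i}", f"example query {i}") for i in range(1, 11)
]
platforms = ["ChatGPT", "Perplexity", "Google AI Overview", "Claude"]

# Each (URL/query, platform) pair is one manual test: 10 x 4 = 40 tests.
test_matrix = [
    {"url": url, "query": query, "platform": platform, "score": None}
    for (url, query), platform in product(urls_and_queries, platforms)
]

print(len(test_matrix))  # 40 tests, each scored by hand after a live session
```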
Platforms Tested
| Platform | Why It Matters | Market Position |
|---|---|---|
| ChatGPT | Largest user base, most brand searches | ~180M monthly users |
| Perplexity | Fastest growing, citation-heavy answers | ~15M monthly users |
| Google AI Overview | Integrated into Google Search results | Billions of searches |
| Claude | Enterprise adoption, detailed responses | ~10M monthly users |
URLs Tested
Scoring System
How We Calculate Visibility
For each test, we evaluate how the AI platform responds. Each response receives one of four scores on a 0–100 point scale.
Overall Score Formula:
(Sum of all test scores) ÷ (Total tests × 100) × 100 = Visibility %
BRB's score of 52.5% means that across 40 tests (10 URLs × 4 platforms), the brand achieved an average visibility equivalent to being cited in about half of all AI responses.
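As a check on the arithmetic: 52.5% across 40 tests corresponds to 2,100 points out of a maximum of 4,000. The helper below implements the formula; the per-test scores are illustrative values chosen to sum to 2,100, not the actual audit results.

```python
def visibility_percent(scores: list[int]) -> float:
    """Sum of per-test scores (0-100 each) divided by the
    maximum possible points, expressed as a percentage."""
    max_points = len(scores) * 100
    return sum(scores) / max_points * 100

# Illustrative only: 40 scores totaling 2,100 points (1500 + 600 + 0).
scores = [100] * 15 + [50] * 12 + [0] * 13
assert len(scores) == 40
print(visibility_percent(scores))  # 52.5
```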
What We Evaluate
| Dimension | What We Look For |
|---|---|
| Brand Recognition | Does AI know your brand exists? Is its information accurate? |
| Page Recommendation | Does AI link to YOUR pages, or does it send users to competitors? |
| Content Understanding | Does AI correctly describe your services, prices, and features? |
| Entity Connections | Does AI connect your brand to its location, reviews, and social proof? |
| Schema Signals | Is your structured data helping, or is it missing? (Example below.) |
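To make "schema signals" concrete, the sketch below assembles a minimal schema.org Organization block of the kind we look for in a page's JSON-LD markup. Every value here is a hypothetical placeholder, not BRB's actual data.

```python
import json

# Hypothetical Organization markup of the kind we check for; all
# values are placeholders, not BRB's real name, URL, or address.
organization_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://example.com",
    "sameAs": ["https://www.linkedin.com/company/example-brand"],
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Example City",
        "addressCountry": "US",
    },
}

# Pages typically embed this as a <script type="application/ld+json">
# block in the <head> of the page.
print(json.dumps(organization_schema, indent=2))
```

When markup like this is present and consistent with the page's visible content, AI platforms have a machine-readable entity to cite; when it is missing, they must infer the same facts from prose alone.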