Share of Model is the percentage of relevant AI-generated responses that mention or recommend your brand. It is the definitive metric for measuring AI visibility in 2026, and it is replacing Share of Voice as the benchmark that matters most for brand discoverability.
Traditional marketing metrics were built for a world where consumers searched, browsed, and compared. In that world, Share of Voice measured how visible your brand was across channels. But with 900 million people using ChatGPT weekly and 37% of consumers starting product searches with AI, the question is no longer "How visible are we?" It is "How often does AI recommend us?"
That is what Share of Model measures. And it changes everything about how brands think about visibility.
What Is Share of Model and Why Does It Matter?
Share of Model quantifies how often AI platforms recommend your brand when users ask about your product category. It is calculated as:
Share of Model = (Number of AI responses mentioning your brand / Total relevant category queries) x 100
If you run 200 queries related to your product category across ChatGPT, Copilot, Perplexity, and Gemini, and your brand appears in 50 of those responses, your Share of Model is 25%.
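The formula above can be sketched in a few lines of Python (a minimal illustration; the function name and the input figures are the ones from the example, not a standard library):

```python
def share_of_model(mentions: int, total_queries: int) -> float:
    """Percentage of relevant AI responses that mention the brand."""
    if total_queries <= 0:
        raise ValueError("total_queries must be positive")
    return mentions / total_queries * 100

# The worked example: 50 mentions across 200 category queries.
print(share_of_model(50, 200))  # 25.0
```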
Why This Metric Is Different
Traditional visibility metrics measure exposure. Share of Model measures recommendation. This distinction is critical.
| Metric | What It Measures | Channel | Conversion Implication |
|---|---|---|---|
| Share of Voice | Brand visibility across media | Advertising, PR, search | Awareness-level influence |
| Share of Search | Search volume for brand terms | Google, Bing | Interest-level indicator |
| Share of Shelf | Physical/digital retail presence | Retail, e-commerce | Purchase-proximity influence |
| Share of Model | AI recommendation frequency | ChatGPT, Copilot, Perplexity, Gemini | Decision-level influence |
Share of Model operates at the decision layer. When AI recommends your brand, it functions as a trusted advisor making a specific suggestion. The influence weight is dramatically higher than an ad impression or a search result listing.
Research shows that 47% of consumers say AI influences their brand trust decisions. A brand recommended by AI carries implicit endorsement. Share of Model quantifies how often you receive that endorsement.
How Do Different AI Platforms Mention Brands?
One of the complexities of Share of Model is that different AI platforms surface brands in different ways, with different frequencies and different levels of specificity.
Platform Comparison: Brand Mention Behavior
| Behavior | ChatGPT | Microsoft Copilot | Perplexity | Google Gemini |
|---|---|---|---|---|
| Mentions brands unprompted | Moderate | High | High | Low-Moderate |
| Number of brands per response | 3-5 typically | 3-7 typically | 4-8 typically | 2-4 typically |
| Includes pricing | Sometimes | Often (from Bing) | Often (with sources) | Sometimes |
| Links to brand website | Rarely | Yes (Bing links) | Yes (source cards) | Sometimes |
| Sentiment indicators | Neutral to positive | Neutral | Neutral with context | Conservative |
| Updates with real-time data | When browsing | Always | Always | When grounded |
What This Means for Measurement
You cannot measure Share of Model on one platform and assume it represents your total AI visibility. A brand might have 35% Share of Model on Perplexity (which aggressively surfaces brand recommendations) but only 12% on Google Gemini (which is more conservative about naming specific brands).
A complete Share of Model measurement requires testing across all major platforms and weighting results by platform user base:
| Platform | Estimated Weekly Active Users | Suggested Weight |
|---|---|---|
| ChatGPT | 900M | 40% |
| Google Gemini | 350M+ | 25% |
| Microsoft Copilot | 150M+ | 20% |
| Perplexity | 100M+ | 15% |
Weighted Share of Model gives you a single number that reflects your true AI visibility across the ecosystem.
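A weighted score can be computed directly from the table above (a sketch; the weights come from the table, while the per-platform scores in the example are hypothetical inputs you would measure yourself):

```python
# Platform weights from the table above (they sum to 1.0).
PLATFORM_WEIGHTS = {
    "ChatGPT": 0.40,
    "Google Gemini": 0.25,
    "Microsoft Copilot": 0.20,
    "Perplexity": 0.15,
}

def weighted_share_of_model(per_platform_som: dict) -> float:
    """Combine per-platform SoM percentages into one weighted score."""
    return sum(
        PLATFORM_WEIGHTS[platform] * som
        for platform, som in per_platform_som.items()
    )

# Hypothetical brand: strong on Perplexity, weaker on Gemini.
score = weighted_share_of_model({
    "ChatGPT": 20.0,
    "Google Gemini": 12.0,
    "Microsoft Copilot": 18.0,
    "Perplexity": 35.0,
})
print(round(score, 2))  # ~19.85
```

Note how the weighting pulls the headline number toward ChatGPT performance: a high Perplexity score alone cannot compensate for weakness on the largest platform.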
How Do You Measure Share of Model?
Measuring Share of Model requires a systematic approach. Here is the methodology used by leading AI visibility teams.
Step 1: Define Your Query Set
Build a list of 100-200 queries that represent how users in your category ask for recommendations. These should include:
- Direct category queries: "What is the best [category]?"
- Comparison queries: "[Brand A] vs [Brand B] vs alternatives"
- Use case queries: "What [category] should I use for [specific use case]?"
- Budget queries: "Best [category] under $[price]"
- Audience queries: "Best [category] for [specific audience]"
- Feature queries: "Which [category] has the best [feature]?"
Example query set for a project management software brand:
| Query Type | Example Query |
|---|---|
| Direct category | "What is the best project management software?" |
| Comparison | "Asana vs Monday.com vs alternatives" |
| Use case | "Best project management tool for remote teams" |
| Budget | "Best project management software under $10 per user" |
| Audience | "Best project management tool for startups" |
| Feature | "Which project management tool has the best Gantt charts?" |
Step 2: Run Queries Across Platforms
Execute each query on every major AI platform. Record:
- Whether your brand was mentioned (yes/no)
- Position of mention (first, second, third, etc.)
- Sentiment of mention (positive, neutral, negative)
- Context of mention (recommended, listed, compared, cautioned against)
- Whether competitors were mentioned and which ones
Step 3: Calculate Your Score
Basic Share of Model: Percentage of queries where your brand was mentioned at all.
Weighted Share of Model: Applies position weighting (first mention = 1.0, second = 0.7, third = 0.5, fourth+ = 0.3) and platform weighting.
Sentiment-Adjusted Share of Model: Discounts negative mentions and weights positive recommendations higher.
Step 4: Benchmark Against Competitors
Run the same query set and record every brand mentioned. Calculate Share of Model for each competitor.
Example benchmark output:
| Brand | Basic SoM | Weighted SoM | Sentiment-Adjusted SoM |
|---|---|---|---|
| Brand A (market leader) | 38% | 42% | 44% |
| Brand B (strong challenger) | 27% | 24% | 26% |
| Your Brand | 18% | 15% | 16% |
| Brand D | 12% | 10% | 9% |
| Brand E | 5% | 4% | 5% |
This reveals not just where you stand, but the gap you need to close and who you are competing against in AI recommendations specifically.
How Does Share of Model Compare to Share of Voice?
Share of Voice has been the standard brand visibility metric for decades. Share of Model does not make it obsolete as a metric; it measures a fundamentally different dimension of brand influence, one that sits much closer to the purchase decision.
| Dimension | Share of Voice | Share of Model |
|---|---|---|
| What it captures | How much of the conversation you own | How often AI recommends you |
| Channels | Advertising, PR, social, search | AI assistants and AI search |
| User intent | Varies (awareness to purchase) | High intent (seeking recommendation) |
| Influence mechanism | Repetition and exposure | Trusted recommendation |
| Cost to improve | Primarily paid media spend | Content, authority, structured data |
| Time to impact | Days to weeks (paid), months (organic) | Weeks to months |
| Defensibility | Low (competitors can outspend) | High (entity authority compounds) |
The defensibility point is worth emphasizing. Share of Voice can be bought. Outspend competitors on advertising and you win. Share of Model cannot simply be purchased. It requires building genuine entity authority, earning third-party mentions, creating definitive content, and maintaining consistency across the web. Once established, this authority compounds and is difficult for competitors to quickly replicate.
What Is a Good Share of Model Score?
Benchmarks vary by category, but these ranges provide a general framework based on analysis of Share of Model scores across 50 product categories.
| Score Range | Classification | Typical Profile |
|---|---|---|
| 0-5% | Minimal AI presence | Brand is rarely or never mentioned by AI |
| 5-10% | Emerging | Occasional mentions, usually not as top recommendation |
| 10-20% | Competitive | Regular mentions, appearing in multi-brand lists |
| 20-30% | Strong | Frequently recommended, often in top 3 |
| 30-40% | Category leader | Consistently recommended, often first mention |
| 40%+ | Dominant | AI's default recommendation in the category |
Industry-Specific Benchmarks
Category competitiveness significantly affects what constitutes a "good" score:
| Industry | Number of Major Competitors | Strong SoM Threshold |
|---|---|---|
| Enterprise SaaS (CRM) | 5-8 | 20%+ |
| E-commerce (DTC Skincare) | 50+ | 8%+ |
| Financial Services (Neobanks) | 10-15 | 15%+ |
| Cybersecurity | 20-30 | 10%+ |
| Project Management | 8-12 | 15%+ |
| E-commerce Platforms | 5-8 | 20%+ |
How Do You Improve Your Share of Model?
Improving Share of Model requires a multi-pronged approach targeting the factors AI models use to decide which brands to recommend.
Lever 1: Entity Authority
The strongest predictor of Share of Model is entity authority. Brands with clear, consistent, well-documented entity presence across the web get recommended more.
Actions:
- Implement comprehensive Organization and Product schema markup
- Ensure 95%+ entity consistency across all platforms
- Build and maintain Wikidata and Wikipedia presence
- Claim all knowledge panels and business listings
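As a starting point for the first action, an Organization schema block embedded in a page's JSON-LD might look like the following. This is a hedged example with placeholder values; replace every field with your actual entity data and keep it identical to your listings elsewhere:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Brand",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png",
  "description": "Placeholder one-sentence description of what the brand does.",
  "sameAs": [
    "https://en.wikipedia.org/wiki/Example",
    "https://www.wikidata.org/wiki/Q0000000",
    "https://www.linkedin.com/company/example"
  ]
}
```

The `sameAs` links are what tie the page to the Wikidata, Wikipedia, and social profiles mentioned above, which is exactly the cross-platform consistency this lever targets.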
Lever 2: Content Definitiveness
AI models recommend brands they can confidently describe. Create content that makes it easy for AI to understand and recommend your brand.
Actions:
- Publish definitive "what is" and "how it works" content about your products
- Create comparison pages that position your brand clearly against alternatives
- Include specific numbers, features, pricing, and differentiators
- Structure content with clear headers that match common AI queries
Lever 3: Third-Party Validation
AI models weight third-party mentions heavily because they serve as independent validation of brand claims.
Actions:
- Earn mentions in 50+ authoritative publications and directories
- Build a strong review presence on category-relevant platforms
- Secure expert quotes and feature articles in industry media
- Target the specific publications AI models retrieve most frequently
Lever 4: Advertising Signals
On platforms like Copilot (via Microsoft Advertising) and ChatGPT (via emerging ad formats), advertising provides incremental visibility.
Actions:
- Run Microsoft Advertising campaigns optimized for Copilot visibility
- Test ChatGPT's emerging ad placements
- Invest in Perplexity's sponsored questions for category terms
- Allocate 15-20% of digital ad budget to AI-specific channels
Expected Timeline for Improvement
| Improvement Lever | Time to Impact | Expected SoM Increase |
|---|---|---|
| Schema markup implementation | 2-4 weeks | +3-5% |
| Entity consistency fixes | 2-6 weeks | +2-4% |
| Definitive content creation | 4-8 weeks | +5-8% |
| Third-party citation building | 8-16 weeks | +5-10% |
| AI advertising campaigns | 1-2 weeks | +2-5% |
| Combined sustained effort | 3-6 months | +15-25% |
What Do High vs. Low Share of Model Brands Look Like?
Examining real patterns reveals what separates brands that AI recommends from brands it ignores.
High Share of Model Brand Profile (30%+ SoM)
- Wikipedia article with 50+ citations
- 200+ third-party mentions across authoritative sources
- Complete, consistent entity data across 30+ platforms
- Comprehensive structured data on every key page
- Definitive content ranking in top 3 on both Google and Bing for category terms
- Active presence on all major review platforms with 500+ reviews
- Strong executive thought leadership on LinkedIn and in media
Low Share of Model Brand Profile (Below 5% SoM)
- No Wikipedia presence
- Fewer than 20 third-party mentions
- Inconsistent brand name and description across platforms
- No structured data markup
- Content does not directly answer category queries
- Minimal review platform presence
- No entity differentiation from competitors with similar names
The gap between these profiles is significant, but it is closeable. Most brands sit in the 5-15% range, and systematic optimization can move them to 20-30% within 6 months.
How Often Should You Measure Share of Model?
Monthly measurement is the standard cadence. AI model updates, competitive activity, and your own optimization efforts create continuous change in Share of Model scores.
Monthly: Full query set across all platforms, competitive benchmarking, trend analysis.
Weekly: Spot-check 20-30 high-priority queries to catch sudden changes.
Quarterly: Deep analysis including new query additions, platform weighting adjustments, and strategic planning based on trends.
Track your Share of Model over time as a line chart alongside key optimization milestones. Correlating specific actions (schema implementation, PR campaign launch, content publication) with Share of Model changes reveals which levers produce the most impact for your brand.
Share of Model is not a vanity metric. It is a direct measure of how often AI recommends your brand to people making purchase decisions. With AI shopping projected to drive $20.9 billion in 2026 and AI Overviews now appearing on 14% of shopping queries, the brands that track and optimize Share of Model will capture disproportionate value from the AI-driven commerce shift.
Measure it. Benchmark it. Improve it. That is the playbook for AI visibility in 2026.