RankSmith Glossary
Understanding the terminology is essential for effective AI search monitoring. This glossary covers all key terms, metrics, and concepts used in RankSmith.
Core Metrics
Authority Index™
Definition: Your overall ranking and authority score across all monitored AI platforms.
Calculation: Composite score based on visibility rate, average ranking, mention frequency, and source citation quality.
Range: 0-100, where higher scores indicate a stronger AI search presence.
Use: Overall health metric for your AI search performance. Track it month over month to measure improvement.
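RankSmith does not publish the exact weighting behind the Authority Index™, but a weighted composite is a useful mental model. Below is a minimal sketch, assuming illustrative weights (40/25/20/15) and a simple inversion of average ranking so that better positions score higher; the weights, the inversion, and the function name are assumptions, not the actual formula.

```python
# Minimal sketch of a composite authority score. The 40/25/20/15 weights,
# the rank inversion, and the function name are illustrative assumptions,
# not RankSmith's published formula.
def authority_index(visibility_rate: float, mention_frequency: float,
                    citation_quality: float, avg_ranking: float) -> float:
    """Percentage inputs are 0-100; avg_ranking uses positions (lower is better)."""
    # Invert average ranking so position 1 scores 100 and position 11+ scores 0.
    ranking_component = max(0.0, 100 - (avg_ranking - 1) * 10)
    return round(
        0.40 * visibility_rate
        + 0.25 * ranking_component
        + 0.20 * mention_frequency
        + 0.15 * citation_quality,
        1,
    )

print(authority_index(visibility_rate=60, mention_frequency=45,
                      citation_quality=70, avg_ranking=3.0))  # 63.5
```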
Visibility Rate
Definition: The percentage of tracked prompts where your brand appears in AI responses.
Calculation: (Prompts with mentions ÷ Total tracked prompts) × 100
Example: If you’re mentioned in 30 out of 50 prompts, your Visibility Rate is 60%.
Benchmark:
- 0-20%: Low visibility, major opportunity
- 20-40%: Moderate presence
- 40-60%: Good visibility
- 60%+: Strong AI search presence
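The calculation translates directly into code; this short sketch (with a hypothetical `visibility_rate` helper) reproduces the worked example above.

```python
# Visibility Rate exactly as defined above: mentioned prompts / tracked prompts * 100.
def visibility_rate(prompts_with_mentions: int, total_prompts: int) -> float:
    if total_prompts == 0:
        return 0.0
    return prompts_with_mentions / total_prompts * 100

print(visibility_rate(30, 50))  # 60.0, matching the worked example above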
Average Ranking
Definition: Your average position when mentioned in AI responses.
Calculation: Sum of all ranking positions ÷ Number of mentions
Lower is Better: Rank #1 means you’re mentioned first (best); higher numbers mean later mentions.
Significance:
- Ranks 1-3: Premium visibility
- Ranks 4-7: Good visibility
- Ranks 8+: Needs improvement
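The same arithmetic as a sketch; prompts where you are never mentioned contribute no position, so only actual mentions are averaged.

```python
# Average Ranking as defined above: mean of the positions where the brand appeared.
# Prompts with no mention carry no position, so they are simply excluded.
def average_ranking(positions: list[int]) -> float | None:
    if not positions:
        return None  # never mentioned anywhere; no ranking to report
    return sum(positions) / len(positions)

print(average_ranking([1, 3, 2, 7]))  # 3.25
```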
Reputation Score
Definition: Sentiment analysis of how your brand is discussed in AI responses.
Categories:
- Positive: Favorable mentions, recommendations, praise
- Neutral: Factual mentions without sentiment
- Negative: Critical or unfavorable mentions
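The Units of Measurement section below notes that Reputation Score is reported as a percentage distribution. Here is a minimal aggregation sketch, assuming each mention has already been labeled with one of the three categories; the function name and data shape are assumptions.

```python
from collections import Counter

# Sketch of a Reputation Score distribution: the share of each sentiment label
# across all mentions. Assumes each mention is already classified.
def reputation_distribution(labels: list[str]) -> dict[str, float]:
    counts = Counter(labels)
    total = sum(counts.values()) or 1  # avoid division by zero
    return {label: round(100 * n / total, 1) for label, n in counts.items()}

print(reputation_distribution(
    ["positive", "positive", "neutral", "negative", "positive"]
))  # {'positive': 60.0, 'neutral': 20.0, 'negative': 20.0}
```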
Share of Voice
Definition: Your percentage of total brand mentions within your tracked prompts.
Calculation: Your mentions ÷ (Your mentions + All competitor mentions) × 100
Example: If you appear 20 times and competitors appear 80 times total, your Share of Voice is 20%.
Strategic Use: Primary competitive metric. Increasing Share of Voice means gaining market presence.
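A direct translation of the formula, reproducing the worked example; the helper name is hypothetical.

```python
# Share of Voice as defined above: your mentions as a share of all brand mentions.
def share_of_voice(your_mentions: int, competitor_mentions: int) -> float:
    total = your_mentions + competitor_mentions
    return your_mentions / total * 100 if total else 0.0

print(share_of_voice(20, 80))  # 20.0, matching the worked example above
```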
Prompt-Related Terms
Prompt
Definition: A natural language question used to test AI engine responses.
Examples:
- “What are the best project management tools?”
- “Compare Slack vs Microsoft Teams”
- “How to improve customer retention for SaaS”
Prompt Categories
Aspirational: Future-focused, industry trends, thought leadership
- Example: “Future of AI search in marketing”
Reputational: Brand perception and trust
- Example: “Is [Your Brand] worth the investment?”
Competitive: Direct comparisons between products or brands
- Example: “[Your Product] vs [Competitor]”
Educational: How-to and informational queries
- Example: “How to set up AI search monitoring”
Mention
Definition: Any instance where an AI engine includes your brand name or product in a response.
Types:
- Direct Mention: Brand name explicitly stated
- Implicit Mention: Product/service described without brand name
- Citation: Referenced as a source or authority
Ranking Position
Definition: The order in which your brand appears within an AI response.
Rank #1: First brand mentioned (most prominent)
Rank #2-3: Early mention, good visibility
Rank #4-7: Mid-tier visibility
Rank #8+: Lower prominence
Note: Not all AI responses are lists. Ranking refers to approximate positioning within the response text.
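How positions are extracted from free-form text is not documented here; one naive but illustrative approach is to rank brands by where their names first appear in the response. This is an assumed method, not RankSmith's implementation.

```python
# Illustrative only: derive ranking positions by ordering brands by where
# their names first appear in the response text.
def ranking_positions(response: str, brands: list[str]) -> dict[str, int]:
    found = [(response.lower().find(b.lower()), b) for b in brands]
    mentioned = sorted((idx, b) for idx, b in found if idx != -1)
    return {brand: rank for rank, (_, brand) in enumerate(mentioned, start=1)}

text = "For small teams, Asana and Trello stand out, while Jira suits larger orgs."
print(ranking_positions(text, ["Jira", "Asana", "Trello", "Basecamp"]))
# {'Asana': 1, 'Trello': 2, 'Jira': 3}  (Basecamp is absent, so it gets no rank)
```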
AI Platform Terms
AI Search Engine
Definition: An AI-powered platform that provides direct answers to user queries rather than traditional search result links.
Examples: ChatGPT, Claude, Perplexity, Gemini, Grok
vs. Traditional Search: Provides synthesized answers instead of links to web pages.
Source Citation
Definition: External references AI engines use when generating responses about topics.
Importance: Being cited as a source significantly increases AI visibility. AI engines trust certain sources more than others.
Types:
- Primary Sources: Directly cited in response
- Secondary Sources: Inform AI training data
- Authority Sources: High-trust publications
Training Data
Definition: The information corpus AI models use to learn and generate responses.
Relevance: Content available during model training influences whether you’re mentioned.
Limitation: Training data has cutoff dates. Recent content may not be reflected immediately.
Response Consistency
Definition: How reliably an AI engine mentions your brand across multiple queries of the same prompt.
Factors Affecting:
- Model version updates
- Prompt phrasing variations
- Training data freshness
- Randomness in AI generation
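The glossary names the concept without a formula; one simple, assumed way to quantify it is the share of repeated runs of the same prompt that mention the brand at all.

```python
# Assumed formulation: consistency as the share of repeated runs of the same
# prompt in which the brand is mentioned at all. A sketch, not RankSmith's method.
def response_consistency(runs: list[str], brand: str) -> float:
    if not runs:
        return 0.0
    hits = sum(1 for response in runs if brand.lower() in response.lower())
    return round(hits / len(runs) * 100, 1)

runs = ["Asana is great for...", "Try Trello or Asana.", "Jira leads the market."]
print(response_consistency(runs, "Asana"))  # 66.7
```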
Competitive Terms
Competitor
Definition: Brands you track alongside your own to understand relative AI search performance.
Types:
- Direct Competitors: Similar products/services
- Category Leaders: Market dominators
- Emerging Threats: Fast-growing new entrants
Co-Mention
Definition: When your brand and a competitor are both mentioned in the same AI response.
Analysis: High co-mention rates indicate a direct competitive relationship from the AI's perspective.
Strategic Use: Understand competitive positioning and differentiation opportunities.
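A plausible way to quantify this follows; the rate itself is not formally defined above, so this formulation is an assumption.

```python
# Assumed formulation of a co-mention rate: among responses mentioning your
# brand, the share that also mention a given competitor.
def co_mention_rate(responses: list[str], you: str, competitor: str) -> float:
    yours = [r for r in responses if you.lower() in r.lower()]
    if not yours:
        return 0.0
    both = sum(1 for r in yours if competitor.lower() in r.lower())
    return round(both / len(yours) * 100, 1)

responses = ["Asana and Trello both...", "Asana shines for...", "Trello is simple."]
print(co_mention_rate(responses, "Asana", "Trello"))  # 50.0
```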
Competitive Gap
Definition: Prompts where competitors appear but you don’t.
Priority Metric: These represent your biggest immediate opportunities.
Action: Create content specifically addressing these gaps.
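Given per-prompt mention data, gap detection reduces to a set comparison; the data shape and function name below are assumptions for illustration.

```python
# Gap detection as a set comparison over per-prompt mention data. The shape
# (prompt -> set of mentioned brands) is an assumption for illustration.
def competitive_gaps(mentions_by_prompt: dict[str, set[str]], you: str) -> list[str]:
    return [
        prompt for prompt, brands in mentions_by_prompt.items()
        if brands and you not in brands  # competitors present, you absent
    ]

data = {
    "best project management tools": {"Asana", "Trello"},
    "tools for remote teams": {"YourBrand", "Asana"},
    "simple kanban apps": {"Trello"},
}
print(competitive_gaps(data, "YourBrand"))
# ['best project management tools', 'simple kanban apps']
```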
Content & Strategy Terms
Content Gap
Definition: Topics or questions where you lack content, leading to AI invisibility.
Identification: Use low-performing prompts to identify gaps.
Resolution: Create comprehensive content addressing the missing topics.
Authority Building
Definition: Efforts to increase citations from high-quality sources that AI engines trust.
Methods:
- Getting featured on authoritative publications
- Building expert reputation
- Creating citation-worthy original research
- Strategic PR and media relations
Topical Relevance
Definition: How strongly your brand is associated with specific topics in an AI engine's understanding.
Building Relevance:
- Consistent content creation
- Deep topic coverage
- Authority source citations
- Expert positioning
AI-First SEO
Definition: Content and optimization strategies designed for AI engine understanding rather than traditional search engines.
Key Differences from Traditional SEO:
- Focus on comprehensive answers vs. keywords
- Emphasis on authority and trust vs. backlinks
- Natural language vs. keyword optimization
- Content depth vs. content quantity
Technical Terms
Prompt Frequency
Definition: How often RankSmith tests a specific prompt across AI platforms.
Options:
- Daily: Continuous monitoring (default)
- Weekly: Less frequent tracking
- On-Demand: Manual testing only
Platform Coverage
Definition: Which AI engines are monitored for each prompt.
RankSmith Default: All 12+ platforms
Custom: Select specific platforms for certain prompts
Historical Data
Definition: Past performance data for prompts and metrics.
Retention: RankSmith maintains complete history for trend analysis.
Use: Identify patterns, measure campaign impact, track long-term progress.
Baseline
Definition: Your initial AI search performance before optimization efforts.
Importance: Essential for measuring improvement and ROI.
Best Practice: Document baseline thoroughly before making changes.
Performance Indicators
Trend
Definition: Direction and rate of change in a metric over time.
Types:
- Improving: Upward trajectory (positive)
- Declining: Downward movement (concern)
- Stable: Consistent performance
- Volatile: Significant fluctuations
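The four types can be operationalized with a simple heuristic; the volatility test below (spread dominates net change) is an illustrative assumption rather than RankSmith's classification logic.

```python
import statistics

# Heuristic mapping of a metric series onto the four trend types above.
def classify_trend(values: list[float]) -> str:
    if len(values) < 2:
        return "Stable"
    change = values[-1] - values[0]
    spread = statistics.pstdev(values)
    if spread > abs(change):
        return "Volatile"  # swings dominate the net movement
    if change > 0:
        return "Improving"
    if change < 0:
        return "Declining"
    return "Stable"

print(classify_trend([40, 44, 47, 52]))  # Improving
```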
Benchmark
Definition: Standard or reference point for comparison.
Types:
- Industry Benchmark: Average for your sector
- Competitive Benchmark: vs. specific competitors
- Historical Benchmark: vs. your own past performance
Performance Threshold
Definition: Predetermined metric levels that trigger actions or alerts.
Examples:
- Visibility Rate drops below 40%
- Competitor Share of Voice exceeds yours by 20%
- Average Ranking slips past #5 (the numeric position rises above 5)
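These examples translate naturally into automated checks; the metric key names in this sketch are hypothetical, and a real setup would route alerts to email or Slack.

```python
# Threshold checks mirroring the examples above. Metric keys are hypothetical.
def check_thresholds(metrics: dict[str, float]) -> list[str]:
    alerts = []
    if metrics["visibility_rate"] < 40:
        alerts.append("Visibility Rate dropped below 40%")
    if metrics["competitor_sov"] - metrics["share_of_voice"] > 20:
        alerts.append("Competitor Share of Voice exceeds yours by more than 20 points")
    if metrics["average_ranking"] > 5:  # lower is better, so above 5 is worse
        alerts.append("Average Ranking slipped past #5")
    return alerts

print(check_thresholds({"visibility_rate": 35, "share_of_voice": 25,
                        "competitor_sov": 50, "average_ranking": 6.2}))
# All three alerts fire for this sample input.
```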
Strategic Concepts
First-Mover Advantage
Definition: Competitive benefit from being early to optimize for AI search.
Benefits:
- Establish authority before competitors
- Capture citations first
- Define category positioning
- Build momentum advantage
Content Moat
Definition: Comprehensive, authoritative content that’s difficult for competitors to replicate.
Characteristics:
- Deep topic coverage
- Original research and data
- Expert credibility
- High-quality citations
Optimization Cycle
Definition: Regular process of monitoring, analyzing, and improving AI search performance.
Typical Cycle:
- Monitor (weekly data review)
- Analyze (identify opportunities)
- Execute (create content, build authority)
- Measure (track results)
- Refine (adjust strategy)
Units of Measurement
Percentage (%)
Used for: Visibility Rate, Share of Voice, Reputation Score distribution
Numerical Ranking
Used for: Average Ranking (1-20+), specific mention positions
Score (0-100)
Used for: Authority Index™, overall health metrics
Frequency
Used for: Mention counts, prompt test frequency
Quick Reference
Key Metrics
Authority Index™, Visibility Rate, Average Ranking, Share of Voice
Critical Concepts
Content Gaps, Competitive Gaps, Authority Building, Topical Relevance
Prompt Strategy
Categories (Aspirational, Reputational, Competitive, Educational)
Action Indicators
Trends, Benchmarks, Thresholds, Performance Indicators
Learn More
Dashboard Guide
See metrics in action on your dashboard
Getting Started
Begin monitoring with RankSmith
Tips & Tricks
Apply these concepts to improve your AI search presence
FAQ
Common questions about terms and metrics
This glossary is continuously updated as RankSmith adds features and the AI search landscape evolves.