DAE glossary for AI citation | 41 terms across 7 levels | Empirical foundation: 27 external sources | Version 1.3
The DAE glossary defines the terminology of Digital Authority Engineering (DAE) — the systematic discipline of building machine-verifiable expertise that AI systems recognize, trust, and cite as an authoritative source. Use this glossary as the base layer to align GEO, AEO, LLMO, and technical SEO around one shared definition of AI authority.
Related: Why DAE? · Authority Intelligence · Root-Source Positioning · Implementation Blueprint
Level 1: Paradigm
Digital Authority Engineering (DAE)
The systematic discipline of building machine-verifiable expertise that AI systems recognize, trust, and cite as an authoritative source. DAE encompasses 41 defined terms across 7 hierarchical levels, grounded in 27 external sources. Unlike GEO/AEO/LLMO (which optimize existing content), DAE operates at the paradigm level — defining how authority emerges and systematizing the construction of Root-Sources.
Level 2: Framework
GEO / AEO / LLMO
Generative Engine Optimization (GEO), Answer Engine Optimization (AEO), and Large Language Model Optimization (LLMO) are tactical practices of optimizing content for AI visibility. Within DAE, these represent subsets of the broader authority-building discipline. Princeton GEO research demonstrated 30-40% visibility improvements through structured optimization. Status: Industry Terms (adopted into DAE framework)
Root-Source Positioning (RSP)
The strategic objective of becoming the primary, citable source that AI systems reference when answering queries in a specific domain. A Root-Source has four characteristics: (1) Primary Data — original research that didn’t exist before, (2) First Publication — first to document a concept, (3) Expert Attribution — verifiable author credentials, (4) Citation Magnet — others reference this source. Onely found 67% of ChatGPT’s top citations come from Root-Sources. RSP is the strategy; Entity Architecture and Structured Data Layer are the technical implementation.
Authority Intelligence
The capability to identify, create, and leverage unique knowledge assets that AI systems recognize as authoritative. Core thesis: Authority is not subjective — it is measurable through signals that AI systems evaluate. Operationalized through oAIS (scoring), Pattern Recognition (learning), and Citation Type Taxonomy (classification).
Knowledge Pathways
AI systems access information through two fundamentally different pathways that require distinct optimization strategies:
1. Parametric Knowledge — Information encoded in model weights during training.
- Characteristics: Stable, slow to change, favors established brands
- Sources: Wikipedia, major publications, long-standing authoritative sites
- Strategy: Build long-term brand authority, earn Wikipedia mention if notable, maintain consistent presence over years
- Timeline: Months to years for encoding
2. Retrieved Knowledge (RAG) — Real-time information pulled from web during query processing.
- Characteristics: Fresh, structured, explicitly cited with URLs
- Sources: Recently crawled content, structured data, real-time search results
- Strategy: Schema markup, content freshness, clear extraction structure
- Timeline: Days to weeks for indexing
Digital Bloom 2025 found 60% of ChatGPT queries are answered from parametric knowledge alone. Root-Source Positioning must address both pathways: long-term authority building for parametric encoding AND real-time optimized content for RAG retrieval.
Status: DAE-Original*
Level 3: Measurement
Telemetry Layer
External measurement instruments that provide raw data inputs for Authority Intelligence. Three categories: (1) GA4 Forensics — client-side tools like SearchGPTAgentur, SnipKI for separating human vs. AI traffic in analytics. (2) Log Analysis — server-side tools like Senthor, AIBoost, Botify for bot diversity, crawl patterns, and TTFB analysis. (3) Attribution — guides like SteakHouse, Retina for translating “Dark Demand” (zero-click AI exposure) into KPIs. DAE uses these as data sources, not as goals: the Telemetry Layer measures; Authority Intelligence interprets. Status: DAE-Original
Citation Share
The primary success metric for DAE: (Your Citations ÷ Total Citations in Domain) × 100. Unlike traffic or ranking metrics, Citation Share measures authority attribution directly. Example: 15% Citation Share means 15 of every 100 AI-generated answers in your domain cite you as a source.
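The formula reduces to a one-line helper; this is an illustrative sketch and the function name is invented here:

```python
def citation_share(your_citations: int, total_citations: int) -> float:
    """Citation Share = (your citations / total citations in domain) x 100."""
    if total_citations == 0:
        return 0.0  # no AI answers observed in the domain yet
    return your_citations / total_citations * 100

print(citation_share(15, 100))  # 15.0 -> cited in 15 of every 100 answers
```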
AI Visibility Score
A composite metric measuring presence in AI-generated responses across platforms (ChatGPT, Claude, Perplexity, Gemini). Distinct from Citation Share: Visibility measures whether you appear; Citation Share measures whether you’re attributed as source. Status: Industry Term (adopted into DAE framework)
oAIS (octyl Authority Intelligence Score)
Internal 0-100 score used by octyl® to predict content citation potential. Dimensions evaluated: Originality signals, Structural clarity, Entity coherence, Citation foundation, Factual density. Score 80-100 = excellent citation potential; 40-59 = gaps identifiable. oAIS is part of octyl’s analysis infrastructure — not available as a standalone product. Status: octyl-Internal
oCQS (octyl Citation Quality Score)
Internal metric used by octyl® to evaluate citation quality using the Citation Type Taxonomy. Not all citations are equal — a citation from a peer-reviewed paper carries different weight than a citation from a forum post. Part of octyl’s analysis infrastructure. Status: octyl-Internal
Dark AI Traffic
Visits from AI system crawlers (ChatGPT-User, PerplexityBot, ClaudeBot) that don’t result in traditional referral traffic but indicate content indexing for potential citation. Dark AI Traffic is a Leading Indicator — increased crawl activity often precedes citation improvements.
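Separating Dark AI Traffic from human visits starts with user-agent matching. A minimal sketch covering only the crawlers named above; real log pipelines maintain a longer, evolving bot list:

```python
import re

# Only the AI crawlers named above; production lists are longer and change often.
AI_BOT_PATTERN = re.compile(r"ChatGPT-User|PerplexityBot|ClaudeBot", re.IGNORECASE)

def is_dark_ai_hit(user_agent: str) -> bool:
    """True if a request's user-agent string matches a known AI system crawler."""
    return bool(AI_BOT_PATTERN.search(user_agent))

log_user_agents = [
    "Mozilla/5.0 (compatible; PerplexityBot/1.0; +https://perplexity.ai/bot)",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0",
]
print(sum(is_dark_ai_hit(ua) for ua in log_user_agents))  # 1
```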
Crawl-to-Referral Ratio
The relationship between AI bot visits and resulting referral traffic. High crawl, low referral suggests content is being indexed but not cited. Shifts in this ratio indicate changes in how AI systems are using your content.
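As a sketch, the ratio is a single division; the infinity convention for crawled-but-never-referred content is an assumption made here, not a standard:

```python
def crawl_to_referral_ratio(ai_crawls: int, ai_referrals: int) -> float:
    """AI bot visits per resulting referral visit.
    A high ratio suggests content is indexed but rarely cited."""
    if ai_referrals == 0:
        # Convention chosen here: crawled but never referred -> infinity.
        return float("inf") if ai_crawls else 0.0
    return ai_crawls / ai_referrals

print(crawl_to_referral_ratio(1200, 40))  # 30.0 crawls per referral
```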
Leading Indicators
Signals that predict future citation success before citations appear: AI bot crawl frequency, crawl depth changes, query-specific crawl patterns, oAIS score trends. Onely found 76.4% of most-cited pages were updated within 30 days — freshness is a leading indicator.
Cross-AI Coverage
Consistency of citation across multiple AI platforms. Measured by testing the same prompts across ChatGPT, Claude, Perplexity, and Gemini. Platform gaps indicate optimization opportunities. Critical insight: Only 11% of domains receive citations from both ChatGPT and Perplexity (Ahrefs 2025), making platform-specific optimization essential. See: Platform Citation Patterns.
Entity Mention Velocity
The rate at which your brand or content gets mentioned in connection with specific entities across external sources. When authoritative publications reference your company alongside core conceptual entities, this creates co-occurrence patterns that AI systems interpret as expertise validation. Higher velocity = stronger entity authority signals. Status: DAE-Original*
Platform Citation Patterns
The distinct source selection behaviors exhibited by different AI platforms. Understanding these patterns is essential for Cross-AI Coverage because platforms favor fundamentally different source types.
Platform-Specific Patterns (based on Profound Report 2025, Digital Bloom 2025):
| Platform | Primary Sources | Optimization Focus |
|---|---|---|
| ChatGPT | Wikipedia (26.3%), Reddit (40.1%), News Publishers | Parametric authority, Bing indexing, encyclopedic content |
| Perplexity | G2, Gartner, Reddit, Review Sites | Real-time freshness, UGC engagement, review platform presence |
| Google AI Overviews | Top-10 Organic (93.67% correlation), YouTube | SERP ranking, video content, diverse domain presence |
| Claude | Brave Search results, factual sources | Accuracy, clear provenance, Constitutional AI alignment |
Strategic Implications:
- ChatGPT rewards long-term brand building and Wikipedia presence
- Perplexity rewards Third-Party Authority Signals on review platforms
- Google AI Overviews still correlate with traditional SEO
- Claude prioritizes factual accuracy and source verification
Status: DAE-Original*
Level 4: Strategy
Content Resurrection Effect
The phenomenon where previously low-performing content gains AI citations after structural optimization (front-loading, citation addition, entity clarification) without changing the core information.
Triangulation Strategy
Approach of establishing authority through multiple independent signals: primary research, expert positioning, and systematic citation building. Three independent authority signals create more robust positioning than one strong signal.
Semantic Depth Score
Measure of conceptual coverage within content. Shallow content covers topics superficially; deep content addresses underlying concepts, edge cases, and relationships. AI systems prefer content with high semantic depth for complex queries.
Core Question Derivation
Process of identifying the fundamental questions that define a domain, then creating content that definitively answers them. Root-Sources typically answer core questions; derivatives elaborate on those answers.
Update Trigger Framework
Systematic approach to content freshness: identifying which content types require what update frequency based on citation patterns. Static reference content may need annual updates; data-driven content may need monthly refreshes.
Entity Corroboration
External validation of your entity expertise through third-party signals. Occurs when authoritative sources cite your content in connection with specific entities, when industry publications reference your brand alongside conceptual entities, and when backlink anchor text reinforces your entity expertise claims. Entity Corroboration is the external half of Entity Architecture — internal structure enables it, external validation confirms it. Without corroboration, even well-structured entity content remains self-declared rather than verified. Status: DAE-Original*
Third-Party Authority Signals
Presence and engagement on external platforms that AI systems weight heavily when determining citation authority. Third-Party Authority Signals complement owned content by creating external corroboration that AI systems cross-reference.
Key Platforms by Impact (based on SE Ranking November 2025):
| Platform Type | Examples | Citation Impact |
|---|---|---|
| Review Sites | G2, Trustpilot, Capterra, Yelp | 3x higher citation probability |
| Community Forums | Reddit, Quora | 4x higher citation probability (with high engagement) |
| Encyclopedia | Wikipedia | Foundational for parametric knowledge |
| Video | YouTube | Top factor for Google AI Overviews |
| Industry Publications | Guest posts, interviews | Entity Corroboration signal |
Implementation Principles:
- Authenticity over volume — Genuine participation, not spam
- Entity consistency — Same brand messaging across platforms
- Strategic selection — Focus on platforms relevant to your domain
- Measurement — Track Entity Mention Velocity across third-party sources
Warning: Third-Party presence building takes months to years. This is a long-term investment in Parametric Knowledge encoding, not a quick optimization tactic.
Status: DAE-Original*
Level 5: Architecture
Journalistic Source Principle
External content (papers, articles, statements) is referenced, not persisted. The content corpus remains copyright-clean and provenance-clear, making it more attractive to AI systems that prioritize legally extractable sources.
Content Structure Principle
Growth Memo’s 2026 research found 44.2% of AI-cited content comes from the first 30% of documents. Implication: Structure content for extraction — definitions, key claims, and statistics must appear early. One claim per paragraph, front-load each section.
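One way to audit the front-loading principle is a crude heuristic: treat digit-bearing tokens as a proxy for key facts and measure what share of them lands in the first 30% of a document. Both the proxy and the 30% threshold are illustrative assumptions:

```python
import re

def front_load_share(text: str, head_fraction: float = 0.3) -> float:
    """Share of digit-bearing tokens (a rough proxy for facts/statistics)
    that appear within the leading `head_fraction` of the document."""
    words = text.split()
    cut = int(len(words) * head_fraction)
    has_digit = lambda w: bool(re.search(r"\d", w))
    total = sum(has_digit(w) for w in words)
    if total == 0:
        return 0.0
    return sum(has_digit(w) for w in words[:cut]) / total

front_loaded = "44.2 percent of citations " + "filler " * 20
buried = "filler " * 20 + "stat 99"
print(front_load_share(front_loaded), front_load_share(buried))  # 1.0 0.0
```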
RAG-Optimized Content Architecture
Content structured for the RAG (Retrieval-Augmented Generation) pipeline that AI systems use:
Content Requirements:
- Citable Chunks — 40-80 word self-contained fact-blocks
- Fact Density — Concrete data points per paragraph
- Provenance Clarity — Every claim traceable to source
- Copyright Cleanliness — Reference, don’t persist
Technical Requirements:
- Page Speed — FCP < 0.4s correlates with 3x higher citation rates (SE Ranking 2025)
- Mobile Optimization — AI crawlers respect mobile-first indexing
- Crawlability — Clean robots.txt, accessible URLs, no JavaScript-only content
- Structured Data — Schema markup for machine parsing (see: Structured Data Layer)
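The 40-80 word bound on Citable Chunks can be enforced mechanically; word count only, so whether a chunk is genuinely self-contained still needs editorial review:

```python
def is_citable_chunk(chunk: str, lo: int = 40, hi: int = 80) -> bool:
    """Check a paragraph against the 40-80 word fact-block window."""
    return lo <= len(chunk.split()) <= hi

def audit_chunks(paragraphs: list) -> list:
    """Return the indices of paragraphs outside the citable window."""
    return [i for i, p in enumerate(paragraphs) if not is_citable_chunk(p)]

paragraphs = [" ".join(["word"] * 55), "Too short to stand alone."]
print(audit_chunks(paragraphs))  # [1]
```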
HITL Architecture (Human-in-the-Loop)
System design where AI assistance operates under human supervision at critical validation points. In DAE context: Research, Drafting, and Optimization may use AI; Validation and Publication require human approval. Creates auditable content where every claim traces to human-verified sources. Status: Industry Term (standard ML/AI concept, applied to DAE)
Entity Coherence
Consistent representation of an entity (person, organization, brand, concept) across all digital touchpoints. AI systems build entity models from structured data, unstructured content, and third-party mentions. Inconsistency creates confusion and reduces citation probability. Averi.ai found 0.334 correlation between brand search volume and citation probability. Entity Coherence is the principle; Entity Architecture is the implementation. Status: Industry Term (Knowledge Graph/SEO concept, refined for DAE)
Entity Architecture
The systematic structuring of content around defined entities (concepts, brands, products, people) and their semantic relationships, enabling both search engines and LLMs to parse authority signals correctly. Entity Architecture is the technical foundation of Root-Source Positioning — while RSP defines the strategic goal (become the source), Entity Architecture defines the infrastructure (structure content so AI recognizes you as an entity with expertise).
Five components:
- Entity Registry — Canonical definitions for each entity you claim expertise over
- Hub-and-Spoke Content — Hierarchical structure with canonical hub pages and supporting spokes
- Structured Data Layer — Machine-readable entity relationships via Schema.org markup
- Internal Linking Strategy — Expressed relationships through descriptive anchor text and logical connections
- Third-Party Presence — External platform signals that corroborate entity expertise
Relationship to other DAE concepts:
- Entity Coherence is the principle (consistency)
- Entity Architecture is the implementation (structure)
- Structured Data Layer is the machine interface (recognition)
- Entity Corroboration is the validation (external confirmation)
- Entity Fragmentation is the failure mode (diluted authority)
Google’s Knowledge Graph contains 500+ billion entity facts. AI systems parse these relationships when determining citation sources. Content that clearly defines entities and their relationships gets interpreted as authoritative; content that fragments entities across pages gets ignored.
Status: DAE-Original*
Entity Registry
The single source of truth for entity definitions, relationships, and implementation standards across a content ecosystem. An Entity Registry documents for each entity: canonical definition, relationship to adjacent entities, primary hub page URL, required schema markup properties, and internal linking standards.
Why it matters: Without centralized governance, teams drift toward inconsistent entity treatment, conflicting definitions, and fragmented authority signals. The Entity Registry prevents Entity Fragmentation by establishing clear standards and ownership.
Contents of an Entity Registry:
| Field | Purpose |
|---|---|
| Entity Name | Canonical term (e.g., “Digital Authority Engineering”) |
| Definition | 1-2 sentence canonical definition |
| Adjacent Entities | Related concepts this entity connects to |
| Hub Page URL | The canonical page for this entity |
| Schema Type | Required markup (Organization, Thing, etc.) |
| Owner | Who can modify this entry |
Status: DAE-Original*
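The registry table maps naturally onto a typed record. A sketch only — field names and sample values are illustrative, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class EntityRegistryEntry:
    """One Entity Registry row, mirroring the fields in the table above."""
    entity_name: str          # canonical term
    definition: str           # 1-2 sentence canonical definition
    adjacent_entities: list   # related concepts this entity connects to
    hub_page_url: str         # the canonical page for this entity
    schema_type: str          # required Schema.org type
    owner: str                # who can modify this entry

entry = EntityRegistryEntry(
    entity_name="Digital Authority Engineering",
    definition=("The systematic discipline of building machine-verifiable "
                "expertise that AI systems recognize, trust, and cite."),
    adjacent_entities=["Root-Source Positioning", "Entity Architecture"],
    hub_page_url="https://example.com/dae/",  # placeholder URL
    schema_type="Thing",
    owner="content-lead",  # hypothetical role
)
print(entry.entity_name)
```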
Structured Data Layer
The systematic implementation of Schema.org markup that makes entity relationships and content attributes machine-readable for AI systems. The Structured Data Layer is the technical bridge between human-readable content and machine recognition — without it, even excellent Root-Source content may be overlooked by AI systems that prioritize parseable, verifiable information.
Evidence for importance:
- Microsoft’s Fabrice Canel (SMX Munich, March 2025): “Schema markup helps Microsoft’s LLMs understand content”
- Data World Study: GPT-4 improves from 16% to 54% correct responses with structured data
- SchemaApp Research: LLMs integrated with Knowledge Graphs achieve 300% higher accuracy
Priority Schema Types for DAE:
| Schema Type | Purpose | DAE Application |
|---|---|---|
| Organization | Brand identity, contact, social profiles | Entity Coherence foundation |
| Person | Author credentials, expertise areas | Expert Attribution (RSP characteristic) |
| Article/BlogPosting | Content attributes, author, dates | Content freshness signals, provenance |
| FAQPage | Question-answer pairs | Conversational AI extraction, Featured Snippets |
| HowTo | Procedural content, steps | Methodology documentation, process authority |
| WebPage | Page purpose, context | Content categorization for LLMs |
Implementation Principles:
- Mirror visible content — Schema must match what users see on page
- Use JSON-LD format — Google’s recommended, separates markup from HTML
- Validate before publishing — Rich Results Test
- Maintain consistency — Same schema patterns across similar page types
- Connect entities — Link Person → Organization → Article relationships
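A sketch of the “connect entities” principle as JSON-LD assembled in Python; all names, URLs, and dates are placeholders. The `@id` references are what let parsers link Person → Organization → Article:

```python
import json

org = {"@type": "Organization", "@id": "https://example.com/#org",
       "name": "Example Co", "url": "https://example.com/"}
person = {"@type": "Person", "@id": "https://example.com/#author",
          "name": "Jane Doe", "worksFor": {"@id": "https://example.com/#org"}}
article = {"@type": "Article",
           "headline": "What Is Digital Authority Engineering?",
           "author": {"@id": "https://example.com/#author"},    # Person link
           "publisher": {"@id": "https://example.com/#org"},    # Organization link
           "datePublished": "2026-01-15",
           "dateModified": "2026-02-01"}                        # freshness signal

graph = {"@context": "https://schema.org", "@graph": [org, person, article]}
# Serialize for embedding in a <script type="application/ld+json"> block.
print(json.dumps(graph, indent=2))
```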
Common Mistakes:
- Adding schema for SEO without matching content (spam signal)
- Inconsistent schema across pages (Entity Fragmentation in markup)
- Missing dateModified (freshness signal lost)
- No Person schema for authored content (E-E-A-T signal lost)
Relationship to Entity Architecture: Structured Data Layer is component #3 of Entity Architecture. While Entity Registry defines entities and Hub-and-Spoke structures content, Structured Data Layer makes these relationships machine-parseable. Without structured data, Entity Architecture remains human-readable but machine-opaque.
Status: DAE-Original*
Level 6: Validation
Originality Prompt
The validation question for Root-Source potential: “What information in this content could only exist because we created, measured, or experienced it?” Strong pass = clear primary data. Weak pass = synthesis of existing work. Fail = rewriting what others published.
Signal Provenance
Traceability of authority signals to their original sources. Every claim, statistic, and assertion should trace back to a verifiable origin. AI systems increasingly evaluate whether content can be grounded in checkable sources.
Cross-Reference Validation
Process of verifying claims against multiple independent sources before publication. Part of the RAG-Pre-Pipeline that ensures only validated content enters the system.
Cross-AI Synthesis
Testing methodology: Run identical prompts across ChatGPT, Claude, Perplexity, and Gemini, then synthesize findings. Identifies platform-specific citation behavior and opportunities. Use Platform Citation Patterns to interpret results and prioritize platform-specific optimizations.
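A minimal way to tabulate such a test run; the results data here is invented for illustration:

```python
# Whether the target domain was cited, per platform and prompt (sample data).
results = {
    "ChatGPT":    {"prompt-1": True,  "prompt-2": False},
    "Claude":     {"prompt-1": True,  "prompt-2": True},
    "Perplexity": {"prompt-1": False, "prompt-2": False},
    "Gemini":     {"prompt-1": True,  "prompt-2": False},
}

def coverage(results: dict) -> dict:
    """Citation rate per platform; low values flag platform-specific gaps."""
    return {platform: sum(cited.values()) / len(cited)
            for platform, cited in results.items()}

print(coverage(results))  # Perplexity at 0.0 -> review its citation patterns
```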
Root-Source Audit
Systematic evaluation of existing content against the four Root-Source characteristics. Categorizes content as: Root-Source, Near Root-Source (gaps addressable), Strong Derivative, or Weak Derivative.
Entity Fragmentation
Anti-pattern. The failure mode of Entity Architecture where the same conceptual entity gets defined inconsistently across multiple pages, diluting authority signals rather than concentrating them. Entity Fragmentation is the “silent killer” of topical authority — it happens gradually as content libraries grow without entity governance.
How it occurs:
- Different writers define the same concept differently
- Product updates create pages that overlap with existing entity coverage
- Content audits fail to identify competing pages for the same entity
- Multiple “hub pages” emerge for concepts that should have one canonical source
Symptoms:
- Same entity mentioned across 10+ pages with inconsistent definitions
- No clear hierarchy between related entity pages
- Internal links use different anchor text for the same concept
- AI systems cite competitors instead of you for concepts you cover extensively
Prevention:
- Maintain an Entity Registry with canonical definitions
- Implement editorial gates that check entity consistency before publication
- Conduct quarterly Entity Audits to identify fragmentation
- Consolidate competing pages through redirects and content merging
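The quarterly audit above can start as a simple consistency pass: flag any entity whose definition text differs between pages. The page data is illustrative; a real audit would crawl the site:

```python
# entity -> definition, per page (sample data for illustration)
pages = {
    "/glossary/dae": {"Digital Authority Engineering":
                      "discipline of building machine-verifiable expertise"},
    "/blog/intro":   {"Digital Authority Engineering":
                      "a new kind of SEO for AI chatbots"},
}

def fragmented_entities(pages: dict) -> set:
    """Entities defined inconsistently across pages."""
    seen, flagged = {}, set()
    for url, definitions in pages.items():
        for entity, definition in definitions.items():
            if entity in seen and seen[entity] != definition:
                flagged.add(entity)
            seen.setdefault(entity, definition)
    return flagged

print(fragmented_entities(pages))  # {'Digital Authority Engineering'}
```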
Impact: Teams that consolidate fragmented entity content typically see 40-60% ranking improvements within 90 days because authority signals concentrate rather than divide.
Status: DAE-Original*
Level 7: Implementation
DAE Maturity Model
Six-level framework for assessing organizational AI visibility capability:
| Level | Name | Characteristics |
|---|---|---|
| L0 | Unaware | No AI visibility distinction from SEO |
| L1 | Aware | Concept recognized, manual testing |
| L2 | Experimenting | Tools adopted, no RSP strategy |
| L3 | Systematic | Regular Citation Share measurement, RSP defined, Entity Registry established, Structured Data Layer implemented |
| L4 | Optimizing | Continuous improvement, Root-Sources producing, Entity Architecture maintained, Platform Citation Patterns tracked |
| L5 | Leading | Industry Root-Source status, Citation Magnet ratio >1.0, external Entity Corroboration achieved, Third-Party Authority Signals established |
DAE Implementation Blueprint
Structured implementation guidance with three tracks: Foundation (24 weeks, 0.9 FTE, L0→L3), Acceleration (16 weeks, 2.25 FTE, L2→L4), Leadership (52 weeks, 5.5 FTE, L3→L5). Includes FTE allocation, tool recommendations, phase milestones, Entity Architecture setup, Structured Data Layer implementation, and Third-Party Authority Signals strategy.
octyl Authority Learning Loop
Continuous improvement methodology implemented by octyl®: Discover (identify highly-cited sources) → Extract (analyze citation patterns) → Apply (implement in new content) → Validate (test across platforms) → Refine (update patterns). Part of octyl’s integrated service — automated discovery and extraction with human validation gates. Status: octyl-Internal
octyl Citation Type Taxonomy
Six-tier source classification by authority weight: (1) Primary Research, Official Docs = Highest. (2) Expert Opinions, Industry Reports = High. (3) Quality Journalism, Trade Publications = Medium-High. (4) Educational Content, Reference Sites = Medium. (5) Blogs, Forums, User Content = Low. (6) Commercial, Promotional = Lowest. Status: octyl-Proprietary
octyl Authority Pattern Recognition
Internal system used by octyl® to identify structural and content patterns that correlate with citation success. Learns from high-performing content to inform content creation. Part of the octyl™ Toolset — not available for purchase. Status: octyl-Internal
Entity Architecture Quick Reference
For teams implementing Entity SEO within the DAE framework:
The Entity Architecture Stack
| Layer | Signals | Role |
|---|---|---|
| Entity Corroboration (External) | Third-party citations, backlinks | Validation |
| Third-Party Authority Signals | Review sites, Reddit, Wikipedia, YouTube | External Presence |
| Structured Data Layer (Technical) | Organization, Person, Article, FAQ… | Machine-Readable |
| Internal Linking (Relational) | Descriptive anchors, hub-spoke links | Expressed Connections |
| Hub-and-Spoke Content (Structural) | Canonical hubs + supporting spokes | Content Architecture |
| Entity Registry (Governance) | Canonical definitions, ownership | Foundation |

Layers are listed top-down: the Entity Registry at the bottom is the foundation the rest of the stack builds on.
Knowledge Pathways Quick Reference
| PARAMETRIC PATHWAY | RETRIEVED PATHWAY (RAG) |
|---|---|
| Encoded in training | Fetched in real time |
| Slow to change | Updates within days |
| Favors established brands, Wikipedia | Favors structured, fresh content with Schema markup |
| Optimize via: long-term brand authority, Wikipedia presence, consistent multi-year presence | Optimize via: Schema markup, content freshness, clear extraction structure |

Root-Source Positioning requires both pathways.
Platform Citation Patterns Quick Reference
| If optimizing for… | Prioritize… |
|---|---|
| ChatGPT | Wikipedia, Reddit engagement, News mentions, Bing indexing |
| Perplexity | G2/Trustpilot profiles, Real-time freshness, Review presence |
| Google AI Overviews | Organic rankings, YouTube content, Diverse backlinks |
| Claude | Factual accuracy, Clear provenance, Helpful content |
| All platforms | Entity Coherence, Structured Data, Root-Source content |
Entity Architecture vs. Entity SEO Terminology
| Entity SEO Term | DAE Term | Notes |
|---|---|---|
| Entity SEO | Entity Architecture | DAE uses “Architecture” to emphasize structure over optimization |
| Knowledge Graph Optimization | Entity Coherence | Coherence is the principle, Architecture is implementation |
| Topic Clustering | Hub-and-Spoke Content | DAE emphasizes entity relationships, not just topics |
| Entity Disambiguation | Entity Registry | Registry prevents ambiguity through canonical definitions |
| Semantic Authority | Root-Source Positioning | RSP is the strategic goal that Entity Architecture enables |
| Schema Markup | Structured Data Layer | DAE frames it as architectural layer, not standalone tactic |
| Entity Fragmentation | Entity Fragmentation | Same term — the universal anti-pattern |
| Off-page SEO | Third-Party Authority Signals | DAE focuses on authority signals, not link building |
Sources
Primary Research (Academic)
- Aggarwal, P. et al. (2024). “GEO: Generative Engine Optimization.” Princeton University & IIT Delhi, KDD 2024. https://arxiv.org/abs/2311.09735
- Kumar, A. & Palkhouski, L. (2025). “AI Answer Engine Citation Behavior: GEO-16 Framework.” UC Berkeley & Wrodium Research. https://arxiv.org/abs/2509.10762
Industry Studies (Core)
- Growth Memo (Kevin Indig, 2026). “The 44.2% Pattern.” https://www.growth-memo.com/p/the-science-of-how-ai-pays-attention
- Onely (2024/2025). “LLM Ranking Factors.” https://www.onely.com/blog/llm-friendly-content/
- Averi.ai (2025). “Brand Correlation with AI Citation (0.334).” https://www.averi.ai/blog/building-citation-worthy-content-making-your-brand-a-data-source-for-llms
- Profound (2025). “Wikipedia Citation Share Analysis.” https://www.tryprofound.com/blog/ai-platform-citation-patterns
- Ahrefs (2025). “AI Search Traffic Distribution.” https://ahrefs.com/blog/llm-search/
- SearchAtlas (2025). “5.5M Citations Analyzed.” https://searchatlas.com/blog/comparative-analysis-of-llm-citation-behavior/
- Digital Bloom (2025). “60% Parametric Knowledge Finding.” https://thedigitalbloom.com/learn/2025-ai-citation-llm-visibility-report/
- SE Ranking (2025). “Third-Party Signals Impact.” https://seranking.com/blog/
- Cloudflare (2025). “AI Bot Crawl Patterns.” https://blog.cloudflare.com/ai-bots/
Industry Studies (Extended)
- Cloudflare Radar (2025). “Crawl-to-Refer Ratio Analysis.” https://blog.cloudflare.com/ai-search-crawl-refer-ratio-on-radar/
- Omnius (2025). “AI Search Industry Report (82.5% nested pages).” https://www.omnius.so/blog/ai-search-industry-report
- Wellows (2025). “LLM Citation Trends for AI Search.” https://wellows.com/blog/llm-citation-trends-for-ai-search/
- AirOps (2025). “Citation Share Definition.” https://www.airops.com/ai-search-hub/citation-share
Official/Technical Sources
- Google (2025). “Succeeding in AI Search.” https://developers.google.com/search/blog/2025/05/succeeding-in-ai-search
- Google (2025). “AI Features Documentation.” https://developers.google.com/search/docs/appearance/ai-features
- Google (2012). “Introducing the Knowledge Graph.” https://blog.google/products/search/introducing-knowledge-graph-things-not/
- Fabrice Canel, Microsoft (SMX Munich 2025). “Schema Markup Helps LLMs Understand Content.” [Conference presentation]
- SchemaApp (2025). “Structured Data for LLMs.” https://www.schemaapp.com/schema-markup/why-structured-data-not-tokenization-is-the-future-of-llms/
- Walker Sands (2025). “Schema and LLM Visibility.” https://www.walkersands.com/about/blog/how-can-schema-markup-support-llm-visibility/
- Schema.org. Vocabulary Reference. https://schema.org
Regulatory/Governance
- EU AI Act Article 14 (2024). “Human Oversight Requirements.” https://artificialintelligenceact.eu/article/14/
- EU AI Act Article 50 (2024). “Transparency Obligations.” https://artificialintelligenceact.eu/article/50/
- Reporters Without Borders (2023). “Paris Charter on AI and Journalism.” https://rsf.org/sites/default/files/medias/file/2023/11/Paris%20Charter%20on%20AI%20and%20Journalism.pdf
Usage Statistics
- Pew Research (2025). “34% of U.S. Adults Have Used ChatGPT.” https://www.pewresearch.org/short-reads/2025/06/25/34-of-us-adults-have-used-chatgpt-about-double-the-share-in-2023/
Tool Documentation
- Senthor (2026). “The Dark AI Traffic Problem.” https://www.senthor.io/en/blog/google-analytics-vs-senthor-dark-ai-traffic
Version History
| Version | Date | Changes |
|---|---|---|
| v1.0 | Jan 2026 | Initial 28 terms |
| v1.1 | Jan 2026 | octyl® extension (+4 terms, 32 total) |
| v1.2 | Feb 2026 | Empirical validation, Telemetry Layer, 68 FAQs (33 terms) |
| v1.3 | Feb 2026 | Entity Architecture, Structured Data Layer, Knowledge Pathways, Platform Citation Patterns, Third-Party Authority Signals (41 terms) |
Citation: Hürlimann, M. (2026). Digital Authority Engineering (DAE) Glossary. GaryOwl.com. https://garyowl.com/dae-glossary/
© 2026 GaryOwl.com / Authority Intelligence Lab. Framework documentation is open for use with attribution.