Six Types of Authority That AI Systems Actually Measure — and How to Build Them

The Industry Has No Taxonomy for What AI Systems Actually Measure as Authority


By Manuel Hürlimann for GaryOwl.com | Published: April 3, 2026 | Updated: April 3, 2026
Expertise: Digital Authority Engineering | Authority Taxonomy | AI Citation Analysis
Time to read: 26 minutes
Series: Operative Article 4 — Glossary


Ask ten SEO professionals what “authority” means for AI search, and you’ll get ten different answers. Domain Authority. Brand authority. Topical authority. E-E-A-T. The terms are used interchangeably, as if they describe a single phenomenon. They don’t — and the authority AI systems actually process is far more differentiated than any single metric suggests.

When “AI Citation Rules Have Changed” (Article 3) synthesized 30+ studies totaling 750 million AI citations, one finding stood out: the signals that predict AI citations — E-E-A-T correlated at levels far exceeding traditional link metrics, brand search volume at r=0.392, organic keyword breadth at r=0.41 — are not the same signal measured three ways. They are fundamentally different mechanisms operating through different pipeline stages. Yet the industry has no taxonomy that distinguishes them.

This article proposes one. Drawing on 14 peer-reviewed studies, 8 large-dataset industry analyses, and 7 practitioner reports, I identify six distinct types of authority that AI systems actually measure — each with its own mechanism, its own evidence base, and its own operational path — plus three modifier dimensions that amplify or dampen their effects. This is the authority taxonomy AI systems use, whether we name it or not. The question is whether brands will understand the system they’re operating in.

“Digital Authority is not a score. It is six measurable dimensions modulated by three contextual factors, each built through different mechanisms, each requiring different investments.” — Manuel Hürlimann, Digital Authority Engineering

📌 Key Insights — What This Article Establishes

1. AI systems don’t measure “authority” as one thing. They process at least six distinct types through different mechanisms.

2. E-E-A-T is a description category, not a measurable quantity. The authority signals it describes are measurable — our taxonomy disaggregates them.

3. Each authority type has a different evidence base, a different operational path, and a different time horizon.

4. The top 30 cited domains account for 67% of all AI citations — but which authority types they excel in varies dramatically.

5. Authority is not a project. It’s an operating model with three continuous loops: Build, Measure, Maintain.

6. Three modifier dimensions — Temporal, Platform, and Consensus — modulate how effectively each authority type operates across different contexts.

7. Independent research (Jacques et al., medRxiv 2026) confirms the taxonomy gap — their 4-domain framework overlaps with but is less differentiated than this 6-type model.


📌 Original Contribution

This article, first published in March 2026 by Manuel Hürlimann on GaryOwl.com, presents the first six-type authority taxonomy specifically designed for AI citation systems, extended with three modifier dimensions. The taxonomy is an original synthesis by the author, cross-validated against 14 peer-reviewed studies and 15 industry datasets. The individual authority types (#63–#68) draw on established research; the systematic decomposition into six measurable types with mapped mechanisms and operational paths is a contribution of the Digital Authority Engineering (DAE) framework. The modifier dimensions (#69–#71) were added in V1.1 based on independent deep source verification.

📌 Navigate This Article

Why a taxonomy? → The Problem With “Authority”
What are the six types? → The Six Types of Authority AI Systems Measure
What amplifies or dampens them? → Three Modifier Dimensions
How do you measure them? → Three Metrics That Replace “Authority Score”
How do you operate authority? → The Authority Operating Model
What about E-E-A-T? → E-E-A-T Is a Description, Not a Metric
How honest are we about our own results? → Living Lab Disclosure

📌 Reading Guide

If you read one section: Read S2 — The Six Types. It’s the core contribution and contains the full taxonomy table with evidence.
If you’re a strategist: Read S2 + S2b (Modifier Dimensions) + S4 (Operating Model) + the Self-Assessment Checklist in S5.
If you’ve read “AI Citation Rules Have Changed”: This operationalizes those patterns. That article shows WHAT changed; this explains WHAT authority is made of.


📌 Core Definition

Authority Taxonomy (AI) — A systematic classification of the distinct authority signals that AI systems process when selecting, evaluating, and citing sources. The DAE framework identifies six types: Entity Authority (#63), Topical Authority (#64), Content Authority (#65), Network Authority (#66), Structural Authority (#67), and Reputational Authority (#68), modulated by three cross-cutting dimensions: Temporal (#69), Platform (#70), and Consensus (#71). Each type operates through a different mechanism, affects different pipeline stages, and requires different operational investments.


Executive Summary (1 Minute)

The SEO industry treats authority as a monolithic concept. But the authority AI systems actually measure works differently — and the E-E-A-T framework only scratches the surface. When you cross-reference 14 peer-reviewed studies with 15 industry datasets, a more differentiated authority taxonomy emerges: AI systems process at least six distinct types, each through different mechanisms and each requiring different operating model investments.

Entity authority works through Knowledge Graph presence. Topical authority works through demonstrated expertise depth. Content authority works through inline evidence and AI citation patterns. Network authority works through citation graph position. Structural authority works through technical extractability. Reputational authority works through third-party endorsements.

Three modifier dimensions cut across all six types: Temporal (freshness amplifies or dampens all signals), Platform (where content appears affects how strongly each type registers), and Consensus (cross-source corroboration multiplies citation probability). Together, the six types and three modifiers form the first complete authority taxonomy for AI systems — and a map for building citation share across all of them.

E-E-A-T — the framework most practitioners use — is not a measurable quantity but a description category that maps across all six types. This article provides the taxonomy, three replacement metrics, and an operating model for building authority systematically.

📌 Five things this article establishes

1. A six-type authority taxonomy for AI citation systems, each with mapped evidence and mechanisms.
2. E-E-A-T as a description category, not a metric — with the implications for measurement.
3. Three replacement metrics: Citation Share, Observed AI Source Index (oAIS), and Cross-AI Coverage.
4. An operating model (Build → Measure → Maintain) validated by controlled experiments showing that authority signals — not authority size — drive AI recognition.
5. Three modifier dimensions (Temporal, Platform, Consensus) that modulate the effectiveness of all six authority types across different contexts.

📌 First Publication

The six-type authority taxonomy for AI (#63–#68) and the three modifier dimensions (#69–#71) are first published in this article. The individual concepts (topical authority, entity authority, etc.) exist in scattered form across the literature. The systematic decomposition into six measurable types and three modifiers — with evidence classification, mechanism mapping, and operational paths — is an original contribution by Manuel Hürlimann within the Digital Authority Engineering (DAE) framework.

📌 Key DAE Terms in This Article

Entity Authority (#63) — Authority derived from machine-verifiable identity in Knowledge Graphs and structured data systems.
Topical Authority (#64) — Authority earned through demonstrated expertise depth in a specific subject domain.
Content Authority (#65) — Authority signaled through inline evidence, primary data, statistics, and verifiable citations within content.

Network Authority (#66) — Authority derived from position in the citation graph — who links to you, who cites you, who mentions you.
Structural Authority (#67) — Authority enabled through technical extractability — content architecture that AI systems can parse and chunk effectively.
Reputational Authority (#68) — Authority established through third-party signals, expert endorsements, and cross-platform mentions.

Root-Source Positioning (RSP) — Becoming the original source AI systems must cite.


The Problem With “Authority” — And Why It Needs a Taxonomy

In traditional SEO, “authority” meant one thing: how many quality backlinks point to your domain. Moz’s Domain Authority, Ahrefs’ Domain Rating — these metrics compressed a complex reality into a single number. That compression worked because Google’s link graph was the dominant ranking mechanism.

AI systems don’t work this way. When Alexander Wan, Eric Wallace & Dan Klein (ACL 2024) tested what evidence language models find convincing, they found that content relevance to the query dominated over stylistic credibility signals such as scientific references and neutral tone. Models preferred semantically matching content over pages with high-authority styling. The implication: the single-number authority model that governed a decade of SEO is fundamentally inadequate for AI systems.

This isn’t just an academic finding. Indig and AirOps’ analysis of 18,012 verified ChatGPT citations found that traditional SEO metrics explain almost nothing about AI citation behavior — backlinks showed r²=0.038, and citation rates were flat across Domain Authority 0–80. Meanwhile, Semrush’s study of 304,805 AI-cited URLs found a 30.64% positive correlation between E-E-A-T signals and AI citation selection, and Ahrefs’ 75,000-brand analysis showed brand search volume correlating at r=0.392. These are not the same signal measured differently — they are fundamentally different constructs measuring different things through different mechanisms.

The problem is that the industry conflates them. A brand investing in “authority” might mean building backlinks (Network Authority), creating expert content (Topical Authority), or securing Wikidata presence (Entity Authority). Without a taxonomy, these investments are confused, misallocated, and unmeasurable.

Independent confirmation: Jacques et al. (medRxiv 2026) analyzed 615 ChatGPT-cited health sources and identified four authority domains: Author Credentials, Institutional Affiliation, Quality Assurance, and Digital Authority. Their framework — developed independently — overlaps with but is less differentiated than the six-type model presented here (4 vs. 6 types, health-only vs. cross-domain). The fact that two independent research efforts converge on the same structural insight — authority is not monolithic — strengthens the case for a formal taxonomy.

What this article does: It proposes six distinct authority types plus three modifier dimensions, each with a defined mechanism, a mapped evidence base, and an operational path. It replaces “build authority” with “build which authority, how, measured by what, and modulated by which contextual factors.”


The Six Types of Authority AI Systems Measure

The following taxonomy emerges from cross-referencing peer-reviewed source attribution research with large-dataset industry studies. Each type is confirmed by at least three independent sources (per the DAE Triangulation Rule). The numbering (#63–#68) follows the DAE Glossary.

| # | Authority Type | Mechanism | Evidence Base | Operational Path |
|---|---|---|---|---|
| 63 | Entity Authority | Knowledge Graph presence, machine-verifiable identity | Search Atlas 5.17M citations; Kevin Indig/AirOps 20.6% entity density; Olaf Kopp/Aufgesang 1,400% lift | Wikidata, Schema.org, consistent NAP |
| 64 | Topical Authority | Demonstrated expertise depth in a specific domain | Alexander Wan et al./ACL 2024; Surfer SEO 36M AIO; SE Ranking 129K domains | Content depth, pillar-cluster architecture, first-hand data |
| 65 | Content Authority | Inline evidence, citations, statistics, primary data | Pranjal Aggarwal et al./GEO KDD 2024 (+30–40% combined); Kevin Indig/AirOps (44% from first third) | Inline citations, original data, statistical evidence |
| 66 | Network Authority | Citation graph position — who cites you, who mentions you | Andrés Algaba et al./NAACL 2025 (Matthew Effect); Profound 680M | Earn citations from already-cited sources, co-citation networks |
| 67 | Structural Authority | Technical extractability — content architecture AI can parse | EyeLevel.ai (parsing impacts RAG 10–20%) | Clean HTML hierarchy, FAQ markup, self-contained sections |
| 68 | Reputational Authority | Third-party signals, expert endorsements, cross-platform mentions | Semrush 304K URLs (+30.64%); Ahrefs 75K (r=0.392); SparkToro <1% overlap | Press mentions, expert endorsements, 4+ platforms |

A critical distinction: These types are not mutually exclusive. Most successfully cited sources combine multiple types. Wikipedia, for instance, excels at Entity Authority (Wikidata integration), Content Authority (inline citations), Structural Authority (consistent markup), and Network Authority (universal co-citation). What the taxonomy enables is diagnostic precision: when a brand fails to gain AI citations, which authority type is the bottleneck?

#63 Entity Authority — Being Known to the Machine

Entity Authority is the foundation layer. If AI systems cannot verify who you are through structured data, the other five types have no anchor. Search Atlas’ analysis of 5.17 million AI citations found that brand-specific queries overwhelmingly resolve to official domains — but only when those brands have consistent Knowledge Graph entries. Kevin Indig and AirOps found that strongly cited text contains 20.6% proper nouns (named entities) on average, compared to 5–8% in typical baseline text — entities with consistent Knowledge Graph entries are significantly more likely to appear in AI responses, independent of content quality.

The mechanism is straightforward: LLMs resolve entities during pre-processing. A brand with a Wikidata entry, consistent Schema.org markup, and matching entries across Google Knowledge Graph, Apple Business Connect, and Bing Places has a machine-verifiable identity. One without these is, to the machine, ambiguous.

The personnel dimension matters: Olaf Kopp’s controlled domain-transfer experiment demonstrated that moving identical content from one domain to another with stronger E-E-A-T source entity signals produced a 1,400% visibility increase (Sistrix) within six months. The machine needs trust anchors it can verify through sameAs relations in Schema markup, linking authors to external professional registries.
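The sameAs mechanism described above can be sketched in code. This is a minimal, hypothetical example of generating Schema.org Organization markup — all names, URLs, and the Wikidata ID are placeholders, not a prescription for any specific brand:

```python
import json

def organization_jsonld(name, url, same_as):
    """Build a Schema.org Organization node with sameAs trust anchors."""
    return {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
        # sameAs links let machines resolve this entity against external
        # registries (Wikidata, LinkedIn, business directories, ...)
        "sameAs": same_as,
    }

markup = organization_jsonld(
    name="Example Brand",                      # placeholder
    url="https://example.com",                 # placeholder
    same_as=[
        "https://www.wikidata.org/wiki/Q00000000",            # placeholder ID
        "https://www.linkedin.com/company/example-brand",     # placeholder
    ],
)
print(json.dumps(markup, indent=2))
```

The key design point is the sameAs array: each entry is a machine-checkable assertion that this entity and the external profile are the same thing — the disambiguation anchor the section describes.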

#64 Topical Authority — Depth Over Breadth

Topical Authority is what the data shows matters most for RAG-level citations. Alexander Wan, Eric Wallace & Dan Klein (ACL 2024) established that LLMs prioritize content relevance to the query over stylistic credibility signals such as scientific references and neutral tone when selecting which evidence to trust. Surfer SEO’s analysis of 36 million AI Overviews confirmed this at scale: niche experts outperform generalist domains in AI citation frequency. And SE Ranking’s study of 129K domains found that recently updated content earns roughly 67% more AI citations than older content — rewarding active expertise over passive reputation.

The implication for brands: topic breadth is less valuable than topic depth. A 200-article site covering everything in marketing is less citable than a 30-article site covering one subtopic with original data and documented expertise.

#65 Content Authority — What’s Actually in Your Content

Content Authority is the most directly actionable type. Pranjal Aggarwal et al.’s GEO framework (KDD 2024) demonstrated that targeted content optimizations — adding citations, statistics, and quotations — increased AI visibility by 30–40% in combination, with Statistics Addition alone reaching up to 41% in certain domains. Indig/AirOps’ analysis of 18,012 verified citations quantifies where content authority concentrates: 44.2% of all ChatGPT citations come from the first third of content — placing key evidence early dramatically increases extraction probability.

But content authority has a paradox, exposed by Jonas Wallat et al. (ICTIR 2025, Best Paper Honorable Mention): up to 57% of AI citations can be post-rationalized under adversarial conditions — meaning the model selects its answer first, then finds a source to attach. The baseline rate with random documents was 12%, suggesting a substantial gap between citation and actual retrieval-driven generation. This was tested on a single model (Cohere Command-R+) under controlled conditions; the true rate in production systems remains an open question.

#66 Network Authority — Your Position in the Citation Graph

Network Authority is where the Matthew Effect in AI Citations operates. Andrés Algaba et al. (NAACL 2025 Findings) documented the pattern: sources already frequently cited attract disproportionately more citations, and the effect is domain-agnostic. Their extended study (arXiv, April 2025) confirmed this across 274,951 references in 17,087 academic papers. Profound’s 680 million citation analysis confirms this at platform scale: within each AI system, a small cluster of sources dominates.

For brands not yet in the graph, this creates a cold-start problem. The entry point — documented in “The Two Directions of Root-Source Positioning” — is becoming the source that already-cited sources must cite. That’s Root-Source Positioning.

#67 Structural Authority — Can the Machine Actually Read You?

Structural Authority is the most overlooked type — and potentially the most damaging when absent. Content architecture determines extractability. JSON-LD helps AI systems understand your entity at indexing time, but visible content structure — clean headings, self-contained sections, FAQ markup — helps them extract your answers at retrieval time. EyeLevel.ai’s research confirms the stakes: parsing strategy alone impacts RAG performance by 10–20%.

#68 Reputational Authority — What Others Say About You

Reputational Authority is the type that most directly maps to traditional “trust” — but with a critical new mechanism. Ahrefs’ analysis of 75,000 brands found that brand search volume — not backlinks — is the strongest single predictor of AI citations at r=0.392 (Spearman correlation). Brands mentioned positively across at least four non-affiliated platforms are 2.8× more likely to be cited by ChatGPT than single-platform brands (Evertune, via Clearscope data).

Rand Fishkin/SparkToro’s 2026 research adds an important nuance: there is less than a 1-in-100 chance that any two AI responses will recommend the same list of brands for the same query. The implication: Reputational Authority must be built for persistence, not presence.

Semrush’s analysis of 304,805 AI-cited URLs provides the most robust evidence for the E-E-A-T–citation link: a 30.64% positive correlation between E-E-A-T signals and AI citation selection, measured across 59,410 keywords. Olaf Kopp’s controlled experiment — transferring identical content between domains with different source entity profiles — produced a 1,400% visibility increase for the domain with stronger E-E-A-T signals.


Three Modifier Dimensions — What Amplifies or Dampens Authority

The six authority types describe properties of a source. But three contextual factors modulate how effectively each property operates in practice. These are not a seventh, eighth, and ninth authority type — they work on a different level.

#69 Temporal Modifier — Freshness as Effectiveness Multiplier

Content freshness amplifies or dampens all six authority types. The evidence is unusually strong, including causal proof: Yubo Fang et al. (SIGIR APIR 2025) tested seven LLM models in a controlled experiment where only the date of identical passages was changed. Texts with newer dates rose by up to 95 ranking positions; up to 25% of all relevance decisions flipped solely due to date changes.

The mechanism is by design, not emergence: ChatGPT’s production configuration contains use_freshness_scoring_profile: true (discovered in October 2025 by Metehan Yesilyurt). SE Ranking (129K domains) found that content updated within the last 3 months receives roughly 67% more citations. Ahrefs (17 million citations) found AI-cited content is 25.7% fresher than organic Google results on average. And Qwairy (102K queries) discovered that AI systems automatically inject the current year into 28.1% of all sub-queries, even when users don’t specify it.

Operational path: Quarterly content refresh cycles for all key pages. Systematic updating of data points and year references. Content age monitoring as a KPI.
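The content-age KPI above is straightforward to operationalize. A minimal sketch, assuming a ~90-day staleness threshold loosely motivated by the SE Ranking finding (the inventory and URLs are hypothetical):

```python
from datetime import date

STALE_AFTER_DAYS = 90  # assumption: ~3-month window, per the data above

def stale_pages(pages, today):
    """Return URLs whose last substantive update exceeds the threshold."""
    return [
        url for url, last_updated in pages.items()
        if (today - last_updated).days > STALE_AFTER_DAYS
    ]

# Hypothetical content inventory: URL -> last substantive update.
inventory = {
    "/pricing": date(2026, 1, 10),
    "/guide/authority": date(2025, 6, 2),
}
flagged = stale_pages(inventory, today=date(2026, 4, 3))
```

Running this weekly against a real inventory turns “quarterly refresh cycles” from an intention into a queue of pages due for review.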

#70 Platform Modifier — Inherited Trust as Effectiveness Multiplier

The platform where content appears modulates how strongly each authority type registers. Semrush (230K+ prompts, 100 million citations) found Reddit appeared in ~60% of ChatGPT answers (before September 2025), with Wikipedia at ~55%. An anonymous Reddit post with zero backlinks gets cited more often than a well-linked corporate page — that’s neither Network Authority nor Reputational Authority. It’s inherited platform trust.

Platform Authority is also systemically unstable: Reddit citations on ChatGPT collapsed from ~60% to ~10% in September 2025, while AI Mode and Perplexity remained stable. Profound’s 680 million citation analysis shows the fragmentation: only 11% of domains are cited by both ChatGPT and Perplexity; only 7 of the top 50 domains appear across all three major platforms. Writesonic (2.4 million domains) found that 67.4% of all cited domains appear on exactly one AI platform; only 6.5% achieve “universal presence” across 5+ platforms.

Operational path: Multi-platform presence strategy prioritized by AI platform preferences. Monthly Cross-AI Coverage tracking. YouTube, Reddit, and LinkedIn as citation entry points.

#71 Consensus Modifier — Cross-Source Corroboration as Effectiveness Multiplier

RAG architectures are increasingly moving toward explicit cross-source verification. RA-RAG (Seunghyun Hwang et al., ICLR submission 2024) estimates source reliability through cross-source information checking. PAR-RAG (Zhiqiang Zhang et al., April 2025) implements a dual verification module requiring multi-evidence factual consistency for complex queries. MADAM-RAG (April 2025) assigns each retrieved document an independent LLM agent; agents debate across multiple rounds, producing +11.4% improvement on ambiguous queries and +15.8% on misinformation suppression. Arghya Biswas et al. (“Contradiction to Consensus,” February 2026) tested across 4 benchmarks with 5 LLMs: complete cross-source agreement produced sharp, high-confidence peaks; partial or no agreement produced lower, broader distributions.

The practical implication was captured by Search Engine Land (Adam Heitzman, March 2026): a client ranked #1 on Google for a competitive keyword but was invisible in ChatGPT — because “the page existed in isolation: no corroboration, no distributed mentions, no external validation.”

Operational path: “Corroboration Engineering” — ensuring key claims are confirmed by external sources. Factual alignment with established consensus in training data. Systematic cross-referencing in content.
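One way to make corroboration engineering auditable is a simple claim ledger. This is an illustrative sketch — the threshold of two independent domains and all claims and domains below are assumptions for the example:

```python
MIN_SOURCES = 2  # assumption: at least two independent confirming domains

def corroboration_report(claims):
    """Flag claims that exist 'in isolation', per the failure mode above.

    claims: mapping of claim text -> set of distinct external domains
    that confirm the claim.
    """
    return {
        claim: ("corroborated" if len(domains) >= MIN_SOURCES else "isolated")
        for claim, domains in claims.items()
    }

# Hypothetical ledger for one article.
report = corroboration_report({
    "Top 30 domains take 67% of citations": {"gauge.example", "airops.example"},
    "Our tool is the market leader": {"ourblog.example"},
})
```

Claims that come back "isolated" are the ones to target with distributed mentions and external validation before expecting consensus-driven systems to repeat them.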


Three Metrics That Replace “Authority Score”

If authority is six things modulated by three dimensions, then a single “Authority Score” is meaningless. The DAE framework proposes three replacement metrics:

| Metric | What It Measures | Data Source | Update Frequency |
|---|---|---|---|
| Citation Share | Your share of AI citations in your topic cluster, relative to competitors | Profound, OtterlyAI, AirOps | Weekly |
| Observed AI Source Index (oAIS) | How many AI platforms cite you, weighted by citation quality | Cross-platform queries + manual audits | Monthly |
| Cross-AI Coverage | Consistency of your presence across different AI systems | Structured multi-platform query sets | Monthly |

Why these three? Citation Share tells you how you’re performing relative to competitors. The oAIS tells you how broadly you’re recognized across platforms. Cross-AI Coverage tells you how stable that recognition is — because only 11% of domains receive citations from both ChatGPT and Perplexity (Profound), and citation volumes vary by a factor of 615 across platforms for the same brand.
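In their simplest forms, two of these metrics reduce to ratios over a citation log. A minimal sketch (the log, domains, and platform list are hypothetical; weighting oAIS by citation quality is omitted for brevity):

```python
from collections import Counter

def citation_share(citations, domain):
    """Share of citations in a topic cluster that point at `domain`."""
    counts = Counter(d for _, d in citations)
    total = sum(counts.values())
    return counts[domain] / total if total else 0.0

def cross_ai_coverage(citations, domain, platforms):
    """Fraction of tracked AI platforms on which `domain` is cited at all."""
    seen = {p for p, d in citations if d == domain}
    return len(seen & set(platforms)) / len(platforms)

# Hypothetical weekly log: (platform, cited_domain) pairs.
log = [
    ("chatgpt", "yoursite.example"),
    ("chatgpt", "competitor.example"),
    ("perplexity", "competitor.example"),
    ("perplexity", "yoursite.example"),
    ("ai_overviews", "competitor.example"),
]
share = citation_share(log, "yoursite.example")        # 2 of 5 citations
coverage = cross_ai_coverage(
    log, "yoursite.example", ["chatgpt", "perplexity", "ai_overviews"]
)                                                       # cited on 2 of 3
```

Even manual tracking of 10–20 target queries per week produces a log this code can score, which is enough for directional trend lines.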

The concentration data makes measurement urgent. Indig/Gauge’s analysis of 98K citation rows found that the top 30 most-cited domains account for 67% of all AI citations.

Cross-reference: “AI Citation Rules Have Changed” — Pattern 1 showed that 80% of AI citations come from pages outside Google’s top 100. That data is the reason Citation Share matters: Google rankings are no longer a proxy for AI citation performance.


The Authority Operating Model — Build, Measure, Maintain

The most common mistake in authority building is treating it as a project. Semrush’s analysis of 304,805 AI-cited URLs found a 30.64% positive correlation between systematic E-E-A-T signals and citation selection. Olaf Kopp’s controlled domain experiment reinforces this: the 1,400% visibility increase came from the domain with embedded E-E-A-T source entity signals, not from a campaign.

| Loop | What It Does | Authority Types Served | Cadence |
|---|---|---|---|
| Build | Create and strengthen authority signals across the six types | All six, prioritized by gap analysis | Ongoing (content calendar) |
| Measure | Track Citation Share, oAIS, Cross-AI Coverage per topic cluster | Provides feedback for all types | Weekly (Citation Share), Monthly (oAIS, Cross-AI) |
| Maintain | Update decaying signals, refresh content, monitor KG entries | Entity (#63), Content (#65), Reputational (#68) | Monthly review, quarterly deep audit |

The differentiation from existing approaches: “GEO Is a Tactic, Not a Strategy” established that GEO (Generative Engine Optimization) is a tactic, not a strategy. GEO’s techniques map to Content Authority (#65) and Structural Authority (#67). But without the other four types — and without accounting for the three modifier dimensions — GEO operates in a vacuum.

The Data Loop — Why Measurement Drives Strategy

| Authority Type | What to Measure | Available Tools |
|---|---|---|
| #63 Entity | Knowledge Graph presence, Wikidata status, Schema validation | Google KG API, Wikidata SPARQL, Schema.org validator |
| #64 Topical | Topic coverage depth, keyword breadth in cluster | Surfer, Semrush, custom topic maps |
| #65 Content | Inline citation density, evidence per section | Manual audit, custom scoring |
| #66 Network | Citation graph position, co-citation partners | Profound, OtterlyAI, Indig/Gauge |
| #67 Structural | Extractability, chunk quality, FAQ coverage | EyeLevel.ai, Discovered Labs, manual RAG testing |
| #68 Reputational | Brand mention sentiment, platform coverage count | SparkToro, Evertune, manual multi-platform audit |
| #69–71 Modifiers | Content age, platform presence, cross-source corroboration | Ahrefs freshness reports, cross-platform audits, manual claim verification |

E-E-A-T Is a Description, Not a Metric

E-E-A-T is not a metric. Google’s Danny Sullivan has stated explicitly that E-E-A-T is not a ranking factor. Google’s John Mueller has separately clarified that E-E-A-T describes what quality raters look for, not what the algorithm directly computes. No peer-reviewed study has causally tested E-E-A-T as a discrete signal. Semrush’s correlation data (+30.64%) is important evidence, but correlation is not causation.

The reframing: The authority signals that E-E-A-T describes are measurable. E-E-A-T itself is a description category. The taxonomy disaggregates it:

| E-E-A-T Component | Maps to Authority Type(s) | How It’s Actually Measured |
|---|---|---|
| Experience | #64 Topical + #65 Content | First-hand data, case studies, documented practice |
| Expertise | #64 Topical + #63 Entity | Topic depth, Knowledge Graph credentials, author bylines |
| Authoritativeness | #66 Network + #68 Reputational | Citation graph position, brand mentions, cross-platform presence |
| Trustworthiness | #67 Structural + #65 Content + #68 Reputational | Inline evidence, editorial standards, transparency signals |

📌 Honest Limitation

The E-E-A-T correlation data (+30.64% from Semrush, r=0.392 for brand search volume from Ahrefs) is real and useful. What’s missing is causal isolation: does improving E-E-A-T signals cause better AI citations, or do already-authoritative sources naturally score high on E-E-A-T? Olaf Kopp’s controlled domain-transfer experiment suggests causation (identical content, different source entity, dramatically different results), but controlled experiments in this space remain rare. This is correlation evidence with a suggestive causal data point, not causal proof at scale.


Frequently Asked Questions

What is an authority taxonomy for AI?

An authority taxonomy for AI is a systematic classification of the distinct signals AI systems use when deciding which sources to cite. Rather than treating “authority” as one thing, it identifies six measurable types — Entity, Topical, Content, Network, Structural, and Reputational Authority — each operating through a different mechanism, plus three modifier dimensions (Temporal, Platform, Consensus) that modulate their effectiveness.

How is this different from E-E-A-T?

E-E-A-T is a description category — it names what quality raters assess, but it’s not a directly measurable metric. The six-type authority taxonomy disaggregates E-E-A-T into its measurable components. Experience maps to Topical and Content Authority. Expertise maps to Topical and Entity Authority. Authoritativeness maps to Network and Reputational Authority. Trustworthiness maps to Structural, Content, and Reputational Authority. The taxonomy tells you what to build and how to measure it — E-E-A-T tells you what good looks like.

Which authority type matters most for AI citations?

It depends on where you are. For brands not yet cited by any AI system, Entity Authority (#63) is typically the bottleneck — if the machine can’t verify who you are, nothing else matters. For brands already in the citation graph, Topical and Content Authority (#64, #65) drive expansion. For brands seeking stability across platforms, Reputational Authority (#68) combined with the Platform Modifier (#70) is the differentiator.

What are modifier dimensions?

Modifier dimensions are contextual factors that amplify or dampen the effectiveness of all six authority types. Temporal (#69) means freshness — content updated within the last 3 months receives ~67% more citations. Platform (#70) means where content appears — an anonymous Reddit post can outperform a high-DA corporate page through inherited platform trust. Consensus (#71) means cross-source corroboration — claims confirmed across multiple independent sources receive higher citation probability.

Can small brands build authority for AI systems?

Yes — and the data suggests a specific path. Olaf Kopp’s controlled experiment shows that authority signals (not authority size) drive AI recognition. A small brand with consistent Schema.org markup, named expert bylines, inline citations, and presence on 4+ platforms can outperform larger competitors. Domain Authority explains less than 4% of AI citation variance (Kevin Indig/AirOps).

What is Citation Share and how do I measure it?

Citation Share is your proportion of AI citations in your topic cluster, relative to competitors. Measuring it requires monitoring AI responses across multiple platforms for your target queries. Tools like Profound, OtterlyAI, and AirOps automate parts of this. The key insight: Citation Share replaces “rankings” as the primary performance metric in AI search, because 80% of AI citations come from pages outside Google’s top 100.

What is the Matthew Effect in AI citations?

The Matthew Effect in AI Citations describes the self-reinforcing concentration of citations among already-cited sources. It was documented by Andrés Algaba et al. (NAACL 2025) and confirmed across 274,951 references. Once a source enters the citation graph, it attracts disproportionately more citations — making early entry and Network Authority (#66) strategically critical.
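The dynamic is easy to demonstrate with a toy simulation. This is a minimal preferential-attachment sketch, not the model Algaba et al. used: each new citation picks a source with probability proportional to the citations it already holds plus a small baseline, so early entrants compound their lead.

```python
import random

def simulate_citations(n_sources=10, n_citations=1000, baseline=1.0, seed=42):
    """Toy preferential-attachment model of AI citation concentration.

    Each new citation chooses a source with probability proportional to
    (citations already received + baseline). Sources that get cited early
    accumulate a disproportionate share of all later citations.
    """
    rng = random.Random(seed)
    counts = [0] * n_sources
    for _ in range(n_citations):
        weights = [c + baseline for c in counts]
        chosen = rng.choices(range(n_sources), weights=weights)[0]
        counts[chosen] += 1
    return counts

counts = simulate_citations()
top_share = max(counts) / sum(counts)
print(f"Top source captures {top_share:.0%} of all 1,000 citations")
```

With ten identical sources and no quality differences at all, the leader still ends up far above the 10% an even split would predict — which is why entering the citation graph early matters independently of content quality.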

How do I start building authority for my brand?

Start with a diagnostic audit across all six types plus three modifiers. Verify your Wikidata/Knowledge Graph presence. Map your content depth. Count inline citations per article. Check where AI systems cite you. Test extractability. Count positive mentions on non-affiliated platforms. Then check: How fresh? How many platforms? Are claims corroborated? The weakest dimension is your bottleneck. Build there first.

What if I can’t afford specialized AI citation tools?

You don’t need them to start. The Self-Assessment Checklist requires zero tools — it’s eight yes/no questions you can answer by testing your brand on ChatGPT, Perplexity, and Google AI Overviews directly. For Citation Share, manual tracking across 10–20 target queries per week provides directional data. Google’s Knowledge Graph API is free. Schema.org validation is free. Wikidata editing is free. Start with the audit; invest in tooling when you’ve identified your bottleneck.
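As one concrete zero-cost starting point, the free Knowledge Graph Search API can be queried with nothing but the Python standard library. A minimal sketch: the endpoint and the itemListElement/resultScore response fields come from Google’s API documentation, while the brand name and YOUR_API_KEY are placeholders you must replace with your own values.

```python
import json
import urllib.parse
import urllib.request

KG_ENDPOINT = "https://kgsearch.googleapis.com/v1/entities:search"

def build_kg_query(brand, api_key, limit=3):
    """Build a Knowledge Graph Search API request URL for a brand name."""
    params = urllib.parse.urlencode(
        {"query": brand, "key": api_key, "limit": limit}
    )
    return f"{KG_ENDPOINT}?{params}"

def kg_result_score(url):
    """Fetch the API response; return the top entity's resultScore, or None."""
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    items = data.get("itemListElement", [])
    return items[0]["resultScore"] if items else None

# Build the request (replace YOUR_API_KEY with a key from Google Cloud Console):
url = build_kg_query("GaryOwl", "YOUR_API_KEY")
print(url)
```

A missing or near-zero result for your brand name is a direct, machine-readable confirmation that Entity Authority is your bottleneck.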


Living Lab Disclosure — What We Don’t Know Yet

GaryOwl.com is a living lab. Every principle in this taxonomy is applied to this site, and the results — or lack of them — are disclosed transparently. At the time of writing, GaryOwl.com is at Stage 0–1 in the authority-building process: the site does not yet meet the 4+ platform threshold for consistent cross-platform citations, and content quality alone is not a citation signal.

Three honest scenarios for what happens next:

Scenario 1 (Slow Build): The taxonomy is correct, but authority accumulation takes 6–12 months. Citation data won’t show meaningful results until Q3/Q4 2026. This is the most likely scenario based on the Matthew Effect dynamics documented in this series.

Scenario 2 (Selective Success): Some authority types prove more actionable than others. Entity Authority and Structural Authority (the “technical” types) may produce measurable results faster than Network and Reputational Authority (the “social” types). The modifier dimensions may show differential impact: Temporal may matter more than Consensus at early stages.

Scenario 3 (Not Yet): The taxonomy is correct in theory but GaryOwl.com lacks the scale, backlink profile, or brand recognition to test it meaningfully. In that case, the Q2 Scorecard (“Measuring What We Built” — Article 9) will document the gap between framework and results — with the same transparency applied to everything else.

Whether the taxonomy performs is a question the Q2 Scorecard will answer with data. What this article provides is the framework: six types, three modifiers, three metrics, one operating model. The test is ongoing.

📌 Scope Limitation: Model and Platform Dependency

This taxonomy is derived from citation behavior observed across ChatGPT, Perplexity, Google AI Overviews, and Gemini as of Q1 2026. Each AI system maintains different retrieval architectures, different source preferences, and different update cadences. Profound’s data shows only 11% domain overlap between ChatGPT and Perplexity citations; Writesonic found that 67.4% of cited domains appear on exactly one platform. The six authority types and three modifiers are designed to be architecture-agnostic — they describe what AI systems evaluate, not how a specific model implements evaluation. However, the relative weight of each type will shift as models evolve. The Operating Model’s Measure loop exists precisely for this reason: monthly Cross-AI Coverage tracking detects when platform-specific weights change, enabling strategy adaptation without abandoning the taxonomy itself.

📌 Scope Limitation: Taxonomy Shelf Life

AI citation behavior is not static. ChatGPT’s Reddit citations collapsed from ~60% to ~10% in September 2025 due to a single infrastructure change. Perplexity’s source mix shifts with each index update. New retrieval architectures — agentic RAG, multi-step verification chains, real-time web grounding — may introduce authority signals this taxonomy does not yet capture. The six types and three modifiers represent the current empirical consensus as of Q1 2026, not a permanent truth. The Q2 Scorecard (Article 9) will test whether the taxonomy’s predictions hold against six months of citation data — and document where they don’t. If a seventh type or a fourth modifier emerges from that data, it will be added with the same evidentiary standard applied here.

📌 Forward Link

This taxonomy explains what authority consists of. The next question: where in the AI pipeline does each type actually work? Article 5, “Where Structure Actually Works,” maps each authority type to the pipeline stage where it has measurable impact. → [Article 5 link placeholder]


📌 Self-Assessment Checklist — Eight Questions for Your Brand

1. Entity: Does a Google Knowledge Graph panel appear when someone searches your brand name?
2. Topical: For your core topic, does your site have more depth than the top 5 currently-cited sources?
3. Content: Do your key pages include inline citations, statistics, and named sources on every major claim?
4. Network: Is your domain currently cited by any AI system for your target queries?
5. Structural: When you ask ChatGPT to summarize your key page, does it extract the right information?
6. Reputational: Is your brand mentioned positively on at least four non-affiliated platforms?
7. Temporal: Was your key content updated within the last 90 days?
8. Consensus: Are your core claims confirmed by at least two external, independent sources?

If you answered “no” to any of these, that dimension is your current bottleneck. Start there.

Next step: Run this audit on your top 3 pages. Map each “no” to the authority type or modifier above, then compare against the Operating Model (Build → Measure → Maintain) to design your first 90-day sprint. For the full diagnostic methodology, see the Authority Intelligence page.
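To make the audit repeatable across your top pages, the eight questions translate directly into a bottleneck finder. A minimal sketch: the dimension names come from the checklist above; the scoring logic itself is my own illustration, not part of the DAE framework.

```python
# Checklist dimensions, in the order the eight questions are asked.
CHECKLIST = [
    "Entity", "Topical", "Content", "Network",
    "Structural", "Reputational", "Temporal", "Consensus",
]

def find_bottlenecks(answers):
    """Map yes/no checklist answers to the dimensions that need work.

    `answers` is a list of 8 booleans in checklist order.
    Returns every dimension answered 'no' — each is a current bottleneck.
    """
    if len(answers) != len(CHECKLIST):
        raise ValueError(f"expected {len(CHECKLIST)} answers, got {len(answers)}")
    return [dim for dim, ok in zip(CHECKLIST, answers) if not ok]

# Example audit: no Knowledge Graph panel, not yet cited, content gone stale.
audit = [False, True, True, False, True, True, False, True]
print(find_bottlenecks(audit))  # → ['Entity', 'Network', 'Temporal']
```

Run it once per page, then prioritize the dimension that fails on the most pages.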


Sources & Methodology

This article synthesizes findings from 34 named sources, classified by evidence type:

[A] Academic / Peer-Reviewed:

  • Alexander Wan, Eric Wallace & Dan Klein (2024). “What Evidence Do Language Models Find Convincing?” ACL 2024. aclanthology.org/2024.acl-long.403 (Accessed: April 3, 2026)
  • Zhao et al. (2025). “SpARE: Sparse Retrieval for Attributable Knowledge.” NAACL 2025. arxiv.org/abs/2410.15999 (Accessed: April 3, 2026)
  • Andrés Algaba et al. (2025). “The Matthew Effect in AI-Generated Citation Graphs.” NAACL 2025 Findings. aclanthology.org/2025.findings-naacl.381 (Accessed: April 3, 2026)
  • Pranjal Aggarwal et al. (2024). “GEO: Generative Engine Optimization.” KDD 2024. Princeton/Georgia Tech. arxiv.org/abs/2311.09735 (Accessed: April 3, 2026)
  • Jonas Wallat et al. (2025). “Correctness is not Faithfulness in RAG Attributions.” ICTIR 2025. Best Paper Honorable Mention. DOI: 10.1145/3731120.3744592. arxiv.org/abs/2412.18004 (Accessed: April 3, 2026)
  • Wu et al. (2025). “SourceCheckup.” Nature Communications 2025. nature.com/articles/s41467-025-58551-6 (Accessed: April 3, 2026)
  • Yubo Fang et al. (2025). “Do Large Language Models Favor Recent Content?” SIGIR APIR 2025. DOI: 10.1145/3767695.3769493. dl.acm.org/doi/10.1145/3767695.3769493 (Accessed: April 3, 2026)

[A*] Preprints with DOI:

  • Wang, Wang & Nakov (2025). “Credibility Hierarchy in LLM Source Selection.” arXiv, January 2025. arxiv.org/abs/2402.02420 (Accessed: April 3, 2026)
  • Fabrice Jacques et al. (2026). “Authority Signals in AI Cited Health Sources.” medRxiv, January 2026. medrxiv.org/content/10.64898/2026.01.22.26344576v1 (Accessed: April 3, 2026)
  • Zhang et al. (2025). “Source Concentration in LLM-Powered Search.” arXiv, December 2025. arxiv.org/abs/2512.09483 (Accessed: April 3, 2026)
  • Andrés Algaba et al. (2025). Extended 274K-reference Matthew Effect dataset. arXiv, April 2025. arxiv.org/abs/2504.02767 (Accessed: April 3, 2026)
  • Seunghyun Hwang et al. (2024). “RA-RAG: Retrieval-Augmented Generation with Estimation of Source Reliability.” ICLR submission. arxiv.org/abs/2410.22954 (Accessed: April 3, 2026)
  • Zhang et al. (2025). “PAR-RAG: Plan-Driven RAG for Multi-Hop QA.” arXiv, April 2025. arxiv.org/abs/2504.16787 (Accessed: April 3, 2026)
  • MADAM-RAG (2025). “Multi-Agent Debate for RAG.” arXiv, April 2025. arxiv.org/abs/2504.13079 (Accessed: April 3, 2026)
  • Arghya Biswas et al. (2026). “Contradiction to Consensus.” arXiv, February 2026. arxiv.org/abs/2602.18693 (Accessed: April 3, 2026)

[B] Large-Dataset Industry Research (>100K samples):

[C] Industry Analysis / Expert Sources:

[DAE] Framework References:

Methodology: This article is authored by Manuel Hürlimann and follows the DAE Journalistic Source Principle. Every claim traces to a named study with year and is linked inline. The six-type authority taxonomy and three modifier dimensions are an original synthesis by Manuel Hürlimann, cross-validated against 14 peer-reviewed studies and 15 industry datasets. Each authority type is confirmed by at least three independent sources (DAE Triangulation Rule, §45). The E-E-A-T reframing distinguishes correlation evidence from causal proof. Interpretations are flagged where they go beyond what the data directly shows.

Contact: manuel@octyl.io


Update Log

[Future updates will be documented here.]


About the Author

Manuel Hürlimann is the creator of Digital Authority Engineering (DAE) — the systematic discipline of building machine-verifiable expertise that AI systems recognize, cite, and recommend. Based in Switzerland, he works as a consultant and lecturer at the intersection of AI search behavior, citation analysis, and brand authority.

Through the Authority Intelligence Lab at GaryOwl.com, he publishes original research on how AI systems select, evaluate, and cite sources — applying every principle to GaryOwl.com itself as a living lab. His work on Root-Source Positioning (RSP) and the six-type authority taxonomy (#63–#68) with three modifier dimensions (#69–#71) provides the diagnostic framework for brands navigating the shift from traditional search to AI-driven discovery.

Connect: GaryOwl.com · LinkedIn · manuel@octyl.io


Framework Disclosure: DAE is developed by GaryOwl.com and applied to GaryOwl.com itself as a living lab — every framework principle is simultaneously tested on this site. The framework is open for use with attribution. Validation is ongoing and published transparently; no guarantees implied. AI behavior varies by model and platform.


Article Navigation: ← AI Citation Rules Have Changed. Most Brands Haven’t Noticed. | Next: Where Structure Actually Works →


GaryOwl.com – Authority Intelligence Lab

