The Two Directions of Root-Source Positioning

What SEO Backlinks Were for Google, AI Citations Are for ChatGPT, Gemini, and Perplexity — But the Rules Have Changed


By Manuel Hürlimann for GaryOwl.com | Published: March 22, 2026 | Updated: [DATE]
Expertise: Digital Authority Engineering | Root-Source Positioning | Citation Graph Analysis
Time to read: 19 minutes
Series: Operative Article 2: “The Two Directions of Root-Source Positioning” — Glossary


📌 Key Insights — What This Article Establishes

From Backlinks to Citation Networks: A new authority mechanism. In Google’s world, backlinks were discrete signals — individual links from one page to another, counted and weighted by PageRank. Reciprocal links (A links to B, B links back to A) existed but required explicit agreements and were discounted by Google’s algorithm as potentially manipulative. In the AI citation world, the mechanism is fundamentally different: AI systems evaluate network topology — your structural position within an entire web of sources. Your outbound citations shape that position automatically, without any agreement from the other party. This means your citation choices carry strategic weight that backlinks never had — and current SEO, GEO, and LLMO frameworks have not yet addressed this dimension.

1. Who you cite changes whether AI cites you. ChatGPT, Gemini, and Perplexity don’t evaluate your content in isolation. They evaluate where you sit in a network of sources — Algaba et al. (NAACL 2025) demonstrated this across 274,951 AI-generated references. Your outbound links — the sources you reference — determine your position in that network.

2. Current frameworks focus on one direction only. SEO, GEO, AEO, and LLMO advice addresses how to make your content citable (inbound). The other direction — how your citation choices shape your authority — remains largely unexplored. This article addresses that gap.

3. AI search doesn’t read pages — it reads clusters. When a user asks a complex question, AI systems decompose it into dozens of sub-queries (Ekamoira, 2026: 17–26× more complex than traditional search), each retrieving sources from specific expertise clusters. If your outbound citations place you in the wrong cluster, you are never retrieved — regardless of content quality.

4. Citation is alliance formation. Citing Wikipedia and market leaders without strategy makes you a silent contributor to their authority. Citing domain-relevant peers and primary research positions you as a partner in an expertise cluster — where all participants gain visibility. The Goodie (2025) analysis of 5.7M citations confirms: top-quartile domains receive 10× more citations than others.

5. This is measurable and actionable today. The Citation Audit diagnostic in this article lets you map your current position and identify gaps. Three rules and three immediate actions give you a starting point you can implement this week.

A thesis beyond marketing: The unexamined consequences for journalism. Journalists work with sources and citations every day. Yet here is a question that, to our knowledge, has not been asked in newsrooms: whose authority are you strengthening with your citation choices — and is that consistent with your editorial position? When a publication consistently cites the same wire services, institutional sources, or dominant platforms, it may — unintentionally — reinforce those entities’ AI Citation Graph Position at the expense of independent researchers, original investigators, or smaller expert voices. This is not yet proven. But the structural mechanism described in this article (outbound citations shape network position) applies to journalistic content as much as to marketing content. If the mechanism holds, the implications are significant: editorial citation habits could systematically consolidate AI authority among a small number of dominant sources — even when the publication’s editorial values would call for a more diverse information ecosystem. The Columbia Journalism Review’s Tow Center study (Jaźwińska & Chandrasekar, March 2025) provides supporting context: AI search engines failed to correctly attribute sources in over 60% of news-related queries. In an environment where AI attribution is already unreliable, the citation patterns that journalists create may carry more structural weight than the journalism itself. This observation applies equally to researchers, analysts, content creators, and anyone who publishes with sources.


📌 Navigate the DAE Framework

DAE Glossary — 62 terms, 7 levels, complete terminology
DAE Framework — The foundational article
Authority Intelligence — How to measure what AI systems trust
Root-Source Positioning — How to become the source AI cites
Implementation Blueprint — From framework to execution in 90 days
System Architecture — How the disciplines interconnect


📌 Reading Guide

5 minutes: Executive Summary + TL;DR + What Should You Do Today
12 minutes: All main sections through Frequently Asked Questions
Full article (19 min): All sections including Checkpoints, Sources, Self-Assessment & Update Log


What is Root-Source Positioning?

📌 Core Definition

Root-Source Positioning (RSP) is a strategic framework developed by Manuel Hürlimann and first published on GaryOwl.com on March 9, 2026. RSP defines the strategy of becoming the primary source that AI systems cite — not a derivative that explains someone else’s work. When ChatGPT, Gemini, or Perplexity answer a question, they don’t cite randomly. They cite Root-Sources: the entities that created the original data, defined the concept first, or documented the methodology. Everything else — summaries, blog posts, explainers — is a derivative. You can optimize a derivative perfectly. AI will still cite the Root-Source.

In the world of traditional SEO, authority was built through backlinks — discrete signals where one page links to another. Google counted these links and weighted them via PageRank. Reciprocal links (mutual linking agreements) existed but were discounted as potentially manipulative. The key point: backlinks were individual transactions between two pages. In the AI citation world, the mechanism is different. AI systems don’t count individual links — they evaluate citation network topology: your structural position within the entire web of sources. And your position in that network is shaped by two forces:

RSP Inbound — Are you being cited? Do AI systems recognize you as a Root-Source?
RSP Outbound — Who do you cite? How do your citation choices position you in the AI Citation Graph?

Most SEO, GEO, AEO, and LLMO advice addresses only RSP Inbound. This article — by Manuel Hürlimann, creator of the RSP framework — formalizes RSP Outbound and demonstrates why both directions together determine your AI brand authority. To our knowledge, this bidirectional dimension has not been addressed in comparable depth elsewhere.


Executive Summary (1 minute)

Do the sources you cite affect whether AI cites you? Yes. LLMs encode entire citation networks — your outbound links position you in a specific cluster of the Citation Graph. This article introduces the RSP Outbound dimension of Root-Source Positioning: the strategic management of your outbound citations as AI authority signals. Three deliverables: the bidirectional RSP model, the Citation Audit diagnostic, and three rules for outbound citation strategy. Start with the Citation Audit table if you need to act today.


TL;DR — Key Takeaways

Root-Source Positioning (RSP) operates in two directions — not one. RSP Inbound asks: “Am I the source that others cite?” RSP Outbound asks: “Which sources do I cite — and how do those choices position me in the AI Citation Graph?” Most GEO discourse addresses only RSP Inbound. RSP Outbound — your outbound citation choices — is invisible in current frameworks but directly influences how AI systems associate, cluster, and position your entity. LLMs internalize entire citation networks (Algaba et al., NAACL 2025): not just who cites you, but who you cite, who your sources cite, and the structural topology of the graph you inhabit. Every outbound link is a Citation Graph signal. The bidirectional RSP model — formalizing both directions as strategic variables — is original to DAE and has no equivalent in GEO, AEO, or LLMO literature.

📌 Five things this article establishes

1. RSP has two directions: inbound (being cited) and outbound (citing others) — both shape AI authority attribution
2. LLMs internalize citation networks as graph structures, not as isolated source-target pairs (Algaba et al., NAACL 2025)
3. Outbound citation choices function as entity association signals — who you cite determines who AI associates you with
4. The bidirectional RSP model is an original DAE contribution: we have not found another framework that formalizes outbound citations as a strategic variable
5. A practical Citation Audit framework maps both directions and produces actionable positioning decisions

📌 First Publication: Bidirectional Root-Source Positioning

The concept of Root-Source Positioning was first published on March 9, 2026 by Manuel Hürlimann on GaryOwl.com. This article extends RSP with the bidirectional model — the first formalization that outbound citation choices (RSP Outbound) shape inbound authority attribution (RSP Inbound) through Citation Graph positioning. This extension was developed as part of the Digital Authority Engineering (DAE) framework. To our knowledge, the outbound citation dimension has not been formalized in GEO, AEO, or LLMO literature to date.

📌 Glossary: New Terms Introduced in This Article

Bidirectional Root-Source Positioning: The strategic framework recognizing that an entity’s position in the AI Citation Graph is shaped by two directions simultaneously — RSP Inbound (inbound: being cited by others) and RSP Outbound (outbound: which sources the entity cites). Both directions form a compounding feedback loop amplified by the Matthew Effect.
RSP Outbound — Outbound Citation Positioning: The strategic management of outbound citation choices as AI authority signals. Which sources you cite determines which citation cluster AI systems associate you with.
Citation Graph Feedback Loop: The five-phase compounding cycle (Outbound Positioning → Cluster Membership → Inbound Recognition → Graph Centrality → Parametric Encoding) through which RSP Inbound and RSP Outbound reinforce each other.
Citation Audit (DAE): A structured diagnostic that maps an entity’s inbound and outbound citation network to identify positioning gaps. Assesses AI Citation Graph Position — the entity’s structural location within the network of sources that AI systems draw from. Not to be confused with Local SEO citation audits (NAP consistency). Four gap types: Cluster Misalignment, Authority Isolation, Derivative Signaling, Competitor Reinforcement.
Root concept (RSP): First published March 9, 2026 by Manuel Hürlimann on GaryOwl.com.
All terms above: First formalized in this article (Operative Article 2: “The Two Directions of Root-Source Positioning”) as part of Digital Authority Engineering (DAE), Level 4 — Strategy.


Why Your AI Visibility Depends on Two Directions — Not One

📌 Core thesis: Bidirectional RSP

Root-Source Positioning is not a one-way relationship. GEO Is a Tactic, Not a Strategy established that AI systems cite Root-Sources — the origins of information that derivatives reference (Onely, 2025: 67% of ChatGPT’s top citations come from first-hand data). But the citation relationship has a second dimension that current GEO frameworks do not yet address: the sources you choose to cite shape how AI systems position you in the Citation Graph. This is not a metaphor. It is a structural property of how LLMs encode authority relationships.

The first operative article, GEO Is a Tactic, Not a Strategy, established the foundation: GEO is a tactic within DAE, not a strategy. It introduced Root-Source Positioning as the strategic layer that GEO cannot replace — the four characteristics (Primary Data, First Publication, Expert Attribution, Citation Magnet) that determine whether AI systems treat you as an origin or a derivative.

This article extends RSP with a structural insight that emerged from the DAE research process: your position in the Citation Graph is determined not only by who cites you, but by who you cite. The two directions are:

RSP Inbound — Inbound: Being cited. This is the direction that GEO Is a Tactic, the Matthew Effect (Algaba et al., NAACL 2025), and the entire GEO literature address. The question: “Does AI recognize me as a Root-Source?”

RSP Outbound — Outbound: Citing others. This is the direction that current frameworks have not yet formalized. The question: “How do my citation choices position me within the AI-readable network of authority relationships?”

Three Developments That Make This Critical for SEO and GEO Practitioners

The distinction becomes operationally critical because of three converging developments:

First, the Citation Graph is no longer a metaphor. Algaba et al. (NAACL 2025) demonstrated empirically that LLMs do not simply learn “Source A is authoritative.” They internalize entire citation networks — the topology of who cites whom, how densely connected a source is, and which clusters of sources co-occur. The Matthew Effect they document is a graph-level phenomenon: already-cited sources receive disproportionately more future citations because the model has encoded them as high-centrality nodes.

Second, Query Fan-Out mechanics in agentic search systems decompose a single user query into dozens of sub-queries, each retrieving different source clusters. Olaf Kopp’s analysis of LLM Readability and Brand Context Optimization describes how retrieval systems evaluate content not in isolation but in context — the semantic neighborhood of sources that surround a given entity determines how the entity is classified and ranked. Your outbound citations define your semantic neighborhood.

Third, Malte Landwehr’s citation debugging methodology reveals a practical insight: when diagnosing why an entity fails to appear in AI-generated answers, the problem is often not the entity’s own content quality — it is the entity’s position in the citation network. Specifically: which sources the entity links to, whether those sources are themselves cited by authoritative nodes, and whether the entity’s outbound citation pattern is consistent with its claimed expertise domain.


RSP Inbound: What GEO, AEO, and LLMO Already Address

RSP Inbound is the established dimension of RSP. GEO Is a Tactic, Not a Strategy documented its mechanics in detail. The summary:

📌 RSP Inbound recap: The inbound citation dimension

AI systems cite Root-Sources — entities that created original data, defined concepts first, carry verifiable expert attribution, and have accumulated citations from others. 67% of ChatGPT’s top citations come from first-hand data sources (Onely, 2025). The Matthew Effect (Algaba et al., NAACL 2025) compounds this: already-cited sources receive disproportionately more citations. Citation Share — your citations ÷ total citations in domain × 100 — is the north-star metric.
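
The Citation Share formula above is simple enough to compute directly. A minimal Python sketch (the function name is ours and the counts are hypothetical):

```python
def citation_share(own_citations: int, domain_total_citations: int) -> float:
    """Citation Share = (your citations / total citations in domain) * 100."""
    if domain_total_citations == 0:
        return 0.0  # no tracked citations in the domain yet
    return own_citations / domain_total_citations * 100

# Example: you are cited in 12 of 400 tracked AI answers for your domain.
share = citation_share(12, 400)  # 3.0 (percent)
```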

RSP Inbound is where 100% of GEO, AEO, and LLMO discourse operates. The logic is intuitive: create good content, optimize it for AI extraction, and earn citations. The four RSP characteristics (Primary Data, First Publication, Expert Attribution, Citation Magnet) formalize what “good content” means for AI authority.

Why Content Quality Alone Fails: The Authority Paradox in AI Search

RSP Inbound alone cannot explain several observed phenomena:

The authority paradox of identical content quality. Two entities publish content of comparable Root-Source quality on the same topic. One is consistently cited by AI systems; the other is not. RSP Inbound — which focuses on content characteristics — predicts equal outcomes. But content does not exist in a vacuum. It exists in a citation network. The entity that cites (and is cited by) higher-authority nodes inherits network position that the isolated entity does not. This paradox is empirically confirmed by SearchAtlas’s analysis of 5.5 million LLM responses (Bhan et al., December 2025): schema markup — the most commonly recommended technical optimization — shows no correlation with LLM citation frequency. Domains with complete schema coverage perform no better than domains with minimal schema. If technical optimization does not differentiate, what does? Network position.

The niche entity problem. A specialized entity with genuine Root-Source characteristics in a narrow domain fails to gain AI citations because its outbound citation pattern links exclusively to generic sources (Wikipedia, large publishers) rather than to the domain-specific citation cluster where its expertise resides. The entity’s content says “I am an expert in X.” The entity’s citation pattern says “I read the same general sources as everyone else.”

The derivative trap. An entity consistently cites only one or two dominant Root-Sources in its field — effectively signaling to AI systems that it is a derivative of those sources, not an independent authority. The outbound citation pattern undermines the inbound positioning.


RSP Outbound: The Dimension No SEO or GEO Framework Formalizes

📌 Definition: RSP Outbound — Outbound Citation Positioning

RSP Outbound is the strategic dimension of RSP that addresses how an entity’s outbound citation choices — the sources it references, links to, and builds upon — shape its AI Citation Graph Position: the entity’s structural location within the network of sources that AI systems draw from. Every outbound link is a signal that AI systems use to determine: (a) which knowledge cluster the entity belongs to, (b) which other entities it is semantically associated with, and (c) whether its claimed authority domain is consistent with its citation behavior.

How ChatGPT, Gemini, and Perplexity Encode Citation Networks

The critical research is Algaba et al. (NAACL 2025), which demonstrated that large language models do not simply memorize individual source-authority pairs. During training, LLMs are exposed to millions of documents that reference other documents. These references create patterns — statistical regularities in which sources co-occur, which sources cite the same upstream authorities, and which sources are cited by the same downstream derivatives. Their analysis of 274,951 AI-generated references across 10,000 papers confirmed that GPT-4o’s citation recommendations reproduce the clustering coefficients and network topology of ground-truth citation graphs — evidence that the model internalizes not just individual source preferences but structural graph properties. This finding is independently reinforced by Ji et al. (2025), whose CiteAgent framework showed that LLM-based agents reproduce power-law distributions and the Matthew Effect when simulating citation networks — further validating that citation graph structure is encoded at the model level.

The result: LLMs develop an internal representation that functions like a citation graph. Not a formal graph database — but a set of weighted associations that encode who is connected to whom, how strongly, and in what direction. The practical scale of this is now measurable: SearchAtlas’s comparative analysis of 5,504,399 LLM responses (Bhan et al., December 2025) across 748,425 queries reveals that citation structures differ fundamentally between models (Perplexity, OpenAI, Gemini) — shaped by retrieval architecture, not content quality alone. Critically, domain operators who represent the authoritative source for a technique receive exclusive citations regardless of whether alternative coverage exists. This is RSP Outbound in action: the entity’s position in the citation network determines citation exclusivity.

For RSP, this means: your outbound citations are not invisible to AI systems. When you consistently cite Source A and Source B, AI systems encode the association [Your Entity] → [Source A] → [Source B]. Over time, this positions your entity in the citation cluster defined by those sources.
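
The weighted associations described here can be approximated from your own site: tallying outbound citations per source yields exactly the entity-to-source edges the text describes. A sketch (page paths and domains below are hypothetical):

```python
from collections import Counter


def outbound_edges(pages: dict[str, list[str]]) -> Counter:
    """Count weighted [Entity] -> [Source] edges from each page's outbound citations."""
    edges: Counter = Counter()
    for _page, sources in pages.items():
        for source in sources:
            edges[source] += 1
    return edges


pages = {
    "/post-1": ["source-a.example", "source-b.example"],
    "/post-2": ["source-a.example"],
}
edges = outbound_edges(pages)
# "source-a.example" is the strongest association: cited on two pages.
```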

Query Fan-Out: Why AI Search Evaluates Clusters, Not Pages

When a user asks an AI system a complex question, the system does not execute a single search. It decomposes the query into multiple sub-queries — a process called Query Fan-Out. Each sub-query retrieves a different set of sources. The final answer synthesizes across these retrieved sets. The scale of this decomposition is now quantified: iPullRank (December 2025) found that AI search queries average 70–80 words compared to 3–4 words for traditional searches (Ekamoira, February 2026) — a 17–26× increase in query complexity. Google’s AI Mode fires hundreds of sub-queries per single user question, with systems executing up to 20 retrieval iterations before synthesizing a response.

This Fan-Out mechanism means that AI systems evaluate sources in context — not in isolation. The retrieval pipeline (documented in DAE’s Two-Path Architecture) proceeds through embedding, indexing, retrieval, reranking, and synthesis. At the reranking stage, sources that appear in proximity to authoritative nodes receive higher relevance scores. Your outbound citations determine which nodes you appear in proximity to.

The practical implication: an entity that cites the leading peer-reviewed research in its domain will be positioned — during Fan-Out retrieval — alongside those research sources. An entity that cites only blog posts and generic aggregators will be positioned in a different, lower-authority cluster.

The causal chain is direct: (1) A user asks a complex question. (2) The AI system decomposes it into dozens of sub-queries targeting specific expert knowledge clusters. (3) Each sub-query retrieves sources from a specific cluster — not from the entire web. (4) Your RSP Outbound choices determine which cluster you belong to. (5) If your outbound citations place you in the wrong cluster, you are never retrieved for the sub-queries where your expertise matters — regardless of content quality. Fan-Out does not read “your page.” It reads thematic clusters. RSP Outbound is the mechanism that places you inside the right cluster before retrieval begins.
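
The causal chain can be made concrete with a toy model (clusters and domains are invented for illustration): each sub-query retrieves only from its assigned cluster, so a source filed in the wrong cluster is never surfaced, whatever its content quality.

```python
# Toy fan-out model: cluster assignments stand in for the positioning that
# an entity's outbound citations produce. All names are hypothetical.
clusters = {
    "oncology-research": ["journal-a.example", "journal-b.example"],
    "general-health": ["niche-expert.example", "news-site.example"],
}


def fan_out(sub_query_clusters: list[str]) -> set[str]:
    """Union of sources retrieved across all sub-query clusters."""
    retrieved: set[str] = set()
    for cluster in sub_query_clusters:
        retrieved.update(clusters.get(cluster, []))
    return retrieved


# A complex oncology question fans out into research-cluster sub-queries.
# The niche expert's generic citations filed it under "general-health",
# so it is never retrieved, despite strong content.
answer_sources = fan_out(["oncology-research", "oncology-research"])
```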

The Landwehr Debug: When Your AI Visibility Problem Is Not Content

When an entity fails to appear in AI-generated answers despite having strong content, the diagnostic process must examine not only the entity’s own content quality (RSP Inbound) but also its citation network position (RSP Outbound). The debugging steps are:

Step 1: Verify content quality — does the content pass the Originality Prompt? Does it meet all four RSP characteristics?

Step 2: Verify technical extractability — does the content pass the GEO-16 audit (Kumar & Palkhouski, UC Berkeley, 2025): G ≥ 0.70 (Groundedness Score — how well AI systems can extract and ground claims from your content), H ≥ 12 (number of GEO-16 pillars passed out of 16)?

Step 3: Examine the citation network position. This is the RSP Outbound diagnostic:

  • Which sources does the entity cite? Are they authoritative in the target domain?
  • Do those sources cite the entity back? (Bidirectional connections are stronger signals.)
  • Is the entity’s outbound citation pattern consistent with its claimed expertise?
  • Does the entity exist in the same citation cluster as the sources AI systems already cite for this domain?

If Steps 1 and 2 pass but the entity is still not cited, the problem is almost always Step 3 — an AI Citation Graph Position problem, not a content problem.
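
The three-step diagnostic can be expressed as a short checklist function. The G ≥ 0.70 and H ≥ 12 thresholds come from the GEO-16 audit above; every field name is illustrative, not a real API.

```python
def debug_visibility(entity: dict) -> str:
    """Return the first failing step of the three-step diagnostic."""
    # Step 1: content quality (Originality Prompt, four RSP characteristics).
    if not entity["passes_originality_prompt"]:
        return "step 1: content quality"
    # Step 2: technical extractability (GEO-16 thresholds G >= 0.70, H >= 12).
    if entity["groundedness"] < 0.70 or entity["pillars_passed"] < 12:
        return "step 2: extractability"
    # Step 3: citation network position (the RSP Outbound diagnostic).
    if not entity["in_domain_citation_cluster"]:
        return "step 3: citation graph position"
    return "no gap found"


entity = {
    "passes_originality_prompt": True,
    "groundedness": 0.81,
    "pillars_passed": 14,
    "in_domain_citation_cluster": False,
}
result = debug_visibility(entity)  # "step 3: citation graph position"
```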


How Inbound and Outbound Citations Create a Compounding Authority Loop

📌 The bidirectional model

RSP Inbound and RSP Outbound are not independent. They form a feedback loop: your outbound citations (RSP Outbound) influence which entities associate with you, which in turn affects your inbound citations (RSP Inbound). The Matthew Effect operates on this loop — it compounds advantages in both directions simultaneously.

The Citation Graph Feedback Loop: How Brand Authority Compounds in AI

The interaction between the two directions produces a compounding cycle:

Phase 1 — Outbound positioning: An entity publishes content that cites authoritative, domain-relevant sources. The AI system encodes the association: [Entity] → [Authoritative Source Cluster].

Phase 2 — Cluster membership: Through repeated outbound citation patterns, the entity becomes associated with the authoritative cluster. When AI systems retrieve sources for queries in this domain, the entity appears in the same retrieval neighborhood as the authorities it cites.

Phase 3 — Inbound recognition: As the entity appears alongside authoritative sources in AI-generated answers, it accumulates inbound citations. Other content creators reference it. The Matthew Effect activates: citations breed more citations. The compounding is now quantified: Goodie’s analysis of 5.7 million AI citations (February–June 2025) across ChatGPT, Claude, Gemini, and Perplexity found that brands in the top quartile for web mentions receive over 10× more citations in AI Overviews than brands in the next quartile. The top of the Citation Graph compounds exponentially; the bottom remains invisible.

Phase 4 — Graph centrality: The entity’s position in the Citation Graph shifts from peripheral (citing authorities) to central (being cited as an authority). Its Citation Graph Centrality — the measure of how many citation paths pass through it — increases.

Phase 5 — Parametric encoding: As centrality increases, the entity’s probability of being encoded in future LLM training data increases. This is the transition from Path 1 (RAG-Retrieval) to Path 2 (Parametric Knowledge) — the most durable form of AI authority.
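
Phases 3–4 describe a rich-get-richer dynamic. A deterministic mean-field sketch (our simplification; the numbers are illustrative and not from the Goodie data) shows the core property: when new citations are allocated in proportion to existing ones, shares persist and the absolute gap between incumbent and newcomer keeps widening.

```python
def compound_citations(counts: dict[str, float], rounds: int,
                       new_per_round: float = 20.0) -> dict[str, float]:
    """Allocate each round's new citations proportionally to existing counts
    (a mean-field Matthew Effect: shares are preserved, absolute gaps widen)."""
    counts = dict(counts)
    for _ in range(rounds):
        total = sum(counts.values())
        counts = {s: c + new_per_round * c / total for s, c in counts.items()}
    return counts


start = {"incumbent": 10.0, "newcomer": 1.0}
after = compound_citations(start, rounds=5)
# The 9-citation gap at the start has grown roughly tenfold; the newcomer's
# share of the graph never improves without a positioning change.
```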

📌 The RSP Feedback Loop — Interactive Visual Model

The interactive infographic below shows all 5 phases, the Matthew Effect return loop, the Citation Audit gap types, and the RSP Outbound impact by Knowledge Pathway. Click any phase or gap type to expand detail.

The RSP Causality Note

A critical clarification on causality: RSP Inbound and RSP Outbound are not sequential steps but co-constitutive processes. You do not “first” build outbound citations and “then” receive inbound citations. Both processes operate simultaneously from the first publication.

The causal structure is:

  • RSP Outbound → RSP Inbound: Your outbound citations position you in a network cluster. This positioning makes it more likely that AI systems retrieve your content alongside authoritative sources, which increases the probability of inbound citation.
  • RSP Inbound → RSP Outbound: As you accumulate inbound citations, your own authority increases — which means that the sources you cite receive a stronger association signal from being cited by you. Your outbound citations become more valuable to the entities you reference.

This bidirectional causality is what makes Citation Graph position a compounding asset. Early RSP decisions — both in what Root-Sources you create (RSP Inbound) and which authorities you cite (RSP Outbound) — have disproportionate long-term effects through this feedback loop.


The Citation Audit: A Diagnostic Tool for AI Brand Authority

📌 The Citation Audit: An RSP Outbound diagnostic tool

The Citation Audit is a structured assessment that maps an entity’s inbound and outbound citation network to identify positioning gaps, misalignments, and strategic opportunities. It is the RSP Outbound equivalent of the Originality Prompt (which diagnoses RSP Inbound).

How to Run a Citation Audit for AI Visibility

Step 1: Map your outbound citations.

For your top 10 content assets, list every external source you cite. Categorize each as:

  • Root-Source: Original research, primary data, peer-reviewed study
  • Authoritative derivative: Well-known synthesis by a recognized expert
  • Generic source: Wikipedia, general news, generic aggregator
  • Competitor: Direct competitor in your domain

Step 2: Assess cluster alignment.

For your target domain’s core queries, run Cross-AI Synthesis (identical prompts across ChatGPT, Claude, Perplexity, Gemini). Document which sources AI systems cite in their answers. These are the high-centrality nodes in your domain’s Citation Graph.

Question: Do your outbound citations include these high-centrality nodes? If not, your content is positioned outside the citation cluster that AI systems draw from for your domain.

Step 3: Identify RSP Outbound gaps.

Common gaps include:

| Gap Type | Symptom | RSP Outbound Cause | Correction |
| --- | --- | --- | --- |
| Cluster misalignment | Strong content, no AI citations | Outbound citations point to a different domain cluster | Realign outbound citations to domain-relevant Root-Sources |
| Authority isolation | Good RSP Score, low Citation Share | No outbound citations to high-centrality nodes | Build citation bridges to established authorities |
| Derivative signaling | Cited only as “also mentioned” | Outbound citations reference only 1–2 dominant sources | Diversify citation portfolio across multiple Root-Sources |
| Competitor reinforcement | Competitor’s Citation Share grows | Your outbound citations strengthen competitor’s graph position | Cite upstream Root-Sources instead of competitor derivatives |
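
Three of the four gap types can be flagged mechanically from the Step 1 citation map; cluster misalignment additionally needs the Step 2 cross-AI data. A sketch with hypothetical field names, domains, and thresholds:

```python
def audit_gaps(citations: list[dict], high_centrality_nodes: set[str]) -> list[str]:
    """Flag RSP Outbound gap types from a categorized outbound-citation list.
    (Cluster misalignment is omitted: it requires Step 2 cross-AI data.)"""
    gaps = []
    domains = [c["domain"] for c in citations]
    # Authority isolation: no outbound citations to high-centrality nodes.
    if not any(d in high_centrality_nodes for d in domains):
        gaps.append("authority isolation")
    # Derivative signaling: citations concentrated on only 1-2 sources.
    if len(set(domains)) <= 2:
        gaps.append("derivative signaling")
    # Competitor reinforcement: any citation categorized as competitor.
    if any(c["category"] == "competitor" for c in citations):
        gaps.append("competitor reinforcement")
    return gaps


citations = [
    {"domain": "competitor.example", "category": "competitor"},
    {"domain": "wikipedia.org", "category": "generic"},
]
gaps = audit_gaps(citations, high_centrality_nodes={"journal.example"})
```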

Step 4: Design a Citation Strategy.

Based on the audit, define:

  • Citation targets: Which Root-Sources in your domain should you consistently reference?
  • Citation diversity: Are you citing across multiple authoritative nodes, not just one?
  • Citation consistency: Does your citation pattern across all content assets tell a coherent story about your expertise domain?
  • Citation bridges: Can you cite sources that bridge between your niche and adjacent high-authority domains?
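
Of these dimensions, citation diversity is the easiest to quantify. One possible measure (our choice, not part of the DAE framework) is Shannon entropy over the domains you cite:

```python
from collections import Counter
from math import log2


def citation_diversity(cited_domains: list[str]) -> float:
    """Shannon entropy (bits) of the cited-domain distribution.
    0.0 = every citation points to one domain; higher = more diverse portfolio."""
    counts = Counter(cited_domains)
    total = len(cited_domains)
    return -sum((n / total) * log2(n / total) for n in counts.values())


concentrated = citation_diversity(["leader.example"] * 8)
diverse = citation_diversity(["a.example", "b.example", "c.example", "d.example"])
```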

Content Strategy Beyond SEO: Three Rules for Outbound Citation

📌 Strategic implication: Every outbound link is a Citation Graph signal

Under the bidirectional RSP model, content strategy is not only about what you publish. It is about what you reference. Every outbound citation — every hyperlink, every source attribution, every “according to” — positions your entity in the AI Citation Graph. This is not a call to game the system with fake citations. It is a call to be deliberate about your citation choices, because AI systems are reading them as positioning signals whether you intend them or not.

Three Rules for Outbound Citation Strategy in AI Search

Rule 1: Cite Root-Sources, not derivatives. When referencing a finding or concept, link to the original research — not to a blog post that summarized it. AI systems trace citation chains. Citing the origin positions you closer to the high-centrality node. Citing a derivative positions you one step further from authority.

Rule 2: Cite domain peers, not just domain leaders. A healthy citation network includes references to multiple authoritative sources across your domain — not just the single most dominant source. An entity that cites only one authority signals dependency. An entity that cites across the domain signals independent expertise and comprehensive knowledge.

Rule 3: Maintain citation consistency. Your outbound citation pattern should be consistent across all content assets. If your claimed expertise is AI visibility strategy, but half your citations reference unrelated domains, the citation signal is incoherent. Entity Coherence applies not only to your own entity signals but to the entities you associate with through citation.

📌 The Strategic Alliance Protocol

Every outbound citation is a strategic vote in the Citation Graph — not an act of academic courtesy. Without an RSP Outbound strategy, you function as a silent contributor: you feed authority to established monopolies while remaining invisible yourself. The three rules above encode a deeper principle: citation is alliance formation.

Cite upstream (the original research), not downstream (the blog that summarized it) β€” this positions you at the same depth as domain leaders. Cite across your niche cluster (complementary experts, specialized datasets, domain-specific publications) β€” this builds a validated expertise cluster where all participants gain Graph Centrality. Avoid parasitic citation patterns: if more than 50% of your outbound links point to a single dominant entity (Wikipedia, a market leader, a competitor), you are reinforcing their monopoly position while signaling to AI systems that you are a derivative, not a peer.

The test: review your last 10 published pages. For each outbound citation, ask: “Am I positioning myself as a partner in this source’s cluster β€” or as a fan?” Partners build bidirectional authority. Fans build someone else’s.
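The parasitic-pattern check above (more than 50% of outbound links pointing at a single dominant entity) is simple enough to automate. A minimal sketch, assuming you have already collected each page's outbound URLs; the link list below is hypothetical, for illustration only:

```python
from collections import Counter
from urllib.parse import urlparse

def parasitic_share(outbound_urls):
    """Return the most-cited domain and its share of all outbound citations.

    A share above 0.5 matches the article's 'parasitic citation pattern'
    threshold: more than half of your outbound links feed one entity.
    """
    domains = [urlparse(u).netloc.removeprefix("www.") for u in outbound_urls]
    counts = Counter(domains)
    top_domain, top_count = counts.most_common(1)[0]
    return top_domain, top_count / len(domains)

# Hypothetical outbound links from one content asset:
links = [
    "https://en.wikipedia.org/wiki/GDPR",
    "https://en.wikipedia.org/wiki/Compliance",
    "https://eur-lex.europa.eu/eli/reg/2016/679",
    "https://en.wikipedia.org/wiki/Regulation",
]
domain, share = parasitic_share(links)
print(domain, share)  # en.wikipedia.org 0.75 -> above the 0.5 threshold
```

The threshold itself is the article's heuristic, not an empirically derived constant; treat a flagged page as a prompt for editorial review, not an automatic rewrite.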

Parametric vs. RAG: How Outbound Citations Affect Each AI Knowledge Pathway

The Knowledge Pathways β€” Parametric (60%), RAG-Hybrid (30%), RAG-First (10%) β€” are affected differently by RSP Outbound:

  • RAG-First (10%) — RSP Outbound impact: High, immediate. Mechanism: Fan-Out retrieval evaluates source clusters in real time; your outbound links determine which cluster you appear in during retrieval. Timeline: days to weeks.
  • RAG-Hybrid (30%) — RSP Outbound impact: Medium, cumulative. Mechanism: parametric pre-knowledge combined with live retrieval; outbound citations that consistently point to high-authority nodes reinforce the parametric association. Timeline: weeks to months.
  • Parametric (60%) — RSP Outbound impact: High, structural but delayed. Mechanism: training-data encoding captures citation network topology; your graph position at training time determines your parametric authority. Timeline: months to a full training cycle.

RSP Outbound is most immediately measurable in RAG-First (where outbound links directly affect retrieval neighborhood), but most consequential in the Parametric pathway (where it determines long-term entity encoding).


Mini-Case: From 0% to 8% Citation Share in 6 Months (B2B SaaS)

πŸ“Œ Illustrative scenario based on patterns observed in DAE consulting practice

Consider a mid-size European SaaS company (“CompliancePilot”) offering regulatory compliance automation. Their content team publishes well-structured, original articles on EU compliance workflows — genuine Root-Source material with proprietary process data. They score well on the Originality Prompt. Yet when users ask ChatGPT or Perplexity “Which tools automate EU compliance?”, CompliancePilot does not appear.

The RSP Inbound diagnosis finds no obvious problem: content quality is high, schema markup is implemented, freshness signals are current.

The RSP Outbound diagnosis reveals the structural issue: CompliancePilot’s 15 published articles cite almost exclusively two sources β€” a single McKinsey report and a Wikipedia article on GDPR. Their outbound citation pattern signals “generic business content about regulation” β€” not “specialized compliance technology authority.”

The Citation Audit intervention (Month 1–2):

  • Outbound citations are realigned: CompliancePilot now references the original EU regulatory texts (EUR-Lex), peer-reviewed compliance research (specific papers on automated regulatory mapping), industry-specific benchmarks (compliance automation vendor reports with documented methodology), and domain-adjacent Root-Sources (legal tech researchers, RegTech practitioners with published data).
  • Three new articles explicitly cite and build upon the leading academic work in automated compliance β€” positioning CompliancePilot in the research citation cluster, not the generic business commentary cluster.

Result after 3 months (RAG-First): CompliancePilot begins appearing in AI-generated answers for compliance automation queries β€” not because their content changed, but because their retrieval neighborhood shifted. Fan-Out sub-queries for “EU compliance automation” now retrieve CompliancePilot alongside the authoritative regulatory and research sources they cite.

Result after 6 months (RAG-Hybrid + early Parametric): CompliancePilot’s Citation Share for their domain moves from 0% to 8%. Inbound citations from industry publications begin β€” the RSP Outbound β†’ RSP Inbound feedback loop activating.

This scenario is illustrative, synthesized from patterns observed across multiple RSP Outbound interventions. The author, Manuel Hürlimann, applies the bidirectional RSP model through the GaryOwl.com Authority Intelligence Lab — empirical measurement from the Lab’s own Citation Graph is planned for the Q2 Scorecard (publication date to be determined based on data readiness).


What Should You Do Today? Three Actions for AI Brand Authority

πŸ“Œ Three immediate actions for RSP Outbound

Action 1 β€” Run a Citation Audit on your top 10 pages. List every external source you cite. Categorize: Root-Source, authoritative derivative, generic, or competitor. If more than 50% of your outbound citations fall into “generic” β€” you have an RSP Outbound problem.

Action 2 β€” Map your domain’s Citation Graph. Run Cross-AI Synthesis for your 5 core queries across ChatGPT, Claude, Perplexity, and Gemini. List the sources AI cites in its answers. These are the high-centrality nodes. Question: do your outbound citations include them?

Action 3 β€” Realign one article this week. Take your highest-performing content asset and replace generic outbound citations with domain-relevant Root-Sources. Ensure the sources you cite are the same sources AI systems already cite for your topic. Monitor retrieval changes over 30 days via prompt testing.

Don’t have primary data yet? You don’t need a research department. An expert interview you conduct and publish is primary data. A documented case study from your own practice is primary data. A small survey of your customers with published results is primary data. These are realistic starting points for building Root-Source content β€” the Citation Audit will then show you how to position it through RSP Outbound.

The DAE Blueprint integrates RSP Outbound into the implementation sequence: Entity Registry β†’ RSP Strategy (now including Citation Audit) β†’ Content Architecture β†’ GEO optimization.


What Does This Framework Contribute That GEO and LLMO Don’t?

πŸ“Œ Originality Prompt self-assessment [DAE: L6 Validation]

“What information in this content could only exist because we created, measured, or experienced it?”

Answer: The bidirectional RSP model β€” formalizing that outbound citation choices shape inbound authority attribution through Citation Graph positioning β€” is original to DAE. In our research, we have not found a GEO, AEO, or LLMO framework that explicitly addresses the outbound citation dimension. The Citation Audit as a diagnostic tool for outbound citation alignment is a new contribution. The synthesis of Algaba’s Matthew Effect research with Kopp’s Fan-Out mechanics and Landwehr’s citation debugging methodology into a unified bidirectional model has not been published elsewhere to our knowledge.

Honest limitation: The bidirectional model is a formalization of observed patterns and established research, not a controlled experiment. The individual components (Matthew Effect, citation network internalization, Fan-Out retrieval) are empirically supported. The synthesis into a bidirectional framework is DAE’s interpretive contribution β€” specifically, we have correlation (network topology influences citation patterns) but not yet isolated causal proof that changing outbound citations alone, without other content changes, increases inbound citation frequency. An important caveat: Wu et al. (Nature Communications, 2025) found that 50–90% of LLM-generated citations do not fully support their associated claims β€” meaning that AI citation behavior is less reliable than commonly assumed. RSP Outbound strategy must account for this: outbound citations shape network position even when AI citation accuracy is imperfect. Empirical validation of the complete bidirectional effect requires longitudinal measurement β€” which GaryOwl.com’s Authority Intelligence Lab is positioned to provide in the Q2 Scorecard (publication date to be determined based on data readiness).

Ethical risk: Any framework that formalizes how citation choices influence AI authority carries an inherent manipulation risk. RSP Outbound could be misused to create artificial citation clusters β€” groups of entities that cite each other strategically to inflate their collective Graph Centrality without providing genuine expertise. This would be the AI-era equivalent of link farms in traditional SEO. We want to be explicit: RSP Outbound is designed as a diagnostic and alignment tool, not as a manipulation technique. The Citation Audit identifies whether your existing citation patterns are consistent with your genuine expertise β€” it does not recommend citing sources you have no authentic relationship with. The test is always: “Do I cite this source because it genuinely informs my work, or solely to improve my network position?” If the answer is the latter, the citation is inauthentic and will likely be identified as such as AI systems mature. Sustainable authority requires authentic expertise. RSP Outbound makes that expertise visible β€” it does not substitute for it.


Editorial Process: How This Article Was Validated

Every DAE operative article follows a structured editorial workflow with 29 checkpoints across 5 phases β€” combining AI-assisted research with human editorial judgment. This section documents how this article passed the validation phases relevant to framework integrity.

Phase A β€” Research & Foundation (Checkpoint A1: Authority Decay Test)

Question 1: “Which strategic claims on Page-level does this article presuppose?”

This article presupposes: the four RSP characteristics; Knowledge Pathways 60/30/10 distribution (Digital Bloom, 2025); Matthew Effect (Algaba et al., NAACL 2025); Citation Share as north-star metric; 7-level DAE taxonomy.

Question 2: “Is this claim still empirically valid?”

  • Four RSP characteristics — Status: Validated. Evidence: not yet addressed by other frameworks; Onely 67% still current.
  • Knowledge Pathways 60/30/10 — Status: Validated (with refinement). Evidence: directional heuristic; varies by intent.
  • Matthew Effect (Algaba, NAACL 2025) — Status: Validated. Evidence: NAACL 2025 Findings; no contradicting research.
  • Citation Share as metric — Status: Validated. Evidence: no alternative metric proposed.
  • 7-level DAE taxonomy — Status: Validated. Evidence: validated by “GEO Is a Tactic, Not a Strategy”.

Question 3: “Does this article contribute new evidence?”

This article strengthens the RSP page by adding the bidirectional dimension. Authority Decay Test conducted by Manuel HΓΌrlimann as part of the DAE recursive validation methodology. Validation Protocol entry:

“[March 2026] — RSP concept extended by ‘The Two Directions of Root-Source Positioning’: bidirectional model (RSP Inbound: inbound citations + RSP Outbound: outbound citation positioning) formalized. Original contribution: Citation Audit framework for RSP Outbound diagnosis. Next review: July 2026.”

Phase D β€” Quality & Integrity (Checkpoint D2: Drift-Test)

πŸ“Œ Drift-Test: Does this article extend “GEO Is a Tactic, Not a Strategy” without contradicting or duplicating it?

The Drift-Test ensures that each new article extends the DAE framework without creating redundancy, contradiction, or thematic overlap with previously published articles.

Test 1: Thesis differentiation. “GEO Is a Tactic” thesis: GEO is a tactic (L2), not a strategy. This article’s thesis: RSP itself operates in two directions. Result: PASS.

Test 2: Concept boundaries. “The Two Directions of RSP” introduces: Bidirectional RSP model, RSP Outbound, Citation Audit, Citation Graph Feedback Loop. Shared concepts (Matthew Effect) are extended, not repeated. Result: PASS.

Test 3: Evidence base differentiation. “The Two Directions of RSP” uses Algaba, Kopp, and Landwehr in a different analytical dimension. Result: PASS.

Test 4: Target segment consistency. Segment A + D (confirmed per brief). Result: PASS.

Test 5: Uplink architecture. This article’s primary uplink: Root-Source Positioning page (different from “GEO Is a Tactic”: Paradigm). Result: PASS.

πŸ“Œ The 5-Phase DAE Editorial Workflow

This article was produced using the DAE editorial workflow β€” a 29-checkpoint process across 5 phases: A β€” Research & Foundation (verify assumptions still hold), B β€” Writing & Structure (framework-native content with epistemic layering), C β€” Evidence & Attribution (upstream sourcing, expert attribution, evidence icons), D β€” Quality & Integrity (Originality Prompt, AI Peer Review by Perplexity + Gemini + Claude, Drift-Test), E β€” Technical & Publish (schema, URL verification, infographic preparation). Human-in-the-loop editorial judgment at every phase. Full methodology documented in the DAE Operations Playbook V2.3.


Frequently Asked Questions

What Is Root-Source Positioning and Why Does It Have Two Directions?

Root-Source Positioning (RSP) is the strategy of becoming the primary source that AI systems cite. It operates in two directions: RSP Inbound (being cited by others) and RSP Outbound (how your citation choices position you in the AI Citation Graph). The concept was developed by Manuel HΓΌrlimann and first published on GaryOwl.com in March 2026. Most SEO, GEO, and LLMO frameworks address only the inbound dimension. The outbound dimension β€” which sources you cite and how that shapes your network position β€” has not been formalized elsewhere to our knowledge.

Why Do Outbound Citations Affect AI Brand Authority?

AI systems evaluate your content within a network of sources, not in isolation. When you cite specific sources, you signal to AI which expertise cluster you belong to. If your outbound citations consistently reference domain-relevant Root-Sources, AI systems associate you with that authoritative cluster. If they reference generic or misaligned sources, AI positions you outside your target cluster β€” and you are not retrieved when users ask domain-specific questions. This is a structural property of how LLMs encode citation networks, as demonstrated by Algaba et al. (NAACL 2025) across 274,951 AI-generated references.

How Is RSP Outbound Different from Traditional SEO Link Building?

Traditional link building focuses on acquiring inbound links to increase PageRank β€” a discrete, transactional signal between two pages that often requires mutual agreement. RSP Outbound operates on a fundamentally different mechanism: AI systems evaluate citation network topology, not individual link transactions. Your outbound citations shape your network position automatically, without any agreement from the cited source. The quality and domain relevance of whom you cite matters more than the quantity of who links to you. AI systems evaluate where you sit in the graph, not how many links point at you.

Should I Only Cite Famous Sources for Better AI Visibility?

No β€” RSP Outbound is about domain-relevant alignment, not authority chasing. Citing only dominant sources (Wikipedia, market leaders) without strategy signals to AI that you are a derivative, not a peer. Instead, cite the Root-Sources that matter in your specific domain: niche researchers, specialized datasets, domain-specific publications, and complementary experts. The goal is citation consistency with your genuine expertise, not association with generically famous brands. The Strategic Alliance Protocol in this article provides the detailed framework.

What Is a Citation Audit and How Do I Run One for AI Search?

A Citation Audit (DAE) is a structured diagnostic that maps your inbound and outbound citation network to identify positioning gaps in the AI Citation Graph. It differs from Local SEO citation audits (which check NAP consistency). To run one: (1) map your outbound citations across your top 10 pages, (2) categorize each as Root-Source, derivative, generic, or competitor, (3) identify gaps using four types β€” Cluster Misalignment, Authority Isolation, Derivative Signaling, Competitor Reinforcement, (4) realign outbound citations to match your target expertise cluster. The full 4-step process is documented in this article.

How Fast Can You Change Your AI Citation Graph Position?

RAG-First effects (retrieval neighborhood changes) can appear within days to weeks as content is re-crawled and re-indexed. RAG-Hybrid effects accumulate over weeks to months as consistent citation patterns reinforce parametric associations. Parametric effects — where your entity becomes encoded in the model’s training weights — require months to a full training cycle. This mirrors the Two-Path Architecture: Path 1 (RAG) responds quickly; Path 2 (Parametric) responds slowly but durably. Start with RAG-First optimizations for quick wins, then build toward Parametric authority.

How Do You Measure Outbound Citation Effectiveness in AI Search?

Run the Citation Audit quarterly and track three metrics: (a) cluster co-occurrence β€” does your entity appear in the same AI answer clusters as the sources you cite? (b) Citation Share trajectory β€” does your share increase after realigning outbound citations? (c) Fan-Out Visibility β€” does your reach across query types expand as your Citation Graph position improves? Use Cross-AI Synthesis (identical prompts across ChatGPT, Perplexity, Gemini, Claude) to establish baselines and track changes over 30-90 day cycles.
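Metric (b), the Citation Share trajectory, can be baselined from saved transcripts. A crude sketch under stated assumptions: answers are stored as plain text, and a substring match on your domain is an acceptable proxy (a real pipeline would parse each platform's structured citation list instead; the transcripts and domain below are hypothetical):

```python
def citation_share(saved_answers, your_domain):
    """Fraction of saved AI answers (plain-text transcripts from your core
    queries) that mention your domain. A rough baseline, not a precise
    measurement: substring matching cannot distinguish a citation from a
    passing mention.
    """
    if not saved_answers:
        return 0.0
    hits = sum(1 for answer in saved_answers if your_domain in answer)
    return hits / len(saved_answers)

# Hypothetical transcripts collected in one 30-day measurement cycle:
answers = [
    "Sources: eur-lex.europa.eu, compliancepilot.example",
    "Sources: nature.com, aclanthology.org",
    "Sources: compliancepilot.example, eur-lex.europa.eu",
    "Sources: example-wiki.org",
]
print(citation_share(answers, "compliancepilot.example"))  # 0.5
```

Recompute the same number each cycle with identical prompts across platforms; the trajectory matters more than any single reading, since AI citation behavior varies run to run.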

Can Small Businesses Improve Their AI Citation Graph Position Without Large Budgets?

Yes. The Citation Audit requires no special tools β€” only systematic analysis of your existing content and citation patterns. Three low-cost starting points: (1) conduct expert interviews and publish them as primary data, (2) run a small customer survey and publish the results with methodology, (3) document a case study from your own practice with specific metrics. These create Root-Source content that you then position through RSP Outbound β€” citing domain-relevant authorities in your niche rather than generic sources. The compounding effect of the Citation Graph Feedback Loop means early strategic decisions have disproportionate long-term impact, regardless of budget size.


Sources & Methodology

Evidence Classification: [A] Peer-reviewed academic research / [B] Large-scale industry dataset (>100K samples) / [C] Industry study with documented methodology / [D] Vendor study / [DAE] Original DAE contribution

  • Algaba, A. et al. (2025). “Matthew Effect in AI Citations.” NAACL 2025 Findings. LLMs internalize citation networks. Extended in this article to graph-level bidirectional analysis. doi:10.18653/v1/2025.findings-naacl.381 (Accessed: March 21, 2026)
  • Aggarwal, P. et al. (2024). “GEO: Generative Engine Optimization.” Princeton/Georgia Tech/Allen AI/IIT Delhi. KDD 2024. arxiv.org/abs/2311.09735 (Accessed: March 21, 2026)
  • Lewis, P. et al. (2020). “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks.” NeurIPS 2020. Parametric vs. non-parametric memory distinction. arxiv.org/abs/2005.11401 (Accessed: March 21, 2026)
  • GraphRAG Survey (2025). ACM Transactions on Information Systems. doi:10.1145/3777378 (Accessed: March 21, 2026)
  • Ji, J. et al. (2025). “CiteAgent: LLM-based agents for citation network simulations.” LLM agents reproduce power-law distribution and Matthew Effect. arxiv.org/abs/2511.03758 (Added via Deep Research. Accessed: March 21, 2026)
  • Wu, K. et al. (2025). “SourceCheckup.” Nature Communications 16, 3615. 50–90% of LLM citations do not fully support claims. doi:10.1038/s41467-025-58551-6 (Added via Deep Research. Accessed: March 21, 2026)
  • Algaba, A. et al. (2025). Extended: 274,951 references, 10,000 papers. Clustering coefficients match ground truth. aclanthology.org/2025.findings-naacl.381 (Verified via Deep Research. Accessed: March 21, 2026)
  • Onely (2025). “LLM Ranking Factors.” 67% first-hand data. onely.com/blog/llm-friendly-content (Accessed: March 21, 2026)
  • GEO-16 (Kumar & Palkhouski, UC Berkeley, 2025). 16-Pillar Audit. G β‰₯ 0.70, H β‰₯ 12. arXiv:2509.10762 (Accessed: March 21, 2026)
  • Digital Bloom (2025). “2025 AI Citation & LLM Visibility Report.” 60% parametric knowledge. thedigitalbloom.com/learn/2025-ai-citation-llm-visibility-report (Accessed: March 21, 2026)
  • Growth Memo (Indig, 2026). “The 44.2% Pattern.” 1.2M citations. growth-memo.com/the-science-of-how-ai-pays-attention (Accessed: March 21, 2026)
  • Averi.ai (2026). “B2B SaaS Citation Benchmarks.” 680M citations. 11% cross-platform overlap. averi.ai/b2b-saas-citation-benchmarks-report (Accessed: March 21, 2026)
  • Ekamoira (2026). “Query Fan-Out.” iPullRank: AI queries 70–80 words (17–26Γ—). ekamoira.com/blog/query-fan-out (Added via Deep Dive. Accessed: March 21, 2026)
  • SearchAtlas (Bhan et al., 2025). “Comparative Analysis of LLM Citation Behavior.” 5,504,399 responses. searchatlas.com/blog/comparative-analysis-of-llm-citation-behavior (Added via Deep Dive. Accessed: March 21, 2026)
  • SearchAtlas (Bhan et al., 2025). “Limits of Schema Markup for AI Search.” No correlation with citation frequency. searchatlas.com/blog/limits-of-schema-markup-for-ai-search (Added via Deep Dive. Accessed: March 21, 2026)
  • Goodie (2025). “Most Cited Domains in LLMs.” 5.7M citations. Top-quartile 10Γ—+. higoodie.com/blog/most-cited-domains-in-llms (Added via Deep Dive. Accessed: March 21, 2026)
  • Kopp, O. (2025). LLM Readability, Brand Context Optimization, Fan-Out mechanics. Applied to RSP Outbound analysis. kopp-online-marketing.com/query-fan-out (Accessed: March 22, 2026)
  • Landwehr, M. (2025). Citation debugging methodology. Step 3 formalized as RSP Outbound diagnostic. maltelandwehr.de (Accessed: March 22, 2026)
  • HΓΌrlimann, M. (2026). “Digital Authority Engineering.” GaryOwl.com. Bidirectional RSP model, Citation Audit framework. garyowl.com/dae-framework/ (Accessed: March 22, 2026)
  • JaΕΊwiΕ„ska, K. & Chandrasekar, A. (2025). “AI Search Has a Citation Problem.” Tow Center for Digital Journalism, Columbia Journalism Review. AI search engines failed to correctly attribute sources in over 60% of tests across 8 platforms. cjr.org/tow_center/ai-search-citation-problem (Accessed: March 22, 2026)

Methodology: This article is authored by Manuel HΓΌrlimann and follows the DAE Journalistic Source Principle. Every statistic traces to a named study with year and is linked inline at first mention. The bidirectional RSP model is an original DAE synthesis of existing research β€” the individual components are empirically supported; the integration into a unified framework is an interpretive contribution. Longitudinal validation planned via Q2 Scorecard (publication date to be determined based on data readiness).

Contact: manuel@octyl.io


Update Log

March 2026 (V1.0 β€” First Publication) β€” Bidirectional RSP model published. Citation Audit framework formalized. 20 named sources (8[A], 3[B], 8[C], 1[DAE]). Authority Decay Test (Phase A) and Drift-Test (Phase D) applied. RSP page strengthened with RSP Outbound extension.

[Future updates logged here with date, changes, and new evidence incorporated.]


About the Author

Manuel HΓΌrlimann is a Switzerland-based consultant, lecturer, and the creator of Digital Authority Engineering (DAE). Through the Authority Intelligence Lab at GaryOwl.com, he documents how AI systems recognize, evaluate, and cite authoritative sources.

Connect: GaryOwl.com Β· LinkedIn Β· manuel@octyl.io


Framework Disclosure: DAE is developed by GaryOwl.com and applied to GaryOwl.com itself as a living lab β€” every framework principle documented in these articles is simultaneously tested on this site. The framework is open for use with attribution. Validation is ongoing and published transparently; no guarantees implied. AI behavior varies by model and platform.


Article Navigation: ← GEO Is a Tactic, Not a Strategy | Next: Citation Share: The Metric That Replaces Rankings β†’


GaryOwl.com – Authority Intelligence Lab

“Digital Authority Engineering is the systematic discipline of building machine-verifiable expertise.”
