GEO six months on: the science is moving in favour of public relations
When I finished the GEO chapter for AI for Public Relations: A How-To Guide for Implementation and Management in November 2025, I described it as one of the most contested areas in the book.
The debate has not gone away, but the science is moving in a direction that should give corporate communications and public relations practitioners much greater confidence. Multiple academic and industry studies now suggest that earned media is doing the work in AI visibility that the chapter argued it might.
The question is whether the industry rises to the opportunity or gives it away to other disciplines as it did before with SEO.
Here’s the state of play
The directional finding has held
Earned media is disproportionately cited in AI-generated answers across multiple independent studies. It gives public relations practice a structural advantage that no other discipline can claim.
The instability is real, but solvable
Outputs vary by language, paraphrase, prompt structure, model version, retrieval mode and time of day. Volatility is a measurement problem the industry needs to solve.
New issues have emerged, notably ethics
Self-promotion bias and AI systems citing other AI systems both complicate the citation graph. The line between managing AI visibility and manipulating it is an ethical question the discipline now has to answer.
The absence of open standards is both a threat and an opportunity
Each agency and vendor has its own methodology and benchmarks. The discipline that helps establish open measurement standards will define the next decade of practice.
The longitudinal evidence is still to come
No published study yet tracks brand visibility, AI-cited share or commercial outcome over time with adequate methodology. The studies done now will set the benchmarks the rest of the industry uses.
Where this leaves public relations practice: the directional finding is firmer
Muck Rack updated its analysis last week based on 25 million links across ChatGPT, Claude and Gemini. Around 84% of citations come from earned media.
Mahe Chen and colleagues at the University of Toronto circulated a controlled experimental study in September 2025. ChatGPT skewed as far as 93.5% earned for well-known brands across multiple verticals, regions and languages.
Hard Numbers and Onclusive tested 143 brands across three LLMs in November 2025. 80% of brands with a strong earned media presence are recommended as category leaders, compared with 47% for brands with a weaker presence. Brands with the most positive sentiment are three times more likely to win head-to-head comparisons.
These are different methodologies, different prompt sets, but a consistent direction of travel. Earned media is disproportionately cited and disproportionately recommended in AI answers.
But the instability is firmer too
The Toronto group documented precisely the volatility that PR Agency One boss James Crawford warned about in my chapter. Outputs vary by language, paraphrase and prompt structure.
The visibility distribution is heavily skewed toward English-language tech and consumer brands. Sentiment is overwhelmingly positive because none of the prompt sets test for adversarial questioning.
The failure mode I missed: ethics
Tandeep Sangra has documented self-promotion bias in LLM recommendations. Brands publishing self-referential top ten content are disproportionately surfaced when users ask AI for service recommendations, and cross-citation networks superficially resemble independent corroboration. Both are means of gaming AI visibility.
The harder problem is one that the discipline cannot fix by behaving better.
The Hard Numbers Greenland report by Paul Stollery illustrates it. 41 prompts about Greenlandic sovereignty produced answers anchored on Reddit, Grok conversation URLs, a Grokipedia page and four state media sources from Russia and China.
On contested topics where the editorial public record is limited, LLMs draw on low-quality user-generated content and AI-generated reference sites as the source of public record. These can also be gamed.
The line between managing AI visibility and manipulating it is the ethical question that the public relations industry now has to answer. The space between the two is where practice either earns its place or loses it.
The need for open standards
AIVO Standard’s PSOS framework is moving towards a standard, but it is opaque: weighted formulas, interpretation bands, and a difference-in-differences attribution model linking the score to revenue.
PR Agency One called this out in its GEO playbook. It says Share of Model and vendor-generated rankings are pseudo-metrics that should not be put in front of a board.
Gartner's forecast that public relations budgets will double this year because of GEO was a huge overreach. It is unhelpful to a discipline that needs evidence and not headlines.
Each vendor publishes its own methodology and benchmarks. None is independently verifiable. None can be reproduced.
Where does this leave public relations practice?
My position from November stands, but the evidence has strengthened.
Earned media drives AI visibility. The effect is real. It is replicating across studies. The measurement is unstable, but this is a problem we can solve. An open methodology and standard would close the gap.
The chapter stands. The case for earned media is stronger. The case for public relations to lead on standards and evidence in the AI visibility era is the strongest of all.
Further reading
This essay was originally posted on my Substack. The Wadds Inc. newsletter is read by more than 5,000 communications and public relations practitioners. We take a slower, critical perspective on the research, evidence and developments shaping the field.