SME Interviews and Reputation Engineering
This week, I want to go deeper into that and tie it into the larger idea/strategy of reputation engineering (as it relates to AI search visibility). Let’s start by zooming out for a second and looking at how AI search results work and how LLMs actually approach information retrieval.
When an LLM decides what to cite (or surface in an AI summary), it's building a picture of your brand across every signal it can find, including your owned content, third-party mentions, author bios, quotes in other publications, and the topics you consistently show up on.
Every citable piece of content you produce is a step toward constructing that picture. That's reputation engineering. Done consistently over time, this is how a brand (or individual) starts being the source AI systems default to when a question lands in your category.
Most brands are thinking about AI search one piece of content at a time, but really, they should be thinking about it as a long-term, compounding body of evidence that tells AI systems: here’s who owns this topic.
How to start reputation engineering: Integrate SME interviews strategically
Ok! Now that we’re on the same page about the big-picture theory behind all this, let’s talk about how interviews are part of putting it into practice.
When you’re thinking about reputation engineering as it relates to the content you publish on channels you own (like your blog, Substack, etc.), interviews are a critical piece of building E-E-A-T, which stands for Experience, Expertise, Authoritativeness, and Trustworthiness. It's a framework Google's quality raters use to evaluate whether content is worth surfacing…and it's become increasingly relevant to AI search because LLMs are trained heavily on content that already ranks well.
Every SME interview conducted with E-E-A-T in mind builds entity signals, a.k.a. what AI systems use to understand what your brand knows and who it’s for.
Data backs this up: Digital Bloom's 2025 AI Visibility Report, which analyzed 680 million+ citations, found that adding original quotes from experts can boost AI visibility by 37%.
That means your SME interview isn't just a way to gather quotes from third-party experts that improve the quality and trustworthiness of the content; it’s also a source of entity signals, which are an important piece of AI search.
What's an entity signal?
Entity signals are what AI systems use to decide whether your content is worth citing. They help answer: Is this a credible, distinct source of knowledge on this topic?
I think the term “entity signal” is jargony and technical, so I’m going to call them “source signals,” because that’s how journalists think about the third-party experts they tie into their news stories.
Now come join me down in the weeds of LLM information retrieval, will you? I spent my last semester taking a course that went very in-depth into the systems thinking that drives LLMs (like ChatGPT and Claude) and the technicalities of how they surface information/answers to queries.
Here’s the TL;DR:
When an LLM processes your content, it's not just reading words. It maps relationships and evaluates many factors before arriving at an answer, asking questions like:
Who is this?
What do they know?
Are they expert enough to be trusted as a go-to person in their field?
What specific, verifiable claims are they making that I can't find anywhere else?
Building content that includes these signals from high-quality third-party experts helps LLMs quickly understand that you’re applying journalistic rigor to your work by integrating the appropriate expertise (and not just churning out AI slop).
What "source signals" actually look like
When I'm conducting an SME interview, I'm listening for four specific things beyond the obvious quote that will help me build E-E-A-T:
Proprietary data. Numbers, percentages, findings that exist nowhere else. "We analyzed 500 customer accounts and found that…" is citation gold. AI systems love a specific, ownable data point. And so do I.
Named frameworks. Has your subject matter expert developed a way of thinking about a problem that has a name? Even an informal one? “We call this the ‘trust gap’” is more citable than a generic explanation of the same concept. (See what I did there with “source signals”? You’re getting it.)
Contrarian positions. Where does your expert disagree with the conventional wisdom in your space? AI systems tend to cite content that takes a clear, defensible stance, backed by experience, data, and actual results that run counter to best practices.
Lived specificity. Details that could only come from someone who has actually done the thing. Not "content marketing takes time" but "we published 3x a week for 14 months before we saw compounding returns, and here's what month 6 looked like."
Interviews are where you get the good stuff
Most content marketers go into SME interviews with a list of questions designed to address a brief. That's fine for producing pretty standard content, but it's not enough to produce citable content. Citable content means that instead of asking "What's your approach to X?" you ask "What do you know about X that most people get wrong?"
Instead of "Can you explain Y?" you ask, "What would you call the framework you use for Y?"
You're not just collecting information. You're excavating the signals that make the content distinct enough for an AI system to treat it as a primary source rather than one of 40 similar articles on the same topic.
The journalism parallel
This is how good journalists have always operated. The best interviews are information-gathering sessions and opportunities to mine for source signal gold.
You go in knowing the general shape of the story, but you're listening for the detail that makes it interesting, unexpected, or surprising. The specific number. The unexpected admission. The reframe that changes how you understand the whole topic. That instinct, the one that makes a journalist keep pushing until they get the thing that makes the story worth telling, is exactly the instinct that produces the content that LLMs love to cite.
These tools know what authoritative sourcing looks like, and they reward it.
Four questions for building source signals via SME interviews
Before your next SME interview, add these four questions to your prep:
What do you know about this topic that most people in your industry get wrong?
Is there a framework or process you use that has a name, even an internal one?
Do you have any data or findings from your own work that I won't find anywhere else?
What's a specific moment or example that shows this tactic in action and the results it produced?
You don't need all four to land in the final piece, but asking them gives you the raw material to build content that AI search engines favor as a go-to source.
FAQs about how to leverage SME interviews to increase LLM citation rates
What is reputation engineering, and why does it matter for AI search? Reputation engineering is the practice of systematically building a body of content that signals to AI systems that your brand is the authoritative source on a given topic. Rather than optimizing one piece of content at a time, it's a long-term strategy that compounds over time, with the goal being that when a relevant question is asked, AI defaults to you as the answer.
How do AI systems decide what content to cite? When an LLM processes content, it maps relationships and evaluates credibility. It's essentially asking: Who is this source? What do they know? Are they trustworthy enough to cite? It weighs owned content, third-party mentions, author credentials, and the consistency of a brand's presence on a topic, then surfaces what it judges to be the most authoritative answer.
What is E-E-A-T, and how does it relate to AI visibility? E-E-A-T stands for Experience, Expertise, Authoritativeness, and Trustworthiness — a framework Google's quality raters use to evaluate content. It's become increasingly relevant to AI search because LLMs are trained heavily on content that already ranks well in traditional search, making E-E-A-T signals a proxy for AI citability too.
What is a source signal, and why should content marketers care? A source signal is any element in your content that helps an AI system identify you as a credible, distinct authority on a topic. Think of it the way journalists think about expert sourcing — it's the proof that your content reflects genuine knowledge rather than generic information. Content rich in source signals is more likely to be treated as a primary source rather than one of dozens of similar articles.
What kinds of source signals can SME interviews produce? Four types stand out: proprietary data (original numbers or findings), named frameworks (a distinct way of thinking about a problem), contrarian positions (well-supported disagreements with conventional wisdom), and lived specificity (granular detail that could only come from direct experience). These are the elements AI systems — and good journalists — prize most.
How should I change the way I conduct SME interviews? Shift from information-gathering to signal-excavating. Instead of asking "What's your approach to X?" ask "What do most people get wrong about X?" Instead of "Can you explain Y?" ask "What would you call the framework you use for Y?" The goal is to surface something distinct — a number, a reframe, a specific story — that doesn't exist anywhere else on the internet.
Do I need all four source signal types in every piece of content? No. You're looking for raw material, not a checklist. Ask all four types of questions in your interview, then use whatever surfaces naturally in the final piece. Even one strong proprietary data point or a well-named framework can meaningfully improve a piece's citability.
Why does adding expert quotes improve AI visibility specifically? According to Digital Bloom's 2025 AI Visibility Report — which analyzed over 680 million citations — adding original expert quotes can boost AI visibility by 37%. The reason is structural: expert quotes signal journalistic rigor and introduce claims that are specific, attributable, and harder for AI to find replicated elsewhere.