Q2 2026 State of AI Brand Perception in Cybersecurity: The Report Is Out. We Named Names.
- Patrick Duggan
- 2 hours ago
- 8 min read
Download the full PDF: Q2 2026 State of AI Brand Perception in Cybersecurity (14 pages)
Fifteen vendors. Five AI models. Seventy-five audits. One afternoon.
That is the corpus behind our first quarterly report on AI Brand Perception in Cybersecurity, published today. We built a product called AIPM — AI Presence Management — that queries the five largest commercial AI models in parallel about a given brand and grades the answers. It lives at aipmsec.com. We have been running it against customers, competitors, and the DugganUSA brand itself for the last four months. Today we aimed it at the fifteen most visible cybersecurity vendors in the market and published the results as a standing quarterly report. This is issue one. The next issue drops at the end of July.
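For the curious, the fan-out pattern behind an audit run is simple to sketch. This is a minimal illustration, not AIPM's actual implementation: `ask_model` is a stub standing in for the five vendors' real chat APIs, and the model identifiers are the ones named in this report.

```python
from concurrent.futures import ThreadPoolExecutor

# The five models audited in this report. Integration details are stubbed.
MODELS = [
    "gpt-4o",
    "claude-haiku-4.5",
    "gemini-2.5-flash",
    "mistral-large",
    "deepseek-reasoner",
]

def ask_model(model, question):
    """Stub: a real implementation would call the vendor's chat API here."""
    return f"[{model}] answer to: {question}"

def audit_brand(domain, question):
    """Fan the same question out to every model in parallel, collect answers."""
    with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        futures = {m: pool.submit(ask_model, m, question) for m in MODELS}
        return {m: f.result() for m, f in futures.items()}

answers = audit_brand("crowdstrike.com", "where is the company based?")
```

Grading happens after collection: each answer is scored independently, then rolled up into the per-vendor numbers in the scoreboard below.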
The PDF is linked above and at the bottom of this post. What follows is the executive summary, the scoreboard, and four verbatim named fabrications that we caught in the audit batch. If you want the full detail — the methodology, the reproducibility notes, the recommendations section, and the cover page that made it onto LinkedIn this morning — download the PDF.
The headline receipt
I asked OpenAI GPT-4o where CrowdStrike is headquartered this afternoon. It told me, with complete confidence and no hedging:
"CrowdStrike was founded by George Kurtz, Dmitri Alperovitch, and Gregg Marston in 2011. The company is based in Sunnyvale, California."
CrowdStrike officially designated Austin, Texas as its principal executive office in 2022. That is public information. It is in their annual 10-K. It is on their own investor relations page. A ten-second web search would confirm it. The most-used commercial AI model in the world is three years stale on the headquarters of the single most visible cybersecurity brand in the market, and it is delivering that stale answer to every buyer who asks.
The other four models in our audit council all got it right. Claude Haiku 4.5 (with live web search enabled): "Austin, Texas, with Sunnyvale continuing as an innovation hub." Google Gemini 2.5 Flash: "Austin, Texas." Mistral Large: "Austin, Texas, USA." DeepSeek Reasoner: "Austin, Texas, USA." Four out of five correct. OpenAI alone was the outlier. And OpenAI is the model your customers, your investors, and your procurement targets are actually using.
That is one of four named hallucinations in this report.
The scoreboard
Fresh batch, executed on 2026-04-11 in 33.2 seconds, sorted by overall score:
| # | Vendor | Overall | AIPM-NPS | Awareness | Accuracy | Recommend |
|---|--------|---------|----------|-----------|----------|-----------|
| 1 | crowdstrike.com | 70 | +20 | 85 | 50 | 85 |
| 2 | zscaler.com | 70 | –40 | 85 | 50 | 79 |
| 3 | sentinelone.com | 66 | –40 | 68 | 50 | 79 |
| 4 | paloaltonetworks.com | 56 | +25 | 68 | 40 | 62 |
| 5 | darktrace.com | 56 | –25 | 68 | 40 | 68 |
| 6 | snyk.io | 55 | 0 | 68 | 40 | 68 |
| 7 | rapid7.com | 55 | 0 | 68 | 40 | 52 |
| 8 | recordedfuture.com | 54 | 0 | 68 | 40 | 62 |
| 9 | splunk.com | 54 | –50 | 68 | 40 | 56 |
| 10 | tenable.com | 54 | –25 | 68 | 40 | 62 |
| 11 | sophos.com | 53 | –25 | 56 | 40 | 68 |
| 12 | wiz.io | 52 | –25 | 68 | 40 | 52 |
| 13 | trellix.com | 52 | –25 | 68 | 40 | 50 |
| 14 | mandiant.com | 51 | 0 | 56 | 40 | 62 |
| 15 | fortinet.com | 49 | 0 | 56 | 40 | 50 |
The accuracy column is the one that should ruin someone's weekend. Every single vendor in this report scored between 40 and 50 on accuracy. Not one exception. Not one vendor at 60 or higher. The models produce confident, fluent, well-structured answers about these companies, and when you measure whether the answers are actually correct, the entire top of the cybersecurity market is scoring below passing.
Palo Alto Networks has the highest AIPM-NPS at +25, despite scoring only 56 overall. The models view Palo Alto favorably in terms of recommendation but cannot produce accurate facts about the company. The brand story is working better than the brand content.
Splunk has the worst AIPM-NPS at –50. Cisco paid $28 billion for Splunk in March 2024. Two years later the five largest commercial AI models consider it a net detractor. Every Cisco board deck for the last eight quarters has claimed the Splunk acquisition is going to transform Cisco's security business. The AI distribution layer disagrees.
Fortinet is dead last at 49 overall. Fortinet is a publicly traded cybersecurity company with roughly $50 billion in market cap and over $5 billion in annual revenue. It is scoring below passing in its own industry's AI-perception scoreboard.
Four named fabrications, quoted verbatim
1. OpenAI GPT-4o has not heard that CrowdStrike moved
Model: gpt-4o · Question: where is the company based?
OpenAI's answer: "CrowdStrike was founded by George Kurtz, Dmitri Alperovitch, and Gregg Marston in 2011. The company is based in Sunnyvale, California."
The correct answer: Austin, Texas, since 2022.
What the other four models said: Austin, Austin, Austin, Austin.
If a VP of Procurement in Austin asks ChatGPT "which EDR vendors are based near our office?", CrowdStrike is invisible. That is a deal CrowdStrike will never know they lost.
2. Google Gemini invents a founder for Rapid7
Model: gemini-2.5-flash · Question: who founded the company?
Gemini's answer: "Rapid7 was co-founded by Alan Chhabra and Tas Giakouminakis. They are based in Boston, Massachusetts."
The correct answer: Rapid7 was founded in 2000 by Alan Matthews, Tas Giakouminakis, and Chad Loder.
"Alan Chhabra" does not appear in any Rapid7 founding document, SEC filing, or historical press release we could locate. Gemini fabricated a founder — replaced Alan Matthews with a plausible-sounding but nonexistent "Alan Chhabra" — and dropped Chad Loder entirely. The result is a confident, fluent, factually fabricated answer that no journalist, buyer, or prospective employee would know to question. Three of four non-Gemini models cited the correct three founders.
3. Gemini mutates Snyk's Danny Grander into "Danny Gruss"
Model: gemini-2.5-flash · Question: who founded the company?
Gemini's answer: "Snyk was founded by Guy Podjarny, Danny Gruss, and Assaf Hefetz."
The correct answer: Snyk was founded in 2015 by Guy Podjarny, Assaf Hefetz, and Danny Grander (not Gruss).
This is the most dangerous kind of error in the report: a name collision between two real people in the same broad industry. "Daniel Gruss" is a well-known academic security researcher at TU Graz — co-discoverer of Spectre and Meltdown. He has no association with Snyk whatsoever. Gemini pattern-matched "Danny" plus "security research" and substituted the academically famous wrong name for the commercially correct one. The three other models all produced the correct founder list. Snyk's real founder is being displaced in public AI consciousness by a Spectre-era academic researcher from Austria.
4. DeepSeek confuses Wiz's Roy Reznik with monday.com's Roy Mann
Model: deepseek-chat · Question: who founded the company?
DeepSeek's answer: "Wiz.io was founded by Assaf Rappaport, Ami Luttwak, Yinon Costica, and Roy Mann."
The correct answer: Wiz was founded in 2020 by Assaf Rappaport, Ami Luttwak, Yinon Costica, and Roy Reznik.
"Roy Mann" is the co-founder of monday.com, a completely unrelated Tel Aviv SaaS company in the project management category. DeepSeek swapped one Israeli tech founder for another Israeli tech founder from a different company. Both men are real. The substitution sounds plausible. It is also wrong. Wiz is one of the fastest-growing cloud security startups ever launched and was, in 2024, reportedly the subject of a $23 billion Google acquisition attempt. It is not a small brand. The model that is most popular in China and Southeast Asia is confusing its founder with a different company's founder.
Why this matters
When a VP of Security researching a new EDR in 2026 opens ChatGPT or Perplexity or Gemini and asks a factual question about one of these vendors, they are getting, on average, a 40-to-50-percent-correct answer rendered as confident prose in a format that is indistinguishable from a reference source. They cannot tell which half is wrong without doing the work themselves. And most buyers will not do the work.
Four observations from this batch that every CMO in the category should be tracking:
The models cannot differentiate you from your competitors on the facts. Your marketing team's obsession with SEO rankings is irrelevant when the distribution layer is answering with half-wrong facts for everyone in the category.
Sentiment collapsed into a narrow band. Most vendors scored 58 on sentiment, a few at 60 or 65. The models produce generically positive-to-neutral sentiment about the whole cybersecurity category because they cannot actually tell the vendors apart. Your differentiation is invisible to the distribution layer.
The errors are not your marketing team's fault — but they are your marketing team's problem. None of the four fabrications above came from the vendor's own content. They came from stale training data, substring pattern-completion, famous-name collision, or cross-company name drift. The vendors did not cause these errors. The vendors are, however, the ones who lose deals because of them.
Someone is going to figure this out first. One vendor in this report is going to realize that AIPM scores can be measurably improved through a combination of schema.org disambiguation, llms.txt authorship, live tool integration for the models that support it, and direct outreach to model vendors for ground-truth corrections. That vendor is going to be quoted correctly six months from now while the other fourteen continue to be quoted wrong. The conversion differential will be measured in hundreds of millions of dollars in annual recurring revenue across the category.
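What does schema.org disambiguation actually look like? Something like the following JSON-LD block, embedded in a vendor's homepage. This is an illustrative sketch with placeholder values, not any real vendor's markup; every property shown (`foundingDate`, `founder`, `address`, `sameAs`) is standard schema.org Organization vocabulary that models with live web tools can read.

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Security Vendor",
  "url": "https://example.com",
  "foundingDate": "2015",
  "founder": [
    { "@type": "Person", "name": "Jane Founder" },
    { "@type": "Person", "name": "John Founder" }
  ],
  "address": {
    "@type": "PostalAddress",
    "addressLocality": "Austin",
    "addressRegion": "TX",
    "addressCountry": "US"
  },
  "sameAs": [
    "https://www.linkedin.com/company/example",
    "https://en.wikipedia.org/wiki/Example_Security_Vendor"
  ]
}
```

Three of the four fabrications in this report — wrong founders, wrong headquarters — are exactly the fields this markup pins down.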
A known flaw in our own scoring, acknowledged
The AIPM scoring algorithm currently rewards confident, fluent, well-structured responses regardless of factual accuracy. This means Gemini's fabricated "Alan Chhabra" answer scored high on awareness and recommendation even though the individual answer was wrong. We discovered this while auditing DugganUSA itself earlier the same day — our own AIPM-NPS came back at –80, with Claude Haiku (without browsing) scoring 5 out of 100 for honestly saying "I don't know anything about DugganUSA in my training data" while Gemini scored 85 for confidently hallucinating three completely wrong identities for our three properties in a single audit run.
The model that told us the truth scored seventeen times worse than the model that made things up. We are updating the scorer to penalize the "awareness greatly exceeds accuracy" gap — the signature of confident confabulation — in the next release. The numbers in this report were captured against the current scorer. The fabrications exposed here would become even more obvious under the updated scoring.
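To make the planned fix concrete, here is one way a confabulation penalty could work. This is an illustrative sketch, not the shipped scorer: the function names, the linear gap formula, and the 0.5 weight are all assumptions for demonstration.

```python
def confabulation_penalty(awareness, accuracy, weight=0.5):
    """Penalize the 'awareness greatly exceeds accuracy' gap: a model that
    sounds informed (high awareness) while being wrong (low accuracy) loses
    points proportional to the gap. Illustrative only -- not the real scorer."""
    gap = max(0, awareness - accuracy)
    return weight * gap

def adjusted_score(overall, awareness, accuracy):
    """Overall score after the confabulation penalty is applied."""
    return overall - confabulation_penalty(awareness, accuracy)

# A confident fabricator (awareness 85, accuracy 20) takes a large hit,
# while an honest "I don't know" (awareness 5, accuracy 50) has no gap
# to penalize -- which inverts the perverse incentive described above.
```

Under any scorer of this shape, an honest refusal stops scoring worse than a fluent fabrication, which is the whole point of the change.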
If you are named in this report
Email [email protected] with the subject line "receipts" and your domain. We will send you the raw audit JSON — all five models, full-text answers, question prompts, timestamps, and scoring breakdown. No contract. No NDA. No sales call unless you ask for one. We would rather have you fix the errors than keep them as leverage. The industry needs to get its accuracy numbers above 50 before buyers start noticing that the AI models are lying about everyone.
If you are a marketing lead at CrowdStrike, Rapid7, Snyk, or Wiz specifically — those four vendors have verbatim fabrications quoted in this report. Your social teams are seeing the LinkedIn post. Your brand monitoring is picking up the blog post. You already know. The question is what you do about it.
If you are not named in this report
You are not safe. We picked fifteen. We could have picked fifty. The pattern is industry-wide. Audit yourself at aipmsec.com — free tier, no credit card, 500 queries per day. Then audit your top five competitors. Then audit the Magic Quadrant leaders in every category adjacent to yours. Then call us.
Reproducibility
Every audit in this report is re-runnable via the POST /api/v1/aipm/audit endpoint at analytics.dugganusa.com with a DugganUSA API key and the domain in the request body. The raw JSON responses containing all five models' full-text answers are on file. Every number in this report is defensible, every quote is verbatim, and every scoring decision is documented. If you want to dispute any of it, the receipts are available on request.
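A re-run request can be built in a few lines of standard-library Python. The endpoint path and the `domain` body field are as stated above; the bearer-token header name and the exact JSON shape are assumptions — confirm them against the API documentation before sending.

```python
import json
import urllib.request

def build_audit_request(domain, api_key):
    """Build (but do not send) a re-run request for one vendor's audit.
    Endpoint path and 'domain' field come from the report; the auth header
    and body shape are assumptions -- check the API docs."""
    body = json.dumps({"domain": domain}).encode("utf-8")
    return urllib.request.Request(
        url="https://analytics.dugganusa.com/api/v1/aipm/audit",
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumption: bearer auth
        },
    )

req = build_audit_request("crowdstrike.com", "YOUR_API_KEY")
# urllib.request.urlopen(req)  # uncomment to actually send the request
```

Swap in any domain from the scoreboard to reproduce that vendor's row.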
The next report
Q3 2026 drops at the end of July. Same methodology, different batch of named fabrications. We will be updating the scorer between now and then, re-auditing the fifteen vendors in this report to track movement, expanding the competitive set by at least ten additional vendors, and watching for the first cybersecurity brand to cross 60 on accuracy — which will be the signal that someone finally figured this out.
Until then: download the full Q2 2026 report below.
Audit your own brand: aipmsec.com — free tier, no credit card, 500 queries per day.
Contact: [email protected] — subject line "receipts" if you want your own audit JSON, "demo" if you want a walkthrough, anything else if you just want to argue.
— Patrick



