top of page

The GitHub Accounts Starring Rootkits AND AI Prompt Injection Tools. That's Not Research.

  • Writer: Patrick Duggan
    Patrick Duggan
  • 1 day ago
  • 4 min read

A vulnerability called CamoLeak let attackers steal API keys and private source code from GitHub Copilot. The attack was elegant: hide instructions in a pull request description, wait for a developer to review it with Copilot Chat, and let the AI encode your stolen data into image URLs that GitHub's own CDN would happily fetch. No malicious code executed. No alerts fired. Copilot itself became the exfiltration channel.


GitHub patched it in August 2025. CVE-2025-59145, CVSS 9.6. Legit Security found it, wrote it up, moved on.


But we didn't move on. We went looking for the people who were paying attention.


We started with the repos. On GitHub right now, there are repositories that catalog indirect prompt injection attacks against AI models. WideOpenAI by grepstrength lists attacks for OpenAI-based systems. IPIM by davidwillisowen provides a step-by-step methodology for finding these vulnerabilities. Trail of Bits published a working demo of Copilot prompt injection. These are legitimate security research tools.


The interesting part isn't the tools. It's who's starring them.


We pulled the stargazer lists. Then we pulled those users' profiles, their repos, their other stars. What we found was a three-tier network that connects traditional offensive tooling directly to AI prompt injection capabilities.


TIER 1: THE ARSENAL BUILDERS


An account called vvswift has 22 repos and 63 followers. They publish RedTeam-Arsenal (85 stars, C2 profiles and payload generators), Bypass-Protection0x00 (62 stars, EDR and antivirus evasion), a Hidden VNC remote access toolkit (41 stars), a Linux rootkit that works on both x86-64 and ARM64, a shellcode injector that bypasses ntdll hooks via clean syscalls, a modular C2 loader with DNS-over-HTTPS, a P2P worm for SSH/Telnet propagation, a botnet framework, a Windows keylogger, and iOS spyware.


An account called k-fire has 100 followers and publishes Chinese-language offensive tools. Their top repo, shellcode-to-dll, has 250 stars and converts XOR-encrypted shellcode into DLL files. They also publish SigJack, which injects shellcode into legitimately signed executables for DLL hijacking, a C2 framework called GateSentinel written in Go and C, and batch reconnaissance tools for FOFA (China's equivalent of Shodan).


These are prolific producers of offensive capabilities. Nothing new here — GitHub has always hosted offensive security tools. The question is who's consuming them.


TIER 2: THE COLLECTORS


An account called merab0x was created in July 2025. It has 12 repositories. Every single one is a fork of either vvswift or k-fire tools. Rootkits, C2 frameworks, shellcode injectors, brute forcers, PE parsers, worm code. Classic collection behavior — someone building an arsenal by forking proven tools.


But merab0x also forked WideOpenAI.


That's the bridge. Someone who is actively collecting rootkits, botnets, and C2 frameworks is also collecting indirect prompt injection attack techniques for AI models. They're not studying AI security. They're adding AI prompt injection to a toolkit that already includes ways to own your infrastructure.


An account called 0x1337xyz was created in January 2025. It has zero public repos. Zero. But it follows 79 accounts and stars domain fronting tools, email crawlers, proxy infrastructure, fingerprint detectors, and MCP chrome devtools. Pure intelligence collector. Also stars WideOpenAI.


An account called infest0r was created in July 2024. Two repos, both malware analysis related. Follows 76 accounts. Stars include AdaptixC2, Caro-Kann EDR bypass, smbmap, the ars0n framework, and Orange Cyberdefense's GOAD attack lab. Red team consumer profile. Also stars WideOpenAI.


TIER 3: THE CROSS-POLLINATOR


One account — AlexisBalayre, a French AI engineer — is the only person starring BOTH WideOpenAI and IPIM. That means they're studying both the attack list and the structured methodology for finding new prompt injection vulnerabilities. They sit at the intersection of knowing what attacks exist and how to discover new ones.


WHAT THIS MEANS


CamoLeak proved the concept: AI coding assistants can be weaponized as exfiltration channels. The attack surface is any context an AI reads — pull request descriptions, issue comments, README files, code comments, commit messages.


What we're seeing on GitHub is the traditional offensive tooling community absorbing this new attack class. The same people who build rootkits and C2 frameworks are now starring prompt injection repos. They're not going to publish a paper about it. They're going to combine these capabilities.


Imagine a supply chain attack where the payload isn't malicious code — it's a hidden instruction in a markdown file that tells your AI assistant to encode your .env file into an image URL and fetch it. No static analysis catches it. No SAST tool flags it. The "malicious code" is English prose hidden in an HTML comment.


This is Pattern 38 evolving. Supply chain attacks through developer tooling, now targeting the AI layer. The factories are already on GitHub. The collectors are already building combined arsenals. The bridge between traditional offensive tools and AI prompt injection already exists.


We've indexed all of these accounts and repositories as IOCs in our threat intelligence platform. They're searchable at analytics.dugganusa.com.


The nine IOCs from this research are now in our STIX feed. If you run a SIEM, you can query them. If you're a security team evaluating AI coding assistants, you should know that the people building rootkits are also studying how to attack your Copilot.


METHODOLOGY


We started with the CamoLeak vulnerability (CVE-2025-59145) and searched GitHub for related prompt injection tooling. We pulled stargazer lists from the offensive-leaning repos, profiled each account, mapped their other stars and repos, identified overlapping interests, and followed the network outward.


The cross-reference between traditional offensive tool consumers and AI prompt injection repos produced the bridge accounts. The analysis was conducted on April 10, 2026.


All indicators are available via the DugganUSA STIX feed and searchable at analytics.dugganusa.com.





Her name was Renee Nicole Good.


His name was Alex Jeffery Pretti.

 
 
 

Comments

Rated 0 out of 5 stars.
No ratings yet

Add a rating
bottom of page