In the mid-to-late 1800s humanity developed the humble beginnings of evidence based western medicine, doctors treated illness using medication (mostly opium) and performed basic surgery. We began to see reductions in mortality rates and improvements in public health. Medical history of course, didn’t end in the 1800s, beginning in the 1870s new “medical” boom occurred: patent medicine vendors travelling the countryside pitching cures.
These cures weren’t for diseases that medicine was already addressing such as cholera or typhoid rather they were for entirely new ailments. Fun sounding diseases such as americanitis (the disease of living too fast), neurasthenia (exhaustion with technology) and hysteria (a dark stain on our medical history) became the modern plague. They weren’t pitched as evolutions of known diseases, rather they were new conditions that medicine couldn’t understand because it was “outdated”. Opium? How quaint, Surgery? How barbaric.
The new era of disease required a new proprietary fix with its own new secret ingredients. Vendors would publish almanacs, hold lectures and make charts flooded with medical-adjacent terms and classification systems. Suddenly everyone wanted, nay, needed Dr Kilmer’s Swamp Root or Lydia Pinkham’s Vegetable Compound. Doctors were dismayed, and pointed out that these new conditions were in fact just anxiety, poor diet or depression wrapped into a much better looking wrapper. This was dismissed by the public and the patent vendors as just them “not getting it”, they were out of date dinosaurs that hadn’t yet adapted to the reality of the new disease.
Why is this relevant? Well, patent medicines did sometimes contain novel active ingredients (usually boatloads of cocaine and alcohol). Some potentially might have worked. Instead of integrating these discoveries or understandings into medicine, vendors built entire parallel industries around them, replete with new diagnostic frameworks and certifications. This didn’t stop in the US until 1906 when the Pure Food and Drug Act forced vendors to disclose ingredients and stop making fraudulent claims.
Flash forward to the present, in the mid to late 2010s humanity developed the humble beginnings of functional AI. Whilst the concept for a large language model had existed for a while, the emergence of both the internet and affordable hardware enabled scale beyond our wildest dreams. The history of AI didn’t end in the 2010’s, in the early to late 2020’s we started to see these tools explode. The adoption of these tools without consideration or planning led to the development of an entire industry built around the need to secure these tools. This industry created its own taxonomy of threats with pretty charts and lectures designed to legitimise it. Prompt injection, emergent capabilities and model poisoning became all the rage.
This new era of threats required it's own proprietary practices, tools and teams. Suddenly everyone wanted what these vendors were selling. Security practitioners were dismayed and pointed out that these new conditions were in-fact just product and application security wrapped into a different, more appealing wrapper.
Why is this relevant? Well there is some truth to the idea that novel security challenges exist within AI, but it’s being packaged as if there is a need for it to operate separately from ProdSec and AppSec. There is an entire parallel industry being created when its really just an extension of our existing practices with some new learnings necessary.
The patent medicine industry for today’s ailment is inventively named AI security.
We’re in an AI boom/bubble/whatever, I am not going to debate about the validity or reality of it because it isn’t relevant. You, me and everyone else have got a job to do and that job most likely involves securing something that is getting or already has AI, regardless of our feelings or opinions. Technology being technology and hype-cycles being hype-cycles we have quickly found ourselves facing a petri-dish of half-baked immature technology smearing rapidly expanding lifeforms all over the place. We’re also continually being bombarded with the latest and greatest solution to fix the problems that tend to occur when half-baked immature technology gets unleashed upon business and individuals alike. What are these problems? Are they something unique? My argument (if it wasn’t already clear) is no and no.
AI Security is largely just ProdSec and/or AppSec. Note: I say largely, because there is legitimate necessity for researchers to focus on novel attacks that might arise within these models and a need to update our classification methodologies. Furthermore there is space for new approaches to solving problems and ways that we can use these tools to our benefits. But as a defender/blue/purple teamer I struggle to see the need for this to exist as a separate discipline.
AI systems are just software systems (usually APIs) with unusual/quirky characteristics. They take inputs, process them (through parameters as opposed to explicit logic) and produce outputs. The concerns these systems create map neatly to traditional product security domains. To clarify this I have outlined some of them below:
Prompt Injection aka Input Validation Failure -
Regardless as to if you’re sanitising SQL inputs or sanitising prompts to prevent injection, the pattern is identical. We have our methods for addressing this including:
Input sanitisation
Structured input formats
Delimiters between system instructions and user input
Filtering layers before processing (anyone heard of a WAF before)
Output validation
Principle of least privilege
Input length limiting and rate limiting
Jailbreaking aka AuthN/AuthZ Bypass -
Securing who can query your data and under what circumstances is the same problem whether its a model or an API.
System level guardrails independent of the model
Filtering layers before and after processing
Behavioural analysis for detecting policy violations
Principle of least privilege
Monitoring for known patterns (WAF again)
Model Poisoning aka Software Supply Chain attack -
A poisoned npm package and a backdoored model weight from Hugging Face are exactly the same threat category. Both are just third party dependencies
Model provenance tracking and signing
Model sourcing decisions
SBOMs
Dependency scanning
Code review
Integrity of training pipelines
Controlled training environments
Version control
This mapping continues all the way up and down the threat landscape and I might dedicate a large post to this alone.
So what's the prescription from this ProdSec lead? Extend your existing ProdSec/AppSec program to cover AI systems. Train your team on the quirks of probabilistic behaviour and adversarial ML, yes. Add some new tools for model scanning and prompt testing, absolutely. But don't create a parallel security organisation. Don't buy a vendor platform that treats AI security as fundamentally separate from the rest of your security posture. The patent medicine era ended when regulators forced transparency about ingredients. The AI security vendor boom will end when organisations realise they already have most of the ingredients in their existing security programs, they just need to apply them to a new (if quirky) type of software system.
The patent medicine vendors were selling sugar water with cocaine. The AI security vendors are selling you AppSec/ProdSec with buzzwords. Some of the ingredients are genuinely useful. Most of what you need, you already have.

