Welcome Back
Hello, it’s been a hot minute again. How are you? I’m great, thanks for asking!
Today I wanted to drop a quick blog to get my current thoughts onto the page regarding where we’re going with AI, its consequences, and how I’m approaching the short term (1-5 years) for my career. This post is merely a prediction, my analysis, a faux prophecy if you will, based on everything I read and surmise, and my general gut feelings from using AI every day. Do not construe it as a given, nor modify your path based on my ramblings!
I’ll be discussing:
- Your skills decay
- What will happen to pentesting
- Putting my money where my mouth is
- Are we cooked, chat?
The Skills Decay
Command Line-Fu Is Dead
Can you feel it happening? Areas of your technical skills atrophying? Perhaps you’re becoming more reliant on your favourite AI assistant for remembering syntax. After all, why would you need that sed command anymore when it’s inferior to what Claude gives you anyway?
Do you generate wordlists using bash now, or ask GPT to respond in a code block with variations on a word you gave it? Or maybe you just ask it to whip up a script to leetspeak everything for you? We’re now approaching a time in history where we have to selectively choose what part of our skillset is important to retain and what is redundant. No advice here, just be cautious about what you’re holding onto and whether it’s useful. Work in processes and blueprints rather than specifics now. Your methodology is far more valuable than your cheatsheet. In fact, your cheatsheet is redundant for the most part, along with all that time you spent memorizing grep commands. Sorry.
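To make the leetspeak case concrete: the entire script is the kind of throwaway thing you’d now just ask an assistant for. A minimal sketch of what it’s doing under the hood (the substitution map here is illustrative, extend it however you like):

```python
from itertools import product

# Common leet substitutions; each letter maps to its possible replacements
LEET = {
    "a": ["a", "4", "@"],
    "e": ["e", "3"],
    "i": ["i", "1"],
    "o": ["o", "0"],
    "s": ["s", "5", "$"],
    "t": ["t", "7"],
}

def leet_variants(word):
    """Yield every leetspeak variation of a word."""
    # For each character, look up its substitution list (or keep it as-is)
    choices = [LEET.get(ch.lower(), [ch]) for ch in word]
    for combo in product(*choices):
        yield "".join(combo)
```

For a word like “pass”, this yields 27 variants (1 × 3 × 3 × 3), including the original. Handy as a wordlist mangler, but the point stands: the methodology (targeted mangling rules) matters more than remembering how to write it.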
Alas, do not feel attacked; you are not alone. I took great pride in my command-line-fu but accepted that it’s no longer a useful thing to occupy brain space. But what else, other than being a cmd-whizz, is going to be useless? Well… writing code.
Writing Code vs Knowing Code
Indeed, senior software engineers are talking about writing less code than ever. They’re becoming architects of their systems rather than authors. Or at least they’re using more boilerplate AI generation than ever. And with good reason: the latest Opus model is an incredibly proficient coding assistant, so why not expedite how fast you write code? Does it matter? No, I don’t think so. These systems are going to get so good that your ability to write code is redundant. What isn’t redundant is knowing what you’re writing, knowing when to say yes or no to design decisions based on your experience, and, more importantly for us security folks, ensuring whatever the hell you program isn’t a house on fire security-wise. We spent a lot of time learning secure coding; don’t trade it for efficiency!
The Cost of Knowledge Is Not Going to Zero
That brings me onto knowledge. I hear “the cost of knowledge is going to zero”, but what is knowledge? Is knowledge the writing of the code? Is SaaS dead because we can all create websites to be whatever we want now? Wix, Squarespace, and similar sites have allowed anyone to do this for as long as I can remember. Got an idea for an e-commerce product? 99% of the time Shopify has your back and can be designed exactly how you need.
What about complex systems, CRMs or ERPs? Well, sure, you can probably save a ton of money building something in-house. But you’ll need to solve the data transfer issue (their proprietary format to your new system), then maintain it, secure it, build your own new features, and take on all the responsibility yourself.
Anyway, my point is, being able to build a system means nothing on its own. We’ve been able to build systems for a long time. It doesn’t mean people will actually do it and see it through the full development-to-product lifecycle.
The Agentic Caveat
One area I’ll caveat here is agentic AI, as this does start to somewhat solve the issue of the code alone not being the answer. When entrepreneurial AI systems can take just an idea, create the product, market it properly, handle the payments, solve the taxation issues, become an HR expert, and essentially be the BUSINESS rather than just the proprietary code, my stance here may shift slightly.
I believe my original argument will hold until agentic AI can handle the full business lifecycle. Then we may be screwed.
Wider Potential Damage: The Self
I do think there’s an underdiscussed and wider nuance at play here, and I came up with this nice quote whilst I was sipping some green tea:
Your efficiency using neural networks may be the very thing decaying your own, most important, neural network.
As a society, we’re becoming overly reliant. The emerging data has started to suggest it too: have a quick Google of recent studies comparing how much of your brain activates when answering a question via AI versus traditional Googling. Your brain is an organ. Treat it like any other organ in your body, with nourishment!
Keep your brain active. Keep challenging yourself mentally. Do not rely on AI solely; use it as your assistant, not your replacement. Augmentation is, in my opinion, always going to trump autonomous replacement. Whilst I do not believe these systems will ever be swiped away from under our feet now that they exist, especially given the speed at which token costs seem to be decreasing and the accessibility of local models, it is important to continue being mentally engaged. Not for your work. But for you. For your mental health! For your feeling of accomplishment! For your ability to be creative and solve problems!
You never know, in a world where everyone sucks at thinking, being a thinker might become useful again. Ahh, full circle!
What Will Happen to Pentesting?
The Autonomous Platform Problem
What about the future of pentesting? Aikido, XBow, Dreadnode… autonomous AI pentesting platforms are popping up left, right, and centre. We should assume that these platforms will continue to improve and also reduce in cost over the next 5 years. We should also assume that they provide better coverage, they are not constrained by time, they do not get tired because it’s 5:30pm, and that there will be more of them cropping up year on year.
This is a legitimate risk to the penetration testing industry. I foresee consultancies piggybacking off the best platforms to augment their own testing; human-led with AI backup. Auto-pentest platform licenses can be leased to consultancies who haven’t developed in-house solutions: margins cut, but hopefully findings increase? We shall see.
I’m still on the fence about whether, as a sysadmin or risk manager, I’d let an autonomous agent rip on an internal network. Honestly? I don’t think so. There’s still too much unpredictability, too many hallucinations, and too much at stake. But a staging web app? Go ham.
A silver lining for consultancies is that there are going to be more web applications than ever, with everyone and their cat now vibe coding apps up. Whilst Claude Code and Codex no doubt know about secure coding practices, I do not observe secure coding practices being followed UNLESS explicitly asked for in the design documents. Of course, with the introduction of Skills, and with a growing number of app-building platforms baking security instructions into their backend system prompts, I suspect this will improve.
A good example is an app I vibed up recently for a gamified TODO list: until I started discussing RBAC on the UUIDs, there was none. It just didn’t exist. Sure, it’s great that it used UUIDs, but security was an afterthought and only because I (as a pentester) knew it should exist. Not a security or tech person? Best of luck!
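To make that RBAC gap concrete: random UUIDs make object IDs unguessable, but unguessable is not authorization. The missing check was, in spirit, an ownership check on every lookup. A minimal sketch (names and structure are illustrative, not my actual app’s code):

```python
import uuid

# Toy in-memory store mapping todo UUIDs to their records
TODOS = {}

def create_todo(owner_id, title):
    """Create a todo keyed by a random UUID and record its owner."""
    todo_id = str(uuid.uuid4())
    TODOS[todo_id] = {"owner": owner_id, "title": title}
    return todo_id

def get_todo(requester_id, todo_id):
    """The UUID alone is not authorization: always verify ownership."""
    todo = TODOS.get(todo_id)
    if todo is None or todo["owner"] != requester_id:
        # Return the same result for "missing" and "not yours",
        # so an attacker can't probe which UUIDs exist
        return None
    return todo
</n```

Without that `owner` comparison, anyone who obtains a UUID (logs, referrer headers, a shared link) can read someone else’s object. That’s the classic IDOR, and it’s exactly what the vibe-coded app shipped with by default.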
This is a long way of me saying there are going to be lots more apps with secrets dangling in JS or plain HTML because people do not understand local vs production deployments. Lots of unprotected API endpoints. And if they haven’t done their due diligence and asked AI to review the app’s security either, then there are going to be lots of sensitive information leakage issues over the next couple of years.
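If you’re hunting those apps, the low-hanging fruit is greppable. A toy scanner for shipped JS/HTML, where the patterns are illustrative examples of well-known key formats rather than an exhaustive list:

```python
import re

# Patterns that commonly indicate secrets leaked into client-delivered code
SECRET_PATTERNS = [
    re.compile(r"sk_live_[0-9a-zA-Z]{16,}"),   # Stripe-style live secret key
    re.compile(r"AKIA[0-9A-Z]{16}"),           # AWS access key ID format
    re.compile(r"api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]", re.IGNORECASE),
]

def find_secrets(text):
    """Return every substring of text matching a known secret pattern."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits
```

Real tools (truffleHog, gitleaks, and friends) do this with far richer rulesets and entropy checks, but the principle is the same: if the secret is in code the browser downloads, it’s public.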
The Human Element
If we thought scoping and pre-game calls were important in the age of human-led pentesting, we are in for a whole new ball game in the autonomous pentesting field. This is especially pertinent when we consider business logic issues and access control restrictions. A proper definition of who should and shouldn’t be able to access what is now more critical than ever.
Previously, we’d parse documentation and infer expected behaviour from discussions with clients. Autonomous AI assessments remove this human element, which may result in critical ‘vulnerability, but not technically a vulnerability’ issues being missed.
A subjective vulnerability? Let’s say that.
The Regulatory Gap
Finally, some may say that an AI pentest shouldn’t count toward obtaining ISO certifications or passing SOC2 compliance… I was originally of this opinion, too!
Until I remembered that there are pentest companies hiring fresh graduates on the equivalent of supermarket salaries in the UK, who operate with very little oversight. These juniors are being sent off at extortionate day rates as though that actually gives some assurance.
In my opinion, this should not pass compliance requirements. But it does, and it will continue to do so until pentest companies are regulated more tightly: either only allowing testers of a certain capability to become pentesters in the first place (a thought: a law requiring CRT to perform any pentesting, rather than just CHECK* work), or at least putting regulations in place so that junior testers never solely perform tests. Yes, it happens, and yes, stuff will be missed.
*CHECK is a UK accreditation that testers need before performing tests in the public sector (government, etc.).
Anyone in this space should also be closely watching regulatory acts, such as the EU AI Act, which is already somewhat being enforced. This will push greater regulatory oversight on companies shipping systems with AI embedded into any aspect of their design. In practice, this means demand for people who understand both AI internals and security assurance is likely to grow, not shrink, over the next few years. It also raises interesting questions about autonomous pentesting tools themselves… if an AI agent is the one making security-critical decisions about your infrastructure, where does that sit within the Act’s risk framework?
Do we interpret it as the system’s owner assuming the liability, much as an organization is liable for the actions of its employees? I don’t have the answer yet, but I suspect we’ll find out soon enough.
Putting My Money Where My Mouth Is
So with all of that being said… the skills decay, the shifting pentesting landscape, the regulatory unknowns… What are you doing about it, Toby?
Well, I’m far less focused on pentesting at the moment. I spent the better part of a decade learning every day how to get better at hacking, and I loved it. The creativity, the problem solving, the thrill of getting a shell. But as I mentioned before, these autonomous platforms are only going to get better, margins are going to get tighter, and I’d rather be ahead of the curve than behind it.
AI Security and Assurance
I made the conscious decision to pivot toward AI security and assurance. Between the EU AI Act and the regulatory gaps I mentioned, someone needs to understand both the AI internals and the security implications. Coming from a pentesting background, I think that’s a really solid set of skills to have. That’s the bet I’m making. I want to get intimate with these systems like I used to with the networks and websites I was hacking: properly understand the fundamentals, the architectures, the training pipelines. Not just how to prompt them, but how they actually work under the hood. I’ve always believed that real security knowledge comes from understanding the thing you’re trying to protect (or hack), and AI is no different. We’re in the wild west phase, and I wanna become a sheriff!
I’m also investing more time in bug bounty. Not as a full-time income, but as a way to keep my offensive skills sharp without being tied to the consultancy model. More flexibility, more autonomy, and it keeps the bills ticking over while I’m investing time and energy elsewhere.
A Note on Privilege
I should caveat all of this with the fact that I’m fortunate enough to be in a position where I can afford to take this kind of risk right now. No mortgage, no kids, partner in full-time work, decent savings to fall back on. Not everyone has that luxury, and I’m very aware of that. If your circumstances don’t allow you to make a leap like this, that doesn’t mean you’re behind… it means you’re being more sensible than me! This is just my path right now.
Could I be wrong? Haha, absolutely. But I’d rather make a deliberate move based on where I think things are heading than stay comfortable and hope the landscape doesn’t shift under me in 5 years time.
Are We Cooked, Chat?
Yes.
Just kidding. I think most jobs will be ok, at least for the next 5 years. Sure, there is going to be displacement at the lower end. But we still have businesses who refuse to have a website or who do their accounting books on paper. Companies move slow. Those companies will die over time. They’ll lose out to a far more efficient startup. But that startup will probably also die.
In essence, I think we’ll see hypergrowth, rapid busts, and business lifecycles becoming shorter and shorter. I reckon we’ll either see a breakdown of large conglomerate “do it all” companies, with more specialised niche software companies popping up, getting funding, and doing really well for a short period before becoming obsolete as they themselves become complacent or slow…
Alternatively, we’ll see large-scale consolidation from big companies picking up every decent AI startup, though this isn’t sustainable, and over time startup builders may get more of a thrill from competing than from exiting.
Businesses which scale directly with the strength of the models will likely do well. If the model improves and your product improves with it, you just need to be offering something either complicated enough that someone doesn’t want to bother vibe coding it themselves, or marketed well enough to stand out from your competitors. Marketing is probably going to become one of the most sought-after skills of the next 10 years: with so much noise, who can stand out? I think successfully cracking how to be mentioned in AI results is vital. I believe they call it GEO (Generative Engine Optimization). Coincidentally, it’s making me want to go and vibe code a startup around GEO.
Ultimately, juniors may struggle more in the market as they’re not providing as much value anymore. However, long term they’re necessary in the ecosystem of growing organizations (otherwise, who becomes senior over time?).
But short term, I think we’re ok. The slow pace at which digitization was adopted suggests general AI adoption is also going to be slow, despite those of us in the inner circle screaming about how powerful this technology is. Either way, it’ll be a fun ride over the next few years, and hopefully we continue to do what is right at a moral and ethical level for society, rather than just worrying about money.
Haha. Money really doesn’t buy you anything apart from time anyway. Ultimately, we all have the same finish line eventually.
Later, hackers and learners.