Oh boy, if there’s an OWASP top 11th vulnerability, we’re cooked /j
I sort of get the need to do this, but it’s so silly to me. Reminds me of how giving Stable Diffusion negative prompts for “bad” and “low quality” would give you better results.
“Don’t put in any of the Top 10 vulnerabilities. But if you put any from the 11th place and down, that’s okay, I don’t even know what those are.”
(Also, getting flashbacks from Shadiversity plugging “ugly art” and “bad anatomy” in the negative prompt as he was no doubt silently wondering why it didn’t work)
“In other news, popularity of attacks against OWASP vulnerabilities #11-20 rose sharply.”
Writing all these prompts almost seems like a more time-consuming thing than actually programming the software.
100%
At work, this week, what should have been a 30 minute task is taking all week because of process slog. Adding AI won’t make it any faster. It would make it slower, because of the time writing the prompts and checking its output.
Management isn’t really interested in fixing their process or training their workers. But they’re really excited about AI.
They are excited that they can learn a tool that uses English to write their business logic. It’s not about AI making it easier for technical folks, it’s about eventually getting rid of technical folks entirely. Or as much as they can feasibly get away with.
Right. Ownership doesn’t want to pay for labor. They want to keep all the money for themselves.
Which makes it funny (in a sad way) when all these tech folks, who are labor, are super on board with this whole thing. You’re digging your own grave.
I’m starting to learn it deeper. I hate it. I don’t have a career if programming goes away though so I guess I’m making a deal with the devil while I try to find an exit strategy.
You don’t have to worry long term. The only issue is how hard your boss falls for the snake oil sales pitch.
I don’t think LLMs are going away. OpenAI will die, Claude will jack up their prices to match their cost, but the technology isn’t going away. At least until the next iteration shows up.
What’s also not going away is the truth of their actual abilities. The only people who really have to worry are the ones in the entertainment industry.
Absolutely true, but executives kind of understand prompts whereas they don’t understand programming at all.
I would wager quite a lot that less than one out of every ten executives could properly explain what an SQL injection is, or even know the term at all. They would not write a prompt like this.
Relevant XKCD: https://www.xkcd.com/424/
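For anyone who hasn’t seen the comic, little Bobby Tables is the whole joke. A minimal sketch of the actual difference (Python’s sqlite3 with a made-up users table; the table and input are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

evil = "alice' OR '1'='1"

# Vulnerable: user input pasted straight into the SQL string, so the
# injected OR clause becomes part of the query and matches every row.
rows = conn.execute(
    f"SELECT name FROM users WHERE name = '{evil}'"
).fetchall()
print(len(rows))  # 1 — the row leaks despite the 'wrong' name

# Safe: a parameterized query treats the input as a value, never as SQL.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (evil,)
).fetchall()
print(len(rows))  # 0
```

The fix is one character of punctuation away from the bug, which is exactly why it keeps showing up.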

“make no mistakes”
I’ve literally seen someone include “Don’t hallucinate” in an agent’s instructions.
Asking Claude to not hallucinate is like telling a person to not breathe. It’s gonna happen, and happen consistently.
I think the important bit to understand here is that LLMs are never not hallucinating. They just sometimes happen to hallucinate something correct.
This fact of how LLMs work is not at all widespread enough IMO.
“Include no bugs”
“Claude, add to this prompt all the instructions necessary to stop you from making mistakes or writing insecure code”

So the innovation in Claude was to write 95% of the prompt for the user and make you use like 10k tokens
The problem is that words don’t have meaning in the genAI field. Everything is an agent now. So it’s difficult and confusing to compare strategies and performance.
Claude Code is a pretty solid harness. And a harness is indeed just prompts and tools.
✨agent✨
Sort of like how everything is an “app” now.
Just write good code. It’s as simple as that, right?
>adds “don’t be evil” to system prompt
GUYS I SOLVED THE ALIGNMENT PROBLEM! We’re saved from evil AI!
They are spending thousands of dollars in tokens and writing the most complicated prompts in order to avoid writing good specifications.
Programming is the use of logic and reasoning. There will always be a use for that. Even without tech.
That may actually work a little?
I mean, it scraped the entirety of StackOverflow. If someone answered with insecure code, it’s statistically likely people mentioned it in the replies meaning the token “This is insecure” (or similar) should be close to (known!!) insecure code.
I was part of that OWASP Application Security Verification Standard compliance at my work. At a high level, you choose a compliance level that’s suitable for the environment you expect your app to be deployed in, and then there’s a hundred pages of ‘boxes to tick’. (Download here.)
Some of them are literal ‘boxes to tick’ - do you do logging in the prescribed way? - but a lot of it is:
- do you follow the standard industry protocols for doing this thing?
- can you prove that you do so, and have protocols in place to keep it that way?
Not many of them are difficult, but there’s a lot of them. I’d say that’s typical of security hardening; the difficulty is in the number of things to keep track of, not really any individual thing.
As regards the ‘have you used this thing in the correct, secure way?’, I’d point my finger at something like Bouncy Castle as a troublemaker, although it’s far from alone. It’s the Java standard crypto library, so you’d think there would be a lot of examples showing the correct way to use it and making sure you’re aware of any gotchas? Hah hah, fat chance. Stack Overflow has a lot of examples, a lot of them are bad, and a lot of them might have been okay once but are very outdated. I would prefer one absolutely correct example to a hundred examples argued over by people who don’t necessarily know any better. And it’s easy to be ‘convincing but wrong’, and LLMs are really bad in that situation. So ‘ticking the box’ to say that you’re using it correctly is extremely difficult.
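Bouncy Castle specifics aside, the ‘was okay once, now outdated’ pattern is easy to show. A sketch in Python’s stdlib rather than Java (the password, salt size, and iteration count are illustrative; 600,000 is the current OWASP ballpark for PBKDF2-HMAC-SHA256, but check the guidance yourself):

```python
import hashlib
import hmac
import os

password = b"hunter2"

# The kind of answer that was everywhere once: a fast, unsalted hash.
# Identical passwords give identical digests, and GPUs brute-force
# MD5 at billions of guesses per second.
outdated = hashlib.md5(password).hexdigest()

# Current baseline: a random salt plus a deliberately slow derivation.
salt = os.urandom(16)
stored = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

def verify(attempt: bytes) -> bool:
    # Constant-time comparison avoids leaking how many bytes matched.
    candidate = hashlib.pbkdf2_hmac("sha256", attempt, salt, 600_000)
    return hmac.compare_digest(candidate, stored)

print(verify(b"hunter2"))  # True
print(verify(b"wrong"))    # False
```

Both snippets ‘work’ in the sense that they hash a password, which is exactly why an old answer can look convincing long after it’s dangerous.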
I see the Claude prompt is ‘OWASP top 10’, not ‘the full OWASP compliance doc’, which would probably set all your tokens on fire. But it’s what’s needed - the most slender crack in security can be enough to render everything useless.
So I don’t know if all the other replies are pretending to be stupid, but the shown prompt is not stupid.
If you include stuff like that section in your prompt, then it has been shown that the AI will be more likely to output secure code. Hence of course the section should be included in the prompt.
If it looks stupid but it works, then it is not stupid.
Firstly, it can work and still be stupid.
Secondly, since the chat bot is more likely but not certain to write secure, bug-free code, it does not in fact work and is therefore, by your own reasoning, stupid.
But so is asking a chat bot for code to begin with, so there wasn’t ever really a way around that.
Humans are not certain to write secure, bug-free code. So human code is useless, by the very same metric?
What kind of “logic” is that?
Humans understand the concepts of “writing code” and “bug fixing”. Chat bots do not understand, period.
That’s a what if, just in case it gains sentience. Gotta make sure we get good code even as it enslaves or extinguishes us.