It's basically a requirement for the OSEP cert put out by OffSec, so there are a fair number of cybersec guys who at least piddle with it. If you're looking for some projects or a community, hopefully that's a good start.
Pen tester here. While I don't focus on LLMs, it would be trivial in the right AI-designed app. In a tool-assist app without a human in the loop, it's as simple as adding this to any input field:
&& [whatever command you want] ;
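To make the point concrete, here's a minimal sketch (hypothetical helper names, `echo` standing in for whatever the tool actually runs): if the tool builds a shell string from the field, `&&` chains a second command; if it passes an argument list with no shell, the same payload is just inert text.

```python
import subprocess

def run_vulnerable(user_input: str) -> str:
    # Interpolating untrusted input into a shell string: "&&" is
    # interpreted by the shell as a command separator.
    return subprocess.run(
        f"echo {user_input}", shell=True, capture_output=True, text=True
    ).stdout

def run_safer(user_input: str) -> str:
    # Argument list, no shell involved: "&&" is just a literal
    # argument passed to echo, not a second command.
    return subprocess.run(
        ["echo", user_input], capture_output=True, text=True
    ).stdout

payload = "hello && echo INJECTED"
# run_vulnerable(payload) executes the injected echo as its own command;
# run_safer(payload) prints the whole payload back verbatim.
```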
If you wanted to poison the actual training set, I'm sure it would be trivial too, though it might take a while to build up enough respect to get a PR accepted. But remember, we only caught that upstream attack on SSH because one guy noticed the extra milliseconds in his SSH login sessions. Given how new the field is, I don't think we've developed strong enough autism to catch this kind of thing the way we did with SSH.
Unless vibe coders are specifically prompting ChatGPT for input sanitization, validation, and secure coding practices, a large portion of the design patterns these LLMs spit out are also vulnerable.
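For what "input validation" actually means here, a sketch (hypothetical names and pattern, not from any particular codebase): an allowlist check that rejects anything containing shell or query metacharacters outright, instead of trying to strip bad characters after the fact.

```python
import re

# Allowlist: only letters, digits, underscore, hyphen; 1-32 chars.
# Anything else (spaces, "&&", quotes, semicolons) is rejected early,
# before the value reaches a shell, query, or template.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_-]{1,32}$")

def validate_username(name: str) -> str:
    if not USERNAME_RE.fullmatch(name):
        raise ValueError(f"invalid username: {name!r}")
    return name
```

The point is the direction of the check: define what's allowed and refuse everything else, rather than enumerating what's dangerous.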
Really the whole tech field is just a nightmare waiting to happen though.
Which FAANG company are you a sr. engineer at?
He said effective
I feel the pain in your comment.
I too have been burned by "cross-platform" tooling. What I've learned is that the more complex your project is, the less likely it is to cross-compile cleanly.
But with that huge caveat, I'll say I've had a better time doing cross compilation on dotnet than I have on Rust. Either of them is infinitely better than learning CMake, though. That's definitely just my amateur take; I'm sure smarter people will tell you I'm wrong.