justOnePersistentKbinPlease@fedia.io to Technology@lemmy.ml • "participants who had access to an AI assistant wrote significantly less secure code" and "were also more likely to believe they wrote secure code" — 2023 Stanford University study published at CCS '23
20 days ago
No. I would suggest you actually read the study.
The problem the study reveals is that people who use AI-generated code generally don't understand it and aren't capable of debugging it. As a result, bigger LLMs will not change that.
No, unfortunately you are wrong.
GPT-4 is a better version of GPT-3.
The brand new one that is allegedly "unhackable" just has a role hierarchy providing rules, and that hasn't been fully tested in the wild yet.