silence7@slrpnk.net to Technology@lemmy.world · English · 10 months ago
How Googlers cracked an SF rival's tech model with a single word | A research team from the tech giant got ChatGPT to spit out its private training data (www.sfgate.com)
cross-posted to: technology@lemmy.world
∟⊔⊤∦∣≶@lemmy.nz · English · 10 months ago
Hate to break it to you, but you're more qualified than me! I only did a Coursera cert in machine learning.
Modva@lemmy.world · English · 10 months ago
My fun guesswork here is that the neural net weights don't change during querying, only during training. Otherwise users could permanently damage the models.
∟⊔⊤∦∣≶@lemmy.nz · English · 10 months ago
The neural net doesn't change, of course, but previous text is used as context for the next generation.
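The distinction the two comments are making can be sketched in a few lines. This is a toy illustration, not how ChatGPT actually works: the dictionary `WEIGHTS` and the `next_token` rule are made-up stand-ins for a real model's frozen parameters. The point it demonstrates is that at inference time the weights are only read, while the growing context (the conversation so far, plus each token generated) is the only thing that changes between steps.

```python
# Toy sketch of autoregressive generation with frozen weights.
# WEIGHTS stands in for a trained model's parameters: fixed at
# inference time. Only the context grows, as each generated token
# is appended and fed back in for the next step.

WEIGHTS = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}  # frozen toy "model"

def next_token(context):
    """Pick the next token from the last token of the context (toy rule)."""
    return WEIGHTS.get(context[-1], "the")

def generate(prompt, n_tokens):
    context = list(prompt)                   # previous text is the model's only memory
    for _ in range(n_tokens):
        context.append(next_token(context))  # weights are read here, never written
    return context

print(generate(["the"], 4))  # → ['the', 'cat', 'sat', 'on', 'the']
```

No query ever mutates `WEIGHTS`, which mirrors why users can't "permanently damage" a deployed model: changing the weights requires a separate training run, while a chat session only ever extends the context.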