Poisoned AI went rogue during training and couldn’t be taught to behave again in ‘legitimately scary’ study::AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.
Anyone can go ahead and make their own gpt4chan (it's a real thing). I trained mine on some pretty toxic subreddits, and it's already pretty mean.
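
For anyone curious, here's a rough sketch of what that kind of fine-tune looks like with Hugging Face transformers, assuming you've already scraped the comments into a plain-text file. The checkpoint name, file path, and hyperparameters below are placeholders for illustration, not my exact setup:

```python
# Minimal causal-LM fine-tuning sketch: feed a raw comment dump to a small GPT-2.
# "comments.txt" and all hyperparameters here are illustrative placeholders.
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    TextDataset,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # any causal LM checkpoint works as a starting point
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Pack the scraped comments into fixed-length blocks for language modeling.
dataset = TextDataset(tokenizer=tokenizer, file_path="comments.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="toxic-gpt2",
    num_train_epochs=1,
    per_device_train_batch_size=4,
)

Trainer(
    model=model,
    args=args,
    data_collator=collator,
    train_dataset=dataset,
).train()
```

The hard part isn't the training loop, it's collecting enough raw text; once you have the dump, a single consumer GPU and a few epochs are enough to make a small model noticeably nastier.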