WormGPT Is a ChatGPT Alternative With ‘No Ethical Boundaries or Limitations’

  • tree@lemmy.ml · 1 year ago

    A scary possibility with AI malware would be a virus that monitors the internet for news articles about itself and modifies its code accordingly. Instead of needing to contact a command-and-control server so the malware author can change its behavior, each agent could independently and automatically change its strategy to evade security researchers.

  • vrighter@discuss.tchncs.de · 1 year ago

    As more people post AI-generated content online, future AI will inevitably be trained on AI-generated material and basically implode (an inbreeding kind of thing).

    At least that’s what I’m hoping for
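The “inbreeding” worry above is sometimes called model collapse, and its statistical core can be sketched with a toy simulation (names and parameters here are illustrative, not how any real LLM is trained): each “generation” fits a Gaussian to a small sample drawn from the previous generation’s fit, and the fitted spread tends to shrink over many generations as rare variation is lost.

```python
import random
import statistics

def collapse_chain(generations=200, n=10, seed=None):
    """Toy model-collapse chain: each generation estimates (mu, sigma)
    from a finite sample of the previous generation's fitted Gaussian.
    Returns the final fitted sigma (the 'real' data had sigma = 1.0)."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # the original "real" data distribution
    for _ in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(n)]
        mu = statistics.mean(samples)      # refit on generated data only
        sigma = statistics.stdev(samples)  # finite samples underestimate spread
    return sigma

# Run several independent chains; the final sigma is typically far below 1.0.
finals = [collapse_chain(seed=s) for s in range(30)]
print(f"median final sigma over 30 runs: {statistics.median(finals):.4f}")
```

The shrinkage comes from estimating a distribution from its own finite samples over and over, so tail values disappear and never come back; whether real-world training pipelines actually behave this way is exactly what the reply below disputes.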

    • Paralda@programming.dev · 1 year ago

      That’s not really how it works, but I hear you.

      I don’t think we can bury our heads in the ground and hope AI will just go away, though. The cat is out of the bag.

  • KairuByte@lemmy.world · 1 year ago

    Everyone’s talking about this being used for hacking; I just want it to write me code to inject into running processes for completely legal reasons, but it always assumes I’m trying to be malicious. 😭

    • dexx4d@lemmy.ca · 1 year ago

      I was using ChatGPT to design a human/computer interface to allow stoners to control a light show. The goal was to collect data to train an AI to make the light show “trippier”.

      It started complaining about using untested technology to alter people’s mental state, and how experimenting on people wasn’t ethical.