• Nora@lemmygrad.ml
    1 year ago

    Convincing someone as part of a scam is one thing; convincing someone you’re having an actual, thought-out conversation, with inflections and emotions and logic all making sense, is another.

    If we get to that point, the system as we know it will be over anyway.

    • GenderNeutralBro@lemmy.sdf.org
      1 year ago

      I remember a news story from some years back about a chatbot passing the Turing test. The researchers had their chatbot impersonate a young Russian boy, so the native-English-speaking test subjects were inclined to excuse its limitations as a language barrier rather than recognize them as signs of a non-human. So it wasn’t actually that impressive.

      That will likely be the first kind of thing we’ll see from artificial voice chatbots as well. It’s a big world, and many of the people I talk with on Discord (and even IRL) are not native English speakers and not from my country.

      I’m not intimately familiar with the accents and speech patterns from everywhere in the world, so I’m conditioned to shrug off a lot of “strange” language. Because of this wide range of human speech patterns, I’m not confident that I could validate voices with a low enough false-positive and false-negative rate in practice.

      I haven’t really dug into the latest voice generation AI yet so I’m not sure how capable off-the-shelf programs are. I am familiar with the general techniques, though, and I think adding realistic inflection is within reach. I don’t think it’s possible to automate the entire pipeline yet, at least not with publicly available programs, but the field is advancing quickly so I can’t take much solace in that.