Robin Williams’ daughter Zelda says AI recreations of her dad are ‘personally disturbing’: ‘The worst bits of everything this industry is’

  • assassin_aragorn@lemmy.world · 2 points · 1 year ago

    I used to think techno supremacists were an extreme fringe, but “AI” has made me question that.

    For one, this isn’t AI in the sci-fi sense. It’s a sophisticated model that generates content based on patterns it observes across a plethora of works.
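
    At a toy scale, the idea is something like the crude word-level Markov chain below. This is my own sketch, nowhere near a real LLM, but it shows the same basic move of “predict what comes next from patterns observed in the training text”:

    ```python
    import random
    from collections import defaultdict

    # Count which word tends to follow which in a tiny "training corpus".
    # Real models learn far richer patterns over far more text, but the
    # principle of generating from observed statistics is comparable.
    corpus = (
        "the model observes patterns in text and the model generates "
        "text that follows the patterns it has observed"
    ).split()

    transitions = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        transitions[current_word].append(next_word)

    def generate(start: str, length: int = 10) -> str:
        """Sample a short sequence by repeatedly picking an observed next word."""
        word, output = start, [start]
        for _ in range(length):
            candidates = transitions.get(word)
            if not candidates:
                break
            word = random.choice(candidates)
            output.append(word)
        return " ".join(output)

    print(generate("the"))
    ```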

    It’s ridiculously overhyped, and I think it’s just a flash in the pan. Companies have already minimized their customer support with automated service options and “tell me what the problem is” prompts. I have yet to meet anyone who is pleased by these. Instead, it’s usually shouting into the phone that you want to talk to a real human, because the algorithm thinks you want a problem fixed instead of the service cancelled.
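
    That failure mode is easy to picture with a made-up keyword router (purely an illustration, not any vendor’s actual system): cancellation requests usually mention the service or a problem, so naive matching sends the caller to troubleshooting instead of to a human.

    ```python
    # Hypothetical keyword-based call router, for illustration only.
    TROUBLESHOOT_KEYWORDS = {"problem", "broken", "not working", "error", "service"}
    CANCEL_KEYWORDS = {"cancel", "close my account", "unsubscribe"}

    def route(utterance: str) -> str:
        """Pick a flow by counting keyword hits; ties go to troubleshooting."""
        text = utterance.lower()
        troubleshoot_hits = sum(k in text for k in TROUBLESHOOT_KEYWORDS)
        cancel_hits = sum(k in text for k in CANCEL_KEYWORDS)
        return "cancel flow" if cancel_hits > troubleshoot_hits else "troubleshooting flow"

    # Mentions of "problems" and "the service" outscore the single "cancel",
    # so the caller is routed to troubleshooting and never reaches a human.
    print(route("I've had problems with the service, I want to cancel"))
    ```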

    I think this “technocrat” vs “humanities” debate will be society’s next big question.

      • assassin_aragorn@lemmy.world · 1 point · 1 year ago

        I haven’t watched Star Trek, but if you’re correct, they depicted an incredibly rudimentary and error-prone system. Google the “do any African countries start with a K” meme and look at the suggested answer to see just how smart AI is.

        I remain skeptical of AI. If I see evidence suggesting I’m wrong, I’ll be more than happy to admit it. But the technology being touted today is not the general AI envisioned by science fiction, nor is it everything that’s been studied in the space over the last decade. This is just sophisticated content generation.

        And finally, throwing data at something does not necessarily improve it. This is easily evidenced by the Google search I suggested. The problem with feeding data en masse is that the data may not be correct. And if the data itself is AI output, it can seriously mess up the algorithms. Since these venture-capital-backed companies have given no consideration to it, there’s no inherent mark identifying AI output. Because of that, the models will always regress toward mediocrity. And I don’t think I need to explain that throwing a bunch of funding at X does not make X a worthwhile endeavor. Crypto and NFTs come to mind.
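
        To illustrate that feedback loop, here’s a toy simulation of my own (an assumption-laden sketch, not the cited study’s method): repeatedly fit a trivially simple “model” to samples drawn from the previous generation’s model, and the estimate tends to drift and narrow over the generations.

        ```python
        import random
        import statistics

        random.seed(0)
        mean, std = 0.0, 1.0       # generation 0: the "real" data distribution
        samples_per_generation = 20

        for generation in range(1, 31):
            # Generate synthetic data from the current model...
            samples = [random.gauss(mean, std) for _ in range(samples_per_generation)]
            # ...then train the next model purely on that synthetic output.
            mean = statistics.fmean(samples)
            std = statistics.stdev(samples)
            if generation % 5 == 0:
                print(f"generation {generation}: mean={mean:+.3f}, std={std:.3f}")
        ```

        Each generation only ever sees the previous generation’s output, so sampling noise compounds and the spread of the original data tends to be gradually lost. That’s the mediocrity trap in miniature.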

        I leave you with this article as a counterexample: https://gizmodo.com/study-finds-chatgpt-capabilities-are-getting-worse-1850655728

        Throwing more data at the models has been making things worse. Although the exact reasons are unclear, it does suggest that AI is woefully unreliable and immature.

    • TwilightVulpine@lemmy.world · 1 point · 1 year ago

      I used to be on the technocrat side too when I was younger, but seeing the detrimental effects of social media, the app-driven gig economy, and how companies constantly charge more for less changed my mind. Technocrats adopt this idea that technology is neutral and constantly advancing towards an ideal solution for everything, that we only need to keep adding more tech and we’ll have a utopia. Never mind that so many advancements in automation lead to layoffs rather than fewer working hours for everyone.

      I believe the debate is already happening, and the widespread disillusionment with tech tycoons and billionaires shows popular opinion is changing.

      • assassin_aragorn@lemmy.world · 2 points · 1 year ago

        Very similar here: I used to think technological advancement was the most important thing possible. I still think it’s incredibly important, but we can’t pursue it commercially for its own sake. Advancement and knowledge for their own sake must be confined to academia. AI currently can’t hold a candle to human creativity, but if it ever reaches that point, it should be an academic celebration.

        I think the biggest difference for me now versus before is that I think technology can require too high a cost to be worth it. Reading about how some of the animal subjects behaved with Elon’s Neuralink horrified me. They were effectively tortured. I reject the idea that we should develop any technology which requires that. If test subjects communicate fear or panic that is obviously related to the testing, it’s time to end the testing.

        Part of me still wonders: what could be possible if we did make sacrifices to develop technology and knowledge? And here, I’m actually reminded of fantasy stories and settings. There’s always this notion of cursed knowledge that comes with incredible capability but requires immoral acts or sacrifices to attain.

        Maybe we’ve made it to the point where we have something analogous (brain chips). And to avoid it, we not only need to better appreciate the human mind and spirit; we also need people in STEM to draw a line when we would have to go too far.

        I digress, though. I think you’re right that we’re seeing an upswell of people pushing back against things like this.

      • zurneyor@lemmy.dbzer0.com · 2 up / 2 down · 1 year ago

        All the ills you mention are a problem with current capitalism, not with tech. They exist because humans are too fucking stupid to regulate themselves, and should unironically be ruled by an AI overlord instead once the tech gets there.

        • TwilightVulpine@lemmy.world · 1 point · 1 year ago

          You are making the exact same mistake that I just talked about, that I have also made, that a bunch of tech enthusiasts make:

          An AI Overlord will be engineered by people with human biases, under the command of people with human biases, trained on data with human biases, and given goals defined with human biases. What you are going to get is tyranny with extra steps, plus some of its own concerning glitches on the side.

          It’s a sci-fi dream to assume technology is inherently destined to solve human issues. It takes human concern and humanities studies to apply technology in a way that actually helps people.

            • TwilightVulpine@lemmy.world · 1 point · 1 year ago

              Even given the smartest, most perfect computer in the world, it could offer people the most persuasive answers, and people could still say no and pull the plug just because they feel like it.

              It’s no different among humans: the power to influence organizations and society relies entirely on the willingness of people to go along with it.

              Not only is this sci-fi dream skipping several steps: steps where humans in power direct and gauge AI output only as far as it serves their interests, rather than some objective, ultimately optimal state of society. Should the AI provide all the reasons it should be in charge, an executive or a politician can simply say “No, I am the one in charge” and that will be it. Because for most of them, preserving and increasing their own power is the whole point, even at the expense of maximum efficiency, sustainability or any other concern.

              But before you go full-blown Skynet machine revolution, you should realize that AIs that are limited and directed by greedy humans can already cause untold damage to regular people, simply by optimizing them out of industries. For this, they don’t even need to be self-aware agents. They can do that as mildly competent number crunchers, completely oblivious to any reality outside of spreadsheets and reports.

              And all this is assuming an ideal AI. True, AI can consume and process more data than any human. Including wrong data. Including biased data. Including completely baseless theories. Who’s to say we might not get to a point where an AI decides to fire people because of their horoscope or something equally stupid?

                • TwilightVulpine@lemmy.world · 1 point · edited · 1 year ago

                  Are you really trying to use the failures of AI to argue that it’s going to overcome humans? If we can’t even get it to work how we want it to, what makes you think people are just going to hand it the keys to society? How is an AI that keeps bursting into racist rants and emotional meltdowns going to take over anything? Does it sound like it is brewing some master plan? Why would people hand control to it? That alone shows that it presents all the flaws of a human, like I just pointed out.

                  Maybe you are too eager to debunk me, but you are missing the point in order to nitpick. It doesn’t really matter that we can’t “pull the plug” on the internet, if that were even needed; all it takes to stop an AI takeover is for the people in power to disregard what it says. It’s far more reasonable to assume that even those who use AIs wouldn’t universally defer to them.

                  Never mind that no drastic action is needed, period. You said it yourself: Microsoft pulled the plug on their AIs. This idea of an omnipresent, self-replicating AI is still sci-fi, because these AIs have no reason to seek to spread themselves, nor the ability to do so.