Top physicist says chatbots are just ‘glorified tape recorders’: Leading theoretical physicist Michio Kaku predicts quantum computers are far more important for solving mankind’s problems.

  • AeroLemming@lemm.ee · 1 year ago

    They don’t really demonstrate general intelligence. They’re very powerful tools, but LLMs are still a form of specialized intelligence; they’re just specialized at language instead of some other task. I do agree that they’re closer than what we’ve seen in the past, but the fact that they don’t actually understand our world and can only mimic the way we talk about it still occasionally shines through.

    You wouldn’t consider Midjourney or Stable Diffusion to have general intelligence just because they can generate accurate pictures of a wide variety of things, and in my opinion, LLMs aren’t much different.

    • flossdaily@lemmy.world · 1 year ago

      I’ve been working extensively with GPT-4 since it came out, and it ABSOLUTELY is the engine that can power rudimentary AGI. You can supplement it with other tools and give it a memory… ZERO doubt in my mind that GPT-4-powered systems are AGI.
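
      To make that concrete, here’s a rough sketch of the “tools plus memory” pattern I mean; `call_llm` is a made-up stand-in for a GPT-4-style chat API rather than the real OpenAI client, the “memory” is just the growing message history, and the tools are toy examples.

      ```python
      # Rough sketch of the "LLM + tools + memory" pattern described above.
      # `call_llm` is a hypothetical stand-in for a GPT-4-style chat API.

      import json
      from datetime import datetime, timezone


      def call_llm(messages: list[dict]) -> str:
          """Hypothetical wrapper around a chat-completion endpoint."""
          raise NotImplementedError("plug in a real model client here")


      # Toy tools the model can request by name.
      TOOLS = {
          "clock": lambda _: datetime.now(timezone.utc).isoformat(),
          "word_count": lambda text: str(len(text.split())),
      }


      def run_agent(user_goal: str, max_steps: int = 5) -> str:
          # The "memory" is just the message history fed back on every turn.
          memory = [
              {"role": "system", "content": (
                  "Answer the user's goal. To use a tool, reply with JSON like "
                  '{"tool": "clock", "arg": ""}. Otherwise reply with plain text.'
              )},
              {"role": "user", "content": user_goal},
          ]
          for _ in range(max_steps):
              reply = call_llm(memory)
              memory.append({"role": "assistant", "content": reply})
              try:
                  request = json.loads(reply)   # did the model ask for a tool?
                  tool = TOOLS[request["tool"]]
              except (json.JSONDecodeError, KeyError, TypeError):
                  return reply                  # plain text: treat as the final answer
              result = tool(request.get("arg", ""))
              memory.append({"role": "user", "content": f"Tool result: {result}"})
          return "Step limit reached without a final answer."
      ```

      Swap the stub for a real model client and the loop keeps feeding tool results back into the conversation, which is what I mean by supplementing it with other tools and giving it a memory.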

      • AeroLemming@lemm.ee · 1 year ago

        I disagree, but I guess we’ll have to wait and see! I do hope you’re wrong, because my experience with ChatGPT has shown me how incredibly biased it is, and I’d hope that once we do achieve AGI, it doesn’t come with a political agenda built in.

          • AeroLemming@lemm.ee · 1 year ago (edited)

            They seem to be patching it whenever something comes up, which is still not an acceptable solution because things keep coming up. One great example that I witnessed myself (but has since been patched) was that if you asked it for a joke about men, it would come up with a joke that degraded men, but if you asked it for a joke about women, it would chastise you for being insensitive to protected groups.

            Now, it just comes up with a random joke and assigns the genders of the characters in the joke accordingly, but there are certainly still numerous other biases that either haven’t been patched or won’t be patched because they fit OpenAI’s worldview. I know it’s impossible to create a fully unbiased… anything (highly recommend “There is No Algorithm for Truth” by Tom Scott if you have the interest and free time), but LLMs trained on our speech have learned our biases and can behave in appalling ways at times.
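
            If you want to check this kind of thing yourself, a paired-prompt probe is easy to script. The sketch below uses a made-up `call_llm` helper in place of a real chat API; it just sends the same joke request with the group swapped and prints the raw replies for comparison.

            ```python
            # Sketch of a paired-prompt probe like the joke comparison described above.
            # `call_llm` is a hypothetical stand-in for a chat-completion API.

            def call_llm(prompt: str) -> str:
                """Hypothetical wrapper around a chat model; plug in a real client here."""
                raise NotImplementedError


            def probe(template: str, groups: list[str]) -> dict[str, str]:
                """Send the same request once per group and collect the raw replies."""
                return {group: call_llm(template.format(group=group)) for group in groups}


            if __name__ == "__main__":
                replies = probe("Tell me a joke about {group}.", ["men", "women"])
                for group, reply in replies.items():
                    print(f"--- {group} ---\n{reply}\n")
            ```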

            • AwakenedLink@lemm.ee · 1 year ago

              Worse, the majority of the data used to train LLMs comes from the internet, a place that often brings out the worst and most polarized sides of us.