  • Still remember my father’s face when he realized I was sharing all his personal documents with LimeWire.

    My internet was so slow that I tweaked every setting and did pattern analysis on the bandwidth traffic with nonsense tweaks. So I probably tweaked that setting too and felt like the internet was 10 kbps faster.

    I also recall placing the modem under a blanket because I thought the internet was faster when the modem was warmer. My father’s face when he saw the modem had melted… and the flabbergasted repair technicians theorizing that our house had probably been struck by lightning and somehow caught fire. Little did they know.

    I was just a desperate kid willing to try anything to be able to download 1 song per day.


  • You keep asking questions like “can a model build a house?” while ignoring questions like “can an octopus build a house?” Then you ask “can a model learn in seconds how to escape from a complex enclosure?” while ignoring “can a newborn human baby do that?”

    Can an octopus write a poem? Can a baby write an essay? Can an adult human speak every human language, including fictional languages?

    Just because it isn’t as intelligent as a human doesn’t mean it isn’t some type of intelligence.

    Go and check what we call AI in video games. Do you think that’s a simulated human? Go see what we’ve been calling AI in chess. Is that a simulated human being playing chess? No.

    We’ve been calling things that are waaaaaay dumber than GPTs “artificial intelligence” for decades. Even in academia. Suddenly a group of people decided that “artificial intelligence must be equal to human intelligence”. Nope.

    Intelligence doesn’t need to be the same type as human intelligence.



  • Things we know so far:

    • Humans can train LLMs with new data, which means they can acquire knowledge.

    • LLMs have been proven to apply knowledge; they’re acing exams that most humans wouldn’t even dream of understanding.

    • We know multimodal models are possible, which means these models can acquire skills.

    • We’ve already seen these skills applied. If it weren’t possible to apply their outputs, we wouldn’t use them.

    • We have seen models learn and generate strategies that humans never even conceived. We’ve seen them solve problems that were unsolvable by human intelligence.

    … So what’s missing from that definition of intelligence? The only thing missing is our willingness to create a system that can train and update itself, and that is possible.


  • What is intelligence?

    Even if we don’t know what it is with certainty, it’s valid to say that something isn’t intelligence. For example, a rock isn’t intelligent. I think everyone would agree with that.

    Despite that, LLMs are starting to blur the lines, making us wonder whether what matters about intelligence is really the process or the result.

    An LLM will give you much better results in many of the areas currently used to evaluate human intelligence.

    For me, humans are a black box. I give them inputs and they give me outputs. They receive inputs from reality and generate outputs. I’m not privy to the “intelligent” process inside other humans. How can I tell they’re intelligent if all I can perceive are their inputs and outputs? Maybe all we care about are the outputs, not the process.

    If there were an LLM capable of simulating a close friend of yours perfectly, would you say the LLM is not intelligent? Would it matter?