LLMs operate using tokens, not letters. This is expected behavior. A hammer sucks at controlling a computer, and that's okay. The issue is the people telling you to use a hammer to operate a computer, not the hammer's inability to do so.
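For a concrete look at what the model actually "sees", here's a minimal sketch using OpenAI's tiktoken library. The cl100k_base encoding and the exact split shown in the comment are assumptions; different models tokenize differently.

```python
# Minimal sketch: inspect the token chunks an LLM operates on instead of
# letters. Requires OpenAI's tiktoken library (pip install tiktoken);
# cl100k_base is one of its built-in encodings, and other models may
# split the word differently.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
token_ids = enc.encode("strawberry")
chunks = [enc.decode_single_token_bytes(t) for t in token_ids]
print(chunks)  # e.g. [b'str', b'aw', b'berry'] -- no standalone 'r' anywhere
```

If the word arrives as a few multi-letter chunks like that, counting individual letters is not an operation the model ever directly performs.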
It would be luck-based for pure LLMs, but now I wonder if the models that can use Python notebooks might be able to write a script to count it. Like, it's actually possible for an AI to get this answer consistently correct these days.
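It's basically a one-liner once the model drops into Python. A sketch of the kind of script meant here, assuming the classic "count the r's in strawberry" question as the example:

```python
# Counting letters deterministically with the standard library,
# instead of guessing from token chunks.
word = "strawberry"
print(word.count("r"))  # prints 3
```

Any code-interpreter model that writes something like this will get the answer right every time, since the counting happens in Python rather than in the model's token space.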
In what way is presenting factually incorrect information as if it’s true not a bad thing?
Maybe in an "it is not going to steal our jobs… yet" way.
True, but if we suddenly invent an AI that can replace most jobs, I think the rich will have more to worry about than we do.
Maybe, but I am in my 40s and my back aches; I am not in shape for a revolution :D
Lenin was 47 in 1917
That's looking on the bright side :D
Claude 3 nailed it on the first try