• 0 Posts
  • 70 Comments
Joined 1 year ago
Cake day: June 12th, 2023


  • There are two dangers in the current race to AGI and in the inevitable ANI products developed along the way. The first is that advancement and profit are the goals, while concern for AI safety and alignment in case of success has taken a back seat (if it’s even still considered). The second is that we don’t even have to succeed at AGI for there to be disastrous consequences.

    Look at the damage early LLM use has already done, and it’s still not good enough to fool anyone who looks closely. Now imagine a non-reasoning LLM able to manipulate any medium well enough to be believable, even against other AI testing tools. We’re just getting to that point - the latest AI Explained video discussed Gemini and Sora, and one of them (I think Sora) fooled some text generation testers into thinking its stories were 100% human created.

    In short, we don’t need full general AI to end up with catastrophe; we’ll easily manage that with the “lesser” ones ourselves. Which will really fuel things if AGI comes along and sees what we’ve done.


  • Nothing that high level. Different systems run independently, and some may be redundant to each other in case one fails. But run something long enough, especially in extreme conditions, and things can drift from their baselines. If a regular power-off-and-on prevents that, it’s a lot easier than trying to chase down gremlins that could be different each time they pop up, for different reasons.

    Even NASA, I believe, has done such resets from time to time, from Apollo through the unmanned probes. Since Windows was mentioned: the newest versions don’t really do this baseline reset if you just shut them down, even if you disable the hibernate/sleep modes, while a restart does.
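    If you want to check whether your own machine behaves that way, here’s a minimal sketch, assuming a Windows box with Python: it reads the HiberbootEnabled registry value, which is the switch commonly associated with Fast Startup (the feature that makes a plain shutdown a partial hibernate rather than a full reset). Exact behaviour can vary by Windows version, so treat it as illustrative rather than definitive.

    ```python
    # Minimal sketch (Windows only, assumptions noted above): check whether
    # Fast Startup is enabled, i.e. whether "Shut down" skips the full baseline
    # reset that "Restart" performs.
    import winreg

    KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Power"

    def fast_startup_enabled() -> bool:
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
                value, _type = winreg.QueryValueEx(key, "HiberbootEnabled")
                return value == 1
        except FileNotFoundError:
            # Value missing: assume the default, which is enabled on recent Windows.
            return True

    if __name__ == "__main__":
        if fast_startup_enabled():
            print("Fast Startup is on: shutdown keeps kernel state; restart gives a clean slate.")
        else:
            print("Fast Startup is off: shutdown and restart both reset to baseline.")
    ```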


  • The selling point of the paper is a new fuel storage medium. The positive part is that creating a fuel from existing carbon sources means (hopefully) less petroleum pumped out of the ground to contribute more carbon. The negative is that it leans more toward that than toward permanent sequestration, and I can’t find a net energy figure anywhere, but basic physics tells us the process as a whole will take more energy than the fuel gives back, even if most of it ends up in large-scale storage (rough numbers are sketched at the end of this comment). I doubt that storage happens, because removing carbon, versus putting it into a new form to be used, is like burying money.

    Which leads to something I’ve noticed pop up only in the past month or so: a new term, “carbon capture, utilization, and storage”. CCS has already been very heavily into producing carbon products to support its efforts; after all, they have to make a profit, right? The only real storage being done is a product injected into the ground to help retrieve more oil. Again, they aren’t going to just bury the money; that’s foolhardy for a business.

    Sorry for more negativity in the thread. Just calling a spade a spade. Those who don’t like the feeling that gives can just ignore it and focus on the new science that will save us.
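    To put rough numbers on the “more energy in than out” point, here’s a back-of-envelope sketch in Python, using methanol as a stand-in fuel. The heating value and the ~50% power-to-fuel efficiency are my own ballpark assumptions, not figures from the paper.

    ```python
    # Back-of-envelope sketch: turning captured CO2 back into a fuel has to supply
    # at least the fuel's heating value, divided by the overall process efficiency.
    # All numbers below are rough assumptions (methanol as the example fuel),
    # not values taken from the paper.
    FUEL_LHV_MJ_PER_KG = 19.9   # approx. lower heating value of methanol
    PROCESS_EFFICIENCY = 0.50   # assumed overall power-to-fuel efficiency

    energy_in = FUEL_LHV_MJ_PER_KG / PROCESS_EFFICIENCY  # MJ spent per kg of fuel made
    energy_out = FUEL_LHV_MJ_PER_KG                      # MJ recovered when that kg is burned

    print(f"Energy in:  ~{energy_in:.0f} MJ per kg of fuel")
    print(f"Energy out: ~{energy_out:.0f} MJ per kg of fuel")
    print(f"Round-trip loss: ~{energy_in - energy_out:.0f} MJ per kg")
    ```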


  • Models are geared toward seeking the response a human will rate best, not necessarily the correct answer itself. The first answer is based on the probability of autocompleting from a huge sample of data, and versions with memory adjust later responses to how well the human is accepting the answers. There is no actual processing of the answers, although that may be coming in the latest variations being worked on, where components cycle through hundreds of generated attempts at a problem to try to verify them and pick the best ones. Basically, rather than spit out the first autocomplete answer, it has subprocessing to weed out the junk and narrow in on a hopefully good result (a toy version of that loop is sketched below). Still not AGI, but it’s more useful than the first LLMs.
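    A toy sketch of that sample-and-select idea, with hypothetical generate() and score() functions standing in for the model call and the verifier; the real systems are far more elaborate, this just shows the shape of the loop.

    ```python
    # Toy best-of-N loop: generate many candidate answers, score each with a
    # verifier, keep the highest-scoring one. generate() and score() are
    # hypothetical stand-ins, not any particular vendor's API.
    import random

    def generate(prompt: str) -> str:
        # Placeholder for a sampled LLM completion (temperature > 0 gives variety).
        return f"candidate answer {random.randint(0, 9999)} to: {prompt}"

    def score(prompt: str, answer: str) -> float:
        # Placeholder for a verifier: a reward model, unit tests, a checker, etc.
        return random.random()

    def best_of_n(prompt: str, n: int = 100) -> str:
        candidates = [generate(prompt) for _ in range(n)]
        return max(candidates, key=lambda ans: score(prompt, ans))

    print(best_of_n("What is 17 * 24?", n=100))
    ```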


  • It’s not AGI that’s terrifying, but how willing people are to let anything take over control for them. LLMs are “just” predictive text generation with a lot of extras that make the output come out really convincing sometimes, and yet so many individuals and companies have basically handed over the keys without even second-guessing the answers.

    These past few years have shown that if (and it’s a big if) AGI/ASI comes along, we are so screwed, because we can’t even handle the dumber tools well. LLMs in the hands of willing idiots can be a disaster in themselves, and it’s possible we’re already there.