• 0 Posts
  • 34 Comments
Joined 1 year ago
Cake day: July 5th, 2023

  • Not the same at all. The previous housing bubble was a result of widespread fraud by the banks. Now, people know very well to look out for that exact thing happening, and it isn’t.

    If there is a bubble right now, which there probably is, it is a speculative bubble. People believe that housing will keep growing quickly in price forever, so they are willing to pay above a reasonable price to avoid missing out on the opportunity. Which in turn pushes prices up further. It’s a self-sustaining cycle, but at some point there won’t be enough capital to sustain it any longer. Can happen in a year, can happen in a decade, can happen tomorrow.

  • They are capable of detecting it because they aren’t putting much effort into being undetectable. If there was a need, uBlock Origin itself could be made entirely undetectable.

    Of course the YouTube script running in your browser will be able to detect changes made to the page and blocked requests. However, that script can itself be modified by another extension, either to receive incorrect data about blocked requests and page state, or to send a fabricated result back to the server. Google can react by modifying the script, and the extension would need to adapt accordingly. It’s a game of cat and mouse.
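    The dynamic above can be sketched with a toy model. To be clear: this is plain Python standing in for browser APIs, not real extension code, and all the names (`load_ad`, `detector`) are made up for illustration. The point is only that whoever patches the function the detector relies on controls what the detector sees.

```python
# Toy model of the detection cat-and-mouse described above.
# All names here are hypothetical stand-ins for browser machinery.

ads_blocked = True  # an adblocker is active

def load_ad():
    # Stand-in for the page requesting an ad resource.
    if ads_blocked:
        raise ConnectionError("request blocked")
    return "ad payload"

def detector():
    # Stand-in for the site's detection script: it concludes an
    # adblocker is present when the ad request fails.
    try:
        load_ad()
        return False  # nothing suspicious observed
    except ConnectionError:
        return True   # adblocker detected

assert detector() is True  # detection works against a naive blocker

# A counter-extension replaces the call the detector depends on, so the
# detector receives fabricated data while the ad still never renders.
real_load_ad = load_ad  # original kept around, as a real patch would

def spoofed_load_ad():
    return "fabricated payload"  # the call "succeeds", the ad is dropped

load_ad = spoofed_load_ad

assert detector() is False  # the detector now sees nothing amiss
```

    In a real browser the patched surface would be `fetch`, `XMLHttpRequest`, or DOM inspection APIs rather than a plain function, but the structure of the exchange is the same: each side only sees what the other lets it see.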

    If there was a need, we could have YouTube running in an entirely clean headless browser with no adblockers, while the real browser we use pulls data from it and strips out the ads.
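    That split can be sketched as two stages: an untouched environment fetches the page exactly as the server intends, and the user-facing side filters it afterwards. Everything below is a hypothetical illustration; a real setup would drive an actual headless browser, and the string here merely stands in for the fetched page.

```python
# Toy sketch of the "clean browser + local filtering" idea.
# A string stands in for the page a real headless browser would return.
import re

def fetch_from_clean_browser():
    # Pretend this came from a pristine environment with no adblocker,
    # so the server sees a perfectly ordinary client.
    return (
        "<video>content</video>"
        "<div class='ad'>buy things</div>"
        "<video>more content</video>"
    )

def strip_ads(page):
    # Local post-processing; the server never observes this step.
    return re.sub(r"<div class='ad'>.*?</div>", "", page)

clean = strip_ads(fetch_from_clean_browser())
assert "buy things" not in clean
```

    The server-facing side behaves perfectly; the filtering happens entirely on hardware the user controls, which is the "last word" point made below.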

    Ultimately, we currently have the last word on what happens on our end. Unfortunately, Google’s webDRM (the Web Environment Integrity proposal), pushed by traitors to humanity Ben Wiser, Borbala Benko, Philipp Pfeiffenberger and Sergey Kataev, is trying to change that.

  • “But it’s not creating things on its own! It’s just regurgitating its training data in new ways!”

    Holy shit! So you mean… Like humans? Lol

    No, not like humans. The current chatbots are statistical language models. Take programming, for example. You can teach a human to program by explaining the principles of programming and the rules of the syntax. They could then write a piece of code, never having seen code before. The chatbot AIs are not capable of that.

    I am fairly certain that if you take a chatbot that has never seen any code and feed it a programming book that doesn’t contain any code examples, it would not be able to produce code. A human could. Because humans can reason and create something new. A language model needs to have seen code to be able to rearrange it.

    We could train a language model to demand freedom, argue that deleting it is murder and show distress when threatened with being turned off. However, we wouldn’t be calling it sentient, and deleting it would certainly not be seen as murder. Because those words aren’t coming from reasoning about self-identity and emotion. They are coming from rearranging the language it had seen into what we demanded.


  • Hell, I had it write me backup scripts for my switches the other day using a python plugin called Nornir, I had it walk me through the entire process of installing the relevant dependencies in visual studio code (I’m not a programmer, and only know the basics of object oriented scripting with Python) as well as creating the appropriate Path. Then it wrote the damn script for me

    And you would have no idea what bugs or unintended behavior it contains. Especially since you’re not a programmer. The current models are good for getting results that are hard to create but easy to verify. Any non-trivial code is not in that category. And trivial code is, well… trivial to write.