• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: July 7th, 2023

  • A major caveat I’ve noticed some people misunderstand: it’s corporate CLAs that are problematic. The Apache Foundation also requires contributors to sign a CLA, but it exists as a legal failsafe and as a way to update to, say, an Apache 3.0 license if need be one day. Apache’s non-profit, open source mission aligns with respecting the rights of contributors and the community. Corporations, on the other hand, not so much.



  • If you want vertical tabs with the ability to organize them in trees, I suggest the Sideberry extension. It improves my productivity so much that it legitimately makes me nervous the functionality could ever go away.

    You can bookmark trees, collapse them, search them, and load/unload them manually; I could go on. It makes it easy to organize dozens or hundreds of tabs. I have trees for emails, news, forums, projects, etc. When I’m done, I just fold a tree up: the top tab bar can hide tabs that aren’t in the active tree, so you can still navigate the tabs normally.





  • Compression is actually a fairly well-explored mathematical field, and this isn’t compression. There are theoretical limits on how much you can losslessly compress data, so the information always has to live somewhere, either in the dictionary or in the input. Trained models like these are gigantic, so even if they had perfect recall the ratio still wouldn’t be good. Lossy “compression” is another issue entirely, more of an engineering problem of deciding how much data you can throw away while making acceptable compromises.
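
    To make the limit concrete, here’s a minimal Python sketch (standard-library zlib, with made-up sizes) of why the information always has to live somewhere: redundant data compresses enormously, while random data doesn’t compress at all.

```python
# Minimal demonstration of the lossless compression limit: redundant
# data shrinks dramatically, random (maximum-entropy) data does not.
import os
import zlib

repetitive = b"abcd" * 25_000      # 100 KB of highly redundant bytes
noise = os.urandom(100_000)        # 100 KB of incompressible noise

for label, data in (("repetitive", repetitive), ("random", noise)):
    out = zlib.compress(data, level=9)
    print(f"{label}: {len(data):,} -> {len(out):,} bytes "
          f"(ratio {len(out) / len(data):.3f})")
```

    The random stream typically comes back slightly larger than it went in (header overhead), which is the theoretical floor in action.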


  • This is a classic problem for machine learning systems, sometimes called overfitting or memorization. By analogy, it’s the difference between knowing how to do multiplication and just memorizing the times tables. With enough training data and enough storage, an AI can feign higher “intelligence”, and that is demonstrably what’s going on here. It’s a spectrum as well: in theory, near-identical recall is undesirable, and there are known ways of shifting away from that end of the spectrum (see the toy sketch after the edit below). Literal AI 101 content.

    Edit: I don’t mean to say that machine learning as a technique has problems; I mean that implementations of machine learning can run into these problems. And no, I wouldn’t describe these systems as intelligent any more than a chess algorithm is intelligent. They just have a much broader problem space, and the natural language processing leads us to anthropomorphize them.
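
    As promised, a toy sketch of memorization vs. generalization, in plain numpy with made-up data (an illustration of the principle, not of how these models work internally): a degree-9 polynomial fit through 10 training points recalls them essentially perfectly and falls apart everywhere else.

```python
# Toy overfitting/memorization demo: a model with as many parameters as
# training points gets near-perfect recall on seen data, fails on new data.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, x_train.size)

# Degree 9 means 10 coefficients for 10 points: exact interpolation,
# i.e. the model has "memorized the times tables".
coeffs = np.polyfit(x_train, y_train, deg=9)

x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
print(f"train MSE: {train_mse:.2e}")  # effectively zero: memorized
print(f"test MSE:  {test_mse:.2e}")   # far worse: no generalization
```

    Regularization, early stopping, and more data relative to model capacity are among the standard ways of shifting back toward the generalization end of that spectrum.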




  • Curiosity, back around 2010, before I was a teenager. No clue how I heard about it, but the concept of replacing the entire operating system was fascinating. I figured it must be really good if it was such a well-kept secret.

    A few years later, when I started to learn programming, Linux was the obvious winner. The online course taught C in a Linux environment, and I was amazed that the default Ubuntu build at the time had everything built in, whereas the Windows equivalent required Visual Studio and licensing adventures.

    It really stuck as a daily driver after Windows 7, when a clear trend emerged: Windows got in my way; Linux got out of my way. Simple as.



  • Recently got an Onyx Boox Ultra, and it’s incredible compared to my previous Kobo. Basically, it’s 10″ with stylus input and a keyboard case. The special sauce is that it runs Android, complete with the Google Play Store. The display tech is advanced enough that normal apps, for instance Connect for Lemmy, work fine. I have mine set up with Syncthing, Home Assistant, and Obsidian, and it all just works, mostly. I’d recommend using a third-party launcher and not touching the Onyx account, though.

    I’ve had great experiences with Kobo too, though. I literally went through four models because they kept upping their game. They’re less sketchy than Onyx and are very open: you can load your own books in nearly any format, and you can modify the device since it runs Linux. You can even completely replace the OS.