• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: June 13th, 2023



  • Generally, if someone’s being a total asshole so severely that they have to be yeeted with several thousand other unaware bystanders, I expect to see a bunch of examples within the first… 2, maybe 3, links.

    Unless someone can point me to a concise list of examples (actual data), I find it more disturbing that an admin on another server can yeet my account because they make noise on a Discord server. I mean, yes, federating is a feature, but why even offer the ability to enroll users? Maybe for a group of friends or something, but hosting rando users is nothing but a liability to everyone involved.




  • I almost thought I had written your comment and completely forgot about it. No, I just almost made the exact same comment, and I want that hour of my life back.

    If there was some over the top racist rant, I sure didn’t see it. And the admin pushing for the defederation sounds so bizarre. Bizarre is the best word I could come up with because “petty” makes me think it was like high school politics. This is closer to a grade school sandbox argument.

    The worst I saw was “defedfags” and it was used in a way that was meant to highlight how they never said anything offensive. Like saying, “If you thought what I said before was offensive, let’s see how you respond to something intended to be negative.”

    The crazy thing is that the decision is being made because the admin just liked a post. It’s not even about the post’s content, which contains nothing controversial and appeared maybe 8 times in my Lemmy/kbin feed yesterday.

    Editing to add that this is the article: https://kbin.social/search?q=wakeup+call




  • At first glance, I had written JXL off as another attempt at JPEG2000 by a few bitter devs, so I ignored it.

    Yeah, my examples/description were more intended to be conceptual for folks who may not have dealt with the nitty-gritty. Just mental exercises. I’ve only done a small bit of image analysis, so I have a general understanding of what’s possible, but I’m sure there are folks here (like you) that can waaay outclass me on details.

    These intermediate-to-deep dives are very interesting. Not usually my cup of tea, but this does seem big. Thanks for the info.


  • (fair warning - I go a little overboard on the examples. Sorry for the length.)

    No idea on the details, but apparently it’s more efficient for multithreaded reading/writing.

    I’d guess you could have a few threads reading the file data into memory at once: while one CPU core reads the first 50% of the file, a second can be reading the second 50% (though I’m sure it’s not actually like that, but as a general example). Image compression usually works by some form of averaging over an area, so figuring out ways to chop the area up such that those patches can load cleanly, without data from the adjoining patches, is probably tricky.
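A rough sketch of that split-the-file-between-workers idea (pure illustration; the function names are made up and this isn’t how any real decoder schedules its reads):

```python
import concurrent.futures
import os

def read_chunk(path, offset, length):
    # Each worker opens its own handle and reads an independent byte
    # range, so the halves can be pulled into memory simultaneously.
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

def parallel_read(path, workers=2):
    size = os.path.getsize(path)
    chunk = (size + workers - 1) // workers  # ceil(size / workers)
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(read_chunk, path, i * chunk, chunk)
                   for i in range(workers)]
        # Reassemble in order; each worker read a disjoint slice.
        return b"".join(f.result() for f in futures)
```

This only helps if the format guarantees that each byte range can be decoded without peeking into its neighbors, which is exactly the tricky part described above.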

    I found this semi-visual explanation with a quick Google search. The image in section 3.4 is kinda what I’m talking about. In the end you need equally sized pixels, but during compression you’re kinda stretching out the values and/or the mapping of values to pixels.

    Not an actual example, but highlights some of the problems when trying to do simultaneous operations…

    Instead of pixels 1, 2, 3, 4 being colors 1.1, 1.2, 1.3, 1.4, you apply a function that assigns the colors 1.1, 1.25, 1.25, 1.4. Now you only need to store the values 1.1, 1.25, 1.4 (along with location): a 25% reduction in color data.

    If you wanted to cut that sequence in half for 2 CPUs with separate memory blocks to read at once, you’d lose some of that optimization. Now CPU1 and CPU2 both need color 1.25, so it’s duplicated. Not a big deal in this example, but these bundles of values can span many pixels and intersect with other bundles (like color channels - blue can be most efficiently read in 3-pixel-wide chunks, green in 2-pixel-wide chunks, and red in 10-pixel-wide chunks).

    Now where do you chop those pixels up for the two CPUs? Well, we can use our “average the 2 middle values in 4-pixel blocks” approach, but we’re leaving a lot of performance on the table with empty or useless values. So, we can treat each of those basic color values as independent layers.
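That averaging idea as a toy snippet (made-up numbers and names, nothing like what any real codec actually stores):

```python
def compress_block(pixels):
    # Replace the two middle values of a 4-pixel block with their
    # average, so the block stores 3 values instead of 4.
    first, a, b, last = pixels
    return (first, (a + b) / 2, last)

def decompress_block(stored):
    first, mid, last = stored
    # Both middle pixels come back as the shared average: lossy,
    # but 25% less color data to store.
    return (first, mid, mid, last)

stored = compress_block((1.1, 1.2, 1.3, 1.4))  # roughly (1.1, 1.25, 1.4)
restored = decompress_block(stored)
```

The duplication problem shows up as soon as you split `stored` between two readers: both halves of the block need that shared middle value.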

    But, now that we don’t care how they line up, how do we display a partially downloaded image? The easiest way is to not show anything until the full image is loaded. Nothing nothing nothing Tada!

    Or we can say we’ll wait at the end of every horizontal line for the values to fill in, display that line, then start processing the next. This is the old waiting for the picture to slowly load in 1 line at a time cliche. Makes sense from a human interpretation perspective.

    But, what if we take 2D chunks and progressively fill in sub-chunks? If every pixel is a different color, it doesn’t help, but what about a landscape photo?

    First values in the file: Top half is blue, bottom green. 2 operations and you can display that. The next values divide the halves in half each. If it’s a perfect blue sky (ignoring the horizon line), you’re done and the user can see the result immediately. The bottom half will have its values refined as more data is read, and after a few cycles the user will be able to see that there’s a (currently pixelated) stream right up the middle and some brownish plant on the right, etc. That’s the image loading in blurry and appearing to focus in cliche.

    All that is to say, if we can do that 2D chunk method for an 8k image, maybe we don’t need to wait until the full 8k resolution is loaded if we need smaller images for a set. Maybe we can stop reading the file once we have a 1024x1024 pixel grid. We can have 1 high-res image of a stoplight, but treat it as any resolution less than the native high res, thanks to the progressive loading.
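The coarse-to-fine idea above can be sketched with simple mipmap-style averaging (this is an illustration of the concept, not JXL’s actual scheme — a tiny 4x4 grid stands in for the “8k image”):

```python
def downsample(img):
    # Average each 2x2 block into one pixel: one level coarser.
    n = len(img)
    return [[(img[2*r][2*c] + img[2*r][2*c+1] +
              img[2*r+1][2*c] + img[2*r+1][2*c+1]) / 4
             for c in range(n // 2)]
            for r in range(n // 2)]

def progressive_levels(img):
    # Store coarsest-first: a decoder can stop at any level and show a
    # blurry-but-complete picture instead of a half-drawn one.
    levels = [img]
    while len(levels[-1]) > 1:
        levels.append(downsample(levels[-1]))
    return levels[::-1]  # 1x1 preview first, full resolution last

# "Sky over grass": top half bluish-bright, bottom half darker.
img = [[0, 0, 255, 255],
       [0, 0, 255, 255],
       [10, 10, 250, 250],
       [10, 10, 250, 250]]
levels = progressive_levels(img)
thumbnail = levels[1]  # stop reading here if 2x2 is all we need
```

Stopping at an intermediate level is exactly the “treat one high-res file as any lower resolution” trick: the early bytes of the file already describe a complete, lower-resolution image.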

    So, like I said, this is a general example of the types of conditions and compromises. In reality, almost no one deals with the files on this level. A few smart folks write libraries to handle the basic functions and everyone else just calls those libraries in their paint, or whatever, program.

    Oh, that was long. Um, sorry? haha. Hope that made sense!


  • Oh, I’ve just been toying around with Stable Diffusion and some general ML tidbits. I was just thinking from a practical point of view. From what I read, it sounds like the files are smaller at the same quality, require the same or less processor load (maybe), are tuned for parallel I/O, can be encoded and decoded faster (and there being less difference in performance between the two), and supports progressive loading. I’m kinda waiting for the catch, but haven’t seen any major downsides, besides less optimal performance for very low resolution images.

    I don’t know how they ingest the image data, but I would assume they’d be constantly building sets, rather than keeping lots of subsets, if just for the space savings of de-duplication.

    (I kinda ramble below, but you’ll get the idea.)

    Mixing and matching the speed/efficiency and storage improvement could mean a whole bunch of improvements. I/O is always an annoyance in any large set analysis. With JPEG XL, there’s less storage needed (duh), more images in RAM at once, faster transfer to and from disc, fewer cycles wasted on waiting for I/O in general, the ability to store more intermediate datasets and more descriptive models, easier to archive the raw photo sets (which might be a big deal with all the legal issues popping up), etc. You want to cram a lot of data into memory, since the GPU will be performing lots of operations in parallel. Accessing the I/O bus must be one of the larger time sinks and CPU load becomes a concern just for moving data around.

    I also wonder if the support for progressive loading might be useful for more efficient, low resolution variants of high resolution models. Just store one set of high res images and load them in progressive steps to make smaller data sets. Like, say you have a bunch of 8k images, but you only want to make a website banner based on the model from those 8k res images. I wonder if it’s possible to use the progressive loading support to halt reading in the images at 1k. Lower resolution = less model data = smaller datasets to store or transfer. Basically skipping the downsampling.

    Any time I see a big feature jump, like better file size, I assume the trade off in another feature negates at least half the benefit. It’s pretty rare, from what I’ve seen, to have improvements on all fronts.



  • True. They created their own problem by trying to one-up each other’s lumens claims over and over, to the point where decent flashlights are claimed to have 5.6 million lumens and include 25000mAh 18650s.

    Most of the $5+ flashlights are probably fine for most people’s needs. I have several and they’ve been fine for me. Different models, similar modes, similar brightness, and all fine for walking the dog or if the power goes out. Now, if I were relying on them for survival, I might think twice. All have held up fine, including the 12 year old one from dealextreme (pre-alibaba). But since I don’t know whether the people asking for recommendations need accurate specs, I’m hesitant to recommend them to random people on the internet.

    (I had to check, just for fun, and there are 18650 batteries listed as 19900mAh. Pretty impressive, since Panasonic tops out at 3500-3600mAh.)


  • Yeah, it really caught me off guard the first time I used the site. It was during one of those special celebration discount days where they had the audacity to mark items as literally $0.01 when basically nothing was that price.

    For 3D printer filament, which is usually bought in 1kg/2.2lb spools, most places list a 2m sample or a 250g spool to game the search. And my other favorite is the whack-a-mole shipping setup, where one variation might have free shipping, but choose a different color and the shipping jumps to $300+.

    With Amazon, I’m seeing a ton more overpriced items discounted to still higher priced than their competition. If you look at their deals pages, you can find things like portable monitors for $70 (down from $150), but checking that category shows the same monitor (same specs under a different name) for $60.

    Here’s as close as I can find right now, since all the lightning deals are ending for the day. There’s a USB laptop docking station that’s “discounted” from $139 to $70. There isn’t an exact match (there usually is), but similar products go for ~$60-$70 (2 HDMI, 4+ USB3 ports, 100W PD, ethernet). What’s funnier is that the specific company’s Amazon site has at least 4 identical docks at slightly different prices.


  • I just tried it again on desktop and it worked, but the reason was that I downloaded an extension a while ago and forgot about it. When I disabled the extension, it stopped working.

    There used to be a way to enable installing any extension on mobile FFx Dev, but I’m not sure if that still works. The desktop extension just changes the user agent string, so that might be another route to enabling it.



  • I use AliExpress for electrical parts (except anything with memory), 3D printer parts, and small crap I don’t mind waiting for, but never anything I would be angry about if it never arrived. Also, nothing I consume or wear or need for safety, and I’m wary of anything that’s supposed to be plugged into the wall for long periods of time unattended.

    I wouldn’t say I’ve been surprised, but my expectations are low. It’s all cheap stuff, but as long as you’re not depending on the stuff you buy, it’s fine. Dollar store quality with the scent of plastic and cigarettes.

    That being said, beware of scams. The one that seems acceptable to them is to list one cheap part for the listing, along with variations of the full device. That way it looks like the lowest price in search results, but when you click it, the selected variation is the cheap part. Like, you’ll search for “pliers set” and see a listing for $1, compared to others around $15. When you select it, the product page will have a carrying case for $1 and the various pliers for twice as much as the competition. What’s better is that the case will be selected automatically, not the thing in the picture you clicked on or the picture you see first in the product page’s gallery.

    There are also scam stores that pop up with super low prices compared to others on the site, then disappear overnight, and the cancellation/refund process is a super pain. Contact customer service once and just submit a claim with your CC company. Their refund process will keep telling you to wait another week, and that includes the reps you get on chat. If you’re suspicious and still order, always follow the shipping info. They will estimate a reasonable delivery date and you’ll get a shipping notification, but the package will sit in limbo. The shipping folks are separate from the scammers, so if you see the package actually move toward a shipping center, you’re in the clear. If it says they received shipping information for over a week, you got screwed.

    Ignore flash drives/SSDs, batteries, and assume any flashlights are 1/100th the brightness claimed (literally). Oh, and watch shipping costs. Something with free shipping can be 10x the price of the product if you add a second one to your cart.


  • I’ve tried to warn people about them. I got a 10 pack early on while learning and it almost made me give up the hobby. Classic n00b mistakes? Some, but after I set that filament aside in a drybox, I had almost no problems. The only mistakes I made with those other brands were due to strategies I developed to rescue prints from IIIDMax’s garbage. I must have used 10-20 other brands over the next year, revisiting the cursed spools occasionally.

    I thought I could relegate the leftovers to my 3D pen. Somehow that satan-spawned plastic jammed it up. The pen is basically a soldering iron, a motor, and 2 gears. I’ve fed strips of PETG bottles cut by hand through it. The filament wasn’t precise enough for my no precision 3D pen.


  • I’m generally a Windows user, but on the verge of doing a trial run of Fedora Silverblue (just need to find the time). It sounds like a great solution to my… complicated… history with Linux.

    I’ve installed Linux dozens of times going back to the 90s (LinuxPPC anyone? Yellow Dog?), and I keep going back to Windows because I tweak everything until it breaks. Then I have no idea how I got to that point, but no time to troubleshoot. Easily being able to get back to a stable system that isn’t a fresh install sounds great.