Real, good-quality, factory-made discs, maybe. Anything else (from bad-quality factory stuff to writable discs), not so much. And backups were not done on factory-pressed discs.
The same promises we got with CDs back then.
Any of these large businesses coming out and explicitly saying “this will not happen” is concerning.
Yes, I did. And yes, it is possible. It’s terribly slow in comparison, making it less useful. It very quickly devolves into random mumbling or gets stuck in weird loops. It also hogs resources that are actually needed by other tasks you may be doing.
I mainly test dev AI solutions, and moving from 1B to 7B models made them vastly more pertinent. Moving from a CPU implementation (Ryzen 7 3700X) to a GPU (RTX 3080 Ti) made them fast enough to be used for quick completion and immediate suggestions without breaking my workflow, in addition to freeing resources for the IDE, build tools, and the actual software being run. Running on the CPU had multi-second delays, which made this use case completely useless.
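To put numbers on why CPU inference breaks this workflow: completion latency is roughly suggestion length divided by generation throughput. The throughput figures below are assumptions for illustration, not measurements from my setup.

```python
# Toy latency estimate for inline code completion.
# The tokens-per-second numbers are assumptions, not benchmarks.
def completion_latency(tokens: int, tokens_per_second: float) -> float:
    """Seconds to generate a suggestion of `tokens` tokens."""
    return tokens / tokens_per_second

SUGGESTION_TOKENS = 30   # a short inline completion
CPU_TPS = 5.0            # assumed CPU throughput for a 7B model
GPU_TPS = 60.0           # assumed GPU throughput for the same model

print(f"CPU: {completion_latency(SUGGESTION_TOKENS, CPU_TPS):.1f}s")  # multi-second: unusable inline
print(f"GPU: {completion_latency(SUGGESTION_TOKENS, GPU_TPS):.1f}s")  # sub-second: fine
```

Anything past about a second and the suggestion arrives after you’ve already typed the line yourself.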
I’m still waiting for them to make it an optional extension… oh wait.
Decent models are huge; an average one requires 8 GB to be kept in memory (better models require something like 40 to 70 GB), and most currently available engines are extremely slow on a CPU and require dedicated hardware (even a relatively powerful GPU needs a few seconds of “thinking” time). It is unlikely that these requirements can easily be squeezed into current computers; more likely, dedicated hardware will be required.
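Where those gigabyte figures come from: weight memory is roughly parameter count times bytes per parameter at whatever precision the model is stored in. This ignores KV cache and activation overhead, so these are back-of-the-envelope numbers, not exact requirements.

```python
# Rough memory footprint of a model's weights: params x bytes per param.
# Runtime overhead (KV cache, activations) is deliberately ignored.
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * bytes_per_param  # billions of params x bytes each = GB

for params, precision, bpp in [(7, "fp16", 2.0), (7, "4-bit", 0.5), (70, "fp16", 2.0)]:
    print(f"{params}B @ {precision}: ~{weight_memory_gb(params, bpp):.1f} GB")
```

A quantized 7B model fits in consumer VRAM; a 70B model at full precision does not fit in any single consumer GPU, which is why “dedicated hardware” keeps coming up.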
Not everyone has to check something. But there are people who do routinely check popular stuff, either on their own or as part of their job. Sometimes this raises issues, which are usually handled appropriately. Of course, if you download a little-known piece of software made by a single person and never advertised anywhere, you’ll have to do the job yourself. But anything semi-popular attracts enough attention to get some level of audit, at least because businesses use a lot of open source. There are even businesses whose main product is auditing and developing open source, kind of like bounty hunters.
And of course there are counter-examples, too. TrueCrypt got pulled out quite dramatically, and I’m not sure we know why even now. But the more sensitive the stuff, the higher the chance of it getting some level of investigation.
Like you said, the issue is verification by the end user. It is trivial to provide a digitally signed (and timestamped) file. It is also trivial to provide trusted tools to verify these files. It is immensely difficult to provide a solution users will care about; which is why, more often than not, the most common request companies in the data-authenticity business get is “can we show a green check on screen? That would be perfect!”.
And we end up with something that nobody checks beyond the “it’s probably ok” stage. If the goal is to teach the masses about trusting their sources, either they have a miracle solution, or it just won’t work. (And all that is assuming people actually care about checking the authenticity of the stuff they see, which is not the norm as it is…)
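For reference, here is the weakest form of that verification: comparing a published SHA-256 digest against a downloaded file. Real signing goes further (a public-key signature over this digest, plus a timestamp), but even this minimal step is more than most users ever do. The file path and expected digest are placeholders for illustration.

```python
# Minimal file verification: compare a file's SHA-256 digest against a
# published value. This only proves integrity, not who published the digest;
# a digital signature over the digest is what adds authenticity.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, expected_hex: str) -> bool:
    return sha256_of(path) == expected_hex.lower()
```

The tooling is the easy part; getting anyone to paste the expected digest in is the part nobody has solved.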
With Edge, MS decided to re-implement some stuff through their own library. I don’t have an exhaustive list, but one particular thing is that for a while, SubtleCrypto (used for various operations within JavaScript) was present, but some mandatory algorithms were not available in Edge while they worked fine everywhere else (and maybe even in the non-Chromium-based Edge, which I don’t remember testing).
So, yes, there are differences beyond the integration of MS services. They are unlikely to matter to most people, but for devs it does reintroduce some weird quirks, as MS tends to do.
You may want to try BG3 on Linux, too. You might even get better performance at this point.
Well, it would not be fun if people suddenly voted against themselves just to do the right thing for everyone.
You have the correct idea, but it’s way too late for most people. This “pre-selection” made by most services has been in place for a long while, and these days people even complain when they are not fed it.
It is a sad state of affairs; thankfully, at some point enough people might move away from these automated suggestions, but I’m not holding my breath.
I doubt webtoon is built on wordpress :D
The number of sites that still support RSS is impressive when you think about how niche it is right now. I was surprised when I saw some big comics sites had it.
If there were a reliable framework for that in use by most applications, it’s fairly safe to say it would still have exceptions for the OS’s own apps, “to improve the user experience”.
Yeah, good luck with that. You can tell someone “if you lose this token, all data are unrecoverable”, they’ll reply with “ok, got it!” and about two and a half seconds later call you saying “Hey, I lost my token, can you recover my data?”.
Because of the “more or less” part of your post. Oversimplifying things is nice for a quick explanation, but physics doesn’t care about your simplified model once you get up there: gravity isn’t completely uniform, random space stuff sends you slightly off your path, and your target moves around your planet in a mostly (but not 100%) predictable way.
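A toy way to see this: integrate the same orbit twice, once with ideal point-mass gravity and once with a tiny extra acceleration standing in for all the stuff the simplified model ignores, and watch the two trajectories drift apart. Units and the perturbation size are arbitrary assumptions; this is a sketch, not a mission model.

```python
# Toy 2D orbit: ideal inverse-square gravity vs the same orbit with a tiny
# constant extra acceleration. The separation that builds up is the gap
# between the simplified model and "reality". All units are assumed.
import math

MU = 1.0        # gravitational parameter (assumed units)
DT = 1e-3       # integration time step
STEPS = 20_000
PERTURB = 1e-5  # small unmodeled acceleration along x

def step(x, y, vx, vy, extra_ax=0.0):
    """One Euler step under point-mass gravity plus an optional perturbation."""
    r3 = (x * x + y * y) ** 1.5
    ax = -MU * x / r3 + extra_ax
    ay = -MU * y / r3
    return x + vx * DT, y + vy * DT, vx + ax * DT, vy + ay * DT

def drift():
    ideal = real = (1.0, 0.0, 0.0, 1.0)  # circular orbit: r = 1, v = 1
    for _ in range(STEPS):
        ideal = step(*ideal)
        real = step(*real, extra_ax=PERTURB)
    return math.dist(ideal[:2], real[:2])

print(f"separation after {STEPS} steps: {drift():.6f}")
```

Even a perturbation five orders of magnitude below the main force produces a measurable drift after a while, which is why real guidance keeps correcting instead of trusting the textbook ellipse.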
The point is, donations barely cover the “salary” of its president (seven-something million dollars), and funds allocated to development dwindle each year. Which is plainly stated in their yearly reports. The Google money is a large part of what makes it possible to do anything other than pay the board; donations are the cherry on the cake at this point.
There is a difference between forks made by other people to tweak a project/do something specific for it, and the base project’s dev team moving away from whatever it became.
I’m usually not in favor of such forks because the reason for moving away is sometimes dubious; some projects just rename themselves to “start fresh and drop legacy compatibility issues”. But in the case of Firefox, Mozilla is the thing holding back features while adding bloat. Since it can’t change to a saner structure with more long-term sustainability plans, devs/engineers could move into a fork to no longer be bound to that.
Of course it’s not that easy; for all the bad Mozilla (foundation or otherwise; I don’t care much that they are two entities at this point, since one is owned by the other) is doing to the actual software, they do provide salaries. At least, for now.
There are way less extreme examples of doctors just fucking things up for a bag of money.