Even half an hour next to the PA without special ear plugs is enough to permanently harm your hearing.
What you’re describing used to be true under X11, but under Wayland the compositor handles all rendering itself. For GNOME that’s Mutter, which is also maintained by the GNOME project.
Seriously, stop being an asshole. Coil whine is a well-documented behaviour that creates a loud, high-pitched noise.
As coil whine sits at the very upper limit of human hearing, it doesn’t take much hearing loss before you can’t perceive it anymore. So you’re likely too old, or went to too many concerts, to be able to hear it.
Good ears? The question is when, not where, and the answer is half a lifetime ago.
It’s just like those shitty recipe sites that tell you their grandma’s life story for hours before giving the recipe. Get to the point, who cares about the anecdotes of some writer?
I don’t want to connect with everyone, always, everywhere. It’s just like small talk, which may be acceptable or even essential in some cultures, while being considered rude and wasteful where I’m from.
Don’t SteamVR tools work on Linux as well? Not that it’d help in your situation, where you’re stuck with proprietary GPU drivers and proprietary VR tools.
Why so? AMD supports Wayland just fine while offering good enough performance. As a VR dev, AMD still including a USB-C port on its GPUs should actually be even more convenient for you.
So how do you juggle having to see dozens of windows at the same time then?
I’m a software dev as well.
But I often layer multiple windows in the same tile of the screen. e.g. I may have the IDE with the software I’m working on in one tile, the IDE with the library source code I’m working with in the second tile, and a live build of the app in the third tile. But I’ve also got documentation, as a website, in the same tile as the IDE with the lib’s source.
Now when I switch between the IDE with the lib’s source and the browser with the lib’s documentation, I only want that tile to change. No problem: with KDE’s taskbar and window switcher I can do that quickly.
But when using the applications menu on GNOME, I get a disruptive UI across all screens that immediately rips me out of whatever I was doing.
Why’d you have to use TC? KDE’s Dolphin can do all that natively.
Personally, I found configuring KDE much simpler and more robust than the dozen addons I needed for GNOME, which also broke every now and then after updates.
I tried that, but IMO it’s much simpler and more robust to just configure KDE than to install a dozen GNOME extensions that end up broken after updates anyway.
Unless you’re writing Ruby on Rails on a 13" MacBook, you’ll run into GNOME’s limitations when working.
GNOME is in many ways so focused that it makes a lot of productivity use impossible. You always have to open the menu to launch software, you’ve got no system tray, and worst of all, GNOME apps are so simplified that you constantly run into their limitations when using them productively.
When working with dozens of windows open at the same time across multiple monitors, I’m a fan of KDE. And KDE apps tend to also have all the extra features I need to handle weird situations, files, and edge cases.
The 50€ Patreon tier perks include “everything ad-free”. And there’s no repo or source available anywhere.
WTF
NIF can’t realistically ever reach Q>1. All the claims of having reached it only count the energy that reaches the capsule. The energy the lasers actually draw is orders of magnitude larger.
This theoretical Q>1, where the plasma emits more radiation than it receives, has been reached by other reactors before.
But while tokamak or stellarator designs need a 2-3× improvement to produce more energy than the entire system consumes, the NIF would need a 100-1000× improvement to reach that point, which is wholly unrealistic with our current understanding of physics.
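To make the gap concrete, here’s a rough back-of-the-envelope comparison of the “scientific” gain (energy out vs. energy delivered to the capsule) and the whole-system “engineering” gain. The numbers are approximate public figures for NIF’s December 2022 shot, not official data:

```python
# Rough "scientific" vs "engineering" gain for NIF's Dec 2022 shot.
# All numbers are approximate public figures, used only for illustration.
laser_energy_on_target_mj = 2.05   # laser energy delivered to the capsule
fusion_yield_mj = 3.15             # fusion energy released by the capsule
wall_plug_energy_mj = 300.0        # rough electrical energy drawn by the laser system

# The headline "Q > 1" only compares yield against energy on target.
scientific_q = fusion_yield_mj / laser_energy_on_target_mj

# The whole-system gain divides by what the lasers actually consumed.
engineering_q = fusion_yield_mj / wall_plug_energy_mj

print(f"scientific Q  = {scientific_q:.2f}")   # above 1
print(f"engineering Q = {engineering_q:.3f}")  # around a hundredth of 1
```

Even granting these rough inputs, the system as a whole returns on the order of 1% of the electricity it consumes, which is where the 100-1000× figure above comes from.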
Most fusion attempts try to keep a continuous reaction ongoing.
Tokamak reactors like JET or ITER do this through a changing magnetic field, which would allow a reaction to keep going for minutes; the goal is somewhere around 10-30 min.
Stellarator reactors try to do the same through a closed loop, basically a Möbius band of plasma encircled by magnets. The stellarator topology of Wendelstein 7-X was used as VFX for the closed time loop in Endgame. This complex topology allows the reaction to continue indefinitely. Wendelstein 7-X has managed to keep its reaction going for half an hour already.
The NIF is different. It doesn’t try to create a long, ongoing, controlled reaction. It tries to create a nuclear chain reaction for a tiny fraction of a millisecond. Basically a fusion bomb the size of a grain of rice.
The “promise” is that if one were to just repeat this explosion again and again and again, you’d have something that would almost continuously produce energy.
But so far, the NIF has primarily focused on gathering as much data as possible about how the first millisecond of a fusion reaction proceeds: the different ways to trigger it, and how each affects the reaction.
The US hasn’t done large-scale nuclear testing in decades. Almost everything now happens in simulations. But the first few milliseconds of the ignition are still impossible to model accurately in a computer. To build a more reliable and stronger bomb, one would need to test the initial part of a fusion reaction in the real world repeatedly.
And that’s where the NIF comes in.
If you actually calculate the maximum speed at which information can travel before causing paradoxes, in some situations it could safely exceed c.
For two observers who are not in motion relative to each other, information could be transmitted instantly, regardless of the distance, without causing a paradox.
The faster the observers are traveling relative to each other, the slower information would have to travel to avoid causing paradoxes.
More interestingly, this maximum paradox-free speed correlates with the time and space dilation caused by the observers’ motion.
From your own reference frame, another person is moving at a speed of v*c. The maximum speed at which you could send a message to that observer, without causing a paradox, looks something like c/sqrt(v) (very simplified).
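For reference, the textbook version of this bound follows directly from the Lorentz transformation. A signal sent at speed $u$ in frame $S$ covers $\Delta x = u\,\Delta t$, and in a frame moving at relative speed $v$ the interval between emission and reception becomes:

```latex
\Delta t' = \gamma\!\left(\Delta t - \frac{v\,\Delta x}{c^2}\right)
          = \gamma\,\Delta t\left(1 - \frac{u v}{c^2}\right)
```

The reception precedes the emission in the moving frame (the raw material for a paradox) exactly when $1 - uv/c^2 < 0$, i.e. when $u > c^2/v$. This matches the qualitative behaviour described above: as $v \to 0$ the bound $c^2/v$ diverges, so comoving observers could in principle signal instantaneously without contradiction, and as $v \to c$ the bound drops toward $c$. The exact functional form differs from the simplified $c/\sqrt{v}$ above, but the monotonic trade-off is the same.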
Interesting: from what I can find online, even though the format is unique to the Vita, it’s still just the Memory Stick PRO Duo protocol under the hood, with a DRM system similar to the one Sony uses for their modern CFexpress Type A cards.
Sure, it’d be a solution for five minutes, until someone delids the secure enclave on the gaming card, extracts the keys, and builds their own open-source hardware alternative.
High-performance FPGAs are actually relatively cheap if you take apart broken Elgato/Blackmagic capture cards; they’re just a pain in the butt to reball and solder. But it’s possibly the cheapest way to be able to emulate any chip you could want.
That’s definitely wrong. You should follow Danielle’s Mastodon, she’s working on elementary all the time.