2024 might be the breakout year for efficient ARM chips in desktop and laptop PCs.

  • Justin@lemmy.jlh.name · +72/−13 · 10 months ago (edited)

    This entire article just to hype up Qualcomm releasing a new CPU? I haven’t seen any evidence to suggest that this new Qualcomm CPU won’t be trash like all the other ones.

    ARM on PC isn’t happening any time soon. ARM chips aren’t more efficient than x86 CPUs at all.

    Here’s a speed comparison between Qualcomm’s and AMD’s best CPUs from last year. Same TDP.

    https://www.cpu-monkey.com/en/compare_cpu-qualcomm_snapdragon_microsoft_sq3-vs-amd_ryzen_7_7840u

    Here’s Jim Keller, the father of both AMD Ryzen and the Apple M1, saying that ARM is not necessarily more efficient than x86:

    https://chipsandcheese.com/2021/07/13/arm-or-x86-isa-doesnt-matter/

    The only reason Apple was able to make a successful ARM CPU is that they control the entire OS and the entire supply chain, and they have super expensive exclusivity contracts with TSMC (because they literally make 50% of all phones in the world).

    AMD’s x86 CPUs are actually faster and more efficient than Apple’s ARM CPUs on the same 5nm process node, but Apple is consistently 2 years ahead when it comes to silicon manufacturing, because of their TSMC deals.

    Qualcomm doesn’t have any of that, and there is no way their CPUs are going to be so much better than AMD’s that people are going to be willing to put up with ISA incompatibilities. Windows on ARM has been a flop.

    Servers are a more reasonable place to see ARM chips, because all the software is open-source and all the major cloud vendors are making their own CPUs.

    Nothing against ARM, or alternative ISAs in general; people just don’t understand that x86 vs. ARM isn’t about power efficiency at all, it’s about supply chains and software compatibility.

    • helenslunch@feddit.nl · +19/−3 · 10 months ago

      > The only reason Apple was able to make a successful ARM CPU is that they control the entire OS and the entire supply chain

      One has to assume similar efforts are being undertaken with Qualcomm, Intel, Google, Microsoft, etc.

      I don’t think anyone thinks slapping an ARM processor in a Windows laptop is going to suddenly make them more efficient.

      • MonkderZweite@feddit.ch · +2/−1 · 10 months ago (edited)

        > I don’t think anyone thinks slapping an ARM processor in a Windows laptop is going to suddenly make them more efficient.

        I’d say most people think exactly that.

    • Lojcs@lemm.ee · +18/−5 · 10 months ago

      > Here’s a speed comparison between Qualcomm’s and AMD’s best CPUs from last year. Same TDP.

      AMD’s chip runs at 28 watts and is built on 4 nm; Qualcomm’s runs at 7 watts and is built on 5 nm. They are not equivalent.

      > AMD’s x86 CPUs are actually faster and more efficient than Apple’s ARM CPUs on the same 5nm process node, but Apple is consistently 2 years ahead when it comes to silicon manufacturing, because of their TSMC deals.

      Comparing the AMD Ryzen 7 Pro 7840U (4 nm, 28 W) with the Apple M2 Pro 10-core (5 nm, 28 W), AMD is 7% faster in single-core and 10% faster in multi-core. It’s unclear how that would look if they were on the same node; it feels like they’d be about the same.
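
      For what it’s worth, the comparable number here is points per watt, ideally from measured package power rather than TDP. A rough sketch of the math, with made-up placeholder scores rather than real benchmark results:

      ```go
      package main

      import "fmt"

      func main() {
          // Hypothetical scores and power draws, for illustration only.
          type chip struct {
              name  string
              score float64 // multi-core benchmark points (made up)
              watts float64 // measured package power under load, not TDP
          }
          for _, c := range []chip{
              {"28 W x86 part", 12000, 28},
              {"7 W ARM part", 4000, 7},
          } {
              fmt.Printf("%s: %.0f points/W\n", c.name, c.score/c.watts)
          }
      }
      ```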

    • corbin@infosec.pub (OP) · +10/−6 · 10 months ago

      The SQ3 was a custom design just for Surface tablets; I’m not sure it’s representative of Qualcomm’s future generally-available hardware. Early benchmarks of the Snapdragon X Elite are much more promising, but TDP and other important details are still missing.

      You’re definitely right that software vertical integration is the missing piece. We’re starting to see a little bit of that in the PC ecosystem (e.g. Windows using the AI core on newer CPUs/SoCs for live camera and mic effects), but more needs to happen there.

      • Justin@lemmy.jlh.name · +4/−2 · 10 months ago

        That’s true. I haven’t looked that closely at QC’s most recent chips; I’m just pointing out that they’re usually slower, hotter, and more expensive.

        It’s good to see competition, but people should manage their expectations. They’re gonna have to be a lot faster and more efficient than the AMD 7840U in order to make running ARM worth it on PC.

        It’ll be a fight, and in 2025 they’ll have to compete with Zen 5, too.

    • mryessir@lemmy.sdf.org · +1 · 10 months ago

      My X13s running Linux, at 250 nits brightness, browsing over WLAN and playing music from the browser via Bluetooth, uses 5–8 W in total.

    • frezik@midwest.social · +1/−1 · 10 months ago

      You shouldn’t trust TDP numbers. They’re most useful for getting a ballpark idea of what size cooler you’ll need for a given chip (and even then, Noctua has their own rating system for matching coolers to chips). AMD, in particular, reinvents their TDP formula regularly and plays with the numbers to get the output they want for comparison purposes.
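
      To see how gameable the formula is: one widely reported version of AMD’s definition is cooler-oriented rather than power-oriented, roughly TDP (W) = (tCase − tAmbient) / θca, where θca is the cooler’s thermal resistance. Change the assumed temperatures or θca and the same silicon gets a different “TDP”. A sketch with illustrative (not official) constants:

      ```go
      package main

      import "fmt"

      func main() {
          // A cooler-oriented TDP definition attributed to AMD:
          //   TDP (W) = (tCase - tAmbient) / thetaCA
          // These constants are illustrative, not official datasheet values.
          tCase := 61.8    // maximum allowed case temperature, deg C
          tAmbient := 42.0 // assumed ambient temperature at the cooler, deg C
          thetaCA := 0.189 // cooler thermal resistance, deg C per watt
          fmt.Printf("TDP = %.0f W\n", (tCase-tAmbient)/thetaCA) // about 105 W
      }
      ```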

      Anyway, I’d be fine if ARM ends up being only on par with x86. It’s still a way out of the insanity of the x86 architecture, and it opens the field to many more companies that can make chips.

  • blazera@kbin.social · +43/−2 · 10 months ago

    The article says a few times that x86 is decades old… but so is ARM? I don’t know what’s supposed to be game-changing about it.

    • frezik@midwest.social · +1/−2 · 10 months ago

      x86 has an incredible amount of cruft built up to support backwards compatibility all the way back to the 8086. ARM isn’t free of cruft, but it’s nowhere near the same level. Most of that isn’t directly visible to customers, though.

      What is visible is that more than three companies can license and manufacture them. The x86 market has one company that owns it, another that licenses it but also owns the 64-bit extensions, and a third that technically exists but is barely worth talking about. x86 is also incredibly difficult to optimize, and the people who know how already work for one of the two main companies (arguably only one at this point). Even if you could legally license it as a fourth player, you couldn’t hire people who could design an x86 core that’s worth a damn.

      Conversely, ARM cores are designed by CS students all the time. That’s the real advantage to end users: far more companies who can produce designs. If one of them fails the way Intel has of late, we’re not stuck with just one other possibility.

    • abhibeckert@lemmy.world · +23/−35 · 10 months ago (edited)

      I’m guessing you’ve never used an ARM Mac.

      They don’t look all that fast on GeekBench (more on that further down), but in real-world usage they are incredibly fast. As in, an entry-level 13" school-homework laptop has performance on par with a high-end gaming PC with a thousand-watt PSU.

      I don’t have a high-end gaming PC to compare, but I do have a mid-range one and I’ve stopped using it… my laptop is so much faster, quieter, and cooler that even though the PC has more games, I just put up with the modest selection (about half the games I own) that runs on a Mac. It’s not just gaming either… I’m also able to compile software perfectly fast, I can run Docker with a dozen containers up at the same time without breaking a sweat (particularly impressive given the Mac version of Docker runs inside a virtual machine instead of directly on the host), and Stable Diffusion generates images in about 20 seconds with typical generation settings.

      The best thing, though, is that I can do all of that on a tiny battery that lasts almost an entire day under heavy load and multiple days under normal load. I’ve calculated that the average power draw with typical use is somewhere around 3 watts for the entire system, including the screen. It’s hard to believe, especially considering how fast it is.
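
      (That average is just battery energy divided by runtime; a back-of-envelope sketch with assumed round numbers, not measurements of my machine:)

      ```go
      package main

      import "fmt"

      func main() {
          // Back-of-envelope average system power: battery energy / runtime.
          // Assumed round numbers, not measurements of any particular machine.
          batteryWh := 50.0 // watt-hours in a typical 13" laptop battery
          runtimeH := 15.0  // hours of mixed use on one charge
          fmt.Printf("average draw = %.1f W\n", batteryWh/runtimeH) // about 3.3 W
      }
      ```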

      As for the modest GeekBench scores Apple’s ARM processors get: it’s critical to understand that GeekBench is designed to test very short bursts and avoid thermal throttling. Intel’s recent i9 processors, with good cooling, will thermally throttle after about 12 seconds, and GeekBench is designed to stay under that by using much shorter bursts. Apple’s processors not only take far longer to thermally throttle, they also “throttle” to barely below full speed.

      Even worse, one of the ways Apple achieves incredible battery life is by not running the processor at high clock rates for short bursts. The CPU starts slow and ramps up to full speed only when you keep it under sustained load. So something quick, like loading a webpage, won’t run at full speed, and therefore GeekBench isn’t running at full speed either.
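
      You can watch the ramp-and-throttle behaviour yourself by logging how much fixed work completes each second under sustained load. A rough, uncalibrated sketch:

      ```go
      package main

      import (
          "fmt"
          "time"
      )

      func main() {
          // Run a fixed chunk of busy work repeatedly and report completions per
          // second. Rising counts early on show clock ramp-up; falling counts
          // later show thermal throttling. A rough sketch, not a real benchmark.
          var sink float64
          deadline := time.Now().Add(60 * time.Second)
          for second := 1; time.Now().Before(deadline); second++ {
              chunks := 0
              end := time.Now().Add(time.Second)
              for time.Now().Before(end) {
                  for i := 0; i < 1_000_000; i++ {
                      sink += float64(i) * 1.0000001
                  }
                  chunks++
              }
              fmt.Printf("second %2d: %d chunks (sink=%g)\n", second, chunks, sink)
          }
      }
      ```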

      A third difference, and probably the biggest one, is that Apple’s processors have very fast memory and massive caches that are faster still. Again, that often doesn’t show up in CPU benchmarks, because they’re measuring compute power rather than memory performance. But real-world software spends a massive amount of time just reading and writing memory, and those operations are fast on Apple’s ARM processors.
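
      The memory side is easy to demonstrate with a pointer-chasing sweep: random accesses over a growing working set slow down at each cache-level boundary. Another rough sketch:

      ```go
      package main

      import (
          "fmt"
          "math/rand"
          "time"
      )

      func main() {
          // Chase random indices through arrays of increasing size. Average access
          // time jumps as the working set falls out of L1, then L2, then L3.
          for _, n := range []int{1 << 12, 1 << 16, 1 << 20, 1 << 24} {
              // Sattolo's algorithm: one random cycle, so the chase visits every slot.
              next := make([]int, n)
              for i := range next {
                  next[i] = i
              }
              for i := n - 1; i > 0; i-- {
                  j := rand.Intn(i)
                  next[i], next[j] = next[j], next[i]
              }
              const steps = 1 << 24
              idx := 0
              start := time.Now()
              for s := 0; s < steps; s++ {
                  idx = next[idx]
              }
              ns := float64(time.Since(start).Nanoseconds()) / float64(steps)
              fmt.Printf("%9d ints: %6.2f ns/access (idx=%d)\n", n, ns, idx)
          }
      }
      ```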

      You really can’t trust benchmarks when you’re comparing completely different processors. You need to try real-world usage, and the real-world difference is game-changing. Trust me: when properly fast ARM processors (not just a laptop running a phone CPU) are available on PCs, everyone will realise Mac users were right; ARM is way better than x86. This isn’t like AMD vs. Intel. It’s more like HDD vs. SSD.

        • abhibeckert@lemmy.world · +8/−13 · 10 months ago (edited)

          The Mac I use is a few years old and available secondhand for under $500. You can get the same CPU/GPU in an iPad which is available, brand new, for $600. I think that’s a reasonable price for a school computer.

      • joshhsoj1902@lemmy.ca · +13/−2 · 10 months ago

        I work on an ARM Mac; it’s fine. If you’re just doing light work on it, it works great! Like any other similarly priced laptop would.

        Under load, or doing work outside what it is tuned for, it doesn’t perform spectacularly.

        It’s a fine laptop, and the battery life is usually great. But as soon as you need to use the x86 translation layer, performance tanks and the battery drains; it’s not a great time.

        Things are getting better, and for a light user it works great, but I’m much more excited about modern x86 laptop processors for the time being.

        • sir_reginald@lemmy.world · +13/−2 · 10 months ago (edited)

          They are getting downvoted because they said MacBooks are “entry-level school laptops”, which I find hilarious.

          MacBooks are a luxury; you pay way more for the same specs (with more battery life, I’ll grant you that).

      • KeenFlame@feddit.nu · +3/−3 · 10 months ago

        All Apple products I’ve bought have had their performance artificially destroyed by firmware

        Not doing that again

        Ever

        Have fun when this computer breaks on purpose so you buy a new one

  • TheGrandNagus@lemmy.world · +42/−2 · 10 months ago (edited)

    This is a Qualcomm marketing piece.

    And no, the most exciting 2024 tech won’t be a CPU with similar or lower performance to other comparable CPUs on the market, with the added benefit of less software compatibility.

  • Eager Eagle@lemmy.world · +18/−2 · 10 months ago

    Can’t wait. I recently bought a firewall that gets noticeably warm at idle, even with a little case that has a heat sink. We need more energy-efficient PCs.

  • geekworking@lemmy.world · +11/−2 · 10 months ago

    One of the hurdles to ARM is that you need to recompile and maintain a separate version of every piece of software for the different processors.

    This is a much easier task for a tightly controlled ecosystem like the Mac than for the Windows ecosystem, with its tons of different suppliers. You can use some sort of emulation to run non-native software, but at the cost of the optimization you were hoping to gain.
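
    (For toolchains that already cross-compile, the rebuild itself is the easy part; the real burden is testing and distributing every variant. Go, for instance, builds the same source for either ISA with nothing but an environment switch; a minimal sketch:)

    ```go
    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        // runtime.GOARCH reports the ISA this binary was compiled for. The same
        // source cross-compiles with nothing but an environment change, e.g.:
        //   GOOS=windows GOARCH=arm64 go build .
        //   GOOS=windows GOARCH=amd64 go build .
        fmt.Printf("built for %s/%s\n", runtime.GOOS, runtime.GOARCH)
    }
    ```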

    Another OS variant also adds a big cost and burden for enterprise customers, who need to manage patches, security, etc.

    I would expect to see more inroads in non-corporate areas following Apple’s success, but not any sort of explosion.

    • originalucifer@moist.catsweat.com · +6/−2 · 10 months ago

      Microsoft has spent the last few years rebuilding their shit to work on ARM. No idea how far they’ve come, but you will absolutely see Windows on ARM in the enterprise.

      • frezik@midwest.social · +2 · 10 months ago

        Apple has the benefit of having done architecture transitions a few times already. Microsoft has been trying to get everyone out of the “Program Files (x86)” directory for over a decade.

        • originalucifer@moist.catsweat.com · +1/−2 · 10 months ago (edited)

          Apple doesn’t have the burden of staying backwards compatible for three decades while running on most commoditized hardware.

          Apple undoubtedly has it easier than a company whose software is actually in use across most of the business world.

          • frezik@midwest.social · +1/−1 · 10 months ago

            Uhh, one reason they don’t have that burden is that they’ve made the switch twice. Even if they didn’t have to deal with any other third party, they still had to convince Adobe, and Adobe doesn’t want to do shit if they don’t have to.

    • qjkxbmwvz@lemmy.sdf.org · +1 · 10 months ago

      On the other hand, a completely open ecosystem works well too: Linux on ARM feels exactly like Linux on x86-64 in my experience. Granted, this is for headless stuff (an RPi and an Orange Pi, both ARM, both running Debian), but really the only difference is the bootloader situation.

  • Hypx@kbin.social · +10/−3 · 10 months ago

    This is just a repeat of the same old pro-RISC myths from decades ago. There is very little performance difference between x86 and any RISC-based CPU, at least as far as the ISA itself is concerned. Apple merely has the advantage of far more resources available for CPU development than their competitors.

    • frezik@midwest.social · +1/−1 · 10 months ago (edited)

      Modern x86 is a CISC outer layer around a RISC inner core. It didn’t hang on this long by ignoring RISC, but by assimilating it. RISC really did change everything, just not in the way everyone thought.

  • bamboo@lemm.ee · +7/−1 · 10 months ago

    It would be fascinating to see Qualcomm, NVIDIA, AMD, MediaTek, and possibly others all competing to build the best ARM SoCs for Windows devices, especially after so many years of Intel stagnating and Apple eating their lunch with their ARM SoCs.

    • akrot@lemmy.world · +4/−3 · 10 months ago

      > competing to build the best ARM SoCs for Windows devices

      You mean desktop, not Windows? Because if anything, Windows is becoming a botnet device. I hope Linux support works out of the box.

      • bamboo@lemm.ee · +2/−1 · 10 months ago

        Windows ARM devices boot with UEFI, so standard ARM UEFI images should work, just like on x86. I would bet drivers will be alright too, since these ARM SoCs will likely be similar to the ones used in Linux SBCs and Android devices.

  • R0cket_M00se@lemmy.world · +13/−11 · 10 months ago

    “The most exciting tech isn’t the thing that currently exists and is being improved and integrated daily, it’s this other thing we don’t even know for sure will maybe happen.”

    • corbin@infosec.pub (OP) · +3/−2 · 10 months ago

      Right, it’s less exciting now because it’s already here. I’m not expecting radically improved GPT models or whatever in 2024, probably just more iteration. The most exciting stuff there might be local AI tech becoming more usable, like we’ve seen with Stable Diffusion.

      • sir_reginald@lemmy.world · +1/−1 · 10 months ago

        I’m just expecting performance optimisations, especially for local LLMs. Right now there are models as good as GPT-4 (Goliath 120B), but they require two RTX 4090s to run.

        The models that require less powerful equipment are not as good, of course.

        But hopefully, given enough time, good-enough models will be able to run on mid-range hardware.
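
        The VRAM wall is mostly parameter count times bytes per parameter (weights only; the KV cache and activations add more on top), which is why quantization is what brings big models toward mid-range hardware. A rough weights-only estimate:

        ```go
        package main

        import "fmt"

        func main() {
            // Weights-only memory estimate: params * bits / 8 bytes. Activations
            // and the KV cache add more on top, so treat these as lower bounds.
            const params = 120e9 // a 120B-parameter model
            for _, bits := range []float64{16, 8, 4} {
                fmt.Printf("%2.0f-bit: about %.0f GB of weights\n", bits, params*bits/8/1e9)
            }
        }
        ```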

    • FaceDeer@kbin.social · +5/−5 · 10 months ago

      “Forget about the possibility that we may finally have developed machines that think, that comprehend the world in a way similar to how humans do and can communicate with us on our level. This new chip design might end up with comparable capabilities to the existing chip design!”

      Yeah, there was no need to try to hype this up as the biggest thing ever.

      • richieadler@lemmy.myserv.one · +4/−5 · 10 months ago

        > Forget about the possibility that we may finally have developed machines that think, that comprehend the world in a way similar to how humans do and can communicate with us on our level.

        That isn’t what’s happening with “AI” right now.

        • FaceDeer@kbin.social · +5/−4 · 10 months ago

          Which is why I said “possibility”; I knew picky people would jump on the comment like this.

        • R0cket_M00se@lemmy.world · +4/−7 · 10 months ago

          You clearly don’t work in a field where it’s cutting swaths through workflows and taking up serious slack.

          You can describe your problem to it in plain English, so it does communicate on our level. It comprehends training data in the same way a human comprehends lived experience, and assimilates the data in the same manner. It’s not truly “reasoning”, but it’s leagues ahead of anything we had even four years ago, and it’s only going to grow from here.

          Commercial ventures are finding new use cases every day, and to people in IT the skepticism is hilarious in the same way that people who thought the Internet was a fad were hilarious.

            • R0cket_M00se@lemmy.world · +3/−6 · 10 months ago

              I literally said “it’s not truly reasoning” to clarify that while it’s drawing on its training data in the same way you draw on your experiences when making new decisions, it can’t really create original thought.

              Once again Lemmy proves reading comprehension is too damn hard.

              • richieadler@lemmy.myserv.one · +1/−5 · 10 months ago

                I thought you were the one saying AI thinks, but it was someone else. Apologies for that.

                OTOH you can take your sarcasm and insert it rectally.

  • 👍Maximum Derek👍@discuss.tchncs.de · +3/−1 · 10 months ago

    Does anyone else worry that the rise of personal computers using super-custom SoCs is going to hurt our ability to build our own machines?

  • smileyhead@discuss.tchncs.de · +2/−1 · 10 months ago (edited)

    Ah yes, let’s welcome the “one device, one operating system” myth to the desktop, with people choosing hardware because of a software feature that could simply be installable. Welcome the expiration date on computers called “years of software support”, and welcome the overall unfriendliness to alternative systems.

    Performance and efficiency are one side of the coin. But let me remind you that Qualcomm (along with Google) is the reason we cannot have lifetime updates for our phones, why ROM builds need to be specific to each model, and why making a phone that runs anything but Android is nearly impossible.

    I’ll take ARM over x86, but I’ll take AMD/Intel over Qualcomm a thousand times over.

  • helenslunch@feddit.nl · +3/−3 · 10 months ago (edited)

    AI is currently limited in application (and by legislation). I think when we start seeing it in document ecosystems like Google Workspace or Microsoft Office, or in operating systems like Windows 12 and Android, that’s when we’ll start seeing what it’s really capable of.

    Also open-source applications that aren’t necessarily limited by laws or corporate optics.

    Things like creating helper bots that aid in troubleshooting, or “assistants” that can draft and send emails, create calendar events, answer questions based on emails, etc.

    But yeah, in its current state it’s mostly just a glorified search engine.

    • melroy@kbin.melroy.org · +4/−1 · 10 months ago

      Oh yeah, 2024 will definitely be the year AI gets integrated into all those products.

  • Chemical Wonka@discuss.tchncs.de · +1/−1 · 10 months ago

    One detail we cannot forget: with the spread of the ARM architecture to PCs and laptops, we will probably see an increase in fully locked-down hardware. We don’t need ARM expanding to PCs if it doesn’t come with hardware and software freedom.