• Cocodapuf@lemmy.world · 12 days ago

      Ok, should I know who Framework is? I’ve been a PC gamer since forever and I’ve never heard of this company.

      • umbrella@lemmy.ml · 11 days ago

        It’s modular like a desktop PC: you can fix it and upgrade it piecemeal instead of junking it, also like a desktop. If you’re a gamer, you don’t end up in that common situation where the CPU still holds up but the GPU is already oooolddd, which usually happens on laptops.

        • Thinker@lemmy.world · 12 days ago

          repairable and upgradable*

          I know it’s an absolutely banal nitpick, but I think it’s unfortunately a revelation in the current laptop market that ~90% of a laptop stays good for a really really long time, and the other 10% can be upgraded piecemeal as the need arises. Obviously this was never news to the Desktop world, but laptop manufacturers got away with claiming this was impossible for laptops in the name of efficiency and portability.

    • Pizza@lemmynsfw.com · 13 days ago

      I wasn’t prepared. I’ve been eyeing a mini for a while, and this thing kills it on value compared to what I would get at a similar price point.

        • Pizza@lemmynsfw.com · 10 days ago

          Mac mini and Studio. The overall power comparison remains to be seen, but on cost-to-spec ratio I would have had to spend over $6k and still couldn’t have gotten this much memory; the Framework was around $3,200.

  • rockSlayer@lemmy.world · 13 days ago

    Lmao, the news about this desktop is strangling their website to the point of needing a 45-minute waiting list.

    • Echo Dot@feddit.uk · 12 days ago

      I visited their website literally within about 10 minutes of them announcing the product and I had to wait 8 minutes to get in.

      If Framework has, or had, one problem, it was that the main appeal of their products was the repairability; the products themselves were only okay in terms of specs. Well, now they have really decent specs as well.

      I could absolutely see schools wanting to deploy these to their students.

    • Pizza@lemmynsfw.com · 13 days ago

      Guilty. This thing came out at the perfect time. I was considering building my own or getting a Mac mini, but this has 95% of what I’m looking for, for less than a spec-compromised Mac mini. So I preordered. And I kept hitting refresh lol.

      • Echo Dot@feddit.uk · 12 days ago

        I always think the Mac mini is just a bit too mini. It’s a desktop, so it’s not exactly going to be moved around a lot and doesn’t need to be quite that tiny. This thing is a good compromise: still small, but not so small that it offers no upgradability.

        And I know Apple says otherwise but surely that thing must get thermally throttled at some point.

      • Liz@midwest.social · 13 days ago

        Yeah that touchscreen tablet convertible machine is what has me psyched. I’m not the target for it, and already own a 16, but I could see that thing selling well. I honestly think they came out with the desktop because they just kinda felt they needed a desktop.

        • Evrala@lemmy.world · 12 days ago

          I have a 16 and a 13. I thought I’d give away the 13 when I got the 16, but I keep using the 13 as well because of how portable it is. Lot nicer to lounge about with the 13 than the 16.

          I might get the 12 to replace my 13 and use it for drawing practice and web browsing. Performance-wise it’d be a downgrade from my 1280P, but I don’t really need the performance.

  • ObsidianZed@lemmy.world · 13 days ago

    Much like their laptops, I’m all for the idea, but what makes this desirable to those of us with no interest in AI?

    I’m out of that loop, though I get that AI is typically graphics-processing heavy. Can this be taken advantage of for other things, like video rendering?

    I just don’t know exactly what an AI CPU such as the Ryzen AI Max offers over a non-AI equivalent processor.

    • Appoxo@lemmy.dbzer0.com · 12 days ago

      what makes this desirable to those of us with no interest in AI?

      Just maybe not all products need to be for everyone.
      Sometimes it’s fine if a product fits your label of “Not for me”.

    • unexposedhazard@discuss.tchncs.de · 13 days ago

      Much like their laptops

      It’s nothing like their laptops, that’s the issue :/ Soldered-in stuff all around, and nonstandard parts that make it useless as a standard PC or gaming console.

      • ObsidianZed@lemmy.world · 13 days ago

        Sorry, I was stating that “much like their laptops, I like the idea of these desktops.” I was not trying to insinuate that they themselves are alike.

    • NuXCOM_90Percent@lemmy.zip · 13 days ago

      There is a massive push right now for energy-efficient alternatives to Nvidia GPUs for AI/ML. PLENTY of companies are dumping massive amounts of money on Macs and rapidly learning the lesson the rest of us learned decades ago in terms of power and performance.

      The reality is that this is going to be marketed for AI because it has an APU which, keeping it simple, is a CPU+GPU. And plenty of companies are going to rush to buy them for that and a very limited subset will have a good experience because they don’t have time sensitive operations.

      But yeah, this is very much geared for light-moderate gaming, video rendering, and HTPCs. That is what APUs are actually good for. They make amazing workstations. I could also see this potentially being very useful for a small business/household local LLM for stuff like code generation and the like but… those small scale models don’t need anywhere near these resources.

      As for Framework being involved: someone has kindly explained to me that even though you have to replace the entire mobo to increase the amount of memory, you can still customize your side panels at any moment, so I guess that is fitting the mission statement.

      • ilinamorato@lemmy.world · 12 days ago

        For modularity: There’s also modular front I/O using the existing USB-C cards, and everything they installed uses standard connectors.

    • miss phant@lemmy.blahaj.zone · 13 days ago

      I hate how power-hungry the regular desktop platform is, so having capable APUs like this, which will use less power at full load than a comparable CPU+GPU combo does at idle, is great. Though it needs to become a lot more affordable.

      • Kilgore Trout@feddit.it · 12 days ago

        Production costs are not low either, and AMD still needs to profit. AMD’s APUs are already very affordable.

    • brucethemoose@lemmy.world · 13 days ago

      There’s lots of workstation niches that are gated by VRAM size, like very complex rendering, scientific workloads, image/video processing… It’s not mega fast, but basically this can do things at a reasonable speed that you’d normally need a $20K+ computer to even try. Like, if something takes hours on an A6000 Ada or an A100, just waiting overnight on one of these is not a big deal. Crashing or failing to launch on a 4090 or 7900 XTX is.

      That aside, the IGP is massively faster than any other integrated graphics you’ll find. It’s reasonably power efficient.

  • Laser@lemmy.ca · 13 days ago

    An Xbox with the ability to run Windows is what the article is basically saying.

      • Laser@lemmy.ca · 13 days ago

        I think I need to give Linux a try again. I tried Ubuntu in 2008 but found it too difficult to do the things I was used to doing on Windows. I now have a bit more coding experience and will probably pick it up quicker.

        • themadcodger@kbin.earth · 13 days ago

          It’s gotten so much better since 2008. Ubuntu is good for servers, but probably not what you’re looking for on a desktop. And you really don’t need to have a coding background to use it, though it also depends on your use case.

          • Laser@lemmy.ca · 13 days ago

            I just recall that trying to put music on an iPod in Ubuntu was a nightmare.

            • brucethemoose@lemmy.world · 13 days ago

              Probably still is, lol.

              Apple stuff works best in the Apple ecosystem, though most of what works on Windows can work on Linux.

        • brucethemoose@lemmy.world · 13 days ago

          You don’t have to pick and choose, you can dual boot.

          But the only thing I boot Windows for these days is gaming and Microsoft Teams. Linux has come a long way since 2008.

        • brucethemoose@lemmy.world · 12 days ago

          Anything that needs a lot of VRAM and good CPU performance on a budget, but not necessarily the real-time performance of a W7900 or whatever.

    • Laser@lemmy.ca · 13 days ago

      Love the downvotes for saying something that is in the article! Feels just like Reddit!

  • Blackmist@feddit.uk · 13 days ago

    Not really sure who this is for. With soldered RAM it’s less upgradeable than a regular PC.

    AI nerds maybe? Sure got a lot of RAM in there potentially attached to a GPU.

    But how capable is that really when compared to a 5090 or similar?

    • brucethemoose@lemmy.world · 13 days ago

      The 5090 is basically useless for AI dev/testing because it only has 32GB. Might as well get an array of 3090s.

      The AI Max is slower and finicky, but it will run things you’d normally need an A100, which costs as much as a car, to run.

      But that aside, there are tons of workstation apps gated by nothing but VRAM capacity that this will blow open.

      • KingRandomGuy@lemmy.world · 12 days ago

        Useless is a strong term. I do a fair amount of research on a single 4090. Lots of problems can fit in <32 GB of VRAM. Even my 3060 is good enough to run small scale tests locally.

        I’m in CV, and even with enterprise grade hardware, most folks I know are limited to 48GB (A40 and L40S, substantially cheaper and more accessible than A100/H100/H200). My advisor would always say that you should really try to set up a problem where you can iterate in a few days worth of time on a single GPU, and lots of problems are still approachable that way. Of course you’re not going to make the next SOTA VLM on a 5090, but not every problem is that big.

        • KeenFlame@feddit.nu · 12 days ago

          Exactly, 32 is plenty to develop on, and why would you need to upgrade RAM? It’s been years since I did that in any computer, let alone a tensor workstation. I feel like they made pretty good choices for what it’s for.

        • brucethemoose@lemmy.world · 12 days ago

          Fair. True.

          If your workload/test fits in 24GB, that’s already a “solved” problem. If it fits in 48GB, it’s possibly solved with your institution’s workstation or whatever.

          But if it takes 80GB, as many projects seem to require these days since the A100 is such a common baseline, you are likely using very expensive cloud GPU time. I really love the idea of being able to tinker with a “full” 80GB+ workload (even having to deal with ROCm) without having to pay per hour.
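
          To make that concrete, here’s a rough back-of-envelope sketch (Python) of whether a model fits in a given memory pool. The ~20% activation/KV-cache overhead and the “~96 GiB usable by the GPU” figure are assumptions for illustration, not vendor specs:

              GIB = 1024**3

              def model_footprint_gib(n_params, bytes_per_param, overhead=1.2):
                  # Weights only, padded ~20% for activations/KV cache (rough guess).
                  return n_params * bytes_per_param * overhead / GIB

              pools = {"4090 (24 GiB)": 24, "A100 (80 GiB)": 80,
                       "AI Max 128GB (assume ~96 GiB usable by the GPU)": 96}
              for name, cap in pools.items():
                  for label, bpp in [("fp16", 2.0), ("4-bit", 0.5)]:
                      need = model_footprint_gib(70e9, bpp)  # a 70B-parameter model
                      verdict = "fits" if need <= cap else "does not fit"
                      print(f"70B @ {label}: ~{need:.0f} GiB -> {verdict} on {name}")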

          • KingRandomGuy@lemmy.world · 12 days ago

            Yeah, I agree that it does help for some approaches that do require a lot of VRAM. If you’re not on a tight schedule, this type of thing might be good enough to just get a model running.

            I don’t personally do anything that large; even the diffusion methods I’ve developed were able to fit on a 24GB card, but I know with the hype in multimodal stuff, VRAM needs can be pretty high.

            I suspect this machine will be popular with hobbyists for running really large open weight LLMs.

            • brucethemoose@lemmy.world · 12 days ago

              I suspect this machine will be popular with hobbyists for running really large open weight LLMs.

              Yeah.

              It will probably spur a lot of development! I’ve seen a lot of bs=1 speedup “hacks” shelved because GPUs are fast enough, and memory efficiency is the real bottleneck. But suddenly all these devs are going to have a 48GB-96GB pool that’s significantly slower than a 3090. And multimodal becomes much more viable.

              Not to speak of better ROCm compatibility. AMD should have done this ages ago…

          • wise_pancake@lemmy.ca · 12 days ago

            This is my use case exactly.

            I do a lot of analysis locally, this is more than enough for my experiments and research. 64 to 96gb VRAM is exactly the window I need. There are analyses I’ve had to let run for 2 or 3 days and dealing with that on the cloud is annoying.

            Plus this will replace GH Copilot for me. It’ll run voice models. I have diffusion model experiments I plan to run that are currently inaccessible to me locally (not just image models). I’ve got workloads that take 2 or 3 days at 100% CPU/GPU that are annoying to run in the cloud.

            This basically frees me from paying for any cloud stuff in my personal life for the foreseeable future. I’m trying to localize as much as I can.

            I’ve got tons of ideas I’m free to try out risk free on this machine, and it’s the most affordable “entry level” solution I’ve seen.
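
            As a minimal sketch of the “localize the cloud stuff” idea, this is roughly all it takes to hit a locally hosted LLM from a script, assuming an Ollama server on its default port with a model already pulled (swap in whatever local server and model you actually run):

                import requests  # assumes: pip install requests; a local `ollama serve` running

                resp = requests.post(
                    "http://localhost:11434/api/generate",  # Ollama's default endpoint
                    json={"model": "llama3", "prompt": "Summarize my notes.", "stream": False},
                    timeout=300,
                )
                print(resp.json()["response"])  # the completion, no cloud involved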

            • brucethemoose@lemmy.world · 12 days ago

              And even better, “testing” it. Maybe I’m sloppy, but I have failed runs, errors, hacks, hours of “tinkering,” optimizing, or just trying to get something to launch that feels like an utter waste of an A100 mostly sitting idle… Hence I often don’t do it at all.

              One thing you should keep in mind is that the compute power of this thing is not like an A/H100, especially if you get a big slowdown with ROCm, so what could take you 2-3 days could take over a week. It’d be nice if Framework sold a cheap MI300A, but… shrug.

              • wise_pancake@lemmy.ca · 12 days ago

                I don’t mind that it’s slower, I would rather wait than waste time on machines measured in multiple dollars per hour.

                I’ve never locked up an A100 that long, I’ve used them for full work days and was glad I wasn’t paying directly.

        • Amon@lemmy.world · 12 days ago

          No, it runs off integrated graphics, which is a good thing because you can have a large amount of RAM dedicated to GPU loads.

            • brucethemoose@lemmy.world · 12 days ago

              Most CUDA or PyTorch apps can be run through ROCm. Your performance/experience may vary. ZLUDA is also being revived as an alternate route to CUDA compat, as the vast majority of development/inertia is with CUDA.
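
              As a small sketch of why that mostly “just works”: ROCm builds of PyTorch reuse the torch.cuda namespace, so typical CUDA-targeted code runs unmodified (performance aside):

                  import torch

                  # On ROCm wheels, torch.version.hip is set; it's None on CUDA builds.
                  print("HIP build:", torch.version.hip)

                  if torch.cuda.is_available():             # True on ROCm builds too
                      dev = torch.device("cuda")
                      print(torch.cuda.get_device_name(0))  # reports the AMD GPU under ROCm
                  else:
                      dev = torch.device("cpu")

                  x = torch.randn(1024, 1024, device=dev)
                  y = x @ x  # dispatches to hipBLAS under ROCm, cuBLAS under CUDA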

              Vulkan has become a popular “community” GPU agnostic API, all but supplanting OpenCL, even though it’s not built for that at all. Hardware support is just so much better, I suppose.

              There are some other efforts trying to take off, like MLIR-based frameworks (with Mojo being a popular example), Apache TVM (with MLC-LLM being a prominent user), XLA or whatever Google is calling it now, but honestly getting away from CUDA is really hard. It doesn’t help that Intel’s unification effort is kinda failing because they keep dropping the ball on the hardware side.

  • Blue_Morpho@lemmy.world · 13 days ago

    This is a standard Ryzen AI 370 mini PC at a high price.

    There’s Beelink, Minisforum, Aoostar and many others.

  • 4shtonButcher@discuss.tchncs.de · 13 days ago

    Now, can we have a cool European company doing similar stuff? At the rate it’s going I can’t decide whether I shouldn’t buy American because I don’t want to support a fascist country or because I’m afraid the country might crumble so badly that I can’t count on getting service for my device.

    • ArchRecord@lemm.ee · 12 days ago

      For the performance, it’s actually quite reasonable. 4070-like GPU performance, 128GB of memory, and basically the newest Ryzen CPU performance, plus a case, power supply, and fan, will run you about the same price as buying a 4070, case, fan, power supply, and CPU of similar performance. Except you’ll actually get a faster CPU with the Framework one, and you’ll also get more memory that’s accessible by the GPU (up to the full 128GB minus whatever the CPU is currently using).

      • commander@lemmings.world · 11 days ago

        I swear, you people must be paid to shill garbage.

        Always a response for anyone who has higher standards, lol.

        • ArchRecord@lemm.ee · 11 days ago

          “It’s too expensive”

          “It’s actually fairly priced for the performance it provides”

          “You people must be paid to shill garbage”

          ???

          Ah yes, shilling garbage, also known as: explaining that the price to performance ratio is just better, actually.

  • NuXCOM_90Percent@lemmy.zip · 13 days ago

    So… now Framework Corp is selling non-upgradable hardware?

    I dunno. Conceptually I want to like Framework. But their pricing means it is basically never worth buying and upgrading versus just buying a new laptop (seriously, run the numbers. You basically save 10 bucks over two generations of shopping at Best Buy). But they also have a system that heavily encourages people to hoard spare parts rather than just taking them to an e-waste disposal facility/bin.

    • ilinamorato@lemmy.world · 12 days ago

      their pricing means it is basically never worth buying and upgrading versus just buying a new laptop (seriously, run the numbers. You basically save 10 bucks over two generations of shopping at Best Buy).

      Maybe so. But the big difference is, you can upgrade iteratively rather than taking the entire hit of a new device all at once. So I can buy all of the individual components of my next laptop a few hundred dollars at a time over the course of a couple of years, and use them as I get them. By the time I’ve ship-of-theseus’d the whole device, I may have spent the same amount of money on that new computer, but I paced it how I wanted it. Then I put all of the old components into an enclosure and now I can use it as a media center or whatever. Plus, if something breaks, I can fix it.

      • NuXCOM_90Percent@lemmy.zip · 12 days ago

        What exactly can you upgrade iteratively?

        From the laptop perspective (because the desktop is totally all about that side panel life):

        1. Memory: Ultrabooks are hell, no arguments there. But many brands have increasingly allowed at least one SODIMM to be swapped out, and many still don’t solder the other one. And I’ll say, from personal experience, that buying and swapping out RAM in a relatively new-ish laptop often comes out closer to the price of just paying for the upgraded SKU to begin with. So there is the logic of “I’ll add another 16 GB in two years” but… yeah.
        2. Storage: Again, same. Except that they tend to not even solder down the NVMe drives. There are some particularly asshole vendors but they are few and far between. And this totally is worth doing, since they tend to be fairly standard NVMe drives or the small SSD format that I always forget the name of. Rather than RAM that is only used by laptops and NUCs and costs an arm and a leg…
        3. Ports: Framework laptops just use USB-C dongles for everything. They have a semi-proprietary format for those but it is still, fundamentally, a USB-C dongle. And, from talking to a mutual on a Discord who has one, it has the same fundamental problem that USB-C dongles/hubs do when installing the more finicky OSes (hi Proxmox and OPNsense) where you can’t actually access the hub capabilities until AFTER the OS is installed (the more live-CD-based distros avoid this). So no difference in terms of upgrades and modularity outside of having fewer vendors to buy a dongle from if you care about form factor that much.
        4. CPU: Only if you swap out the motherboard, which is the vast majority of the price of the laptop anyway.
        5. Keyboard, display, etc: These are less “upgrades” so much as replacements. Which are good arguments for repairability, but also… go actually look at iFixit’s website and see how many laptops are repairable. It is mostly just Apple who suck horrifically.

        And just because it always amuses me and never fails, let’s price out upgrading/replacing a Framework (uplacing?). I’ll assume no parts failed, to keep prices simple, and “you can replace your keyboard every time it fails over a five year period” is not the flex people think it is. I’ll use the Intel Core Ultra Series 1 because that is in stock and not a preorder. We are dealing with last year’s model (I think; I haven’t followed Intel laptop processors too much) so there is inherently wiggle room there, but it is theoretically fair, as it is last year’s model for both of them, since I had to dig deep into the Framework site to find an Intel because fuck Best Buy’s website if you are trying to compare AMDs (also fuck AMD for their naming insanity).

        So we are already looking at the Framework being about 120 USD more expensive (999 USD for the Framework versus 879 USD for a comparable Best Buy laptop) without looking at any configurations or upgrades.

        So let’s get into that hyperbolic time chamber and totally not have gay sex with the glistening man hunk known as Vegeta. Five years later, let’s consider an upgrade… to the same SKU.

        On the Framework marketplace, another 125H mobo costs 399 USD right now.

        • Framework: 999 + 399 = 1398 for two generations of a laptop
        • Best Buy: 879 + 879 = 1758
        • For a total savings of 1758-1398 = 360 USD over 5 years of getting soaked by that galick gun

        Which is nothing to balk at. But that assumes that your display and keyboard held up and didn’t need replacing, you liked all the default dongles Framework gave you (which is apparently just four USB C ports… to plug into the four USB C ports on the laptop), and, most importantly, that Framework didn’t change their form factor (I am not sure if they did for the 16 inch laptops to support the “modular” keyboards). Every spare dongle or repaired/upgraded part costs money. Versus being guaranteed a “pristine” new laptop… full of massive amounts of bloatware that you immediately format the shit out of to put Linux on that.

        And, obvious grain of salt: the past few times I have done this exercise it was closer to 100 USD. Framework just happens to be dumping large amounts of old stock right now for their new models, so the prices are better and the comparisons are more tedious.


        Again, conceptually I like Framework. And, for as much as I mock them, I actually do like the form factor of their dongles a lot. Give me a computer with a shit ton of USB-C ports but also let me keep it usable at work without needing to carry around my sketchy Anker dongle/dock. And I don’t really fault them too much for not letting you actually swap CPUs, since that was basically something only the sickest of sickos did until the AM4 socket lasted like 40 years somehow.

        But their key strength is marketing, and that has only gotten stronger since they got the full power of Linus Media Group behind them, because that company needs to protect their shareholders’ investment.

        And, like I said before, I do worry that this just encourages people to hoard parts. Like… anyone who has built a desktop or two has that big plastic bin full of old RAM and mobos and even graphics cards that they might use someday but never will (the PSU is totally worth saving though).

        • ilinamorato@lemmy.world · 12 days ago

          What exactly can you upgrade iteratively?

          At the price point, being able to upgrade memory, storage, and motherboard is unique. And I know you say that it’s the “vast majority” of the cost, but I just bought a Framework 13 last month (I know, great timing) and the mainboard was right around half the total cost. So sure, the most expensive single component, but it means that I can upgrade to a better-performing machine in the future for half the price and not need to junk everything else.

          Framework laptops just use USB C dongles for everything.

          Correct. But honestly, having the swappable I/O is fantastic; over the last five laptops I’ve owned, I’ve only upgraded because I wanted new capabilities once. For the other four, it’s because a component failed; and in two of them it was a USB port, while in a third it was a charging port. Being able to replace those would have extended the lives of those machines substantially.

          fewer vendors to buy a dongle from

          Actually, they’re open-source (not proprietary). And since they’re USB-C, you could probably just take out the card and plug a dongle right in there if you really needed to (I have not tried this).

          Framework: 999 + 399 = 1398 for two generations of a laptop

          I’m planning to hold on to this device for a whole lot longer than two generations. If I can, I’d like to hang on to it for 15-20 years. The laptop I upgraded from was five years old or so (and would still be going strong if it didn’t have a port that was about to die and un-upgradeable RAM and storage), and my desktop is 13 years old and still going strong, so this isn’t terribly unreasonable. I would estimate that I’ll end up pouring about $2000, all told, into this laptop over that time period, likely replacing 3-4 laptop purchases and giving me a better machine during that time period.

          that assumes that your display and keyboard held up and didn’t need replacing,

          Both of which would be cheaper than a new device. A new display is $150 and a new keyboard is $30. I don’t know about the longevity of each component, but based on the research I did it’s definitely not worse than an off-the-shelf machine.

          you liked all the default dongles Framework gave you (which is apparently just four USB C ports… to plug into the four USB C ports on the laptop),

          There aren’t any defaults. When you spec out your kit, you choose which cards to purchase. Replacing them costs about $10. (EDIT: The USB-C ones cost $10. The other ones are variously priced between $10-40, and then there are some storage expansions that cost more because they’re basically SSD in the expansion card form factor).

          and, most importantly, that Framework didn’t change their form factor

          They’ve only done that once since they launched, across six updates to the components. When they made that upgrade, they offered a $90 top cover to bring first gen devices up to second gen specs.

          (I am not sure if they did for the 16 inch laptops to support the “modular” keyboards).

          There’s only been one generation of the 16 inch laptops, and they’ve always had the modular keyboards. The refresh they announced yesterday is just to components, not to chassis.

          Every spare dongle or repaired/upgraded part costs money.

          Yep, and I’m fine with that because it means that I can spec it out the way I want; I don’t have to pay for I/O that I’ll never use. My old laptop had an SD card reader and a DisplayPort output; I literally never used either. The one I had before it had a SATA connector on the external I/O, and a couple of other pieces of nonsense that I didn’t want or need. Actually, thinking back, I don’t know if I’ve ever owned a laptop (until this one) where I actually used all of the ports.

          And I don’t really fault them too much for not letting you actually swap CPUs since that was basically something only the sickest of sickos did

          Yeah, I think swappable CPUs on a laptop are a thing of the past. I hope I’m wrong, but I just don’t see it coming back.

          I do worry that this just encourages people to hoard parts

          I DON’T HAVE A PROBLEM

          I CAN STOP WHENEVER I WANT TO

    • Avid Amoeba@lemmy.ca · 13 days ago

      You get fast memory as a result. If you don’t care about the fast memory, there’s no good reason to buy this board from them. There’s a use case this serves which can’t be served by traditional slotted memory, where the alternative is to buy 4-5 NVIDIA 3090/4090/5090s. If you want that use case, then this is a pretty good deal.

      • NuXCOM_90Percent@lemmy.zip · 13 days ago

        And your phone isn’t repairable because it needs to be waterproof. Your earbuds because of power efficiency. Etc.

        Also, I suggest watching this https://www.youtube.com/watch?v=K3zB9EFntmA.

        But, to be clear: I am actually not as opposed to the idea of soldered RAM when you have “an excuse”. Same with phones. But Framework is a brand that tries to build itself on minimizing e-waste and maximizing repairability and… hey, at least we can still swap out the side panel on their prebuilt!

        • Avid Amoeba@lemmy.ca · 13 days ago

          As far as I’ve read, LPCAMM in its current state does not work for this; the electrical noise is too high. These things aren’t the same. A repairable waterproof phone can be made without glue by making it a bit thicker. In the case of RAM today, we’re hitting fundamental physics limitations with the speed of electricity and noise; at this point the physical interconnect itself becomes a problem. Gold contact points become antennas that induce noise into adjacent parts of the system. I’m not trying to excuse Framework here. I’m saying that the difficulty here borders on the impossible. If this RAM were soldered and it had bandwidth no different than SODIMM or LPCAMM modules, then I’d say Framework fucked up making it soldered, majorly. As I said, there’s no point buying this if you don’t care about the fast RAM and use cases that need it, like LLMs. A regular ITX board with regular AM5 is the way to go.

          E: To be clear, if this bandwidth could be achieved with LPCAMM, then Framework fucked up.

    • DacoTaco@lemmy.world · 13 days ago

      No, the PC is upgradable. They explicitly said in the event that the desktop was supposed to be an actual desktop with replaceable parts as much as technically possible. Only the RAM is tied to the mobo/CPU, because of technical limitations of the AMD CPU.

    • brucethemoose@lemmy.world · 13 days ago

      It will be faster than most next-gen laptops, and it’s much cheaper than a similarly-specced Asus Z13. Strix Halo uses a quad-channel 8533 MT/s bus, 2 full Zen CCDs like you find in desktops/servers, and a 40 CU GPU. It’s more than twice the size/performance of a true “laptop chip”; it’s like two of them put together.
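
      Back-of-envelope, using those numbers (the 256-bit total bus width is an assumption implied by “quad channel”; sustained bandwidth will be lower than this peak):

          transfers_per_sec = 8533e6      # MT/s, the figure quoted above
          bus_bits = 4 * 64               # quad channel x 64-bit (assumed)
          peak_gbs = transfers_per_sec * (bus_bits / 8) / 1e9
          print(f"~{peak_gbs:.0f} GB/s peak")  # ~273 GB/s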

      Everything except the APU/RAM/Mobo combo is upgradable, and you don’t have to replace the whole machine if the board fails.

      I mean, if you don’t need that kind of compute/RAM, this system is not for you, and old gaming desktops are probably better deals for pure gaming. But this thing has a niche.

      • NuXCOM_90Percent@lemmy.zip · 13 days ago

        Everything except the APU/RAM/Mobo combo is upgradable, and you don’t have to replace the whole machine if the board fails.

        So… storage, case, and USB C dongles?

        • DacoTaco@lemmy.world · 13 days ago

          Fans, case, ports, side panel, …
          Whatever you can do with a PC, you can do with this.
          You just can’t separately replace the RAM and CPU, because of AMD’s CPU design.

          Hell, it can be connected to another one to make one hell of a compute monster too.

        • Mac@mander.xyz · 12 days ago

          You can change the squares on the front panel!!!

          /s

      • DacoTaco@lemmy.world · 13 days ago

        I think the Framework desktop would be an absolute powerhouse as a workstation desktop.
        Think developers (that still use desktops), people who need raw computational power for science, servers, AI development, …

  • Billiam@lemmy.world · 13 days ago

    So can someone who understands this stuff better than me explain how the L3 cache would affect performance? My X3D has a 96 MB cache, and all of these offerings are lower than that.

    • brucethemoose@lemmy.world · 13 days ago

      This has no X3D; the L3 is shared between CCDs. The only odd thing about this is that it has a relatively small “last level” cache on the GPU/memory die. X3D CPUs are still kings of single-threaded performance, since that L3 is right on the CPU.

      This thing has over twice the RAM bandwidth of the desktop CPUs though, and some apps like that. Just depends on the use case.

        • brucethemoose@lemmy.world · 12 days ago

          Honestly, CPUs are bad for AI, especially in this case where there’s a GPU on the same bus anyway.

          Off the top of my head, video encoding and compression/decompression really like raw memory bandwidth. Maybe some games? Basically, wherever the M Pro/Max CPUs are really strong compared to the base M, these will excel in the same way.

  • Jollyllama@lemmy.world · 12 days ago

    Calling it a gaming PC feels misleading. It’s definitely geared more towards enterprise/AI workloads. If you want upgradeable just buy a regular framework. This desktop is interesting but niche and doesn’t seem like it’s for gamers.

    • xradeon@lemmy.one · 11 days ago

      Hmm, probably not. I think it just has the single 120mm fan that probably doesn’t need to spin up that fast under normal load. We’ll have to wait for reviews.

      • cholesterol@lemmy.world · 11 days ago

        I also just meant given the size constraints in tiny performance PCs. More friction in tighter spaces means the fans work harder to push air. CPU/GPU fans are positioned closer to the fan grille than in larger cases. And larger cases can even have a bit of insulation to absorb sound better. So, without having experimented with this myself, I would expect a particularly small and particularly powerful (as opposed to efficient) machine to be particularly loud under load. But yes, we’ll have to see.

      • yeehaw@lemmy.ca · 11 days ago

        I have a Noctua fan in my PC. Quiet AF. I don’t hear it and it sits beside me.

  • Diplomjodler@lemmy.world · 13 days ago

    I really hope this won’t be too expensive. If it’s reasonably affordable i might just get one for my living room.

    • Dudewitbow@lemmy.zip · 13 days ago

      They already announced pricing for them.

      $1,099 for the base AI Max model with 32GB(?), $1,999 for fully maxed with the top SKU.

      • jivandabeast@lemmy.browntown.dev · 13 days ago

        $1k for the base isn’t horrible IMO, especially if you compare it to something like the Mac mini, which starts at $600 and balloons over $1k once you increase to 32GB of “unified memory” and 1TB of storage.

        I get why people are mad about the non-upgradable memory, but tbh I think this is the direction the industry is going to go as a whole. They can’t get the memory to be stable and performant while also being removable. It’s a downside of this specific processor, and if people want that they should just build a PC.

        • Dudewitbow@lemmy.zip · 13 days ago

          I actually think it’s not the worst-priced Framework product, ironically. Prebuilt $1k PCs tend to be something like a high-end CPU + 4060 desktop anyway, so specs-wise it’s relatively reasonable. Take for example CyberPowerPC’s builds, which is one of the few OEMs that, IIRC, Gamers Nexus thinks doesn’t charge much of an SI tax on assembly; it’s actually not incredibly far off performance-wise. I’d argue it’s the most value per dollar of any Framework product, ironically.

          • Ulrich@feddit.org · 12 days ago

            Prebuilt $1k PCs tend to be something like a high-end CPU + 4060 desktop anyway

            That value proposition evaporates when you factor in repairability and upgradability of those prebuilts.

            • havocpants@lemm.ee · 12 days ago

              And if you actually want a PC for gaming on, a discrete GPU (e.g. a 7900 XT) is going to be at least 3x faster at throwing polygons around than the 8060S. This thing is definitely better for AI workloads than gaming.

    • commander@lemmings.world · 11 days ago

      Not strange at all.

      They’re a business that makes its money off of selling hype to morons.

      • Nalivai@lemmy.world · 12 days ago

        Apparently AMD wasn’t able to make socketed RAM work; the timings aren’t viable. So Framework had the choice of doing it this way or not doing it at all.

        • JcbAzPx@lemmy.world · 12 days ago

          In that case, not at all is the right choice until AMD can figure out that frankly brain dead easy thing.

          • alphabethunter@lemmy.world · 12 days ago

            “Brain dead easy thing”… All you need to do is manage the signal integrity of super-fast RAM feeding a super-hungry, state-of-the-art SoC that benefits from memory as fast as it can get. Sounds easy af. /s

            They said it was possible, but they lost over half the speed doing it, so it was not worth it. It would severely cripple the performance of the SoC.

            The only real complaint here is calling this a desktop; it’s somewhere in between a NUC and a real desktop. But I guess it does technically sit on a desk top, while also being an ITX motherboard.

      • Jyek@sh.itjust.works · 12 days ago

        Signal integrity is a real issue with DIMM modules. It’s the same reason you don’t see modular VRAM on GPUs. If the RAM needs to behave like VRAM, it needs to run at VRAM speeds.

        • Natanox@discuss.tchncs.de · 12 days ago

          Then don’t make it work like that. Desktop PCs are modular, and Framework made a worse product in terms of modularity and repairability, their main selling points. Just, like… wtf. This Framework product is cursed and shouldn’t exist.

          • brucethemoose@lemmy.world · 12 days ago

            There’s little point in Framework selling a conventional desktop.

            I guess they could have made another laptop size with the dev time, but… I dunno, this seems like a niche that needs to be filled.

            • Manalith@midwest.social · 12 days ago

              This is where I’m at. The Framework guy was talking about how very few companies are using this AMD deal because the R&D to add it to existing models wasn’t very viable; you really only have the Asus Z13. So I feel like being ahead of the game here will be a benefit in the long run as far as their relationship with AMD goes. Plus they’re also doing a 12-inch laptop now as well, so it’s not like they committed all their resources to this.

    • enumerator4829@sh.itjust.works · 12 days ago

      Apparently AMD couldn’t make the signal integrity work out with socketed RAM. (source: LTT video with Framework CEO)

      IMHO: Up until now, using soldered RAM was lazy and cheap bullshit. But I do think we are at the limit of what’s reasonable to do over socketed RAM. In high-performance datacenter applications, socketed RAM is on its way out (see: MI300A, Grace-{Hopper,Blackwell}, Xeon Max), with onboard memory gaining ground. I think we’ll see the same trend in consumer stuff as well. Requirements on memory bandwidth and latency are going up with recent trends like powerful integrated graphics and AI-slop, and socketed RAM simply won’t work.

      It’s sad, but in a few generations I think only the lower end consumer CPUs will be possible to use with socketed RAM. I’m betting the high performance consumer CPUs will require not only soldered, but on-board RAM.
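
      To illustrate why bandwidth is the wall (a rough sketch with illustrative numbers): LLM inference is bandwidth-bound because each generated token streams roughly the whole model through memory once, so peak tokens/sec is bounded by bandwidth divided by model size:

          def max_tokens_per_sec(model_gbytes, bandwidth_gbs):
              # Upper bound: every token reads all weights once from memory.
              return bandwidth_gbs / model_gbytes

          model = 40  # GB, e.g. a chunky quantized model (illustrative)
          for name, bw in [("dual-channel DDR5-5600, ~90 GB/s", 90),
                           ("soldered quad-channel LPDDR5X, ~270 GB/s", 270)]:
              print(f"{name}: <= {max_tokens_per_sec(model, bw):.1f} tok/s")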

      Finally, some Grace Hopper to make everyone happy: https://youtube.com/watch?v=gYqF6-h9Cvg

      • unphazed@lemmy.world · 12 days ago

        Honestly, I upgrade every few years and usually have to purchase a new mobo anyhow. I do think this could lead to fewer options for mobos though.

        • confusedbytheBasics@lemm.ee · 12 days ago

          I get it but imagine the GPU style markup when all mobos have a set amount of RAM. You’ll have two identical boards except for $30 worth of memory with a price spread of $200+. Not fun.

        • enumerator4829@sh.itjust.works · 12 days ago

          I don’t think you are wrong, but I don’t think you go far enough. In a few generations, the only option for top performance will be a SoC. You’ll get to pick which SoC you want and what box you want to put it in.

          • GamingChairModel@lemmy.world · 12 days ago

            the only option for top performance will be a SoC

            System in a Package (SiP) at least. It might not be efficient to etch the logic and that much memory onto the same silicon die, as the latest and greatest TSMC node will likely be much more expensive per square mm than the cutting-edge memory production node from Samsung or whichever foundry the memory is being made at.

            But with advanced packaging going the way it has over the last decade or so, it’s going to be hard to compete with the latency/throughput of an in-package interposer. You can only do so much with the vias/pathways on a printed circuit board.

              • GamingChairModel@lemmy.world · 11 days ago

                No, I don’t think you owe an apology. It’s a super common terminology almost to the point where I wouldn’t really even consider it outright wrong to describe it as a SoC. It’s just that the blurred distinction between a single chip and multiple chiplets packaged together are almost impossible for an outsider to tell without really getting into the published spec sheets for a product (and sometimes may not even be known then).

                It’s just more technically precise to describe them as SiP, even if SoC functionally means something quite similar (and the language may evolve to the point where the terms are interchangeable in practice).

      • exocortex@discuss.tchncs.de · 11 days ago

        There’s even a next iteration already happening: Cerebras is making wafer-scale chips with integrated SRAM. If you want the highest memory bandwidth to your CPU core, the memory has to sit right next to it ON the chip.

        Ultimately, RAM and processor will probably become indistinguishable to the human eye.

      • barsoap@lemm.ee · 12 days ago

        I definitely wouldn’t mind soldered RAM if there’s still an expansion socket. Solder in at least a reasonable minimum (16G?) and not the cheap stuff, but memory that can actually use the signal-integrity advantage; I may want more RAM, but it’s fine if it’s a bit slower. You can leave out the DIMM slot but then have at least one PCIe x16 expansion slot. A free one, one in addition to the GPU slot. PCIe latency isn’t stellar, but on the upside, expansion boards would come with their own memory controllers, and if push comes to shove you can configure the faster RAM as cache / the expansion RAM as swap.

        Heck, throw the memory into the CPU package. It’s not like there’s ever a situation where you don’t need RAM.

        • enumerator4829@sh.itjust.works · 12 days ago

          All your RAM needs to be the same speed unless you want to open up a rabbit hole. All attempts at that thus far have kinda flopped. You can make very good use of such systems, but I’ve only seen it succeed with software specifically tailored for that use case (say databases or simulations).

          The way I see it, RAM in the future will be on package and non-expandable. CXL might get some traction, but naah.

          • Mister Bean@lemmy.dbzer0.com · 12 days ago

            Couldn’t you just treat the socketed RAM like another layer of memory, effectively meaning that L1-L3 are on the CPU, “L4” would be soldered RAM, and then “L5” would be extra socketed RAM? Alternatively, couldn’t you just treat it like really fast swap?

            • enumerator4829@sh.itjust.works · 12 days ago

              Wrote a longer reply to someone else, but briefly, yes, you are correct. Kinda.

              Caches won’t help with bandwidth-bound compute (read: “AI”) if the streamed dataset is significantly larger than the cache. A cache will only speed up repeated access to a limited set of data.

            • Balder@lemmy.world · 12 days ago

              Could it work?

              Yes, but it would require:

              • A redesigned memory controller capable of tiering RAM (which would be more complex).
              • OS-level support for dynamically assigning memory usage based on speed (Operating systems and applications assume all RAM operates at the same speed).
              • Applications/libraries optimized to take advantage of this tiering.

              Right now, the easiest solution for fast, high-bandwidth RAM is just to solder all of it.
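
              A toy sketch of the tiering idea (prefer the fast pool, spill to the slow one); real support would live in the memory controller and OS, not in userspace code like this:

                  class TieredRam:
                      def __init__(self, fast_gb, slow_gb):
                          self.free = {"fast": fast_gb, "slow": slow_gb}

                      def alloc(self, gb):
                          for tier in ("fast", "slow"):  # always try the fast tier first
                              if self.free[tier] >= gb:
                                  self.free[tier] -= gb
                                  return tier
                          raise MemoryError("both tiers exhausted")

                  mem = TieredRam(fast_gb=32, slow_gb=64)
                  print(mem.alloc(24))  # -> 'fast'
                  print(mem.alloc(16))  # -> 'slow' (only 8 GB left in the fast pool)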

            • barsoap@lemm.ee · 12 days ago

              Using it as cache would reduce total capacity, as cache implies coherence, and treating it as ordinary swap would mean copying to main memory before you access it, which is silly when you can access it directly. That is, you’d want to write a couple of lines of kernel code to use it effectively, but it’s nowhere close to rocket science. Nowhere near as complicated as making proper use of NUMA architectures.

          • barsoap@lemm.ee · 12 days ago

            The cache hierarchy has flopped? People aren’t using swap?

            NUMA also hasn’t flopped, it’s just that most systems aren’t multi-socket, or clusters. Different memory speeds connected to the same CPU are not ideal and you don’t build a system like that, but among upgraded systems that’s not rare at all, and software-wise the worst that’ll happen is you get the lower memory speed. Which you’d get anyway if you only had socketed RAM.

            • enumerator4829@sh.itjust.works · 12 days ago

              Yeah, the cache hierarchy is behaving kinda wonky lately. Many AI workloads (and that’s what’s driving development lately) are constrained by bandwidth, and cache will only help you with a part of that. Cache will help with repeated access, not as much with streaming access to datasets much larger than the cache (i.e. many current AI models).

              Intel already tried selling CPUs with both on-package HBM and slotted DDR-RAM. No one wanted it, as the performance gains of the expensive HBM evaporated completely as soon as you touched memory out-of-package. (Assuming workloads bound by memory bandwidth, which currently dominate the compute market)

              To get good performance out of that, you may need to explicitly code the memory transfers to enable prefetch (preferably asynchronous) from the slower memory into the faster, à la classic GPU programming. YMMV.

              • barsoap@lemm.ee · 12 days ago

                I wasn’t really thinking of HPC but my next gaming rig, TBH. The OS can move often accessed pages into faster RAM just as it can move busy threads to faster cores, gaining you some fps a second or two after alt-tabbing back to the game after messing around with firefox. If it wasn’t for memory controllers generally driving channels all at the same speed that could already be a thing right now. It definitely already was a thing back in the days of swapping out to spinning platters.

                Not sure about HBM in CPUs in general, but with packaging advancements any in-package stuff is only going to become cheaper; HBM or pedestrian bandwidth, doesn’t matter.

                • enumerator4829@sh.itjust.works · 11 days ago

                  The thing is, consumers didn’t push Nvidia’s stock sky-high, AI did. Microsoft isn’t pushing anything sane to consumers, Microsoft is pushing AI. AMD, Intel, Nvidia and Qualcomm are all pushing AI to consumers. Additionally, on the graphics side of things, AMD is pushing APUs to consumers. They are all pushing things that require higher memory bandwidth.

                  Consumers will get “trickle-down silicon”, like it or not. Out-of-package memory will die. Maybe not with your next gaming rig, but maybe the one after that.

            • Jyek@sh.itjust.works · 12 days ago

              In systems where memory speeds are mismatched, the system runs at the slowest module’s speed, so that would literally make the soldered, faster memory slower. Why even have soldered memory at that point?

              • barsoap@lemm.ee · 12 days ago

                I’d assume the soldered memory would have a dedicated memory controller. There’s also no hard requirement that a single controller can’t drive different channels at different speeds. The only hard requirement is that one channel needs to run at one speed.

                …and the whole thing becomes completely irrelevant when we’re talking about PCIe expansion cards; the memory controller doesn’t care.

      • wabafee@lemmy.world · 11 days ago

        Sounds like a downgrade to me. I’d rather have the ability to add more RAM than a soldered, limited amount, no matter how high-performance it is. Especially for consumer stuff.

        • Zink@programming.dev · 11 days ago

          Looking at my actual PCs built in the last 25 years or so, I tend to buy a lot of good spec ram up front and never touch it again. My desktop from 2011 has 16GB and the one from 2018 has 32GB. With both now running Linux, it still feels like plenty.

          When I go to build my next system, if I could get a motherboard with 64 or 128GB soldered to it, AND it was like double the speed, I might go for that choice.

          We just need to keep competition alive in that space to avoid the dumb price gouging you get with phones and Macs and stuff.

  • ganoo_slash_linux@lemmy.world · 13 days ago

    I feel like this is a big miss by Framework. Maybe I just don’t understand, because I already own a Velka 3 that I used happily for years, and building small form factor with standard parts seems better than what this is offering. Better as in better performance, aesthetics, space optimization, upgradeability; SFF is not a cheap or easy way to build a computer.

    The biggest constraint building in the sub-5-liter format is GPU compatibility, because not many manufacturers even make boards in the <180mm length category. You also can’t go much higher than 150-200 watts because cooling is so difficult. There are still options though; I rocked a PNY 1660 Super for a long time, and the current most powerful option is a 4060 Ti. Although upgrades are limited to what manufacturers occasionally produce, it is upgradeable, and it is truly desktop performance.

    On the CPU side, you can physically put in whatever CPU you want. The only limitation is that the cooler, an Alpenföhn Black Ridge or Noctua L9a/L9i, probably won’t have a good time cooling 100+ watts without aggressive undervolting and power limits. A 65-watt TDP still gets you a Ryzen 7 9700X.

    Motherboards have the SFF tax but are high quality in general. Flex ATX PSUs were a bit harder to find 5 or 6 years ago, but now the black 600W Enhance ENP is readily available from Velkase’s website. Drives and memory are completely standard: an m.2 fits with the motherboard, and a 2.5in SATA drive also fits in one of the corners. Normal low-profile DDR5 is replaceable/upgradeable.

    What Framework is releasing is more like a laptop board in a ~4-liter case, and I really don’t like that in order to upgrade any part of the CPU, GPU or memory you have to replace the entire board, because it’s a soldered-on APU rather than socketed or discrete components. Framework’s enclosure hasn’t been designed to hold a motherboard + discrete GPU, and the board doesn’t have a PCIe slot if you wanted to attach a card via riser in another case. It could be worse, but I don’t see this as a good use of development resources.

    • Acters@lemmy.world · 12 days ago

      I think the biggest limiting factor for your mini PC will always be the VRAM and any workload that enjoys that fast RAM speed. Really, I think this mini PC from Framework is only sensible for certain workloads. The chip was positioned as a mobile part, and it certainly is majorly power efficient. On the other hand, I don’t think it is for large scaling but more for testing at home or working at home on the cheap. It isn’t something I expected from Framework though, as I expected them to maintain modularity, and the only modularity here is the little USB cards and the 3D-printed front panel designs lol

      Edit:
      Personally, I am in that niche market of high RAM speed, plus access to high VRAM for occasional LLM testing. Though it is AMD, and I don’t know if I’m comfortable switching from Nvidia for that workload just yet. Renting a GPU is just barely cheap enough.