The narrative in AI infrastructure over the last two years has been dominated by the enormous and growing demand for compute capacity and its economic consequences: the buildout of data centers and the resulting shortages of key resources such as land, water, power, and copper.

But of all these bottlenecks, memory is by far the most significant. The demand for memory is now outpacing the demand for other drivers of compute capacity. The implications of this will ripple through not just the economics of data centers, but the cost of every single consumer and enterprise hardware device.

In this piece, we unpack the market action around memory prices, its ripple effects across the consumer and industrial electronics market, and the supply-and-demand dynamics emerging around AI. Critically, we explain why the amount of memory being purchased by AI companies like OpenAI seems to be more than what they need, and how the threat of on-device inference might actually be incentivizing an engineered memory shortage.

  • ThomasWilliams@lemmy.world

    > Will lower memory availability to consumers increase reliance on cloud-based storage and demand for data centers?

    No: a smart terminal needs more RAM than a standalone micro, because there is no secondary memory available to rely on.

    > Therefore, according to our best estimates, OpenAI likely needs less than 30% of the 10.8 million wafers it's planning to buy

    OpenAI hasn’t actually paid for any of that; it’s sold on credit with a six-year repayment period on hardware that will last two to three years at most. That’s why no memory manufacturer is increasing capacity, as they would if they expected any long-term increase in demand.
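The comment's numbers can be sanity-checked with a quick back-of-the-envelope sketch. The 10.8 million wafer count and the "less than 30% needed" bound come from the article's estimate quoted above; the six-year repayment and two-to-three-year lifetime figures are the commenter's claims, and everything else is just arithmetic:

```python
# Illustrative arithmetic only; figures are taken from the article estimate
# and the comment above, not from any primary source.

WAFERS_ORDERED = 10_800_000   # wafers OpenAI is reportedly planning to buy
NEEDED_FRACTION = 0.30        # upper bound on the share actually needed

wafers_needed = WAFERS_ORDERED * NEEDED_FRACTION
wafers_surplus = WAFERS_ORDERED - wafers_needed
print(f"Needed (upper bound): {wafers_needed:,.0f} wafers")
print(f"Apparent surplus:     {wafers_surplus:,.0f} wafers")

# Financing mismatch: a six-year repayment window against a two-to-three-year
# useful life means roughly half the repayment period falls after the
# hardware is obsolete.
REPAY_YEARS = 6
LIFE_YEARS = 3                # optimistic end of the 2-3 year range
years_past_useful_life = REPAY_YEARS - LIFE_YEARS
print(f"Years still repaying after hardware is obsolete: {years_past_useful_life}")
```

Under those assumptions, more than two-thirds of the order would be surplus to requirements, which is the gap the article attributes to an engineered shortage rather than organic demand.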