From EE Times, March 27:
An elasticity lens for 2026–2028
The semiconductor memory market is once again in an up‑cycle, but it doesn’t look like the familiar boom‑and‑bust pattern veterans expect. Prices for DRAM and NAND have surged on tight wafers, capital discipline, and the gravitational pull of AI infrastructure.
Unlike prior cycles, price escalation in DRAM and NAND no longer spreads uniformly across end markets. What we’re witnessing is a structurally asymmetric supercycle in which memory’s share of the bill of materials (BOM) and an application’s reliance on capacity and bandwidth now determine who absorbs price shocks and who blinks first. In other words, elasticity has become an application‑level variable, not a commodity‑level constant.
By early 2026, DRAM pricing had climbed approximately 80% quarter‑on‑quarter, while NAND and storage pricing rose by roughly 50%. These moves were fueled by supply constraints, cautious capex from suppliers, and sustained demand from AI accelerators and data‑centric workloads. But the “rising tide” hasn’t lifted all boats equally. The divergence across segments exposes the limits of traditional commodity analysis and makes a strong case for a BOM‑centric elasticity framework to forecast behavior through 2028.
From commodity lens to BOM‑centric elasticity
The core of the framework is straightforward: quantify the memory share of system BOM, gauge performance sensitivity to memory capacity or bandwidth, and assess the room to modify specs without breaking the product’s value proposition or qualification envelope.
These three axes sort applications into low-, medium-, and high-elasticity tiers—each with distinct pricing tolerance, redesign timelines, and cancellation risks.
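To make the three axes concrete, here is a minimal sketch in Python of how an OEM or analyst might screen platforms with this kind of framework. The field names, thresholds, and example numbers are illustrative assumptions, not figures from the article.

    # Minimal sketch of a BOM-centric elasticity screen.
    # Thresholds, weights, and example values are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Platform:
        name: str
        memory_bom_share: float   # memory as a fraction of total BOM cost, 0..1
        perf_sensitivity: float   # 0 = spec cuts go unnoticed, 1 = cuts break the value proposition
        spec_flexibility: float   # 0 = locked by qualification, 1 = free to re-spec quickly

    def elasticity_tier(p: Platform) -> str:
        # High sensitivity plus low flexibility: the OEM absorbs the price shock (AI servers).
        if p.perf_sensitivity >= 0.7 and p.spec_flexibility <= 0.3:
            return "low elasticity"
        # Low sensitivity plus high flexibility: de-contenting or cancellation comes first (consumer).
        if p.perf_sensitivity <= 0.3 and p.spec_flexibility >= 0.7:
            return "high elasticity"
        return "medium elasticity"

    def price_shock_exposure(p: Platform, memory_price_increase: float) -> float:
        # Fractional rise in total system cost if memory prices jump and the spec stays fixed.
        return p.memory_bom_share * memory_price_increase

    platforms = [
        Platform("AI training node", memory_bom_share=0.45, perf_sensitivity=0.95, spec_flexibility=0.10),
        Platform("Automotive domain controller", memory_bom_share=0.15, perf_sensitivity=0.50, spec_flexibility=0.40),
        Platform("Set-top box", memory_bom_share=0.20, perf_sensitivity=0.20, spec_flexibility=0.90),
    ]

    for p in platforms:
        exposure = price_shock_exposure(p, 0.80)  # e.g. an 80% memory price move
        print(f"{p.name}: {elasticity_tier(p)}, ~{exposure:.0%} system-cost impact if specs are held")

The point of the exercise is that two systems with similar memory BOM shares can land in different tiers once performance sensitivity and re‑spec headroom are taken into account.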
Low elasticity: AI infrastructure and servers
AI and enterprise servers, along with select high‑end platforms such as advanced medical imaging, sit at the inelastic end. Here, memory is architecturally inseparable from performance and monetization: high‑bandwidth memory (HBM) stacks and large DDR5 footprints directly dictate throughput, latency, and accelerator utilization. Even when memory exceeds 40–50% of the BOM, cutting capacity undermines platform economics more than it saves cost.
Typical 2026 AI nodes deploy between 192 GB and 288 GB of HBM per system, with additional DDR5 and 20–30 TB of NVMe, pushing memory content into five‑digit dollars per system. Yet elasticity remains low because any reduction directly degrades accelerator utilization and total cost of ownership. Through 2026–2028, availability rather than price is expected to remain the dominant constraint.
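As a rough sanity check on that five‑digit‑dollar figure, the sketch below prices out the memory content of a hypothetical 2026 AI node. The dollar‑per‑gigabyte and dollar‑per‑terabyte prices and the DDR5 footprint are placeholder assumptions for illustration, not numbers from the article.

    # Back-of-the-envelope memory BOM for a hypothetical 2026 AI node.
    # Unit prices and the DDR5 footprint are placeholder assumptions only.
    HBM_GB = 288              # high end of the 192-288 GB range cited above
    DDR5_GB = 2048            # assumed DDR5 footprint, not from the article
    NVME_TB = 30              # high end of the 20-30 TB range cited above

    PRICE_PER_HBM_GB = 25.0   # assumed $/GB for HBM
    PRICE_PER_DDR5_GB = 4.0   # assumed $/GB for DDR5
    PRICE_PER_NVME_TB = 80.0  # assumed $/TB for data-center NVMe

    memory_bom = (
        HBM_GB * PRICE_PER_HBM_GB
        + DDR5_GB * PRICE_PER_DDR5_GB
        + NVME_TB * PRICE_PER_NVME_TB
    )
    print(f"Estimated memory content: ${memory_bom:,.0f} per system")
    # Under these assumptions the total lands near $17,800, i.e. five-digit dollars,
    # yet cutting capacity would cost more in lost accelerator utilization than it saves.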
Medium elasticity: industrial, automotive, and telecom
Industrial automation, automotive domain controllers, and telecom RAN compute live in the middle. Memory is important but not singularly defining, and these markets are governed by long qualification cycles, safety cases, and reliability regimes that rule out rapid redesign while still leaving room for measured adaptation.
Typical configurations pair 32 GB to 64 GB of DDR4 or DDR5 with moderate storage capacities. Under continued price pressure, OEMs pursue capacity right‑sizing, staggered deployments, and selective platform delays rather than immediate cancellation.
High elasticity: consumer and cost‑driven electronics
Consumer platforms such as TVs, set‑top boxes, and home gateways treat memory as a cost line: it claims a meaningful share of the BOM but delivers little differentiation payoff.
Typical configurations include 1–2 GB of DRAM and 8–32 GB of NAND or eMMC storage. Even modest memory price increases trigger immediate de‑contenting, launch delays, or program cancellations. These are the segments that break first when memory inflation exceeds perceived end‑user value.
What the elasticity lens changes in practice...
....MUCH MORE
Also at EE Times:
AI’s Booming Demand Meets a Semiconductor Reality Check
The Memory Supercycle: How Allocation Is Creating New Infrastructure Bottlenecks