Google's TurboQuant Wipes $100 Billion From Memory Chip Stocks
Google's new AI compression technique slashes memory needs sixfold, sending Samsung and SK Hynix shares into freefall.

Samsung Electronics shares fell 14.2% and SK Hynix dropped 18.7% on April 2 after Google DeepMind published a paper describing TurboQuant, a quantization technique that reduces AI model memory requirements by approximately six times with minimal accuracy loss.
The two-day selloff erased more than $100 billion in combined market capitalization from the world's two largest memory chip manufacturers, according to Bloomberg calculations.
What TurboQuant Does
The paper, published on arXiv and presented at a Google research event in Mountain View, described a method for reducing the numerical precision of AI model weights from 16-bit floating point to sub-3-bit representations while maintaining 97.4% of original model performance on standard benchmarks.
In practical terms, an AI model that currently requires eight Nvidia H100 GPUs could run on one or two using TurboQuant compression, according to the paper's authors.
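The memory arithmetic behind claims like this follows directly from the bit widths involved. The sketch below illustrates generic symmetric weight quantization to 3-bit integers in NumPy; it is not TurboQuant's actual algorithm (the paper's method is not reproduced here), and the function names are illustrative only.

```python
import numpy as np

def quantize_3bit(weights: np.ndarray):
    """Illustrative per-row symmetric quantization to signed 3-bit ints.

    Each row gets its own scale so that the largest weight maps to 3;
    values are then rounded into the representable range [-4, 3].
    """
    scale = np.abs(weights).max(axis=1, keepdims=True) / 3.0
    q = np.clip(np.round(weights / scale), -4, 3).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    # Recover approximate fp32 weights from the 3-bit codes.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 256)).astype(np.float32)
q, scale = quantize_3bit(w)
w_hat = dequantize(q, scale)

# Storage shrinks from 16 bits to roughly 3 bits per weight,
# which is where the "approximately six times" figure comes from.
print(f"theoretical compression: {16 / 3:.1f}x")
print(f"mean abs reconstruction error: {np.abs(w - w_hat).mean():.4f}")
```

The same ratio explains the GPU math quoted above: a model filling eight 80 GB accelerators at 16-bit precision needs on the order of 120 GB at 3-bit precision, which fits on one or two cards. Production schemes add refinements (activation quantization, outlier handling) that this sketch omits.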
"This changes the economics of inference completely," said Dylan Patel, chief analyst at SemiAnalysis. "If every AI company needs one-sixth the memory chips they expected to buy this year, that's a demand shock the industry hasn't priced in."
High Bandwidth Memory (HBM) chips — the specialized memory used in AI accelerators — had been the primary growth driver for both Samsung and SK Hynix. SK Hynix derived 53% of its 2025 revenue from HBM products, according to its most recent earnings filing. Samsung's memory division had projected 40% HBM revenue growth for 2026.
Industry Response
SK Hynix CEO Kwak Noh-jung issued a statement on April 2 calling the selloff "an overreaction based on a single research paper" and noting that "the gap between laboratory results and production deployment is measured in years, not weeks."
Samsung declined to comment directly but pointed to its March 31 announcement of a record 110 trillion won ($82 billion) AI investment plan for 2026, which it said demonstrated "confidence in sustained demand for advanced memory solutions."
Nvidia's shares fell 6.3% on the news, a smaller decline that analysts attributed to the possibility that reduced memory needs could accelerate GPU adoption by lowering total system costs.
"It's bad for memory makers and potentially good for GPU makers," said Stacy Rasgon, a semiconductor analyst at Bernstein. "If running AI gets cheaper, more companies will run AI. The question is whether volume growth offsets per-unit memory reduction."
Broader AI Hardware Shift
TurboQuant is part of a broader trend toward AI efficiency gains that is reshaping hardware demand projections. Meta published a similar compression technique called SpinQuant in late 2025 that achieved 4-bit quantization with 95% accuracy retention. Microsoft's BitNet research demonstrated 1-bit model architectures that performed competitively on certain tasks.
The cumulative effect has been a growing consensus among researchers that the "bigger is better" era of AI scaling may be giving way to an efficiency era.
"We're entering a phase where algorithmic improvements outpace hardware improvements," said Sara Hooker, who leads Cohere for AI's research lab. "That's historically been bad for chip companies that bet on demand always going up."
The PHLX Semiconductor Index fell 4.8% on April 2, its worst single-day decline since January. The index is now down 11.3% year-to-date, compared with a 6.1% decline for the S&P 500.
Geopolitical Dimensions
The timing complicates the US-led effort to restrict China's access to advanced AI chips. The MATCH Act, introduced in Congress on March 31, would tighten export controls on semiconductor manufacturing equipment to China.
But if AI models can achieve comparable performance with less advanced hardware, the strategic value of cutting-edge chip restrictions diminishes.
"TurboQuant is a gift to Chinese AI labs," said Gregory Allen, director of the Wadhwani Center for AI at the Center for Strategic and International Studies. "If you can run frontier-quality models on last-generation chips, export controls lose their teeth."
Micron Technology, the third-largest memory manufacturer, reports earnings on April 10. Analysts expect management to address TurboQuant's potential demand impact directly.


