Moore's Law, first articulated by Intel co-founder Gordon Moore in 1965 (and revised in 1975 to its familiar form), is the observation that the number of transistors on a microchip roughly doubles every two years while cost per transistor falls, leading to exponential improvements in computing power and efficiency.
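The doubling observation above corresponds to simple exponential growth, N(t) = N0 · 2^(t/2). A minimal sketch of that arithmetic, using the 1971 Intel 4004's roughly 2,300 transistors as an illustrative baseline (the function name and parameters are ours, not any standard API):

```python
def projected_transistors(n0: float, years: float, doubling_period: float = 2.0) -> float:
    """Transistor count after `years`, assuming a doubling every `doubling_period` years."""
    return n0 * 2 ** (years / doubling_period)

# Projecting 50 years forward from the 4004 (1971 -> 2021): 25 doublings.
print(f"{projected_transistors(2300, 50):,.0f}")  # → 77,175,193,600
```

The projected count of roughly 77 billion transistors lands in the same order of magnitude as the largest chips actually shipping in the early 2020s, which is why the "law" held up as a forecasting rule for so long.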
Moore's Law has not entirely ended, but its original strict pace has slowed significantly as physical, economic, and manufacturing limits approach atomic scales. As of 2025, the semiconductor industry has shifted focus from pure transistor scaling to multidimensional innovations: architectural changes, advanced packaging (e.g., 3D stacking and chiplets), new device structures (e.g., gate-all-around transistors), software optimizations, and AI-driven efficiencies to sustain performance gains.
Opinions vary among key players:
- Intel maintains that Moore's Law is alive, arguing that continued manufacturing breakthroughs leave room for significant performance leaps in hardware and that the industry's progress will not stall.
- Nvidia's CEO Jensen Huang has repeatedly declared it dead or slowing, pointing to GPUs reusing process nodes and a pivot to AI software and accelerated computing to outpace traditional scaling.
- AMD highlights ongoing improvements even on older architectures via software, aligning with the view that gains will persist through combined hardware and optimization efforts.
Some experts predict the law may become fully obsolete by the 2030s as devices approach quantum limits such as those imposed by the uncertainty principle, but for now it endures in an evolved form: a guiding principle for innovation rather than a rigid rule. Emerging technologies, such as photonic computing and new materials, are being explored to extend or eventually replace it.