
Posts Tagged ‘Moore’s Law’

In the early, exciting days of the PC era – when our firm was getting started – the pace of growth in technology was largely driven by the now famous “Moore’s Law,” named after Intel co-founder Gordon Moore, who observed that the number of transistors on a chip, and with it the power of the PC’s main processor, doubled roughly every two years.  That law has governed pretty much the entire computer realm for nearly 50 years.

But no more.  Or should we say, no Moore.

The physical limitations of electrons and heat in confined spaces are bringing this biennial doubling of capacity close to its predictable end.  Moreover, making significantly faster chips today requires a different sort of Moore-doubling – as in doubling (or more) the cost of the chip fab plants that make them, which are now in the $10 billion plus range.  As a result, there are few competitors remaining, even as the market for chips grew by more than 20% in 2017 alone.

As tempers fray between the U.S. and China and the physics of the matter intervene, the future of the industry looks increasingly messy – and thus ripe for all manner of competition, collapse and new innovation.

China, which does have the money to compete, is on a national push for global technological supremacy by 2025, and has long been a voracious consumer of American technology – technology that American firms have often given up freely in exchange for the right to compete there.  But we’re not here to argue politics, trade wars notwithstanding.

This is a complex supply chain, starting with the purest of silicon dioxide, mined from the Appalachian Mountains and shipped to Japan to be turned into pure silicon ingots.  These are then sliced into wafers in Taiwan or South Korea and meticulously imprinted with equipment made in the Netherlands.  The design pattern might come from ARM or Intel or one of a handful of other chip designers, and it’s all eventually packaged into the ceramic housings found on virtually any circuit board out there today, to be tested in China or Vietnam or the Philippines.  The resulting circuit board arrives in Mexico or Germany or China for assembly into a robot or a PC or a cloud server.

One edge that the West, in particular the U.S., holds is that the semiconductor industry relies greatly on what one industry expert calls “repetitive cycles of learning,” ensuring higher barriers to entry for those without deep prior experience and knowledge.  So it gets harder.  Then again, the breakdown of something called Dennard scaling has meant that shrinking components now offers fewer and fewer benefits in chip making over successive generations.  Thus, being a few steps behind the industry leaders may not matter so much.
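
For readers who haven’t met the term, a rough sketch of the idea may help – the simple C·V²·f dynamic-power model and the shrink factors below are textbook approximations and illustrative assumptions, not anyone’s production data.  Under classic Dennard scaling, a shrink kept power density flat, so each generation came with an essentially free speed boost; once supply voltages stopped scaling in the mid-2000s, the same shrink started driving power density up, which is why each new generation now delivers less of a free gain.

# Illustrative sketch (assumed, simplified numbers): why the end of Dennard
# scaling erodes the payoff from shrinking transistors.
#
# Classic Dennard scaling: shrink linear dimensions by a factor s (< 1) and
# capacitance, voltage and gate delay all shrink by s as well, so dynamic
# power per transistor (~ C * V^2 * f) falls as fast as transistor area does
# and power density stays flat.  Once supply voltage can no longer be
# lowered, the same shrink pushes power density up instead.

def relative_power_density(s, voltage_scales=True):
    """Power density after shrinking dimensions by s, relative to before."""
    c = s                                   # capacitance shrinks with dimensions
    f = 1.0 / s                             # clock frequency rises as gates speed up
    v = s if voltage_scales else 1.0        # voltage only shrinks in the ideal case
    power_per_transistor = c * v ** 2 * f
    area_per_transistor = s ** 2
    return power_per_transistor / area_per_transistor

for s in (0.7, 0.5, 0.35):
    ideal = relative_power_density(s, voltage_scales=True)
    stuck = relative_power_density(s, voltage_scales=False)
    print(f"shrink to {s:.2f}x: ideal {ideal:.2f}x power density, "
          f"fixed-voltage {stuck:.2f}x")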

But with the demise of Moore’s Law comes, for perhaps the first time in decades, a whole new competitive opportunity.

Quantum computing, which relies on principles of physics that operate at the atomic level, affords the opportunity to think in entirely new ways about how we make the next generation(s) of computers.

Quantum computers can speed up some calculations immensely, even if today’s noisy machines do so a bit less exactly.  Still, this may hold computational benefits in many fields where absolute calculation perfection is not required.  Google, IBM, Microsoft and others have quantum-computing projects they’re working on right now.  Here again, though, China is making big bets, the technology is nascent and not yet fully practicable, and the winners of the future are unclear.
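
To give a loose sense of where that potential speed-up comes from – a toy illustration built on our own simplifying assumptions, not a description of any vendor’s machine – the state of n qubits is described by 2^n complex amplitudes, so the classical memory needed merely to keep track of a quantum computer’s state doubles with every qubit added.

# Toy illustration (our own back-of-the-envelope assumption, not a real
# quantum program): an n-qubit state is a vector of 2**n complex amplitudes,
# so the classical memory needed just to store it doubles with each qubit.

BYTES_PER_AMPLITUDE = 16    # one complex number stored as two 64-bit floats

def state_vector_gigabytes(n_qubits):
    """Gigabytes needed to store the 2**n_qubits amplitudes of an n-qubit state."""
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE / 1e9

for n in (10, 20, 30, 40, 50):
    print(f"{n:>2} qubits: {2 ** n:,} amplitudes, ~{state_vector_gigabytes(n):.3g} GB")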

What is clear is that the Moore’s Law that governed the growth of our industry when we started in the 1980s is destined to mean something altogether different to the next generation of computing pioneers.

We wish them the best of luck.


As two recent articles in the Wall Street Journal point out, the celebrated tech axiom known as Moore’s Law is reaching its 50th birthday.  The term is named for the idea, first posited by Gordon Moore, that the number of transistors that could be crammed into a given space would double about every two years, and continue to do so indefinitely.  And we all know what happens when you double something over and over again… pretty soon we’re talking large numbers.
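
To make that concrete – a quick back-of-the-envelope calculation, taking the roughly-every-two-years cadence at face value – fifty years of Moore’s Law amounts to about twenty-five doublings, a growth factor in the tens of millions.

# Back-of-the-envelope: what 50 years of Moore's Law implies, assuming
# (roughly) one doubling every two years.

years = 50
doubling_period_years = 2
doublings = years // doubling_period_years      # 25 doublings
growth_factor = 2 ** doublings                  # 2**25

print(f"{doublings} doublings -> roughly a {growth_factor:,}x increase")
# prints: 25 doublings -> roughly a 33,554,432x increase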

As an article by Michael S. Malone in the April 18th Journal pointed out, the idea began as a graph illustrating an article in Electronics magazine.  It didn’t have Moore’s name affixed to it for another ten years.  At the time, Moore was working for Fairchild Semiconductor.  He would later co-found and become CEO of Intel.

Over 50 years, Moore’s idea has indeed stood the test of time, despite regular predictions of its imminent demise.  Chip performance doubles about every 18 months.

But it is getting harder.

Today, the design and testing of the next generation of chips comes at a cost of $132 million, according to International Business Strategies, Inc. of Los Gatos, California.  Just ten years ago, that cost was a mere $16 million.  The circuitry on today’s newest chips is just 14 billionths of a meter (14 nanometers) wide, allowing manufacturers to squeeze hundreds of millions more transistors onto a chip than previously possible.  But, as the Journal’s Don Clark points out in another article, “designing products that use so many more components takes lots of time and money.”

While it’s said that the shrinkage can continue for at least another decade, the price of these chips is going up dramatically.  As Micron Technology CEO Mark Durcan notes, “There will be smaller and smaller pieces of the market that will pay for the improvement.”

Early on, Moore predicted that the number of components on a single chip would double every year or so from 60 to about 65,000 by 1975.  Back in the early days of Fairchild, a transistor sold for $150.  Today, Intel’s Core i5 chip includes 1.3 billion transistors and sells for the equivalent of about a penny for every 70,000 transistors.
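
Those figures hold up to a quick sanity check – the transistor counts and prices are the article’s, only the arithmetic below is ours.

# Rough sanity check of the figures above (arithmetic only; the transistor
# counts and prices come from the article, not from any new data).

start_components = 60
yearly_doublings = 10                              # roughly 1965 through 1975
print(start_components * 2 ** yearly_doublings)    # 61,440 -- close to the ~65,000 cited

core_i5_transistors = 1_300_000_000
transistors_per_penny = 70_000
chip_price_dollars = core_i5_transistors / transistors_per_penny / 100
print(f"~${chip_price_dollars:,.0f} per chip")     # roughly $186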

So while the relative cost (viewed as cost per transistor) may have declined dramatically, the fact is that making these chips is becoming prohibitively expensive, as heat constraints bump into the walls of physics and ever smaller slices of the market are willing to pay for these advances.  There may indeed be a reason that is fiscal rather than technical why “the end is near” for Moore’s famous law.  Chip fab plants now run upwards of $10 billion, and production delays due to manufacturing defects caused Intel to be a half-year late on its latest chip.

Then again, new technologies (like stacked three-dimensional circuits with 32 or 48 layers per chip) will keep boosting device capacities.  Intel plans to deliver a chip for specialized applications this year with 8 billion transistors – 133 million times more than when Gordon Moore first made his bold prediction.

So perhaps fans of Moore’s Law might say as Mark Twain once said… “the rumors of my demise have been greatly exaggerated.”
