Moore’s Law Enforcement

Education, Industry News — June 4, 2010 08:09

Last week, in the run-up to his 6th birthday, my son asked me an awkward question – what shape is the internet? With a gift for finding the existential fault lines between the conceptual and the tangible, his questions regularly leave me dithering and exasperated. But the question is an intriguing one and deserves thought. Rapidly running out of planes and dimensions to imagine in, the only conclusion I reached is that whatever shape it is, it is certainly, in some way, exponential. Everything about the internet is exponential – the number of hosts, domains and Facebook accounts, the amount of traffic – all have grown exponentially. Geometric progression seems to be woven into the fabric of the internet. Why?

Certainly demand is a factor – as a species we love to communicate in tiny misspelled messages and have an unlimited appetite for watching gorillas play the drums – but this is only part of the story. It is no accident that internet expansion mirrors the growth of the technology that underpins it. As the processing and communication capacity of the vast interconnected network of fibres, wires and silicon humming and whirring in climate-controlled rooms across the world doubles, so the number of emails, tweets and gorillas doubles. This growth is both tracked and predicted by the supremely elegant Moore's law, which describes the trend in integrated circuit fabrication whereby the density of transistors doubles every two years.
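Put symbolically (my shorthand, not Moore's own formulation): a chip holding N0 transistors today should hold roughly N(t) = N0 × 2^(t/2) transistors t years from now.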

This claim first appeared in a 1965 paper written by Gordon E. Moore, but it was not until 1970 that Caltech professor Carver Mead coined the term Moore's law. It is often misquoted, variously being misapplied to processing power and memory density, and predictions of its demise are a more or less annual event. Despite this, the law continues to prove unswervingly accurate.

Clearly, Moore’s law is not an actual law, at least not in the sense of the laws of physics, which are eternally inviolable, or in the legislative sense, where on-the-spot fines might be handed to semiconductor manufacturers failing to meet the mandated geometric feature density. Rather, it is simply a prediction, albeit a fabulous one – a shining gem set in a sea of awful predictions. There is an alternate view that it has become self-fulfilling prophecy as the law sets the bench-mark for technological improvement. This gives rise to an interesting thought – if Moore had, in 1965, claimed that recent impressive rate of improvements would just fizzle out because computers were already powerful enough, then would today’s mobile phones be the size of a garden shed?

The headline beneficiary of Moore's law has been the microprocessor. The very first of these, the Intel 4004, contained around 2,300 transistors. Today's Quad-core Itanium packs 2 billion. This information allows us to put Moore to the test. The 4004 was introduced on November 15th 1971. The Quad-core Itanium was released on February 8th 2010, about 38.2 years later. Applying Moore: 2,300 × 2^(38.2/2) ≈ 1.3×10^9, or about 1.3 billion. It seems that Moore's law is being exceeded, as, by this calculation, Intel should hit 2 billion after about 19.7 doublings – some 39.5 years after the 4004, or around April 2011 – not too far ahead of schedule in the scheme of things. (This is not good mathematics, please don't quote it.)
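For the curious, this back-of-an-envelope sum is easy to reproduce. Here is a short Python sketch of the same rough arithmetic (the dates and transistor counts are those quoted above, and the two-year doubling period is the one Moore's law assumes):

    import math
    from datetime import date

    # Moore's law: transistor density doubles every two years.
    DOUBLING_PERIOD_YEARS = 2.0

    def moore_projection(n_start, years_elapsed):
        # Project a transistor count forward under Moore's law.
        return n_start * 2 ** (years_elapsed / DOUBLING_PERIOD_YEARS)

    # Intel 4004 (15 Nov 1971) to Quad-core Itanium (8 Feb 2010).
    years = (date(2010, 2, 8) - date(1971, 11, 15)).days / 365.25
    print(moore_projection(2300, years))              # ~1.3e9 transistors

    # Doublings needed to get from 2,300 to 2 billion transistors:
    # about 19.7, i.e. 39.5 years after the 4004 - spring 2011.
    print(math.log2(2e9 / 2300) * DOUBLING_PERIOD_YEARS)   # ~39.5 years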

Another, less publicised, beneficiary is the family of silicon devices known as field-programmable gate arrays, or FPGAs. These relatively new devices contain uncommitted silicon resources that may be configured to perform any function the designer wishes. Crucially, they may be reconfigured any number of times, which means that more or less anyone can develop their own bespoke silicon chips without the inconvenience of having to build a wafer fabrication plant. In doing so, FPGA vendors have created a viable alternative to the traditional approach to high-performance computing, which has been to arrange general-purpose processing elements into parallel structures on proprietary data highways – so-called cluster computing. But it is difficult and expensive to do this efficiently – and I think it is self-evident that nothing is as efficient as creating a chip specifically for your application – in Calrec's case, processing audio.

Calrec has made dramatic use of this technology in its digital mixing console designs. The Alpha products started by off-loading repetitive numerical processing tasks to FPGAs in order to save DSP cycles. Later, all DSP processing was transferred to a small number of FPGA chips, realising a massive increase in efficiency. Moore's law increases in FPGA capacity mean that the latest DSP design, Bluefin2, packs all the audio processing for more than 1000 channels and hundreds of output busses into just four FPGA chips. This reduces power requirements, meaning smaller power supplies and less heat to manage, but, most significantly, the huge reduction in components and connectors means better reliability.

Of course, all this powerful FPGA silicon takes a great deal of programming. There's no doubt that development tools have improved, but we can't expect to find a programming language that frees us from the burden of clarifying our ideas. Perhaps this is the downside to FPGAs, as designs are most efficiently realised in less abstract languages. We do not benefit from a Moore's law of software efficiency.

Interestingly, the problem appears to be the reverse for general-purpose processors – as chips have become more powerful and storage density has increased, the installation footprint of new programs has tended to grow. Computers come with masses of pre-installed bloatware or foistware, mostly there to market trial versions of unwanted programs. Another source of unwanted software is the proliferation of competing standards, all of which require support. Inevitably, the situation has a law to describe it – Wirth's law, which states that software is getting slower more rapidly than hardware becomes faster.

The attraction of Moore's law is not the uncanny accuracy of the prediction, now 45 years old. Rather, it embodies the self-confidence that is the driving force behind half a century of mind-spinning technological innovation. It is the mantra of an industry in such a supreme state of health that, time after time, it leaps technological hurdles that would stop others in their tracks, like some industrial Usain Bolt on titanium springs.

The current prediction is that Moore’s law will last until 2015. My suspicion is that it will continue far beyond. Although the original claim refers only to silicon, and it may be the case that silicon has run its race, there are alternative technologies lined up waiting to pick up the baton. Perhaps optical, chemical or even quantum computing will be the proud defender of Moore’s law for the next half century.

If it does, the possibilities are interesting. Some more dubious maths: if a 1000-channel broadcast mixing console uses about 3.3 billion transistors in its DSP processing, it might comfortably fit into two large currently available FPGA devices. If Moore's law continues to apply, in just sixteen years' time the same two chips would provide 256,000 channels of processing – at least enough for all the broadcast consoles in the United Kingdom.
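Again, strictly dubious maths, but the scaling is simple enough to sketch in Python (assuming, as above, that the channel count a chip can carry scales directly with its transistor count):

    DOUBLING_PERIOD_YEARS = 2.0

    def channels_after(years, channels_today=1000):
        # Same two FPGA chips; capacity doubles every two years,
        # so channel count scales with transistor count.
        return channels_today * 2 ** (years / DOUBLING_PERIOD_YEARS)

    print(channels_after(16))   # 256,000 channels in the same silicon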

There is a deep, inspiring beauty in the application of mathematics where it might least be expected; the more arcane the better. Seth Lloyd, a quantum mechanic at MIT, shows that the potential computing capacity of a kilogram of matter is twice its energy divided by pi times the reduced Planck constant. This is a big number – about 5.4 × 10^50 operations per second, enough to provide everyone on planet earth with 2000 times more processing power than there is in the world today. What possible calculation could better this? The FFT of a kiss, the z-plane transform of disappointment?
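Lloyd's figure is straightforward to check (a sketch only: the bound is the Margolus–Levitin limit of 2E/πℏ, and E here is the mass-energy of one kilogram):

    import math

    C = 2.998e8        # speed of light, m/s
    HBAR = 1.055e-34   # reduced Planck constant, J*s

    def lloyd_ops_per_second(mass_kg):
        # Margolus-Levitin bound: a system of total energy E performs
        # at most 2E / (pi * hbar) elementary operations per second.
        energy = mass_kg * C ** 2
        return 2 * energy / (math.pi * HBAR)

    print(lloyd_ops_per_second(1.0))   # ~5.4e50 operations per second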

There is a small group of futurists who believe that Moore's law-style improvements in technology will eventually lead to the acceleration of intelligence, which itself will lead to an intelligence explosion and a technological singularity, beyond which the state of the world or the human condition cannot be predicted. Perhaps Armageddon, or maybe a Matrix-like virtual world where machines rule our minds.

Are these people mad? Undoubtedly. But could they be right? Let’s hope not. Best not to worry about it too much. I know I can’t – I have to go and bake an Internet-shaped birthday cake.


2 Comments

  1. mrbluetone says:

    Given that we increasingly rely on technology (at the same rate?), and the merchandise which we consume reduces in size (at the same rate?), and is possibly moving toward a 'matter'-based technology, would it not be safe to assume that we too, the human race, will eventually shed all our diminishing apparatus (limbs etc.), reduce in size (at the same rate?) in order to use the merchandise, and gently devolve from a giant (then really tiny) intelligent thumb back into the radioactive, primeval stew from whence we originated? In which case, your son could have his cake, eat it and actually be it… though he may not be in the same place at the same time to witness it…
    What shape did his cake turn out? Or did you make him a Pie?

  2. stevechen says:

    If multi-core CPUs are fast enough, why do we need DSPs? A DSP implements only a tiny subset of what a CPU does, but an FPGA is not so limited. I do not think it is FPGAs that will die by 2015 – it is DSPs.
