From the first section of chapter four of The Story of Semiconductors by John Orton, c. 2004.
This is absolutely fascinating. It puts into perspective:
- how far we've come; and,
- how fast things are moving.
This was done quickly using a keyboard with a "bad" n key. I've tried to correct the typographical errors but can't guarantee I caught them all.
Note: below:
- Ge: the semiconductor, germanium; and,
- Si: the semiconductor, silicon.
Chapter 4: Silicon, Silicon, and yet more Silicon.
4.1 Precursor to the revolution
With the crucial advantage of hindsight, we are very well aware of the sea change consequent upon the invention of the transistor but it should not really surprise us to learn that those struggling to come to terms with it at the time were less readily persuaded.
Yes, it [the transistor] was small and yes, it used far less power than the incumbent device (the thermionic valve -- the vacuum tube) but there were disadvantages too. There was the problem of excess noise and the difficulty in producing devices which could amplify at high frequencies. Needless to say, in its early days, the transistor was seen essentially as a possible replacement for the valve -- many of the companies taking part in its development were primarily valve (or, since they were mainly American companies such as RCA, GE, Sylvania, and Philco, tube) companies whose main business was, and continued to be for some considerable time, either valves or tubes. [In American these were called "vacuum tubes"; in England they were called thermionic valves.]
It is important to recognize that, though solid state circuitry was eventually to dominate the market, sales of valves did not even reach their peak until 1957 and showed little sign of serious decline until the late 1960s -- the transistor might be an exciting technical advance but it was not at all obvious that it represented a major commercial investment.
The possible exceptions were the small start-up companies, such as Texas Instruments (TI), Fairchild, Hughes, or Transitron, who carried none of the tube or valve baggage which encumbered the larger companies but they were, by definition, small and insignificant! They were, however, flexible and enterprising and it was from them that many of the important innovations in semiconductor technology were to come.
Technical innovation might be exciting and full to the brim with promise but, during the 1950s, the chief problem in transistor manufacture was one of reproducibility. We have already touched on the difficulty of controlling the base width in double-doped and alloyed structures which had a direct and crucial effect on cut-off frequency but there was the additional problem of encapsulation which frequently failed to stabilize the device against atmospheric pollution.
Many manufacturers were obliged to divide their product into "bins" containing high-grade devices which might sell for $20 apiece down to run-of-the-mill (crummy?) specimens which they were lucky enough to offload for 75 cents!
Only with the emergence of planar technology could these problems be overcome -- and this process was not even invented until 12 years after the point contact transistor.
And, needless to say, it took several more years to become widely accepted.
Nevertheless, early transistors did find applications, first in hearing aids where the low power requirement, low weight, and small volume were obvious bonuses (though the excess noise associated with many devices could hardly have been welcomed by users!) and in small portable radios -- the ubiquitous "Transistor" which did more than anything to bring the word into common usage. [This explains the "hearing aid" mania that began in the 60s and to some extent, continues.]
Again, it was one of the small firms (TI) which saw the opportunity and forged an arrangement with Industrial Development Engineering Associates (IDEA) to produce the "Regency TR1" radio in October 1954. It was challenged, in the following year, by Raytheon with its own model and subsequently by numerous others, including, significantly, Sony who later contributed to the delight of youth (and the chagrin of the elderly!) with its highly successful "Walkman" personal tape player.
Applications in car radios followed soon afterward and, in spite of various sticky patches, by the year 1960 there were some 30 US companies making transistors to a total value of over $300 million (see Braun and Macdonald 1982: 76 - 77.)
Another area of application which attracted immediate attention was that of computers. These were still in a very primitive state of development during the 1950s -- analogue computers had been used in radar systems as early as 1943 but the first general purpose digital electronic computer (ENIAC -- Electronic Numerical Integrator and Calculator) was not built (at the University of Pennsylvania) until 1946.
It filled a large room, used 18,000 valves and dissipated 150kW!
British computing skills had been honed by code-breaking endeavours with the Colossus machine during the Second World War (Colossus was first introduced in 1943 and by the end of the war there were no fewer than 10 machines in use) and this experience was probably vital to the development in Cambridge of a rival to ENIAC, known as EDSAC (Electronic Delay Storage Automatic Calculator).
This appeared towards the end of the 1940s, while the first transistorized computer was probably the TRADIC developed by Bell for the US Army in 1954, employing 700 transistors and 10,000 Ge diodes (all hand-wired!), followed by a commercial computer from IBM, containing over 2,000 transistors, in the following year.
The low dissipation and small physical size of the transistor gave it an immediate advantage and its solid state construction offered hope of much improved reliability -- however, it was initially limited in speed by its relatively poorly controlled base width and once the decision to use digital techniques became generally accepted, this took on a more serious aspect because of the extra speed required for digital processing (see Box 4.1).
Indeed, there was relatively little enthusiasm for the long-term future of such machines -- a US survey at the end of the 1940s suggested that the likely national need might be satisfied by about a hundred digital computers!
Such are the perils of technological forecasting! In mitigation, one must accept that, at that time, they [computers] were relatively expensive and ponderous instruments.
While the commercial and consumer markets for transistors and transistorized equipment were still in an uncertain state, there could be little doubt of the seriousness of US military interest.
Much military electronic equipment had either to be portable, to be airborne, or to be attached to missiles where size, weight, and ruggedness were at a premium. The transistor therefore came as a heaven-sent opportunity to the military purchasing arm and, right from the word "go," government finance for transistor development was widely available -- indeed, there was more than a hint to suggest that military backing kept the youthful transistor industry afloat during a large part of the 1950s. [I was born in 1951; the history of the computer age and my life are almost exact contemporaries.]
Something between 35% and 50% of all US annual semiconductor production was destined for military use during the period 1955-63 (Braun and Macdonald 1982: 80).
(It should be remembered, too, that it was largely pressure from the military that led to the early demise of Ge in favour of Si as the preferred transistor material on the grounds of its much better resistance to thermal runaway).
Added to this came the decision by President Kennedy in 1961 to mount an intensive space programme, with the intention to "put a man on the moon by 1970."
Once again, given the modest lifting capability of current US rockets, weight was a vital factor and all electronics must therefore be transistorized.
Ruggedness and reliability, too, were better served by solid state devices than by the older, relatively fragile vacuum tubes.
The European industry, though technically well advanced, received only a fraction of this level of support, and with inevitable consequences -- competition with America was, at best, patchy and generally ineffective.
Nor was this state of affairs helped by some unfortunate technical planning.
An unhappy example lies at the door of the British Post Office (then responsible for telecommunications as well as mail delivery, see Fransman 1995: 89 - 97).
When it became clear, after the Second World War, that domestic and industrial demand for telephone services was soon likely to escalate, the Post Office, in 1956, took the bold decision dramatically to upgrade its telephone switching capabilities by leapfrogging from the rather ancient mechanical switching technology then in use to an advanced, digital "time division multiplexed" (TDM) system, employing fast electronic switches.
This was designed to bypass the more modest technology then being contemplated by most of their rivals, the crossbar switching system, and to give the United Kingdom an almost unassailable lead in this important field.
It failed on account of the inadequacy of the components then available -- a complete exchange was installed in Highgate Wood in 1962, only for it to succumb to excess heat from the 3,000 thermionic valves employed (see Chapuis and Joel 1990: 62).
At the time when the decision was made to go ahead, the transistor was far too uncertain a prospect (Ge devices were liable to thermal breakdown and Si had scarcely had time to assert itself -- it was, in any case, rather slow for digital applications -- see Box 4.1) so the choice of an old, well tried component technology was probably inevitable. (Even though this did contrast with the boldness of the overall project aims!)
Success with similar TDM switching systems had, in fact, to wait until 1970 when suitable integrated circuits (ICs) became available. What was worse from the UK industry viewpoint was the resulting attempt to salvage something from the ruins by reverting to the original mechanical switching technology, thus robbing the Post Office suppliers of the opportunity to develop intermediate switch technology, based on transistors and (as they became available) integrated circuits. It was a body blow for UK solid state device technology from which it never quite recovered.
These references to integrated circuits (ICs) serve to bring us back to our mainstream discussion of the development of solid state active devices, for it was the invention of the integrated circuit in 1958-9 which provided the jumping-off point for the real electronic revolution which still shows no sign of slowing. It was clear to many "whizz kids" of the 1950s that the transistor had the potential for the development of large-scale, though compact, electronic circuits, and several attempts were made to facilitate progress in this direction.
However, it soon became apparent that there was a limitation set by the necessary interconnections -- all of which required individual attention with bonder or soldering iron -- and several people began thinking of ways to overcome this. The first public proposal for integration has been credited to an Englishman, Geoffrey Dummer of the Royal Radar Establishment (RRE, as then was) who presented a conference paper in Washington in May 1952, and who, by 1957, had persuaded the RRE management to fund a contract with the Plessey Company to build a flip-flop circuit based on his ideas. This resulted in a scale model which seems to have created considerable interest among American scientists but very little excitement within the United Kingdom!

In fact, it was at TI in September 1958 that Jack Kilby first built an actual circuit in the form of a phase-shift oscillator. It used Ge, rather than Si, because, at the time, Kilby could not lay hands on a suitable Si crystal, and it employed external connecting wires individually bonded to the components, but it demonstrated the use of the bulk Ge resistance to form resistors and a diffused p-n junction diode to provide capacitance -- there was no need to add these functions by hanging discrete components onto the semiconductor circuit. As a demonstration of the integration principle, it may be likened to the point contact transistor -- a huge step forward but some way from commercial viability.
The practical breakthrough came from Fairchild Semiconductors in the following year, in the form of a patent application by Robert Noyce claiming a method of making an integrated circuit using the Si planar process and forming the necessary interconnections by evaporating metallic films and defining them by photolithography. This was surely the practical way to go but it was nearly 2 years (March 1961) before Fairchild made their first working circuits based on these principles, closely followed by Texas in October 1961.

These two companies were serious rivals, not only with regard to IC manufacture -- a titanic patent battle also ensued over the question of priority in the basic invention (see the stimulating account given in Reid 2001). It took nearly 11 years of legal jousting [think Charles Dickens, Bleak House] before the Court of Customs and Patent Appeals finally adjudicated in favour of Fairchild -- Robert Noyce was officially declared the inventor of the microchip! Not that it mattered very much -- by that time the world of chips had moved on to such a degree that the issue had become of little more than academic interest and, in any case, the two protagonists, Kilby and Noyce, were, on a personal basis, more than happy to share the credit.

In the year 2000, Kilby was awarded a half share in the Nobel prize and, doubtless, Noyce would have joined him had he not died some 10 years earlier. That it should have taken the Nobel Committee more than 40 years to acknowledge a technical development of this magnitude must be seen as both remarkable in itself and sad in the extreme in that it prevented Noyce from receiving his rightful share of the honour.
So prodigious have been the ramifications of their invention that one is somewhat taken aback to learn of the initial lack of interest shown by equipment manufacturers in these early circuits. The problem was that they were too expensive -- it was actually cheaper to build the same circuit from individual components, hard-wired together, than to buy the appropriate integrated version from TI or Fairchild. Sales were minimal. Stalemate!
That was until May 1961 when President Kennedy threw down his famous challenge that America should put a man on the moon by the end of the decade. Almost immediately it became clear that the required rocket guidance would demand highly sophisticated computer technology and that such advanced circuitry could only be realized in integrated form. Hang the expense -- this was the only way to go! Such a dramatic kick-start to a technological revolution smacked of divine intervention by a Higher Being with an unfair interest in the fledgling US chip industry -- certainly no other country ever received a comparable boost. The result was demonstrated by the number of ICs sold: in 1963 the number was a mere 500,000; by 1966 it had risen to 32 million.
Government spending may have been the vital stimulus but the importance of diversification was quickly appreciated. Jack Kilby was put to work at TI to develop a revolutionary consumer product in the shape of a pocket calculator, which appeared in 1971. No fewer than 5 million calculators were sold in 1972. At the same time the digital watch made its appearance and took the consumer market by storm. Ted Hoff of Intel developed the first microprocessor, also in 1971, and the first personal computer (PC) followed in 1975 in the form of a Popular Electronics kit! The revolution was well and truly launched and the industry has hardly cast a backward glance.
Progress in increasing complexity of integrated circuits has shown a quite remarkable steadiness -- in 1965 Gordon Moore (a physical chemist working in Noyce's group at Fairchild) made his famous pronouncement which came to be known as "Moore's Law," that the number of components on an IC would continue to double every year and such has almost been the case. A careful examination of the data up to 1997 suggests that the annual increase is actually closer to a factor of about 1.6 but the really striking feature is its long-term consistency, encouraging confident prediction for future increases, at least as far as the end of the first decade of the new millennium.
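To make that growth rate concrete, here is a quick numerical sketch (my own illustration, not from the book) comparing Moore's original "doubling every year" with the roughly 1.6x annual factor Orton cites, starting from a hypothetical 64-component chip in 1965:

```python
# Sketch (not from the book): compound the two annual growth factors
# over a decade from an assumed 64-component baseline.
def components(start, factor, years):
    """Number of components after compounding annual growth."""
    return start * factor ** years

start = 64   # hypothetical 1965 baseline, for illustration only
years = 10   # 1965 -> 1975

print(f"Doubling every year:  {components(start, 2.0, years):,.0f}")
print(f"Factor 1.6 per year:  {components(start, 1.6, years):,.0f}")
```

Even the "slower" factor of 1.6 yields more than a hundredfold increase per decade, which is why the long-term consistency matters far more than the exact exponent.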
Solid state circuitry has gone from "small scale integration" (SSI, up to 20 "gates") in the 1960s to "medium scale integration" (MSI, 20 - 200 gates) at the end of the 1960s through "large scale integration" (LSI, 5,000 - 100,000 gates) in the 1980s and what might be called "ultra large scale integration" (ULSI, 100,000 - 10 million gates) by the end of the 1990s.
Moore himself continued to play a role in these developments -- together with Robert Noyce, he left Fairchild in 1968 to found Intel, whose sales rose from $2,700 in 1968 to $60 million in 1973 and, in the year 2000, to $32 billion. The basis of this performance has, of course, been the steady decrease in size of the component transistors and we shall look in more detail at this anon. However, we must first backtrack to examine another important breakthrough, the development, at last, of a real field effect transistor (FET).
******************************
More From John Orton's Book
The first example was section 3.1 transcribed here.
Do a word search for "serendipity" at this post. No fewer than three events in the early invention of the transistor were due to serendipity, were serendipitous, or were fortuitous.
Now, transcribing section 4.2 we have yet another serendipitous / fortuitous / accidental discovery which was critical for development of the microchip / transistor / semiconductor. Link here. To wit:
4.2 The metal oxide silicon transistor. It begins:
The Metal Oxide Silicon (MOS) transistor was yet another product of the fertile ground cultivated by Bell Telephone Laboratories and, once again, it involved just a small element of good fortune. The critical step in its invention was the (accidental!) discovery that the Si surface can be oxidized to form a highly stable insulating film which possesses excellent interface qualities (i.e. the interface between the oxide layer and the underlying silicon). We have already commented on the importance of this interface in passivating Si planar transistors which, in turn, led to the practical realization of integrated circuits. The further application in the metal oxide silicon field effect transistor (MOSFET) turned out to be a singularly important bonus. [Comment: so, now, in the very beginning of the transistor story, we have four discoveries that were serendipitous (serendipity), fortuitous, or accidental. Absolutely amazing.]
Continuing:
We saw in the previous chapter that the quest for a FET (which would function in a manner closely parallel to that of the thermionic valve) had already acquired something of a history.
It was a patent awarded in 1930 to a Polish physicist, Julius Lilienfeld (who emigrated to America in 1926), that thwarted William Shockley in his original attempt to patent such a device but, even more frustratingly, it was the existence of high densities of surface states on Ge and Si which prevented the Bell scientists from actually making one.
Even though the application of a voltage to a "gate" electrode may have been successful in inducing a high density of electrons in the semiconductor region beneath it, these electrons were not free to influence the semiconductor's conductivity because they were trapped in surface (or interface) states.
What was needed was a surface (or, more probably, an interface) characterized by a low density of these trapping states (of order 10^15 m^-2 or less) but, at the time, no one knew how to produce it.
Brattain and Bardeen had continued to study the problem of surface states until 1955, eight (8) years after their invention of the point contact transistor, but it was not until 1958 that another Bell group under "John" Atalla discovered the low density of states associated with a suitably oxidized Si surface. It was necessary ...
... and... and then a long paragraph explaining the next process ... ending ...
... However, the key result was that their densities were below the above limit of 10^15 m^-2, thus making possible the development of a practical FET. This was finally achieved at Murray Hill in 1960. Within a few years RCA pioneered the introduction of MOS devices into integrated circuits and this technology rapidly came to dominate that of bipolar (e.g. n-p-n) devices in many applications.
Then a long paragraph of technical details, describing a process which is known as "inversion," the channel itself often being referred to as an "inversion layer."
Another short technical paragraph.
Then, another long technical paragraph, bottom of page 102, which begins:
A virtue of these curves ... As already explained in Box 4.1, digital signal processing (which is fundamental to present-day computing and information transfer) depends on the use of short voltage (or current) pulses which are generated and moved around an array of electronic circuits in incredibly complicated fashion but the basis is, nevertheless, simple. At any point in the circuit, "information" is represented by the presence (digital "1") or absence (digital "0") of a pulse voltage. Typically its amplitude is about 5 V but the exact value is less important than the ability of monitoring circuitry to determine, with a high degree of certainty, that the pulse is either present or absent......
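The presence/absence decision described above is easy to sketch (my own toy illustration, not the book's): the monitoring circuitry just compares each sampled voltage against a threshold, so the exact 5 V amplitude matters far less than a clean margin between the two states.

```python
# Illustrative sketch: classify sampled voltages as digital 1 (pulse
# present) or 0 (pulse absent) against an assumed decision threshold.
THRESHOLD = 2.5  # volts; assumed midpoint for a nominal 5 V pulse

def to_bit(voltage):
    """Decide whether a sampled voltage represents a pulse (1) or not (0)."""
    return 1 if voltage >= THRESHOLD else 0

samples = [4.8, 0.2, 5.1, 0.0, 4.3]   # noisy but unambiguous readings
bits = [to_bit(v) for v in samples]
print(bits)  # [1, 0, 1, 0, 1]
```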
Now, skip ahead to the end of section 4.2:
In summary, then, we see that, by the early 1960s, the two principal active devices, bipolar and MOS transistors, had become available to the electronic engineer and the story from this point is, on the one hand, one of continuing miniaturization to improve speed and packing density in IC design and, on the other hand, the development of large-scale devices with large voltage-handling capacity for use in high power applications.
Which device to use in which application depended, of course, on the specification required.
In general, MOSFETs have an advantage in IC design on account of their lower power dissipation and modest demand on silicon area, though bipolar devices are capable of faster switching speeds at the expense of more power dissipation and greater demand on space.
The dissipation advantage inherent in the use of MOS devices was further enhanced in the late 1960s by the development of Complementary MOS (CMOS) circuitry in which each switching element takes the form of a pair of transistors, one NMOS and one PMOS, the important feature being that power is dissipated only when the switch operates -- in the quiescent state (whether storing a digital 0 or 1), no current flows.
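The complementary-pair idea can be sketched as a tiny logical model (my own, not from the book): in a CMOS inverter the PMOS conducts when the input is low and the NMOS when it is high, so one of the pair is always off and there is no static current path between supply and ground.

```python
# Logical sketch of a CMOS inverter: the two transistors are never
# on simultaneously, so current flows only during a switching event.
def cmos_inverter(a):
    pmos_on = (a == 0)   # PMOS pulls the output high when input is low
    nmos_on = (a == 1)   # NMOS pulls the output low when input is high
    assert not (pmos_on and nmos_on)  # never both on: no static current
    return 1 if pmos_on else 0

print(cmos_inverter(0), cmos_inverter(1))  # 1 0
```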
Section 4.3. Semiconductor technology. Oh, no. This section starts at the bottom of page 107 and doesn't end until page 120, and it begins:
In one sense (the commercial sense), this section is the most important in the book!
The reader should already be persuaded of the important part played by well controlled semiconductor materials in the development of transistors and integrated circuits. Without high-quality germanium the transistor could never have been discovered and without high-quality silicon the integrated circuit would still be a mere concept.
However, even greater importance attaches to the role played by technology.
Without the amazing skills built up by semiconductor technologists we might still be trying to wire together crude individual transistors on printed circuit boards, rather than linking powerful integrated circuit chips to build fast computers with almost unimaginable amounts of memory.
A long, long paragraph. Then:
...but this still leaves unanswered the question of how to define their precise positions [the precise positions of transistors on a chip]. This step, known as "photolithography," probably representing the most important single contribution to the technology, originated in the printing industry and was adapted for microelectronic applications by a number of American companies such as Bell, TI, and Fairchild at the beginning of the 1960s. As an aid to understanding it, we refer to the earlier process of making mesa transistors, depending on selective etching to form the local bumps on the semiconductor surface which defined the active device area.
Interestingly, this process is well described in Simon Winchester's book.
*********************************
The Book Page
So, I’m reading The Perfectionists: How Precision Engineers Created the Modern World, by Simon Winchester, c. 2019.
Chapter 8 on “GPS” was incredibly fascinating, but then I started reading Chapter 9 — and whoo-hoo! It turned out to be the story of a machine that makes machines, a machine that was sent to Chandler, Arizona, in 2018.
The chapter begins:
Once every few weeks, beginning in the summer of 2018, a trio of large Boeing freight aircraft, most often converted and windowless 747s of the Dutch airline KLM, takes off from Schiphol airport outside Amsterdam, with a precious cargo bound eventually for the city of Chandler, a desert exurb of Phoenix, Arizona.
The cargo is always the same, consisting of nine white boxes in each aircraft, each box taller than a man. To get these profoundly heavy containers from the airport in Phoenix to their destination, twenty miles away, requires a convoy of rather more than a dozen eighteen-wheel trucks. On arrival and finally uncrated, the contents of all the boxes are bolted together to form one enormous 160-ton machine — a machine tool, in fact, a direct descendant of the machine tools invented and used by men such as Joseph Bramah and Henry Maudslay and Henry Royce and Henry Ford a century and more before.
Just like its cast-iron predecessors, the Dutch-made behemoth of a tool (fifteen of which compose the total order due to be sent to Chandler, each delivered as it is made) is a machine that makes machines. Yet, rather than making technical devices by the precise cutting of metal from metal, this gigantic device is designed for the manufacture of the tiniest of machines imaginable, all of which perform their work electronically, without any visible moving parts.
......
......
The particular device sent out to perform such tasks in Arizona, and which, when fully assembled, is as big as a modest apartment, is known formally as an NXE:3350B EUV scanner. It is made by a generally unfamiliar but formidably important Dutch-registered company known simply by its initials, ASML.
Each of the machines in the order costs its customer about $100 million, making the total order worth about $1.5 billion.
Wow, wow, wow. I did not see that coming.
But it certainly connects a lot of dots.
***********************
Continuing
On page 291, more of the ASML story.
Enormous machines such as the fifteen that started to arrive at Intel's Chandler (Arizona) fab from Amsterdam in 2018 are employed to help secure this goal. The machines' maker, ASML -- the firm was originally called Advanced Semiconductor Materials International -- was founded in 1984, spun out from Philips, the Dutch company initially famous for its electric razors and lightbulbs (sic). The lighting connection was key, as the machine tools that the company was established to make in those early days of the integrated circuit used intense beams of light to etch traces in photosensitive chemicals on the chips, and then went on to employ lasers and other intense sources as the dimensions of the transistors on the chips became ever more diminished.
Then the process is explained in great detail.
Then:
With the latest photolithographic equipment at hand, we are able to make chips today that contain multitudes: seven billion transistors on one circuit, a hundred million transistors corralled within one square millimeter of chip space. But with numbers like this comes a warning. Limits surely are being reached -- remember, this was being written in 2018. The train that left the railhead in 1971 may be about to arrive, after a journey of almost half a century, at the majesty of the terminus. Such a reality seems increasingly probable, not least because as the space between transistors diminishes ever more, it fast approaches the diameter of individual atoms. And with spaces that small, leakage of some properties of one transistor (whether electric, electronic, atomic, photonic, or quantum-related properties) into the field of another will surely soon be experienced.
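A quick back-of-the-envelope check (mine, not Winchester's) shows the two quoted figures are mutually consistent with a plausible die size:

```python
# Sanity check on the quoted figures: seven billion transistors at a
# hundred million per square millimetre implies a die of about 70 mm^2.
transistors = 7e9
density = 1e8            # transistors per mm^2
area_mm2 = transistors / density
print(area_mm2)  # 70.0
```

A 70 mm^2 die is well within the range of real chips, so the two numbers hang together.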
There will be, in short, a short circuit -- maybe a sparkless and unspectacular short circuit, but a misfire nonetheless, with consequences for the efficiency and utility of the chip and of the computer or other device at the heart of which it lies.
So, they need new machines to make chips even smaller.
An American company (which ASML subsequently bought) had already developed a unique means of producing this particular and peculiar type of EUV radiation. Some said the company's method verged on the insane, and it is easy to see why.
If everything works properly -- and at the time of this writing, it seems to be -- then the first of these supercomplex chips, made in this bizarre manner, will be on offer from 2018 onward. And Moore's law, by then fifty-three years old, will prove to have kept itself on target, again.
But.
How much longer? The use of EUV machines may allow the law's continuance for a short while more, but then the buffers will surely be collided with, at full speed, and all will come to a shuddering halt. The jig, in other words, will soon be up.
A Skylake transistor is only about one hundred atoms thick -- and although the switching on and off that produces the ones and zeros that are the lifeblood of computing goes on as normal, the fact that such minute components contain so very few atoms makes the storage and usage of these digits increasingly difficult, steadily more elusive. See also this post.
There are plans for getting around the limits, for eking out a few more versions of what might be called "traditional" chips by, among other things, making the chips themselves increasingly three-dimensional -- by stacking chip on top of chip and connecting each by forests of ultraprecisely aligned and very tiny wires. This would allow the number of transistors in a chip to keep on increasing for a while without our having to reduce the size of individual transistors.
Other talk:
- the curious one-molecule-thick substance graphene;
- molybdenum disulfide, black phosphorus, and phosphorus-boron compounds as possible alternatives to silicon.
And then finally: quantum.
Light squeezing, for example, allows some actual measurement (rather than calculation), which is the basis of immensely small numbers -- see page 298 -- the Planck length -- 0.0000 (34 zeroes) 16229 meters, or about twenty decimal places smaller than the diameter of a hydrogen atom. The time it would take a photon to journey through a Planck length: 5.39 x 10^-44 seconds.
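Checking that last figure (my own arithmetic, not the book's): the time for a photon to cross one Planck length is simply the length divided by the speed of light, which lands on the standard value of the Planck time.

```python
# Divide the Planck length by the speed of light to get the traversal time.
planck_length = 1.616229e-35   # metres (the 34-zeroes-then-16229 figure)
c = 2.99792458e8               # speed of light in vacuum, m/s
t = planck_length / c
print(f"{t:.3e} s")  # ~5.391e-44 s, the Planck time
```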
Back to Intel's 14A chip: