For this week’s blog we are going to begin with an unusual declaration before asking you to read what we have to say: We do not really know much about semiconductors. In fact, the vast majority of people have very little understanding of them despite the enormous impact they have on all of our daily lives.

The late British science fiction writer Arthur C. Clarke proposed a few adages that have come to be known as Clarke’s Three Laws:

  1. When a distinguished but elderly scientist states that something is possible, they are almost certainly right. When they state that something is impossible, they are very probably wrong.
  2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
  3. Any sufficiently advanced technology is indistinguishable from magic.

All three of those are true of semiconductors.

With that said, there are great changes underway in how they are made and where they are made, and it is worth keeping at least a high-level awareness of some of those developments, if only to understand some of the forces shaping how the world works. When the production of fossil fuels or staple crops is disrupted, a whole range of socio-economic knock-on effects follows. The same can be said for semiconductors, and there are a lot more places producing fossil fuels and staple crops than there are producing semiconductors, especially the most complicated ones.

Briefly, What Is a Semiconductor?

Without getting too technical, a semiconductor is a material that can conduct electric current in a solid state where that conductivity can be manipulated by electric or magnetic fields, by exposure to light or heat, and/or by the introduction of impurities.

Put another way, a semiconductor allows people who know what they are doing to monitor and control the flow of electricity.

It would be hard to draw up a comprehensive list of all the ways this is useful. Basically anything electrical more complicated than a lightbulb has a semiconductor in it these days, and even many lightbulbs do in fact have semiconductors: LEDs (light-emitting diodes) are semiconductors. The production of semiconductor devices has grown by more than 9% year over year since the late 1970s, and by 2018 annual production is believed to have exceeded one trillion devices for the first time, with perhaps a total of seven trillion devices having been produced since the technology became commercially available after the Second World War.
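As a quick sanity check on those figures, here is a back-of-the-envelope calculation in Python. The 1978 baseline of roughly 32 billion devices per year is our own assumption for illustration; compounding the 9% growth rate forward from it lands almost exactly on the one-trillion mark.

```python
# Back-of-the-envelope check of the growth figures in the paragraph above.
# The 1978 baseline is our own assumption, chosen for illustration; only the
# 9% growth rate and the one-trillion 2018 figure come from the text.
base_year, base_units = 1978, 32e9   # hypothetical annual output in 1978
annual_growth = 0.09                 # "more than 9% year over year"

units_2018 = base_units * (1 + annual_growth) ** (2018 - base_year)
print(f"Implied 2018 production: {units_2018:.2e} devices per year")
# -> about 1.0e+12, consistent with "exceeded one trillion" in 2018
```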

Can We Be More Specific?

For the rest of this blog post, for the sake of talking about one category of semiconductor devices, we are going to focus on transistors and integrated circuits.

A transistor is a semiconductor device that can amplify or switch the flow of electricity, and an integrated circuit is a collection of electronic pathways laid out on one flat piece of semiconductive material. Put the two together and you have a processor, which was called first a chip and then fairly quickly a microchip as the ability to build more and more transistors and circuits into a smaller and smaller piece of semiconductive material advanced.
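To make the ‘switch’ idea concrete, here is a toy Python sketch, not real device physics: it models a transistor as a voltage-controlled switch (the 1.0-volt threshold is an arbitrary assumption) and wires two of them into a NAND gate.

```python
# A toy model of a transistor as a voltage-controlled switch.
# This is an illustration of the idea only, not real device physics.
def transistor(gate_voltage: float, threshold: float = 1.0) -> bool:
    """Return True (conducting) when the gate voltage exceeds the threshold."""
    return gate_voltage > threshold

def nand_gate(a: float, b: float) -> bool:
    """Two transistors in series pull the output low only when both conduct."""
    return not (transistor(a) and transistor(b))

for a, b in [(0.0, 0.0), (0.0, 5.0), (5.0, 0.0), (5.0, 5.0)]:
    print(f"A={a}V, B={b}V -> output {nand_gate(a, b)}")
```

NAND gates are worth singling out because they are functionally complete: wire enough of them together and you can build any other logic circuit, which is one way to picture how billions of switches add up to a processor.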

In 1965, at the dawn of microchips, Gordon Moore, the co-founder of Fairchild Semiconductor and Intel (a company he would go on to lead as CEO for a time), hypothesized that the number of components per integrated circuit would double every year for at least the next decade. By 1975 he had revised that estimate to a doubling of capacity and capability every two years, a prediction the semiconductor industry has used to set R&D targets and guide long-term planning, thus becoming to an extent a self-fulfilling prophecy. For decades this ‘doubling every two years’ has been taken as a touchstone, eventually being dubbed ‘Moore’s Law,’ and the advances in digital electronics, the falling costs of microprocessors, the increase in memory capacity, and even the growing pixel resolution of digital cameras are all in large part connected to the steady drumbeat of progress it set.
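To see the compounding that prediction implies, here is a minimal sketch. The 1971 starting point, Intel’s 4004 with roughly 2,300 transistors, is our own anchor for illustration; it is not cited in this post.

```python
# Moore's Law as simple compounding: a doubling every two years.
# Anchor: Intel's 4004 (1971), roughly 2,300 transistors (our assumption).
start_year, start_count = 1971, 2_300
doubling_period_years = 2

def predicted_transistors(year: int) -> float:
    """Transistor count predicted by doubling every two years from 1971."""
    return start_count * 2 ** ((year - start_year) / doubling_period_years)

for year in (1981, 2001, 2021):
    print(f"{year}: ~{predicted_transistors(year):,.0f} transistors per chip")
# 2021 comes out in the tens of billions, which is roughly where flagship
# chips actually landed -- one reason the 'law' held up for so long.
```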

By 2010, microprocessor architects were reporting that Moore’s Law was beginning to come up against the practical limits of physics. At the nanometer scale of modern microchips there is only so much room left between circuit paths, and the circuits themselves can only be made so thin before limiting the amount of current that can flow through them without issue. Most experts, including Moore himself, expect Moore’s Law to run its course as soon as 2025, but further progress will still be made through R&D into other semiconductor materials, alternative chip architectures, quantum computing, and AI and machine learning optimizing the performance of existing processing power.

The future of semiconductor devices remains bright, but the steady rate of dependable forward progress is slowing down, and whatever comes next will be a fundamental departure from the path that brought us this far this fast. What does that mean for the future? For the time being, it means the experts are the only ones with informed opinions, and even they are hedging their bets as they wait to see which good idea will produce the best results.

We are on the brink of worldwide change in one of the underpinnings of how everything works, and the dice are still in the air at this very moment. That is worth thinking about from time to time.

How Are They Made Right Now?

The very short answer is, it’s complicated.

The microchip manufacturing process involves hundreds of steps from design to mass production, almost all of them carried out in specialized ISO Class 1 ‘zero dust’ cleanrooms within dedicated fabrication facilities where temperature and air quality are strictly controlled. Microchips are built up from dozens, scores, even hundreds of layers of circuitry on a silicon wafer, each aligned to the one beneath it to within millionths of a millimetre. Different layers with differently sized features are etched using different lithography equipment. To give a sense of how small and precise circuits have become, a human red blood cell is 7,000 nanometres in diameter, and some of the finest circuits in a cutting-edge microchip are only 10 nanometres wide.
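Both figures in that comparison come straight from the paragraph above; the arithmetic works out like this:

```python
# The scale comparison above, in numbers.
red_blood_cell_nm = 7_000  # diameter of a human red blood cell, in nanometres
fine_circuit_nm = 10       # width of the finest cutting-edge circuits

print(f"{red_blood_cell_nm / fine_circuit_nm:.0f}x")
# -> 700x: seven hundred of those circuit lines fit across one blood cell
```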

There are three broad categories of chipmakers: Integrated Device Manufacturers (IDMs) like Intel and Samsung design and manufacture chips in enormous fabrication facilities; fabless semiconductor companies like AMD and Qualcomm focus on chip design without owning manufacturing facilities of their own; and foundries make chips to other companies’ specifications without having design or R&D capabilities of their own.

Where Are They Made Right Now?

Here is where it gets really interesting.

As you might expect in an industry that has become so fundamental to the world economy while also experiencing rapid growth for decades on end, success breeds success. First-movers and innovators are bought up or hired away by established players with deep pockets. Top talent is motivated to work for the top businesses. Consolidation and relentless process optimization have created giants in the United States, Taiwan, South Korea, Japan, and the Netherlands, but China’s government subsidies, procurement preferences, and other preferential policies have driven a meteoric rise in its share of the global market, from 3.8% of worldwide chip sales less than a decade ago to 9% in 2020. For context, Japan, the United States, and the EU each produced around 10% of the globe’s microchips in 2020.

With that said, not all chips are created equal. While China’s growth in chip fabrication capacity has been meteoric, what it manufactures is not the current generation pressed up against the edge of Moore’s Law; it is still playing catch-up, not innovating. Almost ninety percent of the chips at the bleeding edge of capabilities are made in Taiwan and South Korea, and that competitive advantage is constantly reinforced by government and private investment into R&D done by an elite pool of experts with deep institutional knowledge who have no interest in sharing their expertise with would-be competitors.

In a world of typhoons, earthquakes, tsunamis, pandemics, and political tensions, having most of the world’s chip production in one particularly vulnerable and volatile area has supply chain planners scrambling to make their risk models make sense.

The answer they keep coming up with is reshoring.

At one point the United States made 37% of the world’s microchips. Today it cannot guarantee the chips it needs for domestic applications will be delivered from overseas manufacturing plants without disruption. That cannot stand.

At the beginning of the year President Biden announced that Intel is building a $20-billion semiconductor fabrication facility in Columbus, Ohio. A bill proposing another $90 billion for semiconductor R&D, manufacturing, and supply chains has also been introduced, including $52 billion in incentives for chip manufacturers to build their new facilities in the United States.

Intel has since announced two more chip plants in Arizona that will cost $20 billion, and TSMC (Taiwan Semiconductor Manufacturing Co.) has also begun construction on a $12-billion facility in Arizona.

There are still enormous challenges ahead. These facilities will employ thousands of trained workers who will need to be inducted into an incredibly complex profession. Details of sourcing electricity, water, and other raw materials are still being figured out. Global supply chains will need to be completely rethought as major portions of the production footprint shift from East Asia to North America. What does it even mean for the future of the industry when the next generation of state-of-the-art facilities is being built on the other side of the globe from what has come before?

It is an exciting time. We do not have all the answers, or even educated predictions. Things are in motion though, and that is worth knowing and keeping an eye on. We look forward to discussing this further in the future.

Geoff Micks
Head of Content & Research
Executive Platforms

Geoff joined the industry events business as a conference producer in 2010 after four years working in print media. He has researched, planned, organized, run, and contributed to more than a hundred events across North America and Europe for senior leaders, with special emphasis on the energy, mining, manufacturing, maintenance, supply chain, human resources, pharmaceutical, food and beverage, finance, and sustainability sectors. As part of his role as Head of Content & Research, Geoff hosts Executive Platforms’ bluEPrint Podcast series as well as a weekly blog focusing on issues relevant to Executive Platforms’ network of business leaders.

Geoff is the author of five works of historical fiction: Inca, Zulu, Beginning, Middle, and End. The New York Times and National Public Radio have interviewed him about his writing, and he wrote and narrated an animated short for Vice Media that appeared on HBO. He has a BA Honours with High Distinction from the University of Toronto specializing in Journalism with a Double Minor in History and Classical Studies, as well as a Diploma in Journalism from Centennial College.