Semiconductors 3.0

I am sick and tired of hearing the same buzzwords a zillion times every single day. IoT, Cognitive Computing, Cloud Computing, Fog Computing, Big Data, Artificial Intelligence, Smart-whatever (from objects to cities and pretty soon toilet paper, I’m afraid). What really makes me sick is that these buzzwords are used, most of the time, to avoid getting into the “low-level” details; they are just thrown at us to justify unclear strategies. Intel wants to build the IoT fabric. What is that exactly? Does it mean they’ll compete with Cisco? To me, it’s just useless garbage language that avoids admitting they don’t have a clue where they are going. On the other side of the competitive landscape, SoftBank buys ARM because of IoT. Right, obvious, isn’t it? But what does it mean exactly? Why would a telco (or a pure financial institution, I’m not sure really) need to own a CPU architecture? And why would ARM dominate IoT the way they dominate mobile? What software legacy makes their instruction set architecture desirable in the IoT space? Hey, I am not saying Intel will not be the IoT fabric or that ARM won’t power IoT edge devices. I am just saying that throwing buzzwords at us isn’t sufficient to explain and justify strategies. Throwing slogans at people is what advertisers and politicians do. The tech world, i.e. us, should be a little bit more pragmatic and should not accept slogans as absolute truth.

Alright, now that I have expressed my frustration, let me try to dig into the details and see what IoT might do to the competitive landscape. The title of this post is Semiconductors 3.0, so you might guess that there was a 1.0 and a 2.0.

Semiconductors 1.0: The PC era
If we look at the different phases of semiconductor market evolution, we can clearly see the PC era as THE first major continental drift that totally changed the landscape. Intel became the huge behemoth we know, DRAM became strategically important and put Japan Inc. on the map, and, probably even more important, software legacy became a major driving force. This was the first mass-market “platform”, the Wintel platform. That era was characterised by several hundred million “objects” (PCs, workstations) priced at around $1000. I am just using round numbers for the sake of simplification; these are the orders of magnitude. A $100B market: that’s enough to change the landscape.

Semiconductors 2.0: The Smartphone era
The PC stopped growing and started declining. What came next was a battery-powered device, the smartphone. Intel totally blew it. Qualcomm became the huge behemoth we know. Flash memory became strategically important and put Korea Inc. on the map, and the software changed totally. Microsoft also blew it, and now we have the Apple and Google platforms, both running on ARM cores. Because of the app store phenomenon, there are only two platforms: the app gap is enough to kill even a good platform if you can’t find all of the popular apps on it (Windows Phone, Symbian and a few others died or are dying as a result). This era is characterised by several billion units of objects (in this case, phones or tablets) priced at around $100. Again, I am using round numbers to keep it simple. So, at $100B, the market size is actually similar to the PC era.

So, what’s next? 
Guess what: mobile isn’t growing much anymore. The only growth is in the low and mid-range, in developing countries, so price pressure is getting worse and worse. This time, Qualcomm is not even alone. MediaTek, Samsung and several Chinese competitors are also capable of making mobile SoCs and modems, running the very same Android. The poor Western companies who tried to compete are all dead after losing tens of billions of dollars. Broadcom, ST-Ericsson, NXP, Infineon, Icera/Nvidia, Intel, TI: the list is long and sad. One could say that the 2.0 era was also a drift towards the East. Korea and China are much stronger than they were in the 1.0 PC era. Taiwan, which was a strong player in the PC days, remained a strong player in the mobile period, mostly thanks to TSMC and MediaTek.

Semiconductors 3.0: The Internet of Everything era
Where is the next growth sector? I guess we all agree it has to do with the emergence of a bunch of new connected devices. But we should be careful about extending the 1.0 and 2.0 rules (I intentionally did not use “paradigm”, to avoid one of my least favorite buzzwords) to this new era. Why? Mostly because of fragmentation.
IoT is not one market. It’s a collection of many markets with different cycles, different technical requirements and different growth patterns. To simplify to the extreme, we can probably say that it’s a market for tens of billions of $10 units. So we keep the same constant $100B TAM and everyone is happy… Well, not quite.
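The round-number arithmetic behind that constant TAM can be sketched in a few lines (the unit counts and prices are this post’s own simplified figures, not market data):

```python
# Back-of-the-envelope TAM check using the post's round numbers:
# each era trades a 10x drop in unit price for a 10x rise in volume.
eras = [
    ("1.0 PC",         100e6, 1000),  # ~100M PCs/workstations at ~$1000
    ("2.0 Smartphone",   1e9,  100),  # ~1B phones/tablets at ~$100
    ("3.0 IoT",         10e9,   10),  # ~10B connected devices at ~$10
]

for name, units, price in eras:
    tam = units * price
    print(f"Semiconductors {name}: ${tam / 1e9:.0f}B TAM")  # $100B each time
```

Same $100B each era; what changes completely is who captures it and how fragmented it is.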
The fragmentation brings new rules compared to the PC and mobile eras. Automotive, industrial, networking, consumer, medical and military applications are not “one” IoT market. For example, there is absolutely no reason why a single merchant software platform would win in all the sub-segments. Plain Linux is probably fine for almost all applications. Most IoT devices will not need, and certainly will not encourage, large open app stores, which would otherwise have been a strong driving force against fragmentation. Do you really believe BMW will let you download a Lewis Hamilton app to change your autonomous driving style?

Uncertainty and fragmentation are not conducive to the way US corporations think. Let’s face it, even if it’s politically incorrect: most technology companies in the US are hoping to build monopolistic positions. They invest big money when they feel there is a possibility to build a quasi-monopoly, and they are extremely good at it. I would therefore bet that Semiconductors 3.0 will continue the drift towards the East. Uncertainty and fragmentation aren’t that bad if you move really fast and adapt really fast. We could also see some European companies doing reasonably well, close to their end markets. Germany is important for automotive and industrial, for example, so NXP and Infineon could continue to do well.

How about processor platforms? I think the game is totally open and I believe that it’s open both at the edge and in the cloud.

At the edge, ARM is definitely under threat, because the 32-bit core is a commodity and the software legacy is not so relevant.
In addition, I am a firm believer that more processing will need to happen at the edge, to pre-process the enormous amount of data generated by hundreds of billions of sensors. This pre-processing is necessary for latency, bandwidth and data-transportation-cost reasons. It will most probably be done by embedded neural processing units; let’s call them ENPUs. A prototype of this approach is the Quark SE MCU from Intel. It has a 32-bit x86 Quark core, but its real power comes from its embedded neural processing unit, which does super-fast and super-low-power sensor data analytics. It would actually be fun to see Intel become the king of the edge almost by accident, as this is clearly not their strategic focus but rather a hobby.
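As a rough illustration of why edge pre-processing saves bandwidth, here is a minimal sketch: instead of streaming every raw sample to the cloud, the edge node uploads only anomalous readings plus a periodic summary. The thresholds, window and data are made up for illustration; this is not any real device’s API.

```python
# Edge pre-processing sketch: upload anomalies + one summary per window
# instead of every raw sample. Threshold and data are illustrative only.
from statistics import mean, pstdev

def preprocess(window, z_threshold=2.0):
    """Return the events worth uploading from one window of raw samples."""
    mu, sigma = mean(window), pstdev(window)
    anomalies = [x for x in window if sigma and abs(x - mu) / sigma > z_threshold]
    summary = {"mean": mu, "stdev": sigma, "n": len(window)}
    return anomalies, summary

raw = [20.1, 20.3, 19.9, 20.0, 20.2, 85.0, 20.1, 19.8]  # one spike at 85.0
events, summary = preprocess(raw)
print(f"uploaded {len(events) + 1} messages instead of {len(raw)} raw samples")
```

Here only the spike plus the summary go upstream: two messages instead of eight samples, and the ratio only improves as windows get longer.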

In the cloud, Intel is under serious threat because the processing required for big data analytics (buzzwords again, sorry) is different. Deep Learning (more buzzwords), or whatever AI technology takes off, does not care about the x86 instruction set. It cares about massive parallelism and removal of the Von Neumann bottleneck and, I strongly believe, it will want in-memory computing. A new kind of cloud processing unit will emerge; let’s call it the CNPU, for Cloud Neural Processing Unit, and the control processor will become a commodity, doing ancillary tasks. GPUs are doing this neural processing today, but that’s just a gap filler until something better emerges. No matter what Nvidia says, GPUs were designed to do graphics, not neural processing. Our brain isn’t full of GPUs, sorry guys. That’s ARM’s opportunity to finally enter the data center in a semi-big way, but not in the key strategic socket. Who will take that neural processing socket? New names, probably: Google, Alibaba, Baidu, Facebook, Amazon, Apple… Anyone trying to shoehorn their ISA, whether ARM or x86, into that socket will fail. Are ARM or Intel capable of coming up with something radically different from their core technology? I am not betting on it, but I might be wrong.
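The Von Neumann bottleneck argument can be made concrete with a quick arithmetic-intensity estimate. The layer width, precision and bandwidth figures below are round numbers of my own, purely illustrative:

```python
# A dense matrix-vector product (the core op of neural inference) does
# 2 FLOPs (multiply + add) per weight, and every weight must be fetched
# from memory to be used exactly once. Throughput is therefore capped by
# memory bandwidth, not by how many ALUs the chip has.
N = 4096                          # illustrative layer width
bytes_per_weight = 2              # fp16 weights
flops = 2 * N * N                 # one multiply-add per weight
traffic = N * N * bytes_per_weight
intensity = flops / traffic       # FLOPs per byte moved: 1.0 here

dram_bandwidth = 100e9            # a hypothetical 100 GB/s memory system
print(f"memory-bound ceiling: {intensity * dram_bandwidth / 1e12:.1f} TFLOP/s")
```

At one FLOP per byte moved, a 100 GB/s memory system caps out at 0.1 TFLOP/s no matter how many ALUs sit behind it; computing in or next to the memory array removes that weight traffic entirely, which is why in-memory computing is attractive for this workload.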

I also believe that the 3.0 era will see a new memory technology emerge. Artificial intelligence needs a fast-write, fast-read, non-volatile memory. Samsung/Grandis, Intel/Micron, WD/Everspin and others are working hard on this. I don’t know who will win, but it will be interesting to watch.

I am sure Mr Son, who just wrote a $32B check to acquire ARM, wants everything I just wrote to be totally wrong. He’s much smarter than I am, so he’s probably right 🙂 Stay tuned as we watch this new era develop.

PS: This is a personal note. Most of you won’t care, so you can stop reading here. As I write this post about the next phase of the semiconductor market, I cannot describe how sad I feel when I see ST, my former employer, going through an endless decline. The fragmented market I describe as the Semiconductors 3.0 era would have been a PERFECT market for ST. ST has sensors, MCUs, analog, mixed-signal and power. It has almost all the bits and pieces (OK, they’re a little weak in connectivity and in software, but that could be fixed) to be the Intel/Qualcomm of the Semiconductors 3.0 era. It also has 28nm FD-SOI, which is the perfect process for IoT edge devices. What’s missing is the leadership to unify all the people and all the silos in ST around one clear vision. I wish them good luck anyhow.

Philippe Lambinet

Of Robots and Men

In 1937, John Steinbeck published his novel Of Mice and Men, telling the story of two migrant workers during the Great Depression. Now, almost 80 years later, there is a growing fear of an unprecedented depression and a fear that humans will soon be under threat from robots.

“The oldest and strongest kind of fear is fear of the unknown,” observed H.P. Lovecraft, certainly one of the all-time great specialists of fear. So let’s try to know more about this threat. Hopefully, it will reduce the fear and help us better prepare for the consequences of the revolution underway.

Revolution? That’s the word used everywhere to describe what is happening. We are now at the beginning of the fourth industrial revolution. Let’s take a look at the history of industrial revolutions and how they impacted mankind.

The first industrial revolution came with the steam engine. Progressively, from 1780 to 1850, small workshops were replaced by factories, coal became the dominant source of energy, and horses were replaced by steam-powered machines. This first revolution was much more threatening to horses than it was to humans!

The second industrial revolution, from 1880 to 1950, came with petrol and electricity (produced from coal and petrol) as the main energy sources. This was the era of productivity gains. The Ford Model T factory remains the icon of this second revolution. Workers started to earn enough salary to become consumers: mass production for mass consumption. This appeared to be a non-threatening revolution for humans. In fact, after two revolutions, people were richer. They were also more urban. However, our planet started to suffer. The first virulent denunciations of industrial society’s waste of natural resources appeared at the beginning of the 20th century.

While all historians agree on the first and second revolutions, descriptions of the third and fourth vary. Some authors describe what is underway as the third industrial revolution; some call it the fourth. I believe it makes sense to separate the two phases, so I’ll stick with the third/fourth separation.

The third is already about robotics. The very first industrial robot made its debut in 1961, at a GM factory. The auto industry, once again, was pioneering new manufacturing technologies. These robots were dumb: each could do one task under the control of a simple program running on a very basic microprocessor. The aim was to outsource painful or dangerous tasks to machines and keep humans safe and healthy. That’s the way it was perceived, and this (r)evolution was easily accepted. That’s pretty much where the world is right now.

The fourth industrial revolution, which just started, is about smart robots. What some call cyber-physical systems, or Industrial IoT. It is also about sustainable energy sources and protection of the environment. It is much more complex from an ethical point of view and, of course, from a social point of view.

While a vast majority of people are in favour of robots assisting them with painful or dangerous tasks, they are very much against robots taking care of kids or elderly people, and they have great reservations about robots performing surgery, for example. For the first time, machines are not there to replace horses or assist humans. They are clearly positioned as replacements for humans, including for safe, clean and even qualified tasks.

What most people tend to forget is that this new capability will also enable things that were not possible before. Robotisation will allow higher-quality products, for example, because quality inspection previously done on a sampling basis can now be done on 100% of units, at several steps along the production line. Smart data analytics and smart factories will also reduce waste. The first and second revolutions were about coal and petrol; the fourth revolution is about renewable energy and sustainability. It is striking to see that Tesla considers the factory part of the product and puts the same emphasis on innovation in its factories as in its cars. Again, the auto industry is showing the way.

What relatively wealthy people in the occidental part of the northern hemisphere, a very small minority of the world’s population, also forget is that their societal model was based on the availability of cheap labour to take care of the tasks they did not want to perform. How long were they hoping it could last? Certainly not forever!

We can see the fourth industrial revolution as a threat. But we can also see it as an opportunity to save the planet, better share the wealth and improve the quality of life as new services become available to more people. It is also an opportunity to re-localize some activities, fight the most negative effects of globalisation and develop industries in under-industrialized regions of the world without repeating the environmentally destructive approach of the past. Industry 4.0 will enable the emergence of smaller factories, located closer to their end markets, powered by renewable energy and producing goods designed locally. Industry 4.0 may very well see the end of the “designed in California, produced in China” era. What about humans? Hopefully, they’ll have an opportunity to live in a “designed and produced near you” era. Robots may become creative and marketing savvy one day but that will have to wait for Industry 6.0, at least!

Philippe Lambinet

Welcome to IntelliTech Consulting

 

Founded as an advisory firm in 2016, IntelliTech is led by a veteran of the semiconductor and consumer electronics industries. IntelliTech is part of a worldwide network of consulting firms. Our value proposition is to provide hands-on advice for companies that need to position themselves for growth and create sustainable returns for their shareholders.

Our team specializes in accelerating business take-off and strategic positioning, growth programs, cash-flow & margin improvements, and interim management (CEO, sales & marketing, and operations).

IntelliTech operates worldwide, taking on complex national and international projects.