Apple M1 foreshadows Rise of RISC-V
The M1 is the beginning of a paradigm shift, which will benefit RISC-V microprocessors, but not the way you think.

By now it is pretty clear that Apple’s M1 chip is a big deal, and the implications for the rest of the industry are gradually becoming clearer. In this story I want to talk about a connection to RISC-V microprocessors which may not be obvious to most readers.
Let me give you some background first: Why Is Apple’s M1 Chip So Fast?
In that story I talked about two factors driving M1 performance. One was the use of a massive number of decoders and Out-of-Order Execution (OoOE). Don’t worry if that sounds like technological gobbledegook to you.
This story will be all about the other part: heterogeneous computing. Apple has aggressively pursued a strategy of adding specialized hardware units, which I will refer to as coprocessors throughout this article:
- GPU (Graphics Processing Unit) for graphics and many other tasks with a lot of data parallelism (doing the same operation on many elements at the same time).
- Neural Engine. Specialized hardware for doing machine learning.
- Digital signal processing (DSP) hardware for image processing.
- Video encoding in hardware.
Instead of adding a lot more general-purpose processor cores to their solution, Apple has started adding a lot more coprocessors. You could also use the term accelerator.
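To make the data-parallelism idea concrete, here is a minimal sketch in Swift. The function names are my own, and the vDSP call runs on the CPU’s vector (SIMD) units rather than on a true coprocessor, but the programming pattern is the same one you use when handing work to the GPU via Metal or to the Neural Engine via Core ML: instead of looping over elements one by one, you describe the operation on the whole array and let specialized hardware churn through it.

```swift
import Accelerate

// Element-by-element loop: the general-purpose CPU core does all the work.
func scaleWithLoop(_ input: [Float], by factor: Float) -> [Float] {
    var output = [Float](repeating: 0, count: input.count)
    for i in 0..<input.count {
        output[i] = input[i] * factor
    }
    return output
}

// The same operation expressed as "apply this to the whole array at once".
// vDSP dispatches it to the CPU's vector units; the identical mindset applies
// when offloading to a coprocessor such as the GPU or Neural Engine.
func scaleWithVDSP(_ input: [Float], by factor: Float) -> [Float] {
    return vDSP.multiply(factor, input)
}

let samples: [Float] = [1, 2, 3, 4]
print(scaleWithLoop(samples, by: 2))   // [2.0, 4.0, 6.0, 8.0]
print(scaleWithVDSP(samples, by: 2))   // [2.0, 4.0, 6.0, 8.0]
```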
This isn’t an entirely new trend: my good old Amiga 1000 from 1985 had coprocessors to speed up audio and graphics. Modern GPUs are essentially coprocessors. Google’s Tensor Processing Units are a form of coprocessor used for machine learning.

What is a Coprocessor?
Unlike a CPU, a coprocessor cannot live alone. You cannot make a computer just by sticking a coprocessor into it. Coprocessors are special-purpose processors that do a particular task really well.