x86 is Not a CISC/RISC Hybrid

Erik Engheim
5 min read · Dec 11, 2020

Awesome that you have been an engineer at NeXT/Apple. I bet you have some really interesting stories to tell. But please, there is no need to be rude. I am just a regular Joe reading about technology, popularizing it, and doing my own subjective analysis. I have never claimed to be an expert in this field, and there will always be things I get wrong. I have already done several updates based on helpful feedback.

Still, I don’t think you have it all figured out either. E.g. former Intel chief architect Bob Colwell has been quite clear that micro-ops have nothing to do with RISC:

Intel’s x86’s do NOT have a RISC engine “under the hood.” They implement the x86 instruction set architecture via a decode/execution scheme relying on mapping the x86 instructions into machine operations, or sequences of machine operations for complex instructions, and those operations then find their way through the microarchitecture, obeying various rules about data dependencies and ultimately time-sequencing. The “micro-ops” that perform this feat are over 100 bits wide, carry all sorts of odd information, cannot be directly generated by a compiler, are not necessarily single cycle. But most of all, they are a microarchitecture artifice — RISC/CISC is about the instruction set architecture.

Thus it makes no sense to do as you do and talk about x86 being a CISC/RISC hybrid. x86 still has all the CISC disadvantages at the ISA level. It still has multiple addressing modes instead of a load/store architecture. It still has variable-length instructions of 1–15 bytes, which, as I have pointed out, makes decoding multiple instructions in parallel that much harder.
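To make that decoding point concrete, here is a minimal C sketch of the difference (the instruction lengths and byte values are invented for illustration, not real x86 encodings): with fixed-width instructions every boundary is known up front, so the work parallelizes trivially, while with 1–15 byte instructions each boundary is only known after the previous instruction has been length-decoded.

```c
/* Toy sketch, not a real decoder: why fixed-width decode parallelizes
 * easily while variable-width decode does not. Lengths are made up. */
#include <stdio.h>
#include <stddef.h>

/* Fixed 4-byte instructions: instruction i starts at byte 4*i, so all
 * decoders can begin working on their instruction at once. */
static void decode_fixed(const unsigned char *code, size_t nbytes) {
    (void)code; /* the bytes don't matter for finding the boundaries */
    for (size_t i = 0; i < nbytes; i += 4)   /* every iteration is independent */
        printf("fixed insn at offset %zu\n", i);
}

/* Stand-in for real length decoding: map the first byte to 1-15 bytes. */
static size_t insn_length(unsigned char first_byte) {
    return (size_t)(first_byte % 15) + 1;
}

/* Variable 1-15 byte instructions: where instruction i+1 starts is only
 * known after instruction i has been length-decoded, so finding the
 * boundaries is inherently sequential. */
static void decode_variable(const unsigned char *code, size_t nbytes) {
    size_t off = 0;
    while (off < nbytes) {                   /* each step depends on the last */
        size_t len = insn_length(code[off]);
        printf("variable insn at offset %zu, length %zu\n", off, len);
        off += len;
    }
}

int main(void) {
    unsigned char code[32] = {3, 9, 1, 7, 14, 2, 0, 5};
    decode_fixed(code, sizeof code);
    decode_variable(code, sizeof code);
    return 0;
}
```

Real decoders use length predictors and marker bits to claw back some parallelism, but that is exactly the kind of extra machinery a fixed-width ISA never has to pay for.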

x86 doesn’t have any of the things that made the original RISC CPUs RISC. It doesn’t have fixed-width instructions. And it most certainly doesn’t have a small and simple instruction set. Now, it could be argued that ARM has gotten more CISC-like in behavior by adding a lot more instructions and making some of them more complex. But it still uses fixed-length instructions, unless you are going to get pedantic and claim Thumb instructions make it variable length.

And of course it still has a load/store architecture.
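To illustrate what that buys you, here is a hedged C sketch with an invented micro-op format (real micro-ops, as Colwell notes above, are over 100 bits wide and carry far more state): a CISC-style add-from-memory instruction must be cracked into two internal operations by the front end, whereas load/store code already arrives as one instruction per operation.

```c
/* Illustrative sketch only: invented micro-op format, register numbers
 * and helper names, not Intel's or AMD's actual internals. */
#include <stdio.h>

typedef enum { UOP_LOAD, UOP_ADD } UopKind;

typedef struct {
    UopKind kind;
    int dst, src;   /* made-up register numbers */
} Uop;

/* CISC-style "add reg, [mem]" carries an implicit memory access, so the
 * decoder must emit two internal operations for one architectural insn. */
static int crack_add_from_memory(int dst, int addr_reg, Uop out[2]) {
    out[0] = (Uop){ UOP_LOAD, /*dst=*/99, /*src=*/addr_reg }; /* temp reg */
    out[1] = (Uop){ UOP_ADD,  dst,        /*src=*/99 };
    return 2;
}

/* Load/store-style code already separates the memory access from the
 * arithmetic, so each instruction maps to a single internal operation. */
static int pass_through(UopKind kind, int dst, int src, Uop out[1]) {
    out[0] = (Uop){ kind, dst, src };
    return 1;
}

int main(void) {
    Uop buf[2];
    int n = crack_add_from_memory(/*dst=*/1, /*addr_reg=*/2, buf);
    printf("CISC add-from-memory cracked into %d micro-ops\n", n);

    n  = pass_through(UOP_LOAD, 3, 2, buf);   /* like: ldr r3, [r2]   */
    n += pass_through(UOP_ADD,  1, 3, buf);   /* like: add r1, r1, r3 */
    printf("load/store pair mapped to %d micro-ops (one each)\n", n);
    return 0;
}
```

The point is not that cracking is impossible, only that it is extra front-end work the ISA forces on every x86 implementation, which is precisely Colwell's point: the micro-ops are a microarchitecture artifice, not a RISC ISA hiding underneath.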

Your last paragraph is delusional clap trap. AMD has been developing APUs and was the premier driver of HSA long before Apple got into the game.

I made it abundantly clear in my article that Apple was in no way first with SoC designs. The point I made was that Apple has gone much further in that direction, and you cannot argue against that point. Sticking a GPU and a CPU on the same silicon die is not quite the same as what Apple has done.

What’s worse is you know nothing of Xilinx and how it is going to completely revolutionize AMDs designs.

I have never claimed to be some kind of CPU oracle. But I would love to hear your thoughts on it. Maybe it is something I could include in the story.

It’s already happening with Zen 4 and RDNA 3.0/CDNA 2.0 based hardware designs in late stages.

Zen 4, from what I understand, is scheduled for 2021 or early 2022, which is around the same time Apple will release 32-core Apple Silicon chips, and incidentally TSMC is scheduled to bring its 3 nm process in 2022. Yet Zen 4 is being designed for 5 nm. It could be that Apple will keep being one process node generation ahead.

I don’t doubt that AMD is capable of coming up with some clever hardware trick which can put them ahead of the game.

However, what I struggle to see is how AMD can pull off any special trick that Apple cannot do as well. My point is that there is nothing special about the x86 ISA that gives AMD access to micro-architecture magic not available to other chip makers.

However, there clearly are limitations imposed by the x86 ISA, which limit the ability to add decoders, as AMD has admitted themselves.

I have far more confidence in Lisa Su knowing how to execute than you prognosticating the future.

No doubt she is a brilliant woman. But even she cannot wave a magic wand and make the limitations imposed by the x86 ISA magically disappear.

Over time it will be really hard to stay ahead of the competition when you have Apple, Amazon, Google and likely Nvidia all throwing money at making high-performance ARM designs. Are you really betting that AMD is going to keep beating all these competitors, all with deep pockets, year after year? You can add a whole host of smaller ARM chip-making startups to the mix as well.

The IP, engineering talent and teams combined between the two companies is going to have you asking why in the hell didn’t Apple just switch to AMD.

I predicted four years ago that Apple would switch to ARM, and I was just one year off in my prediction. If you read that whole prediction, which was based on no knowledge of any Apple ARM plans, I laid out the rationale based on the advantages in synergy and longer battery life. In fact, back then I took it for granted that Apple could not beat Intel on performance.

Still, to me a transition made perfect sense. Instead of two different hardware platforms to maintain, they would have one. A lot of the SoC designs for the iPad could be reused. Using x86 complicated Apple’s use of specialized co-processors.

Thus it makes no sense that they should go for AMD. It would have given them pretty much all the same disadvantages as Intel, and minimal benefits. Sure, they could get higher performance, but most users don’t need radically higher performance. Battery life and running cool matter more. AMD cannot offer that.

Not to mention AMD would have complicated Apple’s heterogeneous computing strategy. Apple makes specialized chips that it uses on its iPad and iPhone SoCs. I don’t quite see how they would bring those over to some kind of AMD SoC. This never happened with their Intel partnership, so I don’t see why AMD would have been any different.

I am clueless about a lot that is going on in this industry, but I can make arguments based on what I know. It is up to those with better knowledge to produce counterarguments if they think I am wrong.
