Thanks for the feedback. I deliberately left out discussion of the nanometer process, as there are simply too many factors one could discuss. I also left out mentions of the M1 having e.g. more ALUs, significantly faster memory, a larger L1 cache, etc.
The size of the nodes is something people can fairly easily look up themselves. I wanted to focus on the things that are not so easy to grasp when you read articles about the M1 or microchips in general.
Also, if I were to write about it, I would have wanted to give context, and discussing the size of the nodes is a big article in and of itself, e.g. since Intel's numbers are not comparable to the numbers used by TSMC.
As for the power usage, I would love to hear your sources on that. I might include it in the article.
The problem I have seen thus far is that there is no good authority on what the TDP of the M1 is. You cannot get it by simply looking at how much power the socket draws, as some have done. TDP is defined for realistic, high workloads. So I am not sure it covers something like Geekbench, which is probably not a realistic workload, as it is designed to stress the CPU to the max.
Next is the problem of comparing the M1 to AMD and Intel chips. The M1 is basically a whole computer. So the TDP you get for it is not the TDP for the CPU cores alone. It covers the CPU cores, Neural Engine, GPU cores, memory, IO controller, and many other things.
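To make the accounting problem concrete, here is a minimal sketch. All the numbers below are invented purely for illustration (they are not measurements of the M1 or any real chip); the point is only that package-level power and CPU-core power are different quantities.

```python
# Hypothetical power breakdown of an SoC package, in watts.
# These figures are made up for illustration only.
soc_power_w = {
    "cpu_cores": 10.0,
    "gpu_cores": 7.0,
    "neural_engine": 2.0,
    "memory": 3.0,
    "io_and_other": 2.0,
}

# What a wall/socket measurement roughly captures: the whole package.
package_total = sum(soc_power_w.values())

# What a CPU-to-CPU comparison would actually need: cores only.
cpu_only = soc_power_w["cpu_cores"]

print(f"package draw: {package_total:.0f} W")   # 24 W
print(f"CPU cores alone: {cpu_only:.0f} W")     # 10 W
```

Comparing that 24 W package figure against a desktop CPU's core-only TDP would make the SoC look worse than it is, which is exactly the apples-to-oranges problem.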
For instance, AnandTech discusses here why the power drawn is often higher than the TDP.
Also, the node size you are able to use is not arbitrary. E.g. one could argue that Intel chips back in the 90s were crap, and that they only had an advantage due to using smaller nodes than their RISC competitors. Still, because Intel had much larger production volumes, they were able to use smaller nodes. Apple has much the same advantage today: they have higher volumes than AMD, partly because they make iPhones, iPads, and relatively few distinct computer models.
Apple also makes smaller chips than AMD, which allows them to use smaller nodes. If you make large chips, you will get a higher defect rate on an immature process node, and that costs money. Thus Apple's ability to use smaller nodes is not arbitrary but directly connected to a real advantage they have in the market.
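The die-size-versus-yield relationship can be sketched with the classic Poisson yield model, where yield falls exponentially with die area times defect density. The defect density and die areas below are assumed round numbers for illustration, not actual TSMC figures:

```python
import math

def die_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Poisson yield model: fraction of dies with zero defects.

    Yield = exp(-D * A), where D is defects per cm^2 and A is die area.
    """
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Assume an immature node with 0.5 defects/cm^2 (hypothetical figure).
D = 0.5

small = die_yield(D, 1.2)  # ~120 mm^2 die, roughly phone-SoC sized
large = die_yield(D, 6.0)  # ~600 mm^2 die, a big high-end chip

print(f"small die yield: {small:.0%}")  # ~55%
print(f"large die yield: {large:.0%}")  # ~5%
```

Same wafer, same defect density, but the large die throws away most of the wafer. That is why a company shipping small dies in huge volumes can afford to jump onto an immature node earlier than one shipping large dies.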
Back in the 90s, the market did not care that RISC CPUs were theoretically better than Intel chips on paper. It cared about who could deliver the best real-world performance at the lowest prices.
We will see how this plays out. Eventually AMD will be able to use the same process node as Apple (they partially do already), but by that point Apple should also be making larger chips. The M2 may be much larger, with more cores, more cache, etc., making it higher performance. Apple is not going to sit still and let AMD catch up.