For each one, we’ll list the common fallacy as well as why it is not accurate.
CPU A has fewer cores but runs at 4 GHz, while CPU B has 6 cores and runs at 3 GHz. Which one is better?
This kind of spec-sheet comparison is one of the cardinal sins of computer hardware.

In a perfect world where everything else about two chips was identical, the one with more cores would be faster. Likewise, a processor running at 4 GHz will be faster than the same chip running at 3 GHz. However, once you start adding in the complexity of real chips, the comparison becomes meaningless. There are so many variations and parameters that it is impossible to compare CPUs on headline specs alone.
There are workloads that prefer higher frequency and others that benefit from more cores.
One CPU may consume so much more power that the performance improvement is worthless.
One CPU may have more cache than the other, or a more optimized pipeline.
The list of traits that the original comparison misses is endless.
So you should never compare CPUs this way.
Two CPUs in the same price category running at the same frequency can have widely varying performance. A better-optimized pipeline, for example, can reduce wasted cycles and increase the processor's effective performance.
The broad system architecture can also play a huge part.
If anything, performance per watt is becoming the dominant factor used to quantify performance in newer designs.
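To make that concrete, here's a minimal sketch of how a perf-per-watt comparison can flip a ranking; the scores and wattages below are made-up illustration values, not measurements of any real chips.

```python
# Hypothetical perf-per-watt comparison; scores and wattages are made up.
chip_a = {"score": 10_000, "watts": 250}
chip_b = {"score": 9_000, "watts": 125}

for name, chip in [("A", chip_a), ("B", chip_b)]:
    print(f"Chip {name}: {chip['score'] / chip['watts']:.0f} points per watt")
# Chip A: 40 points per watt
# Chip B: 72 points per watt -- slower overall, but far more efficient
```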
The current trend, known as heterogeneous computing, involves combining many computing elements together into a single chip.
Generally speaking, the chips on most desktops and laptops are CPUs.
The most common metric used is the size of the tiny transistors that make up the product.
Another, potentially larger, caveat is that there is no standardized way to take this measurement: one manufacturer's process-size figure doesn't necessarily mean the same thing as another's.
CPUs have a few very powerful cores, while GPUs have hundreds or thousands of less powerful cores.
This allows them to process more work in parallel.
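As a rough illustration (not real GPU code), here's a Python sketch where Python worker processes stand in for GPU cores: an element-wise operation has no dependencies between items, so the same work can be split across as many workers as you have.

```python
# Conceptual sketch only: worker processes standing in for GPU cores.
from multiprocessing import Pool

def brighten(pixel: int) -> int:
    # The same tiny operation, repeated for every element independently.
    return min(pixel + 10, 255)

if __name__ == "__main__":
    pixels = list(range(256)) * 1000  # stand-in for image data

    # Serial, CPU-style: one worker walks the whole list.
    serial = [brighten(p) for p in pixels]

    # Parallel, GPU-style: the same work split across many workers.
    with Pool(processes=8) as pool:
        parallel = pool.map(brighten, pixels)

    assert serial == parallel
```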
There is no good way to compare GPU core counts across different vendors.
Each manufacturer will have a vastly different architecture, which makes this kind of metric almost meaningless.
For example, computing 1.0 + 1.0 is much easier than computing 1234.5678 + 8765.4321.
Companies can mess with the type of calculations and their associated precision to inflate their numbers.
Looking at FLOPs also only measures raw CPU/GPU computation performance and disregards several other important factors like memory bandwidth.
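For a sense of where the headline number comes from, here's a back-of-envelope sketch; the core count, clock, and FP16 rate below are hypothetical, and not every chip runs half-precision at twice the FP32 rate.

```python
# Back-of-envelope peak-FLOPS math; every number here is hypothetical.
cores = 2048            # shader cores
clock_hz = 1.5e9        # 1.5 GHz
flops_per_cycle = 2     # a fused multiply-add counts as two operations

peak_fp32 = cores * clock_hz * flops_per_cycle
print(f"FP32: {peak_fp32 / 1e12:.1f} TFLOPS")   # 6.1 TFLOPS

# On hardware that runs half-precision at twice the FP32 rate, quoting
# FP16 doubles the headline figure without changing the chip at all.
peak_fp16 = peak_fp32 * 2
print(f"FP16: {peak_fp16 / 1e12:.1f} TFLOPS")   # 12.3 TFLOPS
```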
Companies can also optimize the benchmarks they run to unfairly favor their own parts.
What’s important to note though is that ARM doesn’t actually make physical chips.
Rather, they design the blueprints for how these chips should operate, and let other companies build them.
It’s like giving an author a dictionary and having them write something.
An ARM chip in a computer mouse, for example, doesn't need a GPU or a very powerful CPU.
This makes the developer’s job easier and increases compatibility.
ARM is the king of mobile and embedded systems, while x86 controls the laptop, desktop, and server market.
There are some other architectures, but they serve more niche applications.
Software built for one architecture won't simply run on another; it's like translating a book into another language.
ARM has differentiated from x86 in several key ways, which have allowed them to dominate the mobile market.
The most important is their flexibility and broad range of technology offerings.
When building an ARM CPU, it’s almost as if the engineer is playing with Legos.
They can pick and choose whatever components they want to build the perfect CPU for their system.
Need a chip to process lots of video?
You might add in a more powerful GPU.
Need to run lots of security and encryption?
You've got the option to add in dedicated accelerators.
If you see x86-64 mentioned somewhere, that’s just the 64-bit version of x86.
One other area that can cause confusion between ARM and x86 is in their relative performance.
The whole design philosophy of ARM is to focus on efficiency and low-power consumption.
They let x86 have the high-end market because they know they can’t compete there.
While Intel and AMD focus on maximum performance with x86, ARM is maximizing performance per Watt.
Many workloads that were traditionally run on a CPU have moved to GPUs to take advantage of their parallelism.
GPUs are designed for small operations that are repeated over and over. Not every workload fits that mold, though, which is why we still need CPUs.
For programs that can’t be parallelized, a CPU will always be much faster.
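Here's a minimal sketch of that kind of workload: each iteration needs the previous result (a loop-carried dependency), so no amount of extra cores helps, and single-core speed is all that matters.

```python
# Each step depends on the prior result, so the work is inherently
# serial; extra cores can't help here.
def iterate(x: float, steps: int) -> float:
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)  # logistic map: needs the previous value
    return x

print(iterate(0.5, 1_000_000))
```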
Moore's Law is the observation that the number of transistors in a chip roughly doubles every two years.
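As a quick worked example of what that doubling implies (the starting count is a made-up figure, not real product data):

```python
# Doubling every two years, starting from a hypothetical transistor count.
def projected_transistors(start_count: float, years: float) -> float:
    return start_count * 2 ** (years / 2)

# A chip with 1 billion transistors would be expected to reach
# roughly 32 billion after 10 years if the trend held.
print(f"{projected_transistors(1e9, 10):.2e}")  # 3.20e+10
```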
The limitation here is getting enough power to the chip and then removing the heat it generates.
Modern chips draw hundreds of Amps of current and generate hundreds of Watts of heat.
That’s why we can’t simply make a bigger chip.
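The arithmetic behind those numbers is just power = voltage x current; the values below are illustrative ballparks, not the specs of any particular chip.

```python
# power = voltage * current; both inputs are hypothetical ballparks.
voltage = 1.0     # volts -- modern cores run near 1 V
current = 300.0   # amps under heavy load

power = voltage * current
print(f"{power:.0f} W of heat to dissipate")  # 300 W
```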
This is yet another area that has given benefits in the past, but isn’t likely to continue.
That's due to a combination of factors, and there's no easy way to do any better.