With AMD's Threadripper and Intel's Core i9, plus the Vega and Volta GPU launches, the future of computers is looking interesting again. Does this mean a return of the AMD vs Intel Processor War?
Lately, processor performance hasn't been increasing all that rapidly. Clock speeds have been rising gradually, and even then mostly through the dynamic overclocking built into most personal computer processors these days. Maximum core counts continue to rise, as does the size of the on-die cache, but the performance benefits of these aren't universal; some applications simply need higher single-threaded performance.
Meanwhile, GPUs have been showing huge performance improvements with each generation and commanding ever-higher profit margins, at the expense of the main processors, Intel's bread and butter.
In recent years, Intel has been emphasizing power efficiency over performance, leading to ever-smaller processors and enabling tablets and ultrabooks to become ever more capable. Though Intel also continues to develop processors with 16 (and more) cores, it's hard to ignore that single-threaded performance hasn't changed much from generation to generation.
Finally, after years of floundering, AMD has launched the Ryzen family of processors, with both high core counts and high single-threaded performance. Like Intel, AMD uses dynamic clocking, ramping up the processor's clock speed in response to its workload. Dynamic clocking has grown sophisticated enough that the latest generation of processors can change the clock speed of a small number of cores independently, rather than ramping them all up at once; those few cores can reach higher clock speeds, boosting single-threaded performance.
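The per-core boost behaviour described above can be sketched as a toy model. This is a hypothetical illustration, not any vendor's actual boost algorithm, and all the clock figures and thresholds are made-up assumptions:

```python
# Toy model of per-core dynamic clocking (hypothetical; the clocks and
# the two-core threshold are assumptions for illustration only).
BASE_CLOCK_GHZ = 3.4      # assumed base clock, all cores idle/light load
ALL_CORE_BOOST_GHZ = 3.7  # assumed sustainable boost with many cores busy
FEW_CORE_BOOST_GHZ = 4.1  # assumed boost when only 1-2 cores are busy

def core_clocks(active_cores, total_cores=8):
    """Return a per-core clock list for a given number of busy cores.

    When only a few cores are active, just those cores ramp up, and
    they can go higher than the all-core boost while staying inside
    the power budget. Idle cores stay at the base clock.
    """
    if active_cores == 0:
        return [BASE_CLOCK_GHZ] * total_cores
    boost = FEW_CORE_BOOST_GHZ if active_cores <= 2 else ALL_CORE_BOOST_GHZ
    return ([boost] * active_cores
            + [BASE_CLOCK_GHZ] * (total_cores - active_cores))
```

A single-threaded workload (`core_clocks(1)`) gets one core at the highest boost clock, while a fully loaded chip (`core_clocks(8)`) settles every core at the lower all-core boost, which is exactly why a few-core boost helps single-threaded performance.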
At the same time, nVidia and AMD are going head to head with ever more ambitious GPU designs. AMD's latest, in some cases incorporating two full-fledged GPUs on a single board, have enough horsepower to crunch through 8K footage with aplomb, something few expected until AMD demoed the new hardware at SIGGRAPH and showed what it could do.
A lot of PC enthusiasts are saying that this signals that the Processor Wars are back.
Those enthusiasts are correct, but the circumstances are very different now from the last time war broke out.
First off, GPUs are a major factor now, which wasn't the case during the Pentium 4/Athlon 64 days. Both Intel and AMD processors are being overshadowed by the massive GPUs installed alongside them. With every new generation, GPUs have gotten both more powerful and more general purpose, and the same factors that let GPU designers build such monstrous processors also let them incorporate dedicated, special-purpose units such as tensor processors.
The second major difference is that personal computer sales are declining across the board. It's not just the anaemic Mac Pros; it's pretty much all of them. Phones and tablets are rapidly taking over the entry-level computer market, since most users don't need machines powerful enough to work with 4K or higher-resolution video.
On top of this, even entry-level computers can handle HD video editing, so even professional editors often have no need for particularly high-end machines, especially as most of them use an offline editing workflow rather than attempting to edit raw footage directly.
Outside of casual users, the largest market is corporate. Most corporate users, however, need only enough power for email, web browsing, spreadsheets, and word processing; none of that requires a high-end machine. Developers need more capable machines than most corporate users, but only a minority need anything more than an ultrabook with plenty of memory and an external monitor or two.
Who needs the high-end?
So who needs the high-end machines? Mainly, very dedicated gamers and post-production professionals. Neither is that large a market, since most gamers are playing casual games that run on cell phones anyway.
If it weren't for GPUs, this would most likely be a re-run of the Athlon 64/Pentium 4 war, only this time AMD is working with GlobalFoundries rather than running its own fabs. With the large and growing importance of GPUs, however, things are looking quite a bit more interesting than they did in the Athlon 64's heyday; now AMD has an ace in the hole, along with a third competitor: nVidia.
All indications so far are that AMD's now-available Vega GPUs are quite a bit more than just worthy competitors; Jarred Land's response to seeing the Vegas in action says it all.
Is Intel in trouble? Maybe. With CPUs becoming less critical in the face of being outclassed by GPUs, Xeon Phi limited primarily to custom supercomputing applications, and nVidia at war with AMD, Intel needs a new ace in the hole.
Intel has one, however: memory. The most significant factor holding computers back these days is memory; it simply hasn't kept pace with processors, in either capacity or throughput. Intel, in collaboration with Micron, is developing 3D XPoint (read: cross point) memory, a three-dimensional memory array that promises to revitalize the memory industry by raising the performance of system memory toward the levels of processor cache while boosting capacity to rival disks. If it pans out, we could be seeing computers with a terabyte of system memory faster than current SSDs, combined with on-chip caches measured in gigabytes.
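To see the gap 3D XPoint is aiming at, it helps to lay out the memory hierarchy in rough order-of-magnitude terms. The latencies below are widely cited ballpark figures, not benchmarks, and the exact numbers vary by system:

```python
# Ballpark access latencies per tier, in nanoseconds (order-of-magnitude
# approximations only; real figures vary widely by hardware).
LATENCY_NS = {
    "L1 cache":  1,
    "L3 cache":  10,
    "DRAM":      100,
    "NAND SSD":  100_000,
    "Hard disk": 10_000_000,
}

# The roughly 1000x jump from DRAM to NAND SSD is the gap that a
# DRAM-class, disk-capacity memory would collapse.
dram_to_ssd_gap = LATENCY_NS["NAND SSD"] / LATENCY_NS["DRAM"]
for tier, ns in LATENCY_NS.items():
    print(f"{tier:>9}: ~{ns:,} ns")
print(f"DRAM -> SSD gap: ~{dram_to_ssd_gap:,.0f}x")
```

The point of the table is the cliff between DRAM and NAND: a nonvolatile memory with near-DRAM latency and near-disk capacity would remove the slowest tier most workloads ever touch.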
Whether that will happen, let alone when, is still uncertain. But with AMD hitting its stride on both the GPU and CPU fronts, and competition from nVidia heating up, Intel will need something big to maintain its position of dominance.
Conclusion? Yes, the Processor Wars are back... but this time there are more factions, and once again, customers will probably win.