M4 raw performance compilation from the article for my future self.
Sorted by result.
Note: M4 wins by a landslide when it comes to perf/watt but performs poorly in terms of raw performance for these benchmarks.
Note to self: Keep working with a beefy x86 workstation because time is valuable.
FLAC encoding: best
Compiling FFmpeg: average
Compiling LLVM: average
C-Ray 2.0 1080p: average
C-Ray 2.0 4k: average
Zstd compression: second worst
Zstd decompression: second worst
7-Zip compression: worst
7-Zip decompression: worst
Appleseed 2.0: worst
V-RAY 6.0: worst
IndigoBench: worst
QuantLib: worst
Apache HTTP Server: worst
DuckDB: worst
PyBench: worst
x265: worst
Kvazaar 2.2: worst
AVIF encoding: worst
JPEG-XL encoding: worst
This compares a laptop-level power budget against beefy desktops.
They should include a picture of the M4 mini and the Intel/AMD desktops side by side.
The Intel Core Ultra 9 285K CPU alone costs the same as the M4 mini. Its base/turbo power is 125/250 W, while the M4 seems to be around 11/15 W.
You're not allowed to compare the M4 to anything that it loses to!
Well, to be fair, they should've compared it to something price-comparable as a whole package.
Comparing a $600 Mac mini with a $600 CPU doesn't make sense. I can get work done with the Mini; that can't be said of a bare CPU.
Might just as well compare it with an EPYC/Xeon... it would be about as meaningful.
If we just ignore price and power usage entirely, it's pretty pointless.
Of course, the Studio and the Mac Pro probably don't offer as good a value as the Mini.
Throw an RTX 4090 into that desktop while at it, or they might lose on CPU inference.
> Keep working with a beefy x86 workstation
Instead of the cheapest, entry-level alternative? How is this even remotely comparable?
"worst" in this list is still 2-4x faster than my main desktop PC
Why does the tech press avoid doing any integrated-GPU-accelerated Blender Cycles rendering tests? The Apple M-series SoCs support that, and Cycles rendering on the GPU is both faster and less power hungry than doing it on the CPU.
The x86 SoCs have really poor iGPU compute API support, and Blender 3.0 and later have no OpenCL support at all, since the Blender Foundation dropped OpenCL long ago; Blender's iGPU/dGPU compute support now goes mostly through Nvidia's CUDA or Apple's Metal.
But what about Intel's oneAPI/Level Zero and AMD's ROCm/HIP? The tech press never really looks into why ROCm/HIP isn't supported on AMD's integrated graphics, and I haven't seen anyone test Intel's oneAPI/Level Zero for iGPU-accelerated Blender Cycles rendering.
Apple's iGPUs in the M-series processors are, for the most part, well supported for non-gaming graphics applications, but what about the x86 makers' iGPUs? Blender's iGPU-accelerated Cycles rendering should be on the regular testing roster.
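For reference, picking a Cycles GPU backend is only a few lines through Blender's Python API. A minimal sketch, assuming a Blender 3.x+ build with the Cycles add-on; the backend string is whatever your build and drivers actually expose ('METAL', 'CUDA', 'OPTIX', 'HIP' or 'ONEAPI'):

    import bpy

    # Assumes a Blender 3.x+ build with the Cycles add-on enabled; swap the
    # backend string for whichever compute API your platform exposes.
    prefs = bpy.context.preferences.addons['cycles'].preferences
    prefs.compute_device_type = 'ONEAPI'   # or 'METAL', 'CUDA', 'OPTIX', 'HIP'
    prefs.get_devices()                    # refresh the detected device list
    for dev in prefs.devices:
        dev.use = True                     # enable every device Cycles found
    bpy.context.scene.cycles.device = 'GPU'

Whether that 'ONEAPI' or 'HIP' line actually finds an iGPU device on an x86 SoC is exactly the question the reviews never answer.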
Why is there so much fragmentation now around integrated graphics and the lack of a standard iGPU compute API? OpenCL was supposed to be the cross-platform answer, but long before the Apple M-series existed, OpenCL never really progressed much on Linux; Apple moved on to Metal, and Nvidia has always favored its own CUDA compute API.
The Phoronix automated test suite has more tests than the ones chosen for the article, so there's usually a link to the remainder of the results, but I'd rather see some iGPU testing done. It's being ignored for the most part in favor of CPU-core testing, as if the iGPUs on these processors don't even exist! There's quite a bit of FP compute in the makers' respective iGPUs that can come in handy for more than just gaming workloads.
I'll have more integrated graphics tests to come... Unfortunately I am a one-man show and there's only so much time to juggle everything. Initially I'm focusing on CPU tests since they tend to be the most trouble-free and reliable.
AMD views ROCm as a professional-market feature, so sadly they don't support it on iGPUs, nor on most gaming cards.
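For what it's worth, the usual community workaround on officially unsupported Radeon parts is to spoof a supported gfx target through the HSA_OVERRIDE_GFX_VERSION environment variable before launching the HIP application. A rough sketch of what that looks like for a headless Cycles render; the "10.3.0" value, the scene.blend filename, and whether this works at all on a given iGPU are assumptions, not guarantees:

    import os
    import subprocess

    # Unofficial workaround: ask the ROCm runtime to treat an RDNA2-class GPU
    # (e.g. gfx1035/gfx1036) as the officially supported gfx1030 target.
    env = dict(os.environ, HSA_OVERRIDE_GFX_VERSION="10.3.0")

    # Headless Cycles render on the HIP backend; arguments after '--' are
    # handed to Cycles itself.
    subprocess.run(["blender", "-b", "scene.blend", "-E", "CYCLES",
                    "-f", "1", "--", "--cycles-device", "HIP"],
                   env=env, check=True)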
It'd be more helpful if each benchmark listed whether it runs under Rosetta translation and, if not, whether it has ARM-specific instruction optimizations.
The M4 might be brute-forcing these applications, while Intel and AMD benefit from x86-specific instruction optimizations.
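On macOS there is at least a quick way to sanity-check that. A small sketch; the function names are just illustrative, and sysctl/lipo are the stock macOS tools being wrapped:

    import subprocess

    def running_under_rosetta() -> bool:
        """True if the current process is being translated by Rosetta 2."""
        try:
            out = subprocess.run(["sysctl", "-n", "sysctl.proc_translated"],
                                 capture_output=True, text=True, check=True)
            return out.stdout.strip() == "1"
        except subprocess.CalledProcessError:
            return False  # key missing: Intel Mac or non-macOS host

    def binary_architectures(path: str) -> list[str]:
        """Architectures baked into a Mach-O binary, e.g. ['x86_64', 'arm64']."""
        out = subprocess.run(["lipo", "-archs", path],
                             capture_output=True, text=True, check=True)
        return out.stdout.split()

An x86_64-only binary would mean the M4 numbers carry Rosetta overhead on top of everything else.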
So... performance-wise, the results seem to be quite far apart from Intel/AMD?
They're quite different product lines, serving different markets. The heatsink you'd need to cool something like the 9800X3D would be the size of the Mac mini itself. The Intel Core Ultra 9 285K costs roughly the same as the entire M4 mini, has twice as many cores (4x as many threads), and its advertised base power is twice the M4's maximum.
There's nothing wrong with comparing them, but they don't seem to be in the same market.
What's the market for Mac minis if not desktop or laptop?
Desktop spans a wide range. Mac minis should be compared against NUCs or cheap PCs, not high-end PCs.
The M4 is Apple's top-of-the-line CPU design at the moment, and this very same chip is used across their products, not only the Mac mini. It's used in the MacBook Pro.
Is the M4's TDP different in the MacBook Pro than in the Mac mini?
M4 is Apple's weakest CPU.
Cost matters. Let's not forget about electricity/cooling cost as well.
That said, I wonder what a desktop that matches or beats the M4 mini in https://browserbench.org/Speedometer3.0 would cost. Does it even exist?
Was there a single laptop chip included in the benchmark?
They are comparing a 5 W CPU with 80-200 W ones... which is entirely pointless.
I don't think it is entirely pointless, but what I do think is pointless is leaving so much performance on the table for a device that doesn't even run on a battery.
> Once Asahi Linux is up and running on the M4 devices reasonably well, I'll be eager to run some 1:1 Linux benchmarks.
What would be the point of running these benchmarks on Asahi? It would be only a benchmark of Asahi itself, nothing else.