The Xbox One and PS4 share similar specs, but the devil’s in the details

In the past, it’s been difficult to do truly apples-to-apples performance comparisons between game consoles because of the vastly different architectures of the various systems. You can get some raw numbers—clock speeds, memory bandwidth, FLOPS—and compare them that way, but how games looked and ran often had just as much to do with console-specific optimizations and tweaks from the developers as it did with the theoretical capabilities of the hardware.

With the Xbox One and PlayStation 4, things have changed. In lieu of expensive custom-designed chips, both Microsoft and Sony have opted to commission semi-custom CPU/GPU hybrids from AMD based on the same basic architecture that AMD is already selling in PCs. There are still variables to account for, but these new consoles are more alike on the inside than any others in recent memory. Ahead of their imminent launches, we’ll take a quick comparative look at how their CPU, GPU, and memory configurations stack up. Hopefully, this will give you a better understanding of what the hardware differences mean for the first wave of launch games soon crashing down upon us.

The CPU: within a stone’s throw

The Xbox One’s main chip up close.

Both the PS4’s and the Xbox One’s CPUs use the exact same number of computing cores and the exact same AMD “Jaguar” architecture. In terms of raw performance, the only real point of differentiation between them is clock speed.

We know that the Xbox One’s CPU clock was recently raised to 1.75GHz from the 1.6GHz of the original devkits, a respectable 9.37 percent boost. Sony hasn’t stated an official figure for the PS4’s CPU speed, though rumors point to it being the same 1.6GHz as the pre-boost Xbox One. If those rumors are accurate, the Xbox One may have a slight edge over the PlayStation 4 in CPU-heavy games. This difference won’t be very noticeable, though, unless a game is coded to be absolutely desperate for every drop of performance it can squeeze out of the CPU.
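As a quick sanity check on those figures, the clock-speed arithmetic works out like this (a trivial sketch; note that the PS4 value is the rumored clock, not an official spec):

```python
# CPU clock speeds in GHz. The PS4 figure is rumored, not confirmed by Sony.
xbox_one_cpu = 1.75
xbox_one_devkit_cpu = 1.6
ps4_cpu_rumored = 1.6

# Boost from the original devkit clock to the shipping Xbox One clock.
boost_pct = (xbox_one_cpu / xbox_one_devkit_cpu - 1) * 100
print(f"Xbox One CPU clock boost: {boost_pct:.2f} percent")

# Potential Xbox One edge over the (rumored) PS4 clock.
edge_pct = (xbox_one_cpu / ps4_cpu_rumored - 1) * 100
print(f"Potential Xbox One CPU clock edge: {edge_pct:.2f} percent")
```

The boost comes out to 9.375 percent, which the article rounds down to 9.37. Since the rumored PS4 clock matches the pre-boost Xbox One clock, the potential CPU edge is the same figure.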

In any case, Jaguar isn’t AMD’s fastest CPU architecture—it was actually designed first and foremost for low-power systems like tablets and low-end to mid-range laptops. Eurogamer’s Digital Foundry did an in-depth interview with Microsoft’s Andrew Goossen and Nick Baker, two members of the Xbox hardware design team, to get a sense of why Microsoft chose the components and made these design decisions (it’s a very long interview, but it’s worth reading in its entirety for the insight it provides into both consoles’ hardware design). Baker summarized why a company designing a new console would choose to go with more, slower Jaguar CPU cores rather than a chip with fewer, faster cores based on AMD’s speedier Piledriver architecture.

“The extra power and area associated with getting that additional [instructions per clock] boost going from Jaguar to Piledriver… It’s not the right decision to make for a console,” Baker said. “Being able to hit the sweet spot of power/performance per area and make it a more parallel problem. That’s what it’s all about. How we’re partitioning cores between the title and the operating system works out as well in that respect.”

Using many CPU cores and a few dedicated blocks for audio and video processing lets you do many small tasks at once. Having some resources to dedicate to the system UI helps keep things smooth and prevents interruption of gameplay.

In other words, given the size of the chip and the box and given that these consoles will often be called upon to do many small tasks at once, it made sense for both console makers to go with more cores rather than faster ones. It’s also worth noting that, at least for some tasks, the consoles will be able to offload processing duties to the GPU and to other onboard coprocessors to lighten the CPU load, especially when it comes to non-gaming and multitasking functions. Both consoles include, for example, dedicated blocks for encoding and decoding video, as well as audio processors that can take some sound-related pressure off of the CPU. PS4 lead architect Mark Cerny brought chips like these up in an interview with Gamasutra in April.

“The reason we use dedicated units is it means the overhead as far as games are concerned is very low,” said Cerny. “It also establishes a baseline that we can use in our user experience. For example, by having the hardware dedicated unit for audio, that means we can support audio chat without the games needing to dedicate any significant resources to them. The same thing for compression and decompression of video.”

The GPU: Microsoft has more MHz, but Sony has more hardware

Inside the Xbox One. The large chip near the center surrounded by the RAM chips is the main processor, which combines the CPU and GPU among other things.

The two consoles diverge more sharply when it comes to their GPUs. They again share the same underlying architecture (AMD’s Sea Islands, which has come to market in some of its Radeon 7000 and 8000-series GPUs), which makes comparisons between the two simple. The Xbox One’s GPU runs at 853MHz (another late-in-the-game clock speed boost) while the PS4 GPU runs at 800MHz. However, the PS4 GPU has much more hardware behind it—18 of AMD’s compute units (CUs), rather than the 12 CUs in the Xbox One.
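Those compute-unit and clock figures translate directly into the peak throughput numbers commonly cited for both consoles. In AMD's GCN-style designs, each compute unit contains 64 shader ALUs, and a fused multiply-add counts as two floating-point operations per clock, so a rough sketch of peak single-precision throughput looks like this:

```python
def peak_gflops(compute_units, clock_mhz, alus_per_cu=64, ops_per_clock=2):
    """Rough peak single-precision throughput for a GCN-style GPU.

    Each compute unit has 64 shader ALUs, and a fused multiply-add
    counts as two floating-point operations per clock.
    """
    return compute_units * alus_per_cu * ops_per_clock * clock_mhz / 1000.0

xbox_one = peak_gflops(12, 853)  # 12 CUs at 853MHz
ps4 = peak_gflops(18, 800)       # 18 CUs at 800MHz

print(f"Xbox One: ~{xbox_one:.0f} GFLOPS")
print(f"PS4:      ~{ps4:.0f} GFLOPS")
print(f"PS4/Xbox One raw ratio: {ps4 / xbox_one:.2f}")
```

This yields roughly 1.31 TFLOPS for the Xbox One and 1.84 TFLOPS for the PS4: the extra six compute units more than make up for the PS4's 53MHz clock deficit, leaving it with about 40 percent more raw shader throughput on paper. Peak FLOPS is only a ceiling, of course; real game performance also depends on memory bandwidth and how well the workload fills those ALUs.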

These two GPUs support all of the same APIs and hardware features. The Xbox One can render a 3D image that looks exactly the same as one rendered by the PS4; it just can’t do it quite as quickly. It’s the same reason why a Radeon HD 7790 clocked at 1GHz delivers worse performance in PC games than a Radeon HD 7850 clocked at 860MHz: there’s simply more silicon there to do the heavy lifting.

In their Digital Foundry interview, Microsoft’s Goossen and Baker argue that, for the Xbox One’s launch titles, the clock speed boost to the GPU was more effective than adding extra CUs would have been. For some games, that may be the case, but what we’ve seen in the PC market for years and years is that GPUs with more CUs (assuming an otherwise similar architecture) are going to perform better. This will potentially give PS4 developers additional headroom to make their games more detailed, make them run more smoothly, or make them render at a higher resolution than on the Xbox One. There’s also more silicon there to help out with any GPU-assisted compute tasks that need to be run.

Especially on the GPU side, software and API optimizations will play some part in how fast the two consoles are in practice, but from what both Microsoft and Sony are saying, the companies’ strategies won’t differ much here. Both of them are trying to get typical PC APIs out of the way where possible, increasing performance by reducing the number of layers between game code and the hardware. Back in March, Ars gaming editor Kyle Orland wrote about some of Sony’s statements to this effect.

Sony is building its GPU on what it’s calling an extended DirectX 11.1+ feature set, including extra debugging support that is not available on PC platforms. This system will also give developers more direct access to the shader pipeline than they had on the PS3 or through DirectX itself. “This is access you’re not used to getting on the PC, and as a result you can do a lot more cool things and have a lot more access to the power of the system,” [Sony Senior Staff Engineer Chris] Norden said. A low-level API will let coders talk directly with the hardware in a way that’s “much lower-level than DirectX and OpenGL,” but still not quite at the driver level.

In the Digital Foundry interview, Goossen said much the same thing of Microsoft’s software implementation.

“To a large extent we inherited a lot of DX11 design,” he said. “When we went with AMD, that was a baseline requirement… We’ve been doing a lot of work to remove a lot of the overhead in terms of the implementation and for a console we can go and make it so that when you call a D3D API it writes directly to the command buffer to update the GPU registers right there in that API function without making any other function calls. There’s not layers and layers of software. We did a lot of work in that respect.”

You’ve also got to account for the fraction of GPU resources the system may reserve during gaming for non-3D-rendering purposes. Goossen noted that about 10 percent of the Xbox One’s GPU would be reserved for Kinect and other system-level processes. As of this writing, Sony hasn’t gone into detail about just how much of its GPU would be reserved for system use.
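To see how much that reservation matters, here is an illustrative sketch using the peak-throughput figures implied by each console's compute-unit count and clock. The Xbox One's roughly 10 percent reservation is from Goossen's comments; the PS4 reservation values are purely hypothetical, since Sony hasn't disclosed a figure:

```python
# Peak single-precision throughput in GFLOPS, from CU count x 64 ALUs
# x 2 ops/clock x clock speed.
xbox_one_gflops = 1310.2  # 12 CUs at 853MHz
ps4_gflops = 1843.2       # 18 CUs at 800MHz

# Roughly 10 percent of the Xbox One GPU is reserved for Kinect and
# other system-level processes, per Goossen.
xbox_available = xbox_one_gflops * (1 - 0.10)
print(f"Xbox One GPU available to games: ~{xbox_available:.0f} GFLOPS")

# Sony hasn't said what it reserves, so try a few hypothetical values.
for ps4_reserved in (0.00, 0.05, 0.10):
    ratio = ps4_gflops * (1 - ps4_reserved) / xbox_available
    print(f"If the PS4 reserves {ps4_reserved:.0%}: "
          f"{ratio:.2f}x the Xbox One's available throughput")
```

Even if Sony reserved the same 10 percent, the ratio of game-available throughput would stay around 1.4x; if it reserved less, the gap would widen further.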

via: ArsTechnica
