Radeon R9 290X Compared To Xbox One, PlayStation 4 GPU Hardware, Consoles Blown Away

Patrick Frye

The AMD Radeon R9 290X hardware specifications have been announced, so how does the Radeon R9 290X compare to the Xbox One and PlayStation 4 hardware?

As previously reported by The Inquisitr, the Wii U, Xbox One, and PlayStation 4 all use AMD-designed graphics hardware, which should make things much easier for video game developers. Those developers say the Xbox One and PS4 are evenly matched when it comes to features, but not necessarily performance.

AMD announced the Radeon R9 290X during its GPU14 Tech Day event in Hawaii. The AMD Radeon R9 290X should be faster than the Nvidia GeForce GTX Titan, which originally sold for $999. Around the same time, AMD also released the Radeon HD 7990, which was briefly in the same price and performance range but is now hovering closer to $400 ahead of the Radeon R9 290X's release. The HD 7990 was also technically a dual-GPU video card, while the Titan used a single GPU.

On the PC-only side, we'll first consider the lack of a CrossFire port on the Radeon R9 290X. Experts assume this means these newer video cards will carry CrossFire traffic over PCI Express 3.0 instead of a bridge connector. A new VESA standard will help prevent screen tearing when using 4K HDTVs. Another new feature is AMD TrueAudio, which allows audio designers to program audio effects much like graphics shaders. The card is expected to draw about 300W of power, so it's quite the beast. The Radeon R9 290X also comes bundled with Battlefield 4, which should please gamers, although you'd expect that from a card said to cost $600.

So let's look at the memory systems, which are significantly different. The Xbox One combines 8GB of 2133 MHz DDR3 memory with 32MB of eSRAM embedded in the Xbox One SoC. The PlayStation 4 skips the eSRAM and provides 8GB of 5500 MHz GDDR5 memory. This means the Xbox One provides 68.3 GB/s of main memory bandwidth plus 102 GB/s from the eSRAM, while the PS4 once again wins with 176 GB/s. But in both consoles the CPU, GPU, and every other device share this memory pool over a 256-bit memory bus. The Radeon R9 290X, by contrast, provides 4GB of dedicated memory delivering 300 GB/s over a 512-bit memory bus. AMD also says a 6GB Radeon R9 290X will arrive later on.
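Those bandwidth figures fall out of a simple formula: the bus width in bytes multiplied by the memory's effective transfer rate. Here's a quick sketch, treating the MHz figures quoted above as effective transfer rates in millions of transfers per second:

```python
def memory_bandwidth_gbs(bus_width_bits, transfer_rate_mts):
    """Peak bandwidth in GB/s: bytes per transfer times transfers per second."""
    return (bus_width_bits / 8) * transfer_rate_mts * 1e6 / 1e9

# Xbox One main memory: 256-bit bus, DDR3 at 2133 MT/s
xbox_one = memory_bandwidth_gbs(256, 2133)   # ~68.3 GB/s

# PlayStation 4: 256-bit bus, GDDR5 at 5500 MT/s
ps4 = memory_bandwidth_gbs(256, 5500)        # 176.0 GB/s

print(f"Xbox One: {xbox_one:.1f} GB/s, PS4: {ps4:.1f} GB/s")
```

The formula makes the design trade-off obvious: on the same 256-bit bus, the PS4's faster GDDR5 alone delivers more than both of the Xbox One's memory pools combined.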

To give you an idea of why memory bandwidth is important, the shadow effects in the latest games use a technique called shadow mapping (which itself has many variations). This approach to shadowing tends to use a lot of video memory and a lot of bandwidth.
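To make that concrete, here's a back-of-the-envelope estimate of what a single shadow map costs. The resolutions and the 32-bit depth format are illustrative assumptions, not figures from any particular game:

```python
def shadow_map_mb(resolution, bytes_per_texel=4):
    """Memory for one square depth map, assuming 32-bit depth texels."""
    return resolution * resolution * bytes_per_texel / (1024 * 1024)

# Each doubling of resolution quadruples the memory cost, and the whole
# buffer is written by the GPU and then sampled again every single frame,
# which is why shadow mapping eats bandwidth as well as memory.
for res in (1024, 2048, 4096):
    print(f"{res}x{res}: {shadow_map_mb(res):.0f} MB")
```

And that's one map; games often render several shadow maps per frame (one per light, or several cascades for the sun), multiplying the cost.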

If we compare the Xbox One GPU and the PlayStation 4 GPU to the Radeon R9 290X, we can keep things simple by just considering overall theoretical performance, which is measured in TeraFLOPS (trillions of floating-point operations per second). The Xbox One GPU runs at 853 MHz with 768 shader cores and 48 Texture Mapping Units (TMUs), and is capable of 1.31 TeraFLOPS. The PlayStation 4 GPU boasts 1,152 shader cores producing 1.84 TeraFLOPS.

But the Radeon R9 290X provides 44 CUs (Compute Units), 2,816 stream processors, 176 TMUs, and 64 ROPs (Render Output Units), which all together are claimed to produce an astounding 5 TeraFLOPS. To put this power in perspective, Tim Sweeney, the graphics programmer behind the Unreal Engine 4 technology, claims we need 5,000 TeraFLOPS in order to accurately simulate a fully realistic environment on an 8000 x 8000 resolution display filling the room. That's probably overkill for most people, since even a 4K HDTV is only 3840 × 2160.
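All of these TeraFLOPS figures come from the same formula: shader cores times clock speed times two operations per clock, since AMD's GCN architecture can issue one fused multiply-add (which counts as two floating-point operations) per core per cycle. Note that the PS4's 800 MHz clock and the 290X's roughly 1 GHz clock are widely reported figures, not numbers stated above:

```python
def teraflops(shader_cores, clock_mhz, ops_per_clock=2):
    """Peak single-precision TFLOPS: cores x clock x 2 (one FMA per clock)."""
    return shader_cores * clock_mhz * 1e6 * ops_per_clock / 1e12

print(f"Xbox One: {teraflops(768, 853):.2f} TFLOPS")    # ~1.31
print(f"PS4:      {teraflops(1152, 800):.2f} TFLOPS")   # ~1.84 (800 MHz is the widely reported clock)
print(f"R9 290X:  {teraflops(2816, 1000):.2f} TFLOPS")  # ~5.63, assuming a roughly 1 GHz clock
```

These are peak theoretical numbers; real game performance also depends on memory bandwidth, ROP throughput, and how well the shaders keep all those cores busy.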

Still, the Radeon R9 290X is definitely a step up from previous generations of GPU designs. And the Xbox One and PlayStation 4 GPUs are a leap beyond the Xbox 360 and PS3, which offered 21.6 GB/s / 0.240 TeraFLOPS and 22.4 GB/s / 0.364 TeraFLOPS, respectively. (Note: There are huge discrepancies in old reports on PS3 RSX GPU performance.)

Were you surprised the Radeon R9 290X, Xbox One, and PlayStation 4 hardware comparison showed the not-yet-released consoles to be so much slower than what is possible with today's technology?