As the Xbox vs PS4 console battle continues to be waged over the internet, some PC gamers may be wondering how multi-platform game ports will be affected by the limitations of even the PlayStation 4 hardware.
In a related report by The Inquisitr, I gave my top five reasons why I'm buying an Xbone over a PlayStation 4, but I've also declared Microsoft's claim that the Xbox One GPU is just as fast as the PS4's to be the gaming lie of the year, so I'm no fanboy. I'd also suggest checking out my article comparing the Radeon R9 290X to the Xbox One and PS4 GPUs as a primer on where graphics technology currently stands.
Quite frankly, ever since I saw the hardware specifications for both consoles I've felt as if gamers are being shortchanged. Even the supposedly mighty PS4, which is theoretically capable of 1.84 TeraFLOPS versus the Xbone's 1.33 TeraFLOPS, is only about as capable as a mid-range video card like the Radeon HD 7850. That PC video card produces 1.75 TeraFLOPS, and a quick check shows it costs about $140 on sale.
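To put those numbers side by side, here's a quick back-of-the-envelope sketch of performance per dollar. The TFLOPS figures and the $140 card price come from the comparison above; the $399 and $499 console launch prices are the US MSRPs, which the article doesn't quote directly:

```python
# Rough GFLOPS-per-dollar comparison using the figures cited above.
# Console prices are US launch MSRPs; the HD 7850 is the quoted sale price.
hardware = {
    "PlayStation 4":  {"tflops": 1.84, "price": 399},
    "Xbox One":       {"tflops": 1.33, "price": 499},
    "Radeon HD 7850": {"tflops": 1.75, "price": 140},
}

for name, spec in hardware.items():
    gflops_per_dollar = spec["tflops"] * 1000 / spec["price"]
    print(f"{name}: {gflops_per_dollar:.1f} GFLOPS per dollar")
```

Even granting that the consoles bundle a CPU, memory, and storage into that price, the mid-range PC card delivers several times the raw shader throughput per dollar.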
Why is that so bad? If you look at console prices adjusted for inflation, the Xbox 360 cost $348 in today's dollars and the PlayStation 3 cost $567. While both the PS4 and Xbox One are loss leaders, the previous generation definitely gave better bang for your buck: the PlayStation 3 was estimated to cost about $805 to make in 2006 (never mind the fact that standalone Blu-ray drives started at over $1,000 retail at the time). Even as late as 2010 the PS3 was still a loss leader for Sony, and the PlayStation 3 Slim was sold at a $31.27 per-unit loss. But today the PlayStation 4 costs about $381 to manufacture, including labor, while the CPU, GPU, and GDDR5 memory together come out to about $188. Similarly, Microsoft roughly breaks even on the Xbox One because the higher cost of the Kinect 2.0 is offset by a slightly less expensive GPU and DDR3 memory.
At this point, you're probably wondering how the PlayStation 4 GPU could hold back PC gaming. After all, isn't it just a matter of pushing the graphics quality slider higher for PC games and tweaking the resolution of a few effects? Unfortunately, video game developers have to balance graphics quality against performance requirements. Based upon those requirements, the level design and art pipeline are adjusted to fit the development budget. While many graphics effects are easily scalable (overall rendering resolution, shadow map resolution, Level Of Detail (LOD) systems, etc.), some would require extra work that is unlikely to fit the budget. For an example of the latter, see the thesis papers on ray tracing via sparse voxel octrees.
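To see why an effect like LOD scales easily across hardware tiers, here is a minimal, hypothetical sketch of distance-based LOD selection. The thresholds and mesh names are invented for illustration; the point is that a PC port can just push the distances outward or add a higher-poly tier without touching the art pipeline:

```python
# Minimal distance-based Level Of Detail (LOD) selection.
# Thresholds and mesh names are hypothetical. Scaling up for faster
# hardware only means raising the distances or adding a detail tier.
LOD_THRESHOLDS = [
    (25.0,  "mesh_high"),    # full-detail mesh up close
    (75.0,  "mesh_medium"),  # reduced polygon count
    (200.0, "mesh_low"),     # heavily simplified distant mesh
]

def select_lod(distance: float) -> str:
    """Return the mesh variant to render for an object at `distance` units."""
    for max_distance, mesh in LOD_THRESHOLDS:
        if distance <= max_distance:
            return mesh
    return "mesh_billboard"  # beyond all thresholds: flat billboard

print(select_lod(10.0))
print(select_lod(150.0))
print(select_lod(500.0))
```

Effects like sparse-voxel-octree ray tracing don't scale this way: they change how lighting data is authored and stored, which is why they rarely fit the budget of a multi-platform title.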
This likely means the Xbox One will be the lowest common denominator that sets the stage for PC gaming. While the PS4 and PC will both see noticeable graphics enhancements, the art assets will be produced to meet the performance limitations of the Xbone. For example, Titanfall looks decent on the Xbox One and PC, but how much better could it be if the developers didn't have to produce an Xbox 360 version as well?
Some may argue that API and SDK improvements to the Xbone/PS4 will also cause PC games to improve over the years. While this is true, I don't predict the improvement will be as noticeable as the leap seen during the last generation. When developers shifted from the original Xbox and PS2 to the Xbox 360 and PS3, GPU hardware went from largely fixed-function graphics to programmable Shader Model 3.0. This required a drastic overhaul of the art pipeline, and the resulting graphics improvements were predictably dramatic.
In addition, while the PS4 and Xbox One support Shader Model 5.0 (SM5), the progress between these generations was mostly incremental rather than revolutionary. SM5-capable GPUs have dramatically increased performance, but functionality-wise SM3 already introduced the majority of the features game developers need. This means the biggest limitation is performance, since that determines whether or not a graphics effect is feasible in an actual game.
A good analogy is the cost of developing for the Wii U versus the PS4 and Xbone. Some third-party developers are very reluctant to develop Wii U titles even though the Wii U's installed user base exceeds PS4 sales at the moment. The extra costs are not as bad as porting between the Wii and the PS3/360 because, functionality-wise, the AMD GPU powering the Wii U is almost as capable as the PS4/Xbone's, yet developers remain reluctant because the performance gap still incurs significant development costs. I expect the same to be true for PC gaming.
So what should PC gamers expect? Based upon Unreal Engine 4 tech demos, we already know the PS4 can't handle full global illumination techniques and must rely on a mix of dynamic GI and pre-computed lighting:
The good news is that we're at the point of diminishing returns. As you can see, the less accurate but more efficient effects look almost the same throughout the majority of the UE4 video. As another example, the difference in performance requirements between HDAO and SSAO is notable, but spotting the visual difference usually requires contrived scenarios or side-by-side screenshot analysis. How often does that happen when actually playing games?
The other good news about the "slow" PS4 and Xbox One is that the dramatic increases in system memory and CPU resources open up a new world of gameplay options in the form of better AI and physics. I've already written an article showing how a new object recognition algorithm might be feasible in today's stealth games like Assassin's Creed 5 or Splinter Cell. On the physics side, it's unlikely that Nvidia's PhysX will ever be optimized for an AMD GPU, but games using Havok or Bullet will benefit: the OpenCL-capable GPUs and eight-core CPUs of both the PlayStation 4 and Xbox One should be up to the task.
Do you think PC games will noticeably be held back by the Xbox One and PS4 GPU performance limitations?