Thursday 31 March 2011

The NEXT Generation - an in-depth look (part 1)

It's that time again, we're a good bit through another gaming generation and people are gearing themselves up for the imminent announcement of another new generation of home consoles, likely at this year's E3.

The rumours have started stacking up, from job postings to the usual "insider information" that you have to take with a bigger-than-usual pinch of salt. Even if you believe the rumours that the PS4 is being "shelved" in favour of the NGP, at some point something has to give and there will be a new generation out there sooner rather than later. In this article, I'm looking at the technological advancements that are out there today, or due to be released soon, that may well make their way into the next generation of home consoles.

Note: Everything you've read on the topic so far is rumour and speculation, and this article is no different. However, I hope to be a bit more detailed than you usually get with this sort of thing, so please read on and enjoy!

For the purpose of the article, I'll refer to Sony's next console as the "PS4" and the next Xbox as the "Xbox720", since that's generally what most people are referring to them as, even though there's never a guarantee that they'll be called anything remotely similar to this. The Xbox 360 should have been the Xbox 2 and the PSP2 ended up being the NGP (or the PSPGo, depending on your perspective). As for Nintendo? Let's just call it the Wii 2 to keep things simple.

This is part 1 of a 2-part series. In this part, I'm taking a look at what some of the hardware itself might involve, based on how technology has evolved since the launch of the PS3 and 360.

The CPU(s)


One core or many? PowerPC or CELL? Here's something you may not have noticed: the CELL was jointly developed by Toshiba, Sony and IBM, and it's based on IBM's PowerPC design. The Xbox 360's processor, the "Xenon" (not to be confused with the "Xenon" codename Microsoft used for the 360 itself), was jointly developed by Microsoft and IBM. It's based on IBM's PowerPC design (and, in fact, allegedly borrowed a lot from Sony's CELL design process). The Wii's processor, codenamed "Broadway", was jointly developed with IBM and is based on, you guessed it, IBM's PowerPC design. It would appear that if you want a high-end processor designed, IBM are the ones to do it. There aren't really many others out there capable of such a thing. ARM make fantastic processors, but they're designed to be low-power affairs, better suited to mobile and ultra-low-power devices. The next logical choice is Intel or AMD, but the x86 line of processors is a bit too general for games consoles: they have a lot of features that would go unused, yet still increase the cost of production. Intel could certainly build a hell of a custom processor, maybe based on their Itanium line, but why risk something new when you can stick to the tried-and-tested PPC line? Keeping binary compatibility wouldn't be a terrible thing, either; the current generation of consoles has cost its creators quite a bit of money, so keeping gamers buying the existing crop of games for a little longer would help recoup some of that. More on that later, though. Suffice it to say, PPC isn't going anywhere, and I would bet good money that you'll see the Power line appear in consoles for a while yet.

There's a good chance Sony will stick with the CELL; after all, they invested a lot in it, and you could drop in a few higher-clocked cores to make an even more powerful system that's somewhat backwards compatible with existing PS3 software, much like Nintendo did with the Wii and the DS. Developers wouldn't mind, because a lot of their existing tools could easily be updated to make use of it. The "best" route for developers would be to stick with the same number of cores the PS3 has and clock them higher, but Sony seems to like making drastic changes to their architectures between generations. Still, they've learned a lot from the PS3 and have vowed to offer greater developer support in the future, and part of that may involve greater consideration for developers when designing the hardware itself. The PS3 was designed to do a lot of things, but no matter which way you spin it, it is primarily a games console, and the PS4 shouldn't be any different. Sony should be well aware by now that the main people who buy into a new generation are the hardcore gamers.

Microsoft could do a similar thing: they could easily turn their tri-core processor into a hex-core, up the clock speed and increase the cache to get a much more powerful system that is easy to make backwards compatible with the previous generation. Indeed, processor clock speeds haven't really gone up much in the last five years, with companies instead focusing on better optimisation and putting more and more cores together. It seems likely that, from here on, all future consoles will have many cores.

With Nintendo, however, I wouldn't be surprised if backwards compatibility was left on the back-burner somewhat. Nintendo has always had some element of backwards compatibility with their handhelds - the Game Boy Color was compatible with Game Boy games, the GBA could play GBC games, the DS could play GBA games, and so on - however, the Wii was the first of their home consoles to offer compatibility with the previous generation, the GameCube. Still, this didn't stop Nintendo from pushing out a few remakes and ports, even if corners were cut. If the rumours are to be believed, Nintendo are looking to do more than just up the graphics on the Wii and seem intent on offering yet another new "gameplay experience". What that is, is anyone's guess, as Nintendo are very hush-hush on the subject.

Graphics


Ahhh, graphics: the main reason any of us go out to get a new console on launch day. We see the screenshots, the tech demos, the polygon counts, we get excited and we must have it. The problem with the next generation is that the graphical leap will likely be a lot smaller than previous jumps. From the SNES/Mega Drive (Genesis) era to the PlayStation/N64 era, the big change was the focus on 3D. The 3D wasn't fantastic - textures were blurry, models were blocky and the resolution was often a bit rubbish - but it still looked amazing at the time. Then came the PlayStation 2/Xbox/GameCube generation, and the "sort-of-OK" 3D started to look really impressive. Resolutions got higher, things started looking smoother and rounder, lighting became an incredible achievement and water effects actually started looking convincing. Fast forward to today's graphics and this has all been taken a step further, to what many are calling the cusp of the "photo-realistic" generation. But where do we go from here?

Rasterisation versus Ray-Tracing


When it comes to talking about future graphics technology, one of the most common things you'll see mentioned is real-time ray-tracing. Ray-tracing, depending on who you talk to, is often heralded as the future of graphics. With it, you can create ultra-realistic scenes with perfect lighting, reflections and so on. The problem is that ray-tracing is hard work. It works by starting at each pixel on the screen and working backwards, calculating each and every reflection until you reach the end (which is generally the source of the light). It has some advantages over the usual approach (rasterisation); for example, mirrors and portals add little to the computation, so you can have as many reflective surfaces as you want. With rasterisation, a reflective surface essentially means you have to draw the whole scene again to calculate it, meaning a lot more work for the graphics processor. The problem is that ray-tracing is very computationally expensive. On a 720p screen, you have to trace over 900,000 pixels to draw a single frame; at 30FPS that's over 27 million pixels to trace every second, and double that again for 60FPS. By far the biggest champions of ray-tracing are Intel, who have done a lot of interesting research projects on the subject.
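To put some rough numbers on that, here's a quick back-of-the-envelope sketch. The ten-secondary-rays figure is purely an illustrative assumption - real ray-tracers fire extra rays for shadows, reflections and anti-aliasing, and the true count varies wildly from scene to scene:

```python
# Back-of-the-envelope cost of real-time ray-tracing at 720p.
WIDTH, HEIGHT = 1280, 720
PIXELS = WIDTH * HEIGHT  # 921,600 primary rays per frame

# Assumption: ~10 secondary rays (shadows, bounces) per primary ray.
SECONDARY_PER_PRIMARY = 10

for fps in (30, 60):
    primary = PIXELS * fps
    total = primary * (1 + SECONDARY_PER_PRIMARY)
    print(f"{fps}FPS: {primary:,} primary rays/s, ~{total:,} rays/s in total")
```

And every one of those rays involves intersection tests against the scene's geometry, which is where the real cost lies.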

However, a quick look into Intel's research projects reveals a problem: getting Quake Wars to run at 25-30FPS required four hex-core processors (that's 24 cores!) just over a year ago - quite a lot, considering that Quake Wars isn't exactly the newest or best-looking game out there. They seem to have shifted to the idea of "cloud-based ray-tracing", which essentially means that all the hard processing is done on a server farm rather than your PC/console, probably for practical reasons - not everyone happens to have a 24-core machine in their house. Intel is also working on what they call a "Many Integrated Core" (MIC) architecture, having demonstrated monstrous 80-core processors a few years ago, but they've been quiet on the subject since. They said, back in 2006, that these would be available "within 5 years", but nothing has materialised quite yet. Even if they were to release this year, they would be extremely high-end chips, costing more on their own than a PS3 did at launch - not exactly a good choice for a console. Real-time ray-tracing may one day grace our homes, but at the moment it's too computationally expensive to be viable.

Intel's other graphics project, Larrabee, which Intel has allegedly been pushing to go into the Xbox720, has suffered numerous delays and performance issues, and seems to have pretty much been shelved by Intel. It may still see the light of day, but all indications point towards it being a bit of a failure and not worth any of the "big three"'s investment.

So where does that leave us? Right back where we started: rasterisation. It's tried and tested, and it simply means the graphics chips you'll see in future consoles will likely be beefier versions of the ones in today's consoles. There has been a large shift towards GPGPU technologies since both the PS3 and 360 came out, meaning that the graphics chips in future consoles could well play a much larger role than before. In an odd twist, you could expect to see much more realistic physics and fluid dynamics, all thanks to the graphics chips being able to do "general" calculations.
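To give a flavour of what those "general" calculations look like, here's a minimal sketch of a data-parallel particle update - the kind of workload that maps neatly onto a GPU, since every particle is independent of the others. The numbers are arbitrary, and on a real console this loop would run as a compute kernel across thousands of GPU threads, not in Python:

```python
# Illustrative data-parallel physics step: every particle is updated
# independently, which is exactly the shape of work GPGPU excels at.
GRAVITY = -9.81
DT = 1.0 / 60.0  # one frame at 60FPS

def step(positions, velocities):
    # On a GPU, each (position, velocity) pair gets its own thread.
    for i in range(len(positions)):
        velocities[i] += GRAVITY * DT
        positions[i] += velocities[i] * DT

# Toy usage: five particles dropped from 10 metres.
pos, vel = [10.0] * 5, [0.0] * 5
step(pos, vel)
print(pos[0], vel[0])
```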

It used to be that if you wanted a particular graphical technique, you had to wait for it to be implemented in the latest DirectX/OpenGL spec, but graphics chips are becoming more "programmable", essentially allowing them to do whatever you want rather than just whatever the DirectX/OpenGL spec lets you do. That fancy tech demo that Lionhead has been showing off lately uses a technique called tessellation to let the graphics chip add and remove polygons on the fly (that's a basic explanation at least; read the link to learn more), which is why you can have "billions" of polygons on screen and still render at a decent frame rate. This is pretty cool, so you'll see it and similar techniques being used to create highly-detailed scenes, realistic fur and so on. It's advancements like this that will make the graphics chips inside the next consoles play a much more important role than before, leading to more realistic graphics, physics, AI and whatever else programmers can figure out how to get running on them.
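As a rough illustration of how adaptive tessellation decides where to spend its polygons, here's a simplified sketch. The projection is deliberately crude and the pixel threshold is an arbitrary assumption; real implementations run per-patch on the GPU's tessellation hardware:

```python
# Simplified adaptive tessellation: keep subdividing an edge until its
# projected on-screen length drops below a threshold, so nearby geometry
# gets lots of polygons and distant geometry gets very few.
MAX_SCREEN_EDGE = 8.0  # target edge length in pixels (arbitrary choice)
MAX_LEVELS = 6         # hardware-style cap on subdivision depth

def subdivision_levels(edge_metres, distance_metres, screen_height=720):
    # Crude pinhole projection: apparent size shrinks with distance.
    screen_length = edge_metres / max(distance_metres, 0.001) * screen_height
    levels = 0
    while screen_length > MAX_SCREEN_EDGE and levels < MAX_LEVELS:
        screen_length /= 2.0  # each subdivision halves the edge
        levels += 1
    return levels

# The same 1-metre edge gets heavy detail up close, none far away.
for d in (1, 10, 100):
    print(f"{d:>3}m away -> {subdivision_levels(1.0, d)} subdivision levels")
```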

As for which graphics chips will be used? It's hard to say. It's widely speculated that Sony originally intended to design their own graphics chip for the PS3, but the project wasn't delivering on time or within performance specifications, so they went to nVidia for a quick solution. Depending on how the relationship between the two companies works out, they may continue with this partnership, switch to AMD, or once again attempt to design their own. Microsoft seems much less likely to design their own chip and will probably partner with the likes of AMD once more. Microsoft seems to be pretty happy with the graphical performance and design of the Xenos GPU, so I would expect another AMD-based solution, particularly as they're probably still a bit sore after how nVidia treated them with the original Xbox.

RAM


Allegedly, the Xbox 360 was only going to have 256MB of RAM until Epic convinced Microsoft to go with 512MB. In hindsight, this was a good idea, as the PS3 ended up with a similar amount, albeit in a completely different configuration. RAM technology doesn't really change much: it gets bigger, latency gets reduced and that's about it. The only thing I can suggest is looking at previous generations and seeing how much it changed. The PS1 had 2MB of RAM, the PS2 had 32MB and the PS3 has 512MB - an increase of 16x each generation - so perhaps the PS4 will have 8GB of RAM? That seems a little high at first, but it's not totally beyond the realm of possibility. I've done a bit of googling and, by my estimations, what 512MB of RAM cost in 2005/2006 would get you somewhere around 6GB of RAM today. In a year or so, 8GB doesn't seem that unlikely after all.
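For what it's worth, the 16x trend is easy to extrapolate (a naive projection, nothing more):

```python
# Naive extrapolation of the 16x-per-generation PlayStation RAM trend.
ram_mb = {"PS1": 2, "PS2": 32, "PS3": 512}

for console, mb in ram_mb.items():
    print(f"{console}: {mb}MB")

ps4_mb = ram_mb["PS3"] * 16  # continue the 16x pattern
print(f"PS4 (projected): {ps4_mb}MB = {ps4_mb // 1024}GB")
```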

USB versus Thunderbolt


Now, I know the prospect of better graphics and more realistic games is very exciting, but for me that's all to be expected from a new generation and not all that surprising. If the Wii has taught us anything, it's that there's more to making a good games console than throwing lots of polygons at it. To that end, I believe one of the biggest advancements we'll see will come from somewhere as lowly as the port the consoles use to connect peripherals. Why? Let me tell you.

Last year, both Sony and Microsoft launched their own motion controls: Microsoft has the very popular Kinect and Sony has Move. Kinect has come under a lot of criticism for its lack of accuracy, particularly in certain situations; however, Microsoft claims that this isn't actually a limitation of the Kinect sensor, but rather a limitation of the connection it uses - good ol' USB2. Supposedly, Microsoft is working on a compression algorithm the sensor can use to quadruple the accuracy of the device. Although that claim has since been refuted, it does lead us to one conclusion: Kinect is capable of a lot more, and unlocking that potential could be as simple as switching to a faster connection.

The PlayStation Move isn't that different. Thanks to its accelerometers and gyroscopes, the Move is widely considered to be much more accurate than Kinect, but it still relies on the PS Eye camera for some of its tracking, and that camera connects to a USB2 port. The same logic applies: a higher-resolution camera could provide much more accuracy, and for that, a faster connection will be required. The Move has a slight advantage in that its camera only records a single colour image, while Kinect captures both a colour and an infra-red image (and all of the Move's gyroscope and accelerometer data is sent via Bluetooth), but a bit of extra bandwidth wouldn't hurt: it would allow the camera's resolution and frame rate to increase, improving overall accuracy.
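To see why the connection is the bottleneck, here's a rough bandwidth calculation. The camera configurations are illustrative assumptions rather than the actual internals of Kinect or the PS Eye, and the throughput figures are ballpark real-world numbers rather than theoretical maximums:

```python
# Rough uncompressed bandwidth for a motion-tracking camera, compared
# against ballpark practical throughput of each connection (in MB/s).
LINKS = {"USB2": 35, "USB3": 400, "Thunderbolt": 1000}

def camera_mb_per_s(width, height, fps, bytes_per_pixel):
    return width * height * fps * bytes_per_pixel / (1024 * 1024)

cameras = {
    "VGA colour, 30FPS":   camera_mb_per_s(640, 480, 30, 3),
    "720p colour, 60FPS":  camera_mb_per_s(1280, 720, 60, 3),
    "1080p colour, 60FPS": camera_mb_per_s(1920, 1080, 60, 3),
}

for name, needed in cameras.items():
    fits = [link for link, cap in LINKS.items() if needed <= cap]
    print(f"{name}: ~{needed:.0f}MB/s -> fits on {', '.join(fits)}")
```

A higher-resolution camera quickly blows past what USB2 can carry, so keep those figures in mind for the connector comparison that follows.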

The question, though, is which connector do you use? Not that long ago, you had a choice between USB and FireWire, with FireWire being the faster of the two, particularly in "real-world" situations, yet USB won out.

Likewise, today there seems to be an imminent battle between USB3 and the recently announced Thunderbolt from Intel. Thunderbolt, on paper at least, is the "superior" standard, offering twice the speed USB3 is capable of (10Gbps versus 5Gbps), but it has one major drawback: compatibility. USB3 is compatible with all existing USB products; Thunderbolt is not. So if you were Microsoft or Sony, what would you do?

Well, it depends on what else you want the consoles to do. The USB connectivity on the 360 and PS3 is used for little more than connecting external drives, charging controllers and attaching peripherals like cameras, Kinects and so on. A new console generation means new peripherals designed for it, new controllers and so on, so the only thing being "lost" is external drive support. Is that a vital, deal-breaking feature, particularly when you can stream all your media over your network these days?

Really, the decision probably isn't that hard to make: Thunderbolt is probably overkill for a console, the bandwidth provided by USB3 would be enough for ultra-high-resolution cameras, and sticking with USB means that if you decide to go for some backwards compatibility, you could keep selling your old peripherals, as well as encourage people to keep their old stuff. One advantage Thunderbolt does have is that it can drive DisplayPort-equipped monitors - up to two, in fact - which is great if you have a PC, but how many people have TVs with a DisplayPort interface? Furthermore, would you need more than one screen? Dropping HDMI support would be a ridiculous idea, so once more the advantages of Thunderbolt are less useful on consoles.

With this one, I'm pretty confident in saying it's a fairly cut-and-dried situation: USB3 will be widely adopted by everyone. It is worth mentioning that the PS2 did originally include a FireWire port as well as USB, but it wasn't exactly widely used and Sony eventually dropped it. So although you might be hoping for both connections, it's not likely to happen, as costs have to be factored into the design at some point.

Part 2 coming soon!


There's a lot more at stake with the next-generation consoles than making everything beefier and faster. In Part 2, I'll be looking into issues such as backwards compatibility, cloud gaming, controller design and more! Stay tuned!


-Kushan
