
Thursday, 31 March 2011

The NEXT Generation - an in-depth look (part 1)

It's that time again: we're a good bit through another gaming generation and people are gearing themselves up for the imminent announcement of a new generation of home consoles, likely at this year's E3.

The rumours have started stacking up, from job postings to the usual "insider information" that you have to take with a bigger-than-usual pinch of salt. Even if you believe the rumours that the PS4 is being "shelved" in favour of the NGP, at some point something has to give and there will be a new generation out there sooner rather than later. In this article, I'm looking at the technological advancements that are out there today, or due to be released soon, that may well make their way into the next generation of home consoles.

Note: Everything you've read on the topic so far is rumour and speculation, and this article is no different; however, I hope to be a bit more detailed than you usually get with this sort of thing, so please read on and enjoy!

For the purpose of the article, I'll refer to Sony's next console as the "PS4" and the next Xbox as the "Xbox720", since that's what most people are calling them, even though there's no guarantee that they'll be called anything remotely similar. The Xbox 360 should have been the Xbox 2 and the PSP2 ended up being the NGP (or the PSPGo, depending on your perspective). As for Nintendo? Let's just call it the Wii 2 to keep things simple.

This is part 1 of a 2-part series. In this part, I'm taking a look at what some of the hardware itself might involve, based on how technology has evolved since the launch of the PS3 and 360.

The CPU(s)


One core or many? PowerPC or CELL? Here's something you may not have noticed: the CELL was jointly developed by Toshiba, Sony and IBM, and it's based on IBM's PowerPC design. The Xbox 360's processor, the "Xenon" (not to be confused with "Xenon", the codename Microsoft used for the 360 itself), was jointly developed by Microsoft and IBM, and it's based on IBM's PowerPC design (and, in fact, allegedly had a lot of help from Sony's CELL design process). The Wii's processor, codenamed "Broadway", was jointly developed with IBM and is based on, you guessed it, IBM's PowerPC design. It would appear that if you want a high-end processor designed, IBM are the ones to do it. There aren't really many others out there capable of such a thing. ARM make fantastic processors, but they're designed to be low-power affairs, better suited to mobile and ultra-low-power devices.

The next logical choice is Intel or AMD, but the x86 line of processors is a bit too general for games consoles. They have a lot of features that wouldn't really be used, but still increase the cost of production somewhat. Intel could certainly build a hell of a custom processor, maybe based off their Itanium line, but why risk trying something new when you can stick to the tried-and-tested PPC line? Keeping binary compatibility has its appeal, too: the current generation of consoles have cost their creators quite a bit of money, so keeping gamers buying the existing catalogue of games for a little bit longer wouldn't be a terrible thing. More on that later, though. Suffice it to say, you can bet that PPC isn't going anywhere, and I would bet good money that you'll see the Power line appear in future consoles for a while yet.

There's a good chance Sony would stick with the CELL. After all, they invested a lot into it, and you could drop in a few overclocked cores to make an even more powerful system that's somewhat backwards compatible with the existing PS3 software, much like Nintendo did with the Wii and the DS. Developers wouldn't mind, because a lot of their existing tools could be easily updated to make use of it. The "best" way to go for developers would be to stick with the same number of cores the PS3 has and clock them higher, but Sony seems to like making drastic changes to their architectures between generations. Still, they've learned a lot from the PS3 and have vowed to offer greater developer support in the future, and part of that may involve greater consideration for developers when it comes to designing the hardware itself. The PS3 was designed to do a lot of things, but no matter which way you spin it, its primary use is as a games console, and the PS4 shouldn't be any different. Sony should be aware by now that the main people who buy into a new generation are the hardcore gamers.

Microsoft could do a similar thing: they could easily turn their tri-core processor into a hex-core, up the clock speed and increase the cache to get a much more powerful system that is easy to make backwards compatible with the previous generation. Indeed, clock speeds in processors haven't really gone up much in the last 5 years, with companies instead focusing on optimising them better and plopping more and more cores together. It seems likely that from here on, all future consoles will have many cores.

With Nintendo, however, I wouldn't be surprised if backwards compatibility was left on the back-burner somewhat. Nintendo has always had some element of backwards compatibility with their handhelds - the Game Boy Color was compatible with Game Boy games, the GBA could play GBC games, the DS could play GBA games, etc. - however, the Wii was the first of their home consoles to offer compatibility with the previous generation, the GameCube. Still, this didn't stop Nintendo from pushing out a few remakes and ports, even if corners were cut. If rumours are to be believed, Nintendo are looking to do more than just up the graphics on the Wii and seem intent on offering yet another new "gameplay experience". What that is is anyone's guess, as Nintendo are very hush-hush on the subject.

Graphics


Ahhh, graphics, the main reason any of us go out to get a new console on launch day. We see the screenshots, the tech demos, the polygon counts; we get excited and we must have it. The problem with the next generation is that the graphical leap from the current one will likely be a lot smaller than previous jumps. From the SNES/Mega Drive (Genesis) era to the PlayStation/N64 era, the big change was the focus on 3D. The 3D wasn't fantastic - textures were blurry, models were blocky and the resolution was often a bit rubbish - but it still looked amazing at the time. Then came the PlayStation 2/Xbox/GameCube generation, and the "sort-of-OK" 3D started to look really impressive. Resolutions got higher, things started looking smoother and rounder. Lighting became an incredible achievement and water effects actually started looking convincing. Fast forward to today's graphics and this has all been taken a step further, to what many are calling the cusp of the "photo-realistic" generation. But where do we go from here?

Rasterisation versus Ray-Tracing


When it comes to talking about future graphics technology, one of the most common things you'll see mentioned is real-time ray-tracing. Ray-tracing, depending on who you talk to, is often heralded as the future of graphics. With it, you'll be able to create ultra-realistic scenes with perfect lighting, reflections and so on. The problem is that ray-tracing is hard work. Ray-tracing works by starting at the pixel on the screen and working backwards, calculating each and every reflection until you reach the end (which is generally the source of the light). It has some advantages over the usual approach (rasterisation); for example, mirrors and portals don't add much to the computation, so you can have as many reflective surfaces as you want. With rasterisation, reflective surfaces essentially mean you have to draw the whole scene again to calculate them, meaning a lot more work for the graphics processor. The problem is that ray-tracing is very computationally expensive. On a 720p screen, you have to trace over 900,000 pixels to draw a single frame. If you want your game to run at 30FPS, that's a lot of pixels to trace; double that for 60FPS. By far, the people doing the most hyping about ray-tracing are Intel, who have done a lot of interesting research projects on the subject.
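To put some rough numbers on that (my own back-of-the-envelope sketch, not figures from Intel), here's the minimum number of primary rays you'd need to fire per second, before a single reflection or shadow ray is counted:

#include <cstdio>

// Illustrative only: primary rays per second needed just to cover every
// pixel once. Real ray-tracing fires extra rays for reflections, shadows
// and anti-aliasing, multiplying this figure several times over.
int main() {
    const long long pixels = 1280LL * 720LL;  // 921,600 pixels at 720p
    const int rates[] = {30, 60};
    for (int fps : rates) {
        printf("%d FPS: %lld rays/sec minimum\n", fps, pixels * fps);
    }
    return 0;
}

That works out at roughly 27.6 million rays per second at 30FPS and 55.3 million at 60FPS - and that's the absolute floor, before the scene gets interesting.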

However, a quick look into these projects reveals a problem: getting Quake Wars to run at 25-30FPS required four hex-core processors (that's 24 cores!) just over a year ago, which is quite a lot considering that Quake Wars isn't exactly the newest, or best-looking, game out there. They seem to have shifted to the idea of "cloud-based ray-tracing", which essentially means that all the hard processing is done on a server farm, not your PC/console, probably for practicality reasons - not everyone happens to have a 24-core machine in their house. Intel is working on what they call a "Many Integrated Core" (MIC) architecture, with demonstrations of monstrous 80-core processors a few years ago, but since then they've been quiet on the subject. They said, back in 2006, that these would be available "within 5 years", but nothing has materialised quite yet. Even if they were to be released this year, they would be extremely high-end chips, costing more on their own than a PS3 did at launch - not exactly a good choice for a console. Real-time ray-tracing may one day grace our homes, but at the moment, it's too computationally expensive to be viable.

Intel's other graphics project, Larrabee, which Intel has allegedly been pushing to go into the Xbox720, has suffered numerous delays and performance issues and seems to have pretty much been shelved by Intel. It may still see the light of day, but all indications point towards it being a bit of a failure and not worth the investment of any of the "big three".

So where does that leave us? Right back where we started - rasterisation. Tried and tested, it simply means the graphics chips you'll see in future consoles will likely be beefier versions of the ones in today's consoles. There's been a large shift towards GPGPU technologies since both the PS3 and 360 came out, meaning that the graphics chips in future consoles could well play a much larger role than before. In an odd twist, you could expect to see much more realistic physics and fluid dynamics, all thanks to the graphics chips being able to do "general" calculations.

It used to be that if you wanted to use a particular graphical technique, you had to wait for it to be implemented in the latest DirectX/OpenGL spec, but now graphics chips are becoming more "programmable", essentially allowing them to do whatever you want, rather than just whatever the DirectX/OpenGL spec lets you do. That fancy tech demo that Lionhead has been showing off lately uses a technique called tessellation to let the graphics chip add and remove polygons on the fly (that's a basic explanation at least; read the link to learn more), which is why you can have "billions" of polygons on screen and still render at a decent frame rate. This is pretty cool, so you'll see it and similar techniques being used to create highly detailed scenes, realistic fur and so on. It's advancements like this that will make the graphics chips inside the next consoles play a much more important role than before, leading to more realistic graphics, physics, AI and whatever else programmers can figure out how to get running on them.

As for which graphics chips will be used? It's hard to say. It's widely speculated that Sony originally intended to design their own graphics chip for the PS3, but it wasn't delivered on time or within performance specifications, so they went to nVidia for a quick solution. Depending on how the relationship between the two companies works out, they may continue with this partnership, switch to AMD or once again attempt to design their own. Microsoft seems much less likely to design their own chip and will probably partner with the likes of AMD once more. Microsoft seems to be pretty happy with the graphical performance and design of their Xenos GPU, so I would expect another AMD-based solution, particularly as they're probably still a bit sore after how nVidia treated them over the original Xbox.

RAM


Allegedly, the Xbox 360 was only going to have 256MB of RAM, until Epic convinced Microsoft to go with 512MB. In hindsight, this was a good idea, as the PS3 ended up with a similar amount, albeit in a completely different configuration. RAM technology doesn't really change much: it gets bigger, latency gets reduced and that's about it. The only thing I can suggest is looking at previous generations and seeing how much it all changed. The PS1 had 2MB of RAM, the PS2 had 32MB and the PS3 has 512MB - an increase of 16x each time, so following the pattern (512MB x 16 = 8192MB), perhaps the PS4 will have 8GB of RAM? That seems a little high at first, but it's not totally beyond the realm of possibility. I've done a bit of googling and, according to my estimations, what 512MB of RAM cost in 2005/2006 would get you somewhere around 6GB of RAM today. In a year or so, 8GB doesn't seem that unlikely after all.

USB versus Thunderbolt


Now, I know the prospect of better graphics and more realistic games is very exciting, but for me that's all to be expected from a new generation and not all that surprising. If the Wii has taught us anything, it's that there's more to making a good games console than throwing lots of polygons at the screen. To that end, I believe that one of the biggest advancements we'll see will come from somewhere as lowly as the port the consoles use to connect their peripherals. Why? Let me tell you.

Last year, both Sony and Microsoft launched their own motion controls: Microsoft has the very popular Kinect and Sony has Move. Kinect has come under a lot of criticism for its lack of accuracy, particularly in certain situations; however, Microsoft claims that this isn't actually a limitation of the Kinect sensor, but rather a limitation of the connection it uses - good ol' USB2. Supposedly, Microsoft is working on a compression algorithm the sensor can use to quadruple the accuracy of the device. Although that claim has since been refuted, it does lead us to one conclusion: Kinect is capable of a lot more, and unlocking that potential could be as simple as switching to a faster connection.

The PlayStation Move isn't that different. Thanks to its accelerometers and gyroscopes, the Move is widely considered to be much more accurate than Kinect, but it still relies on the PS Eye camera for some of its tracking - and that camera connects to a USB2 port, so the same logic applies: a higher-resolution camera could provide much more accuracy, and for that, a faster connection will be required. The Move has a slight advantage in that its camera only records a single colour image (Kinect captures both a colour and an infra-red image) and all of the gyroscope and accelerometer data is sent via Bluetooth, but a bit of extra bandwidth wouldn't hurt: increasing the resolution and frame rate of the camera would increase overall accuracy.
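To get a feel for why the bus matters, here's a back-of-the-envelope sketch - the bytes-per-pixel figures are assumptions of mine for illustration, not official camera specs - comparing raw camera bandwidth against the rough 30-35MB/s that USB2 manages in practice:

#include <cstdio>

// Raw, uncompressed bandwidth of a video stream. The bytes-per-pixel
// values passed in below are illustrative assumptions only.
double stream_mb_per_sec(int w, int h, double bytes_per_pixel, int fps) {
    return (w * h * bytes_per_pixel * fps) / (1024.0 * 1024.0);
}

int main() {
    printf("VGA colour @ 30fps:   ~%.0f MB/s\n", stream_mb_per_sec(640, 480, 2.0, 30));
    printf("VGA depth @ 30fps:    ~%.0f MB/s\n", stream_mb_per_sec(640, 480, 1.5, 30));
    printf("1080p colour @ 30fps: ~%.0f MB/s\n", stream_mb_per_sec(1920, 1080, 2.0, 30));
    return 0;
}

A single uncompressed 1080p stream (almost 120MB/s raw) would swamp USB2 on its own, while USB3's real-world bandwidth of several hundred MB/s would handle it comfortably - exactly the sort of headroom a higher-resolution Kinect or PS Eye would need.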

The question is, though - which connector do you use? Not that long ago, you had a choice between USB and FireWire, with FireWire being the faster of the two, particularly in "real-world" situations, yet USB won out.

Likewise, today there seems to be an imminent battle between USB3 and the recently announced Thunderbolt, from Intel. Thunderbolt, on paper at least, is the "superior" standard, offering twice the speed USB3 is capable of, but it has one major drawback - compatibility. USB3 is compatible with all existing USB products; Thunderbolt is not. So if you were Microsoft or Sony, what would you do?

Well, it depends on what else you want the consoles to do. The USB connectivity on the 360 and PS3 is used for little more than connecting external drives, charging controllers and connecting peripherals like cameras, Kinects and so on. A new console generation means new peripherals designed for it, new controllers, etc., so the only thing being "lost" is that external drive support. Is that a vital, deal-breaking feature, particularly when you can stream all your media over your network these days?

Really, the decision probably isn't that hard to make. Thunderbolt is probably overkill for a console; the bandwidth provided by USB3 would be enough for ultra-high-resolution cameras, and sticking with USB means that if you decide to go for some backwards compatibility, you could keep selling your old peripherals, as well as encourage people to keep their old stuff. One advantage Thunderbolt has is that it can be used to plug in DisplayPort-equipped monitors - up to two, in fact - which is great if you have a PC, but how many people have TVs with a DisplayPort interface? Furthermore, would you need more than one screen? Dropping HDMI support would be a ridiculous idea, so once more the advantages of Thunderbolt are less useful on consoles.

With this one, I'm pretty confident in saying that it's a cut-and-dried situation - USB3 will be widely adopted by everyone. It is worth mentioning that the PS2 did originally include a FireWire port as well as USB, but it wasn't exactly widely used and Sony eventually dropped it. So although you might be hoping to get both connectors, it's not likely to happen, as costs have to be factored into the design at some point.

Part 2 coming soon!


There's a lot more at stake with the next-generation consoles than making everything beefier and faster. In Part 2, I'll be looking into issues such as backwards compatibility, cloud gaming, controller design and more! Stay tuned!

 

-Kushan

Sunday, 27 February 2011

Why the PS3 WAS hard to develop for (and why this is no longer the case)

If you all cast your minds back to around 2006 and the launch of the PS3, you may remember that it had a bit of a hard time getting off the ground. In fact, Sony themselves referred to the PS3 as being on "life support".

There were a lot of reasons for this, ranging from a lack of software to the high cost of investment for anyone who wanted to own the console. Suffice it to say, the PS3 is still very much with us today, partly thanks to Sony pulling the finger out and addressing the major issues that held the console back. Still, one of the earlier criticisms of the console was that it was "difficult to develop for" and "too complicated". One of the side effects of this was that it drove development costs up, forcing some to declare the PS3 "too expensive" from a development standpoint, let alone a consumer standpoint. Of course, there are exceptions to every rule, and a few developers spoke out, saying that the PS3 wasn't difficult to develop for at all; some even said the difficulty was deliberate on Sony's part. There was a lot of confusion, with some developers saying it was no more difficult to develop for than any other console and some saying it was far too difficult, but the barrage of really poor ports to the console in those earlier days couldn't be ignored, and eventually we got some kind of admission from Sony that the PS3 had issues.

That all happened a few years ago, yet the PS3 is still here today and has fantastic developer support. Most cross-platform games are now on a par with, or better than, their 360 or PC counterparts. Even Valve, one of the PS3's most vocal critics, has seriously changed their tune. So what was all the fuss about, and what has changed? Read on to find out!

The Problem


If you ask any average Joe why they think the PS3 was difficult to develop for, nine times out of ten the answer is "The CELL" - the CELL being the name of the processor inside the PS3. If you weren't aware by now, the PS3 has a whole bunch of processing units inside it. It has a regular PowerPC CPU, similar to what you'd get inside older Macs (albeit more powerful) and even the Xbox 360, but custom designed for the PS3. It also has 8 extra processors called "SPUs" (Synergistic Processing Units). One of these SPUs is always disabled; this is done to improve manufacturing yields of the PS3. Another is reserved entirely for the system itself, handling all sorts of odd jobs such as encryption and authentication. However, the 6 remaining SPUs are there for developers to use however they want. And the great news is they're really fast - in fact, some people claimed that the PS3 had nearly "unlimited power" - so you'd think with all that processing power, you could just throw polygons at it and it'd process them before you can even shout "Sixty Eff Pee Ess for all games!". Except this quite obviously isn't the case.

If you were to look at the processing power of a single CELL SPU, you'd see that, as far as processors go, it's not actually that fast. There are certain operations the CELL is optimised for, and with these tasks it can run rings around a regular CPU, but likewise there are other operations that it simply wasn't designed to perform well, meaning that all this humongous processing potential needs to be directed to the right place to make use of it. This is nothing new: in much the same way that you wouldn't expect your CPU to perform graphical calculations while your GPU encrypts and decrypts a bunch of files (although GPGPU developments are making that statement quite irrelevant), certain hardware is simply tailor-made for certain operations. The real power of the CELL comes from the fact that you've got six SPUs to play with, plus a CPU and a dedicated graphics unit. All your bases are covered, and no matter what it is you're trying to do, you've got a lot of power to do it - but you need to apply it in the right places.

Still, this in itself isn't all that difficult or new, either. What was relatively new to game design at the time was the notion of having extra processors to do extra work. Back in 2005, when the PS3's development kits were being handed out, nearly every desktop PC out there had a single processor inside it, some RAM and a graphics card/chipset of some description. Dual-core chips existed, but they weren't that widespread and most games didn't take advantage of them. The consoles at the time (GameCube, PS2 and especially the original Xbox) were in a similar situation: one processor, some RAM and a GPU. The PS2 is a bit of an exception to this rule, but that's another article for another time (the PS2 was also famously hard to develop for).

It didn't matter that PCs had one type of processor while consoles had another; it's not the kind of thing you have to think about much when it comes to allocating your processing. If you have some physics to be calculated, it'd be done by the CPU, whichever CPU that machine happens to have (of course, we'll ignore things like endianness for the sake of keeping this post as simple as possible), and it would take as long as that processor took to calculate it. The PS3 changed that: now you have lots of processors, processors that by themselves are no more powerful than a desktop processor, but combined can do wonderful things.

The PS3 was not actually the first console to take an approach like this. Those of you with a good enough memory might remember the Sega Saturn, a console that was supposedly more powerful than the original PlayStation but was notoriously difficult to develop for, partly due to its use of multiple processors. See a similarity here? Another issue was that Sega didn't help matters much with their limited developer support. It became the norm for developers to more or less ignore all the extra processing power the Saturn had to offer and just use a single CPU. The games suffered, the console suffered and we all know which came out on top between it and the PS1. (Edit: Some people commenting have got a little confused here - I don't mean to imply that this is the sole reason the Saturn didn't do well; it was just one reason of many.)

As I mentioned earlier, right up until 2005, developing on multiple processors, or multiple cores, wasn't really the done thing in the gaming industry. Until that point, multiple CPU systems were reserved for the likes of servers and clusters that were designed to do a lot of different things at once, or the same thing in parallel, over and over and over. Some tasks can easily be parallelised, such as weather calculations, payrolls, protein folding and so on. However, games are distinctly absent from that list. Quite the contrary - games are very linear in nature.

Ok, I know what you're thinking - you've played gorgeous, sprawling, open-world games like Grand Theft Auto and you can see all sorts of things going on at once - cars are moving, people are interacting, weather is simulated, physics is simulated, sound is playing, it's all beautifully drawn on screen and such - how can that all be going on at the same time and yet not actually going on at the same time?

One of the early PS3 vs 360 arguments was that the 360 was designed with games in mind, in particular its tri-core processor. I have seen numerous people state that this was "designed for gaming" because you can have one core processing the physics, one processing the audio and one processing the graphics - perfect and simple. It's just a shame that this description is too simple.

A quick lesson in game design


In order to understand why all this isn't as simple as it seems, it's good to have a really basic understanding of game design. So this will be a very, very basic overview.

If you've ever read any kind of basic guide to computers, you've probably seen something like this:

Input - Process - Output

Supposedly, this is how all computers work: you give them some sort of input, they process this input and give you an output. I won't get into the nitty-gritty of why this isn't necessarily always the case, but from a basic overview, it is pretty much how your computer appears to work. You click on a button and the computer does some processing before displaying whatever it is that it's meant to display. In terms of games, if you move the analogue stick, the computer will do some calculations, work out how far forward you've moved based on how far the analogue stick has been moved, work out if you've walked into a wall or an object, calculate the necessary physics, etc. and then draw the screen, showing you your new position.

Almost all games ever made follow a pattern like this. It doesn't matter if it's Pac-Man or Shenmue: the game grabs the input from your controller, decides what that means (have you moved? have you fired a gun?), then does some AI calculations (has the ghost moved? is it changing direction? is the soldier going to duck for cover?), applies any physics or movement necessary, including collision detection (maybe you touched a wall, a power pill or a ghost; maybe the AI got shot), applies the necessary effects (stops you moving through the wall, takes some health off because you got hit by a bullet), draws what it wants you to see on screen and then repeats.
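In code, a skeleton of that loop looks something like this (a minimal sketch of my own - every name here is illustrative, not taken from any real engine):

#include <cstdio>

struct Input     { int stick_x = 0; bool fire = false; };
struct GameState { int frame = 0; bool quit = false; };

// Stubs standing in for the real work an engine would do.
Input read_controller()                   { return {}; }
void  update_ai(GameState&, const Input&) {}
void  update_physics(GameState&)          {}
void  resolve_collisions(GameState&)      {}
void  render(const GameState& s)          { printf("frame %d\n", s.frame); }

int main() {
    GameState state;
    while (!state.quit) {                  // one trip around the loop == one frame
        Input input = read_controller();   // have you moved? fired a gun?
        update_ai(state, input);           // ghosts turn, soldiers duck for cover
        update_physics(state);             // movement, velocities, gravity
        resolve_collisions(state);         // walls, power pills, bullets
        render(state);                     // draw the results, then go again
        if (++state.frame >= 3) state.quit = true;  // end the demo after 3 frames
    }
    return 0;
}

Notice how each step feeds the next - that strict ordering is exactly what makes the loop hard to spread across cores, as we're about to see.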

Not all games follow this pattern exactly, but as a general rule of thumb, this is what's going on in the background. Do a search for "game loop" and every example you find is more or less the same. It's all very linear - input, then movement, then physics, then collision detection, then the results of collisions, etc. - and this is where we run into issues with the idea of separating the game's logic across the console's processing cores. You can't process the AI until the AI knows what happened with all the physics and collisions. You can't process the physics and collisions until you have the input from the player. You can't draw the screen unless you know where all the objects are, what direction they're facing and how fast they're moving (if you need to add motion blur, for example), and you won't know any of that until the physics are done processing. Furthermore, what happens if you start drawing, say, a box on the screen at the same moment the physics calculates that the box needs to be destroyed?

So on the 360, you've got three cores - if one's doing all the physics, the other two are just going to sit waiting until the physics is completed. Once the physics is done, the other two cores might kick in to do the AI, drawing and numerous other things, but then the "physics" core is sitting doing nothing. Even if there were a way to run all these different operations in parallel, you'd never be able to make sure that all of the cores are kept busy. At some point, one core is going to have a lot more work to do than the others, holding everyone up. And when that happens, you're just wasting all that processing power. The goal is to get every core doing useful work 100% of the time, or as close to it as possible.

Now, I'd like to make a bit of a point here. You often see developers claiming to have "maxed out" a console. I'm not about to undermine what Naughty Dog and the like have said, but it's not actually difficult to have a processor running at full tilt.

while(1) {}

That one line of code will easily get a processor running at 100%. Multiply that by 6 and you've got a PS3 that's "maxed out", but it isn't doing very much. The real secret sauce is when you optimise your code enough to make sure that as the processors are erm...processing, they're doing useful stuff. (Side note: If you want to know a bit more about really getting the most out of a particularly limited system, you can do worse than watching this fantastic presentation on the C64).
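To labour the point, here's that "maxed out" PS3 in full (an illustration in plain C++ threads, of course, not actual SPU code):

#include <thread>
#include <vector>

int main() {
    std::vector<std::thread> spus;
    for (int i = 0; i < 6; ++i) {
        // Each "SPU" spins at 100% load while achieving absolutely nothing.
        spus.emplace_back([] {
            volatile long busywork = 0;    // volatile keeps the loop from being optimised away
            while (true) { busywork = busywork + 1; }
        });
    }
    for (auto& t : spus) t.join();         // never returns - six cores, fully "maxed out"
}

Six cores at 100% utilisation, zero frames rendered - which is why raw utilisation figures tell you very little on their own.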

But how do you make that happen? Games, particularly modern games, can have different paces within them. You could fine-tune a game to death so that the physics, graphics, etc. all take about the same amount of time, but then what happens when someone throws a grenade into a pile of boxes behind you? Your physics calculations suddenly spike, but nothing else does. And since games are so linear by nature, why not just split the work of each task into three or more bits and let the various processors work on it together? I'm glad you asked.

Threading fun


If you didn't already know, the concept of having a program do two things at the same time is called "multi-threading" (not to be confused with multi-tasking, which is the concept of having more than one program running at once). If you think of a program as being executed line-by-line, a multithreaded program is executing two lines at the same time, line-by-line. In our earlier example, the 360 seems well suited to games as there are generally three big things that need to be processed, meaning that if you applied the same logic to the PS3, you'd only be using half of the SPUs - and that doesn't count the PS3's main CPU, either. I can see why a lot of people jump to this conclusion; it "easily" explains why the 360 seems so easy to develop for while the PS3 seems so difficult. But this easy explanation is too easy; there's clearly a bit more to it than that.

As I hit upon earlier, we're now looking to get all of our extra processors/cores to do the same tasks at the same time. Rather than have one do the physics while the others wait around, get all of them to do the physics in a sixth of the time, then the AI in a sixth of the time, and so on; then we're not "wasting" any processing time and we're making full use of the hardware available. It sounds deceptively simple, and that's the catch: this kind of parallel computing is not easy at all.
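Here's what that division of labour looks like when it goes well (a sketch of my own, assuming the physics objects can be split into completely independent slices):

#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct Object { float pos = 0.0f, vel = 1.0f; };

// Each worker updates only its own slice of the array - safe precisely
// because the slices never overlap.
void update_slice(std::vector<Object>& objs, size_t begin, size_t end, float dt) {
    for (size_t i = begin; i < end; ++i)
        objs[i].pos += objs[i].vel * dt;
}

int main() {
    std::vector<Object> objects(600);
    const size_t workers = 6;              // one per available SPU, say
    const size_t chunk = objects.size() / workers;

    std::vector<std::thread> pool;
    for (size_t w = 0; w < workers; ++w)
        pool.emplace_back(update_slice, std::ref(objects),
                          w * chunk, (w + 1) * chunk, 1.0f / 60.0f);
    for (auto& t : pool) t.join();         // wait for the whole physics step to finish

    printf("object 0 is now at %.4f\n", objects[0].pos);
    return 0;
}

The catch is that real game data rarely splits this cleanly - objects collide with objects in other slices, and the moment two threads need the same piece of data, the trouble starts.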

To demonstrate this, imagine a 100m sprint with six athletes, each athlete representing an SPU working away inside the PS3. In an ideal world, each athlete would finish the 100m in exactly the same time, but this isn't likely; as some tasks take a bit longer in different parts, an athlete will slow down a bit. This is OK while each athlete is in their own lane, but that's not what we want to do. We want the athletes to run on the same bit of track, to process the same piece of data. There isn't enough room for two, let alone six - so what happens? They'll trip over each other, knock each other down, people will get injured and the whole race is called off before it finishes.

In computing, if you have two threads running side by side, they are not allowed to cross paths, or bad things will happen. If one thread accesses the same piece of data another thread is accessing at the exact same time, literally anything could happen. It might be OK, it might cause one or both threads to crash, it might cause that piece of data to become corrupt; either way, the outcome isn't good. You might think that the chances of two threads touching the same byte of data at the same time are pretty remote, and you'd almost be right - but we want our game (ideally) running at 60FPS. That means there are at least 60 chances per second of it happening. Then think of it this way: with each frame that passes, a calculation is done to see if your character is colliding with any objects in the world. Even in a simple game like Pac-Man, each dot and ghost is an object. There are probably a couple of hundred dots in your average Pac-Man map, each being compared 60 times a second just to see if Mr Pac-Man has hit any of them. If you have one thread checking Pac-Man and another thread checking the ghosts, at some point each thread is going to compare a ghost and Pac-Man against the same dot on screen, causing exactly that kind of collision. If this happened in a Windows program, the program would likely crash and you'd get the dreaded "not responding" message. What do you think would happen on a PS3 or a 360?
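The standard cure is a lock, which forces the threads to take turns whenever they touch shared data. Here's a toy version of the Pac-Man example (my own sketch, with illustrative names throughout):

#include <cstdio>
#include <mutex>
#include <thread>

// Two threads "eating" from the same pile of dots. Without the lock, both
// can update dots_left at the same instant and the count gets corrupted.
int dots_left = 200;
std::mutex dots_mutex;

void eat_dots(int how_many) {
    for (int i = 0; i < how_many; ++i) {
        std::lock_guard<std::mutex> lock(dots_mutex);  // take turns on the shared data;
        --dots_left;                                   // remove the lock and the result
    }                                                  // becomes unpredictable
}

int main() {
    std::thread pacman(eat_dots, 100);
    std::thread ghost(eat_dots, 100);
    pacman.join();
    ghost.join();
    printf("dots left: %d\n", dots_left);  // always 0 with the lock in place
    return 0;
}

Locks solve the corruption problem, but every lock is also a queue: threads waiting their turn are threads doing nothing, which is why getting this right without throwing away your parallelism is the genuinely hard part.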

So we've established that to get the most out of our consoles, we need all of the processors working on the same set of data without stepping on each other's toes. If you don't know much about programming, that might sound pretty hard. If you do know a bit about programming, or even quite a lot, you'll see it can be just as bad as it sounds. But fear not - this isn't anything new; it was (as of about 2005/2006) just fairly new to gaming. There are plenty of books and utilities out there to help programmers do this; it's just that if you've never done multi-threaded programming like this before, it can be a bit of a mind-bender. The important thing to keep in mind is that if you plan your code out well enough, you won't run into any issues. You'll never be able to reach that dream of 100% useful CPU utilisation, but you can come pretty close.

The problem is that bit about planning. If you design a game engine from scratch to handle all this, it's not too bad to deal with. But most game developers already have an engine - an engine that wasn't built with multi-threading in mind. You have probably noticed that some games, even games vastly different from each other, can look about the same; I can spot an Unreal Engine game at 20 paces. Engines take time and money (and time is money, so engines take money and money) to develop, and it's no good scrapping everything just because a new console came out: the more code you can reuse, the more money you'll save and the faster you'll be able to ship your game. When 2006 came, most games were being released on engines that dated back a couple of years at least. Even today, many games use engines that originate from games two or even three generations ago. Call of Duty is a prime example of this - if you look closely enough, you can still find remnants of the Quake 3 engine it was based on. Of course, years upon years of modification and upgrades mean it's still a "next-gen" engine (at least if you still count the 360/PS3 as being "next-gen"). In order to really make the most of multiple cores and CPUs, extensive work needs to be done to these engines, or a new engine needs to be created from scratch. Neither is a cheap option, and the cheaper (but more complicated) way to go about it is to rework what you have, reusing your existing code as much as you can.

So what has changed?


It's pretty obvious that the PS3s made today still work internally much the same as the ones released in 2006. They still have the six SPUs for developers to use, yet these days nobody is really complaining about how difficult the PS3 is. Have developers just stopped moaning, or has something changed?

It's actually a little of column A and a little of column B. Sony eventually acknowledged that even its own developers were struggling with the console and got its brightest people to work together on various libraries and engines that made the whole thing a lot easier to work with. Then they gave these libraries and tools out to other PS3 developers, showing them how to program for the PS3 and giving plenty of support where they could.

Secondly, by now most developers have migrated their engines over to the new "multi-threaded" way of doing things, or built entirely new engines to harness that power. If you pay particular attention to the Unreal Engine, you can track its development over the years, with Unreal Engine 3-based games becoming better and better on the PS3 as time goes on.

The PS3 also sold enough consoles to make it worthwhile for developers to put the effort in. That's what separates the PS3 from the likes of the Saturn. A similar thing happened with the PS2: the PS2 was also quite hard to develop for, but for different reasons. It didn't have multiple processors, but the graphics chip design was a little bit weird and many developers claimed it was a pain to work with. Still, the PS2 sold millions of consoles, the Dreamcast sank and developers didn't really have much choice but to deal with it, at least if they wanted to make money.

In the end, money is the biggest factor. If a console takes a lot of money to develop for but also has a big install base, it works out much the same as a console that's easier (cheaper) to develop for but has a smaller install base. Had Sony not acted quickly to drop the price of the PS3, the gaming ecosystem as we know it might look vastly different today.



Tuesday, 8 February 2011

Does the PS3 outselling the 360 change anything?

By now, most people interested in this sort of thing will have noticed the news that the PS3 seems poised to outsell the 360. Allegedly, the PS3 is only about 2 or 3 million units behind the 360 and could overtake it any month now.

This would most certainly be an achievement for Sony, and I'm sure someone at Sony HQ will get a nice little bonus and a pat on the back, but almost immediately it'll be business as usual, and I doubt very much that anything will change.

It seems, however, that some people think this is a bigger deal than it really is. In fact, since the launch of the PS3, some gamers have seemed oddly obsessed with the idea of the PS3 overtaking the 360. Almost every year there's a spate of articles predicting that sales of the PS3 will overtake the 360 that year, even as far back as 2008 (2009, 2011, 2012).

In the early days of the "console war", I could almost understand the fixation. Looking at previous generations, everyone knows that when one console overtakes another, it tends to go on to be the dominant console, and the one left in the dust often gets discontinued - such as what happened with the PS2 and the Dreamcast. Yet it has been over 5 years since the 360 launched; it's no longer a "next-gen" console war, it's just a regular "current-gen" console war, and frankly, this war is getting a bit long in the tooth. I have no doubt that the PlayStation 3 will overtake the Xbox 360 at some point, but here's what most people seem to be missing - it won't change anything.

For the sake of argument, let's pretend that 3 million Xbox 360s have succumbed to the RROD and that not a single PS3 has died, meaning that right now, Sony has sold exactly the same number of PS3s as Microsoft has sold 360s. In fact, let's say Sony has sold one additional console and that the PS3 is now in the lead. What happens next?

Something tells me that Activision will still bring out Call of Duty 8 on both consoles, Microsoft will continue selling Kinect and millions of gamers who don't know any better will continue to buy Halo-branded games and overpriced accessories and chat to their mates on Live. Indeed, in this scenario, Microsoft still owns 49.999% of the market (side note: in the UK, a company is said to have "monopoly power" if it owns just 25% of the market) and any respectable game developer or publisher would be silly to ignore that, especially as by now they've got their development pipelines sorted out to the point that porting from one console to the other takes a lot less time than it used to (and it was worth doing back when the 360 had a 9 million console head start).

A cold, hard fact that a lot of platform-conscious gamers (aka fanboys) tend to ignore is that games these days cost a lot of money to make - a hell of a lot of money, in some cases - and the revenue from just one platform isn't going to make that money back. Well, not unless you've got something like Halo on your hands, but those uber-successful franchises are few and far between, and businesses are all about minimising risk and maximising profits.

"But what about the 360's old hardware limiting games?" I hear you say? Well, the problem once again comes down to money. Making a game that really takes advantage of the 360 costs quite a bit of money. Making one that takes advantage of it and more takes even more money, but when you go beyond those limits, you cut your market in half. That's not exactly economical and sooner or later, your publisher is going to put its foot down and tell you, as a developer, to ship the game by a certain date or else. Despite the claims that the gaming industry is "recession-proof", it's hard to ignore the number of studios that have closed down in recent months:

  • Bizarre Creations

  • FASA

  • Sony Studio Liverpool

  • Midway

  • 3D Realms

  • Ensemble

  • GRIN

  • Free Radical (Now Crytek UK)

  • Factor 5


I could go on and on, but to be honest, it was sad enough noting down some of those names.

You could argue that some of those names brought their misfortune upon themselves by releasing terrible games (or not releasing anything at all - you know who I'm talking about), but there are too many great names there for them all to have failed due to misfires and poor decisions.

It's almost sad to say, but if you don't have a practically unlimited budget supplied by your first-party publisher, it doesn't make sense to be exclusive any more.

Of course, with all this talk of consoles overtaking other consoles, it's easy to overlook that red herring we call the Wii. By my count, the Wii has sold something like 80 million units, compared to roughly 48 million PS3s and roughly 50 million Xbox 360s. But does anyone ever count the Wii? No. "It's not a next-gen console," people cry. In that case, the Move and Kinect are not "next-gen" peripherals, but people still treat them as such. As far as I'm concerned, the Wii is just as valid at playing the numbers game as any other console. It has sold 30 million more units than the 360, yet publishers and developers alike aren't exactly stopping all "HD development" in favour of it, so what makes anyone think that the PS3 selling a few hundred thousand, or even a few million, more will change anything?

There's absolutely no reason for anything to change. Had the PS3 overtaken the 360 a couple of years ago, perhaps things would be different, but this late in the game, both consoles may as well call it a draw, and we can start the whole battle all over again with the real next-gen consoles, whenever they may arrive.