Sunday 27 February 2011

Why the PS3 WAS hard to develop for (and why this is no longer the case)

If you all cast your minds back to around 2006 and the launch of the PS3, you may remember that it had a bit of a hard time getting off the ground. In fact, Sony themselves referred to the PS3 as being on "life support".

There were a lot of reasons for this, ranging from a lack of software to the high cost of investment for anyone who wanted to own the console. Suffice it to say, the PS3 is still very much with us today, partly thanks to Sony pulling their finger out and addressing the major issues that held the console back. Still, one of the earlier criticisms of the console was that it was "difficult to develop for" and "too complicated". One of the side effects of this is that it drives development costs up, leading some to declare the PS3 "too expensive" from a development standpoint, let alone a consumer standpoint. Of course, there are exceptions to every rule and a few developers spoke out, saying that the PS3 wasn't difficult to develop for at all; some even claimed the difficulty was deliberate on Sony's part. There was a lot of confusion, with some developers saying it was no more difficult to develop for than any other console and others saying it was far too difficult, but the barrage of really poor ports to the console in those early days couldn't be ignored, and eventually we got some kind of admission from Sony that the PS3 had issues.

That all happened a few years ago, yet the PS3 is still here today and has fantastic developer support. Most cross-platform games are now on par with, or better than, their 360 or PC counterparts. Even Valve, one of the PS3's greatest opponents, has seriously changed their tune. So what was all the fuss about and what has changed? Read on to find out!

The Problem


If you ask any average joe why they think the PS3 was difficult to develop for, nine times out of ten the answer is "The CELL" - the CELL being the name of the processor inside the PS3. If you weren't aware by now, the PS3 has a whole bunch of processing cores inside it. It has a regular PowerPC CPU, similar to what you'd get inside older Macs (albeit more powerful) and even the Xbox 360, but custom designed for the PS3. It also has 8 extra coprocessors called "SPUs" (Synergistic Processing Units). One of these SPUs is always disabled - this is done to improve manufacturing yields of the PS3 - and another is reserved entirely for the system itself, handling all sorts of odd jobs such as encryption and authentication. However, the 6 remaining SPUs are there for developers to use however they want. And the great news is they're really fast; in fact, some people claimed the PS3 had nearly "unlimited power", so you'd think with all that processing power, you could just throw polygons at it and it'd process them before you can even shout "Sixty Eff Pee Ess for all games!". Except this quite obviously isn't the case.

If you were to look at the processing power of a single CELL SPU, you'd see that as far as processors go, it's not actually that fast. There are certain operations the CELL is optimised for, and with these tasks it can run rings around a regular CPU, but likewise there are other operations it simply wasn't designed to perform well, meaning that all this humongous processing potential needs to be directed to the right place to make use of it. This is nothing new: in much the same way that you wouldn't expect your CPU to perform graphical calculations while your GPU encrypts and decrypts a bunch of files (although GPGPU developments are making that statement quite irrelevant), certain hardware is simply tailor-made for certain operations. The real power of the CELL comes from the fact that you've got 6 SPUs to play with, plus a CPU and a dedicated graphics unit. All your bases are covered and no matter what it is you're trying to do, you've got a lot of power to do it, but you need to apply it in the right places.
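
To make that concrete, here's a rough illustration in ordinary C++ - not actual SPU code, which would involve the SPU's own instruction set, its 256KB of local store and DMA transfers. The first function is the kind of straight-line, data-streaming work an SPU tears through; the second is the kind of branchy, pointer-chasing logic it was never built for:

#include <cstddef>

// Streaming maths over a contiguous block of data: no branches, predictable
// memory access, the same operation applied thousands of times over.
void scale_positions(float* positions, std::size_t count, float scale) {
    for (std::size_t i = 0; i < count; ++i)
        positions[i] *= scale;
}

// Branch-heavy, pointer-chasing logic: every step depends on unpredictable
// data. A general-purpose CPU copes fine; a stream processor crawls.
struct Node { int value; Node* next; };

int count_matches(const Node* head, int target) {
    int matches = 0;
    for (const Node* n = head; n != nullptr; n = n->next)
        if (n->value == target)
            ++matches;
    return matches;
}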

Still, this in itself isn't all that difficult or new, either. What was relatively new to game design at the time was this notion that you have extra processors to do extra work. Back in 2005, when the PS3's development kits were being handed out, nearly every desktop PC out there had a single processor inside it, some RAM and a graphics card/chipset of some description. Dual-core chips existed, but they weren't that widespread and most games didn't take advantage of them. The consoles at the time (GameCube, PS2 and especially the original Xbox) were in a similar situation - one processor, some RAM and a GPU. The PS2 is a bit of an exception to this rule, but that's another article for another time (the PS2 was also famously hard to develop for).

It didn't matter that PCs had one type of processor while consoles had another; it's not the kind of thing you had to think about much when it came to allocating your processing. If you had some physics to be calculated, it'd be done by the CPU, whichever CPU that machine happened to have (of course, we'll ignore things like endianness for the sake of keeping this post as simple as possible), and it would take as long as that processor took to calculate it. The PS3 changed that - now you have lots of processors, processors that by themselves are no more powerful than a desktop processor, but combined can do wonderful things.

The PS3 was not actually the first console to take an approach like this. Those of you with a good enough memory might remember the Sega Saturn, a console that was supposedly more powerful than the original PlayStation but was notoriously difficult to develop for, partly due to its use of multiple processors. Seeing a similarity here? Another issue was that Sega didn't help matters much with its limited developer support. It pretty much became the norm for developers to more or less ignore all the extra processing power the Saturn had to offer and just use a single CPU. The games suffered, the console suffered and we all know which came out on top between it and the PS1. (Edit: Some people commenting have got a little confused here - I don't mean to imply that this is the sole reason the Saturn didn't do well; it was just one reason of many.)

As I mentioned earlier, right up until 2005, developing on multiple processors, or multiple cores, wasn't really the done thing in the gaming industry. Until that point, multiple CPU systems were reserved for the likes of servers and clusters that were designed to do a lot of different things at once, or the same thing in parallel, over and over and over. Some tasks can easily be parallelised, such as weather calculations, payrolls, protein folding and so on. However, games are distinctly absent from that list. Quite the contrary - games are very linear in nature.

Ok, I know what you're thinking - you've played gorgeous, sprawling, open-world games like Grand Theft Auto and you can see all sorts of things going on at once - cars are moving, people are interacting, weather is simulated, physics is simulated, sound is playing, it's all beautifully drawn on screen and such - how can that all be going on at the same time and yet not actually going on at the same time?

One of the early PS3 vs 360 arguments was that the 360 was designed with games in mind, in particular its tri-core processor. I have seen numerous people state that this was "designed for gaming" because you can have one core processing the physics, one processing the audio and one processing the graphics - perfect and simple. It's just a shame that this description is too simple.

A quick lesson in game design


In order to understand why all this isn't as simple as it seems, it's good to have a really basic understanding of game design, so this will be a very, very basic overview.

If you've ever read any kind of basic guide to computers, you've probably seen something like this:

Input - Process - Output

Supposedly, this is how all computers work. You give them some sort of input, they process it and they give you an output. I won't get into the nitty-gritty of why this isn't necessarily always the case, but as a basic overview, it's pretty much how your computer appears to work. You click a button and the computer does some processing before displaying whatever it's meant to display. In terms of games, if you move the analogue stick, the computer will do some calculations, work out how far forward you've moved based on how far the stick has been pushed, work out if you've walked into a wall or an object, calculate the necessary physics and so on, and then draw the screen, showing you your new position.

Almost all games ever made actually follow a pattern like this. It doesn't matter if it's Pac-Man or Shenmue: the game grabs the input from your controller and decides what that means (Have you moved? Have you fired a gun?), then does some AI calculations (Has the ghost moved? Is it changing direction? Is the soldier going to duck for cover?), applies any physics or movement necessary as a result, including collision detection (maybe you touched a wall, or a power pill, or a ghost; maybe the AI got shot), applies the necessary effects (stops you from moving through the wall, takes some health off because you got hit by a bullet), draws what it wants you to see on screen and then repeats.
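
In code, that loop looks something like the following - a minimal, self-contained C++ sketch with entirely made-up function names (no real engine is this simple), showing the strict ordering of the stages:

#include <iostream>

// Hypothetical stand-ins for real engine systems - all names are made up.
struct World { float player_x = 0.0f; };

float read_controller()                    { return 1.0f; }                // stick deflection
void  apply_physics(World& w, float input) { w.player_x += input * 0.1f; } // movement
bool  hit_wall(const World& w)             { return w.player_x > 5.0f; }   // collision check
void  render(const World& w)               { std::cout << "x = " << w.player_x << "\n"; }

int main() {
    World world;
    for (int frame = 0; frame < 60; ++frame) { // one second's worth at 60 FPS
        float input = read_controller();       // 1. input
        apply_physics(world, input);           // 2. physics and movement
        if (hit_wall(world))                   // 3. collision detection...
            world.player_x = 5.0f;             //    ...and its effects
        render(world);                         // 4. output: draw the frame
    }
}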

Not all games will follow this pattern exactly, but as a general rule of thumb, this is what's going on in the background. Do a search for "game loop" and every example you find is more or less the same. It's all very linear - input, then movement, then physics, then collision detection, then the results of collisions, etc. - and this is where we have issues with the idea of separating the game's logic across the separate processing cores of the console. You can't process the AI until the AI knows what happened with all the physics and collisions. You can't process the physics and collisions until you have the input from the player. You can't draw the screen unless you know where all the objects are, what direction they're facing and how fast they're moving (if you need to add motion blur, for example), and you won't know any of that until the physics are done processing. Furthermore, what happens if you start drawing, say, a box on the screen and at that same moment, the physics calculates that the box needs to be destroyed?

So on the 360 you've got 3 cores - if one's doing all the physics, the other two are just going to sit waiting until the physics is done. Once the physics is done, the other two cores might kick in to do the AI, drawing and numerous other things, but then the "physics" core is sitting doing nothing. Even if there were a way to run all these different operations in parallel, you'd never be able to keep every core busy. At some point, one core is going to have a lot more work to do than the others, holding everyone up. And when that happens, you're just wasting all that processing power. The goal is to get every core doing useful work 100% of the time, or as close to it as possible.

I'd like to make a bit of a point here. You often see developers claiming to have "maxed out" a console. Now, I'm not about to undermine what Naughty Dog and the like have said, but it's not actually difficult to get a processor running at full tilt.

while(1) {}

That one line of code will easily get a processor running at 100%. Multiply it by 6 and you've got a PS3 that's "maxed out", but it isn't doing very much. The real secret sauce is optimising your code so that as the processors are, erm... processing, they're doing useful stuff. (Side note: if you want to know a bit more about really getting the most out of a particularly limited system, you could do worse than watching this fantastic presentation on the C64.)
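
To labour the point, here's a small C++ sketch of that "maxed out but useless" state - std::thread stands in for the PS3's SPUs, which used their own job APIs, so treat this as an illustration only. Every core will sit at 100% utilisation while computing precisely nothing of value:

#include <thread>
#include <vector>

// One busy-loop per hardware thread: every core reports 100% utilisation,
// yet nothing of any value is ever computed. "Maxed out" is not "useful".
int main() {
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 6; // fall back to a PS3-ish number if unknown

    std::vector<std::thread> spinners;
    for (unsigned i = 0; i < cores; ++i)
        spinners.emplace_back([] {
            volatile unsigned long counter = 0; // volatile: don't optimise away
            for (;;) ++counter;                 // spin forever, achieving nothing
        });

    for (auto& t : spinners) t.join(); // never returns - Ctrl+C to escape
}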

But how do you make that happen? Games, particularly modern games, can have very different paces within them. You could fine-tune a game to death so that the physics, graphics, etc. all take about the same amount of time, but then what happens when someone throws a grenade into a pile of boxes behind you? Your physics workload suddenly shoots up, but nothing else does. So, since games are so linear by nature, why not just split the work of each task into 3 or more bits and let the various processors work on it together? I'm glad you asked.

Threading fun


If you didn't already know, the concept of having a program do two things at the same time is called "multi-threading" (not to be confused with multi-tasking, which is the concept of having more than one program running at once). If you think of a program as being executed line-by-line, a multithreaded program is executing two lines at the same time, line-by-line. In our earlier example, the 360 seems well suited to games as there are generally 3 big things that need to be processed, meaning that if you applied the same logic to the PS3, you'd only be using half of the SPUs - and that doesn't count the PS3's main CPU, either. I can see why a lot of people jump to this conclusion; it "easily" explains why the 360 seems so easy to develop for while the PS3 seems so difficult, but this easy explanation is too easy - there's clearly a bit more to it than that.
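
If that's still a bit abstract, here's the idea in its most minimal form, sketched in modern C++ with std::thread (the PS3's own SDK exposed this differently, so the names here are illustrative only): two functions genuinely executing at the same time within one program.

#include <iostream>
#include <thread>

// Two tasks running at the same time within one program. The output from
// the two loops may interleave in any order - they really are concurrent.
void simulate_physics() { for (int i = 0; i < 3; ++i) std::cout << "physics step " << i << "\n"; }
void decode_audio()     { for (int i = 0; i < 3; ++i) std::cout << "audio chunk "  << i << "\n"; }

int main() {
    std::thread physics(simulate_physics); // starts running immediately...
    std::thread audio(decode_audio);       // ...alongside this one
    physics.join();                        // wait for both to finish
    audio.join();
}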

As I touched upon earlier, we're now looking to get all of our extra processors/cores to do the same tasks at the same time. Rather than have one do the physics while the others wait around, we get all of them to do the physics in 1/6 of the time, then the AI in 1/6 of the time, and so on and so on; then we're not "wasting" any processing time and we're making full use of the hardware available. It sounds deceptively simple, and the emphasis is on "deceptively" - this kind of parallel computing is not easy at all.
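
As a sketch of what that looks like (again in plain C++ rather than real SPU code, and with made-up numbers), here's one big physics-style job chopped into six independent slices, one per worker. Note that this only works cleanly because each worker touches its own slice and nothing else:

#include <cstddef>
#include <thread>
#include <vector>

// One big "physics" job split into six independent slices, one per worker
// (six standing in for the PS3's six available SPUs). Safe only because
// each worker writes to its own slice and nothing else.
int main() {
    std::vector<float> velocities(60000, 1.0f); // 60000 divides evenly by 6
    const unsigned workers = 6;
    const std::size_t slice = velocities.size() / workers;

    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w)
        pool.emplace_back([&velocities, slice, w] {
            const std::size_t begin = w * slice;
            const std::size_t end   = begin + slice;
            for (std::size_t i = begin; i < end; ++i)
                velocities[i] *= 0.99f; // apply drag to our slice only
        });

    for (auto& t : pool) t.join(); // the job finishes in roughly 1/6 the time
}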

To demonstrate this, imagine a 100m sprint with 6 athletes. Each athlete represents an SPU working away inside the PS3. In an ideal world, each athlete would finish the 100m in exactly the same time, but this isn't likely. As some parts of a task take a bit longer than others, an athlete will slow down here and there. This is OK because each athlete is on their own bit of the track, but that's not what we want. We want the athletes to run on the same bit of track - to process the same piece of data. There isn't enough room for two, let alone six, so what happens? They'll trip over each other, knock each other down, people will get injured and the whole race is called off before it finishes.

In computing, if you have two threads running side by side, they are not allowed to cross paths or bad things will happen. If one thread accesses a piece of data at the exact same time another thread is accessing it, literally anything could happen. It might be OK, it might cause one or both threads to crash, it might cause that piece of data to become corrupt; either way, the outcome isn't good. You might think the chances of two threads touching the same byte of data at the same time are pretty remote, and you'd almost be right - but we want our game (ideally) running at 60 FPS. That means there are at least 60 chances per second of it happening. Then think of it this way: with each frame that passes, a calculation is done to see if your character is colliding with any objects in the world. Even in a simple game like Pac-Man, each dot and ghost is an object. There are probably a couple of hundred dots in your average Pac-Man maze, each being compared 60 times a second just to see if Pac-Man has hit any of them. If you have one thread checking Pac-Man and another thread checking the ghosts, at some point the two threads are going to compare Pac-Man and a ghost against the same dot at the same time, causing exactly the kind of collision we're talking about. If this happened in a Windows program, the program would likely crash and you'd get the dreaded "not responding" message. What do you think would happen on a PS3 or a 360?
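
Here's a deliberately broken C++ sketch of exactly that situation - two threads hammering one shared variable with no protection. This is a data race, and formally the behaviour is undefined; in practice the final total comes out wrong on most runs because the threads' read-modify-write steps trample each other:

#include <iostream>
#include <thread>

// Two threads bump one shared counter with no synchronisation whatsoever.
// Each "++score" is really a read-modify-write, and the threads' steps
// interleave, losing updates. Expect a different (wrong) answer most runs.
int main() {
    long long score = 0;
    auto bump = [&score] {
        for (int i = 0; i < 1000000; ++i)
            ++score; // data race: both threads touch the same memory
    };

    std::thread a(bump), b(bump);
    a.join();
    b.join();
    std::cout << "expected 2000000, got " << score << "\n";
}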

So we've established that to get the most out of our consoles, we need to get all of the processors working on the same set of data, but without stepping on each other's toes. If you don't know much about programming, that might sound pretty hard. If you do know a bit about programming, or even quite a lot about programming, you'll see it can be just as bad as it sounds. But fear not - none of this is new; it (as of about 2005/2006) was just fairly new to gaming. There are plenty of books and utilities out there to help programmers do this; it's just that if you've never done multi-threaded programming like this before, it can be a bit of a mind-bender. The important thing to keep in mind is that if you plan your code out well enough, you won't run into any issues. You'll never reach that dream of 100% useful CPU utilisation, but you can come pretty close.
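
The classic tool for "not stepping on each other's toes" is the lock. Here's the broken counter from above made safe with a mutex - a minimal sketch, not how a real engine would do it (real engines lean on cleverer schemes like job queues and lock-free structures, precisely because locks make threads queue up and waste time):

#include <iostream>
#include <mutex>
#include <thread>

// The same shared counter, now guarded by a mutex: only one thread may hold
// the lock at a time, so updates can no longer trample each other. The cost
// is that the threads spend time queuing for the lock instead of working.
int main() {
    long long score = 0;
    std::mutex score_lock;

    auto bump = [&] {
        for (int i = 0; i < 1000000; ++i) {
            std::lock_guard<std::mutex> guard(score_lock); // take the lock
            ++score;                                       // safe update
        } // lock released here, letting the other thread in
    };

    std::thread a(bump), b(bump);
    a.join();
    b.join();
    std::cout << "always 2000000: " << score << "\n";
}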

The problem is that bit about planning. If you design a game engine from scratch to handle all this, it's not too bad to deal with. But most game developers already have an engine - an engine that wasn't built with multi-threading in mind. You have probably noticed that some games, even games vastly different from each other, can look sort of the same. I can spot an Unreal Engine game at 20 paces. Engines take time and money (and time is money, so engines take money and money) to develop; it's no good scrapping everything just because a new console came out. The more code you can reuse, the more money you'll save and the faster you'll be able to ship your game. When 2006 came, most games were being released on engines that dated back a couple of years (at least). Even today, many games use engines that originate from games 2 or even 3 generations old. Call of Duty is a prime example of this - if you look closely enough, you can still find remnants of the Quake 3 engine it was based on. Of course, years upon years of modification and upgrades mean it's still a "next-gen" engine (at least if you still count the 360/PS3 as being "next-gen"). To really make the most of multiple cores and CPUs, extensive work needs to be done to these engines, or a new engine has to be built from scratch. Neither option is cheap, and the cheaper (but more complicated) route is to reuse your existing code as much as you can.

So what has changed?


It's pretty obvious to say that the PS3s made today still work internally much the same as the ones released in 2006. They still have the 6 SPUs for developers to use, yet these days nobody is really complaining about how difficult the PS3 is. Have developers just stopped moaning, or has something changed?

It's actually a little of column A and a little of column B. Sony eventually acknowledged that even its own developers were struggling with the console and got its brightest people working together to create various libraries and engines that made the whole thing a lot easier to work with. Then they gave these libraries and tools out to other PS3 developers, showing them how to program for the PS3 and giving plenty of support where they could.

Secondly, by now most developers have migrated their engines over to the new "multi-threaded" way of doing things, or built entirely new engines to harness that power. If you pay particular attention to the Unreal Engine, you can track its development over the years, with Unreal Engine 3-based games getting better and better on the PS3 as time goes on.

The PS3 sold enough consoles to make it worthwhile for the developers to put the effort into this. That's what separates the PS3 from the likes of the Saturn. A similar thing happened with the PS2 - the PS2 was also quite hard to develop for, but for different reasons. It didn't have multiple processors, but the graphics chip design was a little bit weird and many developers claimed it was a pain to work with. Still, the PS2 sold millions of consoles, the Dreamcast sank and developers didn't really have much choice but to just deal with it, at least if they wanted to make money.

In the end, money is the biggest factor. If a console takes a lot of money to develop for but also has a big install base, it works out much the same as a console that's easier (and therefore cheaper) to develop for but has a smaller install base. Had Sony not acted quickly to drop the price of the PS3, the gaming ecosystem as we know it might look vastly different today.



Tuesday 8 February 2011

Does the PS3 outselling the 360 change anything?

By now, most people interested in this sort of thing will have noticed the news that the PS3 seems poised to outsell the 360. Allegedly, the PS3 is only about 2 or 3 million units behind and could overtake it any month now.

This would most certainly be an achievement for Sony and I'm sure someone at Sony HQ will get a nice little bonus and a pat on the back, but almost immediately it'll be business as usual and I doubt very much that anything will change.

It seems, however, that some people think this is a bigger deal than it really is. In fact, since the launch of the PS3, some gamers seem oddly obsessed with the idea of the PS3 overtaking the 360. Almost every year there's a spate of articles predicting that sales of the PS3 will overtake the 360 that year, even as far back as 2008 (2009, 2011, 2012).

In the early days of the "console war", I could almost understand the fixation. Looking at previous generations, everyone knows that when one console overtakes another, it tends to go on to be the best-selling console, and the one left in the dust often gets discontinued, as happened with the PS2 and the Dreamcast. Yet it has been over 5 years since the 360 launched; it's no longer a "next-gen" console war, it's just a regular "current-gen" console war and, frankly, this war is getting a bit long in the tooth. I have no doubt that the PlayStation 3 will overtake the Xbox 360 at some point, but here's what most people seem to be missing - it won't change anything.

For the sake of argument, let's pretend that 3 million Xbox 360s have succumbed to the RROD and that not a single PS3 has died, meaning that right now, Sony has sold exactly the same number of PS3s as Microsoft has sold 360s. In fact, let's say Sony has sold 1 additional console and that the PS3 is now in the lead. What happens next?

Something tells me that Activision will still bring out Call of Duty 8 on both consoles, Microsoft will continue selling Kinect and millions of gamers who don't know any better will continue to buy Halo-branded games, overpriced accessories and chat to their mates on Live. Indeed, in this scenario Microsoft still owns 49.999% of the market (side note: in the UK, a company is said to have "monopoly power" if it owns just 25% of the market) and any respectable game developer or publisher would be silly to ignore that, especially as by now they've got their development pipelines sorted out to the point where porting from one console to the other takes a lot less time than it used to (and it was worth doing back when the 360 had a 9-million-console head start).

A hard, cold fact that a lot of platform-conscious gamers (aka fanboys) tend to ignore is that games these days cost a lot of money to make. A hell of a lot of money in some cases, and the revenue from just one platform isn't going to make that money back. Well, not unless you've got something like Halo on your hands, but those uber-successful franchises are few and far between, and businesses are all about minimising risk and maximising profit.

"But what about the 360's old hardware limiting games?" I hear you say? Well, the problem once again comes down to money. Making a game that really takes advantage of the 360 costs quite a bit of money. Making one that takes advantage of it and more takes even more money, but when you go beyond those limits, you cut your market in half. That's not exactly economical and sooner or later, your publisher is going to put its foot down and tell you, as a developer, to ship the game by a certain date or else. Despite the claims that the gaming industry is "recession-proof", it's hard to ignore the number of studios that have closed down in recent months:

  • Bizarre Creations

  • FASA

  • Sony Studio Liverpool

  • Midway

  • 3D Realms

  • Ensemble

  • GRIN

  • Free Radical (Now Crytek UK)

  • Factor 5


I could go on and on, but to be honest, it was sad enough noting down some of those names.

You could argue that some of those studios brought misfortune upon themselves by releasing terrible games (or not releasing anything at all - you know who I'm talking about), but there are too many great names there for them all to have failed due to misfires and poor decisions.

It's almost sad to say, but if you don't have a practically unlimited budget supplied by your first-party publisher, it doesn't make sense to be exclusive any more.

Of course, with all this talk of consoles overtaking other consoles, it's easy to overlook that red herring we call the Wii. By my count, the Wii has sold something like 80 million units, compared to the (roughly) 48 million PS3s and (roughly) 50 million Xbox 360s. But does anyone ever count the Wii? No. "It's not a next-gen console," people cry. In that case, Move and Kinect are not "next-gen" peripherals, but people still treat them as such. As far as I'm concerned, the Wii is just as valid at playing the numbers game as any other console. It has sold 30 million more units than the 360, yet publishers and developers alike aren't exactly stopping all "HD development" in favour of it, so what makes anyone think that the PS3 selling a few hundred thousand, or even a few million, more will change anything?

There's absolutely no reason for anything to change. Had the PS3 overtaken the 360 a couple of years ago, perhaps things would be different, but this late in the game, both consoles may as well call it a draw and we can start the whole battle all over again with the real next-gen consoles, whenever they may arrive.