Thoughts on Consoles and Certification Processes

Last weekend I wrote an email to Kyle Orland of Ars Technica in response to a question about an independent game’s certification troubles. That letter was quoted in this article, which was then summarized by gamesindustry.biz, and is probably floating elsewhere around the internet too.

The original letter went through a few examples of specific issues of certification and how they could be handled better, to help substantiate why I think these processes are so broken. These are not the kinds of details you would expect to make it into a short article for a general audience, but having typed them up, I think some people out there might be interested (especially developers who have not done console games before but are thinking about it). So, below is my original letter (edited a little bit for clarity):

—–

Hi Kyle,

I have gone on record a number of times, saying that the XBLA certification process makes games worse rather than better. But that is a pretty old experience for me now (4 years ago), and they are unlikely to change any of that stuff until the new console comes out anyway. Keep in mind that the Xbox people as a whole do not think of XBLA as a high priority (you can get a glimpse of this from the E3 press conferences, etc).

But, I think the more important issue applies globally, to all of Microsoft, Sony and Nintendo (and not just to smaller downloadable games, but to every game regardless of size). The certification processes of all these platform holders were based on the idea that all these steps they test are absolutely necessary for software to run robustly, and that software robustness is super-important for the health of their platform and its perception by customers.

But, look at iOS. There is almost no certification process for iOS, so by the Microsoft/Sony/Nintendo theory, the apps should be crashing all the time, everyone should think of iOS as sucky, etc. But in fact this is not what is happening. There is no public outcry for more testing and robustness of iOS software.

Part of this may be that iOS software is so easily patched; so maybe a heavy cert process made sense back in the disc-only days, but as we go into the next console generation it becomes unnecessary. But something tells me that Microsoft/Sony/Nintendo are not really going to let up on cert, even though they say they will. It’s just “not in their DNA” as they say. The proof of this is the bizarre over-complexification that already happens with console games today, and is baked into the current certifications, that any of them could easily fix, but none of them do, because they don’t care.

For example: every single game is REQUIRED to say on startup, “sometimes this game saves, when you see this animated icon in the corner, DO NOT TURN OFF YOUR CONSOLE, etc”. This is something that developers have to implement and that has to be tested, which costs significant time and money, but worse than all that, it impacts the user experience, because the startup of the game becomes just a little more bureaucratized, and also — this is supposed to be a fun experience, so why are you issuing warnings and strict instructions? (Just one thing like this may not seem like too much, but combined with everything else, it is a lot; I am using it as just one specific example).

Why is this DO NOT TURN OFF YOUR CONSOLE warning there? It’s because if the game is saving a save file, overwriting an old one, and you turn off in the middle, you might corrupt the saved game. Well, guess what… the solution to this is to IMPLEMENT A MORE ROBUST SAVE SYSTEM. You save the new file next to the old file, flush the I/O, and only delete the old file once the integrity of the new one has been verified (or just don’t remove it, and keep two copies of the save at all times). On load, if one file is corrupted, you just load the intact one. This is not hard to implement; I did it in Braid. But if consoles cared about this kind of thing, it would be built into their basic save-file API, so that it would always work perfectly and no developers would ever have to think about it.
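To make the point concrete, here is a minimal sketch of such a robust save in portable C++17. This is not any console’s actual API; the file naming, the toy checksum, and the error handling are all illustrative, and a real implementation would also fsync the file and its directory where the platform allows it:

```cpp
#include <cstdint>
#include <cstring>
#include <filesystem>
#include <fstream>
#include <iterator>
#include <vector>

namespace fs = std::filesystem;

// Toy integrity check; a real system would use CRC32 or a cryptographic hash.
static uint32_t checksum(const char *data, size_t len) {
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum = sum * 31u + static_cast<unsigned char>(data[i]);
    return sum;
}

// Write payload + checksum to a NEW file, verify it, then atomically replace
// the old save. If power is cut at any point, either the old save or the new
// one survives intact -- never a half-written file under the final name.
bool save_robust(const fs::path &save_path, const std::vector<char> &data) {
    fs::path tmp = save_path;
    tmp += ".tmp";

    {   // 1. Write next to the old save, leaving it untouched.
        std::ofstream out(tmp, std::ios::binary | std::ios::trunc);
        if (!out) return false;
        uint32_t sum = checksum(data.data(), data.size());
        out.write(data.data(), static_cast<std::streamsize>(data.size()));
        out.write(reinterpret_cast<const char *>(&sum), sizeof sum);
        out.flush();                       // push buffered bytes to the OS
        if (!out) return false;
    }

    {   // 2. Read the new file back and confirm its checksum before
        //    touching the old one.
        std::ifstream in(tmp, std::ios::binary);
        std::vector<char> buf((std::istreambuf_iterator<char>(in)),
                              std::istreambuf_iterator<char>());
        if (buf.size() != data.size() + sizeof(uint32_t)) return false;
        uint32_t stored;
        std::memcpy(&stored, buf.data() + data.size(), sizeof stored);
        if (stored != checksum(buf.data(), data.size())) return false;
    }

    // 3. Atomic rename: the file under the final name is always complete.
    std::error_code ec;
    fs::rename(tmp, save_path, ec);
    return !ec;
}
```

On load, the mirror-image check applies: read the file, recompute the checksum, and fall back to the previous copy (if you keep one) when it doesn’t match.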

If they did this, there would be fewer things to certify, certification would cost a little less and take a bit less time. Let’s say you save 3 days of development and testing per game (this is conservative; the real amount can be substantially higher when you factor in the discussions and coordination about how the save notice should look, etc). Now add up how many games have been released just on the Xbox 360, multiply that number by 3 days, and what you get is probably OVER A DECADE OF DEVELOPER TIME that was wasted. Just on this one little requirement. For something that should just be built into the system by one person in a couple of weeks. Okay, that was only the Xbox, so now add in all the PS3 and Wii games, and see what a huge waste of time that represents.

Like I said, just this one thing by itself would not be too bad, but it is only an example. Another example that all three certification processes have is the “title-safe region” restriction. This basically means that you are not allowed to draw text or gameplay-critical information near the edge of the screen (about 10% of screen width in a margin around the edge) because you don’t know if the user’s TV is displaying the whole picture (even modern HDTVs often cover the edge of the display with a bezel or else just crop it entirely, which is completely stupid).

Okay, at first blush this seems to make sense: to make sure players can see everything, we have to go through each screen and each mode in every game we ever release and certify that important things don’t get drawn too far toward the edge. Well, as you can extrapolate from my previous example, this also has wasted DECADES OF DEVELOPER TIME (probably many decades, because dealing with title-safe probably takes a lot more time than save/load). And it is also something that these consoles could handle at the system level, if they actually cared about the problem. Just put a screen calibration menu in the dashboard, and then initialize it properly so that players would almost never need to use it (when they are using HDMI output, initialize the settings using the display’s EDID; if they are outputting a different way, just default to shrinking the display a little bit).
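As a sketch of what “handle it at the system level” could look like: the dashboard stores one calibrated overscan value, and the system compositor scales every game’s finished frame by it. The types and the 1.0/0.9 convention here are invented for illustration; no console exposes exactly this:

```cpp
// Hypothetical system-wide overscan setting: 1.0f = use the full frame,
// 0.9f = shrink the picture 10% because the TV crops the edges.
// It would be seeded from the display's EDID over HDMI and adjustable
// in a dashboard calibration menu.
struct Overscan { float scale; };

struct Viewport { int x, y, w, h; };

// The compositor maps a game's frame (at ANY resolution) into the safe area.
// Rounding to nearest avoids off-by-one truncation from float imprecision.
Viewport apply_overscan(int frame_w, int frame_h, Overscan o) {
    int w = static_cast<int>(frame_w * o.scale + 0.5f);
    int h = static_cast<int>(frame_h * o.scale + 0.5f);
    return { (frame_w - w) / 2, (frame_h - h) / 2, w, h };
}
```

For example, `apply_overscan(1280, 720, {0.95f})` yields a centered 1216×684 viewport; the game itself never has to know the setting exists.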

10% at the edges of the screen may not sound like much, but it is actually huge. Suppose a game is rendering at 1280×720… you are only really able to use the interior 1024×576, with everything outside that just being decoration. 1280×720 is 921,600 pixels; 1024×576 is 589,824 pixels; in other words, you only get to use 64% of the screen area for real! You know how the 360’s new Metro interface seems to use an absurdly small amount of real estate in the middle of the screen? That is due in large part to the title-safe region. You know how games always seem to have their HUD elements placed too far inward, when they could be nicer further off to the sides? Same.
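That arithmetic is easy to verify; this little helper assumes the 10%-per-edge margin described above (the function name is mine, not any console SDK’s):

```cpp
// Usable interior extent after carving a fractional margin off each edge.
// Rounded to nearest so that e.g. 1280 * 0.8 doesn't truncate to 1023.
int safe_extent(int extent, double margin_per_edge) {
    return static_cast<int>(extent * (1.0 - 2.0 * margin_per_edge) + 0.5);
}
```

`safe_extent(1280, 0.10)` gives 1024 and `safe_extent(720, 0.10)` gives 576, and (1024 × 576) / (1280 × 720) is exactly 0.64: the 64% figure above.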

This would also have the benefit of saving work for most people who implement 2D games. In Braid we had to program a screen calibration menu. I notice there’s one at the beginning of Dyad, and one at the beginning of PixelJunk Shooter. For God’s sake why??

This makes all games worse all the time, and makes developers have to do a lot more work, but ultimately could be fixed very easily by any of the platform holders. They just don’t care enough to do it. I think they will have a hard time getting rid of these rules because bureaucracy is in their DNA.

These are just two examples of many… I am going to stop ranting here, but this is just the tip of the iceberg. Most certification requirements are like this.

The edge that both Apple and Valve have going into the future is that they both genuinely care about the end-user experience and want to make it as good as possible. (Their end-user experience is already way better than any of the consoles and they are always working to improve it). Which coincidentally seems to be the place that these consoles are handicapped due to their corporate culture. Can anyone look at the current 360 or PS3 dashboards and legitimately say that those are products of an entity that deeply cares about user experience?

A question a lot of developers have is: When the 720 / PS4 get launched, how many people are really going to care and buy games for those systems? Obviously some people will, but if it is fewer than the number who bought games this generation — which a lot of developers think is very possible — then look for a “peak oil” kind of crash where a lot of too-big publishers and developers fight over a shrinking market. If the actual way the next-gen consoles work is much like now, they will be functionally archaic in the marketplace (keep in mind that they have to compete with the iPad 4, 5, 6, 7, 8, and 9. Have you got any idea what the iPad 5 or 6 are going to look like, how powerful they are going to be, what other user experience benefits they are going to have? I sure don’t.)

—–

A few days after having written this letter, I just have one more thought to add: I know that Microsoft thinks they are fixing these issues with their next console, and creating an environment more suitable to free-to-play games and downloadable things in general; I assume Sony is thinking similarly. I will be very surprised if either of them succeeds in fixing these problems. Large institutions always undercorrect; they always think they are being radical and risky when in fact they are doing something tame that is just a modified version of the status quo. When large institutions need to change course by 95% in order to do well, even if they know it’s an emergency and are totally panicked, they can probably only manage about 35%. For recent illustrations of this, look at Nokia and RIM (or, uhh, look at the large governments of Earth in dealing with global warming).

So I would bet that Microsoft and Sony believe they are streamlining their online stuff, because they are implementing an expedited, simplified cert process for patches and content updates, or something, and that because of this they will be in a great position to compete with Facebook and Apple and Steam and whoever. Whatever they do is very likely not to be enough. Their competitors are not stopping either. (Steam, which was already pretty painless in terms of updating games, recently revamped their system; the new thing is way better than the old thing, which was already way better than what the consoles do.)

This entry was posted in Ill-Advised Rants.

58 Comments

  1. Posted July 24, 2012 at 5:44 pm | Permalink

    Great letter Jon!
    I have always heard that console certification requirements are ridiculous. It’s nice to see some examples of how! Your emphasis on developer time spent makes a lot of sense to me and really illuminates the scale of the problem.

  2. Jonathan Blow
    Posted July 24, 2012 at 5:46 pm | Permalink

    I actually did not even cover the most ridiculous examples, because those tend to be specific to each console. Here I just stuck to a couple of examples that are common across systems.

  3. Posted July 24, 2012 at 5:49 pm | Permalink

    Thanks for making this available for all to read. I for one found it very informative and enjoyable, and it gives me some insight into things that I never knew about before, like the autosave icon or the title-safe region. I had no idea that the title-safe region limitation even existed, and suddenly after reading this it’s become clear to me why such issues exist. I always thought it was an error on the developer’s side of things, and that they had just done a mediocre job with the UI on certain games, but now I can assign blame where blame is due.

    To be perfectly honest I have no idea why the system calibration option does not actually exist; if independent developers are able to set aside the time to implement it in their games, I don’t see why these large corporations can’t get a couple of people working on it and implement it in an OS patch.

    Anyway, thanks again for posting this.

    Best Regards,
    -RegretZero, Pixxelcraft Website Manager

  4. Posted July 24, 2012 at 6:01 pm | Permalink

    You’ve made some really good points and examples and have got me thinking more about the way these “problems” start rather than just the rate at which they are corrected.
    Is it simply a lack of forethought in the design phase? Or maybe a general lack of flexibility in the large systems that have been created? Especially in the case of Microsoft, it seems really weird that they got it so wrong despite years of experience developing enterprise/business software. I hesitate to blame it entirely on just bureaucracy. I feel like you’ve nailed it when you say the “small-publisher games” department of these companies really isn’t their primary focus.

    • Jonathan Blow
      Posted July 24, 2012 at 6:09 pm | Permalink

      These certification rules apply to *all* games, not just independent games. So it has nothing to do with game size or importance to the console.

      It’s just that they feel like they can make the developers do this work and so they don’t have to do it themselves. What they don’t realize (or maybe don’t care about) is the immense cost, of time, money and game quality, incurred by making all developers do all this stuff all the time.

  5. Posted July 24, 2012 at 6:20 pm | Permalink

    I doubt very much that Microsoft or Sony are going to catch up to Steam and iOS in this field, at least not for a very long time. Nintendo has a good shot as they are a very online system; I don’t know exactly what they are doing with the Wii U in this regard but if they decide to implement easy indie game development and certification for the Wii U they could be very successful indeed. Until these companies do this, though, I think what we need to be keeping an eye on is the Ouya console, as this seems to be the actual Steam incarnation on consoles, rather than slight tendencies toward tried-and-true methods used on these easy-access platforms.

    As always I really appreciate your thoughts Jon, and Braid remains in my top 5 games of all time – although I’m not sure you’re ever going to beat Kingdom Hearts; nostalgia is a powerful thing ;D

  6. Posted July 24, 2012 at 6:35 pm | Permalink

    Wow, I’d never ever thought about this when it came to producing.

  7. Keith
    Posted July 24, 2012 at 6:49 pm | Permalink

    Interesting points. I’m not sure that iOS is a fair comparison, though. People likely have different quality expectations for $15 games than for $1 games so it’s not obvious to me that Apple’s less stringent approach would be equally appropriate in a console context. Also, targeting TVs made by a number of manufacturers seems to add some inevitable complexity compared to Apple’s vertically integrated end-to-end approach. That’s not to diminish your points about whether the burden for individual developers could be reasonably shifted to the console manufacturer in some cases, of course.

  8. Posted July 24, 2012 at 7:24 pm | Permalink

    Hi Jon. Great article. I couldn’t agree more with everything you’ve said. Having gone through this certification process recently, it’s painful. And while I cannot fault the help and support the team at Microsoft provided in helping me get my game to market… there is so much unnecessary bureaucracy. Like many other indie devs, I started with an end-to-end complete title, and it was over a year and many hoop-jumps later that we released. It’s just too much, and so much of it can be avoided, as you say. For anyone who’s not been through this, the list of stuff he hasn’t mentioned is almost endless: is every menu item in the correct order, are they all worded exactly as in every other game, can the user pause during every split second of the game? Lots of forced cookie-cutter rules. It was painful; in all honesty I think my last experience would push me to Steam for my next game release.

  9. Marco Mustapic
    Posted July 24, 2012 at 7:25 pm | Permalink

    I’m not familiar with the rest of the rules, but the ones you mention seem to be reducing the testing Microsoft does at the expense of dev work. If, for example, Microsoft did implement a decent save game API, they would still have to test how the game responds when the console is turned off during saves, and if the game uses the API everywhere. By requiring just a message, they don’t need to test for anything but that text, and if the game corrupts the savegame data, hey, it’s the player’s fault, he saw the warning.

    The second case seems to be similar. Instead of them testing if everything looks more or less ok with different calibration configurations, they ask the dev to target the lowest common denominator so Microsoft has to check each screen only once.

    As you say, this approach is a dead end.

    • Jonathan Blow
      Posted July 24, 2012 at 10:53 pm | Permalink

      But it’s not like that, because they still already test what happens when you yank a memory card out while saving / loading, or turn off the console! The requirement is that the game does not crash when dealing with the corrupted save, it just discards it and reports an error. On the other hand, if it’s in the API, I claim they don’t need to test it at all once they know it works.

      • Andrew Dohr
        Posted July 25, 2012 at 1:12 am | Permalink

        Devil’s advocate, because there don’t seem to be any …

        Others will claim the ‘Microsoft Save API’ is too restrictive and doesn’t allow them to implement save data in the style best suited to their game design. This is less creative freedom for developers.

        Title screen-safety is a strange thing to complain about. First parties are trying their best to guarantee YOUR game is playable on all of their customers’ televisions. You don’t have to deal with the customer service issues or issuing refunds – they do, and CS is not cheap. Having a ‘lowest common denominator’ makes it fiscally possible to have a modern console ecosystem as it exists today; no one’s gonna buy the shit if the people selling it can’t guarantee it’ll work 100% of the time.

        An OS-based screen calibration tool simply wouldn’t work unless you forced all titles to render at the same (or a group of ‘sanctioned’) resolution(s); considering games like Microsoft’s own Halo franchise do some crazy rendering tricks to achieve both excellent looks and high framerate, a one-size-fits-all solution is more creatively restrictive than your suggestion.

        As much as they’d like to believe, XBLA is not designed with agile indies in mind, and all the square-peg-in-the-round-hole effort in the world will not change that. If you’re looking for that level of flexibility while still remaining on console, XBLIG was designed around you.

        • Jonathan Blow
          Posted July 25, 2012 at 8:39 am | Permalink

          I’m sorry, but all your attempted points are wrong!

          To the first point, nobody is going to complain, because we are just talking about a robust write operation. It is the same as what people would currently be doing, except it is better in every way. Nobody is going to complain. If someone wants to do something different, they just can. You aren’t taking the rest of the API away from them.

          About screen safety, it is the same issue of a “lowest common denominator” either way. The question is just how it’s implemented. In one case, every developer has to do all the work separately; in the other case, the platform holder does something once. People aren’t going to return consoles for title-safe region problems, especially if you run the screen calibration at startup the first time so that you know it’s right!

          This point (“… unless you forced all titles to render at the same resolutions”) is just nonsense. I almost don’t know what to say. It is straightforward to scale an arbitrary resolution down by an arbitrary amount. I don’t know why you think this is an issue, but I surmise you are not a programmer or technical person.

          To the last point, this letter is not about XBLA. It is about all certification processes for all sizes of games across all the major consoles. And no, XBLIG was not designed around me or anyone else who designs reasonably ambitious games written in C++ and who wants to sell them to a reasonably-sized audience.

  10. Jonathan Gerlach
    Posted July 24, 2012 at 8:11 pm | Permalink

    It appears to me that Nintendo and Microsoft have a wall of fallacious reasoning built up to keep out unskilled developers. They reason that if they make the barrier to entry high, then they only have to deal with serious developers. The problem with this reasoning is that it keeps out almost all of the creative innovation, and (most of) the developers who are willing and able to put up with all these costs and guidelines also choose a safe path of little or no creative innovation.

  11. Dave
    Posted July 24, 2012 at 8:43 pm | Permalink

    Super interesting read for a young game dev who has no experience in console development. Good insight, and I’m loving the two posts in two days!

  12. Posted July 25, 2012 at 2:47 am | Permalink

    The whole market is moving too fast for those dinosaurs to keep up with competitors. Where Apple and others have improved the user experience, I hope to see the Ouya tackle TV-based console gaming without the whole baggage of cert, closed platforms, and super-strict rules that limit innovation, both business-wise and gaming-wise.

  13. Luke
    Posted July 25, 2012 at 3:10 am | Permalink

    Nice post – agree with your comments on the title-safe requirement especially – a system-level calibration step would be an enormous improvement for everybody.

  14. Posted July 25, 2012 at 4:38 am | Permalink

    I cannot say I understand how Valve got in alongside Apple.
    I assume he is not talking about their games, but instead Steam? Not that their games ever seemed to shake up the status quo much, even if they are all terrific games.
    Steam is decent, but the Steam client does not have a great GUI and is fundamentally feature-incomplete (and far too old for this to be acceptable). For example, how do you have a download client that does not allow you to control the speed you download at?

    Because of huge issues like this that Valve could patch in a week if they really cared, I always considered them as people who cared not a whit about user experience.

    Anyone know what Mr. Blow is talking about?

    • Justin Burgess
      Posted July 25, 2012 at 7:02 am | Permalink

      Being able to set throttles for Steam is something I’d like to have too, but as to your other points, I am not sure what you mean by a bad GUI (I’m not a graphics artist or designer) or by general oldness. Steam is updated regularly and has had large refreshes in interface, functionality, and back-end stuff.

      They seem to patch and update their service aggressively. So, not to be too accusatory, but I am unsure what you are talking about.

      • Posted July 25, 2012 at 7:19 pm | Permalink

        The Steam client is updated regularly, but I have never seen a GUI change in any of the sections I use, and no features I have ever used have been added.

        The detail view is horrible. Half the detail section is filled with the same default “Game Info”, which is in fact just a paragraph of boilerplate and a link to their coming page.

        And then you have the news, which mentions how there was a sale 5 years ago, and before that an update.

        • Posted July 25, 2012 at 7:20 pm | Permalink

          *Community page

        • Justin Burgess
          Posted July 25, 2012 at 11:57 pm | Permalink

          I don’t know exactly why I feel compelled to talk about this, but I believe it was a couple of years ago that the entire Steam platform went through a relatively large refresh and added the community section and the notion of groups. As I recall, it did change the UI.

          And since then they have added cloud saves, the Steam Workshop, and screenshot/video upload features. Also there’s the promising-sounding Greenlight feature coming up soon as well. Again, I think Valve is fairly aggressive at adding features and updating Steam.

        • Eduard Nathan Igma
          Posted September 12, 2013 at 1:38 pm | Permalink

          You shouldn’t make changes for the sake of changes. While you may not like the UI that Steam uses, the changes you propose would decrease the usability of the client for me.

          The ‘boilerplate’ is far more useful to me than the details you want first, and in general, the stability of the Steam UI makes it far more usable than rapidly-changing UIs like YouTube’s, where you need to relearn features every other month… just when you’d gotten used to the changes.

    • BG
      Posted July 25, 2012 at 7:47 am | Permalink

      He is talking about the certification process for games being rolled out on each system (Steam/iOS/Xbox/PS3/Wii)… Apparently you missed the whole point of the article/letter.

  15. Posted July 25, 2012 at 5:19 am | Permalink

    Oh my, keep talking like this and developers will start taking their games elsewhere. They might even reach out to players themselves and sell games from their own websites – the game design equivalent of the blogger’s eBook.

    Self publishing directly to players? Craziness! Steam with its low barrier to entry was disruptive enough – now how will console manufacturers make their millions*

    You rebel you.

    *I say millions instead of billions or some other “illions” because I assume profit is quite low when the cost of maintaining such a bloated enterprise takes a serious chunk out of any incoming revenue. I’ve done work for Microsoft. They don’t exactly think “lean”.

  16. Posted July 25, 2012 at 7:13 am | Permalink

    Very interesting read. I’ve always heard that consoles have a high barrier to entry, and I’ve never completely understood why. It makes a lot more sense now.

    All this talk about Steam reminded me about Steam Greenlight, which, if all goes well, will lower the barriers to entry even further, as Steam has realized that their certification process, which is far easier than any console certification process, is still fundamentally flawed. Here’s to hoping that consoles change before the companies die out completely, but in the meantime I’ll likely just stick to PC releases for my games.

  17. Posted July 25, 2012 at 7:27 am | Permalink

    Also, here’s a reuploaded GiantBomb E3 2011 podcast with Jon that touched on this topic (there are skip points in the description): http://www.youtube.com/watch?v=1YjBdVDf8jk

  18. Posted July 25, 2012 at 7:39 am | Permalink

    Regarding the save warning issue, I think I’d still be rather upset if the latest save state got corrupted. Even if it fell back to the N-1th state, that could be devastating if I’d saved just prior to facing an especially difficult part of the game, and then saved immediately after passing it, and lost the progress. So all things considered, I appreciate the warning, and even if a more robust system such as you proposed were implemented, I would still want it.

    But that’s not to say that we can’t improve on things another way. What about a system that queues the shutdown command, and executes it only after all pending writes have been completed and verified? As long as there isn’t a hardware power button and the only shutdown possible is performed via software, that would work well.

    In any case, I think you raise some good points about wasting developer hours by forcing developers to take their focus away from core gameplay implementation and deal with these sorts of details. To the extent that we can extend the framework provided by the vendor to reduce and hopefully eliminate such issues, I say it should be done. Even if Microsoft or Sony don’t make these things a part of their API, as I would hope they would, the developer community could create libraries that extend the base API and accomplish these goals, so that developers don’t have to waste their time re-implementing the same requirements, and can focus on making the game itself.

    I suspect a lot of studios probably do have some in house solutions that they at least share internally already, while others may license or sell this work to other developers. But I’d really like to see a community based effort to bring about a Free framework that does this stuff, and makes it available to all developers. Games will only get better as a result.

    • Jonathan Blow
      Posted July 25, 2012 at 8:33 am | Permalink

      In this case I am talking largely about autosaves, in which case you don’t know exactly when it’s saving anyway. What should happen for all games is that we establish baseline behavior where people don’t worry about whether their progress is saved, and just don’t care, because they trust that the game has done it.

      If you actually go to a menu and pick a manual save, I have no problem with putting up a “Saving…” message until you know saving is complete. That is kind of a dinosaur way to do things, though, except in special cases.

  19. Sami
    Posted July 25, 2012 at 8:20 am | Permalink

    That was interesting. I don’t plan to ever develop primarily for consoles, but it was helpful nonetheless. I’ve never really thought about how compulsory this sort of stuff is.

  20. Spence
    Posted July 25, 2012 at 11:32 am | Permalink

    Some of the biggest criticisms I hear about iOS games are that the controls often don’t do what they should. If iOS had a tighter certification process, do you think this sort of thing would be improved?

    • Jonathan Blow
      Posted July 25, 2012 at 11:52 am | Permalink

      I think it depends on what the specific complaints are. In general, certification does not cover quality of controls, so it’s hard to see how certification would fix that kind of issue.

      But my impression about iOS controls is that they are generally fine if the game is designed to be a touchscreen game, and they generally are terrible if it’s a joypad game ported to a touchscreen (2D platformers, etc). But that isn’t really a cert kind of issue, it is more just a game design kind of issue.

  21. Julien Koenen
    Posted July 25, 2012 at 1:17 pm | Permalink

    Maybe there are other reasons for these certification requirements / hurdles. For one, they hit the small developers much harder than the big ones (just because the 3 days required in the example are a tiny part of the complete time budget of a ‘big’ game). That increases the cost of entry to the console market, which helps the big companies and therefore (allegedly?) Sony/MS. In this case the negative outcome of the quality measure you describe here would be on purpose.

    • Jonathan Blow
      Posted July 25, 2012 at 2:03 pm | Permalink

      This is true in theory but I don’t think it’s the reason Sony or Microsoft do this. Maybe Nintendo, though — they seem to have weird mental problems about independent developers.

      • Anonymous
        Posted July 25, 2012 at 6:34 pm | Permalink

        I’ve met Nintendo in person, and he is a strange fellow indeed. Nintendo doesn’t know what Xbox Live is, or what cross-game chat is. Nintendo is fiercely protective of their platform’s image, to the extent that they shun microtransaction-based games because they fear developers might somehow trick users into spending real-world currencies. Nintendo is also paranoid in the extreme with regard to what third-party titles do to their console, to the extent that they can’t conceive of an online game where each patch does not go through an extended lot-check process. Yup – lot-check – Nintendo still uses a manufacturing metaphor when thinking about software! To me, this explains a great deal.

  22. Chris
    Posted July 25, 2012 at 5:17 pm | Permalink

    Generally, I agree that robustness in code can eliminate the need for stupid requirements like the ones you mentioned. I also agree that these companies should improve their APIs as much as possible to facilitate the work required by the developers making games for their consoles. That’s the main point of an API, right — to make code more accessible and easier to use?

    Anyways – I’ve only played the PC version of Braid, but I’m pretty sure the XBLA version doesn’t have an animated icon saving screen. Were you able to get Braid certified without passing all of the reqs?

    • Jonathan Blow
      Posted July 25, 2012 at 8:06 pm | Permalink

      I was able to convince them to skip that requirement because I implemented a robust save of my own.

      • Posted July 26, 2012 at 11:06 am | Permalink

        Glad I read all the way down here! This comment was the most informative thing in an already highly-informative article. It took me back to the first time I played Braid on my Xbox and how the game “just started”… earlier than I expected. The opening experience was a lot more surreal and impactful because of the suddenness of its onset. Of course a lot of that had to do with the way the opening screen presents itself (with Tim in the dark), but I imagine the lack of the save warning helped a lot too.

        Now that I have this context I’m curious to know which was the effect and which the cause. I.e. did you put in the extra effort to skip MS’s save screen requirement because you wanted that suddenness in Braid’s opening? Going even further, did you implement your more robust save system in order to skip MS’s save screen requirement, in order to achieve the sudden opening?

        Ironically, the presence of the save screen in every other game made Braid’s opening even more sudden than it would have been if MS had no save screen requirement to begin with.

        • Jonathan Blow
          Posted July 26, 2012 at 11:22 am | Permalink

          Yes, I wanted to get rid of a lot of the ugliness and bureaucracy that happens when so many games start up.

          Note also that the game did not have a screen where it tells you to press A to start, if you only have one controller active (I got away with this because GTA4 had already shipped doing the same thing) and that it doesn’t ask you where you want to save if you only have one save device. (Though maybe it always asks now, since there are cloud saves and thus almost always two save devices).

          These are all technically violations of the requirements that I was fortunate enough to get waived because I did the programming work to get around them.

          But it would be better for the platform holders to do this stuff once than for everyone to have to do it all the time. (Well, they wouldn’t be doing exactly these things, but rather, related things to make the system more streamlined.)
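          For readers wondering what a “robust save” means concretely: the standard trick is an atomic write. Write the new save to a temporary file, flush it to disk, then rename it over the old one, so an interruption leaves either the old save or the new one on disk, never a corrupt half-write. A minimal sketch of the general technique (an illustration, not Braid’s actual code):

```python
import os
import tempfile

def atomic_save(path, data: bytes):
    """Write save data so that a power cut leaves either the old file
    or the new file intact, never a half-written one.
    (Sketch of the general technique, not Braid's implementation.)"""
    dir_name = os.path.dirname(os.path.abspath(path))
    # Write the new save to a temporary file in the same directory...
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # force the bytes down to the disk
        os.replace(tmp_path, path)  # ...then atomically swap it in
    except Exception:
        os.remove(tmp_path)         # on failure, the old save survives
        raise
```

          Because the rename is atomic on the filesystem, there is no window where the only copy of the player’s progress is partially written, which removes the failure mode the “do not turn off your console” warning exists to excuse.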

          • Posted July 26, 2012 at 1:26 pm | Permalink

            Kudos on getting those requirements waived / adjusted – it definitely had a “magic moment” effect on me as a gamer.

            I spent about a year in the Xbox Platform team and at times found myself in the set of people making the call on developer requests to waive cert requirements. The discussion was always primarily about user experience robustness. In general, product / UX quality was valued much more at Xbox than at any other group I’d seen at MS.

            The console industry’s DNA was forged in the memory that Nintendo’s invention of the certification process had saved it from collapse in the early 80s. Agreed that heavy robustness certification is irrelevant now thanks to ubiquitous connectivity (thus easy patching), crowd-voting / social media, and a self-publishing model that increases content supply by orders of magnitude, thus lowering prices and expectations. I agree it’s likely that next-gen consoles will undercorrect for this shift, but MS and Nintendo have both demonstrated the ability to make bold (at the time) decisions and radical shifts from one console generation to the next. Of the three console makers, I’d bet that MS will go the furthest, given its experience with the more open model on Windows Phone (and Windows), its platform-driven mentality as a company and its relative youth in the console industry. Sony’s and Nintendo’s best bet would be to just adopt Steam itself as their primary content system and ride the wave more easily… but something tells me they won’t do that.

          • bochelord
            Posted August 3, 2012 at 6:18 am | Permalink

            You must also be very influential with MS, since I worked in localization for several major games (Mass Effect, Forza, etc…) and we had a hard time getting some TRC violations waived, even being a first-party dev sometimes…
            I couldn’t agree more that the 3 big ones are resting on their laurels when it comes to digital entertainment…

        • justin
          Posted July 26, 2012 at 2:01 pm | Permalink

          I loved that as well! I was expecting an opening animation and a menu, but I just got dropped onto the iconic bridge in front of the sunrise/burning city, and I was this strange character in the dark! I didn’t even wait 10 seconds and I was playing already. Without a warning I was playing; it was the game, no menu or animation or cutscene!

          In that very moment at the beginning, just seconds after leaving the console hub, I was Tim, exiting the bridge and going into the house, and I said “This game is fucking serious!” Best game ever, really.

          I hope The Witness has the most epic opening ever. A dark hallway? That could be potentially cool; it just depends on what’s behind the door at the end of the hallway! I hope The Witness is as fast as or faster than Braid to load and begin, though being 3D it will be difficult! I wish the whole team good luck! I hope you can get away with a lot more than you could with Braid!

  23. a0me
    Posted July 25, 2012 at 9:18 pm | Permalink

    While I agree with you that TRC requirements such as the ones you mention should be handled on the platform’s end, a lot of requirements are a necessity: things like
    – game stability (the game shouldn’t crash)
    – UI consistency
    – security (the game shouldn’t try to harvest your personal data)
    – legal (the game is not infringing on any rights, properly credits use of third party technology, etc.)
    – ESRB ratings compliance
    … and more, are part of the current cert process at MSFT/SCE/Nintendo.

    While the cert process could certainly be a lot more streamlined, getting rid of it entirely seems like a very bad idea.

    • Jonathan Blow
      Posted July 26, 2012 at 11:29 am | Permalink

      Why is ESRB ratings compliance a necessity? Steam doesn’t require it. Apple doesn’t require it.

      Why is UI consistency a necessity, when the person who is making the UI rules is a far worse game designer than a lot of the people trying to make games for the platform? Doesn’t this prevent improvements and stifle innovation?

      Certification processes don’t include any legal checks; how could they? There is no way to certify that. That is something that goes into a contract and is standard pretty much everywhere, so I don’t see it entering into this issue.

      I agree that a game should not crash, but in my experience with testing on these platforms, the testers are not spending most of their time trying to make the game crash; they are spending most of their time going through a bunch of mechanical steps to test a bunch of standard TCR things, most of which are unnecessary.

      • a0me
        Posted July 26, 2012 at 7:53 pm | Permalink

        Compliance with ratings: because consumers want to be informed about the age/cultural appropriateness of the game before they buy.
        (Steam is the only platform not to have a rating system in place)

        UI consistency: because players expect to be able to pause the game with the START button and cancel with the B button, and expect the Memory Unit to be consistently called “Memory Unit” and not “SD Card.” Being user friendly shouldn’t stifle innovation.

        Legal checks: they do exist, at least in the form of warnings in cert reports.

        Game stability: hire a new QA team. A crash or progress-stopping bug is almost a guarantee of a certification fail.

  24. SawTheFinalChapter
    Posted July 26, 2012 at 12:14 am | Permalink

    You know what — from an aesthetic standpoint — is nearly as annoying as not using 34-36% of a screen? Using white text on a black background. Seriously, Blow, fix this!

    • Jonathan Blow
      Posted July 26, 2012 at 11:25 am | Permalink

      Cool story, bro.

      • justin
        Posted July 26, 2012 at 1:16 pm | Permalink

        Don’t… Don’t do that please… You are so much better than that…
        Yes, the guy is an imbecile but please don’t use internet jokes…
        …Please no meme talk.

        Much better is to tell him to go fuck himself or that white on black is the best combination of them all or whatever.

      • Posted July 27, 2012 at 6:17 pm | Permalink

        Jonathan Blow using internet memes… well now I just don’t know WHAT to think!

        • Pritchard
          Posted August 14, 2012 at 5:32 am | Permalink

          jonathan /b/low

  25. Andreas
    Posted July 26, 2012 at 7:52 am | Permalink

    Hi Jonathan,

    Great letter, thanks for bringing this issue to the public. The certification process used by all three console manufacturers is a bureaucratic nightmare that makes the Soviet Union seem like a well-oiled machine in comparison.

    I personally work with localization, and the translation requirements are just as insane as some of the technical ones. For instance, how do you operate a touch screen? By *touching* it, right? No, not if you’re Swedish and you play on the PS Vita. Then you *caress* it. If you’re Danish and want to play a Quick Game (this is an MS term) on Xbox LIVE, you have to look for the equivalent of “Quick Profile”. It makes no sense, but call it something else and MS will fail the game. Or how about this? Sony and MS couldn’t agree on how to translate the word “button” into Dutch. There’s at least one dev studio out there right now (that I know of) that is busy getting rid of every instance of that word to avoid having to create separate strings for the two different platforms. They can’t just leave it out of the Dutch translations, because that would mean that Dutch players lose “vital” information – the better option is, obviously, that EVERYONE loses out on this information.

    Absolute madness, I tell you.

  26. Posted July 26, 2012 at 8:25 am | Permalink

    Great article, thanks for sharing. Couldn’t agree more. I have been an interface designer in games for over 11 years, and the number of frustrating certification bugs I have encountered is simply ridiculous.
    Everything from how the controller image can be displayed (Sony is especially assy about it), to how the buttons should look and what colours they should be, to how long a static screen can stay up, to where and how loading screens appear, etc. etc. It’s crazy. The worst and most frustrating part is when games developed and released by Microsoft/Sony/Nintendo break these rules as if they never existed.

    Sometimes having rules is important though; it creates a language and unity for the users. But it would be nice if Microsoft/Sony/Nintendo at least once in a while worked together to unify these rules. For example, why the hell is the safe zone on PS3 and 360 different for 720p?!

  27. Don Reba
    Posted July 27, 2012 at 4:59 am | Permalink

    Hopefully, as Microsoft expands its influence on gaming with the store certification for Metro games, it will learn from its mistakes. With the amount of coverage it got, your article might aid in this.

  28. Brian Bell
    Posted July 27, 2012 at 5:07 pm | Permalink

    I would actually be more than willing to do title-safe region compensation code if the platform would give me reliable information (from a standard calibration screen, device profile, etc.) to allow me to leverage the full safe-display region of the actual display in use.

    Building screen-edge/corner-relative UI (and/or UI scaling) is something almost all of us do anyway when display resolution is a variable we need to support. Getting the actual display region, offsetting from there, and not punishing people with decent TVs is a win for everybody.
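    As a sketch of what that could look like: given a safe-area rectangle reported by the platform (a hypothetical API — no console exposes this as a standard calibration result today, and the function and parameter names here are invented for illustration), anchoring HUD elements against it is only a few lines:

```python
def anchor_to_safe_area(safe_x, safe_y, safe_w, safe_h,
                        elem_w, elem_h, corner, margin=0):
    """Place a UI element relative to the display's reported safe area,
    so it hugs the edge on a good TV instead of floating a worst-case
    10% inward. `corner` is one of "tl", "tr", "bl", "br".
    (Hypothetical helper; consoles do not actually report this rect.)"""
    x = safe_x + margin if "l" in corner else safe_x + safe_w - elem_w - margin
    y = safe_y + margin if "t" in corner else safe_y + safe_h - elem_h - margin
    return x, y

# A display that reports its whole 1280x720 surface as safe lets a
# 100x50 HUD element sit flush in the bottom-right corner:
pos = anchor_to_safe_area(0, 0, 1280, 720, 100, 50, "br")
```

    On an overscanning TV the platform would report a smaller rectangle (say, a 5% inset) and the same code would pull the HUD inward automatically, which is exactly the “don’t punish people with decent TVs” behavior the comment asks for.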

  29. Posted July 27, 2012 at 6:15 pm | Permalink

    Thanks this was a fun thing to read. As if I wasn’t completely scared away from consoles already!

  30. Paden
    Posted July 28, 2012 at 1:41 am | Permalink

    I’m interested in what you think of the Ouya (considering what has been released about it, at least). I’m pretty fed up with the major consoles at this point, but a good number of the games I play are played socially in the living room on the couch, and it’s hard to imagine getting a similar experience out of a PC.

  31. Posted July 31, 2012 at 12:23 pm | Permalink

    Hey hey, Jonathan! Glad to see that all is going awesomely with The Witness, and I really cannot wait to finally get my teeth into it :D.

    I have a question: Jon, have you ever considered doing a lecture/assembly about the value of more interesting, thought-provoking games for an audience of children? Seeing as they are essentially the ‘next generation’ of gamers, it would be very interesting to see what is on their minds amidst the delusion that Call of Duty and FIFA are the epitome of video games. You’re quite possibly the only person who would ever give them the opportunity to see what is outside the box (because, let’s face it, the aforementioned titles are never going to do that for them), and I reckon that would be really valuable to them.

    Is this something that you have ever considered?

    Thanks!

    Gary

    xx

  32. Posted August 3, 2012 at 11:15 am | Permalink

    A lot of the problems on PS3 specifically are because the PS3 OS is kind of non-robust. It’s originally a quick port of the PSP OS, which in turn was written quickly around the idea of everything coming directly from optical media, with saves going to a FAT32-formatted memory stick. It’s not designed to take advantage of a lot of the hardware in the system itself (its memory model isn’t particularly different from what the Macintosh had back in the late 80s), the filesystem is incredibly fragile (if you turn the system off during a save you run a very real risk of corrupting your ENTIRE HARD DRIVE), and the APIs are full of bugs which they can’t fix properly for reasons of backwards compatibility (because some games which pass TRC might later be broken by updates, and there’s no requirement that developers release patches for already-passed games).

    Pretty much all of the system services are implemented in as naive a way as possible, with the idea that they can always provide a better way later – which they never do. Often these naive approaches also preclude the developer from doing anything useful as a workaround; for example, the file system is designed such that any seek operation has to rewind to the beginning of the file and then read through it while discarding all the data you don’t want. Basically, any I/O that doesn’t involve linearly reading through a file from beginning to end becomes an O(n^2) operation.
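    That access pattern is easy to model. The toy class below (a model of the behavior described, not the real PS3 API) counts the bytes the OS would have to read if every seek were really “rewind, then read forward and discard”:

```python
class RewindingFile:
    """Toy model of a file API where seek() can only rewind to the
    start and read forward, discarding everything before the target.
    (Illustration of the access pattern; not the actual PS3 API.)"""
    def __init__(self, size):
        self.size = size
        self.bytes_read = 0   # total work the 'OS' has done

    def seek(self, offset):
        # Rewind to 0, then read (and discard) `offset` bytes.
        self.bytes_read += offset

# Read a 1000-byte file backwards, one 10-byte chunk at a time:
f = RewindingFile(1000)
for pos in range(990, -1, -10):
    f.seek(pos)
# 100 seeks force 49,500 bytes of reading -- roughly n^2/2 work for a
# file of size n, where a sane seek would have cost nearly nothing.
```

    Any non-linear pattern — reading a footer, hopping between resource table entries, streaming backwards — pays this quadratic tax, which is why the complaint is about the design rather than any particular game’s usage.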

    The virtual memory system is built on top of that, so it performs so poorly as to be useless. It also does not allow you to override the page fault handler, so you can’t use it to do anything USEFUL, like remapping physical memory (which is very handy if your memory space gets fragmented).

    The C++ toolchain only begrudgingly supports exceptions, and buggily so, because “nobody uses exceptions” (which is only the case because the toolchain support is so poor). Often you have to rewrite entire codebases when porting to PS3 to make it not use exceptions, despite exception handling having been a part of the language since the 80s and generally being significantly faster than checked error returns (when used properly).

    Once upon a time, the Sony tools folks were developing PSGL, basically a version of OpenGL ES which ran on top of the native low-level graphics library (gcm). The intent was to make it easier to program graphics while also improving performance in a central optimization point (because they had skilled people who could write SPU code for doing matrix math and culling and so on). For a while they were even saying, quite explicitly, to use PSGL for all new graphics code, because it was the future for the PS3. A couple years later they quietly dropped it and any support requests for it were responded with “PSGL is deprecated, just use gcm.”

    There is a robust save API, but it’s built on top of a house of cards (so it still doesn’t do anything to prevent the hard disk corruption issue) and uses a ridiculously restrictive “dinosaur model” (as Jonathan puts it) which doesn’t allow for automatic background saves, so very few games use it (and the ones which do always generate groans of “Oh no, not THIS”).

    The title-safe area stuff is further compounded by REALLY terrible support for resolution selection, and inconsistencies about legacy things like 4:3 displays and 480i vs 480p. But developers have to support it, even though the APIs for supporting it are TERRIBLE. It doesn’t help that the PS3 still ships with a composite cable and nothing else; so much for it being built for HD and nothing else, right? I wonder how many PS3s out there are hooked up to an HDTV via composite cable.

    Pretty much all of the PS3 problems boil down to bad cases of not-invented-here syndrome, where the Sony engineers decide not to see how other people did it anyway because it’s just important to get something out the door (regardless of how cumbersome and clunky it is, and regardless of how bad the design is from a high level).
