I always thought that Shigeru Miyamoto sat down and made a very detailed "plan" before anyone ever started programming or making assets to actually construct the game, and that this was how it was done for video games in general.
But then I realized that two of my favourite games in the whole wide world, DOOM (1993) and Ocarina of Time (1998), had extremely "sketchy" beta/alpha versions.
DOOM was an extremely different game in every pre-release version up to the final (first shareware) version; I've played them myself.
Ocarina's alpha/beta footage, and leaked ROMs, show extreme differences between how areas looked then and how they look in the final version of the game.
How is this possible? Did they not follow the original ideas? It seems like such a waste to create a whole 3D area that is just some sort of "placeholder" and then later "flesh it out". I mean, these aren't freeware hobbyist games made by one guy in his bedroom, working on it for 10 years and released (if ever) "when it's done".
I don't understand how it's possible, especially not for such an intricate masterpiece as Ocarina. The in-development 3D environments look nothing like the finished ones, so they clearly did not have very clear instructions when they first made them. How can that be? Were these specific games extreme exceptions?
Answer
Iterative development is the norm in the video game industry
And in most software development in general, with the exception of cases where you start out from absolutely unambiguous requirements, which isn't typical in the wider industry.
But you seem to have something backwards here: you appear to be under the impression that this means the games weren't carefully designed. To be clear, the situation where one or a few people sit down, work out an entire product on paper, and the rest of the workers then simply build that thing until it's ready basically never happens in real life, not in software anyway. This goes doubly for anything involving new technology or interaction patterns, which was the case for early 3D games: 3D was still new-ish, and it wasn't at all clear how a 3D game should be designed to be the most intuitive and fun.
If a designer has an idea for what's fun, they'll certainly have some reasoning behind it, and good designers will come up with more good ideas than bad ones (or at least have a better eye for which is which), but no responsible person would sign off on a design that hasn't actually been tested. Maybe this thing that sounded really fun on paper turns out to be annoying when you have to do it twice because you died in the middle. Maybe what the designer originally intended would theoretically work out great, but turns out not to be technologically feasible.
Then there is the work-efficiency part of it: nobody just sits down and designs a whole game in 5 minutes. While, for instance, someone is working out the level design for a particular area, work can still happen on implementing and improving the UI, the underlying 3D technology, or the internal tools used within the project. So it doesn't make sense to stall all of that until every single design question has been worked out.
And finally, designers are often spontaneous people prone to inspiration. Very often, when they see what can be done, or see how the implementation of their design actually looks and plays in practice, they'll have ideas on how to improve it. Sheer imagination doesn't always (or even often) yield the same results that seeing and experiencing something in reality does.
So for all of these reasons, it is typical for games (and other software) to be developed in stages: you start off with a general idea that lets you scope out the project and begin on the technology basics, then move on to refine, redesign, and recreate where needed. The same thing applies at smaller scales to parts of the project (like levels, UI, individual quests, etc.).