Hey, I'm back. I'll have some news about Schizoid soon, and I have the energy to blog again.
A thing about drag. I forgot one of the most important ways to mitigate drag, which is: Don't Start From Scratch. Almost nobody does anymore - one of the first questions you ask yourself when beginning a game project is "What engine am I going to use?" The main reason to do this is that it saves you a lot of work. But another reason it helps is that you can schedule more accurately - to keep using the snowball metaphor, the snowball you're rolling uphill is already nice and big: progress starts slow and doesn't get that much slower. In fact, a question I was asked at my IGDA talk went along these lines: "We just do content drops into an existing engine - does this really apply to us?" I fumbled the question at the talk, but now I'd mostly agree: yes, there will always be at least a little increase in drag as a project evolves, but it may be so little as to be immaterial.
And I said I was going to talk about measuring velocity. But it's nontrivial, and I don't really have a good way to do it. I've been looking at this old progress graph:
And, if you're at any given point on that graph, what do you use for velocity? The ideal is to find a velocity that will accurately predict when you'll be at "zero bugs." Here are some options:
- Use the start of the project as your starting point and your current point as the end point: this always gives you an overly optimistic estimate.
- Use your instantaneous velocity. This gives you a wildly fluctuating estimate. One day, you're going to finish tomorrow. The next day (after a slip), you're never going to finish.
- Use some amount of history. But how much? Something that would have been fairly good for us, though still too optimistic, would be to use the last half of the project. (There's a quick sketch comparing these three just below.)
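Here's a minimal sketch of those three options, just to make them concrete - the daily bug counts are invented, not our real data, and this isn't something we actually ran at the time:

```python
# A rough comparison of the three estimators, assuming one open-bug count per
# day. The history below is made up for illustration; plug in your own.

def days_to_zero(bugs, window_days):
    """Project days until zero bugs from the average closing rate over
    the last `window_days` days."""
    window = bugs[-(window_days + 1):]      # need N+1 samples to get N deltas
    closed = window[0] - window[-1]         # net bugs closed over the window
    days = len(window) - 1
    rate = closed / days                    # bugs closed per day
    if rate <= 0:
        return float("inf")                 # not converging at this pace
    return bugs[-1] / rate

# Open bugs at the end of each day (invented numbers).
bugs = [400, 390, 370, 355, 350, 352, 340, 338, 330, 329, 325, 324]

print("whole project :", days_to_zero(bugs, len(bugs) - 1))        # too optimistic
print("yesterday only:", days_to_zero(bugs, 1))                    # wildly noisy
print("last half     :", days_to_zero(bugs, (len(bugs) - 1) // 2))
```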
I tried a bunch of stuff - moving averages, fitting curves - and nothing hit. But my math skills in this area are weak. If someone out there can take a graph like the one above and find a good approximating/predicting curve, please let me know how. I did ask my brother-in-law, a financial analyst, whether he could use stock-market momentum-measuring tricks to do it, and he wasn't able to help.
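For completeness, here's the sort of curve fitting I was fumbling with - a sketch only, assuming you buy an exponential-decay model and let scipy's curve_fit do the work. Since a decay curve never actually hits zero, it projects the day the fit drops below one open bug:

```python
# One stab at the curve-fitting idea: fit an exponential decay to the bug
# history and project when it drops below one open bug. Purely a sketch --
# the counts are invented and a real project may not follow this model.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, b0, k):
    return b0 * np.exp(-k * t)

days = np.arange(12)
bugs = np.array([400, 390, 370, 355, 350, 352, 340, 338, 330, 329, 325, 324], float)

(b0, k), _ = curve_fit(decay, days, bugs, p0=(bugs[0], 0.01))

# Exponential decay never reaches zero, so solve b0 * exp(-k * t) = 1 for t.
t_done = np.log(b0) / k
print(f"fit: b0 = {b0:.0f}, k = {k:.4f}, projected 'done' day ~ {t_done:.0f}")
```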
In the end, my best guide was actually my eyeballs. When we were in December, and hoping we'd be finished in January, I was able to say - "Just look at the graph. We're looking at the end of March."
That turned out to be on the money - the first time we hit the zero bug line. Of course, we still weren't finished yet; a little more testing and we were right back in the red again.
One must imagine Sisyphus happy.
That really reminds me of the graphs from Benoit Mandelbrot's The (Mis)Behavior of Markets.
Posted by: Vince | April 25, 2008 at 02:13 PM
If you flip the curve vertically, it would become a curve of "work done" rather than "work left" and it would kinda look like a square root curve (here's an example).
This means that if you did X amount of work in Y time, doing 2X work would take 4Y time. In other words, development time rises as the square of the amount of work to be done. This intuitively makes sense, because when you're just starting you have few systems interacting with each other. Every time you add something new, you have to consider how it interacts with every other existing system, so the number of interactions grows roughly quadratically.
If this is true -- and I'm no mathematician, so I could be entirely wrong -- then this has a good side and a bad side. The good side is that the curve isn't asymptotic, so you'll eventually reach whatever quality level you're aiming for. The bad side -- besides development time climbing much faster than complexity -- is that even if you start ahead of the curve with an established engine, it doesn't change the shape of the curve, and it's still not going to be a straight line that's easy to plan for.
Posted by: Pag | April 25, 2008 at 05:19 PM
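A quick numerical illustration of the square-root relationship Pag describes above - a sketch with made-up units, not anything from the actual project:

```python
# Illustrating the square-root claim: if work done grows like sqrt(time),
# then time needed grows like the square of the work. Purely illustrative units.
def time_for_work(work, c=1.0):
    # Invert work = c * sqrt(t)  =>  t = (work / c) ** 2
    return (work / c) ** 2

print(time_for_work(10))   # 100.0 time units for 10 units of work
print(time_for_work(20))   # 400.0 -- double the work, four times the time
```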
Frankly, it looks to me like an "exponential decay" curve, and the operative element is not bugs fixed, but bugs found. It becomes steadily more difficult to detect bugs: as the more obvious ones are discovered, you have to look deeper and test longer to find more. You never reach zero bugs; it just eventually becomes more work to find them (not fix them, *find* them) than it's worth.
Of course, when the million monkeys are turned loose, they expose a lot of bugs that your testers could not have found in any reasonable period.
More complex systems not only contain more opportunities for bugs, but also increasingly obscure and subtle bugs that will be difficult to identify and reproduce. Given the nature of emergent complexity, both the number and the obscurity of bugs are going to rise not linearly, but exponentially.
Posted by: Dave Rickey | April 30, 2008 at 04:19 PM