
January 20, 2007

Comments

MrCranky

The problem I have with debugger-driven development is two-fold, really.

1) I write a lot of code where the state of the system isn't easy to tell from observing data. I'm not capable of telling, just by looking at two vectors and a quaternion, whether they express the right geometric relationship. So I like to write code that checks those relationships and says yay or nay (a sketch of the kind of check I mean follows point 2).

But even more than that, I prefer visual debugging solutions. If I have a network for path-finding AI, and AI agents that are moving between states all the time, I don't want to see that in a debugger; I want to see it in the world I'm trying to simulate.

2) When does the code get exercised? It's a real pain to breakpoint and debug code that is fired off once in a blue moon, especially if it needs a lot of complex input (motion vectors, or information about several AI, etc.).

Even worse is when you're in a situation where breakpointing and debugging changes the results! I know that ideally you've got an engine which is inert when debugging, but if the break in flow caused by hitting breakpoints makes it hard to recreate those icky test cases, that's when you're right back to logging out results and poring over them later.
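
For concreteness, here's the kind of yay-or-nay check I mean in point 1. It's only a rough sketch: the Vec3/Quat types and the ExpressesRotation name are made up for the example, not taken from any particular engine's math library, and in practice the check would sit as an assert (or on-screen warning) wherever the orientation is computed.

    #include <cassert>
    #include <cmath>

    // Made-up minimal types for illustration; a real engine has its own math library.
    struct Vec3 { float x, y, z; };
    struct Quat { float w, x, y, z; };   // assumed to be unit length

    // Rotate v by unit quaternion q: with u = (x,y,z), t = 2*(u x v), v' = v + w*t + (u x t).
    static Vec3 Rotate(const Quat& q, const Vec3& v) {
        const Vec3 t = { 2.0f * (q.y * v.z - q.z * v.y),
                         2.0f * (q.z * v.x - q.x * v.z),
                         2.0f * (q.x * v.y - q.y * v.x) };
        return { v.x + q.w * t.x + (q.y * t.z - q.z * t.y),
                 v.y + q.w * t.y + (q.z * t.x - q.x * t.z),
                 v.z + q.w * t.z + (q.x * t.y - q.y * t.x) };
    }

    // The yay-or-nay check: does 'orientation' really carry 'from' onto 'to'?
    static bool ExpressesRotation(const Quat& orientation, const Vec3& from,
                                  const Vec3& to, float tolerance = 1e-4f) {
        const Vec3 r = Rotate(orientation, from);
        return std::fabs(r.x - to.x) < tolerance &&
               std::fabs(r.y - to.y) < tolerance &&
               std::fabs(r.z - to.z) < tolerance;
    }

    int main() {
        // A 90-degree rotation about Z should take +X onto +Y.
        const float s = std::sqrt(0.5f);
        const Quat q = { s, 0.0f, 0.0f, s };
        assert(ExpressesRotation(q, Vec3{1, 0, 0}, Vec3{0, 1, 0}));
        return 0;
    }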

Regardless, all these methods (unit tests, debugging, logging, visual feedback, etc.) are just tools in a toolbox, and I think you'd be tying one hand behind your back if you didn't use any of them when appropriate. Saying 'writing sufficient unit tests means you never have to use a debugger' (which it feels like the TDD camp is saying sometimes) misses the real point - writing good, solid code, quickly and effectively. We work in an industry where the goal criteria for our code are nebulous enough that writing a test to determine whether or not it works is often harder than writing the code to meet the goal!

Paul

Y'know, the theory is difficult to argue against because it is the "perfect" method. How do these guys propose doing it differently? A test per line and/or just some serious deep thought before writing any code? I've worked with guys that spend days contemplating all the possible scenarios and pitfalls before writing any code that can be checked in. It's very annoying!

I don't know. My initial reaction is that it isn't practical in a production environment to be so extremely cautious. I mean, I could spend an hour examining every corn flake and drop of milk before I put it in my bowl. But I'd much rather take a second to sniff the milk and eat my corn flakes.

Robert 'Groby' Blum

I've been through printf debugging (scoffing at the debugger users), gdb debugging (scoffing at people who used visual debuggers), visual debuggers (slowly stopping to scoff, because there seemed to be a trend), TDD, correctness proofs, and what-have-you.

It took me a long time to learn, but it turns out they are *all* just tools in my toolkit. So keep your old ways - there will be a day when you're glad you still have them ;)

vince

I'll take "tools in a toolbox" for $1000. Printf, debugger, theory, unit tests, inspection, thinking, inserting sleep statements to test race conditions, assertions, whatever gets the job done. I think experience comes to bear when you start to get a good grip on which tool is best for the situation at hand.

I will say that if I don't know why a bug is happening, then I haven't fixed it, even if I can make the symptoms go away. I need to know the root cause to feel satisfied with a confirmed kill - I'm not against a symptom-fixing hack if a deadline requires it, but it makes me feel dirty.

greggman

I'm really curious about TDD. I certainly see the argument for a large team, or especially a large company - the library department should be using TDD. But, for example, in my last project every time I had a bug I wondered: would TDD have caught this? As far as I can tell, only about 3 bugs in the entire project would have been caught. Would putting all the TDD in (taking time away from *real* code) really have been worth it?

I've seen others argue that TDD isn't about finding bugs, it's about designing code: since you write the test first, it forces you to design the interface before the implementation. That's fine, and it could be that would help. It's really hard to quantify how much better or more flexible my interfaces would be if I used TDD. I often write my interfaces first, I just don't write a unit test. Instead I write some code that uses whatever it is I'm writing. I mean, if I'm writing IO functions then clearly I need IO and clearly that IO has to work. Isn't that the test? I know, a real test would fail if something breaks down the line, but generally when something breaks it's pretty clear without specific tests. Load up a level and see it assert or stop running. That brings me back to the first point, which is that so far I haven't run into that case often enough for the extra TDD work to be worth it.

Again, I'm not saying I wouldn't do TDD. If I were writing libraries for multiple teams I'd be all over TDD, but for game code I'm still on the fence.

What's your experience with game code TDD vs say library code TDD? What's your impression of your code design before you started using TDD vs after? (and how much of the difference is attributable to TDD vs just growing as a programmer over time?)

Chris Buss

My problem with the TDD guys, on the surface of it, is what I refer to as the "My Beautiful Code!" syndrome.

We are in a business; we are not in academia, where we can put our code up on some code fellatio forums and have everyone touch themselves over code that has obviously been touched by the hand of god.

Clean code, all for it; reusable code, all for it; missing deadlines because we refused to get the job done, unacceptable!


Billy Zelsnack

If anyone ever gives you crap about being a 'printf debugger', just tell them that you are not one. Tell them that you are an 'instrumentation debugger'. It sounds much fancier.

Lorenzo

TDD as you describe it is quite different from what I do every day, and I do not encounter the problems you describe (I had those problems when I started, but that was due to not applying the thing properly).

TDD is not just writing tests, it's going through the red-green-refactor cycle. If you work this way you do not have any line that is not covered by at least one test. And those lines do exactly what you expected, because they were written right after you wrote the test.
You simply never find yourself unsure whether an "if" is actually executed, because it was added only in response to that specific verification. You had a red bar because the "if" was missing; you add the "if" and the bar turns green.
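
A tiny sketch of what I mean (the Health class and the clamping rule are invented just for the example, not from any real project): the test is written first, it fails, and the "if" exists only because that red bar demanded it.

    #include <cassert>

    // Invented class just to illustrate the cycle.
    class Health {
    public:
        explicit Health(int hp) : hp_(hp) {}
        void ApplyDamage(int amount) {
            hp_ -= amount;
            if (hp_ < 0)   // this branch exists only because the test below demanded it;
                hp_ = 0;   // it was red before the clamp was written, green after
        }
        int Value() const { return hp_; }
    private:
        int hp_;
    };

    // Written before the "if": red first, then green once the clamp is in place.
    static void TestDamageNeverDrivesHealthNegative() {
        Health h(10);
        h.ApplyDamage(25);
        assert(h.Value() == 0);
    }

    int main() {
        TestDamageNeverDrivesHealthNegative();
        return 0;
    }

The refactor step then cleans up the code while the bar stays green.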

Anyway, if you are unsure about some path in your code you can just add one more test for that specific case, execute it through the debugger to be sure that the test follows exactly that path, and in the end you obtain a "recording" of your debugging session. Consider this a trick to verify the correctness of the test.
But this should not be the rule; otherwise you are just not doing TDD. With TDD, the tests should be the only reference for the correctness of the system.
If you depend on visual inspection for correctness, your tests will get weaker and weaker. If instead you rely only on tests, they'll get stronger, because you have to write all of your doubts and reasoning into them.
Think of it as a recorder.

You typically do not have a unit test for each line; usually you have tests that cover 2 or 3 lines, and not just one test but about 3 or 4 covering the same lines in different contexts. You often obtain 100% line and branch coverage without any specific effort.

But the important point is that you can repeat the tests - all the tests - any time you want. The code inspection you do with the debugger is not repeatable.
You can run some hundreds of tests whenever you want (usually every 10 minutes), and you know that the checks you are running are exactly the same ones you did when you wrote the code.

One last note: writing a test in a TDD-developed system takes very little time; you typically add a test in about a minute.
Even when verifying code with the debugger or with printf you need some small setup code (a main() or setting variable values in the debugger); the time required to write a test is that very same time. And, again, you do not waste the investment.

Adding a test to non-TDD code, on the other hand, can be very costly.

The debugger is useful for finding bugs, but as soon as I am able to reproduce one I switch back to writing a test that reproduces it (red bar) and fix it until green.
If, through examination of the code, I suspect there could be other similar bugs, I add 2 or 3 more tests to check. If something breaks, good; otherwise I leave the new tests there as an extra check.
For vince: it's up to you whether to fix the bug with dirty tricks or with a good solution. TDD guarantees that your fix matches all the expectations of your system: on the code you are fixing you could have 5 or 10 other tests, and your fix has to pass them all. Finally, the test suite allows you to refactor your code into cleaner code any time you want, and TDD says you should do this after each green bar.
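
For example (a made-up bug and a made-up WrapDegrees function, just to show the shape of it): a report comes in that angles wrap wrongly for negative input, so the first thing written is the failing reproduction, then the fix, and the test stays in the suite forever as a regression check.

    #include <cassert>

    // Invented function with an invented bug report: "WrapDegrees(-30) returns -30".
    static int WrapDegrees(int degrees) {
        int d = degrees % 360;
        if (d < 0)        // the fix; before it was added, the reproduction test below was red
            d += 360;
        return d;
    }

    // Step 1: capture the bug report as a failing test, then fix until green.
    static void TestWrapHandlesNegativeAngles() {
        assert(WrapDegrees(-30) == 330);   // the reproduction from the report
        assert(WrapDegrees(370) == 10);    // a couple of similar cases, left in as extra checks
        assert(WrapDegrees(0) == 0);
    }

    int main() {
        TestWrapHandlesNegativeAngles();
        return 0;
    }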

To any sceptics (I was one): just try it for at least a couple of months with someone who knows how to do it.

