Thinking back to Die By The Sword reminds me of what was possibly the worst case of speculative generality / premature abstraction I ever indulged in. Pete told me that we were going to have a lava pit in one of the arenas - most of the characters would get burned when they entered it, but a couple (the skeletons and Magmar, the rock-elemental guy) would be immune.
I dove in and started coding. Brandishing the ideal of data-driven design, I said to myself, "I don't want this to be some kludge. I want this to be extensible and abstract."
So here's what I did -
* Any polygon you could stand on had an associated temperature.
* All the creatures had a temperature endurance. Magmar and the skeletons could stand very high temperatures; most people got uncomfortable around 90 degrees centigrade. (Something like the sketch below.)
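Reconstructed from memory, the check probably boiled down to something like this sketch - the names and numbers here are made up, not the actual Die By The Sword code:

// Hypothetical sketch of the "general" version - illustrative names only.
struct SurfacePoly
{
    float temperature;            // degrees centigrade, authored per walkable polygon
};

struct Creature
{
    float temperatureEndurance;   // ~90 for most people, very high for Magmar and the skeletons
};

void BurnTheCrapOutOfThem(Creature& creature) { /* apply fire damage */ }

// Called whenever a creature is standing on a polygon.
void ApplySurfaceHeat(Creature& creature, const SurfacePoly& poly)
{
    if (poly.temperature > creature.temperatureEndurance)
        BurnTheCrapOutOfThem(creature);
}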
I forget how I propagated all this information into the data files and whatnot. There may have been a table or two in the source code itself, immediately throwing one of the advantages of data-driven design out the window (designers not having to touch code files and all that).
What was I thinking? No idea. Was I imagining that someday a designer would want to put in some kind of "medium-hot" surface that some third class of creatures - not as tough as the skeletons or Magmar - would be able to walk across? Or maybe a "superhot" surface that would destroy even skeletons? Of course, there were no changes to that code after it was written, and a simple
if ((characterflavor != MAGMAR) && (characterflavor != SKELETON)) BurnTheCrapOutOfThem();
would have been just fine. And what I could have coded in minutes took a day.
TSTTCPW (The Simplest Thing That Could Possibly Work)! Lived and learned.
Hey, as long as we're sharing, do any of you have good premature-abstraction stories?
In Praetorians, we wrote a little tool that would take the databases for all units and calculate the effects of combat between every pair of unit types based on their parameters. We hoped to use that for balancing. It turned out that the dynamics of battle between groups of dozens of units (focus firing, speed, range, etc.) were having a much larger impact on combat outcome than individual parameters. Balancing ended up being done by setting up scenarios and watching them actually play out.
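A tool like that amounts to iterating over every pair of unit types and estimating an outcome from their stats; a minimal sketch follows - the unit stats and the damage math are invented for illustration, not the actual Praetorians data:

// Minimal sketch of a pairwise balance calculator. The stats and the
// time-to-kill formula are illustrative assumptions only.
#include <cstdio>
#include <string>
#include <vector>

struct UnitType
{
    std::string name;
    float hitPoints;
    float damagePerSecond;
};

// Seconds unit 'a' needs to kill unit 'b' in an idealized one-on-one fight.
float TimeToKill(const UnitType& a, const UnitType& b)
{
    return b.hitPoints / a.damagePerSecond;
}

int main()
{
    std::vector<UnitType> types = {
        { "Legionary", 100.0f, 10.0f },
        { "Archer",     60.0f, 14.0f },
        { "Cavalry",   140.0f, 18.0f },
    };

    // Dump the full matchup matrix: who wins each idealized pairing.
    for (const UnitType& a : types)
        for (const UnitType& b : types)
            std::printf("%s vs %s: %s wins\n",
                        a.name.c_str(), b.name.c_str(),
                        TimeToKill(a, b) <= TimeToKill(b, a) ? a.name.c_str()
                                                             : b.name.c_str());
    return 0;
}

These are exactly the kind of idealized one-on-one numbers that, per the above, failed to predict what groups of dozens of units actually did to each other.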
We also had a pretty comprehensive system to define formation shapes (we could create wedges, arcs and almost anything else you would imagine), but the final game only featured rectangular formations.
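By contrast, the slot layout the shipped game actually needed is about this much code - a sketch with invented spacing and parameters, not the real Praetorians values:

// Sketch: slot positions for a plain rectangular formation.
#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };

std::vector<Vec2> RectangularFormation(int rows, int cols, float spacing)
{
    std::vector<Vec2> slots;
    slots.reserve(static_cast<std::size_t>(rows) * cols);
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            slots.push_back({ c * spacing, r * spacing });
    return slots;
}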
As long as you don't go overboard (i.e. you always keep in mind the actual problem you are trying to solve), I normally find that half the abstractions provide additional features that go unused, and half become powerful tools in the hands of creative designers.
Posted by: Jare | June 17, 2007 at 01:25 AM
I've done premature abstraction several times, and I should have plenty of examples, but I can't think of one right now. As a counter-example I can mention that I'm working on a mobile game now that had support for selecting items with the arrow keys and "clicking" them with the press of a key. I needed to add functionality for performing an action when an item received focus instead of when it was clicked, and the system was generic enough to allow me to add that with very little code, even though it had been used only for "clicking" for several months. Still, it's difficult to say that the time they spent implementing that generic functionality at the beginning of the project (I joined the project later) was worth it. I didn't know it existed, so at first I implemented the feature differently until someone complained. So in the end, I don't think the project saved any time on it. And there was certainly a risk that the abstraction would never be needed.
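The shape of that abstraction is roughly the following - the names are invented, and it's sketched in C++ here even though the actual project was a mobile game:

// Sketch of the generic item abstraction described above (invented names).
struct MenuItem
{
    virtual ~MenuItem() {}
    virtual void OnClicked() = 0;   // the select key was pressed on this item
    virtual void OnFocused() {}     // later addition: the arrow keys moved focus here
};

// Because the menu already routed input through this interface, supporting
// "do something on focus" meant overriding the no-op OnFocused() - and nothing else.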
Posted by: Martin Vilcans | June 18, 2007 at 07:42 AM
I'm leaving the wonderful realm of video game development here.
Some years ago I designed a system built around the idea of a data bus - every component should be able to access the data bus, query for some value, add, remove or modify some data, and so on. The architecture ended up being quite complex, with a custom RTTI system (yeah, I know. I was young) to deal with the different data types. In fact, the actual data was encapsulated in a container; the container was both the subject of data observers and an observer of the different data sources and data modifiers. Then I did some concrete classes to store the actual data. The specs were really scarce, so I just did a (templated) 1D-data vector (you know what? getting a custom RTTI system to work with templates is quite the fun...) and a 2D-data vector.
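The overall shape was roughly this heavily simplified sketch - all names are invented, and the custom RTTI is reduced to a plain string tag:

// Very rough sketch of the data bus idea; names invented, RTTI reduced to a tag.
#include <functional>
#include <map>
#include <string>
#include <vector>

struct DataEntry
{
    std::string typeTag;                                        // stand-in for the custom RTTI
    std::vector<unsigned char> bytes;                           // type-erased payload
    std::vector<std::function<void(const DataEntry&)>> observers;
};

class DataBus
{
public:
    // Any component can write a value; observers of that key are notified.
    void Set(const std::string& key, DataEntry entry)
    {
        DataEntry& slot = entries_[key];
        slot.typeTag = std::move(entry.typeTag);
        slot.bytes   = std::move(entry.bytes);
        for (auto& observer : slot.observers)
            observer(slot);
    }

    // Any component can watch a value for changes.
    void Observe(const std::string& key, std::function<void(const DataEntry&)> fn)
    {
        entries_[key].observers.push_back(std::move(fn));
    }

private:
    std::map<std::string, DataEntry> entries_;
};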
After what might have been 4 or 6 weeks of work, I finally got more details about the "data bus" and the data type it should manage. Images. More specifically, BMPs. Oh well.
In my case, I think that TSTTCPW would have been to NOT do this job.
Posted by: Emmanuel Deloget | June 19, 2007 at 06:25 AM
In Metal Gear Solid (PS1 version), there was a level with puddles - one or two in that level. The way they coded it was not to put a property on the water-filled polygons saying they were reflective, so the engine would do the mirroring whenever you saw them; instead, they simply put one or two bounding boxes in the level and checked for them manually - if you stepped into one, they would show Solid Snake upside down to imitate the reflection. Not a whole-engine approach, but one that worked for them and was easier to implement - a bit of a bottom-up approach rather than top-down.
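In code terms the trick is about this simple - an illustrative sketch with made-up names and coordinates:

// Hand-placed puddle volumes for the level, checked manually each frame.
struct AABB { float minX, minZ, maxX, maxZ; };

static const AABB kPuddles[] = {
    { 10.0f, 4.0f, 13.0f,  6.0f },
    { 22.0f, 9.0f, 24.0f, 11.5f },
};

bool IsInPuddle(float x, float z)
{
    for (const AABB& box : kPuddles)
        if (x >= box.minX && x <= box.maxX && z >= box.minZ && z <= box.maxZ)
            return true;
    return false;
}

// At render time: if IsInPuddle(snake.x, snake.z), draw the character a second
// time mirrored about the floor plane to fake the reflection.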
Posted by: Dimiter "malkia" Stanev | June 19, 2007 at 09:15 AM
You got it easy because at least your abstraction did not turn out to be incompatible with future requests from the game designers. :)
Posted by: Emil Dotchevski | June 19, 2007 at 03:47 PM
Wow, it looks like I wrote a novel.. :/
In Forrester Research's 10 Mistakes in Software Development, #3 is overscoping a solution and #9 is jumping into development without enough research, and we've all done it. :(
A friend consulted at a large transportation company in the Kansas City area. They'd been gearing up for a massive conversion from their existing AS/400 systems to a completely Java-based system. Remarkably, in the early stages of the project, they decided to wrap the entire JDK in their own custom framework and decreed that all developers use these wrappers exclusively. As a result of this, they obtained -
A.) little or no added functionality,
B.) more limited abstractions than the raw JDK,
C.) minimally tested code underlying everything,
D.) the requirement to train all incoming Java developers on their framework, and
E.) worst of all, the obligation to maintain many more thousands of lines of code.
I've observed that developers tend to progress through a number of stages in their careers. These are like the stages of grief: inescapable. This is because each stage produces a set of skills and philosophies that are eventually synthesized in the master software developer.
The Underengineer
The underengineer is elated at his ability to get things done quickly and be productive, disdaining others (usually the overengineer) for what he sees as excessive rumination and for producing an excessive amount of code for a given task. You can count on these guys to get stuff done. Will it be done 'the right way'? Only by accident. Underengineers are often unable to discern 'the right way,' usually because they've spent little time reading and maintaining other people's code.
A developer may begin to come out of this stage when he tires of fixing or writing the same code (in different places) over and over again. Or perhaps he'll be propelled into the second stage by an enthusiastic reading of Design Patterns or a sudden grasp of UML.
The developer at this stage is fascinated by accomplishing the task at hand, and it is in this stage that he learns how to code and debug.
The Overengineer
The overengineer knows that he can solve any single development task and has become fascinated by possibility. Abstraction is magical. How can any piece of code be used to solve multiple tasks? Paradoxically, this usually ends up producing more code for any one task. While the underengineer shakes his head at this, the overengineer knows that his approach will produce much less code for an entire set of similar problems. Meanwhile, the overengineer looks at the underengineer's hackery with contempt. The question the overengineer rarely asks himself, though, is whether there will be a large enough set of similar problems to justify the effort. And so he abstracts everything in the name of flexibility.
The overengineer is sometimes shocked to realize he's become much less productive in terms of functionality than before. He can't seem to get anything done! Still, he produces as much (if not more) code and suddenly becomes capable of building much larger systems than he could as an underengineer.
This is the stage in which the developer learns architecture and the value (and painfully, the costs) of abstraction. This is the stage of the architects of the transportation system conversion discussed above.
What kills the overengineer? Maintenance. Dependencies. Broken abstractions. Realizing that the flexibility he built in is unused and that real systems change in ways he could never have anticipated. At some point, and probably many, the overengineer will get a phone call at 2am asking him to come in and fix an issue that breaks his whole architecture. After a few of these heartbreaking moments, his world view begins to change again...
The Master Developer
The master developer is skilled in coding and debugging from his time as an underengineer and is a skilled architect from his time as an overengineer. What's more, he's come to understand the following -
The primary reason for abstraction is to simplify implementation for a given set of requirements.
If abstraction doesn't simplify, ditch it.
The master developer also realizes that the solution domain for a given problem may extend beyond technology. People often try to use technology to solve social or organizational problems. Conversely, people frequently attempt social or organizational solutions for essentially technological problems. Recognizing these incongruities before they turn into a software design and implementation can help avoid doomed projects.
The organizational problem in the transportation example above is that software architects were assigned to do a job before anyone knew what job they were to do. The proper response would be to do nothing - or better, to work on a different project until the problem domain was fully understood.
Instead they chose speculatively general busy work, incurring a much higher cost.
Posted by: Paul Senzee | June 30, 2007 at 07:26 AM
Paul's comment is very true - although it might be arrogant for anyone to think they've ever reached "Mastery" - I hope he republishes it somewhere more visible.
Let's see, I started overengineering when I was around...24...made the jump to C++ then...and I shifted back to "conscious underengineering" around...Spider-Man 1...that would be when I was 30 or so.
Another way of looking at things - for any given problem, there is a perfect amount of up-front cost/abstraction that will make us maximally efficient - but the chances of us hitting that perfect amount are nearly zero. We will always either overengineer or underengineer to some extent, so a holy grail to shoot for is just keeping that delta as small as possible.
Posted by: Jamie Fristrom | June 30, 2007 at 09:02 AM
Lol, I thought about the 'master' sounding arrogant - development is certainly a lifelong learning process. I couldn't think of an alternative. Well, I thought of 'good' and 'seasoned', but they didn't do much for me. Any suggestions?
I've reposted it at http://senzee.blogspot.com/2007/06/developers-life-stages.html as well.
Posted by: Paul Senzee | June 30, 2007 at 09:11 AM
In an acronym: YAGNI, or "You ain't gonna need it". I picked this up from some Agile development book, and for the product I recently shipped I made it our rallying cry.
I believe it is useful in almost any situation. Ignore the "Agile" thing. On this project (Monster's Career Advertising Network aka CAN) we had great vision, enormous possibilities for what we might do, and really no clue at any given point what we *had* to do (until the last month or so before the product landed).
If we hadn't YAGNI'd it, we'd never have shipped; we would have generalized to chase either ideas we thought were cool or requirements that came and went. Also, by crying YAGNI (literally) I could get engineers to rethink their approach to a problem, which made for much more productive engineering and code.
Even in a project with rigid requirements (like the “reimplement in Java” one mentioned above), you have to YAGNI. If you under-abstract the code, you'll learn quickly and refactor. Refactoring once you've overdone it is much, much harder.
Posted by: Christian Knott | July 12, 2007 at 04:25 PM