So Bryan says you can't have globals if you have a multicore architecture.
Schizoid was multicore (to my chagrin, because it doesn't look like the sort of game that should have needed more than one core to run fast, but that's what the XNA Game Studio guys recommended), so let's talk about how we went multicore, kept our globals, and survived.
One thing we did was close to what Bryan describes - before-and-after game states. But how we did it was quite ghetto: since our multicore support was a retrofit, we just had oldPos and newPos, oldRot and newRot, in the entity states. Sometimes a cosmetic variable would get bashed by the wrong thread and come out different on different executions, but I didn't particularly care.
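In sketch form, it amounted to something like this (illustrative C++ with made-up names, not our actual code):

    struct Vector3    { float x, y, z; };
    struct Quaternion { float x, y, z, w; };

    // Each entity carries last frame's state alongside the state being computed,
    // so worker threads can read the old state while writing the new one.
    struct EntityState {
        Vector3    oldPos, newPos;   // workers read oldPos, write newPos
        Quaternion oldRot, newRot;
    };

    // Once per frame, on the main thread, before the workers kick off:
    void FlipState(EntityState& e) {
        e.oldPos = e.newPos;
        e.oldRot = e.newRot;
    }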
But what about singletons? And here I'm talking about logical singletons: variables or classes that there's only one instance of, that can be read from or written to, that don't make sense to duplicate. The renderer...the audio engine...things where I felt performance didn't matter...the UI manager...player macro-game state (what levels were unlocked or not). You can wrap these things up however you want - put layer after layer of indirection on them - and you still have the problem that if more than one thread tries to access them at the same time, your results are undefined. Whether they're globals syntactically is irrelevant.
And not just singletons - there's a kind of variable that's sort of 'logically' global, meaning it's fairly easy, and probably intended, for any module to be able to access it, whether it's syntactically global or not. Maybe the module has to go through an Inst() method or call gApp->GetRenderer()->GetDisplay()->GetWhatever(), but it's still fairly easy, and if you're doing it from multiple threads you'll get in trouble. So any member variable that has, say, both Get... and Set... accessors could be 'global' in this sense...
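For example (a contrived sketch, not anything from Schizoid) - nothing here is a syntactic global, but the hazard is the same as if it were:

    class Display {
    public:
        int  GetWidth() const { return mWidth; }
        void SetWidth(int w)  { mWidth = w; }   // no lock - caller beware
    private:
        int mWidth = 1280;
    };

    // Thread A: gApp->GetRenderer()->GetDisplay()->SetWidth(1920);
    // Thread B: int w = gApp->GetRenderer()->GetDisplay()->GetWidth();
    // B's read races with A's write - that's undefined behavior in C++,
    // and the accessor chain protects you no better than a bare global would.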
And we don't have the tools to automatically protect ourselves at compile time, or even really at runtime, from the sorts of heisenbugs that can crop up when we use these things.
So what helped me was notation. Spolsky once wrote an article on what Hungarian notation was *really* for - it's down at the bottom of http://www.joelonsoftware.com/articles/Wrong.html - and what it was for was helping you tell whether the program is doing what you think it's doing. I thought, instead of using notation to remember whether something is a bool or not, we can use notation to remember whether something is logically global or logically a singleton or what. And I broke it down like this:
There are constants. These things are read-only. They might not be syntactically const - sometimes I like to store them as mutable variables so I can change them in the debugger at runtime, for example. But when we ship, they'll be read-only. These get the letter 'c', and they're safe to read from on any thread.
There are full-on globals. You can read or write them from anywhere. They may not be syntactically global - they might be members of a singleton. These get 'gl' (global lock), and you have to remember to use locks if you're messing with them.
There are thread-safe globals. These are classes that have thread-safety built in - they lock and unlock their members appropriately when you use their functions, so you can't hurt yourself using them from multiple threads, though they're not necessarily fast. These get 'gts' (global thread safe).
There are partially thread-safe globals. These are classes where some member functions are thread safe and others aren't. These get 'gpts', and you have to read the documentation when you're working with one to make sure you don't screw anything up.
Globals that are only supposed to be accessed by the main thread get 'gmt' (global main thread).
Globals that should only be written by the main thread but can be read from any thread get 'gwmt' (global write main thread).
Member variables that were properly protected or private - ones with no Set accessors and no other ways to violate them (a Get that returns a reference, for example) - got the familiar 'm' designation.
You might say, 'Sounds like almost nothing in my program is m,' but it's about how the variables are *supposed* to be used rather than how they *could be* used. If the compiler could protect us from using them wrong, we wouldn't need the notation.
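To make that concrete, here's the sort of thing the prefixes marked (illustrative C++ declarations with invented names, not actual Schizoid code):

    #include <mutex>

    // 'c' - constant: read-only by ship time, safe to read from any thread.
    // Kept as a plain variable in dev builds so it can be tweaked in the debugger.
    float cMaxSpeed = 300.0f;

    // 'gl' - global lock: read/write from anywhere, but you must take the lock.
    std::mutex glScoreLock;
    int glScore = 0;

    // 'gts' - global thread safe: the object locks internally.
    class ThreadSafeCounter {
    public:
        void Add(int n) { std::lock_guard<std::mutex> lock(mMutex); mValue += n; }
        int  Get()      { std::lock_guard<std::mutex> lock(mMutex); return mValue; }
    private:
        std::mutex mMutex;
        int mValue = 0;
    };
    ThreadSafeCounter gtsKillCount;

    // 'gpts' - partially thread safe: some methods lock, some don't; read the docs.
    // 'gmt'  - main thread only: no other thread may touch it.
    // 'gwmt' - write main thread: the main thread writes, any thread may read.
    int gwmtFrameCount = 0;

    // 'm' - a properly protected member: no Set accessor, no Get returning a reference.
    class Entity {
    public:
        int GetHealth() const { return mHealth; }
    private:
        int mHealth = 100;
    };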
And I called it "Australian Notation" because I'm half-Australian.
Random additional note: partway through the project I realized that prefixes didn't play well with Visual Studio's autocomplete. I'd usually remember the name of a variable, but not whether it was a global or a member or what kind of global it was - which was why I invented the notation in the first place, so I wouldn't have to remember. After Schizoid, I started making the notations suffixes instead of prefixes, so
    IRenderer* gRenderer;
would become
    RendererI* rendererG;
And then I discovered that the latest Visual Assist does something fuzzy where prefixes can be ignored, so the point is moot. Do whatever you want, as long as you're consistent.
I have been combining the two types of globals you mentioned (things that happen to be global and things that are accessed globally) into the same thing. Ultimately, I don't care about things that happen to be global, because that implies you can change how they are created and accessed without affecting the behavior of other systems that use them.
On the other hand, I think that all global references ultimately become scoped in the long run. Even things like a debug logger; we are seeing a continuous drive towards making code more self-contained and composable, which means that implicit references that can modify behavior are becoming less and less maintainable.
main() is probably the only exception.
Posted by: Mat Noguchi | July 28, 2009 at 11:21 AM
This Australian notation of yours is very close to standard practice among good developers I know.
Thread-safety via locks, however, is untenable at eight cores and beyond. On a three-core Xbox 360 or a four-core PC, it can sorta work out.
At eight cores and beyond, one typically employs a "job system" that farms little tasks out to cores. Each core has its own tiny local memory (aka cache) and communicates with other cores 1) ideally never, or 2) if you must, via a FIFO.
A FIFO lets the writing core keep pumping out outputs for a while even if the reading core is too busy to pick them up, and vice versa.
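In modern C++ terms, a minimal sketch of such a FIFO might look like this (single producer, single consumer, fixed capacity - illustrative, not any particular engine's job system):

    #include <atomic>
    #include <cstddef>

    // Single-producer/single-consumer ring buffer: the writing core pushes,
    // the reading core pops, and neither blocks the other on a lock.
    template <typename T, size_t N>
    class SpscFifo {
    public:
        bool Push(const T& item) {                       // called only by the writer
            size_t head = mHead.load(std::memory_order_relaxed);
            size_t next = (head + 1) % N;
            if (next == mTail.load(std::memory_order_acquire))
                return false;                            // full - writer retries later
            mItems[head] = item;
            mHead.store(next, std::memory_order_release);
            return true;
        }
        bool Pop(T& item) {                              // called only by the reader
            size_t tail = mTail.load(std::memory_order_relaxed);
            if (tail == mHead.load(std::memory_order_acquire))
                return false;                            // empty - nothing to do yet
            item = mItems[tail];
            mTail.store((tail + 1) % N, std::memory_order_release);
            return true;
        }
    private:
        T mItems[N];
        std::atomic<size_t> mHead{0};                    // next slot the writer fills
        std::atomic<size_t> mTail{0};                    // next slot the reader drains
    };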
As for "natural" singletons such as "gGraphics," well already on PC, graphics state is becoming thread-local. As of DX11, There's not much global worth sharing anymore.
Posted by: Bryan McNett | July 30, 2009 at 12:18 AM