
August 22, 2006

Comments

anoneemouse

"Hmm...when my daughter's old enough to read am I going to have to go through all my blog entries and take the swear words out?"

I'm sure refactoring is a part of agile blogging ;-)

Hector Yee

If your memory access patterns are predictable, you can still get some good cache performance by judicious use of prefetching even with dynamic allocation.
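As a rough illustration of the idea (sketch only: __builtin_prefetch is a real GCC/Clang intrinsic, but the Particle type and the update loop below are invented for illustration):

    // Walk a linked structure and hint the next node into cache while
    // working on the current one.
    struct Particle { float pos[3]; float vel[3]; Particle* next; };

    void update(Particle* head, float dt)
    {
        for (Particle* p = head; p != 0; p = p->next)
        {
            if (p->next)
                __builtin_prefetch(p->next);  // start pulling the next node into cache
            for (int i = 0; i < 3; ++i)
                p->pos[i] += p->vel[i] * dt;  // this work overlaps the memory fetch
        }
    }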

Martin Donlon

Pretty much how I feel about things too, Jamie. Start off with a solid general purpose allocator with some kind of binning/block system to reduce fragmentation for small objects. Make sure you build in some decent tracking and debugging facilities too.

Special purpose allocators are wasteful in my opinion. We saved a lot of memory on USM by consolidating most of the special-purpose pools into a single small-memory allocator.

If the maximum working set and the object size are small enough, then you might get some cache performance improvements by using a smaller heap that maintains locality.
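A rough sketch of the kind of binned small-block allocator described above; the size classes, names, and malloc fallback are assumptions, and a real version would add the tracking and debugging hooks mentioned:

    #include <cstddef>
    #include <cstdlib>

    class SmallAllocator
    {
        enum { kMaxSmall = 256, kAlign = 16 };      // bigger requests go to the general heap
        struct FreeNode { FreeNode* next; };
        FreeNode* bins_[kMaxSmall / kAlign];        // one free list per size class

        static std::size_t binIndex(std::size_t size) { return (size + kAlign - 1) / kAlign - 1; }

    public:
        SmallAllocator() { for (int i = 0; i < kMaxSmall / kAlign; ++i) bins_[i] = 0; }

        void* alloc(std::size_t size)
        {
            if (size == 0 || size > kMaxSmall)
                return std::malloc(size);           // fall back to the general-purpose heap
            std::size_t bin = binIndex(size);
            if (FreeNode* n = bins_[bin])           // reuse a freed block of this size class
            {
                bins_[bin] = n->next;
                return n;
            }
            return std::malloc((bin + 1) * kAlign); // real code would carve blocks out of pages
        }

        void release(void* p, std::size_t size)
        {
            if (size == 0 || size > kMaxSmall) { std::free(p); return; }
            FreeNode* n = static_cast<FreeNode*>(p);
            n->next = bins_[binIndex(size)];        // push back onto the size-class free list
            bins_[binIndex(size)] = n;
        }
    };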

Charlie

Don't forget that malloc/free/new/delete is not the only allocation model that exists. The Jak and Daxter games used a very different memory model for game objects - the gist was that a) the children of one class (Process) could make a big contiguous land-grab when created, b) heap allocations were verboten everywhere else, and c) relocations were permitted. The system would memcpy a Process somewhere new, then call a virtual function to have it fix up its pointers.

Obviously, dealing with those relocations is a project-long headache, but in exchange you get perfect defragmentation (compaction, really) that's trivial to implement.
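A minimal sketch of that relocation pattern; this is not the actual Jak and Daxter code, and the class names and single-object compact step are invented for illustration:

    #include <cstddef>
    #include <cstring>

    struct Process
    {
        virtual ~Process() {}
        // Called after the object has been moved by 'delta' bytes; each subclass
        // repairs any pointers it keeps into its own block.
        virtual void relocate(std::ptrdiff_t delta) = 0;
    };

    struct Enemy : public Process
    {
        char* scratch;                                    // points inside this object's own land-grab
        virtual void relocate(std::ptrdiff_t delta) { scratch += delta; }
    };

    // One step of compaction: slide a live object down over a dead gap.
    void compact(Process* obj, std::size_t size, char* dest)
    {
        char* src = reinterpret_cast<char*>(obj);
        std::memcpy(dest, src, size);                     // bitwise move of the whole block
        reinterpret_cast<Process*>(dest)->relocate(dest - src);
    }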

Of course, you can get dynamic allocations in your code way down anyway with a little skill at using the stack (which can, in turn, be acquired with some practice at a good functional language like OCaml).

alienspy77

Obviously it's a speed/memory trade-off. Fixed pool allocation and release are both 2 instructions. That is the reason it is frequently used in operating system kernels (in conjunction with new block allocation on page fault, which gets rid of the if statement for multiblock pools). Also, FYI, fixed pools aren't guaranteed to buy you any cache performance in the long run.

Anyhow, even Martin's lightweight slab allocator was taking up 2% of the frame on PS2 on USM at some point. You just can't do that in any performance-critical code. Any kind of general purpose allocator will be at least 10 times as slow as a fixed pool, a stack, or no allocation at all. If basic performance intuition tells you that the code is likely to become a bottleneck, then chances are you'll have to go back and look for ways to either get rid of the allocation entirely or make it faster. With per-allocation access locking in multithreaded environments it's going to become even more of a problem.

Re: tuning the pool sizes - fixed pools are multiblock allocators, so you don't have to force an upper cap on them. Whenever I explicitly disabled multiblock pool expansion on the Spider-Man projects in the past, it was to bring up an indication of a problem (object count abnormally high, etc.).

There is a reason all these different allocation models exist: the heap model doesn't work for everyone in all cases. Educating yourself about different allocation models just gives you more options to choose from in any given situation. But of course whoever ends up doing things is the person who makes the final decisions, with whatever preferences/knowledge they have about the problem domain. Peace.
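For reference, a sketch of the kind of fixed pool being described, where the fast path is just an intrusive free-list pop/push; the names and single-slab layout are assumptions, and a multiblock version would grab another slab instead of returning 0:

    template <class T, int N>
    class FixedPool
    {
        union Slot { Slot* next; char storage[sizeof(T)]; };
        Slot  slots_[N];
        Slot* head_;

    public:
        FixedPool() : head_(slots_)
        {
            // Thread every slot onto the free list up front.
            for (int i = 0; i < N - 1; ++i) slots_[i].next = &slots_[i + 1];
            slots_[N - 1].next = 0;
        }

        void* alloc()                 // pop the free-list head
        {
            Slot* s = head_;
            if (s) head_ = s->next;
            return s;
        }

        void release(void* p)         // push the block back on the list
        {
            Slot* s = static_cast<Slot*>(p);
            s->next = head_;
            head_ = s;
        }
    };

    // Usage (MyObject is hypothetical):
    //   FixedPool<MyObject, 512> pool;
    //   void* p = pool.alloc(); /* ... */ pool.release(p);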

Bryan McNett

The best dynamic memory allocation is none at all. To avoid allocation, aim to model systems that have no persistent state. If there is no state to save, there is no reason to allocate memory.

Stateless systems require considerably more thought, but can pay off hugely in terms of lower overall complexity and raw performance.

Jason Smith

I am not familiar with the stateless system model, but from its description it is obvious that it's not possible for any nontrivial application. There is always state and state sharing. The best one can hope for is to minimize it. A function with only local variables is stateless. Using such functions you can write a math library, but even a physics library will have problems, because the computations are nontrivial and require caching inputs for good performance.
Without state you could never write an efficient occlusion culling algorithm, because the whole point of it is to precompute something that does not change and use it later.
I may have misunderstood Bryan McNett's comment "To avoid allocation, aim to model systems that have no persistent state," but to me it means: "Get out of game programming and start writing simple math libraries."

