Let’s get one thing out of the way: I hate the term memory consumption. I don’t think it’s relevant to anything. Likewise, I resent the terminology of processor time consumption vs. memory space consumption. It’s all in the way you use that processor time or memory space: if you use memory inefficiently, you’ll exhaust your cache, then your memory, maybe start thrashing, and where is your performance then? Instead, I like the term memory efficiency.
So now you clutch your favorite mobile device or embedded system with trembling fingers and warn me that if I eat too much of your memory you’ll kick me out. Good point. Let’s put those aside for now.
Let’s talk about big servers, with lots of memory and a few processors. Only one problem: those processors share the same memory. So the issue here is not how much memory I consume; it’s all about how I consume it. We now have a third road, a tertium quid of sorts. Processor time is abundant, memory is too, but the time spent getting information from the memory to the processor is pure waste. Memory efficiency is the key.
- Put information that you need at the same time close together.
- Replicate it if you have to.
- Stay in the same module as long as you can.
- Allocate related objects in the same memory region if possible.
- Reduce code size.
The point is, it’s no use trading space for time if that space is spread out enough to kill efficiency, and there’s no reason to trade time for space if the same space can simply be used efficiently.
Still clutching the embedded systems? Good. What we have there is a smaller scale: smaller memory, slower, a single CPU, a tiny cache. You can’t go wild with memory there, true, but you can’t waste processor time either, since it’s scarce as well. Again, we need to make things more efficient, not trade them off.