Sunday, May 17, 2015

Out of memory handling

I watched a video from CppCon 2014 where the speaker said during Q&A:
[...] if you are on Linux, you know, malloc is never going to return NULL. It's always going to give you a chunk of memory, even if memory is full. It's going to say "I can get it from somewhere at some point", and if you actually run out of memory, what happens is that the OS kills you.
I hear this a lot: there is no need to handle out-of-memory conditions because you will never get NULL from malloc, and the OS will kill your process anyway. But this is wrong; there are at least two cases where malloc will return NULL on Linux:
  • Per-process memory limits are configured, and the process is exceeding those.
  • A 32-bit application running under a 64-bit kernel is trying to use more than about 4 gigabytes of memory.
So you need to deal with malloc returning NULL.
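
As a quick demonstration of the first case, here is a minimal sketch: cap the address space with setrlimit() and then ask malloc() for more than the limit allows. The 64 MiB limit is an arbitrary number picked for the example:

/* Cap the address space, then watch malloc() fail. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    /* 64 MiB soft and hard limit on the total address space. */
    struct rlimit lim = { 64 * 1024 * 1024, 64 * 1024 * 1024 };
    if (setrlimit(RLIMIT_AS, &lim) != 0) {
        perror("setrlimit");
        return 1;
    }

    /* Asking for 128 MiB cannot succeed under a 64 MiB limit, and
     * overcommit does not help: the underlying mmap() fails with ENOMEM. */
    void *p = malloc(128 * 1024 * 1024);
    if (p == NULL)
        printf("malloc returned NULL\n");
    free(p); /* free(NULL) is a no-op */
    return 0;
}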

I'm not saying that you must handle out-of-memory conditions gracefully, although I would argue it is a good idea (especially if you are developing libraries). But you should at least check whether malloc fails, as dereferencing NULL invokes undefined behavior in C and may lead to surprising results from compiler optimizations. [1][2]
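
In library code, the usual pattern is to check the result and propagate the failure to the caller, who knows best how to recover. A minimal sketch; struct buffer and buffer_init are made-up names for illustration:

#include <stdlib.h>

struct buffer {
    char  *data;
    size_t size;
};

/* Returns 0 on success, -1 if the allocation failed. */
int buffer_init(struct buffer *buf, size_t size)
{
    buf->data = malloc(size);
    if (buf->data == NULL)
        return -1; /* report failure; let the caller decide what to do */
    buf->size = size;
    return 0;
}

The caller can then release caches and retry, report the error upwards, or abort; the point is that the decision is made where there is enough context to make it.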


[1] Such as this old Linux 2.6.30 kernel exploit.
[2] I cannot see how the compiler might introduce problems by exploiting the undefined behavior of an unchecked malloc failure, but I'm sure GCC will find a way...
