Published Jul 24, 2008
When Firefox 3 was released, jemalloc was left disabled for the OS X version, essentially because OS X's malloc implementation did as good a job as jemalloc (in terms of both speed and memory usage), and we didn't think it was worth risking regressions due to changed memory layout. Recently I have been working on a memory reserve system that allows Firefox to simplify its error handling for out-of-memory conditions. Since the memory reserve is necessarily deeply integrated with the allocator, we need to use jemalloc on all platforms in order to take advantage of this new facility. This prompted me to take a closer look at jemalloc performance on OS X. In summary: replacing the system malloc on OS X was a pain, profiling uncovered a red-black tree bottleneck of my own making, and fixing it made jemalloc markedly faster.
On ELF-based systems (pretty much all modern Unix and Unix-like systems except OS X), it is possible to cleanly replace the system malloc, either by directly implementing the appropriate functions (malloc, realloc, free, etc.), or by using the LD_PRELOAD environment variable to preload a dynamic library that contains a malloc implementation. For Windows, replacing malloc is much harder; it is necessary to create a custom CRT. On the bright side, at least it is possible to create a custom CRT, since source code is included with MS Visual Studio.
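For concreteness, here is a minimal toy malloc replacement showing the ELF mechanism; defining these symbols in a preloaded shared library is all it takes for the dynamic linker to route every allocation through them. The bump-allocator internals are purely illustrative (it is not thread-safe and never reclaims memory); a real replacement like jemalloc implements free/realloc properly.

```c
/* shim.c: a toy malloc replacement demonstrating ELF symbol interposition.
 * Build and preload it:
 *   cc -shared -fPIC -o libshim.so shim.c
 *   LD_PRELOAD=./libshim.so ls
 */
#include <stddef.h>
#include <string.h>
#include <sys/mman.h>

#define ARENA_SIZE (64 * 1024 * 1024)

static char *arena;
static size_t offset;

void *malloc(size_t size)
{
    if (arena == NULL) {
        arena = mmap(NULL, ARENA_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (arena == MAP_FAILED)
            return NULL;
    }
    size = (size + 15) & ~(size_t)15;   /* 16-byte alignment */
    if (offset + size > ARENA_SIZE)
        return NULL;
    void *ret = arena + offset;
    offset += size;
    return ret;
}

void free(void *ptr)
{
    (void)ptr;  /* leak everything; fine for a demonstration */
}

void *calloc(size_t nmemb, size_t size)
{
    /* No overflow check on nmemb * size; a real allocator needs one. */
    void *ret = malloc(nmemb * size);
    if (ret != NULL)
        memset(ret, 0, nmemb * size);
    return ret;
}

void *realloc(void *ptr, size_t size)
{
    /* Without per-allocation size metadata we cannot know the old size;
     * copying `size` bytes is safe here only because the arena is one
     * contiguous readable mapping. */
    void *ret = malloc(size);
    if (ret != NULL && ptr != NULL)
        memcpy(ret, ptr, size);
    return ret;
}
```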
OS X uses the Mach-O format, and in order to completely replace the system malloc, it would be necessary to compile a custom libSystem. As far as I know, that has not been possible outside the confines of Apple since version 10.3 (released in 2003). Even if it were possible, there would be all sorts of undesirable aspects to shipping a custom libSystem with Firefox; libSystem is a huge library, and binary compatibility issues would be a constant problem. So, the only remaining viable option is to subvert the malloc zone machinery. There is no supported method for changing the default zone, and furthermore, CoreFoundation directly accesses the default zone. Enough about that though; suffice it to say that I did find ways to subvert the malloc zone machinery.
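To give a flavor of what such subversion looks like (this is a sketch of the general idea, not the specific approach used in Firefox): the malloc_zone_t structure exposed by <malloc/malloc.h> is essentially a table of function pointers, so one unsupported trick is to overwrite the default zone's entry points in place. The my_malloc/my_free hooks below are hypothetical stand-ins for a real allocator.

```c
/* zone_hijack.c: overwrite the default zone's function pointers so
 * that allocations routed through it, including CoreFoundation's
 * direct uses of the default zone, land in our hooks instead.
 * Unsupported and fragile by nature. */
#include <malloc/malloc.h>

static void *(*real_malloc)(malloc_zone_t *, size_t);
static void (*real_free)(malloc_zone_t *, void *);

static void *my_malloc(malloc_zone_t *zone, size_t size)
{
    /* For the sketch, just forward; a real hook would call into the
     * replacement allocator (e.g. jemalloc) here. */
    return real_malloc(zone, size);
}

static void my_free(malloc_zone_t *zone, void *ptr)
{
    real_free(zone, ptr);
}

void hijack_default_zone(void)
{
    malloc_zone_t *zone = malloc_default_zone();

    /* Save and replace the entry points.  Note: on later OS X releases
     * the zone structure lives on write-protected pages, so the pages
     * would have to be made writable first. */
    real_malloc = zone->malloc;
    real_free = zone->free;
    zone->malloc = my_malloc;
    zone->free = my_free;
}
```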
Once Firefox was successfully using jemalloc for all memory allocation, I started doing performance tests. Memory usage differences were minor, but jemalloc was consistently slower than OS X’s allocator. It took a lot of profiling for me to finally accept the hard truth: jemalloc was spending way too much time manipulating red-black trees. My first experimental solution was to replace red-black trees with treaps. However, this made little overall difference. So, the problem was too many tree operations, not slow tree operations.
After a bit of code review, it became clear that when I fixed a page allocation bottleneck earlier this year, I was overzealous with the application of red-black trees. It is possible to use constant-time algorithms based on linear page map data structures for splitting/coalescing sequential runs of pages, but I had re-coded these operations entirely using red-black trees. So, I enhanced the page map data structures to support splitting/coalescing, and jemalloc became markedly faster. For example, Firefox sped up by as much as ~10% on JavaScript-heavy benchmarks. (As a side benefit, memory usage went down by 1-2%.)
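For concreteness, here is a sketch of the general technique (illustrative code, not jemalloc's): the page map records each free run's length at the run's first and last pages, so freeing a run can find and merge adjacent free runs by inspecting just two map entries. Only boundary entries are ever read, so stale interior entries are harmless.

```c
/* pagemap.c: constant-time run coalescing via a linear page map. */
#include <stdbool.h>
#include <stddef.h>

#define NPAGES 1024

typedef struct {
    bool free;      /* is this page part of a free run? */
    size_t npages;  /* run length; valid at a run's first/last page */
} pagemap_entry_t;

static pagemap_entry_t map[NPAGES];

/* Record pages [first, first + npages) as one free run by writing
 * boundary entries at both ends. */
static void run_insert(size_t first, size_t npages)
{
    map[first].free = true;
    map[first].npages = npages;
    map[first + npages - 1].free = true;
    map[first + npages - 1].npages = npages;
}

/* Mark a run allocated; only the boundary entries matter, since
 * run_free() never consults interior pages. */
void run_mark_allocated(size_t first, size_t npages)
{
    map[first].free = false;
    map[first + npages - 1].free = false;
}

/* Free a run and coalesce with free neighbors in O(1).  In a real
 * allocator, absorbed neighbors would also be unlinked from the
 * size-indexed container used to find runs during allocation. */
void run_free(size_t first, size_t npages)
{
    /* Coalesce backward: the entry just before us is the last page of
     * the preceding run, and it records that run's length. */
    if (first > 0 && map[first - 1].free) {
        size_t prev = map[first - 1].npages;
        first -= prev;
        npages += prev;
    }
    /* Coalesce forward: the entry just after us is the first page of
     * the following run. */
    size_t next = first + npages;
    if (next < NPAGES && map[next].free)
        npages += map[next].npages;
    run_insert(first, npages);
}
```

Splitting is symmetric: writing fresh boundary entries at the split point is O(1). A size-indexed structure is presumably still needed to find a suitable run when allocating, but splits and coalesces no longer touch the trees.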
In essence, my initial failure was to disregard the difference between an O(1) algorithm and an O(lg n) algorithm. Intuitively, I think of logarithmic-time algorithms as fast, but constant factors and large n can conspire to make logarithmic time not nearly good enough. For a tree holding a million items, lg n is about 20, and each of those 20 steps is a dependent pointer chase that likely misses cache, whereas a page map lookup is a couple of array accesses.
Have you taken a look at Vladimir Marangozov’s PyMalloc used in the CPython interpreter?
It’s an optimization for many small objects of similar sizes, a typical memory usage pattern in interpreters. It falls back to the system malloc for anything larger than a few hundred bytes.
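The shape of that design looks roughly like this (illustrative C, not pymalloc's actual code; the 256-byte threshold, the 8-byte size classes, and the caller-supplied size in obj_free are simplifications for the sketch):

```c
#include <stdlib.h>

#define SMALL_THRESHOLD 256   /* "a few hundred bytes" */
#define NCLASSES (SMALL_THRESHOLD / 8)

/* One free list per 8-byte size class; freed objects are threaded
 * through their own first word. */
static void *free_lists[NCLASSES];

void *obj_alloc(size_t size)
{
    if (size == 0 || size > SMALL_THRESHOLD)
        return malloc(size);            /* fall back to system malloc */

    size_t class = (size - 1) / 8;
    if (free_lists[class] != NULL) {
        void *ret = free_lists[class];
        free_lists[class] = *(void **)ret;  /* pop the free list */
        return ret;
    }
    /* Class list empty: carve a fresh object.  A real small-object
     * allocator carves from page-sized pools; plain malloc keeps the
     * sketch short. */
    return malloc((class + 1) * 8);
}

void obj_free(void *ptr, size_t size)
{
    if (size == 0 || size > SMALL_THRESHOLD) {
        free(ptr);
        return;
    }
    size_t class = (size - 1) / 8;
    *(void **)ptr = free_lists[class];  /* push onto the free list */
    free_lists[class] = ptr;
}
```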
You could use mach_override() to override malloc on OS X.
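For context, mach_override is Jonathan Rentzsch's function-patching library: mach_override_ptr() rewrites the prologue of a target function in place and hands back a "reentry island" through which the original can still be called. A minimal sketch of the suggestion (the routing into a replacement allocator is left as a stub):

```c
#include <stdio.h>
#include <stdlib.h>
#include "mach_override.h"

static void *(*orig_malloc)(size_t);

static void *malloc_override(size_t size)
{
    /* A real override would route into the replacement allocator
     * (e.g. jemalloc) here. */
    return orig_malloc(size);
}

void install_malloc_override(void)
{
    mach_error_t err = mach_override_ptr(
        (void *)malloc,
        (void *)malloc_override,
        (void **)&orig_malloc);
    if (err != 0)
        fprintf(stderr, "mach_override_ptr failed: %d\n", (int)err);
}
```

One caveat: CoreFoundation allocates through the zone API (malloc_zone_malloc on the default zone) rather than through malloc() itself, so patching malloc alone would not catch those allocations, which is part of why the zone machinery matters.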