author     Jason Evans <je@fb.com>  2014-05-15 22:22:27 -0700
committer  Jason Evans <je@fb.com>  2014-05-15 22:36:41 -0700
commit     e2deab7a751c8080c2b2cdcfd7b11887332be1bb (patch)
tree       511826b7cb6a2fd926f0e9018ef4c7d76cae6569 /include/jemalloc/internal/chunk.h
parent     fb7fe50a88ca9bde74e9a401ae17ad3b15bbae28 (diff)
download   jemalloc-e2deab7a751c8080c2b2cdcfd7b11887332be1bb.tar.gz
Refactor huge allocation to be managed by arenas.
Refactor huge allocation to be managed by arenas (though the global
red-black tree of huge allocations remains for lookup during
deallocation). This is the logical conclusion of recent changes that 1)
made per arena dss precedence apply to huge allocation, and 2) made it
possible to replace the per arena chunk allocation/deallocation
functions.
Remove the top-level huge stats, and replace them with per-arena huge
stats.
Normalize function names and types to *dalloc* (some were *dealloc*).
Remove the --enable-mremap option. As jemalloc currently operates, this
is a performance regression for some applications, but planned work to
logarithmically space huge size classes should provide similar amortized
performance. The motivation for this change was that mremap-based huge
reallocation forced leaky abstractions that prevented refactoring.
Diffstat (limited to 'include/jemalloc/internal/chunk.h')
-rw-r--r--  include/jemalloc/internal/chunk.h | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/include/jemalloc/internal/chunk.h b/include/jemalloc/internal/chunk.h
index cea0e8a..f3bfbe0 100644
--- a/include/jemalloc/internal/chunk.h
+++ b/include/jemalloc/internal/chunk.h
@@ -43,12 +43,14 @@
 extern size_t	chunk_npages;
 extern size_t	map_bias; /* Number of arena chunk header pages. */
 extern size_t	arena_maxclass; /* Max size class for arenas. */
-void	*chunk_alloc(arena_t *arena, size_t size, size_t alignment, bool base,
-    bool *zero, dss_prec_t dss_prec);
+void	*chunk_alloc_base(size_t size);
+void	*chunk_alloc_arena(chunk_alloc_t *chunk_alloc,
+    chunk_dalloc_t *chunk_dalloc, unsigned arena_ind, size_t size,
+    size_t alignment, bool *zero);
 void	*chunk_alloc_default(size_t size, size_t alignment, bool *zero,
     unsigned arena_ind);
 void	chunk_unmap(void *chunk, size_t size);
-void	chunk_dealloc(arena_t *arena, void *chunk, size_t size, bool unmap);
+bool	chunk_dalloc_default(void *chunk, size_t size, unsigned arena_ind);
 bool	chunk_boot(void);
 void	chunk_prefork(void);
 void	chunk_postfork_parent(void);