path: root/src/arena.c
Age  Commit message  Author
2016-05-04  Fix potential chunk leaks.  [nougat-dev]  Jason Evans
Move chunk_dalloc_arena()'s implementation into chunk_dalloc_wrapper(), so that if the dalloc hook fails, proper decommit/purge/retain cascading occurs. This fixes three potential chunk leaks on OOM paths, one during dss-based chunk allocation, one during chunk header commit (currently relevant only on Windows), and one during rtree write (e.g. if rtree node allocation fails).

Merge chunk_purge_arena() into chunk_purge_default() (refactor, no change to functionality).

Bug: 28590121
(cherry picked from commit 8d8960f635c63b918ac54e0d1005854ed7a2692b)
Change-Id: I70758757b3342e0623918bb56c602873a367a192
2016-03-07  Merge remote-tracking branch 'aosp/upstream-dev' into merge  Christopher Ferris
Bug: 26807329
(cherry picked from commit fb9c9c8d5230956caa48501dad4fde4b90e00319)
Change-Id: I428ae6395d8c00db6baef5313b3dd47b68444bd9
2016-02-01  Merge remote-tracking branch 'aosp/upstream-dev' into merge  Christopher Ferris
Bug: 24264290
2015-11-10  Fast-path improvement: reduce # of branches and unnecessary operations.  Qi Wang
- Combine multiple runtime branches into a single malloc_slow check.
- Avoid calling arena_choose / size2index / index2size on the fast path.
- A few micro optimizations.
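To illustrate the malloc_slow technique described in this commit, here is a minimal standalone sketch (not jemalloc's actual code; all names below are made up for illustration): every condition that forces the slow path is folded into one cached boolean at initialization time, so the common allocation path tests a single well-predicted branch.

    /* Illustrative sketch only -- not jemalloc's fast path. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdlib.h>

    /* Hypothetical option flags that would otherwise each be tested per call. */
    static bool opt_zero, opt_junk, opt_profiling, opt_stats;
    /* Single cached flag: the OR of every condition that forces the slow path. */
    static bool malloc_slow;

    void my_malloc_init(void) {
        malloc_slow = opt_zero || opt_junk || opt_profiling || opt_stats;
    }

    void *my_malloc(size_t size) {
        if (!malloc_slow)
            return malloc(size); /* fast path: one branch, no per-option checks */
        /* slow path: consult the individual options, record stats, etc. */
        return malloc(size);
    }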
2015-11-09  Allow const keys for lookup  Joshua Kahn
Signed-off-by: Steve Dougherty <sdougherty@barracuda.com>
This resolves #281.
2015-11-09  Remove arena_run_dalloc_decommit().  Mike Hommey
This resolves #284.
2015-09-24  Fix a xallocx(..., MALLOCX_ZERO) bug.  Jason Evans
Fix xallocx(..., MALLOCX_ZERO) to zero the last full trailing page of large allocations that have been randomly assigned an offset of 0 when the --enable-cache-oblivious configure option is enabled. This addresses a special case missed in d260f442ce693de4351229027b37b3293fcbfd7d (Fix xallocx(..., MALLOCX_ZERO) bugs.).
2015-09-24  Fix xallocx(..., MALLOCX_ZERO) bugs.  Jason Evans
Zero all trailing bytes of large allocations when the --enable-cache-oblivious configure option is enabled. This regression was introduced by 8a03cf039cd06f9fa6972711195055d865673966 (Implement cache index randomization for large allocations.).

Zero trailing bytes of huge allocations when resizing from/to a size class that is not a multiple of the chunk size.
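For context, a minimal usage sketch of the interface these two fixes concern (jemalloc's non-standard xallocx(); error handling omitted): with MALLOCX_ZERO, any usable bytes gained by the in-place resize are expected to come back zeroed, including the trailing bytes these bugs left untouched.

    #include <jemalloc/jemalloc.h>

    /* Try to grow p in place, asking for newly usable bytes to be zeroed. */
    size_t grow_zeroed(void *p, size_t target) {
        /* xallocx() resizes in place only and returns the real usable size,
         * so a result smaller than target means the resize was not possible. */
        return xallocx(p, target, 0, MALLOCX_ZERO);
    }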
2015-09-20  Make arena_dalloc_large_locked_impl() static.  Jason Evans
2015-09-15  Centralize xallocx() size[+extra] overflow checks.  Jason Evans
2015-09-14  Regenerate files and re-add android changes.  Christopher Ferris
Update to the jemalloc top of tree and regenerate all of the files using configure. Also, re-add all of the small android changes. In addition, add a new define to allow the chunk size to be changed easily. Use this define to set the chunk size to 512K for the 32 bit library, and to 2MB for the 64 bit library.

Bug: 23633724
Change-Id: I9daef0428e6c22e56eb7b05089f9a3f6e2f86d82
2015-09-11  Rename arena_maxclass to large_maxclass.  Jason Evans
arena_maxclass is no longer an appropriate name, because arenas also manage huge allocations.
2015-09-11  Fix xallocx() bugs.  Jason Evans
Fix xallocx() bugs related to the 'extra' parameter when specified as non-zero.
2015-09-04  Reduce variables scope  Dmitry-Me
2015-08-19  Rename index_t to szind_t to avoid an existing type on Solaris.  Jason Evans
This resolves #256.
2015-08-19  Don't bitshift by negative amounts.  Jason Evans
Don't bitshift by negative amounts when encoding/decoding run sizes in chunk header maps. This affected systems with page sizes greater than 8 KiB. Reported by Ingvar Hagelund <ingvar@redpill-linpro.com>.
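As a generic illustration of the underlying C rule (not the jemalloc code itself): shifting by a negative amount is undefined behavior, so encode/decode logic whose shift count can go negative has to branch on the sign and shift the other way.

    #include <stdint.h>

    /* Shift v left by 'shift' bits, where 'shift' may be negative.
     * (v << shift) with shift < 0 is undefined behavior in C, so branch on sign. */
    uintmax_t shift_signed(uintmax_t v, int shift) {
        return (shift >= 0) ? (v << shift) : (v >> -shift);
    }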
2015-08-11  Refactor arena_mapbits_{small,large}_set() to not preserve unzeroed.  Jason Evans
Fix arena_run_split_large_helper() to treat newly committed memory as zeroed.
2015-08-10  Refactor arena_mapbits unzeroed flag management.  Jason Evans
Only set the unzeroed flag when initializing the entire mapbits entry, rather than mutating just the unzeroed bit. This simplifies the possible mapbits state transitions.
2015-08-10  Arena chunk decommit cleanups and fixes.  Jason Evans
Decommit arena chunk header during chunk deallocation if the rest of the chunk is decommitted.
2015-08-07  Implement chunk hook support for page run commit/decommit.  Jason Evans
Cascade from decommit to purge when purging unused dirty pages, so that it is possible to decommit cleaned memory rather than just purging. For non-Windows debug builds, decommit runs rather than purging them, so that accesses to deallocated runs segfault.

This resolves #251.
2015-08-06  Fix an in-place growing large reallocation regression.  Jason Evans
Fix arena_ralloc_large_grow() to properly account for large_pad, so that in-place large reallocation succeeds when possible, rather than always failing. This regression was introduced by 8a03cf039cd06f9fa6972711195055d865673966 (Implement cache index randomization for large allocations.).
2015-08-03  Generalize chunk management hooks.  Jason Evans
Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on the "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls. The chunk hooks allow control over chunk allocation/deallocation, decommit/commit, purging, and splitting/merging, such that the application can rely on jemalloc's internal chunk caching and retaining functionality, yet implement a variety of chunk management mechanisms and policies.

Merge the chunks_[sz]ad_{mmap,dss} red-black trees into chunks_[sz]ad_retained. This slightly reduces how hard jemalloc tries to honor the dss precedence setting; prior to this change the precedence setting was also consulted when recycling chunks.

Fix chunk purging. Don't purge chunks in arena_purge_stashed(); instead deallocate them in arena_unstash_purged(), so that the dirty memory linkage remains valid until after the last time it is used.

This resolves #176 and #201.
2015-07-23  Change arena_palloc_large() parameter from size to usize.  Jason Evans
This change merely documents that arena_palloc_large() always receives usize as its argument.
2015-07-23  Fix MinGW-related portability issues.  Jason Evans
Create and use FMT* macros that are equivalent to the PRI* macros that inttypes.h defines. This allows uniform use of the Unix-specific format specifiers, e.g. "%zu", as well as avoiding Windows-specific definitions of e.g. PRIu64.

Add ffs()/ffsl() support for compiling with gcc.

Extract compatibility definitions of ENOENT, EINVAL, EAGAIN, EPERM, ENOMEM, and ENORANGE into include/msvc_compat/windows_extra.h and use the file for tests as well as for core jemalloc code.
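For background on why such macros help, here is a standalone sketch of the technique (the FMT* names below are placeholders defined locally for the example; jemalloc's real definitions live in its internal headers and may differ): pick the right length modifier per platform once, so call sites can use one spelling everywhere.

    #include <stdint.h>
    #include <stdio.h>

    /* Placeholder format-specifier macros in the spirit of the PRI* and FMT* families. */
    #ifdef _WIN32
    #  define FMTu64 "I64u"   /* msvcrt printf does not understand "llu"/"zu" */
    #  define FMTzu  "Iu"
    #else
    #  define FMTu64 "llu"
    #  define FMTzu  "zu"
    #endif

    int main(void) {
        uint64_t requests = 42;
        size_t nbytes = 4096;
        printf("requests: %" FMTu64 ", bytes: %" FMTzu "\n",
            (unsigned long long)requests, nbytes);
        return 0;
    }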
2015-07-15  Revert to first-best-fit run/chunk allocation.  Jason Evans
This effectively reverts 97c04a93838c4001688fe31bf018972b4696efe2 (Use first-fit rather than first-best-fit run/chunk allocation.). In some pathological cases, first-fit search dominates allocation time, and it also tends not to converge as readily on a steady state of memory layout, since precise allocation order has a bigger effect than for first-best-fit.
2015-07-07  Fix MinGW build warnings.  Jason Evans
Conditionally define ENOENT, EINVAL, etc. (was unconditional). Add/use PRIzu, PRIzd, and PRIzx for use in malloc_printf() calls. gcc issued (harmless) warnings since e.g. "%zu" should be "%Iu" on Windows, and the alternative to this workaround would have been to disable the function attributes which cause gcc to look for type mismatches in formatted printing function calls.
2015-07-07  Move a variable declaration closer to its use.  Jason Evans
2015-06-22  Convert arena_maybe_purge() recursion to iteration.  Jason Evans
This resolves #235.
2015-05-19  Fix performance regression in arena_palloc().  Jason Evans
Pass large allocation requests to arena_malloc() when possible. This regression was introduced by 155bfa7da18cab0d21d87aa2dce4554166836f5d (Normalize size classes.).
2015-05-06  Implement cache index randomization for large allocations.  Jason Evans
Extract szad size quantization into {extent,run}_quantize(), and quantize szad run sizes to the union of valid small region run sizes and large run sizes. Refactor iteration in arena_run_first_fit() to use run_quantize{,_first,_next}(), and add support for padded large runs.

For large allocations that have no specified alignment constraints, compute a pseudo-random offset from the beginning of the first backing page that is a multiple of the cache line size. Under typical configurations with 4-KiB pages and 64-byte cache lines this results in a uniform distribution among 64 page boundary offsets.

Add the --disable-cache-oblivious option, primarily intended for performance testing.

This resolves #13.
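A small arithmetic sketch of the randomization just described, using the 4 KiB page and 64-byte cache line figures from the message (illustrative only; the constants and PRNG here are stand-ins, not jemalloc's):

    #include <stddef.h>
    #include <stdlib.h>

    #define PAGE      4096u  /* typical page size cited above */
    #define CACHELINE   64u  /* typical cache line size */

    /* Choose a random offset into the first backing page that is a multiple of
     * the cache line size: PAGE / CACHELINE = 64 possible offsets, uniformly
     * distributed, so large allocations stop all starting at cache index 0. */
    size_t random_cache_offset(void) {
        return ((size_t)rand() % (PAGE / CACHELINE)) * CACHELINE;
    }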
2015-03-25  Fix in-place shrinking huge reallocation purging bugs.  Jason Evans
Fix the shrinking case of huge_ralloc_no_move_similar() to purge the correct number of pages, at the correct offset. This regression was introduced by 8d6a3e8321a7767cb2ca0930b85d5d488a8cc659 (Implement dynamic per arena control over dirty page purging.). Fix huge_ralloc_no_move_shrink() to purge the correct number of pages. This bug was introduced by 9673983443a0782d975fbcb5d8457cfd411b8b56 (Purge/zero sub-chunk huge allocations as necessary.).
2015-03-24  Add the "stats.arenas.<i>.lg_dirty_mult" mallctl.  Jason Evans
2015-03-24  Fix signed/unsigned comparison in arena_lg_dirty_mult_valid().  Jason Evans
2015-03-18  Implement dynamic per arena control over dirty page purging.  Jason Evans
Add mallctls:
- arenas.lg_dirty_mult is initialized via opt.lg_dirty_mult, and can be modified to change the initial lg_dirty_mult setting for newly created arenas.
- arena.<i>.lg_dirty_mult controls an individual arena's dirty page purging threshold, and synchronously triggers any purging that may be necessary to maintain the constraint.
- arena.<i>.chunk.purge allows the per arena dirty page purging function to be replaced.

This resolves #93.
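A minimal sketch of driving these mallctls from application code, assuming the jemalloc 4.x mallctl interface and an ssize_t value type for lg_dirty_mult (with -1 disabling purging), as documented in that era; verify against your version:

    #include <stddef.h>
    #include <sys/types.h>
    #include <jemalloc/jemalloc.h>

    int tune_purging(void) {
        ssize_t lg_dirty_mult = 5;   /* allow dirty pages up to active/32 */
        size_t sz = sizeof(lg_dirty_mult);

        /* Default for arenas created from now on. */
        if (mallctl("arenas.lg_dirty_mult", NULL, NULL, &lg_dirty_mult, sz) != 0)
            return -1;
        /* Apply to arena 0 immediately; this synchronously purges if needed. */
        return mallctl("arena.0.lg_dirty_mult", NULL, NULL, &lg_dirty_mult, sz);
    }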
2015-03-11  Fix a declaration-after-statement regression.  Jason Evans
2015-03-10  Normalize rdelm/rd structure field naming.  Jason Evans
2015-03-10  Refactor dirty run linkage to reduce sizeof(extent_node_t).  Jason Evans
2015-03-06  Use first-fit rather than first-best-fit run/chunk allocation.  Jason Evans
This tends to more effectively pack active memory toward low addresses. However, additional tree searches are required in many cases, so whether this change stands the test of time will depend on real-world benchmarks.
2015-03-06  Quantize szad trees by size class.  Jason Evans
Treat sizes that round down to the same size class as size-equivalent in trees that are used to search for first best fit, so that there are only as many "firsts" as there are size classes. This comes closer to the ideal of first fit.
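A toy illustration of the quantization idea (a standalone sketch with made-up power-of-two classes, not jemalloc's size2index/index2size machinery): quantize the size key before comparing, then break ties by address, so every size in a class shares the same "first" (lowest-address) run.

    #include <stddef.h>
    #include <stdint.h>

    /* Toy quantization: round down to a power of two (not real size classes). */
    size_t toy_quantize(size_t size) {
        size_t q = 1;
        while ((q << 1) <= size)
            q <<= 1;
        return q;
    }

    /* szad-style comparator: quantized size first, then address, so runs whose
     * sizes fall in the same class tie on size and order purely by address. */
    int toy_szad_cmp(size_t sz_a, void *addr_a, size_t sz_b, void *addr_b) {
        size_t qa = toy_quantize(sz_a), qb = toy_quantize(sz_b);
        if (qa != qb)
            return (qa < qb) ? -1 : 1;
        uintptr_t a = (uintptr_t)addr_a, b = (uintptr_t)addr_b;
        return (a < b) ? -1 : (a > b) ? 1 : 0;
    }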
2015-02-18  Fix chunk cache races.  Jason Evans
These regressions were introduced by ee41ad409a43d12900a5a3108f6c14f84e4eb0eb (Integrate whole chunks into unused dirty page purging machinery.).
2015-02-18  Rename "dirty chunks" to "cached chunks".  Jason Evans
Rename "dirty chunks" to "cached chunks", in order to avoid overloading the term "dirty".

Fix the regression caused by 339c2b23b2d61993ac768afcc72af135662c6771 (Fix chunk_unmap() to propagate dirty state.), and actually address what that change attempted: only purge chunks once, and propagate to chunk_record() whether purging resulted in zeroed pages.
2015-02-17  Fix chunk_unmap() to propagate dirty state.  Jason Evans
Fix chunk_unmap() to propagate whether a chunk is dirty, and modify dirty chunk purging to record this information so it can be passed to chunk_unmap(). Since the broken version of chunk_unmap() claimed that all chunks were clean, this resulted in potential memory corruption for purging implementations that do not zero (e.g. MADV_FREE). This regression was introduced by ee41ad409a43d12900a5a3108f6c14f84e4eb0eb (Integrate whole chunks into unused dirty page purging machinery.).
2015-02-17  arena_chunk_dirty_node_init() --> extent_node_dirty_linkage_init()  Jason Evans
2015-02-17  Simplify extent_node_t and add extent_node_init().  Jason Evans
2015-02-16  Integrate whole chunks into unused dirty page purging machinery.  Jason Evans
Extend per arena unused dirty page purging to manage unused dirty chunks in addition to unused dirty runs. Rather than immediately unmapping deallocated chunks (or purging them in the --disable-munmap case), store them in a separate set of trees, chunks_[sz]ad_dirty. Preferentially allocate dirty chunks. When excessive unused dirty pages accumulate, purge runs and chunks in integrated LRU order (and unmap chunks in the --enable-munmap case).

Refactor extent_node_t to provide accessor functions.
2015-02-15  Normalize *_link and link_* fields to all be *_link.  Jason Evans
2015-02-12  Refactor huge_*() calls into arena internals.  Jason Evans
Make redirects to the huge_*() API the arena code's responsibility, since arenas now take responsibility for all allocation sizes.
2015-02-12  Move centralized chunk management into arenas.  Jason Evans
Migrate all centralized data structures related to huge allocations and recyclable chunks into arena_t, so that each arena can manage huge allocations and recyclable virtual memory completely independently of other arenas.

Add chunk node caching to arenas, in order to avoid contention on the base allocator.

Use chunks_rtree to look up huge allocations rather than a red-black tree. Maintain a per arena unsorted list of huge allocations (which will be needed to enumerate huge allocations during arena reset).

Remove the --enable-ivsalloc option, make ivsalloc() always available, and use it for size queries if --enable-debug is enabled. The only practical implications of this removal are that 1) ivsalloc() is now always available during live debugging (and the underlying radix tree is available during core-based debugging), and 2) size query validation can no longer be enabled independent of --enable-debug.

Remove the stats.chunks.{current,total,high} mallctls, and replace their underlying statistics with simpler atomically updated counters used exclusively for gdump triggering. These statistics are no longer very useful because each arena manages chunks independently, and per arena statistics provide similar information.

Simplify chunk synchronization code, now that base chunk allocation cannot cause recursive lock acquisition.
2015-02-09  Implement explicit tcache support.  Jason Evans
Add the MALLOCX_TCACHE() and MALLOCX_TCACHE_NONE macros, which can be used in conjunction with the *allocx() API. Add the tcache.create, tcache.flush, and tcache.destroy mallctls. This resolves #145.
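A brief usage sketch of the explicit tcache interface added here (jemalloc 4.x *allocx API; error handling abbreviated):

    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    int demo_explicit_tcache(void) {
        unsigned tc;
        size_t sz = sizeof(tc);

        /* Create an explicit thread-specific cache; its index comes back in tc. */
        if (mallctl("tcache.create", &tc, &sz, NULL, 0) != 0)
            return -1;

        /* Route allocations through that cache... */
        void *p = mallocx(4096, MALLOCX_TCACHE(tc));
        if (p != NULL)
            dallocx(p, MALLOCX_TCACHE(tc));

        /* ...or bypass thread caching entirely for a one-off allocation. */
        void *q = mallocx(64, MALLOCX_TCACHE_NONE);
        if (q != NULL)
            dallocx(q, MALLOCX_TCACHE_NONE);

        /* Flush and discard the cache when done with it. */
        return mallctl("tcache.destroy", NULL, NULL, &tc, sizeof(tc));
    }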
2015-02-04  Make opt.lg_dirty_mult work as documented  Mike Hommey
The documentation for opt.lg_dirty_mult says:

    Per-arena minimum ratio (log base 2) of active to dirty pages. Some dirty unused pages may be allowed to accumulate, within the limit set by the ratio (or one chunk worth of dirty pages, whichever is greater) (...)

The restriction in parentheses currently doesn't happen. This makes jemalloc madvise() aggressively, which in turn increases the number of page faults significantly. For instance, this resulted in a several(!) hundred(!) millisecond startup regression on Firefox for Android.

This may require further tweaking, but starting with actually doing what the documentation says is a good start.
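A worked example of the documented limit, with illustrative numbers: at lg_dirty_mult = 3 and 80,000 active pages, up to max(80,000 >> 3, one chunk worth of pages) = 10,000 dirty pages may accumulate before purging starts. The sketch below spells out that check (a rough paraphrase of the documented rule, not jemalloc's internal code):

    #include <stdbool.h>
    #include <stddef.h>
    #include <sys/types.h>

    /* Purge once dirty pages exceed max(active >> lg_dirty_mult, chunk_npages). */
    bool should_purge(size_t nactive, size_t ndirty, size_t chunk_npages,
        ssize_t lg_dirty_mult) {
        if (lg_dirty_mult < 0)
            return false;   /* purging disabled */
        size_t threshold = nactive >> lg_dirty_mult;
        if (threshold < chunk_npages)
            threshold = chunk_npages;
        return ndirty > threshold;
    }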