Age | Commit message | Author |
|
Fix a latent valgrind bug exposed by d412624b25eed2b5c52b7d94a71070d3aab03cb4
(Move retaining out of default chunk hooks).
|
|
When tsd is not in nominal state (e.g. during thread termination), we
should not increment nthreads.
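The guard can be sketched as follows; the state enum, struct, and function names are illustrative stand-ins for jemalloc's internals, not its exact API:

```c
#include <assert.h>

/* Illustrative mock of the tsd state machine. */
typedef enum {
    TSD_STATE_UNINITIALIZED,
    TSD_STATE_NOMINAL,      /* fully initialized, normal operation */
    TSD_STATE_PURGATORY     /* thread termination in progress */
} tsd_state_t;

typedef struct { tsd_state_t state; } tsd_t;

static unsigned arena_nthreads = 0;

static void
arena_bind(tsd_t *tsd)
{
    /* Skip the counter when tsd is not nominal (e.g. during thread
     * termination), per the fix described above. */
    if (tsd->state != TSD_STATE_NOMINAL)
        return;
    arena_nthreads++;
}
```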
|
|
Bug: 29881356
Bug: https://github.com/jemalloc/jemalloc/issues/399
Test: device boots again
(cherry picked from commit 54374abfaef24d7d74083f0d57b81d39db36ce5b)
Change-Id: Ia8cbd53eaa13d14f1742853fcf3066c96f7d9e45
|
|
Bug: 28860984
Change-Id: If12daed270ec0a85cd151aaaa432d178c8389757
|
|
Bug: 28860984
Change-Id: I9eaf67f53f9872177d068d660d1e051cecdc82a0
|
|
rallocx() for an alignment-constrained request may end up with a
smaller-than-worst-case size if in-place reallocation succeeds due to
serendipitous alignment. In such cases, sampling may not happen.
|
|
Fix huge_ralloc_no_move_expand() to update the extent's zeroed attribute
based on the intersection of the previous value and that of the newly
merged trailing extent.
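The intersection reduces to a logical AND of the two flags; a minimal illustrative helper (not jemalloc's actual code):

```c
#include <assert.h>
#include <stdbool.h>

/* The merged extent is zeroed only if both the original extent and the
 * newly merged trailing extent were zeroed. */
static bool
merged_zeroed(bool head_zeroed, bool trail_zeroed)
{
    return head_zeroed && trail_zeroed;
}
```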
|
|
This regression was caused by d412624b25eed2b5c52b7d94a71070d3aab03cb4
(Move retaining out of default chunk hooks).
|
|
This regression was caused by 3ef51d7f733ac6432e80fa902a779ab5b98d74f6
(Optimize the fast paths of calloc() and [m,d,sd]allocx().).
|
|
Revert 245ae6036c09cc11a72fab4335495d95cddd5beb (Support --with-lg-page
values larger than actual page size.), because it could cause VM map
fragmentation if the kernel grows mmap()ed memory downward.
This resolves #391.
|
|
Fix mixed decl in the gettimeofday() branch of nstime_update()
|
|
This avoids bootstrapping issues for configurations that require
allocation during tsd initialization.
This resolves #390.
|
|
Short-circuit commonly called witness functions so that they only
execute in debug builds, and remove equivalent guards from mutex
functions. This avoids pointless code execution in
witness_assert_lockless(), which is typically called twice per
allocation/deallocation function invocation.
Inline commonly called witness functions so that optimized builds can
completely remove calls as dead code.
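A sketch of the short-circuit pattern, assuming a compile-time config_debug constant as jemalloc uses; the witness internals are mocked:

```c
#include <assert.h>
#include <stdbool.h>

static const bool config_debug = false;  /* true in debug builds */

static unsigned witness_checks = 0;      /* stands in for the real work */

static inline void
witness_assert_lockless(void)
{
    /* In optimized builds config_debug is a compile-time constant false,
     * so the compiler can eliminate the entire call as dead code. */
    if (!config_debug)
        return;
    witness_checks++;  /* ... walk the owned-witness list, assert empty ... */
}
```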
|
|
Fix in-place huge reallocation to update the chunk counters that are
used for triggering gdump profiles.
|
|
b2c0d6322d2307458ae2b28545f8a5c9903d7ef5 (Add witness, a simple online
locking validator.) caused a broad propagation of tsd throughout the
internal API, but tsd_fetch() was designed to fail prior to tsd
bootstrapping. Fix this by splitting tsd_t into non-nullable tsd_t and
nullable tsdn_t, and modifying all internal APIs that do not critically
rely on tsd to take nullable pointers. Furthermore, add the
tsd_booted_get() function so that tsdn_fetch() can probe whether tsd
bootstrapping is complete and return NULL if not. All dangerous
conversions of nullable pointers are tsdn_tsd() calls that assert-fail
on invalid conversion.
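A minimal mock of the split; the real types and bootstrapping state machine are more involved:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct { int dummy; } tsd_t;
typedef tsd_t tsdn_t;              /* nullable alias for tsd_t */

static bool tsd_booted = false;
static tsd_t global_tsd;

static bool tsd_booted_get(void) { return tsd_booted; }

/* Nullable fetch: returns NULL before tsd bootstrapping completes. */
static tsdn_t *
tsdn_fetch(void)
{
    if (!tsd_booted_get())
        return NULL;
    return &global_tsd;
}

/* The only dangerous conversion: assert-fails on a NULL tsdn. */
static tsd_t *
tsdn_tsd(tsdn_t *tsdn)
{
    assert(tsdn != NULL);
    return tsdn;
}
```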
|
|
This is a broader application of optimizations to malloc() and free() in
f4a0f32d340985de477bbe329ecdaecd69ed1055 (Fast-path improvement:
reduce # of branches and unnecessary operations.).
This resolves #321.
|
|
If the OS overcommits:
- Commit all mappings in pages_map() regardless of whether the caller
requested committed memory.
- Linux-specific: Specify MAP_NORESERVE to avoid
unfortunate interactions with heuristic overcommit mode during
fork(2).
This resolves #193.
|
|
Move chunk_dalloc_arena()'s implementation into chunk_dalloc_wrapper(),
so that if the dalloc hook fails, proper decommit/purge/retain cascading
occurs. This fixes three potential chunk leaks on OOM paths, one during
dss-based chunk allocation, one during chunk header commit (currently
relevant only on Windows), and one during rtree write (e.g. if rtree
node allocation fails).
Merge chunk_purge_arena() into chunk_purge_default() (refactor, no
change to functionality).
Bug: 28590121
(cherry picked from commit 1ae9287a1aec534fa0a805a717f1c4e058ae8433)
Change-Id: Ia21292ab25c65bb7a8aa44077c86545789a9e786
|
|
This makes the numbers reported in the leak report summary closely match
those reported by jeprof.
This resolves #356.
|
|
This resolves #367.
|
|
Split arena_choose() into arena_[i]choose() and use arena_ichoose() for
arena lookup during internal allocation. This fixes huge_palloc() so
that it always succeeds during extent node allocation.
This regression was introduced by
66cd953514a18477eb49732e40d5c2ab5f1b12c5 (Do not allocate metadata via
non-auto arenas, nor tcaches.).
|
|
Fix witness to clear its list of owned mutexes in the child if
platform-specific malloc_mutex code re-initializes mutexes rather than
unlocking them.
|
|
Reset large curruns to 0 during arena reset.
Do not increase huge ndalloc stats during arena reset.
|
|
This regression was caused by 66cd953514a18477eb49732e40d5c2ab5f1b12c5
(Do not allocate metadata via non-auto arenas, nor tcaches.).
|
|
This makes it possible to discard all of an arena's allocations in a
single operation.
This resolves #146.
|
|
This ensures that all internally allocated metadata come from the
first opt_narenas arenas, i.e. the automatically multiplexed arenas.
|
|
Huge allocations may have a size that is not a multiple of chunksize.
When stepping through chunks, round up the huge allocation size to the
next multiple of chunksize.
Change-Id: I417fdfb840559c2b90c97b0ade7670aa2d181de4
Fixes: 28303511
(cherry picked from commit 1e14731d7182460d082d128660d419b26b9c6c39)
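The power-of-two round-up is the usual mask trick, in the spirit of jemalloc's CHUNK_CEILING; a 2 MiB chunk size is assumed purely for illustration:

```c
#include <assert.h>
#include <stddef.h>

#define EXAMPLE_LG_CHUNK 21                      /* assumed 2 MiB chunks */
#define EXAMPLE_CHUNKSIZE ((size_t)1 << EXAMPLE_LG_CHUNK)

/* Round s up to the next multiple of the (power-of-two) chunk size. */
#define EXAMPLE_CHUNK_CEILING(s) \
    (((s) + EXAMPLE_CHUNKSIZE - 1) & ~(EXAMPLE_CHUNKSIZE - 1))
```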
|
|
Change test-related mangling to simplify symbol filtering.
The following commands can be used to detect missing/obsolete symbol
mangling, with the caveat that the full set of symbols is based on the
union of symbols generated by all configurations, some of which are
platform-specific:
./autogen.sh --enable-debug --enable-prof --enable-lazy-lock
make all tests
nm -a lib/libjemalloc.a src/*.jet.o \
|grep " [TDBCR] " \
|awk '{print $3}' \
|sed -e 's/^\(je_\|jet_\(n_\)\?\)\([a-zA-Z0-9_]*\)/\3/g' \
|LC_COLLATE=C sort -u \
|grep -v \
-e '^\(malloc\|calloc\|posix_memalign\|aligned_alloc\|realloc\|free\)$' \
-e '^\(m\|r\|x\|s\|d\|sd\|n\)allocx$' \
-e '^mallctl\(\|nametomib\|bymib\)$' \
-e '^malloc_\(stats_print\|usable_size\|message\)$' \
-e '^\(memalign\|valloc\)$' \
-e '^__\(malloc\|memalign\|realloc\|free\)_hook$' \
-e '^pthread_create$' \
> /tmp/private_symbols.txt
|
|
Also remove tautological cassert(config_debug) calls.
|
|
This resolves #358.
|
|
This regression was caused by 8f683b94a751c65af8f9fa25970ccf2917b96bb8
(Make opt_narenas unsigned rather than size_t.).
|
|
During over-allocation in preparation for creating aligned mappings,
allocate one more page than would be necessary if PAGE were the actual
page size, so that trimming still succeeds even if the system returns a
mapping that has less than PAGE alignment. This allows compiling with
e.g. 64 KiB "pages" on systems that actually use 4 KiB pages.
Note that for e.g. --with-lg-page=21, it is also necessary to increase
the chunk size (e.g. --with-malloc-conf=lg_chunk:22) so that there are
at least two "pages" per chunk. In practice this isn't a particularly
compelling configuration because so much (unusable) virtual memory is
dedicated to chunk headers.
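The worst case can be checked arithmetically. As an illustration (assuming a configured 64 KiB PAGE on a system whose real page size is 4 KiB; the helper is hypothetical, not jemalloc's code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define OS_PAGE ((size_t)4096)    /* assumed real page size */
#define EX_PAGE ((size_t)65536)   /* assumed configured PAGE */

/* Bytes consumed by aligning `base` up to `alignment`, plus the payload. */
static size_t
trim_need(uintptr_t base, size_t alignment, size_t size)
{
    uintptr_t aligned = (base + alignment - 1) & ~(uintptr_t)(alignment - 1);
    return (size_t)(aligned - base) + size;
}
```

With a mapping aligned only to OS_PAGE, the old over-allocation of size + alignment - PAGE can fall short, while adding one extra PAGE always suffices.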
|
|
Refactor ph to support configurable comparison functions. Use a cpp
macro code generation form equivalent to the rb macros so that pairing
heaps can be used for both run heaps and chunk heaps.
Remove per node parent pointers, and instead use leftmost siblings' prev
pointers to track parents.
Fix multi-pass sibling merging to iterate over intermediate results
using a FIFO, rather than a LIFO. Use this fixed sibling merging
implementation for both merge phases of the auxiliary twopass algorithm
(first merging the aux list, then replacing the root with its merged
children). This fixes both degenerate merge behavior and the potential
for deep recursion.
This regression was introduced by
6bafa6678fc36483e638f1c3a0a9bf79fb89bfc9 (Pairing heap).
This resolves #371.
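A toy model of the FIFO multi-pass merging: merge adjacent siblings left to right and append each result to the tail of the queue, so every element participates in a logarithmic number of merges rather than a degenerate chain. Here min() stands in for the real pairing-heap merge:

```c
#include <assert.h>
#include <stddef.h>

static int
merge(int a, int b)
{
    return a < b ? a : b;   /* placeholder for pairing-heap merge */
}

/* q must have capacity cap >= 2*n - 1: each merge appends one result. */
static int
fifo_merge_all(int *q, size_t n, size_t cap)
{
    size_t head = 0, tail = n;
    (void)cap;
    while (tail - head > 1) {
        q[tail++] = merge(q[head], q[head + 1]);  /* FIFO: append to tail */
        head += 2;
    }
    return q[head];
}
```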
|
|
Replace hardcoded 0xa5 and 0x5a junk values with JEMALLOC_ALLOC_JUNK and
JEMALLOC_FREE_JUNK macros, respectively.
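The two named patterns, with a simplified illustrative fill helper (jemalloc's actual junk filling goes through dedicated arena/tcache paths):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define JEMALLOC_ALLOC_JUNK ((uint8_t)0xa5)   /* freshly allocated memory */
#define JEMALLOC_FREE_JUNK  ((uint8_t)0x5a)   /* freed memory */

static void
junk_fill(void *ptr, size_t size, uint8_t junk)
{
    memset(ptr, junk, size);
}
```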
|
|
Move chunk_dalloc_arena()'s implementation into chunk_dalloc_wrapper(),
so that if the dalloc hook fails, proper decommit/purge/retain cascading
occurs. This fixes three potential chunk leaks on OOM paths, one during
dss-based chunk allocation, one during chunk header commit (currently
relevant only on Windows), and one during rtree write (e.g. if rtree
node allocation fails).
Merge chunk_purge_arena() into chunk_purge_default() (refactor, no
change to functionality).
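The cascade can be sketched as below. The hook names mirror jemalloc's chunk hooks, but the signatures are simplified and hypothetical; as in the real hooks, dalloc/decommit returning true means the operation was declined or failed:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    bool (*dalloc)(void *chunk, size_t size, bool committed);
    bool (*decommit)(void *chunk, size_t size);
    bool (*purge)(void *chunk, size_t size);
} chunk_hooks_sketch_t;

static unsigned retained_chunks = 0;

static void
chunk_dalloc_wrapper_sketch(chunk_hooks_sketch_t *hooks, void *chunk,
    size_t size, bool committed)
{
    if (!hooks->dalloc(chunk, size, committed))
        return;  /* hook fully deallocated the chunk */
    /* Hook declined: cascade through decommit and purge, then retain
     * the chunk for later reuse instead of leaking it. */
    if (committed && !hooks->decommit(chunk, size))
        committed = false;
    if (committed)
        (void)hooks->purge(chunk, size);  /* best effort */
    retained_chunks++;
}

/* Example hook set where the dalloc hook opts out and decommit fails. */
static bool ex_dalloc(void *c, size_t s, bool m) { (void)c; (void)s; (void)m; return true; }
static bool ex_decommit(void *c, size_t s) { (void)c; (void)s; return true; }
static bool ex_purge(void *c, size_t s) { (void)c; (void)s; return false; }
```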
|
|
Variables s and slen are declared inside a switch statement, but outside
a case scope. clang reports these variable definitions as "unreachable",
though this is not really meaningful in this case. This is the only
-Wunreachable-code warning in jemalloc.
src/util.c:501:5 [-Wunreachable-code] code will never be executed
This resolves #364.
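The shape of the fix (illustrative, not the actual util.c code): hoist the declarations out of the switch body so clang no longer flags them under -Wunreachable-code:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

static size_t
fmt_string(int spec, const char *arg, const char **out)
{
    const char *s = "";    /* previously declared inside the switch,
                            * outside any case scope */
    size_t slen = 0;
    switch (spec) {
    case 's':
        s = arg;
        slen = strlen(arg);
        break;
    default:
        break;
    }
    *out = s;
    return slen;
}
```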
|
|
Specialize fast path to avoid code that cannot execute for dependent
loads.
Manually unroll.
|
|