LCOV - code coverage report
Current view: top level - js/src - jsgc.cpp (source / functions)
Test: output.info
Date: 2017-07-14 16:53:18
Coverage: Lines: 1045 / 3836 (27.2 %)   Functions: 159 / 557 (28.5 %)
Legend: Lines: hit | not hit

          Line data    Source code
       1             : /* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 4 -*-
       2             :  * vim: set ts=8 sts=4 et sw=4 tw=99:
       3             :  * This Source Code Form is subject to the terms of the Mozilla Public
       4             :  * License, v. 2.0. If a copy of the MPL was not distributed with this
       5             :  * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
       6             : 
       7             : /*
       8             :  * This code implements an incremental mark-and-sweep garbage collector, with
       9             :  * most sweeping carried out in the background on a parallel thread.
      10             :  *
      11             :  * Full vs. zone GC
      12             :  * ----------------
      13             :  *
      14             :  * The collector can collect all zones at once, or a subset. These types of
      15             :  * collection are referred to as a full GC and a zone GC respectively.
      16             :  *
      17             :  * It is possible for an incremental collection that started out as a full GC to
      18             :  * become a zone GC if new zones are created during the course of the
      19             :  * collection.
      20             :  *
      21             :  * Incremental collection
      22             :  * ----------------------
      23             :  *
      24             :  * For a collection to be carried out incrementally the following conditions
      25             :  * must be met:
      26             :  *  - the collection must be run by calling js::GCSlice() rather than js::GC()
      27             :  *  - the GC mode must have been set to JSGC_MODE_INCREMENTAL with
      28             :  *    JS_SetGCParameter()
      29             :  *  - no thread may have an AutoKeepAtoms instance on the stack
      30             :  *
      31             :  * The last condition is an engine-internal mechanism to ensure that incremental
      32             :  * collection is not carried out without the correct barriers being implemented.
      33             :  * For more information see 'Incremental marking' below.
      34             :  *
      35             :  * If the collection is not incremental, all foreground activity happens inside
       36             :  * a single call to GC() or GCSlice(). However, the collection is not complete
      37             :  * until the background sweeping activity has finished.
      38             :  *
      39             :  * An incremental collection proceeds as a series of slices, interleaved with
      40             :  * mutator activity, i.e. running JavaScript code. Slices are limited by a time
      41             :  * budget. The slice finishes as soon as possible after the requested time has
      42             :  * passed.
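                      :  *
                      :  * For example, a budgeted collection driven this way looks roughly like the
                      :  * following (a sketch using only the entry points named above; the exact
                      :  * signatures are defined elsewhere in the engine):
                      :  *
                      :  *   JS_SetGCParameter(cx, JSGC_MODE, JSGC_MODE_INCREMENTAL);
                      :  *   do {
                      :  *       js::GCSlice(cx, reason, millisPerSlice);  // run one slice
                      :  *       ... mutator runs between slices ...
                      :  *   } while (JS::IsIncrementalGCInProgress(cx));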
      43             :  *
      44             :  * Collector states
      45             :  * ----------------
      46             :  *
      47             :  * The collector proceeds through the following states, the current state being
      48             :  * held in JSRuntime::gcIncrementalState:
      49             :  *
      50             :  *  - MarkRoots  - marks the stack and other roots
      51             :  *  - Mark       - incrementally marks reachable things
      52             :  *  - Sweep      - sweeps zones in groups and continues marking unswept zones
      53             :  *  - Finalize   - performs background finalization, concurrent with mutator
      54             :  *  - Compact    - incrementally compacts by zone
      55             :  *  - Decommit   - performs background decommit and chunk removal
      56             :  *
      57             :  * The MarkRoots activity always takes place in the first slice. The next two
      58             :  * states can take place over one or more slices.
      59             :  *
       60             :  * In other words, an incremental collection proceeds like this:
      61             :  *
      62             :  * Slice 1:   MarkRoots:  Roots pushed onto the mark stack.
      63             :  *            Mark:       The mark stack is processed by popping an element,
      64             :  *                        marking it, and pushing its children.
      65             :  *
      66             :  *          ... JS code runs ...
      67             :  *
      68             :  * Slice 2:   Mark:       More mark stack processing.
      69             :  *
      70             :  *          ... JS code runs ...
      71             :  *
      72             :  * Slice n-1: Mark:       More mark stack processing.
      73             :  *
      74             :  *          ... JS code runs ...
      75             :  *
      76             :  * Slice n:   Mark:       Mark stack is completely drained.
      77             :  *            Sweep:      Select first group of zones to sweep and sweep them.
      78             :  *
      79             :  *          ... JS code runs ...
      80             :  *
      81             :  * Slice n+1: Sweep:      Mark objects in unswept zones that were newly
      82             :  *                        identified as alive (see below). Then sweep more zone
      83             :  *                        sweep groups.
      84             :  *
      85             :  *          ... JS code runs ...
      86             :  *
      87             :  * Slice n+2: Sweep:      Mark objects in unswept zones that were newly
      88             :  *                        identified as alive. Then sweep more zones.
      89             :  *
      90             :  *          ... JS code runs ...
      91             :  *
      92             :  * Slice m:   Sweep:      Sweeping is finished, and background sweeping
      93             :  *                        started on the helper thread.
      94             :  *
      95             :  *          ... JS code runs, remaining sweeping done on background thread ...
      96             :  *
      97             :  * When background sweeping finishes the GC is complete.
      98             :  *
      99             :  * Incremental marking
     100             :  * -------------------
     101             :  *
     102             :  * Incremental collection requires close collaboration with the mutator (i.e.,
     103             :  * JS code) to guarantee correctness.
     104             :  *
     105             :  *  - During an incremental GC, if a memory location (except a root) is written
     106             :  *    to, then the value it previously held must be marked. Write barriers
     107             :  *    ensure this.
     108             :  *
     109             :  *  - Any object that is allocated during incremental GC must start out marked.
     110             :  *
     111             :  *  - Roots are marked in the first slice and hence don't need write barriers.
     112             :  *    Roots are things like the C stack and the VM stack.
     113             :  *
     114             :  * The problem that write barriers solve is that between slices the mutator can
     115             :  * change the object graph. We must ensure that it cannot do this in such a way
     116             :  * that makes us fail to mark a reachable object (marking an unreachable object
     117             :  * is tolerable).
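                      :  *
                      :  * For example, between slices the mutator could copy the only pointer to an
                      :  * unmarked object X into an object that has already been marked, then clear
                      :  * the original location. Without a barrier recording the overwritten value,
                      :  * X would never be marked despite remaining reachable.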
     118             :  *
     119             :  * We use a snapshot-at-the-beginning algorithm to do this. This means that we
     120             :  * promise to mark at least everything that is reachable at the beginning of
     121             :  * collection. To implement it we mark the old contents of every non-root memory
     122             :  * location written to by the mutator while the collection is in progress, using
     123             :  * write barriers. This is described in gc/Barrier.h.
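                      :  *
                      :  * A pre-write barrier therefore looks roughly like this (a simplified sketch
                      :  * with a hypothetical markOldValue() helper; the real barrier classes are in
                      :  * gc/Barrier.h):
                      :  *
                      :  *   void setField(JSObject** field, JSObject* newValue) {
                      :  *       if (zone->needsIncrementalBarrier())
                      :  *           markOldValue(*field);  // push the overwritten value on the mark stack
                      :  *       *field = newValue;
                      :  *   }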
     124             :  *
     125             :  * Incremental sweeping
     126             :  * --------------------
     127             :  *
     128             :  * Sweeping is difficult to do incrementally because object finalizers must be
     129             :  * run at the start of sweeping, before any mutator code runs. The reason is
     130             :  * that some objects use their finalizers to remove themselves from caches. If
     131             :  * mutator code was allowed to run after the start of sweeping, it could observe
     132             :  * the state of the cache and create a new reference to an object that was just
     133             :  * about to be destroyed.
     134             :  *
     135             :  * Sweeping all finalizable objects in one go would introduce long pauses, so
      136             :  * instead sweeping is broken up into groups of zones. Zones that are not yet
      137             :  * being swept are still marked, so the issue above does not apply.
     138             :  *
     139             :  * The order of sweeping is restricted by cross compartment pointers - for
      140             :  * example, say that object |a| from zone A points to object |b| in zone B and
     141             :  * neither object was marked when we transitioned to the Sweep phase. Imagine we
     142             :  * sweep B first and then return to the mutator. It's possible that the mutator
     143             :  * could cause |a| to become alive through a read barrier (perhaps it was a
     144             :  * shape that was accessed via a shape table). Then we would need to mark |b|,
     145             :  * which |a| points to, but |b| has already been swept.
     146             :  *
     147             :  * So if there is such a pointer then marking of zone B must not finish before
     148             :  * marking of zone A.  Pointers which form a cycle between zones therefore
     149             :  * restrict those zones to being swept at the same time, and these are found
     150             :  * using Tarjan's algorithm for finding the strongly connected components of a
     151             :  * graph.
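                      :  *
                      :  * For example, given zones A and B as above, the constraints work out as
                      :  * follows:
                      :  *
                      :  *   A -> B and B -> A:  the zones form a cycle and must share a sweep group
                      :  *   A -> B only:        B may be swept in the same group as A or in a later
                      :  *                       one, but never in an earlier one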
     152             :  *
     153             :  * GC things without finalizers, and things with finalizers that are able to run
     154             :  * in the background, are swept on the background thread. This accounts for most
     155             :  * of the sweeping work.
     156             :  *
     157             :  * Reset
     158             :  * -----
     159             :  *
     160             :  * During incremental collection it is possible, although unlikely, for
     161             :  * conditions to change such that incremental collection is no longer safe. In
     162             :  * this case, the collection is 'reset' by ResetIncrementalGC(). If we are in
     163             :  * the mark state, this just stops marking, but if we have started sweeping
     164             :  * already, we continue until we have swept the current sweep group. Following a
     165             :  * reset, a new non-incremental collection is started.
     166             :  *
     167             :  * Compacting GC
     168             :  * -------------
     169             :  *
     170             :  * Compacting GC happens at the end of a major GC as part of the last slice.
     171             :  * There are three parts:
     172             :  *
     173             :  *  - Arenas are selected for compaction.
     174             :  *  - The contents of those arenas are moved to new arenas.
     175             :  *  - All references to moved things are updated.
     176             :  *
     177             :  * Collecting Atoms
     178             :  * ----------------
     179             :  *
     180             :  * Atoms are collected differently from other GC things. They are contained in
     181             :  * a special zone and things in other zones may have pointers to them that are
     182             :  * not recorded in the cross compartment pointer map. Each zone holds a bitmap
     183             :  * with the atoms it might be keeping alive, and atoms are only collected if
     184             :  * they are not included in any zone's atom bitmap. See AtomMarking.cpp for how
     185             :  * this bitmap is managed.
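                      :  *
                      :  * In pseudocode, sweeping the atoms zone therefore amounts to (the real
                      :  * logic lives in AtomMarking.cpp):
                      :  *
                      :  *   for (each atom in the atoms zone)
                      :  *       if (no zone's atom bitmap includes the atom)
                      :  *           finalize the atom;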
     186             :  */
     187             : 
     188             : #include "jsgcinlines.h"
     189             : 
     190             : #include "mozilla/ArrayUtils.h"
     191             : #include "mozilla/DebugOnly.h"
     192             : #include "mozilla/MacroForEach.h"
     193             : #include "mozilla/MemoryReporting.h"
     194             : #include "mozilla/Move.h"
     195             : #include "mozilla/ScopeExit.h"
     196             : #include "mozilla/SizePrintfMacros.h"
     197             : #include "mozilla/TimeStamp.h"
     198             : #include "mozilla/Unused.h"
     199             : 
     200             : #include <ctype.h>
     201             : #include <string.h>
     202             : #ifndef XP_WIN
     203             : # include <sys/mman.h>
     204             : # include <unistd.h>
     205             : #endif
     206             : 
     207             : #include "jsapi.h"
     208             : #include "jsatom.h"
     209             : #include "jscntxt.h"
     210             : #include "jscompartment.h"
     211             : #include "jsfriendapi.h"
     212             : #include "jsobj.h"
     213             : #include "jsprf.h"
     214             : #include "jsscript.h"
     215             : #include "jstypes.h"
     216             : #include "jsutil.h"
     217             : #include "jswatchpoint.h"
     218             : #include "jsweakmap.h"
     219             : #ifdef XP_WIN
     220             : # include "jswin.h"
     221             : #endif
     222             : 
     223             : #include "gc/FindSCCs.h"
     224             : #include "gc/GCInternals.h"
     225             : #include "gc/GCTrace.h"
     226             : #include "gc/Marking.h"
     227             : #include "gc/Memory.h"
     228             : #include "gc/Policy.h"
     229             : #include "jit/BaselineJIT.h"
     230             : #include "jit/IonCode.h"
     231             : #include "jit/JitcodeMap.h"
     232             : #include "js/SliceBudget.h"
     233             : #include "proxy/DeadObjectProxy.h"
     234             : #include "vm/Debugger.h"
     235             : #include "vm/GeckoProfiler.h"
     236             : #include "vm/ProxyObject.h"
     237             : #include "vm/Shape.h"
     238             : #include "vm/String.h"
     239             : #include "vm/Symbol.h"
     240             : #include "vm/Time.h"
     241             : #include "vm/TraceLogging.h"
     242             : #include "vm/WrapperObject.h"
     243             : 
     244             : #include "jsobjinlines.h"
     245             : #include "jsscriptinlines.h"
     246             : 
     247             : #include "gc/Heap-inl.h"
     248             : #include "gc/Nursery-inl.h"
     249             : #include "vm/GeckoProfiler-inl.h"
     250             : #include "vm/Stack-inl.h"
     251             : #include "vm/String-inl.h"
     252             : 
     253             : using namespace js;
     254             : using namespace js::gc;
     255             : 
     256             : using mozilla::ArrayLength;
     257             : using mozilla::Get;
     258             : using mozilla::HashCodeScrambler;
     259             : using mozilla::Maybe;
     260             : using mozilla::Swap;
     261             : using mozilla::TimeStamp;
     262             : 
     263             : using JS::AutoGCRooter;
     264             : 
     265             : /* Increase the IGC marking slice time if we are in highFrequencyGC mode. */
     266             : static const int IGC_MARK_SLICE_MULTIPLIER = 2;
     267             : 
     268             : const AllocKind gc::slotsToThingKind[] = {
     269             :     /*  0 */ AllocKind::OBJECT0,  AllocKind::OBJECT2,  AllocKind::OBJECT2,  AllocKind::OBJECT4,
     270             :     /*  4 */ AllocKind::OBJECT4,  AllocKind::OBJECT8,  AllocKind::OBJECT8,  AllocKind::OBJECT8,
     271             :     /*  8 */ AllocKind::OBJECT8,  AllocKind::OBJECT12, AllocKind::OBJECT12, AllocKind::OBJECT12,
     272             :     /* 12 */ AllocKind::OBJECT12, AllocKind::OBJECT16, AllocKind::OBJECT16, AllocKind::OBJECT16,
     273             :     /* 16 */ AllocKind::OBJECT16
     274             : };
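                      : 
                      : // For example, slotsToThingKind[5] is AllocKind::OBJECT8: an object that
                      : // needs five fixed slots is allocated from the smallest object kind that
                      : // provides at least that many slots.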
     275             : 
     276             : static_assert(JS_ARRAY_LENGTH(slotsToThingKind) == SLOTS_TO_THING_KIND_LIMIT,
     277             :               "We have defined a slot count for each kind.");
     278             : 
     279             : #define CHECK_THING_SIZE(allocKind, traceKind, type, sizedType) \
     280             :     static_assert(sizeof(sizedType) >= SortedArenaList::MinThingSize, \
     281             :                   #sizedType " is smaller than SortedArenaList::MinThingSize!"); \
     282             :     static_assert(sizeof(sizedType) >= sizeof(FreeSpan), \
     283             :                   #sizedType " is smaller than FreeSpan"); \
     284             :     static_assert(sizeof(sizedType) % CellAlignBytes == 0, \
     285             :                   "Size of " #sizedType " is not a multiple of CellAlignBytes"); \
     286             :     static_assert(sizeof(sizedType) >= MinCellSize, \
     287             :                   "Size of " #sizedType " is smaller than the minimum size");
     288             : FOR_EACH_ALLOCKIND(CHECK_THING_SIZE);
     289             : #undef CHECK_THING_SIZE
     290             : 
     291             : const uint32_t Arena::ThingSizes[] = {
     292             : #define EXPAND_THING_SIZE(allocKind, traceKind, type, sizedType) \
     293             :     sizeof(sizedType),
     294             : FOR_EACH_ALLOCKIND(EXPAND_THING_SIZE)
     295             : #undef EXPAND_THING_SIZE
     296             : };
     297             : 
     298             : FreeSpan ArenaLists::placeholder;
     299             : 
     300             : #undef CHECK_THING_SIZE_INNER
     301             : #undef CHECK_THING_SIZE
     302             : 
     303             : #define OFFSET(type) uint32_t(ArenaHeaderSize + (ArenaSize - ArenaHeaderSize) % sizeof(type))
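                      : // Together with the COUNT macro below, this places any slack space before the
                      : // first thing so that the last thing ends exactly at the arena boundary. For
                      : // example, with hypothetical sizes ArenaSize = 4096, ArenaHeaderSize = 40 and
                      : // sizeof(type) = 48:
                      : //
                      : //   OFFSET(type) = 40 + (4096 - 40) % 48 = 40 + 24 = 64
                      : //   COUNT(type)  = (4096 - 40) / 48 = 84
                      : //
                      : // and 64 + 84 * 48 = 4096 exactly.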
     304             : 
     305             : const uint32_t Arena::FirstThingOffsets[] = {
     306             : #define EXPAND_FIRST_THING_OFFSET(allocKind, traceKind, type, sizedType) \
     307             :     OFFSET(sizedType),
     308             : FOR_EACH_ALLOCKIND(EXPAND_FIRST_THING_OFFSET)
     309             : #undef EXPAND_FIRST_THING_OFFSET
     310             : };
     311             : 
     312             : #undef OFFSET
     313             : 
     314             : #define COUNT(type) uint32_t((ArenaSize - ArenaHeaderSize) / sizeof(type))
     315             : 
     316             : const uint32_t Arena::ThingsPerArena[] = {
     317             : #define EXPAND_THINGS_PER_ARENA(allocKind, traceKind, type, sizedType) \
     318             :     COUNT(sizedType),
     319             : FOR_EACH_ALLOCKIND(EXPAND_THINGS_PER_ARENA)
     320             : #undef EXPAND_THINGS_PER_ARENA
     321             : };
     322             : 
     323             : #undef COUNT
     324             : 
     325             : struct js::gc::FinalizePhase
     326             : {
     327             :     gcstats::PhaseKind statsPhase;
     328             :     AllocKinds kinds;
     329             : };
     330             : 
     331             : /*
     332             :  * Finalization order for objects swept incrementally on the active thread.
     333             :  */
     334             : static const FinalizePhase ForegroundObjectFinalizePhase = {
     335             :     gcstats::PhaseKind::SWEEP_OBJECT, {
     336             :         AllocKind::OBJECT0,
     337             :         AllocKind::OBJECT2,
     338             :         AllocKind::OBJECT4,
     339             :         AllocKind::OBJECT8,
     340             :         AllocKind::OBJECT12,
     341             :         AllocKind::OBJECT16
     342             :     }
     343           3 : };
     344             : 
     345             : /*
     346             :  * Finalization order for GC things swept incrementally on the active thread.
     347             :  */
     348           3 : static const FinalizePhase IncrementalFinalizePhases[] = {
     349             :     {
     350             :         gcstats::PhaseKind::SWEEP_SCRIPT, {
     351             :             AllocKind::SCRIPT
     352             :         }
     353             :     },
     354             :     {
     355             :         gcstats::PhaseKind::SWEEP_JITCODE, {
     356             :             AllocKind::JITCODE
     357             :         }
     358             :     }
     359           3 : };
     360             : 
     361             : /*
     362             :  * Finalization order for GC things swept on the background thread.
     363             :  */
     364           3 : static const FinalizePhase BackgroundFinalizePhases[] = {
     365             :     {
     366             :         gcstats::PhaseKind::SWEEP_SCRIPT, {
     367             :             AllocKind::LAZY_SCRIPT
     368             :         }
     369             :     },
     370             :     {
     371             :         gcstats::PhaseKind::SWEEP_OBJECT, {
     372             :             AllocKind::FUNCTION,
     373             :             AllocKind::FUNCTION_EXTENDED,
     374             :             AllocKind::OBJECT0_BACKGROUND,
     375             :             AllocKind::OBJECT2_BACKGROUND,
     376             :             AllocKind::OBJECT4_BACKGROUND,
     377             :             AllocKind::OBJECT8_BACKGROUND,
     378             :             AllocKind::OBJECT12_BACKGROUND,
     379             :             AllocKind::OBJECT16_BACKGROUND
     380             :         }
     381             :     },
     382             :     {
     383             :         gcstats::PhaseKind::SWEEP_SCOPE, {
     384             :             AllocKind::SCOPE,
     385             :         }
     386             :     },
     387             :     {
     388             :         gcstats::PhaseKind::SWEEP_REGEXP_SHARED, {
     389             :             AllocKind::REGEXP_SHARED,
     390             :         }
     391             :     },
     392             :     {
     393             :         gcstats::PhaseKind::SWEEP_STRING, {
     394             :             AllocKind::FAT_INLINE_STRING,
     395             :             AllocKind::STRING,
     396             :             AllocKind::EXTERNAL_STRING,
     397             :             AllocKind::FAT_INLINE_ATOM,
     398             :             AllocKind::ATOM,
     399             :             AllocKind::SYMBOL
     400             :         }
     401             :     },
     402             :     {
     403             :         gcstats::PhaseKind::SWEEP_SHAPE, {
     404             :             AllocKind::SHAPE,
     405             :             AllocKind::ACCESSOR_SHAPE,
     406             :             AllocKind::BASE_SHAPE,
     407             :             AllocKind::OBJECT_GROUP
     408             :         }
     409             :     }
     410           3 : };
     411             : 
     412             : // Incremental sweeping is controlled by a list of actions that describe what
      413             : // happens and in what order. Due to the incremental nature of sweeping, an
      414             : // action does not necessarily run to completion, so the current state is tracked
     415             : // in the GCRuntime by the performSweepActions() method. We may yield to the
     416             : // mutator after running part of any action.
     417             : //
     418             : // There are two types of action: per-sweep-group and per-zone.
     419             : //
     420             : // Per-sweep-group actions are run first. Per-zone actions are grouped into
     421             : // phases, with each phase run once per sweep group, and each action in it run
     422             : // for every zone in the group.
     423             : //
     424             : // This is illustrated by the following pseudocode:
     425             : //
     426             : //   for each sweep group:
     427             : //     for each per-sweep-group action:
     428             : //       run part or all of action
     429             : //       maybe yield to the mutator
     430             : //     for each per-zone phase:
     431             : //       for each zone in sweep group:
     432             : //         for each action in phase:
     433             : //           run part or all of action
     434             : //           maybe yield to the mutator
     435             : //
     436             : // Progress through the loops is stored in GCRuntime, e.g. |sweepActionIndex|
     437             : // for looping through the sweep actions.
     438             : 
     439             : using PerSweepGroupSweepAction = IncrementalProgress (*)(GCRuntime* gc, SliceBudget& budget);
     440             : 
     441             : struct PerZoneSweepAction
     442             : {
     443             :     using Func = IncrementalProgress (*)(GCRuntime* gc, FreeOp* fop, Zone* zone,
     444             :                                          SliceBudget& budget, AllocKind kind);
     445             : 
     446             :     Func func;
     447             :     AllocKind kind;
     448             : 
     449          33 :     PerZoneSweepAction(Func func, AllocKind kind) : func(func), kind(kind) {}
     450             : };
     451             : 
     452             : using PerSweepGroupActionVector = Vector<PerSweepGroupSweepAction, 0, SystemAllocPolicy>;
     453             : using PerZoneSweepActionVector = Vector<PerZoneSweepAction, 0, SystemAllocPolicy>;
     454             : using PerZoneSweepPhaseVector = Vector<PerZoneSweepActionVector, 0, SystemAllocPolicy>;
     455             : 
     456           3 : static PerSweepGroupActionVector PerSweepGroupSweepActions;
     457           3 : static PerZoneSweepPhaseVector PerZoneSweepPhases;
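                      : 
                      : // For example, GCRuntime::initializeSweepActions() (defined later in this
                      : // file) populates these tables with entries roughly like the following,
                      : // where SweepThingsOfKind stands in for a real action function:
                      : //
                      : //   PerZoneSweepActionVector phase;
                      : //   if (!phase.emplaceBack(&SweepThingsOfKind, AllocKind::SCRIPT))
                      : //       return false;
                      : //   if (!PerZoneSweepPhases.emplaceBack(Move(phase)))
                      : //       return false;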
     458             : 
     459             : bool
     460           3 : js::gc::InitializeStaticData()
     461             : {
     462           3 :     return GCRuntime::initializeSweepActions();
     463             : }
     464             : 
     465             : template<>
     466             : JSObject*
     467           0 : ArenaCellIterImpl::get<JSObject>() const
     468             : {
     469           0 :     MOZ_ASSERT(!done());
     470           0 :     return reinterpret_cast<JSObject*>(getCell());
     471             : }
     472             : 
     473             : void
     474        4928 : Arena::unmarkAll()
     475             : {
     476        4928 :     uintptr_t* word = chunk()->bitmap.arenaBits(this);
     477        4928 :     memset(word, 0, ArenaBitmapWords * sizeof(uintptr_t));
     478        4928 : }
     479             : 
     480             : /* static */ void
     481           0 : Arena::staticAsserts()
     482             : {
     483             :     static_assert(size_t(AllocKind::LIMIT) <= 255,
     484             :                   "We must be able to fit the allockind into uint8_t.");
     485             :     static_assert(JS_ARRAY_LENGTH(ThingSizes) == size_t(AllocKind::LIMIT),
     486             :                   "We haven't defined all thing sizes.");
     487             :     static_assert(JS_ARRAY_LENGTH(FirstThingOffsets) == size_t(AllocKind::LIMIT),
     488             :                   "We haven't defined all offsets.");
     489             :     static_assert(JS_ARRAY_LENGTH(ThingsPerArena) == size_t(AllocKind::LIMIT),
     490             :                   "We haven't defined all counts.");
     491           0 : }
     492             : 
     493             : template<typename T>
     494             : inline size_t
     495           0 : Arena::finalize(FreeOp* fop, AllocKind thingKind, size_t thingSize)
     496             : {
     497             :     /* Enforce requirements on size of T. */
     498           0 :     MOZ_ASSERT(thingSize % CellAlignBytes == 0);
     499           0 :     MOZ_ASSERT(thingSize >= MinCellSize);
     500           0 :     MOZ_ASSERT(thingSize <= 255);
     501             : 
     502           0 :     MOZ_ASSERT(allocated());
     503           0 :     MOZ_ASSERT(thingKind == getAllocKind());
     504           0 :     MOZ_ASSERT(thingSize == getThingSize());
     505           0 :     MOZ_ASSERT(!hasDelayedMarking);
     506           0 :     MOZ_ASSERT(!markOverflow);
     507           0 :     MOZ_ASSERT(!allocatedDuringIncremental);
     508             : 
     509           0 :     uint_fast16_t firstThing = firstThingOffset(thingKind);
     510           0 :     uint_fast16_t firstThingOrSuccessorOfLastMarkedThing = firstThing;
     511           0 :     uint_fast16_t lastThing = ArenaSize - thingSize;
     512             : 
     513             :     FreeSpan newListHead;
     514           0 :     FreeSpan* newListTail = &newListHead;
     515           0 :     size_t nmarked = 0;
     516             : 
     517           0 :     if (MOZ_UNLIKELY(MemProfiler::enabled())) {
     518           0 :         for (ArenaCellIterUnderFinalize i(this); !i.done(); i.next()) {
     519           0 :             T* t = i.get<T>();
     520           0 :             if (t->asTenured().isMarkedAny())
     521           0 :                 MemProfiler::MarkTenured(reinterpret_cast<void*>(t));
     522             :         }
     523             :     }
     524             : 
     525           0 :     for (ArenaCellIterUnderFinalize i(this); !i.done(); i.next()) {
     526           0 :         T* t = i.get<T>();
     527           0 :         if (t->asTenured().isMarkedAny()) {
     528           0 :             uint_fast16_t thing = uintptr_t(t) & ArenaMask;
     529           0 :             if (thing != firstThingOrSuccessorOfLastMarkedThing) {
     530             :                 // We just finished passing over one or more free things,
     531             :                 // so record a new FreeSpan.
     532           0 :                 newListTail->initBounds(firstThingOrSuccessorOfLastMarkedThing,
     533             :                                         thing - thingSize, this);
     534           0 :                 newListTail = newListTail->nextSpanUnchecked(this);
     535             :             }
     536           0 :             firstThingOrSuccessorOfLastMarkedThing = thing + thingSize;
     537           0 :             nmarked++;
     538             :         } else {
     539           0 :             t->finalize(fop);
     540           0 :             JS_POISON(t, JS_SWEPT_TENURED_PATTERN, thingSize);
     541           0 :             TraceTenuredFinalize(t);
     542             :         }
     543             :     }
     544             : 
     545           0 :     if (nmarked == 0) {
     546             :         // Do nothing. The caller will update the arena appropriately.
     547           0 :         MOZ_ASSERT(newListTail == &newListHead);
     548           0 :         JS_EXTRA_POISON(data, JS_SWEPT_TENURED_PATTERN, sizeof(data));
     549           0 :         return nmarked;
     550             :     }
     551             : 
     552           0 :     MOZ_ASSERT(firstThingOrSuccessorOfLastMarkedThing != firstThing);
     553           0 :     uint_fast16_t lastMarkedThing = firstThingOrSuccessorOfLastMarkedThing - thingSize;
     554           0 :     if (lastThing == lastMarkedThing) {
     555             :         // If the last thing was marked, we will have already set the bounds of
     556             :         // the final span, and we just need to terminate the list.
     557           0 :         newListTail->initAsEmpty();
     558             :     } else {
     559             :         // Otherwise, end the list with a span that covers the final stretch of free things.
     560           0 :         newListTail->initFinal(firstThingOrSuccessorOfLastMarkedThing, lastThing, this);
     561             :     }
     562             : 
     563           0 :     firstFreeSpan = newListHead;
     564             : #ifdef DEBUG
     565           0 :     size_t nfree = numFreeThings(thingSize);
     566           0 :     MOZ_ASSERT(nfree + nmarked == thingsPerArena(thingKind));
     567             : #endif
     568           0 :     return nmarked;
     569             : }
     570             : 
     571             : // Finalize arenas from src list, releasing empty arenas if keepArenas wasn't
     572             : // specified and inserting the others into the appropriate destination size
     573             : // bins.
     574             : template<typename T>
     575             : static inline bool
     576           0 : FinalizeTypedArenas(FreeOp* fop,
     577             :                     Arena** src,
     578             :                     SortedArenaList& dest,
     579             :                     AllocKind thingKind,
     580             :                     SliceBudget& budget,
     581             :                     ArenaLists::KeepArenasEnum keepArenas)
     582             : {
     583             :     // When operating in the foreground, take the lock at the top.
     584           0 :     Maybe<AutoLockGC> maybeLock;
     585           0 :     if (fop->onActiveCooperatingThread())
     586           0 :         maybeLock.emplace(fop->runtime());
     587             : 
     588             :     // During background sweeping free arenas are released later on in
     589             :     // sweepBackgroundThings().
     590           0 :     MOZ_ASSERT_IF(!fop->onActiveCooperatingThread(), keepArenas == ArenaLists::KEEP_ARENAS);
     591             : 
     592           0 :     size_t thingSize = Arena::thingSize(thingKind);
     593           0 :     size_t thingsPerArena = Arena::thingsPerArena(thingKind);
     594             : 
     595           0 :     while (Arena* arena = *src) {
     596           0 :         *src = arena->next;
     597           0 :         size_t nmarked = arena->finalize<T>(fop, thingKind, thingSize);
     598           0 :         size_t nfree = thingsPerArena - nmarked;
     599             : 
     600           0 :         if (nmarked)
     601           0 :             dest.insertAt(arena, nfree);
     602           0 :         else if (keepArenas == ArenaLists::KEEP_ARENAS)
     603           0 :             arena->chunk()->recycleArena(arena, dest, thingsPerArena);
     604             :         else
     605           0 :             fop->runtime()->gc.releaseArena(arena, maybeLock.ref());
     606             : 
     607           0 :         budget.step(thingsPerArena);
     608           0 :         if (budget.isOverBudget())
     609           0 :             return false;
     610             :     }
     611             : 
     612           0 :     return true;
     613             : }
     614             : 
     615             : /*
      616             :  * Finalize the list. On return, |dest|'s cursor points to the first non-empty
     617             :  * arena in the list (which may be null if all arenas are full).
     618             :  */
     619             : static bool
     620           0 : FinalizeArenas(FreeOp* fop,
     621             :                Arena** src,
     622             :                SortedArenaList& dest,
     623             :                AllocKind thingKind,
     624             :                SliceBudget& budget,
     625             :                ArenaLists::KeepArenasEnum keepArenas)
     626             : {
     627           0 :     switch (thingKind) {
     628             : #define EXPAND_CASE(allocKind, traceKind, type, sizedType) \
     629             :       case AllocKind::allocKind: \
     630             :         return FinalizeTypedArenas<type>(fop, src, dest, thingKind, budget, keepArenas);
     631           0 : FOR_EACH_ALLOCKIND(EXPAND_CASE)
     632             : #undef EXPAND_CASE
     633             : 
     634             :       default:
     635           0 :         MOZ_CRASH("Invalid alloc kind");
     636             :     }
     637             : }
     638             : 
     639             : Chunk*
     640          40 : ChunkPool::pop()
     641             : {
     642          40 :     MOZ_ASSERT(bool(head_) == bool(count_));
     643          40 :     if (!count_)
     644          20 :         return nullptr;
     645          20 :     return remove(head_);
     646             : }
     647             : 
     648             : void
     649          75 : ChunkPool::push(Chunk* chunk)
     650             : {
     651          75 :     MOZ_ASSERT(!chunk->info.next);
     652          75 :     MOZ_ASSERT(!chunk->info.prev);
     653             : 
     654          75 :     chunk->info.next = head_;
     655          75 :     if (head_)
     656          23 :         head_->info.prev = chunk;
     657          75 :     head_ = chunk;
     658          75 :     ++count_;
     659             : 
     660          75 :     MOZ_ASSERT(verify());
     661          75 : }
     662             : 
     663             : Chunk*
     664          45 : ChunkPool::remove(Chunk* chunk)
     665             : {
     666          45 :     MOZ_ASSERT(count_ > 0);
     667          45 :     MOZ_ASSERT(contains(chunk));
     668             : 
     669          45 :     if (head_ == chunk)
     670          45 :         head_ = chunk->info.next;
     671          45 :     if (chunk->info.prev)
     672           0 :         chunk->info.prev->info.next = chunk->info.next;
     673          45 :     if (chunk->info.next)
     674           1 :         chunk->info.next->info.prev = chunk->info.prev;
     675          45 :     chunk->info.next = chunk->info.prev = nullptr;
     676          45 :     --count_;
     677             : 
     678          45 :     MOZ_ASSERT(verify());
     679          45 :     return chunk;
     680             : }
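                      : 
                      : // For example, when allocation uses up the last free arena in a chunk, the
                      : // chunk migrates from the available pool to the full pool (see
                      : // Chunk::updateChunkListAfterAlloc below):
                      : //
                      : //   rt->gc.availableChunks(lock).remove(chunk);
                      : //   rt->gc.fullChunks(lock).push(chunk);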
     681             : 
     682             : #ifdef DEBUG
     683             : bool
     684         103 : ChunkPool::contains(Chunk* chunk) const
     685             : {
     686         103 :     verify();
     687         322 :     for (Chunk* cursor = head_; cursor; cursor = cursor->info.next) {
     688         264 :         if (cursor == chunk)
     689          45 :             return true;
     690             :     }
     691          58 :     return false;
     692             : }
     693             : 
     694             : bool
     695         223 : ChunkPool::verify() const
     696             : {
     697         223 :     MOZ_ASSERT(bool(head_) == bool(count_));
     698         223 :     uint32_t count = 0;
     699         759 :     for (Chunk* cursor = head_; cursor; cursor = cursor->info.next, ++count) {
     700         536 :         MOZ_ASSERT_IF(cursor->info.prev, cursor->info.prev->info.next == cursor);
     701         536 :         MOZ_ASSERT_IF(cursor->info.next, cursor->info.next->info.prev == cursor);
     702             :     }
     703         223 :     MOZ_ASSERT(count_ == count);
     704         223 :     return true;
     705             : }
     706             : #endif
     707             : 
     708             : void
     709           0 : ChunkPool::Iter::next()
     710             : {
     711           0 :     MOZ_ASSERT(!done());
     712           0 :     current_ = current_->info.next;
     713           0 : }
     714             : 
     715             : ChunkPool
     716           0 : GCRuntime::expireEmptyChunkPool(const AutoLockGC& lock)
     717             : {
     718           0 :     MOZ_ASSERT(emptyChunks(lock).verify());
     719           0 :     MOZ_ASSERT(tunables.minEmptyChunkCount(lock) <= tunables.maxEmptyChunkCount());
     720             : 
     721           0 :     ChunkPool expired;
     722           0 :     while (emptyChunks(lock).count() > tunables.minEmptyChunkCount(lock)) {
     723           0 :         Chunk* chunk = emptyChunks(lock).pop();
     724           0 :         prepareToFreeChunk(chunk->info);
     725           0 :         expired.push(chunk);
     726             :     }
     727             : 
     728           0 :     MOZ_ASSERT(expired.verify());
     729           0 :     MOZ_ASSERT(emptyChunks(lock).verify());
     730           0 :     MOZ_ASSERT(emptyChunks(lock).count() <= tunables.maxEmptyChunkCount());
     731           0 :     MOZ_ASSERT(emptyChunks(lock).count() <= tunables.minEmptyChunkCount(lock));
     732           0 :     return expired;
     733             : }
     734             : 
     735             : static void
     736           0 : FreeChunkPool(JSRuntime* rt, ChunkPool& pool)
     737             : {
     738           0 :     for (ChunkPool::Iter iter(pool); !iter.done();) {
     739           0 :         Chunk* chunk = iter.get();
     740           0 :         iter.next();
     741           0 :         pool.remove(chunk);
     742           0 :         MOZ_ASSERT(!chunk->info.numArenasFreeCommitted);
     743           0 :         UnmapPages(static_cast<void*>(chunk), ChunkSize);
     744             :     }
     745           0 :     MOZ_ASSERT(pool.count() == 0);
     746           0 : }
     747             : 
     748             : void
     749           0 : GCRuntime::freeEmptyChunks(JSRuntime* rt, const AutoLockGC& lock)
     750             : {
     751           0 :     FreeChunkPool(rt, emptyChunks(lock));
     752           0 : }
     753             : 
     754             : inline void
     755           0 : GCRuntime::prepareToFreeChunk(ChunkInfo& info)
     756             : {
     757           0 :     MOZ_ASSERT(numArenasFreeCommitted >= info.numArenasFreeCommitted);
     758           0 :     numArenasFreeCommitted -= info.numArenasFreeCommitted;
     759           0 :     stats().count(gcstats::STAT_DESTROY_CHUNK);
     760             : #ifdef DEBUG
     761             :     /*
     762             :      * Let FreeChunkPool detect a missing prepareToFreeChunk call before it
     763             :      * frees chunk.
     764             :      */
     765           0 :     info.numArenasFreeCommitted = 0;
     766             : #endif
     767           0 : }
     768             : 
     769             : inline void
     770           0 : GCRuntime::updateOnArenaFree(const ChunkInfo& info)
     771             : {
     772           0 :     ++numArenasFreeCommitted;
     773           0 : }
     774             : 
     775             : void
     776           0 : Chunk::addArenaToFreeList(JSRuntime* rt, Arena* arena)
     777             : {
     778           0 :     MOZ_ASSERT(!arena->allocated());
     779           0 :     arena->next = info.freeArenasHead;
     780           0 :     info.freeArenasHead = arena;
     781           0 :     ++info.numArenasFreeCommitted;
     782           0 :     ++info.numArenasFree;
     783           0 :     rt->gc.updateOnArenaFree(info);
     784           0 : }
     785             : 
     786             : void
     787           0 : Chunk::addArenaToDecommittedList(JSRuntime* rt, const Arena* arena)
     788             : {
     789           0 :     ++info.numArenasFree;
     790           0 :     decommittedArenas.set(Chunk::arenaIndex(arena->address()));
     791           0 : }
     792             : 
     793             : void
     794           0 : Chunk::recycleArena(Arena* arena, SortedArenaList& dest, size_t thingsPerArena)
     795             : {
     796           0 :     arena->setAsFullyUnused();
     797           0 :     dest.insertAt(arena, thingsPerArena);
     798           0 : }
     799             : 
     800             : void
     801           0 : Chunk::releaseArena(JSRuntime* rt, Arena* arena, const AutoLockGC& lock)
     802             : {
     803           0 :     MOZ_ASSERT(arena->allocated());
     804           0 :     MOZ_ASSERT(!arena->hasDelayedMarking);
     805             : 
     806           0 :     arena->release();
     807           0 :     addArenaToFreeList(rt, arena);
     808           0 :     updateChunkListAfterFree(rt, lock);
     809           0 : }
     810             : 
     811             : bool
     812           0 : Chunk::decommitOneFreeArena(JSRuntime* rt, AutoLockGC& lock)
     813             : {
     814           0 :     MOZ_ASSERT(info.numArenasFreeCommitted > 0);
     815           0 :     Arena* arena = fetchNextFreeArena(rt);
     816           0 :     updateChunkListAfterAlloc(rt, lock);
     817             : 
     818             :     bool ok;
     819             :     {
     820           0 :         AutoUnlockGC unlock(lock);
     821           0 :         ok = MarkPagesUnused(arena, ArenaSize);
     822             :     }
     823             : 
     824           0 :     if (ok)
     825           0 :         addArenaToDecommittedList(rt, arena);
     826             :     else
     827           0 :         addArenaToFreeList(rt, arena);
     828           0 :     updateChunkListAfterFree(rt, lock);
     829             : 
     830           0 :     return ok;
     831             : }
     832             : 
     833             : void
     834           0 : Chunk::decommitAllArenasWithoutUnlocking(const AutoLockGC& lock)
     835             : {
     836           0 :     for (size_t i = 0; i < ArenasPerChunk; ++i) {
     837           0 :         if (decommittedArenas.get(i) || arenas[i].allocated())
     838           0 :             continue;
     839             : 
     840           0 :         if (MarkPagesUnused(&arenas[i], ArenaSize)) {
     841           0 :             info.numArenasFreeCommitted--;
     842           0 :             decommittedArenas.set(i);
     843             :         }
     844             :     }
     845           0 : }
     846             : 
     847             : void
     848        6662 : Chunk::updateChunkListAfterAlloc(JSRuntime* rt, const AutoLockGC& lock)
     849             : {
     850        6662 :     if (MOZ_UNLIKELY(!hasAvailableArenas())) {
     851          25 :         rt->gc.availableChunks(lock).remove(this);
     852          25 :         rt->gc.fullChunks(lock).push(this);
     853             :     }
     854        6662 : }
     855             : 
     856             : void
     857           0 : Chunk::updateChunkListAfterFree(JSRuntime* rt, const AutoLockGC& lock)
     858             : {
     859           0 :     if (info.numArenasFree == 1) {
     860           0 :         rt->gc.fullChunks(lock).remove(this);
     861           0 :         rt->gc.availableChunks(lock).push(this);
     862           0 :     } else if (!unused()) {
     863           0 :         MOZ_ASSERT(!rt->gc.fullChunks(lock).contains(this));
     864           0 :         MOZ_ASSERT(rt->gc.availableChunks(lock).contains(this));
     865           0 :         MOZ_ASSERT(!rt->gc.emptyChunks(lock).contains(this));
     866             :     } else {
     867           0 :         MOZ_ASSERT(unused());
     868           0 :         rt->gc.availableChunks(lock).remove(this);
     869           0 :         decommitAllArenas(rt);
     870           0 :         MOZ_ASSERT(info.numArenasFreeCommitted == 0);
     871           0 :         rt->gc.recycleChunk(this, lock);
     872             :     }
     873           0 : }
     874             : 
     875             : void
     876           0 : GCRuntime::releaseArena(Arena* arena, const AutoLockGC& lock)
     877             : {
     878           0 :     arena->zone->usage.removeGCArena();
     879           0 :     if (isBackgroundSweeping())
     880           0 :         arena->zone->threshold.updateForRemovedArena(tunables);
     881           0 :     return arena->chunk()->releaseArena(rt, arena, lock);
     882             : }
     883             : 
     884           4 : GCRuntime::GCRuntime(JSRuntime* rt) :
     885             :     rt(rt),
     886             :     systemZone(nullptr),
     887             :     systemZoneGroup(nullptr),
     888             :     atomsZone(nullptr),
     889             :     stats_(rt),
     890             :     marker(rt),
     891             :     usage(nullptr),
     892             :     mMemProfiler(rt),
     893             :     nextCellUniqueId_(LargestTaggedNullCellPointer + 1), // Ensure disjoint from null tagged pointers.
     894             :     numArenasFreeCommitted(0),
     895             :     verifyPreData(nullptr),
     896             :     chunkAllocationSinceLastGC(false),
     897           8 :     lastGCTime(PRMJ_Now()),
     898             :     mode(JSGC_MODE_INCREMENTAL),
     899             :     numActiveZoneIters(0),
     900             :     cleanUpEverything(false),
     901             :     grayBufferState(GCRuntime::GrayBufferState::Unused),
     902             :     grayBitsValid(false),
     903             :     majorGCTriggerReason(JS::gcreason::NO_REASON),
     904             :     fullGCForAtomsRequested_(false),
     905             :     minorGCNumber(0),
     906             :     majorGCNumber(0),
     907             :     jitReleaseNumber(0),
     908             :     number(0),
     909             :     isFull(false),
     910             :     incrementalState(gc::State::NotActive),
     911             :     lastMarkSlice(false),
     912             :     sweepOnBackgroundThread(false),
     913             :     blocksToFreeAfterSweeping((size_t) JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
     914             :     sweepGroupIndex(0),
     915             :     sweepGroups(nullptr),
     916             :     currentSweepGroup(nullptr),
     917             :     sweepPhaseIndex(0),
     918             :     sweepZone(nullptr),
     919             :     sweepActionIndex(0),
     920             :     abortSweepAfterCurrentGroup(false),
     921             :     arenasAllocatedDuringSweep(nullptr),
     922             :     startedCompacting(false),
     923             :     relocatedArenasToRelease(nullptr),
     924             : #ifdef JS_GC_ZEAL
     925             :     markingValidator(nullptr),
     926             : #endif
     927             :     interFrameGC(false),
     928             :     defaultTimeBudget_((int64_t) SliceBudget::UnlimitedTimeBudget),
     929             :     incrementalAllowed(true),
     930             :     compactingEnabled(true),
     931             :     rootsRemoved(false),
     932             : #ifdef JS_GC_ZEAL
     933             :     zealModeBits(0),
     934             :     zealFrequency(0),
     935             :     nextScheduled(0),
     936             :     deterministicOnly(false),
     937             :     incrementalLimit(0),
     938             : #endif
     939             :     fullCompartmentChecks(false),
     940             :     alwaysPreserveCode(false),
     941             : #ifdef DEBUG
     942             :     arenasEmptyAtShutdown(true),
     943             : #endif
     944             :     lock(mutexid::GCLock),
     945             :     allocTask(rt, emptyChunks_.ref()),
     946             :     decommitTask(rt),
     947             :     helperState(rt),
     948             :     nursery_(rt),
     949             :     storeBuffer_(rt, nursery()),
     950          12 :     blocksToFreeAfterMinorGC((size_t) JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE)
     951             : {
     952           4 :     setGCMode(JSGC_MODE_GLOBAL);
     953           4 : }
     954             : 
     955             : #ifdef JS_GC_ZEAL
     956             : 
     957             : void
     958           0 : GCRuntime::getZealBits(uint32_t* zealBits, uint32_t* frequency, uint32_t* scheduled)
     959             : {
     960           0 :     *zealBits = zealModeBits;
     961           0 :     *frequency = zealFrequency;
     962           0 :     *scheduled = nextScheduled;
     963           0 : }
     964             : 
     965             : const char* gc::ZealModeHelpText =
     966             :     "  Specifies how zealous the garbage collector should be. Some of these modes can\n"
     967             :     "  be set simultaneously, by passing multiple level options, e.g. \"2;4\" will activate\n"
     968             :     "  both modes 2 and 4. Modes can be specified by name or number.\n"
     969             :     "  \n"
     970             :     "  Values:\n"
     971             :     "    0: (None) Normal amount of collection (resets all modes)\n"
     972             :     "    1: (RootsChange) Collect when roots are added or removed\n"
      973             :     "    2: (Alloc) Collect every N allocations (default: 100)\n"
     974             :     "    3: (FrameGC) Collect when the window paints (browser only)\n"
     975             :     "    4: (VerifierPre) Verify pre write barriers between instructions\n"
     976             :     "    5: (FrameVerifierPre) Verify pre write barriers between paints\n"
     977             :     "    6: (StackRooting) Verify stack rooting\n"
     978             :     "    7: (GenerationalGC) Collect the nursery every N nursery allocations\n"
     979             :     "    8: (IncrementalRootsThenFinish) Incremental GC in two slices: 1) mark roots 2) finish collection\n"
     980             :     "    9: (IncrementalMarkAllThenFinish) Incremental GC in two slices: 1) mark all 2) new marking and finish\n"
     981             :     "   10: (IncrementalMultipleSlices) Incremental GC in multiple slices\n"
     982             :     "   11: (IncrementalMarkingValidator) Verify incremental marking\n"
     983             :     "   12: (ElementsBarrier) Always use the individual element post-write barrier, regardless of elements size\n"
     984             :     "   13: (CheckHashTablesOnMinorGC) Check internal hashtables on minor GC\n"
     985             :     "   14: (Compact) Perform a shrinking collection every N allocations\n"
     986             :     "   15: (CheckHeapAfterGC) Walk the heap to check its integrity after every GC\n"
     987             :     "   16: (CheckNursery) Check nursery integrity on minor GC\n"
     988             :     "   17: (IncrementalSweepThenFinish) Incremental GC in two slices: 1) start sweeping 2) finish collection\n";
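                      : 
                      : // For example, assuming the JS_GC_ZEAL environment variable is parsed with
                      : // parseAndSetZeal() below, these are equivalent ways to request mode 10 with
                      : // a frequency of 100:
                      : //
                      : //   JS_GC_ZEAL=10,100
                      : //   JS_GC_ZEAL=IncrementalMultipleSlices,100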
     989             : 
     990             : // The set of zeal modes that control incremental slices. These modes are
     991             : // mutually exclusive.
     992           3 : static const mozilla::EnumSet<ZealMode> IncrementalSliceZealModes = {
     993             :     ZealMode::IncrementalRootsThenFinish,
     994             :     ZealMode::IncrementalMarkAllThenFinish,
     995             :     ZealMode::IncrementalMultipleSlices,
     996             :     ZealMode::IncrementalSweepThenFinish
     997             : };
     998             : 
     999             : void
    1000           1 : GCRuntime::setZeal(uint8_t zeal, uint32_t frequency)
    1001             : {
    1002           1 :     MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));
    1003             : 
    1004           1 :     if (verifyPreData)
    1005           0 :         VerifyBarriers(rt, PreBarrierVerifier);
    1006             : 
    1007           1 :     if (zeal == 0) {
    1008           1 :         if (hasZealMode(ZealMode::GenerationalGC)) {
    1009           0 :             evictNursery(JS::gcreason::DEBUG_GC);
    1010           0 :             nursery().leaveZealMode();
    1011             :         }
    1012             : 
    1013           1 :         if (isIncrementalGCInProgress())
    1014           0 :             finishGC(JS::gcreason::DEBUG_GC);
    1015             :     }
    1016             : 
    1017           1 :     ZealMode zealMode = ZealMode(zeal);
    1018           1 :     if (zealMode == ZealMode::GenerationalGC) {
    1019           0 :         for (ZoneGroupsIter group(rt); !group.done(); group.next())
    1020           0 :             group->nursery().enterZealMode();
    1021             :     }
    1022             : 
    1023             :     // Some modes are mutually exclusive. If we're setting one of those, we
    1024             :     // first reset all of them.
    1025           1 :     if (IncrementalSliceZealModes.contains(zealMode)) {
    1026           0 :         for (auto mode : IncrementalSliceZealModes)
    1027           0 :             clearZealMode(mode);
    1028             :     }
    1029             : 
    1030           1 :     bool schedule = zealMode >= ZealMode::Alloc;
    1031           1 :     if (zeal != 0)
    1032           0 :         zealModeBits |= 1 << unsigned(zeal);
    1033             :     else
    1034           1 :         zealModeBits = 0;
    1035           1 :     zealFrequency = frequency;
    1036           1 :     nextScheduled = schedule ? frequency : 0;
    1037           1 : }
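                      :
                      : // The zeal modes set above are tracked as bits in zealModeBits. A hedged
                      : // sketch of the accessors setZeal() relies on (hasZealMode, clearZealMode);
                      : // their real definitions live elsewhere in GCRuntime and may differ:
                      : //
                      : //   bool GCRuntime::hasZealMode(ZealMode mode) {
                      : //       return zealModeBits & (1 << unsigned(mode));
                      : //   }
                      : //
                      : //   void GCRuntime::clearZealMode(ZealMode mode) {
                      : //       zealModeBits &= ~(1 << unsigned(mode));
                      : //   }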
    1038             : 
    1039             : void
    1040           0 : GCRuntime::setNextScheduled(uint32_t count)
    1041             : {
    1042           0 :     nextScheduled = count;
    1043           0 : }
    1044             : 
    1045             : bool
    1046           0 : GCRuntime::parseAndSetZeal(const char* str)
    1047             : {
    1048           0 :     int frequency = -1;
    1049           0 :     bool foundFrequency = false;
    1050           0 :     mozilla::Vector<int, 0, SystemAllocPolicy> zeals;
    1051             : 
    1052             :     static const struct {
    1053             :         const char* const zealMode;
    1054             :         size_t length;
    1055             :         uint32_t zeal;
    1056             :     } zealModes[] = {
    1057             : #define ZEAL_MODE(name, value) {#name, sizeof(#name) - 1, value},
    1058             :         JS_FOR_EACH_ZEAL_MODE(ZEAL_MODE)
    1059             : #undef ZEAL_MODE
    1060             :         {"None", 4, 0}
    1061             :     };
    1062             : 
    1063           0 :     do {
    1064           0 :         int zeal = -1;
    1065             : 
    1066           0 :         const char* p = nullptr;
    1067           0 :         if (isdigit(str[0])) {
    1068           0 :             zeal = atoi(str);
    1069             : 
    1070           0 :             size_t offset = strspn(str, "0123456789");
    1071           0 :             p = str + offset;
    1072             :         } else {
    1073           0 :             for (auto z : zealModes) {
    1074           0 :                 if (!strncmp(str, z.zealMode, z.length)) {
    1075           0 :                     zeal = z.zeal;
    1076           0 :                     p = str + z.length;
    1077           0 :                     break;
    1078             :                 }
    1079             :             }
    1080             :         }
    1081           0 :         if (p) {
    1082           0 :             if (!*p || *p == ';') {
    1083           0 :                 frequency = JS_DEFAULT_ZEAL_FREQ;
    1084           0 :             } else if (*p == ',') {
    1085           0 :                 frequency = atoi(p + 1);
    1086           0 :                 foundFrequency = true;
    1087             :             }
    1088             :         }
    1089             : 
    1090           0 :         if (zeal < 0 || zeal > int(ZealMode::Limit) || frequency <= 0) {
    1091           0 :             fprintf(stderr, "Format: JS_GC_ZEAL=level(;level)*[,N]\n");
    1092           0 :             fputs(ZealModeHelpText, stderr);
    1093           0 :             return false;
    1094             :         }
    1095             : 
    1096           0 :         if (!zeals.emplaceBack(zeal)) {
    1097           0 :             return false;
    1098             :         }
    1099           0 :     } while (!foundFrequency &&
    1100           0 :              (str = strchr(str, ';')) != nullptr &&
    1101           0 :              str++);
    1102             : 
    1103           0 :     for (auto z : zeals)
    1104           0 :         setZeal(z, frequency);
    1105           0 :     return true;
    1106             : }
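                      :
                      : // For illustration only, some strings parseAndSetZeal() accepts (normally
                      : // supplied through the JS_GC_ZEAL environment variable; values here are
                      : // examples, not recommendations):
                      : //
                      : //   "2"                              mode 2 (Alloc), default frequency
                      : //   "2;4"                            modes 2 and 4, default frequency
                      : //   "IncrementalMultipleSlices,50"   mode given by name, frequency 50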
    1107             : 
    1108             : static const char*
    1109           0 : AllocKindName(AllocKind kind)
    1110             : {
    1111             :     static const char* names[] = {
    1112             : #define EXPAND_THING_NAME(allocKind, _1, _2, _3) \
    1113             :         #allocKind,
    1114             : FOR_EACH_ALLOCKIND(EXPAND_THING_NAME)
    1115             : #undef EXPAND_THING_NAME
    1116             :     };
    1117             :     static_assert(ArrayLength(names) == size_t(AllocKind::LIMIT),
    1118             :                   "names array should have an entry for every AllocKind");
    1119             : 
    1120           0 :     size_t i = size_t(kind);
    1121           0 :     MOZ_ASSERT(i < ArrayLength(names));
    1122           0 :     return names[i];
    1123             : }
    1124             : 
    1125             : void
    1126           0 : js::gc::DumpArenaInfo()
    1127             : {
    1128           0 :     fprintf(stderr, "Arena header size: %" PRIuSIZE "\n\n", ArenaHeaderSize);
    1129             : 
    1130           0 :     fprintf(stderr, "GC thing kinds:\n");
    1131           0 :     fprintf(stderr, "%25s %8s %8s %8s\n", "AllocKind:", "Size:", "Count:", "Padding:");
    1132           0 :     for (auto kind : AllAllocKinds()) {
    1133           0 :         fprintf(stderr,
    1134             :                 "%25s %8" PRIuSIZE " %8" PRIuSIZE " %8" PRIuSIZE "\n",
    1135             :                 AllocKindName(kind),
    1136             :                 Arena::thingSize(kind),
    1137             :                 Arena::thingsPerArena(kind),
    1138           0 :                 Arena::firstThingOffset(kind) - ArenaHeaderSize);
    1139             :     }
    1140           0 : }
    1141             : 
    1142             : #endif // JS_GC_ZEAL
    1143             : 
    1144             : /*
     1145             :  * Lifetime, in number of major GCs, of type sets attached to scripts
     1146             :  * containing observed types.
    1147             :  */
    1148             : static const uint64_t JIT_SCRIPT_RELEASE_TYPES_PERIOD = 20;
    1149             : 
    1150             : bool
    1151           4 : GCRuntime::init(uint32_t maxbytes, uint32_t maxNurseryBytes)
    1152             : {
    1153           4 :     MOZ_ASSERT(SystemPageSize());
    1154             : 
    1155           4 :     if (!rootsHash.ref().init(256))
    1156           0 :         return false;
    1157             : 
    1158             :     {
    1159           8 :         AutoLockGC lock(rt);
    1160             : 
    1161             :         /*
     1162             :          * gcMaxMallocBytes is tracked separately from gcMaxBytes, but is
     1163             :          * initialized to maxbytes by default for backward API compatibility.
    1164             :          */
    1165           4 :         MOZ_ALWAYS_TRUE(tunables.setParameter(JSGC_MAX_BYTES, maxbytes, lock));
    1166           4 :         MOZ_ALWAYS_TRUE(tunables.setParameter(JSGC_MAX_NURSERY_BYTES, maxNurseryBytes, lock));
    1167           4 :         setMaxMallocBytes(maxbytes);
    1168             : 
    1169           4 :         const char* size = getenv("JSGC_MARK_STACK_LIMIT");
    1170           4 :         if (size)
    1171           0 :             setMarkStackLimit(atoi(size), lock);
    1172             : 
    1173           4 :         jitReleaseNumber = majorGCNumber + JIT_SCRIPT_RELEASE_TYPES_PERIOD;
    1174             : 
    1175           4 :         if (!nursery().init(maxNurseryBytes, lock))
    1176           0 :             return false;
    1177             :     }
    1178             : 
    1179             : #ifdef JS_GC_ZEAL
    1180           4 :     const char* zealSpec = getenv("JS_GC_ZEAL");
    1181           4 :     if (zealSpec && zealSpec[0] && !parseAndSetZeal(zealSpec))
    1182           0 :         return false;
    1183             : #endif
    1184             : 
    1185           4 :     if (!InitTrace(*this))
    1186           0 :         return false;
    1187             : 
    1188           4 :     if (!marker.init(mode))
    1189           0 :         return false;
    1190             : 
    1191           4 :     return true;
    1192             : }
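                      :
                      : // Both environment variables consulted above can be set when launching an
                      : // embedding or the JS shell; JS_GC_ZEAL is honored only in JS_GC_ZEAL
                      : // builds. An illustrative invocation (values are arbitrary):
                      : //
                      : //   JSGC_MARK_STACK_LIMIT=100000 JS_GC_ZEAL="10,5" ./js script.js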
    1193             : 
    1194             : void
    1195           0 : GCRuntime::finish()
    1196             : {
     1197             :     /* Wait for background nursery freeing to finish, then disable the nursery to release its memory. */
    1198           0 :     if (nursery().isEnabled()) {
    1199           0 :         nursery().waitBackgroundFreeEnd();
    1200           0 :         nursery().disable();
    1201             :     }
    1202             : 
    1203             :     /*
    1204             :      * Wait until the background finalization and allocation stops and the
     1205             :      * Wait until background finalization and allocation stop and the
    1206             :      * memory.
    1207             :      */
    1208           0 :     helperState.finish();
    1209           0 :     allocTask.cancel(GCParallelTask::CancelAndWait);
    1210           0 :     decommitTask.cancel(GCParallelTask::CancelAndWait);
    1211             : 
    1212             : #ifdef JS_GC_ZEAL
    1213             :     /* Free memory associated with GC verification. */
    1214           0 :     finishVerifier();
    1215             : #endif
    1216             : 
    1217             :     /* Delete all remaining zones. */
    1218           0 :     if (rt->gcInitialized) {
    1219           0 :         AutoSetThreadIsSweeping threadIsSweeping;
    1220           0 :         for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    1221           0 :             for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next())
    1222           0 :                 js_delete(comp.get());
    1223           0 :             js_delete(zone.get());
    1224             :         }
    1225             :     }
    1226             : 
    1227           0 :     groups.ref().clear();
    1228             : 
    1229           0 :     FreeChunkPool(rt, fullChunks_.ref());
    1230           0 :     FreeChunkPool(rt, availableChunks_.ref());
    1231           0 :     FreeChunkPool(rt, emptyChunks_.ref());
    1232             : 
    1233           0 :     FinishTrace();
    1234             : 
    1235           0 :     for (ZoneGroupsIter group(rt); !group.done(); group.next())
    1236           0 :         group->nursery().printTotalProfileTimes();
    1237           0 :     stats().printTotalProfileTimes();
    1238           0 : }
    1239             : 
    1240             : bool
    1241          50 : GCRuntime::setParameter(JSGCParamKey key, uint32_t value, AutoLockGC& lock)
    1242             : {
    1243          50 :     switch (key) {
    1244             :       case JSGC_MAX_MALLOC_BYTES:
    1245           3 :         setMaxMallocBytes(value);
    1246          23 :         for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
    1247          20 :             zone->setGCMaxMallocBytes(maxMallocBytesAllocated() * 0.9);
    1248           3 :         break;
    1249             :       case JSGC_SLICE_TIME_BUDGET:
    1250           3 :         defaultTimeBudget_ = value ? value : SliceBudget::UnlimitedTimeBudget;
    1251           3 :         break;
    1252             :       case JSGC_MARK_STACK_LIMIT:
    1253           0 :         if (value == 0)
    1254           0 :             return false;
    1255           0 :         setMarkStackLimit(value, lock);
    1256           0 :         break;
    1257             :       case JSGC_MODE:
    1258          20 :         if (mode != JSGC_MODE_GLOBAL &&
    1259          22 :             mode != JSGC_MODE_ZONE &&
    1260           8 :             mode != JSGC_MODE_INCREMENTAL)
    1261             :         {
    1262           0 :             return false;
    1263             :         }
    1264           4 :         mode = JSGCMode(value);
    1265           4 :         break;
    1266             :       case JSGC_COMPACTING_ENABLED:
    1267           2 :         compactingEnabled = value != 0;
    1268           2 :         break;
    1269             :       default:
    1270          38 :         if (!tunables.setParameter(key, value, lock))
    1271           0 :             return false;
    1272         297 :         for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    1273         259 :             zone->threshold.updateAfterGC(zone->usage.gcBytes(), GC_NORMAL, tunables,
    1274         259 :                                           schedulingState, lock);
    1275             :         }
    1276             :     }
    1277             : 
    1278          50 :     return true;
    1279             : }
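                      :
                      : // Embedders normally reach setParameter() through the public API. A hedged
                      : // sketch, assuming the JSAPI of this vintage where JS_SetGCParameter takes
                      : // a JSContext*:
                      : //
                      : //   JS_SetGCParameter(cx, JSGC_MODE, JSGC_MODE_INCREMENTAL);
                      : //   JS_SetGCParameter(cx, JSGC_SLICE_TIME_BUDGET, 10);   // 10 ms slices
                      : //   JS_SetGCParameter(cx, JSGC_COMPACTING_ENABLED, 1);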
    1280             : 
    1281             : bool
    1282          46 : GCSchedulingTunables::setParameter(JSGCParamKey key, uint32_t value, const AutoLockGC& lock)
    1283             : {
     1284             :     // Limit the heap growth factor to one hundred times the current heap size.
    1285          46 :     const double MaxHeapGrowthFactor = 100;
    1286             : 
     1287          46 :     switch (key) {
    1288             :       case JSGC_MAX_BYTES:
    1289          10 :         gcMaxBytes_ = value;
    1290          10 :         break;
    1291             :       case JSGC_MAX_NURSERY_BYTES:
    1292           4 :         gcMaxNurseryBytes_ = value;
    1293           4 :         break;
    1294             :       case JSGC_HIGH_FREQUENCY_TIME_LIMIT:
    1295           3 :         highFrequencyThresholdUsec_ = value * PRMJ_USEC_PER_MSEC;
    1296           3 :         break;
    1297             :       case JSGC_HIGH_FREQUENCY_LOW_LIMIT: {
    1298           3 :         uint64_t newLimit = (uint64_t)value * 1024 * 1024;
    1299           3 :         if (newLimit == UINT64_MAX)
    1300           0 :             return false;
    1301           3 :         highFrequencyLowLimitBytes_ = newLimit;
    1302           3 :         if (highFrequencyLowLimitBytes_ >= highFrequencyHighLimitBytes_)
    1303           0 :             highFrequencyHighLimitBytes_ = highFrequencyLowLimitBytes_ + 1;
    1304           3 :         MOZ_ASSERT(highFrequencyHighLimitBytes_ > highFrequencyLowLimitBytes_);
    1305           3 :         break;
    1306             :       }
    1307             :       case JSGC_HIGH_FREQUENCY_HIGH_LIMIT: {
    1308           3 :         uint64_t newLimit = (uint64_t)value * 1024 * 1024;
    1309           3 :         if (newLimit == 0)
    1310           0 :             return false;
    1311           3 :         highFrequencyHighLimitBytes_ = newLimit;
    1312           3 :         if (highFrequencyHighLimitBytes_ <= highFrequencyLowLimitBytes_)
    1313           0 :             highFrequencyLowLimitBytes_ = highFrequencyHighLimitBytes_ - 1;
    1314           3 :         MOZ_ASSERT(highFrequencyHighLimitBytes_ > highFrequencyLowLimitBytes_);
    1315           3 :         break;
    1316             :       }
    1317             :       case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MAX: {
    1318           3 :         double newGrowth = value / 100.0;
    1319           3 :         if (newGrowth <= 0.85 || newGrowth > MaxHeapGrowthFactor)
    1320           0 :             return false;
    1321           3 :         highFrequencyHeapGrowthMax_ = newGrowth;
    1322           3 :         MOZ_ASSERT(highFrequencyHeapGrowthMax_ / 0.85 > 1.0);
    1323           3 :         break;
    1324             :       }
    1325             :       case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MIN: {
    1326           3 :         double newGrowth = value / 100.0;
    1327           3 :         if (newGrowth <= 0.85 || newGrowth > MaxHeapGrowthFactor)
    1328           0 :             return false;
    1329           3 :         highFrequencyHeapGrowthMin_ = newGrowth;
    1330           3 :         MOZ_ASSERT(highFrequencyHeapGrowthMin_ / 0.85 > 1.0);
    1331           3 :         break;
    1332             :       }
    1333             :       case JSGC_LOW_FREQUENCY_HEAP_GROWTH: {
    1334           3 :         double newGrowth = value / 100.0;
    1335           3 :         if (newGrowth <= 0.9 || newGrowth > MaxHeapGrowthFactor)
    1336           0 :             return false;
    1337           3 :         lowFrequencyHeapGrowth_ = newGrowth;
    1338           3 :         MOZ_ASSERT(lowFrequencyHeapGrowth_ / 0.9 > 1.0);
    1339           3 :         break;
    1340             :       }
    1341             :       case JSGC_DYNAMIC_HEAP_GROWTH:
    1342           2 :         dynamicHeapGrowthEnabled_ = value != 0;
    1343           2 :         break;
    1344             :       case JSGC_DYNAMIC_MARK_SLICE:
    1345           2 :         dynamicMarkSliceEnabled_ = value != 0;
    1346           2 :         break;
    1347             :       case JSGC_ALLOCATION_THRESHOLD:
    1348           3 :         gcZoneAllocThresholdBase_ = value * 1024 * 1024;
    1349           3 :         break;
    1350             :       case JSGC_MIN_EMPTY_CHUNK_COUNT:
    1351           3 :         minEmptyChunkCount_ = value;
    1352           3 :         if (minEmptyChunkCount_ > maxEmptyChunkCount_)
    1353           0 :             maxEmptyChunkCount_ = minEmptyChunkCount_;
    1354           3 :         MOZ_ASSERT(maxEmptyChunkCount_ >= minEmptyChunkCount_);
    1355           3 :         break;
    1356             :       case JSGC_MAX_EMPTY_CHUNK_COUNT:
    1357           2 :         maxEmptyChunkCount_ = value;
    1358           2 :         if (minEmptyChunkCount_ > maxEmptyChunkCount_)
    1359           0 :             minEmptyChunkCount_ = maxEmptyChunkCount_;
    1360           2 :         MOZ_ASSERT(maxEmptyChunkCount_ >= minEmptyChunkCount_);
    1361           2 :         break;
    1362             :       case JSGC_REFRESH_FRAME_SLICES_ENABLED:
    1363           2 :         refreshFrameSlicesEnabled_ = value != 0;
    1364           2 :         break;
    1365             :       default:
    1366           0 :         MOZ_CRASH("Unknown GC parameter.");
    1367             :     }
    1368             : 
    1369          46 :     return true;
    1370             : }
    1371             : 
    1372             : uint32_t
    1373           1 : GCRuntime::getParameter(JSGCParamKey key, const AutoLockGC& lock)
    1374             : {
    1375           1 :     switch (key) {
    1376             :       case JSGC_MAX_BYTES:
    1377           0 :         return uint32_t(tunables.gcMaxBytes());
    1378             :       case JSGC_MAX_MALLOC_BYTES:
    1379           0 :         return mallocCounter.maxBytes();
    1380             :       case JSGC_BYTES:
    1381           0 :         return uint32_t(usage.gcBytes());
    1382             :       case JSGC_MODE:
    1383           0 :         return uint32_t(mode);
    1384             :       case JSGC_UNUSED_CHUNKS:
    1385           0 :         return uint32_t(emptyChunks(lock).count());
    1386             :       case JSGC_TOTAL_CHUNKS:
    1387           0 :         return uint32_t(fullChunks(lock).count() +
    1388           0 :                         availableChunks(lock).count() +
    1389           0 :                         emptyChunks(lock).count());
    1390             :       case JSGC_SLICE_TIME_BUDGET:
    1391           0 :         if (defaultTimeBudget_.ref() == SliceBudget::UnlimitedTimeBudget) {
    1392           0 :             return 0;
    1393             :         } else {
    1394           0 :             MOZ_RELEASE_ASSERT(defaultTimeBudget_ >= 0);
    1395           0 :             MOZ_RELEASE_ASSERT(defaultTimeBudget_ <= UINT32_MAX);
    1396           0 :             return uint32_t(defaultTimeBudget_);
    1397             :         }
    1398             :       case JSGC_MARK_STACK_LIMIT:
    1399           0 :         return marker.maxCapacity();
    1400             :       case JSGC_HIGH_FREQUENCY_TIME_LIMIT:
    1401           0 :         return tunables.highFrequencyThresholdUsec() / PRMJ_USEC_PER_MSEC;
    1402             :       case JSGC_HIGH_FREQUENCY_LOW_LIMIT:
    1403           0 :         return tunables.highFrequencyLowLimitBytes() / 1024 / 1024;
    1404             :       case JSGC_HIGH_FREQUENCY_HIGH_LIMIT:
    1405           0 :         return tunables.highFrequencyHighLimitBytes() / 1024 / 1024;
    1406             :       case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MAX:
    1407           0 :         return uint32_t(tunables.highFrequencyHeapGrowthMax() * 100);
    1408             :       case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MIN:
    1409           0 :         return uint32_t(tunables.highFrequencyHeapGrowthMin() * 100);
    1410             :       case JSGC_LOW_FREQUENCY_HEAP_GROWTH:
    1411           0 :         return uint32_t(tunables.lowFrequencyHeapGrowth() * 100);
    1412             :       case JSGC_DYNAMIC_HEAP_GROWTH:
    1413           0 :         return tunables.isDynamicHeapGrowthEnabled();
    1414             :       case JSGC_DYNAMIC_MARK_SLICE:
    1415           0 :         return tunables.isDynamicMarkSliceEnabled();
    1416             :       case JSGC_ALLOCATION_THRESHOLD:
    1417           0 :         return tunables.gcZoneAllocThresholdBase() / 1024 / 1024;
    1418             :       case JSGC_MIN_EMPTY_CHUNK_COUNT:
    1419           0 :         return tunables.minEmptyChunkCount(lock);
    1420             :       case JSGC_MAX_EMPTY_CHUNK_COUNT:
    1421           0 :         return tunables.maxEmptyChunkCount();
    1422             :       case JSGC_COMPACTING_ENABLED:
    1423           0 :         return compactingEnabled;
    1424             :       case JSGC_REFRESH_FRAME_SLICES_ENABLED:
    1425           0 :         return tunables.areRefreshFrameSlicesEnabled();
    1426             :       default:
    1427           1 :         MOZ_ASSERT(key == JSGC_NUMBER);
    1428           1 :         return uint32_t(number);
    1429             :     }
    1430             : }
    1431             : 
    1432             : void
    1433           0 : GCRuntime::setMarkStackLimit(size_t limit, AutoLockGC& lock)
    1434             : {
    1435           0 :     MOZ_ASSERT(!JS::CurrentThreadIsHeapBusy());
    1436           0 :     AutoUnlockGC unlock(lock);
    1437           0 :     AutoStopVerifyingBarriers pauseVerification(rt, false);
    1438           0 :     marker.setMaxCapacity(limit);
    1439           0 : }
    1440             : 
    1441             : bool
    1442          11 : GCRuntime::addBlackRootsTracer(JSTraceDataOp traceOp, void* data)
    1443             : {
    1444          11 :     AssertHeapIsIdle();
    1445          11 :     return !!blackRootTracers.ref().append(Callback<JSTraceDataOp>(traceOp, data));
    1446             : }
    1447             : 
    1448             : void
    1449           0 : GCRuntime::removeBlackRootsTracer(JSTraceDataOp traceOp, void* data)
    1450             : {
    1451             :     // Can be called from finalizers
    1452           0 :     for (size_t i = 0; i < blackRootTracers.ref().length(); i++) {
    1453           0 :         Callback<JSTraceDataOp>* e = &blackRootTracers.ref()[i];
    1454           0 :         if (e->op == traceOp && e->data == data) {
    1455           0 :             blackRootTracers.ref().erase(e);
    1456             :         }
    1457             :     }
    1458           0 : }
    1459             : 
    1460             : void
    1461           4 : GCRuntime::setGrayRootsTracer(JSTraceDataOp traceOp, void* data)
    1462             : {
    1463           4 :     AssertHeapIsIdle();
    1464           4 :     grayRootTracer.op = traceOp;
    1465           4 :     grayRootTracer.data = data;
    1466           4 : }
    1467             : 
    1468             : void
    1469           4 : GCRuntime::setGCCallback(JSGCCallback callback, void* data)
    1470             : {
    1471           4 :     gcCallback.op = callback;
    1472           4 :     gcCallback.data = data;
    1473           4 : }
    1474             : 
    1475             : void
    1476           1 : GCRuntime::callGCCallback(JSGCStatus status) const
    1477             : {
    1478           1 :     if (gcCallback.op)
    1479           1 :         gcCallback.op(TlsContext.get(), status, gcCallback.data);
    1480           1 : }
    1481             : 
    1482             : void
    1483           4 : GCRuntime::setObjectsTenuredCallback(JSObjectsTenuredCallback callback,
    1484             :                                      void* data)
    1485             : {
    1486           4 :     tenuredCallback.op = callback;
    1487           4 :     tenuredCallback.data = data;
    1488           4 : }
    1489             : 
    1490             : void
    1491          21 : GCRuntime::callObjectsTenuredCallback()
    1492             : {
    1493          21 :     if (tenuredCallback.op)
    1494          21 :         tenuredCallback.op(TlsContext.get(), tenuredCallback.data);
    1495          21 : }
    1496             : 
    1497             : namespace {
    1498             : 
    1499             : class AutoNotifyGCActivity {
    1500             :   public:
    1501           3 :     explicit AutoNotifyGCActivity(GCRuntime& gc) : gc_(gc) {
    1502           3 :         if (!gc_.isIncrementalGCInProgress())
    1503           1 :             gc_.callGCCallback(JSGC_BEGIN);
    1504           3 :     }
    1505           6 :     ~AutoNotifyGCActivity() {
    1506           3 :         if (!gc_.isIncrementalGCInProgress())
    1507           0 :             gc_.callGCCallback(JSGC_END);
    1508           3 :     }
    1509             : 
    1510             :   private:
    1511             :     GCRuntime& gc_;
    1512             : };
    1513             : 
    1514             : } // (anon)
    1515             : 
    1516             : bool
    1517           3 : GCRuntime::addFinalizeCallback(JSFinalizeCallback callback, void* data)
    1518             : {
    1519           3 :     return finalizeCallbacks.ref().append(Callback<JSFinalizeCallback>(callback, data));
    1520             : }
    1521             : 
    1522             : void
    1523           0 : GCRuntime::removeFinalizeCallback(JSFinalizeCallback callback)
    1524             : {
    1525           0 :     for (Callback<JSFinalizeCallback>* p = finalizeCallbacks.ref().begin();
    1526           0 :          p < finalizeCallbacks.ref().end(); p++)
    1527             :     {
    1528           0 :         if (p->op == callback) {
    1529           0 :             finalizeCallbacks.ref().erase(p);
    1530           0 :             break;
    1531             :         }
    1532             :     }
    1533           0 : }
    1534             : 
    1535             : void
    1536           0 : GCRuntime::callFinalizeCallbacks(FreeOp* fop, JSFinalizeStatus status) const
    1537             : {
    1538           0 :     for (auto& p : finalizeCallbacks.ref())
    1539           0 :         p.op(fop, status, !isFull, p.data);
    1540           0 : }
    1541             : 
    1542             : bool
    1543           5 : GCRuntime::addWeakPointerZonesCallback(JSWeakPointerZonesCallback callback, void* data)
    1544             : {
    1545           5 :     return updateWeakPointerZonesCallbacks.ref().append(
    1546          10 :             Callback<JSWeakPointerZonesCallback>(callback, data));
    1547             : }
    1548             : 
    1549             : void
    1550           0 : GCRuntime::removeWeakPointerZonesCallback(JSWeakPointerZonesCallback callback)
    1551             : {
    1552           0 :     for (auto& p : updateWeakPointerZonesCallbacks.ref()) {
    1553           0 :         if (p.op == callback) {
    1554           0 :             updateWeakPointerZonesCallbacks.ref().erase(&p);
    1555           0 :             break;
    1556             :         }
    1557             :     }
    1558           0 : }
    1559             : 
    1560             : void
    1561           0 : GCRuntime::callWeakPointerZonesCallbacks() const
    1562             : {
    1563           0 :     for (auto const& p : updateWeakPointerZonesCallbacks.ref())
    1564           0 :         p.op(TlsContext.get(), p.data);
    1565           0 : }
    1566             : 
    1567             : bool
    1568           3 : GCRuntime::addWeakPointerCompartmentCallback(JSWeakPointerCompartmentCallback callback, void* data)
    1569             : {
    1570           3 :     return updateWeakPointerCompartmentCallbacks.ref().append(
    1571           6 :             Callback<JSWeakPointerCompartmentCallback>(callback, data));
    1572             : }
    1573             : 
    1574             : void
    1575           0 : GCRuntime::removeWeakPointerCompartmentCallback(JSWeakPointerCompartmentCallback callback)
    1576             : {
    1577           0 :     for (auto& p : updateWeakPointerCompartmentCallbacks.ref()) {
    1578           0 :         if (p.op == callback) {
    1579           0 :             updateWeakPointerCompartmentCallbacks.ref().erase(&p);
    1580           0 :             break;
    1581             :         }
    1582             :     }
    1583           0 : }
    1584             : 
    1585             : void
    1586           0 : GCRuntime::callWeakPointerCompartmentCallbacks(JSCompartment* comp) const
    1587             : {
    1588           0 :     for (auto const& p : updateWeakPointerCompartmentCallbacks.ref())
    1589           0 :         p.op(TlsContext.get(), comp, p.data);
    1590           0 : }
    1591             : 
    1592             : JS::GCSliceCallback
    1593           9 : GCRuntime::setSliceCallback(JS::GCSliceCallback callback) {
    1594           9 :     return stats().setSliceCallback(callback);
    1595             : }
    1596             : 
    1597             : JS::GCNurseryCollectionCallback
    1598           3 : GCRuntime::setNurseryCollectionCallback(JS::GCNurseryCollectionCallback callback) {
    1599           3 :     return stats().setNurseryCollectionCallback(callback);
    1600             : }
    1601             : 
    1602             : JS::DoCycleCollectionCallback
    1603           3 : GCRuntime::setDoCycleCollectionCallback(JS::DoCycleCollectionCallback callback)
    1604             : {
    1605           3 :     auto prior = gcDoCycleCollectionCallback;
    1606           3 :     gcDoCycleCollectionCallback = Callback<JS::DoCycleCollectionCallback>(callback, nullptr);
    1607           3 :     return prior.op;
    1608             : }
    1609             : 
    1610             : void
    1611           0 : GCRuntime::callDoCycleCollectionCallback(JSContext* cx)
    1612             : {
    1613           0 :     if (gcDoCycleCollectionCallback.op)
    1614           0 :         gcDoCycleCollectionCallback.op(cx);
    1615           0 : }
    1616             : 
    1617             : bool
    1618        2926 : GCRuntime::addRoot(Value* vp, const char* name)
    1619             : {
    1620             :     /*
    1621             :      * Sometimes Firefox will hold weak references to objects and then convert
    1622             :      * them to strong references by calling AddRoot (e.g., via PreserveWrapper,
    1623             :      * or ModifyBusyCount in workers). We need a read barrier to cover these
    1624             :      * cases.
    1625             :      */
    1626        2926 :     if (isIncrementalGCInProgress())
    1627          53 :         GCPtrValue::writeBarrierPre(*vp);
    1628             : 
    1629        2926 :     return rootsHash.ref().put(vp, name);
    1630             : }
    1631             : 
    1632             : void
    1633        2926 : GCRuntime::removeRoot(Value* vp)
    1634             : {
    1635        2926 :     rootsHash.ref().remove(vp);
    1636        2926 :     notifyRootsRemoved();
    1637        2926 : }
    1638             : 
    1639             : extern JS_FRIEND_API(bool)
    1640        2926 : js::AddRawValueRoot(JSContext* cx, Value* vp, const char* name)
    1641             : {
    1642        2926 :     MOZ_ASSERT(vp);
    1643        2926 :     MOZ_ASSERT(name);
    1644        2926 :     bool ok = cx->runtime()->gc.addRoot(vp, name);
    1645        2926 :     if (!ok)
    1646           0 :         JS_ReportOutOfMemory(cx);
    1647        2926 :     return ok;
    1648             : }
    1649             : 
    1650             : extern JS_FRIEND_API(void)
    1651        2926 : js::RemoveRawValueRoot(JSContext* cx, Value* vp)
    1652             : {
    1653        2926 :     cx->runtime()->gc.removeRoot(vp);
    1654        2926 : }
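                      :
                      : // A usage sketch of the two friend APIs above (error handling elided;
                      : // AddRawValueRoot reports OOM on failure):
                      : //
                      : //   JS::Value v = JS::ObjectValue(*someObject);   // hypothetical value
                      : //   if (!js::AddRawValueRoot(cx, &v, "my-strong-root"))
                      : //       return false;
                      : //   // ... v is traced as a strong root until removed ...
                      : //   js::RemoveRawValueRoot(cx, &v);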
    1655             : 
    1656             : void
    1657           7 : GCRuntime::setMaxMallocBytes(size_t value)
    1658             : {
    1659           7 :     mallocCounter.setMax(value);
    1660          27 :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
    1661          20 :         zone->setGCMaxMallocBytes(value);
    1662           7 : }
    1663             : 
    1664             : void
    1665       49474 : GCRuntime::updateMallocCounter(JS::Zone* zone, size_t nbytes)
    1666             : {
    1667       49474 :     bool triggered = mallocCounter.update(this, nbytes);
    1668       49474 :     if (!triggered && zone)
    1669       47630 :         zone->updateMallocCounter(nbytes);
    1670       49474 : }
    1671             : 
    1672             : double
    1673        3926 : ZoneHeapThreshold::allocTrigger(bool highFrequencyGC) const
    1674             : {
    1675        3926 :     return (highFrequencyGC ? 0.85 : 0.9) * gcTriggerBytes();
    1676             : }
    1677             : 
    1678             : /* static */ double
    1679         290 : ZoneHeapThreshold::computeZoneHeapGrowthFactorForHeapSize(size_t lastBytes,
    1680             :                                                           const GCSchedulingTunables& tunables,
    1681             :                                                           const GCSchedulingState& state)
    1682             : {
    1683         290 :     if (!tunables.isDynamicHeapGrowthEnabled())
    1684         115 :         return 3.0;
    1685             : 
    1686             :     // For small zones, our collection heuristics do not matter much: favor
    1687             :     // something simple in this case.
    1688         175 :     if (lastBytes < 1 * 1024 * 1024)
    1689         157 :         return tunables.lowFrequencyHeapGrowth();
    1690             : 
     1691             :     // If GCs are not triggering in rapid succession, use a lower threshold so
    1692             :     // that we will collect garbage sooner.
    1693          18 :     if (!state.inHighFrequencyGCMode())
    1694          18 :         return tunables.lowFrequencyHeapGrowth();
    1695             : 
    1696             :     // The heap growth factor depends on the heap size after a GC and the GC
     1697             :     // frequency. For low frequency GCs (more than 1 second between GCs) we let
    1698             :     // the heap grow to 150%. For high frequency GCs we let the heap grow
    1699             :     // depending on the heap size:
    1700             :     //   lastBytes < highFrequencyLowLimit: 300%
    1701             :     //   lastBytes > highFrequencyHighLimit: 150%
    1702             :     //   otherwise: linear interpolation between 300% and 150% based on lastBytes
    1703             : 
    1704             :     // Use shorter names to make the operation comprehensible.
    1705           0 :     double minRatio = tunables.highFrequencyHeapGrowthMin();
    1706           0 :     double maxRatio = tunables.highFrequencyHeapGrowthMax();
    1707           0 :     double lowLimit = tunables.highFrequencyLowLimitBytes();
    1708           0 :     double highLimit = tunables.highFrequencyHighLimitBytes();
    1709             : 
    1710           0 :     if (lastBytes <= lowLimit)
    1711           0 :         return maxRatio;
    1712             : 
    1713           0 :     if (lastBytes >= highLimit)
    1714           0 :         return minRatio;
    1715             : 
    1716           0 :     double factor = maxRatio - ((maxRatio - minRatio) * ((lastBytes - lowLimit) /
    1717           0 :                                                          (highLimit - lowLimit)));
    1718           0 :     MOZ_ASSERT(factor >= minRatio);
    1719           0 :     MOZ_ASSERT(factor <= maxRatio);
    1720           0 :     return factor;
    1721             : }
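                      :
                      : // A worked example of the interpolation above, with illustrative tunables:
                      : // minRatio = 1.5, maxRatio = 3.0, lowLimit = 100 MB, highLimit = 500 MB
                      : // and lastBytes = 300 MB give
                      : //
                      : //   factor = 3.0 - (3.0 - 1.5) * ((300 - 100) / (500 - 100))
                      : //          = 3.0 - 1.5 * 0.5
                      : //          = 2.25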
    1722             : 
    1723             : /* static */ size_t
    1724         290 : ZoneHeapThreshold::computeZoneTriggerBytes(double growthFactor, size_t lastBytes,
    1725             :                                            JSGCInvocationKind gckind,
    1726             :                                            const GCSchedulingTunables& tunables,
    1727             :                                            const AutoLockGC& lock)
    1728             : {
    1729             :     size_t base = gckind == GC_SHRINK
    1730         580 :                 ? Max(lastBytes, tunables.minEmptyChunkCount(lock) * ChunkSize)
    1731         580 :                 : Max(lastBytes, tunables.gcZoneAllocThresholdBase());
    1732         290 :     double trigger = double(base) * growthFactor;
    1733         290 :     return size_t(Min(double(tunables.gcMaxBytes()), trigger));
    1734             : }
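                      :
                      : // E.g. (illustrative numbers): for a non-shrinking GC with lastBytes = 4 MB,
                      : // gcZoneAllocThresholdBase = 30 MB and growthFactor = 3.0, the base is
                      : // Max(4 MB, 30 MB) = 30 MB, so the trigger is Min(gcMaxBytes, 90 MB).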
    1735             : 
    1736             : void
    1737         290 : ZoneHeapThreshold::updateAfterGC(size_t lastBytes, JSGCInvocationKind gckind,
    1738             :                                  const GCSchedulingTunables& tunables,
    1739             :                                  const GCSchedulingState& state, const AutoLockGC& lock)
    1740             : {
    1741         290 :     gcHeapGrowthFactor_ = computeZoneHeapGrowthFactorForHeapSize(lastBytes, tunables, state);
    1742         290 :     gcTriggerBytes_ = computeZoneTriggerBytes(gcHeapGrowthFactor_, lastBytes, gckind, tunables,
    1743         580 :                                               lock);
    1744         290 : }
    1745             : 
    1746             : void
    1747           0 : ZoneHeapThreshold::updateForRemovedArena(const GCSchedulingTunables& tunables)
    1748             : {
    1749           0 :     size_t amount = ArenaSize * gcHeapGrowthFactor_;
    1750           0 :     MOZ_ASSERT(amount > 0);
    1751             : 
    1752           0 :     if ((gcTriggerBytes_ < amount) ||
    1753           0 :         (gcTriggerBytes_ - amount < tunables.gcZoneAllocThresholdBase() * gcHeapGrowthFactor_))
    1754             :     {
    1755           0 :         return;
    1756             :     }
    1757             : 
    1758           0 :     gcTriggerBytes_ -= amount;
    1759             : }
    1760             : 
    1761             : void
    1762         130 : GCMarker::delayMarkingArena(Arena* arena)
    1763             : {
    1764         130 :     if (arena->hasDelayedMarking) {
    1765             :         /* Arena already scheduled to be marked later */
    1766           0 :         return;
    1767             :     }
    1768         130 :     arena->setNextDelayedMarking(unmarkedArenaStackTop);
    1769         130 :     unmarkedArenaStackTop = arena;
    1770             : #ifdef DEBUG
    1771         130 :     markLaterArenas++;
    1772             : #endif
    1773             : }
    1774             : 
    1775             : void
    1776           0 : GCMarker::delayMarkingChildren(const void* thing)
    1777             : {
    1778           0 :     const TenuredCell* cell = TenuredCell::fromPointer(thing);
    1779           0 :     cell->arena()->markOverflow = 1;
    1780           0 :     delayMarkingArena(cell->arena());
    1781           0 : }
    1782             : 
    1783             : inline void
    1784          16 : ArenaLists::prepareForIncrementalGC()
    1785             : {
    1786          16 :     purge();
    1787         480 :     for (auto i : AllAllocKinds())
    1788         464 :         arenaLists(i).moveCursorToEnd();
    1789          16 : }
    1790             : 
    1791             : /* Compacting GC */
    1792             : 
    1793             : bool
    1794           1 : GCRuntime::shouldCompact()
    1795             : {
    1796             :     // Compact on shrinking GC if enabled, but skip compacting in incremental
    1797             :     // GCs if we are currently animating.
    1798           3 :     return invocationKind == GC_SHRINK && isCompactingGCEnabled() &&
    1799           2 :         (!isIncremental || rt->lastAnimationTime + PRMJ_USEC_PER_SEC < PRMJ_Now());
    1800             : }
    1801             : 
    1802             : bool
    1803           0 : GCRuntime::isCompactingGCEnabled() const
    1804             : {
    1805           0 :     return compactingEnabled && TlsContext.get()->compactingDisabledCount == 0;
    1806             : }
    1807             : 
    1808           2 : AutoDisableCompactingGC::AutoDisableCompactingGC(JSContext* cx)
    1809           2 :   : cx(cx)
    1810             : {
    1811           2 :     ++cx->compactingDisabledCount;
    1812           2 :     if (cx->runtime()->gc.isIncrementalGCInProgress() && cx->runtime()->gc.isCompactingGc())
    1813           0 :         FinishGC(cx);
    1814           2 : }
    1815             : 
    1816           4 : AutoDisableCompactingGC::~AutoDisableCompactingGC()
    1817             : {
    1818           2 :     MOZ_ASSERT(cx->compactingDisabledCount > 0);
    1819           2 :     --cx->compactingDisabledCount;
    1820           2 : }
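                      :
                      : // RAII usage sketch: compacting GC stays disabled for the guard's lifetime,
                      : // and any in-progress compacting GC is finished up front:
                      : //
                      : //   {
                      : //       AutoDisableCompactingGC nocgc(cx);
                      : //       // ... code holding raw pointers into the GC heap ...
                      : //   }   // compacting is permitted again here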
    1821             : 
    1822             : static bool
    1823           0 : CanRelocateZone(Zone* zone)
    1824             : {
    1825           0 :     return !zone->isAtomsZone() && !zone->isSelfHostingZone();
    1826             : }
    1827             : 
    1828             : static const AllocKind AllocKindsToRelocate[] = {
    1829             :     AllocKind::FUNCTION,
    1830             :     AllocKind::FUNCTION_EXTENDED,
    1831             :     AllocKind::OBJECT0,
    1832             :     AllocKind::OBJECT0_BACKGROUND,
    1833             :     AllocKind::OBJECT2,
    1834             :     AllocKind::OBJECT2_BACKGROUND,
    1835             :     AllocKind::OBJECT4,
    1836             :     AllocKind::OBJECT4_BACKGROUND,
    1837             :     AllocKind::OBJECT8,
    1838             :     AllocKind::OBJECT8_BACKGROUND,
    1839             :     AllocKind::OBJECT12,
    1840             :     AllocKind::OBJECT12_BACKGROUND,
    1841             :     AllocKind::OBJECT16,
    1842             :     AllocKind::OBJECT16_BACKGROUND,
    1843             :     AllocKind::SCRIPT,
    1844             :     AllocKind::LAZY_SCRIPT,
    1845             :     AllocKind::SHAPE,
    1846             :     AllocKind::ACCESSOR_SHAPE,
    1847             :     AllocKind::BASE_SHAPE,
    1848             :     AllocKind::FAT_INLINE_STRING,
    1849             :     AllocKind::STRING,
    1850             :     AllocKind::EXTERNAL_STRING,
    1851             :     AllocKind::FAT_INLINE_ATOM,
    1852             :     AllocKind::ATOM,
    1853             :     AllocKind::SCOPE,
    1854             :     AllocKind::REGEXP_SHARED
    1855             : };
    1856             : 
    1857             : Arena*
    1858           0 : ArenaList::removeRemainingArenas(Arena** arenap)
    1859             : {
    1860             :     // This is only ever called to remove arenas that are after the cursor, so
    1861             :     // we don't need to update it.
    1862             : #ifdef DEBUG
    1863           0 :     for (Arena* arena = *arenap; arena; arena = arena->next)
    1864           0 :         MOZ_ASSERT(cursorp_ != &arena->next);
    1865             : #endif
    1866           0 :     Arena* remainingArenas = *arenap;
    1867           0 :     *arenap = nullptr;
    1868           0 :     check();
    1869           0 :     return remainingArenas;
    1870             : }
    1871             : 
    1872             : static bool
    1873           0 : ShouldRelocateAllArenas(JS::gcreason::Reason reason)
    1874             : {
    1875           0 :     return reason == JS::gcreason::DEBUG_GC;
    1876             : }
    1877             : 
    1878             : /*
    1879             :  * Choose which arenas to relocate all cells from. Return an arena cursor that
    1880             :  * can be passed to removeRemainingArenas().
    1881             :  */
    1882             : Arena**
    1883           0 : ArenaList::pickArenasToRelocate(size_t& arenaTotalOut, size_t& relocTotalOut)
    1884             : {
    1885             :     // Relocate the greatest number of arenas such that the number of used cells
    1886             :     // in relocated arenas is less than or equal to the number of free cells in
    1887             :     // unrelocated arenas. In other words we only relocate cells we can move
     1888             :     // into existing arenas, and we choose the least full arenas to relocate.
    1889             :     //
    1890             :     // This is made easier by the fact that the arena list has been sorted in
    1891             :     // descending order of number of used cells, so we will always relocate a
    1892             :     // tail of the arena list. All we need to do is find the point at which to
    1893             :     // start relocating.
    1894             : 
    1895           0 :     check();
    1896             : 
    1897           0 :     if (isCursorAtEnd())
    1898           0 :         return nullptr;
    1899             : 
    1900           0 :     Arena** arenap = cursorp_;     // Next arena to consider for relocation.
    1901           0 :     size_t previousFreeCells = 0;  // Count of free cells before arenap.
    1902           0 :     size_t followingUsedCells = 0; // Count of used cells after arenap.
    1903           0 :     size_t fullArenaCount = 0;     // Number of full arenas (not relocated).
    1904           0 :     size_t nonFullArenaCount = 0;  // Number of non-full arenas (considered for relocation).
    1905           0 :     size_t arenaIndex = 0;         // Index of the next arena to consider.
    1906             : 
    1907           0 :     for (Arena* arena = head_; arena != *cursorp_; arena = arena->next)
    1908           0 :         fullArenaCount++;
    1909             : 
    1910           0 :     for (Arena* arena = *cursorp_; arena; arena = arena->next) {
    1911           0 :         followingUsedCells += arena->countUsedCells();
    1912           0 :         nonFullArenaCount++;
    1913             :     }
    1914             : 
    1915           0 :     mozilla::DebugOnly<size_t> lastFreeCells(0);
    1916           0 :     size_t cellsPerArena = Arena::thingsPerArena((*arenap)->getAllocKind());
    1917             : 
    1918           0 :     while (*arenap) {
    1919           0 :         Arena* arena = *arenap;
    1920           0 :         if (followingUsedCells <= previousFreeCells)
    1921           0 :             break;
    1922             : 
    1923           0 :         size_t freeCells = arena->countFreeCells();
    1924           0 :         size_t usedCells = cellsPerArena - freeCells;
    1925           0 :         followingUsedCells -= usedCells;
    1926             : #ifdef DEBUG
    1927           0 :         MOZ_ASSERT(freeCells >= lastFreeCells);
    1928           0 :         lastFreeCells = freeCells;
    1929             : #endif
    1930           0 :         previousFreeCells += freeCells;
    1931           0 :         arenap = &arena->next;
    1932           0 :         arenaIndex++;
    1933             :     }
    1934             : 
    1935           0 :     size_t relocCount = nonFullArenaCount - arenaIndex;
    1936           0 :     MOZ_ASSERT(relocCount < nonFullArenaCount);
    1937           0 :     MOZ_ASSERT((relocCount == 0) == (!*arenap));
    1938           0 :     arenaTotalOut += fullArenaCount + nonFullArenaCount;
    1939           0 :     relocTotalOut += relocCount;
    1940             : 
    1941           0 :     return arenap;
    1942             : }
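                      :
                      : // Worked example (illustrative cell counts): suppose the tail after the
                      : // cursor holds three arenas with free/used cells of 10/90, 40/60 and 95/5,
                      : // already sorted most-used first, so followingUsedCells starts at 155.
                      : //   - 10/90 arena: 155 > 0, keep it; previousFreeCells = 10, following = 65.
                      : //   - 40/60 arena: 65 > 10, keep it; previousFreeCells = 50, following = 5.
                      : //   - 95/5 arena:  5 <= 50, stop.
                      : // Only the 95/5 arena is relocated; its 5 used cells fit in the 50 free
                      : // cells of the arenas we keep.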
    1943             : 
    1944             : #ifdef DEBUG
    1945             : inline bool
    1946           0 : PtrIsInRange(const void* ptr, const void* start, size_t length)
    1947             : {
    1948           0 :     return uintptr_t(ptr) - uintptr_t(start) < length;
    1949             : }
    1950             : #endif
    1951             : 
    1952             : static TenuredCell*
    1953           0 : AllocRelocatedCell(Zone* zone, AllocKind thingKind, size_t thingSize)
    1954             : {
    1955           0 :     AutoEnterOOMUnsafeRegion oomUnsafe;
    1956           0 :     void* dstAlloc = zone->arenas.allocateFromFreeList(thingKind, thingSize);
    1957           0 :     if (!dstAlloc)
    1958           0 :         dstAlloc = GCRuntime::refillFreeListInGC(zone, thingKind);
    1959           0 :     if (!dstAlloc) {
    1960             :         // This can only happen in zeal mode or debug builds as we don't
    1961             :         // otherwise relocate more cells than we have existing free space
    1962             :         // for.
    1963           0 :         oomUnsafe.crash("Could not allocate new arena while compacting");
    1964             :     }
    1965           0 :     return TenuredCell::fromPointer(dstAlloc);
    1966             : }
    1967             : 
    1968             : static void
    1969           0 : RelocateCell(Zone* zone, TenuredCell* src, AllocKind thingKind, size_t thingSize)
    1970             : {
    1971           0 :     JS::AutoSuppressGCAnalysis nogc(TlsContext.get());
    1972             : 
    1973             :     // Allocate a new cell.
    1974           0 :     MOZ_ASSERT(zone == src->zone());
    1975           0 :     TenuredCell* dst = AllocRelocatedCell(zone, thingKind, thingSize);
    1976             : 
    1977             :     // Copy source cell contents to destination.
    1978           0 :     memcpy(dst, src, thingSize);
    1979             : 
    1980             :     // Move any uid attached to the object.
    1981           0 :     src->zone()->transferUniqueId(dst, src);
    1982             : 
    1983           0 :     if (IsObjectAllocKind(thingKind)) {
    1984           0 :         JSObject* srcObj = static_cast<JSObject*>(static_cast<Cell*>(src));
    1985           0 :         JSObject* dstObj = static_cast<JSObject*>(static_cast<Cell*>(dst));
    1986             : 
    1987           0 :         if (srcObj->isNative()) {
    1988           0 :             NativeObject* srcNative = &srcObj->as<NativeObject>();
    1989           0 :             NativeObject* dstNative = &dstObj->as<NativeObject>();
    1990             : 
    1991             :             // Fixup the pointer to inline object elements if necessary.
    1992           0 :             if (srcNative->hasFixedElements()) {
    1993           0 :                 uint32_t numShifted = srcNative->getElementsHeader()->numShiftedElements();
    1994           0 :                 dstNative->setFixedElements(numShifted);
    1995             :             }
    1996             : 
    1997             :             // For copy-on-write objects that own their elements, fix up the
    1998             :             // owner pointer to point to the relocated object.
    1999           0 :             if (srcNative->denseElementsAreCopyOnWrite()) {
    2000           0 :                 GCPtrNativeObject& owner = dstNative->getElementsHeader()->ownerObject();
    2001           0 :                 if (owner == srcNative)
    2002           0 :                     owner = dstNative;
    2003             :             }
    2004           0 :         } else if (srcObj->is<ProxyObject>()) {
    2005           0 :             if (srcObj->as<ProxyObject>().usingInlineValueArray())
    2006           0 :                 dstObj->as<ProxyObject>().setInlineValueArray();
    2007             :         }
    2008             : 
    2009             :         // Call object moved hook if present.
    2010           0 :         if (JSObjectMovedOp op = srcObj->getClass()->extObjectMovedOp())
    2011           0 :             op(dstObj, srcObj);
    2012             : 
    2013           0 :         MOZ_ASSERT_IF(dstObj->isNative(),
    2014             :                       !PtrIsInRange((const Value*)dstObj->as<NativeObject>().getDenseElements(),
    2015             :                                     src, thingSize));
    2016             :     }
    2017             : 
    2018             :     // Copy the mark bits.
    2019           0 :     dst->copyMarkBitsFrom(src);
    2020             : 
    2021             :     // Mark source cell as forwarded and leave a pointer to the destination.
    2022           0 :     RelocationOverlay* overlay = RelocationOverlay::fromCell(src);
    2023           0 :     overlay->forwardTo(dst);
    2024           0 : }
    2025             : 
    2026             : static void
    2027           0 : RelocateArena(Arena* arena, SliceBudget& sliceBudget)
    2028             : {
    2029           0 :     MOZ_ASSERT(arena->allocated());
    2030           0 :     MOZ_ASSERT(!arena->hasDelayedMarking);
    2031           0 :     MOZ_ASSERT(!arena->markOverflow);
    2032           0 :     MOZ_ASSERT(!arena->allocatedDuringIncremental);
    2033           0 :     MOZ_ASSERT(arena->bufferedCells()->isEmpty());
    2034             : 
    2035           0 :     Zone* zone = arena->zone;
    2036             : 
    2037           0 :     AllocKind thingKind = arena->getAllocKind();
    2038           0 :     size_t thingSize = arena->getThingSize();
    2039             : 
    2040           0 :     for (ArenaCellIterUnderGC i(arena); !i.done(); i.next()) {
    2041           0 :         RelocateCell(zone, i.getCell(), thingKind, thingSize);
    2042           0 :         sliceBudget.step();
    2043             :     }
    2044             : 
    2045             : #ifdef DEBUG
    2046           0 :     for (ArenaCellIterUnderGC i(arena); !i.done(); i.next()) {
    2047           0 :         TenuredCell* src = i.getCell();
    2048           0 :         MOZ_ASSERT(RelocationOverlay::isCellForwarded(src));
    2049           0 :         TenuredCell* dest = Forwarded(src);
    2050           0 :         MOZ_ASSERT(src->isMarkedBlack() == dest->isMarkedBlack());
    2051           0 :         MOZ_ASSERT(src->isMarkedGray() == dest->isMarkedGray());
    2052             :     }
    2053             : #endif
    2054           0 : }
    2055             : 
    2056             : static inline bool
    2057           0 : ShouldProtectRelocatedArenas(JS::gcreason::Reason reason)
    2058             : {
    2059             :     // For zeal mode collections we don't release the relocated arenas
    2060             :     // immediately. Instead we protect them and keep them around until the next
    2061             :     // collection so we can catch any stray accesses to them.
    2062             : #ifdef DEBUG
    2063           0 :     return reason == JS::gcreason::DEBUG_GC;
    2064             : #else
    2065             :     return false;
    2066             : #endif
    2067             : }
    2068             : 
    2069             : /*
    2070             :  * Relocate all arenas identified by pickArenasToRelocate: for each arena,
    2071             :  * relocate each cell within it, then add it to a list of relocated arenas.
    2072             :  */
    2073             : Arena*
    2074           0 : ArenaList::relocateArenas(Arena* toRelocate, Arena* relocated, SliceBudget& sliceBudget,
    2075             :                           gcstats::Statistics& stats)
    2076             : {
    2077           0 :     check();
    2078             : 
    2079           0 :     while (Arena* arena = toRelocate) {
    2080           0 :         toRelocate = arena->next;
    2081           0 :         RelocateArena(arena, sliceBudget);
    2082             :         // Prepend to list of relocated arenas
    2083           0 :         arena->next = relocated;
    2084           0 :         relocated = arena;
    2085           0 :         stats.count(gcstats::STAT_ARENA_RELOCATED);
    2086           0 :     }
    2087             : 
    2088           0 :     check();
    2089             : 
    2090           0 :     return relocated;
    2091             : }
    2092             : 
    2093             : // Skip compacting zones unless we can free a certain proportion of their GC
    2094             : // heap memory.
    2095             : static const double MIN_ZONE_RECLAIM_PERCENT = 2.0;
    2096             : 
    2097             : static bool
    2098           0 : ShouldRelocateZone(size_t arenaCount, size_t relocCount, JS::gcreason::Reason reason)
    2099             : {
    2100           0 :     if (relocCount == 0)
    2101           0 :         return false;
    2102             : 
    2103           0 :     if (IsOOMReason(reason))
    2104           0 :         return true;
    2105             : 
    2106           0 :     return (relocCount * 100.0) / arenaCount >= MIN_ZONE_RECLAIM_PERCENT;
    2107             : }
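For concreteness, here is the 2% bar as a compilable sketch, with the IsOOMReason() early-out reduced to a boolean parameter (an assumption for illustration):

    #include <cstddef>
    #include <cstdio>

    static const double MIN_ZONE_RECLAIM_PERCENT = 2.0;

    // Mirrors the predicate above: always relocate under OOM pressure,
    // otherwise only when enough arenas would be freed to be worth it.
    static bool ShouldRelocateZoneSketch(size_t arenaCount, size_t relocCount,
                                         bool oomReason)
    {
        if (relocCount == 0)
            return false;
        if (oomReason)
            return true;
        return (relocCount * 100.0) / arenaCount >= MIN_ZONE_RECLAIM_PERCENT;
    }

    int main() {
        // 25 of 1000 arenas (2.5%) clears the 2% bar; 10 of 1000 (1%) does not.
        printf("%d\n", (int)ShouldRelocateZoneSketch(1000, 25, false)); // 1
        printf("%d\n", (int)ShouldRelocateZoneSketch(1000, 10, false)); // 0
        return 0;
    }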
    2108             : 
    2109             : bool
    2110           0 : ArenaLists::relocateArenas(Zone* zone, Arena*& relocatedListOut, JS::gcreason::Reason reason,
    2111             :                            SliceBudget& sliceBudget, gcstats::Statistics& stats)
    2112             : {
    2113             :     // This is only called from the active thread while we are doing a GC, so
    2114             :     // there is no need to lock.
    2115           0 :     MOZ_ASSERT(CurrentThreadCanAccessRuntime(runtime_));
    2116           0 :     MOZ_ASSERT(runtime_->gc.isHeapCompacting());
    2117           0 :     MOZ_ASSERT(!runtime_->gc.isBackgroundSweeping());
    2118             : 
    2119             :     // Clear all the free lists.
    2120           0 :     purge();
    2121             : 
    2122           0 :     if (ShouldRelocateAllArenas(reason)) {
    2123           0 :         zone->prepareForCompacting();
    2124           0 :         for (auto kind : AllocKindsToRelocate) {
    2125           0 :             ArenaList& al = arenaLists(kind);
    2126           0 :             Arena* allArenas = al.head();
    2127           0 :             al.clear();
    2128           0 :             relocatedListOut = al.relocateArenas(allArenas, relocatedListOut, sliceBudget, stats);
    2129             :         }
    2130             :     } else {
    2131           0 :         size_t arenaCount = 0;
    2132           0 :         size_t relocCount = 0;
    2133           0 :         AllAllocKindArray<Arena**> toRelocate;
    2134             : 
    2135           0 :         for (auto kind : AllocKindsToRelocate)
    2136           0 :             toRelocate[kind] = arenaLists(kind).pickArenasToRelocate(arenaCount, relocCount);
    2137             : 
    2138           0 :         if (!ShouldRelocateZone(arenaCount, relocCount, reason))
    2139           0 :             return false;
    2140             : 
    2141           0 :         zone->prepareForCompacting();
    2142           0 :         for (auto kind : AllocKindsToRelocate) {
    2143           0 :             if (toRelocate[kind]) {
    2144           0 :                 ArenaList& al = arenaLists(kind);
    2145           0 :                 Arena* arenas = al.removeRemainingArenas(toRelocate[kind]);
    2146           0 :                 relocatedListOut = al.relocateArenas(arenas, relocatedListOut, sliceBudget, stats);
    2147             :             }
    2148             :         }
    2149             :     }
    2150             : 
    2151           0 :     return true;
    2152             : }
    2153             : 
    2154             : bool
    2155           0 : GCRuntime::relocateArenas(Zone* zone, JS::gcreason::Reason reason, Arena*& relocatedListOut,
    2156             :                           SliceBudget& sliceBudget)
    2157             : {
    2158           0 :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::COMPACT_MOVE);
    2159             : 
    2160           0 :     MOZ_ASSERT(!zone->isPreservingCode());
    2161           0 :     MOZ_ASSERT(CanRelocateZone(zone));
    2162             : 
    2163           0 :     js::CancelOffThreadIonCompile(rt, JS::Zone::Compact);
    2164             : 
    2165           0 :     if (!zone->arenas.relocateArenas(zone, relocatedListOut, reason, sliceBudget, stats()))
    2166           0 :         return false;
    2167             : 
    2168             : #ifdef DEBUG
    2169             :     // Check that we did as much compaction as we should have. There
    2170             :     // should always be less than one arena's worth of free cells per kind.
    2171           0 :     for (auto i : AllocKindsToRelocate) {
    2172           0 :         ArenaList& al = zone->arenas.arenaLists(i);
    2173           0 :         size_t freeCells = 0;
    2174           0 :         for (Arena* arena = al.arenaAfterCursor(); arena; arena = arena->next)
    2175           0 :             freeCells += arena->countFreeCells();
    2176           0 :         MOZ_ASSERT(freeCells < Arena::thingsPerArena(i));
    2177             :     }
    2178             : #endif
    2179             : 
    2180           0 :     return true;
    2181             : }
    2182             : 
    2183             : template <typename T>
    2184             : inline void
    2185           0 : MovingTracer::updateEdge(T** thingp)
    2186             : {
    2187           0 :     auto thing = *thingp;
    2188           0 :     if (thing->runtimeFromAnyThread() == runtime() && IsForwarded(thing))
    2189           0 :         *thingp = Forwarded(thing);
    2190           0 : }
    2191             : 
    2192           0 : void MovingTracer::onObjectEdge(JSObject** objp) { updateEdge(objp); }
    2193           0 : void MovingTracer::onShapeEdge(Shape** shapep) { updateEdge(shapep); }
    2194           0 : void MovingTracer::onStringEdge(JSString** stringp) { updateEdge(stringp); }
    2195           0 : void MovingTracer::onScriptEdge(JSScript** scriptp) { updateEdge(scriptp); }
    2196           0 : void MovingTracer::onLazyScriptEdge(LazyScript** lazyp) { updateEdge(lazyp); }
    2197           0 : void MovingTracer::onBaseShapeEdge(BaseShape** basep) { updateEdge(basep); }
    2198           0 : void MovingTracer::onScopeEdge(Scope** scopep) { updateEdge(scopep); }
    2199           0 : void MovingTracer::onRegExpSharedEdge(RegExpShared** sharedp) { updateEdge(sharedp); }
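Each of these onXEdge hooks applies the same in-place rewrite. A self-contained sketch of the pattern, where ForwardedNode is a hypothetical stand-in for a cell with a relocation overlay (the runtime-membership check is omitted):

    #include <cassert>

    struct ForwardedNode {
        ForwardedNode* newLocation; // null if the node did not move
    };

    // Rewrite a traced pointer field in place if its target was moved.
    template <typename T>
    void UpdateEdgeSketch(T** thingp) {
        if ((*thingp)->newLocation)
            *thingp = (*thingp)->newLocation;
    }

    int main() {
        ForwardedNode dst{nullptr};
        ForwardedNode src{&dst};
        ForwardedNode* edge = &src;
        UpdateEdgeSketch(&edge);
        assert(edge == &dst);
        return 0;
    }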
    2200             : 
    2201             : void
    2202           0 : Zone::prepareForCompacting()
    2203             : {
    2204           0 :     FreeOp* fop = runtimeFromActiveCooperatingThread()->defaultFreeOp();
    2205           0 :     discardJitCode(fop);
    2206           0 : }
    2207             : 
    2208             : void
    2209           0 : GCRuntime::sweepTypesAfterCompacting(Zone* zone)
    2210             : {
    2211           0 :     FreeOp* fop = rt->defaultFreeOp();
    2212           0 :     zone->beginSweepTypes(fop, rt->gc.releaseObservedTypes && !zone->isPreservingCode());
    2213             : 
    2214           0 :     AutoClearTypeInferenceStateOnOOM oom(zone);
    2215             : 
    2216           0 :     for (auto script = zone->cellIter<JSScript>(); !script.done(); script.next())
    2217           0 :         script->maybeSweepTypes(&oom);
    2218           0 :     for (auto group = zone->cellIter<ObjectGroup>(); !group.done(); group.next())
    2219           0 :         group->maybeSweep(&oom);
    2220             : 
    2221           0 :     zone->types.endSweep(rt);
    2222           0 : }
    2223             : 
    2224             : void
    2225           0 : GCRuntime::sweepZoneAfterCompacting(Zone* zone)
    2226             : {
    2227           0 :     MOZ_ASSERT(zone->isCollecting());
    2228           0 :     FreeOp* fop = rt->defaultFreeOp();
    2229           0 :     sweepTypesAfterCompacting(zone);
    2230           0 :     zone->sweepBreakpoints(fop);
    2231           0 :     zone->sweepWeakMaps();
    2232           0 :     for (auto* cache : zone->weakCaches())
    2233           0 :         cache->sweep();
    2234             : 
    2235           0 :     if (jit::JitZone* jitZone = zone->jitZone())
    2236           0 :         jitZone->sweep(fop);
    2237             : 
    2238           0 :     for (CompartmentsInZoneIter c(zone); !c.done(); c.next()) {
    2239           0 :         c->objectGroups.sweep(fop);
    2240           0 :         c->sweepRegExps();
    2241           0 :         c->sweepSavedStacks();
    2242           0 :         c->sweepTemplateLiteralMap();
    2243           0 :         c->sweepVarNames();
    2244           0 :         c->sweepGlobalObject();
    2245           0 :         c->sweepSelfHostingScriptSource();
    2246           0 :         c->sweepDebugEnvironments();
    2247           0 :         c->sweepJitCompartment(fop);
    2248           0 :         c->sweepNativeIterators();
    2249           0 :         c->sweepTemplateObjects();
    2250             :     }
    2251           0 : }
    2252             : 
    2253             : template <typename T>
    2254             : static inline void
    2255           0 : UpdateCellPointers(MovingTracer* trc, T* cell)
    2256             : {
    2257           0 :     cell->fixupAfterMovingGC();
    2258           0 :     cell->traceChildren(trc);
    2259           0 : }
    2260             : 
    2261             : template <typename T>
    2262             : static void
    2263           0 : UpdateArenaPointersTyped(MovingTracer* trc, Arena* arena, JS::TraceKind traceKind)
    2264             : {
    2265           0 :     for (ArenaCellIterUnderGC i(arena); !i.done(); i.next())
    2266           0 :         UpdateCellPointers(trc, reinterpret_cast<T*>(i.getCell()));
    2267           0 : }
    2268             : 
    2269             : /*
    2270             :  * Update the internal pointers for all cells in an arena.
    2271             :  */
    2272             : static void
    2273           0 : UpdateArenaPointers(MovingTracer* trc, Arena* arena)
    2274             : {
    2275           0 :     AllocKind kind = arena->getAllocKind();
    2276             : 
    2277           0 :     switch (kind) {
    2278             : #define EXPAND_CASE(allocKind, traceKind, type, sizedType) \
    2279             :       case AllocKind::allocKind: \
    2280             :         UpdateArenaPointersTyped<type>(trc, arena, JS::TraceKind::traceKind); \
    2281             :         return;
    2282           0 : FOR_EACH_ALLOCKIND(EXPAND_CASE)
    2283             : #undef EXPAND_CASE
    2284             : 
    2285             :       default:
    2286           0 :         MOZ_CRASH("Invalid alloc kind for UpdateArenaPointers");
    2287             :     }
    2288             : }
    2289             : 
    2290             : namespace js {
    2291             : namespace gc {
    2292             : 
    2293             : struct ArenaListSegment
    2294             : {
    2295             :     Arena* begin;
    2296             :     Arena* end;
    2297             : };
    2298             : 
    2299             : struct ArenasToUpdate
    2300             : {
    2301             :     ArenasToUpdate(Zone* zone, AllocKinds kinds);
    2302           0 :     bool done() { return kind == AllocKind::LIMIT; }
    2303             :     ArenaListSegment getArenasToUpdate(AutoLockHelperThreadState& lock, unsigned maxLength);
    2304             : 
    2305             :   private:
    2306             :     AllocKinds kinds;  // Selects which thing kinds to update
    2307             :     Zone* zone;        // Zone to process
    2308             :     AllocKind kind;    // Current alloc kind to process
    2309             :     Arena* arena;      // Next arena to process
    2310             : 
    2311           0 :     AllocKind nextAllocKind(AllocKind i) { return AllocKind(uint8_t(i) + 1); }
    2312             :     bool shouldProcessKind(AllocKind kind);
    2313             :     Arena* next(AutoLockHelperThreadState& lock);
    2314             : };
    2315             : 
    2316           0 : ArenasToUpdate::ArenasToUpdate(Zone* zone, AllocKinds kinds)
    2317           0 :   : kinds(kinds), zone(zone), kind(AllocKind::FIRST), arena(nullptr)
    2318             : {
    2319           0 :     MOZ_ASSERT(zone->isGCCompacting());
    2320           0 : }
    2321             : 
    2322             : Arena*
    2323           0 : ArenasToUpdate::next(AutoLockHelperThreadState& lock)
    2324             : {
    2325             :     // Find the next arena to update.
    2326             :     //
    2327             :     // This iterates through the GC thing kinds filtered by shouldProcessKind(),
    2328             :     // and then through the arenas of that kind.  All state is held in the
    2329             :     // object and we just return when we find an arena.
    2330             : 
    2331           0 :     for (; kind < AllocKind::LIMIT; kind = nextAllocKind(kind)) {
    2332           0 :         if (kinds.contains(kind)) {
    2333           0 :             if (!arena)
    2334           0 :                 arena = zone->arenas.getFirstArena(kind);
    2335             :             else
    2336           0 :                 arena = arena->next;
    2337           0 :             if (arena)
    2338           0 :                 return arena;
    2339             :         }
    2340             :     }
    2341             : 
    2342           0 :     MOZ_ASSERT(!arena);
    2343           0 :     MOZ_ASSERT(done());
    2344           0 :     return nullptr;
    2345             : }
    2346             : 
    2347             : ArenaListSegment
    2348           0 : ArenasToUpdate::getArenasToUpdate(AutoLockHelperThreadState& lock, unsigned maxLength)
    2349             : {
    2350           0 :     Arena* begin = next(lock);
    2351           0 :     if (!begin)
    2352           0 :         return { nullptr, nullptr };
    2353             : 
    2354           0 :     Arena* last = begin;
    2355           0 :     unsigned count = 1;
    2356           0 :     while (last->next && count < maxLength) {
    2357           0 :         last = last->next;
    2358           0 :         count++;
    2359             :     }
    2360             : 
    2361           0 :     arena = last;
    2362           0 :     return { begin, last->next };
    2363             : }
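The returned segment is half-open: begin is the first arena and end is the arena after the last one taken (possibly null), matching the consumer's arena != end loop below. A simplified standalone sketch of carving a singly linked list into bounded segments this way:

    #include <cassert>

    struct Node { Node* next; };
    struct Segment { Node* begin; Node* end; }; // half-open [begin, end)

    // Take up to maxLength nodes from the front; advance the cursor past
    // them and return the segment that was taken.
    Segment takeSegment(Node*& cursor, unsigned maxLength) {
        Node* begin = cursor;
        if (!begin)
            return { nullptr, nullptr };
        Node* last = begin;
        for (unsigned count = 1; last->next && count < maxLength; count++)
            last = last->next;
        cursor = last->next;
        return { begin, last->next };
    }

    int main() {
        Node c{nullptr};
        Node b{&c};
        Node a{&b};
        Node* cursor = &a;
        Segment s = takeSegment(cursor, 2);
        assert(s.begin == &a && s.end == &c); // took a and b
        assert(cursor == &c);                 // next call resumes at c
        return 0;
    }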
    2364             : 
    2365             : struct UpdatePointersTask : public GCParallelTask
    2366             : {
    2367             :     // Maximum number of arenas to update in one block.
    2368             : #ifdef DEBUG
    2369             :     static const unsigned MaxArenasToProcess = 16;
    2370             : #else
    2371             :     static const unsigned MaxArenasToProcess = 256;
    2372             : #endif
    2373             : 
    2374           0 :     UpdatePointersTask(JSRuntime* rt, ArenasToUpdate* source, AutoLockHelperThreadState& lock)
    2375           0 :       : GCParallelTask(rt), source_(source)
    2376             :     {
    2377           0 :         arenas_.begin = nullptr;
    2378           0 :         arenas_.end = nullptr;
    2379           0 :     }
    2380             : 
    2381           0 :     ~UpdatePointersTask() override { join(); }
    2382             : 
    2383             :   private:
    2384             :     ArenasToUpdate* source_;
    2385             :     ArenaListSegment arenas_;
    2386             : 
    2387             :     virtual void run() override;
    2388             :     bool getArenasToUpdate();
    2389             :     void updateArenas();
    2390             : };
    2391             : 
    2392             : bool
    2393           0 : UpdatePointersTask::getArenasToUpdate()
    2394             : {
    2395           0 :     AutoLockHelperThreadState lock;
    2396           0 :     arenas_ = source_->getArenasToUpdate(lock, MaxArenasToProcess);
    2397           0 :     return arenas_.begin != nullptr;
    2398             : }
    2399             : 
    2400             : void
    2401           0 : UpdatePointersTask::updateArenas()
    2402             : {
    2403           0 :     MovingTracer trc(runtime());
    2404           0 :     for (Arena* arena = arenas_.begin; arena != arenas_.end; arena = arena->next)
    2405           0 :         UpdateArenaPointers(&trc, arena);
    2406           0 : }
    2407             : 
    2408             : /* virtual */ void
    2409           0 : UpdatePointersTask::run()
    2410             : {
    2411             :     // Proxy checks would assert when run in parallel, so disable them.
    2412           0 :     AutoDisableProxyCheck noProxyCheck;
    2413             : 
    2414           0 :     while (getArenasToUpdate())
    2415           0 :         updateArenas();
    2416           0 : }
    2417             : 
    2418             : } // namespace gc
    2419             : } // namespace js
    2420             : 
    2421             : static const size_t MinCellUpdateBackgroundTasks = 2;
    2422             : static const size_t MaxCellUpdateBackgroundTasks = 8;
    2423             : 
    2424             : static size_t
    2425           0 : CellUpdateBackgroundTaskCount()
    2426             : {
    2427           0 :     if (!CanUseExtraThreads())
    2428           0 :         return 0;
    2429             : 
    2430           0 :     size_t targetTaskCount = HelperThreadState().cpuCount / 2;
    2431           0 :     return Min(Max(targetTaskCount, MinCellUpdateBackgroundTasks), MaxCellUpdateBackgroundTasks);
    2432             : }
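In other words: half the CPUs, clamped to the [2, 8] range, or zero when helper threads are unavailable. A sketch of the arithmetic using std::min/std::max in place of the engine's Min/Max helpers:

    #include <algorithm>
    #include <cstddef>
    #include <cstdio>

    static const size_t MinTasks = 2;
    static const size_t MaxTasks = 8;

    size_t taskCount(size_t cpuCount, bool canUseExtraThreads) {
        if (!canUseExtraThreads)
            return 0;
        return std::min(std::max(cpuCount / 2, MinTasks), MaxTasks);
    }

    int main() {
        printf("%zu %zu %zu\n",
               taskCount(2, true),    // 2 (clamped up)
               taskCount(8, true),    // 4
               taskCount(32, true));  // 8 (clamped down)
        return 0;
    }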
    2433             : 
    2434             : static bool
    2435           0 : CanUpdateKindInBackground(AllocKind kind) {
    2436             :     // We try to update as many GC things in parallel as we can, but there are
    2437             :     // kinds for which this might not be safe:
    2438             :     //  - we assume JSObjects that are foreground finalized are not safe to
    2439             :     //    update in parallel
    2440             :     //  - updating a shape touches child shapes in fixupShapeTreeAfterMovingGC()
    2441           0 :     if (!js::gc::IsBackgroundFinalized(kind) || IsShapeAllocKind(kind))
    2442           0 :         return false;
    2443             : 
    2444           0 :     return true;
    2445             : }
    2446             : 
    2447             : static AllocKinds
    2448           0 : ForegroundUpdateKinds(AllocKinds kinds)
    2449             : {
    2450           0 :     AllocKinds result;
    2451           0 :     for (AllocKind kind : kinds) {
    2452           0 :         if (!CanUpdateKindInBackground(kind))
    2453           0 :             result += kind;
    2454             :     }
    2455           0 :     return result;
    2456             : }
    2457             : 
    2458             : void
    2459           0 : GCRuntime::updateTypeDescrObjects(MovingTracer* trc, Zone* zone)
    2460             : {
    2461           0 :     zone->typeDescrObjects().sweep();
    2462           0 :     for (auto r = zone->typeDescrObjects().all(); !r.empty(); r.popFront())
    2463           0 :         UpdateCellPointers(trc, r.front());
    2464           0 : }
    2465             : 
    2466             : void
    2467           0 : GCRuntime::updateCellPointers(MovingTracer* trc, Zone* zone, AllocKinds kinds, size_t bgTaskCount)
    2468             : {
    2469           0 :     AllocKinds fgKinds = bgTaskCount == 0 ? kinds : ForegroundUpdateKinds(kinds);
    2470           0 :     AllocKinds bgKinds = kinds - fgKinds;
    2471             : 
    2472           0 :     ArenasToUpdate fgArenas(zone, fgKinds);
    2473           0 :     ArenasToUpdate bgArenas(zone, bgKinds);
    2474           0 :     Maybe<UpdatePointersTask> fgTask;
    2475           0 :     Maybe<UpdatePointersTask> bgTasks[MaxCellUpdateBackgroundTasks];
    2476             : 
    2477           0 :     size_t tasksStarted = 0;
    2478             : 
    2479             :     {
    2480           0 :         AutoLockHelperThreadState lock;
    2481             : 
    2482           0 :         fgTask.emplace(rt, &fgArenas, lock);
    2483             : 
    2484           0 :         for (size_t i = 0; i < bgTaskCount && !bgArenas.done(); i++) {
    2485           0 :             bgTasks[i].emplace(rt, &bgArenas, lock);
    2486           0 :             startTask(*bgTasks[i], gcstats::PhaseKind::COMPACT_UPDATE_CELLS, lock);
    2487           0 :             tasksStarted = i + 1;  // count every started task so each is joined below
    2488             :         }
    2489             :     }
    2490             : 
    2491           0 :     fgTask->runFromActiveCooperatingThread(rt);
    2492             : 
    2493             :     {
    2494           0 :         AutoLockHelperThreadState lock;
    2495             : 
    2496           0 :         for (size_t i = 0; i < tasksStarted; i++)
    2497           0 :             joinTask(*bgTasks[i], gcstats::PhaseKind::COMPACT_UPDATE_CELLS, lock);
    2498             :     }
    2499           0 : }
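The shape here is a classic fork/join: helper tasks are created and started under the helper-thread lock, the active thread processes its own foreground-only share, and every started helper is joined before the phase ends. A minimal std::thread sketch of that shape, with the work bodies elided:

    #include <thread>
    #include <vector>

    void forkJoinSketch(size_t bgTaskCount) {
        std::vector<std::thread> helpers;
        for (size_t i = 0; i < bgTaskCount; i++) {
            // Each helper repeatedly pulls a batch of background arenas.
            helpers.emplace_back([] { /* drain background segments */ });
        }

        /* drain foreground-only segments on this thread */

        // Join every helper that was started before leaving the phase.
        for (std::thread& t : helpers)
            t.join();
    }

    int main() {
        forkJoinSketch(2);
        return 0;
    }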
    2500             : 
    2501             : // After cells have been relocated, any pointers to a cell's old location must
    2502             : // be updated to point to the new location.  This happens by iterating through
    2503             : // all cells in the heap and tracing their children (non-recursively) to update
    2504             : // them.
    2505             : //
    2506             : // This is complicated by the fact that updating a GC thing sometimes depends on
    2507             : // making use of other GC things.  After a moving GC these things may not be in
    2508             : // a valid state since they may contain pointers which have not been updated
    2509             : // yet.
    2510             : //
    2511             : // The main dependencies are:
    2512             : //
    2513             : //   - Updating a JSObject makes use of its shape
    2514             : //   - Updating a typed object makes use of its type descriptor object
    2515             : //
    2516             : // This means we require at least three phases for update:
    2517             : //
    2518             : //  1) shapes
    2519             : //  2) typed object type descriptor objects
    2520             : //  3) all other objects
    2521             : //
    2522             : // Since we want to minimize the number of phases, we put everything else into
    2523             : // the first phase and label it the 'misc' phase.
    2524             : 
    2525           3 : static const AllocKinds UpdatePhaseMisc {
    2526             :     AllocKind::SCRIPT,
    2527             :     AllocKind::LAZY_SCRIPT,
    2528             :     AllocKind::BASE_SHAPE,
    2529             :     AllocKind::SHAPE,
    2530             :     AllocKind::ACCESSOR_SHAPE,
    2531             :     AllocKind::OBJECT_GROUP,
    2532             :     AllocKind::STRING,
    2533             :     AllocKind::JITCODE,
    2534             :     AllocKind::SCOPE
    2535             : };
    2536             : 
    2537           3 : static const AllocKinds UpdatePhaseObjects {
    2538             :     AllocKind::FUNCTION,
    2539             :     AllocKind::FUNCTION_EXTENDED,
    2540             :     AllocKind::OBJECT0,
    2541             :     AllocKind::OBJECT0_BACKGROUND,
    2542             :     AllocKind::OBJECT2,
    2543             :     AllocKind::OBJECT2_BACKGROUND,
    2544             :     AllocKind::OBJECT4,
    2545             :     AllocKind::OBJECT4_BACKGROUND,
    2546             :     AllocKind::OBJECT8,
    2547             :     AllocKind::OBJECT8_BACKGROUND,
    2548             :     AllocKind::OBJECT12,
    2549             :     AllocKind::OBJECT12_BACKGROUND,
    2550             :     AllocKind::OBJECT16,
    2551             :     AllocKind::OBJECT16_BACKGROUND
    2552             : };
    2553             : 
    2554             : void
    2555           0 : GCRuntime::updateAllCellPointers(MovingTracer* trc, Zone* zone)
    2556             : {
    2557           0 :     size_t bgTaskCount = CellUpdateBackgroundTaskCount();
    2558             : 
    2559           0 :     updateCellPointers(trc, zone, UpdatePhaseMisc, bgTaskCount);
    2560             : 
    2561             :     // Update TypeDescrs before all other objects as typed objects access these
    2562             :     // objects when we trace them.
    2563           0 :     updateTypeDescrObjects(trc, zone);
    2564             : 
    2565           0 :     updateCellPointers(trc, zone, UpdatePhaseObjects, bgTaskCount);
    2566           0 : }
    2567             : 
    2568             : /*
    2569             :  * Update pointers to relocated cells in a single zone by doing a traversal of
    2570             :  * that zone's arenas and calling per-zone sweep hooks.
    2571             :  *
    2572             :  * The latter is necessary to update weak references which are not marked as
    2573             :  * part of the traversal.
    2574             :  */
    2575             : void
    2576           0 : GCRuntime::updateZonePointersToRelocatedCells(Zone* zone, AutoLockForExclusiveAccess& lock)
    2577             : {
    2578           0 :     MOZ_ASSERT(!rt->isBeingDestroyed());
    2579           0 :     MOZ_ASSERT(zone->isGCCompacting());
    2580             : 
    2581           0 :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::COMPACT_UPDATE);
    2582           0 :     MovingTracer trc(rt);
    2583             : 
    2584           0 :     zone->fixupAfterMovingGC();
    2585             : 
    2586             :     // Fixup compartment global pointers as these get accessed during marking.
    2587           0 :     for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next())
    2588           0 :         comp->fixupAfterMovingGC();
    2589             : 
    2590           0 :     zone->externalStringCache().purge();
    2591             : 
    2592             :     // Iterate through all cells that can contain relocatable pointers to update
    2593             :     // them. Since updating each cell is independent we try to parallelize this
    2594             :     // as much as possible.
    2595           0 :     updateAllCellPointers(&trc, zone);
    2596             : 
    2597             :     // Mark roots to update them.
    2598             :     {
    2599           0 :         gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::MARK_ROOTS);
    2600             : 
    2601           0 :         WeakMapBase::traceZone(zone, &trc);
    2602           0 :         for (CompartmentsInZoneIter c(zone); !c.done(); c.next()) {
    2603           0 :             if (c->watchpointMap)
    2604           0 :                 c->watchpointMap->trace(&trc);
    2605             :         }
    2606             :     }
    2607             : 
    2608             :     // Sweep everything to fix up weak pointers.
    2609           0 :     rt->gc.sweepZoneAfterCompacting(zone);
    2610             : 
    2611             :     // Call callbacks to get the rest of the system to fixup other untraced pointers.
    2612           0 :     for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next())
    2613           0 :         callWeakPointerCompartmentCallbacks(comp);
    2614           0 : }
    2615             : 
    2616             : /*
    2617             :  * Update runtime-wide pointers to relocated cells.
    2618             :  */
    2619             : void
    2620           0 : GCRuntime::updateRuntimePointersToRelocatedCells(AutoLockForExclusiveAccess& lock)
    2621             : {
    2622           0 :     MOZ_ASSERT(!rt->isBeingDestroyed());
    2623             : 
    2624           0 :     gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::COMPACT_UPDATE);
    2625           0 :     MovingTracer trc(rt);
    2626             : 
    2627           0 :     JSCompartment::fixupCrossCompartmentWrappersAfterMovingGC(&trc);
    2628             : 
    2629           0 :     rt->geckoProfiler().fixupStringsMapAfterMovingGC();
    2630             : 
    2631           0 :     traceRuntimeForMajorGC(&trc, lock);
    2632             : 
    2633             :     // Mark roots to update them.
    2634             :     {
    2635           0 :         gcstats::AutoPhase ap2(stats(), gcstats::PhaseKind::MARK_ROOTS);
    2636           0 :         Debugger::traceAllForMovingGC(&trc);
    2637           0 :         Debugger::traceIncomingCrossCompartmentEdges(&trc);
    2638             : 
    2639             :         // Mark all gray roots, making sure we call the trace callback to get the
    2640             :         // current set.
    2641           0 :         if (JSTraceDataOp op = grayRootTracer.op)
    2642           0 :             (*op)(&trc, grayRootTracer.data);
    2643             :     }
    2644             : 
    2645             :     // Sweep everything to fix up weak pointers.
    2646           0 :     WatchpointMap::sweepAll(rt);
    2647           0 :     Debugger::sweepAll(rt->defaultFreeOp());
    2648           0 :     jit::JitRuntime::SweepJitcodeGlobalTable(rt);
    2649           0 :     for (JS::detail::WeakCacheBase* cache : rt->weakCaches())
    2650           0 :         cache->sweep();
    2651             : 
    2652             :     // Type inference may put more blocks here to free.
    2653           0 :     blocksToFreeAfterSweeping.ref().freeAll();
    2654             : 
    2655             :     // Call callbacks to get the rest of the system to fixup other untraced pointers.
    2656           0 :     callWeakPointerZonesCallbacks();
    2657           0 : }
    2658             : 
    2659             : void
    2660           0 : GCRuntime::protectAndHoldArenas(Arena* arenaList)
    2661             : {
    2662           0 :     for (Arena* arena = arenaList; arena; ) {
    2663           0 :         MOZ_ASSERT(arena->allocated());
    2664           0 :         Arena* next = arena->next;
    2665           0 :         if (!next) {
    2666             :             // Prepend to hold list before we protect the memory.
    2667           0 :             arena->next = relocatedArenasToRelease;
    2668           0 :             relocatedArenasToRelease = arenaList;
    2669             :         }
    2670           0 :         ProtectPages(arena, ArenaSize);
    2671           0 :         arena = next;
    2672             :     }
    2673           0 : }
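On POSIX systems, the ProtectPages()/UnprotectPages() pair used here boils down to toggling page permissions, so any stray access to a held arena faults immediately. A hedged sketch, assuming the engine's cross-platform wrappers behave like mprotect():

    #include <cstddef>
    #include <sys/mman.h>

    // Make a page range completely inaccessible; reads and writes fault.
    bool protectPagesSketch(void* p, size_t len) {
        return mprotect(p, len, PROT_NONE) == 0;
    }

    // Restore normal access before the memory is released for reuse.
    bool unprotectPagesSketch(void* p, size_t len) {
        return mprotect(p, len, PROT_READ | PROT_WRITE) == 0;
    }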
    2674             : 
    2675             : void
    2676          15 : GCRuntime::unprotectHeldRelocatedArenas()
    2677             : {
    2678          15 :     for (Arena* arena = relocatedArenasToRelease; arena; arena = arena->next) {
    2679           0 :         UnprotectPages(arena, ArenaSize);
    2680           0 :         MOZ_ASSERT(arena->allocated());
    2681             :     }
    2682          15 : }
    2683             : 
    2684             : void
    2685          15 : GCRuntime::releaseRelocatedArenas(Arena* arenaList)
    2686             : {
    2687          30 :     AutoLockGC lock(rt);
    2688          15 :     releaseRelocatedArenasWithoutUnlocking(arenaList, lock);
    2689          15 : }
    2690             : 
    2691             : void
    2692          15 : GCRuntime::releaseRelocatedArenasWithoutUnlocking(Arena* arenaList, const AutoLockGC& lock)
    2693             : {
    2694             :     // Release the relocated arenas, now containing only forwarding pointers
    2695          15 :     unsigned count = 0;
    2696          15 :     while (arenaList) {
    2697           0 :         Arena* arena = arenaList;
    2698           0 :         arenaList = arenaList->next;
    2699             : 
    2700             :         // Clear the mark bits
    2701           0 :         arena->unmarkAll();
    2702             : 
    2703             :         // Mark arena as empty
    2704           0 :         arena->setAsFullyUnused();
    2705             : 
    2706             : #if defined(JS_CRASH_DIAGNOSTICS) || defined(JS_GC_ZEAL)
    2707           0 :         JS_POISON(reinterpret_cast<void*>(arena->thingsStart()),
    2708           0 :                   JS_MOVED_TENURED_PATTERN, arena->getThingsSpan());
    2709             : #endif
    2710             : 
    2711           0 :         releaseArena(arena, lock);
    2712           0 :         ++count;
    2713             :     }
    2714          15 : }
    2715             : 
    2716             : // In debug mode we don't always release relocated arenas straight away.
    2717             : // Sometimes we protect them instead and hold onto them until the next GC sweep
    2718             : // phase to catch any pointers to them that didn't get forwarded.
    2719             : 
    2720             : void
    2721          15 : GCRuntime::releaseHeldRelocatedArenas()
    2722             : {
    2723             : #ifdef DEBUG
    2724          15 :     unprotectHeldRelocatedArenas();
    2725          15 :     Arena* arenas = relocatedArenasToRelease;
    2726          15 :     relocatedArenasToRelease = nullptr;
    2727          15 :     releaseRelocatedArenas(arenas);
    2728             : #endif
    2729          15 : }
    2730             : 
    2731             : void
    2732           0 : GCRuntime::releaseHeldRelocatedArenasWithoutUnlocking(const AutoLockGC& lock)
    2733             : {
    2734             : #ifdef DEBUG
    2735           0 :     unprotectHeldRelocatedArenas();
    2736           0 :     releaseRelocatedArenasWithoutUnlocking(relocatedArenasToRelease, lock);
    2737           0 :     relocatedArenasToRelease = nullptr;
    2738             : #endif
    2739           0 : }
    2740             : 
    2741          31 : ArenaLists::ArenaLists(JSRuntime* rt, ZoneGroup* group)
    2742             :   : runtime_(rt),
    2743             :     freeLists_(group),
    2744             :     arenaLists_(group),
    2745             :     backgroundFinalizeState_(),
    2746             :     arenaListsToSweep_(),
    2747             :     incrementalSweptArenaKind(group, AllocKind::LIMIT),
    2748             :     incrementalSweptArenas(group),
    2749             :     gcShapeArenasToUpdate(group, nullptr),
    2750             :     gcAccessorShapeArenasToUpdate(group, nullptr),
    2751             :     gcScriptArenasToUpdate(group, nullptr),
    2752             :     gcObjectGroupArenasToUpdate(group, nullptr),
    2753             :     savedObjectArenas_(group),
    2754          31 :     savedEmptyObjectArenas(group, nullptr)
    2755             : {
    2756         930 :     for (auto i : AllAllocKinds())
    2757         899 :         freeLists(i) = &placeholder;
    2758         930 :     for (auto i : AllAllocKinds())
    2759         899 :         backgroundFinalizeState(i) = BFS_DONE;
    2760         930 :     for (auto i : AllAllocKinds())
    2761         899 :         arenaListsToSweep(i) = nullptr;
    2762          31 : }
    2763             : 
    2764             : void
    2765           0 : ReleaseArenaList(JSRuntime* rt, Arena* arena, const AutoLockGC& lock)
    2766             : {
    2767             :     Arena* next;
    2768           0 :     for (; arena; arena = next) {
    2769           0 :         next = arena->next;
    2770           0 :         rt->gc.releaseArena(arena, lock);
    2771             :     }
    2772           0 : }
    2773             : 
    2774           0 : ArenaLists::~ArenaLists()
    2775             : {
    2776           0 :     AutoLockGC lock(runtime_);
    2777             : 
    2778           0 :     for (auto i : AllAllocKinds()) {
    2779             :         /*
    2780             :          * We can only call this during shutdown, after the last GC, when
    2781             :          * background finalization is disabled.
    2782             :          */
    2783           0 :         MOZ_ASSERT(backgroundFinalizeState(i) == BFS_DONE);
    2784           0 :         ReleaseArenaList(runtime_, arenaLists(i).head(), lock);
    2785             :     }
    2786           0 :     ReleaseArenaList(runtime_, incrementalSweptArenas.ref().head(), lock);
    2787             : 
    2788           0 :     for (auto i : ObjectAllocKinds())
    2789           0 :         ReleaseArenaList(runtime_, savedObjectArenas(i).head(), lock);
    2790           0 :     ReleaseArenaList(runtime_, savedEmptyObjectArenas, lock);
    2791           0 : }
    2792             : 
    2793             : void
    2794           0 : ArenaLists::queueForForegroundSweep(FreeOp* fop, const FinalizePhase& phase)
    2795             : {
    2796           0 :     gcstats::AutoPhase ap(fop->runtime()->gc.stats(), phase.statsPhase);
    2797           0 :     for (auto kind : phase.kinds)
    2798           0 :         queueForForegroundSweep(fop, kind);
    2799           0 : }
    2800             : 
    2801             : void
    2802           0 : ArenaLists::queueForForegroundSweep(FreeOp* fop, AllocKind thingKind)
    2803             : {
    2804           0 :     MOZ_ASSERT(!IsBackgroundFinalized(thingKind));
    2805           0 :     MOZ_ASSERT(backgroundFinalizeState(thingKind) == BFS_DONE);
    2806           0 :     MOZ_ASSERT(!arenaListsToSweep(thingKind));
    2807             : 
    2808           0 :     arenaListsToSweep(thingKind) = arenaLists(thingKind).head();
    2809           0 :     arenaLists(thingKind).clear();
    2810           0 : }
    2811             : 
    2812             : void
    2813           0 : ArenaLists::queueForBackgroundSweep(FreeOp* fop, const FinalizePhase& phase)
    2814             : {
    2815           0 :     gcstats::AutoPhase ap(fop->runtime()->gc.stats(), phase.statsPhase);
    2816           0 :     for (auto kind : phase.kinds)
    2817           0 :         queueForBackgroundSweep(fop, kind);
    2818           0 : }
    2819             : 
    2820             : inline void
    2821           0 : ArenaLists::queueForBackgroundSweep(FreeOp* fop, AllocKind thingKind)
    2822             : {
    2823           0 :     MOZ_ASSERT(IsBackgroundFinalized(thingKind));
    2824             : 
    2825           0 :     ArenaList* al = &arenaLists(thingKind);
    2826           0 :     if (al->isEmpty()) {
    2827           0 :         MOZ_ASSERT(backgroundFinalizeState(thingKind) == BFS_DONE);
    2828           0 :         return;
    2829             :     }
    2830             : 
    2831           0 :     MOZ_ASSERT(backgroundFinalizeState(thingKind) == BFS_DONE);
    2832             : 
    2833           0 :     arenaListsToSweep(thingKind) = al->head();
    2834           0 :     al->clear();
    2835           0 :     backgroundFinalizeState(thingKind) = BFS_RUN;
    2836             : }
    2837             : 
    2838             : /*static*/ void
    2839           0 : ArenaLists::backgroundFinalize(FreeOp* fop, Arena* listHead, Arena** empty)
    2840             : {
    2841           0 :     MOZ_ASSERT(listHead);
    2842           0 :     MOZ_ASSERT(empty);
    2843             : 
    2844           0 :     AllocKind thingKind = listHead->getAllocKind();
    2845           0 :     Zone* zone = listHead->zone;
    2846             : 
    2847           0 :     size_t thingsPerArena = Arena::thingsPerArena(thingKind);
    2848           0 :     SortedArenaList finalizedSorted(thingsPerArena);
    2849             : 
    2850           0 :     auto unlimited = SliceBudget::unlimited();
    2851           0 :     FinalizeArenas(fop, &listHead, finalizedSorted, thingKind, unlimited, KEEP_ARENAS);
    2852           0 :     MOZ_ASSERT(!listHead);
    2853             : 
    2854           0 :     finalizedSorted.extractEmpty(empty);
    2855             : 
    2856             :     // When arenas are queued for background finalization, all arenas are moved
    2857             :     // to arenaListsToSweep[], leaving the arenaLists[] empty. However, new
    2858             :     // arenas may be allocated before background finalization finishes; now that
    2859             :     // finalization is complete, we want to merge these lists back together.
    2860           0 :     ArenaLists* lists = &zone->arenas;
    2861           0 :     ArenaList* al = &lists->arenaLists(thingKind);
    2862             : 
    2863             :     // Flatten |finalizedSorted| into a regular ArenaList.
    2864           0 :     ArenaList finalized = finalizedSorted.toArenaList();
    2865             : 
    2866             :     // We must take the GC lock to be able to safely modify the ArenaList;
    2867             :     // however, this does not by itself make the changes visible to all threads,
    2868             :     // as not all threads take the GC lock to read the ArenaLists.
    2869             :     // That safety is provided by the ReleaseAcquire memory ordering of the
    2870             :     // background finalize state, which we explicitly set as the final step.
    2871             :     {
    2872           0 :         AutoLockGC lock(lists->runtime_);
    2873           0 :         MOZ_ASSERT(lists->backgroundFinalizeState(thingKind) == BFS_RUN);
    2874             : 
    2875             :         // Join |al| and |finalized| into a single list.
    2876           0 :         *al = finalized.insertListWithCursorAtEnd(*al);
    2877             : 
    2878           0 :         lists->arenaListsToSweep(thingKind) = nullptr;
    2879             :     }
    2880             : 
    2881           0 :     lists->backgroundFinalizeState(thingKind) = BFS_DONE;
    2882           0 : }
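That final store is the publication point described in the comment above: merge the lists first, then store BFS_DONE last with release semantics, so any thread that observes BFS_DONE with acquire semantics also observes the merged lists. A minimal std::atomic sketch of the pattern:

    #include <atomic>
    #include <cstdio>

    enum BFSState { BFS_RUN, BFS_DONE };

    std::atomic<BFSState> state{BFS_RUN};
    int mergedLists = 0; // stands in for the ArenaList contents

    void finisher() {
        mergedLists = 42;                                 // merge (under the lock)
        state.store(BFS_DONE, std::memory_order_release); // publish last
    }

    int reader() {
        // A reader that sees BFS_DONE is guaranteed to see mergedLists == 42.
        if (state.load(std::memory_order_acquire) == BFS_DONE)
            return mergedLists;
        return -1;
    }

    int main() {
        finisher();
        printf("%d\n", reader()); // 42
        return 0;
    }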
    2883             : 
    2884             : void
    2885           0 : ArenaLists::mergeForegroundSweptObjectArenas()
    2886             : {
    2887           0 :     AutoLockGC lock(runtime_);
    2888           0 :     ReleaseArenaList(runtime_, savedEmptyObjectArenas, lock);
    2889           0 :     savedEmptyObjectArenas = nullptr;
    2890             : 
    2891           0 :     mergeSweptArenas(AllocKind::OBJECT0);
    2892           0 :     mergeSweptArenas(AllocKind::OBJECT2);
    2893           0 :     mergeSweptArenas(AllocKind::OBJECT4);
    2894           0 :     mergeSweptArenas(AllocKind::OBJECT8);
    2895           0 :     mergeSweptArenas(AllocKind::OBJECT12);
    2896           0 :     mergeSweptArenas(AllocKind::OBJECT16);
    2897           0 : }
    2898             : 
    2899             : inline void
    2900           0 : ArenaLists::mergeSweptArenas(AllocKind thingKind)
    2901             : {
    2902           0 :     ArenaList* al = &arenaLists(thingKind);
    2903           0 :     ArenaList* saved = &savedObjectArenas(thingKind);
    2904             : 
    2905           0 :     *al = saved->insertListWithCursorAtEnd(*al);
    2906           0 :     saved->clear();
    2907           0 : }
    2908             : 
    2909             : void
    2910           0 : ArenaLists::queueForegroundThingsForSweep(FreeOp* fop)
    2911             : {
    2912           0 :     gcShapeArenasToUpdate = arenaListsToSweep(AllocKind::SHAPE);
    2913           0 :     gcAccessorShapeArenasToUpdate = arenaListsToSweep(AllocKind::ACCESSOR_SHAPE);
    2914           0 :     gcObjectGroupArenasToUpdate = arenaListsToSweep(AllocKind::OBJECT_GROUP);
    2915           0 :     gcScriptArenasToUpdate = arenaListsToSweep(AllocKind::SCRIPT);
    2916           0 : }
    2917             : 
    2918           0 : SliceBudget::SliceBudget()
    2919           0 :   : timeBudget(UnlimitedTimeBudget), workBudget(UnlimitedWorkBudget)
    2920             : {
    2921           0 :     makeUnlimited();
    2922           0 : }
    2923             : 
    2924           3 : SliceBudget::SliceBudget(TimeBudget time)
    2925           3 :   : timeBudget(time), workBudget(UnlimitedWorkBudget)
    2926             : {
    2927           3 :     if (time.budget < 0) {
    2928           0 :         makeUnlimited();
    2929             :     } else {
    2930             :         // Note: TimeBudget(0) is equivalent to WorkBudget(CounterReset).
    2931           3 :         deadline = PRMJ_Now() + time.budget * PRMJ_USEC_PER_MSEC;
    2932           3 :         counter = CounterReset;
    2933             :     }
    2934           3 : }
    2935             : 
    2936           0 : SliceBudget::SliceBudget(WorkBudget work)
    2937           0 :   : timeBudget(UnlimitedTimeBudget), workBudget(work)
    2938             : {
    2939           0 :     if (work.budget < 0) {
    2940           0 :         makeUnlimited();
    2941             :     } else {
    2942           0 :         deadline = 0;
    2943           0 :         counter = work.budget;
    2944             :     }
    2945           0 : }
    2946             : 
    2947             : int
    2948           0 : SliceBudget::describe(char* buffer, size_t maxlen) const
    2949             : {
    2950           0 :     if (isUnlimited())
    2951           0 :         return snprintf(buffer, maxlen, "unlimited");
    2952           0 :     else if (isWorkBudget())
    2953           0 :         return snprintf(buffer, maxlen, "work(%" PRId64 ")", workBudget.budget);
    2954             :     else
    2955           0 :         return snprintf(buffer, maxlen, "%" PRId64 "ms", timeBudget.budget);
    2956             : }
    2957             : 
    2958             : bool
    2959           6 : SliceBudget::checkOverBudget()
    2960             : {
    2961           6 :     bool over = PRMJ_Now() >= deadline;
    2962           6 :     if (!over)
    2963           0 :         counter = CounterReset;
    2964           6 :     return over;
    2965             : }
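Taken together, a typical slice loop charges the budget per unit of work and yields when the deadline passes. A usage sketch in engine context (workRemaining() and doSomeWork() are hypothetical stand-ins, not engine APIs):

    SliceBudget budget(TimeBudget(10));  // a 10 ms slice
    while (workRemaining()) {
        doSomeWork();
        budget.step();               // charge one unit of work
        if (budget.isOverBudget())   // checks the clock once the counter runs out
            break;                   // yield back to the mutator
    }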
    2966             : 
    2967             : void
    2968           0 : GCRuntime::requestMajorGC(JS::gcreason::Reason reason)
    2969             : {
    2970           0 :     MOZ_ASSERT(!CurrentThreadIsPerformingGC());
    2971             : 
    2972           0 :     if (majorGCRequested())
    2973           0 :         return;
    2974             : 
    2975           0 :     majorGCTriggerReason = reason;
    2976             : 
    2977             :     // There's no need to use RequestInterruptUrgent here. It's slower because
    2978             :     // it has to interrupt (looping) Ion code, but loops in Ion code that
    2979             :     // affect GC will have an explicit interrupt check.
    2980           0 :     TlsContext.get()->requestInterrupt(JSContext::RequestInterruptCanWait);
    2981             : }
    2982             : 
    2983             : void
    2984          18 : Nursery::requestMinorGC(JS::gcreason::Reason reason) const
    2985             : {
    2986          18 :     MOZ_ASSERT(CurrentThreadCanAccessRuntime(runtime()));
    2987          18 :     MOZ_ASSERT(!CurrentThreadIsPerformingGC());
    2988             : 
    2989          18 :     if (minorGCRequested())
    2990           0 :         return;
    2991             : 
    2992          18 :     minorGCTriggerReason_ = reason;
    2993             : 
    2994             :     // See comment in requestMajorGC.
    2995          18 :     TlsContext.get()->requestInterrupt(JSContext::RequestInterruptCanWait);
    2996             : }
    2997             : 
    2998             : bool
    2999           0 : GCRuntime::triggerGC(JS::gcreason::Reason reason)
    3000             : {
    3001             :     /*
    3002             :      * Don't trigger GCs if this is being called off the active thread from
    3003             :      * onTooMuchMalloc().
    3004             :      */
    3005           0 :     if (!CurrentThreadCanAccessRuntime(rt))
    3006           0 :         return false;
    3007             : 
    3008             :     /* GC is already running. */
    3009           0 :     if (JS::CurrentThreadIsHeapCollecting())
    3010           0 :         return false;
    3011             : 
    3012           0 :     JS::PrepareForFullGC(rt->activeContextFromOwnThread());
    3013           0 :     requestMajorGC(reason);
    3014           0 :     return true;
    3015             : }
    3016             : 
    3017             : void
    3018        6498 : GCRuntime::maybeAllocTriggerZoneGC(Zone* zone, const AutoLockGC& lock)
    3019             : {
    3020        6498 :     size_t usedBytes = zone->usage.gcBytes();
    3021        6498 :     size_t thresholdBytes = zone->threshold.gcTriggerBytes();
    3022             : 
    3023        6498 :     if (!CurrentThreadCanAccessRuntime(rt)) {
    3024             :         /* Zones in use by a helper thread can't be collected. */
    3025         691 :         MOZ_ASSERT(zone->usedByHelperThread() || zone->isAtomsZone());
    3026         691 :         return;
    3027             :     }
    3028             : 
    3029        5807 :     if (usedBytes >= thresholdBytes) {
    3030             :         /*
    3031             :          * The threshold has been surpassed, immediately trigger a GC,
    3032             :          * which will be done non-incrementally.
    3033             :          */
    3034           0 :         triggerZoneGC(zone, JS::gcreason::ALLOC_TRIGGER, usedBytes, thresholdBytes);
    3035             :     } else {
    3036             :         bool wouldInterruptCollection;
    3037             :         size_t igcThresholdBytes;
    3038             :         double zoneAllocThresholdFactor;
    3039             : 
    3040        5964 :         wouldInterruptCollection = isIncrementalGCInProgress() &&
    3041         157 :             !zone->isCollecting();
    3042       11614 :         zoneAllocThresholdFactor = wouldInterruptCollection ?
    3043           0 :             tunables.zoneAllocThresholdFactorAvoidInterrupt() :
    3044        5807 :             tunables.zoneAllocThresholdFactor();
    3045             : 
    3046        5807 :         igcThresholdBytes = thresholdBytes * zoneAllocThresholdFactor;
    3047             : 
    3048        5807 :         if (usedBytes >= igcThresholdBytes) {
    3049             :             // Reduce the delay to the start of the next incremental slice.
    3050           0 :             if (zone->gcDelayBytes < ArenaSize)
    3051           0 :                 zone->gcDelayBytes = 0;
    3052             :             else
    3053           0 :                 zone->gcDelayBytes -= ArenaSize;
    3054             : 
    3055           0 :             if (!zone->gcDelayBytes) {
    3056             :                 // Start or continue an in progress incremental GC. We do this
    3057             :                 // to try to avoid performing non-incremental GCs on zones
    3058             :                 // which allocate a lot of data, even when incremental slices
    3059             :                 // can't be triggered via scheduling in the event loop.
    3060           0 :                 triggerZoneGC(zone, JS::gcreason::ALLOC_TRIGGER, usedBytes, igcThresholdBytes);
    3061             : 
    3062             :                 // Delay the next slice until a certain amount of allocation
    3063             :                 // has been performed.
    3064           0 :                 zone->gcDelayBytes = tunables.zoneAllocDelayBytes();
    3065             :             }
    3066             :         }
    3067             :     }
    3068             : }
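A worked example of the two-level trigger (all numbers illustrative; the real values come from the scheduling tunables):

    size_t thresholdBytes = 30 * 1024 * 1024;   // zone trigger: 30 MiB (illustrative)
    double factor = 0.9;                        // assumed zoneAllocThresholdFactor()
    size_t igcThresholdBytes = size_t(thresholdBytes * factor); // 27 MiB

    // usedBytes >= 30 MiB -> immediate, non-incremental zone GC
    // usedBytes >= 27 MiB -> start/continue an incremental GC, then wait
    //                        zoneAllocDelayBytes of allocation between slices
    // usedBytes <  27 MiB -> no action yet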
    3069             : 
    3070             : bool
    3071           0 : GCRuntime::triggerZoneGC(Zone* zone, JS::gcreason::Reason reason, size_t used, size_t threshold)
    3072             : {
    3073           0 :     MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
    3074             : 
    3075             :     /* GC is already running. */
    3076           0 :     if (JS::CurrentThreadIsHeapCollecting())
    3077           0 :         return false;
    3078             : 
    3079             : #ifdef JS_GC_ZEAL
    3080           0 :     if (hasZealMode(ZealMode::Alloc)) {
    3081           0 :         MOZ_RELEASE_ASSERT(triggerGC(reason));
    3082           0 :         return true;
    3083             :     }
    3084             : #endif
    3085             : 
    3086           0 :     if (zone->isAtomsZone()) {
    3087             :         /* We can't do a zone GC of the atoms compartment. */
    3088           0 :         if (TlsContext.get()->keepAtoms || rt->hasHelperThreadZones()) {
    3089             :             /* Skip GC and retrigger later, since atoms zone won't be collected
    3090             :              * if keepAtoms is true. */
    3091           0 :             fullGCForAtomsRequested_ = true;
    3092           0 :             return false;
    3093             :         }
    3094           0 :         stats().recordTrigger(used, threshold);
    3095           0 :         MOZ_RELEASE_ASSERT(triggerGC(reason));
    3096           0 :         return true;
    3097             :     }
    3098             : 
    3099           0 :     stats().recordTrigger(used, threshold);
    3100           0 :     PrepareZoneForGC(zone);
    3101           0 :     requestMajorGC(reason);
    3102           0 :     return true;
    3103             : }
    3104             : 
    3105             : void
    3106        3878 : GCRuntime::maybeGC(Zone* zone)
    3107             : {
    3108        3878 :     MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
    3109             : 
    3110             : #ifdef JS_GC_ZEAL
    3111        3878 :     if (hasZealMode(ZealMode::Alloc) || hasZealMode(ZealMode::RootsChange)) {
    3112           0 :         JS::PrepareForFullGC(rt->activeContextFromOwnThread());
    3113           0 :         gc(GC_NORMAL, JS::gcreason::DEBUG_GC);
    3114           0 :         return;
    3115             :     }
    3116             : #endif
    3117             : 
    3118        3878 :     if (gcIfRequested())
    3119           0 :         return;
    3120             : 
    3121        3878 :     double threshold = zone->threshold.allocTrigger(schedulingState.inHighFrequencyGCMode());
    3122        3878 :     double usedBytes = zone->usage.gcBytes();
    3123        7394 :     if (usedBytes > 1024 * 1024 && usedBytes >= threshold &&
    3124        3878 :         !isIncrementalGCInProgress() && !isBackgroundSweeping())
    3125             :     {
    3126           0 :         stats().recordTrigger(usedBytes, threshold);
    3127           0 :         PrepareZoneForGC(zone);
    3128           0 :         startGC(GC_NORMAL, JS::gcreason::EAGER_ALLOC_TRIGGER);
    3129             :     }
    3130             : }
    3131             : 
    3132             : // Do all possible decommit immediately from the current thread without
    3133             : // releasing the GC lock or allocating any memory.
    3134             : void
    3135           0 : GCRuntime::decommitAllWithoutUnlocking(const AutoLockGC& lock)
    3136             : {
    3137           0 :     MOZ_ASSERT(emptyChunks(lock).count() == 0);
    3138           0 :     for (ChunkPool::Iter chunk(availableChunks(lock)); !chunk.done(); chunk.next())
    3139           0 :         chunk->decommitAllArenasWithoutUnlocking(lock);
    3140           0 :     MOZ_ASSERT(availableChunks(lock).verify());
    3141           0 : }
    3142             : 
    3143             : void
    3144           0 : GCRuntime::startDecommit()
    3145             : {
    3146           0 :     MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
    3147           0 :     MOZ_ASSERT(!decommitTask.isRunning());
    3148             : 
    3149             :     // If we are allocating heavily enough to trigger "high frequency" GC, then
    3150             :     // skip decommit so that we do not compete with the mutator.
    3151           0 :     if (schedulingState.inHighFrequencyGCMode())
    3152           0 :         return;
    3153             : 
    3154           0 :     BackgroundDecommitTask::ChunkVector toDecommit;
    3155             :     {
    3156           0 :         AutoLockGC lock(rt);
    3157             : 
    3158             :         // Verify that all entries in the empty chunks pool are already decommitted.
    3159           0 :         for (ChunkPool::Iter chunk(emptyChunks(lock)); !chunk.done(); chunk.next())
    3160           0 :             MOZ_ASSERT(!chunk->info.numArenasFreeCommitted);
    3161             : 
    3162             :         // Since we release the GC lock while doing the decommit syscall below,
    3163             :         // it is dangerous to iterate the available list directly, as the active
    3164             :         // thread could modify it concurrently. Instead, we build and pass an
    3165             :         // explicit Vector containing the Chunks we want to visit.
    3166           0 :         MOZ_ASSERT(availableChunks(lock).verify());
    3167           0 :         for (ChunkPool::Iter iter(availableChunks(lock)); !iter.done(); iter.next()) {
    3168           0 :             if (!toDecommit.append(iter.get())) {
    3169             :                 // The OOM handler does a full, immediate decommit.
    3170           0 :                 return onOutOfMallocMemory(lock);
    3171             :             }
    3172             :         }
    3173             :     }
    3174           0 :     decommitTask.setChunksToScan(toDecommit);
    3175             : 
    3176           0 :     if (sweepOnBackgroundThread && decommitTask.start())
    3177           0 :         return;
    3178             : 
    3179           0 :     decommitTask.runFromActiveCooperatingThread(rt);
    3180             : }
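                     : 
                     : // Sketch of the snapshot-then-unlock idiom used above: copy the shared list
                     : // while the lock is held, then operate on the private copy with the lock
                     : // released, so the active thread can mutate the original concurrently.
                     : // decommitChunk() is a hypothetical stand-in for the decommit work.
                     : //
                     : //   std::vector<Chunk*> snapshot;
                     : //   {
                     : //       AutoLockGC lock(rt);                 // hold the lock only to copy
                     : //       for (ChunkPool::Iter c(availableChunks(lock)); !c.done(); c.next())
                     : //           snapshot.push_back(c.get());
                     : //   }
                     : //   for (Chunk* chunk : snapshot)            // lock dropped: safe, we own
                     : //       decommitChunk(chunk);                // the copy (hypothetical)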
    3181             : 
    3182             : void
    3183           0 : js::gc::BackgroundDecommitTask::setChunksToScan(ChunkVector &chunks)
    3184             : {
    3185           0 :     MOZ_ASSERT(CurrentThreadCanAccessRuntime(runtime()));
    3186           0 :     MOZ_ASSERT(!isRunning());
    3187           0 :     MOZ_ASSERT(toDecommit.ref().empty());
    3188           0 :     Swap(toDecommit.ref(), chunks);
    3189           0 : }
    3190             : 
    3191             : /* virtual */ void
    3192           0 : js::gc::BackgroundDecommitTask::run()
    3193             : {
    3194           0 :     AutoLockGC lock(runtime());
    3195             : 
    3196           0 :     for (Chunk* chunk : toDecommit.ref()) {
    3197             : 
    3198             :         // The arena list is not doubly-linked, so we have to work in the free
    3199             :         // list order and not in the natural order.
    3200           0 :         while (chunk->info.numArenasFreeCommitted) {
    3201           0 :             bool ok = chunk->decommitOneFreeArena(runtime(), lock);
    3202             : 
    3203             :             // If we are low enough on memory that we can't update the page
    3204             :             // tables, or if we need to return for any other reason, break out
    3205             :             // of the loop.
    3206           0 :             if (cancel_ || !ok)
    3207           0 :                 break;
    3208             :         }
    3209             :     }
    3210           0 :     toDecommit.ref().clearAndFree();
    3211             : 
    3212           0 :     ChunkPool toFree = runtime()->gc.expireEmptyChunkPool(lock);
    3213           0 :     if (toFree.count()) {
    3214           0 :         AutoUnlockGC unlock(lock);
    3215           0 :         FreeChunkPool(runtime(), toFree);
    3216             :     }
    3217           0 : }
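                     : 
                     : // The cancel_ test above is a cooperative cancellation point: the task is
                     : // never interrupted mid-syscall, it just polls a flag between units of
                     : // work. A generic sketch of the same pattern (names are illustrative):
                     : //
                     : //   std::atomic<bool> cancelRequested{false};
                     : //
                     : //   void worker() {
                     : //       while (haveWork()) {                       // hypothetical helpers
                     : //           doOneUnitOfWork();
                     : //           if (cancelRequested.load(std::memory_order_relaxed))
                     : //               break;                             // stop at a safe point
                     : //       }
                     : //   }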
    3218             : 
    3219             : void
    3220           0 : GCRuntime::sweepBackgroundThings(ZoneList& zones, LifoAlloc& freeBlocks)
    3221             : {
    3222           0 :     freeBlocks.freeAll();
    3223             : 
    3224           0 :     if (zones.isEmpty())
    3225           0 :         return;
    3226             : 
    3227             :     // We must finalize thing kinds in the order specified by BackgroundFinalizePhases.
    3228           0 :     Arena* emptyArenas = nullptr;
    3229           0 :     FreeOp fop(nullptr);
    3230           0 :     for (unsigned phase = 0 ; phase < ArrayLength(BackgroundFinalizePhases) ; ++phase) {
    3231           0 :         for (Zone* zone = zones.front(); zone; zone = zone->nextZone()) {
    3232           0 :             for (auto kind : BackgroundFinalizePhases[phase].kinds) {
    3233           0 :                 Arena* arenas = zone->arenas.arenaListsToSweep(kind);
    3234           0 :                 MOZ_RELEASE_ASSERT(uintptr_t(arenas) != uintptr_t(-1));
    3235           0 :                 if (arenas)
    3236           0 :                     ArenaLists::backgroundFinalize(&fop, arenas, &emptyArenas);
    3237             :             }
    3238             :         }
    3239             :     }
    3240             : 
    3241           0 :     AutoLockGC lock(rt);
    3242             : 
    3243             :     // Release swept arenas, dropping and reacquiring the lock every so often to
    3244             :     // avoid blocking the active thread from allocating chunks.
    3245             :     static const size_t LockReleasePeriod = 32;
    3246           0 :     size_t releaseCount = 0;
    3247             :     Arena* next;
    3248           0 :     for (Arena* arena = emptyArenas; arena; arena = next) {
    3249           0 :         next = arena->next;
    3250           0 :         rt->gc.releaseArena(arena, lock);
    3251           0 :         releaseCount++;
    3252           0 :         if (releaseCount % LockReleasePeriod == 0) {
    3253           0 :             lock.unlock();
    3254           0 :             lock.lock();
    3255             :         }
    3256             :     }
    3257             : 
    3258           0 :     while (!zones.isEmpty())
    3259           0 :         zones.removeFront();
    3260             : }
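                     : 
                     : // The LockReleasePeriod dance above is a general latency-bounding idiom:
                     : // briefly dropping and retaking a lock every N iterations of a long loop
                     : // gives blocked threads a window to acquire it. Generic sketch, with
                     : // Item and processUnderLock() as placeholders:
                     : //
                     : //   size_t count = 0;
                     : //   for (Item* item = head; item; item = item->next) {
                     : //       processUnderLock(item);
                     : //       if (++count % N == 0) {
                     : //           lock.unlock();   // a waiter can take the lock here
                     : //           lock.lock();
                     : //       }
                     : //   }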
    3261             : 
    3262             : void
    3263          51 : GCRuntime::assertBackgroundSweepingFinished()
    3264             : {
    3265             : #ifdef DEBUG
    3266          51 :     MOZ_ASSERT(backgroundSweepZones.ref().isEmpty());
    3267         423 :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    3268       11160 :         for (auto i : AllAllocKinds()) {
    3269       10788 :             MOZ_ASSERT(!zone->arenas.arenaListsToSweep(i));
    3270       10788 :             MOZ_ASSERT(zone->arenas.doneBackgroundFinalize(i));
    3271             :         }
    3272             :     }
    3273          51 :     MOZ_ASSERT(blocksToFreeAfterSweeping.ref().computedSizeOfExcludingThis() == 0);
    3274             : #endif
    3275          51 : }
    3276             : 
    3277             : void
    3278           0 : GCHelperState::finish()
    3279             : {
    3280             :     // Wait for any lingering background sweeping to finish.
    3281           0 :     waitBackgroundSweepEnd();
    3282           0 : }
    3283             : 
    3284             : GCHelperState::State
    3285          50 : GCHelperState::state(const AutoLockGC&)
    3286             : {
    3287          50 :     return state_;
    3288             : }
    3289             : 
    3290             : void
    3291           0 : GCHelperState::setState(State state, const AutoLockGC&)
    3292             : {
    3293           0 :     state_ = state;
    3294           0 : }
    3295             : 
    3296             : void
    3297           0 : GCHelperState::startBackgroundThread(State newState, const AutoLockGC& lock,
    3298             :                                      const AutoLockHelperThreadState& helperLock)
    3299             : {
    3300           0 :     MOZ_ASSERT(!hasThread && state(lock) == IDLE && newState != IDLE);
    3301           0 :     setState(newState, lock);
    3302             : 
    3303             :     {
    3304           0 :         AutoEnterOOMUnsafeRegion noOOM;
    3305           0 :         if (!HelperThreadState().gcHelperWorklist(helperLock).append(this))
    3306           0 :             noOOM.crash("Could not add to pending GC helpers list");
    3307             :     }
    3308             : 
    3309           0 :     HelperThreadState().notifyAll(GlobalHelperThreadState::PRODUCER, helperLock);
    3310           0 : }
    3311             : 
    3312             : void
    3313           0 : GCHelperState::waitForBackgroundThread(js::AutoLockGC& lock)
    3314             : {
    3315           0 :     while (isBackgroundSweeping())
    3316           0 :         done.wait(lock.guard());
    3317           0 : }
    3318             : 
    3319             : void
    3320           0 : GCHelperState::work()
    3321             : {
    3322           0 :     MOZ_ASSERT(CanUseExtraThreads());
    3323             : 
    3324           0 :     AutoLockGC lock(rt);
    3325             : 
    3326           0 :     MOZ_ASSERT(!hasThread);
    3327           0 :     hasThread = true;
    3328             : 
    3329             : #ifdef DEBUG
    3330           0 :     MOZ_ASSERT(!TlsContext.get()->gcHelperStateThread);
    3331           0 :     TlsContext.get()->gcHelperStateThread = true;
    3332             : #endif
    3333             : 
    3334           0 :     TraceLoggerThread* logger = TraceLoggerForCurrentThread();
    3335             : 
    3336           0 :     switch (state(lock)) {
    3337             : 
    3338             :       case IDLE:
    3339           0 :         MOZ_CRASH("GC helper triggered on idle state");
    3340             :         break;
    3341             : 
    3342             :       case SWEEPING: {
    3343           0 :         AutoTraceLog logSweeping(logger, TraceLogger_GCSweeping);
    3344           0 :         doSweep(lock);
    3345           0 :         MOZ_ASSERT(state(lock) == SWEEPING);
    3346           0 :         break;
    3347             :       }
    3348             : 
    3349             :     }
    3350             : 
    3351           0 :     setState(IDLE, lock);
    3352           0 :     hasThread = false;
    3353             : 
    3354             : #ifdef DEBUG
    3355           0 :     TlsContext.get()->gcHelperStateThread = false;
    3356             : #endif
    3357             : 
    3358           0 :     done.notify_all();
    3359           0 : }
    3360             : 
    3361             : void
    3362           0 : GCRuntime::queueZonesForBackgroundSweep(ZoneList& zones)
    3363             : {
    3364           0 :     AutoLockHelperThreadState helperLock;
    3365           0 :     AutoLockGC lock(rt);
    3366           0 :     backgroundSweepZones.ref().transferFrom(zones);
    3367           0 :     helperState.maybeStartBackgroundSweep(lock, helperLock);
    3368           0 : }
    3369             : 
    3370             : void
    3371           2 : GCRuntime::freeUnusedLifoBlocksAfterSweeping(LifoAlloc* lifo)
    3372             : {
    3373           2 :     MOZ_ASSERT(JS::CurrentThreadIsHeapBusy());
    3374           4 :     AutoLockGC lock(rt);
    3375           2 :     blocksToFreeAfterSweeping.ref().transferUnusedFrom(lifo);
    3376           2 : }
    3377             : 
    3378             : void
    3379           0 : GCRuntime::freeAllLifoBlocksAfterSweeping(LifoAlloc* lifo)
    3380             : {
    3381           0 :     MOZ_ASSERT(JS::CurrentThreadIsHeapBusy());
    3382           0 :     AutoLockGC lock(rt);
    3383           0 :     blocksToFreeAfterSweeping.ref().transferFrom(lifo);
    3384           0 : }
    3385             : 
    3386             : void
    3387         554 : GCRuntime::freeAllLifoBlocksAfterMinorGC(LifoAlloc* lifo)
    3388             : {
    3389         554 :     blocksToFreeAfterMinorGC.ref().transferFrom(lifo);
    3390         554 : }
    3391             : 
    3392             : void
    3393           0 : GCHelperState::maybeStartBackgroundSweep(const AutoLockGC& lock,
    3394             :                                          const AutoLockHelperThreadState& helperLock)
    3395             : {
    3396           0 :     MOZ_ASSERT(CanUseExtraThreads());
    3397             : 
    3398           0 :     if (state(lock) == IDLE)
    3399           0 :         startBackgroundThread(SWEEPING, lock, helperLock);
    3400           0 : }
    3401             : 
    3402             : void
    3403          50 : GCHelperState::waitBackgroundSweepEnd()
    3404             : {
    3405         100 :     AutoLockGC lock(rt);
    3406          50 :     while (state(lock) == SWEEPING)
    3407           0 :         waitForBackgroundThread(lock);
    3408          50 :     if (!rt->gc.isIncrementalGCInProgress())
    3409          50 :         rt->gc.assertBackgroundSweepingFinished();
    3410          50 : }
    3411             : 
    3412             : void
    3413           0 : GCHelperState::doSweep(AutoLockGC& lock)
    3414             : {
    3415             :     // The active thread may call queueZonesForBackgroundSweep() while this is
    3416             :     // running, so we must check that there is no more work to do before exiting.
    3417             : 
    3418           0 :     do {
    3419           0 :         while (!rt->gc.backgroundSweepZones.ref().isEmpty()) {
    3420           0 :             AutoSetThreadIsSweeping threadIsSweeping;
    3421             : 
    3422           0 :             ZoneList zones;
    3423           0 :             zones.transferFrom(rt->gc.backgroundSweepZones.ref());
    3424           0 :             LifoAlloc freeLifoAlloc(JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE);
    3425           0 :             freeLifoAlloc.transferFrom(&rt->gc.blocksToFreeAfterSweeping.ref());
    3426             : 
    3427           0 :             AutoUnlockGC unlock(lock);
    3428           0 :             rt->gc.sweepBackgroundThings(zones, freeLifoAlloc);
    3429             :         }
    3430           0 :     } while (!rt->gc.backgroundSweepZones.ref().isEmpty());
    3431           0 : }
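                     : 
                     : // The loop shape above matters: new zones can be queued while the lock is
                     : // released for sweeping, so the queue must be re-checked after the lock is
                     : // reacquired. The general drain pattern, with placeholder names:
                     : //
                     : //   do {
                     : //       while (!queue.empty()) {
                     : //           Work w = queue.takeAll();    // grab a batch under the lock
                     : //           unlock();
                     : //           process(w);                  // producers may enqueue here
                     : //           lock();
                     : //       }
                     : //   } while (!queue.empty());            // re-check before exiting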
    3432             : 
    3433             : #ifdef DEBUG
    3434             : 
    3435             : bool
    3436     5463249 : GCHelperState::onBackgroundThread()
    3437             : {
    3438     5463249 :     return TlsContext.get()->gcHelperStateThread;
    3439             : }
    3440             : 
    3441             : #endif // DEBUG
    3442             : 
    3443             : bool
    3444           0 : GCRuntime::shouldReleaseObservedTypes()
    3445             : {
    3446           0 :     bool releaseTypes = false;
    3447             : 
    3448             : #ifdef JS_GC_ZEAL
    3449           0 :     if (zealModeBits != 0)
    3450           0 :         releaseTypes = true;
    3451             : #endif
    3452             : 
    3453             :     /* We may miss the exact target GC due to resets. */
    3454           0 :     if (majorGCNumber >= jitReleaseNumber)
    3455           0 :         releaseTypes = true;
    3456             : 
    3457           0 :     if (releaseTypes)
    3458           0 :         jitReleaseNumber = majorGCNumber + JIT_SCRIPT_RELEASE_TYPES_PERIOD;
    3459             : 
    3460           0 :     return releaseTypes;
    3461             : }
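                     : 
                     : // shouldReleaseObservedTypes() is a periodic trigger that tolerates missed
                     : // targets: it compares with >= (an incremental reset can skip past the
                     : // exact GC number) and re-arms relative to the current count. A minimal
                     : // sketch of the same pattern, with illustrative names:
                     : //
                     : //   uint64_t nextRelease = 0;
                     : //
                     : //   bool shouldRelease(uint64_t gcNumber, uint64_t period) {
                     : //       if (gcNumber < nextRelease)
                     : //           return false;
                     : //       nextRelease = gcNumber + period;   // re-arm from "now", not from
                     : //       return true;                       // the (possibly missed) target
                     : //   }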
    3462             : 
    3463             : struct IsAboutToBeFinalizedFunctor {
    3464           0 :     template <typename T> bool operator()(Cell** t) {
    3465           0 :         mozilla::DebugOnly<const Cell*> prior = *t;
    3466           0 :         bool result = IsAboutToBeFinalizedUnbarriered(reinterpret_cast<T**>(t));
    3467             :         // Sweep should not have to deal with moved pointers, since moving GC
    3468             :         // handles updating the UID table manually.
    3469           0 :         MOZ_ASSERT(*t == prior);
    3470           0 :         return result;
    3471             :     }
    3472             : };
    3473             : 
    3474             : /* static */ bool
    3475           0 : UniqueIdGCPolicy::needsSweep(Cell** cell, uint64_t*)
    3476             : {
    3477           0 :     return DispatchTraceKindTyped(IsAboutToBeFinalizedFunctor(), (*cell)->getTraceKind(), cell);
    3478             : }
    3479             : 
    3480             : void
    3481           0 : JS::Zone::sweepUniqueIds(js::FreeOp* fop)
    3482             : {
    3483           0 :     uniqueIds().sweep();
    3484           0 : }
    3485             : 
    3486             : /*
    3487             :  * It's simpler if we preserve the invariant that every zone has at least one
    3488             :  * compartment. If we know we're deleting the entire zone, then
    3489             :  * SweepCompartments is allowed to delete all compartments. In this case,
    3490             :  * |keepAtleastOne| is false. If some objects remain in the zone so that it
    3491             :  * cannot be deleted, then we set |keepAtleastOne| to true, which prohibits
    3492             :  * SweepCompartments from deleting every compartment. Instead, it preserves an
    3493             :  * arbitrary compartment in the zone.
    3494             :  */
    3495             : void
    3496           0 : Zone::sweepCompartments(FreeOp* fop, bool keepAtleastOne, bool destroyingRuntime)
    3497             : {
    3498           0 :     JSRuntime* rt = runtimeFromActiveCooperatingThread();
    3499           0 :     JSDestroyCompartmentCallback callback = rt->destroyCompartmentCallback;
    3500             : 
    3501           0 :     JSCompartment** read = compartments().begin();
    3502           0 :     JSCompartment** end = compartments().end();
    3503           0 :     JSCompartment** write = read;
    3504           0 :     bool foundOne = false;
    3505           0 :     while (read < end) {
    3506           0 :         JSCompartment* comp = *read++;
    3507           0 :         MOZ_ASSERT(!rt->isAtomsCompartment(comp));
    3508             : 
    3509             :         /*
    3510             :          * Don't delete the last compartment if all the ones before it were
    3511             :          * deleted and keepAtleastOne is true.
    3512             :          */
    3513           0 :         bool dontDelete = read == end && !foundOne && keepAtleastOne;
    3514           0 :         if ((!comp->marked && !dontDelete) || destroyingRuntime) {
    3515           0 :             if (callback)
    3516           0 :                 callback(fop, comp);
    3517           0 :             if (comp->principals())
    3518           0 :                 JS_DropPrincipals(TlsContext.get(), comp->principals());
    3519           0 :             js_delete(comp);
    3520           0 :             rt->gc.stats().sweptCompartment();
    3521             :         } else {
    3522           0 :             *write++ = comp;
    3523           0 :             foundOne = true;
    3524             :         }
    3525             :     }
    3526           0 :     compartments().shrinkTo(write - compartments().begin());
    3527           0 :     MOZ_ASSERT_IF(keepAtleastOne, !compartments().empty());
    3528           0 : }
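                     : 
                     : // sweepCompartments() uses the classic read/write compaction idiom: one
                     : // forward pass copies survivors towards the front, then the vector is
                     : // shrunk. It is the manual, side-effecting equivalent of std::remove_if
                     : // followed by erase. Generic sketch; keep() and destroy() are placeholders:
                     : //
                     : //   T** read = vec.begin();
                     : //   T** write = read;
                     : //   while (read < vec.end()) {
                     : //       T* item = *read++;
                     : //       if (keep(item))
                     : //           *write++ = item;          // survivor: compact leftwards
                     : //       else
                     : //           destroy(item);            // non-survivor: release it
                     : //   }
                     : //   vec.shrinkTo(write - vec.begin());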
    3529             : 
    3530             : void
    3531           0 : GCRuntime::sweepZones(FreeOp* fop, ZoneGroup* group, bool destroyingRuntime)
    3532             : {
    3533           0 :     Zone** read = group->zones().begin();
    3534           0 :     Zone** end = group->zones().end();
    3535           0 :     Zone** write = read;
    3536             : 
    3537           0 :     while (read < end) {
    3538           0 :         Zone* zone = *read++;
    3539             : 
    3540           0 :         if (zone->wasGCStarted()) {
    3541           0 :             MOZ_ASSERT(!zone->isQueuedForBackgroundSweep());
    3542           0 :             const bool zoneIsDead = zone->arenas.arenaListsAreEmpty() &&
    3543           0 :                                     !zone->hasMarkedCompartments();
    3544           0 :             if (zoneIsDead || destroyingRuntime)
    3545             :             {
    3546             :                 // We have just finished sweeping, so we should have freed any
    3547             :                 // empty arenas back to their Chunk for future allocation.
    3548           0 :                 zone->arenas.checkEmptyFreeLists();
    3549             : 
    3550             :                 // We are about to delete the Zone; this will leave the Zone*
    3551             :                 // in the arena header dangling if there are any arenas
    3552             :                 // remaining at this point.
    3553             : #ifdef DEBUG
    3554           0 :                 if (!zone->arenas.checkEmptyArenaLists())
    3555           0 :                     arenasEmptyAtShutdown = false;
    3556             : #endif
    3557             : 
    3558           0 :                 zone->sweepCompartments(fop, false, destroyingRuntime);
    3559           0 :                 MOZ_ASSERT(zone->compartments().empty());
    3560           0 :                 MOZ_ASSERT_IF(arenasEmptyAtShutdown, zone->typeDescrObjects().empty());
    3561           0 :                 fop->delete_(zone);
    3562           0 :                 stats().sweptZone();
    3563           0 :                 continue;
    3564             :             }
    3565           0 :             zone->sweepCompartments(fop, true, destroyingRuntime);
    3566             :         }
    3567           0 :         *write++ = zone;
    3568             :     }
    3569           0 :     group->zones().shrinkTo(write - group->zones().begin());
    3570           0 : }
    3571             : 
    3572             : void
    3573           0 : GCRuntime::sweepZoneGroups(FreeOp* fop, bool destroyingRuntime)
    3574             : {
    3575           0 :     MOZ_ASSERT_IF(destroyingRuntime, numActiveZoneIters == 0);
    3576           0 :     MOZ_ASSERT_IF(destroyingRuntime, arenasEmptyAtShutdown);
    3577             : 
    3578           0 :     if (rt->gc.numActiveZoneIters)
    3579           0 :         return;
    3580             : 
    3581           0 :     assertBackgroundSweepingFinished();
    3582             : 
    3583           0 :     ZoneGroup** read = groups.ref().begin();
    3584           0 :     ZoneGroup** end = groups.ref().end();
    3585           0 :     ZoneGroup** write = read;
    3586             : 
    3587           0 :     while (read < end) {
    3588           0 :         ZoneGroup* group = *read++;
    3589           0 :         sweepZones(fop, group, destroyingRuntime);
    3590             : 
    3591           0 :         if (group->zones().empty()) {
    3592           0 :             MOZ_ASSERT(numActiveZoneIters == 0);
    3593           0 :             fop->delete_(group);
    3594             :         } else {
    3595           0 :             *write++ = group;
    3596             :         }
    3597             :     }
    3598           0 :     groups.ref().shrinkTo(write - groups.ref().begin());
    3599             : }
    3600             : 
    3601             : #ifdef DEBUG
    3602             : static const char*
    3603           0 : AllocKindToAscii(AllocKind kind)
    3604             : {
    3605           0 :     switch(kind) {
    3606             : #define MAKE_CASE(allocKind, traceKind, type, sizedType) \
    3607             :       case AllocKind:: allocKind: return #allocKind;
    3608           0 : FOR_EACH_ALLOCKIND(MAKE_CASE)
    3609             : #undef MAKE_CASE
    3610             : 
    3611             :       default:
    3612           0 :         MOZ_CRASH("Unknown AllocKind in AllocKindToAscii");
    3613             :     }
    3614             : }
    3615             : #endif // DEBUG
    3616             : 
    3617             : bool
    3618           0 : ArenaLists::checkEmptyArenaList(AllocKind kind)
    3619             : {
    3620           0 :     size_t num_live = 0;
    3621             : #ifdef DEBUG
    3622           0 :     if (!arenaLists(kind).isEmpty()) {
    3623           0 :         size_t max_cells = 20;
    3624           0 :         char *env = getenv("JS_GC_MAX_LIVE_CELLS");
    3625           0 :         if (env && *env)
    3626           0 :             max_cells = atol(env);
    3627           0 :         for (Arena* current = arenaLists(kind).head(); current; current = current->next) {
    3628           0 :             for (ArenaCellIterUnderGC i(current); !i.done(); i.next()) {
    3629           0 :                 TenuredCell* t = i.getCell();
    3630           0 :                 MOZ_ASSERT(t->isMarkedAny(), "unmarked cells should have been finalized");
    3631           0 :                 if (++num_live <= max_cells) {
    3632           0 :                     fprintf(stderr, "ERROR: GC found live Cell %p of kind %s at shutdown\n",
    3633           0 :                             t, AllocKindToAscii(kind));
    3634             :                 }
    3635             :             }
    3636             :         }
    3637           0 :         fprintf(stderr, "ERROR: GC found %" PRIuSIZE " live Cells at shutdown\n", num_live);
    3638             :     }
    3639             : #endif // DEBUG
    3640           0 :     return num_live == 0;
    3641             : }
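                     : 
                     : // Usage note: when debugging shutdown leaks, the per-kind report above can
                     : // be widened past the default cap of 20 cells via the environment, e.g.
                     : // (shell syntax; the binary and script names are illustrative):
                     : //
                     : //   JS_GC_MAX_LIVE_CELLS=1000 ./js test.js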
    3642             : 
    3643             : class MOZ_RAII js::gc::AutoRunParallelTask : public GCParallelTask
    3644             : {
    3645             :     using Func = void (*)(JSRuntime*);
    3646             : 
    3647             :     Func func_;
    3648             :     gcstats::PhaseKind phase_;
    3649             :     AutoLockHelperThreadState& lock_;
    3650             : 
    3651             :   public:
    3652           2 :     AutoRunParallelTask(JSRuntime* rt, Func func, gcstats::PhaseKind phase,
    3653             :                         AutoLockHelperThreadState& lock)
    3654           2 :       : GCParallelTask(rt),
    3655             :         func_(func),
    3656             :         phase_(phase),
    3657           2 :         lock_(lock)
    3658             :     {
    3659           2 :         runtime()->gc.startTask(*this, phase_, lock_);
    3660           2 :     }
    3661             : 
    3662           4 :     ~AutoRunParallelTask() {
    3663           2 :         runtime()->gc.joinTask(*this, phase_, lock_);
    3664           2 :     }
    3665             : 
    3666           2 :     void run() override {
    3667           2 :         func_(runtime());
    3668           2 :     }
    3669             : };
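                     : 
                     : // Usage sketch for AutoRunParallelTask: the constructor starts the task on
                     : // a helper thread and the destructor joins it, so the function runs in
                     : // parallel for exactly the enclosing scope. SomePhaseFunc stands in for
                     : // any void(JSRuntime*) function:
                     : //
                     : //   {
                     : //       AutoLockHelperThreadState lock;
                     : //       AutoRunParallelTask task(rt, SomePhaseFunc, somePhaseKind, lock);
                     : //       AutoUnlockHelperThreadState unlock(lock);  // let helpers run
                     : //       // ... other work on this thread, overlapped with SomePhaseFunc ...
                     : //   }   // unlock relocks first, then ~AutoRunParallelTask joins the task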
    3670             : 
    3671             : void
    3672           1 : GCRuntime::purgeRuntime(AutoLockForExclusiveAccess& lock)
    3673             : {
    3674           2 :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PURGE);
    3675             : 
    3676         224 :     for (GCCompartmentsIter comp(rt); !comp.done(); comp.next())
    3677         223 :         comp->purge();
    3678             : 
    3679          17 :     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
    3680          16 :         zone->atomCache().clearAndShrink();
    3681          16 :         zone->externalStringCache().purge();
    3682             :     }
    3683             : 
    3684           2 :     for (const CooperatingContext& target : rt->cooperatingContexts()) {
    3685           1 :         freeUnusedLifoBlocksAfterSweeping(&target.context()->tempLifoAlloc());
    3686           1 :         target.context()->interpreterStack().purge(rt);
    3687           1 :         target.context()->frontendCollectionPool().purge();
    3688             :     }
    3689             : 
    3690           1 :     rt->caches().gsnCache.purge();
    3691           1 :     rt->caches().envCoordinateNameCache.purge();
    3692           1 :     rt->caches().newObjectCache.purge();
    3693           1 :     rt->caches().nativeIterCache.purge();
    3694           1 :     rt->caches().uncompressedSourceCache.purge();
    3695           1 :     if (rt->caches().evalCache.initialized())
    3696           1 :         rt->caches().evalCache.clear();
    3697             : 
    3698           1 :     if (auto cache = rt->maybeThisRuntimeSharedImmutableStrings())
    3699           1 :         cache->purge();
    3700             : 
    3701           1 :     rt->promiseTasksToDestroy.lock()->clear();
    3702             : 
    3703           1 :     MOZ_ASSERT(unmarkGrayStack.empty());
    3704           1 :     unmarkGrayStack.clearAndFree();
    3705           1 : }
    3706             : 
    3707             : bool
    3708         223 : GCRuntime::shouldPreserveJITCode(JSCompartment* comp, int64_t currentTime,
    3709             :                                  JS::gcreason::Reason reason, bool canAllocateMoreCode)
    3710             : {
    3711         223 :     if (cleanUpEverything)
    3712           0 :         return false;
    3713         223 :     if (!canAllocateMoreCode)
    3714           0 :         return false;
    3715             : 
    3716         223 :     if (alwaysPreserveCode)
    3717           0 :         return true;
    3718         223 :     if (comp->preserveJitCode())
    3719           0 :         return true;
    3720         223 :     if (comp->lastAnimationTime + PRMJ_USEC_PER_SEC >= currentTime)
    3721           0 :         return true;
    3722         223 :     if (reason == JS::gcreason::DEBUG_GC)
    3723           0 :         return true;
    3724             : 
    3725         223 :     return false;
    3726             : }
    3727             : 
    3728             : #ifdef DEBUG
    3729             : class CompartmentCheckTracer : public JS::CallbackTracer
    3730             : {
    3731             :     void onChild(const JS::GCCellPtr& thing) override;
    3732             : 
    3733             :   public:
    3734           0 :     explicit CompartmentCheckTracer(JSRuntime* rt)
    3735           0 :       : JS::CallbackTracer(rt), src(nullptr), zone(nullptr), compartment(nullptr)
    3736           0 :     {}
    3737             : 
    3738             :     Cell* src;
    3739             :     JS::TraceKind srcKind;
    3740             :     Zone* zone;
    3741             :     JSCompartment* compartment;
    3742             : };
    3743             : 
    3744             : namespace {
    3745             : struct IsDestComparatorFunctor {
    3746             :     JS::GCCellPtr dst_;
    3747           0 :     explicit IsDestComparatorFunctor(JS::GCCellPtr dst) : dst_(dst) {}
    3748             : 
    3749           0 :     template <typename T> bool operator()(T* t) { return (*t) == dst_.asCell(); }
    3750             : };
    3751             : } // namespace (anonymous)
    3752             : 
    3753             : static bool
    3754           0 : InCrossCompartmentMap(JSObject* src, JS::GCCellPtr dst)
    3755             : {
    3756           0 :     JSCompartment* srccomp = src->compartment();
    3757             : 
    3758           0 :     if (dst.is<JSObject>()) {
    3759           0 :         Value key = ObjectValue(dst.as<JSObject>());
    3760           0 :         if (WrapperMap::Ptr p = srccomp->lookupWrapper(key)) {
    3761           0 :             if (*p->value().unsafeGet() == ObjectValue(*src))
    3762           0 :                 return true;
    3763             :         }
    3764             :     }
    3765             : 
    3766             :     /*
    3767             :      * If the cross-compartment edge is caused by the debugger, then we don't
    3768             :      * know the right hashtable key, so we have to iterate.
    3769             :      */
    3770           0 :     for (JSCompartment::WrapperEnum e(srccomp); !e.empty(); e.popFront()) {
    3771           0 :         if (e.front().mutableKey().applyToWrapped(IsDestComparatorFunctor(dst)) &&
    3772           0 :             ToMarkable(e.front().value().unbarrieredGet()) == src)
    3773             :         {
    3774           0 :             return true;
    3775             :         }
    3776             :     }
    3777             : 
    3778           0 :     return false;
    3779             : }
    3780             : 
    3781             : struct MaybeCompartmentFunctor {
    3782           0 :     template <typename T> JSCompartment* operator()(T* t) { return t->maybeCompartment(); }
    3783             : };
    3784             : 
    3785             : void
    3786           0 : CompartmentCheckTracer::onChild(const JS::GCCellPtr& thing)
    3787             : {
    3788           0 :     JSCompartment* comp = DispatchTyped(MaybeCompartmentFunctor(), thing);
    3789           0 :     if (comp && compartment) {
    3790           0 :         MOZ_ASSERT(comp == compartment || runtime()->isAtomsCompartment(comp) ||
    3791             :                    (srcKind == JS::TraceKind::Object &&
    3792             :                     InCrossCompartmentMap(static_cast<JSObject*>(src), thing)));
    3793             :     } else {
    3794           0 :         TenuredCell* tenured = TenuredCell::fromPointer(thing.asCell());
    3795           0 :         Zone* thingZone = tenured->zoneFromAnyThread();
    3796           0 :         MOZ_ASSERT(thingZone == zone || thingZone->isAtomsZone());
    3797             :     }
    3798           0 : }
    3799             : 
    3800             : void
    3801           0 : GCRuntime::checkForCompartmentMismatches()
    3802             : {
    3803           0 :     if (TlsContext.get()->disableStrictProxyCheckingCount)
    3804           0 :         return;
    3805             : 
    3806           0 :     CompartmentCheckTracer trc(rt);
    3807           0 :     AutoAssertEmptyNursery empty(TlsContext.get());
    3808           0 :     for (ZonesIter zone(rt, SkipAtoms); !zone.done(); zone.next()) {
    3809           0 :         trc.zone = zone;
    3810           0 :         for (auto thingKind : AllAllocKinds()) {
    3811           0 :             for (auto i = zone->cellIter<TenuredCell>(thingKind, empty); !i.done(); i.next()) {
    3812           0 :                 trc.src = i.getCell();
    3813           0 :                 trc.srcKind = MapAllocToTraceKind(thingKind);
    3814           0 :                 trc.compartment = DispatchTraceKindTyped(MaybeCompartmentFunctor(),
    3815           0 :                                                          trc.src, trc.srcKind);
    3816           0 :                 js::TraceChildren(&trc, trc.src, trc.srcKind);
    3817             :             }
    3818             :         }
    3819             :     }
    3820             : }
    3821             : #endif
    3822             : 
    3823             : static void
    3824           0 : RelazifyFunctions(Zone* zone, AllocKind kind)
    3825             : {
    3826           0 :     MOZ_ASSERT(kind == AllocKind::FUNCTION ||
    3827             :                kind == AllocKind::FUNCTION_EXTENDED);
    3828             : 
    3829           0 :     AutoAssertEmptyNursery empty(TlsContext.get());
    3830             : 
    3831           0 :     JSRuntime* rt = zone->runtimeFromActiveCooperatingThread();
    3832           0 :     for (auto i = zone->cellIter<JSObject>(kind, empty); !i.done(); i.next()) {
    3833           0 :         JSFunction* fun = &i->as<JSFunction>();
    3834           0 :         if (fun->hasScript())
    3835           0 :             fun->maybeRelazify(rt);
    3836             :     }
    3837           0 : }
    3838             : 
    3839             : static bool
    3840          16 : ShouldCollectZone(Zone* zone, JS::gcreason::Reason reason)
    3841             : {
    3842             :     // Normally we collect all scheduled zones.
    3843          16 :     if (reason != JS::gcreason::COMPARTMENT_REVIVED)
    3844          16 :         return zone->isGCScheduled();
    3845             : 
    3846             :     // If we are repeating a GC because we noticed dead compartments haven't
    3847             :     // been collected, then only collect zones containing those compartments.
    3848           0 :     for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
    3849           0 :         if (comp->scheduledForDestruction)
    3850           0 :             return true;
    3851             :     }
    3852             : 
    3853           0 :     return false;
    3854             : }
    3855             : 
    3856             : bool
    3857           1 : GCRuntime::prepareZonesForCollection(JS::gcreason::Reason reason, bool* isFullOut,
    3858             :                                      AutoLockForExclusiveAccess& lock)
    3859             : {
    3860             : #ifdef DEBUG
    3861             :     /* Assert that zone state is as we expect */
    3862          17 :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    3863          16 :         MOZ_ASSERT(!zone->isCollecting());
    3864          16 :         MOZ_ASSERT(!zone->compartments().empty());
    3865         480 :         for (auto i : AllAllocKinds())
    3866         464 :             MOZ_ASSERT(!zone->arenas.arenaListsToSweep(i));
    3867             :     }
    3868             : #endif
    3869             : 
    3870           1 :     *isFullOut = true;
    3871           1 :     bool any = false;
    3872             : 
    3873           1 :     int64_t currentTime = PRMJ_Now();
    3874             : 
    3875          17 :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    3876             :         /* Set up which zones will be collected. */
    3877          16 :         if (ShouldCollectZone(zone, reason)) {
    3878          16 :             if (!zone->isAtomsZone()) {
    3879          15 :                 any = true;
    3880          15 :                 zone->setGCState(Zone::Mark);
    3881             :             }
    3882             :         } else {
    3883           0 :             *isFullOut = false;
    3884             :         }
    3885             : 
    3886          16 :         zone->setPreservingCode(false);
    3887             :     }
    3888             : 
    3889             :     // Discard JIT code more aggressively if the process is approaching its
    3890             :     // executable code limit.
    3891           1 :     bool canAllocateMoreCode = jit::CanLikelyAllocateMoreExecutableMemory();
    3892             : 
    3893         224 :     for (CompartmentsIter c(rt, WithAtoms); !c.done(); c.next()) {
    3894         223 :         c->marked = false;
    3895         223 :         c->scheduledForDestruction = false;
    3896         223 :         c->maybeAlive = c->hasBeenEntered() || !c->zone()->isGCScheduled();
    3897         223 :         if (shouldPreserveJITCode(c, currentTime, reason, canAllocateMoreCode))
    3898           0 :             c->zone()->setPreservingCode(true);
    3899             :     }
    3900             : 
    3901           1 :     if (!cleanUpEverything && canAllocateMoreCode) {
    3902           1 :         jit::JitActivationIterator activation(TlsContext.get());
    3903           1 :         if (!activation.done())
    3904           0 :             activation->compartment()->zone()->setPreservingCode(true);
    3905             :     }
    3906             : 
    3907             :     /*
    3908             :      * If keepAtoms() is true then either an instance of AutoKeepAtoms is
    3909             :      * currently on the stack or parsing is currently happening on another
    3910             :      * thread. In either case we don't have information about which atoms are
    3911             :      * roots, so we must skip collecting atoms.
    3912             :      *
    3913             :      * Note that this only affects the first slice of an incremental GC since root
    3914             :      * marking is completed before we return to the mutator.
    3915             :      *
    3916             :      * Off-thread parsing is inhibited after the start of GC which prevents
    3917             :      * races between creating atoms during parsing and sweeping atoms on the
    3918             :      * active thread.
    3919             :      *
    3920             :      * Otherwise, we always schedule a GC in the atoms zone so that atoms which
    3921             :      * the other collected zones are using are marked, and we can update the
    3922             :      * set of atoms in use by the other collected zones at the end of the GC.
    3923             :      */
    3924           1 :     if (!TlsContext.get()->keepAtoms || rt->hasHelperThreadZones()) {
    3925           1 :         Zone* atomsZone = rt->atomsCompartment(lock)->zone();
    3926           1 :         if (atomsZone->isGCScheduled()) {
    3927           1 :             MOZ_ASSERT(!atomsZone->isCollecting());
    3928           1 :             atomsZone->setGCState(Zone::Mark);
    3929           1 :             any = true;
    3930             :         }
    3931             :     }
    3932             : 
    3933             :     /* Check that at least one zone is scheduled for collection. */
    3934           1 :     return any;
    3935             : }
    3936             : 
    3937             : static void
    3938           1 : DiscardJITCodeForIncrementalGC(JSRuntime* rt)
    3939             : {
    3940           1 :     js::CancelOffThreadIonCompile(rt, JS::Zone::Mark);
    3941          17 :     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
    3942          32 :         gcstats::AutoPhase ap(rt->gc.stats(), gcstats::PhaseKind::MARK_DISCARD_CODE);
    3943          16 :         zone->discardJitCode(rt->defaultFreeOp());
    3944             :     }
    3945           1 : }
    3946             : 
    3947             : static void
    3948           0 : RelazifyFunctionsForShrinkingGC(JSRuntime* rt)
    3949             : {
    3950           0 :     gcstats::AutoPhase ap(rt->gc.stats(), gcstats::PhaseKind::RELAZIFY_FUNCTIONS);
    3951           0 :     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
    3952           0 :         if (zone->isSelfHostingZone())
    3953           0 :             continue;
    3954           0 :         RelazifyFunctions(zone, AllocKind::FUNCTION);
    3955           0 :         RelazifyFunctions(zone, AllocKind::FUNCTION_EXTENDED);
    3956             :     }
    3957           0 : }
    3958             : 
    3959             : static void
    3960           0 : PurgeShapeTablesForShrinkingGC(JSRuntime* rt)
    3961             : {
    3962           0 :     gcstats::AutoPhase ap(rt->gc.stats(), gcstats::PhaseKind::PURGE_SHAPE_TABLES);
    3963           0 :     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
    3964           0 :         if (zone->keepShapeTables() || zone->isSelfHostingZone())
    3965           0 :             continue;
    3966           0 :         for (auto baseShape = zone->cellIter<BaseShape>(); !baseShape.done(); baseShape.next())
    3967           0 :             baseShape->maybePurgeTable();
    3968             :     }
    3969           0 : }
    3970             : 
    3971             : static void
    3972           1 : UnmarkCollectedZones(JSRuntime* rt)
    3973             : {
    3974          17 :     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
    3975             :         /* Unmark everything in the zones being collected. */
    3976          16 :         zone->arenas.unmarkAll();
    3977             :     }
    3978             : 
    3979          17 :     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
    3980             :         /* Unmark all weak maps in the zones being collected. */
    3981          16 :         WeakMapBase::unmarkZone(zone);
    3982             :     }
    3983           1 : }
    3984             : 
    3985             : static void
    3986           1 : BufferGrayRoots(JSRuntime* rt)
    3987             : {
    3988           1 :     rt->gc.bufferGrayRoots();
    3989           1 : }
    3990             : 
    3991             : bool
    3992           1 : GCRuntime::beginMarkPhase(JS::gcreason::Reason reason, AutoLockForExclusiveAccess& lock)
    3993             : {
    3994             : #ifdef DEBUG
    3995           1 :     if (fullCompartmentChecks)
    3996           0 :         checkForCompartmentMismatches();
    3997             : #endif
    3998             : 
    3999           1 :     if (!prepareZonesForCollection(reason, &isFull.ref(), lock))
    4000           0 :         return false;
    4001             : 
    4002             :     /*
    4003             :      * Ensure that after the start of a collection we don't allocate into any
    4004             :      * existing arenas, as this can cause unreachable things to be marked.
    4005             :      */
    4006           1 :     if (isIncremental) {
    4007          17 :         for (GCZonesIter zone(rt); !zone.done(); zone.next())
    4008          16 :             zone->arenas.prepareForIncrementalGC();
    4009             :     }
    4010             : 
    4011           1 :     MemProfiler::MarkTenuredStart(rt);
    4012           1 :     marker.start();
    4013           1 :     GCMarker* gcmarker = &marker;
    4014             : 
    4015             :     {
    4016           2 :         gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::PREPARE);
    4017           2 :         AutoLockHelperThreadState helperLock;
    4018             : 
    4019             :         /*
    4020             :          * Clear all mark state for the zones we are collecting. This is linear
    4021             :          * in the size of the heap we are collecting and so can be slow. Do this
    4022             :          * in parallel with the rest of this block.
    4023             :          */
    4024             :         AutoRunParallelTask
    4025           2 :             unmarkCollectedZones(rt, UnmarkCollectedZones, gcstats::PhaseKind::UNMARK, helperLock);
    4026             : 
    4027             :         /*
    4028             :          * Buffer gray roots for incremental collections. This is linear in the
    4029             :          * number of roots which can be in the tens of thousands. Do this in
    4030             :          * parallel with the rest of this block.
    4031             :          */
    4032           2 :         Maybe<AutoRunParallelTask> bufferGrayRoots;
    4033           1 :         if (isIncremental)
    4034           1 :             bufferGrayRoots.emplace(rt, BufferGrayRoots, gcstats::PhaseKind::BUFFER_GRAY_ROOTS, helperLock);
    4035           2 :         AutoUnlockHelperThreadState unlock(helperLock);
    4036             : 
    4037             :         /*
    4038             :          * Discard JIT code for incremental collections (for non-incremental
    4039             :          * collections the following sweep discards the jit code).
    4040             :          */
    4041           1 :         if (isIncremental)
    4042           1 :             DiscardJITCodeForIncrementalGC(rt);
    4043             : 
    4044             :         /*
    4045             :          * Relazify functions after discarding JIT code (we can't relazify
    4046             :          * functions with JIT code) and before the actual mark phase, so that
    4047             :          * the current GC can collect the JSScripts we're unlinking here.  We do
    4048             :          * this only when we're performing a shrinking GC, as too much
    4049             :          * relazification can cause performance issues when we have to reparse
    4050             :          * the same functions over and over.
    4051             :          */
    4052           1 :         if (invocationKind == GC_SHRINK) {
    4053           0 :             RelazifyFunctionsForShrinkingGC(rt);
    4054           0 :             PurgeShapeTablesForShrinkingGC(rt);
    4055             :         }
    4056             : 
    4057             :         /*
    4058             :          * We must purge the runtime at the beginning of an incremental GC. The
    4059             :          * danger if we purge later is that the snapshot invariant of
    4060             :          * incremental GC will be broken, as follows. If some object is
    4061             :          * reachable only through some cache (say the dtoaCache) then it will
    4062             :          * not be part of the snapshot.  If we purge after root marking, then
    4063             :          * the mutator could obtain a pointer to the object and start using
    4064             :          * it. This object might never be marked, so a GC hazard would exist.
    4065             :          */
    4066           1 :         purgeRuntime(lock);
    4067             :     }
    4068             : 
    4069             :     /*
    4070             :      * Mark phase.
    4071             :      */
    4072           2 :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::MARK);
    4073           1 :     traceRuntimeForMajorGC(gcmarker, lock);
    4074             : 
    4075           1 :     if (isIncremental)
    4076           1 :         markCompartments();
    4077             : 
    4078             :     /*
    4079             :      * Process any queued source compressions during the start of a major
    4080             :      * GC.
    4081             :      */
    4082             :     {
    4083           2 :         AutoLockHelperThreadState helperLock;
    4084           1 :         HelperThreadState().startHandlingCompressionTasks(helperLock);
    4085             :     }
    4086             : 
    4087           1 :     return true;
    4088             : }
    4089             : 
    4090             : void
    4091           1 : GCRuntime::markCompartments()
    4092             : {
    4093           2 :     gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::MARK_ROOTS);
    4094           2 :     gcstats::AutoPhase ap2(stats(), gcstats::PhaseKind::MARK_COMPARTMENTS);
    4095             : 
    4096             :     /*
    4097             :      * This code ensures that if a compartment is "dead", then it will be
    4098             :      * collected in this GC. A compartment is considered dead if its maybeAlive
    4099             :      * flag is false. The maybeAlive flag is set if:
    4100             :      *
    4101             :      *   (1) the compartment has been entered (set in beginMarkPhase() above)
    4102             :      *   (2) the compartment is not being collected (set in beginMarkPhase()
    4103             :      *       above)
    4104             :      *   (3) an object in the compartment was marked during root marking, either
    4105             :      *       as a black root or a gray root (set in RootMarking.cpp), or
    4106             :      *   (4) the compartment has incoming cross-compartment edges from another
    4107             :      *       compartment that has maybeAlive set (set by this method).
    4108             :      *
    4109             :      * If maybeAlive is false, then we set the scheduledForDestruction flag.
    4110             :      * At the end of the GC, we look for compartments where
    4111             :      * scheduledForDestruction is true. These are compartments that were somehow
    4112             :      * "revived" during the incremental GC. If any are found, we do a special,
    4113             :      * non-incremental GC of those compartments to try to collect them.
    4114             :      *
    4115             :      * Compartments can be revived for a variety of reasons. One reason is bug
    4116             :      * 811587, where a reflector that was dead can be revived by DOM code that
    4117             :      * still refers to the underlying DOM node.
    4118             :      *
    4119             :      * Read barriers and allocations can also cause revival. This might happen
    4120             :      * during a function like JS_TransplantObject, which iterates over all
    4121             :      * compartments, live or dead, and operates on their objects. See bug 803376
    4122             :      * for details on this problem. To avoid the problem, we try to avoid
    4123             :      * allocation and read barriers during JS_TransplantObject and the like.
    4124             :      */
    4125             : 
    4126             :     /* Propagate the maybeAlive flag via cross-compartment edges. */
    4127             : 
    4128           2 :     Vector<JSCompartment*, 0, js::SystemAllocPolicy> workList;
    4129             : 
    4130         223 :     for (CompartmentsIter comp(rt, SkipAtoms); !comp.done(); comp.next()) {
    4131         222 :         if (comp->maybeAlive) {
    4132         210 :             if (!workList.append(comp))
    4133           0 :                 return;
    4134             :         }
    4135             :     }
    4136             : 
    4137         423 :     while (!workList.empty()) {
    4138         211 :         JSCompartment* comp = workList.popCopy();
    4139        6853 :         for (JSCompartment::NonStringWrapperEnum e(comp); !e.empty(); e.popFront()) {
    4140        6642 :             JSCompartment* dest = e.front().mutableKey().compartment();
    4141        6642 :             if (dest && !dest->maybeAlive) {
    4142           1 :                 dest->maybeAlive = true;
    4143           1 :                 if (!workList.append(dest))
    4144           0 :                     return;
    4145             :             }
    4146             :         }
    4147             :     }
    4148             : 
    4149             :     /* Set scheduledForDestruction based on maybeAlive. */
    4150             : 
    4151         224 :     for (GCCompartmentsIter comp(rt); !comp.done(); comp.next()) {
    4152         223 :         MOZ_ASSERT(!comp->scheduledForDestruction);
    4153         223 :         if (!comp->maybeAlive && !rt->isAtomsCompartment(comp))
    4154          11 :             comp->scheduledForDestruction = true;
    4155             :     }
    4156             : }
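                     : 
                     : // The propagation above is a standard worklist flood-fill over the
                     : // cross-compartment edge graph: seed with everything initially alive, then
                     : // mark-and-enqueue each newly reached node exactly once. Generic sketch
                     : // (Node and its fields are illustrative):
                     : //
                     : //   std::vector<Node*> work;
                     : //   for (Node* n : allNodes)
                     : //       if (n->alive)
                     : //           work.push_back(n);
                     : //   while (!work.empty()) {
                     : //       Node* n = work.back();
                     : //       work.pop_back();
                     : //       for (Node* succ : n->edges) {
                     : //           if (!succ->alive) {
                     : //               succ->alive = true;       // set before enqueueing, so each
                     : //               work.push_back(succ);     // node is pushed at most once
                     : //           }
                     : //       }
                     : //   }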
    4157             : 
    4158             : template <class ZoneIterT>
    4159             : void
    4160           0 : GCRuntime::markWeakReferences(gcstats::PhaseKind phase)
    4161             : {
    4162           0 :     MOZ_ASSERT(marker.isDrained());
    4163             : 
    4164           0 :     gcstats::AutoPhase ap1(stats(), phase);
    4165             : 
    4166           0 :     marker.enterWeakMarkingMode();
    4167             : 
    4168             :     // TODO bug 1167452: Make weak marking incremental
    4169           0 :     auto unlimited = SliceBudget::unlimited();
    4170           0 :     MOZ_RELEASE_ASSERT(marker.drainMarkStack(unlimited));
    4171             : 
    4172           0 :     for (;;) {
    4173           0 :         bool markedAny = false;
    4174           0 :         if (!marker.isWeakMarkingTracer()) {
    4175           0 :             for (ZoneIterT zone(rt); !zone.done(); zone.next())
    4176           0 :                 markedAny |= WeakMapBase::markZoneIteratively(zone, &marker);
    4177             :         }
    4178           0 :         for (CompartmentsIterT<ZoneIterT> c(rt); !c.done(); c.next()) {
    4179           0 :             if (c->watchpointMap)
    4180           0 :                 markedAny |= c->watchpointMap->markIteratively(&marker);
    4181             :         }
    4182           0 :         markedAny |= Debugger::markIteratively(&marker);
    4183           0 :         markedAny |= jit::JitRuntime::MarkJitcodeGlobalTableIteratively(&marker);
    4184             : 
    4185           0 :         if (!markedAny)
    4186           0 :             break;
    4187             : 
    4188           0 :         auto unlimited = SliceBudget::unlimited();
    4189           0 :         MOZ_RELEASE_ASSERT(marker.drainMarkStack(unlimited));
    4190             :     }
    4191           0 :     MOZ_ASSERT(marker.isDrained());
    4192             : 
    4193           0 :     marker.leaveWeakMarkingMode();
    4194           0 : }
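
/*
 * Illustrative sketch (not part of the original source) of the fixed-point
 * loop above: invoke every weak-marking source repeatedly and stop only on
 * an iteration in which nothing new was marked. The callbacks are
 * hypothetical stand-ins for markZoneIteratively, markIteratively and
 * friends, each returning true if it marked something new.
 */
#include <functional>
#include <vector>

static void MarkToFixedPoint(const std::vector<std::function<bool()>>& markers)
{
    bool markedAny;
    do {
        markedAny = false;
        for (const auto& mark : markers)
            markedAny |= mark();
    } while (markedAny);
}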
    4195             : 
    4196             : void
    4197           0 : GCRuntime::markWeakReferencesInCurrentGroup(gcstats::PhaseKind phase)
    4198             : {
    4199           0 :     markWeakReferences<GCSweepGroupIter>(phase);
    4200           0 : }
    4201             : 
    4202             : template <class ZoneIterT, class CompartmentIterT>
    4203             : void
    4204           0 : GCRuntime::markGrayReferences(gcstats::PhaseKind phase)
    4205             : {
    4206           0 :     gcstats::AutoPhase ap(stats(), phase);
    4207           0 :     if (hasBufferedGrayRoots()) {
    4208           0 :         for (ZoneIterT zone(rt); !zone.done(); zone.next())
    4209           0 :             markBufferedGrayRoots(zone);
    4210             :     } else {
    4211           0 :         MOZ_ASSERT(!isIncremental);
    4212           0 :         if (JSTraceDataOp op = grayRootTracer.op)
    4213           0 :             (*op)(&marker, grayRootTracer.data);
    4214             :     }
    4215           0 :     auto unlimited = SliceBudget::unlimited();
    4216           0 :     MOZ_RELEASE_ASSERT(marker.drainMarkStack(unlimited));
    4217           0 : }
    4218             : 
    4219             : void
    4220           0 : GCRuntime::markGrayReferencesInCurrentGroup(gcstats::PhaseKind phase)
    4221             : {
    4222           0 :     markGrayReferences<GCSweepGroupIter, GCCompartmentGroupIter>(phase);
    4223           0 : }
    4224             : 
    4225             : void
    4226           0 : GCRuntime::markAllWeakReferences(gcstats::PhaseKind phase)
    4227             : {
    4228           0 :     markWeakReferences<GCZonesIter>(phase);
    4229           0 : }
    4230             : 
    4231             : void
    4232           0 : GCRuntime::markAllGrayReferences(gcstats::PhaseKind phase)
    4233             : {
    4234           0 :     markGrayReferences<GCZonesIter, GCCompartmentsIter>(phase);
    4235           0 : }
    4236             : 
    4237             : #ifdef JS_GC_ZEAL
    4238             : 
    4239             : struct GCChunkHasher {
    4240             :     typedef gc::Chunk* Lookup;
    4241             : 
    4242             :     /*
    4243             :      * Strip the always-zero low bits (chunks are chunk-size aligned) for
    4244             :      * better distribution after the hash table multiplies by the golden ratio.
    4245             :      */
    4246           0 :     static HashNumber hash(gc::Chunk* chunk) {
    4247           0 :         MOZ_ASSERT(!(uintptr_t(chunk) & gc::ChunkMask));
    4248           0 :         return HashNumber(uintptr_t(chunk) >> gc::ChunkShift);
    4249             :     }
    4250             : 
    4251           0 :     static bool match(gc::Chunk* k, gc::Chunk* l) {
    4252           0 :         MOZ_ASSERT(!(uintptr_t(k) & gc::ChunkMask));
    4253           0 :         MOZ_ASSERT(!(uintptr_t(l) & gc::ChunkMask));
    4254           0 :         return k == l;
    4255             :     }
    4256             : };
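
/*
 * Illustrative sketch (not part of the original source): chunk pointers are
 * chunk-size aligned, so their low bits are always zero and carry no entropy.
 * Shifting those bits away gives the hash table's scrambling step distinct
 * inputs for consecutive chunks. The 20-bit shift below is a hypothetical
 * value standing in for gc::ChunkShift.
 */
#include <cassert>
#include <cstdint>

static uint32_t ExampleChunkHash(uintptr_t chunkAddr)
{
    const uintptr_t kChunkShift = 20;
    assert((chunkAddr & ((uintptr_t(1) << kChunkShift) - 1)) == 0); // aligned
    return uint32_t(chunkAddr >> kChunkShift); // drop the constant-zero bits
}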
    4257             : 
    4258             : class js::gc::MarkingValidator
    4259             : {
    4260             :   public:
    4261             :     explicit MarkingValidator(GCRuntime* gc);
    4262             :     ~MarkingValidator();
    4263             :     void nonIncrementalMark(AutoLockForExclusiveAccess& lock);
    4264             :     void validate();
    4265             : 
    4266             :   private:
    4267             :     GCRuntime* gc;
    4268             :     bool initialized;
    4269             : 
    4270             :     typedef HashMap<Chunk*, ChunkBitmap*, GCChunkHasher, SystemAllocPolicy> BitmapMap;
    4271             :     BitmapMap map;
    4272             : };
    4273             : 
    4274           0 : js::gc::MarkingValidator::MarkingValidator(GCRuntime* gc)
    4275             :   : gc(gc),
    4276           0 :     initialized(false)
    4277           0 : {}
    4278             : 
    4279           0 : js::gc::MarkingValidator::~MarkingValidator()
    4280             : {
    4281           0 :     if (!map.initialized())
    4282           0 :         return;
    4283             : 
    4284           0 :     for (BitmapMap::Range r(map.all()); !r.empty(); r.popFront())
    4285           0 :         js_delete(r.front().value());
    4286           0 : }
    4287             : 
    4288             : void
    4289           0 : js::gc::MarkingValidator::nonIncrementalMark(AutoLockForExclusiveAccess& lock)
    4290             : {
    4291             :     /*
    4292             :      * Perform a non-incremental mark for all collecting zones and record
    4293             :      * the results for later comparison.
    4294             :      *
    4295             :      * Currently this does not validate gray marking.
    4296             :      */
    4297             : 
    4298           0 :     if (!map.init())
    4299           0 :         return;
    4300             : 
    4301           0 :     JSRuntime* runtime = gc->rt;
    4302           0 :     GCMarker* gcmarker = &gc->marker;
    4303             : 
    4304           0 :     gc->waitBackgroundSweepEnd();
    4305             : 
    4306             :     /* Save existing mark bits. */
    4307           0 :     for (auto chunk = gc->allNonEmptyChunks(); !chunk.done(); chunk.next()) {
    4308           0 :         ChunkBitmap* bitmap = &chunk->bitmap;
    4309           0 :         ChunkBitmap* entry = js_new<ChunkBitmap>();
    4310           0 :         if (!entry)
    4311           0 :             return;
    4312             : 
    4313           0 :         memcpy((void*)entry->bitmap, (void*)bitmap->bitmap, sizeof(bitmap->bitmap));
    4314           0 :         if (!map.putNew(chunk, entry))
    4315           0 :             return;
    4316             :     }
    4317             : 
    4318             :     /*
    4319             :      * Temporarily clear the weakmaps' mark flags for the compartments we are
    4320             :      * collecting.
    4321             :      */
    4322             : 
    4323           0 :     WeakMapSet markedWeakMaps;
    4324           0 :     if (!markedWeakMaps.init())
    4325           0 :         return;
    4326             : 
    4327             :     /*
    4328             :      * For saving, smush all of the keys into one big table and split them back
    4329             :      * up into per-zone tables when restoring.
    4330             :      */
    4331           0 :     gc::WeakKeyTable savedWeakKeys(SystemAllocPolicy(), runtime->randomHashCodeScrambler());
    4332           0 :     if (!savedWeakKeys.init())
    4333           0 :         return;
    4334             : 
    4335           0 :     for (GCZonesIter zone(runtime); !zone.done(); zone.next()) {
    4336           0 :         if (!WeakMapBase::saveZoneMarkedWeakMaps(zone, markedWeakMaps))
    4337           0 :             return;
    4338             : 
    4339           0 :         AutoEnterOOMUnsafeRegion oomUnsafe;
    4340           0 :         for (gc::WeakKeyTable::Range r = zone->gcWeakKeys().all(); !r.empty(); r.popFront()) {
    4341           0 :             if (!savedWeakKeys.put(Move(r.front().key), Move(r.front().value)))
    4342           0 :                 oomUnsafe.crash("saving weak keys table for validator");
    4343             :         }
    4344             : 
    4345           0 :         if (!zone->gcWeakKeys().clear())
    4346           0 :             oomUnsafe.crash("clearing weak keys table for validator");
    4347             :     }
    4348             : 
    4349             :     /*
    4350             :      * After this point, the function should run to completion, so we shouldn't
    4351             :      * do anything fallible.
    4352             :      */
    4353           0 :     initialized = true;
    4354             : 
    4355             :     /* Re-do all the marking, but non-incrementally. */
    4356           0 :     js::gc::State state = gc->incrementalState;
    4357           0 :     gc->incrementalState = State::MarkRoots;
    4358             : 
    4359             :     {
    4360           0 :         gcstats::AutoPhase ap(gc->stats(), gcstats::PhaseKind::PREPARE);
    4361             : 
    4362             :         {
    4363           0 :             gcstats::AutoPhase ap(gc->stats(), gcstats::PhaseKind::UNMARK);
    4364             : 
    4365           0 :             for (GCZonesIter zone(runtime); !zone.done(); zone.next())
    4366           0 :                 WeakMapBase::unmarkZone(zone);
    4367             : 
    4368           0 :             MOZ_ASSERT(gcmarker->isDrained());
    4369           0 :             gcmarker->reset();
    4370             : 
    4371           0 :             for (auto chunk = gc->allNonEmptyChunks(); !chunk.done(); chunk.next())
    4372           0 :                 chunk->bitmap.clear();
    4373             :         }
    4374             :     }
    4375             : 
    4376             :     {
    4377           0 :         gcstats::AutoPhase ap(gc->stats(), gcstats::PhaseKind::MARK);
    4378             : 
    4379           0 :         gc->traceRuntimeForMajorGC(gcmarker, lock);
    4380             : 
    4381           0 :         gc->incrementalState = State::Mark;
    4382           0 :         auto unlimited = SliceBudget::unlimited();
    4383           0 :         MOZ_RELEASE_ASSERT(gc->marker.drainMarkStack(unlimited));
    4384             :     }
    4385             : 
    4386           0 :     gc->incrementalState = State::Sweep;
    4387             :     {
    4388           0 :         gcstats::AutoPhase ap1(gc->stats(), gcstats::PhaseKind::SWEEP);
    4389           0 :         gcstats::AutoPhase ap2(gc->stats(), gcstats::PhaseKind::SWEEP_MARK);
    4390             : 
    4391           0 :         gc->markAllWeakReferences(gcstats::PhaseKind::SWEEP_MARK_WEAK);
    4392             : 
    4393             :         /* Update zone state for gray marking. */
    4394           0 :         for (GCZonesIter zone(runtime); !zone.done(); zone.next()) {
    4395           0 :             MOZ_ASSERT(zone->isGCMarkingBlack());
    4396           0 :             zone->setGCState(Zone::MarkGray);
    4397             :         }
    4398           0 :         gc->marker.setMarkColorGray();
    4399             : 
    4400           0 :         gc->markAllGrayReferences(gcstats::PhaseKind::SWEEP_MARK_GRAY);
    4401           0 :         gc->markAllWeakReferences(gcstats::PhaseKind::SWEEP_MARK_GRAY_WEAK);
    4402             : 
    4403             :         /* Restore zone state. */
    4404           0 :         for (GCZonesIter zone(runtime); !zone.done(); zone.next()) {
    4405           0 :             MOZ_ASSERT(zone->isGCMarkingGray());
    4406           0 :             zone->setGCState(Zone::Mark);
    4407             :         }
    4408           0 :         MOZ_ASSERT(gc->marker.isDrained());
    4409           0 :         gc->marker.setMarkColorBlack();
    4410             :     }
    4411             : 
    4412             :     /* Take a copy of the non-incremental mark state and restore the original. */
    4413           0 :     for (auto chunk = gc->allNonEmptyChunks(); !chunk.done(); chunk.next()) {
    4414           0 :         ChunkBitmap* bitmap = &chunk->bitmap;
    4415           0 :         ChunkBitmap* entry = map.lookup(chunk)->value();
    4416           0 :         Swap(*entry, *bitmap);
    4417             :     }
    4418             : 
    4419           0 :     for (GCZonesIter zone(runtime); !zone.done(); zone.next()) {
    4420           0 :         WeakMapBase::unmarkZone(zone);
    4421           0 :         AutoEnterOOMUnsafeRegion oomUnsafe;
    4422           0 :         if (!zone->gcWeakKeys().clear())
    4423           0 :             oomUnsafe.crash("clearing weak keys table for validator");
    4424             :     }
    4425             : 
    4426           0 :     WeakMapBase::restoreMarkedWeakMaps(markedWeakMaps);
    4427             : 
    4428           0 :     for (gc::WeakKeyTable::Range r = savedWeakKeys.all(); !r.empty(); r.popFront()) {
    4429           0 :         AutoEnterOOMUnsafeRegion oomUnsafe;
    4430           0 :         Zone* zone = gc::TenuredCell::fromPointer(r.front().key.asCell())->zone();
    4431           0 :         if (!zone->gcWeakKeys().put(Move(r.front().key), Move(r.front().value)))
    4432           0 :             oomUnsafe.crash("restoring weak keys table for validator");
    4433             :     }
    4434             : 
    4435           0 :     gc->incrementalState = state;
    4436             : }
    4437             : 
    4438             : void
    4439           0 : js::gc::MarkingValidator::validate()
    4440             : {
    4441             :     /*
    4442             :      * Validates the incremental marking for the collecting zones by comparing
    4443             :      * the mark bits to those previously recorded for a non-incremental mark.
    4444             :      */
    4445             : 
    4446           0 :     if (!initialized)
    4447           0 :         return;
    4448             : 
    4449           0 :     gc->waitBackgroundSweepEnd();
    4450             : 
    4451           0 :     for (auto chunk = gc->allNonEmptyChunks(); !chunk.done(); chunk.next()) {
    4452           0 :         BitmapMap::Ptr ptr = map.lookup(chunk);
    4453           0 :         if (!ptr)
    4454           0 :             continue;  /* Allocated after we did the non-incremental mark. */
    4455             : 
    4456           0 :         ChunkBitmap* bitmap = ptr->value();
    4457           0 :         ChunkBitmap* incBitmap = &chunk->bitmap;
    4458             : 
    4459           0 :         for (size_t i = 0; i < ArenasPerChunk; i++) {
    4460           0 :             if (chunk->decommittedArenas.get(i))
    4461           0 :                 continue;
    4462           0 :             Arena* arena = &chunk->arenas[i];
    4463           0 :             if (!arena->allocated())
    4464           0 :                 continue;
    4465           0 :             if (!arena->zone->isGCSweeping())
    4466           0 :                 continue;
    4467           0 :             if (arena->allocatedDuringIncremental)
    4468           0 :                 continue;
    4469             : 
    4470           0 :             AllocKind kind = arena->getAllocKind();
    4471           0 :             uintptr_t thing = arena->thingsStart();
    4472           0 :             uintptr_t end = arena->thingsEnd();
    4473           0 :             while (thing < end) {
    4474           0 :                 auto cell = reinterpret_cast<TenuredCell*>(thing);
    4475             : 
    4476             :                 /*
    4477             :                  * If a non-incremental GC wouldn't have collected a cell, then
    4478             :                  * an incremental GC won't collect it.
    4479             :                  */
    4480           0 :                 if (bitmap->isMarkedAny(cell))
    4481           0 :                     MOZ_RELEASE_ASSERT(incBitmap->isMarkedAny(cell));
    4482             : 
    4483             :                 /*
    4484             :                  * If the cycle collector isn't allowed to collect an object
    4485             :                  * after a non-incremental GC has run, then it isn't allowed to
    4486             :                  * collect it after an incremental GC.
    4487             :                  */
    4488           0 :                 if (!bitmap->isMarkedGray(cell))
    4489           0 :                     MOZ_RELEASE_ASSERT(!incBitmap->isMarkedGray(cell));
    4490             : 
    4491           0 :                 thing += Arena::thingSize(kind);
    4492             :             }
    4493             :         }
    4494             :     }
    4495             : }
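
/*
 * Illustrative sketch (not part of the original source) of the invariants
 * checked above, restated on plain bitsets: everything marked by the
 * non-incremental reference run must also be marked by the incremental run,
 * and everything left gray by the incremental run must be gray in the
 * reference run.
 */
#include <bitset>
#include <cassert>
#include <cstddef>

template <std::size_t N>
static void CheckMarkingInvariants(const std::bitset<N>& refAny,
                                   const std::bitset<N>& incAny,
                                   const std::bitset<N>& refGray,
                                   const std::bitset<N>& incGray)
{
    assert((refAny & ~incAny).none());   // reference marking is a subset
    assert((incGray & ~refGray).none()); // incremental gray is a subset
}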
    4496             : 
    4497             : #endif // JS_GC_ZEAL
    4498             : 
    4499             : void
    4500           0 : GCRuntime::computeNonIncrementalMarkingForValidation(AutoLockForExclusiveAccess& lock)
    4501             : {
    4502             : #ifdef JS_GC_ZEAL
    4503           0 :     MOZ_ASSERT(!markingValidator);
    4504           0 :     if (isIncremental && hasZealMode(ZealMode::IncrementalMarkingValidator))
    4505           0 :         markingValidator = js_new<MarkingValidator>(this);
    4506           0 :     if (markingValidator)
    4507           0 :         markingValidator->nonIncrementalMark(lock);
    4508             : #endif
    4509           0 : }
    4510             : 
    4511             : void
    4512           0 : GCRuntime::validateIncrementalMarking()
    4513             : {
    4514             : #ifdef JS_GC_ZEAL
    4515           0 :     if (markingValidator)
    4516           0 :         markingValidator->validate();
    4517             : #endif
    4518           0 : }
    4519             : 
    4520             : void
    4521           0 : GCRuntime::finishMarkingValidation()
    4522             : {
    4523             : #ifdef JS_GC_ZEAL
    4524           0 :     js_delete(markingValidator.ref());
    4525           0 :     markingValidator = nullptr;
    4526             : #endif
    4527           0 : }
    4528             : 
    4529             : static void
    4530           0 : DropStringWrappers(JSRuntime* rt)
    4531             : {
    4532             :     /*
    4533             :      * String "wrappers" are dropped on GC because their presence would require
    4534             :      * us to sweep the wrappers in all compartments every time we sweep a
    4535             :      * compartment group.
    4536             :      */
    4537           0 :     for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) {
    4538           0 :         for (JSCompartment::StringWrapperEnum e(c); !e.empty(); e.popFront()) {
    4539           0 :             MOZ_ASSERT(e.front().key().is<JSString*>());
    4540           0 :             e.removeFront();
    4541             :         }
    4542             :     }
    4543           0 : }
    4544             : 
    4545             : /*
    4546             :  * Group zones that must be swept at the same time.
    4547             :  *
    4548             :  * If compartment A has an edge to an unmarked object in compartment B, then we
    4549             :  * must not sweep A in a later slice than we sweep B. That's because a write
    4550             :  * barrier in A could lead to the unmarked object in B becoming marked.
    4551             :  * However, if we had already swept that object, we would be in trouble.
    4552             :  *
    4553             :  * If we consider these dependencies as a graph, then all the compartments in
    4554             :  * any strongly-connected component of this graph must be swept in the same
    4555             :  * slice.
    4556             :  *
    4557             :  * Tarjan's algorithm is used to calculate the components.
    4558             :  */
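
/*
 * Illustrative sketch (not part of the original source): the classic
 * recursive form of Tarjan's algorithm on a small adjacency-list graph. The
 * engine's ZoneComponentFinder implements the same idea but also guards the
 * native stack and can merge everything into one component on demand;
 * TarjanSCC and its members here are hypothetical names.
 */
#include <algorithm>
#include <vector>

struct TarjanSCC {
    const std::vector<std::vector<int>>& adj;
    std::vector<int> index, lowlink, stack;
    std::vector<bool> onStack;
    std::vector<std::vector<int>> components;  // output, one vector per SCC
    int counter = 0;

    explicit TarjanSCC(const std::vector<std::vector<int>>& g)
      : adj(g), index(g.size(), -1), lowlink(g.size(), 0), onStack(g.size(), false)
    {
        for (int v = 0; v < int(g.size()); v++) {
            if (index[v] == -1)
                visit(v);
        }
    }

    void visit(int v) {
        index[v] = lowlink[v] = counter++;
        stack.push_back(v);
        onStack[v] = true;
        for (int w : adj[v]) {
            if (index[w] == -1) {           // tree edge: recurse
                visit(w);
                lowlink[v] = std::min(lowlink[v], lowlink[w]);
            } else if (onStack[w]) {        // back edge within the current SCC
                lowlink[v] = std::min(lowlink[v], index[w]);
            }
        }
        if (lowlink[v] == index[v]) {       // v roots a strongly-connected component
            components.emplace_back();
            int w;
            do {
                w = stack.back();
                stack.pop_back();
                onStack[w] = false;
                components.back().push_back(w);
            } while (w != v);
        }
    }
};
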
    4559             : namespace {
    4560             : struct AddOutgoingEdgeFunctor {
    4561             :     bool needsEdge_;
    4562             :     ZoneComponentFinder& finder_;
    4563             : 
    4564           0 :     AddOutgoingEdgeFunctor(bool needsEdge, ZoneComponentFinder& finder)
    4565           0 :       : needsEdge_(needsEdge), finder_(finder)
    4566           0 :     {}
    4567             : 
    4568             :     template <typename T>
    4569           0 :     void operator()(T tp) {
    4570           0 :         TenuredCell& other = (*tp)->asTenured();
    4571             : 
    4572             :         /*
    4573             :          * Add edge to wrapped object compartment if wrapped object is not
    4574             :          * marked black, to indicate that the wrapper compartment must not
    4575             :          * be swept after the wrapped compartment.
    4576             :          */
    4577           0 :         if (needsEdge_) {
    4578           0 :             JS::Zone* zone = other.zone();
    4579           0 :             if (zone->isGCMarking())
    4580           0 :                 finder_.addEdgeTo(zone);
    4581             :         }
    4582           0 :     }
    4583             : };
    4584             : } // namespace (anonymous)
    4585             : 
    4586             : void
    4587           0 : JSCompartment::findOutgoingEdges(ZoneComponentFinder& finder)
    4588             : {
    4589           0 :     for (js::WrapperMap::Enum e(crossCompartmentWrappers); !e.empty(); e.popFront()) {
    4590           0 :         CrossCompartmentKey& key = e.front().mutableKey();
    4591           0 :         MOZ_ASSERT(!key.is<JSString*>());
    4592           0 :         bool needsEdge = true;
    4593           0 :         if (key.is<JSObject*>()) {
    4594           0 :             TenuredCell& other = key.as<JSObject*>()->asTenured();
    4595           0 :             needsEdge = !other.isMarkedBlack();
    4596             :         }
    4597           0 :         key.applyToWrapped(AddOutgoingEdgeFunctor(needsEdge, finder));
    4598             :     }
    4599           0 : }
    4600             : 
    4601             : void
    4602           0 : Zone::findOutgoingEdges(ZoneComponentFinder& finder)
    4603             : {
    4604             :     /*
    4605             :      * Any compartment may have a pointer to an atom in the atoms
    4606             :      * compartment, and these aren't in the cross compartment map.
    4607             :      */
    4608           0 :     JSRuntime* rt = runtimeFromActiveCooperatingThread();
    4609           0 :     Zone* atomsZone = rt->atomsCompartment(finder.lock)->zone();
    4610           0 :     if (atomsZone->isGCMarking())
    4611           0 :         finder.addEdgeTo(atomsZone);
    4612             : 
    4613           0 :     for (CompartmentsInZoneIter comp(this); !comp.done(); comp.next())
    4614           0 :         comp->findOutgoingEdges(finder);
    4615             : 
    4616           0 :     for (ZoneSet::Range r = gcSweepGroupEdges().all(); !r.empty(); r.popFront()) {
    4617           0 :         if (r.front()->isGCMarking())
    4618           0 :             finder.addEdgeTo(r.front());
    4619             :     }
    4620             : 
    4621           0 :     Debugger::findZoneEdges(this, finder);
    4622           0 : }
    4623             : 
    4624             : bool
    4625           0 : GCRuntime::findInterZoneEdges()
    4626             : {
    4627             :     /*
    4628             :      * Weakmaps which have keys with delegates in a different zone introduce the
    4629             :      * need for zone edges from the delegate's zone to the weakmap zone.
    4630             :      *
    4631             :      * Since the edges point into and not away from the zone the weakmap is in,
    4632             :      * we must find these edges in advance and store them in a set on the Zone.
    4633             :      * If we run out of memory, we fall back to sweeping everything in one
    4634             :      * group.
    4635             :      */
    4636             : 
    4637           0 :     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
    4638           0 :         if (!WeakMapBase::findInterZoneEdges(zone))
    4639           0 :             return false;
    4640             :     }
    4641             : 
    4642           0 :     return true;
    4643             : }
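
/*
 * Illustrative sketch (not part of the original source) restating the rule
 * above with hypothetical types: for a weakmap in zone W whose key has a
 * delegate in zone D, record an edge from D to W, meaning D must not be
 * swept in a later slice than W.
 */
#include <utility>
#include <vector>

struct Zone;                               // opaque in this sketch
using ZoneEdge = std::pair<Zone*, Zone*>;  // (from, to)

static void AddDelegateEdge(std::vector<ZoneEdge>& sweepGroupEdges,
                            Zone* delegateZone, Zone* weakMapZone)
{
    // The edge points *into* the weakmap's zone, which is why these edges
    // must be discovered in advance rather than while tracing that zone.
    sweepGroupEdges.push_back({delegateZone, weakMapZone});
}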
    4644             : 
    4645             : void
    4646           0 : GCRuntime::groupZonesForSweeping(JS::gcreason::Reason reason, AutoLockForExclusiveAccess& lock)
    4647             : {
    4648             : #ifdef DEBUG
    4649           0 :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
    4650           0 :         MOZ_ASSERT(zone->gcSweepGroupEdges().empty());
    4651             : #endif
    4652             : 
    4653           0 :     JSContext* cx = TlsContext.get();
    4654           0 :     ZoneComponentFinder finder(cx->nativeStackLimit[JS::StackForSystemCode], lock);
    4655           0 :     if (!isIncremental || !findInterZoneEdges())
    4656           0 :         finder.useOneComponent();
    4657             : 
    4658             : #ifdef JS_GC_ZEAL
    4659             :     // Use one component for IncrementalSweepThenFinish zeal mode.
    4660           0 :     if (isIncremental && reason == JS::gcreason::DEBUG_GC &&
    4661           0 :         hasZealMode(ZealMode::IncrementalSweepThenFinish))
    4662             :     {
    4663           0 :         finder.useOneComponent();
    4664             :     }
    4665             : #endif
    4666             : 
    4667           0 :     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
    4668           0 :         MOZ_ASSERT(zone->isGCMarking());
    4669           0 :         finder.addNode(zone);
    4670             :     }
    4671           0 :     sweepGroups = finder.getResultsList();
    4672           0 :     currentSweepGroup = sweepGroups;
    4673           0 :     sweepGroupIndex = 0;
    4674             : 
    4675           0 :     for (GCZonesIter zone(rt); !zone.done(); zone.next())
    4676           0 :         zone->gcSweepGroupEdges().clear();
    4677             : 
    4678             : #ifdef DEBUG
    4679           0 :     for (Zone* head = currentSweepGroup; head; head = head->nextGroup()) {
    4680           0 :         for (Zone* zone = head; zone; zone = zone->nextNodeInGroup())
    4681           0 :             MOZ_ASSERT(zone->isGCMarking());
    4682             :     }
    4683             : 
    4684           0 :     MOZ_ASSERT_IF(!isIncremental, !currentSweepGroup->nextGroup());
    4685           0 :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
    4686           0 :         MOZ_ASSERT(zone->gcSweepGroupEdges().empty());
    4687             : #endif
    4688           0 : }
    4689             : 
    4690             : static void
    4691             : ResetGrayList(JSCompartment* comp);
    4692             : 
    4693             : void
    4694           0 : GCRuntime::getNextSweepGroup()
    4695             : {
    4696           0 :     currentSweepGroup = currentSweepGroup->nextGroup();
    4697           0 :     ++sweepGroupIndex;
    4698           0 :     if (!currentSweepGroup) {
    4699           0 :         abortSweepAfterCurrentGroup = false;
    4700           0 :         return;
    4701             :     }
    4702             : 
    4703           0 :     for (Zone* zone = currentSweepGroup; zone; zone = zone->nextNodeInGroup()) {
    4704           0 :         MOZ_ASSERT(zone->isGCMarking());
    4705           0 :         MOZ_ASSERT(!zone->isQueuedForBackgroundSweep());
    4706             :     }
    4707             : 
    4708           0 :     if (!isIncremental)
    4709           0 :         ZoneComponentFinder::mergeGroups(currentSweepGroup);
    4710             : 
    4711           0 :     if (abortSweepAfterCurrentGroup) {
    4712           0 :         MOZ_ASSERT(!isIncremental);
    4713           0 :         for (GCSweepGroupIter zone(rt); !zone.done(); zone.next()) {
    4714           0 :             MOZ_ASSERT(!zone->gcNextGraphComponent);
    4715           0 :             MOZ_ASSERT(zone->isGCMarking());
    4716           0 :             zone->setNeedsIncrementalBarrier(false);
    4717           0 :             zone->setGCState(Zone::NoGC);
    4718           0 :             zone->gcGrayRoots().clearAndFree();
    4719             :         }
    4720             : 
    4721           0 :         for (GCCompartmentGroupIter comp(rt); !comp.done(); comp.next())
    4722           0 :             ResetGrayList(comp);
    4723             : 
    4724           0 :         abortSweepAfterCurrentGroup = false;
    4725           0 :         currentSweepGroup = nullptr;
    4726             :     }
    4727             : }
    4728             : 
    4729             : /*
    4730             :  * Gray marking:
    4731             :  *
    4732             :  * At the end of collection, anything reachable from a gray root that has not
    4733             :  * otherwise been marked black must be marked gray.
    4734             :  *
    4735             :  * This means that when marking things gray we must not allow marking to leave
    4736             :  * the current compartment group, as that could result in things being marked
    4737             :  * gray when they might subsequently be marked black.  To achieve this, when we
    4738             :  * find a cross compartment pointer we don't mark the referent; instead we add
    4739             :  * the referring wrapper to a singly-linked list of incoming gray pointers
    4740             :  * stored with the referent's compartment.
    4741             :  *
    4742             :  * The list head is stored in JSCompartment::gcIncomingGrayPointers and contains
    4743             :  * cross compartment wrapper objects. The next pointer is stored in the second
    4744             :  * extra slot of the cross compartment wrapper.
    4745             :  *
    4746             :  * The list is created during gray marking when one of the
    4747             :  * MarkCrossCompartmentXXX functions is called for a pointer that leaves the
    4748             :  * current compartment group.  This calls DelayCrossCompartmentGrayMarking to
    4749             :  * push the referring object onto the list.
    4750             :  *
    4751             :  * The list is traversed and then unlinked in
    4752             :  * MarkIncomingCrossCompartmentPointers.
    4753             :  */
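
/*
 * Illustrative sketch (not part of the original source) of the intrusive
 * list described above, with hypothetical types. The real code threads the
 * link through a proxy reserved slot, using an undefined slot value to mean
 * "not on any list" and null to terminate the list; the boolean below models
 * that distinction.
 */
struct Wrapper {
    bool onGrayList = false;      // models "slot is undefined"
    Wrapper* grayLink = nullptr;  // next entry; nullptr ends the list
};

struct Compartment {
    Wrapper* gcIncomingGrayPointers = nullptr;  // list head
};

static void DelayGrayMarking(Compartment* comp, Wrapper* w)
{
    if (w->onGrayList)
        return;                   // already recorded
    w->grayLink = comp->gcIncomingGrayPointers;
    comp->gcIncomingGrayPointers = w;
    w->onGrayList = true;
}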
    4754             : 
    4755             : static bool
    4756          94 : IsGrayListObject(JSObject* obj)
    4757             : {
    4758          94 :     MOZ_ASSERT(obj);
    4759          94 :     return obj->is<CrossCompartmentWrapperObject>() && !IsDeadProxyObject(obj);
    4760             : }
    4761             : 
    4762             : /* static */ unsigned
    4763          46 : ProxyObject::grayLinkReservedSlot(JSObject* obj)
    4764             : {
    4765          46 :     MOZ_ASSERT(IsGrayListObject(obj));
    4766          46 :     return CrossCompartmentWrapperObject::GrayLinkReservedSlot;
    4767             : }
    4768             : 
    4769             : #ifdef DEBUG
    4770             : static void
    4771           0 : AssertNotOnGrayList(JSObject* obj)
    4772             : {
    4773           0 :     MOZ_ASSERT_IF(IsGrayListObject(obj),
    4774             :                   GetProxyReservedSlot(obj, ProxyObject::grayLinkReservedSlot(obj)).isUndefined());
    4775           0 : }
    4776             : #endif
    4777             : 
    4778             : static void
    4779           0 : AssertNoWrappersInGrayList(JSRuntime* rt)
    4780             : {
    4781             : #ifdef DEBUG
    4782           0 :     for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) {
    4783           0 :         MOZ_ASSERT(!c->gcIncomingGrayPointers);
    4784           0 :         for (JSCompartment::NonStringWrapperEnum e(c); !e.empty(); e.popFront())
    4785           0 :             AssertNotOnGrayList(&e.front().value().unbarrieredGet().toObject());
    4786             :     }
    4787             : #endif
    4788           0 : }
    4789             : 
    4790             : static JSObject*
    4791           0 : CrossCompartmentPointerReferent(JSObject* obj)
    4792             : {
    4793           0 :     MOZ_ASSERT(IsGrayListObject(obj));
    4794           0 :     return &obj->as<ProxyObject>().private_().toObject();
    4795             : }
    4796             : 
    4797             : static JSObject*
    4798           0 : NextIncomingCrossCompartmentPointer(JSObject* prev, bool unlink)
    4799             : {
    4800           0 :     unsigned slot = ProxyObject::grayLinkReservedSlot(prev);
    4801           0 :     JSObject* next = GetProxyReservedSlot(prev, slot).toObjectOrNull();
    4802           0 :     MOZ_ASSERT_IF(next, IsGrayListObject(next));
    4803             : 
    4804           0 :     if (unlink)
    4805           0 :         SetProxyReservedSlot(prev, slot, UndefinedValue());
    4806             : 
    4807           0 :     return next;
    4808             : }
    4809             : 
    4810             : void
    4811           0 : js::DelayCrossCompartmentGrayMarking(JSObject* src)
    4812             : {
    4813           0 :     MOZ_ASSERT(IsGrayListObject(src));
    4814             : 
    4815             :     /* Called from MarkCrossCompartmentXXX functions. */
    4816           0 :     unsigned slot = ProxyObject::grayLinkReservedSlot(src);
    4817           0 :     JSObject* dest = CrossCompartmentPointerReferent(src);
    4818           0 :     JSCompartment* comp = dest->compartment();
    4819             : 
    4820           0 :     if (GetProxyReservedSlot(src, slot).isUndefined()) {
    4821           0 :         SetProxyReservedSlot(src, slot, ObjectOrNullValue(comp->gcIncomingGrayPointers));
    4822           0 :         comp->gcIncomingGrayPointers = src;
    4823             :     } else {
    4824           0 :         MOZ_ASSERT(GetProxyReservedSlot(src, slot).isObjectOrNull());
    4825             :     }
    4826             : 
    4827             : #ifdef DEBUG
    4828             :     /*
    4829             :      * Assert that the object is in our list, also walking the list to check its
    4830             :      * integrity.
    4831             :      */
    4832           0 :     JSObject* obj = comp->gcIncomingGrayPointers;
    4833           0 :     bool found = false;
    4834           0 :     while (obj) {
    4835           0 :         if (obj == src)
    4836           0 :             found = true;
    4837           0 :         obj = NextIncomingCrossCompartmentPointer(obj, false);
    4838             :     }
    4839           0 :     MOZ_ASSERT(found);
    4840             : #endif
    4841           0 : }
    4842             : 
    4843             : static void
    4844           0 : MarkIncomingCrossCompartmentPointers(JSRuntime* rt, MarkColor color)
    4845             : {
    4846           0 :     MOZ_ASSERT(color == MarkColor::Black || color == MarkColor::Gray);
    4847             : 
    4848             :     static const gcstats::PhaseKind statsPhases[] = {
    4849             :         gcstats::PhaseKind::SWEEP_MARK_INCOMING_BLACK,
    4850             :         gcstats::PhaseKind::SWEEP_MARK_INCOMING_GRAY
    4851             :     };
    4852           0 :     gcstats::AutoPhase ap1(rt->gc.stats(), statsPhases[unsigned(color)]);
    4853             : 
    4854           0 :     bool unlinkList = color == MarkColor::Gray;
    4855             : 
    4856           0 :     for (GCCompartmentGroupIter c(rt); !c.done(); c.next()) {
    4857           0 :         MOZ_ASSERT_IF(color == MarkColor::Gray, c->zone()->isGCMarkingGray());
    4858           0 :         MOZ_ASSERT_IF(color == MarkColor::Black, c->zone()->isGCMarkingBlack());
    4859           0 :         MOZ_ASSERT_IF(c->gcIncomingGrayPointers, IsGrayListObject(c->gcIncomingGrayPointers));
    4860             : 
    4861           0 :         for (JSObject* src = c->gcIncomingGrayPointers;
    4862           0 :              src;
    4863           0 :              src = NextIncomingCrossCompartmentPointer(src, unlinkList))
    4864             :         {
    4865           0 :             JSObject* dst = CrossCompartmentPointerReferent(src);
    4866           0 :             MOZ_ASSERT(dst->compartment() == c);
    4867             : 
    4868           0 :             if (color == MarkColor::Gray) {
    4869           0 :                 if (IsMarkedUnbarriered(rt, &src) && src->asTenured().isMarkedGray())
    4870           0 :                     TraceManuallyBarrieredEdge(&rt->gc.marker, &dst,
    4871           0 :                                                "cross-compartment gray pointer");
    4872             :             } else {
    4873           0 :                 if (IsMarkedUnbarriered(rt, &src) && !src->asTenured().isMarkedGray())
    4874           0 :                     TraceManuallyBarrieredEdge(&rt->gc.marker, &dst,
    4875           0 :                                                "cross-compartment black pointer");
    4876             :             }
    4877             :         }
    4878             : 
    4879           0 :         if (unlinkList)
    4880           0 :             c->gcIncomingGrayPointers = nullptr;
    4881             :     }
    4882             : 
    4883           0 :     auto unlimited = SliceBudget::unlimited();
    4884           0 :     MOZ_RELEASE_ASSERT(rt->gc.marker.drainMarkStack(unlimited));
    4885           0 : }
    4886             : 
    4887             : static bool
    4888          48 : RemoveFromGrayList(JSObject* wrapper)
    4889             : {
    4890          48 :     if (!IsGrayListObject(wrapper))
    4891           2 :         return false;
    4892             : 
    4893          46 :     unsigned slot = ProxyObject::grayLinkReservedSlot(wrapper);
    4894          46 :     if (GetProxyReservedSlot(wrapper, slot).isUndefined())
    4895          46 :         return false;  /* Not on our list. */
    4896             : 
    4897           0 :     JSObject* tail = GetProxyReservedSlot(wrapper, slot).toObjectOrNull();
    4898           0 :     SetProxyReservedSlot(wrapper, slot, UndefinedValue());
    4899             : 
    4900           0 :     JSCompartment* comp = CrossCompartmentPointerReferent(wrapper)->compartment();
    4901           0 :     JSObject* obj = comp->gcIncomingGrayPointers;
    4902           0 :     if (obj == wrapper) {
    4903           0 :         comp->gcIncomingGrayPointers = tail;
    4904           0 :         return true;
    4905             :     }
    4906             : 
    4907           0 :     while (obj) {
    4908           0 :         unsigned slot = ProxyObject::grayLinkReservedSlot(obj);
    4909           0 :         JSObject* next = GetProxyReservedSlot(obj, slot).toObjectOrNull();
    4910           0 :         if (next == wrapper) {
    4911           0 :             SetProxyReservedSlot(obj, slot, ObjectOrNullValue(tail));
    4912           0 :             return true;
    4913             :         }
    4914           0 :         obj = next;
    4915             :     }
    4916             : 
    4917           0 :     MOZ_CRASH("object not found in gray link list");
    4918             : }
    4919             : 
    4920             : static void
    4921           0 : ResetGrayList(JSCompartment* comp)
    4922             : {
    4923           0 :     JSObject* src = comp->gcIncomingGrayPointers;
    4924           0 :     while (src)
    4925           0 :         src = NextIncomingCrossCompartmentPointer(src, true);
    4926           0 :     comp->gcIncomingGrayPointers = nullptr;
    4927           0 : }
    4928             : 
    4929             : void
    4930          44 : js::NotifyGCNukeWrapper(JSObject* obj)
    4931             : {
    4932             :     /*
    4933             :      * References to the target of the wrapper are being removed, so we no
    4934             :      * longer have to remember to mark it.
    4935             :      */
    4936          44 :     RemoveFromGrayList(obj);
    4937          44 : }
    4938             : 
    4939             : enum {
    4940             :     JS_GC_SWAP_OBJECT_A_REMOVED = 1 << 0,
    4941             :     JS_GC_SWAP_OBJECT_B_REMOVED = 1 << 1
    4942             : };
    4943             : 
    4944             : unsigned
    4945           2 : js::NotifyGCPreSwap(JSObject* a, JSObject* b)
    4946             : {
    4947             :     /*
    4948             :      * Two objects in the same compartment are about to have their contents
    4949             :      * swapped.  If either of them is in our gray pointer lists, we remove it,
    4950             :      * returning a bitset indicating what happened.
    4951             :      */
    4952           4 :     return (RemoveFromGrayList(a) ? JS_GC_SWAP_OBJECT_A_REMOVED : 0) |
    4953           4 :            (RemoveFromGrayList(b) ? JS_GC_SWAP_OBJECT_B_REMOVED : 0);
    4954             : }
    4955             : 
    4956             : void
    4957           2 : js::NotifyGCPostSwap(JSObject* a, JSObject* b, unsigned removedFlags)
    4958             : {
    4959             :     /*
    4960             :      * Two objects in the same compartment have had their contents swapped.  If
    4961             :      * either of them was in our gray pointer lists, we re-add them.
    4962             :      */
    4963           2 :     if (removedFlags & JS_GC_SWAP_OBJECT_A_REMOVED)
    4964           0 :         DelayCrossCompartmentGrayMarking(b);
    4965           2 :     if (removedFlags & JS_GC_SWAP_OBJECT_B_REMOVED)
    4966           0 :         DelayCrossCompartmentGrayMarking(a);
    4967           2 : }
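
/*
 * Illustrative caller of the pre/post pair above; the swap itself is elided
 * and ExampleSwapContents is a hypothetical name.
 */
static void ExampleSwapContents(JSObject* a, JSObject* b)
{
    unsigned removed = js::NotifyGCPreSwap(a, b);
    // ... swap the objects' contents here ...
    js::NotifyGCPostSwap(a, b, removed);
}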
    4968             : 
    4969             : void
    4970           0 : GCRuntime::endMarkingSweepGroup()
    4971             : {
    4972           0 :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP_MARK);
    4973             : 
    4974             :     /*
    4975             :      * Mark any incoming black pointers from previously swept compartments
    4976             :      * whose referents are not marked. This can occur when gray cells become
    4977             :      * black by the action of UnmarkGray.
    4978             :      */
    4979           0 :     MarkIncomingCrossCompartmentPointers(rt, MarkColor::Black);
    4980           0 :     markWeakReferencesInCurrentGroup(gcstats::PhaseKind::SWEEP_MARK_WEAK);
    4981             : 
    4982             :     /*
    4983             :      * Change the state of the current group to MarkGray to restrict marking to this
    4984             :      * group.  Note that there may be pointers to the atoms compartment, and
    4985             :      * these will be marked through, as they are not marked with
    4986             :      * MarkCrossCompartmentXXX.
    4987             :      */
    4988           0 :     for (GCSweepGroupIter zone(rt); !zone.done(); zone.next()) {
    4989           0 :         MOZ_ASSERT(zone->isGCMarkingBlack());
    4990           0 :         zone->setGCState(Zone::MarkGray);
    4991             :     }
    4992           0 :     marker.setMarkColorGray();
    4993             : 
    4994             :     /* Mark incoming gray pointers from previously swept compartments. */
    4995           0 :     MarkIncomingCrossCompartmentPointers(rt, MarkColor::Gray);
    4996             : 
    4997             :     /* Mark gray roots and mark transitively inside the current compartment group. */
    4998           0 :     markGrayReferencesInCurrentGroup(gcstats::PhaseKind::SWEEP_MARK_GRAY);
    4999           0 :     markWeakReferencesInCurrentGroup(gcstats::PhaseKind::SWEEP_MARK_GRAY_WEAK);
    5000             : 
    5001             :     /* Restore marking state. */
    5002           0 :     for (GCSweepGroupIter zone(rt); !zone.done(); zone.next()) {
    5003           0 :         MOZ_ASSERT(zone->isGCMarkingGray());
    5004           0 :         zone->setGCState(Zone::Mark);
    5005             :     }
    5006           0 :     MOZ_ASSERT(marker.isDrained());
    5007           0 :     marker.setMarkColorBlack();
    5008           0 : }
    5009             : 
    5010             : // Causes the given WeakCache to be swept when run.
    5011           0 : class ImmediateSweepWeakCacheTask : public GCParallelTask
    5012             : {
    5013             :     JS::detail::WeakCacheBase& cache;
    5014             : 
    5015             :     ImmediateSweepWeakCacheTask(const ImmediateSweepWeakCacheTask&) = delete;
    5016             : 
    5017             :   public:
    5018           0 :     ImmediateSweepWeakCacheTask(JSRuntime* rt, JS::detail::WeakCacheBase& wc)
    5019           0 :       : GCParallelTask(rt), cache(wc)
    5020           0 :     {}
    5021             : 
    5022           0 :     ImmediateSweepWeakCacheTask(ImmediateSweepWeakCacheTask&& other)
    5023           0 :       : GCParallelTask(mozilla::Move(other)), cache(other.cache)
    5024           0 :     {}
    5025             : 
    5026           0 :     void run() override {
    5027           0 :         cache.sweep();
    5028           0 :     }
    5029             : };
    5030             : 
    5031             : static void
    5032           0 : UpdateAtomsBitmap(JSRuntime* runtime)
    5033             : {
    5034           0 :     DenseBitmap marked;
    5035           0 :     if (runtime->gc.atomMarking.computeBitmapFromChunkMarkBits(runtime, marked)) {
    5036           0 :         for (GCZonesIter zone(runtime); !zone.done(); zone.next())
    5037           0 :             runtime->gc.atomMarking.updateZoneBitmap(zone, marked);
    5038             :     } else {
    5039             :         // Ignore OOM in computeBitmapFromChunkMarkBits. The updateZoneBitmap
    5040             :         // call can only remove atoms from the zone bitmap, so it is
    5041             :         // conservative to just not call it.
    5042             :     }
    5043             : 
    5044           0 :     runtime->gc.atomMarking.updateChunkMarkBits(runtime);
    5045             : 
    5046             :     // For convenience sweep these tables non-incrementally as part of bitmap
    5047             :     // sweeping; they are likely to be much smaller than the main atoms table.
    5048           0 :     runtime->unsafeSymbolRegistry().sweep();
    5049           0 :     for (CompartmentsIter comp(runtime, SkipAtoms); !comp.done(); comp.next())
    5050           0 :         comp->sweepVarNames();
    5051           0 : }
    5052             : 
    5053             : static void
    5054           0 : SweepCCWrappers(JSRuntime* runtime)
    5055             : {
    5056           0 :     for (GCCompartmentGroupIter c(runtime); !c.done(); c.next())
    5057           0 :         c->sweepCrossCompartmentWrappers();
    5058           0 : }
    5059             : 
    5060             : static void
    5061           0 : SweepObjectGroups(JSRuntime* runtime)
    5062             : {
    5063           0 :     for (GCCompartmentGroupIter c(runtime); !c.done(); c.next())
    5064           0 :         c->objectGroups.sweep(runtime->defaultFreeOp());
    5065           0 : }
    5066             : 
    5067             : static void
    5068           0 : SweepRegExps(JSRuntime* runtime)
    5069             : {
    5070           0 :     for (GCCompartmentGroupIter c(runtime); !c.done(); c.next())
    5071           0 :         c->sweepRegExps();
    5072           0 : }
    5073             : 
    5074             : static void
    5075           0 : SweepMisc(JSRuntime* runtime)
    5076             : {
    5077           0 :     for (GCCompartmentGroupIter c(runtime); !c.done(); c.next()) {
    5078           0 :         c->sweepGlobalObject();
    5079           0 :         c->sweepTemplateObjects();
    5080           0 :         c->sweepSavedStacks();
    5081           0 :         c->sweepTemplateLiteralMap();
    5082           0 :         c->sweepSelfHostingScriptSource();
    5083           0 :         c->sweepNativeIterators();
    5084           0 :         c->sweepWatchpoints();
    5085             :     }
    5086           0 : }
    5087             : 
    5088             : static void
    5089           0 : SweepCompressionTasks(JSRuntime* runtime)
    5090             : {
    5091           0 :     AutoLockHelperThreadState lock;
    5092             : 
    5093             :     // Attach finished compression tasks.
    5094           0 :     auto& finished = HelperThreadState().compressionFinishedList(lock);
    5095           0 :     for (size_t i = 0; i < finished.length(); i++) {
    5096           0 :         if (finished[i]->runtimeMatches(runtime)) {
    5097           0 :             UniquePtr<SourceCompressionTask> task(Move(finished[i]));
    5098           0 :             HelperThreadState().remove(finished, &i);
    5099           0 :             task->complete();
    5100             :         }
    5101             :     }
    5102             : 
    5103             :     // Sweep pending tasks that are holding onto should-be-dead ScriptSources.
    5104           0 :     auto& pending = HelperThreadState().compressionPendingList(lock);
    5105           0 :     for (size_t i = 0; i < pending.length(); i++) {
    5106           0 :         if (pending[i]->shouldCancel())
    5107           0 :             HelperThreadState().remove(pending, &i);
    5108             :     }
    5109           0 : }
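
/*
 * Illustrative sketch (not part of the original source): the filtering done
 * above, restated as the standard erase/remove_if idiom on a std::vector.
 * The shouldRemove predicate is a hypothetical stand-in for the
 * runtimeMatches/shouldCancel checks.
 */
#include <algorithm>
#include <vector>

template <typename T, typename Pred>
static void FilterInPlace(std::vector<T>& items, Pred shouldRemove)
{
    items.erase(std::remove_if(items.begin(), items.end(), shouldRemove),
                items.end());
}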
    5110             : 
    5111             : static void
    5112           0 : SweepWeakMaps(JSRuntime* runtime)
    5113             : {
    5114           0 :     for (GCSweepGroupIter zone(runtime); !zone.done(); zone.next()) {
    5115             :         /* Clear all weakrefs that point to unmarked things. */
    5116           0 :         for (auto edge : zone->gcWeakRefs()) {
    5117             :             /* Edges may be present multiple times, so may already be nulled. */
    5118           0 :             if (*edge && IsAboutToBeFinalizedDuringSweep(**edge))
    5119           0 :                 *edge = nullptr;
    5120             :         }
    5121           0 :         zone->gcWeakRefs().clear();
    5122             : 
    5123             :         /* No need to look up any more weakmap keys from this sweep group. */
    5124           0 :         AutoEnterOOMUnsafeRegion oomUnsafe;
    5125           0 :         if (!zone->gcWeakKeys().clear())
    5126           0 :             oomUnsafe.crash("clearing weak keys in beginSweepingSweepGroup()");
    5127             : 
    5128           0 :         zone->sweepWeakMaps();
    5129             :     }
    5130           0 : }
    5131             : 
    5132             : static void
    5133           0 : SweepUniqueIds(JSRuntime* runtime)
    5134             : {
    5135           0 :     FreeOp fop(nullptr);
    5136           0 :     for (GCSweepGroupIter zone(runtime); !zone.done(); zone.next())
    5137           0 :         zone->sweepUniqueIds(&fop);
    5138           0 : }
    5139             : 
    5140             : void
    5141           2 : GCRuntime::startTask(GCParallelTask& task, gcstats::PhaseKind phase, AutoLockHelperThreadState& locked)
    5142             : {
    5143           2 :     if (!task.startWithLockHeld(locked)) {
    5144           0 :         AutoUnlockHelperThreadState unlock(locked);
    5145           0 :         gcstats::AutoPhase ap(stats(), phase);
    5146           0 :         task.runFromActiveCooperatingThread(rt);
    5147             :     }
    5148           2 : }
    5149             : 
    5150             : void
    5151           2 : GCRuntime::joinTask(GCParallelTask& task, gcstats::PhaseKind phase, AutoLockHelperThreadState& locked)
    5152             : {
    5153             :     {
    5154           4 :         gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::JOIN_PARALLEL_TASKS);
    5155           2 :         task.joinWithLockHeld(locked);
    5156             :     }
    5157           2 :     stats().recordParallelPhase(phase, task.duration());
    5158           2 : }
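
/*
 * Illustrative sketch (not part of the original source) of the fallback in
 * startTask above: prefer a helper thread, but run the work on the calling
 * thread if no worker can be started. Task is a hypothetical type with a
 * run() method.
 */
#include <system_error>
#include <thread>

template <typename Task>
static std::thread StartOrRunNow(Task& task)
{
    try {
        return std::thread([&task] { task.run(); });  // preferred path
    } catch (const std::system_error&) {
        task.run();             // fallback: run synchronously, like startTask
        return std::thread();   // not joinable; nothing to join later
    }
}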
    5159             : 
    5160             : void
    5161           0 : GCRuntime::sweepDebuggerOnMainThread(FreeOp* fop)
    5162             : {
    5163             :     // Detach unreachable debuggers and global objects from each other.
    5164             :     // This can modify weakmaps and so must happen before weakmap sweeping.
    5165           0 :     Debugger::sweepAll(fop);
    5166             : 
    5167           0 :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP_COMPARTMENTS);
    5168             : 
    5169             :     // Sweep debug environment information. This performs lookups in the Zone's
    5170             :     // unique IDs table and so must not happen in parallel with sweeping that
    5171             :     // table.
    5172             :     {
    5173           0 :         gcstats::AutoPhase ap2(stats(), gcstats::PhaseKind::SWEEP_MISC);
    5174           0 :         for (GCCompartmentGroupIter c(rt); !c.done(); c.next())
    5175           0 :             c->sweepDebugEnvironments();
    5176             :     }
    5177             : 
    5178             :     // Sweep breakpoints. This is done here to keep it with the other debug sweeping,
    5179             :     // although note that it can cause JIT code to be patched.
    5180             :     {
    5181           0 :         gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP_BREAKPOINT);
    5182           0 :         for (GCSweepGroupIter zone(rt); !zone.done(); zone.next())
    5183           0 :             zone->sweepBreakpoints(fop);
    5184             :     }
    5185           0 : }
    5186             : 
    5187             : void
    5188           0 : GCRuntime::sweepJitDataOnMainThread(FreeOp* fop)
    5189             : {
    5190             :     {
    5191           0 :         gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP_JIT_DATA);
    5192             : 
    5193             :         // Cancel any active or pending off-thread compilations.
    5194           0 :         js::CancelOffThreadIonCompile(rt, JS::Zone::Sweep);
    5195             : 
    5196           0 :         for (GCCompartmentGroupIter c(rt); !c.done(); c.next())
    5197           0 :             c->sweepJitCompartment(fop);
    5198             : 
    5199           0 :         for (GCSweepGroupIter zone(rt); !zone.done(); zone.next()) {
    5200           0 :             if (jit::JitZone* jitZone = zone->jitZone())
    5201           0 :                 jitZone->sweep(fop);
    5202             :         }
    5203             : 
    5204             :         // Bug 1071218: the following method has not yet been refactored to
    5205             :         // work on a single zone-group at once.
    5206             : 
    5207             :         // Sweep entries containing about-to-be-finalized JitCode and
    5208             :         // update relocated TypeSet::Types inside the JitcodeGlobalTable.
    5209           0 :         jit::JitRuntime::SweepJitcodeGlobalTable(rt);
    5210             :     }
    5211             : 
    5212             :     {
    5213           0 :         gcstats::AutoPhase apdc(stats(), gcstats::PhaseKind::SWEEP_DISCARD_CODE);
    5214           0 :         for (GCSweepGroupIter zone(rt); !zone.done(); zone.next())
    5215           0 :             zone->discardJitCode(fop);
    5216             :     }
    5217             : 
    5218             :     {
    5219           0 :         gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::SWEEP_TYPES);
    5220           0 :         gcstats::AutoPhase ap2(stats(), gcstats::PhaseKind::SWEEP_TYPES_BEGIN);
    5221           0 :         for (GCSweepGroupIter zone(rt); !zone.done(); zone.next())
    5222           0 :             zone->beginSweepTypes(fop, releaseObservedTypes && !zone->isPreservingCode());
    5223             :     }
    5224           0 : }
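Both sweepDebuggerOnMainThread and sweepJitDataOnMainThread lean on gcstats::AutoPhase, an RAII timer that attributes time to (possibly nested) phases. As a minimal standalone sketch of that pattern, assuming hypothetical PhaseStats/AutoPhase stand-ins rather than the engine's actual types:

```cpp
#include <chrono>
#include <cstdio>
#include <map>
#include <string>

// Hypothetical stand-in for gcstats::Statistics: total time per phase name.
struct PhaseStats {
    std::map<std::string, std::chrono::nanoseconds> totals;
};

// Minimal analogue of gcstats::AutoPhase: charges the enclosing scope's
// duration to a phase when the scope exits.
class AutoPhase {
    PhaseStats& stats_;
    std::string phase_;
    std::chrono::steady_clock::time_point start_;

  public:
    AutoPhase(PhaseStats& stats, std::string phase)
      : stats_(stats), phase_(std::move(phase)),
        start_(std::chrono::steady_clock::now()) {}
    ~AutoPhase() {
        auto elapsed = std::chrono::steady_clock::now() - start_;
        stats_.totals[phase_] +=
            std::chrono::duration_cast<std::chrono::nanoseconds>(elapsed);
    }
};

int main() {
    PhaseStats stats;
    {
        AutoPhase ap(stats, "SWEEP_JIT_DATA");
        { AutoPhase ap2(stats, "SWEEP_DISCARD_CODE"); /* ... work ... */ }
    }
    for (const auto& entry : stats.totals)
        std::printf("%s: %lld ns\n", entry.first.c_str(),
                    static_cast<long long>(entry.second.count()));
}
```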
    5225             : 
    5226             : using WeakCacheTaskVector = mozilla::Vector<ImmediateSweepWeakCacheTask, 0, SystemAllocPolicy>;
    5227             : 
    5228             : enum WeakCacheLocation
    5229             : {
    5230             :     RuntimeWeakCache,
    5231             :     ZoneWeakCache
    5232             : };
    5233             : 
    5234             : // Call a functor for all weak caches that need to be swept in the current
    5235             : // sweep group.
    5236             : template <typename Functor>
    5237             : static inline bool
    5238           0 : IterateWeakCaches(JSRuntime* rt, Functor f)
    5239             : {
    5240           0 :     for (GCSweepGroupIter zone(rt); !zone.done(); zone.next()) {
    5241           0 :         for (JS::detail::WeakCacheBase* cache : zone->weakCaches()) {
    5242           0 :             if (!f(cache, ZoneWeakCache))
    5243           0 :                 return false;
    5244             :         }
    5245             :     }
    5246             : 
    5247           0 :     for (JS::detail::WeakCacheBase* cache : rt->weakCaches()) {
    5248           0 :         if (!f(cache, RuntimeWeakCache))
    5249           0 :             return false;
    5250             :     }
    5251             : 
    5252           0 :     return true;
    5253             : }
    5254             : 
    5255             : static bool
    5256           0 : PrepareWeakCacheTasks(JSRuntime* rt, WeakCacheTaskVector* immediateTasks)
    5257             : {
    5258             :     // Start incremental sweeping for caches that support it or add to a vector
    5259             :     // of sweep tasks to run on a helper thread.
    5260             : 
    5261           0 :     MOZ_ASSERT(immediateTasks->empty());
    5262             : 
    5263           0 :     bool ok = IterateWeakCaches(rt, [&] (JS::detail::WeakCacheBase* cache,
    5264           0 :                                          WeakCacheLocation location)
    5265             :     {
    5266           0 :         if (!cache->needsSweep())
    5267           0 :             return true;
    5268             : 
    5269             :         // Caches that support incremental sweeping will be swept later.
    5270           0 :         if (location == ZoneWeakCache && cache->setNeedsIncrementalBarrier(true))
    5271           0 :             return true;
    5272             : 
    5273           0 :         return immediateTasks->emplaceBack(rt, *cache);
    5274           0 :     });
    5275             : 
    5276           0 :     if (!ok)
    5277           0 :         immediateTasks->clearAndFree();
    5278             : 
    5279           0 :     return ok;
    5280             : }
    5281             : 
    5282             : static void
    5283           0 : SweepWeakCachesOnMainThread(JSRuntime* rt)
    5284             : {
    5285             :     // If we ran out of memory, do all the work on the main thread.
    5286           0 :     gcstats::AutoPhase ap(rt->gc.stats(), gcstats::PhaseKind::SWEEP_WEAK_CACHES);
    5287           0 :     IterateWeakCaches(rt, [&] (JS::detail::WeakCacheBase* cache, WeakCacheLocation location) {
    5288           0 :         if (cache->needsIncrementalBarrier())
    5289           0 :             cache->setNeedsIncrementalBarrier(false);
    5290           0 :         cache->sweep();
    5291           0 :         return true;
    5292           0 :     });
    5293           0 : }
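PrepareWeakCacheTasks and SweepWeakCachesOnMainThread together form a fallback: try to build a vector of tasks for helper threads, and if any allocation fails, drop the tasks and sweep synchronously. A rough standalone sketch of that shape, using C++ exceptions to model the fallible emplaceBack of mozilla::Vector (all names here are hypothetical):

```cpp
#include <functional>
#include <new>
#include <vector>

using Task = std::function<void()>;

// Try to build one task per cache; on allocation failure, discard everything
// (mirroring immediateTasks->clearAndFree()) and report failure.
static bool PrepareTasks(std::vector<Task>& tasks, int nCaches) {
    for (int i = 0; i < nCaches; i++) {
        try {
            tasks.push_back([i] { /* sweep cache i */ });
        } catch (const std::bad_alloc&) {
            tasks.clear();
            return false;
        }
    }
    return true;
}

int main() {
    std::vector<Task> tasks;
    if (!PrepareTasks(tasks, 8)) {
        // Out of memory: do all the sweeping on the main thread instead.
        for (int i = 0; i < 8; i++) { /* sweep cache i synchronously */ }
        return 0;
    }
    for (auto& task : tasks)
        task();   // in the engine these are started on helper threads
}
```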
    5294             : 
    5295             : void
    5296           0 : GCRuntime::beginSweepingSweepGroup()
    5297             : {
    5298             :     /*
    5299             :      * Begin sweeping the group of zones in currentSweepGroup, performing
    5300             :      * actions that must be done before yielding to the caller.
    5301             :      */
    5302             : 
    5303             :     using namespace gcstats;
    5304             : 
    5305           0 :     AutoSCC scc(stats(), sweepGroupIndex);
    5306             : 
    5307           0 :     bool sweepingAtoms = false;
    5308           0 :     for (GCSweepGroupIter zone(rt); !zone.done(); zone.next()) {
    5309             :         /* Set the GC state to sweeping. */
    5310           0 :         MOZ_ASSERT(zone->isGCMarking());
    5311           0 :         zone->setGCState(Zone::Sweep);
    5312             : 
    5313             :         /* Purge the ArenaLists before sweeping. */
    5314           0 :         zone->arenas.purge();
    5315             : 
    5316           0 :         if (zone->isAtomsZone())
    5317           0 :             sweepingAtoms = true;
    5318             : 
    5319             : #ifdef DEBUG
    5320           0 :         zone->gcLastSweepGroupIndex = sweepGroupIndex;
    5321             : #endif
    5322             :     }
    5323             : 
    5324           0 :     validateIncrementalMarking();
    5325             : 
    5326           0 :     FreeOp fop(rt);
    5327             : 
    5328             :     {
    5329           0 :         AutoPhase ap(stats(), PhaseKind::FINALIZE_START);
    5330           0 :         callFinalizeCallbacks(&fop, JSFINALIZE_GROUP_PREPARE);
    5331             :         {
    5332           0 :             AutoPhase ap2(stats(), PhaseKind::WEAK_ZONES_CALLBACK);
    5333           0 :             callWeakPointerZonesCallbacks();
    5334             :         }
    5335             :         {
    5336           0 :             AutoPhase ap2(stats(), PhaseKind::WEAK_COMPARTMENT_CALLBACK);
    5337           0 :             for (GCSweepGroupIter zone(rt); !zone.done(); zone.next()) {
    5338           0 :                 for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next())
    5339           0 :                     callWeakPointerCompartmentCallbacks(comp);
    5340             :             }
    5341             :         }
    5342           0 :         callFinalizeCallbacks(&fop, JSFINALIZE_GROUP_START);
    5343             :     }
    5344             : 
    5345           0 :     sweepDebuggerOnMainThread(&fop);
    5346             : 
    5347             :     {
    5348           0 :         AutoLockHelperThreadState lock;
    5349             : 
    5350           0 :         Maybe<AutoRunParallelTask> updateAtomsBitmap;
    5351           0 :         if (sweepingAtoms)
    5352           0 :             updateAtomsBitmap.emplace(rt, UpdateAtomsBitmap, PhaseKind::UPDATE_ATOMS_BITMAP, lock);
    5353             : 
    5354           0 :         AutoPhase ap(stats(), PhaseKind::SWEEP_COMPARTMENTS);
    5355             : 
    5356           0 :         AutoRunParallelTask sweepCCWrappers(rt, SweepCCWrappers, PhaseKind::SWEEP_CC_WRAPPER, lock);
    5357           0 :         AutoRunParallelTask sweepObjectGroups(rt, SweepObjectGroups, PhaseKind::SWEEP_TYPE_OBJECT, lock);
    5358           0 :         AutoRunParallelTask sweepRegExps(rt, SweepRegExps, PhaseKind::SWEEP_REGEXP, lock);
    5359           0 :         AutoRunParallelTask sweepMisc(rt, SweepMisc, PhaseKind::SWEEP_MISC, lock);
    5360           0 :         AutoRunParallelTask sweepCompTasks(rt, SweepCompressionTasks, PhaseKind::SWEEP_COMPRESSION, lock);
    5361           0 :         AutoRunParallelTask sweepWeakMaps(rt, SweepWeakMaps, PhaseKind::SWEEP_WEAKMAPS, lock);
    5362           0 :         AutoRunParallelTask sweepUniqueIds(rt, SweepUniqueIds, PhaseKind::SWEEP_UNIQUEIDS, lock);
    5363             : 
    5364           0 :         WeakCacheTaskVector sweepCacheTasks;
    5365           0 :         if (!PrepareWeakCacheTasks(rt, &sweepCacheTasks))
    5366           0 :             SweepWeakCachesOnMainThread(rt);
    5367             : 
    5368           0 :         for (auto& task : sweepCacheTasks)
    5369           0 :             startTask(task, PhaseKind::SWEEP_WEAK_CACHES, lock);
    5370             : 
    5371             :         {
    5372           0 :             AutoUnlockHelperThreadState unlock(lock);
    5373           0 :             sweepJitDataOnMainThread(&fop);
    5374             :         }
    5375             : 
    5376           0 :         for (auto& task : sweepCacheTasks)
    5377           0 :             joinTask(task, PhaseKind::SWEEP_WEAK_CACHES, lock);
    5378             :     }
    5379             : 
    5380           0 :     if (sweepingAtoms)
    5381           0 :         startSweepingAtomsTable();
    5382             : 
    5383             :     // Queue all GC things in all zones for sweeping, either on the foreground
    5384             :     // or on the background thread.
    5385             : 
    5386           0 :     for (GCSweepGroupIter zone(rt); !zone.done(); zone.next()) {
    5388           0 :         zone->arenas.queueForForegroundSweep(&fop, ForegroundObjectFinalizePhase);
    5389           0 :         for (unsigned i = 0; i < ArrayLength(IncrementalFinalizePhases); ++i)
    5390           0 :             zone->arenas.queueForForegroundSweep(&fop, IncrementalFinalizePhases[i]);
    5391             : 
    5392           0 :         for (unsigned i = 0; i < ArrayLength(BackgroundFinalizePhases); ++i)
    5393           0 :             zone->arenas.queueForBackgroundSweep(&fop, BackgroundFinalizePhases[i]);
    5394             : 
    5395           0 :         zone->arenas.queueForegroundThingsForSweep(&fop);
    5396             :     }
    5397             : 
    5398           0 :     sweepActionList = PerSweepGroupActionList;
    5399           0 :     sweepActionIndex = 0;
    5400           0 :     sweepPhaseIndex = 0;
    5401           0 :     sweepZone = nullptr;
    5402           0 :     sweepCache = nullptr;
    5403           0 : }
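The helper-thread section above relies on AutoRunParallelTask: each task starts when its local is constructed and is joined when it goes out of scope, so the constructing thread can interleave its own work (here, sweepJitDataOnMainThread). A minimal standalone analogue using std::thread; the class and task names below are hypothetical, not the engine's:

```cpp
#include <thread>
#include <utility>

// Hypothetical analogue of AutoRunParallelTask: start a task when the local
// is constructed, join it when the local goes out of scope.
class AutoRunTask {
    std::thread thread_;

  public:
    template <typename F>
    explicit AutoRunTask(F&& f) : thread_(std::forward<F>(f)) {}
    ~AutoRunTask() { thread_.join(); }
};

int main() {
    {
        AutoRunTask sweepCCWrappers([] { /* sweep cross-compartment wrappers */ });
        AutoRunTask sweepRegExps([] { /* sweep regexp data */ });
        // The constructing thread can do its own sweeping here, in parallel;
        // both destructors join before the scope exits.
    }
}
```

Declaring several such locals in one scope, as beginSweepingSweepGroup does, runs the tasks concurrently and guarantees they have all completed before the scope ends.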
    5404             : 
    5405             : void
    5406           0 : GCRuntime::endSweepingSweepGroup()
    5407             : {
    5408             :     {
    5409           0 :         gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::FINALIZE_END);
    5410           0 :         FreeOp fop(rt);
    5411           0 :         callFinalizeCallbacks(&fop, JSFINALIZE_GROUP_END);
    5412             :     }
    5413             : 
    5414             :     /* Update the GC state for zones we have swept. */
    5415           0 :     for (GCSweepGroupIter zone(rt); !zone.done(); zone.next()) {
    5416           0 :         MOZ_ASSERT(zone->isGCSweeping());
    5417           0 :         AutoLockGC lock(rt);
    5418           0 :         zone->setGCState(Zone::Finished);
    5419           0 :         zone->threshold.updateAfterGC(zone->usage.gcBytes(), invocationKind, tunables,
    5420           0 :                                       schedulingState, lock);
    5421             :     }
    5422             : 
    5423             :     /* Start background thread to sweep zones if required. */
    5424           0 :     ZoneList zones;
    5425           0 :     for (GCSweepGroupIter zone(rt); !zone.done(); zone.next())
    5426           0 :         zones.append(zone);
    5427           0 :     if (sweepOnBackgroundThread)
    5428           0 :         queueZonesForBackgroundSweep(zones);
    5429             :     else
    5430           0 :         sweepBackgroundThings(zones, blocksToFreeAfterSweeping.ref());
    5431             : 
    5432             :     /* Reset the list of arenas marked as being allocated during sweep phase. */
    5433           0 :     while (Arena* arena = arenasAllocatedDuringSweep) {
    5434           0 :         arenasAllocatedDuringSweep = arena->getNextAllocDuringSweep();
    5435           0 :         arena->unsetAllocDuringSweep();
    5436           0 :     }
    5437           0 : }
    5438             : 
    5439             : void
    5440           0 : GCRuntime::beginSweepPhase(JS::gcreason::Reason reason, AutoLockForExclusiveAccess& lock)
    5441             : {
    5442             :     /*
    5443             :      * Sweep phase.
    5444             :      *
    5445             :      * Finalize as we sweep, outside of lock but with CurrentThreadIsHeapBusy()
    5446             :      * true so that any attempt to allocate a GC-thing from a finalizer will
    5447             :      * fail, rather than nest badly and leave the unmarked newborn to be swept.
    5448             :      */
    5449             : 
    5450           0 :     MOZ_ASSERT(!abortSweepAfterCurrentGroup);
    5451             : 
    5452           0 :     AutoSetThreadIsSweeping threadIsSweeping;
    5453             : 
    5454           0 :     releaseHeldRelocatedArenas();
    5455             : 
    5456           0 :     computeNonIncrementalMarkingForValidation(lock);
    5457             : 
    5458           0 :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP);
    5459             : 
    5460             :     sweepOnBackgroundThread =
    5461           0 :         reason != JS::gcreason::DESTROY_RUNTIME && !TraceEnabled() && CanUseExtraThreads();
    5462             : 
    5463           0 :     releaseObservedTypes = shouldReleaseObservedTypes();
    5464             : 
    5465           0 :     AssertNoWrappersInGrayList(rt);
    5466           0 :     DropStringWrappers(rt);
    5467             : 
    5468           0 :     groupZonesForSweeping(reason, lock);
    5469           0 :     endMarkingSweepGroup();
    5470           0 :     beginSweepingSweepGroup();
    5471           0 : }
    5472             : 
    5473             : bool
    5474           0 : ArenaLists::foregroundFinalize(FreeOp* fop, AllocKind thingKind, SliceBudget& sliceBudget,
    5475             :                                SortedArenaList& sweepList)
    5476             : {
    5477           0 :     MOZ_ASSERT_IF(IsObjectAllocKind(thingKind), savedObjectArenas(thingKind).isEmpty());
    5478             : 
    5479           0 :     if (!arenaListsToSweep(thingKind) && incrementalSweptArenas.ref().isEmpty())
    5480           0 :         return true;
    5481             : 
    5482           0 :     KeepArenasEnum keepArenas = IsObjectAllocKind(thingKind) ? KEEP_ARENAS : RELEASE_ARENAS;
    5483           0 :     if (!FinalizeArenas(fop, &arenaListsToSweep(thingKind), sweepList,
    5484             :                         thingKind, sliceBudget, keepArenas))
    5485             :     {
    5486           0 :         incrementalSweptArenaKind = thingKind;
    5487           0 :         incrementalSweptArenas = sweepList.toArenaList();
    5488           0 :         return false;
    5489             :     }
    5490             : 
    5491             :     // Clear any previous incremental sweep state we may have saved.
    5492           0 :     incrementalSweptArenas.ref().clear();
    5493             : 
    5494           0 :     if (IsObjectAllocKind(thingKind)) {
    5495             :         // Delay releasing of object arenas until types have been swept.
    5496           0 :         sweepList.extractEmpty(&savedEmptyObjectArenas.ref());
    5497           0 :         savedObjectArenas(thingKind) = sweepList.toArenaList();
    5498             :     } else {
    5499             :         // Join |arenaLists[thingKind]| and |sweepList| into a single list.
    5500           0 :         ArenaList finalized = sweepList.toArenaList();
    5501           0 :         arenaLists(thingKind) =
    5502           0 :             finalized.insertListWithCursorAtEnd(arenaLists(thingKind));
    5503             :     }
    5504             : 
    5505           0 :     return true;
    5506             : }
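foregroundFinalize is resumable: if FinalizeArenas runs over budget, the partially built sweep list is stashed (incrementalSweptArenas) and the next slice picks it back up. A standalone sketch of that save-and-resume shape, with hypothetical Budget and IncrementalSweeper types:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-in for SliceBudget: a fixed number of work steps.
struct Budget {
    size_t steps;
    bool over() const { return steps == 0; }
    void step() { if (steps) steps--; }
};

struct IncrementalSweeper {
    std::vector<int> pending;    // analogous to arenaListsToSweep
    std::vector<int> finalized;  // analogous to incrementalSweptArenas

    bool sweepSome(Budget& budget) {
        while (!pending.empty()) {
            if (budget.over())
                return false;    // NotFinished: saved state resumes later
            finalized.push_back(pending.back());
            pending.pop_back();
            budget.step();
        }
        return true;             // Finished
    }
};

int main() {
    IncrementalSweeper s{{1, 2, 3, 4, 5}, {}};
    Budget slice{2};
    while (!s.sweepSome(slice))
        slice = Budget{2};       // a new slice resumes where the last stopped
}
```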
    5507             : 
    5508             : IncrementalProgress
    5509           3 : GCRuntime::drainMarkStack(SliceBudget& sliceBudget, gcstats::PhaseKind phase)
    5510             : {
    5511             :     /* Run a marking slice and return whether the stack is now empty. */
    5512           6 :     gcstats::AutoPhase ap(stats(), phase);
    5513           6 :     return marker.drainMarkStack(sliceBudget) ? Finished : NotFinished;
    5514             : }
    5515             : 
    5516             : static void
    5517           0 : SweepThing(Shape* shape)
    5518             : {
    5519           0 :     if (!shape->isMarkedAny())
    5520           0 :         shape->sweep();
    5521           0 : }
    5522             : 
    5523             : static void
    5524           0 : SweepThing(JSScript* script, AutoClearTypeInferenceStateOnOOM* oom)
    5525             : {
    5526           0 :     script->maybeSweepTypes(oom);
    5527           0 : }
    5528             : 
    5529             : static void
    5530           0 : SweepThing(ObjectGroup* group, AutoClearTypeInferenceStateOnOOM* oom)
    5531             : {
    5532           0 :     group->maybeSweep(oom);
    5533           0 : }
    5534             : 
    5535             : template <typename T, typename... Args>
    5536             : static bool
    5537           0 : SweepArenaList(Arena** arenasToSweep, SliceBudget& sliceBudget, Args... args)
    5538             : {
    5539           0 :     while (Arena* arena = *arenasToSweep) {
    5540           0 :         for (ArenaCellIterUnderGC i(arena); !i.done(); i.next())
    5541           0 :             SweepThing(i.get<T>(), args...);
    5542             : 
    5543           0 :         *arenasToSweep = (*arenasToSweep)->next;
    5544           0 :         AllocKind kind = MapTypeToFinalizeKind<T>::kind;
    5545           0 :         sliceBudget.step(Arena::thingsPerArena(kind));
    5546           0 :         if (sliceBudget.isOverBudget())
    5547           0 :             return false;
    5548             :     }
    5549             : 
    5550           0 :     return true;
    5551             : }
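SweepArenaList is a single templated driver: overload resolution on SweepThing picks the per-type sweeper, and the parameter pack forwards extras such as the OOM guard. A compilable sketch of just that dispatch, with hypothetical types and the budget logic omitted:

```cpp
#include <cstdio>
#include <vector>

// Hypothetical GC-thing types and OOM guard.
struct Shape { bool marked; };
struct Script { bool marked; };
struct OOMGuard {};

static void SweepThing(Shape* s) { if (!s->marked) std::puts("sweep shape"); }
static void SweepThing(Script* s, OOMGuard*) { if (!s->marked) std::puts("sweep script"); }

// One driver for every type: overload resolution picks the right sweeper and
// the pack forwards any extra per-type arguments.
template <typename T, typename... Args>
static void SweepList(std::vector<T*>& things, Args... args) {
    for (T* t : things)
        SweepThing(t, args...);
}

int main() {
    Shape a{false};
    Script b{false};
    std::vector<Shape*> shapes{&a};
    std::vector<Script*> scripts{&b};
    SweepList(shapes);
    OOMGuard oom;
    SweepList(scripts, &oom);
}
```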
    5552             : 
    5553             : /* static */ IncrementalProgress
    5554           0 : GCRuntime::sweepTypeInformation(GCRuntime* gc, FreeOp* fop, Zone* zone, SliceBudget& budget,
    5555             :                                 AllocKind kind)
    5556             : {
    5557             :     // Sweep dead type information stored in scripts and object groups, but
    5558             :     // don't finalize them yet. We have to sweep dead information from both live
    5559             :     // and dead scripts and object groups, so that no dead references remain in
    5560             :     // them. Type inference can end up crawling these zones again, such as for
    5561             :     // TypeCompartment::markSetsUnknown, and if this happens after sweeping for
    5562             :     // the sweep group has finished, we won't be able to determine which things
    5563             :     // in the zone are live.
    5564             : 
    5565           0 :     MOZ_ASSERT(kind == AllocKind::LIMIT);
    5566             : 
    5567           0 :     gcstats::AutoPhase ap1(gc->stats(), gcstats::PhaseKind::SWEEP_COMPARTMENTS);
    5568           0 :     gcstats::AutoPhase ap2(gc->stats(), gcstats::PhaseKind::SWEEP_TYPES);
    5569             : 
    5570           0 :     ArenaLists& al = zone->arenas;
    5571             : 
    5572           0 :     AutoClearTypeInferenceStateOnOOM oom(zone);
    5573             : 
    5574           0 :     if (!SweepArenaList<JSScript>(&al.gcScriptArenasToUpdate.ref(), budget, &oom))
    5575           0 :         return NotFinished;
    5576             : 
    5577           0 :     if (!SweepArenaList<ObjectGroup>(&al.gcObjectGroupArenasToUpdate.ref(), budget, &oom))
    5578           0 :         return NotFinished;
    5579             : 
    5580             :     // Finish sweeping type information in the zone.
    5581             :     {
    5582           0 :         gcstats::AutoPhase ap(gc->stats(), gcstats::PhaseKind::SWEEP_TYPES_END);
    5583           0 :         zone->types.endSweep(gc->rt);
    5584             :     }
    5585             : 
    5586           0 :     return Finished;
    5587             : }
    5588             : 
    5589             : /* static */ IncrementalProgress
    5590           0 : GCRuntime::mergeSweptObjectArenas(GCRuntime* gc, FreeOp* fop, Zone* zone, SliceBudget& budget,
    5591             :                                   AllocKind kind)
    5592             : {
    5593             :     // Foreground finalized objects have already been finalized, and now their
    5594             :     // arenas can be reclaimed by freeing empty ones and making non-empty ones
    5595             :     // available for allocation.
    5596             : 
    5597           0 :     MOZ_ASSERT(kind == AllocKind::LIMIT);
    5598           0 :     zone->arenas.mergeForegroundSweptObjectArenas();
    5599           0 :     return Finished;
    5600             : }
    5601             : 
    5602             : void
    5603           0 : GCRuntime::startSweepingAtomsTable()
    5604             : {
    5605           0 :     auto& maybeAtoms = maybeAtomsToSweep.ref();
    5606           0 :     MOZ_ASSERT(maybeAtoms.isNothing());
    5607             : 
    5608           0 :     AtomSet* atomsTable = rt->atomsForSweeping();
    5609           0 :     if (!atomsTable)
    5610           0 :         return;
    5611             : 
    5612             :     // Create a secondary table to hold new atoms added while we're sweeping
    5613             :     // the main table incrementally.
    5614           0 :     if (!rt->createAtomsAddedWhileSweepingTable()) {
    5615           0 :         atomsTable->sweep();
    5616           0 :         return;
    5617             :     }
    5618             : 
    5619             :     // Initialize remaining atoms to sweep.
    5620           0 :     maybeAtoms.emplace(*atomsTable);
    5621             : }
    5622             : 
    5623             : /* static */ IncrementalProgress
    5624           0 : GCRuntime::sweepAtomsTable(GCRuntime* gc, SliceBudget& budget)
    5625             : {
    5626           0 :     if (!gc->atomsZone->isGCSweeping())
    5627           0 :         return Finished;
    5628             : 
    5629           0 :     return gc->sweepAtomsTable(budget);
    5630             : }
    5631             : 
    5633             : IncrementalProgress
    5634           0 : GCRuntime::sweepAtomsTable(SliceBudget& budget)
    5635             : {
    5636           0 :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP_ATOMS_TABLE);
    5637             : 
    5638           0 :     auto& maybeAtoms = maybeAtomsToSweep.ref();
    5639           0 :     if (!maybeAtoms)
    5640           0 :         return Finished;
    5641             : 
    5642           0 :     MOZ_ASSERT(rt->atomsAddedWhileSweeping());
    5643             : 
    5644             :     // Sweep the table incrementally until we run out of work or budget.
    5645           0 :     auto& atomsToSweep = *maybeAtoms;
    5646           0 :     while (!atomsToSweep.empty()) {
    5647           0 :         budget.step();
    5648           0 :         if (budget.isOverBudget())
    5649           0 :             return NotFinished;
    5650             : 
    5651           0 :         JSAtom* atom = atomsToSweep.front().asPtrUnbarriered();
    5652           0 :         if (IsAboutToBeFinalizedUnbarriered(&atom))
    5653           0 :             atomsToSweep.removeFront();
    5654           0 :         atomsToSweep.popFront();
    5655             :     }
    5656             : 
    5657             :     // Add any new atoms from the secondary table.
    5658           0 :     AutoEnterOOMUnsafeRegion oomUnsafe;
    5659           0 :     AtomSet* atomsTable = rt->atomsForSweeping();
    5660           0 :     MOZ_ASSERT(atomsTable);
    5661           0 :     for (auto r = rt->atomsAddedWhileSweeping()->all(); !r.empty(); r.popFront()) {
    5662           0 :         if (!atomsTable->putNew(AtomHasher::Lookup(r.front().asPtrUnbarriered()), r.front()))
    5663           0 :             oomUnsafe.crash("Adding atom from secondary table after sweep");
    5664             :     }
    5665           0 :     rt->destroyAtomsAddedWhileSweepingTable();
    5666             : 
    5667           0 :     maybeAtoms.reset();
    5668           0 :     return Finished;
    5669             : }
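The two methods above implement incremental sweeping of a table that can grow mid-sweep: atoms created while sweeping is in progress go into a secondary table, which is merged into the main one once the sweep completes. A self-contained sketch of that two-table scheme using std::unordered_set; all names are hypothetical, and the real sweep loop is spread across budgeted slices:

```cpp
#include <iterator>
#include <string>
#include <unordered_set>

// Main table being swept, plus a side table for entries added mid-sweep.
struct AtomTables {
    std::unordered_set<std::string> main;
    std::unordered_set<std::string> addedWhileSweeping;
    bool sweeping = false;

    void add(const std::string& s) {
        (sweeping ? addedWhileSweeping : main).insert(s);
    }

    void sweep(bool (*isDead)(const std::string&)) {
        // In the engine this loop runs incrementally under a budget.
        for (auto it = main.begin(); it != main.end();)
            it = isDead(*it) ? main.erase(it) : std::next(it);
        // Merge entries created during the sweep back into the main table,
        // mirroring the putNew loop over atomsAddedWhileSweeping().
        main.merge(addedWhileSweeping);
        addedWhileSweeping.clear();
        sweeping = false;
    }
};

int main() {
    AtomTables t;
    t.add("live");
    t.add("dead");
    t.sweeping = true;            // an incremental sweep is now in progress...
    t.add("created-during-gc");   // ...so this lands in the secondary table
    t.sweep([](const std::string& s) { return s == "dead"; });
    return t.main.size() == 2 ? 0 : 1;   // "live" and "created-during-gc"
}
```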
    5670             : 
    5671             : class js::gc::WeakCacheSweepIterator
    5672             : {
    5673             :     JS::Zone*& sweepZone;
    5674             :     JS::detail::WeakCacheBase*& sweepCache;
    5675             : 
    5676             :   public:
    5677           0 :     explicit WeakCacheSweepIterator(GCRuntime* gc)
    5678           0 :       : sweepZone(gc->sweepZone.ref()), sweepCache(gc->sweepCache.ref())
    5679             :     {
    5680             :         // Initialize state when we start sweeping a sweep group.
    5681           0 :         if (!sweepZone) {
    5682           0 :             sweepZone = gc->currentSweepGroup;
    5683           0 :             MOZ_ASSERT(!sweepCache);
    5684           0 :             sweepCache = sweepZone->weakCaches().getFirst();
    5685           0 :             settle();
    5686             :         }
    5687             : 
    5688           0 :         checkState();
    5689           0 :     }
    5690             : 
    5691           0 :     bool empty(AutoLockHelperThreadState& lock) {
    5692           0 :         return !sweepZone;
    5693             :     }
    5694             : 
    5695           0 :     JS::detail::WeakCacheBase* next(AutoLockHelperThreadState& lock) {
    5696           0 :         if (empty(lock))
    5697           0 :             return nullptr;
    5698             : 
    5699           0 :         JS::detail::WeakCacheBase* result = sweepCache;
    5700           0 :         sweepCache = sweepCache->getNext();
    5701           0 :         settle();
    5702           0 :         checkState();
    5703           0 :         return result;
    5704             :     }
    5705             : 
    5706           0 :     void settle() {
    5707           0 :         while (sweepZone) {
    5708           0 :             while (sweepCache && !sweepCache->needsIncrementalBarrier())
    5709           0 :                 sweepCache = sweepCache->getNext();
    5710             : 
    5711           0 :             if (sweepCache)
    5712           0 :                 break;
    5713             : 
    5714           0 :             sweepZone = sweepZone->nextNodeInGroup();
    5715           0 :             if (sweepZone)
    5716           0 :                 sweepCache = sweepZone->weakCaches().getFirst();
    5717             :         }
    5718           0 :     }
    5719             : 
    5720             :   private:
    5721           0 :     void checkState() {
    5722           0 :         MOZ_ASSERT((!sweepZone && !sweepCache) ||
    5723             :                    (sweepCache && sweepCache->needsIncrementalBarrier()));
    5724           0 :     }
    5725             : };
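WeakCacheSweepIterator is a two-level cursor whose settle() skips caches that don't need the incremental barrier and advances to the next zone when the current one is exhausted. The same shape in standalone form, index-based rather than intrusive-list-based, with hypothetical Zone/Cache types:

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

struct Cache { bool needsSweep; };
struct Zone { std::vector<Cache> caches; };

class CacheIter {
    std::vector<Zone>& zones_;
    size_t zone_ = 0, cache_ = 0;

    // Position the cursor on the next cache needing work, if any.
    void settle() {
        while (zone_ < zones_.size()) {
            auto& caches = zones_[zone_].caches;
            while (cache_ < caches.size() && !caches[cache_].needsSweep)
                cache_++;
            if (cache_ < caches.size())
                return;            // positioned on a cache that needs work
            zone_++;               // exhausted this zone; move to the next
            cache_ = 0;
        }
    }

  public:
    explicit CacheIter(std::vector<Zone>& zones) : zones_(zones) { settle(); }
    bool done() const { return zone_ >= zones_.size(); }
    Cache* next() {
        if (done())
            return nullptr;
        Cache* result = &zones_[zone_].caches[cache_++];
        settle();
        return result;
    }
};

int main() {
    std::vector<Zone> zones = { Zone{{{false}, {true}}}, Zone{{{true}}} };
    CacheIter iter(zones);
    while (Cache* c = iter.next())
        std::printf("sweeping cache %p\n", (void*)c);
}
```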
    5726             : 
    5727             : class IncrementalSweepWeakCacheTask : public GCParallelTask
    5728             : {
    5729             :     WeakCacheSweepIterator& work_;
    5730             :     SliceBudget& budget_;
    5731             :     AutoLockHelperThreadState& lock_;
    5732             :     JS::detail::WeakCacheBase* cache_;
    5733             : 
    5734             :   public:
    5735           0 :     IncrementalSweepWeakCacheTask(JSRuntime* rt, WeakCacheSweepIterator& work, SliceBudget& budget,
    5736             :                                   AutoLockHelperThreadState& lock)
    5737           0 :       : GCParallelTask(rt), work_(work), budget_(budget), lock_(lock),
    5738           0 :         cache_(work.next(lock))
    5739             :     {
    5740           0 :         MOZ_ASSERT(cache_);
    5741           0 :         runtime()->gc.startTask(*this, gcstats::PhaseKind::SWEEP_WEAK_CACHES, lock_);
    5742           0 :     }
    5743             : 
    5744           0 :     ~IncrementalSweepWeakCacheTask() {
    5745           0 :         runtime()->gc.joinTask(*this, gcstats::PhaseKind::SWEEP_WEAK_CACHES, lock_);
    5746           0 :     }
    5747             : 
    5748             :   private:
    5749           0 :     void run() override {
    5750           0 :         do {
    5751           0 :             MOZ_ASSERT(cache_->needsIncrementalBarrier());
    5752           0 :             size_t steps = cache_->sweep();
    5753           0 :             cache_->setNeedsIncrementalBarrier(false);
    5754             : 
    5755           0 :             AutoLockHelperThreadState lock;
    5756           0 :             budget_.step(steps);
    5757           0 :             if (budget_.isOverBudget())
    5758           0 :                 break;
    5759             : 
    5760           0 :             cache_ = work_.next(lock);
    5761           0 :         } while (cache_);
    5762           0 :     }
    5763             : };
    5764             : 
    5765             : /* static */ IncrementalProgress
    5766           0 : GCRuntime::sweepWeakCaches(GCRuntime* gc, SliceBudget& budget)
    5767             : {
    5768           0 :     return gc->sweepWeakCaches(budget);
    5769             : }
    5770             : 
    5771             : static const size_t MaxWeakCacheSweepTasks = 8;
    5772             : 
    5773             : static size_t
    5774           0 : WeakCacheSweepTaskCount()
    5775             : {
    5776           0 :     size_t targetTaskCount = HelperThreadState().cpuCount;
    5777           0 :     return Min(targetTaskCount, MaxWeakCacheSweepTasks);
    5778             : }
    5779             : 
    5780             : IncrementalProgress
    5781           0 : GCRuntime::sweepWeakCaches(SliceBudget& budget)
    5782             : {
    5783           0 :     WeakCacheSweepIterator work(this);
    5784             : 
    5785             :     {
    5786           0 :         AutoLockHelperThreadState lock;
    5787           0 :         gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP_COMPARTMENTS);
    5788             : 
    5789           0 :         Maybe<IncrementalSweepWeakCacheTask> tasks[MaxWeakCacheSweepTasks];
    5790           0 :         for (size_t i = 0; !work.empty(lock) && i < WeakCacheSweepTaskCount(); i++)
    5791           0 :             tasks[i].emplace(rt, work, budget, lock);
    5792             : 
    5793             :         // Tasks run until budget or work is exhausted.
    5794             :     }
    5795             : 
    5796           0 :     AutoLockHelperThreadState lock;
    5797           0 :     return work.empty(lock) ? Finished : NotFinished;
    5798             : }
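Note how one SliceBudget is shared by all IncrementalSweepWeakCacheTask instances: each worker sweeps outside the lock and takes the lock only to charge the budget and fetch more work, stopping once the slice is exhausted. A simplified standalone model in which the budget is charged when an item is taken; all names are hypothetical:

```cpp
#include <cstddef>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

struct SharedState {
    std::mutex lock;
    std::vector<int> work;   // items still needing a sweep
    size_t budget;           // steps remaining in this slice, shared by all
};

static void worker(SharedState& s) {
    for (;;) {
        int item;
        {
            std::lock_guard<std::mutex> guard(s.lock);
            if (s.work.empty() || s.budget == 0)
                return;       // out of work, or this slice is over budget
            item = s.work.back();
            s.work.pop_back();
            s.budget--;       // charge the shared budget under the lock
        }
        (void)item;           // sweep `item` here, outside the lock
    }
}

int main() {
    SharedState s{{}, {1, 2, 3, 4, 5, 6}, 4};
    std::thread a(worker, std::ref(s)), b(worker, std::ref(s));
    a.join();
    b.join();
    // Leftover items in s.work mean NotFinished: resume on the next slice.
}
```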
    5799             : 
    5800             : /* static */ IncrementalProgress
    5801           0 : GCRuntime::finalizeAllocKind(GCRuntime* gc, FreeOp* fop, Zone* zone, SliceBudget& budget,
    5802             :                              AllocKind kind)
    5803             : {
    5804             :     // Set the number of things per arena for this AllocKind.
    5805           0 :     size_t thingsPerArena = Arena::thingsPerArena(kind);
    5806           0 :     auto& sweepList = gc->incrementalSweepList.ref();
    5807           0 :     sweepList.setThingsPerArena(thingsPerArena);
    5808             : 
    5809           0 :     if (!zone->arenas.foregroundFinalize(fop, kind, budget, sweepList))
    5810           0 :         return NotFinished;
    5811             : 
    5812             :     // Reset the slots of the sweep list that we used.
    5813           0 :     sweepList.reset(thingsPerArena);
    5814             : 
    5815           0 :     return Finished;
    5816             : }
    5817             : 
    5818             : /* static */ IncrementalProgress
    5819           0 : GCRuntime::sweepShapeTree(GCRuntime* gc, FreeOp* fop, Zone* zone, SliceBudget& budget,
    5820             :                           AllocKind kind)
    5821             : {
    5822             :     // Remove dead shapes from the shape tree, but don't finalize them yet.
    5823             : 
    5824           0 :     MOZ_ASSERT(kind == AllocKind::LIMIT);
    5825             : 
    5826           0 :     gcstats::AutoPhase ap(gc->stats(), gcstats::PhaseKind::SWEEP_SHAPE);
    5827             : 
    5828           0 :     ArenaLists& al = zone->arenas;
    5829             : 
    5830           0 :     if (!SweepArenaList<Shape>(&al.gcShapeArenasToUpdate.ref(), budget))
    5831           0 :         return NotFinished;
    5832             : 
    5833           0 :     if (!SweepArenaList<AccessorShape>(&al.gcAccessorShapeArenasToUpdate.ref(), budget))
    5834           0 :         return NotFinished;
    5835             : 
    5836           0 :     return Finished;
    5837             : }
    5838             : 
    5839             : static void
    5840           6 : AddPerSweepGroupSweepAction(bool* ok, PerSweepGroupSweepAction action)
    5841             : {
    5842           6 :     if (*ok)
    5843           6 :         *ok = PerSweepGroupSweepActions.emplaceBack(action);
    5844           6 : }
    5845             : 
    5846             : static void
    5847          15 : AddPerZoneSweepPhase(bool* ok)
    5848             : {
    5849          15 :     if (*ok)
    5850          15 :         *ok = PerZoneSweepPhases.emplaceBack();
    5851          15 : }
    5852             : 
    5853             : static void
    5854          33 : AddPerZoneSweepAction(bool* ok, PerZoneSweepAction::Func func, AllocKind kind = AllocKind::LIMIT)
    5855             : {
    5856          33 :     if (*ok)
    5857          33 :         *ok = PerZoneSweepPhases.back().emplaceBack(func, kind);
    5858          33 : }
    5859             : 
    5860             : /* static */ bool
    5861           3 : GCRuntime::initializeSweepActions()
    5862             : {
    5863           3 :     bool ok = true;
    5864             : 
    5865           3 :     AddPerSweepGroupSweepAction(&ok, GCRuntime::sweepAtomsTable);
    5866           3 :     AddPerSweepGroupSweepAction(&ok, GCRuntime::sweepWeakCaches);
    5867             : 
    5868           3 :     AddPerZoneSweepPhase(&ok);
    5869          21 :     for (auto kind : ForegroundObjectFinalizePhase.kinds)
    5870          18 :         AddPerZoneSweepAction(&ok, GCRuntime::finalizeAllocKind, kind);
    5871             : 
    5872           3 :     AddPerZoneSweepPhase(&ok);
    5873           3 :     AddPerZoneSweepAction(&ok, GCRuntime::sweepTypeInformation);
    5874           3 :     AddPerZoneSweepAction(&ok, GCRuntime::mergeSweptObjectArenas);
    5875             : 
    5876           9 :     for (const auto& finalizePhase : IncrementalFinalizePhases) {
    5877           6 :         AddPerZoneSweepPhase(&ok);
    5878          12 :         for (auto kind : finalizePhase.kinds)
    5879           6 :             AddPerZoneSweepAction(&ok, GCRuntime::finalizeAllocKind, kind);
    5880             :     }
    5881             : 
    5882           3 :     AddPerZoneSweepPhase(&ok);
    5883           3 :     AddPerZoneSweepAction(&ok, GCRuntime::sweepShapeTree);
    5884             : 
    5885           3 :     return ok;
    5886             : }
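The Add* helpers above thread a single `ok` flag through the whole table-building sequence: once any append fails, the remaining calls become no-ops and the caller checks the flag once at the end. A standalone sketch of that error-threading idiom, using exceptions to stand in for the fallible emplaceBack (names hypothetical):

```cpp
#include <new>
#include <vector>

using Action = void (*)();

// A no-op once any earlier append has failed; the caller checks `ok` once.
static void AddAction(bool* ok, std::vector<Action>& table, Action action) {
    if (!*ok)
        return;
    try {
        table.push_back(action);
    } catch (const std::bad_alloc&) {
        *ok = false;   // models the fallible emplaceBack returning false
    }
}

static void sweepAtoms() {}
static void sweepCaches() {}

int main() {
    std::vector<Action> actions;
    bool ok = true;
    AddAction(&ok, actions, sweepAtoms);
    AddAction(&ok, actions, sweepCaches);
    return ok ? 0 : 1;
}
```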
    5887             : 
    5888             : static inline SweepActionList
    5889           0 : NextSweepActionList(SweepActionList list)
    5890             : {
    5891           0 :     MOZ_ASSERT(list < SweepActionListCount);
    5892           0 :     return SweepActionList(unsigned(list) + 1);
    5893             : }
    5894             : 
    5895             : IncrementalProgress
    5896           0 : GCRuntime::performSweepActions(SliceBudget& budget, AutoLockForExclusiveAccess& lock)
    5897             : {
    5898           0 :     AutoSetThreadIsSweeping threadIsSweeping;
    5899             : 
    5900           0 :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP);
    5901           0 :     FreeOp fop(rt);
    5902             : 
    5903           0 :     if (drainMarkStack(budget, gcstats::PhaseKind::SWEEP_MARK) == NotFinished)
    5904           0 :         return NotFinished;
    5905             : 
    5906             :     for (;;) {
    5907           0 :         for (; sweepActionList < SweepActionListCount;
    5908           0 :              sweepActionList = NextSweepActionList(sweepActionList))
    5909             :         {
    5910           0 :             switch (sweepActionList) {
    5911             :               case PerSweepGroupActionList: {
    5912           0 :                 const auto& actions = PerSweepGroupSweepActions;
    5913           0 :                 for (; sweepActionIndex < actions.length(); sweepActionIndex++) {
    5914           0 :                     auto action = actions[sweepActionIndex];
    5915           0 :                     if (action(this, budget) == NotFinished)
    5916           0 :                         return NotFinished;
    5917             :                 }
    5918           0 :                 sweepActionIndex = 0;
    5919           0 :                 break;
    5920             :               }
    5921             : 
    5922             :               case PerZoneActionList:
    5923           0 :                 for (; sweepPhaseIndex < PerZoneSweepPhases.length(); sweepPhaseIndex++) {
    5924           0 :                     const auto& actions = PerZoneSweepPhases[sweepPhaseIndex];
    5925           0 :                     if (!sweepZone)
    5926           0 :                         sweepZone = currentSweepGroup;
    5927           0 :                     for (; sweepZone; sweepZone = sweepZone->nextNodeInGroup()) {
    5928           0 :                         for (; sweepActionIndex < actions.length(); sweepActionIndex++) {
    5929           0 :                             const auto& action = actions[sweepActionIndex];
    5930           0 :                             if (action.func(this, &fop, sweepZone, budget, action.kind) == NotFinished)
    5931           0 :                                 return NotFinished;
    5932             :                         }
    5933           0 :                         sweepActionIndex = 0;
    5934             :                     }
    5935           0 :                     sweepZone = nullptr;
    5936             :                 }
    5937           0 :                 sweepPhaseIndex = 0;
    5938           0 :                 break;
    5939             : 
    5940             :               default:
    5941           0 :                 MOZ_CRASH("Unexpected sweepActionList value");
    5942             :             }
    5943             :         }
    5944           0 :         sweepActionList = PerSweepGroupActionList;
    5945             : 
    5946           0 :         endSweepingSweepGroup();
    5947           0 :         getNextSweepGroup();
    5948           0 :         if (!currentSweepGroup)
    5949           0 :             return Finished;
    5950             : 
    5951           0 :         endMarkingSweepGroup();
    5952           0 :         beginSweepingSweepGroup();
    5953           0 :     }
    5954             : }
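performSweepActions can bail out of deeply nested loops at any point because the loop cursors (sweepActionList, sweepPhaseIndex, sweepActionIndex, sweepZone) are members rather than stack variables: returning NotFinished leaves them in place, the next slice re-enters exactly where it stopped, and each index is reset only after its level completes. A compact standalone model of that resumable-nested-loop technique (all names hypothetical):

```cpp
#include <cstddef>

// Loop indices live in the object, not on the stack, so a slice can return
// mid-iteration and a later slice resumes at the same position.
struct SweepState {
    size_t phase = 0, zone = 0, action = 0;

    bool runSlice(size_t nPhases, size_t nZones, size_t nActions, size_t& budget) {
        for (; phase < nPhases; phase++) {
            for (; zone < nZones; zone++) {
                for (; action < nActions; action++) {
                    if (budget == 0)
                        return false;   // NotFinished; indices stay put
                    budget--;           // do one unit of sweeping
                }
                action = 0;             // reset only after completing a level
            }
            zone = 0;
        }
        return true;                    // Finished
    }
};

int main() {
    SweepState state;
    size_t slices = 0;
    for (;;) {
        size_t budget = 5;              // fresh budget each slice
        slices++;
        if (state.runSlice(2, 3, 4, budget))
            break;
    }
    return slices == 5 ? 0 : 1;         // 2*3*4 = 24 units at 5 per slice
}
```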
    5955             : 
    5956             : bool
    5957           0 : GCRuntime::allCCVisibleZonesWereCollected() const
    5958             : {
    5959             :     // Calculate whether the gray marking state is now valid.
    5960             :     //
    5961             :     // The gray bits change from invalid to valid if we finished a full GC from
    5962             :     // the point of view of the cycle collector. We ignore the following:
    5963             :     //
    5964             :     //  - Helper thread zones, as these are not reachable from the main heap.
    5965             :     //  - The atoms zone, since strings and symbols are never marked gray.
    5966             :     //  - Empty zones.
    5967             :     //
    5968             :     // These exceptions ensure that when the CC requests a full GC the gray mark
    5969             :     // state ends up valid even if we don't collect all of the zones.
    5970             : 
    5971           0 :     if (isFull)
    5972           0 :         return true;
    5973             : 
    5974           0 :     for (ZonesIter zone(rt, SkipAtoms); !zone.done(); zone.next()) {
    5975           0 :         if (!zone->isCollecting() &&
    5976           0 :             !zone->usedByHelperThread() &&
    5977           0 :             !zone->arenas.arenaListsAreEmpty())
    5978             :         {
    5979           0 :             return false;
    5980             :         }
    5981             :     }
    5982             : 
    5983           0 :     return true;
    5984             : }
    5985             : 
    5986             : void
    5987           0 : GCRuntime::endSweepPhase(bool destroyingRuntime, AutoLockForExclusiveAccess& lock)
    5988             : {
    5989           0 :     AutoSetThreadIsSweeping threadIsSweeping;
    5990             : 
    5991           0 :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP);
    5992           0 :     FreeOp fop(rt);
    5993             : 
    5994           0 :     MOZ_ASSERT_IF(destroyingRuntime, !sweepOnBackgroundThread);
    5995             : 
    5996             :     /*
    5997             :      * Recalculate whether the GC was full or not, as this may have changed due
    5998             :      * to newly created zones. It can only change from full to not full.
    5999             :      */
    6000           0 :     if (isFull) {
    6001           0 :         for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    6002           0 :             if (!zone->isCollecting()) {
    6003           0 :                 isFull = false;
    6004           0 :                 break;
    6005             :             }
    6006             :         }
    6007             :     }
    6008             : 
    6009             :     {
    6010           0 :         gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::DESTROY);
    6011             : 
    6012             :         /*
    6013             :          * Sweep script filenames after sweeping functions in the generic loop
    6014             :          * above. In this way when a scripted function's finalizer destroys the
    6015             :          * script and calls rt->destroyScriptHook, the hook can still access the
    6016             :          * script's filename. See bug 323267.
    6017             :          */
    6018           0 :         SweepScriptData(rt, lock);
    6019             : 
    6020             :         /* Clear out any small pools that we're hanging on to. */
    6021           0 :         if (rt->hasJitRuntime()) {
    6022           0 :             rt->jitRuntime()->execAlloc().purge();
    6023           0 :             rt->jitRuntime()->backedgeExecAlloc().purge();
    6024             :         }
    6025             :     }
    6026             : 
    6027             :     {
    6028           0 :         gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::FINALIZE_END);
    6029           0 :         callFinalizeCallbacks(&fop, JSFINALIZE_COLLECTION_END);
    6030             : 
    6031           0 :         if (allCCVisibleZonesWereCollected())
    6032           0 :             grayBitsValid = true;
    6033             :     }
    6034             : 
    6035           0 :     finishMarkingValidation();
    6036             : 
    6037             : #ifdef DEBUG
    6038           0 :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    6039           0 :         for (auto i : AllAllocKinds()) {
    6040           0 :             MOZ_ASSERT_IF(!IsBackgroundFinalized(i) ||
    6041             :                           !sweepOnBackgroundThread,
    6042             :                           !zone->arenas.arenaListsToSweep(i));
    6043             :         }
    6044             :     }
    6045             : #endif
    6046             : 
    6047           0 :     AssertNoWrappersInGrayList(rt);
    6048           0 : }
    6049             : 
    6050             : void
    6051           0 : GCRuntime::beginCompactPhase()
    6052             : {
    6053           0 :     MOZ_ASSERT(!isBackgroundSweeping());
    6054             : 
    6055           0 :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::COMPACT);
    6056             : 
    6057           0 :     MOZ_ASSERT(zonesToMaybeCompact.ref().isEmpty());
    6058           0 :     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
    6059           0 :         if (CanRelocateZone(zone))
    6060           0 :             zonesToMaybeCompact.ref().append(zone);
    6061             :     }
    6062             : 
    6063           0 :     MOZ_ASSERT(!relocatedArenasToRelease);
    6064           0 :     startedCompacting = true;
    6065           0 : }
    6066             : 
    6067             : IncrementalProgress
    6068           0 : GCRuntime::compactPhase(JS::gcreason::Reason reason, SliceBudget& sliceBudget,
    6069             :                         AutoLockForExclusiveAccess& lock)
    6070             : {
    6071           0 :     assertBackgroundSweepingFinished();
    6072           0 :     MOZ_ASSERT(startedCompacting);
    6073             : 
    6074           0 :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::COMPACT);
    6075             : 
    6076             :     // TODO: JSScripts can move. If the sampler interrupts the GC in the
    6077             :     // middle of relocating an arena, invalid JSScript pointers may be
    6078             :     // accessed. Suppress all sampling until a finer-grained solution can be
    6079             :     // found. See bug 1295775.
    6080           0 :     AutoSuppressProfilerSampling suppressSampling(TlsContext.get());
    6081             : 
    6082           0 :     ZoneList relocatedZones;
    6083           0 :     Arena* relocatedArenas = nullptr;
    6084           0 :     while (!zonesToMaybeCompact.ref().isEmpty()) {
    6085             : 
    6086           0 :         Zone* zone = zonesToMaybeCompact.ref().front();
    6087           0 :         zonesToMaybeCompact.ref().removeFront();
    6088             : 
    6089           0 :         MOZ_ASSERT(zone->group()->nursery().isEmpty());
    6090           0 :         MOZ_ASSERT(zone->isGCFinished());
    6091           0 :         zone->setGCState(Zone::Compact);
    6092             : 
    6093           0 :         if (relocateArenas(zone, reason, relocatedArenas, sliceBudget)) {
    6094           0 :             updateZonePointersToRelocatedCells(zone, lock);
    6095           0 :             relocatedZones.append(zone);
    6096             :         } else {
    6097           0 :             zone->setGCState(Zone::Finished);
    6098             :         }
    6099             : 
    6100           0 :         if (sliceBudget.isOverBudget())
    6101           0 :             break;
    6102             :     }
    6103             : 
    6104           0 :     if (!relocatedZones.isEmpty()) {
    6105           0 :         updateRuntimePointersToRelocatedCells(lock);
    6106             : 
    6107           0 :         do {
    6108           0 :             Zone* zone = relocatedZones.front();
    6109           0 :             relocatedZones.removeFront();
    6110           0 :             zone->setGCState(Zone::Finished);
    6111             :         }
    6112           0 :         while (!relocatedZones.isEmpty());
    6113             :     }
    6114             : 
    6115           0 :     if (ShouldProtectRelocatedArenas(reason))
    6116           0 :         protectAndHoldArenas(relocatedArenas);
    6117             :     else
    6118           0 :         releaseRelocatedArenas(relocatedArenas);
    6119             : 
    6120             :     // Clear caches that can contain cell pointers.
    6121           0 :     rt->caches().newObjectCache.purge();
    6122           0 :     rt->caches().nativeIterCache.purge();
    6123           0 :     if (rt->caches().evalCache.initialized())
    6124           0 :         rt->caches().evalCache.clear();
    6125             : 
    6126             : #ifdef DEBUG
    6127           0 :     CheckHashTablesAfterMovingGC(rt);
    6128             : #endif
    6129             : 
    6130           0 :     return zonesToMaybeCompact.ref().isEmpty() ? Finished : NotFinished;
    6131             : }
    6132             : 
    6133             : void
    6134           0 : GCRuntime::endCompactPhase(JS::gcreason::Reason reason)
    6135             : {
    6136           0 :     startedCompacting = false;
    6137           0 : }
    6138             : 
    6139             : void
    6140           0 : GCRuntime::finishCollection(JS::gcreason::Reason reason)
    6141             : {
    6142           0 :     assertBackgroundSweepingFinished();
    6143           0 :     MOZ_ASSERT(marker.isDrained());
    6144           0 :     marker.stop();
    6145           0 :     clearBufferedGrayRoots();
    6146           0 :     MemProfiler::SweepTenured(rt);
    6147             : 
    6148           0 :     uint64_t currentTime = PRMJ_Now();
    6149           0 :     schedulingState.updateHighFrequencyMode(lastGCTime, currentTime, tunables);
    6150             : 
    6151           0 :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    6152           0 :         if (zone->isCollecting()) {
    6153           0 :             MOZ_ASSERT(zone->isGCFinished());
    6154           0 :             zone->setGCState(Zone::NoGC);
    6155             :         }
    6156             : 
    6157           0 :         MOZ_ASSERT(!zone->isCollectingFromAnyThread());
    6158           0 :         MOZ_ASSERT(!zone->wasGCStarted());
    6159             :     }
    6160             : 
    6161           0 :     MOZ_ASSERT(zonesToMaybeCompact.ref().isEmpty());
    6162             : 
    6163           0 :     lastGCTime = currentTime;
    6164           0 : }
    6165             : 
    6166             : static const char*
    6167         270 : HeapStateToLabel(JS::HeapState heapState)
    6168             : {
    6169         270 :     switch (heapState) {
    6170             :       case JS::HeapState::MinorCollecting:
    6171          21 :         return "js::Nursery::collect";
    6172             :       case JS::HeapState::MajorCollecting:
    6173           3 :         return "js::GCRuntime::collect";
    6174             :       case JS::HeapState::Tracing:
    6175         246 :         return "JS_IterateCompartments";
    6176             :       case JS::HeapState::Idle:
    6177             :       case JS::HeapState::CycleCollecting:
    6178           0 :         MOZ_CRASH("Should never have an Idle or CC heap state when pushing GC pseudo frames!");
    6179             :     }
    6180           0 :     MOZ_ASSERT_UNREACHABLE("Should have exhausted every JS::HeapState variant!");
    6181             :     return nullptr;
    6182             : }
    6183             : 
    6184             : #ifdef DEBUG
    6185             : static bool
    6186           3 : AllNurseriesAreEmpty(JSRuntime* rt)
    6187             : {
    6188          36 :     for (ZoneGroupsIter group(rt); !group.done(); group.next()) {
    6189          33 :         if (!group->nursery().isEmpty())
    6190           0 :             return false;
    6191             :     }
    6192           3 :     return true;
    6193             : }
    6194             : #endif
    6195             : 
    6196             : /* Start a new heap session. */
    6197         270 : AutoTraceSession::AutoTraceSession(JSRuntime* rt, JS::HeapState heapState)
    6198             :   : lock(rt),
    6199             :     runtime(rt),
    6200         270 :     prevState(TlsContext.get()->heapState),
    6201         540 :     pseudoFrame(rt, HeapStateToLabel(heapState), ProfileEntry::Category::GC)
    6202             : {
    6203         270 :     MOZ_ASSERT(prevState == JS::HeapState::Idle);
    6204         270 :     MOZ_ASSERT(heapState != JS::HeapState::Idle);
    6205         270 :     MOZ_ASSERT_IF(heapState == JS::HeapState::MajorCollecting, AllNurseriesAreEmpty(rt));
    6206         270 :     TlsContext.get()->heapState = heapState;
    6207         270 : }
    6208             : 
    6209         540 : AutoTraceSession::~AutoTraceSession()
    6210             : {
    6211         270 :     MOZ_ASSERT(JS::CurrentThreadIsHeapBusy());
    6212         270 :     TlsContext.get()->heapState = prevState;
    6213         270 : }
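AutoTraceSession is a save/install/restore guard over the thread-local heap state, with an assertion that sessions do not nest. The essential pattern in standalone form; the enum and names below are hypothetical:

```cpp
#include <cassert>

enum class HeapState { Idle, Tracing, MajorCollecting };

thread_local HeapState heapState = HeapState::Idle;

// RAII guard: save the previous state, install the new one, restore on exit.
class AutoTraceSession {
    HeapState prev_;

  public:
    explicit AutoTraceSession(HeapState state) : prev_(heapState) {
        assert(prev_ == HeapState::Idle);   // sessions must not nest
        heapState = state;
    }
    ~AutoTraceSession() { heapState = prev_; }
};

int main() {
    {
        AutoTraceSession session(HeapState::MajorCollecting);
        assert(heapState == HeapState::MajorCollecting);
    }
    assert(heapState == HeapState::Idle);   // restored on scope exit
}
```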
    6214             : 
    6215             : JS_PUBLIC_API(JS::HeapState)
    6216     3642947 : JS::CurrentThreadHeapState()
    6217             : {
    6218     3642947 :     return TlsContext.get()->heapState;
    6219             : }
    6220             : 
    6221             : bool
    6222          93 : GCRuntime::canChangeActiveContext(JSContext* cx)
    6223             : {
    6224             :     // Threads cannot be in the middle of any operation that affects GC
    6225             :     // behavior when execution transfers to another thread for cooperative
    6226             :     // scheduling.
    6227         372 :     return cx->heapState == JS::HeapState::Idle
    6228           0 :         && !cx->suppressGC
    6229           0 :         && !cx->inUnsafeRegion
    6230           0 :         && !cx->generationalDisabled
    6231           0 :         && !cx->compactingDisabledCount
    6232         372 :         && !cx->keepAtoms;
    6233             : }
    6234             : 
    6235             : GCRuntime::IncrementalResult
    6236           0 : GCRuntime::resetIncrementalGC(gc::AbortReason reason, AutoLockForExclusiveAccess& lock)
    6237             : {
    6238           0 :     MOZ_ASSERT(reason != gc::AbortReason::None);
    6239             : 
    6240           0 :     switch (incrementalState) {
    6241             :       case State::NotActive:
    6242           0 :           return IncrementalResult::Ok;
    6243             : 
    6244             :       case State::MarkRoots:
    6245           0 :         MOZ_CRASH("resetIncrementalGC did not expect MarkRoots state");
    6246             :         break;
    6247             : 
    6248             :       case State::Mark: {
    6249             :         /* Cancel any ongoing marking. */
    6250           0 :         marker.reset();
    6251           0 :         marker.stop();
    6252           0 :         clearBufferedGrayRoots();
    6253             : 
    6254           0 :         for (GCCompartmentsIter c(rt); !c.done(); c.next())
    6255           0 :             ResetGrayList(c);
    6256             : 
    6257           0 :         for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
    6258           0 :             MOZ_ASSERT(zone->isGCMarking());
    6259           0 :             zone->setNeedsIncrementalBarrier(false);
    6260           0 :             zone->setGCState(Zone::NoGC);
    6261             :         }
    6262             : 
    6263           0 :         blocksToFreeAfterSweeping.ref().freeAll();
    6264             : 
    6265           0 :         incrementalState = State::NotActive;
    6266             : 
    6267           0 :         MOZ_ASSERT(!marker.shouldCheckCompartments());
    6268             : 
    6269           0 :         break;
    6270             :       }
    6271             : 
    6272             :       case State::Sweep: {
    6273           0 :         marker.reset();
    6274             : 
    6275           0 :         for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next())
    6276           0 :             c->scheduledForDestruction = false;
    6277             : 
    6278             :         /* Finish sweeping the current sweep group, then abort. */
    6279           0 :         abortSweepAfterCurrentGroup = true;
    6280             : 
    6281             :         /* Don't perform any compaction after sweeping. */
    6282           0 :         bool wasCompacting = isCompacting;
    6283           0 :         isCompacting = false;
    6284             : 
    6285           0 :         auto unlimited = SliceBudget::unlimited();
    6286           0 :         incrementalCollectSlice(unlimited, JS::gcreason::RESET, lock);
    6287             : 
    6288           0 :         isCompacting = wasCompacting;
    6289             : 
    6290             :         {
    6291           0 :             gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::WAIT_BACKGROUND_THREAD);
    6292           0 :             rt->gc.waitBackgroundSweepOrAllocEnd();
    6293             :         }
    6294           0 :         break;
    6295             :       }
    6296             : 
    6297             :       case State::Finalize: {
    6298             :         {
    6299           0 :             gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::WAIT_BACKGROUND_THREAD);
    6300           0 :             rt->gc.waitBackgroundSweepOrAllocEnd();
    6301             :         }
    6302             : 
    6303           0 :         bool wasCompacting = isCompacting;
    6304           0 :         isCompacting = false;
    6305             : 
    6306           0 :         auto unlimited = SliceBudget::unlimited();
    6307           0 :         incrementalCollectSlice(unlimited, JS::gcreason::RESET, lock);
    6308             : 
    6309           0 :         isCompacting = wasCompacting;
    6310             : 
    6311           0 :         break;
    6312             :       }
    6313             : 
    6314             :       case State::Compact: {
    6315           0 :         bool wasCompacting = isCompacting;
    6316             : 
    6317           0 :         isCompacting = true;
    6318           0 :         startedCompacting = true;
    6319           0 :         zonesToMaybeCompact.ref().clear();
    6320             : 
    6321           0 :         auto unlimited = SliceBudget::unlimited();
    6322           0 :         incrementalCollectSlice(unlimited, JS::gcreason::RESET, lock);
    6323             : 
    6324           0 :         isCompacting = wasCompacting;
    6325           0 :         break;
    6326             :       }
    6327             : 
    6328             :       case State::Decommit: {
    6329           0 :         auto unlimited = SliceBudget::unlimited();
    6330           0 :         incrementalCollectSlice(unlimited, JS::gcreason::RESET, lock);
    6331           0 :         break;
    6332             :       }
    6333             :     }
    6334             : 
    6335           0 :     stats().reset(reason);
    6336             : 
    6337             : #ifdef DEBUG
    6338           0 :     assertBackgroundSweepingFinished();
    6339           0 :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    6340           0 :         MOZ_ASSERT(!zone->isCollectingFromAnyThread());
    6341           0 :         MOZ_ASSERT(!zone->needsIncrementalBarrier());
    6342           0 :         MOZ_ASSERT(!zone->isOnList());
    6343             :     }
    6344           0 :     MOZ_ASSERT(zonesToMaybeCompact.ref().isEmpty());
    6345           0 :     MOZ_ASSERT(incrementalState == State::NotActive);
    6346             : #endif
    6347             : 
    6348           0 :     return IncrementalResult::Reset;
    6349             : }
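                 : 
                 : /*
                 :  * [Editor's illustrative sketch, not part of jsgc.cpp] The Sweep, Finalize
                 :  * and Compact cases above share one save/restore idiom: temporarily force a
                 :  * flag, drive the collector to a safe point with an unlimited budget, then
                 :  * restore the flag. A standalone model of that idiom, using hypothetical
                 :  * names (MiniResetGC, driveToSafePoint):
                 :  */
                 : #include <cassert>
                 : 
                 : enum class MiniResetState { NotActive, Mark, Sweep };
                 : 
                 : struct MiniResetGC {
                 :     MiniResetState state = MiniResetState::Sweep;
                 :     bool isCompacting = true;
                 : 
                 :     // Stand-in for incrementalCollectSlice(unlimited, RESET, lock).
                 :     void driveToSafePoint() { state = MiniResetState::NotActive; }
                 : 
                 :     void reset() {
                 :         bool wasCompacting = isCompacting;  // save
                 :         isCompacting = false;               // don't compact while aborting
                 :         driveToSafePoint();
                 :         isCompacting = wasCompacting;       // restore
                 :         assert(state == MiniResetState::NotActive);
                 :     }
                 : };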
    6350             : 
    6351             : namespace {
    6352             : 
    6353             : class AutoGCSlice {
    6354             :   public:
    6355             :     explicit AutoGCSlice(JSRuntime* rt);
    6356             :     ~AutoGCSlice();
    6357             : 
    6358             :   private:
    6359             :     JSRuntime* runtime;
    6360             :     AutoSetThreadIsPerformingGC performingGC;
    6361             : };
    6362             : 
    6363             : } /* anonymous namespace */
    6364             : 
    6365           3 : AutoGCSlice::AutoGCSlice(JSRuntime* rt)
    6366           3 :   : runtime(rt)
    6367             : {
    6368          35 :     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
    6369             :         /*
    6370             :          * Clear needsIncrementalBarrier early so we don't do any write
    6371             :          * barriers during GC. We don't need to update the Ion barriers (which
    6372             :          * is expensive) because Ion code doesn't run during GC. If need be,
    6373             :          * we'll update the Ion barriers in ~AutoGCSlice.
    6374             :          */
    6375          32 :         if (zone->isGCMarking()) {
    6376          32 :             MOZ_ASSERT(zone->needsIncrementalBarrier());
    6377          32 :             zone->setNeedsIncrementalBarrier(false);
    6378             :         } else {
    6379           0 :             MOZ_ASSERT(!zone->needsIncrementalBarrier());
    6380             :         }
    6381             :     }
    6382           3 : }
    6383             : 
    6384           6 : AutoGCSlice::~AutoGCSlice()
    6385             : {
    6386             :     /* We can't use GCZonesIter if this is the end of the last slice. */
    6387          51 :     for (ZonesIter zone(runtime, WithAtoms); !zone.done(); zone.next()) {
    6388          48 :         if (zone->isGCMarking()) {
    6389          48 :             zone->setNeedsIncrementalBarrier(true);
    6390          48 :             zone->arenas.purge();
    6391             :         } else {
    6392           0 :             zone->setNeedsIncrementalBarrier(false);
    6393             :         }
    6394             :     }
    6395           3 : }
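                 : 
                 : /*
                 :  * [Editor's illustrative sketch, not part of jsgc.cpp] AutoGCSlice is an
                 :  * RAII guard: barriers are switched off for the duration of a slice and
                 :  * switched back on when the slice ends. The same shape in miniature, with
                 :  * a hypothetical BarrierFlag type:
                 :  */
                 : struct BarrierFlag { bool needsBarrier = true; };
                 : 
                 : class AutoBarrierSuspend {
                 :     BarrierFlag& flag_;
                 : 
                 :   public:
                 :     explicit AutoBarrierSuspend(BarrierFlag& flag) : flag_(flag) {
                 :         flag_.needsBarrier = false;  // no write barriers while the GC runs
                 :     }
                 :     ~AutoBarrierSuspend() {
                 :         flag_.needsBarrier = true;   // barriers required again between slices
                 :     }
                 : };
                 : 
                 : // Usage: { AutoBarrierSuspend guard(flag); /* ... run the slice ... */ }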
    6396             : 
    6397             : void
    6398           1 : GCRuntime::pushZealSelectedObjects()
    6399             : {
    6400             : #ifdef JS_GC_ZEAL
    6401             :     /* Push selected objects onto the mark stack and clear the list. */
    6402           1 :     for (JSObject** obj = selectedForMarking.ref().begin(); obj != selectedForMarking.ref().end(); obj++)
    6403           0 :         TraceManuallyBarrieredEdge(&marker, obj, "selected obj");
    6404             : #endif
    6405           1 : }
    6406             : 
    6407             : static bool
    6408           4 : IsShutdownGC(JS::gcreason::Reason reason)
    6409             : {
    6410           4 :     return reason == JS::gcreason::SHUTDOWN_CC || reason == JS::gcreason::DESTROY_RUNTIME;
    6411             : }
    6412             : 
    6413             : static bool
    6414           1 : ShouldCleanUpEverything(JS::gcreason::Reason reason, JSGCInvocationKind gckind)
    6415             : {
    6416             :     // During shutdown, we must clean everything up, for the sake of leak
    6417             :     // detection. When a runtime has no contexts, or we're doing a GC before a
    6418             :     // shutdown CC, those are strong indications that we're shutting down.
    6419           1 :     return IsShutdownGC(reason) || gckind == GC_SHRINK;
    6420             : }
    6421             : 
    6422             : void
    6423           3 : GCRuntime::incrementalCollectSlice(SliceBudget& budget, JS::gcreason::Reason reason,
    6424             :                                    AutoLockForExclusiveAccess& lock)
    6425             : {
    6426           6 :     AutoGCSlice slice(rt);
    6427             : 
    6428           3 :     bool destroyingRuntime = (reason == JS::gcreason::DESTROY_RUNTIME);
    6429             : 
    6430           3 :     gc::State initialState = incrementalState;
    6431             : 
    6432           3 :     bool useZeal = false;
    6433             : #ifdef JS_GC_ZEAL
    6434           3 :     if (reason == JS::gcreason::DEBUG_GC && !budget.isUnlimited()) {
    6435             :         /*
    6436             :          * Do the incremental collection type specified by zeal mode if the
    6437             :          * collection was triggered by runDebugGC() and incremental GC has not
    6438             :          * been cancelled by resetIncrementalGC().
    6439             :          */
    6440           0 :         useZeal = true;
    6441             :     }
    6442             : #endif
    6443             : 
    6444           3 :     MOZ_ASSERT_IF(isIncrementalGCInProgress(), isIncremental);
    6445           3 :     isIncremental = !budget.isUnlimited();
    6446             : 
    6447           3 :     if (useZeal && (hasZealMode(ZealMode::IncrementalRootsThenFinish) ||
    6448           0 :                     hasZealMode(ZealMode::IncrementalMarkAllThenFinish) ||
    6449           0 :                     hasZealMode(ZealMode::IncrementalSweepThenFinish)))
    6450             :     {
    6451             :         /*
6452             :          * Yielding between slices occurs at predetermined points in these modes;
    6453             :          * the budget is not used.
    6454             :          */
    6455           0 :         budget.makeUnlimited();
    6456             :     }
    6457             : 
    6458           3 :     switch (incrementalState) {
    6459             :       case State::NotActive:
    6460           1 :         initialReason = reason;
    6461           1 :         cleanUpEverything = ShouldCleanUpEverything(reason, invocationKind);
    6462           1 :         isCompacting = shouldCompact();
    6463           1 :         lastMarkSlice = false;
    6464           1 :         rootsRemoved = false;
    6465             : 
    6466           1 :         incrementalState = State::MarkRoots;
    6467             : 
    6468             :         MOZ_FALLTHROUGH;
    6469             : 
    6470             :       case State::MarkRoots:
    6471           1 :         if (!beginMarkPhase(reason, lock)) {
    6472           0 :             incrementalState = State::NotActive;
    6473           0 :             return;
    6474             :         }
    6475             : 
    6476           1 :         if (!destroyingRuntime)
    6477           1 :             pushZealSelectedObjects();
    6478             : 
    6479           1 :         incrementalState = State::Mark;
    6480             : 
    6481           1 :         if (isIncremental && useZeal && hasZealMode(ZealMode::IncrementalRootsThenFinish))
    6482           0 :             break;
    6483             : 
    6484             :         MOZ_FALLTHROUGH;
    6485             : 
    6486             :       case State::Mark:
    6487           6 :         for (const CooperatingContext& target : rt->cooperatingContexts())
    6488           3 :             AutoGCRooter::traceAllWrappers(target, &marker);
    6489             : 
6490             :         /* If we could not buffer gray roots for delayed marking, we must finish collecting now. */
    6491           3 :         if (!hasBufferedGrayRoots()) {
    6492           0 :             budget.makeUnlimited();
    6493           0 :             isIncremental = false;
    6494             :         }
    6495             : 
    6496           3 :         if (drainMarkStack(budget, gcstats::PhaseKind::MARK) == NotFinished)
    6497           3 :             break;
    6498             : 
    6499           0 :         MOZ_ASSERT(marker.isDrained());
    6500             : 
    6501             :         /*
6502             :          * In incremental GCs where we have already performed more than one
6503             :          * slice, we yield after marking with the aim of starting the sweep in
    6504             :          * the next slice, since the first slice of sweeping can be expensive.
    6505             :          *
    6506             :          * This is modified by the various zeal modes.  We don't yield in
    6507             :          * IncrementalRootsThenFinish mode and we always yield in
    6508             :          * IncrementalMarkAllThenFinish mode.
    6509             :          *
    6510             :          * We will need to mark anything new on the stack when we resume, so
    6511             :          * we stay in Mark state.
    6512             :          */
    6513           0 :         if (!lastMarkSlice && isIncremental &&
    6514           0 :             ((initialState == State::Mark &&
    6515           0 :               !(useZeal && hasZealMode(ZealMode::IncrementalRootsThenFinish))) ||
    6516           0 :              (useZeal && hasZealMode(ZealMode::IncrementalMarkAllThenFinish))))
    6517             :         {
    6518           0 :             lastMarkSlice = true;
    6519           0 :             break;
    6520             :         }
    6521             : 
    6522           0 :         incrementalState = State::Sweep;
    6523             : 
    6524             :         /*
    6525             :          * This runs to completion, but we don't continue if the budget is
6526             :          * now exhausted.
    6527             :          */
    6528           0 :         beginSweepPhase(reason, lock);
    6529           0 :         if (budget.isOverBudget())
    6530           0 :             break;
    6531             : 
    6532             :         /*
    6533             :          * Always yield here when running in incremental multi-slice zeal
6534             :          * mode, so RunDebugGC can reset the slice budget.
    6535             :          */
    6536           0 :         if (isIncremental && useZeal &&
    6537           0 :             (hasZealMode(ZealMode::IncrementalMultipleSlices) ||
    6538           0 :              hasZealMode(ZealMode::IncrementalSweepThenFinish)))
    6539             :         {
    6540           0 :             break;
    6541             :         }
    6542             : 
    6543             :         MOZ_FALLTHROUGH;
    6544             : 
    6545             :       case State::Sweep:
    6546           0 :         if (performSweepActions(budget, lock) == NotFinished)
    6547           0 :             break;
    6548             : 
    6549           0 :         endSweepPhase(destroyingRuntime, lock);
    6550             : 
    6551           0 :         incrementalState = State::Finalize;
    6552             : 
    6553             :         MOZ_FALLTHROUGH;
    6554             : 
    6555             :       case State::Finalize:
    6556             :         {
    6557           0 :             gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::WAIT_BACKGROUND_THREAD);
    6558             : 
    6559             :             // Yield until background finalization is done.
    6560           0 :             if (!budget.isUnlimited()) {
    6561             :                 // Poll for end of background sweeping
    6562           0 :                 AutoLockGC lock(rt);
    6563           0 :                 if (isBackgroundSweeping())
    6564           0 :                     break;
    6565             :             } else {
    6566           0 :                 waitBackgroundSweepEnd();
    6567             :             }
    6568             :         }
    6569             : 
    6570             :         {
    6571             :             // Re-sweep the zones list, now that background finalization is
6572             :             // finished, in order to actually remove and free dead zones.
    6573           0 :             gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::SWEEP);
    6574           0 :             gcstats::AutoPhase ap2(stats(), gcstats::PhaseKind::DESTROY);
    6575           0 :             AutoSetThreadIsSweeping threadIsSweeping;
    6576           0 :             FreeOp fop(rt);
    6577           0 :             sweepZoneGroups(&fop, destroyingRuntime);
    6578             :         }
    6579             : 
    6580           0 :         MOZ_ASSERT(!startedCompacting);
    6581           0 :         incrementalState = State::Compact;
    6582             : 
    6583             :         // Always yield before compacting since it is not incremental.
    6584           0 :         if (isCompacting && !budget.isUnlimited())
    6585           0 :             break;
    6586             : 
    6587             :         MOZ_FALLTHROUGH;
    6588             : 
    6589             :       case State::Compact:
    6590           0 :         if (isCompacting) {
    6591           0 :             if (!startedCompacting)
    6592           0 :                 beginCompactPhase();
    6593             : 
    6594           0 :             if (compactPhase(reason, budget, lock) == NotFinished)
    6595           0 :                 break;
    6596             : 
    6597           0 :             endCompactPhase(reason);
    6598             :         }
    6599             : 
    6600           0 :         startDecommit();
    6601           0 :         incrementalState = State::Decommit;
    6602             : 
    6603             :         MOZ_FALLTHROUGH;
    6604             : 
    6605             :       case State::Decommit:
    6606             :         {
    6607           0 :             gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::WAIT_BACKGROUND_THREAD);
    6608             : 
    6609             :             // Yield until background decommit is done.
    6610           0 :             if (!budget.isUnlimited() && decommitTask.isRunning())
    6611           0 :                 break;
    6612             : 
    6613           0 :             decommitTask.join();
    6614             :         }
    6615             : 
    6616           0 :         finishCollection(reason);
    6617           0 :         incrementalState = State::NotActive;
    6618           0 :         break;
    6619             :     }
    6620             : }
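                 : 
                 : /*
                 :  * [Editor's illustrative sketch, not part of jsgc.cpp] The switch above is
                 :  * a resumable state machine: each slice resumes at incrementalState, a
                 :  * 'break' yields back to the mutator, and falling through advances to the
                 :  * next phase. A standalone model with hypothetical names (MiniSliceMachine,
                 :  * doSomeWork):
                 :  */
                 : #include <cstdint>
                 : 
                 : enum class MiniPhase { Idle, Mark, Sweep };
                 : 
                 : struct MiniSliceMachine {
                 :     MiniPhase phase = MiniPhase::Idle;
                 : 
                 :     // Returns false when the slice budget is exhausted.
                 :     bool doSomeWork(int64_t& budget) { return --budget > 0; }
                 : 
                 :     void slice(int64_t budget) {
                 :         switch (phase) {
                 :           case MiniPhase::Idle:
                 :             phase = MiniPhase::Mark;
                 :             [[fallthrough]];
                 :           case MiniPhase::Mark:
                 :             if (!doSomeWork(budget))
                 :                 break;                   // yield; resume in Mark next slice
                 :             phase = MiniPhase::Sweep;
                 :             [[fallthrough]];
                 :           case MiniPhase::Sweep:
                 :             if (!doSomeWork(budget))
                 :                 break;                   // yield; resume in Sweep next slice
                 :             phase = MiniPhase::Idle;     // collection finished
                 :             break;
                 :         }
                 :     }
                 : };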
    6621             : 
    6622             : gc::AbortReason
    6623           3 : gc::IsIncrementalGCUnsafe(JSRuntime* rt)
    6624             : {
    6625           3 :     MOZ_ASSERT(!TlsContext.get()->suppressGC);
    6626             : 
    6627           3 :     if (!rt->gc.isIncrementalGCAllowed())
    6628           0 :         return gc::AbortReason::IncrementalDisabled;
    6629             : 
    6630           3 :     return gc::AbortReason::None;
    6631             : }
    6632             : 
    6633             : GCRuntime::IncrementalResult
    6634           3 : GCRuntime::budgetIncrementalGC(bool nonincrementalByAPI, JS::gcreason::Reason reason,
    6635             :                                SliceBudget& budget, AutoLockForExclusiveAccess& lock)
    6636             : {
    6637           3 :     if (nonincrementalByAPI) {
    6638           0 :         stats().nonincremental(gc::AbortReason::NonIncrementalRequested);
    6639           0 :         budget.makeUnlimited();
    6640             : 
    6641             :         // Reset any in progress incremental GC if this was triggered via the
    6642             :         // API. This isn't required for correctness, but sometimes during tests
    6643             :         // the caller expects this GC to collect certain objects, and we need
    6644             :         // to make sure to collect everything possible.
    6645           0 :         if (reason != JS::gcreason::ALLOC_TRIGGER)
    6646           0 :             return resetIncrementalGC(gc::AbortReason::NonIncrementalRequested, lock);
    6647             : 
    6648           0 :         return IncrementalResult::Ok;
    6649             :     }
    6650             : 
    6651           3 :     if (reason == JS::gcreason::ABORT_GC) {
    6652           0 :         budget.makeUnlimited();
    6653           0 :         stats().nonincremental(gc::AbortReason::AbortRequested);
    6654           0 :         return resetIncrementalGC(gc::AbortReason::AbortRequested, lock);
    6655             :     }
    6656             : 
    6657           3 :     AbortReason unsafeReason = IsIncrementalGCUnsafe(rt);
    6658           3 :     if (unsafeReason == AbortReason::None) {
    6659           3 :         if (reason == JS::gcreason::COMPARTMENT_REVIVED)
    6660           0 :             unsafeReason = gc::AbortReason::CompartmentRevived;
    6661           3 :         else if (mode != JSGC_MODE_INCREMENTAL)
    6662           0 :             unsafeReason = gc::AbortReason::ModeChange;
    6663             :     }
    6664             : 
    6665           3 :     if (unsafeReason != AbortReason::None) {
    6666           0 :         budget.makeUnlimited();
    6667           0 :         stats().nonincremental(unsafeReason);
    6668           0 :         return resetIncrementalGC(unsafeReason, lock);
    6669             :     }
    6670             : 
    6671           3 :     if (isTooMuchMalloc()) {
    6672           0 :         budget.makeUnlimited();
    6673           0 :         stats().nonincremental(AbortReason::MallocBytesTrigger);
    6674             :     }
    6675             : 
    6676           3 :     bool reset = false;
    6677          51 :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    6678          48 :         if (zone->usage.gcBytes() >= zone->threshold.gcTriggerBytes()) {
    6679           0 :             budget.makeUnlimited();
    6680           0 :             stats().nonincremental(AbortReason::GCBytesTrigger);
    6681             :         }
    6682             : 
    6683          48 :         if (isIncrementalGCInProgress() && zone->isGCScheduled() != zone->wasGCStarted())
    6684           0 :             reset = true;
    6685             : 
    6686          48 :         if (zone->isTooMuchMalloc()) {
    6687           0 :             budget.makeUnlimited();
    6688           0 :             stats().nonincremental(AbortReason::MallocBytesTrigger);
    6689             :         }
    6690             :     }
    6691             : 
    6692           3 :     if (reset)
    6693           0 :         return resetIncrementalGC(AbortReason::ZoneChange, lock);
    6694             : 
    6695           3 :     return IncrementalResult::Ok;
    6696             : }
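                 : 
                 : /*
                 :  * [Editor's illustrative sketch, not part of jsgc.cpp] The per-zone checks
                 :  * above degrade the collection to non-incremental as soon as any zone trips
                 :  * its trigger. Modeled with a hypothetical ZoneModel:
                 :  */
                 : #include <cstddef>
                 : #include <vector>
                 : 
                 : struct ZoneModel { size_t gcBytes; size_t gcTriggerBytes; };
                 : 
                 : static bool MustFinishNonIncrementally(const std::vector<ZoneModel>& zones) {
                 :     for (const ZoneModel& zone : zones) {
                 :         if (zone.gcBytes >= zone.gcTriggerBytes)
                 :             return true;  // over the trigger: make the budget unlimited
                 :     }
                 :     return false;
                 : }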
    6697             : 
    6698             : namespace {
    6699             : 
    6700             : class AutoScheduleZonesForGC
    6701             : {
    6702             :     JSRuntime* rt_;
    6703             : 
    6704             :   public:
    6705           3 :     explicit AutoScheduleZonesForGC(JSRuntime* rt) : rt_(rt) {
    6706          51 :         for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    6707          48 :             if (rt->gc.gcMode() == JSGC_MODE_GLOBAL)
    6708           0 :                 zone->scheduleGC();
    6709             : 
    6710             :             /* This is a heuristic to avoid resets. */
    6711          48 :             if (rt->gc.isIncrementalGCInProgress() && zone->needsIncrementalBarrier())
    6712          32 :                 zone->scheduleGC();
    6713             : 
    6714             :             /* This is a heuristic to reduce the total number of collections. */
    6715          96 :             if (zone->usage.gcBytes() >=
    6716          48 :                 zone->threshold.allocTrigger(rt->gc.schedulingState.inHighFrequencyGCMode()))
    6717             :             {
    6718           0 :                 zone->scheduleGC();
    6719             :             }
    6720             :         }
    6721           3 :     }
    6722             : 
    6723           6 :     ~AutoScheduleZonesForGC() {
    6724          51 :         for (ZonesIter zone(rt_, WithAtoms); !zone.done(); zone.next())
    6725          48 :             zone->unscheduleGC();
    6726           3 :     }
    6727             : };
    6728             : 
    6729             : /*
    6730             :  * An invariant of our GC/CC interaction is that there must not ever be any
    6731             :  * black to gray edges in the system. It is possible to violate this with
    6732             :  * simple compartmental GC. For example, in GC[n], we collect in both
    6733             :  * compartmentA and compartmentB, and mark both sides of the cross-compartment
    6734             :  * edge gray. Later in GC[n+1], we only collect compartmentA, but this time
    6735             :  * mark it black. Now we are violating the invariants and must fix it somehow.
    6736             :  *
    6737             :  * To prevent this situation, we explicitly detect the black->gray state when
    6738             :  * marking cross-compartment edges -- see ShouldMarkCrossCompartment -- adding
6739             :  * each violating edge to foundBlackGrayEdges. After we leave the trace
    6740             :  * session for each GC slice, we "ExposeToActiveJS" on each of these edges
    6741             :  * (which we cannot do safely from the guts of the GC).
    6742             :  */
    6743             : class AutoExposeLiveCrossZoneEdges
    6744             : {
    6745             :     BlackGrayEdgeVector* edges;
    6746             : 
    6747             :   public:
    6748           3 :     explicit AutoExposeLiveCrossZoneEdges(BlackGrayEdgeVector* edgesPtr) : edges(edgesPtr) {
    6749           3 :         MOZ_ASSERT(edges->empty());
    6750           3 :     }
    6751           6 :     ~AutoExposeLiveCrossZoneEdges() {
    6752           3 :         for (auto& target : *edges) {
    6753           0 :             MOZ_ASSERT(target);
    6754           0 :             MOZ_ASSERT(!target->zone()->isCollecting());
    6755           0 :             UnmarkGrayCellRecursively(target, target->getTraceKind());
    6756             :         }
    6757           3 :         edges->clear();
    6758           3 :     }
    6759             : };
    6760             : 
    6761             : } /* anonymous namespace */
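                 : 
                 : /*
                 :  * [Editor's illustrative sketch, not part of jsgc.cpp] The black-to-gray
                 :  * invariant in miniature, with hypothetical MiniColor/MiniCell types: when
                 :  * a black cell is found pointing at a gray cell, the gray target must be
                 :  * promoted to black (the analogue of UnmarkGrayCellRecursively above).
                 :  */
                 : enum class MiniColor { White, Gray, Black };
                 : 
                 : struct MiniCell { MiniColor color = MiniColor::White; MiniCell* edge = nullptr; };
                 : 
                 : static void ExposeIfBlackToGray(MiniCell& source) {
                 :     if (source.color == MiniColor::Black && source.edge &&
                 :         source.edge->color == MiniColor::Gray)
                 :     {
                 :         source.edge->color = MiniColor::Black;  // no black->gray edge remains
                 :     }
                 : }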
    6762             : 
    6763             : /*
    6764             :  * Run one GC "cycle" (either a slice of incremental GC or an entire
6765             :  * non-incremental GC). We disable inlining to ensure that the bottom of the
    6766             :  * stack with possible GC roots recorded in MarkRuntime excludes any pointers we
    6767             :  * use during the marking implementation.
    6768             :  *
    6769             :  * Returns true if we "reset" an existing incremental GC, which would force us
    6770             :  * to run another cycle.
    6771             :  */
    6772             : MOZ_NEVER_INLINE GCRuntime::IncrementalResult
    6773           3 : GCRuntime::gcCycle(bool nonincrementalByAPI, SliceBudget& budget, JS::gcreason::Reason reason)
    6774             : {
    6775             :     // Note that the following is allowed to re-enter GC in the finalizer.
    6776           6 :     AutoNotifyGCActivity notify(*this);
    6777             : 
    6778           6 :     gcstats::AutoGCSlice agc(stats(), scanZonesBeforeGC(), invocationKind, budget, reason);
    6779             : 
    6780           6 :     AutoExposeLiveCrossZoneEdges aelcze(&foundBlackGrayEdges.ref());
    6781             : 
    6782           3 :     EvictAllNurseries(rt, reason);
    6783             : 
    6784           6 :     AutoTraceSession session(rt, JS::HeapState::MajorCollecting);
    6785             : 
    6786           3 :     majorGCTriggerReason = JS::gcreason::NO_REASON;
    6787           3 :     interFrameGC = true;
    6788             : 
    6789           3 :     number++;
    6790           3 :     if (!isIncrementalGCInProgress())
    6791           1 :         incMajorGcNumber();
    6792             : 
    6793             :     // It's ok if threads other than the active thread have suppressGC set, as
    6794             :     // they are operating on zones which will not be collected from here.
    6795           3 :     MOZ_ASSERT(!TlsContext.get()->suppressGC);
    6796             : 
    6797             :     // Assert if this is a GC unsafe region.
    6798           3 :     TlsContext.get()->verifyIsSafeToGC();
    6799             : 
    6800             :     {
    6801           6 :         gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::WAIT_BACKGROUND_THREAD);
    6802             : 
6803             :         // Background finalization and decommit are finished by definition
    6804             :         // before we can start a new GC session.
    6805           3 :         if (!isIncrementalGCInProgress()) {
    6806           1 :             assertBackgroundSweepingFinished();
    6807           1 :             MOZ_ASSERT(!decommitTask.isRunning());
    6808             :         }
    6809             : 
    6810             :         // We must also wait for background allocation to finish so we can
    6811             :         // avoid taking the GC lock when manipulating the chunks during the GC.
    6812             :         // The background alloc task can run between slices, so we must wait
    6813             :         // for it at the start of every slice.
    6814           3 :         allocTask.cancel(GCParallelTask::CancelAndWait);
    6815             :     }
    6816             : 
    6817             :     // We don't allow off-thread parsing to start while we're doing an
    6818             :     // incremental GC.
    6819           3 :     MOZ_ASSERT_IF(rt->activeGCInAtomsZone(), !rt->hasHelperThreadZones());
    6820             : 
    6821           3 :     auto result = budgetIncrementalGC(nonincrementalByAPI, reason, budget, session.lock);
    6822             : 
    6823             :     // If an ongoing incremental GC was reset, we may need to restart.
    6824           3 :     if (result == IncrementalResult::Reset) {
    6825           0 :         MOZ_ASSERT(!isIncrementalGCInProgress());
    6826           0 :         return result;
    6827             :     }
    6828             : 
    6829           3 :     TraceMajorGCStart();
    6830             : 
    6831           3 :     incrementalCollectSlice(budget, reason, session.lock);
    6832             : 
    6833           3 :     chunkAllocationSinceLastGC = false;
    6834             : 
    6835             : #ifdef JS_GC_ZEAL
    6836             :     /* Keeping these around after a GC is dangerous. */
    6837           3 :     clearSelectedForMarking();
    6838             : #endif
    6839             : 
    6840             :     /* Clear gcMallocBytes for all zones. */
    6841          51 :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
    6842          48 :         zone->resetAllMallocBytes();
    6843             : 
    6844           3 :     resetMallocBytes();
    6845             : 
    6846           3 :     TraceMajorGCEnd();
    6847             : 
    6848           3 :     return IncrementalResult::Ok;
    6849             : }
    6850             : 
    6851             : #ifdef JS_GC_ZEAL
    6852             : static bool
    6853           0 : IsDeterministicGCReason(JS::gcreason::Reason reason)
    6854             : {
    6855           0 :     switch (reason) {
    6856             :       case JS::gcreason::API:
    6857             :       case JS::gcreason::DESTROY_RUNTIME:
    6858             :       case JS::gcreason::LAST_DITCH:
    6859             :       case JS::gcreason::TOO_MUCH_MALLOC:
    6860             :       case JS::gcreason::ALLOC_TRIGGER:
    6861             :       case JS::gcreason::DEBUG_GC:
    6862             :       case JS::gcreason::CC_FORCED:
    6863             :       case JS::gcreason::SHUTDOWN_CC:
    6864             :       case JS::gcreason::ABORT_GC:
    6865           0 :         return true;
    6866             : 
    6867             :       default:
    6868           0 :         return false;
    6869             :     }
    6870             : }
    6871             : #endif
    6872             : 
    6873             : gcstats::ZoneGCStats
    6874           3 : GCRuntime::scanZonesBeforeGC()
    6875             : {
    6876           3 :     gcstats::ZoneGCStats zoneStats;
    6877          51 :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    6878          48 :         zoneStats.zoneCount++;
    6879          48 :         if (zone->isGCScheduled()) {
    6880          48 :             zoneStats.collectedZoneCount++;
    6881          48 :             zoneStats.collectedCompartmentCount += zone->compartments().length();
    6882             :         }
    6883             :     }
    6884             : 
    6885         674 :     for (CompartmentsIter c(rt, WithAtoms); !c.done(); c.next())
    6886         671 :         zoneStats.compartmentCount++;
    6887             : 
    6888           3 :     return zoneStats;
    6889             : }
    6890             : 
    6891             : // The GC can only clean up scheduledForDestruction compartments that were
    6892             : // marked live by a barrier (e.g. by RemapWrappers from a navigation event).
    6893             : // It is also common to have compartments held live because they are part of a
    6894             : // cycle in gecko, e.g. involving the HTMLDocument wrapper. In this case, we
    6895             : // need to run the CycleCollector in order to remove these edges before the
    6896             : // compartment can be freed.
    6897             : void
    6898           0 : GCRuntime::maybeDoCycleCollection()
    6899             : {
    6900             :     const static double ExcessiveGrayCompartments = 0.8;
    6901             :     const static size_t LimitGrayCompartments = 200;
    6902             : 
    6903           0 :     size_t compartmentsTotal = 0;
    6904           0 :     size_t compartmentsGray = 0;
    6905           0 :     for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) {
    6906           0 :         ++compartmentsTotal;
    6907           0 :         GlobalObject* global = c->unsafeUnbarrieredMaybeGlobal();
    6908           0 :         if (global && global->isMarkedGray())
    6909           0 :             ++compartmentsGray;
    6910             :     }
    6911           0 :     double grayFraction = double(compartmentsGray) / double(compartmentsTotal);
    6912           0 :     if (grayFraction > ExcessiveGrayCompartments || compartmentsGray > LimitGrayCompartments)
    6913           0 :         callDoCycleCollectionCallback(rt->activeContextFromOwnThread());
    6914           0 : }
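                 : 
                 : /*
                 :  * [Editor's illustrative sketch, not part of jsgc.cpp] The trigger above,
                 :  * worked through: 85 gray compartments out of 100 gives a gray fraction of
                 :  * 0.85 > 0.8, so the cycle collector is requested; 250 gray out of 1000
                 :  * gives only 0.25, but 250 > 200 still triggers it.
                 :  */
                 : #include <cstddef>
                 : 
                 : static bool WouldRequestCycleCollection(size_t total, size_t gray) {
                 :     return (double(gray) / double(total)) > 0.8 || gray > 200;
                 : }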
    6915             : 
    6916             : void
    6917           3 : GCRuntime::checkCanCallAPI()
    6918             : {
    6919           3 :     MOZ_RELEASE_ASSERT(CurrentThreadCanAccessRuntime(rt));
    6920             : 
    6921             :     /* If we attempt to invoke the GC while we are running in the GC, assert. */
    6922           3 :     MOZ_RELEASE_ASSERT(!JS::CurrentThreadIsHeapBusy());
    6923             : 
    6924           3 :     MOZ_ASSERT(TlsContext.get()->isAllocAllowed());
    6925           3 : }
    6926             : 
    6927             : bool
    6928           3 : GCRuntime::checkIfGCAllowedInCurrentState(JS::gcreason::Reason reason)
    6929             : {
    6930           3 :     if (TlsContext.get()->suppressGC)
    6931           0 :         return false;
    6932             : 
    6933             :     // Only allow shutdown GCs when we're destroying the runtime. This keeps
    6934             :     // the GC callback from triggering a nested GC and resetting global state.
    6935           3 :     if (rt->isBeingDestroyed() && !IsShutdownGC(reason))
    6936           0 :         return false;
    6937             : 
    6938             : #ifdef JS_GC_ZEAL
    6939           3 :     if (deterministicOnly && !IsDeterministicGCReason(reason))
    6940           0 :         return false;
    6941             : #endif
    6942             : 
    6943           3 :     return true;
    6944             : }
    6945             : 
    6946             : bool
    6947           0 : GCRuntime::shouldRepeatForDeadZone(JS::gcreason::Reason reason)
    6948             : {
    6949           0 :     MOZ_ASSERT_IF(reason == JS::gcreason::COMPARTMENT_REVIVED, !isIncremental);
    6950           0 :     MOZ_ASSERT(!isIncrementalGCInProgress());
    6951             : 
    6952           0 :     if (!isIncremental)
    6953           0 :         return false;
    6954             : 
    6955           0 :     for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) {
    6956           0 :         if (c->scheduledForDestruction)
    6957           0 :             return true;
    6958             :     }
    6959             : 
    6960           0 :     return false;
    6961             : }
    6962             : 
    6963             : void
    6964           3 : GCRuntime::collect(bool nonincrementalByAPI, SliceBudget budget, JS::gcreason::Reason reason)
    6965             : {
    6966             :     // Checks run for each request, even if we do not actually GC.
    6967           3 :     checkCanCallAPI();
    6968             : 
    6969             :     // Check if we are allowed to GC at this time before proceeding.
    6970           3 :     if (!checkIfGCAllowedInCurrentState(reason))
    6971           0 :         return;
    6972             : 
    6973           6 :     AutoTraceLog logGC(TraceLoggerForCurrentThread(), TraceLogger_GC);
    6974           6 :     AutoStopVerifyingBarriers av(rt, IsShutdownGC(reason));
    6975           6 :     AutoEnqueuePendingParseTasksAfterGC aept(*this);
    6976           6 :     AutoScheduleZonesForGC asz(rt);
    6977             : 
    6978             :     bool repeat;
    6979           3 :     do {
    6980           3 :         bool wasReset = gcCycle(nonincrementalByAPI, budget, reason) == IncrementalResult::Reset;
    6981             : 
    6982           3 :         if (reason == JS::gcreason::ABORT_GC) {
    6983           0 :             MOZ_ASSERT(!isIncrementalGCInProgress());
    6984           0 :             break;
    6985             :         }
    6986             : 
    6987             :         /*
    6988             :          * Sometimes when we finish a GC we need to immediately start a new one.
    6989             :          * This happens in the following cases:
    6990             :          *  - when we reset the current GC
    6991             :          *  - when finalizers drop roots during shutdown (the cleanUpEverything
    6992             :          *    case)
    6993             :          *  - when zones that we thought were dead at the start of GC are
    6994             :          *    not collected (see the large comment in beginMarkPhase)
    6995             :          */
    6996           3 :         repeat = false;
    6997           3 :         if (!isIncrementalGCInProgress()) {
    6998           0 :             if (wasReset) {
    6999           0 :                 repeat = true;
    7000           0 :             } else if (rootsRemoved && cleanUpEverything) {
    7001             :                 /* Need to re-schedule all zones for GC. */
    7002           0 :                 JS::PrepareForFullGC(rt->activeContextFromOwnThread());
    7003           0 :                 repeat = true;
    7004           0 :                 reason = JS::gcreason::ROOTS_REMOVED;
    7005           0 :             } else if (shouldRepeatForDeadZone(reason)) {
    7006           0 :                 repeat = true;
    7007           0 :                 reason = JS::gcreason::COMPARTMENT_REVIVED;
    7008             :             }
    7009             :          }
    7010             :     } while (repeat);
    7011             : 
    7012           3 :     if (reason == JS::gcreason::COMPARTMENT_REVIVED)
    7013           0 :         maybeDoCycleCollection();
    7014             : 
    7015             : #ifdef JS_GC_ZEAL
    7016           3 :     if (rt->hasZealMode(ZealMode::CheckHeapAfterGC)) {
    7017           0 :         gcstats::AutoPhase ap(rt->gc.stats(), gcstats::PhaseKind::TRACE_HEAP);
    7018           0 :         CheckHeapAfterGC(rt);
    7019             :     }
    7020             : #endif
    7021             : }
    7022             : 
    7023           6 : js::AutoEnqueuePendingParseTasksAfterGC::~AutoEnqueuePendingParseTasksAfterGC()
    7024             : {
    7025           3 :     if (!OffThreadParsingMustWaitForGC(gc_.rt))
    7026           0 :         EnqueuePendingParseTasksAfterGC(gc_.rt);
    7027           3 : }
    7028             : 
    7029             : SliceBudget
    7030           3 : GCRuntime::defaultBudget(JS::gcreason::Reason reason, int64_t millis)
    7031             : {
    7032           3 :     if (millis == 0) {
    7033           0 :         if (reason == JS::gcreason::ALLOC_TRIGGER)
    7034           0 :             millis = defaultSliceBudget();
    7035           0 :         else if (schedulingState.inHighFrequencyGCMode() && tunables.isDynamicMarkSliceEnabled())
    7036           0 :             millis = defaultSliceBudget() * IGC_MARK_SLICE_MULTIPLIER;
    7037             :         else
    7038           0 :             millis = defaultSliceBudget();
    7039             :     }
    7040             : 
    7041           3 :     return SliceBudget(TimeBudget(millis));
    7042             : }
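                 : 
                 : /*
                 :  * [Editor's illustrative sketch, not part of jsgc.cpp] Budget selection,
                 :  * assuming a hypothetical 10ms defaultSliceBudget() and an
                 :  * IGC_MARK_SLICE_MULTIPLIER of 2: an explicit request wins; otherwise
                 :  * high-frequency mode with dynamic mark slicing stretches the default.
                 :  */
                 : #include <cstdint>
                 : 
                 : static int64_t PickBudgetMs(int64_t millis, bool allocTrigger, bool highFreqDynamic) {
                 :     const int64_t defaultMs = 10;  // stand-in for defaultSliceBudget()
                 :     if (millis != 0)
                 :         return millis;             // caller asked for a specific budget
                 :     if (allocTrigger)
                 :         return defaultMs;
                 :     return highFreqDynamic ? defaultMs * 2 : defaultMs;
                 : }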
    7043             : 
    7044             : void
    7045           0 : GCRuntime::gc(JSGCInvocationKind gckind, JS::gcreason::Reason reason)
    7046             : {
    7047           0 :     invocationKind = gckind;
    7048           0 :     collect(true, SliceBudget::unlimited(), reason);
    7049           0 : }
    7050             : 
    7051             : void
    7052           1 : GCRuntime::startGC(JSGCInvocationKind gckind, JS::gcreason::Reason reason, int64_t millis)
    7053             : {
    7054           1 :     MOZ_ASSERT(!isIncrementalGCInProgress());
    7055           1 :     if (!JS::IsIncrementalGCEnabled(TlsContext.get())) {
    7056           0 :         gc(gckind, reason);
    7057           0 :         return;
    7058             :     }
    7059           1 :     invocationKind = gckind;
    7060           1 :     collect(false, defaultBudget(reason, millis), reason);
    7061             : }
    7062             : 
    7063             : void
    7064           2 : GCRuntime::gcSlice(JS::gcreason::Reason reason, int64_t millis)
    7065             : {
    7066           2 :     MOZ_ASSERT(isIncrementalGCInProgress());
    7067           2 :     collect(false, defaultBudget(reason, millis), reason);
    7068           2 : }
    7069             : 
    7070             : void
    7071           0 : GCRuntime::finishGC(JS::gcreason::Reason reason)
    7072             : {
    7073           0 :     MOZ_ASSERT(isIncrementalGCInProgress());
    7074             : 
7075             :     // Unless this collection was triggered because we are out of memory, skip
7076             :     // the compacting phase when we need to finish an ongoing incremental GC
7077             :     // non-incrementally, to avoid janking the browser.
    7078           0 :     if (!IsOOMReason(initialReason)) {
    7079           0 :         if (incrementalState == State::Compact) {
    7080           0 :             abortGC();
    7081           0 :             return;
    7082             :         }
    7083             : 
    7084           0 :         isCompacting = false;
    7085             :     }
    7086             : 
    7087           0 :     collect(false, SliceBudget::unlimited(), reason);
    7088             : }
    7089             : 
    7090             : void
    7091           0 : GCRuntime::abortGC()
    7092             : {
    7093           0 :     MOZ_ASSERT(isIncrementalGCInProgress());
    7094           0 :     checkCanCallAPI();
    7095           0 :     MOZ_ASSERT(!TlsContext.get()->suppressGC);
    7096             : 
    7097           0 :     collect(false, SliceBudget::unlimited(), JS::gcreason::ABORT_GC);
    7098           0 : }
    7099             : 
    7100             : void
    7101           0 : GCRuntime::notifyDidPaint()
    7102             : {
    7103           0 :     MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
    7104             : 
    7105             : #ifdef JS_GC_ZEAL
    7106           0 :     if (hasZealMode(ZealMode::FrameVerifierPre))
    7107           0 :         verifyPreBarriers();
    7108             : 
    7109           0 :     if (hasZealMode(ZealMode::FrameGC)) {
    7110           0 :         JS::PrepareForFullGC(rt->activeContextFromOwnThread());
    7111           0 :         gc(GC_NORMAL, JS::gcreason::REFRESH_FRAME);
    7112           0 :         return;
    7113             :     }
    7114             : #endif
    7115             : 
    7116           0 :     if (isIncrementalGCInProgress() && !interFrameGC && tunables.areRefreshFrameSlicesEnabled()) {
    7117           0 :         JS::PrepareForIncrementalGC(rt->activeContextFromOwnThread());
    7118           0 :         gcSlice(JS::gcreason::REFRESH_FRAME);
    7119             :     }
    7120             : 
    7121           0 :     interFrameGC = false;
    7122             : }
    7123             : 
    7124             : static bool
    7125           0 : ZonesSelected(JSRuntime* rt)
    7126             : {
    7127           0 :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    7128           0 :         if (zone->isGCScheduled())
    7129           0 :             return true;
    7130             :     }
    7131           0 :     return false;
    7132             : }
    7133             : 
    7134             : void
    7135           0 : GCRuntime::startDebugGC(JSGCInvocationKind gckind, SliceBudget& budget)
    7136             : {
    7137           0 :     MOZ_ASSERT(!isIncrementalGCInProgress());
    7138           0 :     if (!ZonesSelected(rt))
    7139           0 :         JS::PrepareForFullGC(rt->activeContextFromOwnThread());
    7140           0 :     invocationKind = gckind;
    7141           0 :     collect(false, budget, JS::gcreason::DEBUG_GC);
    7142           0 : }
    7143             : 
    7144             : void
    7145           0 : GCRuntime::debugGCSlice(SliceBudget& budget)
    7146             : {
    7147           0 :     MOZ_ASSERT(isIncrementalGCInProgress());
    7148           0 :     if (!ZonesSelected(rt))
    7149           0 :         JS::PrepareForIncrementalGC(rt->activeContextFromOwnThread());
    7150           0 :     collect(false, budget, JS::gcreason::DEBUG_GC);
    7151           0 : }
    7152             : 
    7153             : /* Schedule a full GC unless a zone will already be collected. */
    7154             : void
    7155           0 : js::PrepareForDebugGC(JSRuntime* rt)
    7156             : {
    7157           0 :     if (!ZonesSelected(rt))
    7158           0 :         JS::PrepareForFullGC(rt->activeContextFromOwnThread());
    7159           0 : }
    7160             : 
    7161             : void
    7162           0 : GCRuntime::onOutOfMallocMemory()
    7163             : {
    7164             :     // Stop allocating new chunks.
    7165           0 :     allocTask.cancel(GCParallelTask::CancelAndWait);
    7166             : 
    7167             :     // Make sure we release anything queued for release.
    7168           0 :     decommitTask.join();
    7169             : 
    7170             :     // Wait for background free of nursery huge slots to finish.
    7171           0 :     for (ZoneGroupsIter group(rt); !group.done(); group.next())
    7172           0 :         group->nursery().waitBackgroundFreeEnd();
    7173             : 
    7174           0 :     AutoLockGC lock(rt);
    7175           0 :     onOutOfMallocMemory(lock);
    7176           0 : }
    7177             : 
    7178             : void
    7179           0 : GCRuntime::onOutOfMallocMemory(const AutoLockGC& lock)
    7180             : {
    7181             :     // Release any relocated arenas we may be holding on to, without releasing
    7182             :     // the GC lock.
    7183           0 :     releaseHeldRelocatedArenasWithoutUnlocking(lock);
    7184             : 
    7185             :     // Throw away any excess chunks we have lying around.
    7186           0 :     freeEmptyChunks(rt, lock);
    7187             : 
    7188             :     // Immediately decommit as many arenas as possible in the hopes that this
    7189             :     // might let the OS scrape together enough pages to satisfy the failing
    7190             :     // malloc request.
    7191           0 :     decommitAllWithoutUnlocking(lock);
    7192           0 : }
    7193             : 
    7194             : void
    7195          24 : GCRuntime::minorGC(JS::gcreason::Reason reason, gcstats::PhaseKind phase)
    7196             : {
    7197          24 :     MOZ_ASSERT(!JS::CurrentThreadIsHeapBusy());
    7198             : 
    7199          24 :     if (TlsContext.get()->suppressGC)
    7200           0 :         return;
    7201             : 
    7202          48 :     gcstats::AutoPhase ap(rt->gc.stats(), phase);
    7203             : 
    7204          24 :     nursery().clearMinorGCRequest();
    7205          24 :     TraceLoggerThread* logger = TraceLoggerForCurrentThread();
    7206          48 :     AutoTraceLog logMinorGC(logger, TraceLogger_MinorGC);
    7207          24 :     nursery().collect(reason);
    7208          24 :     MOZ_ASSERT(nursery().isEmpty());
    7209             : 
    7210          24 :     blocksToFreeAfterMinorGC.ref().freeAll();
    7211             : 
    7212             : #ifdef JS_GC_ZEAL
    7213          24 :     if (rt->hasZealMode(ZealMode::CheckHeapAfterGC))
    7214           0 :         CheckHeapAfterGC(rt);
    7215             : #endif
    7216             : 
    7217             :     {
    7218          48 :         AutoLockGC lock(rt);
    7219         263 :         for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
    7220         239 :             maybeAllocTriggerZoneGC(zone, lock);
    7221             :     }
    7222             : }
    7223             : 
    7224           3 : JS::AutoDisableGenerationalGC::AutoDisableGenerationalGC(JSContext* cx)
    7225           3 :   : cx(cx)
    7226             : {
    7227           3 :     if (!cx->generationalDisabled) {
    7228           3 :         cx->runtime()->gc.evictNursery(JS::gcreason::API);
    7229           3 :         cx->nursery().disable();
    7230             :     }
    7231           3 :     ++cx->generationalDisabled;
    7232           3 : }
    7233             : 
    7234           6 : JS::AutoDisableGenerationalGC::~AutoDisableGenerationalGC()
    7235             : {
    7236           3 :     if (--cx->generationalDisabled == 0) {
    7237           6 :         for (ZoneGroupsIter group(cx->runtime()); !group.done(); group.next())
    7238           3 :             group->nursery().enable();
    7239             :     }
    7240           3 : }
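                 : 
                 : /*
                 :  * [Editor's usage sketch] The guard nests via the generationalDisabled
                 :  * counter: the first guard evicts and disables the nursery, and it is only
                 :  * re-enabled when the last guard goes out of scope.
                 :  *
                 :  *     {
                 :  *         JS::AutoDisableGenerationalGC noNurseryGC(cx);
                 :  *         // ... code that must not move or tenure nursery things ...
                 :  *     }   // nursery re-enabled here if no other guard remains
                 :  */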
    7241             : 
    7242             : JS_PUBLIC_API(bool)
    7243           0 : JS::IsGenerationalGCEnabled(JSRuntime* rt)
    7244             : {
    7245           0 :     return !TlsContext.get()->generationalDisabled;
    7246             : }
    7247             : 
    7248             : bool
    7249       54334 : GCRuntime::gcIfRequested()
    7250             : {
    7251             :     // This method returns whether a major GC was performed.
    7252             : 
    7253       54334 :     if (nursery().minorGCRequested())
    7254          18 :         minorGC(nursery().minorGCTriggerReason());
    7255             : 
    7256       54334 :     if (majorGCRequested()) {
    7257           0 :         if (!isIncrementalGCInProgress())
    7258           0 :             startGC(GC_NORMAL, majorGCTriggerReason);
    7259             :         else
    7260           0 :             gcSlice(majorGCTriggerReason);
    7261           0 :         return true;
    7262             :     }
    7263             : 
    7264       54334 :     return false;
    7265             : }
    7266             : 
    7267             : void
    7268           0 : js::gc::FinishGC(JSContext* cx)
    7269             : {
    7270           0 :     if (JS::IsIncrementalGCInProgress(cx)) {
    7271           0 :         JS::PrepareForIncrementalGC(cx);
    7272           0 :         JS::FinishIncrementalGC(cx, JS::gcreason::API);
    7273             :     }
    7274             : 
    7275           0 :     for (ZoneGroupsIter group(cx->runtime()); !group.done(); group.next())
    7276           0 :         group->nursery().waitBackgroundFreeEnd();
    7277           0 : }
    7278             : 
    7279           0 : AutoPrepareForTracing::AutoPrepareForTracing(JSContext* cx, ZoneSelector selector)
    7280             : {
    7281           0 :     js::gc::FinishGC(cx);
    7282           0 :     session_.emplace(cx->runtime());
    7283           0 : }
    7284             : 
    7285             : JSCompartment*
    7286         311 : js::NewCompartment(JSContext* cx, JSPrincipals* principals,
    7287             :                    const JS::CompartmentOptions& options)
    7288             : {
    7289         311 :     JSRuntime* rt = cx->runtime();
    7290         311 :     JS_AbortIfWrongThread(cx);
    7291             : 
    7292         622 :     ScopedJSDeletePtr<ZoneGroup> groupHolder;
    7293         622 :     ScopedJSDeletePtr<Zone> zoneHolder;
    7294             : 
    7295         311 :     Zone* zone = nullptr;
    7296         311 :     ZoneGroup* group = nullptr;
    7297         311 :     JS::ZoneSpecifier zoneSpec = options.creationOptions().zoneSpecifier();
    7298         311 :     switch (zoneSpec) {
    7299             :       case JS::SystemZone:
    7300             :         // systemZone and possibly systemZoneGroup might be null here, in which
    7301             :         // case we'll make a zone/group and set these fields below.
    7302         284 :         zone = rt->gc.systemZone;
    7303         284 :         group = rt->gc.systemZoneGroup;
    7304         284 :         break;
    7305             :       case JS::ExistingZone:
    7306           3 :         zone = static_cast<Zone*>(options.creationOptions().zonePointer());
    7307           3 :         MOZ_ASSERT(zone);
    7308           3 :         group = zone->group();
    7309           3 :         break;
    7310             :       case JS::NewZoneInNewZoneGroup:
    7311          17 :         break;
    7312             :       case JS::NewZoneInSystemZoneGroup:
    7313             :         // As above, systemZoneGroup might be null here.
    7314           7 :         group = rt->gc.systemZoneGroup;
    7315           7 :         break;
    7316             :       case JS::NewZoneInExistingZoneGroup:
    7317           0 :         group = static_cast<ZoneGroup*>(options.creationOptions().zonePointer());
    7318           0 :         MOZ_ASSERT(group);
    7319           0 :         break;
    7320             :     }
    7321             : 
    7322         311 :     if (group) {
    7323             :         // Take over ownership of the group while we create the compartment/zone.
    7324         290 :         group->enter(cx);
    7325             :     } else {
    7326          21 :         MOZ_ASSERT(!zone);
    7327          21 :         group = cx->new_<ZoneGroup>(rt);
    7328          21 :         if (!group)
    7329           0 :             return nullptr;
    7330             : 
    7331          21 :         groupHolder.reset(group);
    7332             : 
    7333          21 :         if (!group->init()) {
    7334           0 :             ReportOutOfMemory(cx);
    7335           0 :             return nullptr;
    7336             :         }
    7337             : 
    7338          21 :         if (cx->generationalDisabled)
    7339           3 :             group->nursery().disable();
    7340             :     }
    7341             : 
    7342         311 :     if (!zone) {
    7343          27 :         zone = cx->new_<Zone>(cx->runtime(), group);
    7344          27 :         if (!zone)
    7345           0 :             return nullptr;
    7346             : 
    7347          27 :         zoneHolder.reset(zone);
    7348             : 
    7349          27 :         const JSPrincipals* trusted = rt->trustedPrincipals();
    7350          27 :         bool isSystem = principals && principals == trusted;
    7351          27 :         if (!zone->init(isSystem)) {
    7352           0 :             ReportOutOfMemory(cx);
    7353           0 :             return nullptr;
    7354             :         }
    7355             :     }
    7356             : 
    7357         622 :     ScopedJSDeletePtr<JSCompartment> compartment(cx->new_<JSCompartment>(zone, options));
    7358         311 :     if (!compartment || !compartment->init(cx))
    7359           0 :         return nullptr;
    7360             : 
    7361             :     // Set up the principals.
    7362         311 :     JS_SetCompartmentPrincipals(compartment, principals);
    7363             : 
    7364         622 :     AutoLockGC lock(rt);
    7365             : 
    7366         311 :     if (!zone->compartments().append(compartment.get())) {
    7367           0 :         ReportOutOfMemory(cx);
    7368           0 :         return nullptr;
    7369             :     }
    7370             : 
    7371         311 :     if (zoneHolder) {
    7372          27 :         if (!group->zones().append(zone)) {
    7373           0 :             ReportOutOfMemory(cx);
    7374           0 :             return nullptr;
    7375             :         }
    7376             : 
    7377             :         // Lazily set the runtime's system zone.
    7378          27 :         if (zoneSpec == JS::SystemZone) {
    7379           3 :             MOZ_RELEASE_ASSERT(!rt->gc.systemZone);
    7380           3 :             rt->gc.systemZone = zone;
    7381           3 :             zone->isSystem = true;
    7382             :         }
    7383             :     }
    7384             : 
    7385         311 :     if (groupHolder) {
    7386          21 :         if (!rt->gc.groups.ref().append(group)) {
    7387           0 :             ReportOutOfMemory(cx);
    7388           0 :             return nullptr;
    7389             :         }
    7390             : 
    7391             :         // Lazily set the runtime's system zone group.
    7392          21 :         if (zoneSpec == JS::SystemZone || zoneSpec == JS::NewZoneInSystemZoneGroup) {
    7393           4 :             MOZ_RELEASE_ASSERT(!rt->gc.systemZoneGroup);
    7394           4 :             rt->gc.systemZoneGroup = group;
    7395           4 :             group->setUseExclusiveLocking();
    7396             :         }
    7397             :     }
    7398             : 
    7399         311 :     zoneHolder.forget();
    7400         311 :     groupHolder.forget();
    7401         311 :     group->leave();
    7402         311 :     return compartment.forget();
    7403             : }
    7404             : 
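// ---------------------------------------------------------------------------
// [Editor's sketch - not part of jsgc.cpp] How an embedder reaches
// NewCompartment: JS_NewGlobalObject funnels into it, and zone/group
// placement is chosen through the JS::CompartmentOptions whose
// zoneSpecifier() is consumed above. The names marked "assumed" below are
// illustrative, not taken from this file.
//
//   static const JSClass globalClass = {
//       "global", JSCLASS_GLOBAL_FLAGS,
//       &JS::DefaultGlobalClassOps                 // assumed default ops
//   };
//
//   JS::CompartmentOptions options;
//   options.creationOptions().setSystemZone();     // assumed setter for the
//                                                  // JS::SystemZone case
//   JS::RootedObject global(cx,
//       JS_NewGlobalObject(cx, &globalClass, nullptr,
//                          JS::FireOnNewGlobalHook, options));
// ---------------------------------------------------------------------------
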
    7405             : void
    7406          15 : gc::MergeCompartments(JSCompartment* source, JSCompartment* target)
    7407             : {
    7408             :     // The source compartment must be specifically flagged as mergeable.  This
    7409             :     // also implies that the compartment is not visible to the debugger.
    7410          15 :     MOZ_ASSERT(source->creationOptions_.mergeable());
    7411          15 :     MOZ_ASSERT(source->creationOptions_.invisibleToDebugger());
    7412             : 
    7413          15 :     MOZ_ASSERT(source->creationOptions().addonIdOrNull() ==
    7414             :                target->creationOptions().addonIdOrNull());
    7415             : 
    7416          15 :     JSContext* cx = source->runtimeFromActiveCooperatingThread()->activeContextFromOwnThread();
    7417             : 
    7418          15 :     MOZ_ASSERT(!source->zone()->wasGCStarted());
    7419          15 :     MOZ_ASSERT(!target->zone()->wasGCStarted());
    7420          30 :     JS::AutoAssertNoGC nogc(cx);
    7421             : 
    7422          30 :     AutoTraceSession session(cx->runtime());
    7423             : 
    7424             :     // Cleanup tables and other state in the source compartment that will be
    7425             :     // meaningless after merging into the target compartment.
    7426             : 
    7427          15 :     source->clearTables();
    7428          15 :     source->zone()->clearTables();
    7429          15 :     source->unsetIsDebuggee();
    7430             : 
    7431             :     // The delazification flag indicates the presence of LazyScripts in a
    7432             :     // compartment for the Debugger API, so if the source compartment created
    7433             :     // LazyScripts, the flag must be propagated to the target compartment.
    7434          15 :     if (source->needsDelazificationForDebugger())
    7435           9 :         target->scheduleDelazificationForDebugger();
    7436             : 
    7437             :     // Release any relocated arenas which we may be holding on to as they might
    7438             :     // be in the source zone.
    7439          15 :     cx->runtime()->gc.releaseHeldRelocatedArenas();
    7440             : 
    7441             :     // Fixup compartment pointers in source to refer to target, and make sure
    7442             :     // type information generations are in sync.
    7443             : 
    7444        5139 :     for (auto script = source->zone()->cellIter<JSScript>(); !script.done(); script.next()) {
    7445        5124 :         MOZ_ASSERT(script->compartment() == source);
    7446        5124 :         script->compartment_ = target;
    7447        5124 :         script->setTypesGeneration(target->zone()->types.generation);
    7448             :     }
    7449             : 
    7450        7056 :     for (auto group = source->zone()->cellIter<ObjectGroup>(); !group.done(); group.next()) {
    7451        7041 :         group->setGeneration(target->zone()->types.generation);
    7452        7041 :         group->compartment_ = target;
    7453             : 
    7454             :         // Remove any unboxed layouts from the list in the off thread
    7455             :         // compartment. These do not need to be reinserted in the target
    7456             :         // compartment's list, as the list is not required to be complete.
    7457        7041 :         if (UnboxedLayout* layout = group->maybeUnboxedLayoutDontCheckGeneration())
    7458           0 :             layout->detachFromCompartment();
    7459             :     }
    7460             : 
    7461             :     // Fixup zone pointers in source's zone to refer to target's zone.
    7462             : 
    7463         450 :     for (auto thingKind : AllAllocKinds()) {
    7464        1245 :         for (ArenaIter aiter(source->zone(), thingKind); !aiter.done(); aiter.next()) {
    7465         810 :             Arena* arena = aiter.get();
    7466         810 :             arena->zone = target->zone();
    7467             :         }
    7468             :     }
    7469             : 
    7470             :     // The source should be the only compartment in its zone.
    7471          30 :     for (CompartmentsInZoneIter c(source->zone()); !c.done(); c.next())
    7472          15 :         MOZ_ASSERT(c.get() == source);
    7473             : 
    7474             :     // Merge the allocator, stats and UIDs in source's zone into target's zone.
    7475          15 :     target->zone()->arenas.adoptArenas(cx->runtime(), &source->zone()->arenas);
    7476          15 :     target->zone()->usage.adopt(source->zone()->usage);
    7477          15 :     target->zone()->adoptUniqueIds(source->zone());
    7478             : 
    7479             :     // Merge other info in source's zone into target's zone.
    7480          15 :     target->zone()->types.typeLifoAlloc().transferFrom(&source->zone()->types.typeLifoAlloc());
    7481             : 
    7482             :     // Atoms which are marked in source's zone are now marked in target's zone.
    7483          15 :     cx->atomMarking().adoptMarkedAtoms(target->zone(), source->zone());
    7484             : 
    7485             :     // Merge the source compartment's script name map into the target compartment's map.
    7486          15 :     if (cx->runtime()->lcovOutput().isEnabled() && source->scriptNameMap) {
    7487           0 :         AutoEnterOOMUnsafeRegion oomUnsafe;
    7488             : 
    7489           0 :         if (!target->scriptNameMap) {
    7490           0 :             target->scriptNameMap = cx->new_<ScriptNameMap>();
    7491             : 
    7492           0 :             if (!target->scriptNameMap)
    7493           0 :                 oomUnsafe.crash("Failed to create a script name map.");
    7494             : 
    7495           0 :             if (!target->scriptNameMap->init())
    7496           0 :                 oomUnsafe.crash("Failed to initialize a script name map.");
    7497             :         }
    7498             : 
    7499           0 :         for (ScriptNameMap::Range r = source->scriptNameMap->all(); !r.empty(); r.popFront()) {
    7500           0 :             JSScript* key = r.front().key();
    7501           0 :             const char* value = r.front().value();
    7502           0 :             if (!target->scriptNameMap->putNew(key, value))
    7503           0 :                 oomUnsafe.crash("Failed to add an entry in the script name map.");
    7504             :         }
    7505             : 
    7506           0 :         source->scriptNameMap->clear();
    7507             :     }
    7508          15 : }
    7509             : 
    7510             : void
    7511           0 : GCRuntime::runDebugGC()
    7512             : {
    7513             : #ifdef JS_GC_ZEAL
    7514           0 :     if (TlsContext.get()->suppressGC)
    7515           0 :         return;
    7516             : 
    7517           0 :     if (hasZealMode(ZealMode::GenerationalGC))
    7518           0 :         return minorGC(JS::gcreason::DEBUG_GC);
    7519             : 
    7520           0 :     PrepareForDebugGC(rt);
    7521             : 
    7522           0 :     auto budget = SliceBudget::unlimited();
    7523           0 :     if (hasZealMode(ZealMode::IncrementalRootsThenFinish) ||
    7524           0 :         hasZealMode(ZealMode::IncrementalMarkAllThenFinish) ||
    7525           0 :         hasZealMode(ZealMode::IncrementalMultipleSlices) ||
    7526           0 :         hasZealMode(ZealMode::IncrementalSweepThenFinish))
    7527             :     {
    7528           0 :         js::gc::State initialState = incrementalState;
    7529           0 :         if (hasZealMode(ZealMode::IncrementalMultipleSlices)) {
    7530             :             /*
    7531             :              * Start with a small slice limit and double it every slice. This
    7532             :              * ensures that we get multiple slices, and collection runs to
    7533             :              * completion.
    7534             :              */
    7535           0 :             if (!isIncrementalGCInProgress())
    7536           0 :                 incrementalLimit = zealFrequency / 2;
    7537             :             else
    7538           0 :                 incrementalLimit *= 2;
    7539           0 :             budget = SliceBudget(WorkBudget(incrementalLimit));
    7540             :         } else {
    7541             :             // This triggers incremental GC but is actually ignored by IncrementalMarkSlice.
    7542           0 :             budget = SliceBudget(WorkBudget(1));
    7543             :         }
    7544             : 
    7545           0 :         if (!isIncrementalGCInProgress())
    7546           0 :             invocationKind = GC_SHRINK;
    7547           0 :         collect(false, budget, JS::gcreason::DEBUG_GC);
    7548             : 
    7549             :         /*
    7550             :          * For multi-slice zeal, reset the slice size when we get to the sweep
    7551             :          * or compact phases.
    7552             :          */
    7553           0 :         if (hasZealMode(ZealMode::IncrementalMultipleSlices)) {
    7554           0 :             if ((initialState == State::Mark && incrementalState == State::Sweep) ||
    7555           0 :                 (initialState == State::Sweep && incrementalState == State::Compact))
    7556             :             {
    7557           0 :                 incrementalLimit = zealFrequency / 2;
    7558             :             }
    7559             :         }
    7560           0 :     } else if (hasZealMode(ZealMode::Compact)) {
    7561           0 :         gc(GC_SHRINK, JS::gcreason::DEBUG_GC);
    7562             :     } else {
    7563           0 :         gc(GC_NORMAL, JS::gcreason::DEBUG_GC);
    7564             :     }
    7565             : 
    7566             : #endif
    7567             : }
    7568             : 
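// ---------------------------------------------------------------------------
// [Editor's sketch - not part of jsgc.cpp] runDebugGC above is driven by the
// zeal machinery in JS_GC_ZEAL builds; tests enable it with JS_SetGCZeal.
// With ZealMode::IncrementalMultipleSlices and a frequency of 100, the code
// above produces work budgets of 50, 100, 200, ... until the collection
// finishes. The numeric mode value below is assumed:
//
//   JS_SetGCZeal(cx, 10 /* IncrementalMultipleSlices, assumed */, 100);
// ---------------------------------------------------------------------------
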
    7569             : void
    7570           0 : GCRuntime::setFullCompartmentChecks(bool enabled)
    7571             : {
    7572           0 :     MOZ_ASSERT(!JS::CurrentThreadIsHeapMajorCollecting());
    7573           0 :     fullCompartmentChecks = enabled;
    7574           0 : }
    7575             : 
    7576             : void
    7577        3144 : GCRuntime::notifyRootsRemoved()
    7578             : {
    7579        3144 :     rootsRemoved = true;
    7580             : 
    7581             : #ifdef JS_GC_ZEAL
    7582             :     /* Schedule a GC to happen "soon". */
    7583        3144 :     if (hasZealMode(ZealMode::RootsChange))
    7584           0 :         nextScheduled = 1;
    7585             : #endif
    7586        3144 : }
    7587             : 
    7588             : #ifdef JS_GC_ZEAL
    7589             : bool
    7590           0 : GCRuntime::selectForMarking(JSObject* object)
    7591             : {
    7592           0 :     MOZ_ASSERT(!JS::CurrentThreadIsHeapMajorCollecting());
    7593           0 :     return selectedForMarking.ref().append(object);
    7594             : }
    7595             : 
    7596             : void
    7597           3 : GCRuntime::clearSelectedForMarking()
    7598             : {
    7599           3 :     selectedForMarking.ref().clearAndFree();
    7600           3 : }
    7601             : 
    7602             : void
    7603           0 : GCRuntime::setDeterministic(bool enabled)
    7604             : {
    7605           0 :     MOZ_ASSERT(!JS::CurrentThreadIsHeapMajorCollecting());
    7606           0 :     deterministicOnly = enabled;
    7607           0 : }
    7608             : #endif
    7609             : 
    7610             : #ifdef DEBUG
    7611             : 
    7612             : /* Should only be called manually under gdb */
    7613           0 : void PreventGCDuringInteractiveDebug()
    7614             : {
    7615           0 :     TlsContext.get()->suppressGC++;
    7616           0 : }
    7617             : 
    7618             : #endif
    7619             : 
    7620             : void
    7621           0 : js::ReleaseAllJITCode(FreeOp* fop)
    7622             : {
    7623           0 :     js::CancelOffThreadIonCompile(fop->runtime());
    7624             : 
    7625           0 :     JSRuntime::AutoProhibitActiveContextChange apacc(fop->runtime());
    7626           0 :     for (ZonesIter zone(fop->runtime(), SkipAtoms); !zone.done(); zone.next()) {
    7627           0 :         zone->setPreservingCode(false);
    7628           0 :         zone->discardJitCode(fop);
    7629             :     }
    7630           0 : }
    7631             : 
    7632             : void
    7633         870 : ArenaLists::normalizeBackgroundFinalizeState(AllocKind thingKind)
    7634             : {
    7635         870 :     ArenaLists::BackgroundFinalizeState* bfs = &backgroundFinalizeState(thingKind);
    7636         870 :     switch (*bfs) {
    7637             :       case BFS_DONE:
    7638         870 :         break;
    7639             :       default:
    7640           0 :         MOZ_ASSERT_UNREACHABLE("Background finalization in progress, but it should not be.");
    7641             :         break;
    7642             :     }
    7643         870 : }
    7644             : 
    7645             : void
    7646          15 : ArenaLists::adoptArenas(JSRuntime* rt, ArenaLists* fromArenaLists)
    7647             : {
    7648             :     // GC should be inactive, but still take the lock as a kind of read fence.
    7649          30 :     AutoLockGC lock(rt);
    7650             : 
    7651          15 :     fromArenaLists->purge();
    7652             : 
    7653         450 :     for (auto thingKind : AllAllocKinds()) {
    7654             :         // When we enter a parallel section, we join the background
    7655             :         // thread, and we do not run GC while in the parallel section,
    7656             :         // so no finalizer should be active!
    7657         435 :         normalizeBackgroundFinalizeState(thingKind);
    7658         435 :         fromArenaLists->normalizeBackgroundFinalizeState(thingKind);
    7659             : 
    7660         435 :         ArenaList* fromList = &fromArenaLists->arenaLists(thingKind);
    7661         435 :         ArenaList* toList = &arenaLists(thingKind);
    7662         435 :         fromList->check();
    7663         435 :         toList->check();
    7664             :         Arena* next;
    7665        1245 :         for (Arena* fromArena = fromList->head(); fromArena; fromArena = next) {
    7666             :             // Copy fromArena->next before releasing/reinserting.
    7667         810 :             next = fromArena->next;
    7668             : 
    7669         810 :             MOZ_ASSERT(!fromArena->isEmpty());
    7670         810 :             toList->insertAtCursor(fromArena);
    7671             :         }
    7672         435 :         fromList->clear();
    7673         435 :         toList->check();
    7674             :     }
    7675          15 : }
    7676             : 
    7677             : bool
    7678           0 : ArenaLists::containsArena(JSRuntime* rt, Arena* needle)
    7679             : {
    7680           0 :     AutoLockGC lock(rt);
    7681           0 :     ArenaList& list = arenaLists(needle->getAllocKind());
    7682           0 :     for (Arena* arena = list.head(); arena; arena = arena->next) {
    7683           0 :         if (arena == needle)
    7684           0 :             return true;
    7685             :     }
    7686           0 :     return false;
    7687             : }
    7688             : 
    7689             : 
    7690      189748 : AutoSuppressGC::AutoSuppressGC(JSContext* cx)
    7691      189748 :   : suppressGC_(cx->suppressGC.ref())
    7692             : {
    7693      189750 :     suppressGC_++;
    7694      189750 : }
    7695             : 
    7696             : bool
    7697           0 : js::UninlinedIsInsideNursery(const gc::Cell* cell)
    7698             : {
    7699           0 :     return IsInsideNursery(cell);
    7700             : }
    7701             : 
    7702             : #ifdef DEBUG
    7703       44333 : AutoDisableProxyCheck::AutoDisableProxyCheck()
    7704             : {
    7705       44333 :     TlsContext.get()->disableStrictProxyChecking();
    7706       44333 : }
    7707             : 
    7708       44333 : AutoDisableProxyCheck::~AutoDisableProxyCheck()
    7709             : {
    7710       44333 :     TlsContext.get()->enableStrictProxyChecking();
    7711       44333 : }
    7712             : 
    7713             : JS_FRIEND_API(void)
    7714        6541 : JS::AssertGCThingMustBeTenured(JSObject* obj)
    7715             : {
    7716        6541 :     MOZ_ASSERT(obj->isTenured() &&
    7717             :                (!IsNurseryAllocable(obj->asTenured().getAllocKind()) ||
    7718             :                 obj->getClass()->hasFinalize()));
    7719        6541 : }
    7720             : 
    7721             : JS_FRIEND_API(void)
    7722         397 : JS::AssertGCThingIsNotAnObjectSubclass(Cell* cell)
    7723             : {
    7724         397 :     MOZ_ASSERT(cell);
    7725         397 :     MOZ_ASSERT(cell->getTraceKind() != JS::TraceKind::Object);
    7726         397 : }
    7727             : 
    7728             : JS_FRIEND_API(void)
    7729      262331 : js::gc::AssertGCThingHasType(js::gc::Cell* cell, JS::TraceKind kind)
    7730             : {
    7731      262331 :     if (!cell)
    7732           0 :         MOZ_ASSERT(kind == JS::TraceKind::Null);
    7733      262331 :     else if (IsInsideNursery(cell))
    7734       21256 :         MOZ_ASSERT(kind == JS::TraceKind::Object);
    7735             :     else
    7736      241075 :         MOZ_ASSERT(MapAllocToTraceKind(cell->asTenured().getAllocKind()) == kind);
    7737      262331 : }
    7738             : #endif
    7739             : 
    7740     3346559 : JS::AutoAssertNoGC::AutoAssertNoGC(JSContext* maybecx)
    7741     3346559 :   : cx_(maybecx ? maybecx : TlsContext.get())
    7742             : {
    7743     3346555 :     cx_->inUnsafeRegion++;
    7744     3347030 : }
    7745             : 
    7746     6694217 : JS::AutoAssertNoGC::~AutoAssertNoGC()
    7747             : {
    7748     3346823 :     MOZ_ASSERT(cx_->inUnsafeRegion > 0);
    7749     3347121 :     cx_->inUnsafeRegion--;
    7750     3347402 : }
    7751             : 
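// ---------------------------------------------------------------------------
// [Editor's sketch - not part of jsgc.cpp] Typical use of the RAII guards
// above, assuming a valid JSContext* cx. AutoAssertNoGC makes any GC in the
// guarded region an assertion failure; AutoSuppressGC (engine-internal)
// keeps GC from being triggered at all while it is on the stack.
//
//   {
//       JS::AutoAssertNoGC nogc(cx);
//       // ... code that must not observe a GC; asserts if one starts ...
//   }   // cx->inUnsafeRegion is decremented here
//
//   {
//       js::AutoSuppressGC suppress(cx);
//       // ... region where triggered GCs must be deferred ...
//   }
// ---------------------------------------------------------------------------
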
    7752             : #ifdef DEBUG
    7753           0 : JS::AutoAssertNoAlloc::AutoAssertNoAlloc(JSContext* cx)
    7754           0 :   : gc(nullptr)
    7755             : {
    7756           0 :     disallowAlloc(cx->runtime());
    7757           0 : }
    7758             : 
    7759           0 : void JS::AutoAssertNoAlloc::disallowAlloc(JSRuntime* rt)
    7760             : {
    7761           0 :     MOZ_ASSERT(!gc);
    7762           0 :     gc = &rt->gc;
    7763           0 :     TlsContext.get()->disallowAlloc();
    7764           0 : }
    7765             : 
    7766     1327728 : JS::AutoAssertNoAlloc::~AutoAssertNoAlloc()
    7767             : {
    7768      663864 :     if (gc)
    7769           0 :         TlsContext.get()->allowAlloc();
    7770      663864 : }
    7771             : 
    7772          16 : AutoAssertNoNurseryAlloc::AutoAssertNoNurseryAlloc()
    7773             : {
    7774          16 :     TlsContext.get()->disallowNurseryAlloc();
    7775          16 : }
    7776             : 
    7777          16 : AutoAssertNoNurseryAlloc::~AutoAssertNoNurseryAlloc()
    7778             : {
    7779          16 :     TlsContext.get()->allowNurseryAlloc();
    7780          16 : }
    7781             : 
    7782        1788 : JS::AutoEnterCycleCollection::AutoEnterCycleCollection(JSRuntime* rt)
    7783             : {
    7784        1788 :     MOZ_ASSERT(!JS::CurrentThreadIsHeapBusy());
    7785        1788 :     TlsContext.get()->heapState = HeapState::CycleCollecting;
    7786        1788 : }
    7787             : 
    7788        1788 : JS::AutoEnterCycleCollection::~AutoEnterCycleCollection()
    7789             : {
    7790        1788 :     MOZ_ASSERT(JS::CurrentThreadIsHeapCycleCollecting());
    7791        1788 :     TlsContext.get()->heapState = HeapState::Idle;
    7792        1788 : }
    7793             : 
    7794          61 : JS::AutoAssertGCCallback::AutoAssertGCCallback()
    7795          61 :   : AutoSuppressGCAnalysis()
    7796             : {
    7797          61 :     MOZ_ASSERT(JS::CurrentThreadIsHeapCollecting());
    7798          61 : }
    7799             : #endif
    7800             : 
    7801             : JS_FRIEND_API(const char*)
    7802           0 : JS::GCTraceKindToAscii(JS::TraceKind kind)
    7803             : {
    7804           0 :     switch (kind) {
    7805             : #define MAP_NAME(name, _0, _1) case JS::TraceKind::name: return #name;
    7806           0 : JS_FOR_EACH_TRACEKIND(MAP_NAME);
    7807             : #undef MAP_NAME
    7808           0 :       default: return "Invalid";
    7809             :     }
    7810             : }
    7811             : 
    7812        4699 : JS::GCCellPtr::GCCellPtr(const Value& v)
    7813        4699 :   : ptr(0)
    7814             : {
    7815        4699 :     if (v.isString())
    7816          94 :         ptr = checkedCast(v.toString(), JS::TraceKind::String);
    7817        4605 :     else if (v.isObject())
    7818        4605 :         ptr = checkedCast(&v.toObject(), JS::TraceKind::Object);
    7819           0 :     else if (v.isSymbol())
    7820           0 :         ptr = checkedCast(v.toSymbol(), JS::TraceKind::Symbol);
    7821           0 :     else if (v.isPrivateGCThing())
    7822           0 :         ptr = checkedCast(v.toGCThing(), v.toGCThing()->getTraceKind());
    7823             :     else
    7824           0 :         ptr = checkedCast(nullptr, JS::TraceKind::Null);
    7825        4699 : }
    7826             : 
    7827             : JS::TraceKind
    7828        2288 : JS::GCCellPtr::outOfLineKind() const
    7829             : {
    7830        2288 :     MOZ_ASSERT((ptr & OutOfLineTraceKindMask) == OutOfLineTraceKindMask);
    7831        2288 :     MOZ_ASSERT(asCell()->isTenured());
    7832        2288 :     return MapAllocToTraceKind(asCell()->asTenured().getAllocKind());
    7833             : }
    7834             : 
    7835             : bool
    7836          94 : JS::GCCellPtr::mayBeOwnedByOtherRuntimeSlow() const
    7837             : {
    7838          94 :     if (is<JSString>())
    7839          94 :         return as<JSString>().isPermanentAtom();
    7840           0 :     return as<Symbol>().isWellKnownSymbol();
    7841             : }
    7842             : 
    7843             : #ifdef JSGC_HASH_TABLE_CHECKS
    7844             : void
    7845           0 : js::gc::CheckHashTablesAfterMovingGC(JSRuntime* rt)
    7846             : {
    7847             :     /*
    7848             :      * Check that internal hash tables no longer have any pointers to things
    7849             :      * that have been moved.
    7850             :      */
    7851           0 :     rt->geckoProfiler().checkStringsMapAfterMovingGC();
    7852           0 :     for (ZonesIter zone(rt, SkipAtoms); !zone.done(); zone.next()) {
    7853           0 :         zone->checkUniqueIdTableAfterMovingGC();
    7854           0 :         zone->checkInitialShapesTableAfterMovingGC();
    7855           0 :         zone->checkBaseShapeTableAfterMovingGC();
    7856             : 
    7857           0 :         JS::AutoCheckCannotGC nogc;
    7858           0 :         for (auto baseShape = zone->cellIter<BaseShape>(); !baseShape.done(); baseShape.next()) {
    7859           0 :             if (ShapeTable* table = baseShape->maybeTable(nogc))
    7860           0 :                 table->checkAfterMovingGC();
    7861             :         }
    7862             :     }
    7863           0 :     for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) {
    7864           0 :         c->objectGroups.checkTablesAfterMovingGC();
    7865           0 :         c->dtoaCache.checkCacheAfterMovingGC();
    7866           0 :         c->checkWrapperMapAfterMovingGC();
    7867           0 :         c->checkScriptMapsAfterMovingGC();
    7868           0 :         if (c->debugEnvs)
    7869           0 :             c->debugEnvs->checkHashTablesAfterMovingGC(rt);
    7870             :     }
    7871           0 : }
    7872             : #endif
    7873             : 
    7874             : JS_PUBLIC_API(void)
    7875          32 : JS::PrepareZoneForGC(Zone* zone)
    7876             : {
    7877          32 :     zone->scheduleGC();
    7878          32 : }
    7879             : 
    7880             : JS_PUBLIC_API(void)
    7881           1 : JS::PrepareForFullGC(JSContext* cx)
    7882             : {
    7883          17 :     for (ZonesIter zone(cx->runtime(), WithAtoms); !zone.done(); zone.next())
    7884          16 :         zone->scheduleGC();
    7885           1 : }
    7886             : 
    7887             : JS_PUBLIC_API(void)
    7888           2 : JS::PrepareForIncrementalGC(JSContext* cx)
    7889             : {
    7890           2 :     if (!JS::IsIncrementalGCInProgress(cx))
    7891           0 :         return;
    7892             : 
    7893          34 :     for (ZonesIter zone(cx->runtime(), WithAtoms); !zone.done(); zone.next()) {
    7894          32 :         if (zone->wasGCStarted())
    7895          32 :             PrepareZoneForGC(zone);
    7896             :     }
    7897             : }
    7898             : 
    7899             : JS_PUBLIC_API(bool)
    7900           0 : JS::IsGCScheduled(JSContext* cx)
    7901             : {
    7902           0 :     for (ZonesIter zone(cx->runtime(), WithAtoms); !zone.done(); zone.next()) {
    7903           0 :         if (zone->isGCScheduled())
    7904           0 :             return true;
    7905             :     }
    7906             : 
    7907           0 :     return false;
    7908             : }
    7909             : 
    7910             : JS_PUBLIC_API(void)
    7911           0 : JS::SkipZoneForGC(Zone* zone)
    7912             : {
    7913           0 :     zone->unscheduleGC();
    7914           0 : }
    7915             : 
    7916             : JS_PUBLIC_API(void)
    7917           0 : JS::GCForReason(JSContext* cx, JSGCInvocationKind gckind, gcreason::Reason reason)
    7918             : {
    7919           0 :     MOZ_ASSERT(gckind == GC_NORMAL || gckind == GC_SHRINK);
    7920           0 :     cx->runtime()->gc.gc(gckind, reason);
    7921           0 : }
    7922             : 
    7923             : JS_PUBLIC_API(void)
    7924           1 : JS::StartIncrementalGC(JSContext* cx, JSGCInvocationKind gckind, gcreason::Reason reason, int64_t millis)
    7925             : {
    7926           1 :     MOZ_ASSERT(gckind == GC_NORMAL || gckind == GC_SHRINK);
    7927           1 :     cx->runtime()->gc.startGC(gckind, reason, millis);
    7928           1 : }
    7929             : 
    7930             : JS_PUBLIC_API(void)
    7931           2 : JS::IncrementalGCSlice(JSContext* cx, gcreason::Reason reason, int64_t millis)
    7932             : {
    7933           2 :     cx->runtime()->gc.gcSlice(reason, millis);
    7934           2 : }
    7935             : 
    7936             : JS_PUBLIC_API(void)
    7937           0 : JS::FinishIncrementalGC(JSContext* cx, gcreason::Reason reason)
    7938             : {
    7939           0 :     cx->runtime()->gc.finishGC(reason);
    7940           0 : }
    7941             : 
    7942             : JS_PUBLIC_API(void)
    7943           0 : JS::AbortIncrementalGC(JSContext* cx)
    7944             : {
    7945           0 :     if (IsIncrementalGCInProgress(cx))
    7946           0 :         cx->runtime()->gc.abortGC();
    7947           0 : }
    7948             : 
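// ---------------------------------------------------------------------------
// [Editor's sketch - not part of jsgc.cpp] A minimal driver loop over the
// public slice API above, assuming a valid JSContext* cx, a 10 ms budget,
// and JS::gcreason::API as the reason:
//
//   if (!JS::IsIncrementalGCInProgress(cx)) {
//       JS::PrepareForFullGC(cx);
//       JS::StartIncrementalGC(cx, GC_NORMAL, JS::gcreason::API, 10);
//   }
//   while (JS::IsIncrementalGCInProgress(cx)) {
//       JS::PrepareForIncrementalGC(cx);
//       JS::IncrementalGCSlice(cx, JS::gcreason::API, 10);
//   }
// ---------------------------------------------------------------------------
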
    7949             : char16_t*
    7950           0 : JS::GCDescription::formatSliceMessage(JSContext* cx) const
    7951             : {
    7952           0 :     UniqueChars cstr = cx->runtime()->gc.stats().formatCompactSliceMessage();
    7953             : 
    7954           0 :     size_t nchars = strlen(cstr.get());
    7955           0 :     UniqueTwoByteChars out(js_pod_malloc<char16_t>(nchars + 1));
    7956           0 :     if (!out)
    7957           0 :         return nullptr;
    7958           0 :     out.get()[nchars] = 0;
    7959             : 
    7960           0 :     CopyAndInflateChars(out.get(), cstr.get(), nchars);
    7961           0 :     return out.release();
    7962             : }
    7963             : 
    7964             : char16_t*
    7965           0 : JS::GCDescription::formatSummaryMessage(JSContext* cx) const
    7966             : {
    7967           0 :     UniqueChars cstr = cx->runtime()->gc.stats().formatCompactSummaryMessage();
    7968             : 
    7969           0 :     size_t nchars = strlen(cstr.get());
    7970           0 :     UniqueTwoByteChars out(js_pod_malloc<char16_t>(nchars + 1));
    7971           0 :     if (!out)
    7972           0 :         return nullptr;
    7973           0 :     out.get()[nchars] = 0;
    7974             : 
    7975           0 :     CopyAndInflateChars(out.get(), cstr.get(), nchars);
    7976           0 :     return out.release();
    7977             : }
    7978             : 
    7979             : JS::dbg::GarbageCollectionEvent::Ptr
    7980           0 : JS::GCDescription::toGCEvent(JSContext* cx) const
    7981             : {
    7982           0 :     return JS::dbg::GarbageCollectionEvent::Create(cx->runtime(), cx->runtime()->gc.stats(),
    7983           0 :                                                    cx->runtime()->gc.majorGCCount());
    7984             : }
    7985             : 
    7986             : char16_t*
    7987           0 : JS::GCDescription::formatJSON(JSContext* cx, uint64_t timestamp) const
    7988             : {
    7989           0 :     UniqueChars cstr = cx->runtime()->gc.stats().renderJsonMessage(timestamp);
    7990             : 
    7991           0 :     size_t nchars = strlen(cstr.get());
    7992           0 :     UniqueTwoByteChars out(js_pod_malloc<char16_t>(nchars + 1));
    7993           0 :     if (!out)
    7994           0 :         return nullptr;
    7995           0 :     out.get()[nchars] = 0;
    7996             : 
    7997           0 :     CopyAndInflateChars(out.get(), cstr.get(), nchars);
    7998           0 :     return out.release();
    7999             : }
    8000             : 
    8001             : TimeStamp
    8002           0 : JS::GCDescription::startTime(JSContext* cx) const
    8003             : {
    8004           0 :     return cx->runtime()->gc.stats().start();
    8005             : }
    8006             : 
    8007             : TimeStamp
    8008           0 : JS::GCDescription::endTime(JSContext* cx) const
    8009             : {
    8010           0 :     return cx->runtime()->gc.stats().end();
    8011             : }
    8012             : 
    8013             : TimeStamp
    8014           3 : JS::GCDescription::lastSliceStart(JSContext* cx) const
    8015             : {
    8016           3 :     return cx->runtime()->gc.stats().slices().back().start;
    8017             : }
    8018             : 
    8019             : TimeStamp
    8020           3 : JS::GCDescription::lastSliceEnd(JSContext* cx) const
    8021             : {
    8022           3 :     return cx->runtime()->gc.stats().slices().back().end;
    8023             : }
    8024             : 
    8025             : JS::UniqueChars
    8026           0 : JS::GCDescription::sliceToJSON(JSContext* cx) const
    8027             : {
    8028           0 :     size_t slices = cx->runtime()->gc.stats().slices().length();
    8029           0 :     MOZ_ASSERT(slices > 0);
    8030           0 :     return cx->runtime()->gc.stats().renderJsonSlice(slices - 1);
    8031             : }
    8032             : 
    8033             : JS::UniqueChars
    8034           0 : JS::GCDescription::summaryToJSON(JSContext* cx) const
    8035             : {
    8036           0 :     return cx->runtime()->gc.stats().renderJsonMessage(0, false);
    8037             : }
    8038             : 
    8039             : JS_PUBLIC_API(JS::UniqueChars)
    8040           0 : JS::MinorGcToJSON(JSContext* cx)
    8041             : {
    8042           0 :     JSRuntime* rt = cx->runtime();
    8043           0 :     return rt->gc.stats().renderNurseryJson(rt);
    8044             : }
    8045             : 
    8046             : JS_PUBLIC_API(JS::GCSliceCallback)
    8047           9 : JS::SetGCSliceCallback(JSContext* cx, GCSliceCallback callback)
    8048             : {
    8049           9 :     return cx->runtime()->gc.setSliceCallback(callback);
    8050             : }
    8051             : 
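// ---------------------------------------------------------------------------
// [Editor's sketch - not part of jsgc.cpp] Registering a slice callback. The
// JS::GCProgress enum value is assumed from the GC API header of this era;
// the GCDescription accessors are the ones defined above.
//
//   static void
//   MySliceCallback(JSContext* cx, JS::GCProgress progress,
//                   const JS::GCDescription& desc)
//   {
//       if (progress == JS::GC_SLICE_END) {            // assumed enum value
//           mozilla::TimeDuration sliceTime =
//               desc.lastSliceEnd(cx) - desc.lastSliceStart(cx);
//           // ... record sliceTime ...
//       }
//   }
//
//   // During setup; the previously installed callback is returned:
//   JS::GCSliceCallback prev = JS::SetGCSliceCallback(cx, MySliceCallback);
// ---------------------------------------------------------------------------
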
    8052             : JS_PUBLIC_API(JS::DoCycleCollectionCallback)
    8053           3 : JS::SetDoCycleCollectionCallback(JSContext* cx, JS::DoCycleCollectionCallback callback)
    8054             : {
    8055           3 :     return cx->runtime()->gc.setDoCycleCollectionCallback(callback);
    8056             : }
    8057             : 
    8058             : JS_PUBLIC_API(JS::GCNurseryCollectionCallback)
    8059           3 : JS::SetGCNurseryCollectionCallback(JSContext* cx, GCNurseryCollectionCallback callback)
    8060             : {
    8061           3 :     return cx->runtime()->gc.setNurseryCollectionCallback(callback);
    8062             : }
    8063             : 
    8064             : JS_PUBLIC_API(void)
    8065           0 : JS::DisableIncrementalGC(JSContext* cx)
    8066             : {
    8067           0 :     cx->runtime()->gc.disallowIncrementalGC();
    8068           0 : }
    8069             : 
    8070             : JS_PUBLIC_API(bool)
    8071           1 : JS::IsIncrementalGCEnabled(JSContext* cx)
    8072             : {
    8073           1 :     return cx->runtime()->gc.isIncrementalGCEnabled();
    8074             : }
    8075             : 
    8076             : JS_PUBLIC_API(bool)
    8077        1249 : JS::IsIncrementalGCInProgress(JSContext* cx)
    8078             : {
    8079        1249 :     return cx->runtime()->gc.isIncrementalGCInProgress() && !cx->runtime()->gc.isVerifyPreBarriersEnabled();
    8080             : }
    8081             : 
    8082             : JS_PUBLIC_API(bool)
    8083           0 : JS::IsIncrementalGCInProgress(JSRuntime* rt)
    8084             : {
    8085           0 :     return rt->gc.isIncrementalGCInProgress() && !rt->gc.isVerifyPreBarriersEnabled();
    8086             : }
    8087             : 
    8088             : JS_PUBLIC_API(bool)
    8089           0 : JS::IsIncrementalBarrierNeeded(JSContext* cx)
    8090             : {
    8091           0 :     if (JS::CurrentThreadIsHeapBusy())
    8092           0 :         return false;
    8093             : 
    8094           0 :     auto state = cx->runtime()->gc.state();
    8095           0 :     return state != gc::State::NotActive && state <= gc::State::Sweep;
    8096             : }
    8097             : 
    8098             : JS_PUBLIC_API(void)
    8099        1811 : JS::IncrementalPreWriteBarrier(JSObject* obj)
    8100             : {
    8101        1811 :     if (!obj)
    8102        1811 :         return;
    8103             : 
    8104           0 :     MOZ_ASSERT(!JS::CurrentThreadIsHeapMajorCollecting());
    8105           0 :     JSObject::writeBarrierPre(obj);
    8106             : }
    8107             : 
    8108             : struct IncrementalReadBarrierFunctor {
    8109        3277 :     template <typename T> void operator()(T* t) { T::readBarrier(t); }
    8110             : };
    8111             : 
    8112             : JS_PUBLIC_API(void)
    8113        3277 : JS::IncrementalReadBarrier(GCCellPtr thing)
    8114             : {
    8115        3277 :     if (!thing)
    8116           0 :         return;
    8117             : 
    8118        3277 :     MOZ_ASSERT(!JS::CurrentThreadIsHeapMajorCollecting());
    8119        3277 :     DispatchTyped(IncrementalReadBarrierFunctor(), thing);
    8120             : }
    8121             : 
    8122             : JS_PUBLIC_API(bool)
    8123           0 : JS::WasIncrementalGC(JSRuntime* rt)
    8124             : {
    8125           0 :     return rt->gc.isIncrementalGc();
    8126             : }
    8127             : 
    8128             : uint64_t
    8129        6693 : js::gc::NextCellUniqueId(JSRuntime* rt)
    8130             : {
    8131        6693 :     return rt->gc.nextCellUniqueId();
    8132             : }
    8133             : 
    8134             : namespace js {
    8135             : namespace gc {
    8136             : namespace MemInfo {
    8137             : 
    8138             : static bool
    8139           0 : GCBytesGetter(JSContext* cx, unsigned argc, Value* vp)
    8140             : {
    8141           0 :     CallArgs args = CallArgsFromVp(argc, vp);
    8142           0 :     args.rval().setNumber(double(cx->runtime()->gc.usage.gcBytes()));
    8143           0 :     return true;
    8144             : }
    8145             : 
    8146             : static bool
    8147           0 : GCMaxBytesGetter(JSContext* cx, unsigned argc, Value* vp)
    8148             : {
    8149           0 :     CallArgs args = CallArgsFromVp(argc, vp);
    8150           0 :     args.rval().setNumber(double(cx->runtime()->gc.tunables.gcMaxBytes()));
    8151           0 :     return true;
    8152             : }
    8153             : 
    8154             : static bool
    8155           0 : MallocBytesGetter(JSContext* cx, unsigned argc, Value* vp)
    8156             : {
    8157           0 :     CallArgs args = CallArgsFromVp(argc, vp);
    8158           0 :     args.rval().setNumber(double(cx->runtime()->gc.getMallocBytes()));
    8159           0 :     return true;
    8160             : }
    8161             : 
    8162             : static bool
    8163           0 : MaxMallocGetter(JSContext* cx, unsigned argc, Value* vp)
    8164             : {
    8165           0 :     CallArgs args = CallArgsFromVp(argc, vp);
    8166           0 :     args.rval().setNumber(double(cx->runtime()->gc.maxMallocBytesAllocated()));
    8167           0 :     return true;
    8168             : }
    8169             : 
    8170             : static bool
    8171           0 : GCHighFreqGetter(JSContext* cx, unsigned argc, Value* vp)
    8172             : {
    8173           0 :     CallArgs args = CallArgsFromVp(argc, vp);
    8174           0 :     args.rval().setBoolean(cx->runtime()->gc.schedulingState.inHighFrequencyGCMode());
    8175           0 :     return true;
    8176             : }
    8177             : 
    8178             : static bool
    8179           0 : GCNumberGetter(JSContext* cx, unsigned argc, Value* vp)
    8180             : {
    8181           0 :     CallArgs args = CallArgsFromVp(argc, vp);
    8182           0 :     args.rval().setNumber(double(cx->runtime()->gc.gcNumber()));
    8183           0 :     return true;
    8184             : }
    8185             : 
    8186             : static bool
    8187           0 : MajorGCCountGetter(JSContext* cx, unsigned argc, Value* vp)
    8188             : {
    8189           0 :     CallArgs args = CallArgsFromVp(argc, vp);
    8190           0 :     args.rval().setNumber(double(cx->runtime()->gc.majorGCCount()));
    8191           0 :     return true;
    8192             : }
    8193             : 
    8194             : static bool
    8195           0 : MinorGCCountGetter(JSContext* cx, unsigned argc, Value* vp)
    8196             : {
    8197           0 :     CallArgs args = CallArgsFromVp(argc, vp);
    8198           0 :     args.rval().setNumber(double(cx->runtime()->gc.minorGCCount()));
    8199           0 :     return true;
    8200             : }
    8201             : 
    8202             : static bool
    8203           0 : ZoneGCBytesGetter(JSContext* cx, unsigned argc, Value* vp)
    8204             : {
    8205           0 :     CallArgs args = CallArgsFromVp(argc, vp);
    8206           0 :     args.rval().setNumber(double(cx->zone()->usage.gcBytes()));
    8207           0 :     return true;
    8208             : }
    8209             : 
    8210             : static bool
    8211           0 : ZoneGCTriggerBytesGetter(JSContext* cx, unsigned argc, Value* vp)
    8212             : {
    8213           0 :     CallArgs args = CallArgsFromVp(argc, vp);
    8214           0 :     args.rval().setNumber(double(cx->zone()->threshold.gcTriggerBytes()));
    8215           0 :     return true;
    8216             : }
    8217             : 
    8218             : static bool
    8219           0 : ZoneGCAllocTriggerGetter(JSContext* cx, unsigned argc, Value* vp)
    8220             : {
    8221           0 :     CallArgs args = CallArgsFromVp(argc, vp);
    8222           0 :     bool highFrequency = cx->runtime()->gc.schedulingState.inHighFrequencyGCMode();
    8223           0 :     args.rval().setNumber(double(cx->zone()->threshold.allocTrigger(highFrequency)));
    8224           0 :     return true;
    8225             : }
    8226             : 
    8227             : static bool
    8228           0 : ZoneMallocBytesGetter(JSContext* cx, unsigned argc, Value* vp)
    8229             : {
    8230           0 :     CallArgs args = CallArgsFromVp(argc, vp);
    8231           0 :     args.rval().setNumber(double(cx->zone()->GCMallocBytes()));
    8232           0 :     return true;
    8233             : }
    8234             : 
    8235             : static bool
    8236           0 : ZoneMaxMallocGetter(JSContext* cx, unsigned argc, Value* vp)
    8237             : {
    8238           0 :     CallArgs args = CallArgsFromVp(argc, vp);
    8239           0 :     args.rval().setNumber(double(cx->zone()->GCMaxMallocBytes()));
    8240           0 :     return true;
    8241             : }
    8242             : 
    8243             : static bool
    8244           0 : ZoneGCDelayBytesGetter(JSContext* cx, unsigned argc, Value* vp)
    8245             : {
    8246           0 :     CallArgs args = CallArgsFromVp(argc, vp);
    8247           0 :     args.rval().setNumber(double(cx->zone()->gcDelayBytes));
    8248           0 :     return true;
    8249             : }
    8250             : 
    8251             : static bool
    8252           0 : ZoneGCHeapGrowthFactorGetter(JSContext* cx, unsigned argc, Value* vp)
    8253             : {
    8254           0 :     CallArgs args = CallArgsFromVp(argc, vp);
    8255           0 :     AutoLockGC lock(cx->runtime());
    8256           0 :     args.rval().setNumber(cx->zone()->threshold.gcHeapGrowthFactor());
    8257           0 :     return true;
    8258             : }
    8259             : 
    8260             : static bool
    8261           0 : ZoneGCNumberGetter(JSContext* cx, unsigned argc, Value* vp)
    8262             : {
    8263           0 :     CallArgs args = CallArgsFromVp(argc, vp);
    8264           0 :     args.rval().setNumber(double(cx->zone()->gcNumber()));
    8265           0 :     return true;
    8266             : }
    8267             : 
    8268             : #ifdef JS_MORE_DETERMINISTIC
    8269             : static bool
    8270             : DummyGetter(JSContext* cx, unsigned argc, Value* vp)
    8271             : {
    8272             :     CallArgs args = CallArgsFromVp(argc, vp);
    8273             :     args.rval().setUndefined();
    8274             :     return true;
    8275             : }
    8276             : #endif
    8277             : 
    8278             : } /* namespace MemInfo */
    8279             : 
    8280             : JSObject*
    8281           0 : NewMemoryInfoObject(JSContext* cx)
    8282             : {
    8283           0 :     RootedObject obj(cx, JS_NewObject(cx, nullptr));
    8284           0 :     if (!obj)
    8285           0 :         return nullptr;
    8286             : 
    8287             :     using namespace MemInfo;
    8288             :     struct NamedGetter {
    8289             :         const char* name;
    8290             :         JSNative getter;
    8291             :     } getters[] = {
    8292             :         { "gcBytes", GCBytesGetter },
    8293             :         { "gcMaxBytes", GCMaxBytesGetter },
    8294             :         { "mallocBytesRemaining", MallocBytesGetter },
    8295             :         { "maxMalloc", MaxMallocGetter },
    8296             :         { "gcIsHighFrequencyMode", GCHighFreqGetter },
    8297             :         { "gcNumber", GCNumberGetter },
    8298             :         { "majorGCCount", MajorGCCountGetter },
    8299             :         { "minorGCCount", MinorGCCountGetter }
    8300           0 :     };
    8301             : 
    8302           0 :     for (auto pair : getters) {
    8303             : #ifdef JS_MORE_DETERMINISTIC
    8304             :         JSNative getter = DummyGetter;
    8305             : #else
    8306           0 :         JSNative getter = pair.getter;
    8307             : #endif
    8308           0 :         if (!JS_DefineProperty(cx, obj, pair.name, UndefinedHandleValue,
    8309             :                                JSPROP_ENUMERATE | JSPROP_SHARED,
    8310             :                                getter, nullptr))
    8311             :         {
    8312           0 :             return nullptr;
    8313             :         }
    8314             :     }
    8315             : 
    8316           0 :     RootedObject zoneObj(cx, JS_NewObject(cx, nullptr));
    8317           0 :     if (!zoneObj)
    8318           0 :         return nullptr;
    8319             : 
    8320           0 :     if (!JS_DefineProperty(cx, obj, "zone", zoneObj, JSPROP_ENUMERATE))
    8321           0 :         return nullptr;
    8322             : 
    8323             :     struct NamedZoneGetter {
    8324             :         const char* name;
    8325             :         JSNative getter;
    8326             :     } zoneGetters[] = {
    8327             :         { "gcBytes", ZoneGCBytesGetter },
    8328             :         { "gcTriggerBytes", ZoneGCTriggerBytesGetter },
    8329             :         { "gcAllocTrigger", ZoneGCAllocTriggerGetter },
    8330             :         { "mallocBytesRemaining", ZoneMallocBytesGetter },
    8331             :         { "maxMalloc", ZoneMaxMallocGetter },
    8332             :         { "delayBytes", ZoneGCDelayBytesGetter },
    8333             :         { "heapGrowthFactor", ZoneGCHeapGrowthFactorGetter },
    8334             :         { "gcNumber", ZoneGCNumberGetter }
    8335           0 :     };
    8336             : 
    8337           0 :     for (auto pair : zoneGetters) {
    8338             :  #ifdef JS_MORE_DETERMINISTIC
    8339             :         JSNative getter = DummyGetter;
    8340             : #else
    8341           0 :         JSNative getter = pair.getter;
    8342             : #endif
    8343           0 :         if (!JS_DefineProperty(cx, zoneObj, pair.name, UndefinedHandleValue,
    8344             :                                JSPROP_ENUMERATE | JSPROP_SHARED,
    8345             :                                getter, nullptr))
    8346             :         {
    8347           0 :             return nullptr;
    8348             :         }
    8349             :     }
    8350             : 
    8351           0 :     return obj;
    8352             : }
    8353             : 
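// ---------------------------------------------------------------------------
// [Editor's sketch - not part of jsgc.cpp] Consuming the object built by
// NewMemoryInfoObject above: reading a property invokes the corresponding
// lazy getter native, so each number is computed on demand.
//
//   JS::RootedObject info(cx, js::gc::NewMemoryInfoObject(cx));
//   JS::RootedValue v(cx);
//   if (info && JS_GetProperty(cx, info, "gcBytes", &v))
//       printf("GC heap bytes: %.0f\n", v.toNumber());
// ---------------------------------------------------------------------------
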
    8354             : const char*
    8355           0 : StateName(State state)
    8356             : {
    8357           0 :     switch (state) {
    8358             : #define MAKE_CASE(name) case State::name: return #name;
    8359           0 :       GCSTATES(MAKE_CASE)
    8360             : #undef MAKE_CASE
    8361             :     }
    8362           0 :     MOZ_MAKE_COMPILER_ASSUME_IS_UNREACHABLE("invalid gc::State enum value");
    8363             : }
    8364             : 
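// ---------------------------------------------------------------------------
// [Editor's note] StateName above uses the X-macro pattern: GCSTATES(MAKE_CASE)
// expands MAKE_CASE once per collector state, so the switch stays in sync
// with the GCSTATES list. A self-contained sketch of the technique, with an
// illustrative (not the real) state list:
//
//   #define STATES(D) D(NotActive) D(Mark) D(Sweep)
//   enum class State { NotActive, Mark, Sweep };
//   const char* Name(State s) {
//   #define MAKE_CASE(name) case State::name: return #name;
//       switch (s) { STATES(MAKE_CASE) }
//   #undef MAKE_CASE
//       return "invalid";
//   }
// ---------------------------------------------------------------------------
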
    8365             : void
    8366           0 : AutoAssertHeapBusy::checkCondition(JSRuntime *rt)
    8367             : {
    8368           0 :     this->rt = rt;
    8369           0 :     MOZ_ASSERT(JS::CurrentThreadIsHeapBusy());
    8370           0 : }
    8371             : 
    8372             : void
    8373           0 : AutoAssertEmptyNursery::checkCondition(JSContext* cx) {
    8374           0 :     if (!noAlloc)
    8375           0 :         noAlloc.emplace();
    8376           0 :     this->cx = cx;
    8377           0 :     MOZ_ASSERT(AllNurseriesAreEmpty(cx->runtime()));
    8378           0 : }
    8379             : 
    8380           0 : AutoEmptyNursery::AutoEmptyNursery(JSContext* cx)
    8381           0 :   : AutoAssertEmptyNursery()
    8382             : {
    8383           0 :     MOZ_ASSERT(!cx->suppressGC);
    8384           0 :     cx->runtime()->gc.stats().suspendPhases();
    8385           0 :     EvictAllNurseries(cx->runtime(), JS::gcreason::EVICT_NURSERY);
    8386           0 :     cx->runtime()->gc.stats().resumePhases();
    8387           0 :     checkCondition(cx);
    8388           0 : }
    8389             : 
    8390             : } /* namespace gc */
    8391             : } /* namespace js */
    8392             : 
    8393             : #ifdef DEBUG
    8394             : void
    8395           0 : js::gc::Cell::dump(FILE* fp) const
    8396             : {
    8397           0 :     switch (getTraceKind()) {
    8398             :       case JS::TraceKind::Object:
    8399           0 :         reinterpret_cast<const JSObject*>(this)->dump(fp);
    8400           0 :         break;
    8401             : 
    8402             :       case JS::TraceKind::String:
    8403           0 :         js::DumpString(reinterpret_cast<JSString*>(const_cast<Cell*>(this)), fp);
    8404           0 :         break;
    8405             : 
    8406             :       case JS::TraceKind::Shape:
    8407           0 :         reinterpret_cast<const Shape*>(this)->dump(fp);
    8408           0 :         break;
    8409             : 
    8410             :       default:
    8411           0 :         fprintf(fp, "%s(%p)\n", JS::GCTraceKindToAscii(getTraceKind()), (void*) this);
    8412             :     }
    8413           0 : }
    8414             : 
    8415             : // For use in a debugger.
    8416             : void
    8417           0 : js::gc::Cell::dump() const
    8418             : {
    8419           0 :     dump(stderr);
    8420           0 : }
    8421             : #endif
    8422             : 
    8423             : static inline bool
    8424     1073101 : CanCheckGrayBits(const Cell* cell)
    8425             : {
    8426     1073101 :     MOZ_ASSERT(cell);
    8427     1073101 :     if (!cell->isTenured())
    8428      199568 :         return false;
    8429             : 
    8430      873537 :     auto tc = &cell->asTenured();
    8431      873534 :     auto rt = tc->runtimeFromAnyThread();
    8432      873534 :     return CurrentThreadCanAccessRuntime(rt) && rt->gc.areGrayBitsValid();
    8433             : }
    8434             : 
    8435             : JS_PUBLIC_API(bool)
    8436         655 : js::gc::detail::CellIsMarkedGrayIfKnown(const Cell* cell)
    8437             : {
    8438             :     // We ignore the gray marking state of cells and return false in the
    8439             :     // following cases:
    8440             :     //
    8441             :     // 1) When OOM has caused us to clear the gcGrayBitsValid_ flag.
    8442             :     //
    8443             :     // 2) When we are in an incremental GC and examine a cell that is in a zone
    8444             :     // that is not being collected. Gray targets of CCWs that are marked black
    8445             :     // by a barrier will eventually be marked black in the next GC slice.
    8446             :     //
    8447             :     // 3) When we are not on the runtime's active thread. Helper threads might
    8448             :     // call this while parsing, and they are not allowed to inspect the
    8449             :     // runtime's incremental state. The objects being operated on are not able
    8450             :     // to be collected and will not be marked any color.
    8451             : 
    8452         655 :     if (!CanCheckGrayBits(cell))
    8453         655 :         return false;
    8454             : 
    8455           0 :     auto tc = &cell->asTenured();
    8456           0 :     MOZ_ASSERT(!tc->zoneFromAnyThread()->usedByHelperThread());
    8457             : 
    8458           0 :     auto rt = tc->runtimeFromActiveCooperatingThread();
    8459           0 :     if (rt->gc.isIncrementalGCInProgress() && !tc->zone()->wasGCStarted())
    8460           0 :         return false;
    8461             : 
    8462           0 :     return detail::CellIsMarkedGray(tc);
    8463             : }
    8464             : 
    8465             : #ifdef DEBUG
    8466             : JS_PUBLIC_API(bool)
    8467     1072441 : js::gc::detail::CellIsNotGray(const Cell* cell)
    8468             : {
    8469             :     // Check that a cell is not marked gray.
    8470             :     //
    8471             :     // Since this is a debug-only check, take account of the eventual mark state
    8472             :     // of cells that will be marked black by the next GC slice in an incremental
    8473             :     // GC. For performance reasons we don't do this in CellIsMarkedGrayIfKnown.
    8474             : 
    8475             :     // TODO: I'd like to AssertHeapIsIdle() here, but this ends up getting
    8476             :     // called while iterating the heap for memory reporting.
    8477     1072441 :     MOZ_ASSERT(!JS::CurrentThreadIsHeapCollecting());
    8478     1072444 :     MOZ_ASSERT(!JS::CurrentThreadIsHeapCycleCollecting());
    8479             : 
    8480     1072446 :     if (!CanCheckGrayBits(cell))
    8481     1072444 :         return true;
    8482             : 
    8483           0 :     auto tc = &cell->asTenured();
    8484           0 :     if (!detail::CellIsMarkedGray(tc))
    8485           0 :         return true;
    8486             : 
    8487             :     // The cell is gray, but may eventually be marked black if we are in an
    8488             :     // incremental GC and the cell is reachable by something on the mark stack.
    8489             : 
    8490           0 :     auto rt = tc->runtimeFromAnyThread();
    8491           0 :     if (!rt->gc.isIncrementalGCInProgress() || tc->zone()->wasGCStarted())
    8492           0 :         return false;
    8493             : 
    8494           0 :     Zone* sourceZone = rt->gc.marker.stackContainsCrossZonePointerTo(tc);
    8495           0 :     if (sourceZone && sourceZone->wasGCStarted())
    8496           0 :         return true;
    8497             : 
    8498           0 :     return false;
    8499             : }
    8500             : #endif
