/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
/* vim: set ts=8 sts=2 et sw=2 tw=80: */
/* This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

//
// This file implements a garbage-cycle collector based on the paper
//
//   Concurrent Cycle Collection in Reference Counted Systems
//   Bacon & Rajan (2001), ECOOP 2001 / Springer LNCS vol 2072
//
// We are not using the concurrent or acyclic cases of that paper; so
// the green, red and orange colors are not used.
//
// The collector is based on tracking pointers of four colors:
//
// Black nodes are definitely live. If we ever determine that a node is
// black, it's ok to forget about it and drop it from our records.
//
// White nodes are definitely garbage cycles. Once we finish with our
// scanning, we unlink all the white nodes and expect that by
// unlinking them they will self-destruct (since a garbage cycle is
// only keeping itself alive with internal links, by definition).
//
// Snow-white is an addition to the original algorithm. A snow-white
// object has a reference count of zero and is just waiting for deletion.
//
// Grey nodes are being scanned. Nodes that turn grey will turn
// either black if we determine that they're live, or white if we
// determine that they're a garbage cycle. After the main collection
// algorithm there should be no grey nodes.
//
// Purple nodes are *candidates* for being scanned. They are nodes we
// haven't begun scanning yet because they're not old enough, or we're
// still partway through the algorithm.
//
// XPCOM objects participating in garbage-cycle collection are obliged
// to inform us when they ought to turn purple; that is, when their
// refcount transitions from N+1 -> N, for nonzero N. Furthermore we
// require that *after* an XPCOM object has informed us of turning
// purple, it will tell us when it either transitions back to being
// black (incremented refcount) or is ultimately deleted.

// Incremental cycle collection
//
// Beyond the simple state machine required to implement incremental
// collection, the CC needs to be able to compensate for things the browser
// is doing during the collection. There are two kinds of problems. For each
// of these, there are two cases to deal with: purple-buffered C++ objects
// and JS objects.

// The first problem is that an object in the CC's graph can become garbage.
// This is bad because the CC touches the objects in its graph at every
// stage of its operation.
//
// All cycle collected C++ objects that die during a cycle collection
// will end up actually getting deleted by the SnowWhiteKiller. Before
// the SWK deletes an object, it checks if an ICC is running, and if so,
// if the object is in the graph. If it is, the CC clears mPointer and
// mParticipant so it does not point to the raw object any more. Because
// objects could die any time the CC returns to the mutator, any time the CC
// accesses a PtrInfo it must perform a null check on mParticipant to
// ensure the object has not gone away.
//
// JS objects don't always run finalizers, so the CC can't remove them from
// the graph when they die. Fortunately, JS objects can only die during a GC,
// so if a GC is begun during an ICC, the browser synchronously finishes off
// the ICC, which clears the entire CC graph. If the GC and CC are scheduled
// properly, this should be rare.
//
// The second problem is that objects in the graph can be changed, say by
// being addrefed or released, or by having a field updated, after the object
// has been added to the graph. The problem is that ICC can miss a newly
// created reference to an object, and end up unlinking an object that is
// actually alive.
//
// The basic idea of the solution, from "An on-the-fly Reference Counting
// Garbage Collector for Java" by Levanoni and Petrank, is to notice if an
// object has had an additional reference to it created during the collection,
// and if so, don't collect it during the current collection. This avoids
// having to rerun the scan as in Bacon & Rajan 2001.
//
// For cycle collected C++ objects, we modify AddRef to place the object in
// the purple buffer, in addition to Release. Then, in the CC, we treat any
// objects in the purple buffer as being alive, after graph building has
// completed. Because they are in the purple buffer, they will be suspected
// in the next CC, so there's no danger of leaks. This is imprecise, because
// we will treat as live an object that has been Released but not AddRefed
// during graph building, but that's probably rare enough that the additional
// bookkeeping overhead is not worthwhile.
//
// For JS objects, the cycle collector is only looking at gray objects. If a
// gray object is touched during ICC, it will be made black by UnmarkGray.
// Thus, if a JS object has become black during the ICC, we treat it as live.
// Merged JS zones have to be handled specially: we scan all zone globals.
// If any are black, we treat the zone as being black.


// Safety
//
// An XPCOM object is either scan-safe or scan-unsafe, purple-safe or
// purple-unsafe.
//
// An nsISupports object is scan-safe if:
//
//  - It can be QI'ed to |nsXPCOMCycleCollectionParticipant|, though
//    this operation loses ISupports identity (like nsIClassInfo).
//  - Additionally, the operation |traverse| on the resulting
//    nsXPCOMCycleCollectionParticipant does not cause *any* refcount
//    adjustment to occur (no AddRef / Release calls).
//
// A non-nsISupports ("native") object is scan-safe by explicitly
// providing its nsCycleCollectionParticipant.
//
// An object is purple-safe if it satisfies the following properties:
//
//  - The object is scan-safe.
//
// When we receive a pointer |ptr| via
// |nsCycleCollector::suspect(ptr)|, we assume it is purple-safe. We
// can check the scan-safety, but have no way to ensure the
// purple-safety; objects must obey, or else the entire system falls
// apart. Don't involve an object in this scheme if you can't
// guarantee its purple-safety. The easiest way to ensure that an
// object is purple-safe is to use nsCycleCollectingAutoRefCnt.
//
// When we have a scannable set of purple nodes ready, we begin
// our walks. During the walks, the nodes we |traverse| should only
// feed us more scan-safe nodes, and should not adjust the refcounts
// of those nodes.
//
// We do not |AddRef| or |Release| any objects during scanning. We
// rely on the purple-safety of the roots that call |suspect| to
// hold, such that we will clear the pointer from the purple buffer
// entry to the object before it is destroyed. The pointers that are
// merely scan-safe we hold only for the duration of scanning, and
// there should be no objects released from the scan-safe set during
// the scan.
//
// We *do* call |Root| and |Unroot| on every white object, on
// either side of the calls to |Unlink|. This keeps the set of white
// objects alive during the unlinking.
//

#if !defined(__MINGW32__)
#ifdef WIN32
#include <crtdbg.h>
#include <errno.h>
#endif
#endif

#include "base/process_util.h"

#include "mozilla/ArrayUtils.h"
#include "mozilla/AutoRestore.h"
#include "mozilla/CycleCollectedJSContext.h"
#include "mozilla/CycleCollectedJSRuntime.h"
#include "mozilla/DebugOnly.h"
#include "mozilla/HoldDropJSObjects.h"
/* This must occur *after* base/process_util.h to avoid typedefs conflicts. */
#include "mozilla/LinkedList.h"
#include "mozilla/MemoryReporting.h"
#include "mozilla/Move.h"
#include "mozilla/SegmentedVector.h"

#include "nsCycleCollectionParticipant.h"
#include "nsCycleCollectionNoteRootCallback.h"
#include "nsDeque.h"
#include "nsCycleCollector.h"
#include "nsThreadUtils.h"
#include "nsXULAppAPI.h"
#include "prenv.h"
#include "nsPrintfCString.h"
#include "nsTArray.h"
#include "nsIConsoleService.h"
#include "mozilla/Attributes.h"
#include "nsICycleCollectorListener.h"
#include "nsISerialEventTarget.h"
#include "nsIMemoryReporter.h"
#include "nsIFile.h"
#include "nsDumpUtils.h"
#include "xpcpublic.h"
#include "GeckoProfiler.h"
#include <stdint.h>
#include <stdio.h>

#include "mozilla/AutoGlobalTimelineMarker.h"
#include "mozilla/Likely.h"
#include "mozilla/PoisonIOInterposer.h"
#include "mozilla/Telemetry.h"
#include "mozilla/ThreadLocal.h"

#ifdef MOZ_CRASHREPORTER
#include "nsExceptionHandler.h"
#endif

using namespace mozilla;

//#define COLLECT_TIME_DEBUG

// Enable assertions that are useful for diagnosing errors in graph
// construction.
//#define DEBUG_CC_GRAPH

#define DEFAULT_SHUTDOWN_COLLECTIONS 5

// One to do the freeing, then another to detect there is no more work to do.
#define NORMAL_SHUTDOWN_COLLECTIONS 2

// Cycle collector environment variables
//
// MOZ_CC_LOG_ALL: If defined, always log cycle collector heaps.
//
// MOZ_CC_LOG_SHUTDOWN: If defined, log cycle collector heaps at shutdown.
//
// MOZ_CC_LOG_THREAD: If set to "main", only automatically log main thread
// CCs. If set to "worker", only automatically log worker CCs. If set to "all",
// log either. The default value is "all". This must be used with either
// MOZ_CC_LOG_ALL or MOZ_CC_LOG_SHUTDOWN for it to do anything.
//
// MOZ_CC_LOG_PROCESS: If set to "main", only automatically log main process
// CCs. If set to "content", only automatically log tab CCs. If set to
// "plugins", only automatically log plugin CCs. If set to "all", log
// everything. The default value is "all". This must be used with either
// MOZ_CC_LOG_ALL or MOZ_CC_LOG_SHUTDOWN for it to do anything.
//
// MOZ_CC_ALL_TRACES: If set to "all", any cycle collector
// logging done will be WantAllTraces, which disables
// various cycle collector optimizations to give a fuller picture of
// the heap. If set to "shutdown", only shutdown logging will be WantAllTraces.
// The default is none.
//
// MOZ_CC_RUN_DURING_SHUTDOWN: In non-DEBUG builds, if this is set,
// run cycle collections at shutdown.
//
// MOZ_CC_LOG_DIRECTORY: The directory in which logs are placed (such as
// logs from MOZ_CC_LOG_ALL and MOZ_CC_LOG_SHUTDOWN, or other uses
// of nsICycleCollectorListener).

// Various parameters of this collector can be tuned using environment
// variables.
struct nsCycleCollectorParams
{
  bool mLogAll;
  bool mLogShutdown;
  bool mAllTracesAll;
  bool mAllTracesShutdown;
  bool mLogThisThread;

  nsCycleCollectorParams() :
    mLogAll(PR_GetEnv("MOZ_CC_LOG_ALL") != nullptr),
    mLogShutdown(PR_GetEnv("MOZ_CC_LOG_SHUTDOWN") != nullptr),
    mAllTracesAll(false),
    mAllTracesShutdown(false)
  {
    const char* logThreadEnv = PR_GetEnv("MOZ_CC_LOG_THREAD");
    bool threadLogging = true;
    if (logThreadEnv && !!strcmp(logThreadEnv, "all")) {
      if (NS_IsMainThread()) {
        threadLogging = !strcmp(logThreadEnv, "main");
      } else {
        threadLogging = !strcmp(logThreadEnv, "worker");
      }
    }

    const char* logProcessEnv = PR_GetEnv("MOZ_CC_LOG_PROCESS");
    bool processLogging = true;
    if (logProcessEnv && !!strcmp(logProcessEnv, "all")) {
      switch (XRE_GetProcessType()) {
        case GeckoProcessType_Default:
          processLogging = !strcmp(logProcessEnv, "main");
          break;
        case GeckoProcessType_Plugin:
          processLogging = !strcmp(logProcessEnv, "plugins");
          break;
        case GeckoProcessType_Content:
          processLogging = !strcmp(logProcessEnv, "content");
          break;
        default:
          processLogging = false;
          break;
      }
    }
    mLogThisThread = threadLogging && processLogging;

    const char* allTracesEnv = PR_GetEnv("MOZ_CC_ALL_TRACES");
    if (allTracesEnv) {
      if (!strcmp(allTracesEnv, "all")) {
        mAllTracesAll = true;
      } else if (!strcmp(allTracesEnv, "shutdown")) {
        mAllTracesShutdown = true;
      }
    }
  }

  bool LogThisCC(bool aIsShutdown)
  {
    return (mLogAll || (aIsShutdown && mLogShutdown)) && mLogThisThread;
  }

  bool AllTracesThisCC(bool aIsShutdown)
  {
    return mAllTracesAll || (aIsShutdown && mAllTracesShutdown);
  }
};

#ifdef COLLECT_TIME_DEBUG
class TimeLog
{
public:
  TimeLog() : mLastCheckpoint(TimeStamp::Now())
  {
  }

  void
  Checkpoint(const char* aEvent)
  {
    TimeStamp now = TimeStamp::Now();
    double dur = (now - mLastCheckpoint).ToMilliseconds();
    if (dur >= 0.5) {
      printf("cc: %s took %.1fms\n", aEvent, dur);
    }
    mLastCheckpoint = now;
  }

private:
  TimeStamp mLastCheckpoint;
};
#else
class TimeLog
{
public:
  TimeLog()
  {
  }
  void Checkpoint(const char* aEvent)
  {
  }
};
#endif


////////////////////////////////////////////////////////////////////////
// Base types
////////////////////////////////////////////////////////////////////////

struct PtrInfo;

class EdgePool
{
public:
  // EdgePool allocates arrays of void*, primarily to hold PtrInfo*.
  // However, at the end of a block, the last two pointers are a null
  // and then a void** pointing to the next block. This allows
  // EdgePool::Iterators to be a single word but still capable of crossing
  // block boundaries.

  EdgePool()
  {
    mSentinelAndBlocks[0].block = nullptr;
    mSentinelAndBlocks[1].block = nullptr;
  }

  ~EdgePool()
  {
    MOZ_ASSERT(!mSentinelAndBlocks[0].block &&
               !mSentinelAndBlocks[1].block,
               "Didn't call Clear()?");
  }

  void Clear()
  {
    EdgeBlock* b = EdgeBlocks();
    while (b) {
      EdgeBlock* next = b->Next();
      delete b;
      b = next;
    }

    mSentinelAndBlocks[0].block = nullptr;
    mSentinelAndBlocks[1].block = nullptr;
  }

#ifdef DEBUG
  bool IsEmpty()
  {
    return !mSentinelAndBlocks[0].block &&
           !mSentinelAndBlocks[1].block;
  }
#endif

private:
  struct EdgeBlock;
  union PtrInfoOrBlock
  {
    // Use a union to avoid reinterpret_cast and the ensuing
    // potential aliasing bugs.
    PtrInfo* ptrInfo;
    EdgeBlock* block;
  };
  struct EdgeBlock
  {
    enum { EdgeBlockSize = 16 * 1024 };

    PtrInfoOrBlock mPointers[EdgeBlockSize];
    EdgeBlock()
    {
      mPointers[EdgeBlockSize - 2].block = nullptr; // sentinel
      mPointers[EdgeBlockSize - 1].block = nullptr; // next block pointer
    }
    EdgeBlock*& Next()
    {
      return mPointers[EdgeBlockSize - 1].block;
    }
    PtrInfoOrBlock* Start()
    {
      return &mPointers[0];
    }
    PtrInfoOrBlock* End()
    {
      return &mPointers[EdgeBlockSize - 2];
    }
  };

  // Store the null sentinel so that we can have valid iterators
  // before adding any edges and without adding any blocks.
  PtrInfoOrBlock mSentinelAndBlocks[2];

  EdgeBlock*& EdgeBlocks()
  {
    return mSentinelAndBlocks[1].block;
  }
  EdgeBlock* EdgeBlocks() const
  {
    return mSentinelAndBlocks[1].block;
  }

public:
  class Iterator
  {
  public:
    Iterator() : mPointer(nullptr) {}
    explicit Iterator(PtrInfoOrBlock* aPointer) : mPointer(aPointer) {}
    Iterator(const Iterator& aOther) : mPointer(aOther.mPointer) {}

    Iterator& operator++()
    {
      if (!mPointer->ptrInfo) {
        // Null pointer is a sentinel for link to the next block.
        mPointer = (mPointer + 1)->block->mPointers;
      }
      ++mPointer;
      return *this;
    }

    PtrInfo* operator*() const
    {
      if (!mPointer->ptrInfo) {
        // Null pointer is a sentinel for link to the next block.
        return (mPointer + 1)->block->mPointers->ptrInfo;
      }
      return mPointer->ptrInfo;
    }
    bool operator==(const Iterator& aOther) const
    {
      return mPointer == aOther.mPointer;
    }
    bool operator!=(const Iterator& aOther) const
    {
      return mPointer != aOther.mPointer;
    }

#ifdef DEBUG_CC_GRAPH
    bool Initialized() const
    {
      return mPointer != nullptr;
    }
#endif

  private:
    PtrInfoOrBlock* mPointer;
  };

  class Builder;
  friend class Builder;
  class Builder
  {
  public:
    explicit Builder(EdgePool& aPool)
      : mCurrent(&aPool.mSentinelAndBlocks[0])
      , mBlockEnd(&aPool.mSentinelAndBlocks[0])
      , mNextBlockPtr(&aPool.EdgeBlocks())
    {
    }

    Iterator Mark()
    {
      return Iterator(mCurrent);
    }

    void Add(PtrInfo* aEdge)
    {
      if (mCurrent == mBlockEnd) {
        EdgeBlock* b = new EdgeBlock();
        *mNextBlockPtr = b;
        mCurrent = b->Start();
        mBlockEnd = b->End();
        mNextBlockPtr = &b->Next();
      }
      (mCurrent++)->ptrInfo = aEdge;
    }
  private:
    // mBlockEnd points to space for the null sentinel.
    PtrInfoOrBlock* mCurrent;
    PtrInfoOrBlock* mBlockEnd;
    EdgeBlock** mNextBlockPtr;
  };

  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const
  {
    size_t n = 0;
    EdgeBlock* b = EdgeBlocks();
    while (b) {
      n += aMallocSizeOf(b);
      b = b->Next();
    }
    return n;
  }
};

#ifdef DEBUG_CC_GRAPH
#define CC_GRAPH_ASSERT(b) MOZ_ASSERT(b)
#else
#define CC_GRAPH_ASSERT(b)
#endif

#define CC_TELEMETRY(_name, _value)                                            \
  do {                                                                         \
    if (NS_IsMainThread()) {                                                   \
      Telemetry::Accumulate(Telemetry::CYCLE_COLLECTOR##_name, _value);        \
    } else {                                                                   \
      Telemetry::Accumulate(Telemetry::CYCLE_COLLECTOR_WORKER##_name, _value); \
    }                                                                          \
  } while(0)

enum NodeColor { black, white, grey };

// This structure should be kept as small as possible; we may expect
// hundreds of thousands of them to be allocated and touched
// repeatedly during each cycle collection.

struct PtrInfo
{
  void* mPointer;
  nsCycleCollectionParticipant* mParticipant;
  uint32_t mColor : 2;
  uint32_t mInternalRefs : 30;
  uint32_t mRefCount;
private:
  EdgePool::Iterator mFirstChild;

  static const uint32_t kInitialRefCount = UINT32_MAX - 1;

public:

  PtrInfo(void* aPointer, nsCycleCollectionParticipant* aParticipant)
    : mPointer(aPointer),
      mParticipant(aParticipant),
      mColor(grey),
      mInternalRefs(0),
      mRefCount(kInitialRefCount),
      mFirstChild()
  {
    MOZ_ASSERT(aParticipant);

    // We initialize mRefCount to a large non-zero value so
    // that it doesn't look like a JS object to the cycle collector
    // in the case where the object dies before being traversed.
    MOZ_ASSERT(!IsGrayJS() && !IsBlackJS());
  }

  // Allow NodePool::NodeBlock's constructor to compile.
  PtrInfo()
  {
    NS_NOTREACHED("should never be called");
  }

  bool IsGrayJS() const
  {
    return mRefCount == 0;
  }

  bool IsBlackJS() const
  {
    return mRefCount == UINT32_MAX;
  }

  bool WasTraversed() const
  {
    return mRefCount != kInitialRefCount;
  }

  EdgePool::Iterator FirstChild() const
  {
    CC_GRAPH_ASSERT(mFirstChild.Initialized());
    return mFirstChild;
  }

  // This PtrInfo must be part of a NodePool.
  EdgePool::Iterator LastChild() const
  {
    CC_GRAPH_ASSERT((this + 1)->mFirstChild.Initialized());
    return (this + 1)->mFirstChild;
  }

  void SetFirstChild(EdgePool::Iterator aFirstChild)
  {
    CC_GRAPH_ASSERT(aFirstChild.Initialized());
    mFirstChild = aFirstChild;
  }

  // This PtrInfo must be part of a NodePool.
  void SetLastChild(EdgePool::Iterator aLastChild)
  {
    CC_GRAPH_ASSERT(aLastChild.Initialized());
    (this + 1)->mFirstChild = aLastChild;
  }
};

/**
 * A structure designed to be used like a linked list of PtrInfo, except
 * it allocates many PtrInfos at a time.
 */
class NodePool
{
private:
  // The -2 allows us to use |NodeBlockSize + 1| for |mEntries|, and fit
  // |mNext|, all without causing slop.
  enum { NodeBlockSize = 4 * 1024 - 2 };

  struct NodeBlock
  {
    // We create and destroy NodeBlock using moz_xmalloc/free rather than new
    // and delete to avoid calling its constructor and destructor.
    NodeBlock()
    {
      NS_NOTREACHED("should never be called");

      // Ensure NodeBlock is the right size (see the comment on NodeBlockSize
      // above).
      static_assert(
        sizeof(NodeBlock) == 81904 ||  // 32-bit; equals 19.996 x 4 KiB pages
        sizeof(NodeBlock) == 131048,   // 64-bit; equals 31.994 x 4 KiB pages
        "ill-sized NodeBlock"
      );
    }
    ~NodeBlock()
    {
      NS_NOTREACHED("should never be called");
    }

    NodeBlock* mNext;
    PtrInfo mEntries[NodeBlockSize + 1]; // +1 to store last child of last node
  };

public:
  NodePool()
    : mBlocks(nullptr)
    , mLast(nullptr)
  {
  }

  ~NodePool()
  {
    MOZ_ASSERT(!mBlocks, "Didn't call Clear()?");
  }

  void Clear()
  {
    NodeBlock* b = mBlocks;
    while (b) {
      NodeBlock* n = b->mNext;
      free(b);
      b = n;
    }

    mBlocks = nullptr;
    mLast = nullptr;
  }

#ifdef DEBUG
  bool IsEmpty()
  {
    return !mBlocks && !mLast;
  }
#endif

  class Builder;
  friend class Builder;
  class Builder
  {
  public:
    explicit Builder(NodePool& aPool)
      : mNextBlock(&aPool.mBlocks)
      , mNext(aPool.mLast)
      , mBlockEnd(nullptr)
    {
      MOZ_ASSERT(!aPool.mBlocks && !aPool.mLast, "pool not empty");
    }
    PtrInfo* Add(void* aPointer, nsCycleCollectionParticipant* aParticipant)
    {
      if (mNext == mBlockEnd) {
        NodeBlock* block = static_cast<NodeBlock*>(malloc(sizeof(NodeBlock)));
        if (!block) {
          return nullptr;
        }

        *mNextBlock = block;
        mNext = block->mEntries;
        mBlockEnd = block->mEntries + NodeBlockSize;
        block->mNext = nullptr;
        mNextBlock = &block->mNext;
      }
      return new (mozilla::KnownNotNull, mNext++) PtrInfo(aPointer,
                                                          aParticipant);
    }
  private:
    NodeBlock** mNextBlock;
    PtrInfo*& mNext;
    PtrInfo* mBlockEnd;
  };

  class Enumerator;
  friend class Enumerator;
  class Enumerator
  {
  public:
    explicit Enumerator(NodePool& aPool)
      : mFirstBlock(aPool.mBlocks)
      , mCurBlock(nullptr)
      , mNext(nullptr)
      , mBlockEnd(nullptr)
      , mLast(aPool.mLast)
    {
    }

    bool IsDone() const
    {
      return mNext == mLast;
    }

    bool AtBlockEnd() const
    {
      return mNext == mBlockEnd;
    }

    PtrInfo* GetNext()
    {
      MOZ_ASSERT(!IsDone(), "calling GetNext when done");
      if (mNext == mBlockEnd) {
        NodeBlock* nextBlock = mCurBlock ? mCurBlock->mNext : mFirstBlock;
        mNext = nextBlock->mEntries;
        mBlockEnd = mNext + NodeBlockSize;
        mCurBlock = nextBlock;
      }
      return mNext++;
    }
  private:
    // mFirstBlock is a reference to allow an Enumerator to be constructed
    // for an empty graph.
    NodeBlock*& mFirstBlock;
    NodeBlock* mCurBlock;
    // mNext is the next value we want to return, unless mNext == mBlockEnd.
    // NB: mLast is a reference to allow enumerating while building!
    PtrInfo* mNext;
    PtrInfo* mBlockEnd;
    PtrInfo*& mLast;
  };

  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const
  {
    // We don't measure the things pointed to by mEntries[] because those
    // pointers are non-owning.
    size_t n = 0;
    NodeBlock* b = mBlocks;
    while (b) {
      n += aMallocSizeOf(b);
      b = b->mNext;
    }
    return n;
  }

private:
  NodeBlock* mBlocks;
  PtrInfo* mLast;
};


// Declarations for mPtrToNodeMap.

struct PtrToNodeEntry : public PLDHashEntryHdr
{
  // The key is mNode->mPointer
  PtrInfo* mNode;
};

static bool
PtrToNodeMatchEntry(const PLDHashEntryHdr* aEntry, const void* aKey)
{
  const PtrToNodeEntry* n = static_cast<const PtrToNodeEntry*>(aEntry);
  return n->mNode->mPointer == aKey;
}

static PLDHashTableOps PtrNodeOps = {
  PLDHashTable::HashVoidPtrKeyStub,
  PtrToNodeMatchEntry,
  PLDHashTable::MoveEntryStub,
  PLDHashTable::ClearEntryStub,
  nullptr
};


struct WeakMapping
{
  // map and key will be null if the corresponding objects are GC marked
  PtrInfo* mMap;
  PtrInfo* mKey;
  PtrInfo* mKeyDelegate;
  PtrInfo* mVal;
};

class CCGraphBuilder;

struct CCGraph
{
  NodePool mNodes;
  EdgePool mEdges;
  nsTArray<WeakMapping> mWeakMaps;
  uint32_t mRootCount;

private:
  PLDHashTable mPtrToNodeMap;
  bool mOutOfMemory;

  static const uint32_t kInitialMapLength = 16384;

public:
  CCGraph()
    : mRootCount(0)
    , mPtrToNodeMap(&PtrNodeOps, sizeof(PtrToNodeEntry), kInitialMapLength)
    , mOutOfMemory(false)
  {}

  ~CCGraph() {}

  void Init()
  {
    MOZ_ASSERT(IsEmpty(), "Failed to call CCGraph::Clear");
  }

  void Clear()
  {
    mNodes.Clear();
    mEdges.Clear();
    mWeakMaps.Clear();
    mRootCount = 0;
    mPtrToNodeMap.ClearAndPrepareForLength(kInitialMapLength);
    mOutOfMemory = false;
  }

#ifdef DEBUG
  bool IsEmpty()
  {
    return mNodes.IsEmpty() && mEdges.IsEmpty() &&
           mWeakMaps.IsEmpty() && mRootCount == 0 &&
           mPtrToNodeMap.EntryCount() == 0;
  }
#endif

  PtrInfo* FindNode(void* aPtr);
  PtrToNodeEntry* AddNodeToMap(void* aPtr);
  void RemoveObjectFromMap(void* aObject);

  uint32_t MapCount() const
  {
    return mPtrToNodeMap.EntryCount();
  }

  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const
  {
    size_t n = 0;

    n += mNodes.SizeOfExcludingThis(aMallocSizeOf);
    n += mEdges.SizeOfExcludingThis(aMallocSizeOf);

    // We don't measure what the WeakMappings point to, because the
    // pointers are non-owning.
    n += mWeakMaps.ShallowSizeOfExcludingThis(aMallocSizeOf);

    n += mPtrToNodeMap.ShallowSizeOfExcludingThis(aMallocSizeOf);

    return n;
  }

private:
  PtrToNodeEntry* FindNodeEntry(void* aPtr)
  {
    return static_cast<PtrToNodeEntry*>(mPtrToNodeMap.Search(aPtr));
  }
};

PtrInfo*
CCGraph::FindNode(void* aPtr)
{
  PtrToNodeEntry* e = FindNodeEntry(aPtr);
  return e ? e->mNode : nullptr;
}

PtrToNodeEntry*
CCGraph::AddNodeToMap(void* aPtr)
{
  JS::AutoSuppressGCAnalysis suppress;
  if (mOutOfMemory) {
    return nullptr;
  }

  auto e = static_cast<PtrToNodeEntry*>(mPtrToNodeMap.Add(aPtr, fallible));
  if (!e) {
    mOutOfMemory = true;
    MOZ_ASSERT(false, "Ran out of memory while building cycle collector graph");
    return nullptr;
  }
  return e;
}

void
CCGraph::RemoveObjectFromMap(void* aObj)
{
  PtrToNodeEntry* e = FindNodeEntry(aObj);
  PtrInfo* pinfo = e ? e->mNode : nullptr;
  if (pinfo) {
    mPtrToNodeMap.RemoveEntry(e);

    pinfo->mPointer = nullptr;
    pinfo->mParticipant = nullptr;
  }
}
948 :
949 :
950 : static nsISupports*
951 1080 : CanonicalizeXPCOMParticipant(nsISupports* aIn)
952 : {
953 1080 : nsISupports* out = nullptr;
954 : aIn->QueryInterface(NS_GET_IID(nsCycleCollectionISupports),
955 1080 : reinterpret_cast<void**>(&out));
956 1080 : return out;
957 : }
958 :
959 : static inline void
960 : ToParticipant(nsISupports* aPtr, nsXPCOMCycleCollectionParticipant** aCp);
961 :
962 : static void
963 1788 : CanonicalizeParticipant(void** aParti, nsCycleCollectionParticipant** aCp)
964 : {
965 : // If the participant is null, this is an nsISupports participant,
966 : // so we must QI to get the real participant.
967 :
968 1788 : if (!*aCp) {
969 1080 : nsISupports* nsparti = static_cast<nsISupports*>(*aParti);
970 1080 : nsparti = CanonicalizeXPCOMParticipant(nsparti);
971 1080 : NS_ASSERTION(nsparti,
972 : "Don't add objects that don't participate in collection!");
973 : nsXPCOMCycleCollectionParticipant* xcp;
974 1080 : ToParticipant(nsparti, &xcp);
975 1080 : *aParti = nsparti;
976 1080 : *aCp = xcp;
977 : }
978 1788 : }
979 :
980 : struct nsPurpleBufferEntry
981 : {
982 24809 : nsPurpleBufferEntry(void* aObject, nsCycleCollectingAutoRefCnt* aRefCnt,
983 : nsCycleCollectionParticipant* aParticipant)
984 24809 : : mObject(aObject)
985 : , mRefCnt(aRefCnt)
986 24809 : , mParticipant(aParticipant)
987 : {
988 24809 : }
989 :
990 24809 : nsPurpleBufferEntry(nsPurpleBufferEntry&& aOther)
991 24809 : : mObject(nullptr)
992 : , mRefCnt(nullptr)
993 24809 : , mParticipant(nullptr)
994 : {
995 24809 : Swap(aOther);
996 24809 : }
997 :
998 26261 : void Swap(nsPurpleBufferEntry& aOther)
999 : {
1000 26261 : std::swap(mObject, aOther.mObject);
1001 26261 : std::swap(mRefCnt, aOther.mRefCnt);
1002 26261 : std::swap(mParticipant, aOther.mParticipant);
1003 26261 : }
1004 :
1005 1788 : void Clear()
1006 : {
1007 1788 : mRefCnt->RemoveFromPurpleBuffer();
1008 1788 : mRefCnt = nullptr;
1009 1788 : mObject = nullptr;
1010 1788 : mParticipant = nullptr;
1011 1788 : }
1012 :
1013 26597 : ~nsPurpleBufferEntry()
1014 26597 : {
1015 26597 : if (mRefCnt) {
1016 0 : mRefCnt->RemoveFromPurpleBuffer();
1017 : }
1018 26597 : }
1019 :
1020 : void* mObject;
1021 : nsCycleCollectingAutoRefCnt* mRefCnt;
1022 : nsCycleCollectionParticipant* mParticipant; // nullptr for nsISupports
1023 : };
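The move constructor above starts from an empty state and swaps with the source, so exactly one entry ever owns the purple-buffer registration and the destructor can use a non-null `mRefCnt` as the "still registered" test. A minimal standalone sketch of that idiom (`RefCntStub` and `EntrySketch` are hypothetical stand-ins, not the Mozilla types):

```cpp
#include <utility>

// Stand-in for nsCycleCollectingAutoRefCnt's purple-buffer bookkeeping.
struct RefCntStub
{
  bool mInPurpleBuffer = true;
  void RemoveFromPurpleBuffer() { mInPurpleBuffer = false; }
};

struct EntrySketch
{
  explicit EntrySketch(RefCntStub* aRefCnt) : mRefCnt(aRefCnt) {}

  // Move by swapping with an empty entry: the source is left null,
  // so its destructor is a no-op and ownership transfers cleanly.
  EntrySketch(EntrySketch&& aOther) : mRefCnt(nullptr)
  {
    Swap(aOther);
  }

  void Swap(EntrySketch& aOther) { std::swap(mRefCnt, aOther.mRefCnt); }

  ~EntrySketch()
  {
    if (mRefCnt) {  // Only the current owner unregisters.
      mRefCnt->RemoveFromPurpleBuffer();
    }
  }

  RefCntStub* mRefCnt;
};
```

The same reasoning explains the assertion in `nsPurpleBuffer::Put`: after appending a moved entry, the local's `mRefCnt` must be null.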
1024 :
1025 : class nsCycleCollector;
1026 :
1027 : struct nsPurpleBuffer
1028 : {
1029 : private:
1030 : uint32_t mCount;
1031 :
1032 : // Try to match the size of a jemalloc bucket, to minimize slop bytes.
1033 : // - On 32-bit platforms sizeof(nsPurpleBufferEntry) is 12, so mEntries'
1034 : // Segment is 16,372 bytes.
1035 : // - On 64-bit platforms sizeof(nsPurpleBufferEntry) is 24, so mEntries'
1036 : // Segment is 32,760 bytes.
1037 : static const uint32_t kEntriesPerSegment = 1365;
1038 : static const size_t kSegmentSize =
1039 : sizeof(nsPurpleBufferEntry) * kEntriesPerSegment;
1040 : typedef
1041 : SegmentedVector<nsPurpleBufferEntry, kSegmentSize, InfallibleAllocPolicy>
1042 : PurpleBufferVector;
1043 : PurpleBufferVector mEntries;
1044 : public:
1045 4 : nsPurpleBuffer()
1046 4 : : mCount(0)
1047 : {
1048 : static_assert(
1049 : sizeof(PurpleBufferVector::Segment) == 16372 || // 32-bit
1050 : sizeof(PurpleBufferVector::Segment) == 32760 || // 64-bit
1051 : sizeof(PurpleBufferVector::Segment) == 32744, // 64-bit Windows
1052 : "ill-sized nsPurpleBuffer::mEntries");
1053 4 : }
1054 :
1055 0 : ~nsPurpleBuffer()
1056 0 : {
1057 0 : }
1058 :
1059 : // This method compacts mEntries.
1060 : template<class PurpleVisitor>
1061 1 : void VisitEntries(PurpleVisitor& aVisitor)
1062 : {
1063 1 : if (mEntries.IsEmpty()) {
1064 0 : return;
1065 : }
1066 :
1067 1 : uint32_t oldLength = mEntries.Length();
1068 1 : uint32_t newLength = 0;
1069 1 : auto revIter = mEntries.IterFromLast();
1070 1 : auto iter = mEntries.Iter();
1071 : // After iteration this points to the first empty entry.
1072 1 : auto firstEmptyIter = mEntries.Iter();
1073 1 : auto iterFromLastEntry = mEntries.IterFromLast();
1074 41467 : for (; !iter.Done(); iter.Next()) {
1075 20734 : nsPurpleBufferEntry& e = iter.Get();
1076 20734 : if (e.mObject) {
1077 20734 : if (!aVisitor.Visit(*this, &e)) {
1078 0 : return;
1079 : }
1080 : }
1081 :
1082 :       // The Visit call above may have cleared the entry, or the entry may
1083 :       // have been empty already.
1084 20734 : if (!e.mObject) {
1085 : // Try to find a non-empty entry from the end of the vector.
1086 2124 : for (; !revIter.Done(); revIter.Prev()) {
1087 1788 : nsPurpleBufferEntry& otherEntry = revIter.Get();
1088 1788 : if (&e == &otherEntry) {
1089 0 : break;
1090 : }
1091 1788 : if (otherEntry.mObject) {
1092 1788 : if (!aVisitor.Visit(*this, &otherEntry)) {
1093 0 : return;
1094 : }
1095 : // Visit may have cleared otherEntry.
1096 1788 : if (otherEntry.mObject) {
1097 1452 : e.Swap(otherEntry);
1098 1452 :               revIter.Prev(); // We've swapped this now-empty entry.
1099 1452 : break;
1100 : }
1101 : }
1102 : }
1103 : }
1104 :
1105 :       // The entry is non-empty even after the Visit call; ensure it is kept
1106 :       // in mEntries.
1107 20734 : if (e.mObject) {
1108 20734 : firstEmptyIter.Next();
1109 20734 : ++newLength;
1110 : }
1111 :
1112 20734 : if (&e == &revIter.Get()) {
1113 1 : break;
1114 : }
1115 : }
1116 :
1117 : // There were some empty entries.
1118 1 : if (oldLength != newLength) {
1119 :
1120 : // While visiting entries, some new ones were possibly added. This can
1121 : // happen during CanSkip. Move all such new entries to be after other
1122 : // entries. Note, we don't call Visit on newly added entries!
1123 1 : if (&iterFromLastEntry.Get() != &mEntries.GetLast()) {
1124 0 : iterFromLastEntry.Next(); // Now pointing to the first added entry.
1125 0 : auto& iterForNewEntries = iterFromLastEntry;
1126 0 : while (!iterForNewEntries.Done()) {
1127 0 : MOZ_ASSERT(!firstEmptyIter.Done());
1128 0 : MOZ_ASSERT(!firstEmptyIter.Get().mObject);
1129 0 : firstEmptyIter.Get().Swap(iterForNewEntries.Get());
1130 0 : firstEmptyIter.Next();
1131 0 : iterForNewEntries.Next();
1132 0 : ++newLength; // We keep all the new entries.
1133 : }
1134 : }
1135 :
1136 1 : mEntries.PopLastN(oldLength - newLength);
1137 : }
1138 : }
1139 :
1140 0 : void FreeBlocks()
1141 : {
1142 0 : mCount = 0;
1143 0 : mEntries.Clear();
1144 0 : }
1145 :
1146 : void SelectPointers(CCGraphBuilder& aBuilder);
1147 :
1148 :   // RemoveSkippable removes entries from the purple buffer synchronously
1149 :   // (1) if aAsyncSnowWhiteFreeing is false and nsPurpleBufferEntry::mRefCnt is 0, or
1150 :   // (2) if the object's nsXPCOMCycleCollectionParticipant::CanSkip() returns true, or
1151 :   // (3) if nsPurpleBufferEntry::mRefCnt->IsPurple() is false.
1152 :   // In addition, if aRemoveChildlessNodes is true, any nodes in the purple
1153 :   // buffer that will have no children in the cycle collector graph are also
1154 :   // removed. CanSkip() may be run on these children.
1155 : void RemoveSkippable(nsCycleCollector* aCollector,
1156 : js::SliceBudget& aBudget,
1157 : bool aRemoveChildlessNodes,
1158 : bool aAsyncSnowWhiteFreeing,
1159 : CC_ForgetSkippableCallback aCb);
1160 :
1161 24809 : MOZ_ALWAYS_INLINE void Put(void* aObject, nsCycleCollectionParticipant* aCp,
1162 : nsCycleCollectingAutoRefCnt* aRefCnt)
1163 : {
1164 49618 : nsPurpleBufferEntry entry(aObject, aRefCnt, aCp);
1165 24809 : Unused << mEntries.Append(Move(entry));
1166 24809 : MOZ_ASSERT(!entry.mRefCnt, "Move didn't work!");
1167 24809 : ++mCount;
1168 24809 : }
1169 :
1170 1788 : void Remove(nsPurpleBufferEntry* aEntry)
1171 : {
1172 1788 : MOZ_ASSERT(mCount != 0, "must have entries");
1173 1788 : --mCount;
1174 1788 : aEntry->Clear();
1175 1788 : }
1176 :
1177 3 : uint32_t Count() const
1178 : {
1179 3 : return mCount;
1180 : }
1181 :
1182 0 : size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const
1183 : {
1184 0 : return mEntries.SizeOfExcludingThis(aMallocSizeOf);
1185 : }
1186 : };
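The hole-filling compaction that `VisitEntries` performs can be sketched on a plain vector: walk forward, and whenever a cleared slot is found, swap a live entry in from the tail, then pop the trailing empties in one step. This sketch is illustrative only (0 stands in for a cleared entry; the real code swaps `nsPurpleBufferEntry` objects inside a `SegmentedVector` and also visits each live entry):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Compact aSlots in place, preserving no particular order: empty slots
// (value 0) are filled by swapping live values down from the tail.
void CompactSketch(std::vector<int>& aSlots)
{
  size_t front = 0;
  size_t back = aSlots.size();
  while (front < back) {
    if (aSlots[front] == 0) {
      // Find a live entry at the tail to move into the hole.
      while (back > front + 1 && aSlots[back - 1] == 0) {
        --back;
      }
      if (back == front + 1) {
        break;  // Nothing live behind the hole.
      }
      std::swap(aSlots[front], aSlots[--back]);
    }
    ++front;
  }
  aSlots.resize(front);  // Analogous to mEntries.PopLastN(old - new).
}
```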
1187 :
1188 : static bool
1189 : AddPurpleRoot(CCGraphBuilder& aBuilder, void* aRoot,
1190 : nsCycleCollectionParticipant* aParti);
1191 :
1192 : struct SelectPointersVisitor
1193 : {
1194 0 : explicit SelectPointersVisitor(CCGraphBuilder& aBuilder)
1195 0 : : mBuilder(aBuilder)
1196 : {
1197 0 : }
1198 :
1199 : bool
1200 0 : Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry)
1201 : {
1202 0 : MOZ_ASSERT(aEntry->mObject, "Null object in purple buffer");
1203 0 : MOZ_ASSERT(aEntry->mRefCnt->get() != 0,
1204 : "SelectPointersVisitor: snow-white object in the purple buffer");
1205 0 : if (!aEntry->mRefCnt->IsPurple() ||
1206 0 : AddPurpleRoot(mBuilder, aEntry->mObject, aEntry->mParticipant)) {
1207 0 : aBuffer.Remove(aEntry);
1208 : }
1209 0 : return true;
1210 : }
1211 :
1212 : private:
1213 : CCGraphBuilder& mBuilder;
1214 : };
1215 :
1216 : void
1217 0 : nsPurpleBuffer::SelectPointers(CCGraphBuilder& aBuilder)
1218 : {
1219 0 : SelectPointersVisitor visitor(aBuilder);
1220 0 : VisitEntries(visitor);
1221 :
1222 0 : MOZ_ASSERT(mCount == 0, "AddPurpleRoot failed");
1223 0 : if (mCount == 0) {
1224 0 : FreeBlocks();
1225 : }
1226 0 : }
1227 :
1228 : enum ccPhase
1229 : {
1230 : IdlePhase,
1231 : GraphBuildingPhase,
1232 : ScanAndCollectWhitePhase,
1233 : CleanupPhase
1234 : };
1235 :
1236 : enum ccType
1237 : {
1238 : SliceCC, /* If a CC is in progress, continue it. Otherwise, start a new one. */
1239 : ManualCC, /* Explicitly triggered. */
1240 : ShutdownCC /* Shutdown CC, used for finding leaks. */
1241 : };
1242 :
1243 : ////////////////////////////////////////////////////////////////////////
1244 : // Top level structure for the cycle collector.
1245 : ////////////////////////////////////////////////////////////////////////
1246 :
1247 : using js::SliceBudget;
1248 :
1249 : class JSPurpleBuffer;
1250 :
1251 : class nsCycleCollector : public nsIMemoryReporter
1252 : {
1253 : public:
1254 : NS_DECL_ISUPPORTS
1255 : NS_DECL_NSIMEMORYREPORTER
1256 :
1257 : private:
1258 : bool mActivelyCollecting;
1259 : bool mFreeingSnowWhite;
1260 : // mScanInProgress should be false when we're collecting white objects.
1261 : bool mScanInProgress;
1262 : CycleCollectorResults mResults;
1263 : TimeStamp mCollectionStart;
1264 :
1265 : CycleCollectedJSRuntime* mCCJSRuntime;
1266 :
1267 : ccPhase mIncrementalPhase;
1268 : CCGraph mGraph;
1269 : nsAutoPtr<CCGraphBuilder> mBuilder;
1270 : RefPtr<nsCycleCollectorLogger> mLogger;
1271 :
1272 : #ifdef DEBUG
1273 : nsISerialEventTarget* mEventTarget;
1274 : #endif
1275 :
1276 : nsCycleCollectorParams mParams;
1277 :
1278 : uint32_t mWhiteNodeCount;
1279 :
1280 : CC_BeforeUnlinkCallback mBeforeUnlinkCB;
1281 : CC_ForgetSkippableCallback mForgetSkippableCB;
1282 :
1283 : nsPurpleBuffer mPurpleBuf;
1284 :
1285 : uint32_t mUnmergedNeeded;
1286 : uint32_t mMergedInARow;
1287 :
1288 : RefPtr<JSPurpleBuffer> mJSPurpleBuffer;
1289 :
1290 : private:
1291 : virtual ~nsCycleCollector();
1292 :
1293 : public:
1294 : nsCycleCollector();
1295 :
1296 : void SetCCJSRuntime(CycleCollectedJSRuntime* aCCRuntime);
1297 : void ClearCCJSRuntime();
1298 :
1299 3 : void SetBeforeUnlinkCallback(CC_BeforeUnlinkCallback aBeforeUnlinkCB)
1300 : {
1301 3 : CheckThreadSafety();
1302 3 : mBeforeUnlinkCB = aBeforeUnlinkCB;
1303 3 : }
1304 :
1305 3 : void SetForgetSkippableCallback(CC_ForgetSkippableCallback aForgetSkippableCB)
1306 : {
1307 3 : CheckThreadSafety();
1308 3 : mForgetSkippableCB = aForgetSkippableCB;
1309 3 : }
1310 :
1311 : void Suspect(void* aPtr, nsCycleCollectionParticipant* aCp,
1312 : nsCycleCollectingAutoRefCnt* aRefCnt);
1313 : uint32_t SuspectedCount();
1314 : void ForgetSkippable(js::SliceBudget& aBudget, bool aRemoveChildlessNodes,
1315 : bool aAsyncSnowWhiteFreeing);
1316 : bool FreeSnowWhite(bool aUntilNoSWInPurpleBuffer);
1317 :
1318 : // This method assumes its argument is already canonicalized.
1319 : void RemoveObjectFromGraph(void* aPtr);
1320 :
1321 : void PrepareForGarbageCollection();
1322 : void FinishAnyCurrentCollection();
1323 :
1324 : bool Collect(ccType aCCType,
1325 : SliceBudget& aBudget,
1326 : nsICycleCollectorListener* aManualListener,
1327 : bool aPreferShorterSlices = false);
1328 : void Shutdown(bool aDoCollect);
1329 :
1330 1789 : bool IsIdle() const { return mIncrementalPhase == IdlePhase; }
1331 :
1332 : void SizeOfIncludingThis(mozilla::MallocSizeOf aMallocSizeOf,
1333 : size_t* aObjectSize,
1334 : size_t* aGraphSize,
1335 : size_t* aPurpleBufferSize) const;
1336 :
1337 : JSPurpleBuffer* GetJSPurpleBuffer();
1338 :
1339 1788 : CycleCollectedJSRuntime* Runtime() { return mCCJSRuntime; }
1340 :
1341 : private:
1342 : void CheckThreadSafety();
1343 : void ShutdownCollect();
1344 :
1345 : void FixGrayBits(bool aForceGC, TimeLog& aTimeLog);
1346 : bool IsIncrementalGCInProgress();
1347 : void FinishAnyIncrementalGCInProgress();
1348 : bool ShouldMergeZones(ccType aCCType);
1349 :
1350 : void BeginCollection(ccType aCCType, nsICycleCollectorListener* aManualListener);
1351 : void MarkRoots(SliceBudget& aBudget);
1352 : void ScanRoots(bool aFullySynchGraphBuild);
1353 : void ScanIncrementalRoots();
1354 : void ScanWhiteNodes(bool aFullySynchGraphBuild);
1355 : void ScanBlackNodes();
1356 : void ScanWeakMaps();
1357 :
1358 :   // Returns whether anything was collected.
1359 : bool CollectWhite();
1360 :
1361 : void CleanupAfterCollection();
1362 : };
1363 :
1364 15 : NS_IMPL_ISUPPORTS(nsCycleCollector, nsIMemoryReporter)
1365 :
1366 : /**
1367 : * GraphWalker is templatized over a Visitor class that must provide
1368 : * the following two methods:
1369 : *
1370 : * bool ShouldVisitNode(PtrInfo const *pi);
1371 : * void VisitNode(PtrInfo *pi);
1372 : */
1373 : template<class Visitor>
1374 : class GraphWalker
1375 : {
1376 : private:
1377 : Visitor mVisitor;
1378 :
1379 : void DoWalk(nsDeque& aQueue);
1380 :
1381 0 : void CheckedPush(nsDeque& aQueue, PtrInfo* aPi)
1382 : {
1383 0 : if (!aPi) {
1384 0 : MOZ_CRASH();
1385 : }
1386 0 : if (!aQueue.Push(aPi, fallible)) {
1387 0 : mVisitor.Failed();
1388 : }
1389 0 : }
1390 :
1391 : public:
1392 : void Walk(PtrInfo* aPi);
1393 : void WalkFromRoots(CCGraph& aGraph);
1394 :   // Copy-constructing the visitor should be cheap, and involves less
1395 :   // indirection than using a reference.
1396 0 : explicit GraphWalker(const Visitor aVisitor) : mVisitor(aVisitor)
1397 : {
1398 0 : }
1399 : };
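The walker's contract can be sketched with standard containers (`Node`, `WalkSketch`, and `MarkVisitor` below are hypothetical stand-ins): a breadth-first walk parameterized over a visitor that decides, per node, whether to visit and expand it. The real walker additionally handles fallible queue pushes and only expands nodes that were traversed during graph building.

```cpp
#include <deque>
#include <vector>

struct Node
{
  bool mVisited = false;
  std::vector<Node*> mChildren;
};

template<class Visitor>
void WalkSketch(Node* aRoot, Visitor& aVisitor)
{
  std::deque<Node*> queue;
  queue.push_back(aRoot);
  while (!queue.empty()) {
    Node* n = queue.front();
    queue.pop_front();
    if (aVisitor.ShouldVisitNode(n)) {
      aVisitor.VisitNode(n);
      for (Node* child : n->mChildren) {
        queue.push_back(child);
      }
    }
  }
}

// A visitor that marks each node at most once, so walks over cyclic
// graphs terminate.
struct MarkVisitor
{
  int mCount = 0;
  bool ShouldVisitNode(Node* aNode) { return !aNode->mVisited; }
  void VisitNode(Node* aNode) { aNode->mVisited = true; ++mCount; }
};
```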
1400 :
1401 :
1402 : ////////////////////////////////////////////////////////////////////////
1403 : // The static collector struct
1404 : ////////////////////////////////////////////////////////////////////////
1405 :
1406 4 : struct CollectorData
1407 : {
1408 : RefPtr<nsCycleCollector> mCollector;
1409 : CycleCollectedJSContext* mContext;
1410 : };
1411 :
1412 : static MOZ_THREAD_LOCAL(CollectorData*) sCollectorData;
1413 :
1414 : ////////////////////////////////////////////////////////////////////////
1415 : // Utility functions
1416 : ////////////////////////////////////////////////////////////////////////
1417 :
1418 : static inline void
1419 21524 : ToParticipant(nsISupports* aPtr, nsXPCOMCycleCollectionParticipant** aCp)
1420 : {
1421 : // We use QI to move from an nsISupports to an
1422 : // nsXPCOMCycleCollectionParticipant, which is a per-class singleton helper
1423 : // object that implements traversal and unlinking logic for the nsISupports
1424 : // in question.
1425 21524 : *aCp = nullptr;
1426 21524 : CallQueryInterface(aPtr, aCp);
1427 21524 : }
1428 :
1429 : template<class Visitor>
1430 : MOZ_NEVER_INLINE void
1431 0 : GraphWalker<Visitor>::Walk(PtrInfo* aPi)
1432 : {
1433 0 : nsDeque queue;
1434 0 : CheckedPush(queue, aPi);
1435 0 : DoWalk(queue);
1436 0 : }
1437 :
1438 : template<class Visitor>
1439 : MOZ_NEVER_INLINE void
1440 : GraphWalker<Visitor>::WalkFromRoots(CCGraph& aGraph)
1441 : {
1442 : nsDeque queue;
1443 : NodePool::Enumerator etor(aGraph.mNodes);
1444 : for (uint32_t i = 0; i < aGraph.mRootCount; ++i) {
1445 : CheckedPush(queue, etor.GetNext());
1446 : }
1447 : DoWalk(queue);
1448 : }
1449 :
1450 : template<class Visitor>
1451 : MOZ_NEVER_INLINE void
1452 0 : GraphWalker<Visitor>::DoWalk(nsDeque& aQueue)
1453 : {
1454 :   // Use aQueue to match the breadth-first traversal used when we
1455 :   // built the graph, for hopefully better locality.
1456 0 : while (aQueue.GetSize() > 0) {
1457 0 : PtrInfo* pi = static_cast<PtrInfo*>(aQueue.PopFront());
1458 :
1459 0 : if (pi->WasTraversed() && mVisitor.ShouldVisitNode(pi)) {
1460 0 : mVisitor.VisitNode(pi);
1461 0 : for (EdgePool::Iterator child = pi->FirstChild(),
1462 0 : child_end = pi->LastChild();
1463 : child != child_end; ++child) {
1464 0 : CheckedPush(aQueue, *child);
1465 : }
1466 : }
1467 : }
1468 0 : }
1469 :
1470 0 : struct CCGraphDescriber : public LinkedListElement<CCGraphDescriber>
1471 : {
1472 0 : CCGraphDescriber()
1473 0 : : mAddress("0x"), mCnt(0), mType(eUnknown)
1474 : {
1475 0 : }
1476 :
1477 : enum Type
1478 : {
1479 : eRefCountedObject,
1480 : eGCedObject,
1481 : eGCMarkedObject,
1482 : eEdge,
1483 : eRoot,
1484 : eGarbage,
1485 : eUnknown
1486 : };
1487 :
1488 : nsCString mAddress;
1489 : nsCString mName;
1490 : nsCString mCompartmentOrToAddress;
1491 : uint32_t mCnt;
1492 : Type mType;
1493 : };
1494 :
1495 0 : class LogStringMessageAsync : public CancelableRunnable
1496 : {
1497 : public:
1498 0 : explicit LogStringMessageAsync(const nsAString& aMsg)
1499 0 : : mozilla::CancelableRunnable("LogStringMessageAsync")
1500 0 : , mMsg(aMsg)
1501 0 : {}
1502 :
1503 0 : NS_IMETHOD Run() override
1504 : {
1505 : nsCOMPtr<nsIConsoleService> cs =
1506 0 : do_GetService(NS_CONSOLESERVICE_CONTRACTID);
1507 0 : if (cs) {
1508 0 : cs->LogStringMessage(mMsg.get());
1509 : }
1510 0 : return NS_OK;
1511 : }
1512 :
1513 : private:
1514 : nsString mMsg;
1515 : };
1516 :
1517 : class nsCycleCollectorLogSinkToFile final : public nsICycleCollectorLogSink
1518 : {
1519 : public:
1520 : NS_DECL_ISUPPORTS
1521 :
1522 0 : nsCycleCollectorLogSinkToFile() :
1523 0 : mProcessIdentifier(base::GetCurrentProcId()),
1524 0 : mGCLog("gc-edges"), mCCLog("cc-edges")
1525 : {
1526 0 : }
1527 :
1528 0 : NS_IMETHOD GetFilenameIdentifier(nsAString& aIdentifier) override
1529 : {
1530 0 : aIdentifier = mFilenameIdentifier;
1531 0 : return NS_OK;
1532 : }
1533 :
1534 0 : NS_IMETHOD SetFilenameIdentifier(const nsAString& aIdentifier) override
1535 : {
1536 0 : mFilenameIdentifier = aIdentifier;
1537 0 : return NS_OK;
1538 : }
1539 :
1540 0 : NS_IMETHOD GetProcessIdentifier(int32_t* aIdentifier) override
1541 : {
1542 0 : *aIdentifier = mProcessIdentifier;
1543 0 : return NS_OK;
1544 : }
1545 :
1546 0 : NS_IMETHOD SetProcessIdentifier(int32_t aIdentifier) override
1547 : {
1548 0 : mProcessIdentifier = aIdentifier;
1549 0 : return NS_OK;
1550 : }
1551 :
1552 0 : NS_IMETHOD GetGcLog(nsIFile** aPath) override
1553 : {
1554 0 : NS_IF_ADDREF(*aPath = mGCLog.mFile);
1555 0 : return NS_OK;
1556 : }
1557 :
1558 0 : NS_IMETHOD GetCcLog(nsIFile** aPath) override
1559 : {
1560 0 : NS_IF_ADDREF(*aPath = mCCLog.mFile);
1561 0 : return NS_OK;
1562 : }
1563 :
1564 0 : NS_IMETHOD Open(FILE** aGCLog, FILE** aCCLog) override
1565 : {
1566 : nsresult rv;
1567 :
1568 0 : if (mGCLog.mStream || mCCLog.mStream) {
1569 0 : return NS_ERROR_UNEXPECTED;
1570 : }
1571 :
1572 0 : rv = OpenLog(&mGCLog);
1573 0 : NS_ENSURE_SUCCESS(rv, rv);
1574 0 : *aGCLog = mGCLog.mStream;
1575 :
1576 0 : rv = OpenLog(&mCCLog);
1577 0 : NS_ENSURE_SUCCESS(rv, rv);
1578 0 : *aCCLog = mCCLog.mStream;
1579 :
1580 0 : return NS_OK;
1581 : }
1582 :
1583 0 : NS_IMETHOD CloseGCLog() override
1584 : {
1585 0 : if (!mGCLog.mStream) {
1586 0 : return NS_ERROR_UNEXPECTED;
1587 : }
1588 0 : CloseLog(&mGCLog, NS_LITERAL_STRING("Garbage"));
1589 0 : return NS_OK;
1590 : }
1591 :
1592 0 : NS_IMETHOD CloseCCLog() override
1593 : {
1594 0 : if (!mCCLog.mStream) {
1595 0 : return NS_ERROR_UNEXPECTED;
1596 : }
1597 0 : CloseLog(&mCCLog, NS_LITERAL_STRING("Cycle"));
1598 0 : return NS_OK;
1599 : }
1600 :
1601 : private:
1602 0 : ~nsCycleCollectorLogSinkToFile()
1603 0 : {
1604 0 : if (mGCLog.mStream) {
1605 0 : MozillaUnRegisterDebugFILE(mGCLog.mStream);
1606 0 : fclose(mGCLog.mStream);
1607 : }
1608 0 : if (mCCLog.mStream) {
1609 0 : MozillaUnRegisterDebugFILE(mCCLog.mStream);
1610 0 : fclose(mCCLog.mStream);
1611 : }
1612 0 : }
1613 :
1614 0 : struct FileInfo
1615 : {
1616 : const char* const mPrefix;
1617 : nsCOMPtr<nsIFile> mFile;
1618 : FILE* mStream;
1619 :
1620 0 : explicit FileInfo(const char* aPrefix) : mPrefix(aPrefix), mStream(nullptr) { }
1621 : };
1622 :
1623 : /**
1624 : * Create a new file named something like aPrefix.$PID.$IDENTIFIER.log in
1625 : * $MOZ_CC_LOG_DIRECTORY or in the system's temp directory. No existing
1626 : * file will be overwritten; if aPrefix.$PID.$IDENTIFIER.log exists, we'll
1627 : * try a file named something like aPrefix.$PID.$IDENTIFIER-1.log, and so
1628 : * on.
1629 : */
1630 0 : already_AddRefed<nsIFile> CreateTempFile(const char* aPrefix)
1631 : {
1632 : nsPrintfCString filename("%s.%d%s%s.log",
1633 : aPrefix,
1634 : mProcessIdentifier,
1635 0 : mFilenameIdentifier.IsEmpty() ? "" : ".",
1636 0 : NS_ConvertUTF16toUTF8(mFilenameIdentifier).get());
1637 :
1638 : // Get the log directory either from $MOZ_CC_LOG_DIRECTORY or from
1639 : // the fallback directories in OpenTempFile. We don't use an nsCOMPtr
1640 : // here because OpenTempFile uses an in/out param and getter_AddRefs
1641 : // wouldn't work.
1642 0 : nsIFile* logFile = nullptr;
1643 0 : if (char* env = PR_GetEnv("MOZ_CC_LOG_DIRECTORY")) {
1644 0 : NS_NewNativeLocalFile(nsCString(env), /* followLinks = */ true,
1645 0 : &logFile);
1646 : }
1647 :
1648 : // On Android or B2G, this function will open a file named
1649 : // aFilename under a memory-reporting-specific folder
1650 : // (/data/local/tmp/memory-reports). Otherwise, it will open a
1651 : // file named aFilename under "NS_OS_TEMP_DIR".
1652 0 : nsresult rv = nsDumpUtils::OpenTempFile(filename, &logFile,
1653 0 : NS_LITERAL_CSTRING("memory-reports"));
1654 0 : if (NS_FAILED(rv)) {
1655 0 : NS_IF_RELEASE(logFile);
1656 0 : return nullptr;
1657 : }
1658 :
1659 0 : return dont_AddRef(logFile);
1660 : }
1661 :
1662 0 : nsresult OpenLog(FileInfo* aLog)
1663 : {
1664 : // Initially create the log in a file starting with "incomplete-".
1665 : // We'll move the file and strip off the "incomplete-" once the dump
1666 : // completes. (We do this because we don't want scripts which poll
1667 : // the filesystem looking for GC/CC dumps to grab a file before we're
1668 : // finished writing to it.)
1669 0 : nsAutoCString incomplete;
1670 0 : incomplete += "incomplete-";
1671 0 : incomplete += aLog->mPrefix;
1672 0 : MOZ_ASSERT(!aLog->mFile);
1673 0 : aLog->mFile = CreateTempFile(incomplete.get());
1674 0 : if (NS_WARN_IF(!aLog->mFile)) {
1675 0 : return NS_ERROR_UNEXPECTED;
1676 : }
1677 :
1678 0 : MOZ_ASSERT(!aLog->mStream);
1679 0 : nsresult rv = aLog->mFile->OpenANSIFileDesc("w", &aLog->mStream);
1680 0 : if (NS_WARN_IF(NS_FAILED(rv))) {
1681 0 : return NS_ERROR_UNEXPECTED;
1682 : }
1683 0 : MozillaRegisterDebugFILE(aLog->mStream);
1684 0 : return NS_OK;
1685 : }
1686 :
1687 0 : nsresult CloseLog(FileInfo* aLog, const nsAString& aCollectorKind)
1688 : {
1689 0 : MOZ_ASSERT(aLog->mStream);
1690 0 : MOZ_ASSERT(aLog->mFile);
1691 :
1692 0 : MozillaUnRegisterDebugFILE(aLog->mStream);
1693 0 : fclose(aLog->mStream);
1694 0 : aLog->mStream = nullptr;
1695 :
1696 : // Strip off "incomplete-".
1697 : nsCOMPtr<nsIFile> logFileFinalDestination =
1698 0 : CreateTempFile(aLog->mPrefix);
1699 0 : if (NS_WARN_IF(!logFileFinalDestination)) {
1700 0 : return NS_ERROR_UNEXPECTED;
1701 : }
1702 :
1703 0 : nsAutoString logFileFinalDestinationName;
1704 0 : logFileFinalDestination->GetLeafName(logFileFinalDestinationName);
1705 0 : if (NS_WARN_IF(logFileFinalDestinationName.IsEmpty())) {
1706 0 : return NS_ERROR_UNEXPECTED;
1707 : }
1708 :
1709 0 : aLog->mFile->MoveTo(/* directory */ nullptr, logFileFinalDestinationName);
1710 :
1711 : // Save the file path.
1712 0 : aLog->mFile = logFileFinalDestination;
1713 :
1714 : // Log to the error console.
1715 0 : nsAutoString logPath;
1716 0 : logFileFinalDestination->GetPath(logPath);
1717 0 : nsAutoString msg = aCollectorKind +
1718 0 : NS_LITERAL_STRING(" Collector log dumped to ") + logPath;
1719 :
1720 : // We don't want any JS to run between ScanRoots and CollectWhite calls,
1721 : // and since ScanRoots calls this method, better to log the message
1722 : // asynchronously.
1723 0 : RefPtr<LogStringMessageAsync> log = new LogStringMessageAsync(msg);
1724 0 : NS_DispatchToCurrentThread(log);
1725 0 : return NS_OK;
1726 : }
1727 :
1728 : int32_t mProcessIdentifier;
1729 : nsString mFilenameIdentifier;
1730 : FileInfo mGCLog;
1731 : FileInfo mCCLog;
1732 : };
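The "incomplete-" write-then-rename scheme used by `OpenLog`/`CloseLog` can be sketched with `std::filesystem`: write under a temporary name and only rename to the final name once the dump is complete, so filesystem pollers never grab a half-written log. This is a minimal sketch under assumed paths; error handling is elided, and the real code also registers the `FILE*` for debugging and logs the final path to the console.

```cpp
#include <cstdio>
#include <filesystem>
#include <string>

// Write aContents to aDir/aName via an "incomplete-"-prefixed temporary,
// then rename it into place. Returns the final path.
std::filesystem::path WriteLogSketch(const std::filesystem::path& aDir,
                                     const std::string& aName,
                                     const std::string& aContents)
{
  std::filesystem::path tmp = aDir / ("incomplete-" + aName);
  std::filesystem::path dest = aDir / aName;

  if (FILE* f = std::fopen(tmp.string().c_str(), "w")) {
    std::fwrite(aContents.data(), 1, aContents.size(), f);
    std::fclose(f);  // Flush everything before exposing the file.
  }

  std::filesystem::rename(tmp, dest);  // Atomic on POSIX filesystems.
  return dest;
}
```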
1733 :
1734 0 : NS_IMPL_ISUPPORTS(nsCycleCollectorLogSinkToFile, nsICycleCollectorLogSink)
1735 :
1736 :
1737 : class nsCycleCollectorLogger final : public nsICycleCollectorListener
1738 : {
1739 0 : ~nsCycleCollectorLogger()
1740 0 : {
1741 0 : ClearDescribers();
1742 0 : }
1743 :
1744 : public:
1745 0 : nsCycleCollectorLogger()
1746 0 : : mLogSink(nsCycleCollector_createLogSink())
1747 : , mWantAllTraces(false)
1748 : , mDisableLog(false)
1749 : , mWantAfterProcessing(false)
1750 0 : , mCCLog(nullptr)
1751 : {
1752 0 : }
1753 :
1754 : NS_DECL_ISUPPORTS
1755 :
1756 0 : void SetAllTraces()
1757 : {
1758 0 : mWantAllTraces = true;
1759 0 : }
1760 :
1761 0 : bool IsAllTraces()
1762 : {
1763 0 : return mWantAllTraces;
1764 : }
1765 :
1766 0 : NS_IMETHOD AllTraces(nsICycleCollectorListener** aListener) override
1767 : {
1768 0 : SetAllTraces();
1769 0 : NS_ADDREF(*aListener = this);
1770 0 : return NS_OK;
1771 : }
1772 :
1773 0 : NS_IMETHOD GetWantAllTraces(bool* aAllTraces) override
1774 : {
1775 0 : *aAllTraces = mWantAllTraces;
1776 0 : return NS_OK;
1777 : }
1778 :
1779 0 : NS_IMETHOD GetDisableLog(bool* aDisableLog) override
1780 : {
1781 0 : *aDisableLog = mDisableLog;
1782 0 : return NS_OK;
1783 : }
1784 :
1785 0 : NS_IMETHOD SetDisableLog(bool aDisableLog) override
1786 : {
1787 0 : mDisableLog = aDisableLog;
1788 0 : return NS_OK;
1789 : }
1790 :
1791 0 : NS_IMETHOD GetWantAfterProcessing(bool* aWantAfterProcessing) override
1792 : {
1793 0 : *aWantAfterProcessing = mWantAfterProcessing;
1794 0 : return NS_OK;
1795 : }
1796 :
1797 0 : NS_IMETHOD SetWantAfterProcessing(bool aWantAfterProcessing) override
1798 : {
1799 0 : mWantAfterProcessing = aWantAfterProcessing;
1800 0 : return NS_OK;
1801 : }
1802 :
1803 0 : NS_IMETHOD GetLogSink(nsICycleCollectorLogSink** aLogSink) override
1804 : {
1805 0 : NS_ADDREF(*aLogSink = mLogSink);
1806 0 : return NS_OK;
1807 : }
1808 :
1809 0 : NS_IMETHOD SetLogSink(nsICycleCollectorLogSink* aLogSink) override
1810 : {
1811 0 : if (!aLogSink) {
1812 0 : return NS_ERROR_INVALID_ARG;
1813 : }
1814 0 : mLogSink = aLogSink;
1815 0 : return NS_OK;
1816 : }
1817 :
1818 0 : nsresult Begin()
1819 : {
1820 : nsresult rv;
1821 :
1822 0 : mCurrentAddress.AssignLiteral("0x");
1823 0 : ClearDescribers();
1824 0 : if (mDisableLog) {
1825 0 : return NS_OK;
1826 : }
1827 :
1828 : FILE* gcLog;
1829 0 : rv = mLogSink->Open(&gcLog, &mCCLog);
1830 0 : NS_ENSURE_SUCCESS(rv, rv);
1831 : // Dump the JS heap.
1832 0 : CollectorData* data = sCollectorData.get();
1833 0 : if (data && data->mContext) {
1834 0 : data->mContext->Runtime()->DumpJSHeap(gcLog);
1835 : }
1836 0 : rv = mLogSink->CloseGCLog();
1837 0 : NS_ENSURE_SUCCESS(rv, rv);
1838 :
1839 0 : fprintf(mCCLog, "# WantAllTraces=%s\n", mWantAllTraces ? "true" : "false");
1840 0 : return NS_OK;
1841 : }
1842 0 : void NoteRefCountedObject(uint64_t aAddress, uint32_t aRefCount,
1843 : const char* aObjectDescription)
1844 : {
1845 0 : if (!mDisableLog) {
1846 0 : fprintf(mCCLog, "%p [rc=%u] %s\n", (void*)aAddress, aRefCount,
1847 0 : aObjectDescription);
1848 : }
1849 0 : if (mWantAfterProcessing) {
1850 0 : CCGraphDescriber* d = new CCGraphDescriber();
1851 0 : mDescribers.insertBack(d);
1852 0 : mCurrentAddress.AssignLiteral("0x");
1853 0 : mCurrentAddress.AppendInt(aAddress, 16);
1854 0 : d->mType = CCGraphDescriber::eRefCountedObject;
1855 0 : d->mAddress = mCurrentAddress;
1856 0 : d->mCnt = aRefCount;
1857 0 : d->mName.Append(aObjectDescription);
1858 : }
1859 0 : }
1860 0 : void NoteGCedObject(uint64_t aAddress, bool aMarked,
1861 : const char* aObjectDescription,
1862 : uint64_t aCompartmentAddress)
1863 : {
1864 0 : if (!mDisableLog) {
1865 0 : fprintf(mCCLog, "%p [gc%s] %s\n", (void*)aAddress,
1866 0 : aMarked ? ".marked" : "", aObjectDescription);
1867 : }
1868 0 : if (mWantAfterProcessing) {
1869 0 : CCGraphDescriber* d = new CCGraphDescriber();
1870 0 : mDescribers.insertBack(d);
1871 0 : mCurrentAddress.AssignLiteral("0x");
1872 0 : mCurrentAddress.AppendInt(aAddress, 16);
1873 0 : d->mType = aMarked ? CCGraphDescriber::eGCMarkedObject :
1874 : CCGraphDescriber::eGCedObject;
1875 0 : d->mAddress = mCurrentAddress;
1876 0 : d->mName.Append(aObjectDescription);
1877 0 : if (aCompartmentAddress) {
1878 0 : d->mCompartmentOrToAddress.AssignLiteral("0x");
1879 0 : d->mCompartmentOrToAddress.AppendInt(aCompartmentAddress, 16);
1880 : } else {
1881 0 : d->mCompartmentOrToAddress.SetIsVoid(true);
1882 : }
1883 : }
1884 0 : }
1885 0 : void NoteEdge(uint64_t aToAddress, const char* aEdgeName)
1886 : {
1887 0 : if (!mDisableLog) {
1888 0 : fprintf(mCCLog, "> %p %s\n", (void*)aToAddress, aEdgeName);
1889 : }
1890 0 : if (mWantAfterProcessing) {
1891 0 : CCGraphDescriber* d = new CCGraphDescriber();
1892 0 : mDescribers.insertBack(d);
1893 0 : d->mType = CCGraphDescriber::eEdge;
1894 0 : d->mAddress = mCurrentAddress;
1895 0 : d->mCompartmentOrToAddress.AssignLiteral("0x");
1896 0 : d->mCompartmentOrToAddress.AppendInt(aToAddress, 16);
1897 0 : d->mName.Append(aEdgeName);
1898 : }
1899 0 : }
1900 0 : void NoteWeakMapEntry(uint64_t aMap, uint64_t aKey,
1901 : uint64_t aKeyDelegate, uint64_t aValue)
1902 : {
1903 0 : if (!mDisableLog) {
1904 0 : fprintf(mCCLog, "WeakMapEntry map=%p key=%p keyDelegate=%p value=%p\n",
1905 0 : (void*)aMap, (void*)aKey, (void*)aKeyDelegate, (void*)aValue);
1906 : }
1907 : // We don't support after-processing for weak map entries.
1908 0 : }
1909 0 : void NoteIncrementalRoot(uint64_t aAddress)
1910 : {
1911 0 : if (!mDisableLog) {
1912 0 : fprintf(mCCLog, "IncrementalRoot %p\n", (void*)aAddress);
1913 : }
1914 : // We don't support after-processing for incremental roots.
1915 0 : }
1916 0 : void BeginResults()
1917 : {
1918 0 : if (!mDisableLog) {
1919 0 : fputs("==========\n", mCCLog);
1920 : }
1921 0 : }
1922 0 : void DescribeRoot(uint64_t aAddress, uint32_t aKnownEdges)
1923 : {
1924 0 : if (!mDisableLog) {
1925 0 : fprintf(mCCLog, "%p [known=%u]\n", (void*)aAddress, aKnownEdges);
1926 : }
1927 0 : if (mWantAfterProcessing) {
1928 0 : CCGraphDescriber* d = new CCGraphDescriber();
1929 0 : mDescribers.insertBack(d);
1930 0 : d->mType = CCGraphDescriber::eRoot;
1931 0 : d->mAddress.AppendInt(aAddress, 16);
1932 0 : d->mCnt = aKnownEdges;
1933 : }
1934 0 : }
1935 0 : void DescribeGarbage(uint64_t aAddress)
1936 : {
1937 0 : if (!mDisableLog) {
1938 0 : fprintf(mCCLog, "%p [garbage]\n", (void*)aAddress);
1939 : }
1940 0 : if (mWantAfterProcessing) {
1941 0 : CCGraphDescriber* d = new CCGraphDescriber();
1942 0 : mDescribers.insertBack(d);
1943 0 : d->mType = CCGraphDescriber::eGarbage;
1944 0 : d->mAddress.AppendInt(aAddress, 16);
1945 : }
1946 0 : }
1947 0 : void End()
1948 : {
1949 0 : if (!mDisableLog) {
1950 0 : mCCLog = nullptr;
1951 0 : Unused << NS_WARN_IF(NS_FAILED(mLogSink->CloseCCLog()));
1952 : }
1953 0 : }
1954 0 : NS_IMETHOD ProcessNext(nsICycleCollectorHandler* aHandler,
1955 : bool* aCanContinue) override
1956 : {
1957 0 : if (NS_WARN_IF(!aHandler) || NS_WARN_IF(!mWantAfterProcessing)) {
1958 0 : return NS_ERROR_UNEXPECTED;
1959 : }
1960 0 : CCGraphDescriber* d = mDescribers.popFirst();
1961 0 : if (d) {
1962 0 : switch (d->mType) {
1963 : case CCGraphDescriber::eRefCountedObject:
1964 0 : aHandler->NoteRefCountedObject(d->mAddress,
1965 : d->mCnt,
1966 0 : d->mName);
1967 0 : break;
1968 : case CCGraphDescriber::eGCedObject:
1969 : case CCGraphDescriber::eGCMarkedObject:
1970 0 : aHandler->NoteGCedObject(d->mAddress,
1971 0 : d->mType ==
1972 : CCGraphDescriber::eGCMarkedObject,
1973 : d->mName,
1974 0 : d->mCompartmentOrToAddress);
1975 0 : break;
1976 : case CCGraphDescriber::eEdge:
1977 0 : aHandler->NoteEdge(d->mAddress,
1978 : d->mCompartmentOrToAddress,
1979 0 : d->mName);
1980 0 : break;
1981 : case CCGraphDescriber::eRoot:
1982 0 : aHandler->DescribeRoot(d->mAddress,
1983 0 : d->mCnt);
1984 0 : break;
1985 : case CCGraphDescriber::eGarbage:
1986 0 : aHandler->DescribeGarbage(d->mAddress);
1987 0 : break;
1988 : case CCGraphDescriber::eUnknown:
1989 0 : NS_NOTREACHED("CCGraphDescriber::eUnknown");
1990 0 : break;
1991 : }
1992 0 : delete d;
1993 : }
1994 0 : if (!(*aCanContinue = !mDescribers.isEmpty())) {
1995 0 : mCurrentAddress.AssignLiteral("0x");
1996 : }
1997 0 : return NS_OK;
1998 : }
1999 0 : NS_IMETHOD AsLogger(nsCycleCollectorLogger** aRetVal) override
2000 : {
2001 0 : RefPtr<nsCycleCollectorLogger> rval = this;
2002 0 : rval.forget(aRetVal);
2003 0 : return NS_OK;
2004 : }
2005 : private:
2006 0 : void ClearDescribers()
2007 : {
2008 : CCGraphDescriber* d;
2009 0 : while ((d = mDescribers.popFirst())) {
2010 0 : delete d;
2011 : }
2012 0 : }
2013 :
2014 : nsCOMPtr<nsICycleCollectorLogSink> mLogSink;
2015 : bool mWantAllTraces;
2016 : bool mDisableLog;
2017 : bool mWantAfterProcessing;
2018 : nsCString mCurrentAddress;
2019 : mozilla::LinkedList<CCGraphDescriber> mDescribers;
2020 : FILE* mCCLog;
2021 : };
2022 :
2023 0 : NS_IMPL_ISUPPORTS(nsCycleCollectorLogger, nsICycleCollectorListener)
2024 :
2025 : nsresult
2026 0 : nsCycleCollectorLoggerConstructor(nsISupports* aOuter,
2027 : const nsIID& aIID,
2028 : void** aInstancePtr)
2029 : {
2030 0 : if (NS_WARN_IF(aOuter)) {
2031 0 : return NS_ERROR_NO_AGGREGATION;
2032 : }
2033 :
2034 0 : nsISupports* logger = new nsCycleCollectorLogger();
2035 :
2036 0 : return logger->QueryInterface(aIID, aInstancePtr);
2037 : }
2038 :
2039 : static bool
2040 0 : GCThingIsGrayCCThing(JS::GCCellPtr thing)
2041 : {
2042 0 : return AddToCCKind(thing.kind()) &&
2043 0 : JS::GCThingIsMarkedGray(thing);
2044 : }
2045 :
2046 : static bool
2047 0 : ValueIsGrayCCThing(const JS::Value& value)
2048 : {
2049 0 : return AddToCCKind(value.traceKind()) &&
2050 0 : JS::GCThingIsMarkedGray(value.toGCCellPtr());
2051 : }
2052 :
2053 : ////////////////////////////////////////////////////////////////////////
2054 : // Bacon & Rajan's |MarkRoots| routine.
2055 : ////////////////////////////////////////////////////////////////////////
2056 :
2057 : class CCGraphBuilder final : public nsCycleCollectionTraversalCallback,
2058 : public nsCycleCollectionNoteRootCallback
2059 : {
2060 : private:
2061 : CCGraph& mGraph;
2062 : CycleCollectorResults& mResults;
2063 : NodePool::Builder mNodeBuilder;
2064 : EdgePool::Builder mEdgeBuilder;
2065 : MOZ_INIT_OUTSIDE_CTOR PtrInfo* mCurrPi;
2066 : nsCycleCollectionParticipant* mJSParticipant;
2067 : nsCycleCollectionParticipant* mJSZoneParticipant;
2068 : nsCString mNextEdgeName;
2069 : RefPtr<nsCycleCollectorLogger> mLogger;
2070 : bool mMergeZones;
2071 : nsAutoPtr<NodePool::Enumerator> mCurrNode;
2072 :
2073 : public:
2074 : CCGraphBuilder(CCGraph& aGraph,
2075 : CycleCollectorResults& aResults,
2076 : CycleCollectedJSRuntime* aCCRuntime,
2077 : nsCycleCollectorLogger* aLogger,
2078 : bool aMergeZones);
2079 : virtual ~CCGraphBuilder();
2080 :
2081 0 : bool WantAllTraces() const
2082 : {
2083 0 : return nsCycleCollectionNoteRootCallback::WantAllTraces();
2084 : }
2085 :
2086 : bool AddPurpleRoot(void* aRoot, nsCycleCollectionParticipant* aParti);
2087 :
2088 : // This is called when all roots have been added to the graph, to prepare for BuildGraph().
2089 : void DoneAddingRoots();
2090 :
2091 : // Do some work traversing nodes in the graph. Returns true if this graph building is finished.
2092 : bool BuildGraph(SliceBudget& aBudget);
2093 :
2094 : private:
2095 : PtrInfo* AddNode(void* aPtr, nsCycleCollectionParticipant* aParticipant);
2096 : PtrInfo* AddWeakMapNode(JS::GCCellPtr aThing);
2097 : PtrInfo* AddWeakMapNode(JSObject* aObject);
2098 :
2099 0 : void SetFirstChild()
2100 : {
2101 0 : mCurrPi->SetFirstChild(mEdgeBuilder.Mark());
2102 0 : }
2103 :
2104 0 : void SetLastChild()
2105 : {
2106 0 : mCurrPi->SetLastChild(mEdgeBuilder.Mark());
2107 0 : }
2108 :
2109 : public:
2110 : // nsCycleCollectionNoteRootCallback methods.
2111 : NS_IMETHOD_(void) NoteXPCOMRoot(nsISupports* aRoot);
2112 : NS_IMETHOD_(void) NoteJSRoot(JSObject* aRoot);
2113 : NS_IMETHOD_(void) NoteNativeRoot(void* aRoot,
2114 : nsCycleCollectionParticipant* aParticipant);
2115 : NS_IMETHOD_(void) NoteWeakMapping(JSObject* aMap, JS::GCCellPtr aKey,
2116 : JSObject* aKdelegate, JS::GCCellPtr aVal);
2117 :
2118 : // nsCycleCollectionTraversalCallback methods.
2119 : NS_IMETHOD_(void) DescribeRefCountedNode(nsrefcnt aRefCount,
2120 : const char* aObjName);
2121 : NS_IMETHOD_(void) DescribeGCedNode(bool aIsMarked, const char* aObjName,
2122 : uint64_t aCompartmentAddress);
2123 :
2124 : NS_IMETHOD_(void) NoteXPCOMChild(nsISupports* aChild);
2125 : NS_IMETHOD_(void) NoteJSChild(const JS::GCCellPtr& aThing);
2126 : NS_IMETHOD_(void) NoteNativeChild(void* aChild,
2127 : nsCycleCollectionParticipant* aParticipant);
2128 : NS_IMETHOD_(void) NoteNextEdgeName(const char* aName);
2129 :
2130 : private:
2131 : void NoteJSChild(JS::GCCellPtr aChild);
2132 :
2133 0 : NS_IMETHOD_(void) NoteRoot(void* aRoot,
2134 : nsCycleCollectionParticipant* aParticipant)
2135 : {
2136 0 : MOZ_ASSERT(aRoot);
2137 0 : MOZ_ASSERT(aParticipant);
2138 :
2139 0 : if (!aParticipant->CanSkipInCC(aRoot) || MOZ_UNLIKELY(WantAllTraces())) {
2140 0 : AddNode(aRoot, aParticipant);
2141 : }
2142 0 : }
2143 :
2144 0 : NS_IMETHOD_(void) NoteChild(void* aChild, nsCycleCollectionParticipant* aCp,
2145 : nsCString& aEdgeName)
2146 : {
2147 0 : PtrInfo* childPi = AddNode(aChild, aCp);
2148 0 : if (!childPi) {
2149 0 : return;
2150 : }
2151 0 : mEdgeBuilder.Add(childPi);
2152 0 : if (mLogger) {
2153 0 : mLogger->NoteEdge((uint64_t)aChild, aEdgeName.get());
2154 : }
2155 0 : ++childPi->mInternalRefs;
2156 : }
2157 :
2158 0 : JS::Zone* MergeZone(JS::GCCellPtr aGcthing)
2159 : {
2160 0 : if (!mMergeZones) {
2161 0 : return nullptr;
2162 : }
2163 0 : JS::Zone* zone = JS::GetTenuredGCThingZone(aGcthing);
2164 0 : if (js::IsSystemZone(zone)) {
2165 0 : return nullptr;
2166 : }
2167 0 : return zone;
2168 : }
2169 : };
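NoteChild() above both records the edge and increments the child's mInternalRefs; the later ScanRoots phase compares that tally against the node's real refcount to find objects kept alive only from inside the graph. A minimal, self-contained sketch of that bookkeeping (ToyGraph/ToyNode are illustrative names, not the real Gecko types):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <vector>

// Illustrative stand-ins for PtrInfo and CCGraph: each traversed object
// becomes a node recording how many graph edges point at it.
struct ToyNode {
  uint32_t refCount = 0;      // refcount reported by DescribeRefCountedNode
  uint32_t internalRefs = 0;  // ++childPi->mInternalRefs in NoteChild
  std::vector<ToyNode*> children;
};

struct ToyGraph {
  std::map<const void*, ToyNode> nodes;

  // Deduplicates by address, like CCGraph::AddNodeToMap.
  ToyNode* AddNode(const void* aPtr) { return &nodes[aPtr]; }

  // NoteChild: record the edge and bump the child's internal-ref tally.
  void NoteChild(ToyNode* aParent, const void* aChildPtr) {
    ToyNode* child = AddNode(aChildPtr);
    aParent->children.push_back(child);
    ++child->internalRefs;
  }

  // A refcounted node is garbage-cycle material only if every reference
  // to it comes from inside the graph.
  static bool OnlyInternallyReferenced(const ToyNode& aNode) {
    return aNode.refCount == aNode.internalRefs;
  }
};
```

Duplicate NoteChild calls for the same child deliberately add two edges and two internal refs, matching how the real builder counts each reported reference separately.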
2170 :
2171 0 : CCGraphBuilder::CCGraphBuilder(CCGraph& aGraph,
2172 : CycleCollectorResults& aResults,
2173 : CycleCollectedJSRuntime* aCCRuntime,
2174 : nsCycleCollectorLogger* aLogger,
2175 0 : bool aMergeZones)
2176 : : mGraph(aGraph)
2177 : , mResults(aResults)
2178 : , mNodeBuilder(aGraph.mNodes)
2179 : , mEdgeBuilder(aGraph.mEdges)
2180 : , mJSParticipant(nullptr)
2181 : , mJSZoneParticipant(nullptr)
2182 : , mLogger(aLogger)
2183 0 : , mMergeZones(aMergeZones)
2184 : {
2185 0 : if (aCCRuntime) {
2186 0 : mJSParticipant = aCCRuntime->GCThingParticipant();
2187 0 : mJSZoneParticipant = aCCRuntime->ZoneParticipant();
2188 : }
2189 :
2190 0 : if (mLogger) {
2191 0 : mFlags |= nsCycleCollectionTraversalCallback::WANT_DEBUG_INFO;
2192 0 : if (mLogger->IsAllTraces()) {
2193 0 : mFlags |= nsCycleCollectionTraversalCallback::WANT_ALL_TRACES;
2194 0 : mWantAllTraces = true; // for nsCycleCollectionNoteRootCallback
2195 : }
2196 : }
2197 :
2198 0 : mMergeZones = mMergeZones && MOZ_LIKELY(!WantAllTraces());
2199 :
2200 0 : MOZ_ASSERT(nsCycleCollectionNoteRootCallback::WantAllTraces() ==
2201 : nsCycleCollectionTraversalCallback::WantAllTraces());
2202 0 : }
2203 :
2204 0 : CCGraphBuilder::~CCGraphBuilder()
2205 : {
2206 0 : }
2207 :
2208 : PtrInfo*
2209 0 : CCGraphBuilder::AddNode(void* aPtr, nsCycleCollectionParticipant* aParticipant)
2210 : {
2211 0 : PtrToNodeEntry* e = mGraph.AddNodeToMap(aPtr);
2212 0 : if (!e) {
2213 0 : return nullptr;
2214 : }
2215 :
2216 : PtrInfo* result;
2217 0 : if (!e->mNode) {
2218 : // New entry.
2219 0 : result = mNodeBuilder.Add(aPtr, aParticipant);
2220 0 : if (!result) {
2221 0 : return nullptr;
2222 : }
2223 :
2224 0 : e->mNode = result;
2225 0 : NS_ASSERTION(result, "mNodeBuilder.Add returned null");
2226 : } else {
2227 0 : result = e->mNode;
2228 0 : MOZ_ASSERT(result->mParticipant == aParticipant,
2229 : "nsCycleCollectionParticipant shouldn't change!");
2230 : }
2231 0 : return result;
2232 : }
2233 :
2234 : bool
2235 0 : CCGraphBuilder::AddPurpleRoot(void* aRoot, nsCycleCollectionParticipant* aParti)
2236 : {
2237 0 : CanonicalizeParticipant(&aRoot, &aParti);
2238 :
2239 0 : if (WantAllTraces() || !aParti->CanSkipInCC(aRoot)) {
2240 0 : PtrInfo* pinfo = AddNode(aRoot, aParti);
2241 0 : if (!pinfo) {
2242 0 : return false;
2243 : }
2244 : }
2245 :
2246 0 : return true;
2247 : }
2248 :
2249 : void
2250 0 : CCGraphBuilder::DoneAddingRoots()
2251 : {
2252 : // We've finished adding roots, and everything in the graph is a root.
2253 0 : mGraph.mRootCount = mGraph.MapCount();
2254 :
2255 0 : mCurrNode = new NodePool::Enumerator(mGraph.mNodes);
2256 0 : }
2257 :
2258 : MOZ_NEVER_INLINE bool
2259 0 : CCGraphBuilder::BuildGraph(SliceBudget& aBudget)
2260 : {
2261 0 : const intptr_t kNumNodesBetweenTimeChecks = 1000;
2262 0 : const intptr_t kStep = SliceBudget::CounterReset / kNumNodesBetweenTimeChecks;
2263 :
2264 0 : MOZ_ASSERT(mCurrNode);
2265 :
2266 0 : while (!aBudget.isOverBudget() && !mCurrNode->IsDone()) {
2267 0 : PtrInfo* pi = mCurrNode->GetNext();
2268 0 : if (!pi) {
2269 0 : MOZ_CRASH();
2270 : }
2271 :
2272 0 : mCurrPi = pi;
2273 :
2274 : // We need to call SetFirstChild() even on deleted nodes, to set the
2275 : // firstChild() that a preceding non-deleted neighbor may read.
2276 0 : SetFirstChild();
2277 :
2278 0 : if (pi->mParticipant) {
2279 0 : nsresult rv = pi->mParticipant->TraverseNativeAndJS(pi->mPointer, *this);
2280 0 : MOZ_RELEASE_ASSERT(!NS_FAILED(rv), "Cycle collector Traverse method failed");
2281 : }
2282 :
2283 0 : if (mCurrNode->AtBlockEnd()) {
2284 0 : SetLastChild();
2285 : }
2286 :
2287 0 : aBudget.step(kStep);
2288 : }
2289 :
2290 0 : if (!mCurrNode->IsDone()) {
2291 0 : return false;
2292 : }
2293 :
2294 0 : if (mGraph.mRootCount > 0) {
2295 0 : SetLastChild();
2296 : }
2297 :
2298 0 : mCurrNode = nullptr;
2299 :
2300 0 : return true;
2301 : }
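BuildGraph() above steps the budget by CounterReset / 1000 per node, so the comparatively expensive deadline check fires only about once per thousand nodes. A toy version of that counter scheme (ToySliceBudget is an illustrative name; the real js::SliceBudget consults a clock where this one simply counts how many checks are allowed):

```cpp
#include <cassert>
#include <cstdint>

class ToySliceBudget {
  static const int64_t kCounterReset = 1000;
  int64_t mCounter = kCounterReset;  // work units until the next check
  int64_t mChecksLeft;               // stand-in for "time remaining"

public:
  explicit ToySliceBudget(int64_t aChecks) : mChecksLeft(aChecks) {}

  bool isOverBudget() const { return mChecksLeft <= 0; }

  // step(n): consume n units; only when the counter underflows do we
  // "look at the clock" (here, decrement mChecksLeft) and reset.
  void step(int64_t aUnits) {
    mCounter -= aUnits;
    if (mCounter <= 0) {
      --mChecksLeft;
      mCounter = kCounterReset;
    }
  }
};
```

With a step of kCounterReset / N, the deadline check runs roughly once every N calls, which is the trade-off the kNumNodesBetweenTimeChecks constant expresses.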
2302 :
2303 : NS_IMETHODIMP_(void)
2304 0 : CCGraphBuilder::NoteXPCOMRoot(nsISupports* aRoot)
2305 : {
2306 0 : aRoot = CanonicalizeXPCOMParticipant(aRoot);
2307 0 : NS_ASSERTION(aRoot,
2308 : "Don't add objects that don't participate in collection!");
2309 :
2310 : nsXPCOMCycleCollectionParticipant* cp;
2311 0 : ToParticipant(aRoot, &cp);
2312 :
2313 0 : NoteRoot(aRoot, cp);
2314 0 : }
2315 :
2316 : NS_IMETHODIMP_(void)
2317 0 : CCGraphBuilder::NoteJSRoot(JSObject* aRoot)
2318 : {
2319 0 : if (JS::Zone* zone = MergeZone(JS::GCCellPtr(aRoot))) {
2320 0 : NoteRoot(zone, mJSZoneParticipant);
2321 : } else {
2322 0 : NoteRoot(aRoot, mJSParticipant);
2323 : }
2324 0 : }
2325 :
2326 : NS_IMETHODIMP_(void)
2327 0 : CCGraphBuilder::NoteNativeRoot(void* aRoot,
2328 : nsCycleCollectionParticipant* aParticipant)
2329 : {
2330 0 : NoteRoot(aRoot, aParticipant);
2331 0 : }
2332 :
2333 : NS_IMETHODIMP_(void)
2334 0 : CCGraphBuilder::DescribeRefCountedNode(nsrefcnt aRefCount, const char* aObjName)
2335 : {
2336 0 : MOZ_RELEASE_ASSERT(aRefCount != 0, "CCed refcounted object has zero refcount");
2337 0 : MOZ_RELEASE_ASSERT(aRefCount != UINT32_MAX, "CCed refcounted object has overflowing refcount");
2338 :
2339 0 : mResults.mVisitedRefCounted++;
2340 :
2341 0 : if (mLogger) {
2342 0 : mLogger->NoteRefCountedObject((uint64_t)mCurrPi->mPointer, aRefCount,
2343 0 : aObjName);
2344 : }
2345 :
2346 0 : mCurrPi->mRefCount = aRefCount;
2347 0 : }
2348 :
2349 : NS_IMETHODIMP_(void)
2350 0 : CCGraphBuilder::DescribeGCedNode(bool aIsMarked, const char* aObjName,
2351 : uint64_t aCompartmentAddress)
2352 : {
2353 0 : uint32_t refCount = aIsMarked ? UINT32_MAX : 0;
2354 0 : mResults.mVisitedGCed++;
2355 :
2356 0 : if (mLogger) {
2357 0 : mLogger->NoteGCedObject((uint64_t)mCurrPi->mPointer, aIsMarked,
2358 0 : aObjName, aCompartmentAddress);
2359 : }
2360 :
2361 0 : mCurrPi->mRefCount = refCount;
2362 0 : }
2363 :
2364 : NS_IMETHODIMP_(void)
2365 0 : CCGraphBuilder::NoteXPCOMChild(nsISupports* aChild)
2366 : {
2367 0 : nsCString edgeName;
2368 0 : if (WantDebugInfo()) {
2369 0 : edgeName.Assign(mNextEdgeName);
2370 0 : mNextEdgeName.Truncate();
2371 : }
2372 0 : if (!aChild || !(aChild = CanonicalizeXPCOMParticipant(aChild))) {
2373 0 : return;
2374 : }
2375 :
2376 : nsXPCOMCycleCollectionParticipant* cp;
2377 0 : ToParticipant(aChild, &cp);
2378 0 : if (cp && (!cp->CanSkipThis(aChild) || WantAllTraces())) {
2379 0 : NoteChild(aChild, cp, edgeName);
2380 : }
2381 : }
2382 :
2383 : NS_IMETHODIMP_(void)
2384 0 : CCGraphBuilder::NoteNativeChild(void* aChild,
2385 : nsCycleCollectionParticipant* aParticipant)
2386 : {
2387 0 : nsCString edgeName;
2388 0 : if (WantDebugInfo()) {
2389 0 : edgeName.Assign(mNextEdgeName);
2390 0 : mNextEdgeName.Truncate();
2391 : }
2392 0 : if (!aChild) {
2393 0 : return;
2394 : }
2395 :
2396 0 : MOZ_ASSERT(aParticipant, "Need a nsCycleCollectionParticipant!");
2397 0 : if (!aParticipant->CanSkipThis(aChild) || WantAllTraces()) {
2398 0 : NoteChild(aChild, aParticipant, edgeName);
2399 : }
2400 : }
2401 :
2402 : NS_IMETHODIMP_(void)
2403 0 : CCGraphBuilder::NoteJSChild(const JS::GCCellPtr& aChild)
2404 : {
2405 0 : if (!aChild) {
2406 0 : return;
2407 : }
2408 :
2409 0 : nsCString edgeName;
2410 0 : if (MOZ_UNLIKELY(WantDebugInfo())) {
2411 0 : edgeName.Assign(mNextEdgeName);
2412 0 : mNextEdgeName.Truncate();
2413 : }
2414 :
2415 0 : if (GCThingIsGrayCCThing(aChild) || MOZ_UNLIKELY(WantAllTraces())) {
2416 0 : if (JS::Zone* zone = MergeZone(aChild)) {
2417 0 : NoteChild(zone, mJSZoneParticipant, edgeName);
2418 : } else {
2419 0 : NoteChild(aChild.asCell(), mJSParticipant, edgeName);
2420 : }
2421 : }
2422 : }
2423 :
2424 : NS_IMETHODIMP_(void)
2425 0 : CCGraphBuilder::NoteNextEdgeName(const char* aName)
2426 : {
2427 0 : if (WantDebugInfo()) {
2428 0 : mNextEdgeName = aName;
2429 : }
2430 0 : }
2431 :
2432 : PtrInfo*
2433 0 : CCGraphBuilder::AddWeakMapNode(JS::GCCellPtr aNode)
2434 : {
2435 0 : MOZ_ASSERT(aNode, "Weak map node should be non-null.");
2436 :
2437 0 : if (!GCThingIsGrayCCThing(aNode) && !WantAllTraces()) {
2438 0 : return nullptr;
2439 : }
2440 :
2441 0 : if (JS::Zone* zone = MergeZone(aNode)) {
2442 0 : return AddNode(zone, mJSZoneParticipant);
2443 : }
2444 0 : return AddNode(aNode.asCell(), mJSParticipant);
2445 : }
2446 :
2447 : PtrInfo*
2448 0 : CCGraphBuilder::AddWeakMapNode(JSObject* aObject)
2449 : {
2450 0 : return AddWeakMapNode(JS::GCCellPtr(aObject));
2451 : }
2452 :
2453 : NS_IMETHODIMP_(void)
2454 0 : CCGraphBuilder::NoteWeakMapping(JSObject* aMap, JS::GCCellPtr aKey,
2455 : JSObject* aKdelegate, JS::GCCellPtr aVal)
2456 : {
2457 : // Don't try to optimize away the entry here, as we've already attempted to
2458 : // do that in TraceWeakMapping in nsXPConnect.
2459 0 : WeakMapping* mapping = mGraph.mWeakMaps.AppendElement();
2460 0 : mapping->mMap = aMap ? AddWeakMapNode(aMap) : nullptr;
2461 0 : mapping->mKey = aKey ? AddWeakMapNode(aKey) : nullptr;
2462 0 : mapping->mKeyDelegate = aKdelegate ? AddWeakMapNode(aKdelegate) : mapping->mKey;
2463 0 : mapping->mVal = aVal ? AddWeakMapNode(aVal) : nullptr;
2464 :
2465 0 : if (mLogger) {
2466 0 : mLogger->NoteWeakMapEntry((uint64_t)aMap, aKey ? aKey.unsafeAsInteger() : 0,
2467 : (uint64_t)aKdelegate,
2468 0 : aVal ? aVal.unsafeAsInteger() : 0);
2469 : }
2470 0 : }
2471 :
2472 : static bool
2473 0 : AddPurpleRoot(CCGraphBuilder& aBuilder, void* aRoot,
2474 : nsCycleCollectionParticipant* aParti)
2475 : {
2476 0 : return aBuilder.AddPurpleRoot(aRoot, aParti);
2477 : }
2478 :
2479 : // MayHaveChild() will be false after a Traverse if the object does
2480 : // not have any children the CC will visit.
2481 : class ChildFinder : public nsCycleCollectionTraversalCallback
2482 : {
2483 : public:
2484 0 : ChildFinder() : mMayHaveChild(false)
2485 : {
2486 0 : }
2487 :
2488 : // The logic of the Note*Child functions must mirror that of their
2489 : // respective functions in CCGraphBuilder.
2490 : NS_IMETHOD_(void) NoteXPCOMChild(nsISupports* aChild);
2491 : NS_IMETHOD_(void) NoteNativeChild(void* aChild,
2492 : nsCycleCollectionParticipant* aHelper);
2493 : NS_IMETHOD_(void) NoteJSChild(const JS::GCCellPtr& aThing);
2494 :
2495 0 : NS_IMETHOD_(void) DescribeRefCountedNode(nsrefcnt aRefcount,
2496 : const char* aObjname)
2497 : {
2498 0 : }
2499 0 : NS_IMETHOD_(void) DescribeGCedNode(bool aIsMarked,
2500 : const char* aObjname,
2501 : uint64_t aCompartmentAddress)
2502 : {
2503 0 : }
2504 0 : NS_IMETHOD_(void) NoteNextEdgeName(const char* aName)
2505 : {
2506 0 : }
2507 0 : bool MayHaveChild()
2508 : {
2509 0 : return mMayHaveChild;
2510 : }
2511 : private:
2512 : bool mMayHaveChild;
2513 : };
2514 :
2515 : NS_IMETHODIMP_(void)
2516 0 : ChildFinder::NoteXPCOMChild(nsISupports* aChild)
2517 : {
2518 0 : if (!aChild || !(aChild = CanonicalizeXPCOMParticipant(aChild))) {
2519 0 : return;
2520 : }
2521 : nsXPCOMCycleCollectionParticipant* cp;
2522 0 : ToParticipant(aChild, &cp);
2523 0 : if (cp && !cp->CanSkip(aChild, true)) {
2524 0 : mMayHaveChild = true;
2525 : }
2526 : }
2527 :
2528 : NS_IMETHODIMP_(void)
2529 0 : ChildFinder::NoteNativeChild(void* aChild,
2530 : nsCycleCollectionParticipant* aHelper)
2531 : {
2532 0 : if (!aChild) {
2533 0 : return;
2534 : }
2535 0 : MOZ_ASSERT(aHelper, "Native child must have a participant");
2536 0 : if (!aHelper->CanSkip(aChild, true)) {
2537 0 : mMayHaveChild = true;
2538 : }
2539 : }
2540 :
2541 : NS_IMETHODIMP_(void)
2542 0 : ChildFinder::NoteJSChild(const JS::GCCellPtr& aChild)
2543 : {
2544 0 : if (aChild && JS::GCThingIsMarkedGray(aChild)) {
2545 0 : mMayHaveChild = true;
2546 : }
2547 0 : }
2548 :
2549 : static bool
2550 0 : MayHaveChild(void* aObj, nsCycleCollectionParticipant* aCp)
2551 : {
2552 0 : ChildFinder cf;
2553 0 : aCp->TraverseNativeAndJS(aObj, cf);
2554 0 : return cf.MayHaveChild();
2555 : }
2556 :
2557 : // JSPurpleBuffer keeps references to GCThings which might affect the
2558 : // next cycle collection. It is owned only by itself: during unlink its
2559 : // self-reference is broken and the object ends up deleting itself.
2560 : // If a GC happens before the CC, the GCThing references and the
2561 : // self-reference are removed.
2562 : class JSPurpleBuffer
2563 : {
2564 0 : ~JSPurpleBuffer()
2565 0 : {
2566 0 : MOZ_ASSERT(mValues.IsEmpty());
2567 0 : MOZ_ASSERT(mObjects.IsEmpty());
2568 0 : }
2569 :
2570 : public:
2571 0 : explicit JSPurpleBuffer(RefPtr<JSPurpleBuffer>& aReferenceToThis)
2572 0 : : mReferenceToThis(aReferenceToThis)
2573 : , mValues(kSegmentSize)
2574 0 : , mObjects(kSegmentSize)
2575 : {
2576 0 : mReferenceToThis = this;
2577 0 : mozilla::HoldJSObjects(this);
2578 0 : }
2579 :
2580 0 : void Destroy()
2581 : {
2582 0 : mReferenceToThis = nullptr;
2583 0 : mValues.Clear();
2584 0 : mObjects.Clear();
2585 0 : mozilla::DropJSObjects(this);
2586 0 : }
2587 :
2588 0 : NS_INLINE_DECL_CYCLE_COLLECTING_NATIVE_REFCOUNTING(JSPurpleBuffer)
2589 0 : NS_DECL_CYCLE_COLLECTION_SCRIPT_HOLDER_NATIVE_CLASS(JSPurpleBuffer)
2590 :
2591 : RefPtr<JSPurpleBuffer>& mReferenceToThis;
2592 :
2593 : // These are raw pointers instead of Heap<T> because we only need Heap<T> for
2594 : // pointers which may point into the nursery. The purple buffer never contains
2595 : // pointers to the nursery because nursery gcthings can never be gray and only
2596 : // gray things can be inserted into the purple buffer.
2597 : static const size_t kSegmentSize = 512;
2598 : SegmentedVector<JS::Value, kSegmentSize, InfallibleAllocPolicy> mValues;
2599 : SegmentedVector<JSObject*, kSegmentSize, InfallibleAllocPolicy> mObjects;
2600 : };
2601 :
2602 : NS_IMPL_CYCLE_COLLECTION_CLASS(JSPurpleBuffer)
2603 :
2604 0 : NS_IMPL_CYCLE_COLLECTION_UNLINK_BEGIN(JSPurpleBuffer)
2605 0 : tmp->Destroy();
2606 0 : NS_IMPL_CYCLE_COLLECTION_UNLINK_END
2607 :
2608 0 : NS_IMPL_CYCLE_COLLECTION_TRAVERSE_BEGIN(JSPurpleBuffer)
2609 0 : CycleCollectionNoteChild(cb, tmp, "self");
2610 0 : NS_IMPL_CYCLE_COLLECTION_TRAVERSE_END
2611 :
2612 : #define NS_TRACE_SEGMENTED_ARRAY(_field, _type) \
2613 : { \
2614 : for (auto iter = tmp->_field.Iter(); !iter.Done(); iter.Next()) { \
2615 : js::gc::CallTraceCallbackOnNonHeap<_type, TraceCallbacks>( \
2616 : &iter.Get(), aCallbacks, #_field, aClosure); \
2617 : } \
2618 : }
2619 :
2620 0 : NS_IMPL_CYCLE_COLLECTION_TRACE_BEGIN(JSPurpleBuffer)
2621 0 : NS_TRACE_SEGMENTED_ARRAY(mValues, JS::Value)
2622 0 : NS_TRACE_SEGMENTED_ARRAY(mObjects, JSObject*)
2623 0 : NS_IMPL_CYCLE_COLLECTION_TRACE_END
2624 :
2625 0 : NS_IMPL_CYCLE_COLLECTION_ROOT_NATIVE(JSPurpleBuffer, AddRef)
2626 0 : NS_IMPL_CYCLE_COLLECTION_UNROOT_NATIVE(JSPurpleBuffer, Release)
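The self-ownership described in the JSPurpleBuffer comment -- a strong reference to itself stored in a caller-provided slot, cleared in Destroy() -- can be sketched like this (SelfOwned and the global flag are illustrative; the real class uses RefPtr and the CC unlink machinery rather than std::shared_ptr):

```cpp
#include <cassert>
#include <memory>

bool gSelfOwnedDestroyed = false;  // test hook, not part of the pattern

struct SelfOwned {
  // The only strong reference to this object lives in the slot the
  // creator hands us, mirroring JSPurpleBuffer::mReferenceToThis.
  std::shared_ptr<SelfOwned>& mReferenceToThis;

  explicit SelfOwned(std::shared_ptr<SelfOwned>& aSlot)
    : mReferenceToThis(aSlot) {}

  ~SelfOwned() { gSelfOwnedDestroyed = true; }

  // Dropping the self-reference is what deletes the object; nothing may
  // touch members after this line (same discipline as "delete this").
  void Destroy() { mReferenceToThis.reset(); }
};
```

The slot outliving the object is what makes this safe: Destroy() clears the last reference, the destructor runs, and the caller's slot is left null.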
2627 :
2628 : class SnowWhiteKiller : public TraceCallbacks
2629 : {
2630 : struct SnowWhiteObject
2631 : {
2632 : void* mPointer;
2633 : nsCycleCollectionParticipant* mParticipant;
2634 : nsCycleCollectingAutoRefCnt* mRefCnt;
2635 : };
2636 :
2637 : // Segments are 4 KiB on 32-bit and 8 KiB on 64-bit.
2638 : static const size_t kSegmentSize = sizeof(void*) * 1024;
2639 : typedef SegmentedVector<SnowWhiteObject, kSegmentSize, InfallibleAllocPolicy>
2640 : ObjectsVector;
2641 :
2642 : public:
2643 1 : explicit SnowWhiteKiller(nsCycleCollector* aCollector)
2644 1 : : mCollector(aCollector)
2645 1 : , mObjects(kSegmentSize)
2646 : {
2647 1 : MOZ_ASSERT(mCollector, "Calling SnowWhiteKiller after nsCC went away");
2648 1 : }
2649 :
2650 1 : ~SnowWhiteKiller()
2651 2 : {
2652 1789 : for (auto iter = mObjects.Iter(); !iter.Done(); iter.Next()) {
2653 1788 : SnowWhiteObject& o = iter.Get();
2654 1788 : if (!o.mRefCnt->get() && !o.mRefCnt->IsInPurpleBuffer()) {
2655 1788 : mCollector->RemoveObjectFromGraph(o.mPointer);
2656 1788 : o.mRefCnt->stabilizeForDeletion();
2657 : {
2658 3576 : JS::AutoEnterCycleCollection autocc(mCollector->Runtime()->Runtime());
2659 1788 : o.mParticipant->Trace(o.mPointer, *this, nullptr);
2660 : }
2661 1788 : o.mParticipant->DeleteCycleCollectable(o.mPointer);
2662 : }
2663 : }
2664 1 : }
2665 :
2666 : bool
2667 22522 : Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry)
2668 : {
2669 22522 : MOZ_ASSERT(aEntry->mObject, "Null object in purple buffer");
2670 22522 : if (!aEntry->mRefCnt->get()) {
2671 1788 : void* o = aEntry->mObject;
2672 1788 : nsCycleCollectionParticipant* cp = aEntry->mParticipant;
2673 1788 : CanonicalizeParticipant(&o, &cp);
2674 1788 : SnowWhiteObject swo = { o, cp, aEntry->mRefCnt };
2675 1788 : mObjects.InfallibleAppend(swo);
2676 1788 : aBuffer.Remove(aEntry);
2677 : }
2678 22522 : return true;
2679 : }
2680 :
2681 2 : bool HasSnowWhiteObjects() const
2682 : {
2683 2 : return !mObjects.IsEmpty();
2684 : }
2685 :
2686 6 : virtual void Trace(JS::Heap<JS::Value>* aValue, const char* aName,
2687 : void* aClosure) const override
2688 : {
2689 6 : const JS::Value& val = aValue->unbarrieredGet();
2690 6 : if (val.isGCThing() && ValueIsGrayCCThing(val)) {
2691 0 : MOZ_ASSERT(!js::gc::IsInsideNursery(val.toGCThing()));
2692 0 : mCollector->GetJSPurpleBuffer()->mValues.InfallibleAppend(val);
2693 : }
2694 6 : }
2695 :
2696 0 : virtual void Trace(JS::Heap<jsid>* aId, const char* aName,
2697 : void* aClosure) const override
2698 : {
2699 0 : }
2700 :
2701 772 : void AppendJSObjectToPurpleBuffer(JSObject* obj) const
2702 : {
2703 772 : if (obj && JS::ObjectIsMarkedGray(obj)) {
2704 0 : MOZ_ASSERT(JS::ObjectIsTenured(obj));
2705 0 : mCollector->GetJSPurpleBuffer()->mObjects.InfallibleAppend(obj);
2706 : }
2707 772 : }
2708 :
2709 521 : virtual void Trace(JS::Heap<JSObject*>* aObject, const char* aName,
2710 : void* aClosure) const override
2711 : {
2712 521 : AppendJSObjectToPurpleBuffer(aObject->unbarrieredGet());
2713 521 : }
2714 :
2715 0 : virtual void Trace(JSObject** aObject, const char* aName,
2716 : void* aClosure) const override
2717 : {
2718 0 : AppendJSObjectToPurpleBuffer(*aObject);
2719 0 : }
2720 :
2721 251 : virtual void Trace(JS::TenuredHeap<JSObject*>* aObject, const char* aName,
2722 : void* aClosure) const override
2723 : {
2724 251 : AppendJSObjectToPurpleBuffer(aObject->unbarrieredGetPtr());
2725 251 : }
2726 :
2727 0 : virtual void Trace(JS::Heap<JSString*>* aString, const char* aName,
2728 : void* aClosure) const override
2729 : {
2730 0 : }
2731 :
2732 0 : virtual void Trace(JS::Heap<JSScript*>* aScript, const char* aName,
2733 : void* aClosure) const override
2734 : {
2735 0 : }
2736 :
2737 0 : virtual void Trace(JS::Heap<JSFunction*>* aFunction, const char* aName,
2738 : void* aClosure) const override
2739 : {
2740 0 : }
2741 :
2742 : private:
2743 : RefPtr<nsCycleCollector> mCollector;
2744 : ObjectsVector mObjects;
2745 : };
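SnowWhiteKiller works in two phases: Visit() detaches zero-refcount entries from the purple buffer into a local list, and only the destructor frees them, re-checking the refcount in case an object was resurrected in between. A toy rendition of that deferred-free discipline (all names are illustrative):

```cpp
#include <cassert>
#include <vector>

struct ToyEntry {
  int refCnt = 0;
  bool inPurpleBuffer = true;
};

class ToyKiller {
  std::vector<ToyEntry*> mObjects;  // collected snow-white candidates

public:
  // Phase 1 (Visit): pull zero-refcount entries out of the buffer but
  // do not free them yet -- freeing can run arbitrary code.
  void Visit(ToyEntry& aEntry) {
    if (aEntry.refCnt == 0) {
      mObjects.push_back(&aEntry);
      aEntry.inPurpleBuffer = false;  // aBuffer.Remove(aEntry)
    }
  }

  bool HasSnowWhiteObjects() const { return !mObjects.empty(); }

  // Phase 2 (the destructor in the real class): re-check the refcount,
  // since an entry may have been resurrected since Visit() saw it.
  int FreeCollected() {
    int freed = 0;
    for (ToyEntry* e : mObjects) {
      if (e->refCnt == 0) {
        ++freed;  // DeleteCycleCollectable(o.mPointer) in the real code
      }
    }
    mObjects.clear();
    return freed;
  }
};
```

The re-check in phase 2 corresponds to the `!o.mRefCnt->get() && !o.mRefCnt->IsInPurpleBuffer()` test in ~SnowWhiteKiller above.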
2746 :
2747 : class RemoveSkippableVisitor : public SnowWhiteKiller
2748 : {
2749 : public:
2750 0 : RemoveSkippableVisitor(nsCycleCollector* aCollector,
2751 : js::SliceBudget& aBudget,
2752 : bool aRemoveChildlessNodes,
2753 : bool aAsyncSnowWhiteFreeing,
2754 : CC_ForgetSkippableCallback aCb)
2755 0 : : SnowWhiteKiller(aCollector)
2756 : , mBudget(aBudget)
2757 : , mRemoveChildlessNodes(aRemoveChildlessNodes)
2758 : , mAsyncSnowWhiteFreeing(aAsyncSnowWhiteFreeing)
2759 : , mDispatchedDeferredDeletion(false)
2760 0 : , mCallback(aCb)
2761 : {
2762 0 : }
2763 :
2764 0 : ~RemoveSkippableVisitor()
2765 0 : {
2766 : // Note, we must call the callback before SnowWhiteKiller calls
2767 : // DeleteCycleCollectable!
2768 0 : if (mCallback) {
2769 0 : mCallback();
2770 : }
2771 0 : if (HasSnowWhiteObjects()) {
2772 : // Effectively a continuation.
2773 0 : nsCycleCollector_dispatchDeferredDeletion(true);
2774 : }
2775 0 : }
2776 :
2777 : bool
2778 0 : Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry)
2779 : {
2780 0 : if (mBudget.isOverBudget()) {
2781 0 : return false;
2782 : }
2783 :
2784 : // CanSkip calls can be a bit slow, so increase the likelihood that
2785 : // isOverBudget actually checks whether we're over the time budget.
2786 0 : mBudget.step(5);
2787 0 : MOZ_ASSERT(aEntry->mObject, "null mObject in purple buffer");
2788 0 : if (!aEntry->mRefCnt->get()) {
2789 0 : if (!mAsyncSnowWhiteFreeing) {
2790 0 : SnowWhiteKiller::Visit(aBuffer, aEntry);
2791 0 : } else if (!mDispatchedDeferredDeletion) {
2792 0 : mDispatchedDeferredDeletion = true;
2793 0 : nsCycleCollector_dispatchDeferredDeletion(false);
2794 : }
2795 0 : return true;
2796 : }
2797 0 : void* o = aEntry->mObject;
2798 0 : nsCycleCollectionParticipant* cp = aEntry->mParticipant;
2799 0 : CanonicalizeParticipant(&o, &cp);
2800 0 : if (aEntry->mRefCnt->IsPurple() && !cp->CanSkip(o, false) &&
2801 0 : (!mRemoveChildlessNodes || MayHaveChild(o, cp))) {
2802 0 : return true;
2803 : }
2804 0 : aBuffer.Remove(aEntry);
2805 0 : return true;
2806 : }
2807 :
2808 : private:
2809 : js::SliceBudget& mBudget;
2810 : bool mRemoveChildlessNodes;
2811 : bool mAsyncSnowWhiteFreeing;
2812 : bool mDispatchedDeferredDeletion;
2813 : CC_ForgetSkippableCallback mCallback;
2814 : };
2815 :
2816 : void
2817 0 : nsPurpleBuffer::RemoveSkippable(nsCycleCollector* aCollector,
2818 : js::SliceBudget& aBudget,
2819 : bool aRemoveChildlessNodes,
2820 : bool aAsyncSnowWhiteFreeing,
2821 : CC_ForgetSkippableCallback aCb)
2822 : {
2823 : RemoveSkippableVisitor visitor(aCollector, aBudget, aRemoveChildlessNodes,
2824 0 : aAsyncSnowWhiteFreeing, aCb);
2825 0 : VisitEntries(visitor);
2826 0 : }
2827 :
2828 : bool
2829 1 : nsCycleCollector::FreeSnowWhite(bool aUntilNoSWInPurpleBuffer)
2830 : {
2831 1 : CheckThreadSafety();
2832 :
2833 1 : if (mFreeingSnowWhite) {
2834 0 : return false;
2835 : }
2836 :
2837 2 : AutoRestore<bool> ar(mFreeingSnowWhite);
2838 1 : mFreeingSnowWhite = true;
2839 :
2840 1 : bool hadSnowWhiteObjects = false;
2841 1 : do {
2842 2 : SnowWhiteKiller visitor(this);
2843 1 : mPurpleBuf.VisitEntries(visitor);
2844 2 : hadSnowWhiteObjects = hadSnowWhiteObjects ||
2845 1 : visitor.HasSnowWhiteObjects();
2846 1 : if (!visitor.HasSnowWhiteObjects()) {
2847 0 : break;
2848 : }
2849 : } while (aUntilNoSWInPurpleBuffer);
2850 1 : return hadSnowWhiteObjects;
2851 : }
2852 :
2853 : void
2854 0 : nsCycleCollector::ForgetSkippable(js::SliceBudget& aBudget,
2855 : bool aRemoveChildlessNodes,
2856 : bool aAsyncSnowWhiteFreeing)
2857 : {
2858 0 : CheckThreadSafety();
2859 :
2860 0 : mozilla::Maybe<mozilla::AutoGlobalTimelineMarker> marker;
2861 0 : if (NS_IsMainThread()) {
2862 0 : marker.emplace("nsCycleCollector::ForgetSkippable", MarkerStackRequest::NO_STACK);
2863 : }
2864 :
2865 : // If we remove things from the purple buffer during graph building, we may
2866 : // lose track of an object that was mutated during graph building.
2867 0 : MOZ_ASSERT(IsIdle());
2868 :
2869 0 : if (mCCJSRuntime) {
2870 0 : mCCJSRuntime->PrepareForForgetSkippable();
2871 : }
2872 0 : MOZ_ASSERT(!mScanInProgress,
2873 : "Don't forget skippable or free snow-white while scan is in progress.");
2874 0 : mPurpleBuf.RemoveSkippable(this, aBudget, aRemoveChildlessNodes,
2875 0 : aAsyncSnowWhiteFreeing, mForgetSkippableCB);
2876 0 : }
2877 :
2878 : MOZ_NEVER_INLINE void
2879 0 : nsCycleCollector::MarkRoots(SliceBudget& aBudget)
2880 : {
2881 0 : JS::AutoAssertNoGC nogc;
2882 0 : TimeLog timeLog;
2883 0 : AutoRestore<bool> ar(mScanInProgress);
2884 0 : MOZ_RELEASE_ASSERT(!mScanInProgress);
2885 0 : mScanInProgress = true;
2886 0 : MOZ_ASSERT(mIncrementalPhase == GraphBuildingPhase);
2887 :
2888 0 : JS::AutoEnterCycleCollection autocc(Runtime()->Runtime());
2889 0 : bool doneBuilding = mBuilder->BuildGraph(aBudget);
2890 :
2891 0 : if (!doneBuilding) {
2892 0 : timeLog.Checkpoint("MarkRoots()");
2893 0 : return;
2894 : }
2895 :
2896 0 : mBuilder = nullptr;
2897 0 : mIncrementalPhase = ScanAndCollectWhitePhase;
2898 0 : timeLog.Checkpoint("MarkRoots()");
2899 : }
2900 :
2901 :
2902 : ////////////////////////////////////////////////////////////////////////
2903 : // Bacon & Rajan's |ScanRoots| routine.
2904 : ////////////////////////////////////////////////////////////////////////
2905 :
2906 :
2907 : struct ScanBlackVisitor
2908 : {
2909 0 : ScanBlackVisitor(uint32_t& aWhiteNodeCount, bool& aFailed)
2910 0 : : mWhiteNodeCount(aWhiteNodeCount), mFailed(aFailed)
2911 : {
2912 0 : }
2913 :
2914 0 : bool ShouldVisitNode(PtrInfo const* aPi)
2915 : {
2916 0 : return aPi->mColor != black;
2917 : }
2918 :
2919 0 : MOZ_NEVER_INLINE void VisitNode(PtrInfo* aPi)
2920 : {
2921 0 : if (aPi->mColor == white) {
2922 0 : --mWhiteNodeCount;
2923 : }
2924 0 : aPi->mColor = black;
2925 0 : }
2926 :
2927 0 : void Failed()
2928 : {
2929 0 : mFailed = true;
2930 0 : }
2931 :
2932 : private:
2933 : uint32_t& mWhiteNodeCount;
2934 : bool& mFailed;
2935 : };
2936 :
2937 : static void
2938 0 : FloodBlackNode(uint32_t& aWhiteNodeCount, bool& aFailed, PtrInfo* aPi)
2939 : {
2940 0 : GraphWalker<ScanBlackVisitor>(ScanBlackVisitor(aWhiteNodeCount,
2941 0 : aFailed)).Walk(aPi);
2942 0 : MOZ_ASSERT(aPi->mColor == black || !aPi->WasTraversed(),
2943 : "FloodBlackNode should make aPi black");
2944 0 : }
2945 :
2946 : // Iterate over the WeakMaps. If we mark anything while iterating
2947 : // over the WeakMaps, we must iterate over all of the WeakMaps again.
2948 : void
2949 0 : nsCycleCollector::ScanWeakMaps()
2950 : {
2951 : bool anyChanged;
2952 0 : bool failed = false;
2953 0 : do {
2954 0 : anyChanged = false;
2955 0 : for (uint32_t i = 0; i < mGraph.mWeakMaps.Length(); i++) {
2956 0 : WeakMapping* wm = &mGraph.mWeakMaps[i];
2957 :
2958 : // If any of these are null, the original object was marked black.
2959 0 : uint32_t mColor = wm->mMap ? wm->mMap->mColor : black;
2960 0 : uint32_t kColor = wm->mKey ? wm->mKey->mColor : black;
2961 0 : uint32_t kdColor = wm->mKeyDelegate ? wm->mKeyDelegate->mColor : black;
2962 0 : uint32_t vColor = wm->mVal ? wm->mVal->mColor : black;
2963 :
2964 0 : MOZ_ASSERT(mColor != grey, "Uncolored weak map");
2965 0 : MOZ_ASSERT(kColor != grey, "Uncolored weak map key");
2966 0 : MOZ_ASSERT(kdColor != grey, "Uncolored weak map key delegate");
2967 0 : MOZ_ASSERT(vColor != grey, "Uncolored weak map value");
2968 :
2969 0 : if (mColor == black && kColor != black && kdColor == black) {
2970 0 : FloodBlackNode(mWhiteNodeCount, failed, wm->mKey);
2971 0 : anyChanged = true;
2972 : }
2973 :
2974 0 : if (mColor == black && kColor == black && vColor != black) {
2975 0 : FloodBlackNode(mWhiteNodeCount, failed, wm->mVal);
2976 0 : anyChanged = true;
2977 : }
2978 : }
2979 : } while (anyChanged);
2980 :
2981 0 : if (failed) {
2982 0 : MOZ_ASSERT(false, "Ran out of memory in ScanWeakMaps");
2983 : CC_TELEMETRY(_OOM, true);
2984 : }
2985 0 : }
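ScanWeakMaps() above is a fixpoint loop: marking one weak-map value black can make another entry's key black, so the pass repeats until nothing changes. The core rule (map black and key black implies value black) in isolation (ToyColor/ScanWeakMapsToy are illustrative, and the key-delegate handling is omitted):

```cpp
#include <cassert>
#include <vector>

enum ToyColor { toyWhite, toyBlack };

struct ToyWeakEntry { int map, key, val; };  // indices into the color array

// Returns the number of passes, including the final no-change pass.
inline int ScanWeakMapsToy(std::vector<ToyColor>& aColors,
                           const std::vector<ToyWeakEntry>& aEntries) {
  int passes = 0;
  bool anyChanged;
  do {
    anyChanged = false;
    ++passes;
    for (const ToyWeakEntry& e : aEntries) {
      // A value is live only if both the map and the key are live.
      if (aColors[e.map] == toyBlack && aColors[e.key] == toyBlack &&
          aColors[e.val] != toyBlack) {
        aColors[e.val] = toyBlack;  // FloodBlackNode in the real code
        anyChanged = true;
      }
    }
  } while (anyChanged);
  return passes;
}
```

Entry order matters for how many passes are needed, which is exactly why the real code loops: a single pass can miss a value whose key is marked by a later entry.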
2986 :
2987 : // Flood black from any objects in the purple buffer that are in the CC graph.
2988 0 : class PurpleScanBlackVisitor
2989 : {
2990 : public:
2991 0 : PurpleScanBlackVisitor(CCGraph& aGraph, nsCycleCollectorLogger* aLogger,
2992 : uint32_t& aCount, bool& aFailed)
2993 0 : : mGraph(aGraph), mLogger(aLogger), mCount(aCount), mFailed(aFailed)
2994 : {
2995 0 : }
2996 :
2997 : bool
2998 0 : Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry)
2999 : {
3000 0 : MOZ_ASSERT(aEntry->mObject,
3001 : "Entries with null mObject shouldn't be in the purple buffer.");
3002 0 : MOZ_ASSERT(aEntry->mRefCnt->get() != 0,
3003 : "Snow-white objects shouldn't be in the purple buffer.");
3004 :
3005 0 : void* obj = aEntry->mObject;
3006 0 : if (!aEntry->mParticipant) {
3007 0 : obj = CanonicalizeXPCOMParticipant(static_cast<nsISupports*>(obj));
3008 0 : MOZ_ASSERT(obj, "Don't add objects that don't participate in collection!");
3009 : }
3010 :
3011 0 : PtrInfo* pi = mGraph.FindNode(obj);
3012 0 : if (!pi) {
3013 0 : return true;
3014 : }
3015 0 : MOZ_ASSERT(pi->mParticipant, "No dead objects should be in the purple buffer.");
3016 0 : if (MOZ_UNLIKELY(mLogger)) {
3017 0 : mLogger->NoteIncrementalRoot((uint64_t)pi->mPointer);
3018 : }
3019 0 : if (pi->mColor == black) {
3020 0 : return true;
3021 : }
3022 0 : FloodBlackNode(mCount, mFailed, pi);
3023 0 : return true;
3024 : }
3025 :
3026 : private:
3027 : CCGraph& mGraph;
3028 : RefPtr<nsCycleCollectorLogger> mLogger;
3029 : uint32_t& mCount;
3030 : bool& mFailed;
3031 : };
3032 :
3033 : // Objects that have been stored somewhere since the start of incremental
3034 : // graph building must be treated as live for this cycle collection, because
3035 : // we may not have accurate information about who holds references to them.
3036 : void
3037 0 : nsCycleCollector::ScanIncrementalRoots()
3038 : {
3039 0 : TimeLog timeLog;
3040 :
3041 : // Reference counted objects:
3042 : // We cleared the purple buffer at the start of the current ICC, so if a
3043 : // refcounted object is purple, it may have been AddRef'd during the current
3044 : // ICC. (It may also have only been released.) If that is the case, we cannot
3045 : // be sure that the set of things pointing to the object in the CC graph
3046 : // is accurate. Therefore, for safety, we treat any purple objects as being
3047 : // live during the current CC. We don't remove anything from the purple
3048 : // buffer here, so these objects will be suspected and freed in the next CC
3049 : // if they are garbage.
3050 0 : bool failed = false;
3051 : PurpleScanBlackVisitor purpleScanBlackVisitor(mGraph, mLogger,
3052 0 : mWhiteNodeCount, failed);
3053 0 : mPurpleBuf.VisitEntries(purpleScanBlackVisitor);
3054 0 : timeLog.Checkpoint("ScanIncrementalRoots::fix purple");
3055 :
3056 0 : bool hasJSRuntime = !!mCCJSRuntime;
3057 : nsCycleCollectionParticipant* jsParticipant =
3058 0 : hasJSRuntime ? mCCJSRuntime->GCThingParticipant() : nullptr;
3059 : nsCycleCollectionParticipant* zoneParticipant =
3060 0 : hasJSRuntime ? mCCJSRuntime->ZoneParticipant() : nullptr;
3061 0 : bool hasLogger = !!mLogger;
3062 :
3063 0 : NodePool::Enumerator etor(mGraph.mNodes);
3064 0 : while (!etor.IsDone()) {
3065 0 : PtrInfo* pi = etor.GetNext();
3066 :
3067 : // As an optimization, if an object has already been determined to be live,
3068 : // don't consider it further. We can't do this if there is a listener,
3069 : // because the listener wants to know the complete set of incremental roots.
3070 0 : if (pi->mColor == black && MOZ_LIKELY(!hasLogger)) {
3071 0 : continue;
3072 : }
3073 :
3074 : // Garbage collected objects:
3075 :     // If a GCed object was added to the graph with a refcount of zero, and is
3076 :     // now marked black by the GC, it was probably gray before and was exposed
3077 :     // to active JS, so it may have been stored somewhere and needs to be
3078 :     // treated as live.
3079 0 : if (pi->IsGrayJS() && MOZ_LIKELY(hasJSRuntime)) {
3080 : // If the object is still marked gray by the GC, nothing could have gotten
3081 : // hold of it, so it isn't an incremental root.
3082 0 : if (pi->mParticipant == jsParticipant) {
3083 0 : JS::GCCellPtr ptr(pi->mPointer, JS::GCThingTraceKind(pi->mPointer));
3084 0 : if (GCThingIsGrayCCThing(ptr)) {
3085 0 : continue;
3086 : }
3087 0 : } else if (pi->mParticipant == zoneParticipant) {
3088 0 : JS::Zone* zone = static_cast<JS::Zone*>(pi->mPointer);
3089 0 : if (js::ZoneGlobalsAreAllGray(zone)) {
3090 0 : continue;
3091 : }
3092 : } else {
3093 0 : MOZ_ASSERT(false, "Non-JS thing with 0 refcount? Treating as live.");
3094 : }
3095 0 : } else if (!pi->mParticipant && pi->WasTraversed()) {
3096 : // Dead traversed refcounted objects:
3097 : // If the object was traversed, it must have been alive at the start of
3098 : // the CC, and thus had a positive refcount. It is dead now, so its
3099 : // refcount must have decreased at some point during the CC. Therefore,
3100 : // it would be in the purple buffer if it wasn't dead, so treat it as an
3101 : // incremental root.
3102 : //
3103 : // This should not cause leaks because as the object died it should have
3104 : // released anything it held onto, which will add them to the purple
3105 : // buffer, which will cause them to be considered in the next CC.
3106 : } else {
3107 0 : continue;
3108 : }
3109 :
3110 : // At this point, pi must be an incremental root.
3111 :
3112 : // If there's a listener, tell it about this root. We don't bother with the
3113 : // optimization of skipping the Walk() if pi is black: it will just return
3114 : // without doing anything and there's no need to make this case faster.
3115 0 : if (MOZ_UNLIKELY(hasLogger) && pi->mPointer) {
3116 : // Dead objects aren't logged. See bug 1031370.
3117 0 : mLogger->NoteIncrementalRoot((uint64_t)pi->mPointer);
3118 : }
3119 :
3120 0 : FloodBlackNode(mWhiteNodeCount, failed, pi);
3121 : }
3122 :
3123 0 : timeLog.Checkpoint("ScanIncrementalRoots::fix nodes");
3124 :
3125 0 : if (failed) {
3126 0 : NS_ASSERTION(false, "Ran out of memory in ScanIncrementalRoots");
3127 0 : CC_TELEMETRY(_OOM, true);
3128 : }
3129 0 : }
3130 :
3131 : // Mark nodes white and make sure their refcounts are ok.
3132 : // No nodes are marked black during this pass to ensure that refcount
3133 : // checking is run on all nodes not marked black by ScanIncrementalRoots.
3134 : void
3135 0 : nsCycleCollector::ScanWhiteNodes(bool aFullySynchGraphBuild)
3136 : {
3137 0 : NodePool::Enumerator nodeEnum(mGraph.mNodes);
3138 0 : while (!nodeEnum.IsDone()) {
3139 0 : PtrInfo* pi = nodeEnum.GetNext();
3140 0 : if (pi->mColor == black) {
3141 : // Incremental roots can be in a nonsensical state, so don't
3142 : // check them. This will miss checking nodes that are merely
3143 : // reachable from incremental roots.
3144 0 : MOZ_ASSERT(!aFullySynchGraphBuild,
3145 : "In a synch CC, no nodes should be marked black early on.");
3146 0 : continue;
3147 : }
3148 0 : MOZ_ASSERT(pi->mColor == grey);
3149 :
3150 0 : if (!pi->WasTraversed()) {
3151 : // This node was deleted before it was traversed, so there's no reason
3152 : // to look at it.
3153 0 : MOZ_ASSERT(!pi->mParticipant, "Live nodes should all have been traversed");
3154 0 : continue;
3155 : }
3156 :
3157 0 : if (pi->mInternalRefs == pi->mRefCount || pi->IsGrayJS()) {
3158 0 : pi->mColor = white;
3159 0 : ++mWhiteNodeCount;
3160 0 : continue;
3161 : }
3162 :
3163 0 : if (pi->mInternalRefs > pi->mRefCount) {
3164 : #ifdef MOZ_CRASHREPORTER
3165 0 : const char* piName = "Unknown";
3166 0 : if (pi->mParticipant) {
3167 0 : piName = pi->mParticipant->ClassName();
3168 : }
3169 0 : nsPrintfCString msg("More references to an object than its refcount, for class %s", piName);
3170 0 : CrashReporter::AnnotateCrashReport(NS_LITERAL_CSTRING("CycleCollector"), msg);
3171 : #endif
3172 0 : MOZ_CRASH();
3173 : }
3174 :
3175 : // This node will get marked black in the next pass.
3176 : }
3177 0 : }
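The refcount test at the heart of `ScanWhiteNodes` can be shown in isolation. This sketch uses a reduced, hypothetical `PtrInfo` with just the fields the test needs: when every reference to a node is accounted for by edges inside the graph, only the cycle is keeping it alive, so it is marked white; more internal refs than the refcount means corruption.

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>

// Illustrative sketch (reduced PtrInfo) of the refcount test in
// ScanWhiteNodes: a node whose entire refcount is explained by edges
// inside the graph can only be kept alive by the cycle, so it is white.
enum Color { grey, black, white };

struct PtrInfo {
  uint32_t mRefCount = 0;      // true refcount reported by the object
  uint32_t mInternalRefs = 0;  // references found from within the graph
  Color mColor = grey;
};

void ScanWhiteNode(PtrInfo& aPi)
{
  if (aPi.mInternalRefs == aPi.mRefCount) {
    aPi.mColor = white;        // every reference is internal: garbage cycle
  } else if (aPi.mInternalRefs > aPi.mRefCount) {
    std::abort();              // graph corruption, as with MOZ_CRASH above
  }
  // Fewer internal refs than the refcount means some external reference
  // exists; the node stays grey and ScanBlackNodes will mark it black.
}
```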
3178 :
3179 : // Any remaining grey nodes that haven't already been deleted must be alive,
3180 : // so mark them and their children black. Any nodes that are black must have
3181 : // already had their children marked black, so there's no need to look at them
3182 : // again. This pass may turn some white nodes to black.
3183 : void
3184 0 : nsCycleCollector::ScanBlackNodes()
3185 : {
3186 0 : bool failed = false;
3187 0 : NodePool::Enumerator nodeEnum(mGraph.mNodes);
3188 0 : while (!nodeEnum.IsDone()) {
3189 0 : PtrInfo* pi = nodeEnum.GetNext();
3190 0 : if (pi->mColor == grey && pi->WasTraversed()) {
3191 0 : FloodBlackNode(mWhiteNodeCount, failed, pi);
3192 : }
3193 : }
3194 :
3195 0 : if (failed) {
3196 0 : NS_ASSERTION(false, "Ran out of memory in ScanBlackNodes");
3197 0 : CC_TELEMETRY(_OOM, true);
3198 : }
3199 0 : }
3200 :
3201 : void
3202 0 : nsCycleCollector::ScanRoots(bool aFullySynchGraphBuild)
3203 : {
3204 0 : JS::AutoAssertNoGC nogc;
3205 0 : AutoRestore<bool> ar(mScanInProgress);
3206 0 : MOZ_RELEASE_ASSERT(!mScanInProgress);
3207 0 : mScanInProgress = true;
3208 0 : mWhiteNodeCount = 0;
3209 0 : MOZ_ASSERT(mIncrementalPhase == ScanAndCollectWhitePhase);
3210 :
3211 0 : JS::AutoEnterCycleCollection autocc(Runtime()->Runtime());
3212 :
3213 0 : if (!aFullySynchGraphBuild) {
3214 0 : ScanIncrementalRoots();
3215 : }
3216 :
3217 0 : TimeLog timeLog;
3218 0 : ScanWhiteNodes(aFullySynchGraphBuild);
3219 0 : timeLog.Checkpoint("ScanRoots::ScanWhiteNodes");
3220 :
3221 0 : ScanBlackNodes();
3222 0 : timeLog.Checkpoint("ScanRoots::ScanBlackNodes");
3223 :
3224 : // Scanning weak maps must be done last.
3225 0 : ScanWeakMaps();
3226 0 : timeLog.Checkpoint("ScanRoots::ScanWeakMaps");
3227 :
3228 0 : if (mLogger) {
3229 0 : mLogger->BeginResults();
3230 :
3231 0 : NodePool::Enumerator etor(mGraph.mNodes);
3232 0 : while (!etor.IsDone()) {
3233 0 : PtrInfo* pi = etor.GetNext();
3234 0 : if (!pi->WasTraversed()) {
3235 0 : continue;
3236 : }
3237 0 : switch (pi->mColor) {
3238 : case black:
3239 0 : if (!pi->IsGrayJS() && !pi->IsBlackJS() &&
3240 0 : pi->mInternalRefs != pi->mRefCount) {
3241 0 : mLogger->DescribeRoot((uint64_t)pi->mPointer,
3242 0 : pi->mInternalRefs);
3243 : }
3244 0 : break;
3245 : case white:
3246 0 : mLogger->DescribeGarbage((uint64_t)pi->mPointer);
3247 0 : break;
3248 : case grey:
3249 0 : MOZ_ASSERT(false, "All traversed objects should be black or white");
3250 : break;
3251 : }
3252 : }
3253 :
3254 0 : mLogger->End();
3255 0 : mLogger = nullptr;
3256 0 : timeLog.Checkpoint("ScanRoots::listener");
3257 : }
3258 0 : }
3259 :
3260 :
3261 : ////////////////////////////////////////////////////////////////////////
3262 : // Bacon & Rajan's |CollectWhite| routine, somewhat modified.
3263 : ////////////////////////////////////////////////////////////////////////
3264 :
3265 : bool
3266 0 : nsCycleCollector::CollectWhite()
3267 : {
3268 :   // Explanation of "somewhat modified": we have no way to collect the
3269 :   // set of whites "all at once"; instead we ask each of them to drop
3270 :   // their outgoing links and assume this will cause the garbage cycle
3271 :   // to *mostly* self-destruct (except for the reference we continue
3272 :   // to hold).
3273 : //
3274 : // To do this "safely" we must make sure that the white nodes we're
3275 : // operating on are stable for the duration of our operation. So we
3276 : // make 3 sets of calls to language runtimes:
3277 : //
3278 : // - Root(whites), which should pin the whites in memory.
3279 : // - Unlink(whites), which drops outgoing links on each white.
3280 : // - Unroot(whites), which returns the whites to normal GC.
3281 :
3282 : // Segments are 4 KiB on 32-bit and 8 KiB on 64-bit.
3283 : static const size_t kSegmentSize = sizeof(void*) * 1024;
3284 : SegmentedVector<PtrInfo*, kSegmentSize, InfallibleAllocPolicy>
3285 0 : whiteNodes(kSegmentSize);
3286 0 : TimeLog timeLog;
3287 :
3288 0 : MOZ_ASSERT(mIncrementalPhase == ScanAndCollectWhitePhase);
3289 :
3290 0 : uint32_t numWhiteNodes = 0;
3291 0 : uint32_t numWhiteGCed = 0;
3292 0 : uint32_t numWhiteJSZones = 0;
3293 :
3294 : {
3295 0 : JS::AutoAssertNoGC nogc;
3296 0 : bool hasJSRuntime = !!mCCJSRuntime;
3297 : nsCycleCollectionParticipant* zoneParticipant =
3298 0 : hasJSRuntime ? mCCJSRuntime->ZoneParticipant() : nullptr;
3299 :
3300 0 : NodePool::Enumerator etor(mGraph.mNodes);
3301 0 : while (!etor.IsDone()) {
3302 0 : PtrInfo* pinfo = etor.GetNext();
3303 0 : if (pinfo->mColor == white && pinfo->mParticipant) {
3304 0 : if (pinfo->IsGrayJS()) {
3305 0 : MOZ_ASSERT(mCCJSRuntime);
3306 0 : ++numWhiteGCed;
3307 : JS::Zone* zone;
3308 0 : if (MOZ_UNLIKELY(pinfo->mParticipant == zoneParticipant)) {
3309 0 : ++numWhiteJSZones;
3310 0 : zone = static_cast<JS::Zone*>(pinfo->mPointer);
3311 : } else {
3312 0 : JS::GCCellPtr ptr(pinfo->mPointer, JS::GCThingTraceKind(pinfo->mPointer));
3313 0 : zone = JS::GetTenuredGCThingZone(ptr);
3314 : }
3315 0 : mCCJSRuntime->AddZoneWaitingForGC(zone);
3316 : } else {
3317 0 : whiteNodes.InfallibleAppend(pinfo);
3318 0 : pinfo->mParticipant->Root(pinfo->mPointer);
3319 0 : ++numWhiteNodes;
3320 : }
3321 : }
3322 : }
3323 : }
3324 :
3325 0 : mResults.mFreedRefCounted += numWhiteNodes;
3326 0 : mResults.mFreedGCed += numWhiteGCed;
3327 0 : mResults.mFreedJSZones += numWhiteJSZones;
3328 :
3329 0 : timeLog.Checkpoint("CollectWhite::Root");
3330 :
3331 0 : if (mBeforeUnlinkCB) {
3332 0 : mBeforeUnlinkCB();
3333 0 : timeLog.Checkpoint("CollectWhite::BeforeUnlinkCB");
3334 : }
3335 :
3336 : // Unlink() can trigger a GC, so do not touch any JS or anything
3337 : // else not in whiteNodes after here.
3338 :
3339 0 : for (auto iter = whiteNodes.Iter(); !iter.Done(); iter.Next()) {
3340 0 : PtrInfo* pinfo = iter.Get();
3341 0 : MOZ_ASSERT(pinfo->mParticipant,
3342 : "Unlink shouldn't see objects removed from graph.");
3343 0 : pinfo->mParticipant->Unlink(pinfo->mPointer);
3344 : #ifdef DEBUG
3345 0 : if (mCCJSRuntime) {
3346 0 : mCCJSRuntime->AssertNoObjectsToTrace(pinfo->mPointer);
3347 : }
3348 : #endif
3349 : }
3350 0 : timeLog.Checkpoint("CollectWhite::Unlink");
3351 :
3352 0 : JS::AutoAssertNoGC nogc;
3353 0 : for (auto iter = whiteNodes.Iter(); !iter.Done(); iter.Next()) {
3354 0 : PtrInfo* pinfo = iter.Get();
3355 0 : MOZ_ASSERT(pinfo->mParticipant,
3356 : "Unroot shouldn't see objects removed from graph.");
3357 0 : pinfo->mParticipant->Unroot(pinfo->mPointer);
3358 : }
3359 0 : timeLog.Checkpoint("CollectWhite::Unroot");
3360 :
3361 0 : nsCycleCollector_dispatchDeferredDeletion(false, true);
3362 0 : timeLog.Checkpoint("CollectWhite::dispatchDeferredDeletion");
3363 :
3364 0 : mIncrementalPhase = CleanupPhase;
3365 :
3366 0 : return numWhiteNodes > 0 || numWhiteGCed > 0 || numWhiteJSZones > 0;
3367 : }
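The three-pass Root/Unlink/Unroot protocol described in the comment at the top of `CollectWhite` can be sketched with a hypothetical `Participant` type. The point of the ordering is that every white node is pinned (rooted) before any of them drops its links, so the set being operated on stays stable throughout.

```cpp
#include <cassert>
#include <vector>

// Sketch of the Root/Unlink/Unroot protocol, using a hypothetical
// Participant stand-in for nsCycleCollectionParticipant. Rooting all
// whites first keeps them stable while Unlink drops their outgoing links.
struct Participant {
  int mRoots = 0;
  bool mUnlinked = false;
  void Root()   { ++mRoots; }           // pin the object in memory
  void Unlink() { mUnlinked = true; }   // would drop outgoing edges
  void Unroot() { --mRoots; }           // return to normal lifetime rules
};

void CollectWhite(std::vector<Participant*>& aWhites)
{
  for (Participant* p : aWhites) p->Root();     // pass 1: pin every white
  for (Participant* p : aWhites) p->Unlink();   // pass 2: cycle self-destructs
  for (Participant* p : aWhites) p->Unroot();   // pass 3: release our pins
}
```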
3368 :
3369 :
3370 : ////////////////////////
3371 : // Memory reporting
3372 : ////////////////////////
3373 :
3374 0 : MOZ_DEFINE_MALLOC_SIZE_OF(CycleCollectorMallocSizeOf)
3375 :
3376 : NS_IMETHODIMP
3377 0 : nsCycleCollector::CollectReports(nsIHandleReportCallback* aHandleReport,
3378 : nsISupports* aData, bool aAnonymize)
3379 : {
3380 : size_t objectSize, graphSize, purpleBufferSize;
3381 : SizeOfIncludingThis(CycleCollectorMallocSizeOf,
3382 : &objectSize, &graphSize,
3383 0 : &purpleBufferSize);
3384 :
3385 0 : if (objectSize > 0) {
3386 0 : MOZ_COLLECT_REPORT(
3387 : "explicit/cycle-collector/collector-object", KIND_HEAP, UNITS_BYTES,
3388 : objectSize,
3389 0 : "Memory used for the cycle collector object itself.");
3390 : }
3391 :
3392 0 : if (graphSize > 0) {
3393 0 : MOZ_COLLECT_REPORT(
3394 : "explicit/cycle-collector/graph", KIND_HEAP, UNITS_BYTES,
3395 : graphSize,
3396 : "Memory used for the cycle collector's graph. This should be zero when "
3397 0 : "the collector is idle.");
3398 : }
3399 :
3400 0 : if (purpleBufferSize > 0) {
3401 0 : MOZ_COLLECT_REPORT(
3402 : "explicit/cycle-collector/purple-buffer", KIND_HEAP, UNITS_BYTES,
3403 : purpleBufferSize,
3404 0 : "Memory used for the cycle collector's purple buffer.");
3405 : }
3406 :
3407 0 : return NS_OK;
3408 : };
3409 :
3410 :
3411 : ////////////////////////////////////////////////////////////////////////
3412 : // Collector implementation
3413 : ////////////////////////////////////////////////////////////////////////
3414 :
3415 4 : nsCycleCollector::nsCycleCollector() :
3416 : mActivelyCollecting(false),
3417 : mFreeingSnowWhite(false),
3418 : mScanInProgress(false),
3419 : mCCJSRuntime(nullptr),
3420 : mIncrementalPhase(IdlePhase),
3421 : #ifdef DEBUG
3422 4 : mEventTarget(GetCurrentThreadSerialEventTarget()),
3423 : #endif
3424 : mWhiteNodeCount(0),
3425 : mBeforeUnlinkCB(nullptr),
3426 : mForgetSkippableCB(nullptr),
3427 : mUnmergedNeeded(0),
3428 8 : mMergedInARow(0)
3429 : {
3430 4 : }
3431 :
3432 0 : nsCycleCollector::~nsCycleCollector()
3433 : {
3434 0 : UnregisterWeakMemoryReporter(this);
3435 0 : }
3436 :
3437 : void
3438 4 : nsCycleCollector::SetCCJSRuntime(CycleCollectedJSRuntime* aCCRuntime)
3439 : {
3440 4 : MOZ_RELEASE_ASSERT(!mCCJSRuntime, "Multiple registrations of CycleCollectedJSRuntime in cycle collector");
3441 4 : mCCJSRuntime = aCCRuntime;
3442 :
3443 4 : if (!NS_IsMainThread()) {
3444 1 : return;
3445 : }
3446 :
3447 : // We can't register as a reporter in nsCycleCollector() because that runs
3448 : // before the memory reporter manager is initialized. So we do it here
3449 : // instead.
3450 3 : RegisterWeakMemoryReporter(this);
3451 : }
3452 :
3453 : void
3454 0 : nsCycleCollector::ClearCCJSRuntime()
3455 : {
3456 0 : MOZ_RELEASE_ASSERT(mCCJSRuntime, "Clearing CycleCollectedJSRuntime in cycle collector before a runtime was registered");
3457 0 : mCCJSRuntime = nullptr;
3458 0 : }
3459 :
3460 : #ifdef DEBUG
3461 : static bool
3462 24809 : HasParticipant(void* aPtr, nsCycleCollectionParticipant* aParti)
3463 : {
3464 24809 : if (aParti) {
3465 4365 : return true;
3466 : }
3467 :
3468 : nsXPCOMCycleCollectionParticipant* xcp;
3469 20444 : ToParticipant(static_cast<nsISupports*>(aPtr), &xcp);
3470 20444 : return xcp != nullptr;
3471 : }
3472 : #endif
3473 :
3474 : MOZ_ALWAYS_INLINE void
3475 24809 : nsCycleCollector::Suspect(void* aPtr, nsCycleCollectionParticipant* aParti,
3476 : nsCycleCollectingAutoRefCnt* aRefCnt)
3477 : {
3478 24809 : CheckThreadSafety();
3479 :
3480 : // Don't call AddRef or Release of a CCed object in a Traverse() method.
3481 24809 : MOZ_ASSERT(!mScanInProgress, "Attempted to call Suspect() while a scan was in progress");
3482 :
3483 24809 : if (MOZ_UNLIKELY(mScanInProgress)) {
3484 0 : return;
3485 : }
3486 :
3487 24809 : MOZ_ASSERT(aPtr, "Don't suspect null pointers");
3488 :
3489 24809 : MOZ_ASSERT(HasParticipant(aPtr, aParti),
3490 : "Suspected nsISupports pointer must QI to nsXPCOMCycleCollectionParticipant");
3491 :
3492 24809 : mPurpleBuf.Put(aPtr, aParti, aRefCnt);
3493 : }
3494 :
3495 : void
3496 24819 : nsCycleCollector::CheckThreadSafety()
3497 : {
3498 : #ifdef DEBUG
3499 24819 : MOZ_ASSERT(mEventTarget->IsOnCurrentThread());
3500 : #endif
3501 24819 : }
3502 :
3503 : // The cycle collector uses the mark bitmap to discover what JS objects
3504 : // were reachable only from XPConnect roots that might participate in
3505 : // cycles. We ask the JS context whether we need to force a GC before
3506 : // this CC. It returns true on startup (before the mark bits have been set),
3507 : // and also when UnmarkGray has run out of stack. We also force GCs on shut
3508 : // down to collect cycles involving both DOM and JS.
3509 : void
3510 0 : nsCycleCollector::FixGrayBits(bool aForceGC, TimeLog& aTimeLog)
3511 : {
3512 0 : CheckThreadSafety();
3513 :
3514 0 : if (!mCCJSRuntime) {
3515 0 : return;
3516 : }
3517 :
3518 0 : if (!aForceGC) {
3519 0 : mCCJSRuntime->FixWeakMappingGrayBits();
3520 0 : aTimeLog.Checkpoint("FixWeakMappingGrayBits");
3521 :
3522 0 : bool needGC = !mCCJSRuntime->AreGCGrayBitsValid();
3523 : // Only do a telemetry ping for non-shutdown CCs.
3524 0 : CC_TELEMETRY(_NEED_GC, needGC);
3525 0 : if (!needGC) {
3526 0 : return;
3527 : }
3528 0 : mResults.mForcedGC = true;
3529 : }
3530 :
3531 0 : uint32_t count = 0;
3532 0 : do {
3533 0 : mCCJSRuntime->GarbageCollect(aForceGC ? JS::gcreason::SHUTDOWN_CC :
3534 0 : JS::gcreason::CC_FORCED);
3535 :
3536 0 : mCCJSRuntime->FixWeakMappingGrayBits();
3537 :
3538 : // It's possible that FixWeakMappingGrayBits will hit OOM when unmarking
3539 : // gray and we will have to go round again. The second time there should not
3540 : // be any weak mappings to fix up so the loop body should run at most twice.
3541 0 : MOZ_RELEASE_ASSERT(count++ < 2);
3542 0 : } while (!mCCJSRuntime->AreGCGrayBitsValid());
3543 :
3544 0 : aTimeLog.Checkpoint("FixGrayBits");
3545 : }
3546 :
3547 : bool
3548 0 : nsCycleCollector::IsIncrementalGCInProgress()
3549 : {
3550 0 : return mCCJSRuntime && JS::IsIncrementalGCInProgress(mCCJSRuntime->Runtime());
3551 : }
3552 :
3553 : void
3554 0 : nsCycleCollector::FinishAnyIncrementalGCInProgress()
3555 : {
3556 0 : if (IsIncrementalGCInProgress()) {
3557 0 : NS_WARNING("Finishing incremental GC in progress during CC");
3558 0 : JSContext* cx = CycleCollectedJSContext::Get()->Context();
3559 0 : JS::PrepareForIncrementalGC(cx);
3560 0 : JS::FinishIncrementalGC(cx, JS::gcreason::CC_FORCED);
3561 : }
3562 0 : }
3563 :
3564 : void
3565 0 : nsCycleCollector::CleanupAfterCollection()
3566 : {
3567 0 : TimeLog timeLog;
3568 0 : MOZ_ASSERT(mIncrementalPhase == CleanupPhase);
3569 0 : MOZ_RELEASE_ASSERT(!mScanInProgress);
3570 0 : mGraph.Clear();
3571 0 : timeLog.Checkpoint("CleanupAfterCollection::mGraph.Clear()");
3572 :
3573 : uint32_t interval =
3574 0 : (uint32_t)((TimeStamp::Now() - mCollectionStart).ToMilliseconds());
3575 : #ifdef COLLECT_TIME_DEBUG
3576 : printf("cc: total cycle collector time was %ums in %u slices\n", interval,
3577 : mResults.mNumSlices);
3578 :   printf("cc: visited %u ref counted and %u GCed objects, freed %u ref counted and %u GCed objects",
3579 : mResults.mVisitedRefCounted, mResults.mVisitedGCed,
3580 : mResults.mFreedRefCounted, mResults.mFreedGCed);
3581 : uint32_t numVisited = mResults.mVisitedRefCounted + mResults.mVisitedGCed;
3582 : if (numVisited > 1000) {
3583 : uint32_t numFreed = mResults.mFreedRefCounted + mResults.mFreedGCed;
3584 :     printf(" (%u%%)", 100 * numFreed / numVisited);
3585 : }
3586 : printf(".\ncc: \n");
3587 : #endif
3588 :
3589 0 : CC_TELEMETRY( , interval);
3590 0 : CC_TELEMETRY(_VISITED_REF_COUNTED, mResults.mVisitedRefCounted);
3591 0 : CC_TELEMETRY(_VISITED_GCED, mResults.mVisitedGCed);
3592 0 : CC_TELEMETRY(_COLLECTED, mWhiteNodeCount);
3593 0 : timeLog.Checkpoint("CleanupAfterCollection::telemetry");
3594 :
3595 0 : if (mCCJSRuntime) {
3596 0 : mCCJSRuntime->FinalizeDeferredThings(mResults.mAnyManual
3597 : ? CycleCollectedJSContext::FinalizeNow
3598 0 : : CycleCollectedJSContext::FinalizeIncrementally);
3599 0 : mCCJSRuntime->EndCycleCollectionCallback(mResults);
3600 0 : timeLog.Checkpoint("CleanupAfterCollection::EndCycleCollectionCallback()");
3601 : }
3602 0 : mIncrementalPhase = IdlePhase;
3603 0 : }
3604 :
3605 : void
3606 0 : nsCycleCollector::ShutdownCollect()
3607 : {
3608 0 : FinishAnyIncrementalGCInProgress();
3609 :
3610 0 : SliceBudget unlimitedBudget = SliceBudget::unlimited();
3611 : uint32_t i;
3612 0 : for (i = 0; i < DEFAULT_SHUTDOWN_COLLECTIONS; ++i) {
3613 0 : if (!Collect(ShutdownCC, unlimitedBudget, nullptr)) {
3614 0 : break;
3615 : }
3616 : }
3617 0 : NS_WARNING_ASSERTION(i < NORMAL_SHUTDOWN_COLLECTIONS, "Extra shutdown CC");
3618 0 : }
3619 :
3620 : static void
3621 0 : PrintPhase(const char* aPhase)
3622 : {
3623 : #ifdef DEBUG_PHASES
3624 : printf("cc: begin %s on %s\n", aPhase,
3625 : NS_IsMainThread() ? "mainthread" : "worker");
3626 : #endif
3627 0 : }
3628 :
3629 : bool
3630 0 : nsCycleCollector::Collect(ccType aCCType,
3631 : SliceBudget& aBudget,
3632 : nsICycleCollectorListener* aManualListener,
3633 : bool aPreferShorterSlices)
3634 : {
3635 0 : CheckThreadSafety();
3636 :
3637 : // This can legitimately happen in a few cases. See bug 383651.
3638 0 : if (mActivelyCollecting || mFreeingSnowWhite) {
3639 0 : return false;
3640 : }
3641 0 : mActivelyCollecting = true;
3642 :
3643 0 : MOZ_ASSERT(!IsIncrementalGCInProgress());
3644 :
3645 0 : mozilla::Maybe<mozilla::AutoGlobalTimelineMarker> marker;
3646 0 : if (NS_IsMainThread()) {
3647 0 : marker.emplace("nsCycleCollector::Collect", MarkerStackRequest::NO_STACK);
3648 : }
3649 :
3650 0 : bool startedIdle = IsIdle();
3651 0 : bool collectedAny = false;
3652 :
3653 : // If the CC started idle, it will call BeginCollection, which
3654 : // will do FreeSnowWhite, so it doesn't need to be done here.
3655 0 : if (!startedIdle) {
3656 0 : TimeLog timeLog;
3657 0 : FreeSnowWhite(true);
3658 0 : timeLog.Checkpoint("Collect::FreeSnowWhite");
3659 : }
3660 :
3661 0 : if (aCCType != SliceCC) {
3662 0 : mResults.mAnyManual = true;
3663 : }
3664 :
3665 0 : ++mResults.mNumSlices;
3666 :
3667 0 : bool continueSlice = aBudget.isUnlimited() || !aPreferShorterSlices;
3668 0 : do {
3669 0 : switch (mIncrementalPhase) {
3670 : case IdlePhase:
3671 0 : PrintPhase("BeginCollection");
3672 0 : BeginCollection(aCCType, aManualListener);
3673 0 : break;
3674 : case GraphBuildingPhase:
3675 0 : PrintPhase("MarkRoots");
3676 0 : MarkRoots(aBudget);
3677 :
3678 : // Only continue this slice if we're running synchronously or the
3679 : // next phase will probably be short, to reduce the max pause for this
3680 : // collection.
3681 : // (There's no need to check if we've finished graph building, because
3682 : // if we haven't, we've already exceeded our budget, and will finish
3683 : // this slice anyways.)
3684 0 : continueSlice = aBudget.isUnlimited() ||
3685 0 : (mResults.mNumSlices < 3 && !aPreferShorterSlices);
3686 0 : break;
3687 : case ScanAndCollectWhitePhase:
3688 : // We do ScanRoots and CollectWhite in a single slice to ensure
3689 : // that we won't unlink a live object if a weak reference is
3690 : // promoted to a strong reference after ScanRoots has finished.
3691 : // See bug 926533.
3692 0 : PrintPhase("ScanRoots");
3693 0 : ScanRoots(startedIdle);
3694 0 : PrintPhase("CollectWhite");
3695 0 : collectedAny = CollectWhite();
3696 0 : break;
3697 : case CleanupPhase:
3698 0 : PrintPhase("CleanupAfterCollection");
3699 0 : CleanupAfterCollection();
3700 0 : continueSlice = false;
3701 0 : break;
3702 : }
3703 0 : if (continueSlice) {
3704 : // Force SliceBudget::isOverBudget to check the time.
3705 0 : aBudget.step(SliceBudget::CounterReset);
3706 0 : continueSlice = !aBudget.isOverBudget();
3707 : }
3708 : } while (continueSlice);
3709 :
3710 : // Clear mActivelyCollecting here to ensure that a recursive call to
3711 : // Collect() does something.
3712 0 : mActivelyCollecting = false;
3713 :
3714 0 : if (aCCType != SliceCC && !startedIdle) {
3715 : // We were in the middle of an incremental CC (using its own listener).
3716 : // Somebody has forced a CC, so after having finished out the current CC,
3717 : // run the CC again using the new listener.
3718 0 : MOZ_ASSERT(IsIdle());
3719 0 : if (Collect(aCCType, aBudget, aManualListener)) {
3720 0 : collectedAny = true;
3721 : }
3722 : }
3723 :
3724 0 : MOZ_ASSERT_IF(aCCType != SliceCC, IsIdle());
3725 :
3726 0 : return collectedAny;
3727 : }
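The slice loop in `Collect` drives a small phase machine: Idle, graph building, a combined scan-and-collect-white step (kept in one slice for the bug 926533 weak-reference reason noted above), and cleanup. A minimal standalone sketch of that progression, omitting the budget and re-entry handling:

```cpp
#include <cassert>

// Tiny sketch of the incremental phase machine driven by Collect():
// each slice advances one phase; CleanupPhase finishes the collection
// and returns the collector to idle. (The real loop may run several
// phases per slice, governed by the SliceBudget.)
enum Phase { IdlePhase, GraphBuildingPhase, ScanAndCollectWhitePhase,
             CleanupPhase };

struct PhaseMachine {
  Phase mPhase = IdlePhase;

  // Runs one slice; returns true once the collection has finished.
  bool Slice()
  {
    switch (mPhase) {
      case IdlePhase:                mPhase = GraphBuildingPhase; break;
      case GraphBuildingPhase:       mPhase = ScanAndCollectWhitePhase; break;
      case ScanAndCollectWhitePhase: mPhase = CleanupPhase; break;
      case CleanupPhase:             mPhase = IdlePhase; return true;
    }
    return false;
  }
};
```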
3728 :
3729 : // Any JS objects we have in the graph could die when we GC, but we
3730 : // don't want to abandon the current CC, because the graph contains
3731 : // information about purple roots. So we synchronously finish off
3732 : // the current CC.
3733 : void
3734 1 : nsCycleCollector::PrepareForGarbageCollection()
3735 : {
3736 1 : if (IsIdle()) {
3737 1 : MOZ_ASSERT(mGraph.IsEmpty(), "Non-empty graph when idle");
3738 1 : MOZ_ASSERT(!mBuilder, "Non-null builder when idle");
3739 1 : if (mJSPurpleBuffer) {
3740 0 : mJSPurpleBuffer->Destroy();
3741 : }
3742 1 : return;
3743 : }
3744 :
3745 0 : FinishAnyCurrentCollection();
3746 : }
3747 :
3748 : void
3749 0 : nsCycleCollector::FinishAnyCurrentCollection()
3750 : {
3751 0 : if (IsIdle()) {
3752 0 : return;
3753 : }
3754 :
3755 0 : SliceBudget unlimitedBudget = SliceBudget::unlimited();
3756 0 : PrintPhase("FinishAnyCurrentCollection");
3757 : // Use SliceCC because we only want to finish the CC in progress.
3758 0 : Collect(SliceCC, unlimitedBudget, nullptr);
3759 :
3760 : // It is only okay for Collect() to have failed to finish the
3761 : // current CC if we're reentering the CC at some point past
3762 : // graph building. We need to be past the point where the CC will
3763 : // look at JS objects so that it is safe to GC.
3764 0 : MOZ_ASSERT(IsIdle() ||
3765 : (mActivelyCollecting && mIncrementalPhase != GraphBuildingPhase),
3766 : "Reentered CC during graph building");
3767 : }
3768 :
3769 : // Don't merge too many times in a row, and do at least a minimum
3770 : // number of unmerged CCs in a row.
3771 : static const uint32_t kMinConsecutiveUnmerged = 3;
3772 : static const uint32_t kMaxConsecutiveMerged = 3;
3773 :
3774 : bool
3775 0 : nsCycleCollector::ShouldMergeZones(ccType aCCType)
3776 : {
3777 0 : if (!mCCJSRuntime) {
3778 0 : return false;
3779 : }
3780 :
3781 0 : MOZ_ASSERT(mUnmergedNeeded <= kMinConsecutiveUnmerged);
3782 0 : MOZ_ASSERT(mMergedInARow <= kMaxConsecutiveMerged);
3783 :
3784 0 : if (mMergedInARow == kMaxConsecutiveMerged) {
3785 0 : MOZ_ASSERT(mUnmergedNeeded == 0);
3786 0 : mUnmergedNeeded = kMinConsecutiveUnmerged;
3787 : }
3788 :
3789 0 : if (mUnmergedNeeded > 0) {
3790 0 : mUnmergedNeeded--;
3791 0 : mMergedInARow = 0;
3792 0 : return false;
3793 : }
3794 :
3795 0 : if (aCCType == SliceCC && mCCJSRuntime->UsefulToMergeZones()) {
3796 0 : mMergedInARow++;
3797 0 : return true;
3798 : } else {
3799 0 : mMergedInARow = 0;
3800 0 : return false;
3801 : }
3802 : }
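The throttle in `ShouldMergeZones` can be exercised on its own. This sketch keeps the same two counters and constants but replaces the JS-runtime query with a plain flag, so the cadence is easy to test: after `kMaxConsecutiveMerged` merged CCs in a row, the next `kMinConsecutiveUnmerged` CCs are forced to be unmerged.

```cpp
#include <cassert>
#include <cstdint>

// Standalone sketch of the merge throttle: at most kMaxConsecutiveMerged
// merged CCs in a row, then at least kMinConsecutiveUnmerged unmerged
// ones. (The real version asks mCCJSRuntime->UsefulToMergeZones(); here
// that is just a boolean parameter.)
static const uint32_t kMinConsecutiveUnmerged = 3;
static const uint32_t kMaxConsecutiveMerged = 3;

struct MergeThrottle {
  uint32_t mUnmergedNeeded = 0;
  uint32_t mMergedInARow = 0;

  bool ShouldMerge(bool aUsefulToMerge)
  {
    if (mMergedInARow == kMaxConsecutiveMerged) {
      mUnmergedNeeded = kMinConsecutiveUnmerged;
    }
    if (mUnmergedNeeded > 0) {
      mUnmergedNeeded--;
      mMergedInARow = 0;
      return false;
    }
    if (aUsefulToMerge) {
      mMergedInARow++;
      return true;
    }
    mMergedInARow = 0;
    return false;
  }
};
```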
3803 :
3804 : void
3805 0 : nsCycleCollector::BeginCollection(ccType aCCType,
3806 : nsICycleCollectorListener* aManualListener)
3807 : {
3808 0 : TimeLog timeLog;
3809 0 : MOZ_ASSERT(IsIdle());
3810 0 : MOZ_RELEASE_ASSERT(!mScanInProgress);
3811 :
3812 0 : mCollectionStart = TimeStamp::Now();
3813 :
3814 0 : if (mCCJSRuntime) {
3815 0 : mCCJSRuntime->BeginCycleCollectionCallback();
3816 0 : timeLog.Checkpoint("BeginCycleCollectionCallback()");
3817 : }
3818 :
3819 0 : bool isShutdown = (aCCType == ShutdownCC);
3820 :
3821 : // Set up the listener for this CC.
3822 0 : MOZ_ASSERT_IF(isShutdown, !aManualListener);
3823 0 : MOZ_ASSERT(!mLogger, "Forgot to clear a previous listener?");
3824 :
3825 0 : if (aManualListener) {
3826 0 : aManualListener->AsLogger(getter_AddRefs(mLogger));
3827 : }
3828 :
3829 0 : aManualListener = nullptr;
3830 0 : if (!mLogger && mParams.LogThisCC(isShutdown)) {
3831 0 : mLogger = new nsCycleCollectorLogger();
3832 0 : if (mParams.AllTracesThisCC(isShutdown)) {
3833 0 : mLogger->SetAllTraces();
3834 : }
3835 : }
3836 :
3837 : // On a WantAllTraces CC, force a synchronous global GC to prevent
3838 : // hijinks from ForgetSkippable and compartmental GCs.
3839 0 : bool forceGC = isShutdown || (mLogger && mLogger->IsAllTraces());
3840 :
3841 : // BeginCycleCollectionCallback() might have started an IGC, and we need
3842 : // to finish it before we run FixGrayBits.
3843 0 : FinishAnyIncrementalGCInProgress();
3844 0 : timeLog.Checkpoint("Pre-FixGrayBits finish IGC");
3845 :
3846 0 : FixGrayBits(forceGC, timeLog);
3847 0 : if (mCCJSRuntime) {
3848 0 : mCCJSRuntime->CheckGrayBits();
3849 : }
3850 :
3851 0 : FreeSnowWhite(true);
3852 0 : timeLog.Checkpoint("BeginCollection FreeSnowWhite");
3853 :
3854 0 : if (mLogger && NS_FAILED(mLogger->Begin())) {
3855 0 : mLogger = nullptr;
3856 : }
3857 :
3858 : // FreeSnowWhite could potentially have started an IGC, which we need
3859 : // to finish before we look at any JS roots.
3860 0 : FinishAnyIncrementalGCInProgress();
3861 0 : timeLog.Checkpoint("Post-FreeSnowWhite finish IGC");
3862 :
3863 : // Set up the data structures for building the graph.
3864 0 : JS::AutoAssertNoGC nogc;
3865 0 : JS::AutoEnterCycleCollection autocc(mCCJSRuntime->Runtime());
3866 0 : mGraph.Init();
3867 0 : mResults.Init();
3868 0 : mResults.mAnyManual = (aCCType != SliceCC);
3869 0 : bool mergeZones = ShouldMergeZones(aCCType);
3870 0 : mResults.mMergedZones = mergeZones;
3871 :
3872 0 : MOZ_ASSERT(!mBuilder, "Forgot to clear mBuilder");
3873 : mBuilder = new CCGraphBuilder(mGraph, mResults, mCCJSRuntime, mLogger,
3874 0 : mergeZones);
3875 0 : timeLog.Checkpoint("BeginCollection prepare graph builder");
3876 :
3877 0 : if (mCCJSRuntime) {
3878 0 : mCCJSRuntime->TraverseRoots(*mBuilder);
3879 0 : timeLog.Checkpoint("mJSContext->TraverseRoots()");
3880 : }
3881 :
3882 0 : AutoRestore<bool> ar(mScanInProgress);
3883 0 : MOZ_RELEASE_ASSERT(!mScanInProgress);
3884 0 : mScanInProgress = true;
3885 0 : mPurpleBuf.SelectPointers(*mBuilder);
3886 0 : timeLog.Checkpoint("SelectPointers()");
3887 :
3888 0 : mBuilder->DoneAddingRoots();
3889 0 : mIncrementalPhase = GraphBuildingPhase;
3890 0 : }
3891 :
3892 : uint32_t
3893 3 : nsCycleCollector::SuspectedCount()
3894 : {
3895 3 : CheckThreadSafety();
3896 3 : return mPurpleBuf.Count();
3897 : }
3898 :
3899 : void
3900 0 : nsCycleCollector::Shutdown(bool aDoCollect)
3901 : {
3902 0 : CheckThreadSafety();
3903 :
3904 :   // Always delete snow-white objects.
3905 0 : FreeSnowWhite(true);
3906 :
3907 0 : if (aDoCollect) {
3908 0 : ShutdownCollect();
3909 : }
3910 0 : }
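`Shutdown` always frees "snow-white" objects (per the header comment: refcount already zero, just awaiting deletion) even when the final collection is skipped. A minimal standalone sketch of that freeing pass, with stand-in types rather than the real purple buffer:

```cpp
#include <cassert>
#include <vector>

// Sketch of snow-white freeing: walk suspected entries and delete any whose
// refcount has already dropped to zero ("snow-white" in the header comment's
// terminology). Entry is a hypothetical stand-in, not the real purple buffer.
struct Entry {
  int mRefCnt;
  bool mFreed = false;
};

// Returns true if anything was freed, mirroring FreeSnowWhite's bool result.
bool SketchFreeSnowWhite(std::vector<Entry>& aPurpleBuf) {
  bool freedAny = false;
  for (Entry& e : aPurpleBuf) {
    if (e.mRefCnt == 0 && !e.mFreed) {
      e.mFreed = true;  // stands in for DeleteCycleCollectable()
      freedAny = true;
    }
  }
  return freedAny;
}
```

Entries with a nonzero refcount stay in the buffer; only the already-dead ones are reaped, which is why this is safe to run unconditionally at shutdown.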
3911 :
3912 : void
3913 1788 : nsCycleCollector::RemoveObjectFromGraph(void* aObj)
3914 : {
3915 1788 : if (IsIdle()) {
3916 1788 : return;
3917 : }
3918 :
3919 0 : mGraph.RemoveObjectFromMap(aObj);
3920 : }
3921 :
3922 : void
3923 0 : nsCycleCollector::SizeOfIncludingThis(mozilla::MallocSizeOf aMallocSizeOf,
3924 : size_t* aObjectSize,
3925 : size_t* aGraphSize,
3926 : size_t* aPurpleBufferSize) const
3927 : {
3928 0 : *aObjectSize = aMallocSizeOf(this);
3929 :
3930 0 : *aGraphSize = mGraph.SizeOfExcludingThis(aMallocSizeOf);
3931 :
3932 0 : *aPurpleBufferSize = mPurpleBuf.SizeOfExcludingThis(aMallocSizeOf);
3933 :
3934 : // These fields are deliberately not measured:
3935 : // - mCCJSRuntime: because it's non-owning and measured by JS reporters.
3936 : // - mParams: because it only contains scalars.
3937 0 : }
3938 :
3939 : JSPurpleBuffer*
3940 0 : nsCycleCollector::GetJSPurpleBuffer()
3941 : {
3942 0 : if (!mJSPurpleBuffer) {
3943 : // The Release call here confuses the GC analysis.
3944 0 : JS::AutoSuppressGCAnalysis nogc;
3945 :     // JSPurpleBuffer keeps itself alive, but we need to create it in
3946 :     // such a way that it ends up in the normal purple buffer. That
3947 :     // happens when the RefPtr goes out of scope and calls Release.
3948 0 : RefPtr<JSPurpleBuffer> pb = new JSPurpleBuffer(mJSPurpleBuffer);
3949 : }
3950 0 : return mJSPurpleBuffer;
3951 : }
3952 :
3953 : ////////////////////////////////////////////////////////////////////////
3954 : // Module public API (exported in nsCycleCollector.h)
3955 : // Just functions that redirect into the singleton, once it's built.
3956 : ////////////////////////////////////////////////////////////////////////
3957 :
3958 : void
3959 4 : nsCycleCollector_registerJSContext(CycleCollectedJSContext* aCx)
3960 : {
3961 4 : CollectorData* data = sCollectorData.get();
3962 :
3963 : // We should have started the cycle collector by now.
3964 4 : MOZ_ASSERT(data);
3965 4 : MOZ_ASSERT(data->mCollector);
3966 : // But we shouldn't already have a context.
3967 4 : MOZ_ASSERT(!data->mContext);
3968 :
3969 4 : data->mContext = aCx;
3970 4 : data->mCollector->SetCCJSRuntime(aCx->Runtime());
3971 4 : }
3972 :
3973 : void
3974 0 : nsCycleCollector_forgetJSContext()
3975 : {
3976 0 : CollectorData* data = sCollectorData.get();
3977 :
3978 : // We should have started the cycle collector by now.
3979 0 : MOZ_ASSERT(data);
3980 : // And we shouldn't have already forgotten our context.
3981 0 : MOZ_ASSERT(data->mContext);
3982 :
3983 :   // But it may have shut down already.
3984 0 : if (data->mCollector) {
3985 0 : data->mCollector->ClearCCJSRuntime();
3986 0 : data->mContext = nullptr;
3987 : } else {
3988 0 : data->mContext = nullptr;
3989 0 : delete data;
3990 0 : sCollectorData.set(nullptr);
3991 : }
3992 0 : }
3993 :
3994 : /* static */ CycleCollectedJSContext*
3995 204660 : CycleCollectedJSContext::Get()
3996 : {
3997 204660 : CollectorData* data = sCollectorData.get();
3998 204660 : if (data) {
3999 204660 : return data->mContext;
4000 : }
4001 0 : return nullptr;
4002 : }
4003 :
4004 : MOZ_NEVER_INLINE static void
4005 0 : SuspectAfterShutdown(void* aPtr, nsCycleCollectionParticipant* aCp,
4006 : nsCycleCollectingAutoRefCnt* aRefCnt,
4007 : bool* aShouldDelete)
4008 : {
4009 0 : if (aRefCnt->get() == 0) {
4010 0 : if (!aShouldDelete) {
4011 : // The CC is shut down, so we can't be in the middle of an ICC.
4012 0 : CanonicalizeParticipant(&aPtr, &aCp);
4013 0 : aRefCnt->stabilizeForDeletion();
4014 0 : aCp->DeleteCycleCollectable(aPtr);
4015 : } else {
4016 0 : *aShouldDelete = true;
4017 : }
4018 : } else {
4019 : // Make sure we'll get called again.
4020 0 : aRefCnt->RemoveFromPurpleBuffer();
4021 : }
4022 0 : }
4023 :
4024 : void
4025 24809 : NS_CycleCollectorSuspect3(void* aPtr, nsCycleCollectionParticipant* aCp,
4026 : nsCycleCollectingAutoRefCnt* aRefCnt,
4027 : bool* aShouldDelete)
4028 : {
4029 24809 : CollectorData* data = sCollectorData.get();
4030 :
4031 : // We should have started the cycle collector by now.
4032 24809 : MOZ_ASSERT(data);
4033 :
4034 24809 : if (MOZ_LIKELY(data->mCollector)) {
4035 24809 : data->mCollector->Suspect(aPtr, aCp, aRefCnt);
4036 24809 : return;
4037 : }
4038 0 : SuspectAfterShutdown(aPtr, aCp, aRefCnt, aShouldDelete);
4039 : }
4040 :
4041 : uint32_t
4042 3 : nsCycleCollector_suspectedCount()
4043 : {
4044 3 : CollectorData* data = sCollectorData.get();
4045 :
4046 : // We should have started the cycle collector by now.
4047 3 : MOZ_ASSERT(data);
4048 :
4049 3 : if (!data->mCollector) {
4050 0 : return 0;
4051 : }
4052 :
4053 3 : return data->mCollector->SuspectedCount();
4054 : }
4055 :
4056 : bool
4057 3 : nsCycleCollector_init()
4058 : {
4059 : #ifdef DEBUG
4060 : static bool sInitialized;
4061 :
4062 3 : MOZ_ASSERT(NS_IsMainThread(), "Wrong thread!");
4063 3 : MOZ_ASSERT(!sInitialized, "Called twice!?");
4064 3 : sInitialized = true;
4065 : #endif
4066 :
4067 3 : return sCollectorData.init();
4068 : }
4069 :
4070 : static nsCycleCollector* gMainThreadCollector;
4071 :
4072 : void
4073 4 : nsCycleCollector_startup()
4074 : {
4075 4 : if (sCollectorData.get()) {
4076 0 : MOZ_CRASH();
4077 : }
4078 :
4079 4 : CollectorData* data = new CollectorData;
4080 4 : data->mCollector = new nsCycleCollector();
4081 4 : data->mContext = nullptr;
4082 :
4083 4 : sCollectorData.set(data);
4084 :
4085 4 : if (NS_IsMainThread()) {
4086 3 : MOZ_ASSERT(!gMainThreadCollector);
4087 3 : gMainThreadCollector = data->mCollector;
4088 : }
4089 4 : }
4090 :
4091 : void
4092 0 : nsCycleCollector_registerNonPrimaryContext(CycleCollectedJSContext* aCx)
4093 : {
4094 0 : if (sCollectorData.get()) {
4095 0 : MOZ_CRASH();
4096 : }
4097 :
4098 0 : MOZ_ASSERT(gMainThreadCollector);
4099 :
4100 0 : CollectorData* data = new CollectorData;
4101 :
4102 0 : data->mCollector = gMainThreadCollector;
4103 0 : data->mContext = aCx;
4104 :
4105 0 : sCollectorData.set(data);
4106 0 : }
4107 :
4108 : void
4109 0 : nsCycleCollector_forgetNonPrimaryContext()
4110 : {
4111 0 : CollectorData* data = sCollectorData.get();
4112 :
4113 : // We should have started the cycle collector by now.
4114 0 : MOZ_ASSERT(data);
4115 : // And we shouldn't have already forgotten our context.
4116 0 : MOZ_ASSERT(data->mContext);
4117 : // We should not have shut down the cycle collector yet.
4118 0 : MOZ_ASSERT(data->mCollector);
4119 :
4120 0 : delete data;
4121 0 : sCollectorData.set(nullptr);
4122 0 : }
4123 :
4124 : void
4125 3 : nsCycleCollector_setBeforeUnlinkCallback(CC_BeforeUnlinkCallback aCB)
4126 : {
4127 3 : CollectorData* data = sCollectorData.get();
4128 :
4129 : // We should have started the cycle collector by now.
4130 3 : MOZ_ASSERT(data);
4131 3 : MOZ_ASSERT(data->mCollector);
4132 :
4133 3 : data->mCollector->SetBeforeUnlinkCallback(aCB);
4134 3 : }
4135 :
4136 : void
4137 3 : nsCycleCollector_setForgetSkippableCallback(CC_ForgetSkippableCallback aCB)
4138 : {
4139 3 : CollectorData* data = sCollectorData.get();
4140 :
4141 : // We should have started the cycle collector by now.
4142 3 : MOZ_ASSERT(data);
4143 3 : MOZ_ASSERT(data->mCollector);
4144 :
4145 3 : data->mCollector->SetForgetSkippableCallback(aCB);
4146 3 : }
4147 :
4148 : void
4149 0 : nsCycleCollector_forgetSkippable(js::SliceBudget& aBudget,
4150 : bool aRemoveChildlessNodes,
4151 : bool aAsyncSnowWhiteFreeing)
4152 : {
4153 0 : CollectorData* data = sCollectorData.get();
4154 :
4155 : // We should have started the cycle collector by now.
4156 0 : MOZ_ASSERT(data);
4157 0 : MOZ_ASSERT(data->mCollector);
4158 :
4159 0 : AUTO_PROFILER_LABEL("nsCycleCollector_forgetSkippable", CC);
4160 :
4161 0 : TimeLog timeLog;
4162 0 : data->mCollector->ForgetSkippable(aBudget,
4163 : aRemoveChildlessNodes,
4164 0 : aAsyncSnowWhiteFreeing);
4165 0 : timeLog.Checkpoint("ForgetSkippable()");
4166 0 : }
4167 :
4168 : void
4169 3 : nsCycleCollector_dispatchDeferredDeletion(bool aContinuation, bool aPurge)
4170 : {
4171 3 : CycleCollectedJSRuntime* rt = CycleCollectedJSRuntime::Get();
4172 3 : if (rt) {
4173 3 : rt->DispatchDeferredDeletion(aContinuation, aPurge);
4174 : }
4175 3 : }
4176 :
4177 : bool
4178 1 : nsCycleCollector_doDeferredDeletion()
4179 : {
4180 1 : CollectorData* data = sCollectorData.get();
4181 :
4182 : // We should have started the cycle collector by now.
4183 1 : MOZ_ASSERT(data);
4184 1 : MOZ_ASSERT(data->mCollector);
4185 1 : MOZ_ASSERT(data->mContext);
4186 :
4187 1 : return data->mCollector->FreeSnowWhite(false);
4188 : }
4189 :
4190 : already_AddRefed<nsICycleCollectorLogSink>
4191 0 : nsCycleCollector_createLogSink()
4192 : {
4193 0 : nsCOMPtr<nsICycleCollectorLogSink> sink = new nsCycleCollectorLogSinkToFile();
4194 0 : return sink.forget();
4195 : }
4196 :
4197 : void
4198 0 : nsCycleCollector_collect(nsICycleCollectorListener* aManualListener)
4199 : {
4200 0 : CollectorData* data = sCollectorData.get();
4201 :
4202 : // We should have started the cycle collector by now.
4203 0 : MOZ_ASSERT(data);
4204 0 : MOZ_ASSERT(data->mCollector);
4205 :
4206 0 : AUTO_PROFILER_LABEL("nsCycleCollector_collect", CC);
4207 :
4208 0 : SliceBudget unlimitedBudget = SliceBudget::unlimited();
4209 0 : data->mCollector->Collect(ManualCC, unlimitedBudget, aManualListener);
4210 0 : }
4211 :
4212 : void
4213 0 : nsCycleCollector_collectSlice(SliceBudget& budget,
4214 : bool aPreferShorterSlices)
4215 : {
4216 0 : CollectorData* data = sCollectorData.get();
4217 :
4218 : // We should have started the cycle collector by now.
4219 0 : MOZ_ASSERT(data);
4220 0 : MOZ_ASSERT(data->mCollector);
4221 :
4222 0 : AUTO_PROFILER_LABEL("nsCycleCollector_collectSlice", CC);
4223 :
4224 0 : data->mCollector->Collect(SliceCC, budget, nullptr, aPreferShorterSlices);
4225 0 : }
4226 :
4227 : void
4228 1 : nsCycleCollector_prepareForGarbageCollection()
4229 : {
4230 1 : CollectorData* data = sCollectorData.get();
4231 :
4232 1 : MOZ_ASSERT(data);
4233 :
4234 1 : if (!data->mCollector) {
4235 0 : return;
4236 : }
4237 :
4238 1 : data->mCollector->PrepareForGarbageCollection();
4239 : }
4240 :
4241 : void
4242 0 : nsCycleCollector_finishAnyCurrentCollection()
4243 : {
4244 0 : CollectorData* data = sCollectorData.get();
4245 :
4246 0 : MOZ_ASSERT(data);
4247 :
4248 0 : if (!data->mCollector) {
4249 0 : return;
4250 : }
4251 :
4252 0 : data->mCollector->FinishAnyCurrentCollection();
4253 : }
4254 :
4255 : void
4256 0 : nsCycleCollector_shutdown(bool aDoCollect)
4257 : {
4258 0 : CollectorData* data = sCollectorData.get();
4259 :
4260 0 : if (data) {
4261 0 : MOZ_ASSERT(data->mCollector);
4262 0 : AUTO_PROFILER_LABEL("nsCycleCollector_shutdown", CC);
4263 :
4264 0 : if (gMainThreadCollector == data->mCollector) {
4265 0 : gMainThreadCollector = nullptr;
4266 : }
4267 0 : data->mCollector->Shutdown(aDoCollect);
4268 0 : data->mCollector = nullptr;
4269 0 : if (data->mContext) {
4270 : // Run any remaining tasks that may have been enqueued via
4271 : // RunInStableState during the final cycle collection.
4272 0 : data->mContext->ProcessStableStateQueue();
4273 : }
4274 0 : if (!data->mContext) {
4275 0 : delete data;
4276 0 : sCollectorData.set(nullptr);
4277 : }
4278 : }
4279 0 : }