Does my NodeJS application leak memory? – 5

Mark-Sweep, Mark-Compact

In the last post I talked about Scavenges, semi-spaces, new and old spaces, new space to old space promotion, garbage collection algorithms and tracing garbage collection events. In this installment I will discuss the “Mark-Sweep, Mark-Compact” garbage collector.

Mark-Sweep and Mark-Compact operate in two phases: marking (identifying all the live objects) and sweeping or compacting (reclaiming the dead objects and, if needed, reducing fragmentation). Together they are commonly referred to as the mark-and-sweep collector.

Marking involves locating and processing all the live objects. An object is considered live if it is referenced through some chain of pointers from a root object or from another live object. Put another way, an object is live if and only if it can be reached by following a chain of references from a set of known roots (global variables, for example).

Essentially, the marking process involves scanning, locating (discovering) and then processing (marking) the live objects. The garbage collector considers all unmarked (unprocessed) objects disposable, or dead. Marking is done using the tri-color abstraction, where white, grey and black represent the marking states. In reality these colors are bits in a marking bitmap.

Here, white objects have not yet been located (discovered) by the marker, grey objects have been discovered but nothing is yet known about their neighbors, and black objects have been fully processed and are considered live. To assist in this effort, a marker queue is used to track the marking progress.
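Before walking through an example, here is a minimal sketch of the idea in JavaScript. It is purely illustrative (V8 tracks the colors as bits in a marking bitmap and walks raw heap objects, not a JavaScript Map), and the recursion stack plays the role of the marker queue. The walkthrough that follows traces exactly this procedure.

const color = new Map();   // objects absent from the map are white

function markFrom(obj) {
  if (obj === null || typeof obj !== 'object' || color.has(obj)) return;
  color.set(obj, 'grey');                 // discovered, neighbors not yet scanned
  for (const child of Object.values(obj)) {
    markFrom(child);                      // depth-first into each neighbor
  }
  color.set(obj, 'black');                // all neighbors processed: live
}

function mark(roots) {
  color.clear();
  roots.forEach(markFrom);
  return color;   // anything still missing from the map is white, i.e. garbage
}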

The marking process uses a depth-first search to scan the object graph and is best explained with an example. Consider: const a = {b: {c: [1, 2], d: [3, 4]}}; This can be written as:

const a = {};
a.b = {};
a.b.c = [1, 2];
a.b.d = [3, 4];

#1. The initial object graph. All objects are white as they have yet to be scanned. The marker queue is empty – Q = [ ].

#2. The scan starts with the root, which is the variable “a”. It is marked grey and pushed to the queue. The grey indicates that “a” has been discovered (visited, located) but has still to be fully processed; in other words, its neighbors (children), if any, have yet to be discovered. We now have Q = [a].


#3. Continuing with the depth-first search (following edge a.b), object “b” is discovered, marked grey and pushed onto the marker queue. We now have Q = [a, b].

#4. The depth-first search continues and object “c” is discovered (edge b.c). It too is marked grey and pushed to the queue – Q = [a, b, c]. At this point the search stops, as there is nothing beyond “c” (c is a leaf).

#5. As the search cannot continue, object “c” is popped from the queue and marked black: it has now been fully processed. In other words, it has been visited and it has no neighbors.

At this point Q = [a, b].

#6. Now that object “c” has been processed, the search continues by backtracking to the parent object “b”. Continuing, it discovers object “d”, a child of “b”. Object “d” is marked grey and pushed to the queue. We now have Q = [a, b, d].

#7. As the search cannot continue, object “d” is popped from the queue and marked black: it has now been fully processed. In other words, it has been visited and it has no neighbors or children. At this point Q = [a, b].

#8. The search continues by backtracking to object “b”. As all its children (neighbors) have been processed, it is considered fully processed, so it is popped and marked black. Q = [a].


#9. The search continues by backtracking to object “a”. As all its children (neighbors) have been processed, it is considered fully processed, so it is popped and marked black. We now have Q = [ ]. The queue is empty as all objects have been processed. In this case all objects are live and nothing will be garbage collected.

Assume that at some point in the program’s execution object “d” is assigned null, so a = {b: {c: [1, 2], d: null}}. This can be written as: a.b.d = null;

This is how the marking will progress:

As “d” is detached from the tree it is never visited or processed. Object “d” remains white after the marking process is complete and will be garbage collected.
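You can observe this yourself with a small experiment. The sketch below is mine, not from V8 or the example files in this series; it assumes a Node version that supports WeakRef (14.6 or later) and requires the --expose-gc flag so that global.gc() is available.

// Run with: node --expose-gc check-d.js   (the file name is just an example)
'use strict';

const a = { b: { c: [1, 2], d: [3, 4] } };
const ref = new WeakRef(a.b.d);   // observe the array without keeping it alive

a.b.d = null;                     // detach the array from the object graph

// A WeakRef target is kept alive until the current job finishes, so wait for
// the next turn of the event loop before forcing a full collection.
setTimeout(() => {
  global.gc();                    // full mark-sweep/compact GC (--expose-gc only)
  console.log(ref.deref());       // very likely undefined: the array was never
                                  // marked and has been reclaimed
}, 0);

If deref() still returns the array, the engine has simply chosen not to reclaim it yet; the point is that nothing in the marking walk can reach it any more.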

Once marking is complete, V8 triggers a sweep event when it believes memory will be required. The sweeper follows a simple process: it destroys all the unmarked (white) objects and optionally compacts the fragmented live objects.

Note that the old space can be very large; the default is in excess of 1 GB. It would take the collector a long time (hundreds of milliseconds) to mark a large number of live objects, and that would hurt performance because, as mentioned before, this is stop-the-world GC. To avoid this, an “incremental marker” is used: the live objects are marked in small increments or steps, with no step taking more than about 5 ms, giving the application as much time as possible to run and stay responsive. By the same reasoning, it would take the sweeper a long time to sweep large swaths of memory, so lazy sweeping is used: only as much sweeping as is needed to satisfy the current allocation requests is done, and the rest is postponed until later.
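As an illustration only (this is not V8’s code; the function names and data structures are invented for the sketch), incremental marking can be pictured as a budgeted version of the marking sketch above: scan grey objects from a worklist until a small time budget is spent, then yield back to the application and resume later.

function startMarking(roots) {
  const color = new Map();                 // absent from the map means white
  const worklist = [];                     // grey objects waiting to be scanned
  for (const root of roots) {
    color.set(root, 'grey');
    worklist.push(root);
  }
  return { color, worklist };
}

function markStep({ color, worklist }, budgetMs = 5) {
  const deadline = Date.now() + budgetMs;
  while (worklist.length > 0 && Date.now() < deadline) {
    const obj = worklist.pop();                          // take a grey object
    for (const child of Object.values(obj)) {
      if (child !== null && typeof child === 'object' && !color.has(child)) {
        color.set(child, 'grey');                        // newly discovered
        worklist.push(child);
      }
    }
    color.set(obj, 'black');                             // fully processed
  }
  return worklist.length === 0;                          // true when marking is done
}

The engine interleaves such steps with normal execution. Lazy sweeping follows the same principle: free only as much memory as the current allocation request needs and postpone the rest.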

Now that the theory is out of the way and you have seen how marking and sweeping take place, let’s trace through the logs of two contrived examples. To illustrate the point quickly, we will run both programs in a limited memory space by capping the new space at 1 MB and the old space at 20 MB.

Example 1:

In the first example the program pushes a string of roughly 1,000,000 characters into the array a.b.c on every iteration. No memory is ever freed. As the old space is limited to a little more than 20 MB, the application will eventually run out of memory.

/*
Example-1
Run with: node --trace_gc --trace_incremental_marking --max_semi_space_size=1 --max_old_space_size=20 example-1.js
A contrived example to gradually exhaust memory
*/

"use strict";
const a = {b: {c: []}};
let i = 0;

function aString (s) {
  let text = '';
  for (let i = 0; i < 10000; i++) {
    //string of 100 characters
    text += 'Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the ';
  }
  s.push(text); //push in 1,000,000 bytes of data
}

//Push in 1,000,000 bytes of data till the system
//runs out of memory
setInterval(function () {
  aString(a.b.c);
}, 500);

We will use the command:

node --trace_gc --trace_gc_verbose --trace_incremental_marking --max_semi_space_size=1 --max_old_space_size=20 example-1.js

where:
trace_gc – produces the basic GC trace, discussed in the previous post
max_semi_space_size – the maximum size of a semi-space in the new space, set here to 1 MB (remember that the new space consists of two semi-spaces)
max_old_space_size – the maximum size of the old space, set here to 20 MB
trace_incremental_marking – traces the progress of the incremental marker

The following is the trace with the commentary interspersed:

node --trace_gc --trace_incremental_marking --max_semi_space_size=1 --max_old_space_size=20 example-1.js

V8 initialization - we ignore this
[35841:0x103000000] 72 ms: Scavenge 2.1 (6.0) -> 2.1 (7.0) MB, 0.8 / 0.0 ms [allocation failure].
[35841:0x103000000] 73 ms: Scavenge 2.1 (7.0) -> 2.1 (8.0) MB, 0.7 / 0.0 ms [allocation failure].

Scavenge GC events. With every iteration roughly 1,000,000 bytes are pushed, and as the new space is limited to 1 MB it fills up almost immediately, promoting all data to the old space. The scavenge events are triggered by an "allocation failure": there is no new-space room for the incoming data and nothing is being deleted. Over a span of about 7 seconds the size of the live objects grows from 4.0 MB to 8.9 MB, while the allocated heap grows from 11 MB to 16 MB.

[35841:0x103000000] 640 ms: Scavenge 4.0 (11.0) -> 3.7 (11.0) MB, 0.9 / 0.0 ms [allocation failure].
[35841:0x103000000] 643 ms: Scavenge 4.0 (11.0) -> 3.9 (12.0) MB, 1.6 / 0.0 ms [allocation failure].
[35841:0x103000000] 1666 ms: Scavenge 4.7 (12.0) -> 4.6 (12.0) MB, 1.7 / 0.0 ms [allocation failure].
[35841:0x103000000] 2173 ms: Scavenge 4.9 (12.0) -> 4.9 (12.0) MB, 2.4 / 0.0 ms [allocation failure].
[35841:0x103000000] 3181 ms: Scavenge 5.6 (12.0) -> 5.6 (13.0) MB, 2.2 / 0.0 ms [allocation failure].
[35841:0x103000000] 3183 ms: Scavenge 5.9 (13.0) -> 5.9 (13.0) MB, 2.0 / 0.0 ms [allocation failure].
[35841:0x103000000] 4191 ms: Scavenge 6.5 (13.0) -> 6.5 (14.0) MB, 2.0 / 0.0 ms [allocation failure].
[35841:0x103000000] 4698 ms: Scavenge 6.9 (14.0) -> 6.9 (14.0) MB, 1.8 / 0.0 ms [allocation failure].
[35841:0x103000000] 5709 ms: Scavenge 7.5 (14.0) -> 7.5 (15.0) MB, 1.9 / 0.0 ms [allocation failure].
[35841:0x103000000] 6213 ms: Scavenge 7.9 (15.0) -> 7.9 (15.0) MB, 2.3 / 0.0 ms [allocation failure].
[35841:0x103000000] 6718 ms: Scavenge 8.5 (15.0) -> 8.5 (16.0) MB, 2.0 / 0.0 ms [allocation failure].
[35841:0x103000000] 7223 ms: Scavenge 8.9 (16.0) -> 8.9 (16.0) MB, 2.4 / 0.0 ms [allocation failure].

V8 needs to free memory for allocation. All the promoted data is now accumulating in the old space. V8 triggers incremental marking for a possible sweep GC event.
[IncrementalMarking] Start (GC epilogue)
[IncrementalMarking] Start marking
[IncrementalMarking] Running

The scavenge event happens in the new space while marking happens in the old space. So these two events can happen concurrently.
[IncrementalMarking] Scavenge during marking.
The inflow of data from the new space is being promoted to the old space at a steady pace, and the incremental marker decides to speed up its rate of marking to keep up with the inflow.

[35841:0x103000000] Increasing marking speed to 3 due to high promotion rate
In this scavenge event the marker marked memory in 9 steps totalling 1.2 ms, roughly 0.13 ms per step. Note that these steps are counted since the last GC event, which was at 7223 ms.

[35841:0x103000000] 8236 ms: Scavenge 9.5 (16.0) -> 9.5 (17.0) MB, 2.1 / 0.0 ms (+ 1.2 ms in 9 steps since last GC) [allocation failure].
Marking and scavenging continue...
[IncrementalMarking] Scavenge during marking.
[35841:0x103000000] 8743 ms: Scavenge 9.9 (17.0) -> 9.9 (17.0) MB, 2.2 / 0.0 ms (+ 1.1 ms in 6 steps since last GC) [allocation failure].
[IncrementalMarking] Scavenge during marking.
[35841:0x103000000] 9251 ms: Scavenge 10.5 (17.0) -> 10.5 (18.0) MB, 2.1 / 0.0 ms (+ 2.1 ms in 10 steps since last GC) [allocation failure].
[IncrementalMarking] Scavenge during marking.
[35841:0x103000000] 9755 ms: Scavenge 10.8 (18.0) -> 10.8 (18.0) MB, 2.4 / 0.0 ms (+ 1.5 ms in 6 steps since last GC) [allocation failure].
[IncrementalMarking] Scavenge during marking.
[35841:0x103000000] 10766 ms: Scavenge 11.5 (18.0) -> 11.5 (19.0) MB, 2.2 / 0.0 ms (+ 1.9 ms in 10 steps since last GC) [allocation failure].
[IncrementalMarking] Scavenge during marking.
[35841:0x103000000] 11274 ms: Scavenge 11.8 (19.0) -> 11.8 (19.0) MB, 2.1 / 0.0 ms (+ 1.2 ms in 6 steps since last GC) [allocation failure].
[IncrementalMarking] Scavenge during marking.
[35841:0x103000000] 12287 ms: Scavenge 12.4 (19.0) -> 12.4 (20.0) MB, 2.2 / 0.0 ms (+ 2.1 ms in 10 steps since last GC) [allocation failure].

[IncrementalMarking] requesting finalization of incremental marking.
[IncrementalMarking] (Incremental marking task: finalize incremental marking).
[IncrementalMarking] Finalize incrementally round 0, spent 0 ms, marking progress 0.

In the marking phase objects can be in three possible states:
White - The GC has not found this object yet.
Grey - The GC has found this object but has not yet scanned the objects it references (its neighbors).
Black - The GC has found this object and has scanned all of its neighbors. These objects are fully processed.

[IncrementalMarking] Black allocation started
[IncrementalMarking] Stopping.
[IncrementalMarking] Black allocation finished

The old space is filling up. Based on its heuristics, V8 decides that a scavenge will not free enough memory and triggers mark-sweep events instead, hence the reason [scavenge might not succeed].
In this mark-sweep entry notice that the incremental steps are reported since the start of marking (which began after the 7223 ms scavenge), not since the last GC event as was the case previously. It also reports that the biggest step took 0.4 ms.

[35841:0x103000000] 12800 ms: Mark-sweep 12.8 (20.0) -> 12.6 (20.0) MB, 6.2 / 0.0 ms (+ 13.7 ms in 69 steps since start of marking, biggest step 0.4 ms) [allocation failure] [scavenge might not succeed].

Object size is now at 16.9 MB. Remember that the maximum old space is 20 MB, so we are close to running out. Notice also that the GC pause durations have gone up, from 2.4 ms for the scavenge at 7223 ms to 17.8 ms for the mark-sweep at 18455 ms.

[35841:0x103000000] 13313 ms: Mark-sweep 13.2 (20.0) -> 13.2 (21.0) MB, 11.8 / 0.0 ms [allocation failure] [scavenge might not succeed].
[35841:0x103000000] 13830 ms: Mark-sweep 13.6 (21.0) -> 13.6 (21.0) MB, 12.2 / 0.0 ms [allocation failure] [scavenge might not succeed].
[35841:0x103000000] 14849 ms: Mark-sweep 14.2 (21.0) -> 14.0 (22.0) MB, 12.1 / 0.0 ms [allocation failure] [scavenge might not succeed].
[35841:0x103000000] 15366 ms: Mark-sweep 14.3 (22.0) -> 14.3 (22.0) MB, 12.3 / 0.0 ms [allocation failure] [scavenge might not succeed].
[35841:0x103000000] 15883 ms: Mark-sweep 14.9 (22.0) -> 14.9 (23.0) MB, 12.4 / 0.0 ms [allocation failure] [scavenge might not succeed].
[35841:0x103000000] 16399 ms: Mark-sweep 15.3 (23.0) -> 15.3 (24.0) MB, 11.7 / 0.0 ms [allocation failure] [scavenge might not succeed].
[35841:0x103000000] 17417 ms: Mark-sweep 15.9 (24.0) -> 15.9 (24.0) MB, 12.2 / 0.0 ms [allocation failure] [scavenge might not succeed].
[35841:0x103000000] 17933 ms: Mark-sweep 16.3 (24.0) -> 16.3 (24.0) MB, 12.9 / 0.0 ms [allocation failure] [scavenge might not succeed].
[35841:0x103000000] 18455 ms: Mark-sweep 16.9 (24.0) -> 16.9 (24.0) MB, 17.8 / 0.0 ms [allocation failure] [scavenge might not succeed].
Finally the system runs out of space. The last few GC events are repeated before V8 aborts:

16399 ms: Mark-sweep 15.3 (23.0) -> 15.3 (24.0) MB, 11.7 / 0.0 ms [allocation failure] [scavenge might not succeed].
17417 ms: Mark-sweep 15.9 (24.0) -> 15.9 (24.0) MB, 12.2 / 0.0 ms [allocation failure] [scavenge might not succeed].
17933 ms: Mark-sweep 16.3 (24.0) -> 16.3 (24.0) MB, 12.9 / 0.0 ms [allocation failure] [scavenge might not succeed].
18455 ms: Mark-sweep 16.9 (24.0) -> 16.9 (24.0) MB, 17.8 / 0.0 ms [allocation failure] [scavenge might not succeed].

Cannot get stack trace in GC.
FATAL ERROR: MarkCompactCollector: semi-space copy, fallback in old gen Allocation failed - JavaScript heap out of memory

To summarize:

  1. First come the scavenge events. No data is freed, so everything is promoted to the old space.
  2. Data keeps being promoted at a high rate, which pushes V8 to start the incremental marker in preparation for a possible sweep event.
  3. As no data is being freed, all objects are live and the mark-sweep events reclaim nothing.
  4. Eventually the system runs out of memory.

This was an extreme example designed to show the various GC events. Let us now consider a saner example, one in which we actually free memory, so we can see the sweeping in action. Every ten iterations or so the array a.b.c is emptied.

/*
Example - 2
Run with: node --trace_gc --trace_incremental_marking --max_semi_space_size=1 --max_old_space_size=20 example-2.js
reset when 10,000,000 bytes have been accumulated
*/

"use strict";

const a = {b: {c: []}};

let i = 0;

function aString (s) {
  let text = '';
  for (let i = 0; i < 10000; i++) {
    //string of 100 characters
    text += 'Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the ';
  }
  s.push(text);
}

setInterval(function () {
  aString(a.b.c);

  //reset when 10,000,000 bytes have been pushed into the array
  if (i === 10) {
    i = 0;
    a.b.c = [];
  }

  i++;
}, 500);

We will use the same command line parameters as before:

node --trace_gc --trace_gc_verbose --trace_incremental_marking --max_semi_space_size=1 --max_old_space_size=20 example-2.js

In the trace below I have removed some of the traces to improve clarity.

node --trace_gc --trace_incremental_marking --max_semi_space_size=1 --max_old_space_size=20 example-2.js

V8 initialization - we ignore this
[40225:0x103000000] 20 ms: Scavenge 2.1 (6.0) -> 2.1 (7.0) MB, 0.7 / 0.0 ms [allocation failure].
[40225:0x103000000] 21 ms: Scavenge 2.1 (7.0) -> 2.1 (8.0) MB, 0.7 / 0.0 ms [allocation failure].

Scavenge events. All data is promoted as no space is available in the new space
[40225:0x103000000] 562 ms: Scavenge 4.0 (11.0) -> 3.6 (11.0) MB, 0.8 / 0.0 ms [allocation failure].
----more scavenges--------
[40225:0x103000000] 7144 ms: Scavenge 8.8 (16.0) -> 8.8 (16.0) MB, 2.2 / 0.0 ms [allocation failure].

Marking started in preparation for a sweep GC event.
[IncrementalMarking] Start (GC epilogue)
[IncrementalMarking] Start marking
[IncrementalMarking] Running

Marking and scavenges are interspersed
[IncrementalMarking] Scavenge during marking.

All data is promoted, and the marking speed is increased to keep up with the inflow of promoted data
[40225:0x103000000] Increasing marking speed to 3 due to high promotion rate
Marking and Scavenge happen concurrently.
[40225:0x103000000] 8155 ms: Scavenge 9.4 (16.0) -> 9.4 (17.0) MB, 2.3 / 0.0 ms (+ 1.0 ms in 8 steps since last GC) [allocation failure].

----more marking and scavenges--------
[40225:0x103000000] 10690 ms: Scavenge 11.4 (18.0) -> 11.4 (19.0) MB, 2.3 / 0.0 ms (+ 1.6 ms in 9 steps since last GC) [allocation failure].
[IncrementalMarking] requesting finalization of incremental marking.
[IncrementalMarking] Black allocation finished

Marking is complete and a sweep is triggered. The size of live objects drops from 11.7 MB to 7.7 MB in 2.9 ms. V8 sweeps lazily, that is,
it does only as much sweeping (cleanup) as it can in the allotted time so as not to hold up the application.
[40225:0x103000000] 11197 ms: Mark-sweep 11.7 (19.0) -> 7.7 (19.0) MB, 2.9 / 0.0 ms (+ 8.2 ms in 46 steps since start of marking,
biggest step 0.4 ms) [GC interrupt] [GC in old space requested].
Back to scavenges, only promotions. Note the increase in object size
[40225:0x103000000] 11702 ms: Scavenge 8.3 (19.0) -> 8.3 (19.0) MB, 1.6 / 0.0 ms [allocation failure].

----more scavenges--------
[40225:0x103000000] 17247 ms: Scavenge 12.3 (19.0) -> 12.3 (20.0) MB, 1.5 / 0.0 ms [allocation failure].
A second mark-sweep event is triggered. The size of live objects is reduced from 12.6 MB to 5.0 MB by lazy sweeping in 5.9 ms.
The time has increased compared to the last sweep because more memory is swept. In a real, live application 5.9 ms is very high;
during this time the application is paused.
[40225:0x103000000] 17758 ms: Mark-sweep 12.6 (20.0) -> 5.0 (18.0) MB, 5.9 / 0.0 ms [allocation failure] [scavenge might not succeed].

Back to scavenges. Size of objects increased from 5.6MB to 11.5MB
[40225:0x103000000] 18263 ms: Scavenge 5.6 (18.0) -> 5.6 (18.0) MB, 1.6 / 0.0 ms [allocation failure].

----more scavenges here-----
[40225:0x103000000] 26335 ms: Scavenge 11.5 (19.0) -> 11.5 (19.0) MB, 1.9 / 0.0 ms [allocation failure].
Marking started in preparation for another mark-sweep GC
[IncrementalMarking] Start (idle)
[IncrementalMarking] Start sweeping.
[IncrementalMarking] Start marking
[IncrementalMarking] Running

Compared to the previous marking cycle, the reason given for speeding up marking is different this time - "because of low space left"
[40225:0x103000000] Speed up marking because of low space left
[40225:0x103000000] Marking speed increased to 3
[IncrementalMarking] Scavenge during marking.

GC's desperation continues, progressively increasing the marking speed
[40225:0x103000000] Speed up marking because of low space left
[40225:0x103000000] Marking speed increased to 6
----more marking--------
[IncrementalMarking] (GC interrupt: finalize incremental marking).
[IncrementalMarking] Finalize incrementally round 0, spent 0 ms, marking progress 1.
[IncrementalMarking] Complete (normal).
[40225:0x103000000] Speed up marking because of low space left
[40225:0x103000000] Postponing speeding up marking until marking starts
[IncrementalMarking] Stopping.

Marking done. A mark-sweep is triggered and lazy sweeping runs: object size drops from 12.3 MB to 4.7 MB in 3.0 ms
[40225:0x103000000] 27352 ms: Mark-sweep 12.3 (19.0) -> 4.7 (18.0) MB, 3.0 / 0.0 ms (+ 4.2 ms in 8 steps since start of marking,
biggest step 1.4 ms) [GC interrupt] [GC in old space requested].

Cycle continues
[40225:0x103000000] 27860 ms: Scavenge 5.2 (18.0) -> 5.2 (19.0) MB, 2.7 / 0.0 ms [allocation failure].
[40225:0x103000000] 28364 ms: Scavenge 5.7 (19.0) -> 5.7 (19.0) MB, 1.9 / 0.0 ms [allocation failure].

By now the typical GC cycle should be apparent:

  1. V8 initialization
  2. Scavenges start in the new space; data gets promoted to the old space
  3. As the promoted data grows, V8 starts up the incremental marker
  4. Marking in the old space is interspersed with the scavenge events
  5. Marking completes and triggers the sweeper (mark-sweep)
  6. The lazy sweeper runs for a short duration to free up memory.

The idea is not to dig too deep into these traces but to get an overview of how your app is performing, with one important thing in mind: during a GC event the application is blocked and cannot process any requests until the GC is over. One crude way to see such pauses from inside the application is sketched below.
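Because GC pauses block the event loop, you can get a rough, code-level signal of them without any external tooling. The snippet below is my own sketch (the 100 ms interval and 25 ms threshold are arbitrary choices): it measures how late a repeating timer fires, and large, recurring lag is often a symptom of long GC pauses or other blocking work.

// Rough event-loop lag monitor – a sketch, not a profiler.
const INTERVAL_MS = 100;
let last = Date.now();

setInterval(() => {
  const now = Date.now();
  const lag = now - last - INTERVAL_MS;   // how late this tick fired
  if (lag > 25) {                         // arbitrary threshold for the demo
    console.warn(`event loop blocked for ~${lag} ms`);
  }
  last = now;
}, INTERVAL_MS);

Newer Node versions also ship perf_hooks.monitorEventLoopDelay, which reports the same kind of lag with proper histograms.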

If in the above example we were to reset after every 100,000 bytes, there would be few, if any, mark-sweep events, as everything would be handled in the new space and no data would be promoted. As scavenges are fast, the pauses would be very short. By the same token, you can also increase the size of the new space to allow for more scavenges. Given this knowledge you now have a way to check whether your app is leaking and/or underperforming; a quick in-process complement to the traces is sketched below. In my next post I will provide more realistic examples.
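If you want a quick in-process complement to the GC traces, process.memoryUsage() (a standard Node API) reports the heap statistics directly. A minimal sketch, with an arbitrary 10-second interval:

// Periodically log heap usage alongside the --trace_gc output.
setInterval(() => {
  const { rss, heapTotal, heapUsed } = process.memoryUsage();
  const mb = (n) => (n / 1024 / 1024).toFixed(1);
  console.log(`rss=${mb(rss)}MB heapTotal=${mb(heapTotal)}MB heapUsed=${mb(heapUsed)}MB`);
}, 10000);

A heapUsed value that keeps climbing across many mark-sweep cycles is the same signal the traces above give you: the amount of live data is growing.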


Does my NodeJS application leak memory? – 1

A Gentle Introduction

A journey of a thousand miles begins with a single step – Lao Tzu

One of the key requirements for writing software for emergency management is consistent performance. Every time user feedback and/or testing reveals performance issues, the question that always comes to my mind is whether the application is leaking memory or whether it is something else. The last thing an emergency software developer wants to hear is that the alerts did not get to the recipients in time or that emergency personnel experienced sluggish performance.

Simply put, a memory leak is when an application can no longer efficiently manage its memory resources, resulting either in a crash or in sluggish performance. Put differently, a memory leak occurs when memory that is no longer needed is not released for reuse. Depending on the size and frequency of the leak, an application can either crash or behave erratically. While all platforms and programming languages are susceptible to memory leaks, I will be restricting my scope to NodeJS/V8.

Memory leaks in the front-end (client) code can go unnoticed for a long time or may never be detected, because the user may shut down the browser or refresh the page before a potential leak causes a problem. This is not the case when you are writing real-time backend server applications for messaging, authentication and the Internet of Things. A server issue affects all of its clients, not just one user, and it is not practical to restart the server every time there is an issue.

The go-to response of a developer is to scour the internet and typically end up examining heap dumps or watching memory growth. Without an understanding of the theory behind memory leaks and garbage collection, this exercise usually ends in frustration. For example, pure memory growth over some period of time does not necessarily mean that there is a leak.

I went through the same frustrations as most of my fellow developers and decided that I had to find a simpler way to check for leaks. This was very important to me, as all my work involves developing NodeJS backend server apps.

One of the things we tend to ignore is the power of logs, both in terms of analyzing them and of implementing them in our code. Depending on the type of log, the vitals of a server app can be reported in real time, providing warning of an upcoming problem, be it a leak or a security threat.

There are plenty of log analysis, management and monitoring tools available, but each creates a learning curve for a specific product that may not be around in the future. Many also do not fit the budget of a small startup, and many free open-source tools have difficulty keeping up with the evolution of NodeJS/V8.

In my quest to find something that did not depend on any external product and did not require code modification, I decided to stick with understanding the garbage collection trace events of V8/NodeJS. My journey took me through the V8 source code and thousands of lines of garbage collection traces, with a quick detour into the world of Linux internals.

In the series of posts that follow I will describe how I use the V8 logs to look for leaks, starting with some basic information on memory management.