I decided to make a little micro benchmark testing common higher-order array methods against a basic iterative `for` loop. It's no surprise that the `for` loop wins, but the results illustrate just how much faster it is.
Since this test is using Node, making use of `process.argv` allowed arguments to be passed in via the command line as an array.
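The original snippet isn't reproduced here, but a destructuring along these lines would do it (the underscore variable names are assumptions):

```js
// process.argv: [path to the Node.js executable, path to this file, ...user arguments]
const [__, ___, _size, _fill] = process.argv;
```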
From the Node.js documentation, the `process.argv` property returns an array containing the command-line arguments passed when the Node.js process was launched.
The first two arguments passed in by `process.argv` are the path of the Node.js executable and the path of the current file being run, hence I used `__` to destructure them into variables that won't be used.
The size (`_size`) of the array and the contents of each element (`_fill`) are what will be used to run the micro benchmarks.
I measured the performance using a simple function that sums all the array elements, and used the methods covered below for testing.
I chose to use `performance.now` rather than `console.time` to measure the execution time of each function. In my view `performance.now` is more modern, and its output can be stored in a variable, which is preferable for doing calculations.
When `performance.now` is called, it returns a timestamp as a float. I stored two timestamps (`start` and `end`), and then took the difference to measure how long the operation took to execute.
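In outline, the measurement structure looks something like this (a sketch rather than the original code):

```js
const { performance } = require("node:perf_hooks"); // also available as a global in recent Node versions

const array = new Array(1_000_000).fill(100);

const start = performance.now(); // timestamp in milliseconds, as a float
const sum = array.reduce((total, value) => total + value, 0); // the operation being measured
const end = performance.now();

console.log(sum, end - start); // the difference is the elapsed time in milliseconds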
Since `start` and `end` are within the function scope, the same measurement structure can be used across all the methods being tested. To achieve this in a DRY manner, I created an `elapsedTime` function that does the calculation and also truncates the output to a readable integer value in milliseconds.
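A minimal version of such a helper (the original implementation isn't shown, so this is an assumption):

```js
// Difference between two performance.now() timestamps, truncated to whole milliseconds
const elapsedTime = (start, end) => Math.trunc(end - start);
```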
The first test function uses `Array.reduce`, wrapped in the instrumentation code described above.
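A minimal sketch of what that might look like (the original listing isn't reproduced here, so the function name and details are assumptions):

```js
const reduceTest = (size, fill) => {
  const array = new Array(size).fill(fill);

  const start = performance.now();
  const sum = array.reduce((total, value) => total + value, 0);
  const end = performance.now();

  // Logging the sum makes it easy to confirm every test produces the same result
  console.log(`reduce: ${elapsedTime(start, end)}ms (sum: ${sum})`);
};
```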
The code for the `map` test follows the same pattern.
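A sketch under the same assumptions, with `map` used purely for iteration while a local variable accumulates the sum:

```js
const mapTest = (size, fill) => {
  const array = new Array(size).fill(fill);
  let sum = 0;

  const start = performance.now();
  array.map((value) => (sum += value)); // the mapped array is discarded
  const end = performance.now();

  console.log(`map: ${elapsedTime(start, end)}ms (sum: ${sum})`);
};
```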
The code for the `forEach` test is nearly identical.
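Sketched the same way:

```js
const forEachTest = (size, fill) => {
  const array = new Array(size).fill(fill);
  let sum = 0;

  const start = performance.now();
  array.forEach((value) => (sum += value));
  const end = performance.now();

  console.log(`forEach: ${elapsedTime(start, end)}ms (sum: ${sum})`);
};
```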
Next is a slightly lower-level `for...of` loop.
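Again as a sketch:

```js
const forOfTest = (size, fill) => {
  const array = new Array(size).fill(fill);
  let sum = 0;

  const start = performance.now();
  for (const value of array) {
    sum += value;
  }
  const end = performance.now();

  console.log(`for...of: ${elapsedTime(start, end)}ms (sum: ${sum})`);
};
```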
And finally, the good old `for` loop.
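A sketch of the classic loop:

```js
const forLoopTest = (size, fill) => {
  const array = new Array(size).fill(fill);
  let sum = 0;

  const start = performance.now();
  for (let i = 0; i < array.length; i++) {
    sum += array[i];
  }
  const end = performance.now();

  console.log(`for: ${elapsedTime(start, end)}ms (sum: ${sum})`);
};
```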
The `console.log` in each test was to check that all the tests actually produce the same output.
The last thing I did was to create a function to automatically run the tests.
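The original runner isn't reproduced here, but a sketch consistent with the description below (the function and test names are assumptions) could be:

```js
const runTests = (_size, _fill) => {
  // CLI arguments arrive as strings, so convert them to numbers before testing
  const size = Number(_size);
  const fill = Number(_fill);

  console.log(`Testing ${Number(_size).toLocaleString()} elements of value ${fill}`);

  reduceTest(size, fill);
  mapTest(size, fill);
  forEachTest(size, fill);
  forOfTest(size, fill);
  forLoopTest(size, fill);
};

runTests(_size, _fill);
```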
`runTests` will accept `_size` and `_fill` from the command line (via `process.argv`) and pass them to each testing function.
Since Node's command-line arguments are technically strings, they need to be converted to numbers via `Number`, for example where `Number(_size).toLocaleString()` is interpolated. The `toLocaleString` method ensures the array's size is more readable: a value of `25000000` will be shown as `25,000,000`.
The final code for the entire file simply combines the pieces above; `runTests(_size, _fill)` accepts the CLI arguments and passes them to the tests.
To run against custom data, the command is `node index.js 5000 50`, where `5000` is the size of the array and `50` is the integer value of each element.
However, to make the tests more automated, I created a `test` command in `package.json` that includes a warmup run for the garbage collector and then runs the tests against a few different array sizes.
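The original script isn't reproduced here, but a chained set of runs along these lines would match the sizes listed below (the warmup arguments are an assumption):

```json
{
  "scripts": {
    "test": "node index.js 1000 1 && node index.js 1000000 100 && node index.js 10000000 250 && node index.js 12345678 1337 && node index.js 25000000 500"
  }
}
```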
This runs the tests against the following data:
- 1,000,000 Elements of Value 100
- 10,000,000 Elements of Value 250
- 12,345,678 Elements of Value 1337
- 25,000,000 Elements of Value 500
Trying to test array sizes above 25 million elements sends the CPU usage above 90%, causing the CodeSandbox MicroVM to hang.
Here’s an example of test output from within CodeSandbox:
Below are the comparisons in charts of the different methods and array sizes tested:
- 1,000,000 Elements of Value 100
- 10,000,000 Elements of Value 250
- 12,345,678 Elements of Value 1337
- 25,000,000 Elements of Value 500
As expected, the `for` loop is more performant than the competitors by a large margin (usually 8x-10x across all sample sizes). It is of course a micro benchmark, and running an operation on an array of 25 million elements isn't something you'd find in the real world.
While it's easier (and more intuitive, in my opinion) to use modern array methods like `map`, it's also a good idea, if you have the time, to go back and refactor your code to be a bit more imperative and lower-level where possible, and then test the real-world performance using tools such as Chrome's V8 profiler.
process.argv in Node.js - nodejs.org/docs/latest/api/process.html#processargv