Modern JavaScript offers a lovely suite of useful array methods that can loop over arrays in elegant and functional ways. Of course, they usually come with overhead, since they’re either an abstraction on top of the built-in iterator methods, or they create an in-memory copy of the array to do their operations.
I decided to make a little micro benchmark testing common higher-order array methods against a basic iterative `for` loop. TL;DR: the `for` loop wins, but the results illustrate just how much faster it is.
Setup
Since this test is using Node, making use of `argv` allowed for arguments to be passed via the command line as an array:
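Something like the following minimal sketch (the variable names match the description below):

```javascript
// Destructure the command-line arguments; the first two entries
// are ignored via the placeholder names _ and __
const [_, __, _size, _fill] = process.argv
```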
From the Node.js documentation: the `process.argv` property returns an array containing the command-line arguments passed when the Node.js process was launched.
The first two arguments passed in by `argv` are the path of the Node.js executable and the path of the current file being run, hence I used `_` and `__` to destructure them into variables that won’t be used. The size of the array (`_size`) and the contents of each element (`_fill`) are what will be used to run the micro benchmarks.
I measured the performance using a simple function that sums all the array elements, and used the following methods for testing:
- `Array.reduce` method
- `Array.map` method
- `Array.forEach` method
- `for..of` loop
- `for` loop
Instrumenting
I chose to use `performance.now` rather than `console.time` to measure the execution time of each function. In my view, `performance.now` is more modern, and its output can be stored in a variable, which is preferable for doing calculations.
Whenever `performance.now` is called, it returns a timestamp as a float. I stored two timestamps (`start` and `end`), and then subtracted one from the other to measure how long the operation took to execute:
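As a bare-bones sketch of that pattern:

```javascript
const start = performance.now()

// ...the operation being measured...

const end = performance.now()

console.log(end - start) // elapsed time in milliseconds, as a float
```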
Since `start` and `end` are within the function scope, the same measurement structure can be used across all the methods being tested. To achieve this in a DRY manner, I created an `elapsedTime` function that does the calculation and also truncates the output to a readable integer value in milliseconds:
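A minimal sketch of that helper (the use of `Math.trunc` is an assumption; any form of truncation would do):

```javascript
// Returns the elapsed time between two performance.now() timestamps,
// truncated to a whole number of milliseconds
const elapsedTime = (start, end) => Math.trunc(end - start)
```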
Creating Tests
The first test function covers `Array.reduce`; including the instrumentation code above, it looks like:
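A sketch of what that might have looked like (the function name `testReduce` and the `new Array(size).fill(fill)` setup are assumptions based on the descriptions in this post):

```javascript
const testReduce = (size, fill) => {
  const array = new Array(size).fill(fill)
  const start = performance.now()
  const sum = array.reduce((total, value) => total + value, 0)
  const end = performance.now()
  console.log(`Array.reduce: ${elapsedTime(start, end)}ms (sum: ${sum})`)
}
```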
The code for the `Array.map` method:
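A comparable sketch; since `map` isn’t a natural fit for summing, the accumulator lives outside the callback here (again, an assumption):

```javascript
const testMap = (size, fill) => {
  const array = new Array(size).fill(fill)
  let sum = 0
  const start = performance.now()
  array.map((value) => (sum += value)) // map is used purely for iteration
  const end = performance.now()
  console.log(`Array.map: ${elapsedTime(start, end)}ms (sum: ${sum})`)
}
```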
The code for the `Array.forEach` method:
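Sketched the same way:

```javascript
const testForEach = (size, fill) => {
  const array = new Array(size).fill(fill)
  let sum = 0
  const start = performance.now()
  array.forEach((value) => (sum += value))
  const end = performance.now()
  console.log(`Array.forEach: ${elapsedTime(start, end)}ms (sum: ${sum})`)
}
```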
The code for a slightly lower-level `for..of` loop:
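Again as a sketch:

```javascript
const testForOf = (size, fill) => {
  const array = new Array(size).fill(fill)
  let sum = 0
  const start = performance.now()
  for (const value of array) {
    sum += value
  }
  const end = performance.now()
  console.log(`for..of: ${elapsedTime(start, end)}ms (sum: ${sum})`)
}
```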
And finally, the good old `for` loop:
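A sketch of that as well:

```javascript
const testFor = (size, fill) => {
  const array = new Array(size).fill(fill)
  let sum = 0
  const start = performance.now()
  for (let i = 0; i < array.length; i++) {
    sum += array[i]
  }
  const end = performance.now()
  console.log(`for: ${elapsedTime(start, end)}ms (sum: ${sum})`)
}
```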
The `console.log` in each test was there to check that all the tests actually produce the same output.
The last thing I did was to create a function to automatically run the tests. Here’s the code:
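A sketch of that runner, using the hypothetical test-function names from the sketches above:

```javascript
const runTests = (_size, _fill) => {
  const size = Number(_size) // CLI arguments arrive as strings
  const fill = Number(_fill)
  console.log(`Array of ${Number(_size).toLocaleString()} elements, each of value ${fill}`)
  testReduce(size, fill)
  testMap(size, fill)
  testForEach(size, fill)
  testForOf(size, fill)
  testFor(size, fill)
}
```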
The function `runTests` will accept the `_size` and `_fill` from the command line (via `argv`) and pass them to each testing function. Since Node’s command-line arguments are technically strings, they need to be converted to numbers via `Number(_size)` and `Number(_fill)` respectively.
You’ll notice `Number(_size).toLocaleString()` is interpolated. The `toLocaleString` method makes the array’s size more readable; if the number is `25000000`, for example, it’ll be shown as `25,000,000`.
The final code for the entire file looks like:
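In skeleton form, assembled from the sketches above:

```javascript
const [_, __, _size, _fill] = process.argv

const elapsedTime = (start, end) => Math.trunc(end - start)

// ...the five test functions sketched above...

const runTests = (_size, _fill) => {
  // ...as sketched above...
}

runTests(_size, _fill)
```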
Where `runTests(_size, _fill)` accepts the CLI arguments and passes them to the tests.
Running Tests
To run against custom data, the command is `node index.js 5000 50`, where `5000` is the size of the array and `50` is the integer value of each element.
However, to make the tests more automated, I created a `test` command in `package.json` that includes a warmup run for the garbage collector and then runs the tests against a few different array sizes. The command is:
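Something along these lines (the warmup run’s arguments are purely a guess; the four main runs match the list below):

```json
{
  "scripts": {
    "test": "node index.js 1000 10 && node index.js 1000000 100 && node index.js 10000000 250 && node index.js 12345678 1337 && node index.js 25000000 500"
  }
}
```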
This runs the tests against the following data:
- 1,000,000 Elements of Value 100
- 10,000,000 Elements of Value 250
- 12,345,678 Elements of Value 1337
- 25,000,000 Elements of Value 500
Trying to test array sizes above 25 million elements sends the CPU usage above 90%, causing the CodeSandbox MicroVM to hang.
Here’s an example of test output from within CodeSandbox:
Charting Tests
Below are the comparisons in charts of the different methods and array sizes tested:
1,000,000 Elements of Value 100
10,000,000 Elements of Value 250
12,345,678 Elements of Value 1337
25,000,000 Elements of Value 500
As expected, the `for` loop is more performant than the competitors by a large margin (usually 8x-10x across all sample sizes). It’s of course a micro benchmark, and running an operation on an array of 25 million elements isn’t something found in the real world.
While it’s easier (and more intuitive, in my opinion) to use modern array methods like `reduce` and `map`, it’s also a good idea, if you have the time, to go back and refactor your code to be a bit more imperative and lower-level where possible, and then to test real-world performance using tools such as Chrome’s V8 profiler.
Links
- CodeSandbox Link (for testing) - codesandbox.io/p/github/paramdeo/testing-imperative-loops-versus-higher-order-functions-in-javascript/master
- GitHub Repo (for cloning/testing) - github.com/paramdeo/testing-imperative-loops-versus-higher-order-functions-in-javascript
- `process.argv` in Node.js - nodejs.org/docs/latest/api/process.html#processargv