It looks like your "load average" is computing something very different from the traditional Unix "load average". If I'm reading right, yours measures what percentage of the time the loop spent sleeping waiting for I/O, taken over the last 60 ticks of a 1-second timer (so generally slightly longer than 60 seconds). The traditional Unix load average is an exponentially weighted moving average of the length of the run queue.
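To make the contrast concrete, here's a rough Python sketch of both calculations. The names and the 5-second sampling interval are my own choices, and the second function is just my guess at what your metric is doing, but the EWMA recurrence is roughly what the kernel does for the 1-minute figure:

```python
import math

# Unix-style 1-minute load: exponentially weighted moving average of the
# run-queue length, sampled every SAMPLE_INTERVAL seconds (the names and
# the sampling interval here are illustrative, not from your code).
SAMPLE_INTERVAL = 5.0
DECAY = math.exp(-SAMPLE_INTERVAL / 60.0)

def update_unix_load(load_avg, runnable_tasks):
    """Fold one run-queue sample into the EWMA; note it can exceed 1.0."""
    return load_avg * DECAY + runnable_tasks * (1.0 - DECAY)

# My guess at your metric: 1 minus the fraction of wall-clock time the
# loop spent asleep in its I/O wait, over a window of ~60 one-second ticks.
def busy_fraction(sleep_time_per_tick, tick_lengths):
    """Saturates at 1.0 by construction, however overloaded the loop is."""
    return 1.0 - sum(sleep_time_per_tick) / sum(tick_lengths)
```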
Is one of those definitions better for your goal of detecting when to shed load? I don't know. But calling them the same thing is pretty confusing :-). The Unix version also has the nice property that it can actually go above 1; yours can't distinguish a service that's at exactly 100% of capacity and barely keeping up from one that's at 200% of capacity and melting down. But for load shedding maybe you always want your tripwire to be below that point anyway.
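For what it's worth, here's a toy sketch of what I mean about the tripwire; the 0.8 threshold is just a number I made up:

```python
# Toy load-shedding check (the threshold is a made-up example value).
# Because the idle-fraction metric saturates at 1.0, the trigger has to
# sit comfortably below 1.0 to fire before the loop is fully pegged.
SHED_THRESHOLD = 0.8

def should_shed(busy_fraction: float) -> bool:
    """Reject new work once the measured busy fraction crosses the line."""
    return busy_fraction >= SHED_THRESHOLD
```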
More broadly we might ask: what's the best possible metric for this purpose, and how do we judge? A nice thing about the JavaScript library you mention is that scheduling delay is a real thing that directly impacts quality of service; it's more of an "end-to-end" measure, in a sense. Of course, if you really want an end-to-end measure, you can do things like instrument your actual logic and see how fast you're replying to HTTP requests or whatever. That's even more valid, but it creates complications because some requests are supposed to take longer than others, etc. I don't know which design goals are important for real operations.
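If you did want to experiment with the scheduling-delay approach in Python, a minimal sketch might look like this; the interval, the function name, and the reporting hook are all my assumptions, not anything from your code:

```python
import asyncio
import time

# Sketch of the scheduling-delay approach: ask the loop to wake us up
# after a known interval and measure how late the wakeup actually is.
CHECK_INTERVAL = 0.5  # seconds; made-up value, tune to taste

async def monitor_scheduling_delay(report):
    while True:
        start = time.monotonic()
        await asyncio.sleep(CHECK_INTERVAL)
        # Anything beyond CHECK_INTERVAL is time spent waiting for the
        # loop to get around to running us again, i.e. scheduling delay.
        lag = time.monotonic() - start - CHECK_INTERVAL
        report(max(0.0, lag))

# Usage (inside a running loop):
#   asyncio.create_task(monitor_scheduling_delay(
#       lambda lag: print(f"loop lag: {lag * 1000:.1f} ms")))
```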