JIT is great. Don't get me wrong. It's what makes current JS run as fast as it does. But when you're reaching for super micro optimizations, it's a blessing but very much also a curse. The problem is that it's a black box and you're chasing minute gains. That makes it very hard to predict whether your optimizations are actually going to matter. The JIT compiler might have done the optimization for you anyway. Or worse, it might deoptimize because you're being smarter than it can handle. Your results might also vary simply because the OS or CPU is busy with something else.
First a hefty warning. Don't ever do this kind of optimization unless you have a very good reason for it.
I don't ever do this level of optimization outside the ZeParser project. Ever. Yes, you should always think about the order of certain conditions, but almost never about whether you should check the object before the dot first. Code legibility is almost always far more important than perf. I'm not kidding!
In the world of parsing it's all about speed. And I'm enjoying that. Optimizing on this level is like squeezing out the last bytes for a js1k. It feels great when you've shaved off another ms. And much like the eval compression technique in js1k, your results may vary. Something that looks very optimal might actually be slower because of what it compiles to under the hood. But unlike js1k, you can't actually get hard numbers on your perf results.
When parsing JS your first task is to create tokens. A token is the smallest "atomic" semantically meaningful part of the source code. That means that a keyword like var is a single token as a whole, just like identifiers, numbers, and punctuators are. The characters that compose them are at this point no longer important. And formally, neither are whitespace and comments. This helps tremendously with parsing because you no longer need to worry about the composition of a token. You just go "oh this is an identifier, so next I'm expecting a punctuator". You don't have to care much about the actual source code string value of the token. You still care, but only in some cases.
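To make "token" concrete, here's a toy illustration (nothing like ZeParser2's actual tokenizer): splitting a snippet into tokens with a single regex, just to show the level of granularity we're talking about.

```javascript
// Toy tokenization sketch. A real tokenizer validates every character;
// this regex just demonstrates what the resulting tokens look like.
var source = 'var x = 5;';
var tokens = source.match(/[A-Za-z_$][\w$]*|\d+|[^\s]/g);
console.log(tokens); // [ 'var', 'x', '=', '5', ';' ]
```

Note how the whitespace simply disappears: formally it carries no meaning, so the parser never has to see it.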
So there's a tokenizer that first creates these tokens so the parser can go have a ball with them. As you can imagine, the tokenizer actually does most of the work since it has to validate every single character of the source code. So if you're going to optimize anything, you should start at the tokenizer.
In ZeParser2 I'm trying to do very simple operations. No spiffy weird stuff. That only trips up the JIT, so there's no point. In fact, I'm using charCodeAt() to get the ordinal ASCII number of each character, so I can do a number comparison rather than a string comparison. It's just way faster.
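A minimal sketch of the difference (the function names here are made up for illustration, this is not ZeParser2's actual code):

```javascript
var input = 'var x = 5;';

// String comparison: every check compares (and may allocate) a string.
function isSpaceSlow(str, i) {
  return str.charAt(i) === ' ';
}

// Number comparison: charCodeAt returns an integer, so this boils down
// to a plain numeric equality check.
var SPACE = 0x20; // 32, the ASCII code for a space
function isSpaceFast(str, i) {
  return str.charCodeAt(i) === SPACE;
}

console.log(isSpaceSlow(input, 3)); // true
console.log(isSpaceFast(input, 3)); // true
```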
Another example is switches. I had those in once, compared them to plain if-else chains, and ditched the switches immediately. Even though the switches were using number literals, they were still slower (I'd expect them to compile to a jump table, but no).
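The two shapes being compared look roughly like this (the token constants are hypothetical, just for illustration):

```javascript
var TOKEN_SPACE = 1, TOKEN_SEMI = 2, TOKEN_OTHER = 0;

// A switch on number literals (character codes)...
function classifySwitch(c) {
  switch (c) {
    case 0x20: return TOKEN_SPACE; // ' '
    case 0x3b: return TOKEN_SEMI;  // ';'
    default: return TOKEN_OTHER;
  }
}

// ...versus a plain if-else chain doing the same numeric comparisons.
function classifyIf(c) {
  if (c === 0x20) return TOKEN_SPACE;
  if (c === 0x3b) return TOKEN_SEMI;
  return TOKEN_OTHER;
}
```

Semantically identical, but how each compiles under the hood is entirely up to the engine, which is exactly the point: you have to measure, you can't assume.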
So the first thing a tokenizer has to do is figure out what token to parse next. Currently, ZeParser2 does this with stats: based on the frequency of token types, it checks for each token type in that order. However, I've been experimenting with other approaches and feel I should change this. For example, the stats clearly show that I should go more granular and check based on character (which I'm already doing anyways).
But this too is a bit misleading, as there are other tricks you might be able to pull off to improve this decision making. For example, a little whitespace trick did the rounds recently. But you could also use ASCII ranges as a way of preventing many checks. Any character code in the identifier-start ranges (a-z, A-Z, plus $ and _) will amount to an identifier token, nothing else. So that's just four checks rather than one per individual character. Of course, if those range comparisons are compiled to something that's slower than simple equality checks, you'll be screwed again. Probably won't be the case for this example, but it's a realistic problem.
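Such a range check could look like this sketch (assuming ASCII-only input; real JS identifiers can also start with unicode characters and escapes, which ZeParser2 would have to handle separately):

```javascript
// Four "checks" covering every ASCII identifier-start character.
function isAsciiIdentifierStart(c) {
  return (c >= 0x61 && c <= 0x7a) || // a-z (one range, two comparisons)
         (c >= 0x41 && c <= 0x5a) || // A-Z
         c === 0x24 ||               // $
         c === 0x5f;                 // _
}

console.log(isAsciiIdentifierStart('f'.charCodeAt(0))); // true
console.log(isAsciiIdentifierStart('9'.charCodeAt(0))); // false
```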
Right now I've changed my code to take those stats into account. This means I first check for space, dot, parens, semi, comma, equal sign, CR, curlies, and then lower case identifiers. The lower case identifier check covers the entire a-z range. It had to do two checks anyway, so it's basically a case of "might as well" do range checks here.
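A sketch of what that frequency-ordered dispatch might look like (the order follows the list above; the names and return values are made up for illustration):

```javascript
// Check the most frequent characters first, cheapest comparisons on top.
function classifyStart(c) {
  if (c === 0x20) return 'space';
  if (c === 0x2e) return 'dot';
  if (c === 0x28 || c === 0x29) return 'paren';
  if (c === 0x3b) return 'semi';
  if (c === 0x2c) return 'comma';
  if (c === 0x3d) return 'equal';
  if (c === 0x0d) return 'cr';
  if (c === 0x7b || c === 0x7d) return 'curly';
  // a-z: one range, two comparisons, but ~25% of all tokens combined.
  if (c >= 0x61 && c <= 0x7a) return 'identifier';
  return 'other';
}
```

Every check that fails before a hit is pure overhead, which is why the frequency stats drive the ordering.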
The thing I'm wondering now is whether I should just move the identifier check up, towards the very front of that chain. The rationale is that the lower bound of the range is already just one comparison. And, this is also important, the lower case (starting) identifiers make up about 25% of the tokens anyway. While most individual letters are not super important, combined they all fall into the same token category without any additional checks. So I could argue that moving the check up trades a single check that covers 5% of the tokens for a double check that covers 25%.
The underlying problem here is of course that the JIT is in constant flux. Even if I knew exactly what instructions my source compiles to today, it could be completely different next year. This is exactly why you normally shouldn't even bother with this level of optimization. Another reason is that it differs between browsers. And older browsers might not optimize at all (although those are really fading away these days). And what about mobile? Phew.
Like I said, I enjoy doing these optims. But it's really something I can only do inside the ZeParser2 project.