New parser tactics


I'm currently micro-micro-optimizing the heck out of my parser and it's actually working. I dropped from 600ms to 370ms on a 7MB test file in Chrome. Firefox dropped from 1200ms to 775ms! I changed the entire architecture to use small functions wherever possible, to share and re-use functions as much as possible (to hitch a ride on the JIT train), and to do comparisons with numbers before anything else.

I'm actually at a point where I'd rather do another call than an extra comparison. Though that is more about reducing conditionals (so, branching), I suppose. And if I must compare, I'll do a number comparison before a string one. And this matters. Holy shit, this matters a lot. You probably won't notice it in your code, but with super high-performance code like mine is right now, yes, it matters!
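A toy sketch of what I mean by "numbers before strings" (not my actual parser code, just the shape of the idea): checking whether the input at some position starts with the keyword if.

```javascript
// Toy illustration: two ways to check whether the input at `pos`
// starts with the two characters of the keyword 'if'.
function startsWithIfSlow(input, pos) {
  // allocates a new string, then does a string comparison
  return input.substr(pos, 2) === 'if';
}

function startsWithIfFast(input, pos) {
  // two integer comparisons, no allocation
  return input.charCodeAt(pos) === 0x69 &&     // 'i'
         input.charCodeAt(pos + 1) === 0x66;   // 'f'
}
```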

My current strategy is basically this: take the input and step through it by analyzing the string, reading input.charCodeAt(pos) as much as possible. However, don't do it more than once for any index. Even though charCodeAt calls are faster than string comparisons, they are still the bottleneck for speed. So I'm currently re-aligning the code to cache these charCodeAt lookups as much as possible.
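As a simplified sketch of that caching idea (the names here are made up, not my real code): keep the current character code in a local variable and refresh it exactly once per consumed character, so no index is ever read twice.

```javascript
// Simplified sketch: cache the char code for the current position so
// each index is only ever passed to charCodeAt once.
var input = 'if (a) return b;';
var pos = 0;
var c = input.charCodeAt(0); // cached lookup for the current position

// Advance past spaces/tabs, refreshing the cache once per character.
function skipWhitespace() {
  while (c === 0x20 || c === 0x09) {
    c = input.charCodeAt(++pos);
  }
}

// Consume [a-z]+ using only the cached code, never re-reading an index.
function parseIdentifier() {
  var start = pos;
  while (c >= 0x61 && c <= 0x7a) { // 'a'..'z'
    c = input.charCodeAt(++pos);
  }
  return input.substring(start, pos);
}
```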

For example, there are two points in the code where I cannot escape a big chain of string comparisons. One is at the start of a statement, when the next token is an identifier and I need to check for if, return, etc. The other is when I have an identifier (for a variable) and need to validate it against the reserved keyword list. For both cases I changed my approach from a straightforward string comparison (if (identifier === 'if') ..., or return identifier === 'if' || identifier === 'super' || ...) to first checking the first character of the identifier and branching off early, to prevent as many comparisons as possible. This means those simple checks now look like this:

var c = input.charCodeAt(start); // first char of the identifier
if (c === 0x62) { // 'b'
  return input.substring(start, end) === 'break';
} else if (c === 0x63) { // 'c'
  var identifier = input.substring(start, end);
  return (
    identifier === 'case' ||
    identifier === 'catch' ||
    identifier === 'continue' ||
    identifier === 'class' ||
    identifier === 'const'
  );
} else if (c === 0x64) { // 'd'
  var identifier = input.substring(start, end);
  return (
    identifier === 'do' ||
    identifier === 'debugger' ||
    identifier === 'default' ||
    identifier === 'delete'
  );
}

Note that these substring and charCodeAt calls are already cached; this is just a simplified view of what's going on. This change actually shaved about 100ms (roughly 1/6th to 1/7th) off the parse time in Chrome, and a bit more in Firefox. That's a huge win. I'm going to need to expand my test file soon :p

This micro-optimization is actually kind of fun, and a refreshing change from the golfing in JS1K. Though it's kind of comparable to it: you try a trick, you run some browser tests, and sometimes you get lucky. Hit and miss, all the time. Except browsers these days do so much optimization and are so susceptible to OS environment noise (load) that it's hard to get a good reading. They are far from deterministic, which is something you can still rely on in JS1K. There, whatever you change, and no matter how much fuzz the compression gives you, one specific trick will always give you the same result. Optimization is much harder without that determinism.

I've been playing with JIT Inspector, a Firefox extension. The UI is a bit bare-bones but very effective: it shows you some of the JIT analysis that the browser runs on your code. And while I don't quite understand all the cryptic types yet, it did give me the insight that I needed to be more consistent with my parameter usage. For example, I've now eliminated any "default to undefined" for parameters, since it was giving the JIT mixed messages, literally! If a variable is always an int, the JIT can optimize better than when it's "sometimes int, sometimes undefined". So I've moved away from "return a number, or null if not found" and made it "return -1 if not found", just so it'll at least always be the same number type.
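To illustrate the -1 change (these names are invented for the example, not my actual functions): the first version returns either a number or null, so the JIT sees a mixed return type; the second always returns an int.

```javascript
// Mixed return type: number OR null. The JIT has to handle both.
function indexOfCharPoly(str, code) {
  for (var i = 0; i < str.length; ++i) {
    if (str.charCodeAt(i) === code) return i;
  }
  return null;
}

// Monomorphic return type: always an int, with -1 meaning "not found".
function indexOfCharMono(str, code) {
  for (var i = 0; i < str.length; ++i) {
    if (str.charCodeAt(i) === code) return i;
  }
  return -1;
}
```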

I've also been playing a bit with Chrome's profiler. I'm already using console.profile() and console.profileEnd() to get a more accurate reading, but my testing setup uses setTimeout to give the UI time to update between runs. That idle time accumulates under (program), and I kind of need a way to pause the profiler. Don't think I'll get that break, though.

Soooo, that's a random status update for me. Hope you enjoyed it :)

I should probably add that my parser is currently merely validating. The speeds I'm talking about won't hold once the parser takes the huge hit of modeling a real parse tree. In fact, just returning an array of tokens (one object per token), without even slicing up the input for the token string values, can easily add a second (for the 7MB file). So take these numbers with a grain of salt; I'm currently just aiming for proper validation. The parse tree comes after I'm happy with the perf!
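For the curious, the "without slicing up the input" trick looks roughly like this (a hypothetical token shape, not my actual structures): store offsets into the input instead of the token text, and only materialize the string when somebody actually asks for it.

```javascript
// Hypothetical token shape: keep {type, start, end} offsets instead of
// slicing the token text out of the input up front.
function makeToken(type, start, end) {
  return { type: type, start: start, end: end };
}

// Materialize the token text lazily, only when it's actually needed.
function tokenValue(input, token) {
  return input.substring(token.start, token.end);
}
```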