This commit changes the internals of parsing for wasm text files.
Previously the entire input would be lex'd and stored into an array for
parsing to access. That array, however, was the largest contributing
factor to the peak memory usage reported in bytecodealliance#1095. Each
token is only used a handful of times, so buffering the entire file's
tokens in memory is quite wasteful.
This change was tricky to apply, however, because the original
rationale for lexing in this manner was performance. The
recursive-descent style `Parser` trait encourages `peek`-ing tokens and
will often attempt a parse only to later unwind and try something else
instead. This means that each individual token, especially whitespace
and comments, would otherwise naively be lexed many times. For example,
when the buffer of all tokens was "simply" removed, instrumented
analysis showed that over half of all tokens in the input file were
lexed more than once, so removing the buffer outright resulted in a
performance regression.
In some sense this performance regression is inherent to a lazy-lexing
strategy. I implemented a fixed-width cache of the most recently lex'd
tokens, but it still didn't perform as well as caching all tokens. I
believe this is because lexing itself is quite fast, so the extra layer
spent more time checking and managing the cache than it saved. While
the regression may not be 100% fixable, I've settled on a strategy
that's a bit more of a half-measure.
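For reference, the rejected fixed-width cache looked roughly like the
sketch below. `TokenCache`, its direct-mapped layout, and the `u32`
token id are illustrative stand-ins, not the actual implementation; the
point is that even a cache hit pays for a slot computation and an
offset compare, which is the management overhead described above:

```rust
// Hypothetical direct-mapped cache of recently lexed tokens, keyed by
// byte offset in the input. `u32` stands in for a real token type.
struct TokenCache<const N: usize> {
    entries: [Option<(usize, u32)>; N], // (offset, token) per slot
}

impl<const N: usize> TokenCache<N> {
    fn new() -> Self {
        TokenCache { entries: [None; N] }
    }

    // Return the cached token at `offset`, or invoke `lex` and cache
    // the result. Note that hits and misses both pay for the slot
    // lookup and comparison before any lexing is saved.
    fn get_or_lex(&mut self, offset: usize, lex: impl FnOnce() -> u32) -> u32 {
        let slot = offset % N;
        if let Some((off, tok)) = self.entries[slot] {
            if off == offset {
                return tok; // cache hit
            }
        }
        let tok = lex();
        self.entries[slot] = Some((offset, tok));
        tok
    }
}
```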
The general idea for the `Parser` is now that it stores the current
position in the file plus the next "significant" token at that
position, where "significant" means neither whitespace nor a comment.
This means the parser always knows the next token with whitespace and
comments pre-skipped, so single-token "peek" operations don't need to
do any lexing: they can simply look at the current token and decide
what to do based on that. This is enabled by a few new `Cursor::peek_*`
methods which avoid generating the next `Cursor`, which would otherwise
require a lexing operation.
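The strategy above can be sketched as follows. This is a simplified
illustration, not the crate's actual code: `ParseBuffer`, `Token`, and
`bump` are hypothetical names, and a pre-tokenized slice stands in for
the real byte-level lexer:

```rust
// Illustrative tokens; in reality these are lexed from raw bytes.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Token<'a> {
    LParen,
    RParen,
    Keyword(&'a str),
    Whitespace,
    Comment,
}

struct ParseBuffer<'a> {
    input: &'a [Token<'a>], // stand-in for the raw input + lexer
    pos: usize,             // current position in the input
    cur: Option<Token<'a>>, // next *significant* token, pre-skipped
}

impl<'a> ParseBuffer<'a> {
    fn new(input: &'a [Token<'a>]) -> Self {
        let mut buf = ParseBuffer { input, pos: 0, cur: None };
        buf.advance();
        buf
    }

    // Lex forward past whitespace and comments, caching the next
    // significant token so that repeated peeks do no further lexing.
    fn advance(&mut self) {
        self.cur = loop {
            match self.input.get(self.pos) {
                Some(Token::Whitespace | Token::Comment) => self.pos += 1,
                other => break other.copied(),
            }
        };
        if self.cur.is_some() {
            self.pos += 1;
        }
    }

    // Peek is now a field read, not a lexing operation.
    fn peek(&self) -> Option<Token<'a>> {
        self.cur
    }

    // Consume the current token and pre-lex the next significant one.
    fn bump(&mut self) -> Option<Token<'a>> {
        let tok = self.cur;
        self.advance();
        tok
    }
}
```

With this shape, each significant token is lexed once on the way past
it, and any number of peeks at the same position cost only a field
read.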
Overall this means that the majority of tokens in the case of
bytecodealliance#1095 are lexed only once. There's still ~10% of tokens
that are lexed two or more times, but the performance numbers are:
before this commit parsing that file took 7.6s with 4G of memory, and
after this commit it takes 7.9s with 2G of memory. In other words, a
~4% regression in parse time halves the memory usage.