Mirror of https://github.com/LadybirdBrowser/ladybird.git (synced 2025-07-05 16:41:52 +00:00)
When loading a canned version of reddit.com, we end up parsing many shadow tree style sheets of roughly 170 KiB of text each. None of them contain '\r' or '\f', yet we spend 2-3 ms per sheet just looping over and reconstructing the text to check whether any newlines need normalizing.

This patch makes the common case faster in two ways:

- We use TextCodec::Decoder::to_utf8() instead of process(). This way, we do a one-shot fast validation and conversion to UTF-8 instead of going through the generic code-point-at-a-time callback API.
- We scan for '\r' and '\f' before filtering, and if neither is present, we simply use the unfiltered string.

With these changes, we now spend 0 ms in the filtering function for the vast majority of style sheets I've seen so far.
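The second optimization can be sketched roughly as follows. This is a minimal standalone illustration, not Ladybird's actual implementation; the function name and use of std::string are hypothetical stand-ins for the engine's own types. The slow path follows the CSS Syntax preprocessing rule of replacing "\r\n" pairs, lone '\r', and '\f' with '\n':

```cpp
#include <cassert>
#include <string>
#include <string_view>

// Illustrative sketch: only rebuild the string when a character that
// actually needs newline normalization is present.
static std::string filter_css_input(std::string_view input)
{
    // Fast path: no '\r' and no '\f' means nothing to normalize,
    // so we can use the input as-is instead of copying it
    // character by character.
    if (input.find_first_of("\r\f") == std::string_view::npos)
        return std::string(input);

    // Slow path: normalize "\r\n", '\r', and '\f' to '\n'.
    std::string filtered;
    filtered.reserve(input.size());
    for (size_t i = 0; i < input.size(); ++i) {
        char c = input[i];
        if (c == '\r') {
            if (i + 1 < input.size() && input[i + 1] == '\n')
                ++i; // consume the '\n' of a "\r\n" pair
            filtered += '\n';
        } else if (c == '\f') {
            filtered += '\n';
        } else {
            filtered += c;
        }
    }
    return filtered;
}
```

For a sheet without '\r' or '\f' (the common case described above), the function reduces to a single scan plus one copy, rather than a per-character rebuild.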
Files in this directory:

Block.cpp
Block.h
ComponentValue.cpp
ComponentValue.h
Declaration.cpp
Declaration.h
DeclarationOrAtRule.cpp
DeclarationOrAtRule.h
Dimension.h
Function.cpp
Function.h
GradientParsing.cpp
Helpers.cpp
MediaParsing.cpp
Parser.cpp
Parser.h
ParsingContext.cpp
ParsingContext.h
Rule.cpp
Rule.h
SelectorParsing.cpp
Token.cpp
Token.h
Tokenizer.cpp
Tokenizer.h
TokenStream.h