We already had the CSSRule::Type enum, but the values were not aligned
with the CSSOM spec. This patch takes care of that, and then exposes
the type of a CSSRule to JavaScript via the "type" attribute.
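For reference, the values in question are the legacy constants defined
on the CSSRule interface in the CSSOM spec. A minimal sketch of the
alignment (the enumerator names here are illustrative, not necessarily
the ones used in the patch):

```cpp
// Illustrative only: the names may differ, but the numeric values are
// the CSSRule "type" constants from the CSSOM spec.
enum class Type : unsigned {
    Style = 1,
    Charset = 2, // historical
    Import = 3,
    Media = 4,
    FontFace = 5,
    Page = 6,
    Margin = 9,
    Namespace = 10,
};
```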
We already had setProperty(), but it was full of ad-hoc idiosyncrasies.
This patch aligns setProperty() with the CSSOM spec and also implements
removeProperty() since that's actually needed by setProperty() now.
Some things fixed by this:
- We now support the "priority" parameter to setProperty()
- Element "style" attributes now update to reflect CSSOM mutations
Previously, we would create a new Gfx::ScaledFont whenever we needed one
for an element's computed style. This worked fine on Acid3 since the use
of web fonts was extremely limited.
In the wild, web fonts obviously get used a lot more, so let's have a
per-point-size font cache for them.
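A minimal sketch of the idea, using plain std:: types for illustration
(the real code deals in Gfx::ScaledFont and AK containers, and the
names below are not the actual LibWeb ones):

```cpp
#include <memory>
#include <unordered_map>

// Illustrative sketch only: keep one scaled font per point size instead
// of creating a fresh one every time style computation asks for it.
struct ScaledFont {
    explicit ScaledFont(float point_size)
        : point_size(point_size)
    {
    }
    float point_size;
};

class PerPointSizeFontCache {
public:
    std::shared_ptr<ScaledFont> font_for_point_size(float point_size)
    {
        if (auto it = m_cache.find(point_size); it != m_cache.end())
            return it->second;
        auto font = std::make_shared<ScaledFont>(point_size);
        m_cache.emplace(point_size, font);
        return font;
    }

private:
    std::unordered_map<float, std::shared_ptr<ScaledFont>> m_cache;
};
```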
Previously, we were running the "load fonts if needed" machine at the
start of every style computation. That was a lot of unnecessary work,
especially on sites with lots of style rules, since we had to traverse
every style sheet to see if any @font-face rules needed loading.
With this patch, we now load fonts once per sheet, right after adding
it to a document's style sheet list.
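Roughly, the hook now lives next to the "add the sheet to the document"
step instead of inside style computation; something like this (all
names here are hypothetical):

```cpp
// Hypothetical names; only the placement of the work matters.
void Document::add_style_sheet(CSSStyleSheet& sheet)
{
    m_style_sheets.append(sheet);
    // Scan just this sheet for @font-face rules and kick off any needed
    // font downloads, instead of re-scanning every sheet on every style
    // computation.
    load_fonts_from_sheet(sheet);
}
```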
This is used to skip downloading fonts in formats that we don't support.
Currently we only support TTF as far as I am aware.
Unusually, the parts of a `src` are in a fixed order, which makes the
parsing more nesty than loopy.
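Something along these lines (helper names are hypothetical): the
`url(...)` always comes before the optional `format(...)`, so the parse
is a sequence of nested checks, and a `format()` we can't handle means
we skip that source instead of downloading it.

```cpp
// Hypothetical helper names; only the structure is the point.
// e.g. `src: url(a.woff) format("woff"), url(a.ttf) format("truetype")`
// -> the first source gets skipped (unsupported), the second is kept.
Optional<FontFaceSource> parse_font_face_src_entry(TokenStream& tokens)
{
    // The URL always comes first...
    auto url = parse_url(tokens);
    if (!url.has_value())
        return {};
    // ...followed by an optional format("..."). If it names a format we
    // don't support (currently anything but TTF), drop this source so
    // we never download it.
    if (auto format = parse_format_function(tokens); format.has_value()) {
        if (*format != "truetype")
            return {};
    }
    return FontFaceSource { url.release_value() };
}
```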
Like An+B, this is an old construct that does not fit well with modern
CSS syntax, so things get a bit hairy! We have to determine which
tokens match the grammar for `<urange>`, then turn those back into a
string, and then parse the string differently from normal. Thankfully
the spec describes in detail how to do that. :^)
This is not 100% correct, since we are not using the original source
text (referred to in the spec as the "representation") of the tokens,
but just converting them to strings in a manual, ad-hoc way.
Re-engineering the Tokenizer to keep that original text was too much of
a tangent for today. In any case, we do parse `U+4???`, `U+0-100`,
`U+1234`, and similar, so good enough for now!
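To make the "turn the tokens back into a string, then parse that string
differently" step concrete, here is a self-contained sketch of the
second half using plain std:: types (so none of these names are the
real Parser's):

```cpp
#include <cstddef>
#include <cstdint>
#include <optional>
#include <string>

// Sketch, not the actual LibWeb code: given the reassembled text of a
// <urange>, compute the start/end code points the way the spec
// describes ('?' wildcards become 0 in the start and F in the end).
struct UnicodeRange {
    uint32_t start { 0 };
    uint32_t end { 0 };
};

static std::optional<uint32_t> hex_value(std::string const& text)
{
    if (text.empty() || text.size() > 6 || text.find_first_not_of("0123456789abcdefABCDEF") != std::string::npos)
        return std::nullopt;
    return static_cast<uint32_t>(std::stoul(text, nullptr, 16));
}

static std::optional<UnicodeRange> parse_unicode_range_text(std::string const& text)
{
    if (text.size() < 3 || (text[0] != 'U' && text[0] != 'u') || text[1] != '+')
        return std::nullopt;
    auto body = text.substr(2);

    // "U+4???": start replaces each '?' with '0', end replaces it with 'F'.
    if (auto first_question_mark = body.find('?'); first_question_mark != std::string::npos) {
        auto start_text = body;
        auto end_text = body;
        for (std::size_t i = first_question_mark; i < body.size(); ++i) {
            if (body[i] != '?')
                return std::nullopt;
            start_text[i] = '0';
            end_text[i] = 'F';
        }
        auto start = hex_value(start_text);
        auto end = hex_value(end_text);
        if (!start.has_value() || !end.has_value())
            return std::nullopt;
        return UnicodeRange { *start, *end };
    }

    // "U+0-100": explicit start and end.
    if (auto dash = body.find('-'); dash != std::string::npos) {
        auto start = hex_value(body.substr(0, dash));
        auto end = hex_value(body.substr(dash + 1));
        if (!start.has_value() || !end.has_value())
            return std::nullopt;
        return UnicodeRange { *start, *end };
    }

    // "U+1234": a single code point.
    auto start = hex_value(body);
    if (!start.has_value())
        return std::nullopt;
    return UnicodeRange { *start, *start };
}
```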
"Component value" is the term used in the spec, and it doesn't conflict
with any other types, so let's use the shorter name. :^)
Also, this doesn't need to be friends with the Parser any more.
Previously, the default was always 1px, which didn't look great at
larger font sizes.
This changes the default thickness to one-tenth of the font height. The
one-tenth part was chosen arbitrarily, but I think it does the job
pretty well. :^)
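In code terms, the default amounts to something like this (illustrative,
not the exact expression used):

```cpp
// Illustrative: whatever "font height" means in the real code, the
// default thickness is a tenth of it, so e.g. a 16px-tall font gets a
// 1.6px line.
float default_text_decoration_thickness(float font_height_in_px)
{
    return font_height_in_px / 10.0f;
}
```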
We're calling this in a way that is incorrect, so the algorithm's
assumption that the next token is an `<ident-token>` doesn't hold, and
we have to handle that failure. Ideally we would just stop calling this
incorrectly, but until then, let's at least document what is happening.
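Concretely, the guard amounts to something like this (hypothetical
names):

```cpp
// FIXME: The call site is misusing this algorithm; per spec, the next
// token should always be an <ident-token> here. Until that's fixed,
// bail out gracefully instead of assuming. (Hypothetical names.)
auto& token = tokens.next_token();
if (!token.is(Token::Type::Ident))
    return {};
```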
The code had to change a bit to match. Previously, we appended an empty
sub-list immediately, but now we append it at the end. The difference
is that if there are no tokens, we now correctly return an empty
list-of-lists, instead of a list containing an empty list.
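A self-contained sketch of the new shape, with a std::string standing
in for a component value (so these names aren't the real Parser's):

```cpp
#include <string>
#include <utility>
#include <vector>

// Sub-lists are only appended once they are complete, so an empty input
// yields an empty list-of-lists instead of a list containing an empty
// list.
std::vector<std::vector<std::string>> split_on_commas(std::vector<std::string> const& values)
{
    std::vector<std::vector<std::string>> lists;
    if (values.empty())
        return lists;

    std::vector<std::string> current;
    for (auto const& value : values) {
        if (value == ",") {
            lists.push_back(std::move(current));
            current.clear();
        } else {
            current.push_back(value);
        }
    }
    lists.push_back(std::move(current));
    return lists;
}
```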
We now correctly call convert_to_rule() outside of this function.
As before, I've renamed `parse_as_rule()` -> `parse_as_css_rule()` to
match the free function that calls it.
This is not actually used by anything currently, but it should be used
for `@media` and other at-rules.
Removed the public parse_as_list_of_rules() because public functions
should be things that outside classes actually need to use.
`parse_a_stylesheet()` should not do any conversion on its rules. This
change corrects that. There are other places where we get this wrong,
but one thing at a time. :^)