Appendix 1: CSS selector performance

Back at the beginning of 2014 I was having a "debate" (I used air-quotes there, people) with some fellow developers about the irrelevance, or otherwise, of worrying about CSS selector speed.

Whenever exchanging theories/evidence about the relative speed of CSS selectors, developers often reference Steve Souders' work on CSS selectors from 2009. It's used to validate claims such as 'attribute selectors are slow' or 'pseudo selectors are slow'.

For the last few years, I've felt these kinds of things just weren't worth worrying about. The sound-bite I have been wheeling out for years is:

With CSS, architecture is outside the braces; performance is inside

But besides referencing Nicole Sullivan's later post on Performance Calendar to back up my conviction that the selectors used don't really matter, I had never actually tested the theory.

To try and address this, I attempted to produce some tests of my own that would settle the argument. At the least, I believed it would prompt someone with more knowledge/evidence to provide further data.

Testing selector speed

Steve Souders' aforementioned tests use JavaScript’s new Date(). However, nowadays, modern browsers (iOS/Safari were a notable exception at the time of testing) support the Navigation Timing API which gives us a more accurate measure we can use. For the tests, I implemented it like this:

    ;(function TimeThisMother() {
        window.onload = function () {
            // setTimeout pushes the measurement to the end of the event
            // queue so that loadEventEnd has been populated.
            setTimeout(function () {
                var t = performance.timing;
                alert("Speed of selection is: " + (t.loadEventEnd - t.responseEnd) + " milliseconds");
            }, 0);
        };
    })();

This lets us limit the timing of the tests between the point all assets have been received (responseEnd) and the point the page is rendered (loadEventEnd).
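For reference, the newer PerformanceNavigationTiming API exposes the same two timestamps. Here is a minimal sketch of the equivalent measurement (an assumption on my part — the tests themselves used the older `performance.timing` object shown above); the helper function is hypothetical:

```javascript
// Sketch: compute the same "selection speed" window from a timing
// entry. Written as a pure function so it can be exercised without
// a browser.
function selectionTime(t) {
  // loadEventEnd - responseEnd: time from the last byte being
  // received to the load event completing (parse, style, layout, paint).
  return t.loadEventEnd - t.responseEnd;
}

// In a supporting browser (Navigation Timing Level 2), you would
// feed it the real navigation entry:
// const [nav] = performance.getEntriesByType("navigation");
// console.log("Speed of selection is: " + selectionTime(nav) + " milliseconds");
```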

So, I set up a very simple test. 20 different pages, all with an identical, enormous DOM, made up of 1000 identical chunks of this markup:

<div class="tagDiv wrap1">
  <div class="tagDiv layer1" data-div="layer1">
    <div class="tagDiv layer2">
      <ul class="tagUl">
        <li class="tagLi"><b class="tagB"><a href="/" class="tagA link" data-select="link">Select</a></b></li>
      </ul>
    </div>
  </div>
</div>

20 different CSS selection methods were tested to colour the innermost nodes red. Each page differed only in the rule applied to select the innermost node within the blocks. These were the different selectors tested:

  1. Data attribute
  2. Data attribute (qualified)
  3. Data attribute (unqualified but with value)
  4. Data attribute (qualified with value)
  5. Multiple data attributes (qualified with values)
  6. Solo pseudo selector (e.g. :after)
  7. Combined classes (e.g. .class1.class2)
  8. Multiple classes
  9. Multiple classes with child selector
  10. Partial attribute matching (e.g. [class^="wrap"])
  11. nth-child selector
  12. nth-child selector followed by another nth-child selector
  13. Insanity selection (all selections qualified, every class used, e.g. div.wrapper > div.tagDiv > div.tagDiv.layer2 > ul.tagUL > li.tagLi > b.tagB > …)
  14. Slight insanity selection (e.g. .tagLi .tagB)
  15. Universal selector
  16. Element single
  17. Element double
  18. Element treble
  19. Element treble with pseudo
  20. Single class
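To make the list concrete, here are rough guesses at what a few of those rules might have looked like. The exact rules from the test pages are not reproduced in this text, so treat these as illustrative sketches only:

```css
/* Illustrative guesses — not the actual test rules. */
[data-select] { color: red; }              /* 1. data attribute */
a[data-select="link"] { color: red; }      /* 4. data attribute (qualified with value) */
[class^="wrap"] a { color: red; }          /* 10. partial attribute matching */
.tagUl > .tagLi .link { color: red; }      /* 9. multiple classes with child selector */
.link { color: red; }                      /* 20. single class */
```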

The test was run 5 times on each browser and the result averaged across the 5 runs. The browsers tested were Chrome 34, Firefox 29, Opera 19, Internet Explorer 9 and the Android 4 stock browser.

A previous version of Internet Explorer (rather than the latest Internet Explorer available to me) was used to shed some light on how a 'non-evergreen' browser performed. All the other browsers tested receive regular updates, so I wanted to be sure that there wasn't a considerable difference between the way modern, regularly updating browsers deal with CSS selectors and the way slightly older ones do.

Here are the results. All times in milliseconds:

| Test | Chrome 34 | Firefox 29 | Opera 19 | IE9 | Android 4 |
|------|-----------|------------|----------|-----|-----------|
| Biggest Diff. | 16 | 13.6 | 17.6 | 31 | 152 |

The difference between fastest and slowest selector

The Biggest Diff. row shows the difference in milliseconds between the fastest and slowest selector. Of the desktop browsers, IE9 stands out as having the biggest difference between fastest and slowest selectors at 31ms. The others are all around half of that figure. However, interestingly there was no consensus on what the slowest selector was.

The slowest selector

I was interested to note that the slowest selector type differed from browser to browser. Both Opera and Chrome found the 'insanity' selector (test 13) the hardest to match (the similarity between Opera and Chrome here perhaps not surprising given they share the blink engine), while Firefox struggled with a single pseudo selector (test 6), as did the Android 4.2 device (a Tesco hudl 7" tablet). Internet Explorer 9's Achilles heel was the partial attribute selector (test 10).

Good CSS architecture practices

One thing we can be clear on is that using a flat hierarchy of class-based selectors, as is the case with ECSS, provides selectors that are as fast as any others.

What does this mean?

For me, it has confirmed my belief that it is absolute folly to worry about the type of selector used. Second-guessing a selector engine is pointless, as the manner in which selector engines work through selectors clearly differs. Furthermore, the difference between the fastest and slowest selectors isn't massive, even on a ludicrous DOM size like this. As we say in the North of England, "There are bigger fish to fry".

Since documenting my original results, Benjamin Poulain, a WebKit engineer, got in touch to point out his concerns with the methodology used. His comments were very interesting and some of the information he related is quoted verbatim below:

"By choosing to measure performance through the loading, you are measuring plenty of much much bigger things than CSS, CSS Performance is only a small part of loading a page:

If I take the time profile of [class^="wrap"] for example (taken on an old WebKit so that it is somewhat similar to Chrome), I see:

With the test above, let say we have a baseline of 100 ms with the fastest selector. Of that, 5 ms would be spent collecting style. If a second selector is 3 times slower, that would appear as 110ms in total. The test should report a 300% difference but instead it only shows 10%."

At this point, I responded that whilst I understood what Benjamin was pointing out, my test was only supposed to illustrate that the same page, with all other things being equal, renders largely the same regardless of the selector used. Benjamin took the time to reply with further detail:

"I completely agree it is useless to optimize selectors upfront, but for completely different reasons:

It is practically impossible to predict the final performance impact of a given selector by just examining the selectors. In the engine, selectors are reordered, split, collected and compiled. To know the final performance of a given selector, you would have to know in which bucket the selector was collected, how it is compiled, and finally what the DOM tree looks like.

All of that is very different between the various engines, making the whole process even less predictable.

The second argument I have against web developers optimizing selectors is that they will likely make things worse. The amount of misinformation about selectors is larger than correct cross-browser information. The chance of someone doing the right thing is pretty low.

In practice, people discover performance problems with CSS and start removing rules one by one until the problem goes away. I think that is the right way to go about this; it is easy and will lead to the correct outcome."

Cause and effect

At this point I felt vindicated in my view that the CSS selector used was almost entirely irrelevant. However, I did wonder what else we could glean from the tests.

If the number of DOM elements on the page was halved, as you might expect, the speed to complete any of the tests dropped commensurately. But getting rid of large parts of the DOM isn't always a possibility in the real world. This made me wonder what difference the amount of unused styles in the CSS would have on the results.

What difference does style bloat make?

Another test: I grabbed a big fat style sheet that had absolutely no relevance to the DOM tree. It was about 3000 lines of CSS. All these irrelevant styles were inserted before a final rule that would select our inner node and make it red. I did the same averaging of the results across 5 runs on each browser.

Half those rules were then cut out and the test repeated to give a comparison. Here are the results:

| Test | Chrome 34 | Firefox 29 | Opera 19 | IE9 | Android 4 |
|------|-----------|------------|----------|-----|-----------|
| Full bloat | 64.4 | 237.6 | 74.2 | 436.8 | 1714.6 |
| Half bloat | 51.6 | 142.8 | 65.4 | 358.6 | 1412.4 |

Rules diet

This provides some interesting figures. For example, Firefox was 1.7X slower to complete this test than it was with its slowest selector test (test 6). The Android device was 1.2X slower than its slowest selector test (test 6). Internet Explorer was a whopping 2.5X slower than its slowest selector!

You can see that things dropped down considerably for Firefox when half of the styles were removed (approx 1500 lines). The Android device came down to around the speed of its slowest selector at that point too.

Removing unused styles

Does this kind of horror scenario sound familiar to you? Enormous CSS files with all manner of selectors (often including selectors that don't even work), heaps of ever more specific selectors seven or more levels deep, non-applicable vendor prefixes, ID selectors all over the place and file sizes of 50–80KB (sometimes more).

If you are working on a code base that has a big fat CSS file like this, one where no-one is quite sure what all the styles are actually for, my advice would be to look there for your CSS optimisations before worrying about the selectors being employed. Hopefully by this point you will be convinced that an ECSS approach might help in this respect.

Then again, that won't necessarily help with the actual performance of your CSS.

Performance inside the braces

The final test I ran was to hit the page with a bunch of 'expensive' properties and values. Consider this rule:

.link {
    background-color: red;
    border-radius: 5px;
    padding: 3px;
    box-shadow: 0 5px 5px #000;
    -webkit-transform: rotate(10deg);
    -moz-transform: rotate(10deg);
    -ms-transform: rotate(10deg);
    transform: rotate(10deg);
    display: block;
}

With that rule applied, here are the results:

| Test | Chrome 34 | Firefox 29 | Opera 19 | IE9 | Android 4 |
|------|-----------|------------|----------|-----|-----------|
| Expensive Styles | 65.2 | 151.4 | 65.2 | 259.2 | 1923 |

Here, every browser took at least as long as it did with its slowest selector (IE was 1.5X slower than its slowest selector test (test 10) and the Android device was 1.3X slower than its slowest selector test (test 6)), but that's not even the full picture. Try and scroll that page! Repaint with those kinds of styles can bring a browser to its knees (or whatever the equivalent of knees is for a browser).

The properties we stick inside the braces are what really affects performance. It stands to reason that scrolling a page that requires endless expensive re-paints and layout changes is going to put a strain on the device. Nice HiDPI screen? It will be even worse as the CPU/GPU strains to get everything re-painted to screen in under 16ms.
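The 16ms target mentioned above is simply the frame budget at 60 frames per second, which can be derived in one line:

```javascript
// Frame budget: at 60fps the browser has roughly 16.7 ms per frame
// for all its work (style recalculation, layout, paint, composite),
// which is where the "under 16ms" figure comes from.
const frameBudgetMs = 1000 / 60; // ≈ 16.67 ms
```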

With the expensive styles test, on the 15" Retina MacBook Pro I tested on, the paint time shown in continuous paint mode in Chrome never dropped below 280ms (and remember, we are aiming for sub-16ms). To put that in perspective for you, the first selector test page never went above 2.5ms. That wasn't a typo. Those properties created a 112X increase in paint time. Holy expensive properties Batman! Indeed Robin. Indeed.

What properties are expensive?

An 'expensive' property/value pairing is one we can be pretty confident will make the browser struggle when it has to repaint the screen (e.g. on scroll).

How can we know what will be an 'expensive' style? Thankfully, we can apply common sense to this and get a pretty good idea of what is going to tax the browser. Anything that requires a browser to manipulate/calculate before painting to the page will be more costly. For example, box-shadows, border-radius, transparency (as the browser has to calculate what is shown below), transforms and performance killers like CSS filters: if performance is your priority, anything like that is your worst enemy.
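As a rough illustration (the class name and values here are hypothetical, not taken from the test pages), compare a paint-heavy rule with a trimmed-down version that keeps only cheap properties:

```css
/* Hypothetical example: every declaration here forces extra
   calculation before paint — blur, corner clipping, compositing
   of transparency, and filter effects. */
.expensive {
    box-shadow: 0 5px 5px #000;  /* blurred shadow: costly to paint */
    border-radius: 5px;          /* corner clipping */
    opacity: 0.9;                /* browser must composite what's below */
    filter: blur(2px);           /* CSS filters: a known performance killer */
}

/* Solid colours and simple box properties paint cheaply. */
.cheap {
    background-color: red;
    padding: 3px;
    display: block;
}
```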


Some takeaways from these tests: