#13842 closed feature (notabug)
Opened April 30, 2013 08:35PM UTC
Closed April 30, 2013 10:34PM UTC
Last modified May 01, 2013 12:07PM UTC
Nearly triple the performance of repetitive ID selection
Reported by: | markg85@gmail.com | Owned by: | markg85@gmail.com |
---|---|---|---|
Priority: | undecided | Milestone: | None |
Component: | unfiled | Version: | 2.0.0 |
Keywords: | dbr | Cc: | |
Blocked by: | | Blocking: | |
Description
Hi,
I've been playing with this for a few hours, and I think I've just sped up ID selection in jQuery to the point where it is about as fast as $(document.getElementById('someId')). You can see the jsPerf results here: http://jsperf.com/jquery-dom-selector-benchmark
So, what did I do to get it nearly 3x faster? A few things.
- I introduced a new object, "domCache", which stores the "match" object (an array) keyed by the current "selector" string. Selecting an element that isn't in the cache yet should be roughly as fast as before; selecting an element that is already in the cache becomes about 3x faster, because all the data can be read from the "domCache" variable.
- Second, I no longer check for a string in the first if statement. Instead I check selector.nodeType first, since that is the most likely branch anyway when a DOM object is passed in.
All my edits are in the top part of:
jQuery.fn = jQuery.prototype = {
    ...
}
along with the one added domCache variable. My goal was actually to make DOM selection at least half the speed of a plain document.getElementById('someId'), but that seems to be impossible. The theoretical maximum I could reach is:
$(document.getElementById('someId'))
And with my edits I'm at that limit. The remaining slowdown (compared to getElementById) is caused by other parts, I suppose, though I have no clue what or why. I'd like to know that, though.
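A minimal stand-alone sketch of the caching idea described above (the names rquickId, domCache and fastSelect are illustrative, not the actual patch). It assumes the approach as stated: only the parsed match array is cached per selector string, and getElementById still runs on every call, so no DOM nodes are stored.
// Illustrative sketch, not the submitted code. Only the parsed match array is
// cached; the element itself is looked up fresh on every call.
var rquickId = /^#([\w-]+)$/,
    domCache = {};

function fastSelect( selector ) {
    // Check nodeType first: a DOM node was passed in directly.
    if ( selector.nodeType ) {
        return [ selector ];
    }

    // Reuse the parsed match for selector strings seen before.
    var match = domCache[ selector ];
    if ( match === undefined ) {
        match = rquickId.exec( selector );
        domCache[ selector ] = match;
    }

    // Plain #id selector: fetch the element fresh every time.
    if ( match ) {
        var elem = document.getElementById( match[ 1 ] );
        return elem ? [ elem ] : [];
    }

    // Anything else falls back to a full query.
    return document.querySelectorAll( selector );
}
Repeated calls with the same "#someId" string skip the regex parse but still hit the DOM fresh each time.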
So, would this be interesting to add in the next jQuery releases (2.0.x and 1.9.x)?
Attachments (0)
Change History (9)
Changed April 30, 2013 08:53PM UTC by comment:1
owner: | → markg85@gmail.com |
---|---|
status: | new → pending |
Correct me if I am wrong, but I don't think selection by an ID is anywhere near a bottleneck for web pages or web apps. Do you have some benchmarks showing it's slow?
Also, caching the results of a selector is always risky because the DOM can change and then your cache is out of date. The best you can do is cache the already-parsed function that you'll use to get the selector next time.
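To illustrate the staleness risk mentioned here, a minimal example under the assumption of a naive cache that stores the selected node itself (resultCache and cachedById are hypothetical names, and an element with id="someId" is assumed to exist in the page):
// Naive cache that stores the selected element itself, which is what the
// comment warns against. Once the DOM changes, the cache keeps returning a
// detached node.
var resultCache = {};

function cachedById( id ) {
    if ( !( id in resultCache ) ) {
        resultCache[ id ] = document.getElementById( id );
    }
    return resultCache[ id ];
}

var first = cachedById( "someId" );                 // caches the current node

// Replace that node in the DOM...
var replacement = document.createElement( "div" );
replacement.id = "someId";
first.parentNode.replaceChild( replacement, first );

// ...and the cache is now stale: it still returns the detached node.
console.log( cachedById( "someId" ) === replacement );          // false
console.log( document.contains( cachedById( "someId" ) ) );     // false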
Changed April 30, 2013 09:13PM UTC by comment:2
status: | pending → new |
---|
So that's how you handle a contribution: by assuming I'm stupid and have cached the actual DOM output. You obviously haven't looked at a single line of my changes, otherwise you would've made a slightly smarter reply, I suppose.
1. Optimizing is _NEVER_ bad. Currently jQuery's performance when selecting a DOM element is ~10x slower than doing it "manually" with getElementById(), so it can certainly use an improvement.
2. So the jsPerf isn't a good enough benchmark for you. Please...
3. Since you didn't read the code, I will spell it out for you: I am caching the output of the match variable and using that when the same selector is given a second time. I am still using getElementById, so there is no DOM caching!
Sorry for nitpicking heavily on you right now, but your reply put me slightly on edge.
I know I haven't provided a diff. I have provided the sources (you can see them in the jsPerf, or directly here: http://www.sc2.nl/js/jquery-2.0.0_custom.js). Making a diff is trivial, but I just didn't do it yet.
Changed April 30, 2013 09:26PM UTC by comment:3
> Optimizing is _NEVER_ bad.
Optimizing for what? Size? Speed? Code complexity?
The current code can get 1.5 MILLION IDs per second. How is that slow? Why should we further optimize the case that is already fast?
Changed April 30, 2013 10:03PM UTC by comment:4
Replying to [comment:3 dmethvin]:
> > Optimizing is _NEVER_ bad.
> Optimizing for what? Size? Speed? Code complexity? The current code can get 1.5 MILLION IDs per second. How is that slow? Why should we further optimize the case that is already fast?
1.5 million is very fast indeed, yet native is 10x faster. I just don't think jQuery should add such a big load on top of JavaScript. It would be kind of pointless for browser vendors to keep optimizing JavaScript if they followed your reasoning.
In my opinion, jQuery should stay close to native JavaScript and do everything it can to get there. My patch brings ID selection to ~1/5th of native performance, and I would like to get to 9/10ths (near native) with jQuery.
Changed April 30, 2013 10:34PM UTC by comment:5
resolution: | → notabug |
---|---|
status: | new → closed |
By your logic, if you want to get from New York to California faster, you should run to your garage before driving there because clearly running is faster than walking. True, but it doesn't make any real difference to the overall time it takes to get across the country.
You're not optimizing anything useful. Run a profiler on a real web page and see what is really taking the time. It isn't at all bottlenecked on getting elements by ID. Don't let jsperf fool you into thinking that you've increased performance by 3x in any real sense.
Changed April 30, 2013 10:51PM UTC by comment:6
Nice. I will certainly think twice (or three times in this case) before ever submitting anything to jQuery again. I guess performance is not important here.
Changed April 30, 2013 11:03PM UTC by comment:7
Sorry it didn't work out. You might turn it into a plugin, in case someone out there needs to select more than 1 million unique IDs per second on a single page and jQuery is too slow to do it.
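If someone did want to package this as a plugin, a minimal sketch might look like the following (jQuery.byId is a hypothetical name, not an existing API). It skips the selector parser for plain IDs but still looks the element up fresh on every call, so nothing can go stale:
// Hypothetical plugin-style helper: bypass selector parsing for plain IDs,
// while calling getElementById on every invocation so no DOM state is cached.
( function( $ ) {
    $.byId = function( id ) {
        var elem = document.getElementById( id );
        // Wrap the element (or return an empty set) to keep the usual jQuery API.
        return elem ? $( elem ) : $();
    };
} )( jQuery );

// Usage: $.byId( "someId" ).addClass( "highlight" );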
Changed May 01, 2013 07:51AM UTC by comment:8
OK, I've got a very nice additional performance improvement, but I don't know why it happens or whether it's good by any means.
In jQuery you have these lines:
jQuery = function( selector, context ) {
    // The jQuery object is actually just the init constructor 'enhanced'
    return new jQuery.fn.init( selector, context, rootjQuery );
},
Now if I omit the "new", I get:
jQuery = function( selector, context ) {
    // The jQuery object is actually just the init constructor 'enhanced'
    return jQuery.fn.init( selector, context, rootjQuery );
},
then the performance of the selector shoots up! It is in fact at ~1/2 of native performance, which is quite a significant improvement if you ask me. However, I wonder why this happens. Can I safely omit the "new", and what happens internally if I do?
You can test it out here: http://jsperf.com/jquery-dom-selector-benchmark/2 (the "raw speed" tests)
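A likely explanation, offered as an assumption rather than a confirmed answer: jQuery assigns jQuery.fn.init.prototype = jQuery.fn, so calling init without new runs it as a plain method call with this bound to the shared jQuery.fn object. Nothing is allocated per call, which is why the benchmark looks faster, but every "selection" then writes into the same shared object. A stand-alone sketch of the effect, using illustrative names rather than jQuery's real internals:
// Stand-alone imitation of the pattern, to show what omitting `new` does.
// `lib` and `lib.fn` are illustrative names, not jQuery's actual code.
var lib = function( value ) {
    // return new lib.fn.init( value );   // the original pattern
    return lib.fn.init( value );          // `new` omitted, as in the question
};

lib.fn = {
    init: function( value ) {
        // Without `new`, `this` is lib.fn itself, shared by every call.
        this[ 0 ] = value;
        this.length = 1;
        return this;
    }
};
lib.fn.init.prototype = lib.fn;

var a = lib( "first" );
var b = lib( "second" );

console.log( a === b );    // true: both calls returned the same shared object
console.log( a[ 0 ] );     // "second": the later call overwrote the earlier one
So the speedup comes from skipping object allocation, at the price of every result sharing state; the "new" cannot safely be dropped.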
Changed May 01, 2013 12:07PM UTC by comment:9
keywords: | → dbr |
---|