
Bug Tracker


#11974 closed bug (cantfix)

Opened June 26, 2012 01:05AM UTC

Closed June 26, 2012 01:24AM UTC

Last modified May 22, 2013 08:56PM UTC

$.parseHTML has (lots of) XSS issues and can't be labeled as secure in its current implementation

Reported by: James Churchman <> Owned by:
Priority: undecided Milestone: None
Component: unfiled Version: 1.8b1
Keywords: Cc:
Blocked by: Blocking:

So the point of $.parseHTML is that it will safely return HTML and not make any cross-site requests.

Well, I'm not so sure that it does. The problem is that it returns DOM elements, and DOM elements can (naturally) contain images, and images naturally contain links to images.

Now, as far as I know, the DOM prefetches any images regardless of whether the nodes are placed into the document or not, so that it already has the image when it is finally inserted. This means that for any img node with an embedded src URL, the URL is given a GET request.

So this will request the image by fetching the URL:

$.parseHTML("<img src=''>")

without it even being inserted.
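The prefetch problem can be made concrete with a crude string pre-scan, sketched below. Everything here (the `wouldPrefetch` name and the regexes) is hypothetical illustration, not jQuery API, and regexes over HTML are not a real defense; the point is only that markup can be inspected as a string before any DOM node, and therefore any network request, exists.

```javascript
// Hypothetical pre-scan: flag markup that would trigger network
// requests the moment real DOM nodes are created, BEFORE parsing it.
// Crude regex checks, for illustration only -- not a real sanitizer.
function wouldPrefetch(html) {
  // src/srcset on img/script/iframe etc., and href on <link>, cause
  // fetches as soon as the element exists, even while detached.
  return /<[a-z][^>]*\s(?:src|srcset)\s*=/i.test(html) ||
         /<link\b[^>]*\shref\s*=/i.test(html);
}

wouldPrefetch("<img src=''>");          // true: fetch fires on creation
wouldPrefetch("<div>plain text</div>"); // false: no resource attribute
```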

This leaves a few small but still significant types of XSS attack available:

1) You spoof a GET request on the target site; the server thinks it is a trusted request and performs it. The user is still logged in, so any ID cookies would be sent. There are ways around this, but many sites don't implement them.

2) It now makes a request to a foreign site. This is not huge, but it can still be used for tracking and other purposes, and maybe more.

Plus, this is only for the image tag; I have not yet thought about what could be done with data URIs, link tags and more.

I still think that the feature has good merit, but it can't be labeled as something truly secure because it's not. Apart from removing script tags, there is little benefit over putting this directly into the document with innerHTML.

So I think the problem here is that it returns DOM nodes. If it wishes to be secure, due to prefetching and other issues, I think it should return something else (fake DOM node objects) that any potential user could easily scan and validate contains only items they approve (divs, or images whose URLs are in a place that they allow, etc.), and then this could be inserted into the main document DOM and the "proper" DOM nodes produced for it.
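The "fake DOM node" idea could be sketched roughly as below: plain objects that carry tag, attributes and children but have no side effects until the caller approves them. Parsing HTML into this shape is assumed done elsewhere by a browser-free tokenizer; all names here (`validateTree`, `policy`, the example host) are hypothetical.

```javascript
// Sketch of the "fake DOM node" idea: side-effect-free plain objects
// that can be validated before any real DOM node is created.
function validateTree(node, isAllowed) {
  if (!isAllowed(node)) return false;
  return (node.children || []).every(function (child) {
    return validateTree(child, isAllowed);
  });
}

// Caller-supplied policy: only divs, and images from one trusted host.
function policy(node) {
  if (node.tag === "div") return true;
  if (node.tag === "img") {
    var src = node.attrs && node.attrs.src;
    return typeof src === "string" &&
           src.indexOf("https://images.example.com/") === 0;
  }
  return false;
}

var tree = { tag: "div", attrs: {}, children: [
  { tag: "img",
    attrs: { src: "https://images.example.com/a.png" },
    children: [] }
]};
validateTree(tree, policy); // true: safe to turn into real DOM nodes
```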

Just a thought, and I hope this is not too unhelpful! :-)


Attachments (0)
Change History (7)

Changed June 26, 2012 01:24AM UTC by dmethvin comment:1

keywords: → needsdocs
resolution: → cantfix
status: new → closed

Definitely, there are ways to attack or send data via back channels if the developer blindly processes untrusted content, even if they *know* that content is HTML. That applies whether jQuery is there or not.

The important things about $.parseHTML are that 1) the user explicitly says they are expecting HTML (not a selector) and 2) the user has the option to not run any scripts in the markup. The latter can be strictly enforced in environments that support Content Security Policy or, for example, Microsoft's toStaticHTML.

We can't prevent all XSS attacks inside jQuery. We're just trying to ensure that the developer is clear on what they are asking jQuery to do, and that requires making the inputs to some API calls clear.

I'm closing this cantfix/needsdocs since the best we can do is educate. :-)

Changed June 26, 2012 10:27AM UTC by James Churchman <> comment:2

Hi dmethvin, thanks for your fast response :-)

I think that you fundamentally don't understand the issue. I agree with both 1) and 2), but the implementation is fundamentally flawed.

Please re-read (maybe you wrote this section, I'm not sure :-) the part under XSS PROTECTION; it specifically says that this is a safe feature to use for taking any untrusted source:

"Developers have sometimes forgotten this, passing strings to jQuery that come from untrusted sources such as the URL or user input." and it continues in a similar fashion..

This directly implies its use as a parser of HTML from any source.

(This does not align with "We're just trying to ensure that the developer is clear on what they are asking jQuery to do", as you state.)

If the blog statement is the case, then the fundamental implementation is totally flawed: you can't go creating DOM elements from untrusted sources, even if those elements are not (yet) inserted. The DOM will perform all kinds of prefetching and other side-effecting actions.

I agree that jQuery can't prevent all possible XSS written with it, but it can avoid including functions that do it by default, especially when documented to the contrary. On this basis I am not happy that the ticket has been closed within hours of being opened.

There are obviously a few good solutions:

1) The documentation is updated to state that this is simply a way to include HTML without the ambiguity of $() possibly being seen as a selector, and that it also strips script tags.


2) It's reimplemented to generate jQuery-lookalike DOM nodes that you can call the normal .attr() and .text() etc. on, but which are not in fact real DOM nodes until inserted.

(Or maybe 3) The call takes options that let you specify which tag types are allowed, disallowed or filtered, and, for those tags, which attributes are allowed or disallowed, before the DOM nodes are made and returned.)
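Option 3 could look something like the sketch below: a filter applied to the side-effect-free node representation before any real DOM node exists. Nothing here is jQuery API; the `filterNodes` name and the options shape are hypothetical.

```javascript
// Sketch of option 3: filter tags and attributes against whitelists
// BEFORE real DOM nodes are created. Operates on plain objects of the
// shape { tag, attrs, children }; hypothetical API, not jQuery's.
function filterNodes(nodes, opts) {
  var out = [];
  nodes.forEach(function (node) {
    if (opts.allowedTags.indexOf(node.tag) === -1) return; // drop tag
    var attrs = {};
    Object.keys(node.attrs || {}).forEach(function (name) {
      if (opts.allowedAttrs.indexOf(name) !== -1) {
        attrs[name] = node.attrs[name]; // keep only whitelisted attrs
      }
    });
    out.push({ tag: node.tag, attrs: attrs,
               children: filterNodes(node.children || [], opts) });
  });
  return out;
}

var opts = { allowedTags: ["div", "p", "img"],
             allowedAttrs: ["src", "alt"] };
filterNodes([{ tag: "img",
               attrs: { src: "a.png", onerror: "alert(1)" },
               children: [] }], opts);
// the onerror handler is stripped; only src survives
```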


Changed June 26, 2012 11:28AM UTC by James Churchman <> comment:3

OK, so my initial trial was based on 30 seconds of playing around, but having had a think, you can run any JS with this :-)

e.g.:

$.parseHTML("<img src='' onload='alert(\"hi there\")'>")

will fire the onload event when the image is downloaded (it gets called twice in fact, so I presume jQuery is creating each element twice), before the element is ever inserted into the page or the return value of parseHTML can be validated.

Or an even shorter example:

$.parseHTML("<img src='z' onerror='alert(\"hi there\")'>")
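These examples show why stripping `<script>` tags alone is not enough: inline event-handler attributes execute too. A crude detector is sketched below; the `hasInlineHandler` name and regex are hypothetical illustration only, and regexes over HTML are not a real defense.

```javascript
// Why removing <script> tags alone is insufficient: inline handlers
// like onload/onerror run as well. Crude check, illustration only.
function hasInlineHandler(html) {
  // whitespace followed by an on* attribute inside a tag
  return /<[a-z][^>]*\son[a-z]+\s*=/i.test(html);
}

hasInlineHandler("<img src='z' onerror='alert(1)'>"); // true
hasInlineHandler("<img src='ok.png' alt='one'>");     // false
```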

Changed June 26, 2012 12:11PM UTC by dmethvin comment:4

this directly implies its use is as an html parser of html from any source

No, the issue with $() is that they are passing it what they think is a selector, but it really turns out to be HTML with executable code. That is the primary issue we're trying to address by making people spell it out with $.parseHTML(). The rest regarding trusted sources can be addressed in the docs.

Changed June 26, 2012 12:54PM UTC by James Churchman <> comment:5

That's an excellent, but opposite, issue.

In that case you need a $.selector("#element_id") type of call that allows somebody to explicitly say a string is a selector and can't be an HTML fragment, as it could be with the current overloaded $() when used with user input.

In that scenario it would be fantastic to have an explicit $.selector call added to jQuery, but it's the opposite issue to an XSS-secure HTML parser.

Please re-read the 1.8 announcement blog post :-)
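The proposed $.selector could be sketched as a wrapper that refuses anything parseable as HTML; the `selectorOnly` name and the check are hypothetical, not jQuery API (jQuery itself treats strings containing markup like `<tag ...>` as HTML).

```javascript
// Sketch of the proposed $.selector(): accept a string ONLY as a
// selector and refuse anything that could be parsed as HTML.
// Hypothetical name and check, not jQuery API.
function selectorOnly(str) {
  if (typeof str !== "string" || str.indexOf("<") !== -1) {
    throw new Error("selectorOnly: refusing possible HTML fragment");
  }
  return str; // in jQuery this would then be handed to $() as a selector
}

selectorOnly("#element_id");     // returns "#element_id"
// selectorOnly("<img src=''>"); // throws instead of creating elements
```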

Changed October 15, 2012 07:37PM UTC by mikesherov comment:6

keywords: needsdocs

Changed May 22, 2013 08:56PM UTC by dmethvin comment:7

#13921 is a duplicate of this ticket.