This repository has been archived by the owner on Apr 6, 2021. It is now read-only.

XSS security vulnerability #122

Closed
fxkr opened this issue Apr 20, 2016 · 15 comments
@fxkr

fxkr commented Apr 20, 2016

Paste this into monod and refresh the page:

<a href=""><iframe src="javascript:alert(1)"></iframe></a>

I'm not quite sure why "secure" should even be one of three main selling points ("cool, secure, and offline-first") for a text editor, but monod fails here. (On the upside, it's undoubtedly cool. ;)

@jmaupetit jmaupetit added the bug label Apr 20, 2016
@mandatoryprogrammer

The following payload also works:

<input onfocus=alert(1) autofocus>>

(the extra > is required to bypass whatever filter they have)
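For context, single-pass blacklist filters are routinely bypassable; here is a minimal sketch of the general failure mode (a hypothetical filter, not Monod's actual code):

```javascript
// Hypothetical single-pass filter (NOT Monod's actual code): strips
// <script> tags exactly once. This is a classic broken sanitizer.
function naiveFilter(input) {
  return input.replace(/<\/?script[^>]*>/gi, '');
}

// Nesting the forbidden tag inside itself survives one pass of stripping:
// the inner <script> and </script> are removed, and the outer tag re-forms.
const payload = '<scr<script>ipt>alert(1)</scr</script>ipt>';
const result = naiveFilter(payload);
// result === '<script>alert(1)</script>'
```

Any filter that removes (rather than escapes) dangerous markup in one pass can be defeated this way, which is why extra stray characters like the trailing `>` above often make the difference.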

@moloch--

The <input onfocus=alert(1) autofocus>> payload should be preferred to the <iframe> one, as <iframe src= results in a null origin.

@moloch--

The following also results in XSS within the context of the application's origin:

<<img onerror=alert(1) src=x/>>

@mandatoryprogrammer

mandatoryprogrammer commented Apr 20, 2016

Here is a proof-of-concept demonstrating the stored XSS: [link removed: please this is not needed]

(until someone edits it)

@willdurand
Member

Hi! Thank you for commenting on this. There are a few things that are worth considering here though. In short, the fact that you can execute JS on a web page does not, by itself, mean there is a security issue. Also, we did not provide any security layer (so there is nothing to bypass).

It looks like CommonMark does not forbid JavaScript code because it allows any HTML. At least, markdown-it allows it. You could see this as a "feature" then, but I do agree with you: it would be weird. So let's think a bit more about the potential issue.

Monod does not deal with any user data; there is no user account or anything like that, so there is nothing to leak and no privilege to escalate at first glance. Nonetheless, using JavaScript, an attacker could look into the local storage (or whatever client-side/local database layer we use) and therefore gain access to any document stored in the victim's browser: that's pretty bad.

Yet, documents are encrypted locally too. So the attacker would have access to a set of documents encrypted with AES using 256-bit keys that are never persisted. For a given document, the only way to get this key is to have the full URL, which would mean full access to the content anyway. It may be possible to brute-force the documents, but I quickly checked and it does not seem feasible in practice. Am I wrong?
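As a rough sanity check on that claim (the guess rate below is an assumed, generous figure, not a benchmark):

```javascript
// Back-of-the-envelope check with an assumed attacker speed: even at a
// trillion key guesses per second, exhausting a 256-bit keyspace takes
// an astronomical number of years.
const keyspace = 2n ** 256n;            // number of possible 256-bit keys
const guessesPerSecond = 10n ** 12n;    // hypothetical, very generous rate
const secondsPerYear = 31536000n;       // 365 days
const years = keyspace / guessesPerSecond / secondsPerYear;
// `years` is a number with more than 50 digits, so brute force is
// indeed infeasible in practice (assuming the key is truly random).
```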

On the other hand, we built this editor in a short amount of time to learn React.js, and the sharing feature was primarily designed to share content with friends. That is why we did not put more energy in CSP headers and all other XSS protections. Should we deal with sanitization at the app level?

That being said, I would like to thank you for highlighting such a weakness. Be sure that we care about security:

Thanks for contributing!
William

@mandatoryprogrammer

Hey @willdurand,

Fair points, I think the mean thing that's important here is that it would be concerning if someone hosted this on their own website (which did employ authentication/other functionality). If that was the case then this vulnerability could be used to perform authenticated requests in the context of the user's session. This is something important that users should know if they plan on using the software, even if it's hosted on a different port: https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy#IE_Exceptions

It is additionally worth noting that with this an attacker can rewrite the page (and the address bar with history.pushState) to be any arbitrary content. The idea here is that an attacker can abuse the trust given to a specific domain by rewriting the content and the URI bar to simulate something like a login page for example.

Yet, documents are encrypted locally too. So the attacker would have access to a set of documents encrypted with AES using 256-bit keys that are never persisted. For a given document, the only way to get this key is to have the full URL, which would mean full access to the content anyway. It may be possible to brute-force the documents, but I quickly checked and it does not seem feasible in practice. Am I wrong?

To clarify, are you talking about AES 256 for localStorage? And the URI hash is the key to decrypt the content? I haven't dug into the source code too deeply yet. My answer sort of depends on the answer to this :).

Hope that all makes sense, depending on how you see people using (and abusing) your software some of these risks may not be concerning to you. As with anything it's all in the context.

@fxkr
Author

fxkr commented Apr 20, 2016

Yes, I believe you should absolutely fix this. Either get it completely right or don't have "secure" as one of your top three selling points. If you decide to call a blatant XSS security hole a feature, you should call that out in very big, red letters in your README. I understand that you are hesitant because bolting on security by implementing proper parsing and output escaping at this stage would be some work.

The obvious example is that a vulnerable Monod installation at example.com/monod/ can be used to steal cookies from, say, example.com/webmail/. (Or worse, depending on CORS. No one expects XSS.)

Another example would be key leakage via document.referrer if a more-restricted document contains a hyperlink to a less-restricted document in the same Monod installation.

@willdurand
Member

It is additionally worth noting that with this an attacker can rewrite the page (and the address bar with history.pushState) to be any arbitrary content. The idea here is that an attacker can abuse the trust given to a specific domain by rewriting the content and the URI bar to simulate something like a login page for example.

That makes perfect sense.

To clarify, are you talking about AES 256 for localStorage? And the URI hash is the key to decrypt the content? I haven't dug into the source code too deeply yet. My answer sort of depends on the answer to this :).

We abstract client-side persistence with localForage. We encrypt/decrypt documents before/after persisting them locally. The URI hash is a secret (256 bits) used to generate a 256-bit key for AES (we use sjcl).

Hope that all makes sense, depending on how you see people using (and abusing) your software some of these risks may not be concerning to you. As with anything it's all in the context.

Yup!

@mandatoryprogrammer

Some other fun examples of exploitation mainly revolve around abusing the HTML5 history API to snoop on notes you've previously viewed.
The idea would be something like the following:

  • User has a history of notes in their browser history
  • User then navigates to an attacker-created note
  • The page's XSS payload fires (we'll call this Window #1)
  • The XSS payload opens a new tab (Window #2)
  • The new tab uses window.opener to access the window object of Window #1.
  • The new tab runs window.history.back() to force the browser window to go backwards into previous notes made/viewed by the user (Window #1)
  • Since both tabs are running in the same origin, the contents of these other notes can be viewed because the # key is exposed.

@willdurand
Member

willdurand commented Apr 20, 2016

@mandatoryprogrammer that is what I was thinking when I read your comment about history.pushState

@mandatoryprogrammer

Ah cool, hopefully the comments have demonstrated our concerns (let me know if anything I've said is unclear; more than happy to elaborate). I apologize if my comments were a bit brash; the word "secure" is sort of a trigger for security folks like myself :)

@willdurand
Member

The idea would be something like the following:

User has a history of notes in their browser history
User then navigates to an attacker-created note
The page's XSS payload fires (we'll call this Window #1)
The XSS payload opens a new tab (Window #2)
The new tab uses window.opener to access the window object of Window #1.
The new tab runs window.history.back() to force the browser window to go backwards into previous notes made/viewed by the user (Window #1)
Since both tabs are running in the same origin, the contents of these other notes can be viewed because the # key is exposed.

So the victim has to "navigate to an attacker-created note", but they have to open this note from the current tab (which holds a history of notes), right?

@willdurand
Member

Another example would be key leakage via document.referrer if a more-restricted document contains a hyperlink to a less-restricted document in the same Monod installation.

Thanks for the initial report @fxkr :) Care to elaborate on this?

@fxkr
Author

fxkr commented Apr 21, 2016

Another example would be key leakage via document.referrer if a more-restricted document contains a hyperlink to a less-restricted document in the same Monod installation.

Thanks for the initial report @fxkr :) Care to elaborate on this?

I was wrong, sorry (well past midnight here). The referrer will not contain the fragment identifier, so that attack does not work.

However, you may want to consider adding rel=noreferrer to all links anyway.

willdurand added a commit that referenced this issue Apr 21, 2016
* Simplify parsing (no need to deal with HTML malformed blocks anymore)
* Add noreferrer and noopener rels to links

cf. #122
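The "noreferrer and noopener rels" part of the commit can be sketched as a post-processing step over rendered HTML (illustrative only; the actual fix hooks into the markdown renderer rather than using a regex):

```javascript
// Illustrative sketch (not the actual commit): add rel="noopener noreferrer"
// to rendered anchor tags that do not already carry a rel attribute.
// noopener blocks the window.opener attack above; noreferrer blocks
// fragment-secret-adjacent leakage via the Referer header.
function addSafeRels(html) {
  return html.replace(/<a\s+(?![^>]*\brel=)/gi, '<a rel="noopener noreferrer" ');
}
```

A renderer-level hook is preferable in real code because regexes over HTML are fragile; the sketch only conveys the shape of the change.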
@willdurand
Member

So we've rolled out a new version on our instance. HTML support has been dropped, which should help here. We might enable HTML support again later with proper sanitization, but we think it is better to ship a fix right now and revisit later.
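Dropping HTML support means raw HTML in a document is rendered as literal text rather than markup (markdown-it does this when its `html` option is false). A minimal sketch of the escaping involved (standalone function for illustration, not Monod's code):

```javascript
// Escape the characters that let raw text become HTML markup.
// With HTML support dropped, an injected payload is rendered as
// inert text instead of being parsed as a tag.
function escapeHtml(text) {
  return text
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

const rendered = escapeHtml('<img onerror=alert(1) src=x/>');
// No '<' survives escaping, so no tag (and no event handler) can form.
```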

Thank you all for your help and your contributions!

5 participants