Browser Security Handbook, part 1
In addition to the aforementioned "true" URL schemes, modern browsers support a large number of pseudo-schemes used to implement various advanced features, such as encapsulating encoded documents within URLs, providing legacy scripting features, or giving access to internal browser information and data views.
Encapsulating schemes are of interest to any link-handling applications, as these methods usually impose specific non-standard content parsing or rendering modes on top of existing resources specified in the latter part of the URL. The underlying content is retrieved over HTTP, looked up locally (e.g., file:///), or obtained through some other generic method - and, depending on how it is then handled, may execute in the security context associated with the origin of that data. For example, the following URL:

jar:http://www.example.com/archive.jar!/resource.html
...will be retrieved over HTTP from http://www.example.com/archive.jar. Because of the encapsulating protocol, the browser will then attempt to interpret the obtained file as a standard Sun Java ZIP archive (JAR), and to extract and display /resource.html from within that archive, in the security context of example.com.
In addition to the encapsulating schemes enumerated above, there are various schemes used for accessing browser-specific internal features unrelated to web content. These pseudo-protocols include about: (used to access static info pages, error messages, cache statistics, configuration pages, and more), moz-icon: (used to access file icons), chrome:, chrome-resource:, chromewebdata:, resource:, res:, and rdf: (all used to reference built-in resources of the browser, often rendered with elevated privileges). There is little or no standardization or proper documentation for these mechanisms, but as a general rule, web content is not permitted to directly reference any sensitive data this way. Permitting such references to go through on trusted pages may nevertheless serve as an attack vector in the case of browser-side vulnerabilities.
Finally, several pseudo-schemes exist specifically to enable scripting or URL-contained data rendering in the security context inherited from the caller, without actually referencing any additional external or internal content. It is particularly unsafe to output attacker-controlled URLs of this type on pages that may contain any sensitive content. Known schemes of this type include:
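One widely supported scheme of this kind is data:, which embeds a complete document directly in the URL. The following Python sketch shows, purely as an illustration, how such a URL may be decoded (the payload is an arbitrary made-up example; real browser handling involves many more corner cases):

```python
import base64

def parse_data_url(url):
    """Split a data: URL into (mime_type, decoded_payload).
    A rough sketch: charset parameters and other corner cases are ignored."""
    assert url.startswith("data:")
    header, _, payload = url[len("data:"):].partition(",")
    if header.endswith(";base64"):
        mime = header[:-len(";base64")] or "text/plain"
        return mime, base64.b64decode(payload).decode()
    return header or "text/plain", payload

# In some browsers, an attacker-supplied data: URL of this shape renders
# in the security context inherited from the page that navigated to it -
# hence the risk described above.
mime, body = parse_data_url(
    "data:text/html;base64,PHNjcmlwdD5hbGVydCgxKTwvc2NyaXB0Pg==")
```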
The core protocol used to request and annotate much of web traffic is called the Hypertext Transfer Protocol. This text-based communication method originated as a very simple, underspecified design drafted by Tim Berners-Lee, dubbed HTTP/0.9 (see W3C archive) - these days no longer used by web browsers, but recognized by some servers. It then evolved into a fairly complex, and still somewhat underspecified HTTP/1.1, as described in RFC 2616, whilst maintaining some superficial compatibility with the original idea.
Every HTTP request opens with a single-line description of a content access method (GET, meant for requesting basic content, and POST, meant for submitting state-changing data to servers - along with a plethora of more specialized options typically not used by web browsers under normal circumstances). In HTTP/1.0 and up, this is then followed by a protocol version specification - and the opening line itself is followed by zero or more additional field: value headers, each occupying its own line. These headers specify all sorts of metadata, from the target host name (so that a single machine may host multiple web sites), to information about client-supported MIME types, cache parameters, the site from which a particular request originated (Referer), and so forth. Headers are terminated with a single empty line, followed by any optional payload data sent to the server, if its presence is indicated by a Content-Length header.
One example of an HTTP request might be:
POST /fuzzy_bunnies/bunny_dispenser.php HTTP/1.1
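The structure described above can be illustrated with a short Python sketch that splits a raw request into its method line, headers, and Content-Length-delimited payload (the Host header and body shown are made-up illustrations; a production parser must handle many more quirks, such as duplicate fields and chunked encoding):

```python
def parse_http_request(raw):
    """Split a raw HTTP request into (method, path, version, headers, body).
    Simplified sketch: headers end at the first blank line, and the payload
    length is dictated by Content-Length."""
    head, _, body = raw.partition("\r\n\r\n")
    lines = head.split("\r\n")
    method, path, version = lines[0].split(" ", 2)
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    length = int(headers.get("content-length", 0))
    return method, path, version, headers, body[:length]

raw = ("POST /fuzzy_bunnies/bunny_dispenser.php HTTP/1.1\r\n"
       "Host: www.example.com\r\n"
       "Content-Length: 11\r\n"
       "\r\n"
       "hello=world")
method, path, version, headers, body = parse_http_request(raw)
```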
The server responds in a similar manner, returning a numerical status code, the protocol version spoken, and similarly formatted metadata headers, followed by the actual content requested, if available:
HTTP/1.1 200 OK
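A matching sketch for the response side, splitting the status line into its three components (simplified in the same way as the request parser above):

```python
def parse_status_line(line):
    """Split an HTTP status line into (version, numeric code, reason phrase).
    The reason phrase may itself contain spaces, hence the bounded split."""
    version, code, reason = line.split(" ", 2)
    return version, int(code), reason

version, code, reason = parse_status_line("HTTP/1.1 200 OK")
```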
Originally, every connection would be one-shot: after a request was sent and a response received, the session was terminated, and a new connection had to be established. Since the need to carry out a complete TCP/IP handshake for every request imposed a performance penalty, newer specifications introduced the concept of keep-alive connections, negotiated with a particular request header (Connection: keep-alive) that is then acknowledged by the server.
This, in conjunction with the fact that HTTP supports proxying and content caching on interim systems managed by content providers, ISPs, and individual subscribers, made it particularly important for all parties involved in an HTTP transaction to have exactly the same idea of where a request starts, where it ends, and what it is related to. Unfortunately, the protocol itself is highly ambiguous and has a potential for redundancy, which leads to multiple problems and differences between how servers, clients, and proxies may interpret responses:
Many specific areas, such as caching behavior, have their own sections later in this document. Below is a survey of general security-relevant differences in HTTP protocol implementations:
NOTE 2: Refresh header tokenization in MSIE occurs in a very unexpected manner, making it impossible to navigate to URLs that contain any literal ; characters in them, unless the parameter is enclosed in additional quotes. The tokenization also historically permitted cross-site scripting through URLs such as:
Unlike in all other browsers, older versions of Internet Explorer would interpret this as two URL= directives, with the latter taking precedence:
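The divergence can be modeled with a hypothetical pair of tokenizers: splitting the header value on every ; and honoring the last URL= token approximates the old MSIE outcome, whereas treating everything after the first URL= as a single opaque URL approximates other browsers. The header value below is a contrived illustration of the attack pattern, not the original example:

```python
def refresh_target_msie_style(value):
    """Hypothetical model of old MSIE behavior: split the Refresh value
    on ';' and let the last URL= token win."""
    target = None
    for token in value.split(";"):
        token = token.strip()
        if token.upper().startswith("URL="):
            target = token[4:]
    return target

def refresh_target_other(value):
    """Model of other browsers: everything past the first URL= is the URL,
    literal semicolons included."""
    _, sep, rest = value.partition("URL=")
    return rest if sep else None

# A value where a benign-looking URL smuggles a second URL= directive:
value = "0; URL=http://example.com/;URL=javascript:alert(1)"
```

Under the first model the attacker-controlled javascript: URL wins; under the second, the semicolon and everything after it remain part of the original URL.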
Hypertext Markup Language, the primary document format rendered by modern web browsers, has its roots in the Standard Generalized Markup Language, a standard for machine-readable documents. The initial HTML draft provided a very limited syntax intended strictly to define the various functional parts of a document. With the rapid development of web browsers, this basic technology was extended very rapidly, and with little oversight, to provide additional features related to visual presentation, scripting, and various perplexing and proprietary bells and whistles. Perhaps more interestingly, the format was also extended to provide the ability to embed other, non-HTML multimedia content on pages, nest HTML documents within frames, and submit complex data structures and client-supplied files.
The mess eventually led to a post-factum compromise standard dubbed HTML 3.2. The outcome of this explosive growth was a format needlessly hard to parse, combining unique quirks, weird limitations, and deeply intertwined visual style and document structure information - and so, ever since, the W3C and WHATWG have focused on making HTML a clean, strict, and well-defined language, a goal at least approximated with HTML 4 and XHTML (a variant of HTML that strictly conforms to XML syntax rules), as well as the ongoing work on HTML 5.
Today, the four prevailing HTML document rendering implementations are:
Sadly, for compatibility reasons, parsers operating in non-XML mode tend to be generally lax and feature proprietary, incompatible, poorly documented recovery modes that make it very difficult for any platform to anticipate how a third-party HTML document - or portion thereof - would be interpreted. Any of the following grossly malformed examples may be interpreted as a scripting directive by some, but usually not all, renderers:
1: <B <SCRIPT>alert(1)</SCRIPT>>
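The divergence is easy to observe with any single lenient parser. For instance, Python's html.parser (used here purely as an illustration, not as a model of any browser engine) recovers a B tag with a garbage attribute from the example above, rather than a SCRIPT tag - while some browser renderers will see and execute the script:

```python
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Record which start tags a lenient parser recovers from malformed markup."""
    def __init__(self):
        super().__init__()
        self.start_tags = []

    def handle_starttag(self, tag, attrs):
        self.start_tags.append(tag)

collector = TagCollector()
collector.feed("<B <SCRIPT>alert(1)</SCRIPT>>")
# This particular parser never reports a SCRIPT start tag here; another
# renderer may disagree - which is exactly the problem described above.
```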
Cross-site scripting aside, another interesting property of HTML is that it permits certain HTTP directives to be encoded within HTML itself, using the following format:
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=utf-8">
Not all HTTP-EQUIV directives are meaningful - for example, the determination of Content-Type, Content-Length, Location, or Content-Disposition has already been made by the time HTML parsing begins - but some other values may be set this way. The strategy for resolving HTTP - HTML conflicts is not outlined in W3C standards - but in practice, valid HTTP headers take precedence over their HTTP-EQUIV counterparts; on the other hand, HTTP-EQUIV takes precedence over unrecognized HTTP header values. HTTP-EQUIV tags will also take precedence when the content is moved to non-HTTP media, for example when saved to local disk.
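The precedence behavior described above could be approximated as follows - a speculative sketch of the merging logic, not any particular browser's implementation, and the header names used are purely illustrative:

```python
def effective_headers(http_headers, http_equiv, recognized):
    """Merge real HTTP headers with META HTTP-EQUIV values: a recognized
    HTTP header wins over its HTTP-EQUIV counterpart; HTTP-EQUIV supplies
    everything else. Names are normalized to lowercase."""
    merged = dict(http_equiv)
    for name, value in http_headers.items():
        if name.lower() in recognized:
            merged[name.lower()] = value
    return merged

headers = effective_headers(
    {"X-Bogus": "ignored-if-unrecognized", "Set-Cookie": "a=1"},
    {"set-cookie": "b=2", "refresh": "5"},
    recognized={"set-cookie", "refresh", "content-type"},
)
```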
Key security-relevant differences between HTML parsing modes in the aforementioned engines are shown below:
NOTE: to add insult to injury, special HTML handling rules seem to be applied only to specific sections of a document; for example, the \x00 character is ignored by Internet Explorer, and \x08 by Firefox, in certain HTML contexts, but not in others; most notably, both are ignored inside HTML tag parameter values in their respective browsers.
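This matters for server-side filtering: a filter that scans for literal tag names can be bypassed with a payload that one browser silently "heals" by dropping such characters. A contrived Python sketch - the filter is deliberately naive, and the character handling merely mirrors the note above:

```python
def naive_filter_allows(markup):
    """A deliberately naive filter that only rejects a literal '<script'."""
    return "<script" not in markup.lower()

# A payload relying on a renderer that ignores \x00 in this context:
payload = "<scr\x00ipt>alert(1)</scr\x00ipt>"

# The filter passes the payload unmodified...
assert naive_filter_allows(payload)
# ...yet once the ignored character is dropped, a script tag emerges.
assert "<script" in payload.replace("\x00", "")
```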