An inductively reasoned case for increasing Web standards adoption
Due to the overstated impact of legacy user agents, designers' reluctance to retreat from the Desktop Publishing (DTP) metaphor, and the perceived peculiarities of Cascading Style Sheets (CSS), markup-driven site presentation enjoys a fallaciously strong business case. Exposing this fallacy requires the conscientious enhancement of development practices and promotion of their benefits, possibly in combination with a more cavalier attitude toward the complaints of colleagues who are reluctant to adjust to the technologies that today's user agent rendering engines support.
In my September 2006 article about the pitfalls of standards-friendly development, I made the Real World case for things like stylesheet filters. Several people underscored my points, but a near-equal number insisted in their feedback that I had instead supported the case for sticking with table-based layout. However, I strongly believe that the arguments for relying on tables for site layout are poor at best and fallacious at worst.
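For readers who missed that earlier article, one well-known example of the sort of stylesheet filter in question is Internet Explorer's conditional comments, which let a developer serve a patch stylesheet to legacy IE versions while every other browser ignores the block entirely (the filenames here are illustrative, not drawn from any particular project):

```html
<!-- Served to and honored by every browser -->
<link rel="stylesheet" type="text/css" href="screen.css" media="screen">

<!-- Conditional comment: only Internet Explorer versions earlier than 7
     parse the link below; other browsers see an ordinary HTML comment -->
<!--[if lt IE 7]>
<link rel="stylesheet" type="text/css" href="ie-patches.css" media="screen">
<![endif]-->
```

The appeal of this filter is that the patches live in their own file, which can be deleted outright once the legacy browsers it serves fall out of the support matrix.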
Distinctions of Web standards
Of particular importance is my use of the term "standards-friendly" in frequent preference to the term "standards-compliant." I emphasize this distinction because in order for a site to be inarguably standards-compliant, its component markup and styles must both pass validation tests. To be considered compliant, a site must also meet the World Wide Web Consortium (W3C) Web Content Accessibility Guidelines. However, the market demand for vendor extensions to existing technologies, paired with the W3C's legendary (and inevitable) inability to respond quickly to market conditions, often makes true standards-compliance more of an aspiration than a realistic goal.
Standards-friendly practices emphasize the spirit and strengths of Web standards without regard to the compliance of a site; in fact, there are conditions under which a site can be standards-compliant without being standards-friendly.
In order to be easily converted into XML applications such as XHTML, sites built with the various versions of Hypertext Markup Language (HTML) must be both standards-compliant and standards-friendly.
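To make the compliance half of that distinction concrete: at minimum, a compliant document declares a doctype and nests its elements legally. A skeletal document that validates as HTML 4.01 Strict looks like this:

```html
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
  "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<title>A minimal valid document</title>
</head>
<body>
<p>Content goes here.</p>
</body>
</html>
```

A real document obviously carries far more than this, but every addition must remain legal against the declared document type for the compliance claim to hold.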
Legacy user agents defined
For the purposes of this article, the class of legacy user agents (browsers) comprises all browsers except:
- Internet Explorer 7
- Safari 2.x+
- Firefox 1.5+
- Mozilla 1.7+
- Opera 9.x
- Camino 1.x
In the case of the more popular browsers, a cutoff release date of Q3 2004 is assumed, since that was the period during which Microsoft announced that it was putting Internet Explorer into a new development cycle.
In the case of the less popular browsers, more recent versions are defined as current, since the users of those browsers tend to update conscientiously.
A note on Dreamweaver, et al.
The relevance of Integrated Development Environments (IDEs) to the subject of standards advocacy is deliberately placed almost entirely outside the scope of this article.
Gaming the circumstances: pros and cons
When I consider the utility of using tables for layout out of context, I reckon that its supporting factors include:
- Consistent presentation support in practically all browsers
- Broad availability of graphic designers and producers able to work with table-based layout
Note that these two arguments are the only positive ones I can provide in support of table-based site layout techniques.
When I turn to the positive case for the broad adoption of CSS and other Web standards, I establish the following factors:
- One document (or a small collection of documents) controls the presentation of an entire site, without regard to its display platform, simplifying maintenance and extensions
- Web documents (HTML files or fragments) can be produced with a rational source order, obviating the need for multiple, platform-specific versions of the same document or fragment (literal or otherwise)
- The capabilities inherent to CSS were from the beginning intended as part of the Web standards roadmap, suggesting extensive future-proofing for sites that depend upon CSS for presentation
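A minimal sketch of the first point (the filename, IDs, and rules are hypothetical, not drawn from any particular site): every document on the site references a single shared stylesheet, so a change to one rule restyles every page at once.

```css
/* /styles/site.css - linked from every page on the site with
   <link rel="stylesheet" type="text/css" href="/styles/site.css">
   Edit these rules once, and the entire site follows. */
body     { font: 76%/1.5 Verdana, sans-serif; color: #333; }
h1, h2   { font-family: Georgia, serif; }
#content { float: left;  width: 63%; }
#sidebar { float: right; width: 33%; }
#footer  { clear: both; }
```

Compare this with a table-driven site, where an equivalent change means re-editing the presentational markup of every template and, often, every page.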
If these supporting points are agreed upon, the contest between markup-driven presentation and CSS-reliant presentation comes down to a judgment between expediency and robustness.
That so many project stakeholders choose the expedient route is, in my opinion, the result of several different factors. I intend to call out one of these factors in particular: misapprehension of the desirability of slavish support for legacy browsers.
Furthermore, I set aside (but do not dismiss) two strengths of CSS: the ease with which it allows consistency to be enforced within the same rendering context, and the fact that in combination with appropriately produced markup, rapid iterative development becomes far easier than when presentation is driven mostly or entirely by markup.
Tangled work product for the sake of the status quo
An ideal outcome, and the one for which all developers strive during a project, is universal accessibility without regard to the user agents used to visit a site. However, this puts the cart before the horse, because it totally ignores the miserable return on investment from the practices which lead to that outcome.
While it is true that table-based layout and markup-driven presentation utilize the lingua franca of the Web markup world, I fail to see how their only obvious benefit - removal of practically all obstacles from the template authoring process - outweighs the negative factors that argue for the use of CSS. These factors include:
- "Railroading" of document source order into the requirements of table markup
- Lack of error tolerance - neglect of a single cell within a table can clobber an entire layout
- Vastly increased Total Cost of Ownership (TCO) for the site, as the result of retreading required during any subsequent redesign
- Markup verbosity which unnecessarily increases the cost of serving the site, due to higher bandwidth demands and greater demands of labor for site maintenance
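To put the verbosity and "railroading" complaints side by side, compare a sketch of a common two-column table layout with its standards-friendly equivalent (IDs and dimensions are hypothetical). Note that the second version places the content first in source order, which the table grid forbids:

```html
<!-- Markup-driven: presentation is welded to the document, and the
     table grid forces the navigation ahead of the content -->
<table width="100%" cellpadding="0" cellspacing="0" border="0">
  <tr>
    <td width="25%" valign="top" bgcolor="#eeeeee"><!-- navigation --></td>
    <td width="75%" valign="top"><!-- content --></td>
  </tr>
</table>

<!-- Standards-friendly: structure only, in rational source order;
     the site stylesheet recreates the two columns -->
<div id="content"><!-- content --></div>
<div id="nav"><!-- navigation --></div>
```

In the shared stylesheet, two rules recreate the columns with the navigation on the left: #nav { position: absolute; top: 0; left: 0; width: 25%; } and #content { margin-left: 25%; }. Deleting a cell from the first version can collapse the whole grid; deleting a div from the second degrades gracefully.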
The logical conclusion is that table-based markup is odiously messy and tangled... for the sake of nothing more than expediency and an increase in the number of visitors who can view the site exactly as its designer intended - an intention itself attended by unrealistic expectations.
A lot of bluster is made over the genuinely positive aspect of the latter argument... bluster which I believe is out of proportion to its actual merit.
A strong case can be made for the comparatively low contribution of legacy user agent traffic to the business goals of most sites - the same legacy user agent traffic that theoretically requires the suboptimal implementations under question.
Illusory value of legacy user agent support
Truly reliable absolute numbers for browser market share don't exist, but frequently quoted statistics bear out the likelihood that legacy user agents (those mentioned in the introduction of this article) handle the requests of something close to ten percent of the visitors to most sites. In the case of sites tailored to more technically inclined visitors this proportion shrinks considerably, while certain environments - particularly schools and government offices - likely generate higher proportions of legacy user agent traffic.
While everyone concedes the existence of these users, I notice that very few speculate on the common reasons behind their browser choices.
I find it quite likely - based on first-hand experience and on experience shared with me anecdotally over ten years in developer fora - that these users can be assessed as follows:
| Description | Significant trait | Likely destinations |
| --- | --- | --- |
| Unaware of, or uninterested in, the obsolescence of their software | Use the Web to obtain news and write correspondence, but little else | News, webmail, and weblogs |
| Unable to upgrade for financial reasons | Rarely if ever make purchases online, without regard to their expectations of their user experience on any site | News and other rich sources of information |
| Unable to upgrade for technical reasons | Are discouraged from engaging in network use that is not strictly work- or school-related | Search, subject portals, intranets |
| Distinct from the first group, yet unwilling to upgrade for reasons of workflow or habit | Prejudicially avoid sites known to miss their expectations of their user experience | Variable |
If these conjectures approach the literal truth, we're left with the following types of sites that likely need to make a point of catering to users of legacy user agents:
- Subject-specific infosystems
- Intranets
- The God-only-knows destinations of users in the final group listed above
The last two can be deliberately discounted: intranets are almost always optimized for a particular browser, and the users in the final group almost always make their software choices with full knowledge of the consequences (i.e., their best user experience is limited by informed choice).
The low frequency with which these user groups engage in e-commerce can also be discounted, since that scenario almost certainly accounts for a small fraction of the already-small fraction of traffic generated by the legacy user agent install base.
If this chain of conclusions can be proven unbroken, the result is that the majority of developers worry about providing extensive user agent support to visitors who will rarely if ever visit the sites they're designing. While this outcome rings with a certain nobility at first, experience suggests to me that it's due more to inertia and ignorance.
One exceptionally disastrous outcome of the current state of affairs is that user agents officially beyond the End of Life point in their development cycle continue to cling to market share. This dynamic is in dire need of action, and I propose that those shops which have not already done so should agree to support such user agents only at a punishingly steep premium, or not at all.
If my assertions can be disproven - if, for example, users of legacy user agents make purchases online just as frequently as users with newer software - the point about inertia and ignorance still stands. See my comments about network effects below.
The alternative conclusions to "inertia and ignorance" with any chance of having merit are:
- Broadly accepted Web browser share numbers are, in fact, worse than "damn lies."
- The efforts of the W3C, advocacy groups, individual software publishers, and more recently site owners to bring Web standards into maturity are widely thought certain to come to naught, due to flaws in the standards themselves.
- An overwhelming majority of end users would want to preserve the status quo, even if they were to learn the benefits of standards adoption.
In fairness, I concede that all of the preceding conclusions could logically be true in combination. However, I doubt any of them are... excepting the accusation of inertia and ignorance. Consider the most likely rebuttals to the three arguments above:
- Operators of individual sites often measure their traffic conscientiously, and have no reason to fake their user agent statistics. I conclude that the low measurements of legacy market share aggregated by popular analysis tool vendors likely have a basis in the reality of the entire user base.
- "Ivory towers" notwithstanding, the W3C knows what it's doing, better than any group of developers who are merely anxious to preserve the status quo. This is true in great part because the W3C is making and following plans that account for the entire Web user population, while developers opposed to standards adoption routinely point to self-interest and near-term advantages for support of their position. In light of that stark contrast, I find it difficult to believe that the deliberate efforts of the best and brightest to create a truly interoperable Web are misguided to the point of uselessness.
- Users want what they came for - no more, no less. To some, this requirement is best met by preservation of the status quo. However, I strongly believe that even more would prefer a Web where bandwidth demands are minimized and features are added to their favorite sites quickly and incrementally, instead of requiring extensive overhauls and large-delta evolutions in the user experience... which is what happens when standards-friendly development techniques are given short shrift. The overblown buzz surrounding Web 2.0 further suggests that this assertion (and the assessment of the user population it implies) has at least some traction in the business community.
What I take away from the preceding lists is that everyone wants to ensure that site visitors get what they come for. Here's a question I'll leave with the reader:
Which process benefits are the greatest boon to user goals: the comparatively rapid time to initial launch afforded by markup-driven presentation, or the comparatively rapid time between design iterations afforded by stylesheet-driven presentation?
What about the downsides?
At the start of the article I laid out the attractors for client-side development choices, and offered the opinion that the advantages of CSS-driven presentation outweigh those of markup-driven presentation. Here I will resume the argument in terms of negatives, without duplicatively reversing the positives already mentioned.
The fact that I failed to come up with two full sets of balancing disadvantages can be considered instructive.
When network effects go bad
Having made and supported the assertion that too many developers rely on markup for presentation out of habit, argued that the business case for catering to legacy browsers is poor, and touched on the consequences for typical site life cycles, there's a universal consequence of the status quo I need to point out: the persistence of network effects.
According to the Wikipedia article on the subject:
The network effect is a characteristic that causes a good or service to have a value to a potential customer dependent on the number of customers already owning that good or using that service.
In other words, something like a Web browser becomes more valuable as its userbase grows.
In the case of legacy user agents, this idea gets turned on its head without being shattered. Given that legacy user agents are - practically by definition - inadequate to the demands placed on them by today's standards of user experience, their effect on the market is actually negative, due to their demonstrated capability to forestall innovation. However, the dynamics of the network effect remain in force: the user base is large enough to justify support; developers provide that support; and since support is available, users are effectively encouraged to continue using the old (inadequate) software. And so the cycle continues.
The principal purpose of this paper is to underscore the desirability, through rejection of presentation markup and/or sincere standards adoption, of breaking the back of the network effect in the area of browser support. The best way to do this is by enhancing the network effect of Web standards, and the current step to take toward that goal requires the formation of a bulletproof business case.
Where things stand: being here, getting there
Having established that the value of markup-driven presentation is questionable in most cases, and further that common practice makes the problem worse instead of better, it becomes necessary to discern how these unpleasant facts can be worn down to non-issues.
In recent years browser vendors have become steadily more helpful by formally ceasing support for legacy titles and forcing automatic updates. Putting aside what this says about vendors' negative perceptions of their markets, the fact remains that any system allowing for easy updates of Web browsing software is an improvement over none.
IDE vendors have posed a greater challenge for a variety of reasons, but have shown signs of slow but steady improvement.
Meanwhile, there are four other groups with a clear interest in legacy browser abandonment:
- End users: since the subset of the end user community at issue is constituted of those whose Internet literacy leaves something to be desired, education is a must. Outlets for this effort include online advertising and mass media mentions.
- Developers: bringing around the stubborn through affirmative outreach may not be possible, and at best would require phenomenal effort. Barring such efforts, unambiguous ostracism of poor work product may well be a viable (if unpleasant) alternative.
- Designers: it is vital that graphic designers be dragged, kicking and screaming if necessary, away from applying the DTP metaphor to sitebuilding. This is an all-too-regular occurrence which makes the strongest contribution to the practice of relying on markup for presentation.
- Stakeholders: perversely, it is with this group that the best hope for standards adoption lies. They sign paychecks and set expectations of professionalism; the perversity lies in the fact that once the typical stakeholder signs the check, he usually returns to being a typical end user. Put bluntly, traction from this group will only come if the business case for standards adoption can be made irrefutably more affordable than the alternatives, in terms of capital investment. Finally, the costs of legacy user agent support need to be framed in terms of a site's TCO and in terms of the return on investment made with initial development costs.
If ostracism of laggard developers is agreed to be a viable approach, it should be practiced in combination with an open-arms policy of easy access to mentoring. Such an approach could leave the community of standards advocates open to accusations of arrogance, which can be countered with argument on the basis of fact - hard numbers proving that solid, standards-aware processes deliver measurably better work product at a similar or lower level of capital investment than what is required by older processes.
In summary, standards advocates need to continue their dialogue with software publishers, while perfecting the business case for standards-friendly development and using a carrot-and-stick approach with users and developers.
The remaining (and thorniest) problem is graphic designers' collectively fanatical attachment to the DTP metaphor; it is this attachment which puts stakeholder and developer expectations at odds with the underlying strengths of Web standards.
Changing process so that business cases stick
Among most users, there's a perception that Web browsers are monolithic in nature - that version numbers aren't things end users ought to worry about. Paradoxically, getting the message across about version differences is far less important than selling the business case for the changes made possible when those differences are respected.
The cornerstone of such an argument requires the account representative to emphasize the cost-to-benefit ratio of presentation-driven markup production; as pointed out, the argument can be made that in the near term, such benefits are questionable. Extra time is spent prototyping and producing templates on account of bloat and the problems imposed by irrational source order.
It is my belief that the development of new, squeaky-clean template libraries is a must; CSS is more than flexible enough to work within their constraints. Additionally, existing template libraries attached to popular publishing systems (for example, WordPress themes) could see increased uptake as a consequence of improved documentation.
Increasing the availability of high-quality templates should make it possible to make the lead times for standards-friendly development comparable to markup-driven presentation methods.
Adherence to wireframes
The elimination of edge cases during the design process is also a must for improving lead times. Whereas each layout footprint within a table is fairly well contained with respect to its neighbors, stylesheet-driven presentations are much more complex in terms of the relationships between rules and adjacent elements. Clear and strictly followed wireframes are essential to simplifying those stylesheet relationships.
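As an illustration (the selectors and measurements are hypothetical), consider how a handful of rules lean on one another; if the wireframe lets any one region's dimensions drift during design, the adjacent rules must all change in sympathy:

```css
/* Three regions whose rules depend on one another. A wireframe that
   fixes the sidebar's width up front keeps all three rules stable. */
#sidebar { float: left; width: 12em; }
#content { margin-left: 13em; } /* must stay clear of the sidebar */
#footer  { clear: both; }       /* must wait for both columns to end */
```

A table cell, by contrast, carries its dimensions with it; the price of that containment is everything catalogued earlier in this article.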
Stylist input during the graphic design stage
Given a graphic designer whose mindset is better suited to print work, a collegial relationship with the stylist responsible for implementing the designer's work is also imperative.
It is my belief that hard work in these areas of interest will make long strides toward reducing capital investment demands made by the development of standards-friendly sites, which in turn will make irresistible the business case for standards-friendly development.
Do media format wars provide a lesson for the challenges faced by Web standards advocates?
During the editorial process, this question jumped into my mind since I see a compelling case for similarities between media format wars and the standards advocacy fight.
In media format wars, a technology comes to be perceived as the de facto standard when its most important benefits are the easiest for the median end user to access. If we extend this to the circumstances of Web standards advocates, our first task is to define end user needs: both to consume information, and to publish it.
If we extend the lesson into the realm of strategy, the goal of standards advocates must be to make both of these tasks easier and/or cheaper to accomplish in a standards-compliant context.
I am confident that when all other factors are equal, standards-friendly sites are often easier (and rarely more difficult) to use than ones that are not; I will leave it to someone else to prove that belief. In any case, standards-friendly development processes carry network effects of their own; as more designers and developers adopt those processes, their user-experience advantages will become better understood.
The greater challenge lies in the publishing realm. Online services such as Blogger have lowered the barriers to entry for Web publishing without regard to the role of Web standards, and I suspect that standards advocates will want to consider supporting the development of one or more online Content Management Systems that will address the need for lowered barriers to entry when it comes to publishing arbitrarily structured, standards-friendly sites. WordPress.com is a solid advance in this area, but is only the first step of many that are needed to bring Web standards into the mainstream to a degree beyond dispute.
I concede that this entire article is based on conjecture, personal experience, and anecdote, rather than properly controlled research. I will leave it to the reader to decide whether these potentially biased "sources" are still good enough to make an effective case.
Whatever the circumstances, the glacial speed at which developers and designers are taking up standards-friendly production techniques can be increased, and I hope that some of the ideas I've raised can be put to effective use.