News Article: You used JavaScript to write WHAT?


It’s an interesting article, particularly for the author’s comments about JS performance on page 2. Oh, I really want to get my hands on Tamarin inside Firefox…

There’s a thought that’s been rumbling around my head for a few weeks, and I just want to throw it out there. Wouldn’t it be nice, as Tamarin stabilizes, to have a Firefox 3.1 that was identical to Firefox 3.0 but used the Tamarin engine instead of the current JS interpreter? The first fruits of Mozilla 2.0, so to speak, and a preview of what’s in store for JavaScript-land. We could even call it Firefox 3.14159. 🙂

I have no idea if this is technically feasible or not. Hopefully one or two of the Tamarin hackers can chime in here.

The IE meta standards tag – a slightly different take

I know I’m going to get flamed for this, but…

Everybody on p.m.o is ranting hard about Microsoft’s decision to include a third mode for web standards rendering. They’re all calling on Microsoft to Tell The Non-Compliant Web To Take A Hike.

What makes me chuckle is that, in one sense, the p.m.o community is telling Microsoft to act like the monopolist the community hates. The community is telling Microsoft to dictate to the Web (again) how the Web should be, to throw its massive market share around and do something that might break the Web. Bear in mind that the Web’s ecosystem isn’t just about browsers and their market share. It’s also about the portals, the sites we visit all the time: CNN, NBC, MySpace, Wikipedia, and so on.

You know what? Someday we’re just going to have to learn not to give a damn what Microsoft does with their browser. Let the evolution of the Web decide what’s best. Let Firefox be Firefox, let Opera be Opera, and if Microsoft doesn’t want to come to the table, fine. Then the W3C might mean something again – or the WHATWG will just replace it. (Just as long as people don’t cheat and make sneaky side agreements to install their browser as the default… oh, wait, ummm…)

On second thought, they’re just feeding authors’ addictions to bad code. I really ranted about that a few weeks ago. Let them (bit)rot. 🙂

Update: A few people seem to think that I want to dictate to Microsoft how they should build their browser. I want to strongly suggest they do certain things (where’s my DOM 2 TreeWalker?), but I am not going to scream bloody murder about it.

Personally, I do think three rendering modes is a bad idea. But then again, I don’t have a website. I’m pretty sure if I did, I’d have to think long and hard about supporting Internet Explorer with more than basic HTML. IE’s users are rarely my target audience anyway.

Seriously, people, this blog post is about two words: Lighten up.

Turned 30 today… more ruminations on me, tech… and life

In contrast to my depressing outlook on life four years ago, I’m really quite pleased with my immediate future. I’m rocking and rolling at DVC (we just got some nice t-shirts), and (at least for the moment) my workload is somewhat manageable. At DVC, anyway.

That said, there’s still room for improvement. We still don’t have an XPCOM debugger – and with Tamarin reshaping JavaScript next year, one may be harder to build than ever before. (Or easier. What happens to JS debugging in the future is an open question, but I’ll still try to do some kind of 1.9-compatible mockup.) I’ve postponed my Verbosio work again, and I’m trying to find time for that. I’m still overwhelmed by the number of Mozilla tasks I can’t find time for, not to mention a number of other very worthy projects I have repeatedly dropped the ball on. About the only one I still participate in regularly is CodingForums, where I’ve moderated the XML forum for over five years and helped out for over ten (though even that has suffered lately).

Not to mention the complete (and I do mean complete) lack of a life. For the first time in a decade, I am not able to fly home and visit family this year. Christmas this year will be very, very lonely.

Here’s a hint for anyone just entering the software industry: you can find yourself loving your chosen hobby too much. It’s happened to me, and though I’ve benefited in some ways, the tradeoffs hurt.

A few other random thoughts…


The Web needs a Superfund-like cleanup

Yahoo News. CNN. Ebay. Any major site you visit today makes hefty use (or, more accurately, abuse) of HTML. We’ve pushed HTML far beyond what it was designed to do, and browsers (and developers!) pay the price. Plaxo’s Joseph Smarr gave a talk on just what the end result of this is (featured on planet.mozilla earlier, but here’s a link to the horse’s mouth: High Performance JavaScript video).

This has been bugging me for a few weeks now, as a deeper-level problem. Part of my job (not just at DVC, but at previous companies) involves figuring out why products I work on break on certain web pages. QA says “We’re broken on this page,” and before I can work on the bug I have to dismantle the page. Imagine how much fun it is to dismantle code like:

<div><div><div><div><div><div><div><div><div>…</div></div></div></div></div></div></div></div></div>…

Or the horrors of table-based layout, especially nested-table-based layout. This practice is officially known as “minimizing a testcase”, but realistically it ought to be called a steaming pile of fertilizer. No one wants to do it, and pages using hyperbloated markup make it ten times harder. (Especially when one character of whitespace can make the bug disappear.) Then there’s the worst feeling of all: minimizing a testcase, fixing the bug, and then finding out something else is busted on the same page that your fix didn’t catch.
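For the curious: “minimizing” means whittling a page down to the smallest markup that still shows the bug. A finished testcase (this one is invented, not from any real bug) can end up as small as:

```html
<!-- A whole 200KB page, boiled down to the two elements that still
     trigger the (hypothetical) layout bug: -->
<div style="float: left;">
  <table><tr><td>x</td></tr></table>
</div>
```

Getting from the real page to something like that, one deletion at a time, is where the fertilizer-shoveling comes in.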

Unfortunately, it’s a vicious cycle. Major sites want to work in all major browsers, and major browsers don’t want to break major sites. People are quick to blame Internet Explorer (and I’m one of those people), but that’s not enough. We need to understand that HTML + JavaScript + CSS + AJAX + smart developers = radioactive sludge: code that just barely does what we want.

Seriously, how much code should a web page need to implement a tabbox?

I just spotted a Quote of the Day from GMail that summarizes this well, by Gen. George S. Patton: “If everyone is thinking alike, then somebody isn’t thinking.”

There are design flaws in the whole process that make some of this unfixable in its current state. XML Namespaces have been around since 1999, but HTML isn’t XML (unless you convert wholesale to XHTML). So as long as we keep generating web pages with plain HTML, we’re stuck. On the other hand, HTML isn’t going away any time soon.

Maybe it’s a standardized user-interface language we need. Mozilla has XUL (which has never worked as well on web pages as it has for chrome apps), Microsoft has XAML (oh, wait, you need Vista for that, don’t you?), and somewhere there’s a W3C discussion about creating a unified UI language (but how credible is the W3C among developers these days?).

The Tamarin project should improve JS performance significantly (and when it does, Microsoft and the others will be forced to respond – good news for all users), but this doesn’t solve the underlying problem – it just makes the problematic code run faster.

Of course, all this is “Web 2.0” – but I don’t think anyone really understands that Web 2.0 should be easier to work with than Web 1.0. Web 2.0 shouldn’t just be about cool widgets.

How can we, particularly Mozilla developers, contribute to a fix? (It’d be the height of arrogance to ask “How can we fix this ourselves?”.)

One way would be to encourage web sites to switch to XHTML (and serve it as something other than text/html). This would enable mixing HTML with other languages far more efficiently (even more than with XML data islands, which end up being generic XML). To those who say, “But Internet Explorer doesn’t really support XHTML”: start screaming at Internet Explorer’s team about exactly that. Here’s a really good question for Microsoft: for the same effect, would IE’s parser team prefer to eat hundreds of HTML tags, plus hundreds of lines of JavaScript, plus CSS, or a few dozen XAML tags mixed in with XHTML? I know which I would pick: the one that requires fewer bytes of web page source.
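For anyone experimenting with this, the switch is mostly a matter of the Content-Type header. With Apache, for example, a one-line directive does it (the `.xhtml` extension here is just a convention, not a requirement):

```
# Serve .xhtml files as real XHTML rather than text/html (Apache example)
AddType application/xhtml+xml .xhtml
```

Of course, once you serve that header, browsers parse the page as XML, and a single well-formedness error stops rendering cold, which is exactly the discipline I’m arguing for.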

Another way would be to make a standard user-interface language for the web – and implement it so that web pages could use it. Stop downloading cruft from the web – store it locally on the machine as sandboxed components. (Think XTF or XBL without privileges to implement user-interface.) Make it something that doesn’t need a whole package from Yahoo!, another package from Google, another package from Tom’s Best User Interface Widgets Page, etc. Just make it something everyone can use. And if it means a plug-in for a stubborn vendor, hire someone to write the damn plug-in! (Provided that the plug-in itself has a clearly available spec and a clearly available owner.)

I think the biggest impact we could have would be to ask the people who build and maintain these major sites, “Hey, what can we add support for that would make your jobs easier and your code cleaner?” It would not surprise me to find out that Mozilla was already talking behind the scenes to CNN, Ebay, Yahoo, etc. – specifically to their web engineers. It also wouldn’t surprise me if they weren’t, and were instead focusing their efforts on making the browser UI better, the user experience better, and on meeting standards these major sites just don’t give a damn about – while still trying to render those sites well.

I don’t think HTML is broken, nor is JavaScript, CSS, or anything else. I simply think we’re demanding far too much from them, and we need a better solution for the parts of the web that really put HTML to the test. Even WHATWG doesn’t go far enough…


Beyond clones: synchronized DOM nodes

For years, I’ve tried to answer a simple question: how do you show the same data in two places at once in Mozilla? In particular, DOM nodes. The simple answer is to clone the data in one location and place the clone in the other. But that’s not enough: when the data changes in one location, I want it to change in the other, too.

Possibilities include:

  • A split view of the same document, particularly for editing
  • My latest attempt at a scrollable content object model (XBL at work, again)
  • Correcting markup in one section of code, and having it update a dependent section later automatically. Think slide shows, where two slides are nearly identical – and a semantic relationship exists between them. Also think steps of a theorem proof.

An ideal solution might involve a special layout frame for transposing the data, but that’s more work in an area I’m unfamiliar with. A solution I can do (more expensive memory-wise, but at least workable) involves setting DOM mutation listeners. Read the extended entry for more details and a ZIPped source directory for my own synchronizer – if you don’t mind some technobabble.
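The listener approach can be sketched with plain objects standing in for DOM nodes (in the real thing you would register DOM mutation listeners on the source node and mirror the change into the clone; the names below are all made up for illustration, not from my ZIPped source):

```javascript
// Minimal sketch of one-way node synchronization. Plain objects stand
// in for DOM nodes; in a browser, mutation listeners on `source` would
// fire the notification instead of setText() doing it by hand.
function makeNode(text) {
  return {
    text: text,
    listeners: [],
    setText: function (value) {
      this.text = value;
      // Notify every registered listener, as a mutation event would.
      this.listeners.forEach(function (listener) {
        listener(value);
      });
    }
  };
}

// Keep `mirror` in step with `source`: copy the current state,
// then subscribe to all future changes.
function synchronize(source, mirror) {
  mirror.text = source.text;
  source.listeners.push(function (value) {
    mirror.text = value;
  });
}

var source = makeNode("Hello");
var mirror = makeNode("");
synchronize(source, mirror);
source.setText("Hello, world"); // mirror.text follows along
```

The memory cost I mention comes from the listeners themselves: every synchronized node pair carries at least one extra closure, and a real implementation has to watch attribute and child-list mutations too, not just text.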


Operating systems disconnect

This morning, I noticed something funny about how I approach various operating systems.

When Microsoft Windows Vista was released, I stayed way the hell away. I have a (nearly) perfectly functional Windows XP box, deliberately overbuilt when I bought it so it would last a while. The news reports about Vista were less than flattering, and ditto the blogs. Despite the occasional BSOD, which I now accept with mild grumbling, Windows XP is sufficient for my needs and the last thing I want to do is install an operating system (again) – and all the necessary build software – for building Mozilla. With Vista Service Pack 1 not too far off, I’m still convinced of the rightness of my decision.

Apple’s Macintosh OS X 10.5, code named Leopard, was released a few days ago. At first I was tempted to go get it, but then I noticed my laptop’s antivirus expires in a couple months. So again, I’m taking a wait-and-see approach, letting the AV program run out before upgrading. Again, the machine is adequate to my needs. I haven’t fired up my Mac Mini in a little while, probably because I don’t have a monitor dedicated for it or a desk big enough. (I’m lazy on that front.)

The Fedora Project will release the Fedora 8 edition of Linux in seven days. Yesterday, I pre-ordered a copy of it: two US dollars plus shipping and handling, from an online vendor. (When Fedora 7 was released, I set my box to downloading it – first by FTP, which died a couple of times, and then by Bittorrent… but having decided to be generous to others and upload twice as much as I downloaded, it took three days to reach that 2.000 ratio. Thanks a lot, Comcast.)

Here’s the part I don’t rationally grok. With Mac, and especially with Windows, I perceive a fair bit of pain with upgrading the operating system. With Linux, it’s not even a question – I like it, and I actually look forward to the latest & greatest.

I had some rough experiences with Kanotix, a Debian-based system, but primarily because I couldn’t get LILO to default to Windows at the time. I have had good experiences with Fedora in the past, and I still do. Maybe it’s because I don’t hear nearly as much bad news about Linux in general, so I’m willing to trust it more. Maybe it’s because I can get copies of a Linux-based OS dirt-cheap. (Legally, too. I don’t do pirated software.) Maybe it’s because building Mozilla has the lowest barrier to entry on Linux – almost everything comes with the default OS installation. (The debuggers are extremely painful to use, though.) Maybe it’s because the malware community fears offending the (very smart, very persistent, and very loud!) Linux developer community. Maybe it’s because of the frequent updates Fedora offers – averaging one set a week, I’ve noticed – far more often than the Two Titans.

I don’t know what it is, really. Ultimately, I don’t think I care. Fedora’s brand of Linux is one I simply don’t have any cause to question. With Windows or Mac, I’m 90-95% sure that I’m at fault when something unexpected happens. With Linux, it’s 99.9%, at least. I simply take for granted that Linux Really Does Work… and for a developer, that’s saying a lot.

(It’s not 100%… see issue 2353. Granted, I reported it last night, but…)

Fun with slave-driving programs

In the evenings this week, I’ve been banging on the inter-process communication (IPC) bug, trying to get it working for me, with little success. I finally remembered, about noon today, that the standard output stream is buffered. So now my blocked testcase has been hit with a plunger (pun intended).

So among the several things I tried was an NS_ERROR break in the IPC code, along with GDB, the GNU Debugger. Here’s roughly how it turned out…



Overwhelmed

This week, I had a series of “great” ideas for things I wanted to do with Mozilla code, starting with a gripe from three and a half years ago. Some of them worked, some of them didn’t, and the whole thing is still evolving.

Tonight’s idea, for instance, was to start using XTF to augment XUL’s capabilities. However, since XUL is a predefined XML language in Mozilla, XTF can’t share the same XML namespace. No problem – a second XML language, plus display: -moz-box, and I’m in business. It’d also mean explicitly defining the public interfaces to these elements in IDL, something I’m not opposed to.

I’ll get back to this subject in a bit, but I need to express another thought here. These two subjects are related, and frankly I’ve been chewing in silence on this other subject for months now.

What’s going on with Mozilla 1.9.x?


Why aren’t we using Components.Exception?

Components.Exception on LXR

A long-standing custom in Mozilla code for XPCOM components in JavaScript is to throw Components.results.NS_ERROR_FAILURE or something like it. However, Components.Exception is more useful, particularly since it lets us component authors define our own error messages to go with the error code. Plus, it’s been around for quite a while.
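To illustrate the difference (the component name and message text below are made-up examples, not from any real component):

```javascript
// The long-standing custom: throw a bare nsresult code, which surfaces
// in the Error Console as nothing more than a hex number.
// throw Components.results.NS_ERROR_FAILURE;

// With Components.Exception, the same failure carries a human-readable
// message alongside the error code:
throw Components.Exception("myComponent: widget cache was never initialized",
                           Components.results.NS_ERROR_FAILURE);
```

(Chrome-privileged JavaScript only, of course; content scripts don’t get Components.)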

It’s actually pretty useful, I’ve found. I’m wondering if we should add it to the to-do list for Mozilla 2.

Alex Vincent's ramblings about Mozilla technology, authoring, and whatever he feels like.