Yeah, it’s a sweet box, Mr. Karel

While trying to resolve my “when should I retire a computer” dilemma, I realized I might not need to spend any money at all. After all, my overpowered Windows desktop has plenty of capacity. Tonight, I found out how much.

Yesterday, I removed the Windows partition on one of my hard drives and replaced it with Fedora 13. After system updates, I did the following:

  1. Downloaded ccache source code directly from ccache.samba.org. (Fedora’s ccache is version 2.4. This one is version 3.1.)
  2. Unpacked ccache and installed it:
    tar -xzf ccache-3.1.tar.gz
    cd ccache-3.1
    sh configure
    make
    su
    make install
    exit
  3. Added ac_add_options --with-ccache to my .mozconfig
  4. Built a first pass of mozilla-central:
    cd mozilla
    make -f client.mk configure
    cd ../fx-opt
    time make -j4
    

    Total time: approximately 20 minutes. Finally, it runs fast enough for me not to worry.

  5. Moved the old objdir to a new folder and repeated the previous steps exactly.

The total time on that is astonishing:

real    3m39.293s
user    3m49.130s
sys    1m53.586s

Given builds in less than four minutes, I should’ve done this a couple years ago. A couple other thoughts:

  • I tried pymake -j4 as suggested in my previous blog post, but it failed to build.
  • I had actually tried to avoid putting Fedora on the same box as my Windows system, because of the inconvenience a boot loader caused me in the past. With the computer I have now, that turned out to be a non-issue: the system still defaults to booting Windows, and by hitting F10 at boot time I can select the second hard drive and boot Fedora instead.
  • I don’t think I need a solid state drive anymore…
  • Nor do I think I need to try tinkering with a RAM drive.
  • Anyone interested in a rarely-used Linux box built three years ago? Windows not included. I am willing to donate to a good cause…
  • I think I’m gonna have fun with this thing.
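
For completeness, the .mozconfig change from step 3 might look something like this. This is a hedged sketch, not my literal file: the objdir path is a guess based on the fx-opt directory above, and the optimize flag is just a common companion option.

```
# Sketch only – adjust the objdir path and flags to taste.
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/../fx-opt
ac_add_options --enable-optimize
ac_add_options --with-ccache
```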

When do you retire a machine?

I’m thinking about replacing my existing Fedora Linux desktop. I’m really on the fence about it.

On the one hand, it’s never, ever caused me any major problems. I don’t turn it on very often, except when I’m working with something I can’t easily do on the Windows box (I keep forgetting MozillaBuild doesn’t provide everything). Plus, I finally decided to try ccache, and it makes for an amazing build time: about 10 minutes on this three year old box.

On the other hand, it is three years old, and a build of Mozilla Firefox 4 trunk takes a couple of hours when ccache’s cache isn’t up to date. (For comparison, the Windows box running Windows 7 now takes about the same time, and its specs are significantly better!) Fedora did a pretty bad job partitioning the 80GB drive, so at one point my home directory partition had only a couple of GB of free space (while Fedora reserved about half the drive for itself and used very little of it). I bought an external hard drive to compensate – which turns out to be a pretty good idea anyway, from just the backup standpoint. Most ironic of all, the machine’s CD burner doesn’t reliably burn Fedora CDs, so every time I want to upgrade Fedora, I have to let my Windows machine do the work…

Now, I hear stories of i7 machines that can do clobber builds of Mozilla in fifteen minutes, and I start to think about greener pastures. As much as I tell myself, “Oh, I don’t really need it,” I keep thinking about it.

So what do you think? If you have a perfectly good machine that’s just old, running Linux, do you replace it, and why?

Implementing a DOM in JavaScript, part two: Two difficult options, with requirements and test specs

A little over a week ago, I had an idea about implementing a DOM in JavaScript. Thank goodness I wasn’t the first. (Thanks for the links, guys!)

Still, what these libraries provide isn’t enough. It’s “necessary but not
sufficient” for what I really need: using one DOM tree as a public interface to
another, “private” DOM tree. I’m calling this a “DOM Abstraction”, for lack of
a better term. So in classic do-it-yourself fashion, I set out to plan what I
needed to implement.

I just spent several hours (spread over many days) formally defining
requirements and tests. As I was reviewing my article, I asked myself a simple
question: Well, why don’t you just clone a bunch of native DOM nodes and
extend their properties?

More in the extended entry. (Warning: it’s looooooooooooooooooooooong! But I really do need developers’ feedback. I hope I’m not too boring.)

Continue reading Implementing a DOM in JavaScript, part two: Two difficult options, with requirements and test specs

Implementing a DOM in JavaScript?

I know, that sounds crazy, but hear me out…

In recent weeks, I’ve begun to suspect the singularity is near, really, in JavaScript land. After all, we now have just-in-time compilation (in at least two flavors for Mozilla code alone), the Bespin SkyWriter editor project, the new JS Reflect API, Narcissus and Zaphod… it’s a pretty exciting time.

Also at work, I’ve been seriously exposed to jQuery for the very first time. I’d heard of it and similar libraries (Dojo and the Yahoo! UI widgets come to mind); what they do is provide abstractions of the underlying DOM for common operations.

Mix that concept (abstracting the DOM) with Narcissus’s concept (a JavaScript engine implemented in JavaScript), and I start to wonder. What if we implemented a complete DOM in JavaScript, on top of the existing DOM – and then used that JS-layer DOM to implement our own features?

A few years ago, when XBL 2 was first announced, someone suggested they could implement the specification in JavaScript. Someone wiser (I think it was bz, but I don’t remember) replied that this implementation really didn’t do it, since the nodes weren’t really hidden. But if we implemented a full DOM layer in JS, and modified that…

For instance, take Node.firstChild:

JSNode.prototype = {
  // ...
  get firstChild() {
    if (this.DOMLayer.hasSpecialFirstChild(this))
      return this.DOMLayer.getSpecialFirstChild(this);
    var returnNode = this.innerDOM.firstChild;
    while (returnNode && this.DOMLayer.isHidden(returnNode)) {
      if (this.DOMLayer.hasSpecialNextSibling(returnNode))
        returnNode = this.DOMLayer.getSpecialNextSibling(returnNode);
      else
        returnNode = returnNode.nextSibling;
    }
    return this.DOMLayer.wrapNode(returnNode);
  },
  // ...
};

As long as everything was wrapped in a complete and well-tested DOM abstraction layer like this – and only objects from the abstraction layer were returned to the user – you’d have a baseline for creating (or at least emulating) a whole new way of viewing the DOM. XBL 2 could be implemented by manipulating this JS-based DOM.
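
To make the skipping logic concrete, here is a minimal, self-contained sketch of the idea. Plain JavaScript objects stand in for native DOM nodes, and every name here (DOMLayer, JSNode, hide) is hypothetical – a simplified stand-in for the abstraction layer, not an existing API:

```javascript
// A toy abstraction layer: it tracks which inner nodes are hidden
// and hands out wrapper objects instead of raw nodes.
function DOMLayer() {
  this.hiddenNodes = [];
}
DOMLayer.prototype = {
  isHidden: function (node) {
    return this.hiddenNodes.indexOf(node) >= 0;
  },
  hide: function (node) {
    this.hiddenNodes.push(node);
  },
  wrapNode: function (node) {
    // Real code would cache wrappers so each inner node maps to
    // exactly one public JSNode.
    return node ? new JSNode(node, this) : null;
  }
};

function JSNode(innerNode, layer) {
  this.innerDOM = innerNode;
  this.DOMLayer = layer;
}
// A firstChild getter that skips nodes the layer has hidden.
Object.defineProperty(JSNode.prototype, "firstChild", {
  get: function () {
    var returnNode = this.innerDOM.firstChild;
    while (returnNode && this.DOMLayer.isHidden(returnNode))
      returnNode = returnNode.nextSibling;
    return this.DOMLayer.wrapNode(returnNode);
  }
});

// Fake "native" tree: parent -> [hiddenChild, visibleChild]
var visibleChild = { name: "visible", firstChild: null, nextSibling: null };
var hiddenChild = { name: "hidden", firstChild: null, nextSibling: visibleChild };
var parent = { name: "parent", firstChild: hiddenChild, nextSibling: null };

var layer = new DOMLayer();
layer.hide(hiddenChild);
var wrappedParent = layer.wrapNode(parent);
console.log(wrappedParent.firstChild.innerDOM.name); // "visible"
```

The key invariant is the one described above: callers only ever see JSNode wrappers, never the inner nodes, so the layer gets to decide what the tree looks like.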

You could also experiment with other possibilities. I’m running into a problem where I would really like to hide some nodes from the DOM Core interfaces (particularly previousSibling and nextSibling), but expose them through other means. My first thought was to use node filtering to skip past them (you’ll hear more about that in another blog post), but I wonder if I’m just wall-papering over the real problem or actually solving my immediate problem.

I could even use this idea to get past three of my fundamental problems:

  • XTF, while beautiful for my purposes, isn’t well-supported, and XBL 2 is supposed to make it go away. I don’t know when, but I find myself using more and more of XTF’s features – particularly its notifications – in recent months. If XBL 2 really replaces XTF, it’s going to be more and more difficult to update my code. A JS-based DOM means I can just walk away from XTF entirely.
  • Recently, I found myself hacking the core DOM to support observer notifications from Node.cloneNode(), so that I could manipulate the DOM before and after a clone operation. That’s not a very nice thing to do, but I have my reasons. I can modify a JS-based DOM to avoid that step entirely.
  • At some point, I’d like to implement the Entity and EntityReference parts of the DOM Level 1 Core specification. When it comes to editing XML, DTDs and entities matter (see localization), but they’re a pain to work with.

Of course, there are some logistical problems with building a JS-based DOM. For one thing, there are a lot of DOM specifications to implement or wrap in this way. This abstraction layer would have to pass a pretty large set of tests too.

If you did all that, though, and could run it a couple layers deep (one JS-DOM layer on top of another JS-DOM layer on top of the native DOM layer) accurately… that would be a pretty good starting point for lots of custom DOM manipulations.

A search on Google didn’t turn up anything like what I detail above. Is there anyone crazy enough to write a narcissistic DOM? Given that I might need this anyway at some point in time, should I seriously consider doing this for my own project? Or is this just a really bad idea?

Why is it so wrong to write XPCOM components?

Over the last few months, whenever I’ve tried submitting patches to Mozilla that implement some feature as an XPCOM component, I’ve been told XPCOM is the wrong way to go. I don’t understand why, though.

A lot of my work – especially Verbosio – has been based on the idea that JS components work. I can clearly define the interfaces in XPIDL, with some type checking done for me by XPConnect. Ditto for scriptable C++-based components, which I write only when I feel I have to.

I’d really like someone to write a DevMo or WikiMo article (or, failing that, a blog post) explaining best practices, the costs of XPCOM, and the alternatives. There are several areas to cover:

  • C++ to C++
  • C++ code called from JS
  • JS code called from C++
  • JS to JS
  • Privileged code called from content (not escalating content to privileges, just code that runs with privileges but exposed through some public interfaces to content)
  • Anything obvious that I’ve missed.

Many of my own blog entries have been all about leveraging XPCOM. Having it explained in a tech note could profoundly affect the way I do things.

(I know I sound like a jerk in this article, and the tone is a bit hostile. I apologize for that, but as usual, I’m having trouble editing my own words… I can write new words, but which ones?)

P.S. Thanks for all the replies to my previous post… very informative, very useful opinions!

Vendor-provided user agent extensions are gone… now what do we do?

Scenario: A corporate extension modifies the user agent string via a general.useragent.extra.foo preference. The sole purpose is to let a web page know whether that extension is installed.

With Firefox 4.0, that’s no longer practical. Someone helpfully removed support for adding extra parameters to the user agent string. So now, the corporate office finds itself in a pickle.

We need some other way of informing a web page that our extension is enabled. (Not every web page, though – a selective list.) So far, we’ve considered a few options:

  • Setting the general.useragent.override preference. (This is just wrong.)
  • Adding a new HTTP header.
  • Injecting a node into the target web page.
  • Firing a custom DOM event at the target web page.
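
Of these, the custom DOM event option is the easiest to sketch. In real chrome code the extension would create the event with document.createEvent("Events") and dispatch it at the content document; in the sketch below, a tiny stub stands in for `document` so the handshake logic can run anywhere, and the event name is purely hypothetical:

```javascript
// Stub event target standing in for the content document, so this
// sketch runs outside a browser. In real code, `document` is the page.
var document = {
  listeners: {},
  addEventListener: function (type, fn) {
    (this.listeners[type] = this.listeners[type] || []).push(fn);
  },
  dispatchEvent: function (evt) {
    (this.listeners[evt.type] || []).forEach(function (fn) { fn(evt); });
  }
};

// Page side: register a listener for the extension's announcement.
var extensionDetected = false;
document.addEventListener("MyExtensionReady", function (evt) {
  extensionDetected = true;
});

// Extension side: fire the (hypothetical) event at the page.
document.dispatchEvent({ type: "MyExtensionReady" });

console.log(extensionDetected); // true
```

One caveat with this approach: the page has to register its listener before the extension fires the event, or the announcement is lost – so the extension might fire on the page’s load event, or answer a ping event from the page instead.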

I’d like your recommendations on the best way for a chrome extension to notify a web page of its existence (or non-existence). If it’s one of the four options above, please say which one. If you have your own ideas, please post them as a reply, with your reasoning behind them.

Please, no comments suggesting we should never, ever inform a web page of our extension’s existence. If we develop both the webpage and the extension, knowing the extension’s there can be really important.

Commit very early, commit very often!

You’ve probably heard it before: “Commit early, commit often”. It applies to teams, and it applies to individuals.

I agree with that advice. I started my “unstable” branch on the Verbosio XML editor project a few months ago so I could commit freely without breaking the hg trunk of my project. I’m glad I did – I now commit (and push) my changes on this branch two or three times a week without much regard to whether it works or not. It’s very useful when working from multiple computers.

A few moments ago, I wished I had been even more aggressive about freestyle committing. I had committed only one of the three changes I needed – another took me some four days to hash out, and the third I hadn’t started on yet. Then I executed a “clean” command with my build script – the one that sets up symbolic links – and lost all the files in the directory that one of those symbolic links pointed to. Four days of work, gone, due to a bug in my build script and overconfidence in my set-up.

*sigh* At least I can take comfort that I’m not the only one to make that kind of dumb mistake.

Sometimes, even “Commit early, commit often” won’t save you. So commit very early, commit very often, and if you don’t think you can — create a private branch, and commit anyway!

My former boss at Skyfire once told me branches are cheap. Disk drive space is, too.

Patch management between repositories, part 2

Back in 2008, I said the following:

I’m hoping a few people can point me to some nice freebie tools for applying patches in one repository to another repo’s code, and keeping the patches up to date. Or for handling cross-repository (and for that matter, cross-repository-platform) patches in general.

And…

From my brief research, Mercurial Queues seems perfect for this – within Mercurial repositories anyway.

Two years later, I’ve long since stopped caring about CVS. Both Verbosio (my experimental XML editor project) and Mozilla host their source code on Mercurial, and I’m getting better at Python. So I’ve once again solved my own problem.

The funny thing is that Mercurial Queues is both simple enough and documented enough for me to put this together. I mean, it was really only a day’s work to figure out. Once I found out .hg/patches was its own Mercurial repository, writing code to manipulate that inner repository was a piece of cake.

Whoever developed Mercurial Queues was very, very clever. Clever enough to make the basic design hackable. I like that.

Sparking some interest…

It’s been a whirlwind August, that’s for sure. Three weeks after Skyfire Labs, Inc. laid me off, I had a new offer from Mindspark – a sister company to ask.com. I started on Monday.

In addition, I moved from San Jose to San Leandro, so I could commute to Oakland more easily.

I still find it hard to believe, in this economy, that a guy with no college education could land a new job that quickly – and with a sizable raise, to boot!

I’ll be working on custom toolbars for Firefox at Mindspark.

Alex Vincent's ramblings about Mozilla technology, authoring, and whatever he feels like.