All posts by ajvincent

Ars Technica: Free compiling on Windows? Forget it.

This cannot possibly be good for our developer community.  Think about it:  Visual Studio has a much more developer-friendly C++ debugger than GDB on Linux – that’s why, when I’m writing C++ code, I prefer to do it on Windows.  It certainly means many of us will be stuck on VS 2010, and Mozilla will have to support pre-C++11 code for quite a while.

It wasn’t that long ago that I implemented timeouts for XMLHttpRequest.  If I’d had to make do with the VS2011 Express compiler… I’d be on Linux.

Pay-to-play sucks when it’s pay-to-make-the-Internet-better.

DimensionalMap: Moved to Sourceforge, version 1.0a2 released

What is DimensionalMap?

  • Moved the DimensionalMap project to SourceForge, under Verbosio.  The Google Code project is now obsolete and scheduled for deletion.
  • Removed the Proxy / membrane code (it never entirely worked, it would soon break, and it just got in the way)
  • Rebased build system on my Verbosio-Jasmine repository (supports xpcshell testing, downloading XULRunner SDK)
  • Implemented a .hasAny(dimensionName, coordinate) method.
  • TODO: Migrate online documentation.

Source repository at .

InterfaceChecker: Enforcing JS prototypes override correctly

I decided I had to write my own Document Object Model implementation, because the models available to me just wouldn’t meet my requirements.  That involves writing a lot of code, though, and ensuring that Element nodes, Text nodes, etc. all correctly implement Node properties like firstChild and childNodes, and methods like appendChild().

To lend a hand, I wrote a little InterfaceChecker library for ensuring that tests on a base class are run on derived classes.  (I know, JavaScript uses prototypes instead of proper classes; I’m using the term loosely here.)  In principle, I only need to do three things:

  • Write tests for the Node interface’s methods and properties
  • Designate Element as inheriting from Node, and
  • Provide some functions to build “typical” instances of Element.

My InterfaceChecker will run the Node tests against the “typical” instances of Element, for example.  It’s not DOM-specific, either:  I can do the same to arbitrary JavaScript constructors, like a theoretical Shape and Circle pairing.
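I haven’t reproduced the library’s real API here, so every name below is illustrative rather than InterfaceChecker’s actual interface; but a minimal sketch of the idea – register tests per “interface”, declare inheritance, run the base tests against derived instances – might look like this:

```javascript
// Hypothetical sketch only – not the real InterfaceChecker API.
function InterfaceChecker() {
  this.tests = new Map();    // interface name -> array of test functions
  this.parents = new Map();  // interface name -> parent interface name
}

InterfaceChecker.prototype.addTest = function (name, testFunc) {
  if (!this.tests.has(name))
    this.tests.set(name, []);
  this.tests.get(name).push(testFunc);
};

InterfaceChecker.prototype.inherits = function (child, parent) {
  this.parents.set(child, parent);
};

// Run every test registered for the named interface and all its ancestors.
InterfaceChecker.prototype.check = function (name, instance) {
  const failures = [];
  for (let current = name; current; current = this.parents.get(current)) {
    for (const test of this.tests.get(current) || []) {
      if (!test(instance))
        failures.push(current + ": " + test.name);
    }
  }
  return failures;
};

// Usage: Node's tests automatically apply to a "typical" Element instance.
const checker = new InterfaceChecker();
checker.addTest("Node", function hasFirstChild(n) { return "firstChild" in n; });
checker.inherits("Element", "Node");
```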

This code is for debugging purposes only; in my opinion, it’s a little extreme to include these tests in a production environment.

Please, let me know what you think!

Jasmine testing with XPCShell, revisited

Jasmine testing environment with XPCShell, revisited and enhanced

I’ve resurrected the idea of Jasmine testing in XPCShell.  My adaptation will do several things:

  1. Download and extract the latest Gecko SDK based on branch (Aurora, Beta or Release)
  2. Generate concatenated “bin” and “test” files with a simple build system
  3. Launch XPCShell with the Jasmine tests
  4. Launch Mozilla Firefox with the Jasmine tests
  5. Allow you to re-build the project from within Mozilla Firefox by reloading the test page
  6. Remember where it parked the SDK, so you don’t have to download it repeatedly

Quick start guide:

hg clone verbosio-jasmine
cd verbosio-jasmine
python --update-sdk=release
python test-xpc

(Note:  right now I have a dependency on ECMAScript Harmony’s Set, so this won’t work with Gecko SDK 12, the current release.  But it will work with Gecko SDK 13, which is in beta.  Whoops.)

For actually writing your own code and tests, there’s a top-level build.directories file, which simply lists directories to iterate over. Each listed directory should have a build.modules file as well – sample-build/build.targets should be a good start to explain how it does its work.
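I won’t reproduce the repository’s exact file contents here, but going only by the description above – a file that simply lists directories to iterate over – a build.directories file would be no more than something like this (directory names are made up for illustration):

```
sample-build
lib
tests
```

Each listed directory would then carry its own build.modules file, per the description above.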

Help wanted!  First of all, is this useful to you?  Do you feel like you can create a JavaScript project with build support for Jasmine, and test it more effectively than with Firefox alone?

Second, what would make this more useful to you?

  • What documentation would make this easier to use?
  • I plan on adding XUL support at a later stage, because I will be working in XUL land eventually.
  • I also plan on writing a Firebug XPI fetcher as well, and incorporating that into the HTML world.

I plan on maintaining this code separately from my main work, because I think a foundation to build a project with tests is far more useful than a fully built-out project without a portable test environment.

Jasmine in XULRunner, part 1: Started, help wanted

I’m working on my Verbosio templates repository, which is where I’m going to try (again) to build my XML templates project. The first thing I’ve been working on is getting Jasmine working with the Gecko SDK.  It’s going really well.  I’ve written some Python scripts to fetch a Gecko SDK build (in my case, Aurora builds), to concatenate JavaScript files together for Jasmine to use as specifications, and to build a XULRunner application to run the Jasmine specs.  I also have code to launch the XULRunner app with the Gecko SDK, so I can see the results. My goal is to offer the baseline as a separate, clean repository for Jasmine testing from XULRunner or XPCShell.

To get where I want to be, though, I need a little help:

  • The Gecko SDK also includes xpcshell, which means it shouldn’t be too hard to add Jasmine testing in XPCShell.  I could then do lightning-quick test runs from the command line.  Tomas Brambora from Salsita Software has already done some work along these lines.  I just need his code updated for this project, and MPL tri-licensed for check-in.
  • Only about 5% of the code I’m planning to write needs a XUL environment.  With tools like Firebug, ordinary HTML and “content JavaScript” are much easier to debug.  So I need to integrate downloading a Firebug XPI into my project, and I need a new Jasmine reporter which runs in the HTML and sends message events (postMessage, anyone?) to the privileged XUL Jasmine reporter.  Then I can tie the two together to do most of my work in the very well supported HTML Jasmine debugging environment.
  • (Alternatively, I could modify my project to download a Firefox Aurora build… but where’s the fun in that?  It would be quick, though.  The reason I don’t like this is Mac development becomes more painful:  Mac binaries are in .dmg images.  Installation, maintenance overhead, no thanks.  The Gecko SDK approach feels better for this.)
  • For some reason, the stack trace blocks for test failures are really thin vertically; a little CSS should fix it.
  • I could also use some code and developer usability reviews.

However, I think having a Python build script pull a prebuilt SDK and assemble my JavaScript modules for me is a winner.  It’ll save me valuable hours I’d otherwise spend building Mozilla code.  Once I get a stable development environment, I’m going to clone the repository as-is and maintain it separately for anyone who wants a pure Jasmine+XULRunner+XPCShell environment to start with.

If you can spare a few hours to bring me these last few bits, write me a comment.  I think the Mozilla community at large could really use this.

Check-out and build instructions:

hg clone verbosio-templates
cd verbosio-templates
python --update-sdk
# Wait a few minutes for it to get XULRunner; it's a one-time cost
python --test-xul
# XULRunner will open in another window, but it will block the python script from exiting.

Insane in the membrane!

(I’ve been wanting to use this blog title for months. Insane in the brain!)

Over the last year, I’ve been experimenting with JavaScript proxies and the concept of a “membrane”. (Tom van Cutsem has a nice introductory write-up.) The idea of truly private properties in JavaScript is just so compelling…

… and such a waste of my time.

The hard truth is that I’ve spent so much time chasing this ideal that I forgot the API I’m working with is unstable as all hell, and sooner or later it is going to change. Given that, and given that my planned code design now has a very fragile choke-point, I’m going to bite the bullet and admit that building a membrane now – when I have only pieces of the design that’s going to use it – was a pretty bad idea.

What I should’ve done – and what I will now do – is to create a stub membrane with nothing more than “forwarding handlers” between the code I’m prototyping, and the code which exercises it. Most critically, I want to turn that membrane off when I’m doing development. Later, when I need a real membrane, I’ll have a single isolated place to start with. When the API does change, it’ll be in one place to fix.
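As a sketch of what I mean – using modern Proxy semantics rather than the Harmony-era API I was fighting with, and with all names invented for illustration – a stub membrane of nothing but forwarding handlers, with a development off-switch, could be as small as this:

```javascript
// Illustrative stub membrane: pure forwarding, with a switch to bypass it.
const MEMBRANE_ENABLED = true;

function wrap(target) {
  if (!MEMBRANE_ENABLED)
    return target; // membrane off during development: hand out the real object

  // An empty handler forwards every trap straight to the target.  When a
  // real membrane is needed later, this is the single, isolated place to
  // grow it – and the single place to fix when the Proxy API changes.
  return new Proxy(target, {});
}
```

With the flag on, callers see a proxy that behaves exactly like the target; with it off, they see the target itself, so the membrane costs nothing while prototyping.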

I have good weeks, and I have bad weeks. Oh, well.

When did SourceForge become so nice?

Many years ago, I tried being a developer on SourceForge. It was thoroughly unpleasant, and I couldn’t figure out how to do it.  Last night, I tried again, and oh, my goodness, was it nice:

  • An extremely friendly web interface for admins
  • Multiple repositories at the drop of a hat for one project
  • SSH was a breeze
  • OpenID support
  • A very simple bug tracking system

Mercurial’s ability to push from one repo to another hosted somewhere else made life a little easier, too, and the convert extension meant I could bring over the old CVS-based checkins pretty easily.

Considering the number of times I’ve had to restart, and the fact I’m still going it alone, this is almost perfect. I chose Google Code for DimensionalMap at the time because I was in a hurry. I chose Mozdev for Verbosio because it seemed like a good idea at the time. SourceForge beats both of them for me, for all of the above reasons.

I am a happy developer.

DimensionalMap 1.0a1 Release

DimensionalMap project and 1.0a1 Release

For the last several months, I’ve been working on a new idea.  ECMAScript Harmony introduces the concept of WeakMaps – hashtables built directly into the JavaScript language.  It’s a simple key-value hashtable.  I went a little further: key-key-key-key…-key-value.

Specifically,  I took the WeakMap API and redefined it for a multi-dimensional hashtable.  I think my Concepts wiki page has a pretty good explanation of the problem I’m trying to solve:

Consider an HTML table:

Table 1

9 2 3
8 1 4
7 6 5

The pattern behind this table’s layout is pretty obvious to us in two dimensions: the center has 1, and as you move in a spiral, you increment the value assigned to each cell. But if you didn’t know the pattern, or were simply storing data (as this article does), how would you do it?

The typical answer is to define a two-dimensional array – more specifically, an array of arrays. This is in fact precisely what HTML does. Observe the markup for the above table:

<table border="1" style="width: 200px">
  <caption>Table 1</caption>
  <tr><td>9</td><td>2</td><td>3</td></tr>
  <tr><td>8</td><td>1</td><td>4</td></tr>
  <tr><td>7</td><td>6</td><td>5</td></tr>
</table>

This long string of characters serializes a two-dimensional array as an HTML table. This works fine when I need to store only two dimensions of data. But where might I put the number 10, if it fits in neither an adjacent row nor an adjacent column? An adjacent floor, perhaps – a third dimension?

There’s a key assumption here: that our data structure already has all the dimensions it will need. To add another dimension – another degree of freedom – you’d have to rewrite the data structure entirely. An HTML table is clearly not capable of handling three dimensions, nor do I mean to suggest it should be. Here, it’s just an example.

The DimensionalMap library is supposed to give its users the ability to store data in its space, validate the keys (coordinates) being passed in, and add new dimensions to its space.
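DimensionalMap’s real API lives in the repository; purely as an illustration of the key-key-value idea (every name below is made up, and this fixed two-dimensional version skips the validation and add-a-dimension features described above), a map built on nested Maps might look like:

```javascript
// Illustrative only – not DimensionalMap's actual API.  A two-dimensional
// key-key-value store: each coordinate is a key along one dimension.
class GridMap {
  constructor() {
    this.rows = new Map(); // rowKey -> (columnKey -> value)
  }
  set(row, column, value) {
    if (!this.rows.has(row))
      this.rows.set(row, new Map());
    this.rows.get(row).set(column, value);
  }
  get(row, column) {
    const columns = this.rows.get(row);
    return columns ? columns.get(column) : undefined;
  }
  has(row, column) {
    const columns = this.rows.get(row);
    return Boolean(columns && columns.has(column));
  }
}

// A few cells of the spiral table from above, stored coordinate by coordinate:
const table = new GridMap();
table.set(1, 1, 1); // the center cell
table.set(0, 0, 9);
table.set(0, 1, 2);
```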

I also wrote a quick-and-dirty mockup of the Document Object Model to test DimensionalMap against.  I found four specific uses for DimensionalMap in the mockup:

  • Supporting namespaced DOM attributes (xlink, anyone?)
  • Supporting undo and redo operations
  • Reverting all “uncommitted” changes when an exception is thrown
  • Shadow or anonymous content hidden from the mainstream DOM

This is an “alpha” release for two reasons:

  1. DimensionalMap depends on the Map and Proxy features of ECMAScript Harmony, which only Mozilla “Aurora” and Google Chrome “Dev” builds have.
  2. None of this code or documentation has been reviewed by anyone yet.  I’m posting now because it’s time to ask for those reviews.  Certainly I want to make it useful for others, not just me.

I’ve spent several months working on this in my spare time, as infrastructure for my Verbosio project.  (It’s still not dead yet!)  I chose the mockups test deliberately, to see what challenges I would face in building my XML templates markup model – which I’ll be happy to explain to anyone interested, but it still doesn’t work.

Unlike other works of mine in the Mozilla community, this one is entirely web-safe, using – to the best of my knowledge – only ECMAScript- and Harmony-compliant code.

As always, your feedback is most welcome!

It’s been a really good week.

Seven days ago, Jason Orendorff checked in a new feature to the JavaScript engine: the Map constructor.  His timing could not have been better for me, as I was just finishing up a critical part of my DimensionalMap project – getting all tests passing with WeakMaps for object keys.  (DimensionalMap is a JavaScript-based hashtable in two or more dimensions.)  It didn’t take me long to create a “nativemap” branch and convert my code to work with keys of all types.
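To show why the Map landing mattered here: WeakMap keys must be objects, while Map accepts keys of any type – which is exactly the “keys of all types” conversion above. For example:

```javascript
// WeakMap keys must be objects; primitive keys throw a TypeError.
const weak = new WeakMap();
const objKey = {};
weak.set(objKey, "ok");        // fine: object key

let weakMapRejectsPrimitives = false;
try {
  weak.set("name", "oops");    // TypeError: primitive used as weak map key
} catch (e) {
  weakMapRejectsPrimitives = true;
}

// Map accepts keys of any type: objects, strings, numbers, ...
const map = new Map();
map.set(objKey, 1);
map.set("name", 2);
map.set(42, 3);
```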

That alone would have been cause to celebrate.  I figure Jason’s work here shaved a couple of months off of this project.

However, thanks to Olli Pettay, Jonas Sicking, Ben Bucksch, and Dão Gottwald, among others, my patch for adding timeout support to XMLHttpRequest landed on Thursday.  Timeout support in DOM Workers will have to wait a bit longer, as I’m still a little busy, and workers require special JSAPI incantations which I have not yet mastered.  Help wanted!

It feels nice to see both of these happen.  I haven’t been able to contribute much code to Mozilla lately, so anytime I can get something meaningful in, it’s a good week.

A little more fastness, please? Maybe?

Over the last few years, there’s been a big movement on making Mozilla code really, really fast.  Which is wonderful.  A fair bit of this has been in “quick stubs” – specifically, optimizing how JavaScript (aka JS) code calls into C++ code.  For instance, the HTML input element has a quick stub for the value property, bypassing much of the XPConnect gobbledygook.

This post is about the reverse:  frequent calls from C++ code to JS code.  Calls like that don’t have a direct path; they go through XPConnect.  This is understandable – JS code can do pretty much anything under the sun.  Plus, there aren’t many areas where C++ calls on JS.  The most notable spot is DOM event listeners, but again, they can do anything.

On the other hand, there are specific interfaces where the JS author is supposed to do only very simple things – or where the first few actions taken are validation (“does this event have a keyCode I care about?”).  The NodeFilter interface from DOM Traversal comes to mind.  It’s supposed to check a node’s properties and return a simple value.  A NodeFilter you pass into a TreeWalker or NodeIterator will be called for any node of the specified node types.

This is the kind of thing I look at and think, “Why don’t we get XPConnect out of the way, by having native code do the repetitive work?”  JavaScript is a glue language, where you can connect components together.  So we could use JavaScript to assemble a C++-based NodeFilter, which then gets passed into the native TreeWalker.  The purpose of this “intermediate filter” is simply to call on the JS code less frequently.
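To illustrate the idea in plain JavaScript – and this is only an analogy, since the real proposal would put the pre-check in C++, behind XPConnect – the point is that a cheap filter runs on every call, and the expensive script callback runs only when the filter passes:

```javascript
// Plain-JS analogy of the "intermediate filter" idea: precheck stands in
// for cheap native code; callback stands in for the expensive crossing
// into script via XPConnect.
function makeFilteredListener(precheck, callback) {
  return function (event) {
    if (precheck(event))   // cheap check, run on every call
      callback(event);     // expensive callback, run only when it matters
  };
}

// Example: only care about the Enter key (keyCode 13).
let callbackRuns = 0;
const listener = makeFilteredListener(
  (event) => event.keyCode === 13,
  () => { callbackRuns++; }
);

[13, 65, 66, 13].forEach((keyCode) => listener({ keyCode }));
// callbackRuns is now 2: four calls reached the filter, two reached the callback
```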

Now, I admit, node filters aren’t the most compelling case for pre-assembly.  (I have my reasons for wanting that, but they’re not common cases on the Web, I suspect.)  An event listener, though, would be more compelling.  I’d wager a native ProgressEvent listener, filtering on the readyState property before possibly executing a script-based callback, would be nice.  So would a native keypress event listener, based on keystrokes to a form control.

There’s even a bit of precedent in the Mozilla Firefox code:  the “browser status filter”.  I admit this poorly-documented web progress listener has caused me pain in the past, as it filters quite a lot… but that’s the point:  it hides a lot of unnecessary cruft from JavaScript code that frankly doesn’t care!  I also think JavaScript typed arrays are a point of comparison.

I think a few intermediate filter classes like this could have a measurable impact (ok, maybe small) on frequent calls from C++ to JS.  I don’t know if they would… but I think in the right places they could.  The major downside I can see is to readability of the code.

Opinions welcome!