All posts by ajvincent

A little more fastness, please? Maybe?

Over the last few years, there’s been a big push toward making Mozilla code really, really fast.  Which is wonderful.  A fair bit of this work has gone into “quick stubs” – specifically, optimizing how JavaScript (aka JS) code calls into C++ code.  For instance, the HTML input element has a quick stub for the value property, bypassing much of the XPConnect gobbledygook.

This post is about the reverse:  frequent calls from C++ code to JS code.  Calls like that have no direct path; they go through XPConnect.  This is understandable – JS code can do pretty much anything under the sun.  Plus, there aren’t many areas where C++ calls on JS.  The most notable spot is DOM event listeners, but again, they can do anything.

On the other hand, there are specific interfaces where the JS author is supposed to do only very simple things – or where the first few actions taken are validation (“does this event have a keyCode I care about?”).  The NodeFilter interface from DOM Traversal comes to mind.  It’s supposed to check a node’s properties and return a simple value.  A NodeFilter you pass into a TreeWalker or NodeIterator will be called for any node of the specified node types.
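
For concreteness, here’s what a typical script-side NodeFilter looks like under the standard DOM Traversal API – note that every nextNode() call may cross from C++ into JS:

// A script-defined NodeFilter: potentially called once per candidate node.
var filter = {
  acceptNode: function(node) {
    return node.hasAttribute("data-interesting") ?
      NodeFilter.FILTER_ACCEPT : NodeFilter.FILTER_SKIP;
  }
};
var walker = document.createTreeWalker(
  document.body,
  NodeFilter.SHOW_ELEMENT, // only element nodes reach the filter
  filter,
  false // entityReferenceExpansion, the historical fourth argument
);
while (walker.nextNode()) {
  // each iteration above went through XPConnect to run acceptNode
}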

This is the kind of thing I look at and think, “Why don’t we get XPConnect out of the way, by having native code do the repetitive work?”  JavaScript is a glue language, where you can connect components together.  So we could use JavaScript to assemble a C++-based NodeFilter, which then gets passed into the native TreeWalker.  The purpose of this “intermediate filter” is simply to call on the JS code less frequently.

Now, I admit, node filters aren’t the most compelling case for pre-assembly.  (I have my reasons for wanting that, but they’re not common cases on the Web, I suspect.)  An event listener, though, would be more compelling.  I’d wager a native ProgressEvent listener, filtering on the readyState property before possibly executing a script-based callback, would be nice.  So would a native keypress event listener that filters keystrokes to a form control.
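
None of this exists today; purely as a sketch of the idea, the JS-facing side of such an intermediate filter might look like this (every name below is invented for illustration):

// Hypothetical API: the property check happens in C++, and JS only
// runs for events that survive the native filter.
var listener = new NativePropertyFilterListener({
  property: "readyState", // checked natively, no XPConnect round-trip
  equals: 4
});
listener.callback = function(event) {
  processCompletedRequest(event); // script runs far less often
};
request.addEventListener("readystatechange", listener, false);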

There’s even a bit of precedent in the Mozilla Firefox code:  the “browser status filter”.  I admit this poorly-documented web progress listener has caused me pain in the past, as it filters quite a lot… but that’s the point:  it hides a lot of unnecessary cruft from JavaScript code that frankly doesn’t care!  I also think JavaScript typed arrays are a point of comparison.

I think a few intermediate filter classes like this could have a measurable impact (ok, maybe small) on frequent calls from C++ to JS.  I don’t know if they would… but I think in the right places they could.  The major downside I can see is to readability of the code.

Opinions welcome!

“Before you load this page, do that…”

I’m still working on my prototype XML editor, Verbosio, through yet another infrastructure project.  This time, it’s through a “multi-dimensional hashtable” concept, bootstrapped from ECMAScript Harmony WeakMap objects.  You can read about my “DimensionalMap” concept and plan here.  That’s not why I’m writing, though.

When I’m working on a typical project, I’m usually alternating between three windows:

  • the browser
  • the editor
  • a shell or command prompt

That’s a little painful.  For DimensionalMap, I’m deliberately keeping the various components in different files, and assembling them together via a Makefile.  This gave me an idea:  what if a special URL could re-run make and force these files to rebuild?

Two pieces make this work.  First, there’s the Makefile, which launches Firefox (given a path to the browser) with a custom profile.  Second, there’s a custom protocol handler which I call “jasmine:”, after the very nice Jasmine test harness, which DimensionalMap uses.  This protocol handler’s sole purpose is to call make build before loading the Jasmine test page.
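
The post doesn’t include the handler’s source, but as a rough sketch (Gecko 2-era XPCOM; the paths, UUID, and test-page location below are placeholders, and component registration is omitted), the core of a “jasmine:” handler might look like:

const { classes: Cc, interfaces: Ci, utils: Cu } = Components;
Cu.import("resource://gre/modules/XPCOMUtils.jsm");

function JasmineProtocolHandler() {}
JasmineProtocolHandler.prototype = {
  scheme: "jasmine",
  defaultPort: -1,
  protocolFlags: Ci.nsIProtocolHandler.URI_LOADABLE_BY_ANYONE,
  allowPort: function() { return false; },

  newURI: function(spec, charset, baseURI) {
    var uri = Cc["@mozilla.org/network/simple-uri;1"].createInstance(Ci.nsIURI);
    uri.spec = spec;
    return uri;
  },

  newChannel: function(uri) {
    // Block on `make build` before handing back the Jasmine test page.
    var make = Cc["@mozilla.org/file/local;1"].createInstance(Ci.nsILocalFile);
    make.initWithPath("/usr/bin/make"); // placeholder path
    var process = Cc["@mozilla.org/process/util;1"].createInstance(Ci.nsIProcess);
    process.init(make);
    process.run(true, ["-C", "/path/to/DimensionalMap", "build"], 3); // blocking

    var io = Cc["@mozilla.org/network/io-service;1"].getService(Ci.nsIIOService);
    return io.newChannel("file:///path/to/SpecRunner.html", null, null); // placeholder
  },

  classID: Components.ID("{00000000-0000-4000-8000-000000000000}"), // placeholder
  QueryInterface: XPCOMUtils.generateQI([Ci.nsIProtocolHandler])
};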

Effectively, that reduces me to two windows to work with:

  • the browser
  • the editor

It makes a difference.  Now, if I knew how to incorporate Komodo Edit or Bespin/Ace/Cloud9 as a local file editor extension into the custom profile, it’d be down to one window.  That would be very, very nice indeed.

DimensionalMap is under MPL/LGPL/GPL tri-license.  Comments, reviews and patches are quite welcome.

Whither Amaya?

It’s been over a year since the W3C released a new version of its Amaya prototype XHTML editor.  This is not good.  I’d go so far as to call it ++ungood.  With apologies to Daniel Glazman and the many people who’ve worked on Composer-based tools, Amaya has long been my preferred tool for writing HTML.  It’s had support for MathML and SVG forever, and its UI provides a lot of power without getting in the way.  Oh, and it works on Windows, Linux and Macintosh… kind of like another code base I also love…

I realize Mozilla’s probably in no position to fund Amaya development, and I don’t believe they should.  I’m posting this to call attention to this project I find extremely useful, and inspirational (for Verbosio, my XML editor project).  Maybe someone out there among the W3C members or the global software community can help out a bit.

Three operating systems, three purposes

I’ve stated before that I work with Mozilla code on three different operating systems.  My desktop dual-boots Fedora 15 and Windows 7.  My MacBook runs Mac OS X 10.6.  Between them, these give me coverage on all three major platforms.

That said, I realized a few minutes ago each system serves a different purpose for me:

  • With Fedora, I get fast compile times, especially incremental builds, and fast test execution.
  • With Windows, I get a really good free debugger with Visual Studio Express.
  • With Mac, I get portability:  I can write my code on the go.

It’s actually a pretty good arrangement, I think.  Is anyone else set up this way?

Why I’m attending college: trying to explain my thinking

A couple months ago, I started taking classes at Chabot College.  One reason is to get the degree that shows what I’m talking about.  But there’s another reason, even more fundamental:  filling in the gaps in my own knowledge.

Case in point, I’ve been thinking long and hard about building a JavaScript-based DOM.  The problem I’m facing is that to do everything I want – shadow content, undo/redo transactions, etc. – I need a specialized data structure.  Specifically, I need a data structure much like a multi-dimensional array or hashtable.

(The first dimension would be a set of object keys – the DOM nodes.  Another dimension represents undo history, like a timeline.  A third dimension could be the shadow content.  Other dimensions might exist for building an individual transaction until it is completed, or otherwise creating a “workspace” for experimenting.)

About 24 hours ago, I had an idea, related to my multi-part keys for hashtables implementation.  Typically, in designing a data structure class, I think about how to give each piece of data an address first, then I implement ways to store data by that address.  The idea is to flip that approach around:  to define an API that lets the user place an object with an address, and then add new dimensions to the address space as the user needs them.
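
Since code may say it better: here’s roughly how that flipped API might read (DimensionalMap is the thing being designed here, so every name and signature below is hypothetical):

// Hypothetical usage: dimensions get added as needed, and an "address"
// is just an object naming one coordinate per dimension.
var node = document.createElement("span"); // any object can be a coordinate
var space = new DimensionalMap(["node"]); // start one-dimensional
space.set({ node: node }, "original data");

space.addDimension("time"); // later: an undo-history axis
space.set({ node: node, time: 1 }, "data after one edit");

space.addDimension("shadow"); // later still: shadow content
space.get({ node: node, time: 1 }); // returns "data after one edit"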

If I’m not making any sense, that’s fine.  In fact, that’s why I’m writing this blog entry.  I’m having lots of trouble articulating last night’s idea formally.  I can see it crystal-clear, and I can write JavaScript to implement it, but I don’t yet have the language to describe it in English.  I spoke briefly with my Calculus instructor tonight, to figure out what area of mathematics my idea might fall into, and he suggested linear algebra (that my idea relates to vectors in some way).  I can’t take that class until I complete Math 1 and Math 2 (both are Calculus classes; I’m enrolled in Math 1).  At Chabot, the linear algebra class is Math 6.

This underlines why I’m going to college after at least six years as a professional software developer.  It’s a gap in my knowledge.

Some people, like me, enter this field with a talent built upon years and years of tinkering, of experimenting and of thinking.  The software industry can really take a person like that quite a ways.  Others enter the industry having taken computer programming courses – and that’s really hit or miss for an inexperienced person.  (No offense intended to the software engineers who started through college!)

I wonder if taking up college classes after you’ve been in the industry a while is actually the best approach of all:  continuing the education, helping clarify what you’re working on and expanding the imagination with new possibilities.

I wonder how many current software engineers have decided to go back to college after being in the field a while, to push themselves and their capabilities even further.

Jarring web content?

Many years ago, the Mozilla Application Suite implemented support for a jar: protocol.  This protocol lets you refer to individual files inside a ZIP-compressed archive served as application/jar-archive or application/x-jar.  A typical URL would look like jar:https://alexvincent.us/somepath/target.jar!/test.html.  The idea is that you download one file, target.jar in this case, and your browser extracts the rest of your files from that archive.

Today, Mozilla Firefox uses it as part of how it stores its user interface (omni.jar these days) on the local file system.

It’s a pretty interesting concept.  There’s been a large push in recent years towards JavaScript minification, to reduce download sizes.  This is especially true as scripts have recently ballooned to hundreds of kilobytes.  If you have a lot of JS, HTML, CSS, XML, and other plain-text files that rarely change, it might be worth putting them into a single JAR for users to fetch – you get additional compression on top of your minification, and there’s one HTTP request instead of several.
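
For instance (illustrative URL only), a page could pull a library out of one served archive like this:

// One HTTP request fetched theater.jar; the browser extracts
// individual files from it on demand (Firefox-only – see below).
var script = document.createElement("script");
script.src = "jar:https://alexvincent.us/demos/theater.jar!/lib/all.min.js";
document.head.appendChild(script);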

With one major catch, though.

As far as I can tell, Mozilla Firefox is the only major browser to support the jar: protocol.  This is not news.  Even Google Chrome has not implemented support for the jar: protocol.

Naturally, Firefox’s history with the jar: protocol hasn’t been perfect.  They did fix the one publicly documented issue I found, and wrote about it on MDN.  I’m not aware of any other issues, so in theory it should be safe to use.

I’m thinking about using this capability on my website.  For the “Visualizing the DOM” series I’m developing, I’m using a lot of JS files, including some library files.  Ogg Vorbis doesn’t compress well (on a nine-minute audio file I saw about a 1% benefit), so I won’t include the audio.  Alas, if I lock out every major browser except Firefox, I’m not going to be too happy.  The good news is that the HTML file which loads my “theater” application is PHP-generated – so based on the user agent, I can probably send Firefox users a JAR archive and everyone else the uncompressed files directly.

Comments welcome!  Particularly if you think this is a good or a bad idea.

Making a hash of things, part 2: Multi-part keys

Last year, I wrote about hash tables for JavaScript. This was before I knew about the plans to implement a native hash table for JavaScript called WeakMap in Mozilla Firefox.

WeakMaps are awesome.  They completely eliminate the need for that hashStringKey hack I came up with last year.  When you combine them with JavaScript proxies, you get something even more awesome called a membrane.  (I didn’t make up the name.)  Through the membrane you can ensure a proxy either returns a primitive value (number, string, boolean, etc.) or another proxy.  In fact, if two underlying “native” objects A and B refer to each other, the membrane can ensure their proxies pA and pB refer not to A or B, but to each other:  pA.b == pB, and pB.a == pA.
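
To make that concrete, here is a minimal membrane sketch.  It uses today’s Proxy syntax (the Harmony prototype of 2011 used Proxy.create instead), and it only handles property gets:

function makeMembrane() {
  var wetToDry = new WeakMap(); // originals -> their proxies

  function wrap(value) {
    if (Object(value) !== value)
      return value; // primitives pass straight through
    if (wetToDry.has(value))
      return wetToDry.get(value);
    var proxy = new Proxy(value, {
      get: function(target, name) {
        return wrap(target[name]); // never leak an unwrapped object
      }
    });
    wetToDry.set(value, proxy);
    return proxy;
  }
  return wrap;
}

// If A and B refer to each other, so do their proxies:
var A = {}, B = { a: A };
A.b = B;
var wrap = makeMembrane();
var pA = wrap(A), pB = wrap(B);
pA.b === pB; // true
pB.a === pA; // true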

A couple things you can’t do with WeakMap

First, keys cannot be primitive values.  You can’t say:

var map = new WeakMap;
map.set("attributes", []); // throws exception because "attributes" isn't an object.

It just doesn’t work.  I asked Jason Orendorff about it, and the reason has to do with garbage collection.  Simply put, weak maps hold references to their keys loosely:  when no one else holds the key, the JavaScript engine can safely erase the key and the value.  With objects, that’s easy:  they’re unique.  When you copy one, you get a distinct object that isn’t equal to the original.  With primitives like a simple string, it’s not so easy:  every reference to the original string can disappear from memory, yet the very same string can reappear later, hard-coded somewhere else – so the weak map would have to remember the entry forever.  WeakMap currently deals with the problem by forbidding primitive keys.

Second, it’s one key to one value.  That’s what hash tables are, and what they generally should be.  But there’s really no concept of a two-part key, as there is in public key encryption.  Nor a three-part key, nor an n-part key.  So there’s no way to say that any two keys are related to each other.  Think of a two-dimensional grid: each cell has a row and a column.  The row and the column combine to form a key where you can look up a value for the cell.

My very experimental solutions

For the first problem, I implemented a brute-force primitive-to-singleton-object class, PrimitiveKeySet. It creates an object for every primitive it sees (thankfully, you have to pass it a primitive first), and returns that object. I also implemented a WeakMapWithPrimitives function, which leverages PrimitiveKeySet and wraps around the WeakMap API.  It doesn’t solve the memory leakage problem – nothing can, really – but it at least lets me use primitive keys.  I also tried to be a little smart about it:  when you tell it to delete a primitive key, it really does.
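
The real code is linked above; in miniature, the idea looks something like this (implementation details assumed, not copied from the original):

// Map each primitive to a singleton stand-in object that CAN be a WeakMap key.
function PrimitiveKeySet() {
  this._keys = Object.create(null);
}
PrimitiveKeySet.prototype.getKey = function(primitive) {
  var name = typeof primitive + ":" + String(primitive);
  if (!(name in this._keys))
    this._keys[name] = { value: primitive }; // the singleton stand-in
  return this._keys[name];
};
PrimitiveKeySet.prototype.deleteKey = function(primitive) {
  // deleting a primitive key really does release the stand-in object
  delete this._keys[typeof primitive + ":" + String(primitive)];
};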

For the second problem, I did a little bootstrapping, using a tree analogy.  I started with a WeakMap (the “root map”).  I used the first part of the key to assign another WeakMap (call this a “branch map”) as a value stored in the root map.  I would use the second part of the key to assign another WeakMap to the branch map.  I repeated this over and over again until I reached the last part of the key and the last WeakMap I needed (the “leaf map”, for lack of a better term). At that point I assigned the user’s value to the last key part, on the leaf map.
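
In miniature, that bootstrapping looks like this.  It is a sketch only:  modern Map objects stand in for the WeakMap-plus-PrimitiveKeySet machinery the real code needs, since WeakMap alone can’t hold the primitive key parts used below.

function CompositeKeyMap(fields) {
  this._fields = fields;
  this._root = new Map(); // the "root map"
}
CompositeKeyMap.prototype.set = function(key, value) {
  var map = this._root, last = this._fields.length - 1;
  for (var i = 0; i < last; i++) {
    var part = key[this._fields[i]];
    if (!map.has(part))
      map.set(part, new Map()); // a "branch map"
    map = map.get(part);
  }
  map.set(key[this._fields[last]], value); // the "leaf map" holds the value
};
CompositeKeyMap.prototype.get = function(key) {
  var map = this._root;
  for (var i = 0; i < this._fields.length; i++) {
    if (map === undefined)
      return undefined;
    map = map.get(key[this._fields[i]]);
  }
  return map;
};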

I could easily say, then:

var map = new CompositeKeyMap(["row", "column"]);
map.set({
  row: 3,
  column: 4
}, "Row 3, Column 4");

I took it one step further, and added a shortcut, the PartialKey.  With a PartialKey, I wouldn’t have to specify every field, every time:

var map = new CompositeKeyMap(["row", "column", "floor"]);
var floorKey = map.getPartialKey({floor: 6});
floorKey.set({
  row: 3,
  column: 4
}, "Row 3, Column 4");
map.get({row: 3, column: 4, floor: 6}) // returns "Row 3, Column 4"

You can see all of these on my main web site under libraries/CompositeKeyMap, along with a Jasmine test suite.

Should you use these?

If you don’t intend to require Mozilla Firefox 6+, probably not.  This code won’t work without it, and there are no fallbacks.

If you want to use partial keys, I would think the CompositeKeyMap is very useful indeed.  I’d recommend one key you specify be for objects only, at least:  otherwise you might as well just use an ordinary JavaScript object as your map: {}.  Whether it should be the first or the last for memory efficiency, I can’t tell you.

I don’t see much use for WeakMapWithPrimitives, to be honest.  I did that as a proof-of-concept only, a stepping stone to the CompositeKeyMap.

Thanks for reading – and feel free to play with it or review the code as Mozilla Firefox 6 launches next week.  Comments welcome!

Welcome to my new digital home. Watch out for falling bytes.

Well, it was time.  I finally decided to launch my own web site, again.  Truth be told, in all the years I had a blog, I never had a website – I didn't feel I really needed one.  Many thanks to the MozillaZine crew for hosting me all these years.

So why'd I move?  There are a few reasons:

  • I needed a home for JavaScript demos, and free blogs just don't cut it for that.  (Which is why I didn't end up at WordPress.com.)
  • I've been playing with WebGL a bit, and HTML 5 Audio.  Fifteen seconds of Ogg Vorbis audio is around 200 KB, even for just speech! When I want to do five minute shows, that's a non-starter on a free blog.
  • To be honest, I've been a bit of a cheapskate, too. The free ride had to end sometime.

I've also gone ahead and purchased HTTPS hosting. Why? Admittedly, it's overkill. But at the same time, I want you to know that the files on this site are coming from me. In particular, I want people who don't trust the Web (NoScript, anyone?) to trust me. This really is just a personal site for me, to showcase my public works of Web technology.

Yes, Verbosio, my prototype XML editor, is the main focus of those works. I'm considering moving it over here, as it never really was a good fit with MozDevGroup's aim of supporting extensions. (Mine's a full-fledged vaporware application.)

Speaking of WebGL, I've got a cheap prototype demonstration of circles in 3-D space going. Don't ask me why the circles are a little flattened. I'm hoping someone out there can tell me why. It's all leading towards a series of presentations I'm calling “Visualizing the DOM”, and the first installment's coming soon (probably in a month or less).

The college journey begins…

I’m taking college classes at Chabot College, for an Associate’s in Science degree (major: Computer Science, emphasis in Mathematics). My online classes start today. 🙂

Most people I’ve worked with were surprised when I told them I have no college education whatsoever. Oh, sure, I attended an A-School in the U.S. Navy to be a Journalist, but I don’t think that counts. I’m 33 years old, and I spent the last decade building up my resume. Now, it’s time.

Chabot recommends a person working full-time take no more than 6 units per semester. I’m starting with Real Estate Principles (3 units) and Sports Officiating (2 units) for the summer, then Math 1 (Calculus, 5 units) and Volleyball Beginners (1 unit) in the fall. If I think I can handle more, I’ll take more. If I think it’s too much, I’ll take less.

I put this off for a long, long time. I remember an incident shortly after my book was published in 2002: I was attending OSCON to promote the book. An employee of Amazon.com talked to me and said he wanted to hire me. I was more than willing. Then, via e-mail, he saw my resume had no degree on it, and he said he could not hire me. I’ve always remembered that, with sadness and a hint of bitterness. I was qualified to do the work then (at least, I thought so), and I would have enjoyed it. Alas, it was not to be. One of the reasons I’m going to college is to correct that hole in my resume, to show that I know what the hell I’m doing.

What’s this mean for my other projects? Well, work still takes first priority, even over college. But my pet projects like Verbosio (the prototype extensible XML editor) are probably going to be back-burnered again.

I’m not sure what I can do about Venkman. I’ve had five different people write me on my blog expressing interest in working on my proposed rewrite, and I think that’s great. I’d still be willing to mentor, but it needs a leader who’s willing to get his or her hands dirty and dive in. Last time I cited a need for dockable XUL panels. We need someone to step up, to create visual “mocks” of how they perceive this. Then we need to write some code.

Finally, I know I haven’t been very active in the Mozilla community lately. College means I’ll be even less active, and that does make me sad. I wish my own pet project was ready for others to play in, but it’s not. I wish I had the time to contribute to ongoing DOM Core or developer tools work, but I don’t.

I’ve said it before, and I’ll say it again: this community and this platform have given me a career, and I am eternally grateful for that. If I ever do work for Mozilla, maybe my business card should say “Lizard bridge builder” or something like that. Because you guys have built bridges for me so far, and I’m not giving up yet.

Venkman thoughts 2011, part 2

First of all, thanks to everyone who’s responded so far to yesterday’s statement about Venkman dying. I’ve had a few thoughts and a few communications since then, and I thought I’d try to answer them here.

Venkman versus Firebug

The most important question I’ve gotten so far is “Well, what does Venkman have that Firebug doesn’t?” As I said in comments, I don’t know, because I have almost never used Firebug. Apparently, several years ago, a few people compared the two (search results courtesy of Google).

Since my blog isn’t really a good place to collect this data, I figured I’d start a comparison wiki page where we can collect the features of each. Firebug and Venkman fans, please, help me out with some facts – log in and write them down!

On another note, the question itself bothers me a bit. Eight years ago, you could ask “What does Macintosh do that Windows doesn’t?” We were in a monoculture back then (and still are). You could ask “What does Mozilla do that Internet Explorer doesn’t?” about eight years ago, too. Again, a monoculture existed then.

I agree, Firebug is a very impressive tool, even if I haven’t used it very much. (Something about it must be good, if so many people use and support it regularly.) Also, remember Firebug itself came years after Venkman… and JavaScript debugging was a monoculture then too. Firebug had a compelling answer then. Venkman, having languished in the shadows for years, doesn’t really have a compelling answer now, but that’s beside the point.

When you have at least two complete, healthy projects using the same interfaces, you’re probably doing something right. The W3C works like this: few specifications reach Proposed Recommendation status without two independent complete implementations. The spec may have bugs, and the implementations certainly will… but it provides a level of confidence that the spec is usable.

Now, someone might write a JS debugger UI independent of both Firebug and Venkman, using jsd2… and that would be great. Still, the question bothers me, even though I thought I answered it above. I can’t put my finger on exactly what bothers me about it right now, but it’s a gut feeling.

The Venkman community: diehards

The second thing I notice from replies so far is that there are a few enthusiasts still out there. 🙂 It’s nice to know, and it’s appreciated. No one’s committed to working on a rewrite yet (not surprising – it’s a huge task). I certainly haven’t figured out high-level details of a rewrite project yet. My goal yesterday was to start the conversation, but to move on, I need somewhere I and others can at least white-board a bit.

I don’t even have a viable code name for the rewrite yet. (The best I’ve come up with so far is “Spengler”.) I’m open to suggestions – maybe WikiMo, maybe somewhere else.

The only thing I know we need and don’t have right now is a good “dockable XUL panels” implementation. Neil Deakin filed bug 554926 for this. This is not what I would call a “good first bug” by any means, but I suspect a lot of editor applications would love having this. (Komodo Edit, my Verbosio project, BlueGriffon, etc.) I envision using XUL panel elements to replace the floating windows Venkman currently provides. Panels in general could use a lot of help – see XUL:Panel_Improvements for details. I’m sure Neil would welcome patches.

Next steps

I don’t know yet. It’s too soon for me to call anything like a “Town Hall” for Venkman replacement efforts. I’m still trying to identify people willing to actively contribute time and talent. If it were me and Gijs alone, forget about it arriving in the next three years. We need help if it’s going to get done.