Hardware ups and downs

For many months, my custom-built WinXP desktop has been giving me blue screens of death, and more recently, random restarts. Since I can’t use it to build Mozilla anymore, I decided I might as well take it into the shop for repairs. The company that built it is closed on weekends (grrr), so I dropped it off at Fry’s Electronics for a diagnostic.

Several days later, I spotted a computer deal that was pretty hard to beat: a Gateway GT5676 computer, 64-bit AMD processor…

Continue reading Hardware ups and downs

We are now in the Gecko 1.9 end-game

We have entered an interesting phase in the development cycle, what I like to roughly call the end-game. It’s a point where sacrifices and unpleasant decisions are made.

In this particular case, it means watching a number of bugs that were marked as blocking the 1.9 release lose that distinction. The blocking1.9+ flags are, in several cases, being changed to blocking1.9-.

It’s a sad truth: not all bugs are created equal. Indeed, not all blockers are equal in severity or impact, either, and near the end of a project, drivers end up re-evaluating bugs to see what really needs to block a release. Not enough resources (human in particular), and not enough time.

I’ve seen this cycle at work, too: bugs that block a release up until a certain arbitrary point in time. That time being, “it’s time to ship something.” To me, it’s a bittersweet moment: on the one hand, very serious bugs are being taken off the must-fix list. (For a bug to get blocking1.9+ in the first place, it has to clear a high bar.) On the other hand, it signifies that what we have now is nearly ready for our customers, and developers can either spend time on what gets us to a shippable state, or on bugs that, ultimately, don’t matter as much as others. In the interest of meeting “when it’s ready”, bugs that aren’t critical to that readiness state get dropped.

I haven’t complained about any of the bugs I’ve observed getting knocked off the blocking list, because the drivers are right about one thing: Gecko 1.9, and its primary child, Mozilla Firefox 3, are in a nearly-shippable state already. We’re at the point where it’s time to accept the fantastic work done so far with some bugs unfinished. As I said, it’s a bittersweet moment: we’re at the finish line, we just need to break the tape. We’ll deal with the sore ankle some other time.

Now, I just look forward to hearing what the post-1.9 plans are for 1.9 code. There’s talk at mozdev of using Mercurial instead of Subversion for the next-generation repository. If I had a clear guarantee that 1.9.x releases would have equivalent Mercurial repositories (or at least tags) to go with them, that’d be a big plus.

Evangelism on a different level needed

I’m running Mozilla Firefox 2.0 on my Fedora 8 box. I see a link for Battlestar Galactica, Season 3, on CNN’s page near the bottom. What page do I get back when I load it? This one. EW doesn’t describe my feelings. Ewww does.

I’ve been in this business a long time, and I’ve been browsing the Internet and tinkering with computers a lot longer. So do tell: how does Firefox on Linux look that different, to a web server, from Firefox on Windows or Macintosh? The best signal you’ve got is the user-agent string, I’d think. One of the biggest goals of Firefox is that web pages look the same across platforms. More to the point, how do you suggest I upgrade my browser to a totally different operating system?

I’ve seen some stupid web-discrimination in the past (and I still do from time to time), but this is pretty near the top. There’s no excuse for that.

Ten years… and a career

Others can speak more eloquently than I can about the significance of mozilla.org’s birth. I’ll put it in much simpler terms: I owe my professional career to that event and the years that followed.

I have always been a fan of the Mozilla code base – dating all the way back to my early high school years when Netscape was appearing on the scene. Shortly after I’d finished writing my book on JavaScript, I discovered Mozilla’s user-interface had a huge JavaScript presence in it. After a few years tinkering around in the Mozilla codebase, a recruiting agency contacted me and asked if I wanted to do that for a living. To which my answer then – and now – is “absolutely, yes!”

A few years later, I’m working at Skyfire Labs, Inc., (which coincidentally appeared today in the Wall Street Journal), and I’m having the time of my life. I’m doing what I wanted to do, and I’m getting paid nicely to do it. What could possibly be better than that?

So when someone wants to throw a party to celebrate what Mozilla’s done for the past ten years – not just at the beginning – I’m there. Mozilla technology made it possible for me to earn a decent living doing what I do best. This community made it possible.

So, to everyone who’s written a line of code, filed a bug, written a testcase, figured out how to make it easier on others, or just written down what it does and how to use it… thanks.

XUL Trees and Objects: ClassTreeView

I love XUL trees. I even smoke them from time to time. But what I don’t like is trying to build a hierarchy of objects in them – even though that’s probably the best use for them.

Imagine that you want to show this tree of objects, with properties of each object horizontally, and the objects themselves laid out vertically, indented and illustrated to show which objects have which parent objects. DOM Inspector does this with DOM nodes all the time. My chrome registry viewer code does something similar for files (file systems are tree-like), and when you want to see the properties of an object, JS object inspection is usually through a tree. Even Venkman uses trees to show you functions in a file or webpage.

Still, for every different object tree I’ve come across, there’s a different view that has to be built. Usually it’s custom-built for that tree. So you’ve got two options: build your own view, from scratch, every time… or build a XUL tree DOM and let Gecko’s own tree utilities show it to you.

Believe it or not, I’ve tried both approaches… and finally decided to roll my own baseline solution. (If someone else has done this before, please let me know. It’s best to have this in a common place.) More details in the extended section.
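To make the idea concrete, here’s a sketch (in plain JavaScript, with names of my own invention – not my actual code) of the row-mapping logic such a generic view needs. A real nsITreeView implementation has many more methods (setTree, getRowProperties, and so on); this only shows how a hierarchy of objects can be flattened into the rows a <xul:tree> asks about:

```javascript
// Sketch of a generic object-hierarchy tree view. Two callbacks adapt it
// to any object tree: getChildren maps an object to its child objects,
// and getColumnValue maps (object, columnId) to display text.
class ClassTreeView {
  constructor(roots, getChildren, getColumnValue) {
    this.getChildren = getChildren;
    this.getColumnValue = getColumnValue;
    // Each visible row: { object, level, open }
    this.rows = roots.map(obj => ({ object: obj, level: 0, open: false }));
  }

  get rowCount() { return this.rows.length; }
  getLevel(index) { return this.rows[index].level; }
  isContainer(index) {
    return this.getChildren(this.rows[index].object).length > 0;
  }
  isContainerOpen(index) { return this.rows[index].open; }
  getCellText(index, columnId) {
    return this.getColumnValue(this.rows[index].object, columnId);
  }

  // Expand or collapse a row, splicing child rows in or out.
  toggleOpenState(index) {
    const row = this.rows[index];
    if (row.open) {
      // Remove every following row deeper than this one.
      let count = 0;
      while (index + 1 + count < this.rows.length &&
             this.rows[index + 1 + count].level > row.level) {
        count++;
      }
      this.rows.splice(index + 1, count);
      row.open = false;
    } else {
      const children = this.getChildren(row.object).map(obj => ({
        object: obj, level: row.level + 1, open: false
      }));
      this.rows.splice(index + 1, 0, ...children);
      row.open = true;
    }
  }
}
```

Expanding a row splices its children in just below it; collapsing removes every deeper row that follows. The column callback is what lets the same view show DOM nodes, files, or JS object properties.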

Continue reading XUL Trees and Objects: ClassTreeView

Verbosio: Coming out of hibernation

Over the last several weeks, I’ve been having this gnawing urge to restart work on Verbosio. It’s been getting stronger, to the point where I just can’t keep quiet about it: I’m getting back into it, and looking forward to completing my work on an 0.1 “proof-of-concept” XML editor.

Since I put Verbosio to sleep several months ago, I’ve had a number of thoughts:

  • I want to use the new Songbird-provided build system. I’ve played around with it a few times since it first arrived, and I’m pleased. This makes it much easier to compile a specific XULRunner-based application, using Mozilla’s own build system.
  • I’ve gotten much more comfortable working with C++ code. When I started work on Verbosio, I had a goal that said “no compiling necessary.” That was because I didn’t want to muck around too much. The new build system – and a couple of years’ experience – obsoletes that requirement, in my opinion. Besides, I still anticipate that I might have to customize Gecko a little bit. Hopefully not that much.
  • CVS would make me pay dearly for my earlier decisions. Because of the above two points, I want to rearrange Verbosio’s source code to take advantage of the build infrastructure Mozilla already provides. That means moving a lot of files and directories. In CVS, that means you lose all revision history. No thanks. Fortunately, SVN is coming, and supposedly soon. I am looking forward to it.
  • Writing to ZIP files changes the game. nsIZipWriter means that Verbosio, in its XUL demo work, could start working with jarred chrome for real. Fun times.
  • Bit rot has been surprisingly minimal. Code that I wrote many moons ago still works, at least in Gecko 1.9 beta 5. There was a regression in 1.9b4 that has since been fixed.
  • Waiting for OpenKomodo was a bit of a mistake. I figured by now they would be on the 1.9 code base, which Verbosio requires. Also, I haven’t reached “proof-of-concept”, that working model that lays the foundation for any possible merging or sharing of code. I need to move forward without them, for the time being.
  • I still have ideas to explore. In addition to this list, I’ve had a couple more ideas that might be useful. For instance, I’ve seen myself working with DOM TreeWalkers, where the NodeFilter is wholly implemented in JavaScript. Wouldn’t it be nicer, perf-wise, if you could have the first half of the filter implemented as a common C++ object (running ten times faster than JS code, thanks)?
  • I hope I can find the time and the motivation, again. It’s been hard, yes, but I’ve learned a bit about myself. These days, I can usually find the drive to work on side projects one, maybe two days a week. Yet these side projects are what keep my skills and ability to create, to innovate, at the sharpest levels. That’s when this business of software development is most fun: creating something new, useful, and radical. It also fires me up when it comes to my day job – I’ve noticed that my biggest ideas for work come to me either during or soon after a bit of heavy-duty cogitation on something Mozilla, but not Skyfire. I can’t explain it, but it works.
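That TreeWalker idea can be sketched in plain JavaScript (the function names here are my own invention, not a real Mozilla API): a cheap first-stage filter – imagine it compiled as C++ – rejects most nodes before the slower, script-defined filter ever runs.

```javascript
// The three result codes a DOM NodeFilter can return, with the
// standard DOM constant values.
const NodeFilterResults = { FILTER_ACCEPT: 1, FILTER_REJECT: 2, FILTER_SKIP: 3 };

// Stage one: a fast test on a cheap property. Pretend this predicate
// lives in compiled C++ code.
function nodeTypeFilter(acceptedType) {
  return node => (node.nodeType === acceptedType)
    ? NodeFilterResults.FILTER_ACCEPT
    : NodeFilterResults.FILTER_SKIP;
}

// Stage two: chain the native filter with an arbitrary script filter.
// The script filter never sees nodes the native stage skipped or
// rejected, so the expensive code runs on far fewer nodes.
function composeFilters(nativeFilter, scriptFilter) {
  return node => {
    const first = nativeFilter(node);
    if (first !== NodeFilterResults.FILTER_ACCEPT) return first;
    return scriptFilter(node);
  };
}
```

The composite function has the same shape as a NodeFilter’s acceptNode, so it could be handed straight to a TreeWalker; the win would come from the first stage crossing the JS/C++ boundary only once per node.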

Ultimately, I’m still not sure of where this Verbosio project will take me – or where I’ll take it. But I take heart in the fact that I still don’t see anyone doing anything remotely like what I have in mind for Verbosio. Sure, it’s hard to do (as I’ve said before), but I have a vision, and that makes the years of effort worthwhile.

Thanks for reading!

CodingHorror visits Firefox extensions

The Dark Side of Extensions

As a guy who works on Firefox code on a regular basis, and as someone who recently started reading CodingHorror again, I thought it worth pointing this post out. Jeff Atwood is usually insightful.

That doesn’t mean I agree with him, and I certainly don’t, here. I’m posting this in the hopes that someone from our Firefox community will respond. Mr. Atwood is one of those voices worth hearing and answering, in my opinion.

Mozilla Messaging: My own two bits

Robert Kaiser’s comments on SeaMonkey and Mozilla Messaging bring to mind my own thoughts. I’m not sure how well this will be received, particularly as MoMe (sorry, David, I couldn’t resist) is just getting started, and what I have in mind might be ambitious.

One thing I remember very clearly about the compose-message part of SeaMonkey – and Mozilla Thunderbird probably hasn’t changed this – is its use of a hypertext editor for composing e-mails: in fact, the same basic editor technology that SeaMonkey’s Composer and, I’d bet, Nvu also use. It’s a <xul:editor/> tag.
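For anyone who hasn’t poked at that code: embedding the editor widget in a XUL window boils down to something like this (a simplified sketch – the real compose window sets more attributes and wires up a lot of surrounding script):

```xml
<!-- Simplified sketch of embedding the XUL editor widget. -->
<window xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
  <editor id="content-frame" type="content-primary" editortype="html"
          src="about:blank" flex="1"/>
</window>
```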

This XUL editor element probably hasn’t gotten nearly as much love as the rest of the Mozilla code base over the years. Netscape had a good team going on that. MoCo, not so much. Just finding current peers for editor reviews can be difficult, and they have other things to do. (So do I, sadly.) I’m not aware of a great deal of work that’s gone on in the editor space over the last few years.

Perhaps MoMe can adopt the editor modules and bring some people aboard to work on them. I’ve long held an interest in improving the editor, just no time to work on it (I’m busy and need help, too!). A dedicated team of three to five people on the editor code alone would probably go a very long way.

Just my own two bits. Congratulations, guys!

Why should I register just to fix your site?

This is something of an open letter to companies like Yahoo!, Facebook, Meebo, MSN, anybody that requires a new registration before people can play with their products.

Why won’t you give us techies a public, testing-only login that doesn’t require we tell you our life stories?

At each of the three companies I’ve worked for as a software engineer, I’ve worked on browser bugs relating to third-party websites. The QA team finds a bug on a website that I don’t normally visit. I go to the site, and discover I need to create a whole new account on their site. That means going through the terms of service, the privacy policy, yadda-yadda-yadda, and giving up a bunch of information that I would really rather not spend the time giving just to reproduce one stinking little bug. Either that, or outright lie (which, believe it or not, I hate even more, and won’t do for this).

But of course, I have to. That’s how these sites get probably three percent of their new members, including me.

I’m not trying to hack your websites – just the opposite. I’m trying to make them work with our products. I’m a technical customer – technical both as an engineer, and as a person who isn’t really going to buy anything from you or click on your banner ads. I’m on your site for two minutes.

Honestly, can’t you just put out a few “developer-test” logins that operate in a sandbox, away from your production environment and real customers? Disable any commands that send e-mails (except between the dev-test logins), restrict the number of accounts in the sandbox, let a company grab accounts on these isolated boxes for short periods of time (a week or two) for testing purposes only.

Or license a second class of logins altogether, just for user-testing. If anything, it might give your potential customers a chance to experience what you’re doing without committing all the way. If they like it, they’re more likely to become real customers… and more likely to use your product and make you money. (That applies to me, too – if I can “test-drive” it without signing up, I’m far more likely to play with it and actually sign up if I like it.)

You can’t give me this “we want to protect our customers from hackers” line, though. You make it so simple for people to sign up for your services that the bad guys can get an account just by filling in a few form fields – and they have far fewer scruples about this than I do. How does that protect your real customers?

Look, if your website doesn’t work in my product, then the customers we share are unhappy. They don’t get what they want, which means we don’t get what we want and you don’t get what you want. Nobody wins then. All I’m asking is that you don’t make me jump through hoops to complete that agreement between web site and web browser that says “We’re going to work for the customer and make their day a great one.”

(Note: I am specifically not speaking for my employers, past or present. This is my personal opinion only.)

UPDATE: I received three replies directing me to BugMeNot.com. To me, that’s a non-starter. As I said earlier, I don’t think lying is appropriate, nor do I think borrowing someone else’s account is appropriate either.

In this business of software development, I do not have a college degree. In fact, I have not been to college. These days, that’s a rarity among programmers (do we still use that word?). I plan on going… someday. But the point is that I don’t have a degree to fall back on. All I have are my works and my word. I cannot compromise on either of those if I have any future in this business.

News Article: You used JavaScript to write WHAT?

You used JavaScript to write WHAT?

I think it’s an interesting article – in particular, the author’s comments about JS performance on page 2. Oh, I really want to get my hands on Tamarin inside Firefox…

There’s a thought that’s been rumbling around my head for a few weeks, and I just want to throw it out there. Wouldn’t it be nice, as Tamarin stabilizes, to have a Firefox 3.1 which was the same as Firefox 3.0, but using the Tamarin engine instead of the current JS interpreter? The first fruits of Mozilla 2.0, so to speak, and a preview of what’s in store for JavaScript-land. We could even call it Firefox 3.14159. 🙂

I have no idea if this is technically feasible or not. Hopefully one or two of the Tamarin hackers can chime in here.

Alex Vincent's ramblings about Mozilla technology, authoring, and whatever he feels like.