Validating directory inputs?

A quick thought here.  I spent several hours today trying to figure out why a simple Firefox toolkit application wouldn’t work.  (I don’t know what to call “-app application.ini” applications anymore, as “XULRunner” has definitely fallen from favor…)  It took me far too long to realize that the “default” subdirectory should’ve been named “defaults” – something that I already know about these apps, but I only build them from scratch every two years or so…

Catching this sort of rookie mistake is, fundamentally, an argument validation exercise:  the main difference is that instead of the argument being an object of some kind, it’s a directory on the filesystem.  If Mozilla has a module or component for validating a directory’s structure in general, I haven’t heard of it…

Which is the point of my post here.  I’m wondering what general-purpose libraries exist for validating a directory tree’s structure and contents at a basic level.  Somebody out there must have run into this problem before and created libraries for this.  I’d love to see libraries written in C++, D, Python, NodeJS and/or privileged JavaScript.  Please reply to my post if you can point me to them.  (For once, a quick search on the world’s most popular search engine fails me…)  Bonus points for libraries that allow passing in callbacks for file-specific validation. (“Is there a syntactically correct .ini file at (root)/application.ini?”)
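To make the ask concrete, here’s the rough shape of the API I have in mind, as a Node.js sketch.  Everything in it – the schema format, the function name, the example paths – is hypothetical; I’m illustrating the idea, not pointing at a real library:

```js
// Hypothetical sketch -- the schema format and names are mine, not any library's.
"use strict";
const fs = require("fs");
const path = require("path");

// Nested objects describe required subdirectories; functions are per-file
// validation callbacks returning an error string, or null if the file passes.
function validateTree(root, schema, errors = []) {
  for (const name of Object.keys(schema)) {
    const rule = schema[name];
    const fullPath = path.join(root, name);
    if (!fs.existsSync(fullPath)) {
      errors.push("missing: " + fullPath);
    } else if (typeof rule === "function") {
      const problem = rule(fullPath);
      if (problem) errors.push(fullPath + ": " + problem);
    } else if (!fs.statSync(fullPath).isDirectory()) {
      errors.push("expected a directory: " + fullPath);
    } else {
      validateTree(fullPath, rule, errors);
    }
  }
  return errors;
}

// This would have caught my "default" vs. "defaults" mistake immediately.
const errors = validateTree("/path/to/my-app", {
  "application.ini": (file) =>
    /^\[App\]$/m.test(fs.readFileSync(file, "utf8"))
      ? null
      : "not a plausible application.ini",  // a crude syntax check, for illustration
  defaults: { preferences: {} },
  chrome: {},
});
if (errors.length > 0) console.error(errors.join("\n"));
```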

A practical whitelist in JavaScript: es7-membrane, version 0.7

Several months ago, I announced es7-membrane, a new project for letting JavaScript developers control, through JS proxies, what their customers see of their own libraries.  A proxy, as you may recall, lets its creator define rules for looking up properties, defining properties, calling methods, etc., often with a real object underneath to which the proxy’s handler internally refers.

First off, a few basics:  an object is a collection of properties (some of which hold functions, and those are officially named “methods”).  This may sound obvious, but it’s important:  you refer to other values by the object and a property name.

Proxies allow you to rewrite the rules for referring to other values by that tuple of the (containing) object and a property name. For instance, you can hide “private” members behind a proxy.

(Stack Overflow)
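To make that concrete, here’s a minimal sketch using a bare Proxy (not es7-membrane) that hides every property whose name starts with an underscore:

```js
"use strict";

const target = { name: "visible", _secret: "hidden" };
const isPrivate = (key) => typeof key === "string" && key.startsWith("_");

// A complete implementation would also trap getOwnPropertyDescriptor,
// defineProperty, set, etc.; this covers just the common lookups.
const proxy = new Proxy(target, {
  get: (obj, key, receiver) =>
    isPrivate(key) ? undefined : Reflect.get(obj, key, receiver),
  has: (obj, key) => (isPrivate(key) ? false : Reflect.has(obj, key)),
  ownKeys: (obj) => Reflect.ownKeys(obj).filter((key) => !isPrivate(key)),
});

proxy.name;         // "visible"
proxy._secret;      // undefined
"_secret" in proxy; // false
Object.keys(proxy); // ["name"]
```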

A membrane presents a one-to-one relationship between each proxy and a corresponding real object.  If you’re given a proxy to a document, and not the document itself, you can get other proxies, and those proxies can offer other properties that let you get back to the proxy of the document… but you can’t break out of any of the proxies to the underlying set of objects (or “object graph”).

When I made the announcement back in August of this new project, the membrane I presented was quite useless, implementing only a mirroring capability.  Not anymore.  There are a few features that the latest version now supports:

  1. Membrane owners can replace any proxy the membrane generates with a custom proxy, using the .modifyRules API
    • .createChainHandler(…) creates a new ProxyHandler derived from a handler used for an object graph the membrane already knows about.
    • .replaceProxy(oldProxy, newHandler) takes the new handler and returns a new proxy for the original value, provided the new proxy handler was created via .createChainHandler().
  2. Membrane owners can require a proxy to store new properties locally on the proxy, instead of propagating them through to the underlying object.  (.storeUnknownAsLocal(…) )
  3. Membrane owners can require a proxy to delete properties locally, instead of on the underlying object.  (.requireLocalDelete(…) )
  4. Membrane owners can hide existing properties of an object from the proxy’s users, as if those properties do not exist.  (.filterOwnKeys(…) )
  5. Membrane owners can be notified when a new proxy is about to go to the customer, and set up new rules for that proxy before the customer ever sees it.  (ObjectGraphHandler.prototype.addProxyListener(callback), .removeProxyListener(callback))
  6. Membrane owners can define as many object graphs as they want.  Traditionally, a membrane in JavaScript has supported only two object graphs.  (“wet”/“dry”, or “protected”/“public”, or “internal”/“external”… you get the idea)  But there’s no technical reason for that to be the upper limit.  The initial design of es7-membrane allowed the owner to define an object graph by name with a string.
    • Having more than one object graph could have a few applications:  different privileges for different users or customers, for example.
    • The es7-membrane test code uses “dry”, “wet” and “damp” object graphs.
    • For lack of a better name, es7-membrane calls these “multisided” membranes.  I have a document explaining how this works at a very temporary location.
  7. A new feature of ECMAScript 6, however, is symbols – which can be used as valid keys on an object, but are not strings.  Per a GitHub issue filed by Dr. Mark Miller of Google (who has been advising me on applications every now and then – thanks, Mark), membrane owners can now define object graphs by a “private” JavaScript symbol they create, as well.

Features two through five above combine to make whitelisting of properties in JavaScript very doable:  properties you don’t want exposed, you filter out, and customers receiving proxies configured for local properties won’t propagate their changes to your objects.  The listeners also mean you can apply those filters and local-property rules before the customer ever sees any newly created proxy.  So properties you want private and/or protected really are private and/or protected from the malicious end-user.
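In code, a whitelist built from those features looks roughly like the sketch below.  The method names (.addProxyListener, .filterOwnKeys, .storeUnknownAsLocal, .requireLocalDelete) come straight from the feature list above, but the handler lookup, the meta object, and the argument shapes are my shorthand – check the repository’s documentation for the real signatures:

```js
// Sketch only: method names are from the feature list; getHandlerByField(),
// the meta object, and the argument order are approximations, not gospel.
const { Membrane } = require("es7-membrane");
const membrane = new Membrane();
const wetHandler = membrane.getHandlerByField("wet", true);
// Object graphs can be named by strings -- or, as of 0.7, by symbols.
const dryHandler = membrane.getHandlerByField(Symbol("dry"), true);

const whitelist = new Set(["publicMethod", "publicData"]);

// Feature 5: intercept each new "dry" proxy before any customer sees it.
dryHandler.addProxyListener(function (meta) {
  // Feature 4: hide every own property not on the whitelist.
  membrane.modifyRules.filterOwnKeys(dryHandler, meta.target,
                                     (key) => whitelist.has(key));
  // Features 2 & 3: new and deleted properties stay local to the proxy,
  // never propagating to the underlying "wet" object.
  membrane.modifyRules.storeUnknownAsLocal(dryHandler, meta.target);
  membrane.modifyRules.requireLocalDelete(dryHandler, meta.target);
});
```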

Version 0.7 added the symbol support today, along with a clean-up of a probable memory leak and a little more protection for revoked object graphs.

The production files are available in the dist subdirectory of the es7-membrane project’s GitHub repository, or directly usable as an npm module.

I’m still looking for help in a few areas:

  • Building a static interactive demo site on GitHub
  • Code and documentation reviews
  • More testing
    • additional tests in Jasmine
    • converting tests to test262 format (someone suggested the official ECMAScript test suite could use some integration tests, such as a membrane implementation)
    • probing for weaknesses: could a customer who has only “dry” proxies and good old ECMAScript 6+ (no membrane access, no other proxies) break out to reach “wet” objects?  No one’s seriously explored that yet.
  • Using the membrane module to protect itself from evildoers (dogfooding bonus)
  • Implementing other use cases for a multisided membrane
  • And just in general, another volunteer developer or two would be extremely welcome!

Introducing es7-membrane: A new ECMAScript 2016 Membrane implementation

I have a new ECMAScript membrane implementation, which I will maintain and use in a professional capacity, and which I’m looking for lots of help with in the form of code reviews and API design advice.

For those of you who don’t remember what a membrane is, Tom van Cutsem wrote about membranes a few years ago, including the first implementations in JavaScript. I recently answered a StackOverflow question on why a membrane might be useful in the first place.

Right now, the membrane supports “perfect” mirroring across object graphs:  as far as I can tell, separate object graphs within the same membrane never see objects or functions from another object graph.

The word “perfect” is in quotes because there are probably bugs, facets I haven’t yet tested for (“What happens if I call Object.freeze() on a proxy from the membrane?”, for example).  There is no support yet for the real uses of proxies, such as hiding properties, exposing new ones, or special handling of primitive values.  That support is forthcoming, as I do expect I will need a membrane in my “Verbosio” project (an experimental XML editor concept, irrelevant to this group) and another for the company I work for.
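Concretely, basic use looks something like the sketch below.  The method names here are approximations from memory – treat the repository’s own documentation as authoritative:

```js
// Approximate usage sketch; the lookup and wrapping calls are illustrative.
const { Membrane } = require("es7-membrane");
const membrane = new Membrane();
const wetHandler = membrane.getHandlerByField("wet", true);
const dryHandler = membrane.getHandlerByField("dry", true);

const wetDocument = { title: "original", owner: { name: "Alex" } };
const dryDocument = membrane.convertArgumentToProxy(
  wetHandler, dryHandler, wetDocument
);

dryDocument.title;           // "original" -- reads pass through...
dryDocument === wetDocument; // false      -- ...but it's a proxy, not the object
dryDocument.owner;           // another "dry" proxy, wrapped on first access
// One-to-one: asking for the same wet object again yields the same dry proxy.
```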

The good news is the tests pass in Node 6.4, the current Google Chrome release, and Mozilla Firefox 51 (trunk, debug build).  I have not tested any other browser or ECMAScript environment.  I also will be checking in lots of use cases over the next few weeks which will guide future work on the module.

With all that said, I’d love to get some help.  That’s why I moved it to its own GitHub repository.

  • None of this code has been reviewed yet.  My colleagues at work are not qualified to do a code or API review on this.  (This isn’t a knock on them – proxies are a pretty obscure part of the ECMAScript universe…)  I’m looking for some volunteers to do those reviews.
  • I have two or three different options, design-wise, for making a Membrane’s proxies customizable while still obeying the rules of the Membrane.  I’m assuming there are some supremely competent people from the es-discuss mailing list who could offer advice through the GitHub project’s wiki pages.
  • I’d also like to properly wrap the baseline code as ES6 modules using the import and export statements – but I’m not sure they’re safe to use in current release browsers or Node.  (I’ve been skimming Dr. Axel Rauschmayer’s “Exploring ES6” chapter on ES6 modules.)
    • Side note:  try { import /* … */ } catch (e) { /* … */ } seems to be illegal syntax, and I’d really like to know why; see the sketch after this list.  (The error from trunk Firefox suggested import needed to be on the first line, and I had the import block on the second line, after the try statement.)
  • This is my first time publishing a serious open-source project to GitHub, and my absolute first attempt at publishing to NPM:
    • I’m not familiar with Node, nor with “proper” packaging of modules pre-ES6, so my build-and-test systems need a thorough review too.
    • I’m having trouble properly setting up continuous integration.  Right now, the build reports as passing but is internally erroring out…
    • Pretty much none of the other GitHub/NPM-specific goodies (a static demo site, wiki pages for discussions, keywords for the npm package, a Tonic test case, etc.) exists yet.
  • Of course, anyone who has interest in membranes is welcome to offer their feedback.
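Returning to the import side note above:  the rule, as far as I can tell, is that import declarations are resolved statically, before any code runs, so the grammar only permits them at the top level of a module – never nested inside a block.  (The module path and binding below are placeholders.)

```js
// Illegal syntax: an import declaration cannot appear inside any block,
// because imports are resolved statically, before any code runs.
// try {
//   import { Membrane } from "./es7-membrane.js";
// } catch (e) {
//   // no chance to fall back -- this never parses
// }

// Legal: top level of the module only.
import { Membrane } from "./es7-membrane.js";
```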

If you’re not able to comment here for some reason, I’ve set up a GitHub wiki page for that purpose.

Associate’s Degree in Computer Science (Emphasis in Mathematics)

Hi, all.  I know I’ve been really quiet lately, because I’ve been really busy.  My full-time job is continuing along well, and I just completed an Associate in Arts degree at Chabot College, majoring in Computer Science with an emphasis in Mathematics.

I have an online music course to take to complete my lower-division education requirements, and then I’ll be starting in the fall quarter at California State University, East Bay on a Bachelor of Science degree, also majoring in Computer Science.

No, I don’t have any witty pearls of wisdom to offer in speeches, so I will defer to the expert in commencement speeches, Baz Luhrmann.

My two cents on WebExtensions, XPCOM/XUL and other announcements

(tl;dr:  There’s a lot going on, and I have some sage, if painful, advice for those who think Mozilla is just ruining your ability to do what you do.  But this advice is worth exactly what you pay to read it.  If you don’t care about a deeper discussion, just move to the next article.)


The last few weeks on Planet Mozilla have had some interesting moments:  great, good, bad, and ugly.  Honestly, all the recent traffic affects me professionally, both now and in the future, so I’m going to respond very cautiously here.  Please forgive the piling on – and understand that I’m not entirely opposed to the most controversial piece.

  • WebAssembly.  It just so happens I’m taking an assembly language course now at Chabot College.  So I want to hear more about this.  I don’t think anyone’s going to complain much about faster JavaScript execution… until someone finds a way to break out of the .wasm sandboxing, of course.  I really want to be a part of that.
  • ECMAScript 6th Edition versus the current Web:  I’m looking forward to Christian Heilmann’s revised thoughts on the subject.  On my pet projects, I find the new features of ECMAScript 6 gloriously fun to use, and I hate working with JS that doesn’t fully support it.  (CoffeeScript, are you listening?)
  • WebDriver:  Professionally I have a very high interest in this.  I think three of the companies I’ve worked for, including FileThis (my current employer), could benefit from participating in the development of the WebDriver spec.  I need to get involved in this.
  • Electrolysis:  I think in general it’s a good thing.  Right now when one webpage misbehaves, it can affect the whole Firefox instance that’s running.
  • Scripts as modules:  I love .jsm’s, and I see in relevant bugs that some consensus on ECMAScript 6-based modules is starting to really come together.  Long overdue, but there’s definitely traction, and it’s worth watching.
  • Pocket in Firefox:  I haven’t used it, and I’m not interested.  As for it being a “surprise”:  I’ll come back to that in a moment.
  • Rust and Servo:  Congratulations on Rust reaching 1.0 – that’s a pretty big milestone.  I haven’t had enough time to take a deep look at it.  Ditto Servo.  It must be nice having a team dedicated to researching and developing new ideas like this, without a specific business goal.  I’m envious.  🙂
  • Developer Tools:  My apologies for nagging too much about one particular bug that really hurts us at FileThis, but I do understand there’s a lot of other important work to be done.  If I understood how the devtools protocols worked, I could try to fix the bug myself.  I wish I could have a live video chat with the right people there, or some reference OGG videos, to help out… but videos would quickly become obsolete documentation.
  • WebExtensions, XPCOM and XUL:  Uh oh.

First of all, I’m more focused on running custom XUL apps via firefox -app than I am on extensions to baseline Firefox.  I read the announcement about this very, very carefully.  I note that there was no mention of XUL applications being affected, only XUL-based add-ons.  The headline said “Deprecation of XUL, XPCOM…” but the text makes it clear that this applies mostly to add-ons.  So for the moment, I can live with it.

Mozilla’s staff has been sending mixed messages, though.  On the one hand, we’re finally getting a Firefox-based SDK into regular production. (Sorry, guys, I really wish I could have driven that to completion.)  On the other, XUL development itself is considered dead – no new features will be added to the language, as I found to my dismay when a XUL tree bug I’d been interested in was WONTFIX’ed.  Ditto XBL, and possibly XPCOM itself.  In other words, what I’ve specialized in for the last dozen years is becoming obsolete knowledge.

I mean, I get it:  the Web has to evolve, and so do the user-agents (note I deliberately didn’t say “browsers”) that deliver it to human beings.  It’s a brutal Darwinian process of not just technologies, but ideas:  what works, spreads – and what’s hard for average people (or developers) to work with, dies off.

But here’s the thing:  Mozilla, Google, Microsoft, and Opera all have huge customer bases to serve with their browser products, and their customer bases aren’t necessarily the same as yours or mine (other developers, other businesses).  In one sense we should be grateful that all these ideas are being tried out.  In another, it’s really hard for third parties like FileThis or TenFourFox or NoScript or Disruptive Innovations, which have far fewer resources and different business goals, to keep up with the brutally fast Darwinian pace these major companies have set for themselves.  (They say it’s for their customers, and they’re probably right, but we’re coughing on the dust trails they kick up.)  Switching to an “extended support release” branch only gives you a longer stability cycle… for a while, anyway, and then you’re back in catch-up mode.

A browser for the World Wide Web is a complex beast to build and maintain, and it grows more so every year.  That’s because in the mad scramble to provide better services for Web end-users, browser vendors add new technologies and new ideas rapidly, but they also retire “undesirable” technologies.  Maybe not so rapidly – I do feel sympathy for those who complain about CSS prefixes being abused in the wild, for example – but the core products of these browser providers do eventually move on from what, in their collective opinions, just isn’t worth supporting anymore.

So what do you do if you’re building a third-party product that relies on Mozilla Firefox supporting something that’s fallen out of favor?

Well, obviously, the first thing you do is complain on your weblog that gets syndicated to Planet Mozilla.  That’s what I’m doing, isn’t it?  🙂

Ultimately, though, you have to own the code.  I’m going to speak very carefully here.

In economic terms, we web developers deal with an oligopoly of web browser vendors:  a very small but dominant set of players in the web browsing “market”.  They spend vast resources building, maintaining and supporting their products, and largely give them away for free.  In theory the barriers to entry are small, especially for WebKit-based browsers and Gecko:  download the source, customize it, build and deploy.

In practice… maintenance of these products is extremely difficult.  If there’s a bug in NSS or the browser devtools, I’m not the best person to fix it.  But I’m the Mozilla expert where I work, and usually have been.

I think it isn’t a stretch to say that web browsers, because of the sheer number of features needed to satisfy the average end-user, rapidly approach the complexity of a full-blown operating system.  That’s right:  Firefox is your operating system for accessing the Web.  Or Chrome is.  Or Opera, or Safari.  It’s not just HTML, CSS and JavaScript anymore:  it’s audio, video, security, debuggers, automatic updates, add-ons that are mini-programs in their own right, canvases, multithreading, just-in-time compilation, support for mobile devices, animations, et cetera.  Plus the standards, which are also evolving at high frequencies.

My point in all this is as I said above:  we third party developers have to own the code, even code bases far too large for us to properly own anymore.  What do I mean by ownership?  Some would say, “deal with it as best you can”.  Some would say, “Oh yeah? Fork you!”  Someone truly crazy (me) would say, “consider what it would take to build your own.”

I mean that.  Really.  I don’t mean “build your own.”  I mean, “consider what you would require to do this independently of the big browser vendors.”

If that thought – building something that fits your needs and is complex enough to satisfy your audience of web end-users, who are accustomed to what Mozilla Firefox or Google Chrome or Microsoft Edge, etc., provide them already, complete with back-end support infrastructure to make it seamlessly work 99.999% of the time – scares you, then congratulations:  you’re aware of your limited lifespan and time available to spend on such a project.

For what it’s worth, I am considering such an idea.  For the future, when it comes time to build my own company around my own ideas.  That idea scares the heck out of me.  But I’m still thinking about it.

Just like reading this article, when it comes to building your products, you get what you pay for.  Or more accurately, you only own what you’re paying for.  The rest of it… that’s a side effect of the business or industry you’re working in, and you’re not in control of these external factors you subconsciously rely on.

Bottom line:  browser vendors are out to serve their customer bases, which are tens of millions, if not hundreds of millions of people in size.  How much of the code, of the product, that you are complaining about do you truly own?  How much of it do you understand and can support on your own?  The chances are, you’re relying on benevolent dictators in this oligopoly of web browsers.

It’s not a bad thing, except when their interests don’t align with yours as a developer.  Then it’s merely an inconvenience… for you.  How much of an inconvenience?  Only you can determine that.

Then you can write a long diatribe for Planet Mozilla about how much this hurts you.

Introducing a WebGL-DOM Visualization Tool

Repository: https://bitbucket.org/verbosio/webgl-dom

Home page: https://alexvincent.us/webgl-dom/

tl;dr:  I have a new way of visualizing DOM trees in WebGL, and I’m looking for volunteers to improve the basic tool, especially on the WebGL side with three.js.

I’ve run into some trouble with an experimental Document Object Model.  Specifically, I’m trying to visualize it, but I’m dealing with multiple dimensions:

  • The parent-child node relationships
  • Sibling nodes
  • Attributes of DOM Element nodes
  • Atomic change sets
  • and at least two models of “shadow content”.

About four to six weeks ago, I realized I needed a tool to not only debug the DOM I’m trying to build, but to simulate new ideas that I haven’t yet implemented.  So I started building a WebGL-based visualization tool.

The tool currently has two main tasks:

  1. Transforming an XML document into a specific JSON format
  2. Generating a tree diagram in WebGL from JSON documents in the specified format

Most importantly, hand-editing this JSON should allow me to show ideas that currently cannot be done in the standard DOM.  The “WebGL Inspector sample” link from the home page shows this:  it takes only JSON as input and renders the tree.
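For illustration only – the real schema lives in the repository and is still in flux – a node in that JSON might look conceptually like this (the field names are hypothetical, not the actual format):

```js
// Hypothetical node shape, not the tool's actual schema.
{
  "nodeName": "section",
  "nodeType": 1,            // ELEMENT_NODE
  "attributes": { "id": "intro" },
  "shadowChildren": [],     // one of the "shadow content" models
  "children": [
    { "nodeName": "#text", "nodeType": 3, "nodeValue": "Hello, world" }
  ]
}
```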

It is very primitive right now, a mere starting point.  I’m posting this now, hoping to find a volunteer who’s more familiar with WebGL / three.js than I am to improve on the rendering parts.  The image is static:  there’s no zoom, pan, or rotation support whatsoever.  I really would like some help there.

Also, it doesn’t work in Google Chrome, but that’s because I had to specify type="application/javascript;version=1.8" to make it work in Mozilla Firefox 39+.  (I like ECMAScript 6th edition and strict mode, thank you very much.  I just wish it worked without versioning.  I understand that’s Coming Soon.)

There is some click support:  clicking on a sphere should give details about the corresponding DOM Node, including the nodeName and nodeType.

If anyone out there likes the 3-D visualization idea and wants to reimplement it in the Firefox Developer Tools, be my guest.  Though the Tilt add-on for Firefox is more practical right now.

I bought a condominium!

It seems customary on Planet Mozilla to announce major positive events in life.  Well, I’ve just had one.  Not quite as major as “I’m a new dad”, but it’s up there.  With the help of my former employers who paid my salaries (and one successful startup, Skyfire), I closed a deal on a condominium on March 5, in Hayward, California, U.S.A.

There will be a housewarming party.  Current and former Mozillians are certainly welcome to drop by.  The date is TBA, and parking will be extremely limited, so an RSVP will be required.

I’ll write a new post with the details when I have them.

Competing with pen & paper

In tonight’s linear algebra class, I made the mistake of leaving my paper notebook home. Ok, I thought, I’ll just use Amaya and see how that goes.

Not so well, it turns out.

Twenty minutes of lecture equals a frantic “where is that thing?”, and nothing learned…

  • The template for a MathML subscript is in a different panel from the template for a MathML summation (“sigma”), and you have to switch between panels constantly.
  • If you want two subscripts (and in linear algebra, two subscripts for an element is common), you get a modal dialog.  (Really? How many subscripts does an element need?)
  • Where’s the special “M” symbol for matrix spaces? (I’d post it, but WordPress eats everything after that U+1D544 character!)  We can get the real number set with ℝ.
  • The UI for Amaya is hard-coded, so I can’t change it at all.
  • Amaya’s copy & paste support is terrible.
  • It takes about two seconds to write [Aᵢ]₁ⱼ with pen & paper.  In Amaya that takes somewhere around ten seconds, plus the dialog I mentioned earlier.
  • Oh, and the instructor’s going on, keeping a pace for students using pen & paper… there’s no chance of me keeping up with him.

After twenty minutes of trying to quickly jot down what he was saying, without comprehension, I ended up with symbolic gobbledygook – probably about an 80% match to the notation the instructor actually used, but complete nonsense to read back.

I ended up switching to scratch paper and pen, where I was not only able to keep up, but ask some insightful questions.

(Incidentally, I glanced at LibreOffice tonight as well.  I saw immediately that I’d have fundamentally the same problems:  unfamiliar UI and lots of context switching.  Too much to really listen to what the instructor’s saying.)

How does a computer compete with pen & paper?

Later tonight, I realized:  if it only takes five quick, essentially subconscious penstrokes to draw Aᵢ, and a fair bit of training to teach someone the equivalent keystrokes in an editor… then maybe a keyboard and mouse are the wrong tools to give a student.  Maybe something closer to pen & paper is best for quickly jotting down something, and then translating it to markup later… which sounds like character recognition.

Hmm, isn’t that something digital tablets and styluses are somewhat good at?  Maybe not perfect, but easier for a human to work with than a memorized set of keystrokes.

Now, I am starting to understand why computer manufacturers (and Firefox OS developers) are putting so much effort into supporting touchscreens:  because they’re useful for taking notes, at least.  Once again, I’ve somewhat missed the boat.

How does this impact my editor project?

The good news is this is way too complicated for me to even attempt in my proof-of-concept editor that I’m trying to build.  (The proof of concept is about making each XML language an add-on to the core editor.)

The bad news is that if I ever want students to regularly use computers in a mathematics classroom (which is the raison d’être for my working with computers in the first place, going back to childhood), I’m going to need to support tablet computers and styluses.  That’s a whole can of worms I’m not even remotely prepared to look at.  It raises the bar extremely high.  I’m writing this blog post mainly for myself as a future reference, but it means I’ve just discovered that a Very Hard Problem is really a Much, Much Harder Problem than I ever imagined.
