Meteor's Nested Import Controversy

Written by Pete Corey on Jul 17, 2016.

In my last post you might have noticed an interesting piece of ES6-flavored syntax sugar. We imported a module within a nested block:


if (Meteor.isServer) {
    import winston from "winston";
    ...

Although this code is isomorphic and executed on both the client and the server, winston is only imported on the server.

While this kind of nested importing seems like a handy addition to our Meteor toolbox, it doesn’t come without its share of controversy.

Meteor Meet Reify

As recently as Meteor version 1.3.2.4, this kind of nested import was impossible. Importing a module within any non-top-level block would result in an exception when building your Meteor application:


import winston from "winston";
^^^^^^
SyntaxError: Unexpected reserved word

However, this all changed in Meteor 1.3.3. Digging through the release notes for that version, you’ll notice a very interesting bullet point:

import statements in application modules are no longer restricted to the top level, and may now appear inside conditional statements (e.g. if (Meteor.isServer) { import … }) or in nested scopes.

In this release, Meteor transitioned to using Ben Newman’s Reify transpiler, which transforms our nested import statement into something like this:


if (Meteor.isServer) {
    var winston;
    module.import("winston",{"default":function(v){winston=v}});
}

Initially, this seems like a useful improvement to the module system.

Importing modules within nested blocks can alleviate some of the pains of context-dependent (client vs. server) imports in isomorphic code. You only want this module imported on the server? Not a problem!

Reify Meet Babel

Trouble quickly rears its ugly head when we try using these modules outside the context of the Meteor build tool.

To simplify our example, imagine we have a module that looks like this:


export function parse(input) {
    import qs from "qs";
    return qs.parse(input);
}

This module exports a function called parse that takes in an input string, runs it through qs.parse, and returns the result.

If this were a Meteor module, this would work just fine. The qs module would be imported at runtime using module.import and everything would work as expected.

Now, imagine that we wanted to test this functionality. Because we want to keep our tests fast, we’ll bypass Meteor’s test framework and use Mocha directly.

A simple test for this module might look something like this:


import { expect } from "chai";
import { parse } from "../imports/parse";

describe("myParseModule", function() {
    it("parses input", function() {
        expect(parse("foo=bar")).to.deep.equal({
            foo: "bar"
        });
    });
});

We execute this test by running mocha over our ./test directory. Unaware of the transition to Reify (and, admittedly, unaware that Reify even exists), we specify that we want to use Babel as our Javascript transpiler:


mocha ./test --compilers js:babel-register

Unfortunately, when Babel tries to transpile our application, it throws an error:


SyntaxError: 'import' and 'export' may only appear at the top level (2:4)
  1 | export function parse(input) {
> 2 |     import qs from "qs";
    |     ^
  3 |     return qs.parse(input);
  4 | }
  5 |

Outside the context of Reify and the Meteor build system, nested imports are not recognized as valid ES6.

The Controversy

Currently, ES6 only supports top-level module imports. This design decision is intended to open the doors for static analysis tools, better resolution of cyclic dependencies, improved dead code removal, and faster lookups, along with proposed Javascript features like macros and types.

Reify’s choice to deviate from this decision is potentially at odds with these design goals, and violates the ES6 specification itself.

That isn’t to say that Reify or Meteor are necessarily in the wrong. Specifications should be changeable, provided there is a compelling reason to change. Ben took up the torch and wrote a compelling document outlining the benefits of nested imports.

In addition to static imports, ES6 also describes a module loader API that can be used to dynamically import modules:


Promise.all(["./foo", "./bar"].map(System.import))
    .then(([foo, { baz }]) => {
        // ...
    });

An argument could be made that the dynamic module loader API makes techniques like dead code removal impossible. How can a static analysis tool know which modules can be culled if it can’t see, at compile time, which modules will be used?


let version = Math.round(Math.random());
System.import("./foo-v" + version);

Can our build system remove the foo-v0 module from our final bundle? What about foo-v1? Either of the modules could be chosen at runtime, so it’s impossible to know.

Ben argues that nested imports, which require string literal import locations and explicitly named import symbols, would eliminate this problem entirely. Even with nested imports, it’s easy to see which modules and symbols within those modules will be required in a final bundle.

Would nested imports bring us closer to our goals of better compile-time static analysis, while at the same time providing a better, more consistent developer experience?

The controversy is subtle, but the controversy is real.

Looking Forward

As Meteor developers, we have two immediate options moving forward. We can embrace Reify, and potentially distance ourselves from the rest of the Javascript community, or we can fall back to using CommonJS-style require statements to pull in nested modules (or shim ES6-style module loaders):


if (Meteor.isServer) {
    const winston = require("winston");
    ...
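Applied to the earlier parse example, that CommonJS fallback might look something like the following sketch. To keep it runnable without third-party packages, Node’s built-in querystring module stands in for qs here:

```javascript
// Sketch: the parse module rewritten with a nested CommonJS require.
// Unlike an ES6 import, require is an ordinary function call, so it
// is legal inside any block. Node's built-in "querystring" module
// stands in for the third-party "qs" package.
function parse(input) {
    const querystring = require("querystring");
    return querystring.parse(input);
}

module.exports = { parse };
```

Babel, Mocha, and Node are all perfectly happy with this version, at the cost of some consistency with the ES6-style imports used elsewhere in a project.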

For the time being, because I enjoy using native Node.js tools outside the context of the Meteor build tool, I plan on refraining from using nested imports.

I’m very interested to see how all of this will play out.

Ben will be discussing his proposal for nested imports with the ECMAScript standards committee at the end of this month.

Literate Commits

Written by Pete Corey on Jul 11, 2016.

I’ve always been interested in Donald Knuth’s idea of “literate programming”. Presenting a complete program as a story or narrative is a very powerful framework for expressing complex ideas.

While literate programming is interesting, it’s also uncommon. The practice of literate programming seems to be at odds with how most of us develop software.

Thankfully, with a small shift in perspective we may be able to employ all of the benefits of writing literate software, while leveraging the tools we use on a daily basis.

The Problem With Literate Programming

Literate programming doesn’t come without its share of problems. In a recent article, John Cook does a fantastic job of outlining some of the drawbacks of this style of programming, and how they’ve hindered its wider adoption. Additionally, commenter Peter Norvig gives a compelling argument against literate programming in the comments:

I think the problem with Literate Programming is that assumes there is a single best order of presentation of the explanation. I agree that the order imposed by the compiler is not always best, but different readers have different purposes. You don’t read documentation like a novel, cover to cover. You read the parts that you need for the task(s) you want to do now.

A developer’s experiences, preferences, and goals will lead them to approach the same codebase in very different ways. A new developer looking to take ownership over an existing codebase may be looking for a much more holistic view of the software than a developer looking to fix a single bug.

Choose Your Own Adventure

But what if we had all of the benefits of a literate program without the strict presentation order? What if we could dive into any piece of the code that interests us and read only the sections of documentation that relate to that code?

What would be ideal is a tool to help construct such paths for each reader, just-in-time; not a tool that makes the author choose a single path for all readers.

Peter’s idea of a “reading path” that can be constructed on the fly strikes a chord with me and resembles an experiment I’ve been working on lately.

In an attempt to better document, improve, and share my programming workflow, I’ve been setting aside time for documented, deliberate practice.

During these practice sessions, I write short programs while following what I consider to be “best practices”. I’m very deliberate during these sessions and document the thought process and impetus behind every change with highly detailed, “literate” commit messages.

The goal of this process is to turn the project’s revision history into an artifact, a set of literate commits, that represent my thoughts as I go through the steps of writing professional-level software.

Benefits of Literate Commits

In the short amount of time I’ve been doing them, these practice sessions have been enlightening. Intentional observation of my process has already led to many personal insights.

Slowing down and ensuring that each and every commit serves a singular purpose and adds to the narrative history of the project has done wonders to reduce thrash and the introduction of “stupid mistakes”.

The knowledge that the project’s revision history will be on display, rather than buried in the annals of git log, is a powerful motivating factor for doing things right the first time.

The goal is that this repeated act of “doing things right the first time” will eventually turn into habit.

Learning From History

The portion of Peter’s comment that stands out is his desire for a choose-your-own-adventure-style tool for bringing yourself up to speed with a given piece of code.

Imagine finding a section of code that you don’t understand within a project. git blame can already be used to find the most recent commits against that section, but those commits are most likely unhelpful out of context.

Instead, imagine that those commit messages were highly detailed, thorough explanations of why that code was changed and what the original developer hoped to accomplish with their change.

Now go further back. Review all of the related commits that led up to the current state of this particular piece of code.

Read in chronological order, these commits should paint a clear picture of how and why this particular code came into existence and how it has changed over the course of its life.
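Git already has the raw tools to replay that story. A quick sketch in a throwaway repository (the file name and commit messages here are hypothetical, purely for illustration):

```shell
# Build a tiny two-commit history in a temporary repository.
cd "$(mktemp -d)"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo 'function parse() {}' > parse.js
git add parse.js
git commit -qm "Add a stub for the parse module"

echo '// TODO: handle empty input' >> parse.js
git commit -qam "Note the unhandled empty-input case"

# Which commit last touched each line of the file?
git blame parse.js

# The file's full story, oldest commit first, with patches.
git log --reverse -p -- parse.js
```

With literate commit messages, the output of that last command reads like the narrative described above.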

Those who do not read history are doomed to repeat it.

This kind of historical context is invaluable when writing software. By observing how a piece of code has changed over time, you can build a better understanding of the purpose it serves, and put yourself in the right mindset to change it.

Example Project

For a very simple, introductory example of this style of programming and writing, take a look at how I solved a simple code kata in literate commit style.

This is a very basic example, but I hope it serves as a clear introduction to the style. I plan on continuing to release literate commit posts over the coming months. Hopefully this intentional style of programming can be as helpful to others as it has been to me.

While I’m not advocating using literate commits in real-world software, in my limited experience, they can be an incredibly useful tool for honing your craft.

Delete Occurrences of an Element

This post is written as a set of Literate Commits. The goal of this style is to show you how this program came together from beginning to end.

Each commit in the project is represented by a section of the article. Click each section's header to see the commit on Github, or check out the repository and follow along.

Written by Pete Corey on Jul 11, 2016.

Laying the Groundwork

Today we’ll be tackling a code kata called “Delete occurrences of an element if it occurs more than n times” (what a catchy name!). The goal of this kata is to implement a function called deleteNth. The function accepts a list of numbers as its first parameter and another number, N, as its second parameter. deleteNth should iterate over each number in the provided list, removing any numbers that have appeared more than N times, before returning the resulting list.

While this is a fairly simple problem, we’re going to solve it in a very deliberate way in order to practice building better software.

This first commit lays the groundwork for our future work. We’ve set up a simple Node.js project that uses Babel for ES6 support and Mocha/Chai for testing.

.babelrc

+{
+    "presets": ["es2015"]
+}

.gitignore

+node_modules/

package.json

+{
+    "main": "index.js",
+    "scripts": {
+        "test": "mocha ./test --compilers js:babel-register"
+    },
+    "dependencies": {
+        "babel-preset-es2015": "^6.9.0",
+        "babel-register": "^6.9.0",
+        "chai": "^3.5.0",
+        "lodash": "^4.12.0",
+        "mocha": "^2.4.5"
+    }
+}

test/index.js

+import { expect } from "chai";
+
+describe("index", function() {
+
+    it("works");
+
+});

Take What We’re Given

One of the challenges of real-world problems is teasing out the best interface for a given task. Code katas are different from real-world problems in that we’re usually given the interface we’re supposed to implement upfront.

In this case, we know that we need to implement a function called deleteNth which accepts an array of numbers as its first argument (arr), and a number, N, as its second parameter (x).

Eventually, deleteNth will return an array of numbers, but we need to take this one step at a time.

index.js

+function deleteNth(arr,x){
+    // ...
+}

Our First Test

Writing self-testing code is a powerful tool for building robust and maintainable software. While there are many ways of writing test code, I enjoy using Test Driven Development for solving problems like this.

Following the ideas of TDD, we’ll write the simplest test we can that results in failure. We expect deleteNth([], 0) to return an empty array. After writing this test and running our test suite, the test fails:


deleteNth is not defined

We need to export deleteNth from our module under test and import it into our test file. After making those changes, the test suite is still failing:


expected undefined to deeply equal []

Because our deleteNth method isn’t returning anything, our assertion that it should return [] is failing. A quick way to bring our test suite into a passing state is to have deleteNth return [].

index.js

-function deleteNth(arr,x){
-    // ...
+export function deleteNth(arr,x){
+    return [];

test/index.js

+import { deleteNth } from "../";
...
-describe("index", function() {
+describe("deleteNth", function() {
...
-    it("works");
+    it("deletes occurrences of an element if it occurs more than n times", function () {
+        expect(deleteNth([], 0)).to.deep.equal([]);
+    });

Keep it Simple

Interestingly, our incredibly simple and incredibly incorrect initial solution for deleteNth holds up under additional base case tests. Any calls to deleteNth with a zero value for N will result in an empty array.

test/index.js

...
+        expect(deleteNth([1, 2], 0)).to.deep.equal([]);

Forging Ahead

As we add more test cases, things begin to get more complicated. In our next test we assert that deleteNth([1, 2], 1) should equal [1, 2]. Unfortunately, our initial solution of always returning an empty array failed in this case.


expected [] to deeply equal [ 1, 2 ]

We know that all calls to deleteNth where x is zero should result in an empty array, so let’s add a guard that checks for that case.

If x is not zero, we know that our test expects us to return [1, 2] which is being passed in through arr. Knowing that, we can bring our tests back into a green state by just returning arr.

index.js

...
-    return [];
+    if (x == 0) {
+        return [];
+    }
+    return arr;

test/index.js

...
+
+        expect(deleteNth([1, 2], 1)).to.deep.equal([1, 2]);

Getting Real

We added a new test case, and things suddenly became very real. Our new test expects deleteNth([1, 1, 2], 1) to return [1, 2]. This means that the second 1 in the input array should be removed in the result. After adding this test, our test suite groans and slips into a red state.

It seems that we have to finally begin implementing a “real” solution for this problem.

Because we want to conditionally remove elements from an array, my mind initially gravitates to using filter. We replace our final return statement with a block that looks like this:


return arr.filter((num) => {
  return seenNum <= x;
});

Our filter function will only pass through values of arr that we’ve seen (seenNum) no more than x times. After making this change our test suite expectedly complains about seenNum not being defined. Let’s fix that.

To know how many times we’ve seen a number, we need to keep track of each number we see as we move through arr. My first instinct is to do this with a simple object acting as a map from the number we’ve seen to the number of times we’ve seen it:


let seen = {};

Because seen[num] will initially be undefined we need to give it a default value of 0:


seen[num] = (seen[num] || 0) + 1;

Our test suite seems happy with this solution and flips back into a green state.
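Pulled out of the filter, the counting idiom on its own looks like this (a standalone sketch, not part of the commit):

```javascript
// Tally how many times each value appears; a missing key defaults
// to 0 before its first increment.
let seen = {};
[1, 1, 2].forEach((num) => {
    seen[num] = (seen[num] || 0) + 1;
});
console.log(seen); // { '1': 2, '2': 1 }
```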

index.js

...
-    return arr;
+    let seen = {};
+    return arr.filter((num) => {
+        seen[num] = (seen[num] || 0) + 1;
+        let seenNum = seen[num];
+        return seenNum <= x;
+    });

test/index.js

...
+        expect(deleteNth([1, 1, 2], 1)).to.deep.equal([1, 2]);

Simplifying the Filter

After getting to a green state, we notice that we can refactor our filter and remove some duplication.

The seenNum variable is unnecessary at this point. Its short existence helped us think through our filter solution, but it can easily be replaced with seen[num].

index.js

...
-        let seenNum = seen[num];
-        return seenNum <= x;
+        return seen[num] <= x;

Removing the Base Case

While we’re on a refactoring kick, we also notice that the entire zero base case is no longer necessary. If N (x) is zero, our filter function will happily drop every number in arr, resulting in an empty array.

We can remove the entire if block at the head of our deleteNth function.

index.js

...
-    if (x == 0) {
-        return [];
-    }

Final Tests

At this point, I think this solution solves the problem at hand for any given inputs. As a final test, I add the two test cases provided in the kata description.

Both of these tests pass. Victory!

test/index.js

...
+
+        expect(deleteNth([20, 37, 20, 21], 1)).to.deep.equal([20, 37, 21]);
+        expect(deleteNth([1, 1, 3, 3, 7, 2, 2, 2, 2], 3)).to.deep.equal([1, 1, 3, 3, 7, 2, 2, 2]);

Final Refactoring

Now that our tests are green and we’re satisfied with the overall shape of our final solution, we can do some final refactoring.

Using the underrated double tilde (~~) operator, we can simplify our seen increment step and merge it onto the same line as the comparison against x. Next, we can leverage some ES6 syntax sugar to consolidate our filter lambda onto a single line.

index.js

...
-    return arr.filter((num) => {
-        seen[num] = (seen[num] || 0) + 1;
-        return seen[num] <= x;
-    });
+    return arr.filter((num) => (seen[num] = ~~seen[num] + 1) <= x);
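Pieced together from the diffs above, the finished function (with the export keyword dropped so it runs as a plain script) comes out to:

```javascript
// Final deleteNth, assembled from the commits above.
// seen maps each number to how many times we've encountered it so far;
// ~~ coerces an initial undefined count to 0.
function deleteNth(arr, x) {
    let seen = {};
    return arr.filter((num) => (seen[num] = ~~seen[num] + 1) <= x);
}

console.log(deleteNth([20, 37, 20, 21], 1));            // [ 20, 37, 21 ]
console.log(deleteNth([1, 1, 3, 3, 7, 2, 2, 2, 2], 3)); // [ 1, 1, 3, 3, 7, 2, 2, 2 ]
```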

Wrap-up

This was an excellent demonstration of how following test-driven development ideas can give you supreme confidence when refactoring your code. We were able to gut entire sections out of our solution and then completely transform it with zero trepidation.

Overall, our solution looks very similar to the other submitted solutions for this kata.

My one regret with this solution is using the double tilde operator (~~). While it does make our final solution quite a bit shorter, it adds confusion to the solution if you’re not familiar with how ~~ works.
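For anyone unfamiliar with it, ~~ is simply two applications of the bitwise NOT operator. Bitwise operators coerce their operand to a 32-bit integer, which maps undefined to 0 and truncates fractions:

```javascript
// Two bitwise NOTs coerce a value to a 32-bit integer.
console.log(~~undefined); // 0  (undefined -> NaN -> 0)
console.log(~~3);         // 3  (small integers pass through)
console.log(~~3.7);       // 3  (fractions truncate toward zero)
console.log(~~-3.7);      // -3

// Which is why (seen[num] = ~~seen[num] + 1) works as a counter
// even before seen[num] has ever been assigned.
let seen = {};
seen[42] = ~~seen[42] + 1;
seen[42] = ~~seen[42] + 1;
console.log(seen[42]);    // 2
```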

Be sure to check out the final project on Github!