AWS Lambda First Impressions

Written by Pete Corey on May 24, 2016.

Lately, I’ve been paying quite a bit of attention to AWS Lambda.

Lambda is an Amazon Web Service designed to run small pieces of code in response to external stimuli (an endpoint is hit, a document is inserted into a database, etc…). The beautiful thing about Lambda is that your code is designed to run once, and you’re only charged for the amount of time your code is running.

A Node.js Script

To make things a little more concrete, let’s talk about my first baby-steps into working with Lambda.

I have a script-based tool that automates Bitcoin lending on the Poloniex exchange. Pre-Lambda, I implemented this tool as a Node.js script that spun up a local server and executed a job every 15 minutes to “do stuff” (💸 💸 💸).

I wanted to move this script off of my local machine (mostly so I could close my laptop at night), so I began investigating my hosting and pricing options. On the low end of things, I could spin up a small DigitalOcean droplet for five dollars per month. Not bad, but I knew I’d be unnecessarily paying for quite a bit of idle server time.

I even considered buying a Raspberry Pi for around forty dollars. I figured the upfront cost of buying the device would be paid for within a year. After that initial investment, the power requirements would be negligible.

Enter AWS Lambda

Finally, I found Lambda. I quickly and painlessly modified my Node script to run once, manually deployed it to Lambda, and added a schedule trigger to run my script once every fifteen minutes.
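
For illustration, a run-once Lambda handler is little more than an exported function. This is a minimal sketch, assuming a hypothetical, promise-returning doStuff module that wraps up the script’s actual work:

// index.js - the Lambda function's entry point.
var doStuff = require("./doStuff");

exports.handler = function(event, context, callback) {
  // Do one unit of work, then report success or failure
  // back to Lambda:
  doStuff()
    .then(function(result) {
      callback(null, result);
    })
    .catch(callback);
};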

Fast forward past a couple hours of fiddling and my script was working!

After monitoring my script for several days, I noticed that it took between one and two seconds to execute, on average. I added an execution hard-stop duration of three seconds to my Lambda function. With that, I knew that I would be charged for, at most, three seconds of up-time every fifteen minutes.

Using that data and Lambda’s pricing sheet, I calculated that at three seconds per execution, with an execution every fifteen minutes, the yearly cost of running my script was, at most, just under twenty-two cents.
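
For the curious, the back-of-the-envelope math looks something like this, assuming the smallest 128 MB memory tier, which Lambda’s pricing sheet listed at $0.000000208 per 100 ms at the time (the per-request charge adds well under a penny per year at this volume):

var executionsPerYear = 4 * 24 * 365; // One execution every fifteen minutes.
var unitsPerExecution = 3 * 10;       // Three seconds, billed in 100 ms units.
var pricePerUnit = 0.000000208;       // 128 MB tier, per 100 ms.

var yearlyCost = executionsPerYear * unitsPerExecution * pricePerUnit;

console.log(yearlyCost.toFixed(2)); // "0.22"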

I was shocked. $0.22/year! Thanks to Lambda’s free tier, hosting my script was free! Comparing that to DigitalOcean’s $60/year, or a Raspberry Pi’s $40+ upfront cost, I had a clear winner.

Looking Forward

My first introduction to AWS Lambda left me impressed. Further research has left me even more excited. The possibilities of a scalable, on-demand, event-driven infrastructure seem very attractive.

While I’m not totally reassessing my software development stack, I’m definitely making a little room for Lambda. I’m already thinking about how I could have used it in the past to build more elegantly engineered and cheaper solutions.

The Missing Link In Meteor's Rate Limiter

Written by Pete Corey on May 16, 2016.

Meteor’s DDPRateLimiter was released in Meteor 1.2 with surprisingly little fanfare. I say this is surprising because DDPRateLimiter helps minimize one of the most prevalent risks found in nearly all Meteor applications: Denial of Service attacks.

By putting hard limits on the rate at which people can call your methods and subscribe to your publications, you prevent them from being able to overrun your server with these potentially expensive and time-consuming requests.

Unfortunately, Meteor’s DDPRateLimiter in its current form only partially solves the problem of easily DoS-able applications.

Meteor’s Rate Limiter

In this forum post, Adam Brodzinski points out that the "meteor.loginServiceConfiguration" publication within the core accounts-base package is not rate limited by default. He argues that this exposes a serious vulnerability in every Meteor application using this package that hasn’t taken extra precautions.

Without an established rate limit on this publication, any malicious user can potentially exploit it by making repeated subscriptions. These subscriptions flood the DDP queue and prevent other requests from being processed.

In his words: “The exploit allows you to turn any meteor app on and off like a light switch.”

These types of method and publication-based Denial of Service attacks are fairly well documented, and they’re even discussed in the Meteor Guide. Be sure to take a look if this kind of attack is new to you.

A Chink In The Armor

The initial vagueness of Adam’s post intrigued me. I started digging deeper into how and when DDPRateLimiter is used by Meteor core. My sleuthing paid off!

I found a chink in the rate limiter’s armor.

The DDPRateLimiter is invoked on the server whenever a subscription is made and whenever a method is called. These invocations are fairly simple. They increment either a "subscription" or a "method" counter and use these counters to check whether the current rate of subscriptions or method calls exceeds any established limits. If it does, an exception is thrown.
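
Establishing your own limits against those counters is a matter of calling DDPRateLimiter.addRule on the server. As a quick sketch, with a purely illustrative method name and numbers:

import { DDPRateLimiter } from "meteor/ddp-rate-limiter";

// Allow at most five calls to "expensiveMethod" per second:
DDPRateLimiter.addRule({
  type: "method",
  name: "expensiveMethod"
}, 5, 1000);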

However, there’s a third type of DDP interaction that can be abused by malicious users: the DDP connection process itself.

Meteor uses SockJS to handle its WebSocket connections. You’ll find the actual code that handles these connections in the ddp-server package. The DDP server extends this connection hooking functionality and registers callbacks for handling DDP-specific WebSocket messages.

If you look closely at the "connection" event handler, you’ll notice that it makes no attempt to rate limit the number of connection requests.

In fact, the DDPRateLimiter doesn’t even have a "connection" type. This means that a single user can repeatedly spam a Meteor server with DDP/WebSocket connection requests, all of which will be happily accepted until the server runs out of resources and chokes.

If abused, this can bring down a Meteor server in seconds.

Protecting Your Application

Sikka, like DDPRateLimiter, is another Meteor package designed to enforce rate limiting. Unfortunately, Sikka also won’t help protect against this particular kind of attack.

Sikka works by hooking into the processMessage method found in Meteor’s livedata server. However, processMessage is called only after a WebSocket connection has been established. From within this method, we have no way of preventing abusive connection requests.


As discussed, DDPRateLimiter in its current form won’t prevent this type of Denial of Service attack.

Thinking out loud, one potential solution may be to modify Meteor core and add a third rate limiting type: "connection". This new rate limit type could be incremented and validated within each "connection" event:

self.server.on('connection', function (socket) {
  // If the rate limiter package is in use, apply our
  // hypothetical "connection" rate limiting type:
  if (Package['ddp-rate-limiter']) {
    var DDPRateLimiter = Package['ddp-rate-limiter'].DDPRateLimiter;
    var rateLimiterInput = {
      type: "connection",
      connection: socket
    };

    // Count this connection attempt and check it against
    // any established limits:
    DDPRateLimiter._increment(rateLimiterInput);
    var rateLimitResult = DDPRateLimiter._check(rateLimiterInput);
    if (!rateLimitResult.allowed) {
      // Drop the connection if a limit has been exceeded:
      return socket.end();
    }
  }
  ...

If this technique works, extending the DDPRateLimiter in this way would give Meteor developers the power and flexibility to establish connection rate limits that make sense for their own applications.

Maybe this kind of functionality could even be implemented as a Meteor package, if the "connection" event listeners could be correctly overridden.


The surefire and recommended way of preventing this kind of attack is moving your Meteor application behind a proxy or load balancer like NGINX or HAProxy. Implementing rate limiting using these tools is fairly simple, and very effective.

Rate limiting at the network level means that excessive requests to the /websocket HTTP endpoint will fail, which stops the WebSocket handshake process dead in its tracks, killing the connection before it hits your Meteor server.
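
As a rough sketch, rate limiting that endpoint with NGINX might look something like this. The zone name, rate, burst value, and upstream address are all illustrative, and the location pattern assumes SockJS’s default URLs, which end in /websocket:

# Track request rates per client IP, allowing five requests per second:
limit_req_zone $binary_remote_addr zone=websocket:10m rate=5r/s;

server {
    listen 80;

    # SockJS negotiates WebSocket connections through URLs
    # ending in /websocket:
    location ~ /websocket$ {
        limit_req zone=websocket burst=10 nodelay;

        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location / {
        proxy_pass http://localhost:3000;
    }
}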

I highly recommend moving your Meteor applications behind some kind of proxy layer, rather than exposing them directly to the world.

Final Thoughts

Denial of Service attacks in the Meteor world can be a scary thing to think about. The use of WebSockets and queue-based processing of DDP messages means that when they hit, they hit hard.

Fortunately, with the proper precautions, naive Denial of Service attacks are totally avoidable! Be sure to always rate limit your methods and publications, and move your application behind a proxy that does the same.

Transitioning to Modules With Global Imports

Written by Pete Corey on May 9, 2016.

Meteor 1.3 is upon us! It brings with it promises of better testability, reusability, and debuggability, all thanks to the ES6 module system.

Unfortunately, a wholesale transition into the 1.3-style of doing things may take a huge amount of work, depending on the size of your application. Where will you find the time to refactor your entire application into modules?


Even a partial transition can be frustrating.

Imagine you have a collection called MyCollection that you’ve decided to move into a module. This process is simple enough. After your refactor, you might have a module located at /imports/lib/mycollection that exports MyCollection:

import { Mongo } from "meteor/mongo";
export default new Mongo.Collection("mycollection");

The difficulty comes in when you realize that the rest of your 1.2-style application still assumes that this collection will be accessible as a global reference.

When you run your application, you’ll be greeted by countless errors complaining that MyCollection is not defined throughout your application:

ReferenceError: MyCollection is not defined

One possible solution to this problem is to find each file that references this collection and import the MyCollection module within it:

import MyCollection from "/imports/lib/mycollection";
...
MyCollection.find(...);

However, if your application references this collection throughout dozens or hundreds of files, this can quickly get out of hand. The seemingly simple process of moving MyCollection into a module has suddenly turned into a hydra requiring you to edit files throughout your entire project.


Another solution to this problem is to import MyCollection globally on both your client and your server. This eliminates the need to modify potentially hundreds of files throughout your project, and lets your legacy 1.2 code exist in blissful harmony with your 1.3 modules.

But how do we import modules globally? It’s not as simple as just importing them in your project’s main.js files. After all, ES6 import calls are transpiled down to var declarations by Babel, and var scope is limited to the file it was declared in.

The key is to import your module into your local scope and then explicitly assign it to a global reference. Using this technique, your client/main.js and server/main.js would look something like this:

...
import _MyCollection from "/imports/lib/mycollection";
MyCollection = _MyCollection;

If your collection is a named export, rather than a default export, you can assign it to a global reference like this:

import { MyCollection as _MyCollection } 
       from "/imports/lib/mycollection";
MyCollection = _MyCollection;

Transpiled down to ES5, our import looks something like this:

var _mycollection = require("/imports/lib/mycollection");
MyCollection = _mycollection.MyCollection;

Notice that we’re reassigning the locally scoped _mycollection to the global MyCollection reference. Now, your old 1.2 style code can continue to reference MyCollection as a global.

Happy refactoring!