Using Apollo Client with Elixir's Absinthe

Written by Pete Corey on Apr 10, 2017.

Last week I wrote about how I’m using Create React App in conjunction with the Phoenix framework to get off the ground incredibly quickly when building single page applications.

In that post I gave a teaser of how you can integrate your React front-end with your Elixir/Phoenix back-end using GraphQL and the Apollo client.

Let’s dive deeper into how we can wire up an Apollo driven React application to our Absinthe/Phoenix back-end.

Getting Started

Before we get started, let’s make sure we’re on the same page.

The Apollo client is a powerful GraphQL client with a variety of framework-specific integrations. Absinthe is an Elixir-powered GraphQL server implementation with built-in support for the Phoenix framework.

Apollo + Absinthe = 😍

In the past I’ve talked about the basics of getting started with Apollo client and Absinthe. Be sure to check out those instructions, and read through the Apollo documentation and the Absinthe guides if you’re unfamiliar with either tool.

For this article, I’ll assume you’re using a project setup similar to the one I described last week. That is, you’re running a React front-end application served through the Create React App tooling that connects to a Phoenix back-end application.

Once that’s all set up, we can spin up our React front-end by navigating to our React application’s folder and running:


npm start

Similarly, we can spin up our Phoenix back-end server by navigating to our Elixir project and running:


mix phoenix.server

Out of the box, our React application will run on port 3000, and our Phoenix server will run on port 4000.

Wiring Our Front-end to Our Back-end

As we mentioned last week, we need a way to tell our React application how to communicate with our back-end Phoenix application. We do this by giving our front-end the URI of our GraphQL endpoint.

In different environments, this endpoint might change. That is, our staging environment might point to a different GraphQL endpoint than our production environment.

This means we can’t hardcode these endpoints into our front-end code.

So what do we do?

Thankfully, Create React App lets us pass custom environment variables into our front-end applications as long as they’re prefixed with REACT_APP_. These environment variables are passed into the application by the build tool and can be accessed through the process.env object.

Let’s assume that our GraphQL endpoint will be passed in through the REACT_APP_GRAPHQL_URI environment variable. We can use this to build Apollo’s network interface:


const networkInterface = createNetworkInterface({
    uri: _.get(process.env, "REACT_APP_GRAPHQL_URI"),
});
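
From there, as a rough sketch (assuming Apollo Client 1.x, react-apollo, and a root App component living at ./App), the network interface gets handed to an ApolloClient instance and threaded through the component tree with ApolloProvider:


import React from "react";
import ReactDOM from "react-dom";
import ApolloClient, { createNetworkInterface } from "apollo-client";
import { ApolloProvider } from "react-apollo";
import App from "./App"; // hypothetical root component

// Point Apollo at whatever GraphQL endpoint was provided at build time.
const networkInterface = createNetworkInterface({
    uri: process.env.REACT_APP_GRAPHQL_URI
});

const client = new ApolloClient({ networkInterface });

// Make the client available to every component in the tree.
ReactDOM.render(
    <ApolloProvider client={client}>
        <App />
    </ApolloProvider>,
    document.getElementById("root")
);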

Now when we spin up our React application, we need to be sure to set REACT_APP_GRAPHQL_URI, otherwise our call to createNetworkInterface will fail:


REACT_APP_GRAPHQL_URI="http://localhost:4000/graphql" npm start

Now our React application knows where to find our Absinthe-powered GraphQL server.

Perfect!

Using a Custom Watcher… Partially

While our setup is now fully functional, having to manage two different development servers is cumbersome. Instead, let’s write a custom Phoenix watcher that spins up our Create React App development server whenever we start our Phoenix server.

We can do this by adding a new watcher to our Endpoint configuration in config/dev.exs:


config :hello_create_react_app, HelloCreateReactApp.Endpoint,
  ...
  watchers: [npm: ["start", cd: Path.expand("priv/hello_create_react_app/")]]

Now whenever we fire up our Phoenix server with mix phoenix.server or even iex -S mix phoenix.server, our Create React App development server (npm start) will spin up as well.

Remember, we still need to set REACT_APP_GRAPHQL_URI!


REACT_APP_GRAPHQL_URI="http://localhost:4000/graphql" mix phoenix.server

Fantastic!


Unfortunately, there’s currently a problem with this solution.

For some reason, when Create React App’s npm start command is executed through a watcher (which ultimately boils down to a call to System.cmd("npm", ["start"], ...)), killing the Phoenix server will not kill the Create React App development server running in the background.

Killing your Phoenix server and trying to spin it up again will give you this error from the npm start command:


Something is already running on port 3000.

You’ll need to manually find and kill the orphaned node process before restarting the Phoenix server.

I believe this problem is related to this issue on GitHub. Hopefully it will be fixed soon.

Integrating with our Release Manager

We know that we can point our React front-end to different back-ends with our REACT_APP_GRAPHQL_URI environment variable, but how do we automate this?

Is there a way to incorporate this into our release process?

If you’re using edeliver to generate your releases, you’re in luck. Your edeliver configuration file can be customized to set REACT_APP_GRAPHQL_URI according to your build target.

Your edeliver configuration (.edeliver/config) is really just a bash script that’s executed before each edeliver task. This means that we can conditionally set REACT_APP_GRAPHQL_URI based on the environment we’re building for:


if [[ "$DEPLOY_ENVIRONMENT" = "production" ]]
then
  REACT_APP_GRAPHQL_URI="http://production/graphql"
fi

if [[ "$DEPLOY_ENVIRONMENT" = "staging" ]]
then
  REACT_APP_GRAPHQL_URI="http://staging/graphql"
fi

We can then add a compile hook to build our React application’s static bundle:


pre_erlang_clean_compile() {
  status "Installing NPM dependencies"
  __sync_remote "
    [ -f ~/.profile ] && source ~/.profile
    set -e

    cd '$BUILD_AT/priv/hello_create_react_app'
    npm install $SILENCE
  "

  status "Building React application"
  __sync_remote "
    [ -f ~/.profile ] && source ~/.profile
    set -e

    cd '$BUILD_AT/priv/hello_create_react_app'
    npm run build $SILENCE
  "
}

Now our React application should be built into priv/hello_create_react_app/build/, which is where our Phoenix application expects to find it.

Additionally, when it’s served by our Phoenix application it will connect to either our staging or production GraphQL endpoint depending on the environment the particular release was built for.

Now that our release manager is handling our front-end build process, all we need to worry about is building our application.

Final Thoughts

Once again, I’m very happy with this setup!

So far the combination of using React with Apollo and communicating with my Phoenix server using Absinthe has been a very powerful and productive combination.

As I mentioned last time, building your front-end application independently from your back-end application can offer up interesting optimizations if you choose to go that route.

Alternatively, you can configure your Create React App to be served as a static asset from your Phoenix application. Hopefully this article has shown that that’s a fairly painless process.

Using Create React App with Phoenix

Written by Pete Corey on Apr 3, 2017.

Contrary to what I said a couple weeks ago, I’ve decided to build the front-end of Inject Detect as a React application.

Wanting to get off the ground quickly, I decided to use Create React App to generate my React application’s boilerplate and tooling. At the same time, I still wanted my React app to live within the context of a Phoenix application.

How could I combine these two worlds?

It took a little fiddling, but I’ve landed on a configuration that is working amazingly well for my needs. Read on to find out how I integrate all of the magic of Create React App with the power of a Phoenix application!

Front-end Demolition

To start, we won’t need Brunch to manage our project’s front-end static assets.

When creating a new Phoenix project, either leave out Brunch (mix phoenix.new ... --no-brunch), or remove brunch from your existing project by removing the Brunch watcher from your config/dev.exs configuration, and by removing brunch-config.js, package.json, and the contents of node_modules and web/static.

We also won’t need any of the initial HTML templates a new Phoenix project generates for us.

You can skip the generation of these initial files by giving mix phoenix.new a --no-html flag, or you can just ignore them for now.

Now that we’ve torn a massive hole in the front-end of our Phoenix application, let’s fill it back in with a React application!

Creating a React App

We’ll be using Create React App to generate our React application’s boilerplate and to wire up our front-end tooling.

We want Create React App to work largely without any interference or tweaking, so we’ll create our new front-end application within the /priv folder of our Phoenix project:


cd priv
create-react-app hello_create_react_app

What you choose to call your React project is entirely up to you. In this case, I chose to replicate the name of the overarching Phoenix project.

To give myself a little extra control over the base index.html file generated by Create React App, I chose to immediately eject my React application at this point:


cd hello_create_react_app
npm run eject

You now have a fully functional front-end application. You can spin up your development server by running npm start within the priv/hello_create_react_app folder.

Your React application should automatically start serving from port 3000!

Front-end Meets Back-end

Now we’re in a strange spot.

In development, our React application spins up and runs on port 3000, but it has no out-of-the-box way to interact with our back-end. Similarly, our Phoenix application runs on port 4000, but it no longer serves any kind of front-end.

How do we join the two?

The answer is fairly simple. Our React application should be built in such a way that it can be told to communicate with any back-end service over whatever medium we choose.

We configure this communication by passing in the URL of our back-end service through environment variables. The Create React App tooling will populate process.env for us with any environment variables prefixed with REACT_APP_.

With this in mind, let’s connect our React application to our back-end using Apollo client:


import _ from "lodash";
import { createNetworkInterface } from "apollo-client";

const networkInterface = createNetworkInterface({
    uri: _.get(process.env, "REACT_APP_GRAPHQL_URL") || "http://localhost:4000/graphql"
});

Locally, we can set REACT_APP_GRAPHQL_URL when we spin up our Create React App tooling:


REACT_APP_GRAPHQL_URL="http://localhost:1337/graphql" npm start

During staging and production builds, we can set this environment variable to either our staging or production back-ends, respectively.

Lastly, we’ll need to configure our Phoenix server to allow requests from both of our development servers. Add the CORSPlug to your endpoint just before you plug in your router:


plug CORSPlug, origin: ["http://localhost:3000", "http://localhost:4000"]

For more flexibility, pull these CORS origins from an environment-based configuration.

Deployment Freedom

Because our front-end application exists completely independently from our Phoenix application, we can deploy it anywhere.

Running npm run build in our priv/hello_create_react_app folder will build our application into a static bundle in the priv/hello_create_react_app/build folder.

This static bundle can be deployed anywhere you choose to deploy static assets, such as S3 or even GitHub Pages!

Because your front-end content is served elsewhere, your back-end Phoenix application will only be accessed to resolve queries or perform mutations, rather than to fetch every static asset requested by every user.

Out of the box, this inherent separation between the front-end and the back-end offers nice scaling opportunities for your application.

Serving React from Phoenix

While serving your front-end separately from your back-end can be a powerful tool, it’s often more of a burden when you’re just getting out of the gate.

Instead, it would be nice to be able to serve your front-end React application from your Phoenix application.

Thankfully, this is a breeze with a few configuration changes.

First things first, we want our application’s root URL to serve our React application, not a Phoenix template. Remove the get "/" route from router.ex.

Now we’ll want to reconfigure our Phoenix endpoint to serve static assets from our React application’s build folder, not priv/static:


plug Plug.Static,
  at: "/", 
  from: "priv/hello_create_react_app/build/",
  only: ~w(index.html favicon.ico static)

Unfortunately, navigating to our root URL doesn’t load our application. However, navigating to /index.html does!

We’re almost there.

We need to tell Phoenix to load and serve static index.html files when they’re available. The plug_static_index_html package makes this a one-line change.

Just before we wire up Plug.Static in our endpoint, configure the application’s root URL to serve index.html:


plug Plug.Static.IndexHtml,
  at: "/"

That’s it! We’re serving our React application statically from within our Phoenix application!

Now any deployments of our Phoenix application will contain and serve the last built React application in its entirety. Be sure to set your REACT_APP_ environment variables during your Phoenix release build process!

Final Thoughts

I’ve been developing with this setup for the past few weeks, and I’m extremely satisfied with it so far.

Using Create React App gets me off the ground quickly with a functioning boilerplate and fantastic tooling. I don’t have to waste time restructuring my Phoenix application or tweaking build configurations.

The ability to quickly integrate my React application with an Elixir-powered Phoenix back-end means that I don’t have to sacrifice speed of development for power or scalability.

Intercepting All Queries in a Meteor Application

Written by Pete Corey on Mar 27, 2017.

It’s probably no secret to you that I’m working on a new project called Inject Detect. As I mentioned last week, as a part of that project I need a way to intercept all MongoDB queries made by a Meteor application.

To figure out how to do this, we’ll need to spend a little time diving into the internals of Meteor to discover how it manages its MongoDB collections and drivers.

From there, we can create a Meteor package that intercepts all queries made against all MongoDB collections in an application.

Hold on tight, things are about to get heavy.

Exploring MongoInternals

One way of accomplishing our query interception goal is to monkey patch all of the query functions we care about, like find, findOne, remove, update, and upsert.

To do that, we first need to find out where those functions are defined.

It turns out that the functions we’re looking for are defined on the prototype of the MongoConnection object which is declared in the mongo Meteor package:


MongoConnection.prototype.find = function (collectionName, selector, options) {
  ...
};

When we instantiate a new Mongo.Collection in our application, its queries are ultimately routed through an instance of this MongoConnection object.

So we want to monkey patch functions on MongoConnection. Unfortunately, the MongoConnection object isn’t exported by the mongo package, so we can’t access it directly from our own code.

How do we get to it?

Thankfully, MongoConnection is eventually assigned to MongoInternals.Connection, and the MongoInternals object is globally exported by the mongo package.

This MongoInternals object will be our entry-point for hooking into MongoDB queries at a very low level.

Monkey Patching MongoInternals

Since we know where to look, let’s get to work monkey patching our query functions.

Assuming we have a package already set up, the first thing we’ll do is import MongoInternals from the mongo Meteor package:


import { MongoInternals } from "meteor/mongo";

Let’s apply our first patch to find. First, we’ll save the original reference to find in a variable called _find:


const _find = MongoInternals.Connection.prototype.find;

Now that we’ve saved off a reference to the original find function, let’s override find on the MongoInternals.Connection prototype:


MongoInternals.Connection.prototype.find = function(collection, selector) {
    console.log(`Querying "${collection}" with ${JSON.stringify(selector)}.`);
    return _find.apply(this, arguments);
};

[Image: A diagram of a monkey patch.]

We’ve successfully monkey patched the find function to print the collection and selector being queried before passing off the function call to the original find function (_find).

Let’s try it out in the shell:


> import "/imports/queryMonkeyPatcher"; // This is our monkey patching module
> Foo = new Mongo.Collection("foos");
> Foo.find({bar: "123"}).fetch();
Querying "foos" with {"bar": "123"}.
[{_id: "...", bar: "123"}]

As long as we import our code before we instantiate our collections, all calls to find in those collections will be routed through our new find function!

Now that we know our find monkey patch works, we can go ahead and repeat the procedure for the rest of the query functions we’re interested in.
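
As a rough sketch, assuming the remaining query functions take the collection name and selector as their first two arguments just like find does, we could wrap them all in one loop:


import { MongoInternals } from "meteor/mongo";

// Hypothetical sketch: wrap each remaining query function in a logging shim.
["findOne", "update", "remove", "upsert"].forEach(function(name) {
    const original = MongoInternals.Connection.prototype[name];

    MongoInternals.Connection.prototype[name] = function(collection, selector) {
        console.log(`${name} on "${collection}" with ${JSON.stringify(selector)}.`);
        return original.apply(this, arguments);
    };
});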

How is This Useful?

This kind of low-level query interception can be thought of as collection hooks on steroids.

Josh Owens uses this kind of low-level hooking to explain every query made by an application with his Mongo Explainer Meteor package.

Similarly, Inject Detect will make use of this query-hooking-foo by collecting information on all queries made by your application and sending it off to be inspected for potential NoSQL Injection attacks.

This kind of low level hooking is an incredibly powerful tool that can accomplish some amazing things if used correctly.