
Migrating a Node.js app to PWS

In a prior post we built a responsive web app atop Node.js, Express and Heroku… To give us more time to focus on building the application, we decided to deploy using a PaaS (Platform as a Service). Further limiting our choices, we wanted something with a reasonable free tier to support experimentation.

I was eager to give Heroku a spin since many respected colleagues have raved about it in the past. The thorough documentation, CLI, git-based workflow support and native buildpacks made getting our idea published on the Internet quick and easy. Heroku also came up with The Twelve-Factor App, which practically became a holy grail of DevOps and best practice for anyone shipping modern applications.

Recently I started working for Pivotal focused on cloud native architecture and service delivery via Cloud Foundry (PCF). Specifically, the Platform Reliability Team helps customers understand, adopt and apply Site Reliability Engineering principles within their organizations.

To better understand the PCF developer experience, I decided to see how hard it would be to migrate my Heroku-based application to Pivotal Web Services (PWS). PWS, or P-Dubs as we affectionately call it, is a fully-managed version of PCF hosted atop public cloud infrastructure very similar to Heroku. There’s a polished UI, CLI, extensive documentation, and buildpacks for many popular languages at reasonable prices.

Getting Started

To start experimenting, sign up for a free PWS account. This is quick and easy (email, password, confirm SMS, define your organization name). With that, you can walk through their tutorial to push your first app.

The tutorial is based on a simple app using the Java Buildpack (JBP), but lets you install and exercise the CLI. You don’t have to worry too much about buildpacks since they are typically auto-detected; in our case, the appropriate version for Node.js will be used.


Not having migrated from one PaaS to another before, I wasn’t entirely sure what obstacles I would encounter. The good news is, it was fairly painless. Following twelve-factor principles from the start meant no application refactoring was required, and configuration was easily passed around through the environment.

The first new concept I needed to grasp was a deployment manifest. This is a YAML configuration, typically named manifest.yml, allowing you to control almost every aspect of your application’s deployment. While something extra to keep track of, this is similar to other PaaS-specific metadata like Heroku’s Procfile.

It’s best to start small, then iterate to add parameters as needed… To get started, I just grabbed the skeleton from the tutorial app’s repo and adjusted a couple parameters:

applications:
- name: chowchow
  memory: 512M
  instances: 1
  random-route: true

random-route enables behavior similar to Heroku’s default of randomly generating the host part of the application’s FQDN. Without it, Cloud Foundry will try to use the application name…which may be what you want, or may lead to collisions in a shared namespace like the free tier’s top-level application domain.

Another thing I needed to do was effectively pin my Node and NPM versions. I had neglected that when initially deploying to Heroku, but it’s a good idea for any production app lest deploying a build suddenly pull in unexpected (or expected but broken) dependencies. To do that, I simply added a couple lines to package.json (see the full version):

"engines": {
  "node": "8.9.4",
  "npm": "5.6.0"
}
Technically you don’t have to lock down npm; if left undefined, it will use the version that ships with the selected node. I like to be explicit.
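If you ever want to confirm what the buildpack actually installed, the pinned version is easy to compare against the live runtime. A quick hypothetical sanity check (not part of the app itself):

```javascript
// Hypothetical sanity check: compare the running node against the
// version pinned in package.json's engines field
const pinned = '8.9.4' // engines.node from package.json
const running = process.version.replace(/^v/, '') // e.g. "8.9.4"

if (running !== pinned) {
  console.log(`warning: running node ${running} but engines pins ${pinned}`)
}
```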

Going Live

With that, I felt like I was in pretty good shape based on the tutorial… After a cf login -a against the PWS API endpoint (using the email address and password used when signing up), I fired off cf push to start a deployment, but got an error message:

The app upload is invalid: Symlink(s) point outside of root folder

Luckily, a little Google engineering quickly led to answers, with a few related suggestions. The one I went with was simply creating .cfignore at the top level of my application repo and adding the node_modules directory. With that, cf push worked like magic… reading my deployment manifest, auto-detecting the proper buildpack, and bringing up a public instance with a random route name:

Pushing from manifest to org deadlysyn-org / space development as x@y.z...
Using manifest file ./chowchow/manifest.yml
Getting app info...
Updating app with these attributes...
  name:                chowchow
  path:                ./chowchow
  disk quota:          1G
  health check type:   port
  instances:           1
  memory:              512M
  stack:               cflinuxfs2

Updating app chowchow...
Mapping routes...
Comparing local files to remote cache...
Packaging files to upload...
Uploading files...


   -----> Nodejs Buildpack version 1.6.18
   -----> Installing binaries
          engines.node (package.json): 8.9.4
          engines.npm (package.json): 5.6.0

Waiting for app to start...

name:              chowchow
requested state:   started
instances:         1/1
usage:             512M x 1 instances
last uploaded:     Sat 24 Feb 18:05:09 EST 2018
stack:             cflinuxfs2
buildpack:         nodejs
start command:     node app.js

     state     since                  cpu    memory      disk      details
#0   running   2018-02-24T23:05:40Z   0.0%   0 of 512M   0 of 1G  

We now have a working Node/Express app available at its randomly-generated route, and like Heroku it is automatically served securely since the app instances are fronted by a routing tier doing TLS offloading (using a wildcard cert for the shared domain). Pretty neat!

You can get status of the application using the CLI:

$ cf apps
Getting apps in org deadlysyn-org / space development as x@y.z...

name       requested state   instances   memory   disk   urls
chowchow   started           1/1         512M     1G

The UI is also lightweight and responsive:


Cloud Foundry Specifics

If you want to go deeper on Cloud Foundry specifics, the documentation is the place to start… A couple key concepts we glossed over above were routes and spaces. Since they are so central to hosting an application, I wanted to briefly describe both of those here.

The term route within the PCF ecosystem usually refers to the hostname portion of a FQDN (Fully Qualified Domain Name). In our example above, the route was chowchow-fantastic-waterbuck, the hostname portion of the application’s FQDN under the shared domain. It’s also possible to have context-specific routes, which allow different microservices hosted under the same top-level domain name to be reached via URIs: /foo and /bar under one domain can be routed to different applications. This is highly flexible, and allows you to easily scale out specific parts of your service regardless of how you choose to present it to the Internet.
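To make context-path routing concrete, here’s a toy dispatch sketch: one shared domain, with URI prefixes mapped to different apps. The route table and app names are made up for illustration; this is not a Cloud Foundry API, just the idea.

```javascript
// Toy model of context-path routing: URI prefixes map to different
// backend applications behind one shared domain (names are hypothetical)
const routes = {
  '/foo': 'app-foo',
  '/bar': 'app-bar'
}

function dispatch(uri) {
  // find the first registered prefix matching the request URI
  const prefix = Object.keys(routes).find(p => uri.startsWith(p))
  return prefix ? routes[prefix] : 'no route'
}

console.log(dispatch('/foo/users')) // app-foo
console.log(dispatch('/baz'))      // no route
```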

Spaces are part of Cloud Foundry’s authorization scheme. This is a hierarchy… Every project will have one or more organizations (in our example this was deadlysyn-org), which in turn have one or more spaces, each of which has one or more users and applications, all governed by RBAC (Role Based Access Control). We deployed to the development space, which was created for us by default, but this is again as flexible as you need it to be in complex multi-tenant environments.

You can manage all of this from the CLI, and within the web UI it’s easy to see what organization and space you are working in. Assuming you have the right permissions, you can also create new spaces (perhaps for other environments like staging and production, or for service teams).



I encountered one oops during this journey… While cf push worked, in my haste to get everything going I’d forgotten to properly set up the environment. We could add environment variables to our deployment manifest, but that is really for non-sensitive information (think things like NODE_ENV).

For sensitive bits you don’t want checked into source control, keep them out of the deployment manifest… The mechanism I used instead was cf set-env. This is similar to Heroku’s config vars. You can define variables at deploy time, adjust them while the service is running, and they persist across deployments (so you don’t lose settings when orchestrating new instances).

In our case, we just need to ensure a SECRET variable exists in the environment so express-session can pick up a proper session key. Setting environment variables is easy via the CLI:

$ cf set-env chowchow SECRET someRandomString
Setting env variable 'SECRET' to 'someRandomString' for app chowchow in org deadlysyn-org / space development as x@y.z...
TIP: Use 'cf restage chowchow' to ensure your env variable changes take effect 

Look at that, it even reminds us how to get our app to pick up the change… Who says CLIs can’t be friendly? Let’s ensure our new variable was properly set, then restage the application:

$ cf env chowchow
Getting env variables for app chowchow in org deadlysyn-org / space development as x@y.z...

System-Provided:
{
 "VCAP_APPLICATION": {
  "application_id": "f35f84b5-39b2-4937-bc39-4c04ac539e87",
  "application_name": "chowchow",
  "application_uris": [...],
  "application_version": "bc1591bc-7d40-49be-86a3-e4b9fd745ed7",
  "cf_api": "",
  "limits": {
   "disk": 1024,
   "fds": 16384,
   "mem": 512
  },
  "name": "chowchow",
  "space_id": "900772c4-cccd-4255-8e21-f599733f4b86",
  "space_name": "development",
  "uris": [...],
  "users": null,
  "version": "bc1591bc-7d40-49be-86a3-e4b9fd745ed7"
 }
}

User-Provided:
SECRET: someRandomString

No running env variables have been set

No staging env variables have been set

$ cf restage chowchow
Waiting for app to start...

name:              chowchow
requested state:   started
instances:         1/1
usage:             512M x 1 instances
last uploaded:     Sat 24 Feb 18:05:09 EST 2018
stack:             cflinuxfs2
buildpack:         nodejs
start command:     node app.js

     state     since                  cpu    memory          disk          details
#0   running   2018-02-25T04:49:05Z   0.0%   14.3M of 512M   73.6M of 1G

With that, we have a properly configured version of our app up and running! The maturity of the platform, excellent documentation, and large community made it easy to get started and enabled us to find answers when we got stuck.

Best practices like Twelve-Factor ensured that everything mostly just worked after adjusting the runtime environment. We did have to learn about deployment manifests and tweak package.json, but those changes are small, well-documented, and not unlike the specifics we would have to learn when embracing and customizing any PaaS.

Overall, I’m happy to report migrating our simple application to a new PaaS was relatively straightforward… though admittedly, the simple app used here just scratched the surface of PWS capabilities. They support custom DNS domains, SSL as a service, and a variety of service brokers for backing stores and other dependencies more complex services would require.

Have you experimented with Pivotal Web Services?



Perl popularized the concept There’s More Than One Way To Do It (which, depending on who you ask, is either awesome or just confusing). JavaScript is similarly good at giving budding programmers 2^32 ways of doing most anything. In a prior post we briefly evaluated how to do simple HTTP GETs for querying an API… as an homage to Perl monks everywhere, let’s explore how There’s More Than One Way To GET It (or POST, PUT…whatever).

NOTE: I tend not to use semi-colons in JavaScript…this is not a desire to offend or participate in any holy wars. Feel free to imagine semi-colons anywhere you see fit, or better yet watch this monologue that is both amusing and enlightening.

NOTE’: Please read this, or anything I ever create, with a sense of humor. :-)


As a convenient use case, we’ll build a minimalist Bitcoin price checker because, well, everyone’s doing Bitcoin stuff and peer pressure is real. We’ll use the Coindesk API.

For the sake of brevity, I won’t repeat this again in all the examples, but we declare some important constants in the top of our JavaScript which point to the API endpoint and element we’ll be updating with the current Bitcoin price:

const API = ''
const PRICE = document.querySelector('#price')

For the examples below, common HTML boilerplate is also used… here’s what that looks like so the JavaScript makes more sense:

        <link rel="stylesheet" href="style.css">
        <script src=""></script>
        <script src=""></script>
        <h1>Bitcoin Price Checker</h1>
        <ul>
            <li id="btnXHR">XHR</li>
            <li id="btnFetch">Fetch</li>
            <li id="btnJquery">jQuery</li>
            <li id="btnAxios">Axios</li>
        </ul>
        <p id="price">Push a button to get started...</p>
        <script src="bitcoinGetter.js"></script>

Yep, that’s it… Quite hideous, but the UI is not the point and we don’t want it to be a distraction. In fact, this should be simpler – I started getting sucked into the black hole that is endless tweaking of HTML and CSS. :-) All you need to note is that we set up some unique element ids to make our selector queries easier, and import both jQuery and Axios which we’ll use later on.

Or just use the source, Luke (or Lukette, or whatever your name, hair color, country of origin, personal pronoun, etc. happens to be – source is equally good for us all).


XHR

First, let’s travel back in time… a time when AJAX (Asynchronous JavaScript And XML) was new. It was clearly named in XML’s heyday, since today it would be called AJAJ (what doesn’t spew JSON these days?). This was a time when Single Page Apps and other now-commonly-accepted hotness did not exist, so the ability to make asynchronous HTTP requests got people very excited.

Strictly speaking, the X doesn’t just stand for XML, but more specifically XMLHttpRequest. This built-in object is still used to make HTTP requests in JavaScript today. Thankfully, modern implementations speak JSON since it makes more sense in the JavaScript ecosystem. As I’m sure you noticed, XMLHttpRequest is much nicer to bandy about if we abbreviate it XHR, so that’s what is commonly done (and who doesn’t love acronyms).

That’s great trivia fodder you say, but how does it work? Here you go:

// XHR

const btnXHR = document.querySelector('#btnXHR')

btnXHR.addEventListener('click', function() {
    let XHR = new XMLHttpRequest()
    XHR.onreadystatechange = function() {
        if (XHR.readyState == 4 && XHR.status == 200) {
            PRICE.textContent = JSON.parse(XHR.responseText).bpi.USD.rate + ' USD'
        }
    }'GET', API)
    XHR.send()
})

Nothing surprising here, assuming you understand at least a few things:

  • We need to instantiate a new XHR object before we can do anything useful (hence the use of new).
  • onreadystatechange (love that name) is an event handler called any time our new XHR’s readyState attribute changes.
  • readyState has five possible states (0-4), with 4 signifying the request is complete.

If you find the open and send confusing, the docs help…but just think of open as setting up the request, and send, well, sending it. Once sent, the state changes will propagate, and assuming it completes (and we get a 200 OK) PRICE.textContent is updated.

Why start with XHRs? They are the foundational element upon which everything below is built!
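The parsing step is worth a closer look on its own. Below is the same transformation the click handler applies to XHR.responseText, pulled into a function and run against a hypothetical sample shaped like Coindesk’s payload (the price is made up):

```javascript
// Hypothetical sample shaped like the Coindesk response body
const sampleBody = '{"bpi":{"USD":{"code":"USD","rate":"10,000.00"}}}'

// Same extraction the XHR handler performs on XHR.responseText
function extractPrice(responseText) {
  return JSON.parse(responseText).bpi.USD.rate + ' USD'
}

console.log(extractPrice(sampleBody)) // 10,000.00 USD
```

Factoring this out also makes it trivial to unit test without any network in play.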


Fetch

Another native way (aside from https.get which we discussed in the past) is fetch. This is a newer addition to the language, as evidenced by its use of promises. Oooo ahhh promises, you say! Let’s see it in action before we get to an unfortunate secret (if secret is another name for behavior described in the documentation):

// Fetch

const btnFetch = document.querySelector('#btnFetch')

btnFetch.addEventListener('click', function() {
    fetch(API)
        .then(function(res) {
            res.json()
                .then(function(data) {
                    PRICE.textContent = data.bpi.USD.rate + ' USD'
                })
        })
        .catch(function(err) {
            console.log(err)
        })
})

Pretty neat, huh? The main things to note here are the use of .then and .catch to handle promises, as well as the built-in .json() method which replaces our use of JSON.parse() above. Nice, clean, modern… but not well supported (yet). The Fetch API docs reveal IE is completely out in the cold at this point, which might be a show-stopper for you.
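If you want to play with the then-chaining shape without a browser or network, a Promise-based stand-in works fine. fakeFetch and its sample data below are entirely made up; only the chaining pattern mirrors the real Fetch API:

```javascript
// Stand-in for fetch(): resolves with an object exposing .json(),
// mimicking the Response interface (sample data is hypothetical)
function fakeFetch() {
  return Promise.resolve({
    json: () => Promise.resolve({ bpi: { USD: { rate: '10,000.00' } } })
  })
}

fakeFetch()
  .then(res => res.json())               // first promise: the "response"
  .then(data => console.log(data.bpi.USD.rate + ' USD')) // second: parsed body
  .catch(err => console.log(err))        // errors from anywhere in the chain
```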


jQuery

Good old jQuery. If there was ever a more simultaneously revered and dissed workhorse among us, I haven’t seen it…at least not since the last watercooler conversation over vim vs emacs. Though, technically, I guess that is a conversation about two equally dissed workhorses (depending whom you talk to). I digress.

While not as popular as it used to be, jQuery is still full-featured, extremely useful in some circumstances, and given past popularity you might need to support it regardless of personal opinions. For the record, I don’t have anything against it – just be sure you need to do more than make an HTTP request before pulling in a large dependency!

Here goes:

// jQuery

$('#btnJquery').click(function() {
    $.getJSON(API)
        .done(function(data) {
            $('#price').text(data.bpi.USD.rate + ' USD')
        })
        .fail(function(err) {
            console.log(err)
        })
})

Gone is the querySelector, or more accurately…it’s been disguised as $. Different syntax, but we still bind an anonymous function to a click listener. Inside, we use jQuery’s getJSON method to handle turning the HTTP response into a useful object. That is two things jQuery has going for it – no shortage of documentation or utility methods!


Axios

Last, but certainly not least, Axios is a tremendously popular (almost 40k stars on GitHub at the time of this writing) HTTP client library. It is promise-based, lightweight, and well-supported.

// Axios

const btnAxios = document.querySelector('#btnAxios')

btnAxios.addEventListener('click', function() {
    axios.get(API)
        .then(function(res) {
            PRICE.textContent = + ' USD'
        })
        .catch(function(err) {
            console.log(err)
        })
})

The promise handling looks similar to earlier examples but gone is any reference to JSON-related methods…because the library is doing the transformation automagically. Pretty sweet! Axios is also highly-configurable and supports concurrency that is actually easy to understand:

// Community example of multiple requests with Axios...

function A() {
  return axios.get(API + '/A');
}

function B() {
  return axios.get(API + '/B');
}

axios.all([A(), B()])
  .then(axios.spread(function (a, b) {
    // Safely use a and b!
  }));


So, there you have it… more ways than anyone ever needed to make HTTP requests in JavaScript. Needless to say, the examples above were all client-side, but could just as easily have been done in Node.js. For the libraries requiring imports, just replace the <script></script> tags with npm install axios (or similar) and a require.

Last but not least, I want to give a very loud shout out to Udemy’s Advanced Web Developer Bootcamp. The use case and examples here were an amalgamation of things picked up there. If you’re looking for a good overview of modern web development, it’s a great course – and I am not affiliated with Udemy or the course authors in any way…so hopefully that counts for something. :-)

Thanks for reading!

PS: My examples use USD simply because it is my local currency, and it kept the code a bit cleaner… At the time of writing, the Coindesk API can return EUR, GBP and USD. As a fun exercise, clone the repo and refactor to support your favorite, or let the user select the desired currency!

Idea To App Part 3


In the first two parts of this series we bootstrapped our idea and got the basic UI ready for a responsive webapp. Getting this part out took longer than I’d hoped because of being distracted ramping up at a new job… I feel like some of the points I wanted to share have slipped into the ether, so this will be a quicker tour than originally hoped.

Just in case you forgot, here’s the outline of our Idea to App series:

I’d originally planned a Deployment section as well…but decided this will be the last part in this series. Perhaps I’ll touch on deployment aspects of typical web and mobile apps in a future series. ChowChow is a simple app deployed atop Heroku, so there is not much to share that isn’t already in their excellent docs!

To make up for that, based on how well I manage to split time amongst work (SRE is fun enough it’s often hard to tear yourself away!), actual coding, learning new stuff and blogging… a more interesting topic may be circling back over our finished app and doing some refactoring (add more error handling, make better use of newer ES features such as promises, cleanup based on Airbnb’s style guide, etc.).

Remember to clone the repository to follow along…

Using the Environment

One of the first things we need to take care of when designing modern apps is managing sensitive data. The common practice is to read these values from the environment, which can then be injected via credential management utilities, set via CI/CD tools, etc.

Leveraging the environment isn’t limited to sensitive information you don’t want checked into version control… it can also be used to control behavior (e.g. dev vs prod) or read dynamic information like the listen address. With Node.js, we use process.env for that.

ChowChow is simple enough we don’t have much to worry about, but we do need to read the IP address and port from the environment (so things work on my laptop as well as my hosting provider). We also have a secret key used by express-session (provides lightweight session management).

var ip = process.env.IP || ''
var port = parseInt(process.env.PORT, 10) || 3000
var secret = process.env.SECRET || 'some random string'

The special address just listens on all available interfaces (which might be my laptop’s loopback or ethernet address at home, or a container’s virtual NIC atop a platform like Heroku). I could set this to and have it work just as easily at home, but that would usually break when shipping the app, so is easier to manage.

If you are concerned about binding to all available interfaces on your laptop (hopefully you run a firewall), you could use as the default and then set the IP environment variable on startup. PORT is very similar so I won’t dwell on it; just note how you can either pass in a PORT environment variable or let it default to 3000. You would set these via shell as export NAME=value or a mechanism like Heroku’s config vars.

Last but not least, secret will default to some random string just to make dev easier, but for production we’ll set the SECRET environment variable in our environment, container, build tool, or hosting provider…this way the real secret is not checked into GitHub. Easy, right?
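Since the whole pattern is just defaulting, it’s easy to factor out and test. A small sketch of the idea (readConfig is a hypothetical helper, not a function in the repo):

```javascript
// Hypothetical helper showing the environment-with-fallbacks pattern
function readConfig(env) {
  return {
    ip: env.IP || '',                     // listen everywhere by default
    port: parseInt(env.PORT, 10) || 3000,        // parseInt(undefined) is NaN, so || kicks in
    secret: env.SECRET || 'some random string'   // dev-only fallback
  }
}

console.log(readConfig({}))               // all defaults
console.log(readConfig({ PORT: '8080' })) // port from the environment
```

Passing the environment in as an argument (rather than touching process.env directly) makes the fallback logic trivial to unit test.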

Dev vs Production

I’ve really been having fun using Express, but if you are shipping a real app one of the first things you’ll need to change (it’s no secret, all the docs make it clear!) is express-session’s MemoryStore. It’s a great starting point for prototyping, but will leak memory in production (from what I understand, it just doesn’t bother with expiry).

There are a TON of options for backing stores (you might, for example, want sessions stored in MongoDB or a SQL database), but sticking to our theme of simplicity, memorystore works very similarly to the default without the memory leaks. Perfect! Let’s configure that for ChowChow:

app.use(session({
  store: new memstore({
      checkPeriod: 3600000 // 1 hour in ms
  }),
  resave: false,
  saveUninitialized: false,
  secret: secret
}))
secret is the value we read from the environment above. Adjust the others to taste, based on the memorystore docs and express-session docs.


As mentioned earlier in the series, the stack we chose for this experiment was Node.js and Express… In this ecosystem, a common theme is keeping code responsible for routes clean by factoring functions doing heavy lifting into middleware.

You can chain middleware functions together for flexibility, passing results around via session or return values. Let’s see a simple case of this in our app… first, in app.js, the /random route responsible for showing a randomly selected restaurant near the user looks quite clean:

var m = require('./middleware') // import ./middleware/index.js

'/random', m.logRequest, m.parseRequest, function(req, res, next) {
    // ...render the random choice our middleware stored in the session
})

We could just have our route contain all the logic in m.logRequest and m.parseRequest, but that would both make the code harder to read, and cause lots of duplication (not very DRY!) for things like logRequest which all our routes currently share.
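The chaining mechanics are simple enough to model without Express: each middleware receives (req, res, next) and either ends the chain or calls next(). A toy sketch of the dispatch loop (runChain and the middleware stand-ins are hypothetical, not Express internals):

```javascript
// Minimal model of Express-style middleware chaining
function runChain(middlewares, req, res) {
  let i = 0
  function next() {
    const fn = middlewares[i++]
    if (fn) fn(req, res, next) // each middleware decides whether to continue
  }
  next()
}

// Hypothetical stand-ins for m.logRequest and m.parseRequest
const logRequest = (req, res, next) => { req.logged = true; next() }
const parseRequest = (req, res, next) => { req.parsed = true; next() }

const req = {}
runChain([logRequest, parseRequest, (req) => { req.handled = true }], req, {})
console.log(req) // { logged: true, parsed: true, handled: true }
```

Note how each function can enrich req before passing control along, which is exactly how results get shared down the chain.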

Let’s dig into parseRequest

API Wrangling

The bulk of our functionality comes from the Yelp Fusion API. The poorly named parseRequest (honest, it felt like a good name at the time for reasons the code might make clear; if not, add an item to our refactor list!) is the middleware responsible for massaging our app’s inputs (things like latitude and longitude obtained from geolocation discussed in the previous part of this series) and getting API results.

This is still longer than I’d like, even after factoring out a few lines to a helper function, but here’s a look at parseRequest as it stands:

middleware.parseRequest = function(req, res, next) {
    if (req.body.latitude && req.body.longitude) {
        // build up yelp api query string...
        let q = `?term=${SEARCHTERM}&latitude=${req.body.latitude}&longitude=${req.body.longitude}&limit=${APILIMIT}&open_now=true&sort_by=rating`

        // how much you're willing to spend
        switch (req.body.price) {
            case '$':
                q += '&price=1'
                break
            case '$$':
                q += '&price=1,2'
                break
            case '$$$':
                q += '&price=1,2,3'
                break
            case '$$$$':
                q += '&price=1,2,3,4'
        }

        if ( == 'true') {
            // 8000 meters ~= 5 miles
            q += '&radius=8000'
        } else {
            // 500 meters ~= 5 blocks
            q += '&radius=500'
        }

        searchYelp(q, function(results) {
            if (results.businesses) {
                // grab random result
                let randChoice = Math.floor(Math.random() * results.businesses.length)
                req.session.choice = results.businesses[randChoice]
                // save remaining results for list view
                req.session.results = results.businesses.filter(biz => !=
                return next()
            } else {
                req.flash('error', 'No results found: please try again')
                return res.redirect('back') // bounce the user back to retry
            }
        })
    } else {
        req.flash('error', 'Location error: please try again')
        return res.redirect('back') // bounce the user back to retry
    }
}

This builds up the requisite query string Yelp’s API needs to search for food near the user… Worth noting, that q assignment is made slightly more manageable by using template literals.
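Because the query assembly is pure string-building, it’s trivial to lift into a function and exercise on its own. SEARCHTERM and APILIMIT below are hypothetical stand-ins for the constants defined in middleware/index.js:

```javascript
// Hypothetical stand-ins for constants defined in middleware/index.js
const SEARCHTERM = 'food'
const APILIMIT = 20

// Same template-literal assembly used by parseRequest, minus price/radius
function buildBaseQuery(latitude, longitude) {
  return `?term=${SEARCHTERM}&latitude=${latitude}&longitude=${longitude}&limit=${APILIMIT}&open_now=true&sort_by=rating`
}

console.log(buildBaseQuery(35.99, -78.9))
```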

Once we have a suitable query, we call searchYelp with a callback (it could take a while) to retrieve our results… if unexpected things happen, we use the de facto connect-flash to display messages to the user by writing to the session (if you cloned the repo, you can see an example of how that’s displayed in home.ejs in the error div).

This is all pretty standard (the only trick being the use of filter), so let’s wrap up by taking a look at searchYelp:

function searchYelp(queryString, callback) {
    let options = {
        headers: {'Authorization': 'Bearer ' + APIKEY},
        hostname: APIHOST,
        path: APIPREFIX + queryString,
        port: 443
    }

    https.get(options, function(res) {
        let body = ""

        res.on("data", data => {
            body += data
        })

        res.on("end", () => {
            callback(JSON.parse(body))
        })
    })
}

First, we build an options object to control the behavior of https.get. Most of these are simple consts further up in the file, but APIKEY is read from the environment (after all, it’s sensitive data we don’t want checked into git!) via the familiar process.env.API_KEY.

The most interesting (perhaps?) part of this is the use of https.get. We could have used any number of options for this request… my first instinct was to use something familiar: the request module. Aside from exercising it in a recent class, it’s also similar to Python’s requests library. One real downside for our app (trying to optimize for simplicity) is the sheer number of dependencies… a bit heavyweight given our simple use case.

Some other options are Axios, wrapping promises around request, or fetch…I chose https.get because it is part of the standard library, and I found the way it treats responses as streams interesting! What would you use?

Since the https.get response is a stream, we declare body as a block-level variable and append data to it until we receive the end event. Then we call back with our response data ready for use.

NOTE: We didn’t even scratch the surface on the many ways you could make HTTP requests with JavaScript/Node… Check this out.


That was admittedly a lightning tour, but now we’ve officially peeked beneath the covers and seen the middleware responsible for talking to Yelp and retrieving the data which makes our app useful (or at least slightly more than useless)… we also saw how to quickly work around the default MemoryStore’s shortcomings, and leverage Node’s process.env to easily control app behavior and protect sensitive information.

As I said earlier, this will likely be the last official article in this series… Heroku simply made deployment so simple that there’s not much worth writing that their docs haven’t already covered better than I could hope to. That said, I am in the process of experimenting with a new development environment including VSCode, ESLint and Airbnb’s style guide…so I may revisit this project to cover refactoring.

The last piece to making this a real app would be something closer to a native or progressive web app…that will take a bit longer for me, since I am still learning React-Native (not the only way to go, just one I am currently learning). With any luck, I’ll come up with a more interesting premise to explore together once I’m further along and ready to incorporate additional technologies. :-)

Thanks for reading!