
The Notorious FSC

I’ve been fortunate enough to have time to start learning React. It is an amazing piece of technology for any builder to have in their toolbox. After working through the tutorial and experimenting a bit on my own, I picked up a copy of Robin Wieruch’s The Road to Learn React. If you are interested in React, you’ve likely read it too – if not, do yourself a favor and check it out! It is to React what You Don’t Know JS is to JavaScript.

I have a bad habit of getting carried away and publishing overly-dense (read: too long) blog posts. As part of my React journey, one goal is to blog about key things I learn…improving my posts as I go. Each one should have enough learning to be fun, but remain shorter than a novella. We’ll see. :-)

Today I want to talk about the ominously named “FSC” or Functional Stateless Components in React. Not from the position of an expert, but as someone just learning about them myself. What are they? When should you use them? We’ll answer those questions, and refactor a simple component along the way…

What, When, Why?

Let’s unpack “FSC” – Functional, Stateless, Component. It turns out there’s a lot in a name. Unlike ES6 Class Components which are built atop ES6 Classes, FSCs are composed using…wait for it…functions! Hmm, OK. We use functions all the time, that’s not so scary. So we have two ways of representing React components… How do we pick one? Well, classes are from ES6 so they must be better right? Read on!

Especially with modern tooling like create-react-app which handles transpiling for you, I think it is safe to say there is nothing you can do with functional components that you can’t do with class components. Often you will see simple, stateless tutorials that are entirely class based. One advantage to this is consistency – components have a common look and feel, so you can argue there is less cognitive load when browsing code. I suppose it could also be easier to template or generate boilerplate in some cases.

The reverse is not true… you can do things with class components that are not supported with FSCs. The clue is in the name: stateless. FSCs only have access to props, and do not maintain local state (this.state). This also means you lose access to lifecycle methods (the FSC itself, or rather its returned JSX, is in effect the render method).

Just as one could argue using a consistent component approach makes code easier to read, the flip side is FSCs require fewer lines of code (less potential for bugs) and present less “noise” when skimming code. You will lose “consistency” if your app requires state, because you will still need one or more class components.

Thinking too much about performance is certainly premature optimization at my point in the game, but large projects may see gains by refactoring class components as FSCs – if there’s no need for state, you avoid the overhead of managing its lifecycle, and should see smaller bundle sizes as well.

Note that even in an FSC, you can still do work between receiving props and returning the JSX. You can add custom functions or other code inside your FSCs; they just won’t be linked to specific phases in the component lifecycle as with constructor(), componentDidMount(), componentWillUnmount(), etc.
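To make that concrete without any React machinery, here is a hedged sketch of the same idea as plain JavaScript: a hypothetical helper (formatCategory) does work between receiving props and producing output, and the returned string stands in for JSX. Neither name is from the tutorial code.

```javascript
// Work between receiving props and "rendering", with React stripped away.
// formatCategory and the returned string are illustrative stand-ins for
// helper logic and JSX respectively.
function formatCategory(category) {
  return category.trim().toUpperCase()
}

function ProductCategoryRow(props) {
  const label = formatCategory(props.category) // pre-render work, no lifecycle needed
  return '<th colspan="2">' + label + '</th>'
}
```

The point is simply that “stateless” does not mean “logic-free” – it means no this.state and no lifecycle hooks.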

Getting Real

A snippet is worth a thousand blogs, so let’s refactor a simple component… Thinking in React is a fun exercise which walks you through turning a mock into a real live component. In the example solution, everything is consistently implemented as ES6 classes. A wise choice for teaching (maybe even your personal preference), but let’s refactor one component as an FSC to solidify the theory above.

class ProductCategoryRow extends React.Component {
  render() {
    const category = this.props.category;
    return (
      <tr>
        <th colSpan="2">
          {category}
        </th>
      </tr>
    );
  }
}

I am a beginner, and this is a sample component from a tutorial, written by folks way smarter than me… As you might expect, it’s already quite easy to read. Keep an open mind as we refactor, and try to imagine any small gains we observe magnified across a large project.

Before we begin mangling code, is this a good candidate for refactoring? Based on what we know so far, we see that ProductCategoryRow does not reference this.state or any lifecycle methods. We can safely turn it into a function:

function ProductCategoryRow(props) {
  const category = props.category;
  return (
    <tr>
      <th colSpan="2">
        {category}
      </th>
    </tr>
  );
}

It works the same; now we just receive props as a function parameter. Aside from that minor change, our old render method just became our FSC! This is already fewer lines of code to reason about, but ES6 syntactic sugar helps us remove even more visual clutter. Combining arrow functions, destructuring assignment and concise bodies yields a simplified function which draws the eye to inputs and outputs:

const ProductCategoryRow = ({ category }) => (
  <tr>
    <th colSpan="2">
      {category}
    </th>
  </tr>
);

The props are being destructured in the function signature, so if you are passing more just add parameters. Default values are OK too. Technically you can go for concise gold and leave off the return’s parens, but it drives some syntax highlighting crazy. :-)

const ProductCategoryRow = ({ category = 'javascript', children }) =>
  <tr>
    <th colSpan="2">
      {category}
      {children}
    </th>
  </tr>
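The destructuring-with-defaults trick works anywhere in ES6, not just in component signatures. A quick sketch outside of React (the names here are made up for illustration):

```javascript
// Destructuring with a default value in the signature: category falls
// back to 'javascript' when the caller omits it.
const describe = ({ category = 'javascript', count = 0 }) =>
  category + ' (' + count + ')'

describe({})                               // 'javascript (0)'
describe({ category: 'react', count: 2 })  // 'react (2)'
```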


If you’re a React novice like me, this hopefully helps clarify what FSCs are, when they are appropriate and how to use them. We took a very simple component from twelve lines of code to seven (a >40% reduction), and that’s even after spending a line on new functionality (passing in children). When envisioning how the tests wrapped around production code could be simplified to match, it’s a real eye opener! The ES6 syntax really improves readability and maintainability, and should be available to most thanks to tools like create-react-app and babel.

One size definitely does not fit all…in the real world you will likely have apps combining ES6 class and functional stateless components. For beginners or modest-sized projects where it doesn’t matter as much, FSCs may be a premature optimization. You need to be aware of the trade-offs, and pick the right tool for the job. For some, a more concise codebase with fewer lines to hide bugs may be enough to justify a refactor.

Migrating a Node.js app to PWS

In a prior post we built a responsive web app atop Node.js, Express and Heroku… To give us more time to focus on building the application, we decided to deploy using a PaaS (Platform as a Service). Further limiting our choices, we wanted something with a reasonable free tier to support experimentation.

I was eager to give Heroku a spin since many respected colleagues have raved about it in the past. The thorough documentation, CLI, git-based workflow support and native buildpacks made getting our idea published on the Internet quick and easy. Heroku also came up with The Twelve-Factor App, which practically became a holy grail of DevOps and best practice for anyone shipping modern applications.

Recently I started working for Pivotal focused on cloud native architecture and service delivery via Cloud Foundry (PCF). Specifically, the Platform Reliability Team helps customers understand, adopt and apply Site Reliability Engineering principles within their organizations.

To better understand the PCF developer experience, I decided to see how hard it would be to migrate my Heroku-based application to Pivotal Web Services (PWS). PWS, or P-Dubs as we affectionately call it, is a fully-managed version of PCF hosted atop public cloud infrastructure very similar to Heroku. There’s a polished UI, CLI, extensive documentation, and buildpacks for many popular languages at reasonable prices.

Getting Started

To start experimenting, sign up for a free PWS account. This is quick and easy (email, password, confirm SMS, define your organization name). With that, you can walk through their tutorial to push your first app.

The tutorial is based on a simple app using the Java Buildpack (JBP), but lets you install and exercise the CLI. You don’t have to worry too much about buildpacks since they are typically auto-detected; in our case we’ll be using the appropriate version for Node.js.


Not having migrated from one PaaS to another before, I wasn’t entirely sure what obstacles I would encounter. The good news is, it was fairly painless. Following twelve-factor principles from the start meant no application refactoring was required, and configuration was easily passed around through the environment.

The first new concept I needed to grasp was a deployment manifest. This is a YAML configuration, typically named manifest.yml, allowing you to control almost every aspect of your application’s deployment. While something extra to keep track of, this is similar to other PaaS-specific metadata like Heroku’s Procfile.

It’s best to start small, then iterate to add parameters as needed… To get started, I just grabbed the skeleton from the tutorial app’s repo and adjusted a couple parameters:

applications:
- name: chowchow
  memory: 512M
  instances: 1
  random-route: true

random-route enables behavior similar to Heroku’s default of randomly generating the host part of the application’s FQDN. Without it, Cloud Foundry will try to use the application name…which may be what you want, or may lead to collisions in a shared namespace like the free tier’s top-level application domain.

Another thing I needed to do was effectively pin my Node and NPM versions. I had neglected that when initially deploying to Heroku, but it’s a good idea for any production app lest deploying a build suddenly pull in unexpected (or expected but broken) dependencies. To do that, I simply added a couple lines to package.json (see the full version):

"engines": {
  "node": "8.9.4",
  "npm": "5.6.0"

Technically you don’t have to lock down npm; if left undefined, it will use the version that ships with the selected node. I like to be explicit.
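If you want an extra safety net, you could even assert the running version at startup. This is a hedged sketch with a naive exact-match comparison (the engines field actually allows semver ranges, which a library like semver handles properly – satisfiesPin is a made-up name):

```javascript
// Naive startup check that the running Node matches the pinned version.
// Real-world code should use a semver-aware comparison instead.
function satisfiesPin(running, pinned) {
  return running.replace(/^v/, '') === pinned
}

satisfiesPin(process.version, '8.9.4') // true only on Node 8.9.4
```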

Going Live

With that, I felt like I was in pretty good shape based on the tutorial… After a cf login -a (using the email address and password used when signing up), I fired off cf push to start a deployment, but got an error message:

The app upload is invalid: Symlink(s) point outside of root folder

Luckily, a little Google engineering quickly led to answers… a few related suggestions there. The one I went with was simply creating .cfignore at the top level of my application repo and adding the node_modules directory. With that, cf push worked like magic… reading my deployment manifest, auto-detecting the proper buildpack, and bringing up a public instance with a random route name:

Pushing from manifest to org deadlysyn-org / space development as x@y.z...
Using manifest file ./chowchow/manifest.yml
Getting app info...
Updating app with these attributes...
  name:                chowchow
  path:                ./chowchow
  disk quota:          1G
  health check type:   port
  instances:           1
  memory:              512M
  stack:               cflinuxfs2

Updating app chowchow...
Mapping routes...
Comparing local files to remote cache...
Packaging files to upload...
Uploading files...


   -----> Nodejs Buildpack version 1.6.18
   -----> Installing binaries
          engines.node (package.json): 8.9.4
          engines.npm (package.json): 5.6.0

Waiting for app to start...

name:              chowchow
requested state:   started
instances:         1/1
usage:             512M x 1 instances
last uploaded:     Sat 24 Feb 18:05:09 EST 2018
stack:             cflinuxfs2
buildpack:         nodejs
start command:     node app.js

     state     since                  cpu    memory      disk      details
#0   running   2018-02-24T23:05:40Z   0.0%   0 of 512M   0 of 1G  

We now have a working Node/Express app available at our randomly-generated route, and like Heroku it is automatically served securely since the app instances are fronted by a routing tier doing TLS offloading (using a wildcard cert for the shared domain). Pretty neat!

You can get status of the application using the CLI:

$ cf apps
Getting apps in org deadlysyn-org / space development as x@y.z...

name       requested state   instances   memory   disk   urls
chowchow   started           1/1         512M     1G

The UI is also lightweight and responsive:


Cloud Foundry Specifics

If you want to go deeper on Cloud Foundry specifics, the documentation is the place to start… A couple key concepts we glossed over above were routes and spaces. Since they are so central to hosting an application, I wanted to briefly describe both of those here.

The term route within the PCF ecosystem usually refers to the hostname portion of a FQDN (Fully Qualified Domain Name). In our example above, the route was chowchow-fantastic-waterbuck, prepended to the shared application domain to form the FQDN. It’s also possible to have context-specific routes which allow different micro-services hosted under the same top-level domain name to be reached via URIs – for example, /foo and /bar under one domain routed to different applications. This is highly flexible, and allows you to easily scale out specific parts of your service regardless of how you chose to present it to the Internet.
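If you want explicit control instead of relying on random-route, the deployment manifest also accepts a routes list. A hedged sketch (example.com is a hypothetical custom domain, not one from this deployment):

```yaml
# Hypothetical manifest snippet: a context path under a custom domain.
# A sibling app's manifest could claim example.com/bar the same way.
applications:
- name: foo-service
  routes:
  - route: example.com/foo
```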

Spaces are part of Cloud Foundry’s authorization scheme. This is a hierarchy… Every project will have one or more organizations (in our example, deadlysyn-org), which in turn contain one or more spaces, each of which has one or more users and applications, all governed by RBAC (Role-Based Access Control). We deployed to the development space which was created for us by default, but this is again as flexible as you need it to be in complex multi-tenant environments.

You can manage all of this from the CLI, and within the web UI it’s easy to see what organization and space you are working in. Assuming you have the right permissions, you can also create new spaces (perhaps for other environments like staging and production, or for service teams).



I encountered one oops during this journey… While cf push worked, in my haste to get everything going I’d forgotten to properly set up the environment. We could add environment variables to our deployment manifest, but that is really for non-sensitive information (think things like NODE_ENV).

For sensitive bits you don’t want checked into source control, keep them out of the deployment manifest… The mechanism I used instead was cf env. This is similar to Heroku’s config vars. You can define variables at deploy time, adjust them while the service is running, and they persist across deployments (so you don’t lose settings when orchestrating new instances).

In our case, we just need to ensure a SECRET variable exists in the environment so express-session can pick up a proper session key. Setting environment variables is easy via the CLI:

$ cf set-env chowchow SECRET someRandomString
Setting env variable 'SECRET' to 'someRandomString' for app chowchow in org deadlysyn-org / space development as x@y.z...
TIP: Use 'cf restage chowchow' to ensure your env variable changes take effect 

Look at that, it even reminds us how to get our app to pick up the change… Who says CLIs can’t be friendly? Let’s ensure our new variable was properly set, then restage the application:

$ cf env chowchow
Getting env variables for app chowchow in org deadlysyn-org / space development as x@y.z...

System-Provided:
{
 "VCAP_APPLICATION": {
  "application_id": "f35f84b5-39b2-4937-bc39-4c04ac539e87",
  "application_name": "chowchow",
  "application_uris": [...],
  "application_version": "bc1591bc-7d40-49be-86a3-e4b9fd745ed7",
  "cf_api": "...",
  "limits": {
   "disk": 1024,
   "fds": 16384,
   "mem": 512
  },
  "name": "chowchow",
  "space_id": "900772c4-cccd-4255-8e21-f599733f4b86",
  "space_name": "development",
  "uris": [...],
  "users": null,
  "version": "bc1591bc-7d40-49be-86a3-e4b9fd745ed7"
 }
}

User-Provided:
SECRET: someRandomString

No running env variables have been set

No staging env variables have been set

$ cf restage chowchow
Waiting for app to start...

name:              chowchow
requested state:   started
instances:         1/1
usage:             512M x 1 instances
last uploaded:     Sat 24 Feb 18:05:09 EST 2018
stack:             cflinuxfs2
buildpack:         nodejs
start command:     node app.js

     state     since                  cpu    memory          disk          details
#0   running   2018-02-25T04:49:05Z   0.0%   14.3M of 512M   73.6M of 1G
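On the application side, here’s a hedged sketch (not the app’s actual code) of how a server might build its express-session options from that environment variable, failing fast when SECRET is missing:

```javascript
// Build session options from the environment; throwing early beats
// silently falling back to a hardcoded secret.
function sessionOptions(env) {
  if (!env.SECRET) {
    throw new Error('SECRET must be set in the environment')
  }
  return { secret: env.SECRET, resave: false, saveUninitialized: false }
}

// e.g. app.use(session(sessionOptions(process.env)))
```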

With that, we have a properly configured version of our app up and running! The maturity of the platform, excellent documentation, and large community made it easy to get started and enabled us to find answers when we got stuck.

Best practices like Twelve-Factor ensured that everything mostly just worked after applying configuration by adjusting the runtime environment. We did have to learn about deployment manifests and make small adjustments to package.json, but these changes are well-documented and no different from the platform specifics we would have to learn when embracing and customizing any PaaS.

Overall, I’m happy to report migrating our simple application to a new PaaS was relatively straightforward… though admittedly, the simple app used here just scratched the surface of PWS capabilities. They support custom DNS domains, SSL as a service, and a variety of service brokers for backing stores and other dependencies more complex services would require.

Have you experimented with Pivotal Web Services?



Perl popularized the concept There’s More Than One Way To Do It (which, depending on who you ask, is either awesome or just confusing). JavaScript is similarly good at giving budding programmers 2^32 ways of doing most anything. In a prior post we briefly evaluated how to do simple HTTP GETs for querying an API… as an homage to Perl monks everywhere, let’s explore how There’s More Than One Way To GET It (or POST, PUT…whatever).

NOTE: I tend not to use semi-colons in JavaScript…this is not a desire to offend or participate in any holy wars. Feel free to imagine semi-colons anywhere you see fit, or better yet watch this monologue that is both amusing and enlightening.

NOTE’: Please read this, or anything I ever create, with a sense of humor. :-)


As a convenient use case, we’ll build a minimalist Bitcoin price checker because, well, everyone’s doing Bitcoin stuff and peer pressure is real. We’ll use the Coindesk API.

For the sake of brevity I won’t repeat this in all the examples, but we declare some important constants at the top of our JavaScript which point to the API endpoint and the element we’ll be updating with the current Bitcoin price:

const API = ''
const PRICE = document.querySelector('#price')

For the examples below, common HTML boilerplate is also used… here’s what that looks like so the JavaScript makes more sense:

<link rel="stylesheet" href="style.css">
<script src=""></script>
<script src=""></script>
<h1>Bitcoin Price Checker</h1>
<ul>
    <li id="btnXHR">XHR</li>
    <li id="btnFetch">Fetch</li>
    <li id="btnJquery">jQuery</li>
    <li id="btnAxios">Axios</li>
</ul>
<p id="price">Push a button to get started...</p>
<script src="bitcoinGetter.js"></script>

Yep, that’s it… Quite hideous, but the UI is not the point and we don’t want it to be a distraction. In fact, this should be simpler – I started getting sucked into the black hole that is endless tweaking of HTML and CSS. :-) All you need to note is that we set up some unique element ids to make our selector queries easier, and import both jQuery and Axios which we’ll use later on.

Or just use the source, Luke (or Lukette, or whatever your name, hair color, country of origin, personal pronoun, etc. happens to be – source is equally good for us all).


First, let’s travel back in time… a time when AJAX (Asynchronous JavaScript And XML) was new. XML was still in vogue back then; today it would have to be called AJAJ (what doesn’t spew JSON these days?). This was a time when Single Page Apps and other now-commonly-accepted hotness did not exist, so the ability to make asynchronous HTTP requests got people very excited.

Strictly speaking, the X doesn’t just stand for XML, but more specifically XMLHttpRequest. This built-in object is still used to make HTTP requests in JavaScript today. Thankfully, modern implementations speak JSON since it makes more sense in the JavaScript ecosystem. As I’m sure you noticed, XMLHttpRequest is much nicer to bandy about if we abbreviate it XHR, so that’s what is commonly done (and who doesn’t love acronyms).

That’s great trivia fodder you say, but how does it work? Here you go:

// XHR

const btnXHR = document.querySelector('#btnXHR')

btnXHR.addEventListener('click', function() {
    let XHR = new XMLHttpRequest()
    XHR.onreadystatechange = function() {
        if (XHR.readyState == 4 && XHR.status == 200) {
            PRICE.textContent = JSON.parse(XHR.responseText).bpi.USD.rate + ' USD'
        }
    }
    XHR.open('GET', API)
    XHR.send()
})

Nothing surprising here, assuming you understand at least a few things:

  • We need to instantiate a new XHR object before we can do anything useful (hence the use of new).
  • onreadystatechange (love that name) is an event handler called any time our new XHR’s readyState attribute changes.
  • readyState has five possible states (0-4), with 4 signifying the request is complete.

If you find the open and send confusing, the docs help…but just think of open as setting up the request, and send, well, sending it. Once sent, the state changes will propagate, and assuming it completes (and we get a 200 OK) PRICE.textContent is updated.

Why start with XHRs? They are the foundational element upon which everything below is built!


Another native way (aside from https.get which we discussed in the past) is fetch. This is a newer addition to the language, as evidenced by its use of promises. Oooo ahhh promises, you say! Let’s see it in action before we get to an unfortunate secret (if secret is another name for behavior described in the documentation):

// Fetch

const btnFetch = document.querySelector('#btnFetch')

btnFetch.addEventListener('click', function() {
    fetch(API)
        .then(function(res) {
            res.json()
                .then(function(data) {
                    PRICE.textContent = data.bpi.USD.rate + ' USD'
                })
        })
        .catch(function(err) {
            console.log(err)
        })
})

Pretty neat, huh? The main things to note here are the use of .then and .catch to handle promises, as well as the built-in .json() method which replaces our use of JSON.parse() above. Nice, clean, modern… but not well supported (yet). The Fetch API docs reveal IE is completely out in the cold at this point, which might be a show-stopper for you.
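To make the comparison concrete, here is the manual step that .json() hides, using a fabricated response body shaped like Coindesk’s:

```javascript
// What res.json() does for you: parse a raw body string into an object.
// The body below is made up, shaped like the Coindesk response.
const body = '{"bpi":{"USD":{"rate":"9,999.99"}}}'
const data = JSON.parse(body)
data.bpi.USD.rate + ' USD'  // '9,999.99 USD'
```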


Good old jQuery. If there was ever a more simultaneously revered and dissed workhorse among us, I haven’t seen it…at least not since the last watercooler conversation over vim vs emacs. Though, technically, I guess that is a conversation about two equally dissed workhorses (depending whom you talk to). I digress.

While not as popular as it used to be, jQuery is still full-featured, extremely useful in some circumstances, and given past popularity you might need to support it regardless of personal opinions. For the record, I don’t have anything against it – just be sure you need to do more than make an HTTP request before pulling in a large dependency!

Here goes:

// jQuery

$('#btnJquery').click(function() {
    $.getJSON(API)
        .done(function(data) {
            $('#price').text(data.bpi.USD.rate + ' USD')
        })
        .fail(function(err) {
            console.log(err)
        })
})

Gone is the querySelector, or more accurately…it’s been disguised as $. Different syntax, but we still bind an anonymous function to a click listener. Inside, we use jQuery’s getJSON method to handle turning the HTTP response into a useful object. That is two things jQuery has going for it – no shortage of documentation or utility methods!


Last, but certainly not least, Axios is a tremendously popular (i.e. almost 40k stars on GitHub at the time of this writing) HTTP client library. It is promise-based, lightweight, and well-supported.

// Axios

const btnAxios = document.querySelector('#btnAxios')

btnAxios.addEventListener('click', function() {
    axios.get(API)
        .then(function(res) {
            PRICE.textContent = res.data.bpi.USD.rate + ' USD'
        })
        .catch(function(err) {
            console.log(err)
        })
})

The promise handling looks similar to earlier examples but gone is any reference to JSON-related methods…because the library is doing the transformation automagically. Pretty sweet! Axios is also highly-configurable and supports concurrency that is actually easy to understand:

// Community example of multiple requests with Axios...

function A() {
  return axios.get(API + '/A');
}

function B() {
  return axios.get(API + '/B');
}

axios.all([A(), B()])
  .then(axios.spread(function (a, b) {
    // Safely use a and b!
  }));


So, there you have it… more ways than anyone ever needed to make HTTP requests in JavaScript. Needless to say, the examples above were all client-side, but could just as easily have been done in Node.js. For those requiring imports, just replace the <script></script> tags with npm install axios or similar.

Last but not least, I want to give a very loud shout out to Udemy’s Advanced Web Developer Bootcamp. The use case and examples here were an amalgamation of things picked up there. If you’re looking for a good overview of modern web development, it’s a great course – and I am not affiliated with Udemy or the course authors in any way…so hopefully that counts for something. :-)

Thanks for reading!

PS: My examples use USD simply because it is my local currency, and kept the code a bit cleaner… At the time of writing, the Coindesk API can return EUR, GBP and USD. As a fun exercise, clone the repo and refactor to support your favorite, or let the user select the desired currency!