In this post, we’ll discuss how you can use some of the tools and frameworks we’ve built to create and deploy a simple React/NodeJS application into a Kubernetes cluster. By the end of this post, you’ll know how to deploy a basic contract-first React ‘To Dos’ app with client API generation.

This will be particularly useful for those of you who haven’t yet created a microservice, or deployed one into a Kubernetes cluster.

Before we begin, you may remember that in our previous posts we discussed a number of techniques we use to build and deploy our microservices into a Kubernetes cluster. We’ve also shown you how to use kops to create a Kubernetes cluster in AWS, and how to use code generation to make test automation easier and more productive. It’s worth reading those before diving into this one.

Let’s start by assuming you already have either a remote Kubernetes cluster to deploy into, or a local Minikube cluster. If not, you can follow our kops guide or install Minikube. We’re using Jenkins to deploy, so feel free to follow our guide to get your own Jenkins instance to perform CI. Alternatively, you can easily build and deploy this yourself using any other CI tool, or from the command line as we explain below.

Our stack has two components: a NodeJS backend that serves API requests, and a React frontend that serves our users the website they use to interact with the app.

Backend

We’ve become big fans of TypeScript here at ClearPoint. This is based on our experience using it in large microservice projects, where we are generally agnostic about the languages individual developers want to write their services in. One thing we do insist on is type safety, and TypeScript gives our JavaScript developers an easy avenue to achieve it.

Another reason we like this pattern is that it enables contract-first development using the Swagger TypeScript code generator library. This means we can define our API using the Swagger (now OpenAPI) specification. The specification is machine-readable and forms the basis of the ‘contract’ that developers must adhere to. Our Test Automation Lead Irina’s recent post introduces contract-first development and code generation for testing Java services; the pattern we’re going to introduce here is very similar.

The diagram below illustrates what our code generation does:

It all starts with an OpenAPI specification which you can reference here. You can use various tools like Swagger Editor and Stoplight to create your own OpenAPI specs. This is ours:

{
  "swagger": "2.0",
  "info": {
    "version": "",
    "title": "Todo",
    "description": ""
  },
  "host": "localhost:3001",
  "schemes": ["http"],
  "paths": {
    "/resolve-todo/{id}": {
      "parameters": [
        {
          "name": "id",
          "in": "path",
          "required": true,
          "type": "string"
        }
      ],
      "put": {
        "operationId": "resolveTodo",
        "summary": "resolveTodo",
        "responses": {
          "200": {
            "description": "",
            "schema": {
              "type": "array",
              "items": {
                "$ref": "#/definitions/Todo"
              }
            }
          }
        }
      }
    },
    "/remove-todo/{id}": {
      "parameters": [
        {
          "name": "id",
          "in": "path",
          "required": true,
          "type": "string"
        }
      ],
      "delete": {
        "operationId": "removeTodo",
        "summary": "removeTodo",
        "responses": {
          "200": {
            "description": "",
            "schema": {
              "type": "array",
              "items": {
                "$ref": "#/definitions/Todo"
              }
            }
          }
        }
      }
    },
    "/add-todo": {
      "post": {
        "operationId": "addTodo",
        "summary": "addTodo",
        "parameters": [
          {
            "name": "body",
            "in": "body",
            "schema": {
              "$ref": "#/definitions/Todo"
            }
          }
        ],
        "responses": {
          "200": {
            "description": "",
            "schema": {
              "type": "array",
              "items": {
                "$ref": "#/definitions/Todo"
              }
            }
          }
        }
      }
    },
    "/get-todos": {
      "get": {
        "operationId": "getTodos",
        "summary": "getTodos",
        "responses": {
          "200": {
            "description": "",
            "schema": {
              "type": "array",
              "items": {
                "$ref": "#/definitions/Todo"
              }
            }
          }
        }
      }
    }
  },
  "definitions": {
    "Todo": {
      "title": "Todo",
      "type": "object",
      "properties": {
        "id": {
          "type": "string"
        },
        "title": {
          "type": "string"
        },
        "resolved": {
          "type": "boolean"
        }
      }
    }
  }
}

What we have defined here is a basic CRUD API for our To Dos app. What’s important is that both our frontend and backend will be built from exactly the same contract, so they will always stay synchronised.

You can run git clone git@github.com:ClearPointNZ/connect-simple-todos-app.git to follow along. We’re using Jenkins 2.0 and its pipelines feature, which allows us to define a Jenkinsfile that describes our build phases. You can view the frontend and backend Jenkinsfiles, and stay tuned for a separate post on how we use Jenkins Job Builder and pipelines to do all this…

The focus of this post is not the Jenkins process that performs all of this from code, so we’ll step through it on the command line instead. To do this, you’ll need Docker installed on your machine, as well as a public Docker Hub account to push your containers to. You’ll also need to install Node and Yarn. On a Mac, installing both is as simple as brew install yarn, which pulls in Node as a dependency.

Backend Build

Once you’ve cloned the repo as above, open a terminal and change to the swagger-backend directory. Run this sequence of commands to build the backend:

yarn install
yarn generate-api-interface
yarn compile
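
The yarn generate-api-interface step is what turns the contract into TypeScript. As a rough illustration (the exact output depends on the generator version, so treat the names below as an approximation), each operationId in the spec becomes a typed method, and the Todo definition becomes a shared type:

// Illustrative sketch only, not the generator's exact output.
// Each contract operationId maps to a typed method returning the Todo list.
export type Todo = {
  id: string;
  title: string;
  resolved: boolean;
};

export interface TodoApi {
  getTodos(): Promise<Todo[]>;
  addTodo(body: Todo): Promise<Todo[]>;
  resolveTodo(id: string): Promise<Todo[]>;
  removeTodo(id: string): Promise<Todo[]>;
}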

Next we need a Docker image for our backend, which we build from the following Dockerfile:

FROM node:6.9.4

VOLUME /todo
WORKDIR /todo
RUN mkdir app
COPY ./swagger-backend/app ./app/
COPY ./swagger-backend/package.json .
COPY ./swagger-backend/node_modules ./node_modules
EXPOSE 3001
RUN npm install -g yarn
ENTRYPOINT ["yarn", "start"]

Above is the Dockerfile for our backend service, which uses NodeJS to serve the API we generated from our contract. We know our app works with Node v6.9.4 so we’ve pinned that version in our FROM line.

We want to put our built artefacts into the /todo folder, so we’ve specified that as the VOLUME we want Docker to mount for us, before setting it as our working directory with WORKDIR.

Once that’s in place, we copy across the built app, its package.json and its dependent node_modules. Next we declare that our backend app listens on port 3001 using EXPOSE 3001.

We’re using yarn for dependency management so we install that globally with RUN npm install -g yarn.

Finally, we start our app with ENTRYPOINT ["yarn", "start"]. This runs our backend whenever the container starts, listening on port 3001 and waiting to serve requests.

You can build this image yourself as follows. Change back to the root directory of the repo and perform the following:

$ cp jobs/backend/pipeline/Dockerfile .
$ docker build -t YOURDOCKERLOGINHERE/simple-app/simple-backend:latest .
Sending build context to Docker daemon  47.94MB
Step 1/10 : FROM node:6.9.4
 ---> c5667be18e4e
*snip*
Step 10/10 : ENTRYPOINT ["yarn", "start"]
 ---> Running in e5da13baa1a7
 ---> 4cffda7f2658
Removing intermediate container da67ffc20be4
Removing intermediate container 25a91c117637
Removing intermediate container e5da13baa1a7
Successfully built 4cffda7f2658
Successfully tagged YOURDOCKERLOGINHERE/simple-app/simple-backend:latest

Once your image has built, log in to Docker Hub and push it as follows. From this point on, any references we’ve made to YOURDOCKERLOGINHERE, such as in the Kubernetes Deployments, need to be replaced with your Docker Hub username.

$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username:
Password:
Login Succeeded

Now let’s push our image to Docker Hub:

$ docker push YOURDOCKERLOGINHERE/simple-app/simple-backend:latest
The push refers to a repository [docker.io/YOURDOCKERLOGINHERE/simple-app/simple-backend]
42ec02bbc09c: Pushed
...
a2ae92ffcd29: Mounted from library/node
latest: digest: sha256:d85c1402a4c832f213c8e14610b699105e75fece355308ba4f67edeec94dcf90 size: 2838

Backend Deployment

Once our backend container has been built, we need to deploy it into our infrastructure so that our frontend can call it. We can do this with Kubernetes, specifically with a Service and a Deployment.

The main unit of work in Kubernetes is a Pod. The simplest way to think about a Pod is as something like a physical server or virtual machine: it can host one or many containers. A microservice application might consist of a number of Pods, two in our case: a ‘backend’ and a ‘frontend’. These Pods need to be able to find and talk to each other, and the way they do this is through a Service.

A Deployment, on the other hand, is a declarative description of the Pods we want running: it tells Kubernetes which container image to run and how many replicas to keep, and it rolls out updates when that description changes. The Service then routes traffic to the Pods the Deployment creates, matching them by their labels.

Services and Deployments in Kubernetes are described using YAML. So what do these look like?

apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: todos
spec:
  externalTrafficPolicy: Cluster
  ports:
  - port: 3001
    targetPort: 3001
  selector:
    app: master
  type: LoadBalancer

Above is our Service. Using this we can create a new Service with the name backend in the namespace todos. It will be assigned an IP in the cluster and will map the external port 3001 to the targetPort 3001 on the container.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: backend
  namespace: todos
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: master
    spec:
      containers:
      - name: master
        image: YOURDOCKERLOGINHERE/simple-app/simple-backend:latest
        ports:
        - containerPort: 3001

This is the Deployment for our backend. In the first spec section, replicas tells the Deployment how many backend Pods we want; one is enough for now. Next, the template section describes what we want to deploy and how to label it. The spec within the template tells the Deployment to create a container named master, running the latest tagged simple-backend image, and to expose port 3001, the port we specified in our Dockerfile.

To deploy the backend, run the following commands. The first creates a new todos namespace for our app, then we apply the backend Deployment and Service:

kubectl create ns todos
kubectl apply -f jobs/backend/pipeline/backend.yml
kubectl apply -f jobs/backend/pipeline/service.yml

Frontend

Most developers are used to architecting web-based client-server apps around a Model-View-Controller (or related) architecture. React, however, doesn’t describe itself as an MVC framework. Regardless, it can be useful to think of React as the ‘View’ component of your MVC architecture.

We’ve based our frontend app on the Create React App repo, which provides a simple, clean way to create a React frontend with minimal configuration. The source for our frontend is here and consists of several components. The public-facing index.html contains some simple boilerplate that references an index.tsx file, which is effectively the entry point to our application. Styling lives in the index.css and App.css files, and the view logic itself is contained in App.tsx.

Our index.tsx entry point is as follows:

import * as React from 'react';
import * as ReactDOM from 'react-dom';
import App from './App';
import registerServiceWorker from './registerServiceWorker';
import './index.css';

ReactDOM.render(
  <App />,
  document.getElementById('root') as HTMLElement
);
registerServiceWorker();

This imports React and ReactDOM, which provide the parts of the React framework we need. We then import our App, which gives us a nicely encapsulated set of logic that we use to build and render the HTML and JavaScript content our users will interact with. Finally, registerServiceWorker() registers a service worker, which caches our static assets so the app loads faster on repeat visits and can work offline. A service worker is a bit like a programmable proxy sitting between the app and the network; there’s a good post on how they’re used and what they do here.
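
Under the hood, registering a service worker boils down to something like the snippet below. Create React App’s real registerServiceWorker adds production and localhost checks plus update handling, so this is just the core idea:

// Core of service worker registration: if the browser supports it,
// register the worker script so it can start caching responses.
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker
      .register('/service-worker.js')
      .catch(error => console.error('Service worker registration failed:', error));
  });
}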

Now let’s explore how the App.tsx file works. You can follow along in the file here. Since we’ve generated our client APIs, we don’t need to use any REST libraries to perform HTTP requests. All we need to do is import our generated API clients and they’ll do all that dirty work for us!

import TodoApi, { Todo } from './generated-api';

This imports our generated interface and client as TodoApi, along with the Todo model, which defines the data types we’re using. Now the only way we interact with the backend service is via the generated TodoApi interface, which gives us type safety. You’ll see below that, with code completion (we’re using IntelliJ), calling getTodos tells us we’ll get back an Array of Todos.
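
For example, fetching the list can look something like this (a minimal sketch; the exact shape of the generated client depends on the generator, so the call below is illustrative):

import TodoApi, { Todo } from './generated-api';

// getTodos is typed to resolve with Todo[], so the compiler (and the IDE)
// knows exactly what shape each item has.
TodoApi.getTodos().then((todos: Todo[]) => {
  todos.forEach(todo => console.log(todo.title, todo.resolved));
});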

We know that the model for a Todo is as follows:

export type Todo = {
    'id': string

    'title': string

    'resolved': boolean

};

We can therefore ensure that whenever we’re working with a Todo object, our IDE will show us an error if we don’t adhere to the types the model defines.

In our example below, you’ll see this in action. We were writing some code to initialise a new Todo. Instead of using a boolean for the resolved property, we tried using a string 'thisWillNotWork':
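
Here’s a minimal sketch of what that looked like (the surrounding values are just for illustration):

import { Todo } from './generated-api';

const todo: Todo = {
  id: '1',
  title: 'Walk the dog',
  // Compile error: 'thisWillNotWork' is not assignable to type 'boolean'
  resolved: 'thisWillNotWork',
};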


You’ll see that our IDE says we can’t assign 'thisWillNotWork' to the resolved property because it expects a boolean. When we replace this with resolved: false, it works perfectly. So, instead of this type mismatch becoming a run-time error that a user will see, it becomes a compile-time error that we can catch while we’re developing! This is one of the huge benefits that TypeScript and generated APIs provide.


JSX and TSX and HTML, Oh My!

Let’s now have a look further down our App.tsx file to the render() function:

render() {
    return (
      <div className="App">
        <h1>Todo List</h1>
        <form
          onSubmit={e => {
            e.preventDefault();
            this.addTodo(this.titleInput.value);
            this.titleInput.value = '';
          }}
        >
          <input
            ref={node => {
              if (node !== null) {
                this.titleInput = node;
              }
            }}
          />
          <button type="submit">Add</button>
        </form>
        <ul>
          {this.state.todos.map((todo, index) => {
            return (
              <li
                key={index}
                style={{
                  textDecoration: todo.resolved ? 'line-through' : 'none',
                }}
              >
                {!todo.resolved && (
                  <button onClick={() => this.doneToDo(todo.id)}>Done</button>
                )}
                <button onClick={() => this.removeToDo(todo.id)}>
                  Delete
                </button>{' '}
                {todo.title}
              </li>
            );
          })}
        </ul>
      </div>
    );
  }
}

It looks like some kind of weird hybrid of HTML and JavaScript, right? This is called ‘JSX’, or in the TypeScript world, ‘TSX’. JSX is described as ‘syntactic sugar’: basically, it lets us write things concisely and expressively like this:

<div className="App"></div>

which will then compile into

React.createElement(
  'div',
  {className: 'App'},
  null
)

which is how React then renders the component for us. We agree the former is much more readable than the latter!

While React presents us with a fairly steep learning curve, if you’ve got experience in other web frameworks like AngularJS or Vue.js, many of the concepts may already be familiar to you.
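
Before we build the frontend, it’s worth sketching how handlers like the ones render() calls (addTodo, doneToDo, removeToDo) can delegate to the generated client. The method names below follow the contract’s operationIds, but the exact signatures are assumptions rather than the repo’s actual code:

import TodoApi, { Todo } from './generated-api';

// Each handler calls the matching contract operation and hands the
// refreshed list back to whoever owns the component state.
export function resolveAndRefresh(id: string, setTodos: (todos: Todo[]) => void) {
  return TodoApi.resolveTodo(id).then((todos: Todo[]) => setTodos(todos));
}

export function removeAndRefresh(id: string, setTodos: (todos: Todo[]) => void) {
  return TodoApi.removeTodo(id).then((todos: Todo[]) => setTodos(todos));
}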

Frontend Build

To build the frontend, perform the following commands:

cd swagger-frontend
yarn install
yarn generate-api-client
yarn build --production

Now let’s take a look at the Dockerfile for our frontend:

FROM node:8.7.0

ENV NPM_CONFIG_LOGLEVEL warn

RUN npm install -g serve
EXPOSE 80

COPY ./swagger-frontend/build ./build
ENTRYPOINT ["serve", "-s", "build", "-p", "80"]

Our frontend is just a web server, so its Dockerfile is very simple. It starts from the node:8.7.0 base image and sets npm’s log level to warn, because the default output is too verbose for our needs. It then installs the npm package serve, a basic static web server, copies in our built frontend and serves it on port 80.

Let’s build our frontend Docker container. From the root directory of the repo:

$ rm Dockerfile
$ cp jobs/frontend/pipeline/Dockerfile .
$ docker build -t YOURDOCKERLOGINHERE/simple-app/simple-frontend:latest .

Once that’s done, let’s push it to Docker Hub:

$ docker push YOURDOCKERLOGINHERE/simple-app/simple-frontend:latest

Frontend Deployment

We’ve covered the backend Kubernetes Deployment and Service above, now we need to do the same for the frontend. This is our Deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
  namespace: todos
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: master
    spec:
      containers:
      - name: master
        image: YOURDOCKERLOGINHERE/simple-app/simple-frontend:latest
        ports:
        - containerPort: 80

This Deployment is very similar to the backend one. It creates a Kubernetes Deployment named frontend in the todos namespace with one Pod (replica). It also creates a container named master, running the latest tagged simple-frontend image from our Docker registry. Since it’s a frontend container, it serves HTTP web traffic on port 80.

This is our Service:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: todos
spec:
  externalTrafficPolicy: Cluster
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: master
  type: LoadBalancer

Again, this is very similar to our backend Service. We create a new Kubernetes Service with the name frontend in the namespace todos. It accepts traffic on port 80 and routes that to port 80 on our container.

Let’s apply these two:

kubectl apply -f jobs/frontend/pipeline/frontend.yml
kubectl apply -f jobs/frontend/pipeline/service.yml

Great! Now we’ve built and pushed our separate backend and frontend containers and applied their configurations to deploy them using Kubernetes.

You can now reach the app either by port-forwarding to your frontend Pod, or by going through the frontend Service directly:

$ kubectl get pods --namespace todos
NAME                        READY     STATUS    RESTARTS   AGE
backend-4045041855-9zd15    1/1       Running   0          8m
frontend-1750318887-g9h2w   1/1       Running   0          8m

$ kubectl port-forward frontend-1750318887-g9h2w 8080:80 --namespace todos

Above, we first get the list of Pods in the todos namespace, then copy the name of the frontend Pod into the port-forward command. This will let you access the app at http://localhost:8080.

Alternatively, you can access the service as follows. Navigate to the EXTERNAL-IP of the frontend service:

$ kubectl get services --namespace todos -o wide
NAME       TYPE           CLUSTER-IP      EXTERNAL-IP                                      PORT(S)          AGE       SELECTOR
backend    LoadBalancer   100.67.216.28   REMOVED-1300058055.us-east-1.elb.amazonaws.com   3001:31518/TCP   5d        app=master
frontend   LoadBalancer   100.71.35.107   REMOVED-139397830.us-east-1.elb.amazonaws.com    80:31307/TCP     6d        app=master

We hope you’ve enjoyed this post; we’ve covered a lot of ground! There are several improvements we plan to make on the code generation side, including separating the OpenAPI specification into its own module and reworking the route generation process. Stay tuned for these updates!

Thanks to Fyodor Yakimchouk for his work on the code generation and Todos application. Much of this post is based on a presentation Fyodor gave on type-safety with React. Thanks as well to Igor Khripunov for his work on Jenkins.

