Deploying a Node.js app to a Microsoft Azure Web app

Introduction

The project I’m currently working on uses Angular2 on the front end and Node.js on the backend. The backend is an Express app that wraps a GraphQL API. One of the things we got working very early on was our automated build and release pipeline. We are using Visual Studio Team Services to orchestrate the build and deployment process. In the initial phases of the project we were using MS Azure as our cloud provider – it is relatively easy to deploy to Azure but we encountered some gotchas which I thought were worth sharing.

Build

Our build definition consists of the following steps:

  1. Get Source from Git
  2. “npm install” to install packages
  3. “npm test” to run unit tests
  4. Publish test results – we used Jasmine as the test framework and the jasmine-reporters package to output the results in JUnit XML format, which VSTS can render as a nice test report (see the sketch after this list).
  5. “npm run build” to build the Node JS app using babel.
  6. Archive and copy release to VSTS.
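
For step 4, here is a minimal sketch (not our exact configuration – the savePath folder name is just an example) of wiring jasmine-reporters into a Jasmine helper file so that running the tests writes JUnit XML for VSTS to publish:

const reporters = require('jasmine-reporters');

// Write one JUnit XML file per spec file into the "testresults" folder
jasmine.getEnv().addReporter(new reporters.JUnitXmlReporter({
  savePath: 'testresults',
  consolidateAll: false,
}));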

Release

Our release definition consists of the following steps:

  1. Get the latest build artefact
  2. Azure app service deploy the artefact

Gotchas

Things didn’t work first time! Documentation was out-of-date (some of the MS documentation hadn’t been updated for 2 years!). Initially it seemed every route we took wasn’t quite right.

NodeJS apps deployed to an Azure Web App actually run in IIS via iisnode. Communication from iisnode to Node.js is via a named pipe (this isn’t important but is useful to know). It’s easy enough to get your app on to Azure, but I found that the build and release pipeline required a number of tweaks which weren’t apparent from the documentation.
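
The practical upshot of the named pipe is that your app must listen on whatever process.env.PORT contains rather than a hard-coded port number. A minimal sketch, assuming an Express entry point called server.js:

import express from 'express';

const app = express();
app.get('/', (req, res) => res.send('Hello from iisnode'));

// iisnode supplies the named pipe via the PORT environment variable;
// the fallback port is only used when running locally
app.listen(process.env.PORT || 3000);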

The following tweaks were needed in our build and release pipeline:

  • node_modules needed to be packaged up with the build. The archive created by our build process included the node_modules that were installed as part of the “npm install” task. A few MS articles around Git deploy suggested that packages referenced in the package.json file would be downloaded automatically on deployment, but this didn’t seem to work for our particular deployment technique.
  • There is some crucial web.config required to configure iisnode:
    <?xml version="1.0" encoding="utf-8"?>
    <!--
      This configuration file is required if iisnode is used to run node processes behind
      IIS or IIS Express. For more information, visit:
      https://github.com/tjanczuk/iisnode/blob/master/src/samples/configuration/web.config
    -->
    
    <configuration>
      <system.webServer>
        <!-- Visit http://blogs.msdn.com/b/windowsazure/archive/2013/11/14/introduction-to-websockets-on-windows-azure-web-sites.aspx for more information on WebSocket support -->
        <webSocket enabled="false" />
        <handlers>
          <!-- Indicates that the server.js file is a node.js site to be handled by the iisnode module -->
          <add name="iisnode" path="server.js" verb="*" modules="iisnode"/>
        </handlers>
        <rewrite>
          <rules>
            <!-- Redirect all requests to https -->
            <!-- http://stackoverflow.com/questions/21788863/url-rewrite-http-to-https-in-iisnode -->
            <rule name="HTTP to Prod HTTPS redirect" stopProcessing="true">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTPS}" pattern="off" ignoreCase="true" />
              </conditions>
              <action type="Redirect" redirectType="Found" url="https://{HTTP_HOST}/{R:1}" />
            </rule>
    
            <!-- Do not interfere with requests for node-inspector debugging -->
            <rule name="NodeInspector" patternSyntax="ECMAScript" stopProcessing="true">
              <match url="^server.js\/debug[\/]?" />
            </rule>
    
            <!-- First we consider whether the incoming URL matches a physical file in the /public folder -->
            <rule name="StaticContent">
              <action type="Rewrite" url="public{REQUEST_URI}"/>
            </rule>
    
            <!-- All other URLs are mapped to the node.js site entry point -->
            <rule name="DynamicContent">
              <conditions>
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="True"/>
              </conditions>
              <action type="Rewrite" url="server.js"/>
            </rule>
          </rules>
        </rewrite>
    
        <!-- 'bin' directory has no special meaning in node.js and apps can be placed in it -->
        <security>
          <requestFiltering>
            <hiddenSegments>
              <remove segment="bin"/>
            </hiddenSegments>
          </requestFiltering>
        </security>
    
        <!-- Make sure error responses are left untouched -->
        <httpErrors existingResponse="PassThrough" />
    
        <!--
          You can control how Node is hosted within IIS using the following options:
            * watchedFiles: semi-colon separated list of files that will be watched for changes to restart the server
            * node_env: will be propagated to node as NODE_ENV environment variable
            * debuggingEnabled - controls whether the built-in debugger is enabled
          See https://github.com/tjanczuk/iisnode/blob/master/src/samples/configuration/web.config for a full list of options
        -->
        <iisnode watchedFiles="web.config;*.js"/>
      </system.webServer>
    </configuration>
    

The most important line in the config is this:

<add name="iisnode" path="server.js" verb="*" modules="iisnode"/>

This sets iisnode as the handler for server.js. If your main file isn’t called server.js, you’ll need to change this path accordingly – e.g. to app.js or index.js.

The project I’m working on uses WebSockets for some real-time communication. If you want Node.js to handle WebSockets then, rather oddly, you must tell IIS to disable its own WebSocket support – hence the webSocket enabled="false" element in the config above.
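
For illustration, here’s a sketch of a Node.js server that handles the WebSocket upgrade itself (using the ws package – not necessarily what we used) while IIS WebSockets are switched off:

import express from 'express';
import http from 'http';
import WebSocket from 'ws';

const app = express();
const server = http.createServer(app);

// Node, not IIS, accepts the WebSocket upgrade on the same pipe/port
const wss = new WebSocket.Server({ server });
wss.on('connection', (socket) => {
  socket.on('message', (message) => socket.send(`echo: ${message}`));
});

server.listen(process.env.PORT || 3000);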

And the bad news…

iisnode totally cripples the performance of Node.js. We found that Node running on “bare metal” (an AWS t2.micro instance) was up to 4 times faster than the same Node service deployed as a web app on Azure. Worse still, the bare-metal deployment could outperform 4 load-balanced S2 web app instances on Azure 😦

Finally, why did we choose AWS over Azure?

In the end, we actually chose to switch entirely to Amazon Web Services (AWS) – here are a few reasons why.

I’ve used MS Azure for a while now for both production applications and proofs of concept. Generally I’ve enjoyed using it; it has a good portal and lots of great features – Azure Search and Azure SQL to name a few. But in my experience, Azure seems to work well for .NET applications and less so for non-.NET solutions.

My main gripes with Azure are around account management and lack of database choice (unless you are willing to manage the DB yourself). The MS account system is a total mess! I have 2 or 3 different MS accounts – some are work ones, some personal – all because MS have a totally inconsistent account system. Some services (like Azure) can be tied to Active Directory and others (MSDN subscriptions) can’t. I just find myself in a mess of choosing which account to log in with today and whether my system administrator has control over my permissions to the service I’ve logged in to.

AWS has really thought through its permissions model; it’s complex but really flexible. They have user accounts, roles and resource policies. I’ve only been using AWS for a year or so, but I totally got the permissions model after provisioning a few databases and virtual machines.

For the new project I’m working on, we were toying with a NoSQL solution – such as ArangoDB. My company (myself included) is more familiar with RDBMS solutions – typically using MS SQL Server for most products. Moving to a NoSQL solution would be a little risky – so as part of an investigation stage of the project we looked at RDBMSs with document-db-style support. I’ve been a fan of Postgres for a while, but hadn’t realised how many brilliant features and good performance characteristics it has. Although only anecdotal – we found Postgres on an AWS RDS t2.micro instance to be much faster than a basic Azure SQL instance. For us, on this application, database choice was extremely important and Azure (at the time of writing) didn’t offer a managed instance of Postgres (or anything other than MS SQL Server).

The final reason was AWS Lambda functions. AWS Lambdas are far superior to Azure Functions. A brief prototype into each proved it was quite easy to convert a fairly complex Node.js app into a Lambda function; I couldn’t get the equivalent app working at all reliably as an Azure Function. This seems to follow my main point – write a .NET app and Azure Functions work well. Try a Python or Node.js app and see if you can even get it working…

Getting a job in software development

I have been developing commercial software since graduating from university in 2003. I started at NBS in 2005 and in the 12 years I’ve worked there I’ve progressed from Graduate Software Developer to NBS Labs manager. As part of this role, I help recruit graduate software developers on to our Graduate Software Developer Scheme. Each year we look to employ 2-3 graduates. The graduates we employ typically studied at one of our local universities – Newcastle, Northumbria, Sunderland or Teesside.

The cost of attending university has increased significantly since the introduction of tuition fees back in 1998. Students can pay around £9000 a year for their degree. I’m going to be a little controversial and say that what is really surprising to me, as a recruiter, is that I’ve not seen an increase in the quality of candidates coming through. Instead, I am seeing graduates who have done a little bit of coding – Java, C# – in year 1 and nothing since.

A core part of our recruitment process is a technical test, which many applicants struggle with. So I thought I’d write a quick blog post to give a bit of an insight into what I look for when recruiting graduates.

Review of CVs

The first step in the recruitment process is to sift through CVs. It’s quite common for graduate CVs to look very similar, after all graduates are at the beginning of their careers and have very little commercial experience.

An important thing to realise when you write your CV is that all the students on your course could potentially be applying for the same job. In addition to this, a similar number of students from other local universities might also be applying. How can you make sure that your CV stands out so you make the short list?

  • You are applying for a software development position, so make sure you cover your knowledge and experience of key technologies. Also cover group projects and final year projects – with an overview of what the project was, your role and what technologies were used.
  • If you’ve been on a placement as part of your degree you should have some excellent examples of work done whilst on placement and technologies used. In many ways you already have something to make your CV stand out.
  • Demonstrate your abilities and interest in computing – include hyperlinks to personal websites (written using a full development stack and ideally backed by a data store) to showcase your work, blogs, GitHub repositories.

Practical test

From the CVs, we build a short list of candidates to invite in for the first stage of our interview process – a programming test. Our test is fairly simple and looks for a demonstration of basic programming skills (a minimal illustrative sketch follows the list) such as:

  • Interpreting requirements
  • Coding using a good programming style
  • Reading user input
  • Breaking up a problem into reusable functions
  • Demonstration of code reuse in a loop
  • Reading a file and analysing data within it
  • And ideally showing some initiative – like writing some unit tests and handling exceptions
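
Something along these lines – a hypothetical sketch in Node.js (the actual test is different) that reads a data file, splits the work into small functions and does some simple analysis:

const fs = require('fs');

// Turn the raw file contents into an array of numbers, skipping blank lines
function parseScores(text) {
  return text
    .split('\n')
    .filter(line => line.trim().length > 0)
    .map(line => Number(line));
}

// Average a list of numbers
function average(numbers) {
  const total = numbers.reduce((sum, n) => sum + n, 0);
  return total / numbers.length;
}

try {
  const scores = parseScores(fs.readFileSync('scores.txt', 'utf8'));
  console.log(`Read ${scores.length} scores, average ${average(scores)}`);
} catch (err) {
  console.error(`Could not analyse file: ${err.message}`);
}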

This sounds simple doesn’t it? But we see many graduates struggle with this test even though it is probably less difficult than something you will have done in a programming seminar at university.

My advice is always a reminder that many software development job interviews will require the completion of a test. Java, .NET, Python and Node.js are all free to download, and IDEs like the excellent JetBrains IntelliJ IDEA or Visual Studio offer community editions. There are also loads of coding vlogs on YouTube. Practise, practise, practise the basics before you start applying for jobs.

Interview

The tests are code-reviewed by a mix of senior developers and developers to get feedback. If the candidate has written a good solution they are invited back for a formal interview as the final stage of the process.

You can find lots of really good advice on the Internet about how to interview well – but my advice is try not to panic and remember that the interview is NOT a test to catch you out. It’s a conversation between you and your potential new employer to discuss your knowledge, skills and passion for a career in software development and for you to decide if the company is the right fit for you.

Turtle Minesweeper

I’ve mentioned a few times on various blog posts that I got my first Mac back in 1999. It was an iMac G3 266MHz running Mac OS 8.6. I loved that computer; it was bulletproof compared to previous Windows PCs I’d owned. One thing I missed when moving to the Mac, however, was Windows Minesweeper. I played the game quite a lot and couldn’t find a clone that did the Windows version justice.

During my time at college I wrote a number of applications in Visual Basic 6 and was quite comfortable with how it worked. The equivalent development environment on the Mac was REALbasic (which has since become Xojo), which I started using in my spare time whilst studying at university. In the summer of 2001 I set about creating a Minesweeper clone for the Mac.

I recently bought a new MacBook Pro (2016 model) and was curious to see if the app still worked on macOS Sierra (it was originally written for Classic Mac OS 8.6 -> 9.x and Mac OS X 10.1 -> 10.5). I stopped supporting it back in 2006 so wasn’t really expecting much.

Analysis and design

Whilst restoring files to the new MacBook Pro, I found the TurtleMine folder containing all the analysis and design documentation I’d written. At university, a few of my modules were about systems analysis and design using UML. I’d tried to apply what I’d learnt to the Minesweeper app.

I’d written a few use cases:

Example use case for uncovering a square

Created some wireframes:

Wireframes

Sequence diagrams:

Sequence diagram for uncovering a square on the minefield

And state charts:

State chart for a square on the minefield

All in MS Excel! I must have been a glutton for punishment. Nowadays, I like to use tools like Pencil for wireframes, and Visual Paradigm for UML diagrams.

Running the game

I opened up the latest release of the source code I could find and tried to double click the Turtle Mine application icon:

And to my total surprise, the app ran!

Download

I don’t have my Turtle Soft website any more, but thought macOS people might still like to be able to download Turtle Mine. If you’d like a copy, you can download Turtle Mine from the Dropbox link below:

https://www.dropbox.com/s/p1k87xe02b7an5o/TurtleMine.zip?dl=0

Shairport Sync

My son Max likes to listen to music at night; the music helps him sleep. When Max moved in to his own room, I bought an Airport Express so that I could stream music from my iPad to his room. Just recently, the Airport Express gave up the ghost – I think it overheated and something blew, as all it would do was show a steady yellow light.

Fortunately, I have a number of Raspberry Pis lying around and suspected that there would be an open source solution to replace the Airport Express at half the cost. During my Googling I happened upon Shairport Sync. The original Shairport software is unmaintained, but a number of forks – Shairport Sync among them – still are.

I tried my Raspberry Pi 3 first as that has a headphone jack, but when I got Shairport Sync working I noticed that the sound quality from the Pi 3 was really poor. A week or so later, the Raspberry Pi Zero W was released. I decided to get one for Max, along with a hi-fi Digital to Analogue Converter (DAC) to address the issue with sound quality. Some soldering was required to attach the 40-pin header to both the Pi’s GPIO pins and the DAC.

Then came the installation of the Shairport Sync software.

Step 1

In a terminal on the Pi, run the following commands:

sudo apt-get install build-essential git xmltoman
sudo apt-get install autoconf automake libtool libdaemon-dev libasound2-dev libpopt-dev libconfig-dev
sudo apt-get install avahi-daemon libavahi-client-dev
sudo apt-get install libssl-dev

Step 2

Get the shairport sync software from GitHub:

git clone https://github.com/mikebrady/shairport-sync.git
cd shairport-sync

Step 3

Create a shairport sync group and user:

getent group shairport-sync &>/dev/null || sudo groupadd -r shairport-sync >/dev/null
getent passwd shairport-sync &> /dev/null || sudo useradd -r -M -g shairport-sync -s /usr/bin/nologin -G audio shairport-sync >/dev/null

Step 4

Configure and compile the software (this will take a few minutes on the Pi Zero):

autoreconf -i -f
./configure --sysconfdir=/etc --with-alsa --with-avahi --with-ssl=openssl --with-metadata --with-systemd
make
sudo make install
sudo systemctl enable shairport-sync
chmod 755 ./scripts/shairport-sync
sudo cp ./scripts/shairport-sync /etc/init.d/shairport-sync
sudo update-rc.d shairport-sync defaults 90 10

Step 5

Edit the shairport-sync.conf file and set any defaults – such as the name the Pi will advertise as an AirPlay speaker:

sudo vi /etc/shairport-sync.conf
general =
{
  name = "Raspberry Pi Zero W";
};

Finally, restart the Pi:

sudo shutdown -r now

And hopefully, the Pi will appear in iTunes:

Pi appears as an AirPlay speaker in iTunes

Autodesk Forge Viewer and Angular2

I’ve been using the Autodesk Forge Viewer quite a bit lately to integrate 3D building models within various prototype applications. Until now I had only used the Forge Viewer with plain JavaScript (or a bit of jQuery). I recently tried to integrate the viewer within an Angular 2 application and thought I’d share my solution – as I was unable to find any examples when I did a quick Google search.

Angular2 (now just called Angular) is a rewrite of the AngularJS framework. A key difference is that Angular2 moves away from the MVC pattern in favour of components and the shadow DOM. Although not a requirement, Angular2 recommends the use of TypeScript to more strongly type JavaScript, with a view to helping the maintainability of large applications. Angular is just JavaScript, so it’s not difficult to integrate external JavaScript libraries with it – you just have to follow particular conventions to get these libraries to work. The solution to integrating the Forge Viewer is very similar to some of the React samples on GitHub.

Step 1

After creating a new Angular app via angular-cli, add the required JS includes to index.html:

<script src="https://developer.api.autodesk.com/viewingservice/v1/viewers/three.min.js?v=v2.13"></script>
<script src="https://developer.api.autodesk.com/viewingservice/v1/viewers/viewer3D.min.js?v=v2.13"></script>

Note that I’m going to use the headless Forge Viewer in this example – so I don’t need to include the Forge Viewer’s CSS.

Step 2

Create a new component using angular-cli:

ng generate component forge-viewer

Add the following to forge-viewer.component.html:

<div #viewerContainer class="viewer">
</div>

This provides a Div for the Forge Viewer to render into. We need to add a #viewerContainer reference within the Div so that we can obtain an ElementRef, giving the Forge Viewer a DOM element to bind to. Add the following CSS to forge-viewer.component.css:

.viewer {
  position: relative;
  width: 100%;
  height: 450px;
}

Step 3

We’ve done the basic setup; we now need to add the main functionality to forge-viewer.component.ts.

import { Component, ViewChild, OnInit, OnDestroy, ElementRef } from '@angular/core';

// We need to tell TypeScript that Autodesk exists as a variable/object somewhere globally
declare const Autodesk: any;

@Component({
  selector: 'forge-viewer',
  templateUrl: './forge-viewer.component.html',
  styleUrls: ['./forge-viewer.component.scss'],
})
export class ForgeViewerComponent implements OnInit, OnDestroy {
  @ViewChild('viewerContainer') viewerContainer: any;
  private viewer: any;

  constructor(private elementRef: ElementRef) { }

...

There are a couple of lines above that are crucially important. We’ve included the Autodesk Viewer script from Autodesk’s servers – this creates a global Autodesk object. We don’t have any TypeScript typings for this object (.d.ts files); at the time of writing, there were no definitions on the DefinitelyTyped repository. TypeScript is just a superset of JavaScript, so it’s not a problem that we don’t have a typings file. All we need to do is declare an Autodesk variable:

declare const Autodesk: any;

This tells the TypeScript compiler that somewhere globally there is an object called Autodesk.

Also important is a reference to the Div we want to render the viewer in:

@ViewChild('viewerContainer') viewerContainer: any;

Step 4

We’ll now create an instance of the Forge Viewer – we’ll need to do this once the component has been initialised AND our hosting Div has been rendered in the DOM. We’ll use the ngAfterViewInit lifecycle hook:

ngAfterViewInit() {
  this.launchViewer();
}

private getAccessToken(onSuccess: any) {
  const { access_token, expires_in } = // Your code to get a token
  onSuccess(access_token, expires_in);
}

private launchViewer() {
  if (this.viewer) {
    // Viewer has already been initialised
    return;
  }

  const options = {
    env: 'AutodeskProduction',
    getAccessToken: (onSuccess) => { this.getAccessToken(onSuccess) },
  };

  // For a headless viewer
  this.viewer = new Autodesk.Viewing.Viewer3D(this.viewerContainer.nativeElement, {});
  // For a viewer with UI
  // this.viewer = new Autodesk.Viewing.Private.GuiViewer3D(this.viewerContainer.nativeElement, {});

  Autodesk.Viewing.Initializer(options, () => {
    // Initialise the viewer and load a document
    this.viewer.initialize();
    this.loadDocument();
  });
}

private loadDocument() {
  const urn = `urn:${//document urn}`;

  Autodesk.Viewing.Document.load(urn, (doc) => {
    // Get views that can be displayed in the viewer
    const geometryItems = Autodesk.Viewing.Document.getSubItemsWithProperties(doc.getRootItem(), {type: 'geometry'}, true);

    if (geometryItems.length === 0) {
      return;
    }

    // Example of adding event listeners
    this.viewer.addEventListener(Autodesk.Viewing.GEOMETRY_LOADED_EVENT, this.geometryLoaded);
    this.viewer.addEventListener(Autodesk.Viewing.SELECTION_CHANGED_EVENT, (event) => this.selectionChanged(event));

    // Load view in to the viewer
    this.viewer.load(doc.getViewablePath(geometryItems[0]));
  }, errorMsg => console.error(errorMsg));
}

private geometryLoaded(event: any) {
  const viewer = event.target;

  viewer.removeEventListener(Autodesk.Viewing.GEOMETRY_LOADED_EVENT, this.geometryLoaded);

  // Example - set light preset and fit model to view
  viewer.setLightPreset(8);
  viewer.fitToView();
}

private selectionChanged(event: any) {
  const model = event.model;
  const dbIds = event.dbIdArray;

  // Get properties of object
  this.viewer.getProperties(dbIds[0], (props) => {
    // Do something with properties.
  });
}

ngOnDestroy() {
  // Clean up the viewer when the component is destroyed
  if (this.viewer && this.viewer.running) {
    this.viewer.removeEventListener(Autodesk.Viewing.SELECTION_CHANGED_EVENT, this.selectionChanged);
    this.viewer.tearDown();
    this.viewer.finish();
    this.viewer = null;
  }
}

A lot of the code is very similar to how you’d instantiate the viewer via plain JavaScript. The following line creates a new instance of the viewer in the Div of our component template:

this.viewer = new Autodesk.Viewing.Viewer3D(this.viewerContainer.nativeElement, {});

The rest of the code just loads a document and demonstrates how events can be bound.

Gotchas

Whilst working on this prototype, I encountered one gotcha. I could successfully create an instance of the viewer and load a model into it. My application had simple routing – when I navigated away from the route where the viewer was hosted to another route, and then back again, the viewer wouldn’t display. It seemed that the viewer library thought it had already been initialised, so it skipped initialisation and went straight to loading the model… which didn’t work because there was no longer an instance of the viewer.

My solution to the problem isn’t as elegant as I wanted, but does work:

this.viewer = new Autodesk.Viewing.Viewer3D(this.viewerContainer.nativeElement, {}); // Headless viewer

// Check if the viewer has already been initialised - this isn't the nicest, but we've set the env in our
// options above so we at least know that it was us who did this!
if (!Autodesk.Viewing.Private.env) {
  Autodesk.Viewing.Initializer(options, () => {
    this.viewer.initialize();
    this.loadDocument();
  });
} else {
  // We need to give an initialised viewing application a tick to allow the DOM element to be established before we re-draw
  setTimeout(() => {
    this.viewer.initialize();
    this.loadDocument();
  });
}

The second time our component loads, Autodesk.Viewing.Private.env will already be set (we set it!). So we simply call initialize on the viewer and load the model. This didn’t work first time – but adding a setTimeout gave Angular a tick to sort out DOM binding/its update cycle before attempting to load the viewer.

The full forge-viewer.component.ts file

import { Component, ViewChild, OnInit, OnDestroy, ElementRef, Input } from '@angular/core';

// We need to tell TypeScript that Autodesk exists as a variable/object somewhere globally
declare const Autodesk: any;

@Component({
  selector: 'forge-viewer',
  templateUrl: './forge-viewer.component.html',
  styleUrls: ['./forge-viewer.component.scss'],
})
export class ForgeViewerComponent implements OnInit, OnDestroy {
  private selectedSection: any = null;
  @ViewChild('viewerContainer') viewerContainer: any;
  private viewer: any;

  constructor(private elementRef: ElementRef) { }

  ngOnInit() {
  }

  ngAfterViewInit() { 
    this.launchViewer();
  }

  ngOnDestroy() {
    if (this.viewer && this.viewer.running) {
      this.viewer.removeEventListener(Autodesk.Viewing.SELECTION_CHANGED_EVENT, this.selectionChanged);
      this.viewer.tearDown();
      this.viewer.finish();
      this.viewer = null;
    }
  }

  private launchViewer() {
    if (this.viewer) {
      return;
    }

    const options = {
      env: 'AutodeskProduction',
      getAccessToken: (onSuccess) => { this.getAccessToken(onSuccess) },
    };

    this.viewer = new Autodesk.Viewing.Viewer3D(this.viewerContainer.nativeElement, {}); // Headless viewer
 
    // Check if the viewer has already been initialised - this isn't the nicest, but we've set the env in our
    // options above so we at least know that it was us who did this!
    if (!Autodesk.Viewing.Private.env) {
      Autodesk.Viewing.Initializer(options, () => {
        this.viewer.initialize();
        this.loadDocument();
      });
    } else {
      // We need to give an initialised viewing application a tick to allow the DOM element to be established before we re-draw
      setTimeout(() => {
        this.viewer.initialize();
        this.loadDocument();
      });
    }
  }

  private loadDocument() {
    const urn = `urn:${// model urn}`;

    Autodesk.Viewing.Document.load(urn, (doc) => {
      const geometryItems = Autodesk.Viewing.Document.getSubItemsWithProperties(doc.getRootItem(), {type: 'geometry'}, true);

      if (geometryItems.length === 0) {
        return;
      }

      this.viewer.addEventListener(Autodesk.Viewing.GEOMETRY_LOADED_EVENT, this.geometryLoaded);
      this.viewer.addEventListener(Autodesk.Viewing.SELECTION_CHANGED_EVENT, (event) => this.selectionChanged(event));

      this.viewer.load(doc.getViewablePath(geometryItems[0]));
    }, errorMsg => console.error(errorMsg));
  }

  private geometryLoaded(event: any) {
    const viewer = event.target;

    viewer.removeEventListener(Autodesk.Viewing.GEOMETRY_LOADED_EVENT, this.geometryLoaded);
    viewer.setLightPreset(8);
    viewer.fitToView();
    // viewer.setQualityLevel(false, true); // Getting rid of Ambientshadows to false to avoid blackscreen problem in Viewer.
  }

  private selectionChanged(event: any) {
    const model = event.model;
    const dbIds = event.dbIdArray;

    // Get properties of object
    this.viewer.getProperties(dbIds[0], (props) => {
       // Do something with properties
    });
  }

  private getAccessToken(onSuccess: any) {
    const { access_token, expires_in } = // get token
    onSuccess(access_token, expires_in);
  }
}

GraphQL

Last month I explained how we are using NodeJS/Express, GraphQL and Sequelize to prototype a new project at work. Although I’ve been extremely busy over the last few weeks, I wanted to continue the topic by exploring how to add a GraphQL API over the top of our Sequelize store.

During brainstorming of technologies for the new project, an extremely knowledgeable colleague, who is also the project lead, suggested checking out Facebook’s (fairly) recently open sourced GraphQL. Over the years, I’ve created a few web services using various technologies – from SOAP-based services such as ASMX Web Services and WCF to REST services using ASP.NET WebAPI, OData, Nancy FX and SailsJS.

My team do a lot of prototyping and feasibility studies, and we often have to start by creating CRUD data layers. Upon reading about GraphQL, I could see it looked to address a common problem I’ve seen – where the REST API is built around the structure of the data, often leading to very “talkie” APIs. In other instances, REST endpoints are coded more around how the client will consume the data – and as a result often return huge JSON payloads to circumvent the performance issues of the “talkie” API.

GraphQL focuses more on how the data looks and the queries/mutations you wish to allow on that data. This looked perfect for prototyping as we could define our objects and the client can make ad-hoc queries (that are validated against the schema we’ve defined).

In this blog post, I wanted to give a very basic overview of adding a GraphQL layer over the Sequelize data layer we built last time. The GraphQL service we will build will allow us to query classifications and classification items.

Step 1

We need to add a few more packages to our node app. We will be using express to host GraphQL in a web app.

npm install express --save
npm install graphql express-graphql --save

Step 2

We need to create a *very* basic express application. We will add an app.js file to the project, which will look like this:

import express from 'express';

const app = express();

app.listen(3000, () => console.log('Now listening on localhost:3000'));

NOTE: I’m a big fan of Babel; in our prototype at work we’re using babel-node for local development and transpiling for deployment to our test server. I’ve used it in the above example to provide support for ES6, and would highly recommend it if you want all of the nice ES6 features without worrying about which version of NodeJS is installed.

The server will launch with the command node app.js (or babel-node app.js during development if you’re using Babel as above). If we visit the URL http://localhost:3000 we won’t see much!

Step 3

Next we are going to define our GraphQL schema – this will comprise the objects that can be queried (and how they will resolve their underlying data) and the queries that can be performed.

Our API is very simple: we’re going to allow users to query classifications and classification items. We’ll start by creating a file called schema.js and adding Classification and ClassificationItem objects.

import { GraphQLString, GraphQLInt, GraphQLList, GraphQLSchema, GraphQLObjectType } from 'graphql';
import * as Db from './db';

const Classification = new GraphQLObjectType({
  name: 'Classification',
  description: 'This represents a Classification',
  fields() {
    return {
      title: {
        type: GraphQLString,
        resolve: ({ title }) => title,
      },
      publisher: {
        type: GraphQLString,
        resolve: ({ publisher }) => publisher,
      },
      classificationItems: {
        type: new GraphQLList(ClassificationItem),
        resolve: (classification) => {
          // Used sequelize to resolve classification items from the database
          return classification.getClassificationItems({ where: { parentId: null } });
        },
      },
    };
  },
});

const ClassificationItem = new GraphQLObjectType({
  name: 'ClassificationItem',
  description: 'This represents a Classification Item',
  fields() {
    return {
      notation: {
        type: GraphQLString,
        resolve: ({ notation }) => notation,
      },
      title: {
        type: GraphQLString,
        resolve: ({ title }) => title,
      },
      classificationItems: {
        type: new GraphQLList(ClassificationItem),
        resolve: (classification) => {
          // Used sequelize to resolve classification items from the database
          return classification.getClassificationItems({ where: { parentId: classification.id } });
        },
      },
    };
  },
});

The main thing to note is the resolve method – this tells GraphQL how to resolve the data requested. In the above example there are two kinds of field: basic scalars, which are resolved by returning properties of the results fetched by Sequelize, and the couple of relationships we’ve modelled to get child classification items. To resolve the relationships, we need to use Sequelize to return the child records from the database.

Step 4

Then we define the queries we want to support on our objects. We’ll allow clients to query classifications on title and classification items on notation, parentId and classificationId:

const classificationQuery = {
  type: new GraphQLList(Classification),
  args: {
    title: {
      type: GraphQLString,
    },
  },
  resolve(root, args) {
    return Db.Classification.findAll({ where: args });
  },
};

const classificationItemQuery = {
  type: new GraphQLList(ClassificationItem),
  args: {
    notation: {
      type: GraphQLString,
    },
    parentId: {
      type: GraphQLInt,
    },
    classificationId: {
      type: GraphQLInt,
    },
  },
  resolve(root, args) {
    return Db.ClassificationItem.findAll({ where: args });
  },
};

const QUERIES = new GraphQLObjectType({
  name: 'Query',
  description: 'Root Query Object',
  fields() {
    return {
      // We support the following queries
      classification: classificationQuery,
      classificationItem: classificationItemQuery,
    };
  },
});

const SCHEMA = new GraphQLSchema({
  query: QUERIES,
});

export default SCHEMA;

Finally we create and export the GraphQLSchema with the queries we defined.

Step 5

We now have to add graphql to the express service we created, using the express-graphql package – by adding a few more lines to app.js:

import express from 'express';
import graphqlHTTP from 'express-graphql';
import schema from './schema';

const app = express();

app.use('/graphql', graphqlHTTP({
  schema: schema,
  // Enable UI
  graphiql: true,
}));

app.listen(3000, () => console.log('Now listening on localhost:3000'));

We’ve created a new express endpoint called /graphql and have attached the graphqlHTTP handler with the schema we declared in the previous steps. We have also enabled the graphiql UI. If you run the service and navigate to http://localhost:3000/graphql, you’ll see the UI.

Graphiql

Graphiql is a fab front end to test out your GraphQL queries and read documentation about the capabilities of the API, and it shows off some of the nice GraphQL features such as checking that a query is valid (e.g. that the API supports the fields being queried).
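
You can also exercise the schema programmatically. As a minimal sketch (assuming the database has been seeded with the Uniclass 2015 data as in the Sequelize post), the graphql package’s graphql() helper executes a query string directly against the schema:

import { graphql } from 'graphql';
import schema from './schema';

const query = `
  {
    classification(title: "Uniclass 2015") {
      title
      publisher
      classificationItems {
        notation
        title
      }
    }
  }
`;

// Execute the query against the schema without an HTTP round trip
graphql(schema, query).then((result) => {
  console.log(JSON.stringify(result, null, 2));
});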

Summary

This has been a quick write up of getting up and running with Express, GraphQL and Sequelize. It only scratches the surface of GraphQL – in this example we’ve only looked at reading data not mutating it. So far, we’ve been really impressed with GraphQL and have found it really good to work with. On the client, we’ve been looking at the Apollo GraphQL client which offers some nice features out of the box – including integration of a Redux store, query caching and some nice Chrome developer tools…but maybe more on this in a later post.

Sequelize

It’s a new year and we’ve started some new projects at work. Over the next few months I’m working on a project to move our specification products on to newer technologies. Traditionally, I’ve worked mostly with a Microsoft stack – SQL Server and .NET (EntityFramework, WinForms or WebAPI and ASP.NET). However, a hobby of mine (and part of my role at work) is to keep tabs on the latest technologies. I’ve been following the various emerging JavaScript frameworks closely over the last few years – EmberJS, Angular, VueJS, NodeJS, Express (I’ve not looked at ReactJS yet but mean to). One thing I tell everyone who will listen is to bookmark the ThoughtWorks technology radar.

For the new project, I want to use JavaScript only – Angular2 on the front end and NodeJS/Express on the back end. The main motivation is one of cost and scalability – JavaScript runs on pretty much anything, the ecosystem is full of open source solutions and the stack is now fairly mature (with successful production usage of many of the frameworks). I considered .NET Core but, from a previous prototype, the toolset isn’t mature enough yet (maybe it will be when the next version of Visual Studio is released). I also have to admit I found the whole .NET Core experience quite frustrating during that prototype, with tools being marked as RC1 (DNC, DNX etc) only to be totally re-written in RC2 (dotnet cli). There were good reasons, but the changes were so fundamental that they should have gone back to a beta/preview status.

The first area I started looking at was the backend data model, API and database. It was during reviewing GraphQL that I happened upon an excellent video by Lee Benson where he showed implementing a GraphQL API backed by a database that used Sequelize as the data access component. As mentioned, I’m used to EntityFramework so I’m familiar with ORMs – I’ve just never used an ORM written in JavaScript!

This blog post will cover a very simple example of creating a NodeJS app and Sequelize model that backs a Postgres database.

Step 1

Our first step is to create a new node app and add the necessary dependencies.

$ npm init
$ npm install sequelize --save

# Package for Postgres support
$ npm install pg --save

Step 2

We’re going to create a very simple model to store Uniclass 2015 in a database. We will model this as 2 tables:

Simple Entity-Relationship-Diagram

The classification table will store the name of the classification; the classificationItems table will store all of the entries in Uniclass 2015. ClassificationItems will be a self-referencing table so that we can model Uniclass 2015 as a tree.

Step 3

We’re going to use Atom, a fantastic text editor, to write our JavaScript. First, we need to create a new .js file to add our database model to. We’ll call this new file “db.js”.

First off, we need to import the sequelize library and create our database connection

const Sequelize = require('sequelize');

const Conn = new Sequelize(
  'classification',
  'postgres',
  'postgres',
  {
    dialect: 'postgres',
    host: 'localhost',
  }
);

Sequelize supports a number of different databases – MySQL, MariaDb, SQlite, Postgres and MS SQL Server. In this example, we’re using the Postgres provider.

Next we define our two models:

const Classification = Conn.define('classification', {
  title: {
    type: Sequelize.STRING,
    allowNull: false,
    comment: 'Classification system name'
  },
  publisher: {
    type: Sequelize.STRING,
    allowNull: true,
    description: 'The author of the classification system'
  },
});

const ClassificationItem = Conn.define('classificationItem', {
  notation: {
    type: Sequelize.STRING,
    allowNull: false,
    comment: 'Notation of the Classification'
  },
  title: {
    type: Sequelize.STRING,
    allowNull: false,
    comment: 'Title of the Classification Item'
   }
});

We use the connection to define each table. We then define the fields within that table (in our example we allow Sequelize to generate an id field and manage the primary keys).

As you’d expect, Sequelize supports a number of field data types – strings, blobs, numbers etc. In our simple example, we’ll just use strings.

Each of our fields requires a value – so we use the allowNull property to enforce that values are required. Sequelize has a wealth of other validators to check whether fields are email addresses, credit card numbers etc.
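
As a purely hypothetical sketch (this model isn’t part of the classification schema), a field can carry extra validation rules alongside allowNull:

const Contact = Conn.define('contact', {
  email: {
    type: Sequelize.STRING,
    allowNull: false,
    validate: {
      isEmail: true,   // must look like an email address
    },
  },
  age: {
    type: Sequelize.INTEGER,
    validate: {
      min: 0,          // no negative ages
    },
  },
});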

Once we have our models, we have to define the relationships between them so that Sequelize can manage our many-to-one relationships.

Classification.hasMany(ClassificationItem);
ClassificationItem.belongsTo(Classification);
ClassificationItem.hasMany(ClassificationItem, { foreignKey: 'parentId' });
ClassificationItem.belongsTo(ClassificationItem, {as: 'parent'});

We use the hasMany relationship to tell Sequelize that both Classification and ClassificationItem have many children. Sequelize automatically adds a foreign key to the child relationship and provides convenience methods to add models to the child relationship.

The belongsTo relationship allows child models to get their parent object. This provides us with a convenience method to get our parent object if we need it in our application. Sequelize allows us to control the name of the foreign key. As mentioned above, ClassificationItem is a self-referencing table to help us model the classification system as a tree. Rather than ‘classificationItemId’ being the foreign key to the parent item, I’d prefer parentId to be used instead. This would give us a getParent() method too which reads better. We achieve this by specifying the foreignKey on one side of the relationship and { as: ‘parent’ } against the other side.

Step 4

Next we get Sequelize to create the database tables and we write a bit of code to seed the database with some test data:

Conn.sync({ force: true }).then(() => {
  return Classification.create({
    title: 'Uniclass 2015',
    publisher: 'NBS'
  });
}).then((classification) => {
  return classification.createClassificationItem({
    notation: 'Ss',
    title: 'Systems'
  }).then((classificationItem) => {
    return classificationItem.createClassificationItem({
      notation: 'Ss_15',
      title: 'Earthworks systems',
      classificationId: classification.id,
      // parentId: classificationItem.id
    });
  }).then((classificationItem) => {
    return classificationItem.createClassificationItem({
      notation: 'Ss_15_10',
      title: 'Groundworks and earthworks systems',
      classificationId: classification.id
    });
  }).then((classificationItem) => {
    return classificationItem.createClassificationItem({
      notation: 'Ss_15_10_30',
      title: 'Excavating and filling systems',
      classificationId: classification.id
    });
  }).then((classificationItem) => {
    classificationItem.createClassificationItem({
      notation: 'Ss_15_10_30_25',
      title: 'Earthworks excavating systems',
      classificationId: classification.id
    });

    classificationItem.createClassificationItem({
      notation: 'Ss_15_10_30_27',
      title: 'Earthworks filling systems',
      classificationId: classification.id
    });
  });
});

The sync command creates the database tables – by specifying { force: true }, Sequelize will drop any existing tables and re-create them. This is ideal for development environments but obviously NOT production!

The rest of the code creates a classification object and several classification items. Notice that I use the createClassificationItem method so that parent ids are set automatically when inserting child records.

The resulting database looks like this:

Step 5

Now we have a model and some data, we can perform a few queries.

1. Get root level classification items:

Classification.findOne({
  where: {
   title: 'Uniclass 2015'
  }
}).then((result) => {
  return result.getClassificationItems({
    where: {
      parentId: null
    }
  })
}).then((result) => {
  result.map((item) => {
    const {notation, title} = item;
    console.log(`${notation} ${title}`);
  });
});

Output:

Ss Systems

2. Get classification items (and their children) with a particular notation:

ClassificationItem.findAll({
  where: {
    notation: {
      $like: 'Ss_15_10_30%'
    }
  }
}).then((results) => {
  results.map((item) => {
    const {notation, title} = item;
    console.log(`${notation} ${title}`);
  })
});

Output

Ss_15_10_30 Excavating and filling systems
Ss_15_10_30_25 Earthworks excavating systems
Ss_15_10_30_27 Earthworks filling systems

3. Get a classification item’s parent:

ClassificationItem.findOne({
  where: {
   id: 6
  }
}).then((result) => {
  const {notation, title} = result;
  console.log(`Child: ${notation} ${title}`);
  return result.getParent();
}).then((parent) => {
  const {notation, title} = parent;
  console.log(`Parent: ${notation} ${title}`);
});

Output

Ss_15_10_30_27 Earthworks filling systems
Ss_15_10_30 Excavating and filling systems

That was a quick whistle-stop tour of some of the basic features of Sequelize. At the moment I’m really impressed with it. The only thing that takes a bit of getting used to is working with all the promises. Promises are really powerful, but you need to think about the structure of your code to prevent lots of nested .then() calls.
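
One option, sketched below under the assumption that you’re on Node 7.6+ or transpiling with Babel (as in the GraphQL post above), is to flatten the seeding code with async/await instead of chaining then’s:

// The same seeding idea as above, flattened with async/await
const seed = async () => {
  await Conn.sync({ force: true });

  const classification = await Classification.create({
    title: 'Uniclass 2015',
    publisher: 'NBS',
  });

  // createClassificationItem still sets parentId for us
  const systems = await classification.createClassificationItem({
    notation: 'Ss',
    title: 'Systems',
  });

  await systems.createClassificationItem({
    notation: 'Ss_15',
    title: 'Earthworks systems',
    classificationId: classification.id,
  });
};

seed().catch(err => console.error(err));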