Deploying a Node.js app to a Microsoft Azure Web app

Introduction

The project I’m currently working on uses Angular2 on the front end and Node.js on the backend. The backend is an Express app that wraps a GraphQL API. One of the things we got working very early on was our automated build and release pipeline. We are using Visual Studio Team Services to orchestrate the build and deployment process. In the initial phases of the project we were using MS Azure as our cloud provider – it is relatively easy to deploy to Azure but we encountered some gotchas which I thought were worth sharing.

Build

Our build definition consists of the following steps:

  1. Get Source from Git
  2. “npm install” to install packages
  3. “npm test” to run unit tests
  4. Publish test results – we used Jasmine as the test framework, with the jasmine-reporters package to output the results in JUnit XML format. VSTS can render a nice test report from this file (a sketch of the npm scripts and reporter setup follows this list).
  5. “npm run build” to build the Node.js app using Babel.
  6. Archive and copy release to VSTS.
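For reference, steps 3–5 map on to npm scripts in our package.json. The snippets below are a sketch rather than our real configuration (paths and options are assumptions), but they show the shape of a Babel build plus a Jasmine run that emits JUnit XML via jasmine-reporters:

{
  "scripts": {
    "build": "babel src --out-dir dist --copy-files",
    "test": "jasmine"
  }
}

// spec/helpers/reporter.js – registers a JUnit XML reporter so VSTS can pick up the results
var reporters = require('jasmine-reporters');

jasmine.getEnv().addReporter(new reporters.JUnitXmlReporter({
  savePath: 'testresults',   // folder the XML is written to (assumed path)
  consolidateAll: true       // single XML file for all suites
}));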

Release

Our release definition consists of the following steps:

  1. Get the latest build artefact
  2. Azure app service deploy the artefact

Gotchas

Things didn’t work first time! The documentation was out of date (some of the MS documentation hadn’t been updated for two years!), and initially it seemed every route we took wasn’t quite right.

Node.js apps deployed to an Azure Web App actually run in IIS via iisnode. Communication from iisnode to the Node.js process is via a named pipe (this isn’t important, but it is useful to know). It’s easy enough to get your app on to Azure, but I found that the build and release pipeline required a number of tweaks which weren’t apparent from the documentation.

The following tweaks were needed in our build and release pipeline:

  • node_modules needed to be packaged up with the build. The archive created by our build process included the node_modules that were installed by the “npm install” task. A few MS articles around Git deploy suggested that packages referenced in package.json would be downloaded automatically on deployment, but this didn’t seem to work for our particular deployment technique.
  • There is some crucial web.config required to configure iisnode:
    <?xml version="1.0" encoding="utf-8"?>
    <!--
      This configuration file is required if iisnode is used to run node processes behind
      IIS or IIS Express. For more information, visit:
      https://github.com/tjanczuk/iisnode/blob/master/src/samples/configuration/web.config
    -->
    
    <configuration>
      <system.webServer>
        <!-- Visit http://blogs.msdn.com/b/windowsazure/archive/2013/11/14/introduction-to-websockets-on-windows-azure-web-sites.aspx for more information on WebSocket support -->
        <webSocket enabled="false" />
        <handlers>
          <!-- Indicates that the server.js file is a node.js site to be handled by the iisnode module -->
          <add name="iisnode" path="server.js" verb="*" modules="iisnode"/>
        </handlers>
        <rewrite>
          <rules>
            <!-- Redirect all requests to https -->
            <!-- http://stackoverflow.com/questions/21788863/url-rewrite-http-to-https-in-iisnode -->
            <rule name="HTTP to Prod HTTPS redirect" stopProcessing="true">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTPS}" pattern="off" ignoreCase="true" />
              </conditions>
              <action type="Redirect" redirectType="Found" url="https://{HTTP_HOST}/{R:1}" />
            </rule>
    
            <!-- Do not interfere with requests for node-inspector debugging -->
            <rule name="NodeInspector" patternSyntax="ECMAScript" stopProcessing="true">
              <match url="^server.js\/debug[\/]?" />
            </rule>
    
            <!-- First we consider whether the incoming URL matches a physical file in the /public folder -->
            <rule name="StaticContent">
              <action type="Rewrite" url="public{REQUEST_URI}"/>
            </rule>
    
            <!-- All other URLs are mapped to the node.js site entry point -->
            <rule name="DynamicContent">
              <conditions>
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="True"/>
              </conditions>
              <action type="Rewrite" url="server.js"/>
            </rule>
          </rules>
        </rewrite>
    
        <!-- 'bin' directory has no special meaning in node.js and apps can be placed in it -->
        <security>
          <requestFiltering>
            <hiddenSegments>
              <remove segment="bin"/>
            </hiddenSegments>
          </requestFiltering>
        </security>
    
        <!-- Make sure error responses are left untouched -->
        <httpErrors existingResponse="PassThrough" />
    
        <!--
          You can control how Node is hosted within IIS using the following options:
            * watchedFiles: semi-colon separated list of files that will be watched for changes to restart the server
            * node_env: will be propagated to node as NODE_ENV environment variable
            * debuggingEnabled - controls whether the built-in debugger is enabled
          See https://github.com/tjanczuk/iisnode/blob/master/src/samples/configuration/web.config for a full list of options
        -->
        <iisnode watchedFiles="web.config;*.js"/>
      </system.webServer>
    </configuration>
    

The most important line in the config is this:

<add name="iisnode" path="server.js" verb="*" modules="iisnode"/>

This sets iisnode as the handler for server.js. If your main file isn’t called server.js then you’ll need to change this, e.g. to app.js or index.js.
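For example, if your entry point were app.js (a hypothetical name – substitute whatever your main file is called), the handler and the DynamicContent rewrite rule would both need updating to match:

<handlers>
  <add name="iisnode" path="app.js" verb="*" modules="iisnode"/>
</handlers>
...
<rule name="DynamicContent">
  <conditions>
    <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="True"/>
  </conditions>
  <action type="Rewrite" url="app.js"/>
</rule>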

The project I’m working on uses WebSockets to do some real-time communication. If you want Node.js to handle the WebSockets then, rather oddly, you must tell IIS to disable its own WebSocket support – hence the webSocket enabled="false" line in the config above.
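To illustrate the combination: web.config keeps IIS’s WebSocket module switched off so it doesn’t intercept the upgrade requests, and the Node app handles the sockets itself. A minimal sketch, assuming socket.io (our actual real-time code is more involved):

// server.js – hypothetical socket.io setup behind iisnode
var app = require('express')();
var server = require('http').createServer(app);
var io = require('socket.io')(server);

io.on('connection', function (socket) {
  socket.emit('hello', { message: 'connected via iisnode' });
});

// iisnode supplies the listen address (a named pipe) via process.env.PORT
server.listen(process.env.PORT || 3000);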

And the bad news…

iisnode totally cripples the performance of Node.js. We found that Node running on “bare metal” (an AWS t2.micro instance) was up to 4 times faster than the same Node service deployed as a web app on Azure. Worse still, the bare metal deployment could outperform 4 load-balanced S2 web app instances on Azure 😦

Finally, why did we choose AWS over Azure?

In the end, we actually chose to switch entirely to Amazon Web Services (AWS) – here are a few reasons why.

I’ve used MS Azure for a while now, for both production applications and proofs of concept. Generally I’ve enjoyed using it: it has a good portal and lots of great features – Azure Search and Azure SQL to name a few. But in my experience, Azure seems to work well for .NET applications and less so for non-.NET solutions.

My main gripes with Azure are around account management and lack of database choice (unless you are willing to manage the DB yourself). The MS account system is a total mess! I have 2 or 3 different MS accounts – some work, some personal – all because MS have a totally inconsistent account system. Some services (like Azure) can be tied to Active Directory and others (MSDN subscriptions) can’t. I just find myself in a mess, choosing which account to log in with today and wondering whether my system administrator has control over my permissions for the service I’ve logged in to.

AWS has really thought through their permissions model: it’s complex but really flexible. They have user accounts, roles and resource policies. I’ve only been using AWS for a year or so, but I totally got their permissions model after provisioning a few databases and virtual machines.

For the new project I’m working on, we were toying with a NoSQL solution – such as ArangoDB. My company (myself included) is more familiar with RDBMS solutions – typically using MS SQL Server for most products. Moving to a NoSQL solution would be a little risky, so as part of an investigation stage of the project we looked at RDBMSs with document-db style support. I’ve been a fan of Postgres for a while, but didn’t realise just how many brilliant features and how good its performance characteristics are. Although only anecdotal, we found Postgres on an AWS RDS t2.micro instance to be much faster than a basic Azure SQL instance. For us, on this application, database choice was extremely important, and Azure (at the time of writing) didn’t offer a managed instance of Postgres (or anything other than MS SQL Server).

The final reason was AWS Lambda functions. AWS Lambdas are far superior to Azure Functions. A brief prototype into each proved it was quite easy to convert a fairly complex Node.js app into a Lambda function; I couldn’t get the equivalent app working at all reliably as an Azure Function. This seems to follow my main point – write a .NET app and Azure Functions work well. Try a Python or Node.js app and see if you can even get it working…


GraphQL

Last month I explained how we are using NodeJS/Express, GraphQL and Sequelize to prototype a new project at work. Although I’ve been extremely busy over the last few weeks, I wanted to continue the topic by exploring how to add a GraphQL API over the top of our Sequelize store.

During brainstorming of technologies for the new project, an extremely knowledgeable colleague, who is also the project lead, suggested checking out Facebook’s (fairly) recently open-sourced GraphQL. Over the years, I’ve created a few web services using various technologies – from SOAP-based services such as ASMX Web Services and WCF, to REST services using ASP.NET Web API, OData, Nancy FX and SailsJS.

My team do a lot of prototyping and feasibility studies, and we often have to start by creating CRUD data layers. Upon reading about GraphQL, I could see it looked to address a common problem I’ve seen – REST APIs built around the structure of the data, often leading to very “talkie” APIs. In other instances, REST endpoints are coded more around how the client will consume the data – and as a result often return huge JSON payloads to circumvent the performance issues of the “talkie” API.

GraphQL focuses more on how the data looks and the queries/mutations you wish to allow on that data. This looked perfect for prototyping as we could define our objects and the client can make ad-hoc queries (that are validated against the schema we’ve defined).

In this blog post, I wanted to give a very basic overview of adding a GraphQL layer over the Sequelize data layer we built last time. The GraphQL service we will build will allow us to query classifications and classification items.

Step 1

We need to add a few more packages to our node app. We will be using express to host GraphQL in a web app.

npm install express --save
npm install graphql express-graphql --save

Step 2

We need to create a *very* basic express application. We will add an app.js file to the project, which will look like this:

import express from 'express';

const app = express();

app.listen(3000, () => console.log('Now listening on localhost:3000'));

NOTE: I’m a big fan of Babel; in our prototype at work we’re using babel-node for local development and transpiling for deployment to our test server. I’ve used it in the above example to provide support for ES6, and would highly recommend it if you want all of the nice ES6 features without worrying about which version of Node.js is installed.
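For completeness, a minimal sketch of that Babel setup (the preset choice is an assumption – at the time this would typically have been babel-preset-es2015):

.babelrc:

{
  "presets": ["es2015"]
}

package.json (scripts section) – babel-node for local development, babel for the transpiled build:

"scripts": {
  "start": "babel-node app.js",
  "build": "babel . --out-dir dist --ignore node_modules"
}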

The server will launch with the command node app.js (or babel-node app.js during development, given the Babel setup above). If we visit the URL http://localhost:3000 we won’t see much!

Step 3

Next we are going to define our GraphQL schema – this will comprise the objects that can be queried (and how they resolve their underlying data) and the queries that can be performed.

Our API is very simple, we’re going to allow users to query classifications and classification items. We’ll start by creating a file called schema.js and adding Classification and ClassificationItem objects.

import { GraphQLString, GraphQLInt, GraphQLList, GraphQLSchema, GraphQLObjectType } from 'graphql';
import * as Db from './db';

const Classification = new GraphQLObjectType({
  name: 'Classification',
  description: 'This represents a Classification',
  fields() {
    return {
      title: {
        type: GraphQLString,
        resolve: ({ title }) => title,
      },
      publisher: {
        type: GraphQLString,
        resolve: ({ publisher }) => publisher,
      },
      classificationItems: {
        type: new GraphQLList(ClassificationItem),
        resolve: (classification) => {
          // Used sequelize to resolve classification items from the database
          return classification.getClassificationItems({ where: { parentId: null } });
        },
      },
    };
  },
});

const ClassificationItem = new GraphQLObjectType({
  name: 'ClassificationItem',
  description: 'This represents a Classification Item',
  fields() {
    return {
      notation: {
        type: GraphQLString,
        resolve: ({ notation }) => notation,
      },
      title: {
        type: GraphQLString,
        resolve: ({ title }) => title,
      },
      classificationItems: {
        type: new GraphQLList(ClassificationItem),
        resolve: (classification) => {
          // Used sequelize to resolve classification items from the database
          return classification.getClassificationItems({ where: { parentId: classification.id } });
        },
      },
    };
  },
});

The main thing to note is the resolve method – this tells GraphQL how to resolve the data requested. In the above example there are two kinds of fields: basic scalars, which are resolved by returning properties of the objects fetched by Sequelize, and relationships – we’ve modelled a couple of these to get child classification items. To resolve the relationships, we use Sequelize to return the child records from the database.

Step 4

Then we define the queries we want to support on our objects. We’ll allow clients to query classifications on title and classification items on notation, parentId and classificationId:

const classificationQuery = {
  type: new GraphQLList(Classification),
  args: {
    title: {
      type: GraphQLString,
    },
  },
  resolve(root, args) {
    return Db.Classification.findAll({ where: args });
  },
};

const classificationItemQuery = {
  type: new GraphQLList(ClassificationItem),
  args: {
    notation: {
      type: GraphQLString,
    },
    parentId: {
      type: GraphQLInt,
    },
    classificationId: {
      type: GraphQLInt,
    },
  },
  resolve(root, args) {
    return Db.Taxon.findAll({ where: args });
  },
};

const QUERIES = new GraphQLObjectType({
  name: 'Query',
  description: 'Root Query Object',
  fields() {
    return {
      // We support the following queries
      classification: classificationQuery,
      classificationItem: classificationItemQuery,
    };
  },
});

const SCHEMA = new GraphQLSchema({
  query: QUERIES,
});

export default SCHEMA;

Finally we create and export the GraphQLSchema with the queries we defined.
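With the schema exported, a query like the following should be possible once it’s wired up to express in the next step (the title value here is just a made-up example):

{
  classification(title: "Example classification") {
    title
    publisher
    classificationItems {
      notation
      title
      classificationItems {
        notation
        title
      }
    }
  }
}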

Step 5

We now have to add graphql to the express service we created, using the express-graphql package – by adding a few more lines to app.js:

import express from 'express';
import graphqlHTTP from 'express-graphql';
import schema from './schema';

const app = express();

app.use('/graphql', graphqlHTTP({
  schema: schema,
  // Enable UI
  graphiql: true,
}));

app.listen(3000, () => console.log('Now listening on localhost:3000'));

We’ve created a new express endpoint called /graphql and have attached the graphqlHTTP middleware with the schema we declared in the previous steps. We have also enabled the GraphiQL UI. If you run the service and navigate to http://localhost:3000/graphql, you’ll see the UI.

GraphiQL

GraphiQL is a fab front end for testing out your GraphQL queries and reading documentation about the capabilities of the API, and it shows off some of the nice GraphQL features such as checking that a query is valid (e.g. that the API supports the fields being queried).
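GraphiQL is handy for exploring, but the same endpoint also accepts ordinary HTTP requests – for example, with express-graphql’s default behaviour you can POST a query as JSON:

curl -X POST -H "Content-Type: application/json" \
  -d '{"query": "{ classification { title publisher } }"}' \
  http://localhost:3000/graphql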

Summary

This has been a quick write-up of getting up and running with Express, GraphQL and Sequelize. It only scratches the surface of GraphQL – in this example we’ve only looked at reading data, not mutating it. So far, we’ve been really impressed with GraphQL and have found it a pleasure to work with. On the client, we’ve been looking at the Apollo GraphQL client, which offers some nice features out of the box – including integration with a Redux store, query caching and some nice Chrome developer tools…but maybe more on this in a later post.

Run up to Christmas 2016

It’s been a busy few months running up to Christmas. A trip to Autodesk University in Las Vegas, followed by a family holiday to Centerparcs Cumbria. This left 3 working weeks for me running up to my Christmas break which starts today (Friday 16th December). This blog post is a quick round up of what I’ve been up to the last few weeks!

Autodesk University

I was extremely lucky to go to Autodesk University in Las Vegas this year. I went to 3 of the 4 days of the conference so that I wasn’t away from my family for the whole week. The main purpose of the trip was to demonstrate how NBS and Autodesk technologies can come together to offer innovative solutions to our customers. I also wanted to find out more about the Forge platform – where it’s going, what the pricing model is – and generally make some support contacts. I also wanted to attend some of the classes, visit the exhibitors and hopefully come away with a good haul of free stuff 🙂

I have to admit, the whole event was a total whirlwind for me. Las Vegas is an amazing place, but totally unlike anywhere I’ve ever been. Everything is massive and over-the-top. The venue and hotel I stayed at, the Venetian, for example, is home to the famous shopping arcade with gondolas in it!

The conference was equally huge – around 10,000 attendees milling about a huge exhibition hall, classes, breakout areas and labs. After a jam-packed day starting at 8am, there were loads of after-conference parties, the highlight being the AU party on the promenade, in a bar with a bowling alley and a tremendous 80s tribute band with the slightly politically incorrect name of The Spazmatics.

On the whole I got a lot out of the conference. I’m a big fan of going to conferences to get out of the office and see what’s going on in the industry. I especially liked seeing the advances in 3D printing, Computer Numeric Control (CNC) machining, Augmented Reality and Virtual Reality – hopefully I’ll get a chance to do something in this area in the next year or two.

Forge prototype

We got a load of good feedback from Autodesk off the back of Autodesk University, so over the last few weeks I’ve been adding additional functionality to our Forge viewer prototype to get it ready for a private beta test at some point next year. I’ve also learnt a load about VueJS – the JavaScript framework I used to help with some of the logic. I used VueJS 1.0 – but I still have a blog post or two to write on how to communicate between components and how to get plain JavaScript code to update Vue observables – so watch this space!

New technologies and what to expect in 2017

As well as Forge, we’ve been planning the next year at work. In the new year I’ll be working on a few exciting projects that will look to use the latest technologies to push our specification products on. I’m hoping for opportunities to do quite a few blog posts on graph databases, Angular 2 and more.

And finally…

On reflection, 2016 has been a fantastic year – at work I got the opportunity to visit Milan, San Francisco and Las Vegas. I worked on projects that used new technologies to me such as RFID readers, and technologies such as Forge. Outside of work, my wife gave birth to our baby girl, Chloe Eve Smith, who completes my wonderful family. 2017 looks to be challenging but extremely exciting times to be both in software development and working at NBS.

More Autodesk Forge

Back in August, I blogged about attending the Autodesk Forge DevCon in San Francisco. This month I’m again extremely fortunate and am attending Autodesk University in Las Vegas with work.

Since my previous blog, I’ve been busy on a proof of concept that marries our NBS Create specification product and the Autodesk Forge Viewer. There will be more to follow in the coming months, but for now I just wanted to capture a few features I implemented in case they are useful to anyone else.

1. Creating an extension that captures object selection

The application I’m prototyping needs to extract data from the model when an object is clicked. The Forge Viewer API documentation covers how to create and register an extension to get selection events etc. Adding functionality as an extension is the recommended approach for adding custom functionality to the viewer.

The data my application needs from the viewer can only be obtained when the viewer has fully loaded the model’s geometry and object tree. So we have to be sure we subscribe to the appropriate events.

Create and register the extension

function NBSExtension(viewer, options) {
  Autodesk.Viewing.Extension.call(this, viewer, options);
}

NBSExtension.prototype = Object.create(Autodesk.Viewing.Extension.prototype);
NBSExtension.prototype.constructor = NBSExtension;

Autodesk.Viewing.theExtensionManager.registerExtension('NBSExtension', NBSExtension);

Subscribe and handle the events

My extension needs to handle the SELECTION_CHANGED_EVENT, GEOMETRY_LOADED_EVENT and OBJECT_TREE_CREATED_EVENT events. The events are bound in the extension’s load method.

NBSExtension.prototype.load = function () {
  console.log('NBSExtension is loaded!');

  this.onSelectionBinded = this.onSelectionEvent.bind(this);
  this.viewer.addEventListener(Autodesk.Viewing.SELECTION_CHANGED_EVENT, this.onSelectionBinded);

  this.onGeometryLoadedBinded = this.onGeometryLoadedEvent.bind(this);
  this.viewer.addEventListener(Autodesk.Viewing.GEOMETRY_LOADED_EVENT, this.onGeometryLoadedBinded);

  this.onObjectTreeCreatedBinded = this.onObjectTreeCreatedEvent.bind(this);
  this.viewer.addEventListener(Autodesk.Viewing.OBJECT_TREE_CREATED_EVENT, this.onObjectTreeCreatedBinded);

  return true;
};

A well-behaved extension should also clean up when it’s unloaded.

NBSExtension.prototype.unload = function () {
  console.log('NBSExtension is now unloaded!');

  this.viewer.removeEventListener(Autodesk.Viewing.SELECTION_CHANGED_EVENT, this.onSelectionBinded);
  this.onSelectionBinded = null;

  this.viewer.removeEventListener(Autodesk.Viewing.GEOMETRY_LOADED_EVENT, this.onGeometryLoadedBinded);
  this.onGeometryLoadedBinded = null;

  this.viewer.removeEventListener(Autodesk.Viewing.OBJECT_TREE_CREATED_EVENT, this.onObjectTreeCreatedBinded);
  this.onObjectTreeCreatedBinded = null;

  return true;
};

When the events fire, the following functions are called to allow us to handle the event however we want:

// Event handler for Autodesk.Viewing.SELECTION_CHANGED_EVENT
NBSExtension.prototype.onSelectionEvent = function (event) {
  var currSelection = this.viewer.getSelection();

  // Do more work with current selection
};

// Event handler for Autodesk.Viewing.GEOMETRY_LOADED_EVENT
NBSExtension.prototype.onGeometryLoadedEvent = function (event) {
 
};

// Event handler for Autodesk.Viewing.OBJECT_TREE_CREATED_EVENT
NBSExtension.prototype.onObjectTreeCreatedEvent = function (event) {

};

2. Get object properties

Once we have the selected item, we can call getProperties on the viewer to get an array of all of the property key/value pairs for that object.

var currSelection = this.viewer.getSelection();

// Do more work with current selection
var dbId = currSelection[0];

this.viewer.getProperties(dbId, function (data) {
  // Find the property NBSReference 
  var nbsRef = _.find(data.properties, function (item) {
    return (item.displayName === 'NBSReference');
  });

  // If we have found NBSReference, get the display value
  if (nbsRef && nbsRef.displayValue) {
    console.log('NBS Reference found: ' + nbsRef.displayValue);
  }
}, function () {
  console.log('Error getting properties');
});

The call to this.viewer.getSelection() returns an array of dbIds (database IDs). Each dbId can be passed to the getProperties function to get the properties for that object. My extension then looks through the array of properties for an “NBSReference” property, which can be used to display the associated specification for that object.

Notice that I use Underscore.js’s _.find() function to search the array of properties. I opted for this as I found IE11 didn’t support JavaScript’s native Array.prototype.find(). I like the readability of the function, and using Underscore.js saves polyfilling it for IE11.
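For comparison, the native equivalent (which is what needs the polyfill on IE11) would be something like:

// Same lookup using Array.prototype.find – not supported natively by IE11
var nbsRef = data.properties.find(function (item) {
  return item.displayName === 'NBSReference';
});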

3. Getting area and volume information

Once the geometry is loaded from the model and the internal object tree created, it’s possible to query the properties in the model that relate to area and volume. For my prototype, I wanted to sum the area and volume of the types of objects the user has selected in the model.

In order to do this, I needed to:

  1. Get the dbId of the selected item
  2. Find that dbId in the object tree
  3. Move to the object’s parent and get all of its children (in other words, get the siblings of the selected item)
  4. Sum the area and volume properties of the children

The first step is to build our own representation of the model tree in memory (this must effectively be how the Forge viewer displays the model tree). My code is based on this blog post by Philippe Leefsma.

var viewer = viewerApp.getCurrentViewer();
var model = viewer.model;

if (!modelTree && model.getData().instanceTree) {
  modelTree = buildModelTree(viewer.model);
}

var buildModelTree = function (model) {
  // builds model tree recursively
  function _buildModelTreeRec(node) {
    instanceTree.enumNodeChildren(node.dbId, function (childId) {
      node.children = node.children || [];

      var childNode = {
        dbId: childId,
        name: instanceTree.getNodeName(childId)
      }

      node.children.push(childNode);
      _buildModelTreeRec(childNode);
    });
  }

  // get model instance tree and root component
  var instanceTree = model.getData().instanceTree;
  var rootId = instanceTree.getRootId();
  var rootNode = {
    dbId: rootId,
    name: instanceTree.getNodeName(rootId)
  }
 
  _buildModelTreeRec(rootNode);

  return rootNode;
};

This gives us a representation of the model tree. Once we’ve located all of the siblings, we can use the dbId of each sibling to get its area and volume properties.

The code I wrote was based on this sample, originally written by Jim Awe. I have to admit, my code is a little bit messy. There are a lot of asynchronous operations going on, which use quite a few callbacks, and you end up close to a pyramid of doom. The code was good for my needs, but I think if I was doing anything more complicated I’d look into using Promises to tidy it up a bit.
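As an illustration of what I mean – and not the code we actually shipped – viewer.getProperties can be wrapped in a Promise so the summing logic reads more linearly:

// Sketch only: wrap the callback-based getProperties call in a Promise
function getPropertiesAsync(viewer, dbId) {
  return new Promise(function (resolve, reject) {
    viewer.getProperties(dbId, resolve, reject);
  });
}

// Fetch the properties for every leaf node, then sum Area/Volume in one place
Promise.all(leafNodes.map(function (node) {
  return getPropertiesAsync(viewer, node.dbId);
})).then(function (propObjs) {
  // iterate propObjs and accumulate the Area and Volume values here
});

The callback-based code I actually wrote is below: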

function _getReportData(items, callback) {
  var results = { "areaSum": 0.0, "areaSumLabel": "", "areaProps": [], "volumeSum": 0.0, "volumeSumLabel": "", volumeProps: [], "instanceCount": 0, "friendlyNotationWithSuffix": friendlyNotationWithSuffix.trim() };

  var viewer = viewerApp.getCurrentViewer();
  var nodes = items;

  nodes.forEach(function (dbId, nodeIndex, nodeArray) {
    // Find node 
    var leafNodes = getLeafNodes(dbId, modelTree);
    if (!leafNodes) return;
    results.instanceCount += leafNodes.length;

    leafNodes.forEach(function (node, leafNodeIndex, leafNodeArray) {
      viewer.getProperties(node.dbId, function (propObj) {
        for (var i = 0; i < propObj.properties.length; ++i) {
          var prop = propObj.properties[i];
          var propValue;
          var propFormat;

          if (prop.displayName === "Area") {
            propValue = parseFloat(prop.displayValue);

            results.areaSum += propValue;
            results.areaSumLabel = Autodesk.Viewing.Private.formatValueWithUnits(results.areaSum.toFixed(2), prop.units, prop.type);

            propFormat = Autodesk.Viewing.Private.formatValueWithUnits(prop.displayValue, prop.units, prop.type);
            results.areaProps.push({ "dbId": dbId, "val": propValue, "label": propFormat, "units": prop.units });
          } else if (prop.displayName === "Volume") {
            propValue = parseFloat(prop.displayValue);

            results.volumeSum += propValue;
            results.volumeSumLabel = Autodesk.Viewing.Private.formatValueWithUnits(results.volumeSum.toFixed(2), prop.units, prop.type);

            propFormat = Autodesk.Viewing.Private.formatValueWithUnits(prop.displayValue, prop.units, prop.type);
            results.volumeProps.push({ "dbId": dbId, "val": propValue, "label": propFormat, "units": prop.units });
          }
        };

        // Callback when we've processed everything
        if (callback && nodeIndex === nodeArray.length - 1 && leafNodeIndex === leafNodeArray.length - 1) {
          callback(results);
        }
      });
    });
  });
}

var getLeafNodes = function (parentNodeDbId, parentNode) {
  var result = null;

  function _getLeafNodesRec(parentNodeDbId, node) {
    // Have we found the node we're looking for?
    if (node.dbId === parentNodeDbId) {
      // We return the children (or the node itself if there are no children)
      result = node.children || [node];
    } else {
      if (node.children) {
        node.children.forEach(function (childNode, index, array) {
          if (result) return;
          _getLeafNodesRec(parentNodeDbId, childNode);
        });
      }
    }
  }

  _getLeafNodesRec(parentNodeDbId, parentNode);
  return result;
};

A couple of things to call out from the above code: the function getLeafNodes is used to get the siblings of the selected item, and the Autodesk Forge viewer has a method to nicely format volumes and areas with the appropriate units:

Autodesk.Viewing.Private.formatValueWithUnits(prop.displayValue, prop.units, prop.type);

I couldn’t actually find this documented in the API though – it was only in the samples on GitHub. But it’s a handy way of getting a formatted string of values with the appropriate units.

This has been another fairly lengthy blog post – so it deserves a few screenshots of the functionality that has been implemented:

And a big shout out to Kirsty Hudson for her awesome UX work!