It’s a new year and we’ve started some new projects at work. Over the next few months I’m working on a project to push our specification products forward using newer technologies. Traditionally, I’ve worked mostly with a Microsoft stack – SQL Server and .NET (Entity Framework, WinForms or WebAPI and ASP.NET). However, a hobby of mine (and part of my role at work) is to keep tabs on the latest technologies. I’ve been following the various emerging JavaScript frameworks closely over the last few years – EmberJS, Angular, VueJS, NodeJS, Express (I’ve not looked at ReactJS yet but mean to). One thing I tell everyone who will listen is to bookmark the ThoughtWorks technology radar.

For the new project, I want to use JavaScript only – Angular 2 on the front end and NodeJS/Express on the back end. The main motivation is one of cost and scalability – JavaScript runs on pretty much anything, the ecosystem is full of open source solutions and the stack is now fairly mature (with successful production usage of many of the frameworks). I considered .NET Core but, from a previous prototype, the toolset isn’t mature enough yet (maybe it will be when the next version of Visual Studio is released). I also have to admit I found the whole .NET Core experience quite frustrating during that prototype, with tools marked as RC1 (DNU, DNX etc) only to be totally re-written in RC2 (the dotnet CLI). There were good reasons, but the changes were so fundamental that the tooling should have gone back to a beta/preview status.

The first area I started looking at was the backend data model, API and database. It was while reviewing GraphQL that I happened upon an excellent video by Lee Benson where he showed how to implement a GraphQL API backed by a database that used Sequelize as the data access component. As mentioned, I’m used to Entity Framework so I’m familiar with ORMs – I’ve just never used an ORM written in JavaScript!

This blog post will cover a very simple example of creating a NodeJS app and a Sequelize model that backs a Postgres database.


Our first step is to create a new node app and add the necessary dependencies.

$ npm init
$ npm install sequelize --save

# Package for Postgres support
$ npm install pg --save


We’re going to create a very simple model to store Uniclass 2015 in a database. We will model this as 2 tables:


Simple Entity-Relationship-Diagram

The classification table will store the name of the classification; the classificationItems table will store all of the entries in Uniclass 2015. ClassificationItems will be a self-referencing table so that we can model Uniclass 2015 as a tree.


We’re going to use Atom, a fantastic text editor, to write our JavaScript. First, we need to create a new .js file to add our database model to. We’ll call this new file “db.js”.

First off, we need to import the Sequelize library and create our database connection:

const Sequelize = require('sequelize');

// The database name, username and password here are placeholders -
// substitute your own connection details.
const Conn = new Sequelize('database', 'username', 'password', {
  dialect: 'postgres',
  host: 'localhost'
});

Sequelize supports a number of different databases – MySQL, MariaDB, SQLite, Postgres and MS SQL Server. In this example, we’re using the Postgres provider.

Next we define our two models:

const Classification = Conn.define('classification', {
  title: {
    type: Sequelize.STRING,
    allowNull: false,
    comment: 'Classification system name'
  },
  publisher: {
    type: Sequelize.STRING,
    allowNull: true,
    comment: 'The author of the classification system'
  }
});

const ClassificationItem = Conn.define('classificationItem', {
  notation: {
    type: Sequelize.STRING,
    allowNull: false,
    comment: 'Notation of the Classification'
  },
  title: {
    type: Sequelize.STRING,
    allowNull: false,
    comment: 'Title of the Classification Item'
  }
});

We use the connection to define each table. We then define the fields within that table (in our example we allow Sequelize to generate an id field and manage the primary keys).

As you’d expect, Sequelize supports a number of field data types – strings, blobs, numbers etc. In our simple example, we’ll just use strings.

Each of our fields requires a value – so we use the allowNull property to enforce that values are required. Sequelize has a wealth of other validators to check whether fields are email addresses, credit card numbers etc.

Once we have our models, we have to define the relationships between them so that Sequelize can manage our many-to-one relationships.

Classification.hasMany(ClassificationItem);
ClassificationItem.hasMany(ClassificationItem, { foreignKey: 'parentId' });
ClassificationItem.belongsTo(ClassificationItem, {as: 'parent'});

We use the hasMany relationship to tell Sequelize that both Classification and ClassificationItem have many children. Sequelize automatically adds a foreign key to the child relationship and provides convenience methods to add models to the child relationship.

The belongsTo relationship allows child models to get their parent object. This provides us with a convenience method to get our parent object if we need it in our application. Sequelize also allows us to control the name of the foreign key. As mentioned above, ClassificationItem is a self-referencing table to help us model the classification system as a tree. Rather than ‘classificationItemId’ being the foreign key to the parent item, I’d prefer ‘parentId’ to be used instead. This also gives us a getParent() method, which reads better. We achieve this by specifying the foreignKey on one side of the relationship and { as: ‘parent’ } on the other side.


Next we get Sequelize to create the database tables and we write a bit of code to seed the database with some test data:

Conn.sync({force: true}).then(() => {
  return Classification.create({
    title: 'Uniclass 2015',
    publisher: 'NBS'
  });
}).then((classification) => {
  return classification.createClassificationItem({
    notation: 'Ss',
    title: 'Systems'
  });
}).then((classificationItem) => {
  return classificationItem.createClassificationItem({
    notation: 'Ss_15',
    title: 'Earthworks systems'
  });
}).then((classificationItem) => {
  return classificationItem.createClassificationItem({
    notation: 'Ss_15_10',
    title: 'Groundworks and earthworks systems'
  });
}).then((classificationItem) => {
  return classificationItem.createClassificationItem({
    notation: 'Ss_15_10_30',
    title: 'Excavating and filling systems'
  });
}).then((classificationItem) => {
  return Promise.all([
    classificationItem.createClassificationItem({
      notation: 'Ss_15_10_30_25',
      title: 'Earthworks excavating systems'
    }),
    classificationItem.createClassificationItem({
      notation: 'Ss_15_10_30_27',
      title: 'Earthworks filling systems'
    })
  ]);
});

The sync command creates the database tables – by specifying { force: true }, Sequelize will drop any existing tables and re-create them. This is ideal for development environments but obviously NOT for production!
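Since { force: true } should never run against production, one option is to derive the flag from the environment (a sketch; the NODE_ENV convention here is my assumption, not something from the original setup):

```javascript
// Hypothetical helper: only drop and re-create tables outside production.
function shouldForceSync(env) {
  return env !== 'production';
}

// Usage with the connection defined earlier:
// Conn.sync({ force: shouldForceSync(process.env.NODE_ENV) });
```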

The rest of the code creates a classification object and several classification items. Notice that I use the createClassificationItem method so that parent ids are set automatically when inserting child records.

The resulting database looks like this:


Now we have a model and some data, we can perform a few queries.

1. Get root level classification items:

Classification.findOne({
  where: {
    title: 'Uniclass 2015'
  }
}).then((result) => {
  return result.getClassificationItems({
    where: {
      parentId: null
    }
  });
}).then((result) => {
  result.forEach((item) => {
    const {notation, title} = item;
    console.log(`${notation} ${title}`);
  });
});


Ss Systems

2. Get classification items (and their children) with a particular notation:

ClassificationItem.findAll({
  where: {
    notation: {
      $like: 'Ss_15_10_30%'
    }
  }
}).then((results) => {
  results.forEach((item) => {
    const {notation, title} = item;
    console.log(`${notation} ${title}`);
  });
});


Ss_15_10_30 Excavating and filling systems
Ss_15_10_30_25 Earthworks excavating systems
Ss_15_10_30_27 Earthworks filling systems

3. Get a classification item’s parent:

ClassificationItem.findOne({
  where: {
    id: 6
  }
}).then((result) => {
  const {notation, title} = result;
  console.log(`Child: ${notation} ${title}`);
  return result.getParent();
}).then((parent) => {
  const {notation, title} = parent;
  console.log(`Parent: ${notation} ${title}`);
});


Child: Ss_15_10_30_27 Earthworks filling systems
Parent: Ss_15_10_30 Excavating and filling systems

That was a quick whistle-stop tour of some of the basic features of Sequelize. At the moment I’m really impressed with it. The only thing that takes a bit of getting used to is working with all the promises. Promises are really powerful, but you need to think about the structure of your code to prevent lots of nested then() calls.
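As an illustration of that last point (a generic sketch, nothing Sequelize-specific): returning each promise and chaining at a single level keeps the code flat, where nesting every .then() inside the previous one quickly drifts rightwards:

```javascript
// Three hypothetical async steps.
function stepOne() { return Promise.resolve(1); }
function stepTwo(a) { return Promise.resolve(a + 1); }
function stepThree(b) { return Promise.resolve(b * 10); }

// Nested style - each then() wraps the next (hard to read as it grows):
// stepOne().then((a) => stepTwo(a).then((b) => stepThree(b).then((c) => ...)));

// Flat style - return the next promise and chain at one level:
function run() {
  return stepOne()
    .then((a) => stepTwo(a))
    .then((b) => stepThree(b));
}
```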


Run up to Christmas 2016

It’s been a busy few months running up to Christmas. A trip to Autodesk University in Las Vegas, followed by a family holiday to Center Parcs Cumbria. That left three working weeks before my Christmas break, which starts today (Friday 16th December). This blog post is a quick round-up of what I’ve been up to over the last few weeks!

Autodesk University

I was extremely lucky to go to Autodesk University in Las Vegas this year. I went to 3 of the 4 days of the conference so that I wasn’t away from my family for the whole week. The main purpose of the trip was to demonstrate how NBS and Autodesk technologies can come together to offer innovative solutions to our customers. I also wanted to find out more about the Forge platform – where it’s going, what the pricing model is – and generally make some support contacts. Finally, I wanted to attend some of the classes, visit the exhibitors and hopefully come away with a good haul of free stuff 🙂

I have to admit, the whole event was a total whirlwind for me. Las Vegas is an amazing place, but totally unlike anywhere I’ve ever been. Everything is massive and over-the-top. The venue and hotel I stayed at, the Venetian, for example, is home to the famous shopping arcade with gondolas in it!

The conference was equally huge – around 10,000 attendees milling about a huge exhibition hall, classes, breakout areas and labs. After a jam-packed day starting at 8am, there were loads of after-conference parties, the highlight being the AU party on the promenade – in a bar with a bowling alley and a tremendous 80s tribute band with the slightly politically incorrect name of The Spazmatics.

On the whole I got a lot out of the conference. I’m a big fan of going to conferences to get out of the office and see what’s going on in the industry. I especially liked seeing the advances in 3D printing, Computer Numeric Control (CNC) machining, Augmented Reality and Virtual Reality – hopefully I’ll get a chance to do something in this area in the next year or two.

Forge prototype

We got a load of good feedback from Autodesk off the back of Autodesk University, so over the last few weeks I’ve been adding additional functionality to our Forge viewer prototype to get it ready for a private beta test at some point next year. I’ve also learnt a load about VueJS – the JavaScript framework I used to help with some of the logic. I used VueJS 1.0 – but I still have a blog post or two to write on how to communicate between components and how to get plain JavaScript code to update Vue observables – so watch this space!

New technologies and what to expect in 2017

As well as Forge, we’ve been planning the next year at work. In the new year I’ll be working on a few exciting projects that will use the latest technologies to push our specification products on. I’m hoping for opportunities to do quite a few blog posts on graph databases, Angular 2 and more.

And finally…

On reflection, 2016 has been a fantastic year – at work I got the opportunity to visit Milan, San Francisco and Las Vegas, and I worked on projects that used technologies new to me, such as RFID readers and Forge. Outside of work, my wife gave birth to our baby girl, Chloe Eve Smith, who completes my wonderful family. 2017 looks to be a challenging but extremely exciting time to be both in software development and working at NBS.

More Autodesk Forge

Back in August, I blogged about attending the Autodesk Forge DevCon in San Francisco. This month I’m again extremely fortunate and am attending Autodesk University in Las Vegas with work.

Since my previous blog, I’ve been busy on a proof of concept that marries our NBS Create specification product and the Autodesk Forge Viewer. There will be more to follow in the coming months, but for now I just wanted to capture a few features I implemented in case they are useful to anyone else.

1. Creating an extension that captures object selection

The application I’m prototyping needs to extract data from the model when an object is clicked. The Forge Viewer API documentation covers how to create and register an extension to get selection events. Adding functionality as an extension is the recommended approach for adding custom functionality to the viewer.

The data my application needs from the viewer can only be obtained when the viewer has fully loaded the model’s geometry and object tree. So we have to be sure we subscribe to the appropriate events.

Create and register the extension

function NBSExtension(viewer, options) {, viewer, options);
}

NBSExtension.prototype = Object.create(Autodesk.Viewing.Extension.prototype);
NBSExtension.prototype.constructor = NBSExtension;

Autodesk.Viewing.theExtensionManager.registerExtension('NBSExtension', NBSExtension);

Subscribe and handle the events

My extension needs to handle the SELECTION_CHANGED_EVENT, GEOMETRY_LOADED_EVENT and OBJECT_TREE_CREATED_EVENT. The events are bound in the extension’s “load” method.

NBSExtension.prototype.load = function () {
  console.log('NBSExtension is loaded!');

  this.onSelectionBinded = this.onSelectionEvent.bind(this);
  this.viewer.addEventListener(Autodesk.Viewing.SELECTION_CHANGED_EVENT, this.onSelectionBinded);

  this.onGeometryLoadedBinded = this.onGeometryLoadedEvent.bind(this);
  this.viewer.addEventListener(Autodesk.Viewing.GEOMETRY_LOADED_EVENT, this.onGeometryLoadedBinded);

  this.onObjectTreeCreatedBinded = this.onObjectTreeCreatedEvent.bind(this);
  this.viewer.addEventListener(Autodesk.Viewing.OBJECT_TREE_CREATED_EVENT, this.onObjectTreeCreatedBinded);

  return true;
};

A well-behaved extension should also clean up after itself when it’s unloaded.

NBSExtension.prototype.unload = function () {
  console.log('NBSExtension is now unloaded!');

  this.viewer.removeEventListener(Autodesk.Viewing.SELECTION_CHANGED_EVENT, this.onSelectionBinded);
  this.onSelectionBinded = null;

  this.viewer.removeEventListener(Autodesk.Viewing.GEOMETRY_LOADED_EVENT, this.onGeometryLoadedBinded);
  this.onGeometryLoadedBinded = null;

  this.viewer.removeEventListener(Autodesk.Viewing.OBJECT_TREE_CREATED_EVENT, this.onObjectTreeCreatedBinded);
  this.onObjectTreeCreatedBinded = null;

  return true;
};

When the events fire, the following functions are called to allow us to handle the event however we want:

// Event handler for Autodesk.Viewing.SELECTION_CHANGED_EVENT
NBSExtension.prototype.onSelectionEvent = function (event) {
  var currSelection = this.viewer.getSelection();

  // Do more work with current selection

// Event handler for Autodesk.Viewing.GEOMETRY_LOADED_EVENT
NBSExtension.prototype.onGeometryLoadedEvent = function (event) {
  // The model's geometry has finished loading

// Event handler for Autodesk.Viewing.OBJECT_TREE_CREATED_EVENT
NBSExtension.prototype.onObjectTreeCreatedEvent = function (event) {
  // The model's object tree is now available


2. Get object properties

Once we have the selected item, we can call getProperties on the viewer to get an array of all of the property key/value pairs for that object.

var currSelection = this.viewer.getSelection();

// Do more work with current selection
var dbId = currSelection[0];

this.viewer.getProperties(dbId, function (data) {
  // Find the property NBSReference
  var nbsRef = _.find(, function (item) {
    return (item.displayName === 'NBSReference');

  // If we have found NBSReference, get the display value
  if (nbsRef && nbsRef.displayValue) {
    console.log('NBS Reference found: ' + nbsRef.displayValue);
}, function () {
  console.log('Error getting properties');

The call to this.viewer.getSelection() returns an array of dbIds (database IDs). Each id can be passed to the getProperties function to get the properties for that dbId. My extension then looks through the array of properties for an “NBSReference” property, which can be used to display the associated specification for that object.

Notice that I use Underscore.js’s _.find() function to search the array of properties. I opted for this as I found IE11 didn’t support JavaScript’s native Array.prototype.find(). I like the readability of the function, and Underscore.js provides the necessary polyfill for IE11.
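For comparison, here’s the same lookup using the native method against a made-up property list shaped like the viewer’s getProperties() payload (the property values are invented for illustration):

```javascript
// Hypothetical property array in the shape returned by the viewer.
var properties = [
  { displayName: 'Category', displayValue: 'Walls' },
  { displayName: 'NBSReference', displayValue: '45-35-05/334' }
];

// Native equivalent of _.find(): returns the first match or undefined.
var nbsRef = properties.find(function (item) {
  return item.displayName === 'NBSReference';
});

var found = nbsRef ? nbsRef.displayValue : null;
```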

3. Getting area and volume information

Once the geometry is loaded from the model and the internal object tree created, it’s possible to query the properties in the model that relate to area and volume. For my prototype, I wanted to sum the area and volume of the types of objects the user has selected in the model.

In order to do this, I needed to:

  1. Get the dbId of the selected item
  2. Find that dbId in the object tree
  3. Move to the object’s parent and get all of its children (in other words, get the siblings of the selected item)
  4. Sum the area and volume properties of the children

The first step is to build our own representation of the model tree in memory (this must effectively be how the Forge viewer displays the model tree). My code is based on this blog post by Philippe Leefsma.

var viewer = viewerApp.getCurrentViewer();
var model = viewer.model;

if (!modelTree && model.getData().instanceTree) {
  modelTree = buildModelTree(viewer.model);
}

var buildModelTree = function (model) {
  // builds model tree recursively
  function _buildModelTreeRec(node) {
    instanceTree.enumNodeChildren(node.dbId, function (childId) {
      node.children = node.children || [];

      var childNode = {
        dbId: childId,
        name: instanceTree.getNodeName(childId)
      };


      _buildModelTreeRec(childNode);
    });
  }

  // get model instance tree and root component
  var instanceTree = model.getData().instanceTree;
  var rootId = instanceTree.getRootId();
  var rootNode = {
    dbId: rootId,
    name: instanceTree.getNodeName(rootId)
  };

  _buildModelTreeRec(rootNode);

  return rootNode;
};

This gives us a representation of the model tree. Once we’ve located all of the siblings, we can use the dbId of each sibling to get its area and volume properties.

The code I wrote was based on this sample, originally written by Jim Awe. I have to admit, my code is a little bit messy. There are a lot of asynchronous operations going on, which use quite a few callbacks, and you end up close to a pyramid of doom. The code was good for my needs, but if I was doing anything more complicated I’d look into using Promises to tidy it up a bit.

function _getReportData(items, callback) {
  var results = {
    "areaSum": 0.0,
    "areaSumLabel": "",
    "areaProps": [],
    "volumeSum": 0.0,
    "volumeSumLabel": "",
    "volumeProps": [],
    "instanceCount": 0,
    "friendlyNotationWithSuffix": friendlyNotationWithSuffix.trim()
  };

  var viewer = viewerApp.getCurrentViewer();
  var nodes = items;

  nodes.forEach(function (dbId, nodeIndex, nodeArray) {
    // Find node
    var leafNodes = getLeafNodes(dbId, modelTree);
    if (!leafNodes) return;
    results.instanceCount += leafNodes.length;

    leafNodes.forEach(function (node, leafNodeIndex, leafNodeArray) {
      viewer.getProperties(node.dbId, function (propObj) {
        for (var i = 0; i <; ++i) {
          var prop =[i];
          var propValue;
          var propFormat;

          if (prop.displayName === "Area") {
            propValue = parseFloat(prop.displayValue);

            results.areaSum += propValue;
            results.areaSumLabel = Autodesk.Viewing.Private.formatValueWithUnits(results.areaSum.toFixed(2), prop.units, prop.type);

            propFormat = Autodesk.Viewing.Private.formatValueWithUnits(prop.displayValue, prop.units, prop.type);
            results.areaProps.push({ "dbId": dbId, "val": propValue, "label": propFormat, "units": prop.units });
          } else if (prop.displayName === "Volume") {
            propValue = parseFloat(prop.displayValue);

            results.volumeSum += propValue;
            results.volumeSumLabel = Autodesk.Viewing.Private.formatValueWithUnits(results.volumeSum.toFixed(2), prop.units, prop.type);

            propFormat = Autodesk.Viewing.Private.formatValueWithUnits(prop.displayValue, prop.units, prop.type);
            results.volumeProps.push({ "dbId": dbId, "val": propValue, "label": propFormat, "units": prop.units });
          }
        }

        // Callback when we've processed everything
        if (callback && nodeIndex === nodeArray.length - 1 && leafNodeIndex === leafNodeArray.length - 1) {
          callback(results);
        }
      });
    });
  });
}

var getLeafNodes = function (parentNodeDbId, parentNode) {
  var result = null;

  function _getLeafNodesRec(parentNodeDbId, node) {
    // Have we found the node we're looking for?
    if (node.dbId === parentNodeDbId) {
      // We return the children (or the node itself if there are no children)
      result = node.children || [node];
    } else {
      if (node.children) {
        node.children.forEach(function (childNode, index, array) {
          if (result) return;
          _getLeafNodesRec(parentNodeDbId, childNode);
        });
      }
    }
  }

  _getLeafNodesRec(parentNodeDbId, parentNode);
  return result;
};

A couple of things to call out from the above code: the function getLeafNodes is used to get the siblings of the selected item, and the Autodesk Forge viewer has a method to nicely format volumes and areas with the appropriate units:

Autodesk.Viewing.Private.formatValueWithUnits(prop.displayValue, prop.units, prop.type);

I couldn’t actually find this documented in the API though – it was only in the samples on GitHub. But it’s an easy way of getting a nicely formatted string of values with the appropriate units.

This has been another fairly lengthy blog post – so it deserves a few screenshots of the functionality that has been implemented:

And a big shout out to Kirsty Hudson for her awesome UX work!

XSLT via JavaScript

I’ve recently been working on a prototype that makes NBS Create systems readable in the web browser. This isn’t really a new concept as our NBS Create product and Revit plugin actually use an embedded WebBrowser .NET control (which is basically a wrapper around Internet Explorer).

Most of our products store their data in XML and most transform XML to HTML to provide a rich editing experience (fonts, bullets, hyperlinks, symbols, etc). XML is a bit out of favour now, with JSON being the preferred format. That said, XML still brings something to the table – with schemas, xpath and xsl transforms.

The data I was using for my prototype was in XML format. I was originally planning on letting .NET transform the XML and stream HTML to the client web browser. Out of curiosity, I Googled whether it was possible to do the transform via JavaScript – my thinking was that I could get the client browser to do the work rather than the server. It’s quite an old topic actually, and to my surprise some browsers can natively do the transform – but many tutorials and StackOverflow answers recommended using JavaScript. So I thought… “why not give it a shot!”.

In modern WebBrowsers (Chrome, Firefox, Safari) it was a trivial task:

Step 1: Load XML

My XML documents are sent to the client via an API as a string – the client must convert this string to an XMLDocument. jQuery makes this a breeze:

function parseXmlString(xml) {
  var xmlDoc = $.parseXML(xml);

  if (xmlDoc) {
    return xmlDoc;
  }

  return null;
}

Step 2: Load XSLT

The XSLT is a resource on the server, so the client must load it like any other web resource. I chose to use jQuery to send an Ajax request:

function loadXmlDocument(url, callback) {
  $.ajax({
    url: url,
    dataType: "xml",
    success: function (data) {
      callback(data);
    }
  });
}

Step 3: Use the XSLTProcessor to transform

We just do a quick check of the browser’s capabilities to make sure it supports the XSLTProcessor.

if (typeof (XSLTProcessor) !== "undefined") { // FF, Safari, Chrome etc
  xsltProcessor = new XSLTProcessor();
  xsltProcessor.importStylesheet(xsl);

  xsltProcessor.setParameter(null, "resPath", configSettings.areaPath + 'Content/img/CreateResources');

  resultDocument = xsltProcessor.transformToFragment(xml, document);
  var contentNode = document.getElementById("clause-content");
  contentNode.appendChild(resultDocument);
}

Also worth highlighting is that my XSL requires some parameters passing to it; this is easily done via the setParameter() method.

Internet Explorer quirks

But things are never just that easy are they? Internet Explorer 9-11 don’t support the XSLTProcessor, instead they use an ActiveXObject to do the transform.

Again we need to test the browser’s capabilities, but there’s another quirk. IE 9-10 will pass a test for window.ActiveXObject; IE11, however, has a bug and will report a fail, so we must check for “ActiveXObject” in window too.

We also have another issue: XML has to be loaded into the ActiveXObject as a string (but we read it in as an XMLDocument). Frustratingly, the only workaround I could find was to serialise the XMLDocument back to a string so it can be loaded into the ActiveXObject *sigh*.

if (window.ActiveXObject || "ActiveXObject" in window) {
  var xslt = new ActiveXObject("Msxml2.XSLTemplate");
  var xslDoc = new ActiveXObject("Msxml2.FreeThreadedDOMDocument");

  var serializer = new XMLSerializer();
  var text = serializer.serializeToString(xsl);
  xslDoc.loadXML(text);

  xslt.stylesheet = xslDoc;

  var xslProc = xslt.createProcessor();
  xslProc.input = xml;
  xslProc.addParameter("resPath", configSettings.areaPath + 'Content/img/CreateResources');
  xslProc.transform();

  var output = xslProc.output;
  document.getElementById("clause-content").innerHTML = output;
}

Fortunately the IE 9-11 XSLT processor also supports the passing of parameter values.

And the final result


Swift, WCF WS2007HttpBinding and NBS Guidance on the Mac.

Background (and disclaimer)

I got my first IBM compatible PC in my early teens. It was a blisteringly fast 386sx running at 25MHz, with a 40MB HDD, all running MS-DOS 6 and Windows 3.1. I continued as a PC user until my 18th birthday back in 1999, when I got an iMac as a present from my parents. For the next few years I was quite a keen Mac user, and during university started looking at the various Mac programming languages such as REALBasic, Carbon and Objective-C/Cocoa.

In 2008, I switched back to PC – mainly because my Core Duo iMac was starting to show its age against the new Intel Core i3/i5/i7 processors, and it was dirt cheap (in comparison to a new iMac) to custom-build a new Windows-based Core ix system. I was sad leaving the Mac platform though, as both the hardware and software are fantastic (even though the platform is quite closed).

In 2015, I returned to the Mac platform after using several Macs to present the BIM Toolkit. Using the Mac again, even briefly, brought back memories of the platform. I got myself a MacBook Air and, very quickly after returning to the platform, got back into looking at how the development tools had progressed since 2008. The new kid on the block is Swift, which seems to be steadily replacing Objective-C as the language used for iOS and Mac app development.

At NBS, our desktop products are written for Windows. Early versions of NBS Specification Manager were written in Visual C++, and were then ported to .NET 1.0. .NET WinForms is heavily tied to the underlying Windows APIs so isn’t easily portable to other operating systems. However, as a Mac user, I’ve always been keen on trying to do some sort of Mac prototype. I was eager to try to do some kind of MacApp using Swift and thought I could create a simple(ish) NBS Guidance viewer app.

Just before I get into the details, it’s important to mention that the work discussed in this post is purely hobby work created in my own time. It’s a proof of concept/training application to learn Swift.


The application I wanted to create would display the NBS Guidance from NBS Create in a native MacApp. The features I wanted to implement were:

  • Login and take NBS Create license seats to view NBS Guidance
  • Navigate NBS Guidance
  • Search NBS Guidance
  • Print NBS Guidance
  • Open external references (such as British Standards) in the user’s default web browser.
  • Add, Edit and Delete notes
  • Create Unit tests to automate testing of the application


In implementing the above, I encountered a number of problems and at the very least hope that someone will find some of my solutions helpful.

The challenges I faced were:

  • Authenticating an NBS user account against the WS2007HttpBinding of our WCF licensing web service.
  • Embedding a WebKit view within a MacApp
  • Calling JavaScript methods within the WebKitView from Swift
  • Calling Swift methods from the WebKitView

Create Licensing Service

The first hurdle I had to jump was authenticating an NBS user account against our NBS Create licensing web service. The licensing web service endpoint we need to communicate with uses the WS2007HttpBinding. We use this binding over SSL to provide end-to-end encryption from the client to the server. The user’s username and password are used for authentication and internally verified against our user account database.

The WCF service was created back in 2009 and all requests and responses are sent as SOAP envelopes. This makes request and response messages quite verbose. .NET has a nifty feature of building a proxy client based on the WCF service’s web service definition. This wraps up/auto-generates a lot of the code to invoke endpoint methods and authenticate requests. I would have to understand how the proxy does this in order for my Swift project to send the same requests to the service.

It took hours of reading to fathom how to authenticate against a WS2007HttpBinding – to understand the WS-Trust specification and the algorithms used to encrypt and sign messages. I even had to look in the .NET source!

Communicating with the licensing service

Authenticating with a WCF service via a WS2007HTTPBinding takes a number of steps.

Step 1

We need to establish a security context (or a session) with the server. This involves sending an unauthenticated request for a security token to the server with a few bits of key information that will be used to establish end-to-end encryption between the client and the server.

<s:Envelope xmlns:s="" xmlns:a="" xmlns:u="">
    <a:Action s:mustUnderstand="1"></a:Action>
    <a:MessageID>urn:uuid:Client generated GUID</a:MessageID>
    <a:To s:mustUnderstand="1">Service URI</a:To>
    <o:Security s:mustUnderstand="1" xmlns:o="">
      <u:Timestamp u:Id="_0">
      <o:UsernameToken u:Id="uuid-Client generated GUID-1">
        <o:Password Type="">password</o:Password>
    <trust:RequestSecurityToken xmlns:trust="">
        <trust:BinarySecret u:Id="uuid-Client generated GUID" Type="">Client nonce</trust:BinarySecret>

You can see that because we’re using SOAP, the messages are quite verbose and there is quite a lot going on.

  • There are several GUIDs that the client needs to generate – MessageID, UsernameTokenID and BinarySecretID. These are created in Swift as NSUUID
  • Our service uses UsernameToken authentication, so the username and password must be sent in the request. This is why we use SSL, so this data is encrypted.
  • The client must generate Entropy (a number used once, the client nonce or cnonce); the server will respond with its Entropy (a server nonce), and both nonces are used to sign subsequent messages sent to the server so that the server knows it’s a genuine request from our client.
  • The client nonce is simply 32 random bytes, BASE64 encoded. My solution uses the following code to generate a secure random 32-byte array:
let s = NSMutableData(length: 32)
SecRandomCopyBytes(kSecRandomDefault, s!.length, UnsafeMutablePointer<UInt8>(s!.mutableBytes))
let nonceString = s!.base64EncodedStringWithOptions(NSDataBase64EncodingOptions(rawValue: 0))

Step 2

The server will respond with a fairly long message:

<s:Envelope xmlns:s="" xmlns:a="" xmlns:u="">
  <s:Header>
    <a:Action s:mustUnderstand="1"></a:Action>
    <a:RelatesTo>urn:uuid:Message GUID</a:RelatesTo>
    <o:Security s:mustUnderstand="1" xmlns:o="">
      <u:Timestamp u:Id="_0"></u:Timestamp>
    </o:Security>
  </s:Header>
  <s:Body>
    <trust:RequestSecurityTokenResponseCollection xmlns:trust="">
      <sc:SecurityContextToken u:Id="uuid-66e50eda-1209-4c6a-b893-66e0ed15a79f-7681" xmlns:sc=""></sc:SecurityContextToken>
      <o:SecurityTokenReference xmlns:o="">
        <o:Reference ValueType="" URI="#uuid-66e50eda-1209-4c6a-b893-66e0ed15a79f-7681"></o:Reference>
      </o:SecurityTokenReference>
      <o:SecurityTokenReference xmlns:o="">
        <o:Reference URI="urn:uuid:6fc437e3-e0dd-4847-882c-c31a9948324b" ValueType=""></o:Reference>
      </o:SecurityTokenReference>
      <trust:BinarySecret u:Id="uuid-66e50eda-1209-4c6a-b893-66e0ed15a79f-7682" Type="">Server Nonce</trust:BinarySecret>
    </trust:RequestSecurityTokenResponseCollection>
  </s:Body>
</s:Envelope>

The 2 bits of information we really need from the server's response are:

  • The sc:SecurityContextToken element – this is the security context that the server has established.
  • The server's nonce (trust:BinarySecret). We need our client nonce and the server nonce to compute a 256-bit combined key. Only our client and the server know these nonce values.

Step 3

We now have enough information to invoke an authenticated request to a method of the licensing service.

The (lengthy) request we will send will look something like this:

<s:Envelope xmlns:a="" xmlns:s="" xmlns:u="">
  <s:Header>
    <a:Action s:mustUnderstand="1"></a:Action>
    <a:MessageID>urn:uuid:MessageID GUID</a:MessageID>
    <a:To s:mustUnderstand="1">NBS licensing service URI</a:To>
    <o:Security s:mustUnderstand="1" xmlns:o="">
      <u:Timestamp xmlns:u="" u:Id="_0"></u:Timestamp>
      <sc:SecurityContextToken u:Id="uuid-SecurityContextToken Id GUID" xmlns:sc="">
        <sc:Identifier>urn:uuid:SecurityContextToken Identifier GUID</sc:Identifier>
      </sc:SecurityContextToken>
      <Signature xmlns="">
        <SignedInfo xmlns="">
          <CanonicalizationMethod Algorithm=""></CanonicalizationMethod>
          <SignatureMethod Algorithm=""></SignatureMethod>
          <Reference URI="#_0">
            <Transforms>
              <Transform Algorithm=""></Transform>
            </Transforms>
            <DigestMethod Algorithm=""></DigestMethod>
            <DigestValue>Timestamp SHA1</DigestValue>
          </Reference>
        </SignedInfo>
        <SignatureValue>SignedInfo HMACSHA1</SignatureValue>
        <KeyInfo>
          <o:SecurityTokenReference>
            <o:Reference URI="#uuid-885b42d2-70d2-44a5-8bcd-3f2083d8113f-85591" ValueType=""/>
          </o:SecurityTokenReference>
        </KeyInfo>
      </Signature>
    </o:Security>
  </s:Header>
  <s:Body>
    <VerifyUserAccount xmlns="">
      <Username>NBS user account username</Username>
      <Password>NBS user account password</Password>
    </VerifyUserAccount>
  </s:Body>
</s:Envelope>

One thing to point out that is *really* important: the XML we send to the server – or anything we sign (more on this below) – MUST be in Canonical XML form, so that the client and server are working on the exact same sequence of bytes.

Before we can send the request we need to do a little bit of work. Firstly, we need to create some XML with a timestamp in it:

<u:Timestamp xmlns:u="" u:Id="_0">
  <u:Created>Creation time (UTC)</u:Created>
  <u:Expires>Expiry time (UTC)</u:Expires>
</u:Timestamp>

We then need to create a SHA1 hash of the timestamp XML. This hash is added to the SOAP message we send to the server – as the value of the <SignedInfo><DigestValue> element. Finally, we need to sign the <SignedInfo> element with an HMAC-SHA1 hash. This is a type of message authentication code that combines a cryptographic hash with a secret key. The secret key is computed using the P_SHA1 algorithm, which takes the client nonce and server nonce that were previously exchanged.

Phew, that all sounds a bit complicated – and to be honest I’m not at all an expert in this area. But it does make sense that the client and server share the same knowledge, so they can generate the same key – and use that key to generate the same hash over the exact same XML. In this way, the client and the server each know that a message really did come from the other.

I created a little wrapper class for the cryptographic functions I required:

public class Crypto : NSObject {
    public static func sha1(data: String) -> String {
        let data = data.dataUsingEncoding(NSUTF8StringEncoding)!
        var digest = [UInt8](count: Int(CC_SHA1_DIGEST_LENGTH), repeatedValue: 0)

        CC_SHA1(data.bytes, CC_LONG(data.length), &digest)

        let result = NSData(bytes: digest, length: Int(CC_SHA1_DIGEST_LENGTH))
        return result.base64EncodedStringWithOptions(NSDataBase64EncodingOptions(rawValue: 0))
    }

    // The symmetric key generation chosen is P_SHA1,
    // which per the WS-Trust specification is defined as follows:
    //   The key is computed using P_SHA1
    //   from the TLS specification to generate
    //   a bit stream using entropy from both
    //   sides. The exact form is:
    //   key = P_SHA1 (EntREQ, EntRES)
    // where EntREQ is the entropy supplied by the requestor and EntRES
    // is the entropy supplied by the issuer.
    //
    // From the TLS specification:
    // 8<------------------------------------------------------------>8
    // First, we define a data expansion function, P_hash(secret, data),
    // which uses a single hash function to expand a secret and seed
    // into an arbitrary quantity of output:
    // P_hash(secret, seed) = HMAC_hash(secret, A(1) + seed) +
    //                        HMAC_hash(secret, A(2) + seed) +
    //                        HMAC_hash(secret, A(3) + seed) + ...
    // Where + indicates concatenation.
    // A() is defined as:
    //   A(0) = seed
    //   A(i) = HMAC_hash(secret, A(i-1))
    // P_hash can be iterated as many times as is necessary to produce
    // the required quantity of data. For example, if P_SHA-1 was
    // being used to create 64 bytes of data, it would have to be
    // iterated 4 times (through A(4)), creating 80 bytes of output
    // data; the last 16 bytes of the final iteration would then be
    // discarded, leaving 64 bytes of output data.
    // 8<------------------------------------------------------------>8
    public static func computeCombinedKey(reqEntropy: String, resEntropy: String, keySizeInBits: Int = 256) -> NSData {
        let requestorEntropy = NSData(base64EncodedString: reqEntropy, options: NSDataBase64DecodingOptions(rawValue: 0))!
        let issuerEntropy = NSData(base64EncodedString: resEntropy, options: NSDataBase64DecodingOptions(rawValue: 0))!

        let keySizeInBytes = keySizeInBits / 8
        let key = NSMutableData(capacity: keySizeInBytes)!

        // The requestor's entropy is the HMAC secret
        let khaKey: NSData = requestorEntropy

        // A(0), the 'seed'.
        var a: NSData = issuerEntropy
        var result = NSData()

        var i = 0
        while i < keySizeInBytes {
            // Calculate A(i) = HMAC_hash(secret, A(i-1))
            a = hmacSha1(a, key: khaKey)
            // Buffer for A(i) + seed
            let b = NSMutableData(capacity: a.length + issuerEntropy.length)!
            b.appendData(a)
            b.appendData(issuerEntropy)
            // HMAC_hash(secret, A(i) + seed)
            result = hmacSha1(b, key: khaKey)
            for j in 0 ..< result.length {
                if i < keySizeInBytes {
                    i += 1
                    key.appendData(result.subdataWithRange(NSRange.init(location: j, length: 1)))
                } else {
                    break
                }
            }
        }
        return key
    }

    public static func hmacSha1(data: NSData, key: NSData) -> NSData {
        let digestLen = CryptoAlgorithm.SHA1.digestLength
        let result = UnsafeMutablePointer<UInt8>.alloc(digestLen)
        defer { result.dealloc(digestLen) }
        let dataUnsafe = UnsafePointer<UInt8>(data.bytes)
        let keyUnsafe = UnsafePointer<UInt8>(key.bytes)
        CCHmac(CryptoAlgorithm.SHA1.HMACAlgorithm, keyUnsafe, key.length, dataUnsafe, data.length, result)
        let digest = NSData(bytes: result, length: digestLen)
        return digest
    }
}

The SHA1 hashes use the CommonCrypto library built in to Mac OS X (10.5 and later). The P_SHA1 hash was a little bit trickier, as I wasn’t able to find a Swift equivalent that generated the same keys as .NET. For the solution, I had to look at the .NET source and translate from C# to Swift.

I would also have been fighting a losing battle if I hadn’t enabled WCF tracing (and used the Service Trace Viewer) to output the digests computed by the server. I took example digests from the trace log and created unit tests to ensure I was calculating the exact same signatures.

After several days of reading specs, blog posts and tearing my hair out I was finally successful in sending an authenticated message to the Licensing service.

Displaying NBS Guidance

Once I was able to make authenticated requests to the NBS licensing service, I was able to take seats (licences) and obtain tokens to display the NBS Guidance. I thought it would be quite nice for this sample app to have the capability to read, edit and add practice notes to the NBS Guidance (a feature of NBS Create).

There was quite a lot more work that went into this, but this blog post is already quite long, so that will have to wait for another day. In the meantime, here are lots of screenshots of the capabilities that were implemented.



Navigate NBS Guidance pages

Link to external citations such as British Standards and Building Regulations

View and zoom in to NBS Guidance graphics

Add practice notes



Print guidance


Search the guidance

Autodesk Forge

Back in June, I was extremely fortunate to attend Forge DevCon 2016 at the Fort Mason Centre for Art & Culture in San Francisco.

The conference was a packed 2 days of keynotes and tech talks on the capabilities of the Forge Platform and Autodesk’s strategy for it. For those new to the platform, it is essentially a set of cloud services, APIs, and SDKs, to allow developers to quickly create the data, apps, experiences, and services that power the future of making things.


Amar Hanspal, Senior VP, Products at Autodesk introduces the Forge platform

In this blog post, I’ll show usage of the Model Derivative API, which is used to translate design files from one format to another, to prepare them for the online viewer. It can also be used to extract data from the model.

We’ll also look at the Forge Viewer, a WebGL-based, JavaScript library for 3D and 2D model rendering. 3D and 2D model data may come from a wide array of applications, such as AutoCAD, Fusion 360, Revit, IFC etc.


Fort Mason Center – San Francisco

Preparing your file for viewing

Firstly, we’ll use the Model Derivative API to upload and translate a Revit file. Files are uploaded to a “bucket”, which we’ll need to create as a one-time task.

Step 1 – Create your app

Before you can get going, you need to sign in to your Autodesk developer account and create a new application. Select the APIs you want to use and give your app a name.


Create a new app

You will then be given a Client ID and Client Secret to allow your app to obtain authentication tokens to use against the Forge APIs.

Step 2 – Obtain an authentication token

Pretty much all requests to the Forge APIs require a bearer token to authenticate them. The application I’m building up for the blog post will use ASP.NET Core and will be written in C#. I will obtain tokens with the following code:

public async Task<string> GetToken(bool allowUpload = false)
{
    HttpClient client = new HttpClient();

    client.BaseAddress = new Uri("");
    var content = new FormUrlEncodedContent(new[]
    {
        new KeyValuePair<string, string>("client_id", "<id>"),
        new KeyValuePair<string, string>("client_secret", "<secret>"),
        new KeyValuePair<string, string>("grant_type", "client_credentials"),
        new KeyValuePair<string, string>("scope", (allowUpload) ? "data:write data:read" : "data:read")
    });

    var result = await client.PostAsync("/authentication/v1/authenticate", content);

    JObject resultContent = JObject.Parse(await result.Content.ReadAsStringAsync());
    var token = resultContent["access_token"].ToString();

    return token;
}

This is a pretty straightforward web request; it’s worth pointing out the “scope” value. The value passed in here determines the permissions the token has, e.g. read only, write, bucket creation etc.

Step 3 – Create a bucket

Models that you want to use with the Forge APIs must be uploaded to a storage area called an Object Storage Service (OSS) bucket. For the example in this blog post, this is a one-time action – i.e. we will only create a bucket once and then use it for all of our models. For more information about buckets, see this article.

As this is a one time action for us, we’ll use cURL to send the request to create the bucket rather than writing any C# code. We will need a bearer token though, and the token will need permission to create a bucket.

We get a token with the following request:

curl -X POST -H "Content-Type: application/x-www-form-urlencoded" -d 'client_id=<id>&client_secret=<secret>&grant_type=client_credentials&scope=bucket:create bucket:read data:write data:read' ""

Then create a bucket using the bearer token:

curl -X POST -H "Content-Type: application/json" -H "Authorization: Bearer <token>" -d '{
 "bucketKey":"<bucket name>",
 "policyKey":"persistent"
 }' ""

Step 4 – Upload a model

Now that we have a bucket to work with, we can upload files to it for processing. The process is really simple: we send the file to the API as a byte array, passing a valid bucket name. If successful, we get back the object id of the file in the bucket. Subsequent calls to the APIs will use this objectId (or source URN), which must be passed as a BASE64 encoded string – Autodesk recommend the use of a URL-safe BASE64 string (RFC 6920).
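The URL-safe variant just swaps the characters that are awkward in URLs. A quick Node.js sketch of a helper (the name is my own) that encodes an objectId this way:

```javascript
// Encode an objectId (source URN) as URL-safe BASE64:
// '+' -> '-', '/' -> '_', and the '=' padding is stripped
function toUrlSafeBase64(objectId) {
  return Buffer.from(objectId, 'utf8')
    .toString('base64')
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=+$/, '');
}
```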

public async Task<string> PutFile(string bucketName, string fileName, byte[] array)
{
    HttpClient client = new HttpClient();

    client.BaseAddress = new Uri($"");
    client.DefaultRequestHeaders.Add("authorization", $"Bearer {await GetToken(true)}");
    client.DefaultRequestHeaders.Add("cache-control", "no-cache");

    var content = new ByteArrayContent(array);
    content.Headers.Add("Content-Type", "application/octet-stream");

    var result = await client.PutAsync($"/oss/v2/buckets/{bucketName}/objects/{fileName}", content);

    var resultContent = await result.Content.ReadAsStringAsync();

    JObject json = JObject.Parse(resultContent);

    // Have we got an objectId?
    JToken objectId;
    if (json.TryGetValue("objectId", out objectId))
    {
        // Base64 encode the object id
        var encodedObjectId = Convert.ToBase64String(System.Text.Encoding.UTF8.GetBytes(objectId.ToString()));
        return encodedObjectId;
    }

    return "";
}

Step 5 – Translate the model

Next we need to request that the model we uploaded is translated to a more optimised SVF format for rendering. We do this with the following request:

public static async Task<string> Translate(string urn)
{
    HttpClient client = new HttpClient();

    client.BaseAddress = new Uri($"");
    client.DefaultRequestHeaders.Add("authorization", $"Bearer {await GetToken(true)}");

    string json = @"{ ""input"": { ""urn"": """ + urn + @""" }, ""output"": { ""formats"": [{ ""type"": ""svf"", ""views"": [""2d"", ""3d""] }] } }";
    var content = new StringContent(json);
    content.Headers.Add("Content-Type", "application/json");
    // Force file to be re-translated if it already exists in the bucket
    //content.Headers.Add("x-ads-force", "true");

    var response = await client.PostAsync("modelderivative/v2/designdata/job", content);

    var resultContent = await response.Content.ReadAsStringAsync();

    JObject jsonPayload = JObject.Parse(resultContent);

    // Have we got a result?
    JToken result;
    if (jsonPayload.TryGetValue("result", out result))
    {
        return result.ToString();
    }

    return "";
}

Translation can take a while; in our application we want to show some feedback on progress. This can be achieved by polling another API endpoint for progress information:

public async Task<string> GetTranslationProgress(string urn)
{
    HttpClient client = new HttpClient();

    client.BaseAddress = new Uri("");
    client.DefaultRequestHeaders.Add("authorization", $"Bearer {await GetToken()}");

    var result = await client.GetAsync($"/modelderivative/v2/designdata/{urn}/manifest");

    JObject resultContent = JObject.Parse(await result.Content.ReadAsStringAsync());
    var progress = resultContent["progress"].ToString();

    return progress;
}
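The polling loop itself can be sketched as follows (Node.js, with the progress fetch injected as a function so the loop can be exercised in isolation – the names are my own):

```javascript
// Repeatedly ask for progress until the manifest reports "complete".
// getProgress is any async function returning the manifest's progress string.
async function waitForTranslation(getProgress, intervalMs = 5000) {
  for (;;) {
    const progress = await getProgress(); // e.g. "50% complete" or "complete"
    if (progress === 'complete') return;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```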

Step 6 – Get model metadata

Our model has now been uploaded and is being translated. For our application, we want to extract metadata from the model – specifically, we want to look for objects in the model that have a “CPI” type or instance property.

To obtain the metadata, we send a request to the Model Derivative API:

public async Task<string> GetMetadataModelGuid(string urn)
{
    HttpClient client = new HttpClient();

    client.BaseAddress = new Uri($"");
    client.DefaultRequestHeaders.Add("authorization", $"Bearer {await GetToken(true)}");

    var response = await client.GetAsync($"/modelderivative/v2/designdata/{urn}/metadata");

    var resultContent = await response.Content.ReadAsStringAsync();

    JObject jsonPayload = JObject.Parse(resultContent);

    // Have we got a result?
    if (jsonPayload["data"]["metadata"][0]["guid"] != null)
    {
        return jsonPayload["data"]["metadata"][0]["guid"].Value<string>();
    }

    return "";
}

The response will contain a list of model views within the model – Revit files can have a number of model views. For this example application, we’ll naively return the GUID of the first model view and assume it’s the view our user is after.

We can then get all of the properties in that model view with the following request:

public async Task<JObject> GetMetadataModelProperties(string urn, string modelGuid)
{
    HttpClient client = new HttpClient();

    client.BaseAddress = new Uri($"");
    client.DefaultRequestHeaders.Add("authorization", $"Bearer {await GetToken(true)}");

    var response = await client.GetAsync($"/modelderivative/v2/designdata/{urn}/metadata/{modelGuid}/properties");

    var resultContent = await response.Content.ReadAsStringAsync();

    JObject jsonPayload = JObject.Parse(resultContent);
    return jsonPayload;
}

Viewing the model

Everything is now set up to initialise the Forge Viewer – our model is uploaded and translated – the only thing that remains is setting up our MVC view to display the model.

Step 1 – Stylesheets

Add the following styles to your view:

<link rel="stylesheet" href="" type="text/css">
<link rel="stylesheet" href="" type="text/css">

At the time of writing, v2.8 was the latest version. You can omit the version number to use the latest version, but this isn’t recommended in a Production application.

Step 2 – JavaScript reference

Next add the following JavaScript references:

NOTE that the Forge viewer is built on the excellent three.js.

Step 3 – Create and initialise the viewer

The viewer is initialised with the (BASE64 encoded) source URN obtained during translation.

    var viewerApp;
    var options = {
        env: 'AutodeskProduction',
        accessToken: '@(await ForgeServices.GetToken())'
    };

    var documentId = 'urn:@ViewData["urn"]';

    Autodesk.Viewing.Initializer(options, onInitialized);

    function onInitialized() {
        viewerApp = new Autodesk.Viewing.ViewingApplication('MyViewerDiv');
        viewerApp.registerViewer(viewerApp.k3D, Autodesk.Viewing.Private.GuiViewer3D);
        viewerApp.loadDocument(documentId, onDocumentLoaded);
    }

    function onDocumentLoaded(lmvDoc) {
        var modelNodes = viewerApp.bubble.search(Autodesk.Viewing.BubbleNode.MODEL_NODE); // 3D designs
        var sheetNodes = viewerApp.bubble.search(Autodesk.Viewing.BubbleNode.SHEET_NODE); // 2D designs
        var allNodes = modelNodes.concat(sheetNodes);

        if (allNodes.length) {
            // Naively select the first 3D view found
            viewerApp.selectItem(modelNodes[0].data);
            if (allNodes.length === 1) {
                alert('This tutorial works best with documents with more than one viewable!');
            }
        } else {
            alert('There are no viewables for the provided URN!');
        }
    }

In the onDocumentLoaded function, we simply select the first 3D view we find – again, this is a little naive and assumes that the first 3D view is the one the user wants to see.

And that’s all there is to it – our viewer is now initialised, with all panning, zooming and orbiting goodness you’d expect. There’s even a Duke Nukem 3D style First Person mode allowing you to walk through the model with the keyboard 🙂


And finally…

At the start of the blog, I mentioned that I was using ASP.NET Core. All of the above screenshots were taken of an ASP.NET Core application running on Mac OS X. Microsoft are doing some amazing things with .NET; the tooling isn’t quite there yet, but it is awesome seeing .NET applications running on other platforms.


ASP.NET Core MVC application running on Mac OS X El Capitan