GraphQL

Last month I explained how we are using NodeJS/Express, GraphQL and Sequelize to prototype a new project at work. Although I’ve been extremely busy over the last few weeks, I wanted to continue the topic by exploring how to add a GraphQL API over the top of our Sequelize store.

During brainstorming of technologies for the new project, an extremely knowledgeable colleague, who is also the project lead, suggested checking out Facebook’s (fairly) recently open-sourced GraphQL. Over the years, I’ve created a few web services using various technologies – from SOAP-based services such as ASMX Web Services and WCF to REST services using ASP.NET Web API, OData, Nancy FX and SailsJS.

My team do a lot of prototyping and feasibility studies, so we often have to start by creating CRUD data layers. Upon reading about GraphQL, I could see it looked to address a common problem I’ve seen – where the REST API is built around the structure of the data, often leading to very “talkie” APIs. In other instances, REST endpoints are coded more around how the client will consume the data – and as a result often return huge JSON payloads to circumvent the performance issues of the “talkie” API.

GraphQL focuses more on how the data looks and the queries/mutations you wish to allow on that data. This looked perfect for prototyping as we could define our objects and the client can make ad-hoc queries (that are validated against the schema we’ve defined).

In this blog post, I wanted to give a very basic overview of adding a GraphQL layer over the Sequelize data layer we built last time. The GraphQL service we will build will allow us to query classifications and classification items.

Step 1

We need to add a few more packages to our node app. We will be using express to host GraphQL in a web app.

npm install express --save
npm install graphql express-graphql --save

Step 2

We need to create a *very* basic express application. We will add an app.js file to the project, which will look like this:

import express from 'express';

const app = express();

app.listen(3000, () => console.log('Now listening on localhost:3000'));

NOTE: I’m a big fan of Babel; in our prototype at work we’re using babel-node for local development and transpiling for deployment to our test server. I’ve used it in the above example to provide support for ES6, and would highly recommend it if you want all of the nice ES6 features without worrying about which version of NodeJS is installed.

The server will launch with the command node app.js (or babel-node app.js if, like me, you’re using Babel). If we visit the URL http://localhost:3000 we won’t see much!
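For completeness, the Babel setup behind this is tiny. Assuming Babel 6 (current at the time of writing), babel-node comes from the babel-cli package and the ES6 support from babel-preset-es2015 (npm install babel-cli babel-preset-es2015 --save-dev), with a .babelrc in the project root:

```json
{
  "presets": ["es2015"]
}
```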

Step 3

Next we are going to define our GraphQL schema – this will comprise the objects that can be queried (and how they will resolve their underlying data) and the queries that can be performed.

Our API is very simple, we’re going to allow users to query classifications and classification items. We’ll start by creating a file called schema.js and adding Classification and ClassificationItem objects.

import { GraphQLString, GraphQLInt, GraphQLList, GraphQLSchema, GraphQLObjectType } from 'graphql';
import * as Db from './db';

const Classification = new GraphQLObjectType({
  name: 'Classification',
  description: 'This represents a Classification',
  fields() {
    return {
      title: {
        type: GraphQLString,
        resolve: ({ title }) => title,
      },
      publisher: {
        type: GraphQLString,
        resolve: ({ publisher }) => publisher,
      },
      classificationItems: {
        type: new GraphQLList(ClassificationItem),
        resolve: (classification) => {
          // Use Sequelize to resolve the root classification items from the database
          return classification.getClassificationItems({ where: { parentId: null } });
        },
      },
    };
  },
});

const ClassificationItem = new GraphQLObjectType({
  name: 'ClassificationItem',
  description: 'This represents a Classification Item',
  fields() {
    return {
      notation: {
        type: GraphQLString,
        resolve: ({ notation }) => notation,
      },
      title: {
        type: GraphQLString,
        resolve: ({ title }) => title,
      },
      classificationItems: {
        type: new GraphQLList(ClassificationItem),
        resolve: (classification) => {
          // Use Sequelize to resolve this item's children from the database
          return classification.getClassificationItems({ where: { parentId: classification.id } });
        },
      },
    };
  },
});

The main thing to note is the resolve method – this tells GraphQL how to resolve the data requested. In the above example there are two types of field: basic scalars, which are resolved by returning properties of the results fetched by Sequelize, and a couple of relationships that fetch child classification items. To resolve the relationships, we need to use Sequelize to return the child records from the database.
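The scalar resolvers above are nothing magical – each one is just a function that picks a property off the object Sequelize fetched. Stripped of GraphQL, the pattern looks like this (resolveProp is a hypothetical helper, not part of the graphql package):

```javascript
// A resolver for a scalar field is just a property picker
const resolveProp = (name) => (obj) => obj[name];

// Equivalent to the title/publisher resolvers in the schema above
const resolveTitle = resolveProp('title');
const resolvePublisher = resolveProp('publisher');

const row = { title: 'Uniclass 2015', publisher: 'NBS' };
console.log(resolveTitle(row));     // 'Uniclass 2015'
console.log(resolvePublisher(row)); // 'NBS'
```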

Step 4

Then we define the queries we want to support on our objects. We’ll allow clients to query classifications on title and classification items on notation, parentId and classificationId:

const classificationQuery = {
  type: new GraphQLList(Classification),
  args: {
    title: {
      type: GraphQLString,
    },
  },
  resolve(root, args) {
    return Db.Classification.findAll({ where: args });
  },
};

const classificationItemQuery = {
  type: new GraphQLList(ClassificationItem),
  args: {
    notation: {
      type: GraphQLString,
    },
    parentId: {
      type: GraphQLInt,
    },
    classificationId: {
      type: GraphQLInt,
    },
  },
  resolve(root, args) {
    return Db.ClassificationItem.findAll({ where: args });
  },
};

const QUERIES = new GraphQLObjectType({
  name: 'Query',
  description: 'Root Query Object',
  fields() {
    return {
      // We support the following queries
      classification: classificationQuery,
      classificationItem: classificationItemQuery,
    };
  },
});

const SCHEMA = new GraphQLSchema({
  query: QUERIES,
});

export default SCHEMA;

Finally we create and export the GraphQLSchema with the queries we defined.

Step 5

We now have to add graphql to the express service we created, using the express-graphql package – by adding a few more lines to app.js:

import express from 'express';
import graphqlHTTP from 'express-graphql';
import schema from './schema';

const app = express();

app.use('/graphql', graphqlHTTP({
  schema: schema,
  // Enable UI
  graphiql: true,
}));

app.listen(3000, () => console.log('Now listening on localhost:3000'));

We’ve created a new express endpoint called /graphql and have attached the graphqlHTTP middleware with the schema we declared in the previous steps. We have also enabled the GraphiQL UI. If you run the service and navigate to http://localhost:3000/graphql, you’ll see the UI.

GraphiQL

GraphiQL is a fab front end for testing out your GraphQL queries and reading documentation about the capabilities of the API, and it shows off some of the nice GraphQL features such as validating queries against the schema (e.g. checking the API supports the fields being queried).
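For example, with the schema we built above loaded, a query like this (using the field names we defined) can be issued straight from GraphiQL:

```graphql
{
  classification(title: "Uniclass 2015") {
    title
    publisher
    classificationItems {
      notation
      title
    }
  }
}
```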

Summary

This has been a quick write up of getting up and running with Express, GraphQL and Sequelize. It only scratches the surface of GraphQL – in this example we’ve only looked at reading data not mutating it. So far, we’ve been really impressed with GraphQL and have found it really good to work with. On the client, we’ve been looking at the Apollo GraphQL client which offers some nice features out of the box – including integration of a Redux store, query caching and some nice Chrome developer tools…but maybe more on this in a later post.


Sequelize

It’s a new year and we’ve started some new projects at work. Over the next few months I’m working on a project to push our specification products forward using newer technologies. Traditionally, I’ve worked mostly with a Microsoft stack – SQL Server and .NET (Entity Framework, WinForms or Web API and ASP.NET). However, a hobby of mine (and part of my role at work) is to keep tabs on the latest technologies. I’ve been following the various emerging JavaScript frameworks closely over the last few years – EmberJS, Angular, VueJS, NodeJS, Express (I’ve not looked at ReactJS yet but mean to). One thing I tell everyone who will listen is to bookmark the ThoughtWorks technology radar.

For the new project, I want to use JavaScript only – Angular2 on the front end, and NodeJS/Express on the back end. The main motivation is one of cost and scalability – JavaScript runs on pretty much anything, the ecosystem is full of open source solutions and the stack is now fairly mature (with successful production usage of many of the frameworks). I considered .NET Core but, from a previous prototype, the toolset isn’t mature enough yet (maybe it will be when the next version of Visual Studio is released). I also have to admit I found the whole .NET Core experience quite frustrating during that prototype, with tools being marked as RC1 (DNVM, DNX etc.) only to be totally re-written in RC2 (the dotnet CLI). There were good reasons, but the changes were so fundamental they should have gone back to a beta/preview status.

The first area I started looking at was the backend data model, API and database. It was during reviewing GraphQL that I happened upon an excellent video by Lee Benson where he showed implementing a GraphQL API backed by a database that used Sequelize as the data access component. As mentioned, I’m used to EntityFramework so I’m familiar with ORMs – I’ve just never used an ORM written in JavaScript!

This blog post will cover a very simple example of creating a NodeJS app and a Sequelize model that backs a Postgres database.

Step 1

Our first step is to create a new node app and add the necessary dependencies.

$ npm init
$ npm install sequelize --save

# Package for Postgres support
$ npm install pg --save

Step 2

We’re going to create a very simple model to store Uniclass 2015 in a database. We will model this as 2 tables:

Simple entity-relationship diagram

The classification table will store the name of the classification; the classificationItems table will store all of the entries in Uniclass 2015. ClassificationItems will be a self-referencing table so that we can model Uniclass 2015 as a tree.
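To see why a single self-referencing parentId column is enough to represent the hierarchy, here’s a plain-JavaScript sketch (illustrative only – Sequelize manages this for us) that folds flat rows into a nested tree. The sample rows are made up for the example:

```javascript
// Flat rows as they'd come out of the classificationItems table
const rows = [
  { id: 1, notation: 'Ss', title: 'Systems', parentId: null },
  { id: 2, notation: 'Ss_15', title: 'Earthworks systems', parentId: 1 },
  { id: 3, notation: 'Ss_15_10', title: 'Groundworks and earthworks systems', parentId: 2 },
];

// Build a tree by attaching each row to its parent's children array
function buildTree(rows) {
  const byId = new Map(rows.map((r) => [r.id, Object.assign({ children: [] }, r)]));
  const roots = [];
  byId.forEach((node) => {
    const parent = byId.get(node.parentId);
    if (parent) {
      parent.children.push(node);
    } else {
      roots.push(node); // parentId: null => top-level item
    }
  });
  return roots;
}

const tree = buildTree(rows);
console.log(tree[0].notation);                      // 'Ss'
console.log(tree[0].children[0].children[0].title); // 'Groundworks and earthworks systems'
```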

Step 3

We’re going to use Atom, a fantastic text editor, to write our JavaScript. First, we need to create a new .js file to add our database model to. We’ll call this new file “db.js”.

First off, we need to import the sequelize library and create our database connection:

const Sequelize = require('sequelize');

const Conn = new Sequelize(
  'classification',
  'postgres',
  'postgres',
  {
    dialect: 'postgres',
    host: 'localhost',
  }
);

Sequelize supports a number of different databases – MySQL, MariaDB, SQLite, Postgres and MS SQL Server. In this example, we’re using the Postgres dialect.

Next we define our two models:

const Classification = Conn.define('classification', {
  title: {
    type: Sequelize.STRING,
    allowNull: false,
    comment: 'Classification system name'
  },
  publisher: {
    type: Sequelize.STRING,
    allowNull: true,
    comment: 'The author of the classification system'
  },
});

const ClassificationItem = Conn.define('classificationItem', {
  notation: {
    type: Sequelize.STRING,
    allowNull: false,
    comment: 'Notation of the Classification'
  },
  title: {
    type: Sequelize.STRING,
    allowNull: false,
    comment: 'Title of the Classification Item'
   }
});

We use the connection to define each table. We then define the fields within that table (in our example we allow Sequelize to generate an id field and manage the primary keys).

As you’d expect, Sequelize supports a number of field data types – strings, blobs, numbers etc. In our simple example, we’ll just use strings.

Each of our fields requires a value – so we use the allowNull property to enforce that values are required. Sequelize has a wealth of other validators to check whether fields are email addresses, credit card numbers etc.

Once we have our models, we have to define the relationships between them so that Sequelize can manage our many-to-one relationships.

Classification.hasMany(ClassificationItem);
ClassificationItem.belongsTo(Classification);
ClassificationItem.hasMany(ClassificationItem, { foreignKey: 'parentId' });
ClassificationItem.belongsTo(ClassificationItem, {as: 'parent'});

We use the hasMany relationship to tell Sequelize that both Classification and ClassificationItem have many children. Sequelize automatically adds a foreign key to the child relationship and provides convenience methods to add models to the child relationship.

The belongsTo relationship allows child models to get their parent object. This provides us with a convenience method to get our parent object if we need it in our application. Sequelize allows us to control the name of the foreign key. As mentioned above, ClassificationItem is a self-referencing table to help us model the classification system as a tree. Rather than ‘classificationItemId’ being the foreign key to the parent item, I’d prefer parentId to be used instead. This would give us a getParent() method too which reads better. We achieve this by specifying the foreignKey on one side of the relationship and { as: ‘parent’ } against the other side.

Step 4

Next we get Sequelize to create the database tables, and we write a bit of code to seed the database with some test data:

Conn.sync({ force: true }).then(() => {
  return Classification.create({
    title: 'Uniclass 2015',
    publisher: 'NBS'
  });
}).then((classification) => {
  return classification.createClassificationItem({
    notation: 'Ss',
    title: 'Systems'
  }).then((classificationItem) => {
    return classificationItem.createClassificationItem({
      notation: 'Ss_15',
      title: 'Earthworks systems',
      classificationId: classification.id,
      //parentId: classificationItem.id
    });
  }).then((classificationItem) => {
    return classificationItem.createClassificationItem({
      notation: 'Ss_15_10',
      title: 'Groundworks and earthworks systems',
      classificationId: classification.id
    });
  }).then((classificationItem) => {
    return classificationItem.createClassificationItem({
      notation: 'Ss_15_10_30',
      title: 'Excavating and filling systems',
      classificationId: classification.id
    });
  }).then((classificationItem) => {
    classificationItem.createClassificationItem({
      notation: 'Ss_15_10_30_25',
      title: 'Earthworks excavating systems',
      classificationId: classification.id
    });

    classificationItem.createClassificationItem({
      notation: 'Ss_15_10_30_27',
      title: 'Earthworks filling systems',
      classificationId: classification.id
    });
  });
});

The sync command creates the database tables – by specifying { force: true }, Sequelize will drop any existing tables and re-create them. This is ideal for development environments, but obviously NOT for production!
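One simple guard – assuming the conventional NODE_ENV environment variable – is to only force-drop the tables outside production. A minimal sketch:

```javascript
// Only drop-and-recreate tables outside production
function syncOptions(env) {
  return { force: env !== 'production' };
}

// e.g. Conn.sync(syncOptions(process.env.NODE_ENV));
console.log(syncOptions('development')); // { force: true }
console.log(syncOptions('production'));  // { force: false }
```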

The rest of the code creates a classification object and several classification items. Notice that I use the createClassificationItem method so that parent ids are set automatically when inserting child records.

The resulting database looks like this:

Step 5

Now we have a model and some data, we can perform a few queries.

1. Get root level classification items:

Classification.findOne({
  where: {
    title: 'Uniclass 2015'
  }
}).then((result) => {
  return result.getClassificationItems({
    where: {
      parentId: null
    }
  });
}).then((result) => {
  result.map((item) => {
    const { notation, title } = item;
    console.log(`${notation} ${title}`);
  });
});

Output:

Ss Systems

2. Get classification items (and their children) with a particular notation:

ClassificationItem.findAll({
  where: {
    notation: {
      $like: 'Ss_15_10_30%'
    }
  }
}).then((results) => {
  results.map((item) => {
    const { notation, title } = item;
    console.log(`${notation} ${title}`);
  });
});

Output

Ss_15_10_30 Excavating and filling systems
Ss_15_10_30_25 Earthworks excavating systems
Ss_15_10_30_27 Earthworks filling systems

3. Get a classification item’s parent:

ClassificationItem.findOne({
  where: {
    id: 6
  }
}).then((result) => {
  const { notation, title } = result;
  console.log(`Child: ${notation} ${title}`);
  return result.getParent();
}).then((parent) => {
  const { notation, title } = parent;
  console.log(`Parent: ${notation} ${title}`);
});

Output

Ss_15_10_30_27 Earthworks filling systems
Ss_15_10_30 Excavating and filling systems

That was a quick whistle-stop tour of some of the basic features of Sequelize. At the moment I’m really impressed with it. The only thing that takes a bit of getting used to is working with all the promises. Promises are really powerful, but you need to think about the structure of your code to prevent lots of nested thens.
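Much of the nesting in the seed script above can be avoided by returning each promise and chaining at a single level. A small illustration of the difference, using plain promises rather than Sequelize:

```javascript
// Nested thens drift steadily to the right...
function nested() {
  return Promise.resolve(1).then((a) => {
    return Promise.resolve(a + 1).then((b) => {
      return Promise.resolve(b + 1).then((c) => c);
    });
  });
}

// ...while returning each promise keeps the chain flat
function flat() {
  return Promise.resolve(1)
    .then((a) => Promise.resolve(a + 1))
    .then((b) => Promise.resolve(b + 1));
}

flat().then((result) => console.log(result)); // 3
```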

Run up to Christmas 2016

It’s been a busy few months running up to Christmas. A trip to Autodesk University in Las Vegas was followed by a family holiday to Center Parcs Cumbria. That left three working weeks before my Christmas break, which starts today (Friday 16th December). This blog post is a quick round-up of what I’ve been up to over the last few weeks!

Autodesk University

I was extremely lucky to go to Autodesk University in Las Vegas this year. I went to three of the four days of the conference so that I wasn’t away from my family for the whole week. The main purpose of the trip was to demonstrate how NBS and Autodesk technologies can come together to offer innovative solutions to our customers. I also wanted to find out more about the Forge platform – where it’s going, what the pricing model is, etc. – and generally make some support contacts. Finally, I wanted to attend some of the classes, visit the exhibitors and hopefully come away with a good haul of free stuff 🙂

I have to admit, the whole event was a total whirlwind for me. Las Vegas is an amazing place, but totally unlike anywhere I’ve ever been. Everything is massive and over-the-top. The venue and hotel I stayed at, the Venetian, for example, is home to the famous shopping arcade with gondolas in it!

The conference was equally huge – around 10,000 attendees milling about a huge exhibition hall, classes, breakout areas and labs. After a jam-packed day starting at 8am, there were loads of after-conference parties, the highlight being the AU party on the promenade – in a bar with a bowling alley and a tremendous 80s tribute band with the slightly politically incorrect name of The Spazmatics.

On the whole I got a lot out of the conference. I’m a big fan of going to conferences to get out of the office and see what’s going on in the industry. I especially liked seeing the advances in 3D printing, Computer Numeric Control (CNC) machining, and Augmented Reality and Virtual Reality – hopefully I’ll get a chance to do something in this area in the next year or two.

Forge prototype

We got a load of good feedback from Autodesk off the back of Autodesk University, so over the last few weeks I’ve been adding additional functionality to our Forge viewer prototype to get it ready for a private beta test at some point next year. I’ve also learnt a load about VueJS – the JavaScript framework I used to help with some of the logic. I used VueJS 1.0 – but I still have a blog post or two to write on how to communicate between components and how to get plain JavaScript code to update Vue observables – so watch this space!

New technologies and what to expect in 2017

As well as Forge, we’ve been planning the next year at work. In the new year I’ll be working on a few exciting projects that will look to use the latest technologies to push our specification products forward. I’m hoping for opportunities to do quite a few blog posts on graph databases, AngularJS 2 and more.

And finally…

On reflection, 2016 has been a fantastic year – at work I got the opportunity to visit Milan, San Francisco and Las Vegas. I worked on projects that used new technologies to me such as RFID readers, and technologies such as Forge. Outside of work, my wife gave birth to our baby girl, Chloe Eve Smith, who completes my wonderful family. 2017 looks to be challenging but extremely exciting times to be both in software development and working at NBS.

More Autodesk Forge

Back in August, I blogged about attending the Autodesk Forge DevCon in San Francisco. This month I’m again extremely fortunate and am attending Autodesk University in Las Vegas with work.

Since my previous blog post, I’ve been busy on a proof of concept that marries our NBS Create specification product and the Autodesk Forge Viewer. There will be more to follow in the coming months, but for now I just wanted to capture a few features I implemented in case they are useful to anyone else.

1. Creating an extension that captures object selection

The application I’m prototyping needs to extract data from the model when an object is clicked. The Forge Viewer API documentation covers how to create and register an extension to get selection events etc. Adding functionality as an extension is the recommended approach for adding custom functionality to the viewer.

The data my application needs from the viewer can only be obtained when the viewer has fully loaded the model’s geometry and object tree. So we have to be sure we subscribe to the appropriate events.

Create and register the extension

function NBSExtension(viewer, options) {
  Autodesk.Viewing.Extension.call(this, viewer, options);
}

NBSExtension.prototype = Object.create(Autodesk.Viewing.Extension.prototype);
NBSExtension.prototype.constructor = NBSExtension;

Autodesk.Viewing.theExtensionManager.registerExtension('NBSExtension', NBSExtension);

Subscribe and handle the events

My extension needs to handle the SELECTION_CHANGED_EVENT, GEOMETRY_LOADED_EVENT and OBJECT_TREE_CREATED_EVENT events. The events are bound in the extension’s load method.

NBSExtension.prototype.load = function () {
  console.log('NBSExtension is loaded!');

  this.onSelectionBinded = this.onSelectionEvent.bind(this);
  this.viewer.addEventListener(Autodesk.Viewing.SELECTION_CHANGED_EVENT, this.onSelectionBinded);

  this.onGeometryLoadedBinded = this.onGeometryLoadedEvent.bind(this);
  this.viewer.addEventListener(Autodesk.Viewing.GEOMETRY_LOADED_EVENT, this.onGeometryLoadedBinded);

  this.onObjectTreeCreatedBinded = this.onObjectTreeCreatedEvent.bind(this);
  this.viewer.addEventListener(Autodesk.Viewing.OBJECT_TREE_CREATED_EVENT, this.onObjectTreeCreatedBinded);

  return true;
};

A well-behaved extension should also clean up after itself when it’s unloaded.

NBSExtension.prototype.unload = function () {
  console.log('NBSExtension is now unloaded!');

  this.viewer.removeEventListener(Autodesk.Viewing.SELECTION_CHANGED_EVENT, this.onSelectionBinded);
  this.onSelectionBinded = null;

  this.viewer.removeEventListener(Autodesk.Viewing.GEOMETRY_LOADED_EVENT, this.onGeometryLoadedBinded);
  this.onGeometryLoadedBinded = null;

  this.viewer.removeEventListener(Autodesk.Viewing.OBJECT_TREE_CREATED_EVENT, this.onObjectTreeCreatedBinded);
  this.onObjectTreeCreatedBinded = null;

  return true;
};

When the events fire, the following functions are called to allow us to handle the event however we want:

// Event handler for Autodesk.Viewing.SELECTION_CHANGED_EVENT
NBSExtension.prototype.onSelectionEvent = function (event) {
  var currSelection = this.viewer.getSelection();

  // Do more work with current selection
};

// Event handler for Autodesk.Viewing.GEOMETRY_LOADED_EVENT
NBSExtension.prototype.onGeometryLoadedEvent = function (event) {
 
};

// Event handler for Autodesk.Viewing.OBJECT_TREE_CREATED_EVENT
NBSExtension.prototype.onObjectTreeCreatedEvent = function (event) {

};

2. Get object properties

Once we have the selected item, we can call getProperties on the viewer to get an array of all of the property key/value pairs for that object.

var currSelection = this.viewer.getSelection();

// Do more work with current selection
var dbId = currSelection[0];

this.viewer.getProperties(dbId, function (data) {
  // Find the property NBSReference 
  var nbsRef = _.find(data.properties, function (item) {
    return (item.displayName === 'NBSReference');
  });

  // If we have found NBSReference, get the display value
  if (nbsRef && nbsRef.displayValue) {
    console.log('NBS Reference found: ' + nbsRef.displayValue);
  }
}, function () {
  console.log('Error getting properties');
});

The call to this.viewer.getSelection() returns an array of dbIds (database ids). Each dbId can be passed to the getProperties function to get the properties for that object. My extension then looks through the array of properties for an “NBSReference” property, which can be used to display the associated specification for that object.

Notice that I use Underscore.js’s _.find() function to search the array of properties. I opted for this as I found IE11 didn’t support JavaScript’s native Array.prototype.find(). I like the readability of the function, and Underscore.js provides the necessary polyfill for IE11.
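If you’d rather not pull in a library just for this, the same lookup can be written as a plain loop (IE11-safe) – this is essentially what _.find is doing for us. The property values below are made up for illustration:

```javascript
// Minimal stand-in for _.find: return the first item matching the predicate
function findFirst(items, predicate) {
  for (var i = 0; i < items.length; i++) {
    if (predicate(items[i])) {
      return items[i];
    }
  }
  return undefined;
}

// Hypothetical property array, shaped like the viewer's getProperties result
var properties = [
  { displayName: 'Category', displayValue: 'Walls' },
  { displayName: 'NBSReference', displayValue: '45-35-70/334' },
];

var nbsRef = findFirst(properties, function (item) {
  return item.displayName === 'NBSReference';
});

console.log(nbsRef.displayValue); // '45-35-70/334'
```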

3. Getting area and volume information

Once the geometry is loaded from the model and the internal object tree is created, it’s possible to query the properties in the model that relate to area and volume. For my prototype, I wanted to sum the area and volume of the types of object the user has selected in the model.

In order to do this, I needed to:

  1. Get the dbId of the selection item
  2. Find that dbID in the object tree
  3. Move to the object’s parent and get all of its children (in other words, get the siblings of the selected item)
  4. Sum the area and volume properties of the children

The first step is to build our own representation of the model tree in memory (this must effectively be how the Forge viewer displays the model tree). My code is based on this blog post by Philippe Leefsma.

var viewer = viewerApp.getCurrentViewer();
var model = viewer.model;

if (!modelTree && model.getData().instanceTree) {
  modelTree = buildModelTree(viewer.model);
}

var buildModelTree = function (model) {
  // builds model tree recursively
  function _buildModelTreeRec(node) {
    instanceTree.enumNodeChildren(node.dbId, function (childId) {
      node.children = node.children || [];

      var childNode = {
        dbId: childId,
        name: instanceTree.getNodeName(childId)
      }

      node.children.push(childNode);
      _buildModelTreeRec(childNode);
    });
  }

  // get model instance tree and root component
  var instanceTree = model.getData().instanceTree;
  var rootId = instanceTree.getRootId();
  var rootNode = {
    dbId: rootId,
    name: instanceTree.getNodeName(rootId)
  }
 
  _buildModelTreeRec(rootNode);

  return rootNode;
};

This gives us a representation of the model tree. Once we’ve located all of the siblings, we can use the dbId of each sibling to get its area and volume properties.

The code I wrote was based on this sample, originally written by Jim Awe. I have to admit, my code is a little bit messy. There are a lot of asynchronous operations going on which use quite a few callbacks, and you end up close to a pyramid of doom. The code was good for my needs, but if I was doing anything more complicated I’d look into using Promises to tidy the code up a bit.

function _getReportData(items, callback) {
  var results = { "areaSum": 0.0, "areaSumLabel": "", "areaProps": [], "volumeSum": 0.0, "volumeSumLabel": "", volumeProps: [], "instanceCount": 0, "friendlyNotationWithSuffix": friendlyNotationWithSuffix.trim() };

  var viewer = viewerApp.getCurrentViewer();
  var nodes = items;

  nodes.forEach(function (dbId, nodeIndex, nodeArray) {
    // Find node 
    var leafNodes = getLeafNodes(dbId, modelTree);
    if (!leafNodes) return;
    results.instanceCount += leafNodes.length;

    leafNodes.forEach(function (node, leafNodeIndex, leafNodeArray) {
      viewer.getProperties(node.dbId, function (propObj) {
        for (var i = 0; i < propObj.properties.length; ++i) {
          var prop = propObj.properties[i];
          var propValue;
          var propFormat;

          if (prop.displayName === "Area") {
            propValue = parseFloat(prop.displayValue);

            results.areaSum += propValue;
            results.areaSumLabel = Autodesk.Viewing.Private.formatValueWithUnits(results.areaSum.toFixed(2), prop.units, prop.type);

            propFormat = Autodesk.Viewing.Private.formatValueWithUnits(prop.displayValue, prop.units, prop.type);
            results.areaProps.push({ "dbId": dbId, "val": propValue, "label": propFormat, "units": prop.units });
          } else if (prop.displayName === "Volume") {
            propValue = parseFloat(prop.displayValue);

            results.volumeSum += propValue;
            results.volumeSumLabel = Autodesk.Viewing.Private.formatValueWithUnits(results.volumeSum.toFixed(2), prop.units, prop.type);

            propFormat = Autodesk.Viewing.Private.formatValueWithUnits(prop.displayValue, prop.units, prop.type);
            results.volumeProps.push({ "dbId": dbId, "val": propValue, "label": propFormat, "units": prop.units });
          }
        }

        // Callback when we've processed everything
        if (callback && nodeIndex === nodeArray.length - 1 && leafNodeIndex === leafNodeArray.length - 1) {
          callback(results);
        }
      });
    });
  });
}

var getLeafNodes = function (parentNodeDbId, parentNode) {
  var result = null;

  function _getLeafNodesRec(parentNodeDbId, node) {
    // Have we found the node we're looking for?
    if (node.dbId === parentNodeDbId) {
      // We return the children (or the node itself if there are no children)
      result = node.children || [node];
    } else {
      if (node.children) {
        node.children.forEach(function (childNode, index, array) {
          if (result) return;
          _getLeafNodesRec(parentNodeDbId, childNode);
        });
      }
    }
 }

 _getLeafNodesRec(parentNodeDbId, parentNode);
 return result;
};

A couple of things to call out from the above code: the function getLeafNodes is used to get the siblings of the selected item, and the Autodesk Forge viewer has a method to format volumes and areas with the appropriate units:

Autodesk.Viewing.Private.formatValueWithUnits(prop.displayValue, prop.units, prop.type);

I couldn’t actually find this documented in the API though – it was only in the samples on GitHub. But it’s a handy way of getting a nicely formatted string of values with the appropriate units.
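On the “use Promises to tidy it up” point from earlier: wrapping the viewer’s callback-based getProperties in a Promise is one way to flatten the pyramid. A sketch, exercised here against a stand-in viewer object rather than the real Forge API:

```javascript
// Wrap the callback-style getProperties(dbId, onSuccess, onError) in a Promise
function getPropertiesAsync(viewer, dbId) {
  return new Promise(function (resolve, reject) {
    viewer.getProperties(dbId, resolve, reject);
  });
}

// A stand-in viewer so the sketch can run outside the Forge environment
var fakeViewer = {
  getProperties: function (dbId, onSuccess) {
    onSuccess({ dbId: dbId, properties: [{ displayName: 'Area', displayValue: '12.5' }] });
  },
};

// Summing over several dbIds then becomes a flat Promise.all
Promise.all([1, 2].map(function (dbId) {
  return getPropertiesAsync(fakeViewer, dbId);
})).then(function (results) {
  var total = results.reduce(function (sum, propObj) {
    return sum + parseFloat(propObj.properties[0].displayValue);
  }, 0);
  console.log(total); // 25
});
```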

This has been another fairly lengthy blog post – so it deserves a few screenshots of the functionality that has been implemented:

And a big shout out to Kirsty Hudson for her awesome UX work!

XSLT via JavaScript

I’ve recently been working on a prototype that makes NBS Create systems readable in the web browser. This isn’t really a new concept as our NBS Create product and Revit plugin actually use an embedded WebBrowser .NET control (which is basically a wrapper around Internet Explorer).

Most of our products store their data in XML, and most transform XML to HTML to provide a rich editing experience (fonts, bullets, hyperlinks, symbols, etc.). XML is a bit out of favour now, with JSON being the preferred format. That said, XML still brings something to the table – with schemas, XPath and XSL transforms.

The data I was using for my prototype was in XML format. I was originally planning on letting .NET transform the XML and stream HTML to the client web browser. Out of curiosity, I Googled whether it was possible to do the transform via JavaScript – my thinking being that the client web browser could do the work rather than the server. It’s quite an old topic actually, and to my surprise some browsers can natively do the transform – but many tutorials and StackOverflow answers recommended using JavaScript. So I thought…“why not give it a shot!”

In modern web browsers (Chrome, Firefox, Safari) it was a trivial task:

Step 1: Load XML

My XML documents are sent to the client via an API as a string – the client must convert this string to an XMLDocument. jQuery makes this a breeze:

function parseXmlString(xml) {
  var xmlDoc = $.parseXML(xml);

  if (xmlDoc) {
    return xmlDoc;
  }
}

Step 2: Load XSLT

The XSLT is a resource on the server, so the client must load it like any other web resource. I chose to use jQuery to send an Ajax request:

function loadXmlDocument(url, callback) {
  $.ajax({
    url: url,
    dataType: "xml",
    success: function (data) {
      callback(data);
    }
  });
}

Step 3: Use the XSLTProcessor to transform

We just do a quick check of the browser’s capabilities to make sure it supports the XSLTProcessor.

if (typeof (XSLTProcessor) !== "undefined") { // FF, Safari, Chrome etc
  var xsltProcessor = new XSLTProcessor();

  xsltProcessor.importStylesheet(xsl);
  xsltProcessor.setParameter(null, "resPath", configSettings.areaPath + 'Content/img/CreateResources');

  var resultDocument = xsltProcessor.transformToFragment(xml, document);
  var contentNode = document.getElementById("clause-content");
  contentNode.appendChild(resultDocument);
}

Also worth highlighting is that my XSL requires some parameters passing to it; this is easily done via the setParameter() method.

Internet Explorer quirks

But things are never just that easy, are they? Internet Explorer 9-11 doesn’t support the XSLTProcessor; instead it uses an ActiveXObject to do the transform.

Again we need to test the browser’s capabilities, but there’s another quirk. IE 9-10 will pass a test for window.ActiveXObject; IE11, however, has a bug and will report a fail, so we must check for “ActiveXObject” in window too.

We also have another issue: XML has to be loaded into the ActiveXObject as a string (but we read it in as an XMLDocument). Frustratingly, the only workaround I could find was to serialise the XMLDocument back to a string so it can be loaded into the ActiveXObject *sigh*.

if (window.ActiveXObject || "ActiveXObject" in window) {
  var xslt = new ActiveXObject("Msxml2.XSLTemplate");
  var xslDoc = new ActiveXObject("Msxml2.FreeThreadedDOMDocument");

  var serializer = new XMLSerializer();
  var text = serializer.serializeToString(xsl);

  xslDoc.loadXML(text);
  xslt.stylesheet = xslDoc;

  var xslProc = xslt.createProcessor();
  xslProc.input = xml;
  xslProc.addParameter("resPath", configSettings.areaPath + 'Content/img/CreateResources');
  xslProc.transform();

  var output = xslProc.output;
  document.getElementById("clause-content").innerHTML = output;
}

Fortunately the IE 9-11 XSLT processor also supports the passing of parameter values.
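The feature-detection logic from the two branches above can be isolated into one small helper, which also makes it easy to unit test. This is a sketch of my own (the function name `detectXslEngine` is hypothetical, not from any library); it takes a window-like object so the branches can be exercised without a real browser:

```javascript
// Hypothetical helper isolating the feature detection described above.
// Given a window-like object, pick which transform path to use.
function detectXslEngine(win) {
  if (typeof win.XSLTProcessor !== 'undefined') {
    return 'xsltprocessor';   // FF, Safari, Chrome etc
  }
  if (win.ActiveXObject || 'ActiveXObject' in win) {
    return 'activex';         // IE 9-10, plus the "in" check for IE11
  }
  return 'none';              // no client-side XSLT support
}

console.log(detectXslEngine({ XSLTProcessor: function () {} })); // "xsltprocessor"
```

In the page itself you would call it as `detectXslEngine(window)` and branch to the appropriate transform code.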

And the final result


Swift, WCF WS2007HttpBinding and NBS Guidance on the Mac.

Background (and disclaimer)

I got my first IBM compatible PC in my early teens. It was a blisteringly fast 386sx running at 25MHz, with a 40MB HDD, all running MS-DOS 6 and Windows 3.1. I continued as a PC user until my 18th birthday back in 1999, when I got an iMac as a present from my parents. For the next few years I was quite a keen Mac user and during University started looking at the various Mac programming languages such as REALBasic, Carbon and Cocoa with Objective-C.

In 2008, I switched back to PC – mainly because my Core Duo iMac was starting to show its age against the new Intel Core i3/i5/i7 processors, and it was dirt cheap (in comparison to a new iMac) to custom build a new Windows based Core ix system. I was sad leaving the Mac platform though, as both the hardware and software are fantastic (even though the platform is quite closed).

In 2015, I returned to the Mac platform after using several Macs to present the BIM Toolkit. Using the Mac again, even briefly, brought back memories of the platform. I got myself a MacBook Air and, very quickly after returning to the platform, got back into looking at how the development tools had progressed since 2008. The new kid on the block is Swift, which seems to be steadily replacing Objective-C as the language used for iOS and Mac app development.

At NBS, our desktop products are written for Windows. Early versions of NBS Specification Manager were written in Visual C++, and were then ported to .NET 1.0. .NET WinForms is heavily tied to the underlying Windows APIs so isn’t easily portable to other operating systems. However, as a Mac user, I’ve always been keen on trying to do some sort of Mac prototype. I was eager to try to build some kind of Mac app using Swift and thought I could create a simple(ish) NBS Guidance viewer app.

Just before I get too far into the details, it’s important to mention that the work discussed in this post is purely hobby work created in my own time. It’s a proof of concept/training application to learn Swift.

Features/requirements

The application I wanted to create would display the NBS Guidance from NBS Create in a native MacApp. The features I wanted to implement were:

  • Login and take NBS Create license seats to view NBS Guidance
  • Navigate NBS Guidance
  • Search NBS Guidance
  • Print NBS Guidance
  • Open external references (such as British Standards) in the user’s default web browser.
  • Add, Edit and Delete notes
  • Create Unit tests to automate testing of the application

Challenges

In implementing the above, I encountered a number of problems and at the very least hope that someone will find some of my solutions helpful.

The challenges I faced were:

  • Authenticating an NBS user account against the WS2007HttpBinding of our WCF licensing web service.
  • Embedding a WebKit view within a MacApp
  • Calling JavaScript methods within the WebKitView from Swift
  • Calling Swift methods from the WebKitView

Create Licensing Service

The first hurdle I had to jump was authenticating an NBS user account against our NBS Create licensing web service. The licensing web service endpoint we need to communicate with uses the WS2007HttpBinding. We use this binding over SSL to provide end-to-end encryption from the client to the server. The user’s username and password are used for authentication and internally verified against our user account database.

The WCF service was created back in 2009 and all requests and responses are sent as SOAP envelopes. This makes request and response messages quite verbose. .NET has a nifty feature of building a proxy client based on the WCF service’s web service definition. This wraps up/auto-generates a lot of the code to invoke endpoint methods and authenticate requests. I would have to understand how the proxy does this in order for my Swift project to send the same requests to the service.

It took hours of reading to fathom how to authenticate against a WS2007HttpBinding – to understand the WS-Trust specification and the algorithms used to encrypt and sign messages. I even had to look in the .NET source!

Communicating with the licensing service

Authenticating with a WCF service via a WS2007HTTPBinding takes a number of steps.

Step 1

We need to establish a security context (or a session) with the server. This involves sending an unauthenticated request for a security token to the server with a few bits of key information that will be used to establish end-to-end encryption between the client and the server.

<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" xmlns:a="http://www.w3.org/2005/08/addressing" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
  <s:Header>
    <a:Action s:mustUnderstand="1">http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/SCT</a:Action>
    <a:MessageID>urn:uuid:Client generated GUID</a:MessageID>
    <a:ReplyTo>
        <a:Address>http://www.w3.org/2005/08/addressing/anonymous</a:Address></a:ReplyTo>
    <a:To s:mustUnderstand="1">Service URI</a:To>
    <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
      <u:Timestamp u:Id="_0">
        <u:Created>2016-09-10T20:24:34.008Z</u:Created>
        <u:Expires>2016-09-10T20:25:34.008Z</u:Expires>
      </u:Timestamp>
      <o:UsernameToken u:Id="uuid-Client generated GUID-1">
        <o:Username>username</o:Username>
        <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">password</o:Password>
        </o:UsernameToken>
    </o:Security>
  </s:Header>
  <s:Body>
    <trust:RequestSecurityToken xmlns:trust="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
      <trust:TokenType>http://docs.oasis-open.org/ws-sx/ws-secureconversation/200512/sct</trust:TokenType>
      <trust:RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</trust:RequestType>
      <trust:Entropy>
        <trust:BinarySecret u:Id="uuid-Client generated GUID" Type="http://docs.oasis-open.org/ws-sx/ws-trust/200512/Nonce">Client nonce</trust:BinarySecret>
      </trust:Entropy>
      <trust:KeySize>256</trust:KeySize>
    </trust:RequestSecurityToken>
  </s:Body>
</s:Envelope>

You can see that because we’re using SOAP, the messages are quite verbose and there is quite a lot going on.

  • There are several GUIDs that the client needs to generate – MessageID, UsernameTokenID and BinarySecretID. These are created in Swift as NSUUID
  • Our service uses UsernameToken authentication, so the username and password must be sent in the request. This is why we use SSL, so this data is encrypted.
  • The client must generate Entropy (a client nonce, or cnonce); the server will respond with its own Entropy (nonce), and both nonces will be used to sign subsequent messages sent to the server, so that the server knows it’s a genuine request from our client.
  • The client nonce is simply 32 random bytes, Base64 encoded. My solution uses the following code to generate a secure random 32-byte array:
let s = NSMutableData(length: 32)
SecRandomCopyBytes(kSecRandomDefault, s!.length, UnsafeMutablePointer<UInt8>(s!.mutableBytes))
let nonceString = s!.base64EncodedStringWithOptions(NSDataBase64EncodingOptions(rawValue: 0))

Step 2

The server will respond with a fairly long message:

<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" xmlns:a="http://www.w3.org/2005/08/addressing" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
  <s:Header>
    <a:Action s:mustUnderstand="1">http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTR/SCT</a:Action>
    <a:RelatesTo>urn:uuid:Message GUID</a:RelatesTo>
    <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
      <u:Timestamp u:Id="_0">
        <u:Created>2016-09-10T21:00:31.798Z</u:Created>
        <u:Expires>2016-09-10T21:05:31.798Z</u:Expires>
      </u:Timestamp>
    </o:Security>
  </s:Header>
  <s:Body>
    <trust:RequestSecurityTokenResponseCollection xmlns:trust="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
    <trust:RequestSecurityTokenResponse>
      <trust:TokenType>http://docs.oasis-open.org/ws-sx/ws-secureconversation/200512/sct</trust:TokenType>
    <trust:RequestedSecurityToken>
      <sc:SecurityContextToken u:Id="uuid-66e50eda-1209-4c6a-b893-66e0ed15a79f-7681" xmlns:sc="http://docs.oasis-open.org/ws-sx/ws-secureconversation/200512">
        <sc:Identifier>urn:uuid:6fc437e3-e0dd-4847-882c-c31a9948324b</sc:Identifier>
      </sc:SecurityContextToken>
    </trust:RequestedSecurityToken>
    <trust:RequestedAttachedReference>
      <o:SecurityTokenReference xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
        <o:Reference ValueType="http://docs.oasis-open.org/ws-sx/ws-secureconversation/200512/sct" URI="#uuid-66e50eda-1209-4c6a-b893-66e0ed15a79f-7681"></o:Reference>
      </o:SecurityTokenReference>
      </trust:RequestedAttachedReference>
      <trust:RequestedUnattachedReference>
        <o:SecurityTokenReference xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
          <o:Reference URI="urn:uuid:6fc437e3-e0dd-4847-882c-c31a9948324b" ValueType="http://docs.oasis-open.org/ws-sx/ws-secureconversation/200512/sct"></o:Reference>
        </o:SecurityTokenReference>
      </trust:RequestedUnattachedReference>
      <trust:RequestedProofToken>
        <trust:ComputedKey>http://docs.oasis-open.org/ws-sx/ws-trust/200512/CK/PSHA1</trust:ComputedKey>
      </trust:RequestedProofToken>
      <trust:Entropy>
        <trust:BinarySecret u:Id="uuid-66e50eda-1209-4c6a-b893-66e0ed15a79f-7682" Type="http://docs.oasis-open.org/ws-sx/ws-trust/200512/Nonce">Server Nonce</trust:BinarySecret>
      </trust:Entropy>
      <trust:Lifetime>
        <u:Created>2016-09-10T21:00:31.798Z</u:Created>
        <u:Expires>2016-09-11T12:00:31.798Z</u:Expires>
      </trust:Lifetime>
      <trust:KeySize>256</trust:KeySize>
      </trust:RequestSecurityTokenResponse>
    </trust:RequestSecurityTokenResponseCollection>
  </s:Body>
</s:Envelope>

The 2 bits of information we really need from the server’s response are:

  • The sc:SecurityContextToken element – this is the security context that the server has established.
  • The server’s nonce (trust:BinarySecret). We need our client nonce and the server nonce to compute a 256-bit combined key. Only our client and the server know these nonce values.

Step 3

We now have enough information to invoke an authenticated request to a method of the licensing service.

The (lengthy) request we will send will look something like this:

<s:Envelope xmlns:a="http://www.w3.org/2005/08/addressing" xmlns:s="http://www.w3.org/2003/05/soap-envelope" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
 <s:Header>
   <a:Action s:mustUnderstand="1">http://tempuri.org/PivotalLicensingWebService/VerifyUserAccount</a:Action>
   <a:MessageID>urn:uuid:MessageID GUID</a:MessageID>
   <a:ReplyTo>
     <a:Address>http://www.w3.org/2005/08/addressing/anonymous</a:Address>
   </a:ReplyTo>
   <a:To s:mustUnderstand="1">NBS licensing service URI</a:To>
   <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
     <u:Timestamp xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" u:Id="_0">
       <u:Created>2016-09-13T19:03:50.450Z</u:Created>
       <u:Expires>2016-09-13T19:33:50.450Z</u:Expires>
     </u:Timestamp>
     <sc:SecurityContextToken u:Id="uuid-SecurityContextToken Id GUID" xmlns:sc="http://docs.oasis-open.org/ws-sx/ws-secureconversation/200512">
       <sc:Identifier>urn:uuid:SecurityContextToken Identifier GUID</sc:Identifier>
     </sc:SecurityContextToken>
     <Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
       <SignedInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
         <CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"></CanonicalizationMethod>
         <SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#hmac-sha1"></SignatureMethod>
         <Reference URI="#_0">
           <Transforms>
             <Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"></Transform>
           </Transforms>
           <DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"></DigestMethod>
           <DigestValue>Timestamp SHA1</DigestValue>
         </Reference>
       </SignedInfo>
       <SignatureValue>SignedInfo HMACSHA1</SignatureValue>
       <KeyInfo>
         <o:SecurityTokenReference>
           <o:Reference URI="#uuid-885b42d2-70d2-44a5-8bcd-3f2083d8113f-85591" ValueType="http://docs.oasis-open.org/ws-sx/ws-secureconversation/200512/sct"/>
         </o:SecurityTokenReference>
       </KeyInfo>
     </Signature>
   </o:Security>
 </s:Header>
 <s:Body>
   <VerifyUserAccount xmlns="http://tempuri.org/">
     <Username>NBS user acount username</Username>
     <Password>NBS user account password</Password>
   </VerifyUserAccount>
 </s:Body>
</s:Envelope>

One thing to point out that is *really* important: the XML we send to the server, or anything we sign (more on this below), MUST be in Canonical XML form, so that the client and server are working on the exact same sequence of bytes.

Before we can send the request we need to do a little bit of work. Firstly, we need to create some XML with a timestamp in it:

<u:Timestamp xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" u:Id="_0">
  <u:Created>2016-09-13T19:03:50.450Z</u:Created>
  <u:Expires>2016-09-13T19:33:50.450Z</u:Expires>
</u:Timestamp>

We then need to create an SHA1 hash of the timestamp XML. This hash has to be added to the SOAP message we send to the server – in the value of the <SignedInfo><DigestValue> element. Finally we need to sign the <SignedInfo> element using a HMAC-SHA1 hash. This is a type of message authentication that involves a cryptographic hash with a secret key. The secret key is computed using a PSHA1 hash algorithm that takes the client nonce and server nonce that were previously exchanged.

Phew, that all sounds a bit complicated – and to be honest I’m not at all an expert in this area. But it does make sense that the client and server have the same knowledge to generate the same key – and use that to generate the same hash on the exact same XML. In this way the client and the server know that the message really did come from the client/server.

I created a little wrapper class for the cryptographic functions I required:

public class Crypto : NSObject {
    public static func sha1(data: String) -> String {
        let data = data.dataUsingEncoding(NSUTF8StringEncoding)!
        var digest = [UInt8](count:Int(CC_SHA1_DIGEST_LENGTH), repeatedValue: 0)

        CC_SHA1(data.bytes, CC_LONG(data.length), &digest)

        let result = NSData(bytes: digest, length: Int(CC_SHA1_DIGEST_LENGTH))
        return result.base64EncodedStringWithOptions(NSDataBase64EncodingOptions.Encoding64CharacterLineLength)
    }

    // The symmetric key generation chosen is
    // http://schemas.xmlsoap.org/ws/2005/02/trust/CK/PSHA1
    // which per the WS-Trust specification is defined as follows:
    //
    //   The key is computed using P_SHA1
    //   from the TLS specification to generate
    //   a bit stream using entropy from both
    //   sides. The exact form is:
    //
    //   key = P_SHA1 (EntREQ, EntRES)
    //
    // where P_SHA1 is defined per http://www.ietf.org/rfc/rfc2246.txt
    // and EntREQ is the entropy supplied by the requestor and EntRES
    // is the entropy supplied by the issuer.
    //
    // From http://www.faqs.org/rfcs/rfc2246.html:
    //
    // 8<------------------------------------------------------------>8
    // First, we define a data expansion function, P_hash(secret, data)
    // which uses a single hash function to expand a secret and seed
    // into an arbitrary quantity of output:
    //
    // P_hash(secret, seed) = HMAC_hash(secret, A(1) + seed) +
    //                        HMAC_hash(secret, A(2) + seed) +
    //                        HMAC_hash(secret, A(3) + seed) + ...
    //
    // Where + indicates concatenation.
    //
    // A() is defined as:
    //   A(0) = seed
    //   A(i) = HMAC_hash(secret, A(i-1))
    //
    // P_hash can be iterated as many times as is necessary to produce
    // the required quantity of data. For example, if P_SHA-1 was
    // being used to create 64 bytes of data, it would have to be
    // iterated 4 times (through A(4)), creating 80 bytes of output
    // data; the last 16 bytes of the final iteration would then be
    // discarded, leaving 64 bytes of output data.
    // 8<------------------------------------------------------------>8
    public static func computeCombinedKey(reqEntropy: String, resEntropy: String, keySizeInBits: Int = 256) -> NSData {
        let requestorEntropy = NSData(base64EncodedString: reqEntropy, options: NSDataBase64DecodingOptions.init(rawValue: 0))
        let issuerEntropy = NSData(base64EncodedString: resEntropy, options: NSDataBase64DecodingOptions.init(rawValue: 0))

        let keySizeInBytes = keySizeInBits / 8;
        let key = NSMutableData(capacity: keySizeInBytes)

        let khaKey: NSData = requestorEntropy!

        // A(0), the 'seed'.
        var a: NSData = issuerEntropy!
        // Buffer for A(i) + seed
        var b: NSMutableData = NSMutableData(capacity: 160 / 8 + a.length)!
        var result = NSData()

        var i = 0
        while i < keySizeInBytes {
            // Calculate A(i+1).
            a = hmacSha1(a, key: khaKey)
            
            // Calculate A(i) + seed
            b = NSMutableData(capacity: 160 / 8 + a.length)!
            b.appendData(a)
            b.appendData(issuerEntropy!)
           
            result = NSData()
            result = hmacSha1(b, key: khaKey)
            
            for j in 0 ..< result.length {
                if i < keySizeInBytes {
                    i += 1
                    key!.appendData(result.subdataWithRange(NSRange.init(location: j, length: 1)))
                } else {
                    break;
                }
            }
        }
        
        return key!
    }
    
    public static func hmacSha1(data: NSData, key: NSData) -> NSData {
        let digestLen = CryptoAlgorithm.SHA1.digestLength
        let result = UnsafeMutablePointer<UInt8>.alloc(digestLen)
        
        let dataUnsafe = UnsafePointer<UInt8>(data.bytes)
        let keyUnsafe = UnsafePointer<UInt8>(key.bytes)
        
        CCHmac(CryptoAlgorithm.SHA1.HMACAlgorithm, keyUnsafe, key.length, dataUnsafe, data.length, result)
        
        let digest = NSData(bytes: result, length: digestLen)
        result.dealloc(digestLen)
        
        return digest
    }
}

The SHA1 hashes use the CommonCrypto library built into Mac OS X (10.5 and later). The P_SHA1 hash was a little bit trickier, as I wasn’t able to find a Swift equivalent that generated the same keys as .NET. For the solution, I had to look at the .NET source and translate from C# to Swift.

I would also have been fighting a losing battle if I hadn’t enabled WCF tracing (and used the Service Trace Viewer) to output the digests that were computed by the server. I took example digests from the trace log and created unit tests to ensure I was calculating the exact same signatures.

After several days of reading specs, blog posts and tearing my hair out I was finally successful in sending an authenticated message to the Licensing service.

Displaying NBS Guidance

Once I was able to make authenticated requests to the NBS licensing service, I was able to take seats and obtain tokens to display the NBS Guidance. I thought it would be quite nice for this sample app to have the capability to read, edit and add practice notes to the NBS Guidance (a feature of NBS Create).

There was quite a lot more work that went into this, but this blog post is quite long at this point, so it will have to wait for another day. In the meantime, here are lots of screenshots of the capabilities that were implemented.


Navigate NBS Guidance pages

Link to external citations such as British Standards and Building Regulations

View and zoom in to NBS Guidance graphics

Add practice notes


Print guidance


Search the guidance