Angular Forge Viewer component

Overview

My blog post on how to get the Autodesk Forge Viewer working in Angular is still quite popular, and I receive the odd message from time to time with questions and issues. Wind forward a year and there still don’t seem to be many other examples.

So I created an Angular component library that wraps up the Forge Viewer and published it as an NPM package – the source code is all open source and available on GitHub.

A couple of things to note:

  • The component library targets Angular 5 – it doesn’t work with earlier versions, and doesn’t work with Angular 6 yet.
  • The library includes TypeScript typings for most of the Viewer API to make the component nice to work with in conjunction with the Forge Viewer API documentation.
  • The library targets v4.* of the Forge Viewer. Required JavaScript and CSS files are downloaded from Autodesk servers by the component – so nothing needs to be added to the index.html. This also allows your app to lazy load the Forge Viewer resources.
  • The library contains a BasicExtension that is registered with all viewer components. The extension captures basic events – such as item selected, object tree loaded, etc. More on this below.

Availability

The initial release is immature and has not been fully exercised on a live project. I have tested it with a blank Angular CLI project, and there is example code on the excellent StackBlitz – https://stackblitz.com/edit/angular-forge-viewer.

How to use

Follow these steps to get the viewer working in your app.

Step 1

Add the ng2-adsk-forge-viewer NPM package to your app – npm install ng2-adsk-forge-viewer --save or yarn add ng2-adsk-forge-viewer

Step 2

Add an element to your component.html:

<adsk-forge-viewer [viewerOptions]="viewerOptions3d"
  (onViewerScriptsLoaded)="setViewerOptions()"
  (onViewingApplicationInitialized)="loadDocument($event)">
</adsk-forge-viewer>

Step 3

There is a specific flow of logic to initialize the viewer:

  1. The viewer is constructed and loads scripts/resources from Autodesk’s servers
  2. The onViewerScriptsLoaded event emits to indicate all viewer resources have been loaded
  3. viewerOptions input can now be set, which triggers the creation of the ViewingApplication (i.e. Autodesk.Viewing.Initializer is called)
    • A helper method getDefaultViewerOptions can be used to get the most basic viewer options
  4. The onViewingApplicationInitialized event is emitted and you can now load a document. The event arguments contain a reference to the viewer which can be used to set the documentId to load. E.g.:
public loadDocument(event: ViewingApplicationInitializedEvent) {
  event.viewerComponent.DocumentId = 'DOCUMENT_URN_GOES_HERE';
}
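For completeness, here is a sketch of the kind of options object you might assign in setViewerOptions() – the shape follows the standard Forge Viewer initializer options, but buildViewerOptions and the token value here are illustrative; in a real app the access token should come from your own server:

```typescript
// Illustrative sketch only: the env value and getAccessToken callback
// follow the usual Forge Viewer initializer options. The token below is
// a placeholder - fetch a real 2-legged OAuth token from your server.
interface ViewerInitializerOptions {
  env: string;
  getAccessToken: (onGetAccessToken: (token: string, expiresInSeconds: number) => void) => void;
}

function buildViewerOptions(): { initializerOptions: ViewerInitializerOptions } {
  return {
    initializerOptions: {
      env: 'AutodeskProduction',
      getAccessToken: (onGetAccessToken) => {
        onGetAccessToken('ACCESS_TOKEN_GOES_HERE', 3600);
      },
    },
  };
}
```

The getDefaultViewerOptions helper mentioned above is intended to produce something along these lines for the common case.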

Step 4

When the document has been loaded the onDocumentChanged event is emitted. This event can be used to define the view to display (by default, the viewer will load the first 3D viewable it can find). An example of displaying a 2D viewable:

component.html:

<adsk-forge-viewer [viewerOptions]="viewerOptions2d"
  (onViewerScriptsLoaded)="setViewerOptions()"
  (onViewingApplicationInitialized)="loadDocument($event)"
  (onDocumentChanged)="documentChanged($event)">
</adsk-forge-viewer>

component.ts:

public documentChanged(event: DocumentChangedEvent) {
  const viewerApp = event.viewingApplication;
  if (!viewerApp.bubble) return;

  // Use viewerApp.bubble to get a list of 2D objects
  const viewables = viewerApp.bubble.search({ type: 'geometry', role: '2d' });
  if (viewables && viewables.length > 0) {
    event.viewerComponent.selectItem(viewables[0].data);
  }
}

Extensions

BasicExtension

The viewer component comes with a BasicExtension that it registers against all of its viewers. The basic extension captures a handful of events including:

  • Autodesk.Viewing.FIT_TO_VIEW_EVENT,
  • Autodesk.Viewing.FULLSCREEN_MODE_EVENT,
  • Autodesk.Viewing.GEOMETRY_LOADED_EVENT,
  • Autodesk.Viewing.HIDE_EVENT,
  • Autodesk.Viewing.ISOLATE_EVENT,
  • Autodesk.Viewing.OBJECT_TREE_CREATED_EVENT,
  • Autodesk.Viewing.OBJECT_TREE_UNAVAILABLE_EVENT,
  • Autodesk.Viewing.RESET_EVENT,
  • Autodesk.Viewing.SELECTION_CHANGED_EVENT,
  • Autodesk.Viewing.SHOW_EVENT,

The viewer emits these events, which should support most use cases. It’s possible to obtain a reference to the BasicExtension via the viewer’s basicExtension getter, allowing you to subscribe to the extension’s events (exposed as an RxJS Observable).

Creating your own extension

The BasicExtension is derived from an abstract Extension that wraps up all the logic to register and unregister extensions with the Forge Viewer. It also contains logic to cast Forge Viewer event arguments to strongly typed TypeScript classes.

Your extension should derive from Extension and have a few basic properties and methods.

export class MyExtension extends Extension {
  // Extension must have a name
  public static extensionName: string = 'MyExtension';

  public load() {
    // Called when Forge Viewer loads your extension
  }

  public unload() {
    // Called when Forge Viewer unloads your extension
  }
}

Most of the methods in the abstract Extension class are protected, so they can be overridden in derived classes if required. For example, the BasicExtension overrides the registerExtension method to take a callback that lets the viewer component know when the extension has been registered.

More to come

This is a bit of an intro blog post – I’d suggest checking out my StackBlitz project for a working example. I’m sure more examples will follow in future posts – and remember this component is open source so feel free to fork and/or submit pull requests.



Testing Angular components

Introduction

On a new project at work we have been using Angular as the front-end framework. One challenge we’ve had to work through is how to most cost effectively test our components and services. By cost effective, I mean in terms of time required to write useful tests.

At the beginning of the project we were following the Angular tutorials for component testing, and the team embarked down the path of TestBed/component fixture tests. These are the kind of test where the component is set up, its dependencies are mocked out, some action is performed and the output is asserted. The problem we found, however, is that many of these tests end up interacting with the DOM (e.g. selecting elements by id or CSS class) to replicate user actions – such as clicking a button – or to check some text on the page is correct. E.g.

it('should display original title', () => {
  fixture.detectChanges();
  // query for the title by CSS element selector
  de = fixture.debugElement.query(By.css('h1'));
  el = de.nativeElement;
  expect(el.textContent).toContain(comp.title);
});

This is a simple example, but we quickly found that following this kind of pattern results in extremely expensive “integration” style tests that are time-consuming to write and brittle when things change.

During one sprint we found that for every hour of development we were spending 2-3 hours on tests, and when tests broke it was mainly due to dependency changes or framework updates after upgrading npm packages – rather than the tests finding a problem. We weren’t happy with this, so we held a developer catch-up to discuss changes to our testing strategy. We agreed to:

  • Favour more traditional unit tests over the component tests we were doing
  • Find libraries that let us easily mock out services and component dependencies
  • Favour Jasmine spies that simply check back-end methods/services are called, over fully mocking out their methods

Unit tests over component tests

There is no one-size-fits-all solution here, but the general rule of thumb is: if the logic can be wrapped up in a function and tested via a unit test, do that first. This can be achieved by pushing business logic down into helper classes.
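As a sketch of that idea (the class and numbers here are invented for illustration), logic pulled out of a component into a plain helper needs no Angular test machinery at all:

```typescript
// A plain helper class holding business logic extracted from a component.
class PriceCalculator {
  // Total of all line prices, with an optional percentage discount applied.
  total(prices: number[], discountPercent: number = 0): number {
    const sum = prices.reduce((acc, p) => acc + p, 0);
    return sum * (1 - discountPercent / 100);
  }
}

// In a Jasmine spec this is an ordinary unit test - no TestBed, no fixture:
//   it('applies the discount', () => {
//     expect(new PriceCalculator().total([10, 20], 50)).toBe(15);
//   });
```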

Services are a good candidate for unit testing: they are classes that can be instantiated directly, so they don’t need the TestBed.

describe('Service tests', () => {
  let service: MyService;

  beforeEach(() => {
    service = new MyService();
  });

  it('should be created', () => {
    expect(service).toBeTruthy();
  });

  it('Adds item to list', () => {
    const testData = {
      id: '1234',
      title: 'Item title',
    } as TestItem;

    // Call public method
    service.addItem(testData);

    expect(service.items).toEqual([testData]);
  });
});

Before each test we create a new service. Granted, the service in this example is simple with no dependencies (we’ll come back to testing services with dependencies shortly).

We test one of the public methods like any unit test. Notice that we don’t use the TestBed – so we don’t need the inject function to give the test access to the service, making tests more readable and less brittle if/when dependencies change or Angular updates the framework.

Where a service has dependencies, these can be mocked via the constructor. We found that libraries like ts-mockito help mock out these dependencies by providing simple no-op mocks. ts-mockito can also mock specific calls on dependencies – replacing Jasmine spies.

import { mock, instance, when, anything, verify } from 'ts-mockito';

describe('Service tests', () => {
  let service: MyService;
  let mockOtherService: MyOtherService;

  beforeEach(() => {
    mockOtherService = mock(MyOtherService);
    when(mockOtherService.doWork(anything())).thenReturn('some result');

    service = new MyService(instance(mockOtherService));
  });

  it('Adds item to list', () => {
    const testData = {
      id: '1234',
      title: 'Item title',
    } as TestItem;

    // Call public method
    service.addItem(testData);

    // Should have called the dependency
    verify(mockOtherService.doWork(anything())).called();
  });
});

This example mocks out dependencies using ts-mockito: it creates a mock, overrides doWork, and passes an instance of the mock when constructing the service under test.

ts-mockito can also replace Jasmine spies – notice the use of verify to check whether a method on a dependency has been called correctly.

Components must be constructed via the TestBed, so there will always be an element of configuring the required dependencies. We found components to be by far the trickiest and most brittle to test. So, as said before, if business logic can be hived off to helper classes or services so that more traditional unit testing techniques can be used, that is the easiest solution. If not, try to break components up to limit the number of dependent services and components. A good strategy is to make the parent component responsible for fetching data from a back-end service, and then have child components with simple inputs and outputs for the data they will show (commonly referred to as smart and dumb components).
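The shape of that smart/dumb split, boiled down to plain TypeScript (Angular decorators omitted and all names invented for illustration):

```typescript
// "Dumb" child: no injected services. In Angular, items would be an
// @Input() and select an @Output() EventEmitter.
class ItemListComponent {
  items: string[] = [];
  select: (item: string) => void = () => {};
}

// "Smart" parent: the only place that knows how to fetch the data.
class ItemPageComponent {
  child = new ItemListComponent();

  constructor(private fetchItems: () => string[]) {}

  ngOnInit(): void {
    this.child.items = this.fetchItems();
  }
}
```

Only the smart component needs a (stubbed) data dependency in tests; the dumb component can be exercised with plain inputs and outputs.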

Libraries that helped us

ts-mockito

We found this library to be very good for mocking out dependencies. The library is relatively young and so is not without limitations. We found that it mocks out methods, getters and setters on objects well. We had mixed results mocking out properties/fields and static methods – but the developers are continuing to add improvements.

ng2-mock-component

This is a great little library for mocking out Angular components – allowing child components to be mocked out with an optional template, inputs and outputs.

For a simple component like this:

<h1>Test</h1>
<child-component [exampleinput]="inputValue" (onEvent)="eventHandler()"></child-component>

We can mock out its child component easily in the TestBed:

import { async, ComponentFixture, TestBed } from '@angular/core/testing';
import { MockComponent } from 'ng2-mock-component';

describe('Component tests', () => {
  let component: MyComponent;
  let fixture: ComponentFixture<MyComponent>;

  beforeEach(async(() => {
    return TestBed.configureTestingModule({
      declarations: [
        MyComponent,
        MockComponent({
          selector: 'child-component',
          inputs: ['exampleinput'],
          outputs: ['onEvent'],
        }),
      ],
    })
    .compileComponents();
  }));

  beforeEach(() => {
    fixture = TestBed.createComponent(MyComponent);
    component = fixture.componentInstance;
    fixture.detectChanges();
  });

  it('should create', () => {
    expect(component).toBeTruthy();
  });
});

This is a much more convenient way of creating mock child components.

Other hints

Testing private methods

This always leads to debate about whether private methods should be tested directly by unit tests, or implicitly through the public methods which call upon them. I sit in the camp of testing wherever there is value, and sometimes that means we want private methods to have their own unit tests.

One thing to realise is that TypeScript’s private methods aren’t actually private – it’s the compiler that enforces the private keyword, NOT JavaScript. So it’s possible to call a private method. We found the following to be the best way of testing privates, as it still gives some type safety on the parameters passed to the function:

describe('private test', () => {
  it('test private method', () => {
    const test = new TestClass();

    const expected = true;
    // Call private method
    const actual = test['privateMethod'](parameter1Value);

    expect(expected).toBe(actual);
  });
});

TypeScript will show compile errors if parameter1Value is the wrong type or is missing. An alternative is (test as any).privateMethod(parameter1Value); but this gives no type safety.

angular async

Angular ships with a number of helper methods for testing – one is called async and helps with asynchronous tests. From what I can gather from the docs and the source, it runs tests in a zone and wraps up a call to Jasmine’s done() function.

The problem is that, because it’s called async, it becomes confusing when used with JavaScript’s new async/await. I’m a big fan of async/await – I’m comfortable with it as an async pattern from C#, and it also makes nested promise chains much more readable. We found that new developers could be caught out and use the wrong async.

Angular async:

it('some test', async(() => {
  // Angular async
}));

TypeScript async/await:

it('some test', async (done) => {
  const test = await someMethod();
  done();
});

My current preference is to avoid Angular’s async method entirely and favour async/await and call Jasmine’s done() callback explicitly. I’ve not yet encountered a situation where async/await and done() don’t work but Angular’s async does.

Deploying a Node.js app to a Microsoft Azure Web app

Introduction

The project I’m currently working on uses Angular2 on the front end and Node.js on the backend. The backend is an Express app that wraps a GraphQL API. One of the things we got working very early on was our automated build and release pipeline. We are using Visual Studio Team Services to orchestrate the build and deployment process. In the initial phases of the project we were using MS Azure as our cloud provider – it is relatively easy to deploy to Azure but we encountered some gotchas which I thought were worth sharing.

Build

Our build definition consists of the following steps:

  1. Get Source from Git
  2. “npm install” to install packages
  3. “npm test” to run unit tests
  4. Publish test results – we used Jasmine as the test framework and used the jasmine-reporters package to output test results to JUnit XML format. VSTS can render a nice test report using this file.
  5. “npm run build” to build the Node JS app using babel.
  6. Archive and copy release to VSTS.
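For reference, the npm scripts behind steps 3 and 5 looked roughly like this in package.json (the exact commands are illustrative – our test script used jasmine-reporters to emit the JUnit XML):

```json
{
  "scripts": {
    "test": "jasmine",
    "build": "babel src --out-dir dist --copy-files"
  }
}
```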

Release

Our release definition consists of the following steps:

  1. Get the latest build artefact
  2. Azure app service deploy the artefact

Gotchas

Things didn’t work first time! The documentation was out of date (some of the MS documentation hadn’t been updated for two years). Initially it seemed every route we took wasn’t quite right.

Node.js apps deployed to an Azure Web App actually run in IIS via iisnode. Communication from iisnode to Node.js is via a named pipe (this isn’t important, but is useful to know). It’s easy enough to get your app on to Azure, but I found the build and release pipeline required a number of tweaks which weren’t apparent in the documentation.

The following tweaks were needed in our build and release pipeline:

  • node_modules needed to be packaged up with the build. The archive created by our build process includes the node_modules installed by the “npm install” task. A few MS articles about Git deploy said that packages referenced in the package.json file would be downloaded automatically on deployment, but this doesn’t seem to work with our particular deployment technique.
  • There is some crucial web.config required to configure iisnode:
    <?xml version="1.0" encoding="utf-8"?>
    <!--
      This configuration file is required if iisnode is used to run node processes behind
      IIS or IIS Express. For more information, visit:
      https://github.com/tjanczuk/iisnode/blob/master/src/samples/configuration/web.config
    -->
    
    <configuration>
      <system.webServer>
        <!-- Visit http://blogs.msdn.com/b/windowsazure/archive/2013/11/14/introduction-to-websockets-on-windows-azure-web-sites.aspx for more information on WebSocket support -->
        <webSocket enabled="false" />
        <handlers>
          <!-- Indicates that the server.js file is a node.js site to be handled by the iisnode module -->
          <add name="iisnode" path="server.js" verb="*" modules="iisnode"/>
        </handlers>
        <rewrite>
          <rules>
            <!-- Redirect all requests to https -->
            <!-- http://stackoverflow.com/questions/21788863/url-rewrite-http-to-https-in-iisnode -->
            <rule name="HTTP to Prod HTTPS redirect" stopProcessing="true">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTPS}" pattern="off" ignoreCase="true" />
              </conditions>
              <action type="Redirect" redirectType="Found" url="https://{HTTP_HOST}/{R:1}" />
            </rule>
    
            <!-- Do not interfere with requests for node-inspector debugging -->
            <rule name="NodeInspector" patternSyntax="ECMAScript" stopProcessing="true">
              <match url="^server.js\/debug[\/]?" />
            </rule>
    
            <!-- First we consider whether the incoming URL matches a physical file in the /public folder -->
            <rule name="StaticContent">
              <action type="Rewrite" url="public{REQUEST_URI}"/>
            </rule>
    
            <!-- All other URLs are mapped to the node.js site entry point -->
            <rule name="DynamicContent">
              <conditions>
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="True"/>
              </conditions>
              <action type="Rewrite" url="server.js"/>
            </rule>
          </rules>
        </rewrite>
    
        <!-- 'bin' directory has no special meaning in node.js and apps can be placed in it -->
        <security>
          <requestFiltering>
            <hiddenSegments>
              <remove segment="bin"/>
            </hiddenSegments>
          </requestFiltering>
        </security>
    
        <!-- Make sure error responses are left untouched -->
        <httpErrors existingResponse="PassThrough" />
    
        <!--
          You can control how Node is hosted within IIS using the following options:
            * watchedFiles: semi-colon separated list of files that will be watched for changes to restart the server
            * node_env: will be propagated to node as NODE_ENV environment variable
            * debuggingEnabled - controls whether the built-in debugger is enabled
          See https://github.com/tjanczuk/iisnode/blob/master/src/samples/configuration/web.config for a full list of options
        -->
        <iisnode watchedFiles="web.config;*.js"/>
      </system.webServer>
    </configuration>
    

The most important line in the config is this:

<add name="iisnode" path="server.js" verb="*" modules="iisnode"/>

This sets iisnode as the handler for server.js. If your main file isn’t called server.js, you’ll need to change this – e.g. to app.js or index.js.
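For example, if your entry point were app.js, the relevant lines of the web.config above would change like this (fragment only):

```xml
<!-- Handler: app.js is now the file handed to iisnode -->
<add name="iisnode" path="app.js" verb="*" modules="iisnode"/>

<!-- The DynamicContent rewrite rule must point at the same file -->
<action type="Rewrite" url="app.js"/>
```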

The project I’m working on uses WebSockets for some real-time communication. If you want Node.js to handle WebSockets then, rather oddly, you must tell IIS to disable WebSockets – hence the webSocket enabled="false" element in the config above.

And the bad news…

iisnode totally cripples the performance of Node.js. We found that Node running on “bare metal” (an AWS t2.micro instance) was up to four times faster than the same Node service deployed as a Web App on Azure. Worse still, the bare metal deployment could outperform four load-balanced S2 Web App instances on Azure 😦

Finally, why did we choose AWS over Azure?

In the end, we actually chose to switch entirely to Amazon Web Services (AWS) – here are a few reasons why.

I’ve used MS Azure for a while now for both production applications and proof of concepts. Generally I’ve enjoyed using it, it has a good portal and lots of great features – Azure Search, Azure SQL to name a few. But in my experience, Azure seems to work well for .NET applications and less so for non .NET solutions.

My main gripes with Azure are around account management and the lack of database choice (unless you are willing to manage the DB yourself). The MS account system is a total mess! I have two or three different MS accounts – some work, some personal – all because MS has a totally inconsistent account system. Some services (like Azure) can be tied to Active Directory and others (MSDN subscriptions) can’t. I find myself in a mess of choosing which account to log in with today, and whether my system administrator has control over my permissions for the service I’ve logged in to.

AWS has really thought through its permissions model: it’s complex but really flexible, with user accounts, roles and resource policies. I’ve only been using AWS for a year or so, but I totally got the permissions model after provisioning a few databases and virtual machines.

For the new project I’m working on, we were toying with a NoSQL solution – such as ArangoDB. My company (myself included) is more familiar with RDBMS solutions – typically using MS SQL Server for most products. Moving to a NoSQL solution would be a little risky, so as part of an investigation stage of the project we looked at RDBMSs with document-DB-style support. I’ve been a fan of Postgres for a while, but didn’t realise how many brilliant features and good performance characteristics it has. Although only anecdotal, we found Postgres on an AWS RDS t2.micro instance to be much faster than a basic Azure SQL instance. For us, on this application, database choice was extremely important, and Azure (at the time of writing) didn’t offer a managed instance of Postgres (or anything other than MS SQL Server).

The final reason was AWS Lambda functions. AWS Lambdas are far superior to Azure Functions. A brief prototype into each proved it was quite easy to convert a fairly complex Node.js app into a Lambda function; I couldn’t get the equivalent app working at all reliably as an Azure Function. This follows my main point – write a .NET app and Azure Functions work well. Try a Python or Node.js app and see if you can even get it working…

Getting a job in software development

I have been developing commercial software since graduating from university in 2003. I started at NBS in 2005, and in the 12 years I’ve worked there I’ve progressed from Graduate Software Developer to NBS Labs manager. As part of this role, I help recruit graduate software developers on to our Graduate Software Developer Scheme. Each year we look to employ 2-3 graduates, typically from one of our local universities – Newcastle, Northumbria, Sunderland or Teesside.

The cost of attending university has increased significantly since the introduction of tuition fees back in 1998; students can now pay around £9,000 a year for their degree. I’m going to be a little controversial and say that what really surprises me, as a recruiter, is that I’ve not seen an increase in the quality of candidates coming through. Instead, I see graduates who have done a little bit of coding – Java, C# – in year one and nothing since.

A core part of our recruitment process is a technical test, which many applicants struggle with. So I thought I’d write a quick blog post to give a bit of an insight into what I look for when recruiting graduates.

Review of CVs

The first step in the recruitment process is to sift through CVs. It’s quite common for graduate CVs to look very similar, after all graduates are at the beginning of their careers and have very little commercial experience.

An important thing to realise when writing your CV is that all the students on your course could potentially be applying for the same job. In addition, a similar number of students from other local universities might also be applying. How can you make sure your CV stands out so you make the shortlist?

  • You are applying for a software development position, make sure you cover your knowledge and experience of key technologies. Also cover group projects and final year projects – with an overview of what the project was, your role and what technologies were used.
  • If you’ve been on a placement as part of your degree, you should have some excellent examples of work done whilst on placement, technologies used and so on. In many ways you already have something to make your CV stand out.
  • Demonstrate your abilities and interest in computing – include hyperlinks to personal websites (written using a full development stack and ideally backed by a data store) to showcase your work, blogs, GitHub repositories.

Practical test

From the CVs, we build a shortlist of candidates to invite in for the first stage of our interview process – a programming test. Our test is fairly simple and looks for a demonstration of basic programming skills, such as:

  • Interpreting requirements
  • Coding using a good programming style
  • Reading user input
  • Breaking up a problem into reusable functions
  • Demonstration of code reuse in a loop
  • Reading a file and analysing data within it
  • And ideally showing some initiative – like writing some Unit tests and handling exceptions

This sounds simple, doesn’t it? But we see many graduates struggle with this test, even though it is probably less difficult than something you will have done in a programming seminar at university.
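To give a flavour, a task exercising most of those points might look something like this (an invented example, not our actual test): read a list of scores from a file and report some simple statistics.

```typescript
import * as fs from 'fs';

// Parse one numeric score per line, ignoring blank lines - a small
// reusable function that is easy to unit test.
function parseScores(text: string): number[] {
  return text
    .split('\n')
    .map((line) => line.trim())
    .filter((line) => line.length > 0)
    .map(Number);
}

// Analyse the data with another reusable function.
function average(scores: number[]): number {
  if (scores.length === 0) {
    throw new Error('No scores to average');
  }
  return scores.reduce((acc, s) => acc + s, 0) / scores.length;
}

// Tie it together: read the file and report on its contents.
function report(path: string): string {
  const scores = parseScores(fs.readFileSync(path, 'utf8'));
  return `${scores.length} scores, average ${average(scores).toFixed(1)}`;
}
```

Splitting the parsing, the analysis and the file handling into separate functions is exactly the kind of structure (plus a unit test or two, and handling the empty-file case) that stands out in a submission.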

My advice is always a reminder that many software development job interviews will require the completion of a test. Java, .NET, Python and Node.js are all free to download, and IDEs like the excellent JetBrains IntelliJ IDEA or Visual Studio offer community editions. There are also loads of coding vlogs on YouTube. Practise, practise, practise the basics before you start applying for jobs.

Interview

The tests are code-reviewed by a mix of senior developers and developers to get feedback. If the candidate has written a good solution, they are invited back for a formal interview as the final stage of the process.

You can find lots of really good advice on the Internet about how to interview well – but my advice is try not to panic and remember that the interview is NOT a test to catch you out. It’s a conversation between you and your potential new employer to discuss your knowledge, skills and passion for a career in software development and for you to decide if the company is the right fit for you.

Turtle Minesweeper

I’ve mentioned a few times on various blog posts that I got my first Mac back in 1999. It was an iMac G3 266MHz running Mac OS 8.6. I loved that computer; it was bulletproof compared to the previous Windows PCs I’d owned. One thing I missed when moving to the Mac, however, was Windows Minesweeper. I played the game quite a lot and couldn’t find a clone that did the Windows version justice.

During my time at college I wrote a number of applications in Visual Basic 6 and was quite comfortable with how it worked. The equivalent development environment on the Mac was REALbasic (which has since been taken over by Xojo), which I started using in my spare time whilst studying at university. In the summer of 2001 I set about creating a Minesweeper clone for the Mac.

I recently bought a new MacBook Pro (2016 model) and was curious to see if the app still worked in macOS Sierra (it was originally written for Classic Mac OS 8.6–9.x and Mac OS X 10.1–10.5). I stopped supporting it back in 2006, so I wasn’t really expecting much.

Analysis and design

Whilst restoring files to the new MacBook Pro, I came across the TurtleMine folder with all the analysis and design documentation I’d written. At university, a few of my modules were about systems analysis and design using UML, and I’d tried to apply what I’d learnt to the Minesweeper app.

I’d written a few use cases:


Example use case for uncovering a square

Created some wireframes:


Wireframes

Sequence diagrams:


Sequence diagram for uncovering a square on the minefield

And state charts:


State chart for a square on the minefield

All in MS Excel! I must have been a glutton for punishment. Nowadays, I like to use tools like Pencil for wireframes, and Visual Paradigm for UML diagrams.

Running the game

I opened up the latest release of the source code I could find and tried to double click the Turtle Mine application icon:

Screen Shot 2017-04-07 at 12.59.58 pm

And to my total surprise, the app ran!

Download

I don’t have my Turtle Soft website any more, but thought macOS people might still like to be able to download Turtle Mine. If you’d like a copy, you can grab it from the Dropbox link below:

https://www.dropbox.com/s/p1k87xe02b7an5o/TurtleMine.zip?dl=0

Shairport Sync

My son Max likes to listen to music at night; it helps him sleep. When Max moved into his own room, I bought an AirPort Express so that I could stream music from my iPad to his room. Just recently, the AirPort Express gave up the ghost – I think it overheated and something blew, as all it would do was show a steady yellow light.

Fortunately, I have a number of Raspberry Pis lying around and suspected there would be an open source solution to replace the AirPort Express at half the cost. During my Googling I happened upon Shairport Sync. The original Shairport software is unmaintained, but there are a number of forks – Shairport Sync among them – that are still actively developed.

I tried my Raspberry Pi 3 first as it has a headphone jack, but when I got Shairport Sync working I noticed that the sound quality from the Pi 3 was really poor. A week or so later the Raspberry Pi Zero W was released, so I decided to get one for Max, along with a hi-fi Digital to Analogue Converter (DAC) to address the sound quality issue. Some soldering was required to attach the 40-pin header to the Pi’s GPIO pins and the DAC.

Then came the installation of the Shairport Sync software.

Step 1

In a terminal on the Pi, run the following commands:

sudo apt-get install build-essential git xmltoman
sudo apt-get install autoconf automake libtool libdaemon-dev libasound2-dev libpopt-dev libconfig-dev
sudo apt-get install avahi-daemon libavahi-client-dev
sudo apt-get install libssl-dev

Step 2

Get the shairport sync software from GitHub:

git clone https://github.com/mikebrady/shairport-sync.git
cd shairport-sync

Step 3

Create a shairport sync group and user:

getent group shairport-sync &>/dev/null || sudo groupadd -r shairport-sync >/dev/null
getent passwd shairport-sync &> /dev/null || sudo useradd -r -M -g shairport-sync -s /usr/bin/nologin -G audio shairport-sync >/dev/null

Step 4

Configure and compile the software (this will take a few minutes on the Pi Zero):

autoreconf -i -f
./configure --sysconfdir=/etc --with-alsa --with-avahi --with-ssl=openssl --with-metadata --with-systemd
make
sudo make install
sudo systemctl enable shairport-sync
chmod 755 ./scripts/shairport-sync
sudo cp ./scripts/shairport-sync /etc/init.d/
sudo update-rc.d shairport-sync defaults 90 10

Step 5

Edit the shairport-sync.conf file and set defaults, such as the name of the share:

sudo vi /etc/shairport-sync.conf
general =
{
  name = "Raspberry Pi Zero W";
};
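If you’re using a DAC, you may also need to point Shairport Sync’s ALSA backend at the right output device in the same file. A sketch, assuming the DAC shows up as card 0 – the device name below is an assumption, so list your cards with `aplay -l` first:

```
// Assumed device name — check yours with `aplay -l`
alsa =
{
  output_device = "hw:0";
};
```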

Finally, restart the Pi:

sudo shutdown -r now

And hopefully, the Pi will appear in iTunes:

[Screenshot: the Pi appears as an AirPlay speaker in iTunes]

Autodesk Forge Viewer and Angular2

I’ve been using the Autodesk Forge Viewer quite a bit lately to integrate 3D building models within various prototype applications. Until now I had only used the Forge Viewer with plain JavaScript (or a bit of jQuery). I recently tried to integrate the viewer within an Angular 2 application and thought I’d share my solution, as I was unable to find any examples when I did a quick Google search.

Angular2 (now just called Angular) is a rewrite of the AngularJS framework. A key difference is that Angular2 moves away from the MVC pattern in favour of components and the shadow DOM. Although not a requirement, Angular2 recommends TypeScript, which adds stronger typing to JavaScript with a view to improving the maintainability of large applications. Angular is just JavaScript, so it’s not difficult to integrate external JavaScript libraries with it – you just have to follow particular conventions to get these libraries to work. The solution for integrating the Forge Viewer is very similar to some of the React samples on GitHub.

Step 1

After creating a new Angular app via angular-cli, add the required JS includes to index.html:

<script src="https://developer.api.autodesk.com/viewingservice/v1/viewers/three.min.js?v=v2.13"></script>
<script src="https://developer.api.autodesk.com/viewingservice/v1/viewers/viewer3D.min.js?v=v2.13"></script>

Note that I’m going to use the headless Forge Viewer in this example – so I don’t need to include the Forge Viewer’s CSS.

Step 2

Create a new component using angular-cli:

ng generate component forge-viewer

Add the following to forge-viewer.component.html:

<div #viewerContainer class="viewer">
</div>

This provides a Div for the Forge Viewer to render into. We need to add a #viewerContainer reference within the Div so that we can obtain an ElementRef and give the Forge Viewer the DOM element to bind to. Add the following styles to forge-viewer.component.scss:

.viewer {
  position: relative;
  width: 100%;
  height: 450px;
}

Step 3

We’ve done the basic setup; we now need to add the main functionality to forge-viewer.component.ts.

import { Component, ViewChild, OnInit, AfterViewInit, OnDestroy, ElementRef } from '@angular/core';

// We need to tell TypeScript that Autodesk exists as a variable/object somewhere globally
declare const Autodesk: any;

@Component({
  selector: 'forge-viewer',
  templateUrl: './forge-viewer.component.html',
  styleUrls: ['./forge-viewer.component.scss'],
})
export class ForgeViewerComponent implements OnInit, AfterViewInit, OnDestroy {
  @ViewChild('viewerContainer') viewerContainer: any;
  private viewer: any;

  constructor(private elementRef: ElementRef) { }

...

There are a couple of lines above that are crucially important. We’ve imported the Autodesk Viewer from Autodesk’s servers – this creates a global Autodesk object. We don’t have any TypeScript typings for this object (.d.ts files) – at the time of writing, there were no definitions on the DefinitelyTyped repository. TypeScript is just a superset of JavaScript, so it’s not a problem that we don’t have a typings file. All we need to do is declare an Autodesk variable:

declare const Autodesk: any;

This tells the TypeScript compiler that somewhere globally there is an object called Autodesk.

Also important is a reference to the Div we want to render the viewer in:

@ViewChild('viewerContainer') viewerContainer: any;

Step 4

We’ll now create an instance of the Forge Viewer – we’ll need to do this once the component has been initialised AND our hosting Div has been rendered in the DOM. We’ll use the ngAfterViewInit lifecycle hook:

ngAfterViewInit() {
  this.launchViewer();
}

private getAccessToken(onSuccess: any) {
  const { access_token, expires_in } = // Your code to get a token
  onSuccess(access_token, expires_in);
}

private launchViewer() {
  if (this.viewer) {
    // Viewer has already been initialised
    return;
  }

  const options = {
    env: 'AutodeskProduction',
    getAccessToken: (onSuccess) => { this.getAccessToken(onSuccess) },
  };

  // For a headless viewer
  this.viewer = new Autodesk.Viewing.Viewer3D(this.viewerContainer.nativeElement, {});
  // For a viewer with UI
  // this.viewer = new Autodesk.Viewing.Private.GuiViewer3D(this.viewerContainer.nativeElement, {});

  Autodesk.Viewing.Initializer(options, () => {
    // Initialise the viewer and load a document
    this.viewer.initialize();
    this.loadDocument();
  });
}

private loadDocument() {
  const urn = `urn:${//document urn}`;

  Autodesk.Viewing.Document.load(urn, (doc) => {
    // Get views that can be displayed in the viewer
    const geometryItems = Autodesk.Viewing.Document.getSubItemsWithProperties(doc.getRootItem(), {type: 'geometry'}, true);

    if (geometryItems.length === 0) {
      return;
    }

    // Example of adding event listeners
    this.viewer.addEventListener(Autodesk.Viewing.GEOMETRY_LOADED_EVENT, this.geometryLoaded);
    this.viewer.addEventListener(Autodesk.Viewing.SELECTION_CHANGED_EVENT, (event) => this.selectionChanged(event));

    // Load view in to the viewer
    this.viewer.load(doc.getViewablePath(geometryItems[0]));
  }, errorMsg => console.error(errorMsg));
}

private geometryLoaded(event: any) {
  const viewer = event.target;

  viewer.removeEventListener(Autodesk.Viewing.GEOMETRY_LOADED_EVENT, this.geometryLoaded);

  // Example - set light preset and fit model to view
  viewer.setLightPreset(8);
  viewer.fitToView();
}

private selectionChanged(event: any) {
  const model = event.model;
  const dbIds = event.dbIdArray;

  // Get properties of object
  this.viewer.getProperties(dbIds[0], (props) => {
    // Do something with properties.
  });
}

ngOnDestroy() {
  // Clean up the viewer when the component is destroyed
  if (this.viewer && this.viewer.running) {
    this.viewer.removeEventListener(Autodesk.Viewing.SELECTION_CHANGED_EVENT, this.selectionChanged);
    this.viewer.tearDown();
    this.viewer.finish();
    this.viewer = null;
  }
}
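Note the two different styles used to register event handlers above: geometryLoaded is passed by reference (so `this` inside it is not the component, which is why it reads the viewer from event.target), while selectionChanged is wrapped in an arrow function that preserves `this`. A minimal sketch of the difference, using a stand-in emitter rather than the real viewer API:

```typescript
// FakeEmitter is a stand-in for the viewer's event API — NOT the Forge API,
// just enough to demonstrate how `this` behaves in each handler style.
class FakeEmitter {
  private handlers: Array<(event: { target: FakeEmitter; label: string }) => void> = [];

  addEventListener(handler: (event: { target: FakeEmitter; label: string }) => void): void {
    this.handlers.push(handler);
  }

  fire(label: string): void {
    this.handlers.forEach((handler) => handler({ target: this, label }));
  }
}

class ViewerHost {
  received: string[] = [];

  register(emitter: FakeEmitter): void {
    // Style 1: passed by reference — inside the handler `this` is NOT this
    // component, so state must come from the event (cf. event.target above).
    emitter.addEventListener(this.unboundHandler);

    // Style 2: arrow wrapper — `this` stays bound to the component instance.
    emitter.addEventListener((event) => this.boundHandler(event));
  }

  unboundHandler(event: { target: FakeEmitter; label: string }): void {
    // `this` is undefined here (strict mode) because the method was passed
    // by reference, so we can only rely on the event payload.
    void event.target;
  }

  boundHandler(event: { target: FakeEmitter; label: string }): void {
    this.received.push(event.label); // safe: `this` is the ViewerHost
  }
}
```

The arrow-wrapper style is generally the safer default; the by-reference style only works here because the handler never touches component state.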

A lot of the code is very similar to how you’d instantiate the viewer via plain JavaScript. The following line creates a new instance of the viewer in the Div of our component template:

this.viewer = new Autodesk.Viewing.Viewer3D(this.viewerContainer.nativeElement, {});

The rest of the code just loads a document and demonstrates how events can be bound.
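The body of getAccessToken is deliberately elided in the listing – how you obtain a token depends on your backend. A hedged sketch of the shape the viewer’s getAccessToken option expects, with fetchToken standing in for your own HTTP call (the /api/forge/token endpoint mentioned in the comment is hypothetical):

```typescript
// The shape the viewer's getAccessToken option expects: a function handed a
// success callback taking (token, expiresInSeconds).
type TokenCallback = (accessToken: string, expiresInSeconds: number) => void;

// Stand-in for your own backend call — e.g. an HTTP request to a hypothetical
// /api/forge/token endpoint that performs the OAuth exchange server-side,
// keeping your Forge client secret off the browser.
async function fetchToken(): Promise<{ access_token: string; expires_in: number }> {
  return { access_token: 'example-token', expires_in: 3600 };
}

// Matches the getAccessToken signature used in the component's options object.
function getAccessToken(onSuccess: TokenCallback): void {
  fetchToken().then(({ access_token, expires_in }) => onSuccess(access_token, expires_in));
}
```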

Gotchas

Whilst working on this prototype, I encountered one gotcha. I could successfully create an instance of the viewer and load a model into it. My application had simple routing – when I navigated away from the route where the viewer was hosted, then back again, the viewer wouldn’t display. It seemed that the viewer thought it had already been instantiated, so it skipped straight to loading the model… which didn’t work because there was no instance of the viewer.

My solution to the problem isn’t as elegant as I wanted, but does work:

this.viewer = new Autodesk.Viewing.Viewer3D(this.viewerContainer.nativeElement, {}); // Headless viewer

// Check if the viewer has already been initialised - this isn't the nicest, but we've set the env in our
// options above so we at least know that it was us who did this!
if (!Autodesk.Viewing.Private.env) {
  Autodesk.Viewing.Initializer(options, () => {
    this.viewer.initialize();
    this.loadDocument();
  });
} else {
  // We need to give an initialised viewing application a tick to allow the DOM element to be established before we re-draw
  setTimeout(() => {
    this.viewer.initialize();
    this.loadDocument();
  });
}

The second time our component loads, Autodesk.Viewing.Private.env will already be set (we set it!). So we simply call initialize on the viewer and load the model. This didn’t work first time – but adding a setTimeout gives Angular a tick to sort out its DOM binding/update cycle before we attempt to load the viewer.
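The reason the setTimeout trick works can be shown without Angular at all: a zero-delay timeout queues its callback behind all currently-running synchronous work, so by the time it fires, the framework has finished the update pass that re-creates the container Div. A minimal sketch of that ordering:

```typescript
// A zero-delay setTimeout defers its callback until after the current
// synchronous work has completed — the same guarantee the workaround
// above relies on for Angular's update cycle.
const order: string[] = [];

order.push('route change handled');
setTimeout(() => {
  order.push('viewer initialised'); // always runs last
}, 0);
order.push('change detection finished');
```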


The full forge-viewer.component.ts file

import { Component, ViewChild, OnInit, AfterViewInit, OnDestroy, ElementRef, Input } from '@angular/core';

// We need to tell TypeScript that Autodesk exists as a variable/object somewhere globally
declare const Autodesk: any;

@Component({
  selector: 'forge-viewer',
  templateUrl: './forge-viewer.component.html',
  styleUrls: ['./forge-viewer.component.scss'],
})
export class ForgeViewerComponent implements OnInit, AfterViewInit, OnDestroy {
  private selectedSection: any = null;
  @ViewChild('viewerContainer') viewerContainer: any;
  private viewer: any;

  constructor(private elementRef: ElementRef) { }

  ngOnInit() {
  }

  ngAfterViewInit() { 
    this.launchViewer();
  }

  ngOnDestroy() {
    if (this.viewer && this.viewer.running) {
      this.viewer.removeEventListener(Autodesk.Viewing.SELECTION_CHANGED_EVENT, this.selectionChanged);
      this.viewer.tearDown();
      this.viewer.finish();
      this.viewer = null;
    }
  }

  private launchViewer() {
    if (this.viewer) {
      return;
    }

    const options = {
      env: 'AutodeskProduction',
      getAccessToken: (onSuccess) => { this.getAccessToken(onSuccess) },
    };

    this.viewer = new Autodesk.Viewing.Viewer3D(this.viewerContainer.nativeElement, {}); // Headless viewer
 
    // Check if the viewer has already been initialised - this isn't the nicest, but we've set the env in our
    // options above so we at least know that it was us who did this!
    if (!Autodesk.Viewing.Private.env) {
      Autodesk.Viewing.Initializer(options, () => {
        this.viewer.initialize();
        this.loadDocument();
      });
    } else {
      // We need to give an initialised viewing application a tick to allow the DOM element to be established before we re-draw
      setTimeout(() => {
        this.viewer.initialize();
        this.loadDocument();
      });
    }
  }

  private loadDocument() {
    const urn = `urn:${// model urn}`;

    Autodesk.Viewing.Document.load(urn, (doc) => {
      const geometryItems = Autodesk.Viewing.Document.getSubItemsWithProperties(doc.getRootItem(), {type: 'geometry'}, true);

      if (geometryItems.length === 0) {
        return;
      }

      this.viewer.addEventListener(Autodesk.Viewing.GEOMETRY_LOADED_EVENT, this.geometryLoaded);
      this.viewer.addEventListener(Autodesk.Viewing.SELECTION_CHANGED_EVENT, (event) => this.selectionChanged(event));

      this.viewer.load(doc.getViewablePath(geometryItems[0]));
    }, errorMsg => console.error(errorMsg));
  }

  private geometryLoaded(event: any) {
    const viewer = event.target;

    viewer.removeEventListener(Autodesk.Viewing.GEOMETRY_LOADED_EVENT, this.geometryLoaded);
    viewer.setLightPreset(8);
    viewer.fitToView();
    // viewer.setQualityLevel(false, true); // Turn off ambient shadows to avoid a black-screen problem in the viewer
  }

  private selectionChanged(event: any) {
    const model = event.model;
    const dbIds = event.dbIdArray;

    // Get properties of object
    this.viewer.getProperties(dbIds[0], (props) => {
       // Do something with properties
    });
  }

  private getAccessToken(onSuccess: any) {
    const { access_token, expires_in } = // get token
    onSuccess(access_token, expires_in);
  }
}