
Hue: Security Lights

Aug 18, 2017

My previous post about Philips Hue bulbs got me thinking that the API exposed by the bridge might be used to warn if the house lights are left on too late at night, or even if they get turned on at unexpected times - potentially for security.

I put together a simple program that periodically checks the status of known Hue bulbs late at night. If any bulbs are found to be powered on during those hours, an email notification is sent. It runs as a systemd service on a Raspberry Pi.
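The core of the check is simple; here’s a minimal sketch of the idea (using the modern global fetch for brevity, with a placeholder bridge address and token and a hypothetical sendEmail helper - the real project handles configuration and email delivery properly):

const BRIDGE_ADDRESS = 'http://192.168.1.2'; // placeholder bridge address
const TOKEN = 'abc123'; // placeholder Hue API username

async function checkLights() {
  const hour = new Date().getHours();
  if (hour >= 6 && hour < 23) return; // only warn during night-time hours
  const res = await fetch(`${BRIDGE_ADDRESS}/api/${TOKEN}/lights`);
  const lights = await res.json();
  const lit = Object.values(lights).filter(l => l.state.on).map(l => l.name);
  if (lit.length) {
    sendEmail(`Hue bulbs still on: ${lit.join(', ')}`); // hypothetical helper
  }
}

setInterval(checkLights, 15 * 60 * 1000); // poll every 15 minutes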

Currently the project is quite basic, but it could be further extended - perhaps to implement ignore lists or to automatically turn off specific sets of bulbs if they are found to be powered on.

For those interested, the project source and setup instructions are available on GitHub.

Alexa, ask Sherlock...

Jul 19, 2017

I have recently posted about CENode and how it might be used in IoT systems.

Since CENode is partially designed to communicate directly with humans (particularly those out and about or “in the field”), it makes sense for inputs and queries to be provided via voice in addition to, or instead of, a text interface. Whilst this has been explored in the browser (including in the previous Philips Hue control demo), it made sense to also try to leverage the Alexa Voice Service to interact with a CENode instance.

The Alexa Voice Service and Alexa Skills Kit are great to work with, and it was relatively straightforward to create a skill to communicate with CENode’s RESTful API.

The short video below demonstrates this using an Amazon Echo to interact with a standard, unmodified CENode instance running on CENode Explorer, partly pre-loaded with the “space” scenario used in our main CENode demo. The rest of the post discusses the implementation and challenges.

Typical Alexa skills are split into “intents”, which describe the individual ways people might interact with the service. For example, the questions “what is the weather like today?” and “is it going to rain today?” may be two intents of a single weather skill.

The skill logic is handled by AWS Lambda, which is used to associate each intent with an action. When someone gives a voice command, the Alexa Voice Service (AVS) determines which intent of which skill is being invoked, and then passes control to the appropriate handler in the Lambda function. The function returns a response to the AVS, which is read back out to the user.

The strength of Alexa’s ability to recognise speech is largely dependent on the information given to build each intent. For example, the intent “what is the weather like in {cityName}?”, where cityName is a variable with several different possibilities generated during the build, will accurately recognise speech initiating this intent because the sentence structure is so well defined. A single intent may have several ways of calling it - “what’s the weather like in…”, “tell me what the weather is in…”, “what’s the weather forecast for…”, etc. - which can be bundled into the model to further improve the accuracy even in noisy environments or when spoken by people with strong accents.
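For illustration, the intent schema for a skill of this era looked something like the following (the intent name here is made up):

{
  "intents": [
    {
      "intent": "GetWeatherIntent",
      "slots": [
        { "name": "cityName", "type": "AMAZON.US_CITY" }
      ]
    }
  ]
}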

Since CENode is designed to receive an entire input string, however, Alexa has far less structure to match against: voice-to-text accuracy is much lower, and determining the intent and its arguments is harder. Since we need CENode to handle the entire input, our demo has only a single intent, with a single slot capturing the whole utterance and two ways of invoking it:
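Alexa, ask Sherlock {sentence}
Alexa, tell Sherlock {sentence}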

Since ‘Sherlock’ is also provided as the invocation word for the service, both phrasings implicitly indicate both the service and the single intent to work with. I used ‘Sherlock’ as the name for the skill as it’s a name we’ve used before for CENode-related apps and it is an easy word for Alexa to understand!

{sentence} is the complete body to be processed by CENode - e.g. “Jupiter is a planet” or “what is Jupiter?” - giving a typical full Echo invocation: “Alexa, tell Sherlock Jupiter is a planet”. The “Alexa” segment tells the Echo to begin listening, the “tell Sherlock” component determines the skill and intent to use, and the remainder of the sentence is the body provided to CENode.

Since we only have a single intent, using either ‘ask’ or ‘tell’ in the invocation makes no difference: it is CENode that works out what is meant from the sentence body - whether a question or an input of information. The two phrasings exist only for the benefit of the human user, and so invocations such as “tell Sherlock what is Jupiter?” still work.

At this stage, the AWS Lambda function handling the intent makes a standard HTTP POST request to a CENode instance, and the response is passed directly back to the Alexa service for reading out to the user. As such, CENode itself handles errors and misunderstood inputs, making the combination of the Alexa service and the Lambda function, in this scenario, very ‘thin’.
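To give a flavour, a minimal sketch of such a handler might look like this (the host and path are placeholders rather than the real endpoint, and error handling is omitted; the actual code is linked below):

const http = require('http');

exports.handler = (event, context, callback) => {
  // Pull the raw sentence out of the intent's single slot
  const sentence = event.request.intent.slots.sentence.value;
  const req = http.request({
    host: 'cenode.example.com', // placeholder CENode instance
    path: '/sentences',         // placeholder API path
    method: 'POST',
  }, (res) => {
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    // Hand CENode's response straight back to Alexa to be spoken
    res.on('end', () => callback(null, {
      version: '1.0',
      response: { outputSpeech: { type: 'PlainText', text: body } },
    }));
  });
  req.write(sentence);
  req.end();
};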

The skill has not yet been published to the Alexa skills store for general use, but the code for this project, including the Alexa Skills Kit configuration and the AWS Lambda code (written using their Node environment) is available on GitHub.

CENode in IoT

Jun 26, 2017

In my previous post I discussed CENode and briefly mentioned its potential for use in interacting with the Internet of Things. I thought I’d add a practical example of how it might be used for this and for ‘tasking’ other systems.

I have a few Philips Hue bulbs at home, and the Hue Bridge that enables interaction with the bulbs exposes a nice RESTful API. My aim was to get CENode to use this API to control my lights.

A working example of the concepts in this post is available on GitHub (as a small webapp) and here’s a short demo video (which includes a speech-recognition component):

The first step was to generate a username for the Bridge, which CENode can use to authenticate requests through the API.
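Generating the username involves pressing the Bridge’s physical link button and then, within about 30 seconds, POSTing to its /api endpoint. A sketch (the bridge address and devicetype are placeholders):

fetch('http://192.168.1.2/api', {
  method: 'POST',
  body: JSON.stringify({ devicetype: 'cenode_hue#demo' }),
}).then(res => res.json())
  .then(json => console.log(json)); // e.g. [{ "success": { "username": "abc123" } }]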

I use CE cards to supply instructions to a CENode agent, since this is the generally recognised method for interaction between CE-capable devices. When instantiating a node, any number of CE ‘models’ may be passed in order to form a base knowledge set to work from. Here is such a model for giving CENode a view of the Hue ‘world’:

const lightModel = [
  'conceptualise a ~ hue bridge ~ h that has the value V as ~ address ~ and has the value W as ~ token ~',
  'conceptualise a ~ hue bulb ~ h that has the value C as ~ code ~ and has the value V as ~ strength ~',
  'conceptualise an ~ iot card ~ I that is a card and ~ targets ~ the hue bulb D and has the value P as ~ power ~ and has the value B as ~ brightness ~ and has the value S as ~ saturation ~ and has the value H as ~ hue ~ and has the value C as ~ colour ~',
  'there is a hue bridge named bridge1 that has \'192.168.1.2\' as address and has \'abc123\' as token',
];

The model tells the node about Hue Bridges, bulbs, and a new type of card called an iot card, which supports properties for controlling bulbs. Finally, we instantiate a single bridge with an appropriate IP address and the username/token generated earlier.

Next the CENode instance needs to be created and its agent prepared:

const node = new CENode(CEModels.core, lightModel);
const hueBridge = node.concepts.hue_bridge.instances[0];
updateBulbs();
node.attachAgent();
node.agent.setName('House');

The updateBulbs() function (see it here) makes a request to the Bridge to download data about known Hue bulbs, which are added to the node’s knowledge base. For example:

there is a hue bulb named 'Lounge' that has '7' as code

The code property is the unique identifier the bridge uses to determine the bulb on the network.
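In essence, it does something like the following (a condensed sketch: the property access and the addSentence call are simplifications, and the linked source is authoritative):

function updateBulbs() {
  fetch(`http://${hueBridge.address}/api/${hueBridge.token}/lights`)
    .then(res => res.json())
    .then((lights) => {
      Object.keys(lights).forEach((code) => {
        // Teach the node about each bulb the Bridge reports
        node.addSentence(`there is a hue bulb named '${lights[code].name}' that has '${code}' as code`);
      });
    });
}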

Finally, all that was needed was to include a handler function for iot cards and to add this to the CENode agent:

node.agent.cardHandler.handlers['iot card'] = (card) => {
  if (card.targets) {
    const data = {};
    if (card.power) data.on = card.power === 'on';
    if (card.brightness) data.bri = parseInt(card.brightness, 10);
    if (card.saturation) data.sat = parseInt(card.saturation, 10);
    if (card.hue) data.hue = parseInt(card.hue, 10);
    request('PUT', hueBridge, '/lights/' + card.targets.code + '/state', data);
  }
};

The function makes an appropriate request to the Hue Bridge based on the properties of the iot card. Now, we can submit sentences like this in order to interact with the system (e.g. to turn the ‘Lounge’ bulb on):

there is an iot card named card1 that is to the agent House and has 'instruction' as content and targets the hue bulb 'Lounge' and has 'on' as power

And that’s it, really. This post contains only the more interesting components of the experiment, but hopefully provides an indication of how the library may be used for simple inter-device communication. The full demo includes extra code to handle the UI for a webapp and extra utility functions.

CENode

Jun 22, 2017

Whilst working on the ITA Project - a collaborative research programme between the UK MoD and the US Army Research Laboratory - over the last few years, one of my primary focuses has been research around controlled natural languages, working with Cardiff University and IBM UK’s Emerging Technology team to develop CENode.

As part of the project - before I joined - researchers at IBM developed the CEStore, which aims to provide tools for working with ITA Controlled English. Controlled English (CE) is a subset of the English language which is structured in a way that attempts to remove ambiguity from statements, enabling machines to understand ‘English’ inputs.

Such a language was developed partly to support multi-agent systems consisting of a mixture of humans and machines, and to allow agents to communicate with one another using the same protocol in coalition scenarios. In these systems, there may be agents on the ground who submit information to the CEStore in CE, which is able to parse and understand the inputs. The CEStore may then pass the information on to other interested parties or may give an agent (such as a drone, camera, sensor, or other equipment) a task (follow, intersect, watch, etc.) based on the combination of its existing knowledge and the new input.

An old example we use combines the CEStore with a system capable of assigning missions to sensors or equipment (see this paper). This example focuses on ‘John Smith’, who is known to the CE system as an HVT (high-value target) owning a black car with licence plate ‘ABC 123’. A human agent on the ground may later observe a speeding car and submit information to the system through an interface on their mobile device or via a microphone:

there is a car named car1 which has black as colour and has 'ABC 123' as licence plate and is travelling north on North Road

The system receiving the message can put together that this speeding car most likely contains John Smith (since it’s known that he owns a car with this licence plate), and so can task a nearby drone to follow it based on the coordinates of the road and the direction of travel.

It is unlikely that a human agent would be able to type or speak this precise form of English, particularly in emergency or rapid-response scenarios, and so the CEStore has a level of understanding of ‘natural’ language and is able to translate many sentences from natural-language English into CE - enabling agents, largely, to speak in a more natural fashion.

The usefulness of the CEStore project led us to consider the possibility of a lighter version of the CEStore that could run on mobile devices in a decentralised network of CE-capable devices, without relying on a centralised node responsible for parsing and translating all CE inputs. Such a system would also have the benefit of supporting a network of distributed ‘nodes’, each able to maintain its own distinct knowledge base and to understand and ‘speak’ CE - and thus the concept for CENode was born.

A key motivation for this was to support those agents who may not have a consistent network connection to a central server, but who still need knowledge support and the ability to report information - thus building the local knowledge base and improving inferences. Then, once the agent can re-establish a connection to other nodes, new information can propagate through the network.

The CENode project (with source hosted on GitHub) began with a focus on supporting our SHERLOCK experiments, which had traditionally been powered using the CEStore. Using CENode, users of SHERLOCK experienced benefits such as auto-correct and typing suggestions, the ability to continue working offline (with information syncing when a network is re-established), and the display of a personalised ‘dashboard’ indicating the local agent’s view of the world represented by the game.

The SHERLOCK experiment was even covered by the BBC.

Since then, the CENode project has grown, and many of the features enjoyed by the CEStore (which is written in Java and deployed using Apache Tomcat) have been re-implemented for CENode. The library supports rules that fire given specific inputs, simple natural language understanding and parsing, querying through CE inputs, the CE cards blackboard architecture, and policies - enabling CENode instances to communicate with each other in different topologies.

CENode is written in JavaScript, since this allows it to be downloaded to and cached on any JavaScript-supporting browser (for example, on a mobile phone or tablet), and to run as a Node app.

In addition to using the CE-based (‘cards’) interfaces, CENode can be interacted with through its JavaScript bindings and can expose a RESTful API when run as a Node app, enabling several types of CENode deployments to work together as part of a single system.

Check out a demo of the library here, which wraps a simple user interface around the library’s JavaScript bindings. In the demo, the local CENode agent is preloaded with some knowledge about planets and stars. Try asking it questions or teaching it something new. Additionally, we have deployed a service called CENode Explorer which can launch cloud-based CENode instances and allows you to browse the knowledge base.

We hope to continue to maintain CENode as part of the project, and to discover more interesting use-cases. There are already clear pathways for its use in voice assistants, bots, and as a protocol for communication in IoT devices (some work for which is already underway). Those interested in developing with the library can get started using the CENode Wiki.

Two Year Update

Mar 16, 2017

I haven’t written a post since summer 2015. It’s now March 2017 and I thought I’d write an update very briefly covering the last couple of years.

I finished researching and lecturing full-time in the summer of 2015. It felt like the end of an era; I’d spent around a third of my life at the School of Computer Science & Informatics at Cardiff University, and had experienced time there as an undergraduate through to postgrad and on to full-time staff. However, I felt it was time to move on and try something new, although I was really pleased to be able to continue working with them on a more casual, part-time basis - something that continues today.

In the summer after leaving full-time work at Cardiff, I went interrailing around Europe with my friend Dan. It was an amazing experience that gave me a taste of many new European cities, and we met lots of interesting people along the way. We started by flying out to Berlin, and from there our route took us through Prague, Krakow, Budapest, Bratislava, Vienna, Munich, Koblenz, Luxembourg City, Brussels, and Antwerp, before finishing in Amsterdam (which I’d been to before, but always love visiting).

Some photos from the Interrail trip taken from my Instagram.

After returning, I moved to London to start a new full-time job with Chaser. I’d met the founders, David and Mark, at a previous Silicon Milkroundabout, and Chaser was a great company to get involved with - I was part of a fab team creating fin-tech software with the goal of helping boost cashflow in small and medium-sized businesses. Working right in the City was fun and totally different to what had seemed like a much quieter life in Cardiff. Whilst there, I learned loads more about web-based programming and was able to put some of the data-analysis skills from my PhD to use.

At the end of 2015 I moved back to South Wales to begin a new job at Simply Do Ideas as a senior engineer. Again, this was a totally different experience, involving a shift from fin-tech to ed-tech and a move from the relentless busy-ness of London to the quieter (but no less fun) life of Caerphilly - where our offices were based. Since I headed the technical side of the business, I was able to put my own stamp on the company and the product, and to help decide its future and direction.

Myself and Josh representing Simply Do Ideas at Bett 2017 in London.

In February 2016 I was honoured to be promoted to the Simply Do Ideas board and made the company’s Chief Technology Officer. Over the last year the rest of the team and I have been proud to be part of a company becoming highly respected in a really interesting and exciting domain, and we’re all very excited about what’s to come in the near (and far) future!

I still continue to work with Cardiff University on some research projects and to help out with some of the final-year students there; I hope to write a little more about this work soon.

I feel so lucky to have been able to experience so much in such a short time frame - from academic research and teaching, to being a key part of two growth startups, heading a tech company’s technology arm, sitting on a board alongside highly respected and successful entrepreneurs and business owners, and meeting such a wide range of great people. I feel like I’ve grown and learned so much - both professionally and personally - from all of my experiences and from everyone I’ve met along the way.