Interfacing Leap Motion with Arduino thanks to Node.js

What about controlling physical things by waving your hands?

Thanks to the Leap Motion, Arduino and a bit of Node.js magic it's pretty simple!

Let's check that!

Leap Motion

There is a nice feature in the SDK of the Leap Motion: the websocket server.

When activated, the Leap Motion software streams the tracking data over it. You then just need to connect your software to it and you're ready to analyse these data.

Go to the Leap Motion controller settings, open the WebSocket tab and activate it.

The websocket server is now accessible at the following address: ws://127.0.0.1:6437 (the default host and port).

Connect your Node.js application to the websocket server

Node.js is amazing, and connecting your app to the Leap Motion websocket server is just a matter of 2 lines.

In your project, install the ws library with npm install ws --save

And then you can check everything is working with this simple code:

var webSocket = require('ws'),
    ws = new webSocket('ws://127.0.0.1:6437'); // default Leap Motion websocket address

ws.on('message', function(data, flags) {
    console.log(data);
});

When running this script, it should log the data coming from the websocket. If so, you are ok!

It is then just a matter of parsing the data and taking what you need from it!

Here you can find an example of a frame: frame.json
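To give an idea, here is a minimal sketch of such parsing, assuming the frame structure from the frame.json sample (the countHands helper is hypothetical, not part of any library):

```javascript
// Hypothetical helper: extract the number of hands from a raw
// websocket message (a JSON-encoded Leap Motion frame).
// A tracking frame carries a "hands" array; other messages
// (like the initial version handshake) don't.
function countHands(rawMessage) {
    var frame = JSON.parse(rawMessage);
    if (!frame.hands) {
        return 0; // not a tracking frame
    }
    return frame.hands.length;
}

// Example with a minimal fake frame:
var sample = JSON.stringify({ id: 42, timestamp: 1000, hands: [{}, {}] });
console.log(countHands(sample)); // 2
```

In a real message handler you would call something like this on each 'message' event before deciding what to do.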


So you now have all the data you need from the Leap Motion; how can you pass it to the Arduino? Node.js is again the answer!

Thanks to the marvelous johnny-five library, you can talk to the Arduino directly from Node.js! To achieve that, you'll just need to upload the Standard Firmata on your Arduino.

Open the Arduino IDE, go to File > Examples > Firmata > StandardFirmata and upload the sketch to your Arduino.

You can now use johnny-five to communicate with the Arduino.

Install it: npm install johnny-five --save. Below is a snippet to connect your board and make the LED tied to pin 13 blink.

var five = require('johnny-five'),
    board = new five.Board(),
    led;

board.on('ready', function() {
    led = new five.Led(13);
    led.blink(500); // toggle the LED every 500ms
});

Easy, isn't it?

Plug all this together

It's now really easy to plug all this together. Let's try to make a simple example that turns on the led when the Leap Motion sees 2 hands, and turns it off otherwise. You should take a look at the sample frame once more: frame.json.

var webSocket = require('ws'),
    ws = new webSocket('ws://127.0.0.1:6437'), // default Leap Motion websocket address
    five = require('johnny-five'),
    board = new five.Board(),
    led, frame;

board.on('ready', function() {
    led = new five.Led(13);
    ws.on('message', function(data, flags) {
        frame = JSON.parse(data);
        if (frame.hands && frame.hands.length > 1) {
            led.on();
        } else {
            led.off();
        }
    });
});

That's it! (I hope so, it's not tested :))

Going further

Here is the demo I made a few weeks ago:

You can find the code on github: leapLamp, feel free to use it and/or ask for some help.

Happy coding.

Quality analysis on Node.js projects with Mocha, Istanbul and Sonar


What about having a nice dashboard for the code quality of your project? Sonar is a well known open source tool to handle that.

It handles a wide range of programming languages, from COBOL to Javascript. So let's try running Sonar on a Node.js project!


Testing with Mocha

Mocha has a lot of built-in functionality and is extensible, and that extensibility will prove really important here: it lets you output your test results in various formats. If you don't know Mocha, you should give it a try!

So far, Sonar only handles the xunit format as input, so you need a Mocha reporter that can produce it. Mocha has a bundled xunit reporter, but you need to pipe the console output to the xunit file yourself.

You can instead use the xunit-file reporter plugin, which does the work for you.

So install it: npm install xunit-file --save-dev, and then it's just a matter of setting an environment variable during the build to tell it where to output the file.

So the xunit task in my Makefile looks like this:

xunit:
	@# check if reports folder exists, if not create it
	@test -d reports || mkdir reports
	XUNIT_FILE="reports/TESTS-xunit.xml" $(MOCHA) -R xunit-file $(TESTS)

Where $(TESTS) is the list of tests. Since I use a naming convention for my tests (they always end with .test.js), I can retrieve them easily.

TESTS=$(shell find test/ -name "*.test.js")

And $(MOCHA) points to the mocha binary. I tend to install the tools I use inside the project rather than globally; it lets me pin a specific version of them for each project and gives the project some portability. But I don't know if it's a good practice or not. Feel free to drop a comment!


You may have noticed the weird name I gave to the xunit output file: TESTS-xunit.xml. It's really important that it starts with TESTS! If not, you won't be able to gather test metrics in Sonar.

If you dig into the code of Sonar, you can see why.

You now have a test report ready to be consumed by Sonar (and Jenkins too if you want!).

Code coverage with Istanbul

Istanbul is the new cool kid when it comes to code coverage. And it is pretty simple to use!

istanbul cover myNodeCommand will transparently add coverage info to the executed node command!

Since mocha is a node command, everything is ok!

You can just do the following:

coverage:
	@# check if reports folder exists, if not create it
	@test -d reports || mkdir reports
	$(ISTANBUL) cover --report lcovonly --dir ./reports $(_MOCHA) -- -R spec $(TESTS)

Just note the double dash that separates istanbul's args from mocha's, and the use of the _mocha internal executable (see istanbul/issues/44).

If you need to produce some other report formats (html, cobertura, etc.), you can check the report options.


Sonar analysis can be performed in various ways (ant, maven, sonar-runner). Even if I do like maven (yes I do!), there's no way I'll put a pom.xml in a Node.js project.

We'll use the sonar-runner.

Download it. For the sake of portability, I prefer to embed it in my project rather than installing it globally (again, I don't know if it's a good practice, but it's mine).

Then, you need to create a sonar-project.properties file at the root of your project that will drive the sonar analysis. This file is really simple: it just tells Sonar where to find the reports we produced before and provides some general info about the project.




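As a sketch, a sonar-project.properties for this setup might look like the following (property names as used by the Sonar JavaScript plugin of that era; the project key, name, and paths are placeholders to adapt to your project):

```properties
# project info (example values)
sonar.projectKey=node:myProject
sonar.projectName=myProject
sonar.projectVersion=0.0.1

# project directory structure
sonar.sources=lib
sonar.tests=test
sonar.language=js

# reuse the reports generated by mocha and istanbul
sonar.dynamicAnalysis=reuseReports
sonar.javascript.jstestdriver.reportsPath=reports
sonar.javascript.lcov.reportPath=reports/lcov.info
```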
The file speaks for itself: project info, project directory structure, and telling Sonar to reuse the already generated reports and where they are.

Please note that I use a custom profile, which is a set of coding rules my code will be tested against. If you don't have this profile on your Sonar instance, you should delete the sonar.profile=node line. Your code will then be tested against the default js profile, which is not really adapted to node.js. I'll come back to that.

So once you have the xunit and lcov files, you can run Sonar.

Here is my task in the Makefile:

sonar:
	@# add the sonar-runner executable to the PATH and run it
	PATH="$$PWD/tools/sonar-runner-2.2/bin:$$PATH" sonar-runner

That's it!

Please note that the configuration assumes the Sonar server is running on http://localhost:9000.

You can change that by specifying the right values (e.g. sonar.host.url) in the properties file.

You can now browse your Sonar dashboard and see this nice report:

At a glance, you can see the health of your project: test results, code coverage, coding rules compliance, complexity, etc.

You can then drill down into any metric to see where the coding rule violations are, which lines are covered or not, etc. Just read about Sonar to see how powerful it is. You can even see how the metrics evolve between two analyses!

Here is an example of code coverage report:

You can find a dummy project on github that covers the ideas of this blog post.


I use a custom quality profile in Sonar for Node.js.

You can install it by following the docs.

My profile is far from ideal, and can be discussed.

As I told you at the beginning, if you can generate the xunit and lcov formats, you're good! So you can easily apply this technique to Angular projects thanks to the mighty Karma runner.

Have fun.

Continuous deployment with Github, Travis and Heroku for Node.js

Cloud services help the developer focus on the code. In one or two commands you get a running server, a CI engine, etc.

I won't present Github, Travis or Heroku; if you're not aware of them, just check their websites!


You need the following to be installed on your machine: the Heroku toolbelt (which provides the heroku command) and Ruby with RubyGems (to install the Travis CLI).

Setting all this up

First of all, you have to get your auth token from Heroku:

token=$(heroku auth:token)

This will store the token, and keep it for later.

Then, install the Travis CLI tool:

sudo gem install travis

You now have the Travis CLI tool installed. This will help you to encrypt your Heroku token.

travis encrypt HEROKU_API_KEY=$token --add

Running the above command will add your encrypted token to your .travis.yml.

Open it and you should see something like this:
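The exact value will differ, but travis encrypt --add puts the token in a secure entry under the global env section, so you should find something shaped like this (the base64 blob below is a placeholder, not a real token):

```yaml
env:
  global:
    - secure: "hW8qp1ZxkAAbb...="   # your encrypted HEROKU_API_KEY
```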


Travis will decrypt it during the build, and be able to authenticate itself with Heroku.

Plug the pipe between Travis and Heroku

Now you need to tell Travis to deploy your app to Heroku after a successful build.

Luckily, Travis can handle it through its build lifecycle.

Just add the following to your .travis.yml (don't forget to replace HEROKU_APP_NAME!):

after_success:
  - wget -qO- | sh
  - git remote add heroku
  - echo "Host" >> ~/.ssh/config
  - echo "   StrictHostKeyChecking no" >> ~/.ssh/config
  - echo "   CheckHostIP no" >> ~/.ssh/config
  - echo "   UserKnownHostsFile=/dev/null" >> ~/.ssh/config
  - yes | heroku keys:add
  - yes | git push heroku master

This will install the Heroku toolbelt, and then configure your Travis worker to communicate with Heroku and finally push your code to it!

And you're done! Now every time you push your code to Github, Travis will build/test it and deploy it to Heroku!

Here is the whole .travis.yml:

language: node_js
node_js:
  - 0.8
after_success:
  - wget -qO- | sh
  - git remote add heroku
  - echo "Host" >> ~/.ssh/config
  - echo "   StrictHostKeyChecking no" >> ~/.ssh/config
  - echo "   CheckHostIP no" >> ~/.ssh/config
  - echo "   UserKnownHostsFile=/dev/null" >> ~/.ssh/config
  - yes | heroku keys:add
  - yes | git push heroku master

Use bower with heroku


Bower is pretty awesome! Heroku too!

Use them together!

Without creating your own Heroku buildpack, you can achieve that quite easily.

Just add a dependency on Bower in your package.json and then rely on npm scripts to execute a postinstall command.

So you'll end up with something like this in your package.json:

"dependencies": {
    "bower": "0.6.x"
"scripts": {
    "postinstall": "./node_modules/bower/bin/bower install"

And that's it! Heroku will run npm install, which will execute the bower install.

Pros: one command to rule them all.

Cons: you unnecessarily embed bower as a dependency.