Archives

Interfacing Leap Motion with Arduino thanks to Node.js

What about controlling physical things by waving your hands?

Thanks to the Leap Motion, an Arduino and a bit of Node.js magic, it's pretty simple!

Let's check that!

Leap Motion

There is a nice feature in the SDK of the Leap Motion: the websocket server.

When activated, the Leap Motion software streams the tracking data over it. You then just need to connect your software to it and you're ready to analyse the data.

Go to the Leap Motion controller settings, open the WebSocket tab and activate the server.

The websocket server is now accessible at the following address: ws://127.0.0.1:6437

Connect your Node.js application to the websocket server

Node.js is amazing, and connecting your app to the Leap Motion websocket server is just a matter of 2 lines.

In your project, install the ws library with npm install ws --save

And then you can check everything is working with this simple code:

var webSocket = require('ws'),
    ws = new webSocket('ws://127.0.0.1:6437');

ws.on('message', function(data, flags) {
    console.log(data);
});

When you run this script, it should log the data coming from the websocket. If so, you're good to go!

It is then just a matter of parsing the data and taking what you need from it!

Here you can find an example of a frame: frame.json
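
For instance, here is a small sketch (untested, in the spirit of the snippet above) that parses each frame and logs the number of visible hands:

var webSocket = require('ws'),
    ws = new webSocket('ws://127.0.0.1:6437');

ws.on('message', function(data, flags) {
    var frame = JSON.parse(data);
    // tracking frames expose a `hands` array; other messages
    // (like the initial version handshake) don't, hence the check
    if (frame.hands) {
        console.log('hands detected: ' + frame.hands.length);
    }
});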

Arduino

So you have now all the data you need from the Leap Motion, how can you pass it to the Arduino? Node.js is again the answer!

Thanks to the marvelous johnny-five library, you can talk to the Arduino directly from Node.js! To achieve that, you'll just need to upload the StandardFirmata sketch to your Arduino.

Open the Arduino IDE, go to File > Examples > Firmata > StandardFirmata and upload the sketch to your Arduino.

You can now use johnny-five to communicate with the Arduino.

Install it: npm install johnny-five --save. Below is a snippet to connect your board and make the LED tied to pin 13 blink.

var five = require('johnny-five'),
    board = new five.Board(),
    led;

board.on('ready', function() {
    led = new five.Led(13);
    led.strobe();
});

Easy, isn't it?

Plug all this together

It's now really easy to plug all this together. Let's try to make a simple example that turns on the LED when the Leap Motion sees two hands, and turns it off otherwise. You should take a look at the sample frame once more: frame.json.

var webSocket = require('ws'),
    ws = new webSocket('ws://127.0.0.1:6437'),
    five = require('johnny-five'),
    board = new five.Board(),
    led, frame;

board.on('ready', function() {
    led = new five.Led(13);    
    ws.on('message', function(data, flags) {
        frame = JSON.parse(data); 
        if (frame.hands && frame.hands.length > 1) {
            led.on();
        }
        else {
            led.off();
        }
    });
});

That's it! (I hope so, it's not tested :))
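
As a possible variation (also untested, and just a sketch of my own, not code from the original demo), you could drive the LED brightness with the height of the first detected hand. It assumes the LED is wired to a PWM pin (pin 11 here) and that palm heights roughly range from 100 to 400mm:

var webSocket = require('ws'),
    ws = new webSocket('ws://127.0.0.1:6437'),
    five = require('johnny-five'),
    board = new five.Board(),
    led;

board.on('ready', function() {
    led = new five.Led(11);
    ws.on('message', function(data) {
        var frame = JSON.parse(data);
        if (frame.hands && frame.hands.length > 0) {
            // palmPosition is [x, y, z] in millimeters, y goes up
            var height = frame.hands[0].palmPosition[1];
            // roughly map 100-400mm to 0-255 and clamp
            var brightness = Math.round((height - 100) / 300 * 255);
            led.brightness(Math.max(0, Math.min(255, brightness)));
        }
        else {
            led.off();
        }
    });
});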

Going further

Here is the demo I made a few weeks ago:

You can find the code on GitHub: leapLamp. Feel free to use it and/or ask for some help.

Happy coding.

Plug your Minitel on your Raspberry Pi

Hi,

So what is a Minitel? According to Wikipedia:

The Minitel was a Videotex online service accessible through telephone lines, and is considered one of the world's most successful pre-World Wide Web online services.¹

This service was accessible through dedicated terminals, which had a screen, a keyboard and a modem.

A screen and a keyboard are just what we need for our Pi, so let's plug them together!

Minitel and serial communication

The Minitel has a serial port. Its goal is to communicate with peripherals such as a printer.

The socket is a classic 180° DIN with 5 pins:

Here is the description of the pins:

  • 1: Rx: data reception
  • 2: Ground
  • 3: Tx: data transmission
  • 4: Ready to work signal
  • 5: 8.5v - 1A power supply

So pins 1, 2 and 3 are what we need to communicate with the Pi over serial.

Please note that not all Minitels have this kind of socket. To be compatible, the Minitel must have this socket AND two special keys on the keyboard: Fnct and Ctrl. These models are usually called Minitel 1B.

TTL levels and the Pi

The UART on the Pi works with 0v and 3.3v, but a lot of older hardware uses 0v and 5v. This is the case for the Minitel, so we need to adapt the voltage levels:

  • Lower the Tx level of the Minitel from 5v to 3.3v
  • Raise the Tx level of the Pi from 3.3v to 5v

To achieve that, I used the following schema, based on the recommendations of @lhuet35 (thanks!). You can check his Devoxx presentation (in French) here: 3615 Cloud

Be careful: the unused pin between the 5v and the GND of the Pi is not depicted on this schema!

Here is everything mounted on a breadboard:

Configure a tty on the UART

You then need to configure a tty that will communicate through the UART.

The following configuration is based on Raspbian, but it should be similar on other distros.

  • You may need to install getty:
    • sudo apt-get install getty
  • Back up the /boot/cmdline.txt file, just in case :)
    • sudo cp /boot/cmdline.txt /boot/cmdline.bak.txt
  • Edit the file:
    • sudo vim /boot/cmdline.txt
    • and remove everything related to the serial port ttyAMA0, i.e.: console=ttyAMA0,115200 kgdboc=ttyAMA0,115200
  • Add a getty conf in /etc/inittab:
    • 7:2345:respawn:/sbin/getty ttyAMA0 4800v23
    • also check there is no other getty conf for this tty in the file
  • Then you need to create a gettydefs file (or edit it if it already exists)
    • sudo vim /etc/gettydefs
    • and add the following, all on one line: 4800v23# B4800 CS7 PARENB -PARODD GLOBAL # B4800 ISTRIP CS7 PARENB -PARODD GLOBAL BRKINT IGNPAR ICRNL IXON IXANY OPOST ONLCR CREAD HUPCL ISIG ICANON ECHO ECHOE ECHOK #@S login: #4800v23
    • this will configure the tty on the UART

You can now plug the Pi into the Minitel and reboot the Pi.

Configure the Minitel

You need to switch the Minitel mode to be able to communicate through the serial port.

  • Power on the Minitel
  • Press Fnct+T then A: the Minitel switches to serial mode
  • Press Fnct+P then 4: the Minitel now communicates over serial at 4800bps (the max speed)
  • Press Fnct+T then E to deactivate the local echo
  • Press Enter and you should now see the login prompt (maybe with some white squares); enter your login and you're done!

Be aware that you'll need to do this Minitel configuration every time you power it up.

Here is a pic of my Minitel:

Happy coding!

Quality analysis on Node.js projects with Mocha, Istanbul and Sonar

Hello,

What about having a nice dashboard for the code quality of your project? Sonar is a well-known open source tool that handles that.

It handles a wide range of programming languages, from COBOL to JavaScript. So let's try running Sonar on a Node.js project!

Prerequisites

Testing with Mocha

Mocha has a lot of built-in functionality and is extensible. We'll see that this point is really important: because of its extensibility, you can output your test results in various formats. If you don't know Mocha, you should give it a try!
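
If you've never seen it, here is what a minimal Mocha test looks like (a hypothetical test/math.test.js using Node's built-in assert module, just to illustrate):

var assert = require('assert');

describe('addition', function() {
    it('should add two numbers', function() {
        assert.equal(1 + 1, 2);
    });
});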

So far, Sonar only handles the xunit format as input, so you need a Mocha reporter that can produce it. There is a reporter bundled with Mocha that achieves that, but you need to pipe the console output to the xunit file yourself.

You could also use the following reporter plugin that does the work for you: https://github.com/peerigon/xunit-file

So install it: npm install xunit-file --save-dev. Then it's just a matter of setting an environment variable during the build to tell it where to output the file.

So the xunit task in my Makefile looks like this:

xunit:
	@# check if reports folder exists, if not create it
	@test -d reports || mkdir reports
	XUNIT_FILE="reports/TESTS-xunit.xml" $(MOCHA) -R xunit-file $(TESTS)

Where $(TESTS) is the list of tests. Since I use a convention to name my tests (they always end with .test.js) I can retrieve them easily.

TESTS=$(shell find test/ -name "*.test.js")

And $(MOCHA) points to the mocha binary. I tend to install the tools I use inside the project rather than globally: it lets me handle a specific version of each tool per project and gives the project some portability. But I don't know if it's a good practice or not. Feel free to drop a comment!

MOCHA=node_modules/.bin/mocha

You may have noticed the weird name I gave to the xunit output file: TESTS-xunit.xml. It's really important that it starts with TESTS! If not, you won't be able to gather test metrics in Sonar.

If you dig into the code of Sonar, here is why: https://github.com/SonarSource/sonar-java/blob/master/sonar-surefire-plugin/src/main/java/org/sonar/plugins/surefire/api/AbstractSurefireParser.java#L67

You now have a test report ready to be consumed by Sonar (and Jenkins too if you want!).

Code coverage with Istanbul

Istanbul is the new cool kid when it comes to code coverage. And it is pretty simple to use!

istanbul cover myNodeCommand will transparently add coverage info to the executed node command!

Since mocha is a node command, everything is ok!

You can just do the following:

ISTANBUL=node_modules/.bin/istanbul
_MOCHA=node_modules/.bin/_mocha
coverage:
	@# check if reports folder exists, if not create it
	@test -d reports || mkdir reports
	$(ISTANBUL) cover --report lcovonly --dir ./reports $(_MOCHA) -- -R spec $(TESTS)

Just note the double dash that separates the istanbul args from the mocha ones, and the use of the _mocha internal executable (see istanbul/issues/44).

If you need to produce some other report formats (html, cobertura, etc.), you can check the report options.

Sonar

Sonar analysis can be performed in various ways (Ant, Maven, sonar-runner). Even if I do like Maven (yes I do!), there's no way I'll put a pom.xml in a Node.js project.

We'll use the sonar-runner.

Download it: http://docs.codehaus.org/display/SONAR/Installing+and+Configuring+Sonar+Runner. But for the sake of portability, I prefer to embed it in my project rather than installing it globally (again, I don't know if it's a good practice, but it's mine).

Then, you need to configure a file at the root of your project that will drive the Sonar analysis: sonar-project.properties. This file is really simple: it just tells Sonar where to find the reports we produced before and provides some general info about the project (see http://docs.codehaus.org/display/SONAR/JavaScript+Plugin).

sonar.projectKey=sonar-js
sonar.projectName=sonar-js
sonar.projectVersion=0.0.1
 
sonar.sources=src
sonar.tests=test
sonar.language=js
sonar.profile=node

sonar.dynamicAnalysis=reuseReports

sonar.javascript.jstest.reportsPath=reports
sonar.javascript.lcov.reportPath=reports/coverage.lcov

The file speaks for itself: project info, project directory structure, and telling Sonar to reuse the already generated reports and where to find them.

Please note that I use a custom profile, which is a set of coding rules my code will be checked against. If you don't have this profile on your Sonar instance, you should delete the sonar.profile=node line. Your code will then be checked against the default JavaScript profile, which is not really adapted to Node.js. I'll come back to that.

So once you have the xunit and lcov files, you can run Sonar.

Here is my task in the Makefile:

sonar:
	@# add the sonar-runner executable to the PATH and run it
	PATH="$$PWD/tools/sonar-runner-2.2/bin:$$PATH" sonar-runner

That's it!

Please note that the current configuration in sonar-project.properties assumes the Sonar server is running on http://localhost:9000.

You can change that by specifying the right values in sonar-project.properties (see http://docs.codehaus.org/display/SONAR/Analysis+Parameters).

You can now browse your Sonar dashboard and see this nice report:

At a glance, you can see the health of your project: test results, code coverage, coding rules compliance, complexity, etc.

You can then drill down into any metric to see where the coding rule violations are, which lines are covered or not, etc. Just read about Sonar to see how powerful it is. You can even see how the metrics evolve between two analyses!

Here is an example of code coverage report:

You can find a dummy project on GitHub that covers the ideas of this blog post: https://github.com/xseignard/sonar-js

Notes

I use a custom quality profile in Sonar for Node.js (you can find it here: https://github.com/xseignard/sonar-js/blob/master/tools/node_js.xml).

You can install it following the docs: http://docs.codehaus.org/display/SONAR/Quality+profiles#QualityProfiles-BackupingRestoringProfile

My profile is far from ideal, and can be discussed.

As I told you at the beginning, if you can generate the xunit and lcov formats, you're good! So you can easily apply this technique to Angular projects, thanks to the mighty Karma runner.

Have fun.

Processing and GPIOs on Raspberry Pi

Hello,

Processing is a nice programming language for creative coding, and you can physically interact with the Raspberry Pi thanks to its GPIOs. So why not combine them?

Let's do it.

Prerequisites

  • A Raspberry Pi (sic!) running a Raspbian image (it may work on other configurations, but I haven't tested them).
  • All command lines below are executed from the home directory (i.e. /home/pi/ for the pi user).
  • You may need to install some tools: sudo apt-get install unzip ca-certificates.

Install Oracle JDK8 on the Raspberry Pi

The idea of installing JDK8 is not to enjoy those long-awaited lambdas, but to provide the execution platform for Processing. Luckily, Oracle started to provide builds of the JDK for the ARM platform.

Download the JDK.

wget --no-check-certificate http://www.java.net/download/JavaFXarm/jdk-8-ea-b36e-linux-arm-hflt-29_nov_2012.tar.gz

Untar the binaries at the right place.

sudo mkdir -p /opt/java
tar xvzf jdk-8-ea-b36e-linux-arm-hflt-29_nov_2012.tar.gz
sudo mv jdk1.8.0 /opt/java

Then, you must tell Raspbian to use these binaries to provide java.

sudo update-alternatives --install "/usr/bin/java" "java" "/opt/java/jdk1.8.0/bin/java" 1

If you already had another Java version installed, you may need to select the one we just installed; if not, you can skip this step.

sudo update-alternatives --config java

And choose the JDK8 by entering the corresponding number.

Now you need to define some environment variables for java to run properly.

echo export JAVA_HOME="/opt/java/jdk1.8.0" >> .bashrc
echo export PATH=$PATH:$JAVA_HOME/bin >> .bashrc
source .bashrc

It will add the environment variables at the end of your .bashrc. If you use zsh (and you should! with oh-my-zsh), just replace .bashrc with .zshrc in the three lines of code above.

Java is now installed, and you can check it with java -version. It should display something like this:

java version "1.8.0-ea"
Java(TM) SE Runtime Environment (build 1.8.0-ea-b36e)
Java HotSpot(TM) Client VM (build 25.0-b04, mixed mode)

Also check the environment variables; each command should return something.

echo $JAVA_HOME | grep /opt/java/jdk1.8.0
echo $PATH | grep /opt/java/jdk1.8.0/bin

If those checks fail, something went wrong; feel free to drop a comment.

Install Processing

The long-awaited 2.0 final version is still not here (at the time of writing), but you can download the latest beta.

wget http://processing.googlecode.com/files/processing-2.0b8-linux32.tgz

Notice we'll use an x86 version; no worries, we'll deal with it.

Untar it:

tar xvzf processing-2.0b8-linux32.tgz

Java is bundled with Processing, so we need to tell it to use the Java version we installed rather than the bundled one. To do that, we'll remove the java folder inside Processing and replace it with a symbolic link to the version we installed.

rm -rf processing-2.0b8/java
ln -s $JAVA_HOME processing-2.0b8/java

Processing is now installed! You can now log into the UI of the Raspberry Pi (if not already) and run Processing from a terminal with the following:

cd ~/processing-2.0b8;./processing

You'll have to wait a little bit to see the UI coming up.

You may notice some error messages in the terminal, but so far they have had no impact for me, so I ignore them.

Install a library to interact with GPIOs

So far, I haven't found any Processing-ready library, so I'll use the Pi4J Java library.

Processing has a particular way of handling libraries: you need a specific folder structure. And Pi4J is not packaged according to the Processing convention, so you'll need to re-arrange things (see http://wiki.processing.org/w/How_to_Install_a_Contributed_Library).

First, go back to the /home/pi folder in the terminal.

Then download the Pi4J lib and unzip it:

wget https://pi4j.googlecode.com/files/pi4j-0.0.5.zip
unzip pi4j-0.0.5.zip

Since Processing is not happy when a library name contains something other than letters and numbers, you need to rename the unzipped folder.

mv pi4j-0.0.5 pi4j

Then you need to re-arrange files to stick with the Processing convention.

mv pi4j/lib pi4j/library
mv pi4j/library/pi4j-core.jar pi4j/library/pi4j.jar

Now you can put the lib in the Processing library folder (defaults to ~/sketchbook/libraries).

mv pi4j sketchbook/libraries

Done! You can now import Pi4J in your Processing sketch!

Getting started with Pi4J

Here is a simple sketch which will add an ellipse every time a button is pressed.

import com.pi4j.io.gpio.GpioController;
import com.pi4j.io.gpio.GpioFactory;
import com.pi4j.io.gpio.GpioPinDigitalInput;
import com.pi4j.io.gpio.PinPullResistance;
import com.pi4j.io.gpio.RaspiPin;

int WIDTH = 1280;
int HEIGHT = 1024;
GpioController gpio;
GpioPinDigitalInput button;

void setup() {
	size(WIDTH, HEIGHT);
	gpio = GpioFactory.getInstance();
	button = gpio.provisionDigitalInputPin(RaspiPin.GPIO_02, PinPullResistance.PULL_DOWN);
}

void draw() {
	if (button.isHigh()) {
		println("pressed");
		fill(int(random(255)), int(random(255)), int(random(255)));
		float x = random(WIDTH);
		float y = random(HEIGHT);
		ellipse(x, y, 80, 80);
	}
}

I invite you to read the Pi4J documentation to dive into it. You should use events rather than testing the state of a button as shown above (see http://pi4j.com/example/listener.html).

Here is the wiring schema that goes along with the sketch above (borrowed from http://pi4j.com/).

If you try to run it, you'll face some permission issues since Pi4J requires root privileges to access the GPIOs. For now, I export the application and run it with sudo to bypass this. There should be a cleaner way to handle it; I'll update this post with a proper solution if I find one.

You are ready to pump out some creative code! Enjoy!

Continuous deployment with GitHub, Travis and Heroku for Node.js

Cloud services help developers focus on the code. In one or two commands you get a running server, a CI engine, etc.

I won't introduce GitHub, Travis or Heroku here; if you're not aware of them, just check their websites!

Prerequisites

You need the following to be installed on your machine:

Setting all this up

First of all, you have to get your auth token from Heroku:

token=$(heroku auth:token)

This will store the token in a shell variable for later use.

Then, install the Travis CLI tool:

sudo gem install travis

You now have the Travis CLI tool installed. It will help you encrypt your Heroku token.

travis encrypt HEROKU_API_KEY=$token --add

Running the above command will add your encrypted token to your .travis.yml.

Open it and you should see something like this:

env: 
  global: 
  - secure: YOUR_ENCRYPTED_TOKEN

Travis will decrypt it during the build, and be able to authenticate itself with Heroku.

Plug the pipe between Travis and Heroku

Now you need to tell Travis to deploy your app to Heroku after a successful build.

Luckily, Travis can handle it through its build lifecycle.

Just add the following to your .travis.yml (don't forget to replace HEROKU_APP_NAME!):

after_success:
  - wget -qO- https://toolbelt.heroku.com/install-ubuntu.sh | sh
  - git remote add heroku git@heroku.com:HEROKU_APP_NAME.git
  - echo "Host heroku.com" >> ~/.ssh/config
  - echo "   StrictHostKeyChecking no" >> ~/.ssh/config
  - echo "   CheckHostIP no" >> ~/.ssh/config
  - echo "   UserKnownHostsFile=/dev/null" >> ~/.ssh/config
  - yes | heroku keys:add
  - yes | git push heroku master

This will install the Heroku toolbelt, and then configure your Travis worker to communicate with Heroku and finally push your code to it!

And you're done! Now every time you push your code to GitHub, Travis will build/test it and deploy it to Heroku!

Here is the whole .travis.yml:

--- 
language: node_js
env: 
  global: 
  - secure: YOUR_ENCRYPTED_TOKEN
after_success:
  - wget -qO- https://toolbelt.heroku.com/install-ubuntu.sh | sh
  - git remote add heroku git@heroku.com:HEROKU_APP_NAME.git
  - echo "Host heroku.com" >> ~/.ssh/config
  - echo "   StrictHostKeyChecking no" >> ~/.ssh/config
  - echo "   CheckHostIP no" >> ~/.ssh/config
  - echo "   UserKnownHostsFile=/dev/null" >> ~/.ssh/config
  - yes | heroku keys:add
  - yes | git push heroku master
node_js: 
- 0.8

Use Bower with Heroku

Hello!

Bower is pretty awesome! Heroku too!

Use them together!

Without creating your own Heroku buildpack, you can achieve that quite easily.

Just add a dependency on Bower in your package.json and then rely on npm scripts to execute a postinstall command (https://npmjs.org/doc/scripts.html).

So you'll end up with something like this in your package.json:

"dependencies": {
    "bower": "0.6.x"
},
"scripts": {
    "postinstall": "./node_modules/bower/bin/bower install"
}

And that's it! Heroku will run an npm install that will execute the bower install.

Pros: one command to rule them all.

Cons: you unnecessarily embed bower as a dependency.
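
For the record, here is a minimal sketch of how the Bower-installed components could then be served by your Node.js app on Heroku. It assumes an Express app and Bower's default install directory of that era (components/), so adapt the path to your setup:

var express = require('express'),
    path = require('path'),
    app = express();

// serve the front-end dependencies installed by Bower during the postinstall step
// (assumption: Bower installs into the default components/ folder)
app.use(express.static(path.join(__dirname, 'components')));

// Heroku provides the port through the PORT environment variable
app.listen(process.env.PORT || 3000);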