Using Raspberry Pi to flash ESP8266

Necessity is the mother of invention. Even though in this case there was not a lot of invention, there was necessity. I wanted to play with the ESP8266 module and I wanted it badly. Unfortunately, I could not find any FTDI adapter in my drawers, so I had to find a different way of talking to my ESP8266. I quickly concluded that I could use either an Arduino or a Raspberry Pi. I decided to start with the Arduino but for whatever reason could not make it work. A little desperate, I decided to try the Raspberry Pi. This, thanks to the information I found on the Internet, got me much further. I found a few articles scattered over the web and after combining various pieces of information I was able not only to talk to my ESP8266 but also to reprogram it. I decided to write this post so that all the information I found is in one place. Let’s get started.

The idea is to connect an ESP8266 module to the Raspberry Pi’s GPIO serial pins and then use a serial communication terminal like minicom or screen to talk to the module. Before we can do this, we need to reconfigure our Raspberry Pi so that the OS is not using the serial interface. The first step is to disable serial logins. The easiest way to achieve this is to use raspi-config. Start raspi-config with sudo raspi-config, go to Interfacing Options and then to Serial. You will be asked two questions – answer them as follows:

  • Would you like a login shell to be accessible over serial?
    Select: No
  • Would you like the serial port hardware to be enabled?
    Select: Yes

The second thing to do is to switch off sending bootup info to serial by removing references to ttyAMA0 in the /boot/cmdline.txt file – e.g.

dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 root=PARTUUID=9815a293-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait

becomes:

dwc_otg.lpm_enable=0 console=tty1 root=PARTUUID=9815a293-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait

Finally, you need to restart your Raspberry Pi for the changes to take effect:

sudo reboot now

Now that we have prepared our Raspberry Pi, we need to wire up the ESP8266 module. Power down the Raspberry Pi and connect your ESP8266 module as per the diagram below.

[Diagram: ESP8266 wired to the Raspberry Pi – normal mode]


ESP8266 Raspberry Pi
GND       GND
3.3V      3.3V
RXD       UART_TXD
TXD       UART_RXD
CH_PD     3.3V (via a pull-up resistor)

Once the ESP8266 module has been wired to the Raspberry Pi, we can boot the Pi and connect to the ESP8266 using screen (I am using screen because it is installed by default, but you can use minicom if you prefer):

screen /dev/ttyAMA0 115200

To see if we connected successfully, let’s send some commands to the module (note: when using screen you need to press Enter followed by Ctrl+J to send a command).

Check if AT system works (AT):

AT
OK

Display the firmware version (AT+GMR):

AT+GMR
AT version:1.2.0.0(Jul  1 2016 20:04:45)
SDK version:1.5.4.1(39cb9a32)
Ai-Thinker Technology Co. Ltd.
Dec  2 2016 14:21:16
OK

Connect to WiFi (a series of commands)

AT+CWMODE_CUR=1
OK

AT+CWJAP_CUR="ssid","pwd"
WIFI CONNECTED
WIFI GOT IP
OK

AT+CWJAP_CUR?
+CWJAP_CUR:"ssid","18:a6:f7:23:9e:50",6,-52
OK

AT+CIFSR
+CIFSR:STAIP,"192.168.0.110"
+CIFSR:STAMAC,"5c:cf:7f:36:cd:31"

The full list of AT commands can be found on Espressif’s web site in the documents section – look for the ESP8266 AT Instruction Set. Note that some commands from the document may not work if you are using an older version of the firmware.

We are now able to communicate with our ESP8266, so let’s try updating the firmware. First, we need to power down the Pi again and connect the GPIO0 pin of the ESP8266 to the Pi’s ground pin. This will make the module start in flash mode.

[Diagram: ESP8266 wired to the Raspberry Pi – flash mode (GPIO0 connected to GND)]

The next step is to install the esptool which we will use to transfer firmware files to the module:

sudo apt-get update
sudo apt-get install python
sudo apt-get install python-pip
sudo pip install esptool

Finally, we need firmware to flash to the device. We can get new firmware by cloning Espressif’s ESP8266_NONOS_SDK repo:

git clone https://github.com/espressif/ESP8266_NONOS_SDK

After the repo has been cloned we need to go to the bin folder and run the following command:


esptool.py --port /dev/ttyAMA0 --baud 115200  write_flash --flash_freq 40m --flash_mode qio 0x0000 boot_v1.7.bin 0x1000 at/512+512/user1.1024.new.2.bin 0x7E000 blank.bin

The output of the command should look like this:

esptool.py v2.0.1
Connecting...
Detecting chip type... ESP8266
Chip is ESP8266
Uploading stub...
Running stub...
Stub running...
Configuring flash size...
Auto-detected Flash size: 1MB
Flash params set to 0x0020
Compressed 4080 bytes to 2936...
Wrote 4080 bytes (2936 compressed) at 0x00000000 in 0.3 seconds (effective 121.5 kbit/s)...
Hash of data verified.
Compressed 427060 bytes to 305755...
Wrote 427060 bytes (305755 compressed) at 0x00001000 in 27.9 seconds (effective 122.3 kbit/s)...
Hash of data verified.
Compressed 4096 bytes to 26...
Wrote 4096 bytes (26 compressed) at 0x0007e000 in 0.0 seconds (effective 4538.5 kbit/s)...
Hash of data verified.

Once the process is complete we can power down the Raspberry Pi and disconnect the ESP8266 GPIO0 pin from GND so that the module runs in normal mode again. Now we can start the Pi and check the ESP8266 firmware version with the AT+GMR command:


AT+GMR
AT version:1.4.0.0(May  5 2017 16:10:59)
SDK version:2.1.0(116b762)
compile time:May  5 2017 16:37:48
OK

This is a much newer version than what we originally had so we were able to flash our ESP8266 successfully. Yay!

Acknowledgements:

A number of articles scattered around the web were instrumental in writing this post.


The SignalR for ASP.NET Core JavaScript Client, Part 2 – Outside the Browser

Last time we looked at using the ASP.NET Core SignalR TypeScript/JavaScript client in the browser. I mentioned, however, that the new client no longer has dependencies that prevent it from being used outside the browser. So, today we will try taking the client outside the browser and using it in a NodeJS application. We will add a NodeJS client for the SignalR Chat service we created last time. Initially we will write the client in JavaScript and then we will convert it to TypeScript.

Let’s start by creating a new folder in the SignalRChat repo and adding a new node project:

mkdir SignalRChatNode
cd SignalRChatNode
npm init

We will call the application signalrchatnode and we will leave all other options set to their default values. (6425ec1)

Our application will read messages typed by the user and send them to the server. To handle user input we will use node’s readline module. To see that things work, let’s just add code that prompts the user for their name and displays it in the console. We will use it as the starting point of our application (34bc493).

const readline = require('readline');
let rl = readline.createInterface(process.stdin, process.stdout)

rl.question('Enter your name: ', name => {
  console.log(name);
  rl.close();
});

To communicate with the SignalR server we need to add the SignalR JavaScript client to the project using the following command (7875c07):

npm install @aspnet/signalr-client --save

We can now try starting the connection like this (3228a10):

const readline = require('readline');
const signalR = require('@aspnet/signalr-client');

let rl = readline.createInterface(process.stdin, process.stdout);

rl.question('Enter your name: ', name => {
  console.log(name);

  let connection = new signalR.HubConnection('http://localhost:5000/chat');
  connection.start()
  .catch(error => {
    console.error(error);
    rl.close();
  });
});

The code looks good but if you try running it, it will immediately fail with the following error:

Error: Failed to start the connection. ReferenceError: XMLHttpRequest is not defined
ReferenceError: XMLHttpRequest is not defined

What happened? The new JavaScript client no longer depends on the browser but it still uses standard browser APIs like XMLHttpRequest or WebSocket to communicate with the server. If these APIs are not available the client will fail. Fortunately, the required functionality can be easily polyfilled in the NodeJS environment. For now, we will just stick the polyfills on the global object. It’s not beautiful by any means but it will do the trick. We are discussing how to make this better in the future but at the moment this is the way to go.

Depending on the features of SignalR you plan to use, you will need to provide the appropriate polyfills. Currently the absolute minimum is XMLHttpRequest. The SignalR client uses it to send the initial OPTIONS HTTP request which initializes the connection on the server side, and for the long polling transport. So, if you use the long polling transport only, XMLHttpRequest is the only polyfill you will need to provide. If you want to use the WebSockets transport you will need a WebSocket polyfill in addition to XMLHttpRequest. (We are thinking about skipping sending the OPTIONS request for WebSockets. If this is implemented you will not need the XMLHttpRequest polyfill when using the WebSockets transport.) For the ServerSentEvents transport you will need an EventSource polyfill. Finally, if you happen to use binary protocols (e.g. MessagePack) over the ServerSentEvents transport you will need polyfills for the atob/btoa functions. For simplicity, we will use the WebSocket transport in our application so we will add only the polyfills for XMLHttpRequest and WebSocket:

npm install websocket xmlhttprequest --save

and make them available globally via:

XMLHttpRequest = require('xmlhttprequest').XMLHttpRequest;
WebSocket = require('websocket').w3cwebsocket;
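
If you wanted to use the other transports instead, the corresponding polyfills can be wired up the same way. The package names below (eventsource, atob, btoa) are common npm packages I am using purely for illustration, not an official recommendation:

// Only needed for the ServerSentEvents transport
EventSource = require('eventsource');

// Only needed for binary protocols (e.g. MessagePack) over ServerSentEvents
atob = require('atob');
btoa = require('btoa');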

If we run the code now we will see something like this:

moozzyk:~/source/SignalRChat/SignalRChatNode$ node index.js
Enter your name: moozzyk
moozzyk
Information: WebSocket connected to ws://localhost:5000/chat?id=0d015ce4-3a78-4313-9343-cb6183a5e8ea
Information: Using HubProtocol 'json'.

which tells us that the client was able to connect successfully to the server. (946f85d)

Now we need to add some code to handle user input and interact with the server, and our Node SignalR Chat client is ready (I admit that the user interface is not very robust, but it should be enough for the purpose of this post). With that in place you can talk to browser clients from your node client and vice versa (0f7f71f).
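
The core of that change is an input loop that forwards each line the user types to the hub and prints messages broadcast by the server. The sketch below is an illustration built from the calls we have already used (HubConnection, on, start, send) rather than the exact code from the commit:

const readline = require('readline');
const signalR = require('@aspnet/signalr-client');
XMLHttpRequest = require('xmlhttprequest').XMLHttpRequest;
WebSocket = require('websocket').w3cwebsocket;

let rl = readline.createInterface(process.stdin, process.stdout);

rl.question('Enter your name: ', name => {
  let connection = new signalR.HubConnection('http://localhost:5000/chat');

  // Print messages broadcast by the server.
  connection.on('broadcastMessage', (sender, message) => {
    console.log(`${sender}: ${message}`);
  });

  connection.start()
    .then(() => {
      // Forward every line the user types to the hub's Send method.
      rl.on('line', input => connection.send('send', name, input));
    })
    .catch(error => {
      console.error(error);
      rl.close();
    });
});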

[Screenshot: the Node client and a browser client exchanging chat messages]

Now let’s convert our client to TypeScript. We will start by creating a new TypeScript project with tsc --init. In the generated tsconfig.json file we will change the target to es6. We will also add an empty index.ts file and delete the existing index.js file (we no longer need it since index.js will now be generated by compiling the newly created index.ts). (b83cf92) If you now run tsc you should see an empty index.js file created as a result of compiling the index.ts file.

The last thing to do is to actually convert our JavaScript code to TypeScript. We could just translate it one-to-one but we can do a little better. TypeScript supports async/await, which makes writing asynchronous code much easier. Since many of the SignalR client methods return Promises we can just await these calls instead of using the .then/.catch functions. Here is what our node SignalRChat client looks like written in TypeScript (2a6d0e9):

import * as readline from "readline"
import * as signalR from "@aspnet/signalr-client"

(<any>global).XMLHttpRequest = require("xmlhttprequest").XMLHttpRequest;
(<any>global).WebSocket = require("websocket").w3cwebsocket;

let rl = readline.createInterface(process.stdin, process.stdout);

rl.question("Enter your name: ", async name => {
  console.log(name);
  let connection = new signalR.HubConnection("http://localhost:5000/chat");

  connection.on("broadcastMessage", (name, message) => {
    console.log(`${name}: ${message}`);
    rl.prompt(true);
  });

  try {
    await connection.start();
    rl.prompt();

    rl.on("line", async input => {
      if (input === "!q") {
        console.log("Stopping connection...");
        connection.stop();
        rl.close();
        return;
      }
      await connection.send("send", name, input);
    });
  }
  catch (error) {
    console.error(error);
    rl.close();
  }
});

You can run it by executing the following commands:

tsc
node index.js

Today we learned how to use the ASP.NET Core SignalR client in the NodeJS environment. We created a small node JavaScript application that was able to communicate with browser clients. Finally, we converted the JavaScript code to TypeScript and learned a little bit about TypeScript’s async/await feature.

The SignalR for ASP.NET Core JavaScript Client, Part 1 – Web Applications

The first official release of SignalR for ASP.NET Core – alpha1 – has just shipped. In this release, all SignalR components were rewritten to make SignalR simpler, easier to use and more reliable.

The SignalR JavaScript client has always been a fundamental part of SignalR. Unfortunately, it had a few limitations which made it hard to extend or use outside the browser. The rewrite allowed us to introduce changes which make it possible to take the client outside the browser (no more dependency on jQuery, YAY!) and open up new scenarios. And this is what this blog post will focus on. I split the post into two parts. In the first part I will show how to use the client in a web application from both JavaScript and TypeScript. In the second part, we will look at NodeJS.

The plan for this part is to recreate the chat application from the tutorial on the previous version of SignalR and then to convert it to use the new SignalR Server and JavaScript client. The sample is simple enough to allow us to focus on SignalR aspects rather than on application intricacies. As a bonus, we will see what the experience of porting an application from the previous version of SignalR is. I created a github repo for the application where each commit is a step described in this post. I will refer to particular commits from this post to show changes for a given step.

Setting up the Server

Let’s start from creating an empty ASP.NET Core application. We can do that from command line by running the dotnet new web command. (See this step on github).

Once the application is created we can start the server with dotnet run and make sure it works by navigating to http://localhost:5000 from a browser.

After we ensured that the application runs we can add SignalR server components. First, we need to add a reference to the SignalR package to the SignalRChat.csproj file (See this step on github).

Now we can add the Chat Hub class – we will just copy the code from the tutorial and tweak a few things. This is how the hub class looks after the changes:

using System;
using Microsoft.AspNetCore.SignalR;
namespace SignalRChat
{
    public class ChatHub : Hub
    {
        public void Send(string name, string message)
        {
            // Call the broadcastMessage method to update clients.
            Clients.All.InvokeAsync("broadcastMessage", name, message);
        }
    }
}

The changes we made were only cosmetic – we removed the reference to the System.Web namespace and added ‘Core’ to the Microsoft.AspNet.SignalR namespace so that it reads Microsoft.AspNetCore.SignalR. We also changed how we invoke the client-side method by passing the method name as the first parameter to the InvokeAsync call. (See this step on github).

Now that we have created a hub we need to configure the application to be aware of SignalR and to forward SignalR related messages to our hub. It’s as easy as calling the AddSignalR extension method in the ConfigureServices method of our Startup class and mapping the hub with the UseSignalR method. We will also add the static files middleware which will be responsible for serving static files. The Startup class should look like this:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSignalR();
    } 

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        app.UseFileServer();

        app.UseSignalR(routes =>
        {
            routes.MapHub<ChatHub>("chat");
        });
    }
}

(See this step on github).

And this is all the work we had to do to create a functional SignalR chat server. Now we can focus on the client side.

The JavaScript Client

In the new version of SignalR the JavaScript client is distributed using npm. The npm module contains a version of the client that can be included in a web page with a script tag, as well as typings and modules that can be consumed from TypeScript. To get the client to your machine you need to install npm if you haven’t already and run:

npm install @aspnet/signalr-client

The client will be installed in the node_modules folder and you can find the necessary files to include in the node_modules/@aspnet/signalr-client/dist/browser folder. You may wonder why there are so many files in this folder and what purpose they serve. Let’s go over them.

First, you will find that there are two sets of files – files that contain ES5 in their names and files that do not. The SignalR JavaScript client uses ES6 (a.k.a. ECMAScript 2015) features like Promises or arrow functions. Not all browsers, however, support ES6 (looking at you, Internet Explorer). The files without ES5 in their names are meant to be used in browsers that support ES6. The files that contain ES5 in their names are the ES6 files transpiled to ES5. They are ES5 compatible and include all required dependencies. The downside of the ES5 files is that they are much bigger than the ES6 files.

Another interesting set of files are the files containing msgpackprotocol in the name. The new version of SignalR supports custom hub protocols – including binary protocols – and has built-in support for a binary protocol based on MessagePack. The JavaScript implementation of the MessagePack based hub protocol (which uses the msgpack5 library) turned out to be quite big so we moved it to a separate file. This way you include the MessagePack hub protocol only if you want to use it and do not pay the price if you don’t care.

You will also find that each file has a .min counterpart. These are just minified versions of the corresponding files. You will want to use the minified versions in production, but debugging is much easier with non-minified files so you may want to use the non-minified versions during development.

Finally, there is also the third-party-notices.txt file. These are notices for the msgpack5 library and its dependencies used in the MessagePack hub protocol implementation.

Using the SignalR JavaScript Client from JavaScript

Now that we know a little bit about the JavaScript client, let’s update our application to use it.

First, let’s copy all the files from the node_modules/@aspnet/signalr-client/dist/browser folder to a new scripts/signalr folder under wwwroot. (See this step on github).

After the files are copied, let’s create the index.html file in the wwwroot folder and paste the contents of the html file from the tutorial. (See this step on github).

If you try to run the application at this point it will not work. The index.html has references to files like the jQuery library or the old SignalR client which don’t exist. Let’s fix that. Note that even though jQuery is no longer required by the new SignalR client I will continue to use it to minimize the number of changes I need to make. After all, this is not a tutorial on how to remove jQuery from your app, so let’s not get sidetracked. Let’s start by sorting out the scripts situation. For jQuery, I will replace the link with one pointing to the jQuery CDN. For SignalR, I will replace the link to the signalR-2.2.1.min.js file with signalR-client-1.0.0-alpha1.js (feel free to use the ES5 version if you are using a browser that doesn’t support ES6 features) and remove the link to hubs since hub proxies are currently not supported. (See this step on github (github trick – notice that the link ends with ?w=1 – try removing it and see what happens. Very useful when reviewing some PRs)).

Now we can finally fix the code. Fortunately, this is not a lot of changes (a sketch of the converted script follows the list below):

  • Instead of using proxies we will just create a new HubConnection
  • To register the callback for the client side broadcastMessage method we will use the on function
  • We will replace the done method used by jQuery deferreds with the then method used by ES6 promises
  • We will invoke hub methods with the invoke function

(See this step on github).
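
Putting these changes together, the client-side script ends up looking roughly like the sketch below. It is an illustration that assumes the element ids from the original tutorial markup (discussion, displayname, message, sendmessage) rather than a copy of the actual commit:

// Create the connection instead of using the generated chatHub proxy.
let connection = new signalR.HubConnection('http://localhost:5000/chat');

// Register the client-side broadcastMessage callback with 'on'.
connection.on('broadcastMessage', (name, message) => {
    $('#discussion').append($('<li>').text(name + ': ' + message));
});

// Get the user name, as in the original tutorial.
$('#displayname').val(prompt('Enter your name:', ''));

// 'then' on the ES6 promise replaces the jQuery deferred's 'done'.
connection.start().then(() => {
    $('#sendmessage').click(() => {
        // Hub methods are invoked with 'invoke'.
        connection.invoke('send', $('#displayname').val(), $('#message').val());
        $('#message').val('').focus();
    });
});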

That’s pretty much it. If you run the application now you should be able to send and receive messages.

Using the JavaScript Client from TypeScript

We now know how to use the new JavaScript SignalR client from JavaScript code. The SignalR client module also contains all the bits necessary to consume it from TypeScript. To see how this works let’s take our chat application a bit further and convert it to TypeScript.

First, make sure that you have a recent TypeScript compiler installed – run tsc --version from the command line. If running the command fails or you have an older version installed, install the latest one using this command:

npm install typescript -g

After installing or updating the typescript compiler we will initialize a new project by running

tsc --init

in the project folder. This will create a tsconfig.json file which, after some cleanup, will look like this:

{
  "compilerOptions": {
    "target": "es6",
    "module": "commonjs",
    "strict": true,
    "noImplicitAny": true
  }
}

We will also add a new chat.ts file which we will leave empty for now. If you run the tsc command from the project root you should see an almost empty chat.js file generated from your chat.ts file. (See this step on github).

Because we are using TypeScript and will bring dependencies using npm we will no longer need JavaScript files for the browser so let’s delete them. (See this step on github).

To be able to add and restore the dependencies the client will need, let’s create a package.json file by executing the npm init command. We will leave the default values for almost all settings except for the project name which needs to be lowercase.

PS C:\source\SignalRChat\SignalRChat> npm init
This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.
See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg> --save` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
name: (SignalRChat) signalrchat
version: (1.0.0)
description:
entry point: (chat.js)
test command:
git repository:
keywords:
author:
license: (ISC)
About to write to C:\source\SignalRChat\SignalRChat\package.json:

{
  "name": "signalrchat",
  "version": "1.0.0",
  "description": "",
  "main": "chat.js",
  "dependencies": {},
  "devDependencies": {},
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}

Is this ok? (yes)
PS C:\source\SignalRChat\SignalRChat>

Now let’s add our dependencies – signalr-client, jquery and jquery typings (they enable using jquery from TypeScript). We will use the --save-dev option to save the dependencies as dev dependencies in the package.json file.

npm install @aspnet/signalr-client --save-dev
npm install jquery --save-dev
npm install @types/jquery --save-dev

We also need to install browserify – a tool which we will use to create the final script to be used by the browser:

npm install -g browserify

(See this step on github).

We can now start working on the code. First, we need to import the dependencies we are going to use. We can do that by adding the following two lines at the top of our chat.ts file:

import * as signalR from "@aspnet/signalr-client"
import * as $ from "jquery"

Now we can move the script from our .html file to the .ts file. If you do that and play a little bit with the code you will notice that IntelliSense now tells you about class members and function parameters, and if you press F12 (in Visual Studio Code) it will take you to the function definition. Another thing you will see is an error on line 5. This is TypeScript telling you that there is a type mismatch for the parameter passed to the jQuery val() function – the prompt() function can return null which is not a valid input for the val() function.

[Screenshot: Visual Studio Code showing the TypeScript error on the prompt() call]

In our case we know that prompt will return a string, so we will just cast the result to string to suppress the error.
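
A one-line illustration of the cast (the displayname element id is assumed from the tutorial markup):

// prompt() is typed as string | null; we know a value will be entered, so cast it.
$('#displayname').val(<string>prompt('Enter your name:', ''));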

Since we moved the code to the .ts file we can now remove all the JavaScript code from our index.html file. We can also remove all the script tags since we no longer depend on them to bring in dependencies (we also already deleted the script files). (See this step on github).

Let’s compile our chat.ts file now by running the tsc command. If you look at the generated chat.js file you will notice that it looks pretty much the same as the source chat.ts file, with some additional lines at the top. You will also notice that it does not include the required dependencies (i.e. signalr-client and jquery). This is where browserify comes into play. We will use browserify to generate the final version of the file with all the dependencies. Let’s run the following command (you may need to create the wwwroot/scripts folder if it does not exist) from the project folder:

browserify .\chat.js -o .\wwwroot\scripts\chat.js

Take a look at the chat.js file that was created by browserify and you will see that it is much bigger and contains all the required dependencies. If we include this file in our index.html with a script tag, start the application and open it in the browser you will see that it works and you can send and receive messages. (See this step on github). We could even automate the build steps (e.g. with gulp) but that’s out of scope for this post.

Summary

In this post, we looked at using the new SignalR JavaScript client in web applications. We learned how to use the client from both JavaScript and TypeScript. We tried to port an application using the previous version of SignalR to see how hard it is. In the next part, we will take a look at using the client in NodeJS applications.

Entity Framework 6 Easter of Love

While Entity Framework Core along with ASP.NET Core get all the hype today, Entity Framework 6 is still the workhorse of many applications running every day which won’t be converted to the Core world anytime soon, if at all. Because of this I decided to spend some time to give my EF extensions a small refresh to adapt to the changing landscape.

Github

Some of my extensions were hosted on Codeplex. I do most of my work on Github these days and Github is nowadays the de facto standard for open source projects. Codeplex not only looks dated but is also missing a lot of features Github has (searching the code on Github is far from perfect but Codeplex does not offer it at all). All in all this turned out to be the right decision given that it was recently announced that Codeplex is being shut down. Anyways, my projects previously hosted on Codeplex have found new homes on Github.

Updating projects

I developed most of my EF extensions before Visual Studio 2015 was released. I found that opening them in Visual Studio 2015 was not a good experience – Visual Studio would update project/solution files automatically leaving unwanted changes. Therefore, I updated solution files to the version compatible with Visual Studio 2015. I also moved to a newer version of XUnit which does not require installing an XUnit runner extension in Visual Studio to enable running tests. Even though the solution files are marked as Visual Studio 2015 compatible they can be opened just fine with Visual Studio 2017 which shipped in the meantime.

New versions

This is probably the most exciting: I released new versions of a few of my extensions.

2nd Level Cache for Entity Framework

2nd Level Cache (a.k.a. EFCache) 1.1.0 contains only one new feature. This feature will, however, make everyone’s life easier. Until now the default caching policy cached results for all queries. In the vast majority of cases this behavior is not desired (or is plainly incorrect) so you had to create your own policy to limit caching only to results from selected tables. In EFCache 1.1.0 you can specify store entity sets (i.e. the sets which correspond to tables in the database) for which results should be cached when creating the default caching policy. As a result you no longer have to create your own policy if you only need simple control over caching. This change is not breaking.

Store Functions for Entity Framework

I received a couple of community Pull Requests which are worth sharing, so yesterday I published the new version of Store Functions for Entity Framework (1.1.0) containing these contributions on NuGet. pogi-b added support for built-in functions so you can now map built-in store functions (e.g. FORMAT or MAP) and use them in your queries. PaulVrugt added the ability to discover function stubs marked as private. The first change is not breaking. If you happened to have private function stubs that were not discovered before (a.k.a. dead code) they will be discovered now as a result of the second change.

EF6 CodeFirst View Generation T4 Template for C#

Visual Studio 2017 now requires extensions to use VSIX v3 format. The EF6 CodeFirst View Generation T4 Template for C# extension used format v1 and could not be installed in Visual Studio 2017. I updated the VSIX format to v3 and dropped support for Visual Studio 2010 and 2012.

Note: I have not updated other view generation templates for EF4/EF5 to work with Visual Studio 2017. If you need them to work with VS 2017 let me know and I will update.

Happy Easter!

 

SignalR Core Part 2/3: ASP.Net Core Sockets

Disclaimer: SignalR Core is still in early stages and changing rapidly. All information in this post is subject to change.

To test some of the scenarios described in the first part of this mini-series I came up with an idea for a relatively simple application where users report the weather at their location to the server and their report is broadcast to all the connected clients. I called this application SocialWeather. The central part of the system is an ASP.Net Core application running the SignalR Core server. The server can handle messages received in one of the following formats – JSON, Protobuf and pipe (the pipe format is a simple format I created where the data is separated by the pipe character (|) and the message ends with the new line character (\n)). I also created 3 different clients – a JavaScript client using the JSON format, a C# client using Protobuf and a lua client using the pipe format.

The JavaScript client is part of a web page served by the same application that hosts the server. The C# client is a console application that can send and receive Protobuf messages. The lua client is the most interesting as it runs on an ESP8266 development board with the NodeMCU firmware. The whole system uses the “socket” level of the new SignalR which is a kind of counterpart of persistent connections in the previous version of SignalR (so no hubs API here). All the clients use bare websockets to connect to the server (in other words there is no SignalR client involved).

The SocialWeather server, which includes the JavaScript client, is a sample project in the SignalR repo and you can run it yourself – just clone the repo, run build.cmd/build.sh to install the correct version of the runtime and restore packages. Then go to the samples/SocialWeather folder and start the server with dotnet run.

The C# and lua clients are in my personal repo. Running the C# client is straightforward. After you clone the repo you need to restore packages, update the URL to the SignalR server and run the client with dotnet run. Each time you press Enter the client will generate a random weather report and send it to the server. If you type "!q:" the client will exit.

The lua client is meant to run on an ESP8266 compatible board with the NodeMCU firmware installed. Preparing the board to run the client requires a bit of work. The first step is to set up serial communication with the board. If the module you have is equipped with a USB port (I have the Lolin v3 board which does have a USB port) it should be enough to install VCP (Virtual COM Port) drivers (you can find the drivers here http://www.ftdichip.com/FTDrivers.htm or here http://www.silabs.com/products/mcu/pages/usbtouartbridgevcpdrivers.aspx). If your board doesn’t have a USB port, you will need to use an additional module (e.g. an Arduino) for USB-to-serial translation (you can find tutorials about setting this up on the web).

The next step is to make sure that your board has the right firmware. It needs to be a NodeMCU firmware with net, http, wifi, and websocket support. You can request a build here and follow the steps in this tutorial to re-image your board. When the board is ready you should be able to connect to it using a serial terminal (I used screen on Mac and Putty on Windows). On Windows, determine the COM port number the board is using with the Device Manager. On Mac it will be one of the /dev/tty* devices. The speed needs to be set to 115200. When using Putty make sure the Flow Control setting is set to None or the communication will not work correctly.

[Screenshot: Putty serial configuration with Flow Control set to None]

Once on the device you need to connect it to your wireless network by sending the following commands (you need to replace SSID with the name of the network and PWD with the password or an empty string for open networks):

wifi.setmode(wifi.STATION)
wifi.sta.config(SSID, PWD)
wifi.sta.connect()

We are now ready to run the client. If you haven’t already, clone the repo containing the client, go to the src/lua-client folder and update the URL to the SocialWeather SignalR server. Now you can transfer the file to the device (if you are connected to the device with the terminal you need to disconnect first). The nodemcu-uploader Python script does the job.

If all the stars aligned correctly you should now be able to start the client by executing:

dofile("social-weather.lua")

It will print a confirmation message once it has successfully connected to the server and will then print weather reports sent by other clients. You can also send your own weather report just by typing:

ws:send("72|2|0|98052\n")

and pressing Enter.

The command may look a bit cryptic but it is quite simple. ws is a handle to the websocket instance created by the social-weather.lua script. send is a method of the websocket class, so we are literally invoking the send method of the websocket interactively. The argument is a SocialWeather report in the pipe format: a pipe separated list of values – temperature, weather, time, zip code – terminated with \n.
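
To make the wire format more concrete, here is a small TypeScript sketch of what parsing such a frame could look like on a receiving client. The field names follow the description above and are not taken from the sample code:

// '72|2|0|98052\n' -> { temperature: 72, weather: 2, time: 0, zipCode: '98052' }
function parsePipeReport(frame: string) {
    const [temperature, weather, time, zipCode] = frame.trim().split('|');
    return {
        temperature: Number(temperature),
        weather: Number(weather),
        time: Number(time),
        zipCode
    };
}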

This is what it looks like when you run all three clients:

[Screenshot: all three SocialWeather clients running]

Let’s take a look at how things work under the hood. On the server side the central part is the SocialEndPoint class which handles the clients and processes their requests. If you look at the loop that processes requests, it does not do any parsing on its own. Instead it offloads parsing to a formatter and deals with strongly typed instances. A formatter is a class that knows how to turn a message into an object of a given type and vice versa. In the case of the SocialWeather application the only kind of messages sent to and from the server are weather reports, so this is the only type formatters need to understand.

When sending messages to clients the process is reversed. The server gives the formatter a strongly typed object and leaves it to the formatter to turn it into a valid wire format.

How does the server know which formatter to use for a given connection? All available formatters need to be registered in the DI container as well as mapped to the type they can handle and format. When establishing the connection, the client sends the format type it understands as a query string parameter. The server stores this value in the connection metadata and uses it later to resolve the correct formatter for the connection.

On the client side things are even simpler. The lua client has just 30 lines of code, half of which are concerned with printing weather reports in a human readable form. Because the format of the connection cannot change once the connection is established, message parsing can be hardcoded. The rest is just setting up the websocket to connect to the server and react to incoming message notifications.

The C# client is equally simple. It contains two asynchronous loops – one for receiving messages (weather reports) from the server and one for sending messages to the server. Again, handling the wire format (which in this case is Protobuf) is hardcoded in the client.

This is pretty much all I have on ASP.Net Core Sockets. With a simple application we were able to validate that the new version of SignalR can handle many scenarios the old one couldn’t. We were able to connect to the server from different platforms/environments without using a dedicated SignalR client. The server was capable of handling clients that use different and custom message formats – including a binary format (Protobuf). Finally, all this could be achieved with a small amount of relatively simple code.

SignalR Core Part 1/3: Design Considerations

Disclaimer: SignalR Core is still in early stages and changing rapidly. All information in this post is subject to change.

A few months ago, we started working on the new version of SignalR that will be part of the ASP.Net Core framework. Originally we just wanted to port existing code and iterate on it. However, we soon realized that doing so would prevent us from enabling new scenarios we wanted to support. The original version of SignalR was designed around long polling (note that back in the day support for websockets was not as common as it is today – it was not supported by many web browsers, it was not supported in .NET Framework 4, it was not (and still isn’t) supported natively on Windows 7 and Windows 2008 R2). A JSON based protocol was baked in and could not be replaced which blocked a possibility of using other (e.g. binary) formats. Starting the connection was heavy and complicated – it required sending 3 HTTP requests whose responses had to be correlated with messages sent over the newly created transport (you can find a detailed description of the protocol in SignalR on the wire – an informal description of the SignalR protocol – a post I wrote on this very subject). This basically meant that a dedicated client was required to talk to a SignalR server. In the old design the server was centered around MessageBus – all messages and actions had to go through the message bus. This made the code very complex and error prone especially in scale-out scenarios where all the servers were required to have the same data. The state (e.g. cursors/message ids, groups tokens etc.) was kept on the client which would then send it back to the server when needed (e.g. when reconnecting). The need of keeping the state up-to-date significantly increased the size of the messages exchanged between the server and the client in most of the non-trivial scenarios.

In the new version of SignalR we wanted to remove some of the limitations of the old version. First, we decided to no longer use long polling as the model transport. Rather, we started with the premise that a full duplex channel is available. While this might sound a lot like websockets, we think it will be possible to take it further in the future and support other protocols like TCP/IP. Note, it does not mean that the long polling and server sent events transports are going away – only that we will not drag better transports down to the standards of worse transports (e.g. websockets support binary messages but long polling (until XmlHttpRequest2) and server sent events didn’t, so in the old version of SignalR there was no support for binary messages; in the new version we would rather base64 encode messages if needed and let users use what websockets offer). Second, we did not want to bake in any specific protocol or message format. Sure, for hub invocations we will still need to be able to get the name of the hub method and the arguments, but we will no longer care how this is represented on the wire. This opens the way to using custom (including binary) formats. Third, establishing the connection should be lightweight and connection negotiation can be skipped for persistent duplex transports (like websockets). If a transport is not persistent or uses separate channels for sending and receiving data, connection negotiation is required – it creates a connection id which will be used to identify all the requests sent by a given client. However, if there are no multiple requests because the transport is full duplex and persistent (like in the case of websockets) the connection id is not needed – once the connection is established in the first request it is used to transfer the data in both directions. In practice, this means that you can connect to a SignalR server without a SignalR client – just by using bare websockets.

There are also a few things that we decided not to support in the new SignalR. One of the biggest is the ability to re-establish a connection automatically if the client loses the connection to the server. While it may not be obvious, the reconnect feature had a huge impact on the design, complexity and performance of SignalR. Looking at what happens during a reconnect should make it clear. When a client loses a connection, it tries to re-establish it by sending a reconnect request to the server. The reconnect request contains the id of the last message the client received and the groups token containing information about the groups the client belongs to. This means that the server needs to send the message id with each message so the client can tell the server what the last message it received was. The more topics the client is subscribed to, the bigger the message id gets, up to the point where the message id is much bigger than the actual message.

Now, when the server receives a reconnect request it reads the message id and tries to resend all the messages the client missed. To be able to do that the server needs to keep track of all messages sent to each client and buffer at least some recent messages so that it can resend them when needed. Indeed, the server has a buffer per connection which it uses to store recent messages. The default size of that buffer is 1000 messages which creates a lot of memory pressure. The size of the buffer can be configured to make it smaller but this will increase the probability of losing messages when a reconnect happens.

The groups token has similar issues – the more groups the client belongs to the bigger the token gets. It needs to be sent to the client each time the client joins or leaves a group so the client can send it back in case of reconnects to re-establish group membership. The size of the token limits the number of groups a client can belong to – if the groups token gets too big the reconnect attempt will fail due to the URL being bigger than the limit.

While auto-reconnect will no longer be supported in SignalR, users can build their own solution to this problem. Even today people try restarting their connection after it was closed by adding a handler to the Closed event in which they start a new connection. It can be done in a similar fashion in SignalR Core. It’s true that the client will not receive the messages it missed, but this could happen even in the old SignalR – if the number of messages the client missed was greater than the size of the message buffer, the newest messages would overwrite the oldest ones (the message buffer is a ring buffer) so the client would never receive the oldest messages.
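
For the JavaScript client the shape of such a workaround could look roughly like the sketch below. The name and form of the closed callback differ between client versions (the old client had a Closed event; the new client exposes a similar hook), so treat the onclose member here as an assumption and check the client you are using:

function startWithRetry() {
    const connection = new signalR.HubConnection('http://localhost:5000/chat');

    // When the connection drops, wait a bit and create a brand new connection.
    // The exact member name depends on the client version - this only shows the pattern.
    connection.onclose = () => setTimeout(startWithRetry, 5000);

    connection.start().catch(err => console.error(err));
}

startWithRetry();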

Another scenario we decided not to support in the new version of SignalR was allowing clients to jump between servers (in multi-server scenarios). Before, the client could connect to any server and then reconnect or send data to any other server in the farm. This required all servers to have all the data so they could handle requests from any client. The way it was implemented was that when a server received a message it would publish it to all the other SignalR servers via the MessageBus. This resulted in a huge number of messages being sent between SignalR servers.

(Side note. Interestingly, the scenario of reconnecting to a different server than the one the client was originally connected to often did not work correctly due to server misconfiguration. The connection token and groups token are encrypted by the server before being sent to the client. When the server receives the connection token and/or groups token it needs to be able to decrypt it. If it cannot, it rejects the request with a 400 (Bad Request) error. The server uses the machine key to encrypt/decrypt the data, so all machines in the farm must have the same machine key or a connection token (which is included in each request) encrypted on one server can’t be decrypted on another server and the request fails. What I have seen several times was that servers in the farm had different machine keys, so reconnecting to a different server did not actually work.)

In the new SignalR the idea is that the client sticks to just one server. In multi-server scenarios there is a client-to-server map stored externally which tells which client is connected to which server. When a server needs to send a message to a client it no longer has to broadcast the message to all the other servers just because the client might be connected to one of them. Rather, it checks which server the client is connected to and sends the message only to that server, so the traffic among SignalR servers is greatly reduced.

The last change I want to talk about, somewhat related to the previous topic, is removing the built-in scale-out support. In the previous version of SignalR there was one official way of scaling out SignalR servers – a scale-out provider would subclass the ScaleoutMessageBus and leave all the heavy lifting to SignalR. It sounds good in theory, but with time it became apparent that with regards to scale-out there is no “one size fits all” solution. Scaling out an application turned out to be very specific to the application’s goals and design. As a result, many users had to implement their own solution to scaling out their applications yet still paid the cost of the built-in scale-out (even when using just one server there is an in-memory message bus all messages go through). While scale-out support is no longer built in, the project contains a Redis based scale-out solution that can be used as-is or as guidance for creating a custom scale-out solution.

These are, I think, the biggest design/architecture decisions we have made. I believe they will allow us to make SignalR simpler, more reliable and more performant.

Running ASP.NET Core Applications with IIS and Antares (Azure Websites)

I have seen a few articles (including the official docs on http://docs.asp.net) about publishing and running ASP.NET Core applications in IIS (or Azure/Antares). Unfortunately, I was not satisfied by any of them. Yes, they showed the steps you need to follow to make things work. Yes, they touched on some aspects of how things work. No, they did not explain what’s really happening, why it’s happening and how the blocks fit together. Hence this post – something I would like to read if I wanted to run my ASP.NET Core application using IIS or Azure.

Before we can get into details we need to understand how things work at a high level. The most important thing is that ASP.NET Core applications are no longer tightly coupled to IIS as they were with previous versions. Rather, IIS now acts merely as a reverse proxy and the application itself runs as a separate process using the Kestrel HTTP server. Decoupling ASP.NET from IIS was necessary to enable running ASP.NET Core applications on other platforms. It also makes development easier because it allows you to avoid the overhead of IIS/IIS Express during development by making it possible to run your application directly using Kestrel. Note that in a production environment it is recommended to always run Kestrel behind a reverse proxy like IIS or NGINX.

Going back to IIS, HTTP requests are handled as follows:

  1. IIS receives a request
  2. IIS (ASP.NET Core Module) starts the ASP.NET Core application in a separate process (if the application is not already running) – i.e. the application no longer runs as w3wp.exe but as dotnet.exe or myapp.exe
  3. IIS forwards the request to the application
  4. The application processes the request and sends the response to IIS
  5. IIS forwards the response to the client

Out of these 5 steps, step two is the most interesting, the most complicated and the most fragile. There are a few pieces that need to be aligned to successfully start an ASP.NET Core application from IIS: the ASP.NET Core Module, the web.config file and the application configuration. Let’s take a look at them one by one and discuss their roles.

ASP.NET Core Module (sometimes abbreviated to ANCM) is a native IIS module that starts the application and implements the reverse proxy functionality. It is installed as part of the ASP.NET Core tooling for Visual Studio or can be installed separately (the Windows (Server Hosting) package from https://www.microsoft.com/net/download).

web.config – tells IIS to use the ASP.NET Core Module to process requests. It also tells the ASP.NET Core Module what process (application) to start. Note that you don’t use web.config to configure your application – it is only used by IIS. Here is what a typical web.config of an ASP.NET Core application looks like:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified"/>
    </handlers>
    <aspNetCore processPath="dotnet" arguments=".\HelloWorld.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" />
  </system.webServer>
</configuration>

Application configuration – a typical Main method of an ASP.NET Core application looks like this:

var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

From the perspective of running the application using IIS, the lines that are important are UseKestrel and UseIISIntegration. UseKestrel configures Kestrel as the application web server. This is important from the IIS perspective since you can’t use WebListener and IIS together at the moment (https://github.com/aspnet/IISIntegration/issues/8). UseIISIntegration does a bit of magic to fulfill the ASP.NET Core Module’s expectations and registers the IISMiddleware.

With the information above we can now drill into how IIS starts ASP.NET Core applications. When IIS receives a request for an ASP.NET Core application it passes the request to the ASP.NET Core Module. IIS was configured to do this by the following entry in the web.config file:

<handlers>
  <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified"/>
</handlers>

Upon receiving the request the ASP.NET Core Module will attempt to start the application if it is not already running. The name of the process to start and its arguments are specified in web.config as the processPath and arguments attributes on the aspNetCore element. ASP.NET Core Module also sets a few environment variables for the application process – ASPNETCORE_PORT, ASPNETCORE_APPL_PATH and ASPNETCORE_TOKEN. Here is where the UseIISIntegration magic happens. When the application starts, the code inside the UseIISIntegration method tries to read these environment variables and, if they are not empty, uses them to configure the url/port the application will listen on. (If the above environment variables are not set, UseIISIntegration won’t try to configure anything so that you can use your own settings when running the application directly, i.e. without IIS.) One important detail to pay attention to is where you put the call to UseIISIntegration when configuring your application with WebHostBuilder. You need to make sure that you don’t try to set server urls after you have called UseIISIntegration, otherwise the url set by UseIISIntegration will get overwritten, your application will be listening on a different port than the ASP.NET Core Module expects, and things will not work.

ASPNETCORE_TOKEN is a pairing token. The IIS middleware added to the pipeline by UseIISIntegration will check each request for this value and will reject requests that don’t contain it. This prevents the application from accepting requests that did not come from IIS.

These are the fundamental blocks needed to run your ASP.NET Core application with IIS and on Azure (Antares). You can now go and set things up as described in some tutorials and actually understand what you are doing and why.

There is, however, a second level of confusion which happens when you start using Visual Studio and see additional magic. The first thing that is confusing is web.config. You create a new ASP.NET Core application, open web.config and you see this line instead of what I showed above:

<aspNetCore processPath="%LAUNCHER_PATH%" arguments="%LAUNCHER_ARGS%" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" forwardWindowsAuthToken="false"/>

You start wondering what these %LAUNCHER_*% environment variables are, who is supposed to set them and how IIS (or whoever) knows what values to put there. Honestly, I don’t exactly know how these environment variables work, but I treat them as placeholders that Visual Studio replaces when you start your application with F5/Ctrl+F5. When you publish your application to run in a production environment you can’t have these placeholders in web.config – no one knows about them and no one is going to replace them or set their values (I also don’t think you can just set environment variables with these names and have them picked up automatically – this is why I call these strings placeholders – they look as if they were environment variables but I don’t think they behave like environment variables). So how does this work then? If you look at your project.json file you will see a "scripts" section looking more or less like this:

"scripts": {
  "postpublish": "dotnet publish-iis --publish-folder %publish:OutputPath% --framework %publish:FullTargetFramework%"
}

It just tells dotnet to run the publish-iis tool after the application is published. What is publish-iis (or a better question would be “what publish-iis isn’t”)? publish-iis isn’t… doing much. There are a lot of misconceptions about the publish-iis tool but it actually is a very simple tool. It goes to the folder where the application was published (not your project folder) and checks if it contains a web.config file. If it doesn’t it will create one. If it does it will check what kind of application you have (i.e. whether it is targeting full CLR or Core CLR and – for Core CLR – whether it is a portable or standalone application) and will set the values of the processPath and arguments attributes removing %LAUNCHER_PATH% and %LAUNCHER_ARGS% placeholders on the way. Note that publish-iis is not a Visual Studio tool. It’s independent and whether you are using Visual Studio or you publish your application from command line using dotnet publish it will work as long as it is configured as a postpublish script in your project.json. That’s pretty much what publish-iis is.

Troubleshooting

Since there are a few pieces that need to be aligned to run an ASP.NET Core application with IIS, things tend to go wrong. If you don’t know how these pieces are supposed to work together (i.e. you did not read the first part of this post) you can search the internet, try random hints from stackoverflow, pray, and most likely you still won’t be able to make your application work. But now you know how things are supposed to work, so you can be much more effective in troubleshooting problems. I will only focus on the infamous 502.3 Bad Gateway error as this is the most common one. I read about other failures (https://docs.asp.net/en/latest/publishing/iis.html) but so far have not seen any of them. So, if your application does not work with IIS, what to do?

  • Make sure ASP.NET Core Module is installed. It will be installed on your dev box because it is installed with Visual Studio Web Tooling but it may not be on the server you are deploying your application to
  • Try running your published application without IIS – in the command prompt go to the folder where the application was published to and run dotnet {myapp}.dll (a portable Core CLR app) or {myapp}.exe (a standalone application or an application targeting full CLR)
  • If the above works – make sure dotnet.exe is on the global %PATH%. It might be on the %PATH% for you but IIS runs under a different account which may not have the path to dotnet.exe set. Add the path to the folder dotnet.exe lives in (typically C:\Program Files\dotnet) to the global %PATH% environment variable. I had to run iisreset.exe to make the change effective in IIS
  • Check the web.config file of the published application – verify it has actual values and not the %LAUNCHER_PATH%/%LAUNCHER_ARGS% placeholders. Make sure the processPath corresponds to the application type (i.e. you can’t start a portable application with {myapp}.exe because there is only {myapp}.dll in the folder)
  • Check the event log – the ASP.NET Core Module writes to the event log and you can find some useful (and some bogus) entries there
  • Turn on logging – you probably noticed the stdoutLogFile and stdoutLogEnabled attributes on the aspNetCore element. They are very helpful for diagnosing issues – especially issues related to application startup. If you set stdoutLogEnabled to true, the ASP.NET Core Module will write everything the application writes to the console to the stdoutLogFile file. Note that the folder configured in the stdoutLogFile attribute must exist, otherwise (at least at the moment) the file won’t be created. In case of Azure/Antares the path should look like: \\?\%home%\LogFiles\stdout (the \\?\%home%\LogFiles folder always exists). In fact, if you publish your application to Azure/Antares using Visual Studio tooling, it will make publish-iis set stdoutLogFile to point to the folder above and you can turn on logging by merely setting stdoutLogEnabled to true (see the example below).
    Note that stdout logging should not be used as a poor man’s file logging. It’s very useful for diagnosing startup issues since they often happen before loggers are created and/or configured, but if you want to log to a file configure your application to use a real logging framework. (There is also a plan to create a simple file logger https://github.com/aspnet/Logging/issues/441)
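
For instance, to capture startup output of an application running under IIS you could flip the attributes to something like this (remember that the logs folder must already exist next to the application):

stdoutLogEnabled="true" stdoutLogFile=".\logs\stdout"

and when running on Azure/Antares:

stdoutLogEnabled="true" stdoutLogFile="\\?\%home%\LogFiles\stdout"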

app_offline

The last thing to mention is app_offline.htm. app_offline.htm is a feature of the ASP.NET Core Module: the module monitors your application directory and, if it notices the app_offline.htm file (note – at the moment the file must not be empty: https://github.com/aspnet/IISIntegration/issues/174), it will stop your application and respond to requests with the contents of the app_offline.htm file. This makes application deployment much easier since you no longer need to deal with the problem of locked files that are loaded into the process of a running application. Once you remove the app_offline.htm file from the folder, the ASP.NET Core Module will start your application on the first request. app_offline.htm is used by the deployment tool (WebDeploy) that ships with Visual Studio (note that in preview1 this tool uses app_offline.htm only when deploying to Azure/Antares but it is supposed to be fixed so that app_offline.htm is used by default when deploying to the file system (think: IIS)).

So, these are the basics of running an ASP.NET Core application with IIS. The topic is much broader but the details described in this post will hopefully make working with IIS in the ASP.NET Core world less painful and help those who need to transition from previous versions of ASP.NET.


Home Automation with Raspberry Pi – Garage Door

I hate keys. And even though I hate not having keys when I need them even more, I still think keys are annoying. I keep forgetting or losing them. In some cases, they make life harder than it needs to be. For example, I bike to work almost every day from early spring to late fall and I should not even have to think about keys and doors since I just need to open the garage door, take/park my bike and close the garage door. Currently what I need to do is make a couple of trips around the house to open/unlock doors and then close/lock everything I opened/unlocked. I tried using my kids to do all of this for me but they simply started to avoid seeing me in the morning and even if I caught them they kept forgetting to do what I asked. Yes, I could carry a remote with me. I actually did for a while but it was even worse than carrying the key – it’s bigger, it’s another thing to remember about and the last one got some rust within a year or so and stopped working completely. There is, however, one thing I carry with me almost always – my cell phone. If I could open my garage from my cell phone (almost) all my problems would be solved. As a bonus my kids wouldn’t be afraid of leaving their rooms before I leave for work. Yes, I know there are already garage openers on the market that can be controlled with cell phones, so one way to go would be to just replace my garage opener with one of those. I don’t like, however, throwing away things that work. Replacing the garage opener would also cost at least a few hundred bucks even if I did all the work myself. Another thing is that I am currently using a Windows Phone (hey, an unlocked Nokia Lumia 640 was like $30 + tax – this deal was hard to beat for a cheapo like me) and given Windows Phone’s market share I don’t expect any garage opener to have an app for that. Finally, hacking something myself is much more fun than using a ready-made solution.

Enough talking, let’s get started. What you need for this project is a Raspberry Pi running Raspbian and a remote that matches your garage opener. If you don’t have a spare remote you can buy a generic one on eBay or Amazon. (I had a remote with a broken button so it was useless in general but perfect for this project since we need to short-circuit the button anyway.) The idea is to connect the remote to the Raspberry Pi via GPIO (General Purpose Input/Output) and then write a web application that can be used to control the remote. Whenever you are on your home network you will be able to access this web application (e.g. from your phone) and open or close the garage door. Heck, if you have friends whom you trust and allow on your network, they will be able to open your garage door too (this obviously has pros and cons). This might sound complicated but it is really not that hard.

GPIO (General Purpose Input/Output)

The first thing you need to be familiar with is GPIO (General Purpose Input/Output). If you have never heard about GPIO these are the 26 pins on your Raspberry Pi. You can find out more on GPIO from this tutorial.

Hardware

Preparing the hardware requires a bit of soldering. Don’t be afraid – it is so basic that even I could do it! The first thing to take care of is the button you press to open or close the garage door. Because we are going to control the remote from the Raspberry Pi we need to short-circuit the button, otherwise it will prevent the current from flowing and nothing will work. We also need to solder jumper wires which we will connect to GPIO. Note – GPIO pins are 3.3V and my garage opener used a 3.3V battery so I soldered the wires directly to the battery socket. This is what I ended up with:

Connecting remote to Raspberry PI

The important thing before connecting anything is to know what version of Raspberry Pi you have because the layout of GPIO pins varies between models and revisions. In this project I used pin 7 (GPIO4) because it seems to be the same in most/all of the models and revisions. I would, however, recommend double-checking your Raspberry Pi before wiring things up (if you decide to use a different pin you will have to update the script that controls GPIO). Now, you just need to connect the ‘hot’ wire to pin 7 (GPIO4) and the other wire to GND (e.g. pin 6). This is how things looked after I put them together:

Connected

Hint – for testing, instead of connecting the remote you can connect just an LED. This way you can quickly test if things work without leaving your neighbors wondering why the hell your garage door opens 10 times a minute. The GPIO tutorial shows how to do this and here is what I used for testing:

LED

Software

With the hardware ready we can now take care of the software. We want to be able to control GPIO from a web server. The problem is that you need root permissions to access GPIO and granting root permissions to a webserver is generally a bad idea. Fortunately, there is the pigpio library which allows you to control GPIO. pigpio contains a utility – pigpiod – that launches pigpio as a daemon and enables talking to pigpio using sockets or a pipe. You talk to the pigpio library by sending pigpio messages. The list of available messages with descriptions can be found here. One very useful thing is that when pigpiod is running you can send messages directly from the command line by writing to /dev/pigpio – e.g. echo "w 4 1" > /dev/pigpio sets GPIO4 to high (w – write, 4 – GPIO4, 1 – high) – we will use this command in our script.

Thanks to pigpiod the webserver no longer needs to have root permissions to control GPIO.
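
Since pressing the button on the remote is just a short press, “clicking” it from the Pi boils down to pulsing GPIO4 high and then low again – roughly like this (assuming pigpiod is running and the remote is wired to GPIO4; the half-second pause is an arbitrary choice):

echo "w 4 1" > /dev/pigpio
sleep 0.5
echo "w 4 0" > /dev/pigpio

This is, in essence, what the web application needs to do whenever you click the open/close button.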

I chose lighttpd as my webserver. Installing and configuring it is not hard and there are quite a few tutorials on the internet showing how to set it up. Once you have a webserver up and running you only need a script that will handle user requests and send messages to pigpiod. I wrote the script in Python so you will need to install Python on your Raspberry Pi if you have not done it yet.

If you have never configured a daemon (like me – I just used the init script template I found here and adapted it) or written anything in Python (like me – but it’s like 7 lines of code), fear not. All the code is ready. Just clone my repo (or even copy the files) from https://github.com/moozzyk/GarageDoor and follow the steps below:

  1. Install pigpio
  2. Configure pigpiod (pigpio daemon)
    • copy the pigpiod script from the GarageDoor/scripts/init.d folder to /etc/init.d/. You must be root to do this. If you are in the GarageDoor repo root run:
      sudo cp scripts/init.d/pigpiod /etc/init.d/
    • Make sure the pigpiod script you copied has execute permissions. If it does not, you need to chmod +x it. Again, you have to have root permissions to do that:
      sudo chmod +x /etc/init.d/pigpiod
    • Start pigpiod daemon with:
      service pigpiod start
      (You can stop the service with service pigpiod stop or check the status with service pigpiod status)
    • If you have a test setup with LED you can now easily check if things work – set GPIO4 to high by running
      echo "w 4 1" > /dev/pigpio
      The LED should light up. To turn off the LED execute
      echo "w 4 0" > /dev/pigpio
    • Configure the daemon to start on boot:
      sudo update-rc.d pigpiod defaults
  3. Configure lighttpd to execute Python scripts (if you are not running lighttpd configure your webserver accordingly). In /etc/lighttpd/lighttpd.conf:
    • In the server.modules section uncomment or add "mod_cgi" e.g.:
      server.modules = (
              "mod_simple_vhost",
              "mod_evhost",
              "mod_userdir",
              "mod_cgi",
              "mod_compress"
      )
    • In the cgi.assign section add a mapping for .py files:
      cgi.assign = (
              ".py" => "/usr/bin/python"
      )
  4. Prepare and run the application
    • copy the garage.py file from GarageDoor/Script/web to /var/www (or wherever your application root is)
    • Navigate to the application using a browser – you should see something like this:
      wp_ss_20160522_0002
    • Now, if everything is set up correctly, your garage door should open/close when you click the open/close button

Points of interest

Currently the application lets everyone who is on the home network play with your garage door. This may not be desirable. One improvement to make would be to authorize people before they can open or close the door. Another useful thing would be to log when the garage door is opened and closed. Finally, configuring the webserver to use https would be useful as well.

Creating and using strong named assemblies in ASP.NET 5

The support for signing assemblies was first introduced to ASP.NET 5 in the beta-6 release. In the upcoming RC1 release the way signing works has changed a little bit. In addition, from now on, assemblies that are part of ASP.NET 5 will be signed as well. Some people are questioning the decision to sign ASP.NET assemblies with a strong name but there are good reasons to do it. First, some companies have a policy that all the assemblies they build must be signed. Without signing, ASP.NET 5 could not be used in such environments since a signed assembly cannot depend on a non-signed assembly. Another reason is servicing – while it was never a goal to install ASP.NET 5 in the GAC, having signed assemblies leaves open the option of using the GAC to service ASP.NET 5 (e.g. in case of a critical security vulnerability).
There is a discussion about signing ASP.NET 5 assemblies where you can comment on the decision or ask questions.

In ASP.NET 5 there are three ways an assembly can be signed:

  • an assembly can be signed with a private and public key pair – this is just the regular signing everyone knows and “loves”; assemblies signed like this can be installed in the GAC etc.
  • an assembly can be delay signed – in this case the assembly is signed only with a public key and is really unusable unless you enable skipping strong name verification for this assembly using the sn tool (sn -Vr). The goal is to make it possible to use, in test/dev environments, assemblies signed with a private key to which developers don’t have access.
  • an assembly can be OSS signed – this is a new way of signing that is very similar to delay signing in that the assembly is signed only with a public key. The difference is that it is no longer required to skip strong name verification for this assembly. The assembly just appears to be strong named while it really is not. (Internally OSS signing is oftentimes referred to as “fake signing” – in the Roslyn github repo there is even a tool called FakeSign.) There are some scenarios that OSS signed assemblies cannot be used in – for instance an OSS signed assembly cannot be installed in the GAC. You can find more details on OSS signing and its limitations here.

If you want to sign an assembly in ASP.NET 5 you need to set appropriate compilation options in the project.json file. There are three settings that control signing:

  • keyFile
  • delaySign
  • useOssSigning

Depending on what combination of the signing options you use and the platform you are compiling on, your assemblies will be signed in one of the ways enumerated above.
If you have just the keyFile in your compilation options (e.g. "compilationOptions": { "keyFile": "myKey.snk" }) the assembly will be signed with a private and public key pair (i.e. the good(?) old signing) if you compile your project using Desktop Clr. CoreClr and Mono don’t support signing with a key pair so in these cases the public key will be extracted from the .snk file and used to OSS sign the assembly.
If you have both the keyFile and useOssSigning (e.g. "compilationOptions": { "keyFile": "myKey.snk", "useOssSigning": true }) the public key will be extracted from the .snk file and the assembly will be OSS signed with the public key regardless of the runtime you use for compilation.
If you have just useOssSigning set to true, the assembly will be OSS signed with a hardcoded public key that is part of the ASP.NET 5 runtime.
If you set both useOssSigning and delaySign to true, useOssSigning wins and the assembly will be OSS signed.
The delaySign option is ignored if you are compiling with CoreClr or Mono, or if the keyFile option is missing.
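
For completeness, delay signing would be configured more or less like this (using the same hypothetical myKey.snk as above – remember it only takes effect when compiling with Desktop Clr):

"compilationOptions": {
  "keyFile": "myKey.snk",
  "delaySign": true
}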

Side note: When we enabled signing in beta-6 I noticed that there was a common misconception that signing depends on the platform the code was compiled for and not the platform the code was compiled on. You could tell this because people used the keyFile and the (now removed) strongName options together to sign assemblies produced for desktop Clr using the key pair (.snk) and OSS sign assemblies for CoreClr (since signing with a key pair is not supported on CoreClr). This resulted in errors since these options were mutually exclusive. The thing is that assemblies are signed when they are compiled and to compile an assembly you use just one runtime. Once an assembly is compiled and signed it will work with any (desktop Clr, CoreClr or Mono) runtime. In other words – if you sign an assembly with the public/private key pair (which you can do only if you compile the assembly using Desktop Clr) it will run just fine on CoreClr or Mono (they basically don’t verify strong names). From this point of view using the keyFile and strongName options together did not make much sense since it was just like saying “I want this assembly to be signed with a key pair and also with a public key only”. This confusion was also one of the reasons why the strongName option was replaced with the useOssSigning option – the new option can be used together with the keyFile option and is more powerful.

Being able to OSS sign assemblies with an .snk opens up a couple of very interesting scenarios. For instance, imagine an open source project where the assemblies are signed using a private and public key pair during build. The .snk file is part of the project and is checked in to source control. This basically means that the project was meant to be compiled using Desktop Clr. Now, if you wanted to compile this project with Mono or CoreClr, things would most likely break – signing with a key pair did not work on Mono or CoreClr (before ASP.NET 5 RC1) so the assemblies would not be signed at all. This in turn may lead to further failures – e.g. if you use InternalsVisibleTo attributes they would have to be modified as well to remove the public key. So, before you even started working on a project you had to spend a lot of time introducing changes that would have to be thrown away when you committed your actual changes. OSS signing with .snk files fixes this because now, without any changes to the project, the assembly will be automatically signed with the public key and all code that depends on this assembly will just work.
The second scenario is similar to the first one except that this time the .snk file is not part of the project. So, you can clone and build the project (let’s call it A) but the assembly will not be signed. This is fine if assemblies that depend on A don’t have to be signed. But if they do, you would have to sign A. The problem is that you don’t have the .snk so you would have to create your own .snk. As a result all assemblies that use A now need to be changed and recompiled because the identity of A changed. This might be quite a task but is doable… if you have the code for all these assemblies. If you don’t, then you’re dead in the water – you won’t be able to do that. Again, OSS signing can fix this scenario. Just dump the public key from the original assembly to an .snk (with sn -e [assembly] [targetfile]), build A and OSS sign it with the public key you just dumped. Everything else will just work (modulo the known limitations of OSS signing, like not being able to install OSS signed assemblies to the GAC etc.) because the assembly will have the same identity as the original one.

Some miscellaneous notes:

  • assemblies signed with a private and public key pair can depend on OSS signed assemblies
  • to distinguish an OSS signed assembly from a delay signed assembly run sn -Vf [assembly]. sn -Vf will return the following error for OSS signed assemblies: Failed to verify assembly -- Strong name verification failed.
  • if you use xUnit as your testing framework you won’t be able to run tests on OSS signed assemblies with default settings. You will run into a limitation of OSS signing where the assembly cannot be loaded into an AppDomain where shadow copying is turned on. To fix this, run the xUnit runner with the -noshadow switch

Using native libraries in ASP.NET 5

In ASP.NET 5 RC1 we are adding built-in support for native libraries. This may seem a little surprising – after all, ASP.NET 5 is all about managed code running on different platforms. There are, however, scenarios where this managed code needs to talk to unmanaged code. You can find such examples in ASP.NET 5 itself – the Kestrel Http server is built on top of the libuv library and Entity Framework ships a SQLite provider that uses the SQLite library. Before adding support for native libraries developers had to load native libraries on their own and then manually find and expose exported functions. Given the differences between platforms (Windows vs. non-Windows), architectures (x86 vs. x64 vs. ARM) and, possibly, runtimes (desktop Clr vs. CoreClr vs. Mono) the code to achieve this was non-trivial. With the new feature the idea is that native libraries are shipped in a NuGet package, are part of a referenced project or live in a location from which they can be loaded automatically (e.g. /usr/lib) and the user just uses the DllImport attribute to get access to functions exported by the native library like this:

[DllImport("mylib")]
public static extern int get_number();

This is the simplest way of using the DllImport attribute. The name of the native library is passed as the parameter to the DllImport attribute while the exported function will be matched by convention using the name of the method attributed with the DllImport attribute. One important thing to note is that the name of the native library does not contain any filename extension. Both CoreClr and Mono will try appending different filename extensions if they can’t find a library with the exact name provided by the user and they will eventually use the extension specific for the given OS (Mono may also prepend the name with “lib”). Mono on OS X requires using __Internal as the name of the library – see below for more details.

Now that you know how to consume native libraries in ASP.NET 5, let’s take a look at how to create a NuGet package that contains native libraries that can be consumed using the DllImport attribute. The most important thing is the package structure. Native libraries need to be located in the $/runtimes/{runtime-id}/native folder where $ is the root of the package and {runtime-id} describes the target platform the library is supposed to run on. Runtime ids are a vast topic and I won’t go into a lot of detail here – especially since, currently, they are not fully baked in when using native libraries and project references. For our purposes we can assume that a runtime id consists of the operating system name, version and architecture. For working with native libraries effectively it is enough to know the following runtime ids:

  • win7-x64
  • win7-x86
  • win7-arm
  • osx.10.9-x64
  • osx.10.10-x64
  • osx.10.11-x64

You can find a few oddities in this list. First, a runtime id for Linux is missing. This is because you can’t really ship an .so that would work on any distribution of Linux in a package. While there are runtime ids that target selected distros (e.g. ubuntu.14.04-x64) or even just vanilla Linux (linux-x64), the recommended way of using native libraries on Linux is to have the user install the .so in a location from which it can be loaded automatically (e.g. install into /usr/lib or set the LD_LIBRARY_PATH environment variable accordingly). The second oddity is win7-arm. This seems odd because there was never an ARM version of Windows 7. The super-correct runtime id to use here is actually win10-arm (ASP.NET 5 is not supported on Windows RT, which would be win8-arm or win81-arm). However, if you used win10-arm you would need to remember to explicitly specify the runtime id each time you restore the package or publish your project. If you did not, you wouldn’t get your native library since without providing the runtime id packages are restored for win7-{arch} (win7-arm in this case). So, because there aren’t any ASP.NET 5 ARM runtimes that can run on pre-Windows 10 versions of Windows, it is safe to use win7-arm instead of win10-arm and it is much less troublesome. The last oddity is the multiple runtime ids for osx. Currently, if you want to support multiple versions of OS X (officially CoreClr supports only OS X 10.10) you need to specify each version separately. Even if you use the same .dylib across all the versions it needs to be copied for each version of OS X you want to support.
For project references you need to use the same folder structure as for packages except that your root is now your project folder (i.e. the folder where the project.json lives).
The dnx repo contains sample projects for testing packages and projects with native libraries. This is a good reference point if you ever get stuck.
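
To make the structure more concrete – a package carrying a hypothetical nativelib for Windows and OS X (the same made-up names are used in the project.lock.json example later in this post) could be laid out more or less like this:

$/lib/dnxcore50/NativeLib.dll (the managed assembly with the DllImport declarations)
$/runtimes/win7-x86/native/nativelib.dll
$/runtimes/win7-x64/native/nativelib.dll
$/runtimes/osx.10.11-x64/native/nativelib.dylib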

What’s been covered so far should be enough to get started with native libraries in ASP.NET 5. You now know how to bind to functions exported from native libraries and how to build NuGet packages or structure projects containing native libraries. If you would like to understand how things are implemented under the covers (or need to know how to troubleshoot things) read on.
Loading native libraries works differently on different operating systems. In addition, different runtimes (CoreClr, desktop Clr, Mono) handle native libraries differently as well. CoreClr is consistent across all the platforms – whenever a native library needs to be loaded, CoreClr will call ASP.NET 5 first and ask it to provide the library. If the library CoreClr is asking for is in one of the referenced packages or projects, the ASP.NET 5 runtime will load it using a function exposed by CoreClr that allows loading a native library from a path. If you turn on tracing you will see the following trace entries when a native library is loaded:

Information: [LoaderContainer]: Load unmanaged library name=nativelib
Information: [PackageAssemblyLoader]: Loaded unmanaged library=nativelib in 1ms

On non-CoreClr runtimes things are a bit messy because non-CoreClr runtimes keep the loading of native libraries to themselves. As a result it takes some tricks to help them load libraries from the right place. On Windows the LoadLibrary WINAPI function searches various locations when looking for a library. The search locations include the folders specified in the %PATH% environment variable. So, when running on desktop Clr, ASP.NET 5 will prepend the %PATH% environment variable with the paths to native libraries. Note that one of the disadvantages of this approach is that it is possible to hit the path length limit if a project references many packages with native libraries, which will most likely cause issues down the road. In traces this looks like this:

Information: [PackageDependencyProvider]: Enabling loading native libraries from packages by extendig %PATH% with: ;C:\Users\moozzyk\.dnx\packages\NativeLib\1.0.0\runtimes\win7-x86\native

On non-Windows systems the trick with the path does not work. While there is an environment variable that can be set to point the runtime linker at a non-default location to load libraries from (LD_LIBRARY_PATH on Linux and DYLD_LIBRARY_PATH on OS X), it has to be set before the process starts. The runtime linker reads and caches the value of the *_LIBRARY_PATH environment variable when the process starts and any further modifications have no effect. To work around this, ASP.NET 5 on Mono on OS X will pre-load native libraries. This works but has one drawback – the name passed to the DllImport attribute can no longer be the library name but has to be __Internal to tell Mono to look among already loaded libraries. This is ugly because it requires maintaining two sets of DllImport attributes – one that is specific to Mono on OS X with __Internal as the name and one that has the dll name, which works everywhere else. If you want to see this ugliness at work you can take a look at Kestrel or the project for testing the DllImport attribute. In this case the traces will look as follows:

Information: [PackageDependencyProvider]: Attempting to preload: /Users/moozzyk/.dnx/packages/NativeLib/1.0.0/runtimes/osx.10.11-x64/native/nativelib.dylib
Information: [PackageDependencyProvider]: Preloading: /Users/moozzyk/.dnx/packages/NativeLib/1.0.0/runtimes/osx.10.11-x64/native/nativelib.dylib succeeded
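
To give a rough idea of what maintaining the two sets of DllImport attributes can look like, here is a minimal sketch – the library name, function name and the way Mono on OS X is detected are all made up for illustration (Kestrel and the test project linked above solve this in their own, more complete way):

using System;
using System.IO;
using System.Runtime.InteropServices;

public static class NativeMethods
{
    // Used on CoreClr, desktop Clr and Mono on Linux - the runtime resolves
    // "nativelib" to nativelib.dll/.so/.dylib depending on the OS.
    private static class Imports
    {
        [DllImport("nativelib")]
        public static extern int get_number();
    }

    // Used on Mono on OS X - the library has already been pre-loaded, so
    // "__Internal" tells Mono to look among the already loaded libraries.
    private static class MonoOsxImports
    {
        [DllImport("__Internal", EntryPoint = "get_number")]
        public static extern int get_number();
    }

    // Crude, illustration-only check for "running on Mono on OS X".
    private static readonly bool IsMonoOnOsx =
        Type.GetType("Mono.Runtime") != null &&
        Directory.Exists("/System/Library/CoreServices");

    // Callers use this method and never touch the DllImport declarations directly.
    public static int GetNumber()
    {
        return IsMonoOnOsx ? MonoOsxImports.get_number() : Imports.get_number();
    }
}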

The pre-loading trick does not work on Mono on Linux. In this scenario the user must make sure that the library can be loaded by the runtime linker. I don’t think this is a big problem – as explained above, distributing .so’s in NuGet packages is not that great an idea anyway.

If you are trying to use a native library and ASP.NET 5 cannot find/load the library there are a few things you can try. First, turn on tracing and check if you see traces similar to the ones shown above. If you don’t see them and you are using a package reference, most likely the native library was not restored correctly. To check this, open the project.lock.json file and find the corresponding target section that contains both the runtime and the runtime id (i.e. you are looking for "DNXCore,Version=v5.0/osx.10.11-x64", not "DNXCore,Version=v5.0"). Inside, find the package description for your package with native libraries – e.g.:

"NativeLib/1.0.0": {
  "type": "package",
  "dependencies": {
    "System.Runtime": "4.0.21-beta-23506"
  },
  "compile": {
    "lib/dnxcore50/NativeLib.dll": {}
  },
  "runtime": {
    "lib/dnxcore50/NativeLib.dll": {}
  },
  "native": {
    "runtimes/osx.10.11-x64/native/nativelib.dylib": {}
  }
},

If the "native" section does not exist, is empty or does not contain the correct path to your library, this is the problem. In most cases the issue is with the runtime id – either the packages were not restored using the correct/expected runtime id or the package containing the native library did not have a library for the given runtime id.
If you are seeing problems with loading native libraries on Mono you can try using the MONO_LOG_LEVEL environment variable. Setting it to debug will print a lot of traces which can be helpful when troubleshooting issues. You can also filter out some categories. More details here.
Another environment variable useful for debugging problems with loading libraries on Linux is LD_DEBUG. Again, if you turn on all tracing you will get a lot of stuff. You can, however, filter things out. This post contains a nice summary of the available options.
It looks like on Mac you should be able to use DYLD_PRINT_LIBRARIES (I have not had to use it so far).
On Windows you can use Process Monitor to see what files are being accessed.
Also, remember that loading a dynamic library will fail if it depends on a library that cannot be found. You can check dependencies with ldd on Linux, otool -L on Mac OS X and dumpbin on Windows.
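
For example (the library names here are made up):

  • Linux: ldd nativelib.so
  • Mac OS X: otool -L nativelib.dylib
  • Windows: dumpbin /dependents nativelib.dll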

So, this is more or less how native libraries can be used in ASP.NET 5 and how you can troubleshoot issues.