
The SignalR for ASP.NET Core JavaScript Client, Part 2 – Outside the Browser

Last time we looked at using the ASP.NET Core SignalR TypeScript/JavaScript client in the browser. I mentioned, however, that the new client no longer has dependencies that prevent it from being used outside the browser. So, today we will take the client outside the browser and use it in a NodeJS application. We will add a NodeJS client for the SignalR Chat service we created last time. Initially we will write the client in JavaScript and then we will convert it to TypeScript.

Let’s start by creating a new folder in the SignalRChat repo and adding a new node project:

mkdir SignalRChatNode
cd SignalRChatNode
npm init

We will call the application signalrchatnode and we will leave all other options set to default values. (6425ec1)

Our application will read messages typed by the user and send them to the server. To handle user input we will use node’s readline module. To see that things work, let’s just add code that prompts the user for their name and displays it in the console. We will use it as the starting point of our application (34bc493).

const readline = require('readline');
let rl = readline.createInterface(process.stdin, process.stdout);

rl.question('Enter your name: ', name => {
  console.log(name);
  rl.close();
});

To communicate with the SignalR server we need to add the SignalR JavaScript client to the project using the following command (7875c07):

npm install @aspnet/signalr-client --save

We can now try starting the connection like this (3228a10):

const readline = require('readline');
const signalR = require('@aspnet/signalr-client');

let rl = readline.createInterface(process.stdin, process.stdout);

rl.question('Enter your name: ', name => {
  console.log(name);

  let connection = new signalR.HubConnection('http://localhost:5000/chat');
  connection.start()
  .catch(error => {
    console.error(error);
    rl.close();
  });
});

The code looks good but if you try running it, it will immediately fail with the following error:

Error: Failed to start the connection. ReferenceError: XMLHttpRequest is not defined
ReferenceError: XMLHttpRequest is not defined

What happened? The new JavaScript client no longer depends on the browser but still uses standard browser APIs like XMLHttpRequest or WebSocket to communicate with the server. If these APIs are not available, the client will fail. Fortunately, the required functionality can be easily polyfilled in the NodeJS environment. For now, we will just stick the polyfills on the global object. It’s not beautiful by any means but will do the trick. We are discussing how to make it better in the future but at the moment this is the way to go.

Depending on the features of SignalR you plan to use, you will need to provide the appropriate polyfills. Currently the absolute minimum is XMLHttpRequest. The SignalR client uses it to send the initial OPTIONS HTTP request which initializes the connection on the server side, and for the long polling transport. So, if you use only the long polling transport, XMLHttpRequest is the only polyfill you will need to provide. If you want to use the WebSockets transport you will need a WebSocket polyfill in addition to XMLHttpRequest. (We are thinking about skipping the OPTIONS request for WebSockets. If this is implemented you will not need the XMLHttpRequest polyfill when using the WebSockets transport.) For the ServerSentEvents transport you will need an EventSource polyfill. Finally, if you happen to use binary protocols (e.g. MessagePack) over the ServerSentEvents transport you will need polyfills for the atob/btoa functions. For simplicity, we will use the WebSockets transport in our application so we will add only polyfills for XMLHttpRequest and WebSocket:

npm install websocket xmlhttprequest --save

and make them available globally via:

XMLHttpRequest = require('xmlhttprequest').XMLHttpRequest;
WebSocket = require('websocket').w3cwebsocket;
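
For completeness – if you decided to use the ServerSentEvents transport or a binary protocol over it instead, the extra polyfills could be wired up the same way. The snippet below is only a sketch and assumes the eventsource, atob and btoa npm packages (none of them are needed for the WebSockets setup used in this post):

// Only needed for the ServerSentEvents transport (assumes the 'eventsource' npm package).
EventSource = require('eventsource');
// Only needed for binary protocols (e.g. MessagePack) over ServerSentEvents
// (assumes the 'atob' and 'btoa' npm packages).
atob = require('atob');
btoa = require('btoa');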

If we run the code now we will see something like this:

moozzyk:~/source/SignalRChat/SignalRChatNode$ node index.js
Enter your name: moozzyk
moozzyk
Information: WebSocket connected to ws://localhost:5000/chat?id=0d015ce4-3a78-4313-9343-cb6183a5e8ea
Information: Using HubProtocol 'json'.

which tells us that the client was able to connect successfully to the server. (946f85d)

Now, we need to add some code to handle user input and interact with the server and our Node SignalR Chat client is ready. (I admit that the user interface is not very robust but should be enough for the purpose of this post). You can now talk to browser clients from your node client and vice versa (0f7f71f):

[Screenshot: the Node client exchanging chat messages with browser clients]

Now let’s convert our client to TypeScript. We will start by creating a new TypeScript project with tsc --init. In the generated tsconfig.json file we will change the target to es6. We will also add an empty index.ts file and delete the existing index.js file (we will no longer need the index.js file since from now on it will be generated by compiling the newly created index.ts). (b83cf92) If you now run tsc you should see an empty index.js file created as a result of compiling the index.ts file. The last thing to do is to actually convert our JavaScript code to TypeScript. We could just translate it one-to-one but we can do a little better. TypeScript supports async/await which makes writing asynchronous code much easier. Since many of the SignalR client methods return Promises we can just await these calls instead of using .then/.catch functions. Here is what our node SignalRChat client written in TypeScript looks like (2a6d0e9):

import * as readline from "readline"
import * as signalR from "@aspnet/signalr-client"

(<any>global).XMLHttpRequest = require("xmlhttprequest").XMLHttpRequest;
(<any>global).WebSocket = require("websocket").w3cwebsocket;

let rl = readline.createInterface(process.stdin, process.stdout);

rl.question("Enter your name: ", async name => {
  console.log(name);
  let connection = new signalR.HubConnection("http://localhost:5000/chat");

  connection.on("broadcastMessage", (name, message) => {
    console.log(`${name}: ${message}`);
    rl.prompt(true);
  });

  try {
    await connection.start();
    rl.prompt();

    rl.on("line", async input => {
      if (input === "!q") {
        console.log("Stopping connection...");
        connection.stop();
        rl.close();
        return;
      }
      await connection.send("send", name, input);
    });
  }
  catch (error) {
    console.error(error);
    rl.close();
  }
});

You can run it by executing the following commands:
tsc
node index.js

Today we learned how to use the ASP.NET Core SignalR client in the NodeJS environment. We created a small node JavaScript application that was able to communicate with browser clients. Finally, we converted the JavaScript code to TypeScript and learned a little bit about TypeScript’s async/await feature.


The SignalR for ASP.NET Core JavaScript Client, Part 1 – Web Applications

The first official release of SignalR for ASP.NET Core – alpha1 – just shipped. In this release, all SignalR components were rewritten to make SignalR simpler, easier to use and more reliable.

The SignalR JavaScript client has always been a fundamental part of SignalR. Unfortunately, it had a few limitations which made it hard to extend or use outside the browser. The rewrite made it possible to take the client outside the browser (no more dependency on jQuery, YAY!) and opens up new scenarios. And this is what this blog post will focus on. I split the post into two parts. In the first part I will show how to use the client in a web application from both JavaScript and TypeScript. In the second part, we will look at NodeJS.

The plan for this part is to recreate the chat application from the tutorial on the previous version of SignalR and then to convert it to use the new SignalR Server and JavaScript client. The sample is simple enough to allow us to focus on SignalR aspects rather than on application intricacies. As a bonus, we will see what the experience of porting an application from the previous version of SignalR is. I created a github repo for the application where each commit is a step described in this post. I will refer to particular commits from this post to show changes for a given step.

Setting up the Server

Let’s start by creating an empty ASP.NET Core application. We can do that from the command line by running the dotnet new web command. (See this step on github).

Once the application is created we can start the server with dotnet run and make sure it works by navigating to http://localhost:5000 from a browser.

After we ensured that the application runs we can add SignalR server components. First, we need to add a reference to the SignalR package to the SignalRChat.csproj file (See this step on github).
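
The reference is a single PackageReference entry. Here is a sketch of what it might look like for the alpha1 release (the exact version number is an assumption – use whatever the current package version is):

<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.SignalR" Version="1.0.0-alpha1-final" />
</ItemGroup>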

Now we can add the Chat Hub class – we will just copy the code from the tutorial and tweak a few things. This is how the hub class looks after the changes:

using System;
using Microsoft.AspNetCore.SignalR;
namespace SignalRChat
{
    public class ChatHub : Hub
    {
        public void Send(string name, string message)
        {
            // Call the broadcastMessage method to update clients.
            Clients.All.InvokeAsync("broadcastMessage", name, message);
        }
    }
}

The changes we made were only cosmetic – we removed the reference to the System.Web namespace and added 'Core' to the Microsoft.AspNet.SignalR namespace so that it reads Microsoft.AspNetCore.SignalR. We also changed how we invoke the client-side method by passing the method name as the first parameter to the InvokeAsync call. (See this step on github).

Now that we created a hub, we need to configure the application to be aware of SignalR and to forward SignalR related messages to our hub. It’s as easy as calling the AddSignalR extension method in the ConfigureServices method of our Startup class and mapping the hub with the UseSignalR method. We will also add the static files middleware which will be responsible for serving static files. The Startup class should look like this:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSignalR();
    } 

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        app.UseFileServer();

        app.UseSignalR(routes =>
        {
            routes.MapHub<ChatHub>("chat");
        });
    }
}

(See this step on github).

And this is all the work we had to do to create a functional SignalR chat server. Now we can focus on the client side.

The JavaScript Client

In the new version of SignalR the JavaScript client is distributed using npm. The npm module contains a version of the client that can be included in a web page using the script tag, as well as typings and modules that can be consumed from TypeScript. To get the client to your machine you need to install npm if you haven’t already and run:

npm install @aspnet/signalr-client

The client will be installed in the node_modules folder and you can find the necessary files to include in the node_modules/@aspnet/signalr-client/dist/browser folder. You may wonder why there are so many files in this folder and what purpose they serve. Let’s go over them and explain.

First, you will find that there are two sets of files – files that contain ES5 in the names and files that do not. SignalR JavaScript uses ES6 (a.k.a. EcmaScript 2015) features like Promises or arrow functions. Not all browsers, however, support ES6 (looking at you, Internet Explorer). The files without ES5 in the names are meant to be used in browsers that support ES6. The files that contain ES5 in the names are the ES6 files transpiled to ES5. They are ES5 compatible and include all required dependencies. The downside of the ES5 files is that they are much bigger than the ES6 files.

Another interesting set of files are the files containing msgpackprotocol in the name. The new version of SignalR supports custom hub protocols – including binary protocols – and has built-in support for a binary protocol based on MessagePack. The JavaScript implementation of the MessagePack based hub protocol (using the msgpack5 library) turned out to be quite big so we moved it to a separate file. This way you can include the MessagePack hub protocol only if you want to use it and will not pay the price if you don’t.

You will also find that each file has a min counterpart. These are just minified versions of the corresponding files. You will want to use the minified versions in production but debugging is much easier with non-minified files so you may want to use non-minified versions during development.

Finally, there is also the third-party-notices.txt file. These are notices for the msgpack5 library and its dependencies used in the MessagePack hub protocol implementation.

Using the SignalR JavaScript Client from JavaScript

Now that we know a little bit about the JavaScript client, let’s update our application to use it.

First, let’s copy all the files from the node_modules/@aspnet/signalr-client/dist/browser folder to a new scripts/signalr folder under wwwroot. (See this step on github).

After the files are copied, let’s create the index.html file in the wwwroot folder and paste the contents of the html file from the tutorial. (See this step on github).

If you try to run the application at this point it will not work. The index.html has references to files like the jQuery library or the old SignalR client which don’t exist. Let’s fix that. Note that even though jQuery is no longer required by the new SignalR client I will continue to use it to minimize the number of changes I need to make. After all, this is not a tutorial on how to remove jQuery from your app so let’s not get sidetracked. Let’s start by sorting out the scripts situation. For jQuery, I will replace the link with one pointing to the jQuery CDN. For SignalR, I will replace the link to the signalR-2.2.1.min.js file with signalR-client-1.0.0-alpha1.js (feel free to use the ES5 version if you are using a browser that doesn’t support ES6 features) and remove the link to hubs since hub proxies are currently not supported. (See this step on github (github trick – notice that the link ends with ?w=1 – try removing it and see what happens. Very useful when reviewing some PRs)).
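
After these changes the script references in index.html could look more or less like this (a sketch – the jQuery version and the exact client file name depend on what you have available):

<script src="https://code.jquery.com/jquery-3.2.1.min.js"></script>
<script src="scripts/signalr/signalR-client-1.0.0-alpha1.js"></script>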

Now we can finally fix the code. Fortunately, there are not a lot of changes (a sketch of the updated script follows the list):

  • Instead of using proxies we will just create a new HubConnection
  • To register the callback for the client side broadcastMessage method we will use the on function
  • We will replace the done method used by jQuery deferreds with the then method used by ES6 promises
  • We will invoke hub methods with the invoke function

(See this step on github).
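
To make the list above more concrete, here is a rough sketch of what the updated script could look like. The element ids (discussion, message, sendmessage) come from the original tutorial markup, so treat them as assumptions:

// Prompt for the user name – same as in the original tutorial.
var name = prompt('Enter your name:', '');

// Instead of a generated proxy we create a HubConnection pointing at the hub route.
var connection = new signalR.HubConnection('/chat');

// The client-side broadcastMessage callback is registered with the 'on' function.
connection.on('broadcastMessage', function (name, message) {
    var encodedName = $('<div />').text(name).html();
    var encodedMsg = $('<div />').text(message).html();
    $('#discussion').append('<li><strong>' + encodedName + '</strong>: ' + encodedMsg + '</li>');
});

// 'start' returns an ES6 promise, so we use 'then' instead of jQuery's 'done'.
connection.start().then(function () {
    $('#sendmessage').click(function () {
        // Hub methods are invoked with the 'invoke' function.
        connection.invoke('send', name, $('#message').val());
        $('#message').val('').focus();
    });
});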

That’s pretty much it. If you run the application now you should be able to send and receive messages.

Using the JavaScript Client from TypeScript

We now know how to use the new SignalR JavaScript client from JavaScript code. The SignalR client module also contains all the bits necessary to consume it from TypeScript. To see how it works let’s take our chat application a bit further and convert it to TypeScript.

First, make sure that you have a recent TypeScript compiler installed – run tsc --version from the command line. If running the command fails or you have an older version installed, install the latest one using this command:

npm install typescript -g

After installing or updating the typescript compiler we will initialize a new project by running

tsc --init

in the project folder. This will create a tsconfig.json file which will look like this:

{
  "compilerOptions": {
    "target": "es6",
    "module": "commonjs",
    "strict": true,
    "noImplicitAny": true
  }
}

after performing some cleanup. We will also add a new chat.ts file which we will leave empty for now. If you run the tsc command from project root you should see an almost empty chat.js file generated from your chat.ts file. (See this step on github).

Because we are using TypeScript and will bring dependencies using npm we will no longer need JavaScript files for the browser so let’s delete them. (See this step on github).

To be able to add and restore the dependencies the client will need, let’s create a package.json file by executing the npm init command. We will leave default values for almost all settings except for the project name which needs to be lowercase.

PS C:\source\SignalRChat\SignalRChat> npm init
This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.
See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg> --save` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
name: (SignalRChat) signalrchat
version: (1.0.0)
description:
entry point: (chat.js)
test command:
git repository:
keywords:
author:
license: (ISC)
About to write to C:\source\SignalRChat\SignalRChat\package.json:

{
  "name": "signalrchat",
  "version": "1.0.0",
  "description": "",
  "main": "chat.js",
  "dependencies": {},
  "devDependencies": {},
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}

Is this ok? (yes)
PS C:\source\SignalRChat\SignalRChat>

Now let’s add our dependencies – signalr-client, jquery and jquery typings (they enable using jquery from TypeScript). We will use the --save-dev option to save the dependencies as dev dependencies in the package.json file.

npm install @aspnet/signalr-client --save-dev
npm install jquery --save-dev
npm install @types/jquery --save-dev

We also need to install browserify – a tool which we will use to create the final script to be used by the browser:

npm install -g browserify

(See this step on github).

We can now start working on the code. First, we need to import the dependencies we are going to use. We can do that by adding the following two lines at the top of our chat.ts file:

import * as signalR from "@aspnet/signalr-client"
import * as $ from "jquery"

Now we can move the script from our .html file to the .ts file. If you do that and play a little bit with the code you will notice that intellisense now tells you about class members and function parameters, and if you press F12 (in Visual Studio Code) it will take you to the function definition. Another thing you will see is an error on line 5. This is TypeScript telling you that there is a type mismatch for the parameter passed to the jQuery val() function – the prompt() function can return null, which is not a valid input for the val() function.

[Screenshot: Visual Studio Code showing the TypeScript type mismatch error]

In our case we know that prompt will return a string, so we will just cast the result to string to suppress the error.
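
For example, assuming the value ends up being passed to a jQuery val() call as described above, the cast could look like this (the displayname element id is just an assumption taken from the tutorial markup):

$('#displayname').val(<string>prompt('Enter your name:', ''));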

Since we moved the script to the .ts file we can now remove all the JavaScript code from our index.html file. We can also remove all the script tags since we no longer depend on them to bring in dependencies (we also already deleted the scripts). (See this step on github).

Let’s compile our chat.ts file now by running the tsc command. If you look at the generated chat.js file you will notice that it looks pretty much the same as the source chat.ts file with some additional lines at the top. You will also notice that it does not have the required dependencies (i.e. signalr-client and jquery). This is where browserify comes into play. We will use browserify to generate the final version of the file with all the dependencies. Let’s run the following command (you may need to create the wwwroot/scripts folder if it does not exist) from the project folder:

browserify .\chat.js -o .\wwwroot\scripts\chat.js

Take a look at the chat.js file that was created by browserify and you will see that the file is much bigger and contains all the required dependencies. If we include this file in our index.html with the script tag, start the application and open it in the browser, we will see that it works and we can send and receive messages. (See this step on github). We could even automate the build steps (e.g. with gulp) but that’s out of scope for this post.

Summary

In this post, we looked at using the new SignalR JavaScript client in web applications. We learned how to use the client from both JavaScript and TypeScript. We tried to port an application using the previous version of SignalR to see how hard it is. In the next part, we will take a look at using the client in NodeJS applications.

Entity Framework 6 Easter of Love

While Entity Framework Core along with ASP.NET Core gets all the hype today, Entity Framework 6 is still the workhorse of many applications running every day which won’t be converted to the Core world anytime soon, if at all. Because of this I decided to spend some time giving my EF extensions a small refresh to adapt to the changing landscape.

Github

Some of my extensions were hosted on Codeplex. I do most of my work on Github these days and Github is nowadays the de facto standard for open source projects. Codeplex not only looks dated but is also missing a lot of features Github has (searching the code on Github is far from perfect but Codeplex does not offer code search at all). All in all this turned out to be the right decision given that it was recently announced that Codeplex is being shut down. Anyways, my projects previously hosted on Codeplex have found new homes on Github.

Updating projects

I developed most of my EF extensions before Visual Studio 2015 was released. I found that opening them in Visual Studio 2015 was not a good experience – Visual Studio would update project/solution files automatically leaving unwanted changes. Therefore, I updated solution files to the version compatible with Visual Studio 2015. I also moved to a newer version of XUnit which does not require installing an XUnit runner extension in Visual Studio to enable running tests. Even though the solution files are marked as Visual Studio 2015 compatible they can be opened just fine with Visual Studio 2017 which shipped in the meantime.

New versions

This is probably the most exciting: I released new versions of a few of my extensions.

2nd Level Cache for Entity Framework

2nd Level Cache (a.k.a. EFCache) 1.1.0 contains only one new feature. This feature will, however, make everyone’s life easier. Until now the default caching policy cached results for all queries. In the vast majority of cases this behavior is not desired (or is plainly incorrect) so you had to create your own policy to limit caching only to results from selected tables. In EFCache 1.1.0 you can specify, when creating the default caching policy, the store entity sets (i.e. the ones which correspond to tables in the database) for which results should be cached. As a result you no longer have to create your own policy if you just want simple control over what gets cached. This change is not breaking.

Store Functions for Entity Framework

I received a couple of community Pull Requests which are worth sharing, so yesterday I published on NuGet the new version of Store Functions for Entity Framework (1.1.0) containing these contributions. pogi-b added support for built-in functions so you can now map built-in store functions (e.g. FORMAT or MAP) and use them in your queries. PaulVrugt added the ability to discover function stubs marked as private. The first change is not breaking. If you happened to have private function stubs that were not discovered before (a.k.a. dead code) they will be discovered now as a result of the second change.

EF6 CodeFirst View Generation T4 Template for C#

Visual Studio 2017 now requires extensions to use VSIX v3 format. The EF6 CodeFirst View Generation T4 Template for C# extension used format v1 and could not be installed in Visual Studio 2017. I updated the VSIX format to v3 and dropped support for Visual Studio 2010 and 2012.

Note: I have not updated other view generation templates for EF4/EF5 to work with Visual Studio 2017. If you need them to work with VS 2017 let me know and I will update.

Happy Easter!

 

Running ASP.NET Core Applications with IIS and Antares (Azure Websites)

I have seen a few articles (including the official docs on http://docs.asp.net) about publishing and running ASP.NET Core applications in IIS (or Azure/Antares). Unfortunately, I was not satisfied by any of them. Yes, they showed the steps you need to follow to make things work. Yes, they touched on some aspects of how things work. No, they did not explain what’s really happening, why it’s happening and how the blocks fit together. Hence this post – something I would like to read if I wanted to run my ASP.NET Core application using IIS or Azure.

Before we can get into details we need to understand how things work at a high level. The most important thing is that ASP.NET Core applications are no longer tightly coupled to IIS as they were in previous versions. Rather, IIS now acts merely as a reverse proxy and the application itself runs as a separate process using the Kestrel HTTP server. Decoupling ASP.NET from IIS was necessary to enable running ASP.NET Core applications on other platforms. It also makes development easier because it allows you to avoid the overhead of IIS/IIS Express during development by making it possible to run your application directly using Kestrel. Note that in a production environment it is recommended to always run Kestrel behind a reverse proxy like IIS or NGINX.

Going back to IIS, HTTP requests are handled as follows:

  1. IIS receives a request
  2. IIS (ASP.NET Core Module) starts the ASP.NET Core application in a separate process (if the application is not already running) – i.e. the application no longer runs as w3wp.exe but as dotnet.exe or myapp.exe
  3. IIS forwards the request to the application
  4. The application processes the request and sends the response to IIS
  5. IIS forwards the response to the client

Out of these 5 steps, step two is the most interesting, the most complicated and the most fragile. There are a few pieces that need to be aligned to successfully start an ASP.NET Core application from IIS: the ASP.NET Core Module, the web.config file and the application configuration. Let’s take a look at them one by one and discuss their role.

ASP.NET Core Module (sometimes abbreviated to ANCM) is a native IIS module that starts the application and implements the reverse proxy functionality. It is installed as part of the ASP.NET Core tooling for Visual Studio or can be installed separately (the Windows Server Hosting package from https://www.microsoft.com/net/download).

web.config – tells IIS to use the ASP.NET Core Module to process requests. It also tells the ASP.NET Core Module what process (application) to start. Note that you don’t use web.config to configure your application – it is only used by IIS. Here is what a typical web.config of an ASP.NET Core application looks like:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified"/>
    </handlers>
    <aspNetCore processPath="dotnet" arguments=".\HelloWorld.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" />
  </system.webServer>
</configuration>

Application configuration – a typical Main method of an ASP.NET Core application looks like this:

var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

From the perspective of running the application using IIS, the important lines are UseKestrel and UseIISIntegration. UseKestrel configures Kestrel as the application web server. This is important from the IIS perspective since you can’t use WebListener and IIS together at the moment (https://github.com/aspnet/IISIntegration/issues/8). UseIISIntegration does a bit of magic to fulfill the ASP.NET Core Module’s expectations and registers the IISMiddleware.

With the information above we can now drill into how IIS starts ASP.NET Core applications. When IIS receives a request for an ASP.NET Core application it passes the request to the ASP.NET Core Module. IIS was configured to do this by the following entry in the web.config file:

<handlers>
  <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified"/>
</handlers>

Upon receiving the request the ASP.NET Core Module will attempt to start the application if it is not already running. The name of the process to start and its arguments are specified in web.config as the processPath and arguments attributes on the aspNetCore element. The ASP.NET Core Module also sets a few environment variables for the application process – ASPNETCORE_PORT, ASPNETCORE_APPL_PATH and ASPNETCORE_TOKEN. Here is where the UseIISIntegration magic happens. When the application starts, the code inside the UseIISIntegration method tries to read these environment variables and if they are not empty they will be used to configure the url/port the application will listen on. (If the above environment variables are not set, UseIISIntegration won’t try to configure anything so that you can use your own settings when running the application directly, i.e. without IIS.) One important detail to pay attention to is where you put the call to UseIISIntegration when configuring your application with WebHostBuilder. You need to make sure that you don’t try to set server urls after you have called UseIISIntegration, otherwise the url set by UseIISIntegration will get overwritten and your application will be listening on a different port than the ASP.NET Core Module expects. As a result things will not work.
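
To illustrate the ordering issue, here is a minimal sketch (it assumes you want to use UseUrls to set a custom url for the case when the application is started without IIS):

var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            // Safe: the url set here is used when running without IIS and gets overridden
            // by UseIISIntegration when the ASPNETCORE_* environment variables are present.
            .UseUrls("http://localhost:5001")
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

// Wrong order: calling UseUrls *after* UseIISIntegration would overwrite the url/port
// configured from the ASPNETCORE_* environment variables, so IIS would forward requests
// to a port the application is not listening on.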

ASPNETCORE_TOKEN is a pairing token. The IIS middleware added to the pipeline by UseIISIntegration checks each request for this value and rejects requests that don’t contain it. This prevents the application from accepting requests that did not come from IIS.

These are the fundamental blocks needed to run your ASP.NET Core application with IIS and on Azure (Antares). You can now go and set things up as described in some tutorials and actually understand what you are doing and why.

There is, however, a  second level of confusion which happens when you start using Visual Studio and you see additional magic. The first thing that is confusing is web.config. You create a new ASP.NET Core application, open web.config and you see this line instead of what I showed above:

<aspNetCore processPath="%LAUNCHER_PATH%" arguments="%LAUNCHER_ARGS%" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" forwardWindowsAuthToken="false"/>

You start wondering what these %LAUNCHER_*% environment variables are, who is supposed to set them and how IIS (or whoever) knows what values to put there. Honestly, I don’t exactly know how these environment variables work but I treat them as placeholders that Visual Studio replaces when you start your application with F5/Ctrl+F5. When you publish your application to run in a production environment you can’t have these placeholders in web.config – no one knows about them and no one is going to replace them or set their values (I also don’t think you can just set environment variables with these names and have them picked up automatically – this is why I call these strings placeholders – they look as if they were environment variables but I don’t think they behave as environment variables). So how does this work then? If you look at your project.json file you will see a "scripts" section looking more or less like this:

"scripts":
{
  "postpublish": "dotnet publish-iis --publish-folder %publish:OutputPath% --framework %publish:FullTargetFramework%"
}

It just tells dotnet to run the publish-iis tool after the application is published. What is publish-iis (or a better question would be “what publish-iis isn’t”)? publish-iis isn’t… doing much. There are a lot of misconceptions about the publish-iis tool but it actually is a very simple tool. It goes to the folder where the application was published (not your project folder) and checks if it contains a web.config file. If it doesn’t it will create one. If it does it will check what kind of application you have (i.e. whether it is targeting full CLR or Core CLR and – for Core CLR – whether it is a portable or standalone application) and will set the values of the processPath and arguments attributes removing %LAUNCHER_PATH% and %LAUNCHER_ARGS% placeholders on the way. Note that publish-iis is not a Visual Studio tool. It’s independent and whether you are using Visual Studio or you publish your application from command line using dotnet publish it will work as long as it is configured as a postpublish script in your project.json. That’s pretty much what publish-iis is.

Troubleshooting

Since there are a few pieces that need to be aligned to run an ASP.NET Core application with IIS, things tend to go wrong. If you don’t know how these pieces are supposed to work together (i.e. you did not read the first part of this post) you can search the internet, try random hints from stackoverflow and pray, and most likely you still won’t be able to make your application work. But now you know how things are supposed to work, so you can be much more effective in troubleshooting problems. I will only focus on the infamous 502.3 Bad Gateway error as this is the most common one. I read about other failures (https://docs.asp.net/en/latest/publishing/iis.html) but so far have not seen any of them. So, if your application does not work with IIS, what should you do?

  • Make sure ASP.NET Core Module is installed. It will be installed on your dev box because it is installed with Visual Studio Web Tooling but it may not be on the server you are deploying your application to
  • Try running your published application without IIS – in the command prompt go to the folder where the application was published to and run dotnet {myapp}.dll (a portable Core CLR app) or {myapp}.exe (a standalone application or an application targeting full CLR)
  • If above works – make sure dotnet.exe is on the global %PATH%. It might be on the %PATH% for you but IIS is running using a different account which may not have path to dotnet.exe set. Add the path to the folder dotnet.exe lives in (typically C:\Program Files\dotnet) to the global %PATH% environment variable. I had to do iisreset.exe to make the change effective in IIS
  • Check web.config file of the published application – verify it has actual values and not %LAUNCHER_PATH%/%LAUNCHER_ARGS% placeholders. Make sure the processPath  corresponds to the application type (i.e. you can’t start a portable application with {myapp}.exe because there is only {myapp}.dll in the folder)
  • Check event log – ASP.NET Core Module writes to event log and you can find some useful (and some bogus) entries in the event log
  • Turn on logging – you probably noticed the stdoutLogFile and stdoutLogEnabled attributes on the aspNetCore element. They are very helpful for diagnosing issues – especially issues related to application start up. If you set stdoutLogEnabled to true, ASP.NET Core Module will write all the output the application writes to the console to the stdoutLogFile file. Note that the folder configured in the stdoutLogFile attribute must exist, otherwise (at least at the moment) the file won’t be created. In case of Azure/Antares the path should look like: \\?\%home%\LogFiles\stdout (the \\?\%home%\LogFiles folder always exists). In fact, if you publish your application to Azure/Antares using Visual Studio tooling it will make publish-iis set stdoutLogFile to point to the folder above and you can turn on logging by merely setting stdoutLogEnabled to true (see the sketch after this list).
    Note that stdout logging should not be used as a poor man’s file logging. It’s very useful to diagnose startup issues since they often happen before loggers are created and/or configured, but if you want to log to a file configure your application to use a real logging framework. (There is also a plan to create a simple file logger https://github.com/aspnet/Logging/issues/441)
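
For example, a published web.config with stdout logging turned on for Azure/Antares could contain an aspNetCore entry along these lines (a sketch – the dll name is whatever your application is called):

<aspNetCore processPath="dotnet" arguments=".\MyApp.dll" stdoutLogEnabled="true" stdoutLogFile="\\?\%home%\LogFiles\stdout" />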

app_offline

The last thing to mention is app_offline.htm. app_offline.htm is a feature of the ASP.NET Core Module: the module monitors your application directory and if it notices the app_offline.htm file (note – at the moment the file must not be empty: https://github.com/aspnet/IISIntegration/issues/174) it will stop your application and respond to requests with the contents of the app_offline.htm file. This makes application deployment much easier since you no longer need to deal with the problem of locked files that are loaded into the process of a running application. Once you remove the app_offline.htm file from the folder, ASP.NET Core Module will start your application on the first request. app_offline.htm is used by the deployment tool (WebDeploy) that ships with Visual Studio (note that in preview1 this tool uses app_offline.htm only when deploying to Azure/Antares but it is supposed to be fixed so that app_offline.htm is used by default when deploying to the file system (think: IIS)).

So, these are the basics of running ASP.NET Core application with IIS. The topic is much broader but the details described in this post will hopefully make working with IIS  in the ASP.NET Core world less painful and help those who need to transition from previous versions of ASP.NET.

 

The final version of the Store Functions for EntityFramework 6.1.1+ Code First convention released

Today I posted the final version of the Store Functions for Entity Framework Code First convention to NuGet. The instructions for downloading and installing the latest version of the package to your project are as described in my earlier blog post, only you no longer have to select the “Include Pre-release” option when using the UI or use the -Pre option when installing the package with the Package Manager Console. If you installed a pre-release version of this package to your project and would like to update to this version, just run the Update-Package EntityFramework.CodeFirstStoreFunctions command from the Package Manager Console.

What’s new in this version?

This new version contains only one addition compared to the beta-2 version – the ability to specify the name of the store type for parameters. This is needed in cases where a CLR type can be mapped to more than one store type. In case of the Sql Server provider there is only one type like this – the xml type. If you look at the Sql Server provider code (SqlProviderManifest.cs ln. 409) you will see that the store xml type is mapped to the EDM String type. This mapping is unambiguous when going from the store side. However, the type inference in the Code First Functions convention works from the other end. First we have a CLR type (e.g. string) which maps to the EDM String type which is then used to find the corresponding store type by asking the provider. For the EDM String type the Sql Server provider will return (depending on the facets) one of the nchar, nvarchar, nvarchar(max), char, varchar, varchar(max) types but it will never return the xml type. This makes it basically impossible to use the xml type when mapping store functions using the Code First Functions convention even though this is possible when using Database First EDMX based models.
Because, in the general case, the type inference will not always work if multiple store types are mapped to one EDM type, I made it possible to specify the store type of a parameter using the new StoreType property of the ParameterTypeAttribute. For instance, if you had a stored procedure called GetXmlInfo that takes an xml typed in/out parameter and returns some data (kind of a more advanced (spaghetti?) scenario but it came from a real world application where the customer wanted to replace EDMX with Code First, so they decided to use Code First Functions to map store functions and this was the only stored procedure they had problems with) you would use the following method to invoke this stored procedure:

[DbFunctionDetails(ResultColumnName = "Number")]
[DbFunction("MyContext", "GetXmlInfo")]
public virtual ObjectResult<int> GetXmlInfo(
    [ParameterType(typeof(string), StoreType = "XML")] ObjectParameter xml)
{
    return ((IObjectContextAdapter)this).ObjectContext
        .ExecuteFunction("GetXmlInfo", xml);
}

Because the parameter is in/out I had to use the ObjectParameter to pass the value and to read the value returned by the stored procedure. Because I used ObjectParameter I had to use the ParameterTypeAttribute to tell the convention what the CLR type of the parameter is. Finally, I also used the StoreType property which makes the convention skip asking the provider for the store type and use the type I passed instead.

That would be it. See my other blog posts here and here if you would like to see other supported scenarios. The code and issue tracking is on codeplex. Use and enjoy.

The final version of the Second Level Cache for EF6.1+ available.

This is it! Today I pushed the final version of the Second Level Cache for Entity Framework 6.1+ to NuGet. Now you no longer need to use -Pre when installing the package from the Package Manager Console nor remember to select the “Include Prerelease” option from the dropdown when installing the package using the “Manage NuGet Packages” window. The final version is functionally equivalent to the beta-2 version – there wasn’t a single change to the product code between the beta-2 version and this version. I would still encourage you to upgrade to the final version if you are using the beta-2 version.
There is also a new implementation of cache which uses Redis to store cached items. It was created by silentbobbert and you can get it from NuGet. I am pretty excited about this since it opens quite a few new possibilities. Try it out and see how it works.
Update/install, enjoy and file bugs (if you find any).

The Beta Version of Store Functions for EntityFramework 6.1.1+ Code First Available

This is very exciting! Finally after some travelling and getting the beta-2 version of the Second Level Cache for EF 6.1+ out the door I was able to focus on store functions for EF Code First. I pushed quite hard for the past two weeks and here it is – the beta version of the convention that enables using store functions (i.e. stored procedures, table valued functions etc.) in applications that use Code First approach and Entity Framework 6.1.1 (or newer). I am more than happy with the fixes and new features that are included in this release. Here is the full list:

  • Support for .NET Framework 4 – the alpha NuGet package contained only assemblies built against .NET Framework 4.5 and the convention could not be used when the project targeted .NET Framework 4. Now the package contains assemblies for both .NET Framework 4 and .NET Framework 4.5.
  • Nullable scalar parameters and result types are now supported
  • Support for type hierarchies – previously when you tried using a derived type the convention would fail because it was not able to find a corresponding entity set. This was fixed by Martin Klemsa in his contribution
  • Support for stored procedures returning multiple resultsets
  • Enabling using a different name for the method than the name of the stored procedure/function itself – a contribution from Angel Yordanov
  • Enabling using non-DbContext derived types (including static classes) as containers for store function method stubs and methods used to invoke store functions – another contribution from Angel Yordanov
  • Support for store scalar functions (scalar user defined functions)
  • Support for output (input/output really) parameters for stored procedures

This is a pretty impressive list. Let’s take a closer look at some of the items from the list.

Support for stored procedures returning multiple resultsets

Starting with version 5, the Entity Framework runtime has built-in support for stored procedures returning multiple resultsets (only when targeting .NET Framework 4.5). This is not a very well-known feature, which is not very surprising given that up to now it was practically unusable. Neither Code First nor EF Tooling supports creating models with store functions returning multiple resultsets. There are some workarounds like dropping to ADO.NET in case of Code First (http://msdn.microsoft.com/en-us/data/JJ691402.aspx) or editing the Edmx file manually (and losing the changes each time the model is re-generated) for Database First, but they do not really change the status of the native support for stored procedures returning multiple resultsets as being de facto an unfeature. This is changing now – it is now possible to decorate the method that invokes the stored procedure with the DbFunctionDetails attribute and specify return types for subsequent resultsets, and the convention will pick it up and create the metadata EF requires to execute such a stored procedure.

Using a different name for the method than the name of the stored procedure

When the alpha version shipped the name of method used to invoke a store function had to match the name of the store function. This was quite unfortunate since most of the time naming conventions used for database objects are different from naming conventions used in the code. This is now fixed. Now, the function name passed to the DbFunction attribute will be used as the name of the store function.

Support for output parameters

The convention now supports stored procedures with output parameters. I have to admit it ended up a bit rough because of how the value of the output parameter is being set (at least in case of Sql Server), but if you are in a situation where you have a stored procedure with an output parameter it is better than nothing. The convention will treat a parameter as an output parameter (in fact it will be an input/output parameter) if the type of the parameter is ObjectParameter. This means you will have to create and initialize the parameter yourself before passing it to the method. This is because the output value (at least for Sql Server) is set only after all the results returned by the stored procedure have been consumed, so you need to keep a reference to the parameter to be able to read the output value after you have consumed the results of the query. In addition, because the actual type of the parameter will only be known at runtime and not during model discovery, all ObjectParameter parameters have to be decorated with the ParameterTypeAttribute which specifies the type that will be used to build the model. Finally, the name of the parameter in the method must match the name of the parameter in the database (yeah, I had to debug EF code to figure out why things did not work) – fortunately casing does not matter. As I said – it’s quite rough but should work once you align all the moving pieces correctly.

Example 1

The following example illustrates how to use the functionality described above. It uses a stored procedure with an output parameter and returning multiple resultsets. In addition the name of the method used to invoke the store procedure (MultipleResultSets) is different from the name of the stored procedure itself (CustomersOrdersAndAnswer).

internal class MultupleResultSetsContextInitializer : DropCreateDatabaseAlways<MultipleResultSetsContext>
{
    public override void InitializeDatabase(MultipleResultSetsContext context)
    {
        base.InitializeDatabase(context);

        context.Database.ExecuteSqlCommand(
        "CREATE PROCEDURE [dbo].[CustomersOrdersAndAnswer] @Answer int OUT AS " +
        "SET @Answer = 42 " +
        "SELECT [Id], [Name] FROM [dbo].[Customers] " +
        "SELECT [Id], [Customer_Id], [Description] FROM [dbo].[Orders] " +
        "SELECT -42 AS [Answer]");
    }

    protected override void Seed(MultipleResultSetsContext ctx)
    {
        ctx.Customers.Add(new Customer
        {
            Name = "ALFKI",
            Orders = new List<Order>
                {
                    new Order {Description = "Pens"},
                    new Order {Description = "Folders"}
                }
        });

        ctx.Customers.Add(new Customer
        {
            Name = "WOLZA",
            Orders = new List<Order> { new Order { Description = "Tofu" } }
        });
    }
}

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Order> Orders { get; set; }
}

public class Order
{
    public int Id { get; set; }
    public string Description { get; set; }
    public virtual Customer Customer { get; set; }
}

public class MultipleResultSetsContext : DbContext
{
    static MultipleResultSetsContext()
    {
        Database.SetInitializer(new MultupleResultSetsContextInitializer());
    }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Conventions.Add(
            new FunctionsConvention<MultipleResultSetsContext>("dbo"));
    }

    public DbSet<Customer> Customers { get; set; }
    public DbSet<Order> Orders { get; set; }

    [DbFunction("MultipleResultSetsContext", "CustomersOrdersAndAnswer")]
    [DbFunctionDetails(ResultTypes = 
        new[] { typeof(Customer), typeof(Order), typeof(int) })]
    public virtual ObjectResult<Customer> MultipleResultSets(
        [ParameterType(typeof(int))] ObjectParameter answer)
    {
        return ((IObjectContextAdapter)this).ObjectContext
            .ExecuteFunction<Customer>("CustomersOrdersAndAnswer", answer);
    }
}

class MultipleResultSetsSample
{
    public void Run()
    {
        using (var ctx = new MultipleResultSetsContext())
        {
            var answerParam = new ObjectParameter("Answer", typeof (int));

            var result1 = ctx.MultipleResultSets(answerParam);

            Console.WriteLine("Customers:");
            foreach (var c in result1)
            {
                Console.WriteLine("Id: {0}, Name: {1}", c.Id, c.Name);
            }

            var result2 = result1.GetNextResult<Order>();

            Console.WriteLine("Orders:");
            foreach (var e in result2)
            {
                Console.WriteLine("Id: {0}, Description: {1}, Customer Name {2}", 
                    e.Id, e.Description, e.Customer.Name);
            }

            var result3 = result2.GetNextResult<int>();
            Console.WriteLine("Wrong Answer: {0}", result3.Single());

            Console.WriteLine("Correct answer from output parameter: {0}", 
               answerParam.Value);
        }
    }
}

The first half of the sample is just setting up the context and is rather boring. The interesting part starts at the MultipleResultSets method. The method is decorated with two attributes – the DbFunctionAttribute and the DbFunctionDetailsAttribute. The DbFunctionAttribute tells EF how the function will be mapped in the model. The first parameter is the namespace which in case of Code First is typically the name of the context type. The second parameter is the name of the function in the model. The convention also treats it as the name of the store function. Note that this name has to match the name of the stored procedure (or function) in the database and also the name used in the ExecuteFunction call.

The DbFunctionDetailsAttribute is what makes it possible to invoke a stored procedure returning multiple resultsets. The ResultTypes parameter allows specifying multiple types, each of which defines the type of items returned in subsequent resultsets. The types have to be types that are part of the model or, in case of primitive types, they have to have an Edm primitive type counterpart. In our sample the stored procedure returns Customer entities in the first resultset, Order entities in the second resultset and int values in the third resultset. One important thing to mention is that the first type in the ResultTypes array must match the generic type of the returned ObjectResult. To invoke the procedure (let’s ignore the parameter for a moment) you just call the method and enumerate the results. Once the results are consumed you can move to the next resultset. You do it by calling the GetNextResult<T> method where T is the element type of the next resultset. Note that you call GetNextResult<T> on the previously returned ObjectResult<> instance.

Finally, let’s take a look at the parameter. As described above it is of the ObjectParameter type to indicate an output parameter. It is decorated with the ParameterTypeAttribute which tells the convention the type of the parameter. Its name is the same as the name of the parameter in the stored procedure (ignoring casing). We create an instance of this parameter before invoking the stored procedure, enumerate all the results and only then read the value. If you tried reading the value before enumerating all the resultsets it would be null.
Running the sample code produces the following output:

Customers:
Id: 1, Name: ALFKI
Id: 2, Name: WOLZA
Orders:
Id: 1, Description: Pens, Customer Name ALFKI
Id: 2, Description: Folders, Customer Name ALFKI
Id: 3, Description: Tofu, Customer Name WOLZA
Wrong Answer: -42
Correct answer from output parameter: 42
Press any key to continue . . .

Enabling using non-DbContext derived types (including static classes) as containers for store function method stubs and methods used to invoke store functions

In the alpha version all the methods that were needed to handle store functions had to live inside a DbContext derived class (the type was a generic argument to the FunctionsConvention<T> where T was constrained to be a DbContext derived type). While this is a convention used by EF Tooling when generating code for Database First approach it is not a requirement. Since it blocked some scenarios (e.g. using extension methods which have to live in a static class) the requirement has been lifted by adding a non-generic version of the FunctionsConvention which takes the type where the methods live as a constructor parameter.

Support for store scalar functions

This is another a not very well-known EF feature. EF actually knows how to invoke user defined scalar functions. To use a scalar function you need to create a method stub. Method stubs don’t have implementation but when used inside a query they are recognized by the EF Linq translator and translated to a udf call.

Example 2
This example shows how to use non-DbContext derived classes for methods/method stubs and how to use scalar UDFs.

internal class ScalarFunctionContextInitializer : DropCreateDatabaseAlways<ScalarFunctionContext>
{
    public override void InitializeDatabase(ScalarFunctionContext context)
    {
        base.InitializeDatabase(context);

        context.Database.ExecuteSqlCommand(
            "CREATE FUNCTION [dbo].[DateTimeToString] (@value datetime) " + 
            "RETURNS nvarchar(26) AS " +
            "BEGIN RETURN CONVERT(nvarchar(26), @value, 109) END");
    }

    protected override void Seed(ScalarFunctionContext ctx)
    {
        ctx.People.AddRange(new[]
        {
            new Person {Name = "John", DateOfBirth = new DateTime(1954, 12, 15, 23, 37, 0)},
            new Person {Name = "Madison", DateOfBirth = new DateTime(1994, 7, 3, 11, 42, 0)},
            new Person {Name = "Bronek", DateOfBirth = new DateTime(1923, 1, 26, 17, 11, 0)}
        });
    }
}

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime DateOfBirth { get; set; }
}

internal class ScalarFunctionContext : DbContext
{
    static ScalarFunctionContext()
    {
        Database.SetInitializer(new ScalarFunctionContextInitializer());
    }

    public DbSet<Person> People { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Conventions.Add(
            new FunctionsConvention("dbo", typeof (Functions)));
    }
}

internal static class Functions
{
    [DbFunction("CodeFirstDatabaseSchema", "DateTimeToString")]
    public static string DateTimeToString(DateTime date)
    {
        throw new NotSupportedException();
    }
}

internal class ScalarFunctionSample
{
    public void Run()
    {
        using (var ctx = new ScalarFunctionContext())
        {
            Console.WriteLine("Query:");

            var bornAfterNoon =
               ctx.People.Where(
                 p => Functions.DateTimeToString(p.DateOfBirth).EndsWith("PM"));

            Console.WriteLine(bornAfterNoon.ToString());

            Console.WriteLine("People born after noon:");

            foreach (var person in bornAfterNoon)
            {
                Console.WriteLine("Name {0}, Date of birth: {1}",
                    person.Name, person.DateOfBirth);
            }
        }
    }
}

In the above sample the method stub for the scalar store function lives in the Functions class. Since this class is static it cannot be a generic argument to the FunctionsConvention<T> type, therefore we use the non-generic version of the convention and register it in the OnModelCreating method.
The method stub is decorated with the DbFunctionAttribute which tells the EF LINQ translator what function should be invoked. The important thing is that scalar store functions operate on a lower level (they exist only in the S-Space and don’t have a corresponding FunctionImport in the C-Space) and therefore the namespace used in the DbFunctionAttribute is no longer the name of the context but always has to be CodeFirstDatabaseSchema. Another consequence is that the return and parameter types must be of a type that can be mapped to a primitive EDM type. Once all these conditions are met you can use the method stub in LINQ queries. In the sample the function converts the date of birth to a string in a format which ends with “AM” or “PM”. This makes it possible to easily find people who were born before or after noon just by checking the suffix. All this happens on the database side – you can tell by looking at the results produced when running this code, which contain the SQL query the LINQ query was translated to:

Query:
SELECT
    [Extent1].[Id] AS [Id],
    [Extent1].[Name] AS [Name],
    [Extent1].[DateOfBirth] AS [DateOfBirth]
    FROM [dbo].[People] AS [Extent1]
    WHERE [dbo].[DateTimeToString]([Extent1].[DateOfBirth]) LIKE N'%PM'
People born after noon:
Name John, Date of birth: 12/15/1954 11:37:00 PM
Name Bronek, Date of birth: 1/26/1923 5:11:00 PM
Press any key to continue . . .

That’s pretty much it. The convention ships on NuGet – the process of installing the package is the same as it was for the alpha version and can be found here. If you are already using the alpha version in your project you can upgrade the package to the latest version with the Update-Package command. The code (including the samples) is on CodePlex.
I would like to thank again Martin and Angel for their contributions.
Play with the beta version and report bugs before I ship the final version.

Second Level Cache Beta-2 for EntityFramework 6.1+ shipped

When I published the Beta version of EFCache back in May I intentionally did not call it Beta-1 since at that time I did not plan to ship Beta-2. Instead, I wanted to go straight to the RTM. Alas! It turned out that EFCache could not be used with models where CUD (Create/Update/Delete) operations were mapped to stored procedures. Another problem was that in scenarios where there were multiple databases with the same schema EFCache returned cached results even if the database the app was connecting to changed. I also got a pull request which I wanted to include. As a result I decided to ship Beta-2. Here is the full list of what’s included in this release:

  • support for models containing CUD operations mapped to stored procedures – invoking a CUD operation will invalidate cache items for the given entity set
  • CacheTransactionHandler.AddAffectedEntitySets is now protected – makes subclassing CacheTransactionHandler easier
  • database name is now part of the cache key – enables using the same cache across multiple databases with the same schema/structure
  • new Cached() extension method forces caching results for selected queries – results for queries marked with the Cached() method will be cached regardless of the caching policy, whether the query has been blacklisted or whether it contains non-deterministic SQL functions (see the sketch after this list). This feature started with a contribution from ragoster
  • new name – Second Level Cache for Entity Framework 6.1+ – (notice ‘+’) to indicate that EFCache works not only with Entity Framework 6.1 but also with newer point releases (i.e. all releases up to, but not including, the next major release)
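
For illustration, forcing caching for a single query might look like this (a minimal sketch; the context and the Orders entity set are made-up names, and the shape of the query simply mirrors the NotCached() example shown later in this post):

var q = ctx.Orders.Where(o => o.Total < 100).Cached();
// the results of q will now be cached even if the caching policy or the
// blacklist would otherwise reject them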

The new version has been uploaded to NuGet, so update the package in your project and let me know of any issues. If I don’t hear back from you I will release the final version in a month or so.

Automating creating NuGet packages with MSBuild

NuGet is a great way of shipping projects. You work on a project, you publish a package and it is immediately available to, literally, millions of developers. Creating a package consists of a few steps like authoring a .nuspec file, creating a folder structure, copying the right files to the right folders/subfolders and calling the nuget pack command. While the steps are not complicated they are error prone. I learnt this lesson when I shipped the first alpha versions of some of my NuGet packages. What happened was that I would create a package and then I would start feeling some doubts – did I really build the project before copying the files? Did I copy the Release and not the Debug version? Did I sign the file? And then people started using my packages and asking (among other things) for versions that would work on other versions of the .NET Framework. This meant that the amount of work to create the package would basically at least double, since the steps I outlined above would have to be followed for each targeted platform. I had already decided to automate the process of creating NuGet packages, but having to ship a multiplatform NuGet package was the forcing function to actually do the work. I set a few goals before starting work on this:

  • I will be able to create a package with just one simple command
  • I will be able to create a multiplatform package
  • I will be able to exclude/include platform specific code
  • The assemblies included in the package will be signed
  • I will be able to strip InternalsVisibleTo from my assemblies
  • I won’t have to check any binaries in to the source control
  • None of the changes will break Visual Studio experience (i.e. I will be able to use Visual Studio the same way I was using it before the changes)

Since I am using Visual Studio for all the projects I ship NuGet packages for, using MSBuild to achieve my goal was a no-brainer. I started by enabling building the project for multiple .NET Framework versions (in my case I really only needed to target .NET Framework 4 and .NET Framework 4.5). This was pretty straightforward – if you open your .csproj file you will quickly find the following line:

<TargetFrameworkVersion>v4.5</TargetFrameworkVersion>

which, as you probably already guessed, indicates the target .NET Framework version. This can be parameterized as follows:

<TargetFrameworkVersion
Condition="'$(TargetFrameworkVersion)' != 'v4.0'">v4.5</TargetFrameworkVersion>

which will enable building the project for .NET Framework 4 just by passing/setting the TargetFrameworkVersion parameter to ‘v4.0’. If any other value is passed/set (or the value is not passed/set at all) the project will be built against .NET Framework 4.5. You can now test the project by building it from the developer command prompt. The following command will build the version for .NET Framework 4:

msbuild myproj.csproj /t:Build /p:TargetFrameworkVersion=v4.0

Note that the above command may fail. One of the reasons might be that your code uses APIs that are available only on .NET Framework 4.5 (async is probably the best example but there are many more). Therefore you may need a mechanism to exclude this code or replace it with a .NET Framework 4 counterpart. Typically this is done using the #if preprocessor directive. You just need a constant (let’s call it NET40) that will indicate that the code is being built against .NET Framework 4. We can define this constant in the .csproj file depending on the value of the TargetFrameworkVersion property we already set.
To do that we just need to create a new PropertyGroup just below the PropertyGroups used to set configuration (i.e. Debug/Release) specific properties. Here is how this new property group would look:

 <PropertyGroup>
<DefineConstants
Condition=" '$(TargetFrameworkVersion)' == 'v4.0'">$(DefineConstants);NET40</DefineConstants>
</PropertyGroup>

Now we can use the NET40 constant in #if preprocessor directives to conditionally compile or exclude code for a specific platform.
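
As a trivial sketch (the class and the strings are made up purely to show the mechanics), code guarded by the constant could look like this:

internal static class PlatformInfo
{
    public static string Describe()
    {
#if NET40
        // compiled only when TargetFrameworkVersion=v4.0, i.e. when the NET40 constant is defined
        return ".NET Framework 4 build";
#else
        return ".NET Framework 4.5 build";
#endif
    }
}
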
While we are at it we can take care of removing InternalsVisibleTo attributes from the code. We can use the same trick as we used for the TargetFrameworkVersion and define a constant if a property (let’s call it InternalsVisibleToEnabled) is set to false. By default its value will be set to true, but when building a package (as opposed to building the project itself) we will set it to false. This will allow us to use the constant in #if directives to exclude code we don’t want when building packages (a sketch follows the property group below). With this change the PropertyGroup created above turns into:

<PropertyGroup>
<DefineConstants
Condition=" '$(TargetFrameworkVersion)' == 'v4.0'">$(DefineConstants);NET40</DefineConstants>
<DefineConstants
Condition=" '$(InternalsVisibleToEnabled)'">$(DefineConstants);INTERNALSVISIBLETOENABLED</DefineConstants>
</PropertyGroup>
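
A sketch of how the constant would then be used in AssemblyInfo.cs (the test assembly name here is hypothetical):

// compiled in for regular builds only; package builds set InternalsVisibleToEnabled=false
#if INTERNALSVISIBLETOENABLED
[assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyProject.Tests")]
#endif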

Another important thing to look at is references. If you have a reference to an assembly that is not shipped with the .NET Framework you need to make sure that you are referencing the correct version of this assembly. This is especially noticeable if you include other multiplatform NuGet packages – referencing the wrong version of a package may cause weird build breaks, or APIs you would expect to be present will appear to be missing. This can be fixed with conditional references. For instance, in some of my projects I reference the EntityFramework NuGet package and I need to reference the correct version of EntityFramework.dll depending on the .NET Framework version I compile my project against. To solve this I can use the MSBuild Choose construct like this:

<Choose>
<When Condition="'$(TargetFrameworkVersion)' == 'v4.0'">
<ItemGroup>
<Reference Include="EntityFramework, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=MSIL">
<SpecificVersion>False</SpecificVersion>
<HintPath>..\packages\EntityFramework.6.1.0\lib\net40\EntityFramework.dll</HintPath>
</Reference>
<Reference Include="EntityFramework.SqlServer, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=MSIL">
<SpecificVersion>False</SpecificVersion>
<HintPath>..\packages\EntityFramework.6.1.0\lib\net40\EntityFramework.SqlServer.dll</HintPath>
</Reference>
</ItemGroup>
</When>
<Otherwise>
<ItemGroup>
<Reference Include="EntityFramework, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=MSIL">
<SpecificVersion>False</SpecificVersion>
<HintPath>..\packages\EntityFramework.6.1.0\lib\net45\EntityFramework.dll</HintPath>
</Reference>
<Reference Include="EntityFramework.SqlServer, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=MSIL">
<SpecificVersion>False</SpecificVersion>
<HintPath>..\packages\EntityFramework.6.1.0\lib\net45\EntityFramework.SqlServer.dll</HintPath>
</Reference>
</ItemGroup>
</Otherwise>
</Choose>

That’s pretty much it as far as building the project against different versions of the .NET Framework goes. With this we can look at the NuGet side of things. The first thing to do is to open the solution in Visual Studio, right-click the solution node in Solution Explorer and select “Enable NuGet Package Restore”. This will do a few things:

  • it will enable restoring missing NuGet packages when building. This is very useful and helps prevent having binaries checked in to source control (you almost always want it)
  • to achieve the above it will create a .nuget folder which will contain a few files, with NuGet.targets being the most important for us
  • it will modify .csproj files to define some properties but, most importantly, it will include the NuGet.targets file

Note: one of the files NuGet drops into the .nuget folder is NuGet.exe. If you are using source control (if you are not, you deserve to be named the lead developer of a new feature in that old COBOL project no one has touched in years – they did not use source control either so you should be fine) you want to check in all the files from the .nuget folder except NuGet.exe.
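
If you happen to use git, a single ignore entry takes care of this (a sketch assuming the default .nuget folder location):

# .gitignore: keep NuGet.targets and NuGet.Config, but never commit the executable
.nuget/NuGet.exe
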
Save your files or, even better, close Visual Studio. This is important. If you modify your project files both inside and outside VS, in the best case you will lose the changes you made outside VS (and that only if you pick the right option when VS detects you edited files outside of it). In the worst case you will end up in .csproj limbo where the .csproj files are only partially modified by the tools and you are not able to recreate what has been lost. The tools no longer work (or work randomly, which is even worse) because some changes are lost, the project does not compile, and the best option is just to revert all the changes to the last commit (if you are not using source control then, you know, the COBOL project needs a lead (or, actually, any) developer) and start from scratch.
Now we are ready to add the logic to build the NuGet package. We will create a new target called ‘CreatePackage’ (I originally wanted to call it ‘BuildPackage’ but it turned out that a target with that name already exists in the NuGet.targets file) which will be the entry point to build the package. Currently all the projects I ship NuGet packages for are simple and contain just one assembly with the product code and one assembly with tests. Since tests are not part of the NuGet packages I ship, I can add the CreatePackage target to the .csproj file containing the product code. If I had more assemblies I would create a separate file (probably called something like build.proj) where I would have all the targets needed to build and package all the assemblies. In this target I would need to:

  • build my assembly/assemblies for each platform I want to target
  • create a folder structure required by NuGet
  • copy artifacts built in the first step to folders from the second step
  • create the package using NuGet.exe pack command

The first thing is to further modify the .csproj file to fix a couple of problems. The first problem is that each time we build the project we overwrite the existing files. Normally (e.g. when working from Visual Studio) it is not a problem, but because we are going to invoke the build more than once (we need to build for multiple platforms) subsequent builds would overwrite files built by previous builds. We could solve this by copying files between builds but the solution I like better is to be able to tell the build where to place the build artifacts. This can be done by removing the OutputPath setting from the configuration specific PropertyGroups (e.g. ‘bin\Release\’) and adding the following conditional OutputPath property definition to the first PropertyGroup after the Configuration property. The new OutputPath definition looks like this:

<OutputPath Condition="'$(OutputPath)' == ''">bin\$(Configuration)\</OutputPath>

This allows passing the OutputPath value as a parameter – if it is not empty it will be used throughout the build, otherwise we set a default value which effectively is the same as the one that would have originally been set.
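
For example, mirroring the earlier build command (the output path shown is arbitrary):

msbuild myproj.csproj /t:Build /p:TargetFrameworkVersion=v4.0 /p:OutputPath=bin\Release\net40
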
The second thing to take care of is the case where there is no NuGet.exe file in the .nuget folder (I recommended not checking this file in, so it will be missing for newly cloned repos or when you clean the repo with git clean -xdf). This can be easily fixed by adding the following property definition to the first PropertyGroup:

<DownloadNuGetExe>true</DownloadNuGetExe>

Now whenever you invoke a target that depends on NuGet’s CheckPrerequisites target, NuGet will check if the NuGet.exe file is present and, if it is not, will download it.
With the above changes we are ready for the CreatePackage target. Here is what it looks like (I copied it from the Interactive Pre-Generated Views project):

<Target Name="CreatePackage" DependsOnTargets="CheckPrerequisites">
<Error Text="KeyFile parameter not spercified (/p:KeyFile=MyKey.snk)" Condition=" '$(KeyFile)' == ''" />
<PropertyGroup>
<Configuration>Release</Configuration>
<PackageSource>bin\$(Configuration)\Package\</PackageSource>
<NuSpecPath>..\tools\EFInteractiveViews.nuspec</NuSpecPath>
</PropertyGroup>
<RemoveDir Directories="$(PackageSource)" />
<MSBuild Projects="$(MSBuildThisFile)" Targets="Rebuild" Properties="InternalsVisibleToEnabled=false;SignAssembly=true;AssemblyOriginatorKeyFile=$(KeyFile);TargetFrameworkVersion=v4.5;OutputPath=bin\$(Configuration)\net45;Configuration=$(Configuration)" BuildInParallel="$(BuildInParallel)" />
<MSBuild Projects="$(MSBuildThisFile)" Targets="Rebuild" Properties="InternalsVisibleToEnabled=false;SignAssembly=true;AssemblyOriginatorKeyFile=$(KeyFile);TargetFrameworkVersion=v4.0;OutputPath=bin\$(Configuration)\net40;Configuration=$(Configuration)" BuildInParallel="$(BuildInParallel)" />
<Copy SourceFiles="bin\$(Configuration)\net45\$(AssemblyName).dll" DestinationFolder="$(PackageSource)\lib\net45" />
<Copy SourceFiles="bin\$(Configuration)\net40\$(AssemblyName).dll" DestinationFolder="$(PackageSource)\lib\net40" />
<Copy SourceFiles="$(NuSpecPath)" DestinationFolder="$(PackageSource)" />
<Exec Command='$(NuGetCommand) pack $(NuSpecPath) -BasePath $(PackageSource) -OutputDirectory bin\$(Configuration)' LogStandardErrorAsError="true" />
</Target>

Let’s look at this target line by line since there are some small additions I have not mentioned yet. One of my goals was to be able to sign the assembly. I do it by passing a path to my .snk file. To make sure I don’t build packages with unsigned assemblies I error out on line #2 if the path to the file was not provided (the error message also tells me the parameter name so that I don’t have to open the .csproj file each time I need to build the package to remember the name of the parameter used to pass the path to the .snk file). Then I define a few properties I use later:

  • Configuration – I always want to ship Release versions in NuGet packages so it is hardcoded to ‘Release’
  • PackageSource – the path where the target will create folders that will be later consumed by NuGet. Platform specific assemblies will be placed in subfolders
  • NuSpecPath – the path where the .nuspec file lives

After that I build the project. I do it twice – once for each target platform. You can see that when you look at the parameters passed to the Rebuild target – especially TargetFrameworkVersion (‘v4.5’ vs. ‘v4.0’) and OutputPath (‘net45’ vs. ‘net40’). SignAssembly and AssemblyOriginatorKeyFile make sure that the assembly will be signed. InternalsVisibleToEnabled is set to false to exclude InternalsVisibleTo attributes. Once the assemblies are built they are copied to the folders the NuGet package will be built from. Note that the Copy MSBuild task will create the target folder if it does not exist. The only thing remaining is to create the NuGet package (at long last!). I execute NuGet.exe (disguised as $(NuGetCommand) – a variable defined in the NuGet.targets file) with the pack option and provide the path to the .nuspec file, the folder structure to build the package from (the BasePath parameter) and the path to save the NuGet package to. The LogStandardErrorAsError attribute tells MSBuild to fail the build if the executed command returns a non-zero exit code.
Now I can build my NuGet packages with the following command (from the developer command prompt):
msbuild myproject.csproj /t:CreatePackage /p:KeyFile=MyKey.snk
Despite all the manual edits to the .csproj file, Visual Studio continues to work (at least at the same level it did before), which was the last of my goals.
This is it! With this guide you should be able to automate building your NuGet packages. Just in case, here are links to some of the changesets where I introduced this method:

Once you understand what’s going on and have the template I provided above, it takes less than 10 minutes to adapt it to your project (funnily enough, even coming up with the first version took me considerably less time (and wine) than describing it in this blog post).
Enjoy (and show me your package)!

Second Level Cache Beta for EF 6.1 Available

It took a bit longer than I expected but the Beta version of the Second Level Cache for EF 6.1 is now available on NuGet, with the source available on CodePlex.

What’s new in the beta version.

  • Support for .NET Framework 4 – the NuGet package now contains two versions of the second level cache assembly – one that is specific to .NET Framework 4 and one that is specific to .NET Framework 4.5. As a result it is now possible to use second level caching in EF 6.1 applications that target .NET Framework 4. (A side note: you should update NuGet packages if you change the .NET Framework version your application targets to avoid errors where (some of) the referenced assemblies target a different version of .NET Framework than the app itself).
  • Support for async (.NET Framework 4.5 only) – results for queries executed asynchronously are now cached.
  • The CachingPolicy and the DefaultCachingPolicy classes merged
  • The CachingPolicy.CanBeCached method was modified to take the SQL query and parameters. This enables more granular control over the cached results. Note that this is a breaking change from the alpha release and you will need to update your code if you created a custom CachingPolicy derived class
  • A new mechanism for excluding the results of specific queries from being cached.

Let’s take a closer look at the last two items. They allow achieving a similar goal but in different ways. Starting from the Beta version the SQL query and query parameters are passed to the CanBeCached method in addition to the store entity sets (which are abstractions of database tables). This allows inspecting the query and its parameters to decide whether the results yielded by the query should be cached. “Inspecting the query and its parameters” may sound easy but the queries generated by EF tend to be complicated and parsing them may not be trivial. The easier case is where you just have some queries you never want to cache the results for – instead of “inspecting” you just need to check whether the input query is one of these non-cacheable queries and, if it is, return false from the method (a sketch combining both ideas follows the side note below).
(Side note: I personally believe that, when it comes to caching, you are most often interested in the tables the results come from and not in what the query does. In this scenario the affectedEntitySets parameter might be more helpful because you can get the names of the tables used in the query without having to reverse engineer the query itself. You can get the names of the tables used in the query as follows:

affectedEntitySets.Select(e => e.Table ?? e.Name);

).
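
To illustrate both approaches, here is a minimal sketch of a CachingPolicy derived class. The exact parameter types of CanBeCached, the table name and the SQL string below are assumptions made for this example, so check them against the actual EFCache sources before using it:

using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Data.Entity.Core.Metadata.Edm;
using System.Linq;
using EFCache;

public class SelectiveCachingPolicy : CachingPolicy
{
    // a query whose results we never want cached (must match the SQL EF generates exactly)
    private const string NonCacheableQuery =
        "SELECT [Extent1].[Id] AS [Id] FROM [dbo].[AuditEntries] AS [Extent1]";

    protected override bool CanBeCached(
        ReadOnlyCollection<EntitySetBase> affectedEntitySets,
        string sql,
        IEnumerable<KeyValuePair<string, object>> parameters)
    {
        // compare the incoming query against the known non-cacheable query
        if (sql == NonCacheableQuery)
        {
            return false;
        }

        // skip caching for anything that touches the (hypothetical) Logs table
        if (affectedEntitySets.Select(e => e.Table ?? e.Name).Contains("Logs"))
        {
            return false;
        }

        return base.CanBeCached(affectedEntitySets, sql, parameters);
    }
}
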
Another way to prevent results for a specific query from being cached is to use the new built-in mechanism for blacklisting queries. This mechanism consists of two parts – a registrar that contains a list of blacklisted queries (i.e. queries whose results won’t be cached) and the DbQuery.NotCached() (and ObjectQuery.NotCached()) extension methods which make using the registrar easier. As a result, blacklisting a query is as easy as appending .NotCached() to the query, just like this:

var q = ctx.Entities.Where(e => e.Flag != null).OrderBy(e => e.Id).NotCached();

Blacklisted queries take precedence (i.e. win) over caching policy and therefore the CachingPolicy.CanBeCached() method will never be called for blacklisted queries.
The registrar itself is public and implements the singleton pattern. You can get the instance using the BlacklistedQueriesRegistrar.Instance property and then you will be able to register (or unregister) blacklisted queries manually (note however that queries are compared using string comparison and therefore the registered query must exactly match the query EF would produce – the extension methods ensure the queries are identical by calling .ToString()/.ToTraceString() on the DbQuery/ObjectQuery instance).
As you can see, both CachingPolicy.CanBeCached() and the built-in query blacklisting mechanism allow preventing the results of specific queries from being cached. The difference is that the built-in mechanism is very simple to use but does not give the flexibility and granularity offered by the CachingPolicy.CanBeCached() method. On the other hand, the flexibility and granularity of the CachingPolicy.CanBeCached() method is not free – you need to implement at least some logic yourself.

The road to “RTM”.
I consider the Beta version to be feature complete. I am planning to let it bake for a few weeks, fix reported issues and then release the final version. Your part is to try out the Beta version (or upgrade your projects) and report bugs.