Category Archives: .NET Framework

Running ASP.NET Core Applications with IIS and Antares (Azure Websites)

I have seen a few articles (including official docs on http://docs.asp.net) about publishing and running ASP.NET Core applications in IIS (or Azure/Antares). Unfortunately, I was not satisfied by any of them. Yes, they showed the steps you need to follow to make things work. Yes, they touched on some aspects of how things work. No, they did not explain what's really happening, why it's happening, and how the pieces fit together. Hence this post – something I would like to read if I wanted to run my ASP.NET Core application using IIS or Azure.

Before we can get into details we need to understand how things work at a high level. The most important thing is that ASP.NET Core applications are no longer tightly coupled to IIS as they were in previous versions. Rather, IIS now acts merely as a reverse proxy and the application itself runs as a separate process using the Kestrel HTTP server. Decoupling ASP.NET from IIS was necessary to enable running ASP.NET Core applications on other platforms. It also makes development easier because it allows you to avoid the overhead of IIS/IIS Express during development by making it possible to run your application directly using Kestrel. Note that in a production environment it is recommended to always run Kestrel behind a reverse proxy like IIS or NGINX.

Going back to IIS, HTTP requests are handled as follows:

  1. IIS receives a request
  2. IIS (ASP.NET Core Module) starts the ASP.NET Core application in a separate process (if the application is not already running) – i.e. the application no longer runs as w3wp.exe but as dotnet.exe or myapp.exe
  3. IIS forwards the request to the application
  4. The application processes the request and sends the response to IIS
  5. IIS forwards the response to the client

Out of these five steps, step two is the most interesting, the most complicated and the most fragile. There are a few pieces that need to be aligned to successfully start an ASP.NET Core application from IIS: the ASP.NET Core Module, the web.config file and the application configuration. Let's take a look at them one by one and discuss their roles.

ASP.NET Core Module (sometimes abbreviated to ANCM) is a native IIS module that starts the application and implements the reverse proxy functionality. It is installed as part of the ASP.NET Core tooling for Visual Studio or can be installed separately as the Windows (Server Hosting) package from https://www.microsoft.com/net/download.

web.config – tells IIS to use the ASP.NET Core Module to process requests. It also tells the ASP.NET Core Module what process (application) to start. Note that you don't use web.config to configure your application – it is only used by IIS. Here is what a typical web.config of an ASP.NET Core application looks like:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified"/>
    </handlers>
    <aspNetCore processPath="dotnet" arguments=".\HelloWorld.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" />
  </system.webServer>
</configuration>

Application configuration – a typical Main method of an ASP.NET Core application looks like this:

var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

host.Run();

From the perspective of running the application with IIS, the lines that are important are UseKestrel and UseIISIntegration. UseKestrel configures Kestrel as the application's web server. This is important from the IIS perspective since at the moment you can't use WebListener and IIS together (https://github.com/aspnet/IISIntegration/issues/8). UseIISIntegration does a bit of magic to fulfill the ASP.NET Core Module's expectations and registers the IISMiddleware.

With the information above we can now drill into how IIS starts ASP.NET Core applications. When IIS receives a request for an ASP.NET Core application it passes the request to the ASP.NET Core Module. IIS was configured to do this by the following entry in the web.config file:

<handlers>
  <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified"/>
</handlers>

Upon receiving the request the ASP.NET Core Module will attempt to start the application if it is not already running. The name of the process to start and its arguments are specified in web.config as the processPath and arguments attributes on the aspNetCore element. The ASP.NET Core Module also sets a few environment variables for the application process – ASPNETCORE_PORT, ASPNETCORE_APPL_PATH and ASPNETCORE_TOKEN. Here is where the UseIISIntegration magic happens. When the application starts, the code inside the UseIISIntegration method tries to read these environment variables and, if they are not empty, uses them to configure the URL/port the application will listen on. (If these environment variables are not set, UseIISIntegration won't try to configure anything, so you can use your own settings when running the application directly, i.e. without IIS.)

One important detail to pay attention to is where you put the call to UseIISIntegration when configuring your application with WebHostBuilder. Make sure you don't set server URLs after you call UseIISIntegration, otherwise the URL set by UseIISIntegration will get overwritten and your application will be listening on a different port than the ASP.NET Core Module expects. As a result things will not work.
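For illustration, here is a minimal sketch (assuming the WebHostBuilder API from the snippet above) of an ordering that keeps both scenarios working – the hardcoded URL (http://localhost:5001 is just an example) is only effective when the application is started directly, because UseIISIntegration overrides it with the port provided by the ASP.NET Core Module:

// Set your own URLs BEFORE calling UseIISIntegration so that the ASPNETCORE_PORT-based
// URL configured by UseIISIntegration is not overwritten.
var host = new WebHostBuilder()
    .UseKestrel()
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseUrls("http://localhost:5001")  // used only when running without IIS
    .UseIISIntegration()               // overrides the URL when started by IIS
    .UseStartup<Startup>()
    .Build();

host.Run();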

ASPNETCORE_TOKEN is a pairing token. The IIS middleware added to the pipeline by UseIISIntegration checks each request for this value and rejects requests that don't carry it. This prevents the application from accepting requests that did not come from IIS.
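To make the pairing token idea more concrete, here is a conceptual sketch of what such a check could look like. This is not the actual IISMiddleware source – the header name (MS-ASPNETCORE-TOKEN) and the class shape are assumptions used only for illustration:

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// Conceptual sketch only – illustrates the pairing-token check, not the real IISMiddleware.
public class PairingTokenMiddleware
{
    private readonly RequestDelegate _next;
    private readonly string _pairingToken =
        Environment.GetEnvironmentVariable("ASPNETCORE_TOKEN");

    public PairingTokenMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        // Reject requests whose token does not match the one the process was started with.
        string receivedToken = context.Request.Headers["MS-ASPNETCORE-TOKEN"];
        if (!string.Equals(_pairingToken, receivedToken, StringComparison.Ordinal))
        {
            context.Response.StatusCode = 400;
            return;
        }

        await _next(context);
    }
}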

These are the fundamental blocks needed to run your ASP.NET Core application with IIS and on Azure (Antares). You can now go and set things up as described in some tutorials and actually understand what you are doing and why.

There is, however, a second level of confusion which happens when you start using Visual Studio and see additional magic. The first confusing thing is web.config. You create a new ASP.NET Core application, open web.config and you see this line instead of what I showed above:

<aspNetCore processPath="%LAUNCHER_PATH%" arguments="%LAUNCHER_ARGS%" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" forwardWindowsAuthToken="false"/>

You start wondering what these %LAUNCHER_*% environment variables are, who is supposed to set them and how IIS (or whoever) knows what values to put there. Honestly, I don't know exactly how these environment variables work, but I treat them as placeholders that Visual Studio replaces when you start your application with F5/Ctrl+F5. When you publish your application to run in a production environment you can't have these placeholders in web.config – no one knows about them and no one is going to replace them or set their values. (I also don't think you can just set environment variables with these names and have them picked up automatically – this is why I call these strings placeholders: they look like environment variables but I don't think they behave like environment variables.) So how does this work then? If you look at your project.json file you will see a "scripts" section looking more or less like this:

"scripts":
{
"postpublish":
"dotnet publish-iis --publish-folder %publish:OutputPath% --framework %publish:FullTargetFramework%"
}

It just tells dotnet to run the publish-iis tool after the application is published. So what is publish-iis (or, a better question, what isn't publish-iis)? publish-iis isn't… doing much. There are a lot of misconceptions about the publish-iis tool, but it actually is a very simple tool. It goes to the folder where the application was published (not your project folder) and checks whether it contains a web.config file. If it doesn't, it will create one. If it does, it will check what kind of application you have (i.e. whether it targets the full CLR or Core CLR and – for Core CLR – whether it is a portable or a standalone application) and set the values of the processPath and arguments attributes, removing the %LAUNCHER_PATH% and %LAUNCHER_ARGS% placeholders on the way. Note that publish-iis is not a Visual Studio tool. It is independent – whether you are using Visual Studio or publishing your application from the command line with dotnet publish, it will work as long as it is configured as a postpublish script in your project.json. That's pretty much what publish-iis is.

Troubleshooting

Since there are a few pieces that need to be aligned to run an ASP.NET Core application with IIS, things tend to go wrong. If you don't know how these pieces are supposed to work together (i.e. you did not read the first part of this post) you can search the internet, try random hints from Stack Overflow, pray – and most likely still not be able to make your application work. But now you know how things are supposed to work, so you can be much more effective at troubleshooting. I will only focus on the infamous 502.3 Bad Gateway error as it is the most common one. I have read about other failures (https://docs.asp.net/en/latest/publishing/iis.html) but so far have not seen any of them. So, if your application does not work with IIS, what do you do?

  • Make sure the ASP.NET Core Module is installed. It will be installed on your dev box because it comes with the Visual Studio Web Tooling, but it may not be installed on the server you are deploying your application to
  • Try running your published application without IIS – in the command prompt go to the folder the application was published to and run dotnet {myapp}.dll (for a portable Core CLR app) or {myapp}.exe (for a standalone application or an application targeting the full CLR)
  • If the above works – make sure dotnet.exe is on the global %PATH%. It might be on the %PATH% for you, but IIS runs under a different account which may not have the path to dotnet.exe set. Add the folder dotnet.exe lives in (typically C:\Program Files\dotnet) to the global %PATH% environment variable. I had to run iisreset.exe to make the change effective in IIS
  • Check the web.config file of the published application – verify it has actual values and not the %LAUNCHER_PATH%/%LAUNCHER_ARGS% placeholders. Make sure the processPath corresponds to the application type (i.e. you can't start a portable application with {myapp}.exe because there is only {myapp}.dll in the folder)
  • Check the event log – the ASP.NET Core Module writes to the event log and you can find some useful (and some bogus) entries there
  • Turn on logging – you probably noticed the stdoutLogFile and stdoutLogEnabled attributes on the aspNetCore element. They are very helpful for diagnosing issues – especially issues related to application startup. If you set stdoutLogEnabled to true, the ASP.NET Core Module will write everything the application writes to the console to the stdoutLogFile file. Note that the folder configured in the stdoutLogFile attribute must exist, otherwise (at least at the moment) the file won't be created. In the case of Azure/Antares the path should look like: \\?\%home%\LogFiles\stdout (the \\?\%home%\LogFiles folder always exists). In fact, if you publish your application to Azure/Antares using the Visual Studio tooling, publish-iis will set stdoutLogFile to point to the folder above and you can turn on logging merely by setting stdoutLogEnabled to true.
    Note that stdout logging should not be used as a poor man's file logging. It is very useful for diagnosing startup issues since they often happen before loggers are created and/or configured, but if you want to log to a file, configure your application to use a real logging framework. (There is also a plan to create a simple file logger: https://github.com/aspnet/Logging/issues/441)

app_offline

The last thing to mention is app_offline.htm. app_offline.htm is a feature of the ASP.NET Core Module: the module monitors your application directory and, if it notices an app_offline.htm file (note – at the moment the file must not be empty: https://github.com/aspnet/IISIntegration/issues/174), it stops your application and responds to requests with the contents of the app_offline.htm file. This makes application deployment much easier since you no longer need to deal with files that are locked because they are loaded into the process of a running application. Once you remove the app_offline.htm file from the folder, the ASP.NET Core Module will start your application on the first request. app_offline.htm is used by the deployment tool (WebDeploy) that ships with Visual Studio (note that in preview1 this tool uses app_offline.htm only when deploying to Azure/Antares, but it is supposed to be fixed so that app_offline.htm is used by default when deploying to the file system (think: IIS) as well).

So, these are the basics of running ASP.NET Core applications with IIS. The topic is much broader, but the details described in this post will hopefully make working with IIS in the ASP.NET Core world less painful and help those who need to transition from previous versions of ASP.NET.

 

The final version of the Store Functions for EntityFramework 6.1.1+ Code First convention released

Today I posted the final version of the Store Functions for Entity Framework Code First convention to NuGet. The instructions for downloading and installing the latest version of the package to your project are as described in my earlier blog post, except that you no longer have to select the "Include Pre-release" option when using the UI or use the –Pre option when installing the package with the Package Manager Console. If you installed a pre-release version of this package in your project and would like to update to this version, just run the Update-Package EntityFramework.CodeFirstStoreFunctions command from the Package Manager Console.

What’s new in this version?

This new version contains only one addition compared to the beta-2 version – the ability to specify the name of the store type for parameters. This is needed in cases where a CLR type can be mapped to more than one store type. In the case of the SQL Server provider there is only one type like this – the xml type. If you look at the SQL Server provider code (SqlProviderManifest.cs ln. 409) you will see that the store xml type is mapped to the EDM String type. This mapping is unambiguous when going from the store side. However, the type inference in the Code First Functions convention works from the other end. First we have a CLR type (e.g. string) which maps to the EDM String type, which is then used to find the corresponding store type by asking the provider. For the EDM String type the SQL Server provider will return (depending on the facets) one of the nchar, nvarchar, nvarchar(max), char, varchar, varchar(max) types, but it will never return the xml type. This makes it basically impossible to use the xml type when mapping store functions with the Code First Functions convention, even though it is possible when using Database First EDMX based models.
Because, in the general case, type inference will not work if multiple store types are mapped to one EDM type, I made it possible to specify the store type of a parameter using the new StoreType property of the ParameterTypeAttribute. For instance, say you had a stored procedure called GetXmlInfo that takes an xml typed in/out parameter and returns some data (kind of an advanced (spaghetti?) scenario, but it came from a real world application where the customer wanted to replace EDMX with Code First, decided to use Code First Functions to map store functions, and this was the only stored procedure they had problems with). You would use the following method to invoke this stored procedure:

[DbFunctionDetails(ResultColumnName = "Number")]
[DbFunction("MyContext", "GetXmlInfo")]
public virtual ObjectResult<int> GetXmlInfo(
    [ParameterType(typeof(string), StoreType = "XML")] ObjectParameter xml)
{
    return ((IObjectContextAdapter)this).ObjectContext
        .ExecuteFunction("GetXmlInfo", xml);
}

Because the parameter is in/out I had to use ObjectParameter to pass the value and to read the value returned by the stored procedure. Because I used ObjectParameter I had to use the ParameterTypeAttribute to tell the convention what the CLR type of the parameter is. Finally, I also set the StoreType property, which makes the convention skip asking the provider for the store type and use the type I passed instead.
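A hypothetical usage sketch (the context name, parameter value and output handling are made up for illustration) shows the pattern – create the ObjectParameter up front, consume the results, and only then read the output value:

using (var ctx = new MyContext())
{
    // The parameter is in/out, so it must be created up front and kept around
    // to read the value returned by the stored procedure.
    var xmlParam = new ObjectParameter("xml", "<info><number>7</number></info>");

    var numbers = ctx.GetXmlInfo(xmlParam).ToList(); // consume the results first

    Console.WriteLine("Returned numbers: {0}", string.Join(", ", numbers));
    Console.WriteLine("Xml parameter after the call: {0}", xmlParam.Value);
}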

That would be it. See my other blog posts here and here if you would like to see other supported scenarios. The code and issue tracking is on codeplex. Use and enjoy.

The final version of the Second Level Cache for EF6.1+ available.

This is it! Today I pushed the final version of the Second Level Cache for Entity Framework 6.1+ to NuGet. Now you no longer need to use -Pre when installing the package from the Package Manager Console nor remember to select the “Include Prerelease” option from the dropdown when installing the package using the “Manage NuGet Packages” window. The final version is functionally equivalent to the beta-2 version – there wasn’t a single change to the product code between the beta-2 version and this version. I would still encourage you to upgrade to the final version if you are using the beta-2 version.
There is also a new implementation of cache which uses Redis to store cached items. It was created by silentbobbert and you can get it from NuGet. I am pretty excited about this since it opens quite a few new possibilities. Try it out and see how it works.
Update/install, enjoy and file bugs (if you find any).

The Beta Version of Store Functions for EntityFramework 6.1.1+ Code First Available

This is very exciting! Finally after some travelling and getting the beta-2 version of the Second Level Cache for EF 6.1+ out the door I was able to focus on store functions for EF Code First. I pushed quite hard for the past two weeks and here it is – the beta version of the convention that enables using store functions (i.e. stored procedures, table valued functions etc.) in applications that use Code First approach and Entity Framework 6.1.1 (or newer). I am more than happy with the fixes and new features that are included in this release. Here is the full list:

  • Support for .NET Framework 4 – the alpha NuGet package contained only assemblies built against .NET Framework 4.5 and the convention could not be used when the project targeted .NET Framework 4. Now the package contains assemblies for both .NET Framework 4 and .NET Framework 4.5.
  • Nullable scalar parameters and result types are now supported
  • Support for type hierarchies – previously when you tried using a derived type the convention would fail because it was not able to find a corresponding entity set. This was fixed by Martin Klemsa in his contribution
  • Support for stored procedures returning multiple resultsets
  • Enabling using a different name for the method than the name of the stored procedure/function itself – a contribution from Angel Yordanov
  • Enabling using non-DbContext derived types (including static classes) as containers for store function method stubs and methods used to invoke store functions – another contribution from Angel Yordanov
  • Support for store scalar functions (scalar user defined functions)
  • Support for output (input/output really) parameters for stored procedures

This is a pretty impressive list. Let’s take a closer look at some of the items from the list.

Support for stored procedures returning multiple resultsets

Starting with version 5, the Entity Framework runtime has built-in support for stored procedures returning multiple resultsets (only when targeting .NET Framework 4.5). This is not a very well-known feature, which is not very surprising given that up to now it was practically unusable. Neither Code First nor EF Tooling supports creating models with store functions returning multiple resultsets. There are some workarounds, like dropping down to ADO.NET in the case of Code First (http://msdn.microsoft.com/en-us/data/JJ691402.aspx) or editing the Edmx file manually (and losing the changes each time the model is re-generated) for Database First, but they do not really change the status of the native support for stored procedures returning multiple resultsets as being de facto an unfeature. This is changing now – it is now possible to decorate the method that invokes the stored procedure with the DbFunctionDetails attribute and specify return types for subsequent resultsets, and the convention will pick it up and create the metadata EF requires to execute such a stored procedure.

Using a different name for the method than the name of the stored procedure

When the alpha version shipped, the name of the method used to invoke a store function had to match the name of the store function. This was quite unfortunate since most of the time the naming conventions used for database objects are different from the naming conventions used in code. This is now fixed. The function name passed to the DbFunction attribute will be used as the name of the store function.

Support for output parameters

The convention now supports stored procedures with output parameters. I have to admit it ended up a bit rough because of how the value of the output parameter is set (at least in the case of SQL Server), but if you are in a situation where you have a stored procedure with an output parameter it is better than nothing. The convention will treat a parameter as an output parameter (in fact it will be an input/output parameter) if the type of the parameter is ObjectParameter. This means you will have to create and initialize the parameter yourself before passing it to the method. This is because the output value (at least for SQL Server) is set only after all the results returned by the stored procedure have been consumed, so you need to keep a reference to the parameter to be able to read the output value after you have consumed the results of the query. In addition, because the actual type of the parameter is only known at runtime and not during model discovery, all ObjectParameter parameters have to be decorated with the ParameterTypeAttribute, which specifies the type that will be used to build the model. Finally, the name of the parameter in the method must match the name of the parameter in the database (yeah, I had to debug EF code to figure out why things did not work) – fortunately casing does not matter. As I said – it's quite rough, but should work once you align all the moving pieces correctly.

Example 1

The following example illustrates how to use the functionality described above. It uses a stored procedure with an output parameter that returns multiple resultsets. In addition, the name of the method used to invoke the stored procedure (MultipleResultSets) is different from the name of the stored procedure itself (CustomersOrdersAndAnswer).

internal class MultipleResultSetsContextInitializer : DropCreateDatabaseAlways<MultipleResultSetsContext>
{
    public override void InitializeDatabase(MultipleResultSetsContext context)
    {
        base.InitializeDatabase(context);

        context.Database.ExecuteSqlCommand(
        "CREATE PROCEDURE [dbo].[CustomersOrdersAndAnswer] @Answer int OUT AS " +
        "SET @Answer = 42 " +
        "SELECT [Id], [Name] FROM [dbo].[Customers] " +
        "SELECT [Id], [Customer_Id], [Description] FROM [dbo].[Orders] " +
        "SELECT -42 AS [Answer]");
    }

    protected override void Seed(MultipleResultSetsContext ctx)
    {
        ctx.Customers.Add(new Customer
        {
            Name = "ALFKI",
            Orders = new List<Order>
                {
                    new Order {Description = "Pens"},
                    new Order {Description = "Folders"}
                }
        });

        ctx.Customers.Add(new Customer
        {
            Name = "WOLZA",
            Orders = new List<Order> { new Order { Description = "Tofu" } }
        });
    }
}

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Order> Orders { get; set; }
}

public class Order
{
    public int Id { get; set; }
    public string Description { get; set; }
    public virtual Customer Customer { get; set; }
}

public class MultipleResultSetsContext : DbContext
{
    static MultipleResultSetsContext()
    {
        Database.SetInitializer(new MultipleResultSetsContextInitializer());
    }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Conventions.Add(
            new FunctionsConvention<MultipleResultSetsContext>("dbo"));
    }

    public DbSet<Customer> Customers { get; set; }
    public DbSet<Order> Orders { get; set; }

    [DbFunction("MultipleResultSetsContext", "CustomersOrdersAndAnswer")]
    [DbFunctionDetails(ResultTypes = 
        new[] { typeof(Customer), typeof(Order), typeof(int) })]
    public virtual ObjectResult<Customer> MultipleResultSets(
        [ParameterType(typeof(int))] ObjectParameter answer)
    {
        return ((IObjectContextAdapter)this).ObjectContext
            .ExecuteFunction<Customer>("CustomersOrdersAndAnswer", answer);
    }
}

class MultipleResultSetsSample
{
    public void Run()
    {
        using (var ctx = new MultipleResultSetsContext())
        {
            var answerParam = new ObjectParameter("Answer", typeof (int));

            var result1 = ctx.MultipleResultSets(answerParam);

            Console.WriteLine("Customers:");
            foreach (var c in result1)
            {
                Console.WriteLine("Id: {0}, Name: {1}", c.Id, c.Name);
            }

            var result2 = result1.GetNextResult<Order>();

            Console.WriteLine("Orders:");
            foreach (var e in result2)
            {
                Console.WriteLine("Id: {0}, Description: {1}, Customer Name {2}", 
                    e.Id, e.Description, e.Customer.Name);
            }

            var result3 = result2.GetNextResult<int>();
            Console.WriteLine("Wrong Answer: {0}", result3.Single());

            Console.WriteLine("Correct answer from output parameter: {0}", 
               answerParam.Value);
        }
    }
}

The first half of the sample just sets up the context and is rather boring. The interesting part starts at the MultipleResultSets method. The method is decorated with two attributes – DbFunctionAttribute and DbFunctionDetailsAttribute. The DbFunctionAttribute tells EF how the function will be mapped in the model. The first parameter is the namespace, which in the case of Code First is typically the name of the context type. The second parameter is the name of the store function in the model. The convention also treats it as the name of the store function in the database. Note that this name has to match the name of the stored procedure (or function) in the database and also the name used in the ExecuteFunction call.
The DbFunctionDetailsAttribute is what makes it possible to invoke a stored procedure returning multiple resultsets. The ResultTypes parameter allows specifying multiple types, each of which defines the type of items returned in subsequent resultsets. The types have to be types that are part of the model or, in the case of primitive types, they have to have an EDM primitive type counterpart. In our sample the stored procedure returns Customer entities in the first resultset, Order entities in the second resultset and int values in the third resultset. One important thing to mention is that the first type in the ResultTypes array must match the generic type of the returned ObjectResult.
To invoke the procedure (let's ignore the parameter for a moment) you just call the method and enumerate the results. Once the results are consumed you can move to the next resultset by calling the GetNextResult<T> method, where T is the element type of the next resultset. Note that you call GetNextResult<T> on the previously returned ObjectResult<> instance.
Finally, let's take a look at the parameter. As described above, it is of the ObjectParameter type to indicate an output parameter. It is decorated with the ParameterTypeAttribute, which tells the convention the type of the parameter. Its name is the same as the name of the parameter in the stored procedure (casing aside). We create an instance of this parameter before invoking the stored procedure, enumerate all the results, and only then read the value. If you tried reading the value before enumerating all the resultsets it would be null.
Running the sample code produces the following output:

Customers:
Id: 1, Name: ALFKI
Id: 2, Name: WOLZA
Orders:
Id: 1, Description: Pens, Customer Name ALFKI
Id: 2, Description: Folders, Customer Name ALFKI
Id: 3, Description: Tofu, Customer Name WOLZA
Wrong Answer: -42
Correct answer from output parameter: 42
Press any key to continue . . .

Enabling using non-DbContext derived types (including static classes) as containers for store function method stubs and methods used to invoke store functions

In the alpha version all the methods needed to handle store functions had to live inside a DbContext derived class (the type was a generic argument to FunctionsConvention<T>, where T was constrained to be a DbContext derived type). While this is the convention used by EF Tooling when generating code for the Database First approach, it is not a requirement. Since it blocked some scenarios (e.g. using extension methods, which have to live in a static class), the requirement has been lifted by adding a non-generic version of the FunctionsConvention which takes the type where the methods live as a constructor parameter.

Support for store scalar functions

This is another not very well-known EF feature. EF actually knows how to invoke user defined scalar functions. To use a scalar function you need to create a method stub. Method stubs don't have an implementation, but when used inside a query they are recognized by the EF LINQ translator and translated to a UDF call.

Example 2

This example shows how to use non-DbContext derived classes for methods/method stubs and how to use scalar UDFs.

internal class ScalarFunctionContextInitializer : DropCreateDatabaseAlways<ScalarFunctionContext>
{
    public override void InitializeDatabase(ScalarFunctionContext context)
    {
        base.InitializeDatabase(context);

        context.Database.ExecuteSqlCommand(
            "CREATE FUNCTION [dbo].[DateTimeToString] (@value datetime) " + 
            "RETURNS nvarchar(26) AS " +
            "BEGIN RETURN CONVERT(nvarchar(26), @value, 109) END");
    }

    protected override void Seed(ScalarFunctionContext ctx)
    {
        ctx.People.AddRange(new[]
        {
            new Person {Name = "John", DateOfBirth = new DateTime(1954, 12, 15, 23, 37, 0)},
            new Person {Name = "Madison", DateOfBirth = new DateTime(1994, 7, 3, 11, 42, 0)},
            new Person {Name = "Bronek", DateOfBirth = new DateTime(1923, 1, 26, 17, 11, 0)}
        });
    }
}

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime DateOfBirth { get; set; }
}

internal class ScalarFunctionContext : DbContext
{
    static ScalarFunctionContext()
    {
        Database.SetInitializer(new ScalarFunctionContextInitializer());
    }

    public DbSet<Person> People { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Conventions.Add(
            new FunctionsConvention("dbo", typeof (Functions)));
    }
}

internal static class Functions
{
    [DbFunction("CodeFirstDatabaseSchema", "DateTimeToString")]
    public static string DateTimeToString(DateTime date)
    {
        throw new NotSupportedException();
    }
}

internal class ScalarFunctionSample
{
    public void Run()
    {
        using (var ctx = new ScalarFunctionContext())
        {
            Console.WriteLine("Query:");

            var bornAfterNoon =
               ctx.People.Where(
                 p => Functions.DateTimeToString(p.DateOfBirth).EndsWith("PM"));

            Console.WriteLine(bornAfterNoon.ToString());

            Console.WriteLine("People born after noon:");

            foreach (var person in bornAfterNoon)
            {
                Console.WriteLine("Name {0}, Date of birth: {1}",
                    person.Name, person.DateOfBirth);
            }
        }
    }
}

In the above sample the method stub for the scalar store function lives in the Functions class. Since this class is static it cannot be a generic argument to the FunctionsConvention<T> type. Therefore we use the non-generic version of the convention to register the convention (in the OnModelCreating method).
The method stub is decorated with the DbFunctionAttribute, which tells the EF LINQ translator what function should be invoked. The important thing is that scalar store functions operate at a lower level (they exist only in the S-Space and don't have a corresponding FunctionImport in the C-Space) and therefore the namespace used in the DbFunctionAttribute is no longer the name of the context but always has to be CodeFirstDatabaseSchema. Another consequence is that the return and parameter types must be of a type that can be mapped to a primitive EDM type. Once all these conditions are met you can use the method stub in LINQ queries. In the sample the function converts the date of birth to a string in a format which ends with "AM" or "PM". This makes it possible to easily find people who were born before or after noon just by checking the suffix. All this happens on the database side – you can tell by looking at the output produced when running this code, which contains the SQL query the LINQ query was translated to:

Query:
SELECT
    [Extent1].[Id] AS [Id],
    [Extent1].[Name] AS [Name],
    [Extent1].[DateOfBirth] AS [DateOfBirth]
    FROM [dbo].[People] AS [Extent1]
    WHERE [dbo].[DateTimeToString]([Extent1].[DateOfBirth]) LIKE N'%PM'
People born after noon:
Name John, Date of birth: 12/15/1954 11:37:00 PM
Name Bronek, Date of birth: 1/26/1923 5:11:00 PM
Press any key to continue . . .

That’s pretty much it. The convention ships on NuGet – the process of installing the package is the same as it was for the alpha version and can be found here. If you are already using the alpha version in your project you can upgrade the package to the latest version with the Update-Package command. The code (including the samples) is on codeplex.
I would like to thank again Martin and Angel for their contributions.
Play with the beta version and report bugs before I ship the final version.

Second Level Cache Beta-2 for EntityFramework 6.1+ shipped

When I published the Beta version of EFCache back in May I intentionally did not call it Beta-1, since at that time I did not plan to ship Beta-2. Instead, I wanted to go straight to the RTM. Alas! It turned out that EFCache could not be used with models where CUD (Create/Update/Delete) operations were mapped to stored procedures. Another problem was that in scenarios where there were multiple databases with the same schema, EFCache returned cached results even if the database the app was connecting to changed. I also got a pull request which I wanted to include. As a result I decided to ship Beta-2. Here is the full list of what's included in this release:

  • support for models containing CUD operations mapped to stored procedures – invoking a CUD operation will invalidate cache items for the given entity set
  • CacheTransactionHandler.AddAffectedEntitySets is now protected – makes subclassing CacheTransactionHandler easier
  • database name is now part of the cache key – enables using the same cache across multiple databases with the same schema/structure
  • new Cached() extension method forces caching results for selected queries – results for queries marked with the Cached() method will be cached regardless of the caching policy, whether the query has been blacklisted, or whether it contains non-deterministic SQL functions (see the sketch after this list). This feature started with a contribution from ragoster
  • new name – Second Level Cache for Entity Framework 6.1+ – (notice ‘+’) to indicate that EFCache works not only with Entity Framework 6.1 but also with newer point releases (i.e. all releases up to, but not including, the next major release)
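As a quick illustration of the Cached() extension method, forcing caching for a single query looks more or less like this (a sketch – the context and entity set names are made up):

// Results of this query will be cached regardless of the caching policy or blacklist.
var activeProducts = ctx.Products.Where(p => p.IsActive).Cached().ToList();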

The new version has been uploaded to NuGet, so update the package in your project and let me know of any issues. If I don’t hear back from you I will release the final version in a month or so.

Automating creating NuGet packages with MSBuild

NuGet is a great way of shipping projects. You work on a project, you publish a package and it is immediately available to, literally, millions of developers. Creating a package consists of a few steps like authoring a .nuspec file, creating a folder structure, copying the right files to the right folders/subfolders and calling the nuget pack command. While the steps are not complicated they are error prone. I learnt this lesson when I shipped the first alpha versions of some of my NuGet packages. What happened was that I would create a package and then start feeling some doubts – did I really build the project before copying the files? Did I copy the Release and not the Debug version? Did I sign the file? And then people started using my packages and started asking (among other things) for a version that would work on other versions of the .NET Framework. This meant that the amount of work to create the package would at least double, since the steps I outlined above would have to be followed for each targeted platform. I had already decided to automate the process of creating NuGet packages, but having to ship a multiplatform NuGet package was the forcing function to actually do the work. I set a few goals before starting:

  • I will be able to create a package with just one simple command
  • I will be able to create a multiplatform package
  • I will be able to exclude/include platform specific code
  • The assemblies included in the package will be signed
  • I will be able to strip InternalsVisibleTo from my assemblies
  • I won’t have to check any binaries in to the source control
  • None of the changes will break Visual Studio experience (i.e. I will be able to use Visual Studio the same way I was using it before the changes)

Since I am using Visual Studio for all the projects I ship NuGet packages for, using MSBuild to achieve my goals was a no-brainer. I started by enabling building the project for multiple .NET Framework versions (in my case I really only needed to target .NET Framework 4 and .NET Framework 4.5). This was pretty straightforward – if you open your .csproj file you will quickly find the following line:

<TargetFrameworkVersion>v4.5</TargetFrameworkVersion>

which, as you probably already guessed, indicates the target .NET Framework version. This can be parameterized as follows:

<TargetFrameworkVersion
Condition="'$(TargetFrameworkVersion)' != 'v4.0'">v4.5</TargetFrameworkVersion>

which will enable building the project for .NET Framework 4 just by passing/setting the TargetFrameworkVersion parameter to ‘v4.0’. If any other value is passed/set (or the value is not passed/set at all) the project will be built against .NET Framework 4.5. You can now test the project by building it from the developer command prompt. The following command will build the version for .NET Framework 4:

msbuild myproj.csproj /t:Build /p:TargetFrameworkVersion=v4.0

Note that the above command may fail. One of the reasons might be that your code uses APIs that are available only on .NET Framework 4.5 (async is probably the best example, but there are many more). Therefore you may need a mechanism to exclude this code or replace it with a .NET Framework 4 counterpart. Typically this is done with the #if preprocessor directive. You just need a constant (let's call it NET40) that indicates that the code is being built against .NET Framework 4. We can define this constant in the .csproj file depending on the value of the TargetFrameworkVersion property we already set.
To do that we just need to create a new PropertyGroup just below the PropertyGroups used to set configuration (i.e. Debug/Release) specific properties. Here is how this new property group looks:

<PropertyGroup>
  <DefineConstants Condition=" '$(TargetFrameworkVersion)' == 'v4.0'">$(DefineConstants);NET40</DefineConstants>
</PropertyGroup>

Now we can use the NET40 constant in #if preprocessor directives to conditionally compile or exclude code for a specific platform.
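Here is a hypothetical example of how the constant can then be used – the file reading scenario below is made up, but it shows the typical pattern of providing a .NET Framework 4 fallback for an API that only exists in .NET Framework 4.5:

using System.IO;
#if !NET40
using System.Threading.Tasks;
#endif

internal static class FileContentReader
{
#if NET40
    // .NET Framework 4 build: async/await is not available, fall back to a synchronous read.
    public static string ReadAllText(string path)
    {
        return File.ReadAllText(path);
    }
#else
    // .NET Framework 4.5 build: expose an async version instead.
    public static async Task<string> ReadAllTextAsync(string path)
    {
        using (var reader = new StreamReader(path))
        {
            return await reader.ReadToEndAsync();
        }
    }
#endif
}
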
While we are at it we can take care of removing InternalsVisibleTo attributes from the code. We can use the same trick as we used for the TargetFrameworkVersion and define a constant if a property (let's call it InternalsVisibleToEnabled) is set to false. By default its value will be true, but when building a package (as opposed to building the project itself) we will set it to false. This allows us to use the constant in #if directives to exclude code we don't want when building packages (see the sketch after the snippet below). With this change the PropertyGroup created above turns into:

<PropertyGroup>
  <DefineConstants Condition=" '$(TargetFrameworkVersion)' == 'v4.0'">$(DefineConstants);NET40</DefineConstants>
  <DefineConstants Condition=" '$(InternalsVisibleToEnabled)'">$(DefineConstants);INTERNALSVISIBLETOENABLED</DefineConstants>
</PropertyGroup>
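For completeness, here is a sketch of the guarded attribute as it could appear in AssemblyInfo.cs (the test assembly name is made up):

#if INTERNALSVISIBLETOENABLED
using System.Runtime.CompilerServices;

// Compiled into regular builds; excluded when the package build passes InternalsVisibleToEnabled=false.
[assembly: InternalsVisibleTo("MyProduct.Tests")]
#endif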

Another important thing to look at is references. If you have a reference to an assembly that is not shipped with the .NET Framework you need to make sure that you are referencing the correct version of this assembly. This is especially conspicuous if you include other multiplatform NuGet packages – referencing the wrong version of the package may cause weird build breaks, or some APIs you would expect to be present will appear to be missing. This can be fixed with conditional references. For instance, in some of my projects I reference the EntityFramework NuGet package and I need to reference the correct version of EntityFramework.dll depending on the .NET Framework version I compile my project against. To solve this I can use the MSBuild Choose construct like this:

<Choose>
  <When Condition="'$(TargetFrameworkVersion)' == 'v4.0'">
    <ItemGroup>
      <Reference Include="EntityFramework, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=MSIL">
        <SpecificVersion>False</SpecificVersion>
        <HintPath>..\packages\EntityFramework.6.1.0\lib\net40\EntityFramework.dll</HintPath>
      </Reference>
      <Reference Include="EntityFramework.SqlServer, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=MSIL">
        <SpecificVersion>False</SpecificVersion>
        <HintPath>..\packages\EntityFramework.6.1.0\lib\net40\EntityFramework.SqlServer.dll</HintPath>
      </Reference>
    </ItemGroup>
  </When>
  <Otherwise>
    <ItemGroup>
      <Reference Include="EntityFramework, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=MSIL">
        <SpecificVersion>False</SpecificVersion>
        <HintPath>..\packages\EntityFramework.6.1.0\lib\net45\EntityFramework.dll</HintPath>
      </Reference>
      <Reference Include="EntityFramework.SqlServer, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=MSIL">
        <SpecificVersion>False</SpecificVersion>
        <HintPath>..\packages\EntityFramework.6.1.0\lib\net45\EntityFramework.SqlServer.dll</HintPath>
      </Reference>
    </ItemGroup>
  </Otherwise>
</Choose>

That's pretty much it as far as building the project against different versions of the .NET Framework goes. With this in place we can look at the NuGet side of things. The first thing to do is to open the solution in Visual Studio, right click on the solution node in Solution Explorer and select "Enable NuGet Package Restore". This will do a few things:

  • it will enable restoring missing NuGet packages when building. This is very useful and helps prevent having binaries checked in to source control (you almost always want it)
  • to achieve the above it will create a .nuget folder which contains a few files, with NuGet.targets being the most important for us
  • it will modify the .csproj files to define some properties, but most importantly to include the NuGet.targets file

Note: one of the files NuGet drops into the .nuget folder is NuGet.exe. If you are using source control (if you are not, you deserve to be named the lead developer of a new feature in this old COBOL project no one has touched in years – they did not use source control either, so you should be fine) you want to check in all the files from the .nuget folder except NuGet.exe.
Save your files or, even better, close Visual Studio. This is important. If you modify your project files both inside and outside VS you will, in the best case, lose the changes you made outside VS (and only if you pick the right option when VS detects that you edited files outside VS). In the worst case you will end up in .csproj limbo, where the .csproj files are only partially modified by the tools and you are not able to recreate what has been lost. The tools no longer work (or work randomly, which is even worse) because some changes are lost, the project does not compile, and the best option is to just revert all the changes to the last commit (if you are not using source control then, you know, the COBOL project needs a lead (or, actually, any) developer) and start from scratch.
Now we are ready to add the logic to build the NuGet package. We will create a new target called 'CreatePackage' (I originally wanted to call it 'BuildPackage', but it turned out that a target with that name already exists in the NuGet.targets file) which will be the entry point for building the package. Currently all the projects I ship NuGet packages for are simple and contain just one assembly with the product code and one assembly with tests. Since tests are not part of the NuGet packages I ship, I can add the CreatePackage target to the .csproj file containing the product code. If I had more assemblies I would create a separate file (probably called something like build.proj) where I would put all the targets needed to build and package all the assemblies. In this target I need to:

  • build my assembly/assemblies for each platform I want to target
  • create a folder structure required by NuGet
  • copy artifacts built in the first step to folders from the second step
  • create the package using NuGet.exe pack command

The first thing is to further modify the .csproj file to fix a couple of problems. The first problem is that each time we build the project we overwrite the existing files. Normally (e.g. when working from Visual Studio) this is not a problem, but because we are going to invoke the build more than once (we need to build for multiple platforms) subsequent builds would overwrite files produced by previous builds. We could solve this by copying files between builds, but the solution I like better is to be able to tell the build where to place the build artifacts. This can be done by removing the OutputPath property from the configuration specific PropertyGroups (where it is set e.g. to 'bin\Release\') and adding the following conditional OutputPath property definition to the first PropertyGroup after the Configuration. The new OutputPath definition looks like this:

<OutputPath Condition="'$(OutputPath)' == ''">bin\$(Configuration)\</OutputPath>

This allows passing the OutputPath value as a parameter – if it is not empty it will be used throughout the build. Otherwise we set a default value which effectively is the same as the one that would have originally been set.
The second thing to take care of is the case where there is no NuGet.exe file in the .nuget folder (I recommended not checking this file in, so it will be missing for newly cloned repos or when you clean the repo with git clean -xdf). This can easily be fixed by adding the following property definition to the first PropertyGroup:

<DownloadNuGetExe>true</DownloadNuGetExe>

Now, whenever you invoke a target that depends on NuGet's CheckPrerequisites target, NuGet will check whether the NuGet.exe file is present and, if it is not, download it.
With the above changes we are ready for the CreatePackage target. Here is what it looks like (I copied it from the Interactive Pre-Generated Views project):

<Target Name="CreatePackage" DependsOnTargets="CheckPrerequisites">
<Error Text="KeyFile parameter not spercified (/p:KeyFile=MyKey.snk)" Condition=" '$(KeyFile)' == ''" />
<PropertyGroup>
<Configuration>Release</Configuration>
<PackageSource>bin\$(Configuration)\Package\</PackageSource>
<NuSpecPath>..\tools\EFInteractiveViews.nuspec</NuSpecPath>
</PropertyGroup>
<RemoveDir Directories="$(PackageSource)" />
<MSBuild Projects="$(MSBuildThisFile)" Targets="Rebuild" Properties="InternalsVisibleToEnabled=false;SignAssembly=true;AssemblyOriginatorKeyFile=$(KeyFile);TargetFrameworkVersion=v4.5;OutputPath=bin\$(Configuration)\net45;Configuration=$(Configuration)" BuildInParallel="$(BuildInParallel)" />
<MSBuild Projects="$(MSBuildThisFile)" Targets="Rebuild" Properties="InternalsVisibleToEnabled=false;SignAssembly=true;AssemblyOriginatorKeyFile=$(KeyFile);TargetFrameworkVersion=v4.0;OutputPath=bin\$(Configuration)\net40;Configuration=$(Configuration)" BuildInParallel="$(BuildInParallel)" />
<Copy SourceFiles="bin\$(Configuration)\net45\$(AssemblyName).dll" DestinationFolder="$(PackageSource)\lib\net45" />
<Copy SourceFiles="bin\$(Configuration)\net40\$(AssemblyName).dll" DestinationFolder="$(PackageSource)\lib\net40" />
<Copy SourceFiles="$(NuSpecPath)" DestinationFolder="$(PackageSource)" />
<Exec Command='$(NuGetCommand) pack $(NuSpecPath) -BasePath $(PackageSource) -OutputDirectory bin\$(Configuration)' LogStandardErrorAsError="true" />
</Target>

Let's look at this target line by line, since there are some small additions I have not mentioned yet. One of my goals was to be able to sign the assembly. I do this by passing a path to my .snk file. To make sure I don't build packages with unsigned assemblies, I error out on line #2 if the path to the file was not provided (the error also tells me the parameter name so that I don't have to open the .csproj file each time I build the package to remember the name of the parameter used to pass the path to the .snk file). Then I define a few properties I use later:

  • Configuration – I always want to ship Release versions in NuGet packages so it is hardcoded to ‘Release’
  • PackageSource – the path where the target will create folders that will be later consumed by NuGet. Platform specific assemblies will be placed in subfolders
  • NuSpecPath – the path where the .nuspec file lives

After that I build the project. I do it twice – once for each target platform. You can see this when you look at the properties passed to the two MSBuild invocations – especially TargetFrameworkVersion ('v4.5' vs. 'v4.0') and OutputPath ('net45' vs. 'net40'). SignAssembly and AssemblyOriginatorKeyFile make sure that the assembly is signed. InternalsVisibleToEnabled is set to false to exclude the InternalsVisibleTo attributes. Once the assemblies are built they are copied to the folders the NuGet package will be built from. Note that the Copy MSBuild task will create the target folder if it does not exist. The only thing remaining is to create the NuGet package (at long last!). I execute NuGet.exe (disguised as $(NuGetCommand) – a variable defined in the NuGet.targets file) with the pack option and provide the path to the .nuspec file, the folder structure to build the package from (the BasePath parameter) and the path to save the NuGet package to. The LogStandardErrorAsError attribute tells MSBuild to fail the build if the executed command returns a non-zero exit code.
Now I can build my NuGet packages with the following command (from the developer command prompt):
msbuild myproject.csproj /t:CreatePackage /p:KeyFile=MyKey.snk
Despite all the manual edits to the .csproj file Visual Studio continues to work (at least at the same level it did before) which was the last of my goals.
This is it! With this guide you should be able to automate building your NuGet packages. Just in case, here are links to some of the changesets where I introduced this method:

Once you understand what’s going on and have the template I provided above it takes less than 10 minutes to adapt it to your project (funnily, even coming up with the first version took me considerably less time (and wine) than describing it in this blog post).
Enjoy (and show me your package)!

Second Level Cache Beta for EF 6.1 Available

It took a bit longer than I expected but the Beta version of the Second Level Cache for EF 6.1 is now available on NuGet with the source available on Codeplex.

What’s new in the beta version.

  • Support for .NET Framework 4 – the NuGet package now contains two versions of the second level cache assembly – one that is specific to .NET Framework 4 and one that is specific to .NET Framework 4.5. As a result it is now possible to use second level caching in EF 6.1 applications that target .NET Framework 4. (A side note: you should update NuGet packages if you change the .NET Framework version your application targets to avoid errors where (some of) the referenced assemblies target a different version of .NET Framework than the app itself).
  • Support for async (.NET Framework 4.5 only) – results for queries executed asynchronously are now cached.
  • The CachingPolicy and the DefaultCachingPolicy classes have been merged
  • The CachingPolicy.CanBeCached method was modified to take the SQL query and parameters. This enables more granular control over the cached results. Note that this is a breaking change from the alpha release and you will need to update your code if you created a custom CachingPolicy derived class
  • A new mechanism for excluding specific queries from caching.

Let's take a closer look at the last two items. They allow achieving a similar goal but in different ways. Starting from the Beta version, the SQL query and query parameters are passed to the CanBeCached method in addition to the store entity sets (which are abstractions of database tables). This allows inspecting the query and its parameters to decide whether the results yielded by the query should be cached. "Inspecting the query and its parameters" may sound easy, but the queries generated by EF tend to be complicated and parsing them may not be trivial. The easier cases are where you just have some queries you never want to cache the results for – instead of "inspecting" you just need to check whether the input query is one of these non-cacheable queries and, if it is, return false from the method.
(Side note: I personally believe that with regard to caching you are most often interested in the tables the results come from and not in what the query does. In this scenario the affectedEntitySets parameter might be more helpful, because it lets you get the names of the tables used in the query without having to reverse engineer the query. You can get the names of the tables used in the query as follows:

affectedEntitySets.Select(e => e.Table ?? e.Name);

).
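To make this more concrete, here is a sketch of a caching policy that skips caching for queries touching selected tables. The member signature follows the description above (entity sets, SQL and parameters) but should be treated as an approximation of the EFCache API rather than a verbatim copy, and the table name is made up:

using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Data.Entity.Core.Metadata.Edm;
using System.Linq;
using EFCache;

public class TableAwareCachingPolicy : CachingPolicy
{
    private static readonly string[] NonCacheableTables = { "AuditLog" };

    protected internal override bool CanBeCached(
        ReadOnlyCollection<EntitySetBase> affectedEntitySets,
        string sql,
        IEnumerable<KeyValuePair<string, object>> parameters)
    {
        // Skip caching for any query that touches one of the excluded tables.
        var tables = affectedEntitySets.Select(e => e.Table ?? e.Name);
        if (tables.Intersect(NonCacheableTables).Any())
        {
            return false;
        }

        return base.CanBeCached(affectedEntitySets, sql, parameters);
    }
}
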
Another way to prevent the results of a specific query from being cached is to use the new built-in mechanism for blacklisting queries. This mechanism consists of two parts – a registrar that contains a list of blacklisted queries (i.e. queries whose results won't be cached) and the DbQuery.NotCached() (and ObjectQuery.NotCached()) extension methods which make using the registrar easier. As a result, blacklisting a query is as easy as appending .NotCached() to the query, just like this:

var q = ctx.Entities.Where(e => e.Flag != null).OrderBy(e => e.Id).NotCached();

Blacklisted queries take precedence (i.e. win) over caching policy and therefore the CachingPolicy.CanBeCached() method will never be called for blacklisted queries.
The registrar itself is public and implements the singleton pattern. You can get the instance using the BlacklistedQueriesRegistrar.Instance property and then you will be able to register (or unregister) blacklisted queries manually (note however that queries are compared using string comparison and therefore the registered query must exactly match the query EF would produce – the extension methods ensure the queries are identical by calling .ToString()/.ToTraceString() on the DbQuery/ObjectQuery instance).
As you can see both CachingPolicy.CanBeCached() and the built-in query blacklisting mechanism make it possible to prevent the results of specific queries from being cached. The difference is that the built-in mechanism is very simple to use but does not give the flexibility and granularity offered by the CachingPolicy.CanBeCached() method. On the other hand the flexibility and granularity of the CachingPolicy.CanBeCached() method is not free – you need to implement at least some logic yourself.

The road to “RTM”.
I consider the Beta version to be feature complete. I am planning to let it bake for a few weeks, fix reported issues and then release the final version. Your part is to try out the Beta version (or upgrade your projects) and report bugs.

Support for Store Functions (TVFs and Stored Procs) in Code First (Entity Framework 6.1)

See what’s new in Beta here

Until Entity Framework 6.1 was released store functions (i.e. Table Valued Functions and Stored Procedures) could be used in EF only when doing Database First. There were some workarounds which made it possible to invoke store functions in Code First apps but you still could not use TVFs in LINQ queries, which was one of the biggest limitations. In EF 6.1 the mapping API was made public which (along with some additional tweaks) made it possible to use store functions in your Code First apps. Note that this does not mean that things will start working automagically once you upgrade to EF 6.1. Rather, it means that it is now possible to help EF realize that it actually is capable of handling store functions even when the Code First approach is being used. Sounds exciting, doesn’t it? So, probably the question you have is:

How do I do that?

To understand how store functions could be enabled for Code First in EF 6.1 let’s first take a look at how they work in the Database First scenario. In Database First you define methods that drive the execution of store functions in your context class (typically these methods are generated for you when you create a model from the database). You use these methods in your app by calling them directly or, in the case of TVFs, in LINQ queries. One thing worth mentioning is that these methods need to follow certain conventions, otherwise EF won’t be able to use them. Apart from methods defined in your context class, store functions must also be specified in the artifacts describing the model – SSDL, CSDL and MSL (think: edmx). At runtime these artifacts are loaded into a MetadataWorkspace object which contains all the information about the model.
In Code First the model is built from the code when the application starts. Types are discovered using reflection and are configured with the fluent API in the OnModelCreating method, attributes and/or conventions. The model is then loaded into the MetadataWorkspace (similarly to what happens in the Database First approach) and once this is done both Code First and Database First operate in the same way. Note that the model becomes read-only after it has been loaded into the MetadataWorkspace.
Because Database First and Code First converge at the MetadataWorkspace level enabling discovery of store functions in Code First along with additional model configuration should suffice to add general support for store functions in Code First. Model configuration (and therefore store function discovery) has to happen before the model is loaded to the MetadataWorkspace otherwise the metadata will be sealed and it will be impossible to tweak the model. There are three ways we can configure the model in Code First – configuration attributes, fluent API and conventions. Attributes are not rich enough to configure store functions. Fluent API does not have access to mapping. This leaves conventions. Indeed a custom model convention seems ideal – it gives you access to the model which in EF 6.1 not only contains conceptual and store models but also modifiable mapping information. So, we could create a convention which discovers methods using reflection, then configures store and conceptual models accordingly and defines the mapping. Methods mapped to store functions will have to meet some specific requirements imposed by Entity Framework. The requirements for methods mapped to table valued functions are the following:

  • the return type must be an IQueryable<T> where T is a type for which a corresponding EDM type exists – i.e. T is either a primitive type supported by EF (for instance int is fine while uint won’t work) or a non-primitive type (enum/complex type/entity type) that has been configured (either implicitly or explicitly) in your model
  • method parameters must be of scalar (i.e. primitive or enum) types mappable to EF types
  • methods must have the DbFunctionAttribute whose first argument is the conceptual container name and whose second argument is the function name. The container name is typically the name of the DbContext derived class; however, if you are unsure you can use the following code snippet to get the name:
    Console.WriteLine(
        ((IObjectContextAdapter) ctx).ObjectContext.MetadataWorkspace
            .GetItemCollection(DataSpace.CSpace)
            .GetItems<EntityContainer>()
            .Single()
            .Name);
    
  • the name of the method, the value of DbFunction.FunctionName and the function name used in the query string passed to the CreateQuery call must all be the same
  • in some cases TVF mapping may require additional details – a column name and/or the name of the database schema. You can specify them using the DbFunctionDetailsAttribute. The column name is required if the method is mapped to a TVF that returns a collection of primitive values. EF needs to know the name of the column containing the values and there is no way of inferring this information from the code, so it has to be provided externally by setting the ResultColumnName property of the DbFunctionDetails attribute to the name of the column returned by the function. The database schema name needs to be specified if the schema of the TVF being mapped is different from the default schema name passed to the convention constructor; this is done by setting the DatabaseSchema property of the DbFunctionDetailsAttribute.

The requirements for methods mapped to stored procedures are less demanding and are the following:

  • the return type has to be ObjectResult<T> where T, similarly to TVFs, is a type that can be mapped to an EDM type
  • you can also specify the name of the database schema if it is different from the default name by setting the DatabaseSchema property of the DbFunctionDetailsAttribute. (Because of how the result mapping works for stored procedures setting the ResultColumnName property has no effect)

The above requirements were mostly about method signatures but the bodies of the methods are important too. For TVFs you create a query using the ObjectContext.CreateQuery method while for stored procedures you use the ObjectContext.ExecuteFunction method. Below you can find examples for both TVFs and stored procedures (also notice how parameters passed to store functions are created). In addition the methods need to be members of the DbContext derived type which itself is the generic argument of the convention.
Currently only the simplest result mapping, where the names of the columns returned from the database match the names of the properties of the target type (except for mapping to scalar results), is supported. This is actually a limitation in EF Code First where more complicated mappings would currently be ignored in most cases even though they are valid from the MSL perspective. There is a chance of having more complicated mappings enabled in EF 6.1.1 if appropriate changes are checked in by the time EF 6.1.1 ships. From there it should be just one step to enabling stored procedures returning multiple resultsets in Code First.
Now you probably are a bit tired of all this EF mumbo-jumbo and would like to see

The Code

To see the custom convention in action create a new (Console Application) project. Once the project has been created add the EntityFramework.CodeFirstStoreFunctions NuGet package. You can add it either from the Package Manager Console by executing

Install-Package EntityFramework.CodeFirstStoreFunctions -Pre 

command, or using the UI – right click the References node in Solution Explorer and select “Manage NuGet Packages”. When the dialog opens make sure that the “Include Prerelease” option in the dropdown at the top of the dialog is selected and use “storefunctions” in the search box to find the package. Finally click the “Install” button to install the package.

[Screenshot: Installing EntityFramework.CodeFirstStoreFunctions from the NuGet Package Manager UI]

After the package has been installed copy the code snippet below into your project. It demonstrates how to enable store functions in Code First.

public class Customer
{
    public int Id { get; set; }

    public string Name { get; set; }

    public string ZipCode { get; set; }
}

public class MyContext : DbContext
{
    static MyContext()
    {
        Database.SetInitializer(new MyContextInitializer());
    }

    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Conventions.Add(new FunctionsConvention<MyContext>("dbo"));
    }

    [DbFunction("MyContext", "CustomersByZipCode")]
    public IQueryable<Customer> CustomersByZipCode(string zipCode)
    {
        var zipCodeParameter = zipCode != null ?
            new ObjectParameter("ZipCode", zipCode) :
            new ObjectParameter("ZipCode", typeof(string));

        return ((IObjectContextAdapter)this).ObjectContext
            .CreateQuery<Customer>(
                string.Format("[{0}].{1}", GetType().Name, 
                    "[CustomersByZipCode](@ZipCode)"), zipCodeParameter);
    }

    public ObjectResult<Customer> GetCustomersByName(string name)
    {
        var nameParameter = name != null ?
            new ObjectParameter("Name", name) :
            new ObjectParameter("Name", typeof(string));

        return ((IObjectContextAdapter)this).ObjectContext.
            ExecuteFunction<Customer>("GetCustomersByName", nameParameter);
    }
}

public class MyContextInitializer : DropCreateDatabaseAlways<MyContext>
{
    public override void InitializeDatabase(MyContext context)
    {
        base.InitializeDatabase(context);

        context.Database.ExecuteSqlCommand(
            "CREATE PROCEDURE [dbo].[GetCustomersByName] @Name nvarchar(max) AS " +
            "SELECT [Id], [Name], [ZipCode] " +
            "FROM [dbo].[Customers] " +
            "WHERE [Name] LIKE (@Name)");

        context.Database.ExecuteSqlCommand(
            "CREATE FUNCTION [dbo].[CustomersByZipCode](@ZipCode nchar(5)) " +
            "RETURNS TABLE " +
            "RETURN " +
            "SELECT [Id], [Name], [ZipCode] " +
            "FROM [dbo].[Customers] " + 
            "WHERE [ZipCode] = @ZipCode");
    }

    protected override void Seed(MyContext context)
    {
        context.Customers.Add(new Customer {Name = "John", ZipCode = "98052"});
        context.Customers.Add(new Customer { Name = "Natasha", ZipCode = "98210" });
        context.Customers.Add(new Customer { Name = "Lin", ZipCode = "98052" });
        context.Customers.Add(new Customer { Name = "Josh", ZipCode = "90210" });
        context.Customers.Add(new Customer { Name = "Maria", ZipCode = "98074" });
        context.SaveChanges();
    }
}

class Program
{
    static void Main()
    {
        using (var ctx = new MyContext())
        {
            const string zipCode = "98052";
            var q = ctx.CustomersByZipCode(zipCode)
                .Where(c => c.Name.Length > 3);
            //Console.WriteLine(((ObjectQuery)q).ToTraceString());
            Console.WriteLine("TVF: CustomersByZipCode('{0}')", zipCode);
            foreach (var customer in q)
            {
                Console.WriteLine("Id: {0}, Name: {1}, ZipCode: {2}", 
                    customer.Id, customer.Name, customer.ZipCode);
            }

            const string name = "Jo%";
            Console.WriteLine("\nStored procedure: GetCustomersByName '{0}'", name);
            foreach (var customer in ctx.GetCustomersByName(name))
            {
                Console.WriteLine("Id: {0}, Name: {1}, ZipCode: {2}", 
                    customer.Id, customer.Name, customer.ZipCode);   
            }
        }
    }
}

In the code above I use a custom initializer to initialize the database and create a table-valued function and a stored procedure (in a real application you would probably use Code First Migrations for this). The initializer also populates the database with some data in the Seed method. The MyContext class is a class derived from the DbContext class and contains two methods that are mapped to store functions created in the initializer. The context class contains also the OnModelCreating method where we register the convention which will do all the hard work related to setting up our store functions. The Main method contains code that invokes store functions created when initializing the database. First, we use the TVF. Note, that we compose the query on the function which means that the whole query will be translated to SQL and executed on the database side. If you would like to see this you can uncomment the line which prints the SQL query in the above snippet and you will see the exact query that will be sent to the database:

SELECT
    [Extent1].[Id] AS [Id],
    [Extent1].[Name] AS [Name],
    [Extent1].[ZipCode] AS [ZipCode]
    FROM [dbo].[CustomersByZipCode](@ZipCode) AS [Extent1]
    WHERE ( CAST(LEN([Extent1].[Name]) AS int)) > 3

(Back to the code.) Next we execute the query and display the results. Once we are done with the TVF we invoke the stored procedure. This is just an invocation because you cannot build queries on top of results returned by stored procedures. If you need any query-like (or other) logic it must be inside the stored procedure itself; otherwise you end up with a LINQ query that runs against materialized results. That’s pretty much the whole app. Just in case, I am pasting the output the app produces below:

TVF: CustomersByZipCode('98052')
Id: 1, Name: John, ZipCode: 98052

Stored procedure: GetCustomersByName 'Jo%'
Id: 1, Name: John, ZipCode: 98052
Id: 4, Name: Josh, ZipCode: 90210
Press any key to continue . . .

Note that in both examples the return types are based on entity types. As I hinted above you can also use complex and scalar types for your results. Take a look at the End-to-End tests in the project itself – all scenarios are tested there.
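For illustration, a method mapped to a TVF returning a collection of primitive values could look roughly like the sketch below if added to the MyContext class above. The GetCustomerZipCodes function does not exist in the sample database – the function, its dbo schema and the ZipCode result column are assumptions – but it shows where the ResultColumnName and DatabaseSchema properties described earlier come into play.

[DbFunction("MyContext", "GetCustomerZipCodes")]
[DbFunctionDetails(DatabaseSchema = "dbo", ResultColumnName = "ZipCode")]
public IQueryable<string> GetCustomerZipCodes()
{
    // the name of the column holding the values cannot be inferred from the code,
    // hence the ResultColumnName property on the attribute above
    return ((IObjectContextAdapter)this).ObjectContext
        .CreateQuery<string>(
            string.Format("[{0}].{1}", GetType().Name, "[GetCustomerZipCodes]()"));
}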

That’s about what’s in alpha, so you may ask:

what’s next?

If you look at the code there are a few TODOs left. One of the most important is support for nullable parameters. I am also thinking of removing the limitation where the method name in your DbContext derived class must match the name of the TVF in the database. If workitem 2192 is resolved for the next version of EF I will be able to add support for non-default mapping. In addition I think it is just a short step from workitem 2192 to supporting stored procedures returning multiple resultsets in Code First. Not sure how useful it would be but it would be cool to see support for this feature, which is currently kind of a dead feature because it is supported neither by Code First nor by EF tooling.

Anything else?
The project is open source and is hosted on codeplex. You can get the sources from here. Try it and let me know what you think.

Second Level Cache for EF 6.1

See what’s new in Beta here.

Currently Entity Framework does not natively support second level caching. For pre-EF6 versions you could use the EF Caching Provider Wrapper but due to changes to the EF provider model in EF6 it does not work with the newest versions of EF. In theory it would be possible to recompile the old caching provider against EF6 but, unfortunately, this would not be sufficient. EF6 introduced a number of new features (e.g. support for passing an open connection when instantiating a new context) that break assumptions made in the old caching provider, so it would not work correctly in these scenarios. Also, dependency injection and Code-based Configuration introduced in EF6 simplify registering wrapping providers, which was rather cumbersome (especially for Code First) in the previous versions. To fill the gap I created the Second Level Cache for EF 6.1 project. It’s a combination of a wrapping provider and a transaction handler which work together on caching query results and keeping the cache in a consistent state. In a typical scenario query results are cached by the wrapping caching provider and are invalidated by the transaction handler if an entity set on which cached query results depend was modified (note that data modification in EF always happens in a transaction, hence the need for the transaction handler).
Using the cache is easy but requires a couple of steps. First you need to install the EntityFramework.Cache NuGet package. There are two ways to do this from Visual Studio. One way is to use the UI – right click the References node in the Solution Explorer and select the Manage NuGet Packages option. This will open a dialog you use to install NuGet packages. Since the project is currently in the alpha stage you need to select “Include Prerelease” in the drop down at the top. Then enter “EntityFramework.Cache” in the search box and, once the package appears, click the “Install” button.

[Screenshot: Installing EntityFramework.Cache from the NuGet Package Manager UI]


You can also install the package from the Package Manager Console. Open the Package Manager Console (Tools → NuGet Package Manager → Package Manager Console) and execute the following command:

Install-Package EntityFramework.Cache -Pre

(-Pre allows installing pre-release packages).
Note that the package depends on Entity Framework 6.1. If you don’t have Entity Framework 6.1 in your project it will be automatically installed when you install the EntityFramework.Cache package.
Once the package is installed you need to tell EF to use caching by configuring the caching provider and the transaction handler. You do this by creating a configuration class derived from the DbConfiguration class. In the constructor of your DbConfiguration derived class you need to register the transaction interceptor and the Loaded event handler which will be responsible for replacing the provider. Here is an example of how to set up caching using the built-in in-memory cache implementation and the default caching policy.

public class Configuration : DbConfiguration
{
  public Configuration()
  {
    var transactionHandler = new CacheTransactionHandler(new InMemoryCache());

    AddInterceptor(transactionHandler);

    Loaded +=
      (sender, args) => args.ReplaceService<DbProviderServices>(
        (s, _) => new CachingProviderServices(s, transactionHandler, 
          new DefaultCachingPolicy()));
  }
}

The default caching policy used above is part of the project and allows caching all query results regardless of their size or the entity sets used to obtain the results. Both sliding and absolute expiration times in the default caching policy are set to maximum values, therefore items will be cached until an entity set they depend on is modified. If the default caching policy is not suitable for your needs you can create a custom caching policy in which you can limit what will be cached. To create a custom policy you just need to derive a class from the CachingPolicy class, implement the abstract methods, and pass the policy to the CachingProviderServices during registration.
When implementing a custom caching policy there is one thing to be aware of – expired entries are removed from the cache lazily. This means that an entry will be removed by the caching provider only when the provider tries to read the entry and finds it is expired. This is not extremely helpful (especially because, since the item is expired, the provider will query the database for fresh results which are then put into the cache – so effectively the expired entry is replaced with a new one) but it lets the user decide what the best strategy for cleaning the cache is in their case. For instance, in the InMemoryCache implementation included in the project I created a Purge method. This method can be called periodically to remove stale cache entries.
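As a rough sketch of what that could look like (assuming the Purge method takes no arguments, that the InMemoryCache instance is the same one passed to the CacheTransactionHandler, and that a five-minute interval – an arbitrary choice – is acceptable):

var cache = new InMemoryCache();

// periodically remove entries whose expiration time has already passed;
// without this, expired entries linger until the provider happens to read them
var purgeTimer = new System.Threading.Timer(
    _ => cache.Purge(),
    null,
    TimeSpan.FromMinutes(5),   // first purge after five minutes
    TimeSpan.FromMinutes(5));  // then every five minutes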
The project also includes a simple implementation of a cache which caches query results in memory. This is just a sample implementation and if you would like to use a different caching mechanism you are free to do so – you just need to implement the ICache interface. This interface is a slightly modified version of the interface that shipped with the original EF Caching Provider Wrapper which should make moving existing apps using the old caching solution to EF6 easier.

As the old saying goes “A program is worth a 1000 words“, so let’s take a look at the second level cache in action. Here is a complete sample app which is using the cache:

public class Airline
{
  [Key]
  public string Code { get; set; }

  public string Name { get; set; }

  public virtual ICollection<Aircraft> Aircraft { get; set; }
}

public class Aircraft
{
  public int Id { get; set; }

  public string EquipmentCode { get; set; }

  public virtual Airline Airline { get; set; }
}

public class AirTravelContext : DbContext
{
  static AirTravelContext()
  {
    Database.SetInitializer(new DropCreateDatabaseIfModelChanges<AirTravelContext>());  
  }

  public DbSet<Airline> Airlines { get; set; }

  public DbSet<Aircraft> Aircraft { get; set; }
}

public class Configuration : DbConfiguration
{
  public Configuration()
  {
    var transactionHandler = new CacheTransactionHandler(Program.Cache);

    AddInterceptor(transactionHandler);

    Loaded +=
      (sender, args) => args.ReplaceService<DbProviderServices>(
        (s, _) => new CachingProviderServices(s, transactionHandler));
  }
}
  
class Program
{
  internal static readonly InMemoryCache Cache = new InMemoryCache();

  private static void Seed()
  {
    using (var ctx = new AirTravelContext())
    {
      ctx.Airlines.Add(
        new Airline
        {
          Code = "UA",
          Name = "United Airlines",
          Aircraft = new List<Aircraft>
          {
            new Aircraft {EquipmentCode = "788"},
            new Aircraft {EquipmentCode = "763"}
          }
        });

      ctx.Airlines.Add(
        new Airline
        {
          Code = "FR",
          Name = "Ryan Air",
          Aircraft = new List<Aircraft>
          {
            new Aircraft {EquipmentCode = "738"},
          }
        });

      ctx.SaveChanges();
    }
  }

  private static void RemoveData()
  {
    using (var ctx = new AirTravelContext())
    {
      ctx.Database.ExecuteSqlCommand("DELETE FROM Aircraft");
      ctx.Database.ExecuteSqlCommand("DELETE FROM Airlines");
    }
  }

  private static void PrintAirlinesAndAircraft()
  {
    using (var ctx = new AirTravelContext())
    {
      foreach (var airline in ctx.Airlines.Include(a => a.Aircraft))
      {
        Console.WriteLine("{0}: {1}", airline.Code, 
          string.Join(", ", airline.Aircraft.Select(a => a.EquipmentCode)));
      }
    }
  }

  private static void PrintAirlineCount()
  {
    using (var ctx = new AirTravelContext())
    {
      Console.WriteLine("Airline Count: {0}", ctx.Airlines.Count());
    }
  }

  static void Main(string[] args)
  {
    // populate and print data
    Console.WriteLine("Entries in cache: {0}", Cache.Count);
    RemoveData();
    Seed();
    PrintAirlinesAndAircraft();

    Console.WriteLine("\nEntries in cache: {0}", Cache.Count);
    // remove data bypassing cache
    RemoveData();
    // not cached - goes to the database and counts airlines
    PrintAirlineCount();
    // prints results from cache
    PrintAirlinesAndAircraft();
    Console.WriteLine("\nEntries in cache: {0}", Cache.Count);
    // repopulate data - invalidates cache
    Seed();
    Console.WriteLine("\nEntries in cache: {0}", Cache.Count);
    // print data
    PrintAirlineCount();
    PrintAirlinesAndAircraft();
    Console.WriteLine("\nEntries in cache: {0}", Cache.Count);
  }
}

The app is pretty simple – a couple of entities, a context class, code-based configuration (should look familiar), a few methods and the Main() method which drives the execution. One method that is worth mentioning is the RemoveData() method. It removes the data from the tables using Database.ExecuteSqlCommand(), therefore bypassing the entire Entity Framework update pipeline including the caching provider. This is to show that the cache really works but it is at the same time a kind of warning – if you bypass EF you will need to make sure the cache is in a consistent state or you can get incorrect results. Running the sample app results in the following output:

Entries in cache: 0
FR: 738
UA: 788, 763
Entries in cache: 3

Removing data directly from the database
Airline Count: 0
FR: 738
UA: 788, 763

Entries in cache: 4

Entries in cache: 2
Airline Count: 2
FR: 738
UA: 788, 763

Entries in cache: 4
Press any key to continue . . .

Let’s analyze what’s happening here. On line 1 we just report that the cache is empty (no entries in the cache). Seems correct – no queries have been sent to the database so far. Then we remove any stale data from the database, seed the database, and print the contents of the database (lines 2 and 3). Printing the contents of the database requires querying the database, which should result in some entries in the cache. Indeed, on line 4 three entries are reported to be in the cache. Why three if we sent just one query? If you peek at the cache with the debugger you will see that two of these entries are for the HistoryContext, so three seems correct. Now (line 6) we delete all the data in the database, so when we query the database for the airline count (line 7) the result is 0. However, even though we don’t have any data in the database, we can still print data on lines 8 and 9. This data comes from the cache – note that when we deleted data we used the ExecuteSqlCommand() method which bypassed the EF update pipeline and, as a result, no cache entries were invalidated. On line 11 we again report the number of items in the cache. This time the cache contains four entries – two entries for the HistoryContext related queries, one entry for the data that was initially cached and one entry for the result of the query where we asked for the number of airlines in the database. Now we add the data to the database again and print the number of items in the cache once more (line 13). This time the number of entries is two. This is because inserting the data into the database invalidated cached results for queries that used either the Airlines or Aircraft entity sets, and as a result only the results related to the HistoryContext remained in the cache. We again query the database for the number of airlines (line 14) and print all the data from the database. The results for both queries are added to the cache and we end up having four entries in the cache again (line 18).
That’s pretty much it. Try it out and let me know what you think (or report bugs). You can also look at and play with the code – it is available on the project website on CodePlex.
Note that this version is an alpha version and there is still some work remaining before it can be called done – most notably support for async methods and adding some missing overrides for the ADO.NET wrapping classes. I am also considering adding support for TransactionScope.

What changed in the EF Tooling in Visual Studio 2013 (and Visual Studio 2012 Out Of Band)

The recently shipped version of Visual Studio 2013 contains a new version of the EF Tooling (a standalone version for Visual Studio 2012 is also available for download). The main goal of this release was to teach the EF designer how to talk to both EF5 and EF6. It was quite an undertaking given how much EF6 differs from EF5 (binary incompatible, no correspondence between provider models and the metadata workspace, EF6 runtime assemblies no longer in the GAC, just to name a few differences) and it required changes to the existing functionality. Some of these changes may be confusing to people who used previous versions of the EF Tooling. The goal of this post is to take a look at what has changed, why, and how it works now.
Before EF6 things were reasonably straightforward – from the EF Tooling standpoint there could be only one version of the EF runtime installed on the box – EF1, EF4 or EF5. In addition it was easy to determine the version of EF just by looking at the installed version of the .NET Framework – .NET Framework 3.5 meant EF1, .NET Framework 4 meant EF4 and .NET Framework 4.5 meant EF5. Note that the designer did not need to distinguish between EF4.x versions (treated as EF4) or EF5 for .NET Framework 4 (also treated as EF4) since they were built on top of the core EF runtime version residing in the GAC. The installed (GAC’ed) version of the EF runtime automatically implied the latest supported version of Csdl, Ssdl, Msl and Edmx (‘schemas’ as we internally call them) since each version of the core EF runtime introduced a new version of the schemas. But then EF6 happened – no longer in the GAC, works on both .NET Framework 4 and .NET Framework 4.5 (which also means you can have v3 schemas on .NET Framework 4), binary incompatible with earlier versions of EF, some types moved to new/different namespaces, etc. The only way for the tooling to handle all of this was to break a number of assumptions made in the designer and, in some cases, change how the designer works. Below is a list of the things that changed the most with a short explanation of why they changed and what to expect:

  • For Model First the code is no longer generated when adding an empty model to the project. This is probably the most confusing change (funnily enough it was internally reported to me as a bug multiple times even before we shipped the new EF Tooling for the first time in Visual Studio 2013 RC). The reason for this change is simple – since some types in EF6 were moved to different namespaces the code we used to generate for EF5 no longer works with EF6 (and vice versa). In some cases (e.g. you don’t have a reference to any EF runtime in your project) it is just impossible to tell whether the user is going to need EF5 or EF6 code. The most interesting part is that the code we used to add when creating an empty model was in reality not that useful before the database was created – yes, you could write code against the generated context/entities but you would not really be able to run it without the database. Therefore, we changed the workflow a bit and now the code generation templates are added when you create the database from the model instead of when you add an empty model to your project, because the wizard will make you select the version of EF you would like to use in your project if one isn’t already referenced.
  • Code Generation Strategy disabled for EF6 models. Again this is partially related to the changes to types and namespaces. Code generated for EF5 would not work with EF6. In addition, the ObjectContext based context and EntityObject based entities for EF5 (and earlier) applications are generated using System.Data.Entity.Design.dll. System.Data.Entity.Design.dll is part of the .NET Framework and does not fit into the EF6 shipping model. Therefore, instead of updating System.Data.Entity.Design.dll, we decided to support only T4 template based code generation for EF6. EF Tooling ships with T4 templates for generating a DbContext based context and POCO entities. If, for some reason, you absolutely need an ObjectContext based context, the EF6 templates were posted on the VS Gallery. For EF5 (or earlier) applications the Code Generation Strategy drop down is not blocked and you can choose between T4 and LegacyObjectContext (yes, we changed the option names as well because they were a bit ridiculous in VS2012 where ‘None’ basically meant T4 and ‘Default’ meant ObjectContext but the ‘None’ option was the default).
  • Selecting the version of Entity Framework is not always possible. To determine the applicable version of EF the designer looks for references to EntityFramework.dll and System.Data.Entity.dll in the project (note that EntityFramework.dll wins over System.Data.Entity.dll when determining the version). If it cannot find any, the user has to select the version in the wizard. Otherwise the latest version of EF found in the project is used and no selection is possible (the radio buttons will be disabled or the wizard page will not be shown at all).
  • Using third party providers. In the simple days of EF5 (and earlier) EF providers were registered globally and you could get one basically anytime and anywhere. Those days are long gone. In EF6 EF providers don’t have a global registration point. Rather, they are registered in the config file or using code based configuration. When you select a connection in the wizard the designer tries to find a reference to a corresponding provider in your project. If it cannot find one you won’t be able to use EF6 – instead you will be asked to close the wizard, install the provider and restart the wizard (note that this does not apply to the provider for Sql Server – EF Tooling contains an EF6 provider for Sql Server, which in the majority of cases makes things easier for the user).
  • Retargeting. Depending on the version of Entity Framework referenced in the project, changing the target .NET Framework version for the project may or may not change the edmx file. If the project references System.Data.Entity.dll and does not reference EF6, retargeting will result in upgrading/downgrading the version of the edmx file to match the latest supported schema version for the given .NET Framework and, as a result, EF runtime version. If the project contains a reference to EF6, toggling the target between .NET Framework 4 and .NET Framework 4.5 will not result in changing the edmx version since EF6 is supported on both versions and understands v3 schemas. Targeting .NET Framework 3.5 should always downgrade to v1 since EF1 is the only version available on .NET Framework 3.5. One caveat to retargeting is that you should always update your NuGet packages after retargeting. This is because EF (and potentially other packages) has a different assembly for .NET Framework 4 and for .NET Framework 4.5 (one of the reasons is that async is not natively supported on .NET Framework 4) and, even though they are both shipped in the same NuGet package, NuGet does not replace project references after changing the target .NET Framework version. This leads either to missing functionality (e.g. async not available on .NET Framework 4.5 when using the EF6 assembly built for .NET Framework 4) or weird build errors saying you don’t have a reference to EntityFramework.dll (only running MsBuild /v:diag will tell you that the reference you have was ignored because the referenced EntityFramework.dll was built for a later version of the .NET Framework than the one the project is currently targeting – this happens when using the EF6 assembly built for .NET Framework 4.5 in a project targeting .NET Framework 4).
  • Using EF6 and EF5/EF4 in one project. Initially we did not want to support this scenario at all. However, not only is it pretty easy to end up in such a situation, but supporting it is also helpful for people wanting to gradually move bigger, legacy projects from EF5 to EF6, so finally we decided to do what we can to support mixed EF versions in one project. The most important thing the designer has to do to handle this scenario correctly is to ensure that the right provider is being used for a given edmx file. Also, the version of the edmx file needs to be in sync with the referenced version of the EF runtime. This seems easy at first but there are some cases where things may behave a bit weird. A canonical example is when you have a project targeting .NET Framework 4 and you don’t have a reference to any Entity Framework runtime. When you create a new model it will create a v2 edmx since this is the latest schema version supported on .NET Framework 4. Now you create a database from your model and select EF6. Since EF6 supports v3 schemas the version of your edmx file will be changed to v3. Because the number of dimensions in the versioning scenarios ended up being bigger than we originally expected, with a number of exceptions on top of that, it was initially hard to tell what the outcome for each combination should be. To address this I created a set of rules which were much easier to understand and to follow when working on versioning scenarios. You can find these rules here.
  • In-the-box packages are not the latest available. EF Tooling contains EF6 NuGet packages that will be added to the project when adding a new model, if needed. However, since EF Tooling ships with Visual Studio, which has a different cadence than EF, the included packages may not be the latest ones. In fact, in the case of EF6 we fixed a few (mostly performance related) bugs after Visual Studio 2013 was locked down. As a result version 6.0.1 of EF was shipped on the same day as Visual Studio 2013 but the version of EF that ships with Visual Studio 2013 is 6.0.0 and does not contain the fixes included in 6.0.1. Moreover, 6.0.2 is on its way and again – it will be a runtime only release. Updating the EF package is as easy as running the following command from the Package Manager Console (Tools → Library Package Manager → Package Manager Console)
    Update-Package EntityFramework