Second Level Cache Beta-2 for EntityFramework 6.1+ shipped

When I published the Beta version of EFCache back in May I intentionally did not call it Beta-1 since at that time I did not plan to ship Beta-2. Instead, I wanted to go straight to the RTM. Alas! It turned out that EFCache could not be used with models where CUD (Create/Update/Delete) operations were mapped to stored procedures. Another problem was that in scenarios where there were multiple databases with the same schema EFCache returned cached results even if the database the app was connecting to changed. I also got a pull request which I wanted to include. As a result I decided to ship Beta-2. Here is the full list of what’s included in this release:

  • support for models containing CUD operations mapped to stored procedures – invoking a CUD operation will invalidate cache items for the given entity set
  • CacheTransactionHandler.AddAffectedEntitySets is now protected – makes subclassing CacheTransactionHandler easier
  • database name is now part of the cache key – enables using the same cache across multiple databases with the same schema/structure
  • new Cached() extension method forces caching results for selected queries – results for queries marked with the Cached() method will be cached regardless of the caching policy, whether the query has been blacklisted, or whether it contains non-deterministic SQL functions (a usage sketch follows below this list). This feature started with a contribution from ragoster
  • new name – Second Level Cache for Entity Framework 6.1+ – (notice ‘+’) to indicate that EFCache works not only with Entity Framework 6.1 but also with newer point releases (i.e. all releases up to, but not including, the next major release)
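
For illustration, forcing caching for a single query looks like this – a minimal sketch (the entity and property names are made up):

var q = ctx.Entities.Where(e => e.Flag != null).OrderBy(e => e.Id).Cached();

Results of this query will now be cached even if the caching policy or the blacklist would otherwise reject them.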

The new version has been uploaded to NuGet, so update the package in your project and let me know of any issues. If I don’t hear back from you I will release the final version in a month or so.

When 14 pins is not enough.

I had this little project in mind but after chewing on it for a while I figured that to achieve my goal I would need more pins than the Arduino Uno can offer. I started looking at what my options were and soon found something that looked very promising – shift registers (yes, I realize that this is very basic stuff for people familiar with elementary electronics but sadly I am not one of them). Conceptually, the way shift registers work is simple – you feed a shift register bit by bit (serial-in) and then tell it to output all the data at once (parallel-out). (The opposite (i.e. parallel-in, serial-out) is also possible but that is out of scope of this post.) Translating this to Arduino terms – with one output pin to send the data and a couple of additional output pins to control the shift register we can have tens of output pins. In the simplest case “tens” actually means 8, but shift registers can be cascaded and each additional register adds 8 more output pins. To try this out I got myself a few 74HC595 shift registers and a bunch of LEDs and resistors and built this circuit:
PinMultiplication
One thing that turned out to be very helpful was a small sheet I got when I bought the shift registers showing how to cascade them. Even though I was not cascading shift registers (I was too cheap and bought just 10 LEDs) it helped me connect all the power and ground wires to the correct pins. I actually decided to scan the sheet and attach it to this post because I am sure I will lose it sooner rather than later and won’t be able to find it when I need it. Here it is:
ShiftRegister
Then I had to breathe some life into all the wires and ICs so I wrote some code. I was surprised when it turned out that you literally need less than 10 lines of code (and some data) to get the whole thing up and running.

And here is the code:


int clock = 8;
int data = 4;
int latch = 7;

uint8_t patterns[] = 
{ 
  // a single lit LED travelling from the first to the last position (twice)
  0x00, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80,
  0x00, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80,
  // a short pause with all LEDs off
  0x00, 0x00, 
  // LEDs lighting up one by one until all are lit (twice)
  0x01, 0x03, 0x07, 0x0f, 0x1f, 0x3f, 0x7f, 0xff, 0x00,
  0x01, 0x03, 0x07, 0x0f, 0x1f, 0x3f, 0x7f, 0xff, 0x00,
  // all LEDs flashing on and off
  0x00, 0xff, 0x00, 0xff,
  0x00, 0x00,
  // two lit LEDs moving from the outer positions to the center and back (twice)
  0x81, 0x42, 0x24, 0x18, 0x24, 0x42, 0x81, 
  0x81, 0x42, 0x24, 0x18, 0x24, 0x42, 0x81, 
  0x00, 0x00,
};

void setup() {                
  pinMode(clock, OUTPUT);
  pinMode(data, OUTPUT);
  pinMode(latch, OUTPUT);
}

void loop() {
  static int index = 0;

  digitalWrite(latch, LOW);
  shiftOut(data, clock, MSBFIRST, patterns[index]);
  digitalWrite(latch, HIGH);
  delay(300);

  index = (index + 1) % sizeof(patterns);
}

The code is simple. Since I have just 8 LEDs I can use a byte value to control which of the LEDs should be on. If a bit corresponding to an LED is set to 1 I will turn the LED on, otherwise I will turn it off. The values are stored in an array and every 300 milliseconds or so I take the next value from the array and use it to turn the LEDs on or off. When I reach the end of the array I start from the first element. Even though the code is so simple there are two interesting bits there. The first one is controlling the shift register. The second one (related to the first one) is how the shiftOut function works. As I said above we need three wires to control the register. They are connected to the pins labeled latch, clock and data in the code. We set latch to LOW to tell the shift register that we are going to send data. Once we have sent all the data we set latch back to HIGH. To send the data we use the shiftOut function. What this function does is read the passed value bit by bit and for each bit it sets the clock pin to LOW, then it sets the data pin to LOW or HIGH depending on the value of the bit, and then it sets the clock pin back to HIGH (the shift register reads the data pin on this rising clock edge). It is actually pretty easy to write a counterpart of the shiftOut function on your own. Here is what I came up with (I skipped the bitOrder parameter as I did not need it):

void myShiftOut(uint8_t dataPin, uint8_t clockPin, uint8_t val)
{
  // send the bits starting from the most significant one (i.e. MSBFIRST)
  for(int mask = 0x80; mask > 0; mask >>= 1)
  {
    digitalWrite(clockPin, LOW);
    // any non-zero value of val & mask is treated as HIGH
    digitalWrite(dataPin, val & mask);
    // the shift register reads the data pin on this rising clock edge
    digitalWrite(clockPin, HIGH);
    digitalWrite(dataPin, 0);
  }
  digitalWrite(clockPin, LOW);
}

You can replace the call to the shiftOut function with a call to the myShiftOut function in the first snippet and it will continue to work the same way.

That’s more or less it. Simple but very powerful (and fun)!

Automating creating NuGet packages with MSBuild

NuGet is a great way of shipping projects. You work on a project, you publish a package and it is immediately available to, literally, millions of developers. Creating a package consists of a few steps like authoring a .nuspec file, creating a folder structure, copying the right files to the right folders/subfolders and calling the nuget pack command. While the steps are not complicated they are error prone. I learnt this lesson when I shipped the first alpha versions of some of my NuGet packages. What happened was that I would create a package and then start feeling some doubts – did I really build the project before copying the files? Did I copy the Release and not the Debug version? Did I sign the file? And then people started using my packages and asking (among other things) for versions that would work on other versions of .NET Framework. This meant that the amount of work to create the package would at least double since the steps I outlined above would have to be followed for each targeted platform. I had already decided to automate the process of creating NuGet packages but having to ship a multiplatform NuGet package was a forcing function to actually do the work. I set a few goals before starting:

  • I will be able to create a package with just one simple command
  • I will be able to create a multiplatform package
  • I will be able to exclude/include platform specific code
  • The assemblies included in the package will be signed
  • I will be able to strip InternalsVisibleTo from my assemblies
  • I won’t have to check any binaries in to the source control
  • None of the changes will break Visual Studio experience (i.e. I will be able to use Visual Studio the same way I was using it before the changes)

Since I am using Visual Studio for all the projects I ship NuGet packages for, using MSBuild to achieve my goal was a no-brainer. I started by enabling building the project for multiple .NET Framework versions (in my case I really only needed to target .NET Framework 4 and .NET Framework 4.5). This was pretty straightforward – if you open your .csproj file you will quickly find the following line:

<TargetFrameworkVersion>v4.5</TargetFrameworkVersion>

which, as you probably already guessed, indicates the target .NET Framework version. This can be parameterized as follows:

<TargetFrameworkVersion 
    Condition="'$(TargetFrameworkVersion)' != 'v4.0'">v4.5</TargetFrameworkVersion>

which will enable building the project for .NET Framework 4 just by passing/setting the TargetFrameworkVersion parameter to 'v4.0'. If any other value is passed/set (or the value is not passed/set at all) the project will be built against .NET Framework 4.5. You can now test this by building the project from the developer command prompt. The following command will build the version for .NET Framework 4:

msbuild myproj.csproj /t:Build /p:TargetFrameworkVersion=v4.0

Note that the above command may fail. One of the reasons might be that your code uses APIs that are available only in .NET Framework 4.5 (async is probably the best example but there are many more). Therefore you may need a mechanism to exclude this code or replace it with a .NET Framework 4 counterpart. Typically this is done using the #if preprocessor directive. You just need a constant (let’s call it NET40) that will indicate that the code is being built against .NET Framework 4. We can define this constant in the .csproj file depending on the value of the TargetFrameworkVersion property we already set.
To do that we just need to create a new PropertyGroup just below the PropertyGroups used to set configuration (i.e. Debug/Release) specific properties. Here is how this new property group looks:

<PropertyGroup>
  <DefineConstants 
    Condition=" '$(TargetFrameworkVersion)' == 'v4.0'">$(DefineConstants);NET40</DefineConstants>
</PropertyGroup>

Now we can use the NET40 constant in #if preprocessor directives to conditionally compile or exclude code for a specific platform.
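
For example, code that is only valid when targeting .NET Framework 4.5 can be fenced off with the constant – a minimal sketch (the field and method names are made up for illustration):

#if !NET40
    // .NET Framework 4.5 build - async version using async/await
    public async Task<List<User>> GetUsersAsync()
    {
        return await _context.Users.ToListAsync();
    }
#else
    // .NET Framework 4 build - synchronous counterpart
    public List<User> GetUsers()
    {
        return _context.Users.ToList();
    }
#endif
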
While we are at it we can take care of removing InternalsVisibleTo attributes from the code. We can use the same trick as we used for the TargetFrameworkVersion and define a constant if a property (let’s call it InternalsVisibleToEnabled) is set to false. By default its value will be set to true but when building a package (as opposed to building the project itself) we will set it to false. This will allow us to use the constant in #if directives to exclude code we don’t want when building packages. With this change the PropertyGroup created above turns into:

<PropertyGroup>
  <DefineConstants 
    Condition=" '$(TargetFrameworkVersion)' == 'v4.0'">$(DefineConstants);NET40</DefineConstants>
  <DefineConstants 
    Condition=" '$(InternalsVisibleToEnabled)'">$(DefineConstants);INTERNALSVISIBLETOENABLED</DefineConstants>
</PropertyGroup>
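
And here is how the new constant can guard the InternalsVisibleTo attribute (typically in AssemblyInfo.cs) so that it is compiled only for regular builds – a minimal sketch with a made-up test assembly name:

#if INTERNALSVISIBLETOENABLED
using System.Runtime.CompilerServices;

// present in regular builds, stripped when building NuGet packages
[assembly: InternalsVisibleTo("MyProject.Tests")]
#endif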

Another important thing to look at is references. If you have a reference to an assembly that is not shipped with the .NET Framework you need to make sure that you are referencing the correct version of this assembly. This is especially conspicuous if you include other multiplatform NuGet packages – referencing the wrong version of the package may cause weird build breaks, or some APIs you would expect to be present will appear to be missing. This can be fixed with conditional references. For instance in some of my projects I reference the EntityFramework NuGet package and I need to reference the correct version of EntityFramework.dll depending on the .NET Framework version I compile my project against. To solve this I can use the MSBuild Choose construct like this:

<Choose>
  <When Condition="'$(TargetFrameworkVersion)' == 'v4.0'">
    <ItemGroup>
      <Reference Include="EntityFramework, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=MSIL">
        <SpecificVersion>False</SpecificVersion>
        <HintPath>..\packages\EntityFramework.6.1.0\lib\net40\EntityFramework.dll</HintPath>
      </Reference>
      <Reference Include="EntityFramework.SqlServer, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=MSIL">
        <SpecificVersion>False</SpecificVersion>
        <HintPath>..\packages\EntityFramework.6.1.0\lib\net40\EntityFramework.SqlServer.dll</HintPath>
      </Reference>
    </ItemGroup>
  </When>
  <Otherwise>
    <ItemGroup>
      <Reference Include="EntityFramework, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=MSIL">
        <SpecificVersion>False</SpecificVersion>
        <HintPath>..\packages\EntityFramework.6.1.0\lib\net45\EntityFramework.dll</HintPath>
      </Reference>
      <Reference Include="EntityFramework.SqlServer, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=MSIL">
        <SpecificVersion>False</SpecificVersion>
        <HintPath>..\packages\EntityFramework.6.1.0\lib\net45\EntityFramework.SqlServer.dll</HintPath>
      </Reference>
    </ItemGroup>
  </Otherwise>
</Choose>

That’s pretty much it as far as building the project against different versions of the .NET Framework goes. With this we can look at the NuGet side of things. The first thing to do is to open the solution in Visual Studio, right click the solution node in the Solution Explorer and select “Enable NuGet Package Restore”. This will do a few things:

  • it will enable restoring missing NuGet packages when building. This is very useful and helps prevent having binaries checked in to source control (you almost always want it)
  • to achieve the above it will create a .nuget folder which will contain a few files with the NuGet.targets being the most important for us
  • it will modify .csproj files to define some properties but most importantly it will include the NuGet.targets file

Note: one of the files NuGet drops to the .nuget folder is NuGet.exe. If you are using source control (if you are not you deserve to be named the lead developer of a new feature in this old COBOL project no one has touched in years – they did not use source control either so you should be fine) you want to check in all the files from the .nuget folder except NuGet.exe.
Save your files or, even better, close Visual Studio. This is important. If you modify your project files both inside and outside VS you will, in the best case, lose the changes you made outside VS (and only if you pick the right option when VS detects you edited files outside of it). In the worst case you will end up in .csproj limbo where the .csproj files are only partially modified by tools and you are not able to recreate what has been lost. The tools no longer work (or work randomly, which is even worse) because some changes are lost, the project does not compile, and the best option is just to revert all the changes to the last commit (if you are not using source control then, you know, the COBOL project needs a lead (or, actually, any) developer) and start from scratch.
Now we are ready to add the logic to build the NuGet package. We will create a new target called ‘CreatePackage’ (I originally wanted to call it ‘BuildPackage’ but it turned out that a target with that name already exists in the NuGet.targets file) which will be the entry point to build the package. Currently all the projects I ship NuGet packages for are simple and contain just one assembly with the product code and one assembly with tests. Since tests are not part of the NuGet packages I ship I can add the CreatePackage target to the .csproj file containing the product code. If I had more assemblies I would create a separate file (probably called something like build.proj) where I would have all the targets needed to build and package all the assemblies. In this target I need to:

  • build my assembly/assemblies for each platform I want to target
  • create a folder structure required by NuGet
  • copy artifacts built in the first step to folders from the second step
  • create the package using NuGet.exe pack command

The first thing is to further modify the .csproj file to fix a couple of problems. The first problem we need to fix is that each time we build the project we overwrite existing files. Normally (e.g. when working from Visual Studio) it is not a problem but because we are going to invoke the build more than once (we need to build for multiple platforms) subsequent builds would overwrite files built by previous builds. We could solve this by copying files between builds but the solution I like better is to be able to tell the build where to place the build artifacts. This can be done by removing the OutputPath property setting from the configuration specific PropertyGroups (e.g. ‘bin\Release\’) and adding the following conditional OutputPath property definition to the first PropertyGroup, after the Configuration property. The new OutputPath definition looks like this:

<OutputPath Condition="'$(OutputPath)' == ''">bin\$(Configuration)\</OutputPath>

This allows passing the OutputPath value as a parameter – if it is not empty it will be used throughout the build. Otherwise we will set a default value which effectively will be the same as the one that was originally set.
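
For instance, the following commands (reusing the hypothetical myproj.csproj from earlier) build each platform flavor into its own folder so neither build overwrites the other:

msbuild myproj.csproj /t:Build /p:TargetFrameworkVersion=v4.0 /p:OutputPath=bin\Release\net40
msbuild myproj.csproj /t:Build /p:TargetFrameworkVersion=v4.5 /p:OutputPath=bin\Release\net45
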
The second thing to take care of is the case where there is no NuGet.exe file in the .nuget folder (I recommended not checking this file in, so it will be missing for newly cloned repos or when you clean the repo with git clean -xdf). This can be easily fixed by adding the following property definition to the first PropertyGroup:

<DownloadNuGetExe>true</DownloadNuGetExe>

Now whenever you invoke a target that depends on NuGet’s CheckPrerequisites target, NuGet will check if the NuGet.exe file is present and, if it is not, download it.
With the above changes we are ready for the CreatePackage target. Here is how it looks (I copied it from the Interactive Pre-Generated Views project):

<Target Name="CreatePackage" DependsOnTargets="CheckPrerequisites">
  <Error Text="KeyFile parameter not spercified (/p:KeyFile=MyKey.snk)" Condition=" '$(KeyFile)' == ''" />
  <PropertyGroup>
    <Configuration>Release</Configuration>
    <PackageSource>bin\$(Configuration)\Package\</PackageSource>
    <NuSpecPath>..\tools\EFInteractiveViews.nuspec</NuSpecPath>
  </PropertyGroup>
  <RemoveDir Directories="$(PackageSource)" />
  <MSBuild Projects="$(MSBuildThisFile)" Targets="Rebuild" Properties="InternalsVisibleToEnabled=false;SignAssembly=true;AssemblyOriginatorKeyFile=$(KeyFile);TargetFrameworkVersion=v4.5;OutputPath=bin\$(Configuration)\net45;Configuration=$(Configuration)" BuildInParallel="$(BuildInParallel)" />
  <MSBuild Projects="$(MSBuildThisFile)" Targets="Rebuild" Properties="InternalsVisibleToEnabled=false;SignAssembly=true;AssemblyOriginatorKeyFile=$(KeyFile);TargetFrameworkVersion=v4.0;OutputPath=bin\$(Configuration)\net40;Configuration=$(Configuration)" BuildInParallel="$(BuildInParallel)" />
  <Copy SourceFiles="bin\$(Configuration)\net45\$(AssemblyName).dll" DestinationFolder="$(PackageSource)\lib\net45" />
  <Copy SourceFiles="bin\$(Configuration)\net40\$(AssemblyName).dll" DestinationFolder="$(PackageSource)\lib\net40" />
  <Copy SourceFiles="$(NuSpecPath)" DestinationFolder="$(PackageSource)" />
  <Exec Command='$(NuGetCommand) pack $(NuSpecPath) -BasePath $(PackageSource) -OutputDirectory bin\$(Configuration)' LogStandardErrorAsError="true" />
</Target>

Let’s look at this target line by line since there are some small additions I have not mentioned yet. One of my goals was to be able to sign the assembly. I do it by passing a path to my .snk file. To make sure I don’t build packages with unsigned assemblies I error out on line #2 if the path to the file was not provided (the error also tells me the parameter name so that I don’t have to open the .csproj file each time I build the package just to remember the name of the parameter used to pass the path to the .snk file). Then I define a few properties I use later:

  • Configuration – I always want to ship Release versions in NuGet packages so it is hardcoded to ‘Release’
  • PackageSource – the path where the target will create folders that will be later consumed by NuGet. Platform specific assemblies will be placed in subfolders
  • NuSpecPath – the path where the .nuspec file lives

After that I build the project. I do it twice – once for each target platform. You can see that when you look at the parameters passed to the MSBuild tasks – especially the TargetFrameworkVersion (‘v4.5’ vs. ‘v4.0’) and the OutputPath (‘net45’ vs. ‘net40’). SignAssembly and AssemblyOriginatorKeyFile make sure that the assembly will be signed. InternalsVisibleToEnabled is set to false to exclude InternalsVisibleTo attributes. Once the assemblies are built they are copied to the folders the NuGet package will be built from. Note that the Copy MSBuild task will create the target folder if it does not exist. The only thing remaining is to create the NuGet package (at long last!). I execute NuGet.exe (disguised as $(NuGetCommand) – a variable defined in the NuGet.targets file) with the pack option and provide the path to the .nuspec file, the folder structure to build the package from (the BasePath parameter) and the path to save the NuGet package to. The LogStandardErrorAsError attribute tells MSBuild to fail the build if the executed command returns a non-zero exit code.
Now I can build my NuGet packages with the following command (from the developer command prompt):
msbuild myproject.csproj /t:CreatePackage /p:KeyFile=MyKey.snk
Despite all the manual edits to the .csproj file Visual Studio continues to work (at least at the same level it did before) which was the last of my goals.
This is it! With this guide you should be able to automate building your NuGet packages. Just in case, here are links to some of the changesets where I introduced this method:

Once you understand what’s going on and have the template I provided above it takes less than 10 minutes to adapt it to your project (funnily enough, even coming up with the first version took me considerably less time (and wine) than describing it in this blog post).
Enjoy (and show me your package)!

Interactive Pre-Generated Views for EF6 Now Supports .NET Framework 4

The Interactive Pre-Generated Views for EF6 project has been updated to support both .NET Framework 4 and .NET Framework 4.5. I published the new version (1.0.1) of the EFInteractiveViews NuGet package yesterday. There are no functional changes (or any changes to the code) so if you are already using version 1.0.0 you can continue to use it and you will not miss anything.
If you are not sure what Interactive Pre-Generated Views are, read this blog post.

Second Level Cache Beta for EF 6.1 Available

It took a bit longer than I expected but the Beta version of the Second Level Cache for EF 6.1 is now available on NuGet, with the source available on CodePlex.

What’s new in the beta version.

  • Support for .NET Framework 4 – the NuGet package now contains two versions of the second level cache assembly – one that is specific to .NET Framework 4 and one that is specific to .NET Framework 4.5. As a result it is now possible to use second level caching in EF 6.1 applications that target .NET Framework 4. (A side note: you should update NuGet packages if you change the .NET Framework version your application targets to avoid errors where (some of) the referenced assemblies target a different version of .NET Framework than the app itself).
  • Support for async (.NET Framework 4.5 only) – results for queries executed asynchronously are now cached.
  • The CachingPolicy and the DefaultCachingPolicy classes were merged
  • The CachingPolicy.CanBeCached method was modified to take the SQL query and its parameters. This enables more granular control over which results are cached. Note that this is a breaking change from the alpha release and you will need to update your code if you created a custom CachingPolicy derived class
  • A new mechanism allows excluding results for specific queries from being cached.

Let’s take a closer look at the last two items. They allow achieving a similar goal but in different ways. Starting from the Beta version the SQL query and query parameters are passed to the CanBeCached method in addition to the store entity sets (which are abstractions of database tables). This allows inspecting the query and its parameters to decide whether the results yielded by the query should be cached. “Inspecting the query and its parameters” may sound easy but the queries generated by EF tend to be complicated and parsing them may not be trivial. Easier cases are where you just have some queries you never want to cache the results for – instead of “inspecting” you just need to check whether the input query is one of these non-cacheable queries and, if it is, return false from the method.
(Side note: I personally believe that with regards to caching you are most often interested in the tables the results come from and not in what the query does. In this scenario the affectedEntitySets parameter might be more helpful because you can get the names of the tables used in the query without having to reverse engineer the query itself. You can get the names of the tables as follows:

affectedEntitySets.Select(e => e.Table ?? e.Name);

).
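
To make this more concrete, here is a minimal sketch of a custom caching policy that refuses to cache a hard-coded list of queries as well as any query touching a hypothetical Logs table. The override signature below is approximated from the description above (the parameter types live in the System.Collections.ObjectModel and System.Data.Entity.Core.Metadata.Edm namespaces, and the LINQ operators need System.Linq) – check the EFCache sources for the exact shape:

public class MyCachingPolicy : CachingPolicy
{
    // SQL of queries whose results should never be cached - must match
    // the query EF produces exactly (e.g. captured with ToTraceString())
    private static readonly string[] NonCacheableQueries = { /* ... */ };

    protected override bool CanBeCached(
        ReadOnlyCollection<EntitySetBase> affectedEntitySets,
        string sql,
        IEnumerable<KeyValuePair<string, object>> parameters)
    {
        // don't cache blacklisted queries or anything reading the Logs table
        return !NonCacheableQueries.Contains(sql)
            && affectedEntitySets.All(e => (e.Table ?? e.Name) != "Logs");
    }
}
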
Another way to prevent results for a specific query from being cached is to use the new built-in mechanism for blacklisting queries. This mechanism consists of two parts – a registrar that contains a list of blacklisted queries (i.e. queries whose results won’t be cached) and the DbQuery.NotCached() (and ObjectQuery.NotCached()) extension methods which make using the registrar easier. As a result blacklisting a query is as easy as appending .NotCached() to the query, just like this:

var q = ctx.Entities.Where(e => e.Flag != null).OrderBy(e => e.Id).NotCached();

Blacklisted queries take precedence (i.e. win) over the caching policy and therefore the CachingPolicy.CanBeCached() method will never be called for blacklisted queries.
The registrar itself is public and implements the singleton pattern. You can get the instance using the BlacklistedQueriesRegistrar.Instance property and then register (or unregister) blacklisted queries manually (note however that queries are compared using string comparison and therefore a registered query must exactly match the query EF would produce – the extension methods ensure the queries are identical by calling .ToString()/.ToTraceString() on the DbQuery/ObjectQuery instance).
As you can see both CachingPolicy.CanBeCached() and the built-in query blacklisting mechanism allow preventing results for specific queries from being cached. The difference is that the built-in mechanism is very simple to use but does not give the flexibility and granularity offered by the CachingPolicy.CanBeCached() method. On the other hand the flexibility and granularity of the CachingPolicy.CanBeCached() method is not free – you need to implement at least some logic yourself.

The road to “RTM”.
I consider the Beta version to be feature complete. I am planning to let it bake for a few weeks, fix reported issues and then release the final version. Your part is to try out the Beta version (or upgrade your projects) and report bugs.

Second Level Cache Alpha 2 Available

About a week ago I published on NuGet the Alpha-2 version of EFCache. It does not contain a whole lot of new things. In fact it contains just one change – a fix to a bug which prevented using the cache with databases containing null values. This bug was reported by multiple people and was quite easy to hit since having null values in the database is a very common scenario. The best part of the story is that I actually did not code the fix myself. The fix was kindly contributed by Teoni Valois. Teoni also created an EFCache implementation called DacheCache which uses Dache – a distributed cache for .NET – as the caching mechanism. You can get DacheCache from NuGet.
Again, thanks for the contribution and reporting issues (yes, the version for .NET Framework 4 will be included in the upcoming Beta release). If you hit the bug and gave up on using EFCache give the Alpha-2 a try.

Support for Store Functions (TVFs and Stored Procs) in Code First (Entity Framework 6.1)

Until Entity Framework 6.1 was released store functions (i.e. Table Valued Functions and Stored Procedures) could be used in EF only when doing Database First. There were some workarounds which made it possible to invoke store functions in Code First apps but you still could not use TVFs in LINQ queries, which was one of the biggest limitations. In EF 6.1 the mapping API was made public which (along with some additional tweaks) made it possible to use store functions in your Code First apps. Note that it does not mean that things will start working automagically once you upgrade to EF 6.1. Rather, it means that it is now possible to help EF realize that it actually is capable of handling store functions even when the Code First approach is being used. Sounds exciting, doesn’t it? So, probably the question you have is:

How do I do that?

To understand how store functions could be enabled for Code First in EF 6.1 let’s first take a look at how they work in the Database First scenario. In Database First you define methods that drive the execution of store functions in your context class (typically these methods are generated for you when you create a model from the database). You use these methods in your app by calling them directly or, in the case of TVFs, in LINQ queries. One thing that is worth mentioning is that these methods need to follow certain conventions, otherwise EF won’t be able to use them. Apart from the methods defined in your context class, store functions must also be specified in the artifacts describing the model – SSDL, CSDL and MSL (think: edmx). At runtime these artifacts are loaded into the MetadataWorkspace object which contains all the information about the model.
In Code First the model is built from the code when the application starts. Types are discovered using reflection and are configured with the fluent API in the OnModelCreating method, attributes and/or conventions. The model is then loaded to the MetadataWorkspace (similarly to what happens in the Database First approach) and once this is done both Code First and Database First operate in the same way. Note that the model becomes read-only after it has been loaded to the MetadataWorkspace.
Because Database First and Code First converge at the MetadataWorkspace level, enabling discovery of store functions in Code First along with additional model configuration should suffice to add general support for store functions in Code First. Model configuration (and therefore store function discovery) has to happen before the model is loaded to the MetadataWorkspace, otherwise the metadata will be sealed and it will be impossible to tweak the model. There are three ways we can configure the model in Code First – configuration attributes, the fluent API and conventions. Attributes are not rich enough to configure store functions. The fluent API does not have access to mapping. This leaves conventions. Indeed a custom model convention seems ideal – it gives you access to the model which in EF 6.1 not only contains the conceptual and store models but also modifiable mapping information. So, we could create a convention which discovers methods using reflection, then configures the store and conceptual models accordingly and defines the mapping. Methods mapped to store functions have to meet some specific requirements imposed by Entity Framework. The requirements for methods mapped to table valued functions are the following:

  • return type must be an IQueryable<T> where T is a type for which a corresponding EDM type exists – i.e. is either a primitive type that is supported by EF (for instance int is fine while uint won’t work) or a non-primitive type (enum/complex type/entity type) that has been configured (either implicitly or explicitly) in your model
  • method parameters must be of scalar (i.e. primitive or enum) types mappable to EF types
  • methods must be decorated with the DbFunctionAttribute whose first argument is the conceptual container name and whose second argument is the function name. The container name is typically the name of the DbContext derived class; however, if you are unsure you can use the following code snippet to get the name:
    Console.WriteLine(
        ((IObjectContextAdapter) ctx).ObjectContext.MetadataWorkspace
            .GetItemCollection(DataSpace.CSpace)
            .GetItems<EntityContainer>()
            .Single()
            .Name);
    
  • the name of the method, the value of the DbFunction.FunctionName and the queryString name passed to the CreateQuery call must all be the same
  • in some cases TVF mapping may require additional details – a column name and/or the name of the database schema. You can specify them using the DbFunctionDetailsAttribute. The column name is required if the method is mapped to a TVF that returns a collection of primitive values. This is needed because EF requires the name of the column containing the values and there is no way of inferring this information from the code – it has to be provided externally by setting the ResultColumnName property of the DbFunctionDetailsAttribute to the name of the column returned by the function. The database schema name needs to be specified if the schema of the TVF being mapped is different from the default schema name passed to the convention constructor, and can be set using the DatabaseSchema property of the DbFunctionDetailsAttribute.

The requirements for methods mapped to stored procedures are less demanding and are the following:

  • the return type has to be ObjectResult<T> where T, similarly to TVFs, is a type that can be mapped to an EDM type
  • you can also specify the name of the database schema if it is different from the default name by setting the DatabaseSchema property of the DbFunctionDetailsAttribute. (Because of how the result mapping works for stored procedures setting the ResultColumnName property has no effect)

The above requirements were mostly about method signatures but the bodies of the methods are important too. For TVFs you create a query using the ObjectContext.CreateQuery method while for stored procedures you just use the ObjectContext.ExecuteFunction method. Below you can find examples for both TVFs and stored procedures (also notice how the parameters passed to store functions are created). In addition the methods need to be members of the DbContext derived type which itself is the generic argument of the convention.
Currently only the simplest result mapping is supported, where the names of the columns returned from the database match the names of the properties of the target type (except for mapping to scalar results). This is actually a limitation in EF Code First where more complicated mappings would currently be ignored in most cases even though they are valid from the MSL perspective. There is a chance of having more complicated mappings enabled in EF 6.1.1 if appropriate changes are checked in by the time EF 6.1.1 ships. From there it should be just one step to enabling stored procedures returning multiple resultsets in Code First.
Now you probably are a bit tired of all this EF mumbo-jumbo and would like to see

The Code

To see the custom convention in action create a new (Console Application) project. Once the project has been created add the EntityFramework.CodeFirstStoreFunctions NuGet package. You can add it either from the Package Manager Console by executing

Install-Package EntityFramework.CodeFirstStoreFunctions -Pre 

command, or using the UI – right click the References node in the Solution Explorer and select “Manage NuGet Packages”. When the dialog opens make sure that the “Include Prerelease” option in the dropdown at the top of the dialog is selected and use “storefunctions” in the search box to find the package. Finally click the “Install” button to install the package.

Code First Store Functions NuGet


Installing EntityFramework.CodeFirstStoreFunctions from UI

After the package has been installed copy and paste the code snippet below into your project. This code demonstrates how to enable store functions in Code First.

public class Customer
{
    public int Id { get; set; }

    public string Name { get; set; }

    public string ZipCode { get; set; }
}

public class MyContext : DbContext
{
    static MyContext()
    {
        Database.SetInitializer(new MyContextInitializer());
    }

    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Conventions.Add(new FunctionsConvention<MyContext>("dbo"));
    }

    [DbFunction("MyContext", "CustomersByZipCode")]
    public IQueryable<Customer> CustomersByZipCode(string zipCode)
    {
        var zipCodeParameter = zipCode != null ?
            new ObjectParameter("ZipCode", zipCode) :
            new ObjectParameter("ZipCode", typeof(string));

        return ((IObjectContextAdapter)this).ObjectContext
            .CreateQuery<Customer>(
                string.Format("[{0}].{1}", GetType().Name, 
                    "[CustomersByZipCode](@ZipCode)"), zipCodeParameter);
    }

    public ObjectResult<Customer> GetCustomersByName(string name)
    {
        var nameParameter = name != null ?
            new ObjectParameter("Name", name) :
            new ObjectParameter("Name", typeof(string));

        return ((IObjectContextAdapter)this).ObjectContext.
            ExecuteFunction<Customer>("GetCustomersByName", nameParameter);
    }
}

public class MyContextInitializer : DropCreateDatabaseAlways<MyContext>
{
    public override void InitializeDatabase(MyContext context)
    {
        base.InitializeDatabase(context);

        context.Database.ExecuteSqlCommand(
            "CREATE PROCEDURE [dbo].[GetCustomersByName] @Name nvarchar(max) AS " +
            "SELECT [Id], [Name], [ZipCode] " +
            "FROM [dbo].[Customers] " +
            "WHERE [Name] LIKE (@Name)");

        context.Database.ExecuteSqlCommand(
            "CREATE FUNCTION [dbo].[CustomersByZipCode](@ZipCode nchar(5)) " +
            "RETURNS TABLE " +
            "RETURN " +
            "SELECT [Id], [Name], [ZipCode] " +
            "FROM [dbo].[Customers] " + 
            "WHERE [ZipCode] = @ZipCode");
    }

    protected override void Seed(MyContext context)
    {
        context.Customers.Add(new Customer {Name = "John", ZipCode = "98052"});
        context.Customers.Add(new Customer { Name = "Natasha", ZipCode = "98210" });
        context.Customers.Add(new Customer { Name = "Lin", ZipCode = "98052" });
        context.Customers.Add(new Customer { Name = "Josh", ZipCode = "90210" });
        context.Customers.Add(new Customer { Name = "Maria", ZipCode = "98074" });
        context.SaveChanges();
    }
}

class Program
{
    static void Main()
    {
        using (var ctx = new MyContext())
        {
            const string zipCode = "98052";
            var q = ctx.CustomersByZipCode(zipCode)
                .Where(c => c.Name.Length > 3);
            //Console.WriteLine(((ObjectQuery)q).ToTraceString());
            Console.WriteLine("TVF: CustomersByZipCode('{0}')", zipCode);
            foreach (var customer in q)
            {
                Console.WriteLine("Id: {0}, Name: {1}, ZipCode: {2}", 
                    customer.Id, customer.Name, customer.ZipCode);
            }

            const string name = "Jo%";
            Console.WriteLine("\nStored procedure: GetCustomersByName '{0}'", name);
            foreach (var customer in ctx.GetCustomersByName(name))
            {
                Console.WriteLine("Id: {0}, Name: {1}, ZipCode: {2}", 
                    customer.Id, customer.Name, customer.ZipCode);   
            }
        }
    }
}

In the code above I use a custom initializer to initialize the database and create a table-valued function and a stored procedure (in a real application you would probably use Code First Migrations for this). The initializer also populates the database with some data in the Seed method. The MyContext class derives from the DbContext class and contains two methods that are mapped to the store functions created in the initializer. The context class also contains the OnModelCreating method where we register the convention which will do all the hard work related to setting up our store functions. The Main method contains code that invokes the store functions created when initializing the database. First, we use the TVF. Note that we compose the query on the function, which means that the whole query will be translated to SQL and executed on the database side. If you would like to see this you can uncomment the line which prints the SQL query in the above snippet and you will see the exact query that will be sent to the database:

SELECT
    [Extent1].[Id] AS [Id],
    [Extent1].[Name] AS [Name],
    [Extent1].[ZipCode] AS [ZipCode]
    FROM [dbo].[CustomersByZipCode](@ZipCode) AS [Extent1]
    WHERE ( CAST(LEN([Extent1].[Name]) AS int)) > 3

(Back to the code.) Next we execute the query and display the results. Once we are done with the TVF we invoke the stored procedure. This is just an invocation because you cannot build queries on top of results returned by stored procedures. If you need any query-like (or other) logic it must live inside the stored procedure itself; otherwise you end up with a LINQ query that runs against materialized results. That’s pretty much the whole app. Just in case, I am pasting the output the app produces below:

TVF: CustomersByZipCode('98052')
Id: 1, Name: John, ZipCode: 98052

Stored procedure: GetCustomersByName 'Jo%'
Id: 1, Name: John, ZipCode: 98052
Id: 4, Name: Josh, ZipCode: 90210
Press any key to continue . . .

Note that in both examples the return types are based on entity types. As I hinted above you can also use complex and scalar types for your results. Take a look at the End-to-End tests in the project itself – all scenarios are tested there.

That’s about what’s in alpha, so you may ask:

what’s next?

If you look at the code there are a few TODOs. One of the most important is support for nullable parameters. I am also thinking of removing the limitation where the method name in your DbContext derived class must match the name of the TVF in the database. If workitem 2192 is resolved for the next version of EF I will be able to add support for non-default mapping. In addition I think it is a very short step from workitem 2192 to supporting stored procedures returning multiple resultsets. Not sure how useful it would be but it would be cool to see support for this feature which currently is kind of a dead feature because it is supported neither by Code First nor by EF tooling.

Anything else?
The project is open source and is hosted on CodePlex. You can get the sources from here. Try it and let me know what you think.

Second Level Cache for EF 6.1

See what’s new in Beta here.

Currently Entity Framework does not natively support second level caching. For pre-EF6 versions you could use the EF Caching Provider Wrapper but due to changes to the EF provider model in EF6 it does not work with the newest versions of EF. In theory it would be possible to recompile the old caching provider against EF6 but, unfortunately, this would not be sufficient. A number of new features breaking some assumptions made in the old caching provider (e.g. support for passing an open connection when instantiating a new context) were introduced in EF6, resulting in the old caching provider not working correctly in these scenarios. Also, dependency injection and code-based configuration introduced in EF6 simplify registering wrapping providers, which was rather cumbersome (especially for Code First) in the previous versions. To fill the gap I created the Second Level Cache for EF 6.1 project. It’s a combination of a wrapping provider and a transaction handler which work together on caching query results and keeping the cache in a consistent state. In a typical scenario query results are cached by the wrapping caching provider and are invalidated by the transaction handler if an entity set on which cached query results depend was modified (note that data modification in EF always happens in a transaction, hence the need for the transaction handler).
Using the cache is easy but requires a couple of steps. First you need to install the EntityFramework.Cache NuGet package. There are two ways to do this from Visual Studio. One is to use the UI – right click the References node in the Solution Explorer and select the Manage NuGet Packages option. This will open a dialog you use to install NuGet packages. Since the project is currently in the alpha stage you need to select “Include Prerelease” in the dropdown at the top. Then enter “EntityFramework.Cache” in the search window and, once the package appears, click the “Install” button.



Installing EntityFramework.Cache from UI


You can also install the package from the Package Manager Console. Open the Package Manager Console (Tools → NuGet Package Manager → Package Manager Console) and execute the following command:
Install-Package EntityFramework.Cache -Pre
(-Pre allows installing pre-release packages).
Note that the package depends on Entity Framework 6.1. If you don’t have Entity Framework 6.1 in your project it will be automatically installed when you install the EntityFramework.Cache package.
Once the package is installed you need to tell EF to use caching by configuring the caching provider and the transaction handler. You do this by creating a configuration class derived from the DbConfiguration class. In the constructor of your DbConfiguration derived class you need to register the transaction interceptor and a Loaded event handler which will be responsible for replacing the provider. Here is an example of how to set up caching using the built-in in-memory cache implementation and the default caching policy:

public class Configuration : DbConfiguration
{
  public Configuration()
  {
    var transactionHandler = new CacheTransactionHandler(new InMemoryCache());

    AddInterceptor(transactionHandler);

    Loaded +=
      (sender, args) => args.ReplaceService<DbProviderServices>(
        (s, _) => new CachingProviderServices(s, transactionHandler, 
          new DefaultCachingPolicy()));
  }
}

The default caching policy used above is part of the project and allows caching all query results regardless of their size or the entity sets used to obtain the results. Both sliding and absolute expiration times in the default caching policy are set to maximum values, therefore items will be cached until an entity set the results depend on is modified. If the default caching policy is not suitable for your needs you can create a custom caching policy in which you can limit what will be cached. To create a custom policy you just need to derive a class from the CachingPolicy class, implement the abstract methods, and pass the policy to the CachingProviderServices during registration. When implementing a custom caching policy there is one thing to be aware of – expired entries are removed from the cache lazily. This means that an entry will be removed by the caching provider only when the caching provider tries to read the entry and finds it expired. This is not extremely helpful (especially because, since the item is expired, the provider will query the database to get fresh results which will then be put in the cache – so effectively the expired entry will be replaced with a new entry) but it lets the user decide what the best strategy for cleaning the cache is in their case. For instance, in the InMemoryCache implementation included in the project I created a Purge method which could be called periodically to remove stale cache entries.
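
For instance, a periodic cleanup could be as simple as the sketch below (it assumes the Purge method takes no arguments – check the InMemoryCache source for the exact shape):

var cache = new InMemoryCache();

// remove stale (expired) entries from the cache every 5 minutes
var purgeTimer = new System.Threading.Timer(
    _ => cache.Purge(),
    null,
    TimeSpan.FromMinutes(5),
    TimeSpan.FromMinutes(5));
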
The project also includes a simple implementation of a cache which caches query results in memory. This is just a sample implementation and if you would like to use a different caching mechanism you are free to do so – you just need to implement the ICache interface. This interface is a slightly modified version of the interface that shipped with the original EF Caching Provider Wrapper which should make moving existing apps using the old caching solution to EF6 easier.

As the old saying goes “A program is worth a 1000 words“, so let’s take a look at the second level cache in action. Here is a complete sample app which is using the cache:

public class Airline
{
  [Key]
  public string Code { get; set; }

  public string Name { get; set; }

  public virtual ICollection<Aircraft> Aircraft { get; set; }
}

public class Aircraft
{
  public int Id { get; set; }

  public string EquipmentCode { get; set; }

  public virtual Airline Airline { get; set; }
}

public class AirTravelContext : DbContext
{
  static AirTravelContext()
  {
    Database.SetInitializer(new DropCreateDatabaseIfModelChanges<AirTravelContext>());  
  }

  public DbSet<Airline> Airlines { get; set; }

  public DbSet<Aircraft> Aircraft { get; set; }
}

public class Configuration : DbConfiguration
{
  public Configuration()
  {
    var transactionHandler = new CacheTransactionHandler(Program.Cache);

    AddInterceptor(transactionHandler);

    Loaded +=
      (sender, args) => args.ReplaceService<DbProviderServices>(
        (s, _) => new CachingProviderServices(s, transactionHandler));
  }
}
  
class Program
{
  internal static readonly InMemoryCache Cache = new InMemoryCache();

  private static void Seed()
  {
    using (var ctx = new AirTravelContext())
    {
      ctx.Airlines.Add(
        new Airline
        {
          Code = "UA",
          Name = "United Airlines",
          Aircraft = new List<Aircraft>
          {
            new Aircraft {EquipmentCode = "788"},
            new Aircraft {EquipmentCode = "763"}
          }
        });

      ctx.Airlines.Add(
        new Airline
        {
          Code = "FR",
          Name = "Ryan Air",
          Aircraft = new List<Aircraft>
          {
            new Aircraft {EquipmentCode = "738"},
          }
        });

      ctx.SaveChanges();
    }
  }

  private static void RemoveData()
  {
    using (var ctx = new AirTravelContext())
    {
      ctx.Database.ExecuteSqlCommand("DELETE FROM Aircraft");
      ctx.Database.ExecuteSqlCommand("DELETE FROM Airlines");
    }
  }

  private static void PrintAirlinesAndAircraft()
  {
    using (var ctx = new AirTravelContext())
    {
      foreach (var airline in ctx.Airlines.Include(a => a.Aircraft))
      {
        Console.WriteLine("{0}: {1}", airline.Code, 
          string.Join(", ", airline.Aircraft.Select(a => a.EquipmentCode)));
      }
    }
  }

  private static void PrintAirlineCount()
  {
    using (var ctx = new AirTravelContext())
    {
      Console.WriteLine("Airline Count: {0}", ctx.Airlines.Count());
    }
  }

  static void Main(string[] args)
  {
    // populate and print data
    Console.WriteLine("Entries in cache: {0}", Cache.Count);
    RemoveData();
    Seed();
    PrintAirlinesAndAircraft();

    Console.WriteLine("\nEntries in cache: {0}", Cache.Count);
    // remove data bypassing cache
    RemoveData();
    // not cached - goes to the database and counts airlines
    PrintAirlineCount();
    // prints results from cache
    PrintAirlinesAndAircraft();
    Console.WriteLine("\nEntries in cache: {0}", Cache.Count);
    // repopulate data - invalidates cache
    Seed();
    Console.WriteLine("\nEntries in cache: {0}", Cache.Count);
    // print data
    PrintAirlineCount();
    PrintAirlinesAndAircraft();
    Console.WriteLine("\nEntries in cache: {0}", Cache.Count);
  }
}

The app is pretty simple – a couple of entities, a context class, code-based configuration (should look familiar), a few methods and the Main() method which drives the execution. One method that is worth mentioning is the RemoveData() method. It removes the data from the tables using the ExecuteSqlCommand() method, therefore bypassing the entire Entity Framework update pipeline including the caching provider. This shows that the cache really works but is at the same time a kind of warning – if you bypass EF you will need to make sure the cache is in a consistent state or you can get incorrect results. Running the sample app results in the following output:

Entries in cache: 0
FR: 738
UA: 788, 763
Entries in cache: 3

Removing data directly from the database
Airline Count: 0
FR: 738
UA: 788, 763

Entries in cache: 4

Entries in cache: 2
Airline Count: 2
FR: 738
UA: 788, 763

Entries in cache: 4
Press any key to continue . . .

Let’s analyze what’s happening here. On line 1 we just report that the cache is empty (no entries in the cache). Seems correct – no queries have been sent to the database so far. Then we remove any stale data from the database, seed the database, and print the contents of the database (lines 2 and 3). Printing the contents of the database requires querying the database, which should result in some entries in the cache. Indeed, on line 4 three entries are reported to be in the cache. Why three if we sent just one query? If you peek at the cache with the debugger you will see that two of these entries are for the HistoryContext, so three seems correct. Now (line 6) we delete all the data in the database, so when we query the database for the airline count (line 7) the result is 0. However, even though we don’t have any data in the database, we can still print data on lines 8 and 9. This data comes from the cache – note that when we deleted the data we used the ExecuteSqlCommand() method which bypassed the EF update pipeline and, as a result, no cache entries were invalidated. On line 11 we again report the number of items in the cache. This time the cache contains four entries – two entries for the HistoryContext related queries, one entry for the data that was initially cached and one entry for the result of the query where we asked for the number of airlines in the database. Then we add the data to the database again and print the number of items in the cache once more (line 13). This time the number of entries is two. This is because inserting the data into the database invalidated cached results for queries that used either the Airlines or the Aircraft entity sets and, as a result, only the results related to the HistoryContext remained in the cache. We again query the database for the number of airlines (line 14) and print all the data from the database. The results of both queries are added to the cache and we end up having four entries in the cache again (line 18).
That’s pretty much it. Try it out and let me know what you think (or report bugs). You can also look at and play with the code – it is available on the project website on CodePlex.
Note that this is an alpha version and there is still some work remaining before it can be called done – most notably support for async methods and adding some missing overrides for the ADO.NET wrapping classes. I am also considering adding support for TransactionScope.

Using Pre-Generated Views Without Having To Pre-Generate Views (EF6)

To be able to work with different databases Entity Framework abstracts the store as a set of views which are later used to create queries and CUD (Create/Update/Delete) commands. EF generates views the first time it needs them (typically on the first query). For bigger or more complicated models view generation may take significant time and negatively impact the startup time of the application. This can be worked around by generating views at design time (e.g. using EF Power Tools or my T4 templates for generating views) and compiling them into your EF app.

Notes:

  • There were significant improvements to view generation in EF6 (and more are coming), so make sure using pre-generated views visibly improves the startup time of your app – if it does not, don’t bother.
  • In EF6 (6.0.0 and 6.0.1) there were several performance regressions (mostly in Code First) that affected the startup time of apps. They were not related to view generation (which in fact in most cases was faster than in EF5) so adding pre-generated views does not help. Many of these regressions were fixed in 6.0.2. Always make sure you are running the latest bits and again check if view generation helps – if it does not, don’t bother.

So, for bigger models pre-generated views can help. But they are a bit bothersome: you need to remember to regenerate them each time your model changes, they add to the build time, and they make your project more complicated. Not having to worry about views (especially since they are just an implementation detail that ideally would not even be exposed publicly if view generation did not impact startup performance) makes the life of the developer much easier. Can we somehow combine these two worlds (and not in a way where the manual work done at design time impacts the startup time of the application :))? For instance, what if EF could check at runtime whether up-to-date views are available, and use them if they are, or – if no usable views are available – generate new views and store them so that they can be used the next time the application is run? This way the first time your app starts it will spend some time creating views (this time has to be spent somewhere anyway), but then (assuming the model has not changed) it will just use the views created on the first run, which should result in a faster startup for subsequent runs. Sounds nice, doesn’t it? And it actually is now possible. With the EFInteractiveViews NuGet package I published recently you can achieve all this with just a couple of lines of code!

So, how does this work?

Assuming you have an existing EF6 application, add the EFInteractiveViews NuGet package to your project – you can right click on the References node in your project, select “Manage NuGet Packages” and add the NuGet package from the UI, or use the following command from the Package Manager Console:

Install-Package EFInteractiveViews

Once you add the package you need to decide where you want to store your views – you can use either a file or the database. If you want to store views in a file add the following code to your app:

using (var ctx = new MyContext())
{
    InteractiveViews
        .SetViewCacheFactory(
            ctx, 
            new FileViewCacheFactory(@"C:\MyViews.xml"));
}

Note that this code has to be executed before EF needs views (typically before you send the first query or call the .SaveChanges() method for the first time), so make sure it is in the right place (for instance in a static constructor). The above code registers a view cache factory. Now, whenever EF needs views, it will first ask the registered view cache factory for them. The FileViewCacheFactory tries to load views from the file – if the file exists and the views match the model, it returns the views and EF does not need to generate them. If, however, the file does not exist or the views are outdated, the FileViewCacheFactory will generate views, save them to the file, and return them to EF.
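For example, here is a minimal sketch of the static constructor approach (MyContext stands in for your own context type, and the file path is arbitrary):

public class MyContext : DbContext
{
    // A static constructor runs exactly once, before the first instance
    // is used - i.e. before EF can need views for a query or SaveChanges().
    static MyContext()
    {
        using (var ctx = new MyContext())
        {
            InteractiveViews
                .SetViewCacheFactory(
                    ctx,
                    new FileViewCacheFactory(@"C:\MyViews.xml"));
        }
    }

    // ... your DbSet properties ...
}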
Storing views in a file is kind of cool but since this is an EF app it has to have a database. Therefore storing views in the database would be even cooler. And this actually is possible – you just need to register a different view cache factory included in the package:

using (var ctx = new MyContext())
{
    InteractiveViews
        .SetViewCacheFactory(
            ctx, 
            new SqlServerViewCacheFactory(ctx.Database.Connection.ConnectionString));
}

Similarly to the FileViewCacheFactory, you need to register the SqlServerViewCacheFactory before EF needs views. One more assumption the SqlServerViewCacheFactory makes is that the database exists (if it does not and you are using Code First you can create it with ctx.Database.Initialize(force)). The SqlServerViewCacheFactory creates a table in your database to store views. By default the table will be called __ViewCache and will live in the dbo schema. You can change the defaults by passing the name of the table and/or schema to the SqlServerViewCacheFactory constructor, as sketched below.
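For example (the constructor call below is a guess based on the description above – the values and parameter order are illustrative, so check the actual overloads in the project source):

var factory = new SqlServerViewCacheFactory(
    ctx.Database.Connection.ConnectionString,
    "__MyViewCache",  // custom table name (instead of the default __ViewCache)
    "caching");       // custom schema (instead of the default dbo)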
As you probably guessed, the SqlServerViewCacheFactory class is Sql Server specific. What if you are using a different back end? That’s not a big deal either. Create a new class derived from the DatabaseViewCacheFactory, implement three simple abstract methods – CreateConnection(), ViewTableExists() and CreateViewTable() – and you are done; a rough sketch follows below. Since the EF Interactive Views project is open source, you can take a look at how I did it for SqlServer (and yes, the DatabaseViewCacheFactory uses EF6 under the covers to get and update views in the database).
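To give a feel for what that looks like, here is a rough sketch using SQLite as an example back end. The three method names come from the description above, but the exact abstract signatures, table layout and column names are my guesses – verify against the project source before using this:

using System;
using System.Data.Common;
using System.Data.SQLite;

// Sketch only: assumes DatabaseViewCacheFactory declares these three
// members with roughly these signatures.
public class SQLiteViewCacheFactory : DatabaseViewCacheFactory
{
    private readonly string _connectionString;

    public SQLiteViewCacheFactory(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Returns a connection to the database that stores the serialized views.
    protected override DbConnection CreateConnection()
    {
        return new SQLiteConnection(_connectionString);
    }

    // Checks whether the view cache table has already been created.
    protected override bool ViewTableExists()
    {
        using (var connection = CreateConnection())
        using (var command = connection.CreateCommand())
        {
            connection.Open();
            command.CommandText =
                "SELECT COUNT(*) FROM sqlite_master " +
                "WHERE type = 'table' AND name = '__ViewCache'";
            return Convert.ToInt32(command.ExecuteScalar()) > 0;
        }
    }

    // Creates the table used to store the views (column names are made up).
    protected override void CreateViewTable()
    {
        using (var connection = CreateConnection())
        using (var command = connection.CreateCommand())
        {
            connection.Open();
            command.CommandText =
                "CREATE TABLE __ViewCache " +
                "(MappingHashValue TEXT PRIMARY KEY, Views TEXT NOT NULL)";
            command.ExecuteNonQuery();
        }
    }
}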

Words of precaution

When using interactive views make sure that you are not running different versions of an app (with different models) against the same database (Code First typically does not allow this unless you have disabled the initializer). If you do, one version of the app will create views and the other will not be able to use them, so it will overwrite them – which in turn makes the views unusable for the version that ran originally. In that case there is no benefit to using interactive views at all.

That’s pretty much it. As I mentioned above, the project is open source and you can find the code on codeplex at https://efinteractiveviews.codeplex.com/. Just in case, here is the link to the NuGet package: https://www.nuget.org/packages/EFInteractiveViews. Use, enjoy, comment and file bugs!

What changed in the EF Tooling in Visual Studio 2013 (and Visual Studio 2012 Out Of Band)

The recently shipped version of Visual Studio 2013 contains a new version of the EF Tooling (a standalone version for Visual Studio 2012 is also available for download). The main goal of this release was to teach the EF designer how to talk to both EF5 and EF6. It was quite an undertaking given how much EF6 differs from EF5 (binary incompatible, no correspondence between provider models and the metadata workspace, EF6 runtime assemblies no longer in the GAC, just to name a few differences) and it required changes to the existing functionality. Some of these changes may be confusing to people who used previous versions of the EF Tooling. The goal of this post is to take a look at what has changed, why, and how it works now.
Before EF6 things were reasonably straightforward – from the EF Tooling standpoint there could be only one version of the EF runtime installed on the box: EF1, EF4 or EF5. In addition, it was easy to determine the version of EF just by looking at the installed version of the .NET Framework – .NET Framework 3.5 meant EF1, .NET Framework 4 meant EF4 and .NET Framework 4.5 meant EF5. Note that the designer did not need to distinguish between EF4.x versions (treated as EF4) or EF5 for .NET Framework 4 (also treated as EF4) since they were built on top of the core EF runtime version residing in the GAC. The installed (GAC’ed) version of the EF runtime automatically implied the latest supported version of Csdl, Ssdl, Msl and Edmx (‘schemas’ as we call them internally) since each version of the core EF runtime introduced a new version of the schemas.

But then EF6 happened – no longer in the GAC, works on both .NET Framework 4 and .NET Framework 4.5 (which also means you can have v3 schemas on .NET Framework 4), binary incompatible with earlier versions of EF, some types moved to new/different namespaces, etc. The only way for the tooling to handle all of this was to break a number of assumptions made in the designer and, in some cases, change how the designer works. Below is a list of the things that changed the most, with a short explanation of why they changed and what to expect:

  • For Model First the code is no longer generated when adding an empty model to the project. This is probably the most confusing change (funnily enough, it was internally reported to me as a bug multiple times even before we shipped the new EF Tooling for the first time in Visual Studio 2013 RC). The reason for this change is simple – since some types in EF6 were moved to different namespaces, the code we used to generate for EF5 no longer works with EF6 (and vice versa). In some cases (e.g. when you don’t have a reference to any EF runtime in your project) it is just impossible to tell whether the user is going to need EF5 or EF6 code. The most interesting part is that the code we used to add when creating an empty model was in reality not that useful before the database was created – yes, you could write code against the generated context/entities, but you would not really be able to run it without the database. Therefore, we changed the workflow a bit: the code generation templates are now added when you create the database from the model instead of when you add an empty model to your project, because the wizard will make you select the version of EF you would like to use in your project if there isn’t one already.
  • Code Generation Strategy disabled for EF6 models. Again, this is partially related to the changes to types and namespaces – code generated for EF5 would not work with EF6. In addition, the ObjectContext based context and EntityObject based entities for EF5 (and earlier) applications are generated using System.Data.Entity.Design.dll, which is part of the .NET Framework and does not fit into the EF6 shipping model. Therefore, instead of updating System.Data.Entity.Design.dll, we decided to support only T4 template based code generation for EF6. EF Tooling ships with T4 templates for generating a DbContext based context and POCO entities. If, for some reason, you absolutely need an ObjectContext based context, the EF6 templates have been posted on VS Gallery. For EF5 (or earlier) applications the Code Generation Strategy drop-down is not blocked and you can choose between T4 and LegacyObjectContext (yes, we changed the option names as well because they were a bit ridiculous in VS2012, where ‘None’ basically meant T4 and ‘Default’ meant ObjectContext but the ‘None’ option was the default).
  • Impossible to select the version of Entity Framework. To determine the applicable version of EF, the designer looks for references to EntityFramework.dll and System.Data.Entity.dll in the project (note that EntityFramework.dll wins over System.Data.Entity.dll when determining the version). If it cannot find any, the user has to select the version in the wizard. Otherwise the latest version of EF found in the project is used and the selection will not be possible (the radio buttons will be disabled or the wizard page will not be shown at all).
  • Using third party providers. In the simple days of EF5 (and earlier), EF providers were registered globally and you could get one basically anytime and anywhere. Those days are long gone. In EF6, EF providers don’t have a global registration point. Rather, they are registered in the config file or using code based configuration. When you select a connection in the wizard, the designer tries to find a reference to a corresponding provider in your project. If it cannot find one, you won’t be able to use EF6 – instead you will be asked to close the wizard, install the provider and restart the wizard (note that this does not apply to the provider for Sql Server – EF Tooling contains the EF6 provider for Sql Server, which in the majority of cases makes things easier for the user).
  • Retargeting. Depending on the version of Entity Framework referenced in the project, changing the target .NET Framework version for the project may or may not change the edmx file. If the project references System.Data.Entity.dll and does not reference EF6, retargeting will result in upgrading/downgrading the version of the edmx file to match the latest supported schema version for the given .NET Framework version and, as a result, EF runtime version. If the project contains a reference to EF6, toggling the target between .NET Framework 4 and .NET Framework 4.5 will not change the edmx version, since EF6 is supported on both versions and understands v3 schemas. Targeting .NET Framework 3.5 should always downgrade to v1, since EF1 is the only version available on .NET Framework 3.5. One caveat to retargeting is that you should always update your NuGet packages afterwards. This is because EF (and potentially other packages) has a different assembly for .NET Framework 4 and for .NET Framework 4.5 (one of the reasons is that Async is not natively supported on .NET Framework 4), and even though both are shipped in the same NuGet package, NuGet does not replace project references after changing the target .NET Framework version. This leads either to missing functionality (e.g. Async not available when using the EF6 assembly for .NET Framework 4 on .NET Framework 4.5) or to weird build errors saying you don’t have a reference to EntityFramework.dll (only running MsBuild /v:diag will tell you that the reference you have was ignored because the referenced EntityFramework.dll was built for a later version of the .NET Framework than the one the project currently targets – this happens when using the EF6 assembly for .NET Framework 4.5 on .NET Framework 4).
  • Using EF6 and EF5/EF4 in one project. Initially we did not want to support this scenario at all. However, not only is it pretty easy to end up in such a situation, but supporting it is also helpful for people who want to gradually move bigger, legacy projects from EF5 to EF6, so finally we decided to do what we could to support mixed EF versions in one project. The most important thing the designer has to do to handle this scenario correctly is to ensure that the right provider is used for a given edmx file. Also, the version of the edmx file needs to be in sync with the referenced version of the EF runtime. This seems easy at first, but there are cases where things may behave a bit oddly. A canonical example is a project targeting .NET Framework 4 without a reference to any Entity Framework runtime. When you create a new model, it will be a v2 edmx, since that is the latest schema version supported on .NET Framework 4. Now you create a database from your model and select EF6. Since EF6 supports v3 schemas, the version of your edmx file will be changed to v3. Because the number of dimensions in the versioning scenarios ended up being bigger than we originally expected, with a number of exceptions on top of that, it was initially hard to tell what the outcome of each combination should be. To address this I created a set of rules which were much easier to understand and follow when working on versioning scenarios. You can find these rules here.
  • In-the-box packages are not the latest available. EF Tooling contains EF6 NuGet packages that will be added to the project, if needed, when adding a new model. However, since EF Tooling ships with Visual Studio, which has a different cadence than EF, the included packages may not be the latest ones. In fact, in the case of EF6 we fixed a few (mostly performance related) bugs after Visual Studio 2013 was locked down. As a result, version 6.0.1 of EF shipped on the same day as Visual Studio 2013, but the version of EF that ships with Visual Studio 2013 is 6.0.0 and does not contain the fixes included in 6.0.1. Moreover, 6.0.2 is on its way and again – it will be a runtime only release. Updating the EF package is as easy as running the following command from the Package Manager Console (Tools → Library Package Manager → Package Manager Console):
    Update-Package EntityFramework