Arduino and the TM1637 4-digit seven-segment display

In one of my Arduino projects I needed to show some numbers. Luckily, a 4-digit display was lying around on my desk. Because it had a built-in driver, which would save me from all the hassle with shift registers, it looked like a perfect fit. I checked the chip the display was using and it was labeled “TM1637”. I was already familiar with the TM1638 chip (for more details see my recent post about using a TM1638 based board with Arduino), so I thought making the TM1637 work should be easy. Unfortunately, while there are some similarities between the chips, getting the TM1637 to work took me longer than I had expected. When I finally aligned all the moving pieces I decided to create a standalone sample which will hopefully be helpful to other people playing with TM1637 based displays. To make the sample a good demo I took my Nokia LCD clock sample, replaced the Nokia LCD with the TM1637 display, updated the code accordingly and ended up with a TM1637 clock:

TM1637 clock


Let’s start by taking a closer look at the TM1637 display. It has four pins: VCC, GND, DIO and CLK. Similarly to the TM1638, the TM1637 uses a serial protocol to communicate with the display. While the commands themselves are very similar to the ones used by the TM1638 board, sending the data is a little different. The display does not have the STB pin the TM1638 board had, so separating commands and arguments works differently. A start signal needs to be sent before sending a command to the display, and a stop signal once the command has been sent. If a command has arguments, another start signal needs to be sent, and after all arguments have been sent, the stop signal again. The start signal is just setting the CLK and DIO pins to HIGH and then setting them to LOW. The stop signal is setting the CLK and DIO pins to LOW and then to HIGH. The CLK pin should not change state more frequently than 250kHz, so I added a little delay between state changes. Here is what my start and stop functions look like:

void start(void) 
{ 
    digitalWrite(clock,HIGH);//send start signal to TM1637 
    digitalWrite(data,HIGH); 
    delayMicroseconds(5); 
 
    digitalWrite(data,LOW); 
    digitalWrite(clock,LOW); 
    delayMicroseconds(5); 
} 
 
void stop(void) 
{ 
    digitalWrite(clock,LOW); 
    digitalWrite(data,LOW); 
    delayMicroseconds(5); 

    digitalWrite(clock,HIGH); 
    digitalWrite(data,HIGH); 
    delayMicroseconds(5); 
} 

The display has three commands:

  • initialize the display and set brightness
  • display value at a given position
  • display values starting from a given position

The first command just initializes the display and sets the brightness. This is done by sending the %1000abbb value where the a bit activates (if set to 1) or deactivates (if set to 0) the display and the bbb bits control the brightness (with 000 being the darkest and 111 being the brightest).
The second command allows controlling a single digit of the display. It requires sending 3 bytes to the display: 0x44 %110000aa X where the aa bits in the second byte are used to select the position of the digit to control while X is the value to be displayed encoded in the standard 7 segment coding (as explained here).
Finally, the last command is used to control multiple digits and can consist of up to 6 bytes. Typically you want to use all the digits, in which case you send the following bytes to the display – 0x40 0xC0 W X Y Z. The first byte is the command identifier, the second byte tells the display to start at the first position and W X Y Z are the values to be displayed on subsequent positions (again in the standard 7 segment encoding).
Here is a short summary of the commands:

START (0x88|%00000bbb) STOP initialize the display and set brightness to %bbb
START 0x44 STOP START (0xC0|%000000aa) X STOP display value X at position %aa
START 0x40 STOP START 0xC0 W X Y Z STOP display values W X Y Z starting from position 0
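Under these rules the command bytes are just OR-ed bit fields. Here is a minimal sketch of how they can be composed in plain C++ (the helper names are mine, not from the sample code):

```cpp
#include <cstdint>

// TM1637 command %1000abbb: bit 3 (a) switches the display on/off,
// bits 0-2 (bbb) set the brightness (000 darkest, 111 brightest).
uint8_t brightnessCommand(bool on, uint8_t brightness)
{
    return 0x80 | (on ? 0x08 : 0x00) | (brightness & 0x07);
}

// Address byte %110000aa for the single-digit command: aa selects the digit.
uint8_t positionByte(uint8_t position)
{
    return 0xC0 | (position & 0x03);
}
```

For example, activating the display at maximum brightness yields 0x8F, and the address byte for the third digit (position 2) is 0xC2.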

When sending data to the display, timing is the tricky part. Apparently (if I understood the machine translation of the data sheet, which is in Chinese, correctly), the CLK pin should not change state more frequently than 250kHz. This means that I could not use the Arduino shiftOut function because it does not allow inserting a delay after setting the DIO pin. Instead I had to come up with my own function (I called it writeValue) that emulates shiftOut but adds a delay after setting the DIO pin. (Yes, it did make me think that my other projects may have some kind of issue due to writing data too fast but, hey, they worked!) Another thing is that the chip acknowledges that it received a byte successfully by pulling the DIO pin LOW after each byte of data has been written. I have never seen it fail and am not even sure what I would be supposed to do if it did. In my little projects this probably does not matter too much – I am constantly writing to the display, so I hope that a subsequent write will succeed, and therefore even though the writeValue function returns a result I am ignoring it. The writeValue function looks as follows:

bool writeValue(uint8_t value) 
{ 
    for(uint8_t i = 0; i < 8; i++) 
    { 
        digitalWrite(clock, LOW); 
        delayMicroseconds(5); 
        digitalWrite(data, (value & (1 << i)) >> i); 
        delayMicroseconds(5); 
        digitalWrite(clock, HIGH); 
        delayMicroseconds(5); 
    } 
 
    // wait for ACK 
    digitalWrite(clock,LOW); 
    delayMicroseconds(5); 
 
    pinMode(data,INPUT); 
 
    digitalWrite(clock,HIGH); 
    delayMicroseconds(5); 
 
    bool ack = digitalRead(data) == 0; 
    
    pinMode(data,OUTPUT); 

    return ack; 
}
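The expression `(value & (1 << i)) >> i` in the loop above isolates bit i, so the byte goes out least-significant bit first. The same bit extraction can be sketched as a standalone function (the helper name is mine, for illustration):

```cpp
#include <cstdint>
#include <vector>

// Emit the bits of a byte LSB-first - the order writeValue shifts them out.
std::vector<uint8_t> bitsLsbFirst(uint8_t value)
{
    std::vector<uint8_t> bits;
    for (uint8_t i = 0; i < 8; i++)
    {
        // Isolating bit i; equivalent to (value >> i) & 1.
        bits.push_back((value & (1 << i)) >> i);
    }
    return bits;
}
```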

Based on this information I created a TM1637 clock as shown on the photo above. I connected a TM1637 display to Arduino as follows:

Arduino              TM1637 display
PIN #7 ------------------ CLK
PIN #8 ------------------ DIO
3.3V   ------------------ VCC
GND    ------------------ GND

The code is on github - clone it and try it for yourself.

Using a TM1638 based board with Arduino

At long last I set out to work on one of my Arduino projects when I realized that I didn’t have any 4-digit 7 segment LED display at home. Since it was quite an important part of the project I went to eBay and started looking for something when I found this:
TM1638 module
This is an “8-Bit LED 8-Bit Digital Tube 8-Bit Key TM1638”. It seemed to be a very good fit for my project since it not only had the display but also some buttons – something I needed too. In addition it had only three control lines, which would simplify things a lot by sparing me from dealing with shift registers on my own – something I would have had to do had I used the cheapest 7 segment LED display. When I got the board I was a little baffled. It was just a piece of hardware without any documentation. I deciphered the symbol on the chip and went on a quest to find some information about how it worked. I found a data sheet but it was mostly in Chinese. Unfortunately 我的中文不好 (my Chinese is not good), so I only skimmed it and continued looking. I quickly found that there is a library I could use. However I did want to learn how the thing I bought actually worked, so I spent some more time looking for information about the board. Finally, I found this page which turned out to be the most helpful and allowed me to start experimenting.
As I already mentioned the board has just 3 control pins plus power and ground. The control pins are strobe, clock and data. The strobe and clock pins are only OUTPUT while the data pin can be both OUTPUT and INPUT. The strobe pin is used when sending data to the board – you set the strobe pin to LOW before you start sending data – one or more bytes – and then set the strobe pin back to HIGH. Note that there is just one data pin which means the data is being sent 1 bit at a time. This is where the clock pin comes into play. When sending data you set the clock pin to LOW then you set the data pin and set the clock pin back to HIGH to commit the bit value. You are probably already familiar with this pattern (if not take a look at this post) – it is the standard way of sending data with shift registers and therefore we can just use the standard shiftOut function to send 8 bits of data with just one line of code.
The data sent to the board follows a protocol where the first byte tells the board what we want to do (I call it a ‘command’) and is followed by zero or more bytes that are arguments for the selected function. Arguments are sent separately (i.e. the strobe pin needs to be set to HIGH after sending the command and back to LOW before sending the arguments). The board has 4 functions:

  • activate/deactivate board and initialize display
  • write a byte at specific address
  • write bytes starting from specific address
  • read buttons

To activate the board and set the display brightness we use the 1000abbb (0x8?) command where the bit marked as a is used to activate/deactivate the board and bits marked as bbb are used to set the brightness of the display. For example to activate the board and set the brightness of the display to the maximum value we need to send 0x8f. This function does not have any arguments.
To write a byte at a specific address we send the 01000100 (0x44) command followed by the address in the form of 1100aaaa (the aaaa bits denote the location we want to write to) followed by the value. For example, to write the value 0x45 at address 0x0a we would have to send the following sequence of bytes: 0x44 0xca 0x45.
If we want to write values at consecutive addresses (very helpful to reset the board) we would send 01000000 (0x40) followed by the starting address (again in the form of 1100aaaa) followed by the values we want to write. For instance if we send 0x40 0xc0 0x00 0x01 0x02 0 would be written at address 0, 1 would be written at address 1 and 2 would be written at address 2. Note that we have 4 bits to select the address which means there are sixteen locations that can be written to. If you continue writing after reaching the address 0x0f it will wrap and you will start again from address 0x00.
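The wrap-around behavior described above can be modeled with a one-liner (a hypothetical helper, just to illustrate the arithmetic):

```cpp
#include <cstdint>

// Effective TM1638 address after auto-incrementing n times from start:
// there are 16 locations, so the counter wraps from 0x0f back to 0x00.
uint8_t effectiveAddress(uint8_t start, uint8_t n)
{
    return (start + n) % 16;
}
```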
To read the buttons we send the 01000010 (0x42) command, set the data pin as INPUT and read 4 bytes containing the button status.

Command Arguments Description
0x8? (1000abbb) (none) activate board (bit a), set brightness (bits b)
0x44 (01000100) 0xc? 0x?? write value 0x?? at location 0xc? (single address mode)
0x40 (01000000) 0xc? 0x?? 0x?? 0x?? write values 0x?? starting from location 0xc? (address auto increment mode)
0x42 (01000010) N/A read buttons

Now we know we can write values to one of the 16 locations. This is how we turn LEDs on and control the display. The board has two 4 digit 7 segment LED displays and 8 LEDs. Each of them has a dedicated address to which a value needs to be written to control the corresponding item. For instance, to turn on the first LED we would write 1 at address 0x01. Below is a list of locations with short explanations.

Address Description
0x00 (0000) display #1
0x01 (0001) LED #1 00000001 – red, 00000010 – green
0x02 (0010) display #2
0x03 (0011) LED #2 00000001 – red, 00000010 – green
0x04 (0100) display #3
0x05 (0101) LED #3 00000001 – red, 00000010 – green
0x06 (0110) display #4
0x07 (0111) LED #4 00000001 – red, 00000010 – green
0x08 (1000) display #5
0x09 (1001) LED #5 00000001 – red, 00000010 – green
0x0a (1010) display #6
0x0b (1011) LED #6 00000001 – red, 00000010 – green
0x0c (1100) display #7
0x0d (1101) LED #7 00000001 – red, 00000010 – green
0x0e (1110) display #8
0x0f (1111) LED #8 00000001 – red, 00000010 – green

(You might have noticed that according to the chart LEDs can be either red or green. I am cheap so I got the cheapest board I could find on ebay and it turned out it had only red LEDs so I was not able to test the green color.)
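The pattern in the table is simple: display digits sit at even addresses and LEDs at odd ones. A sketch of the mapping (the helper names are mine):

```cpp
#include <cstdint>

// Display digit n (0-7) lives at address 2n, LED n (0-7) at address 2n+1.
uint8_t displayAddress(uint8_t n) { return n * 2; }
uint8_t ledAddress(uint8_t n)     { return n * 2 + 1; }
```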

We now have all the information needed to breathe some life into the board. First we need to connect the TM1638 based board to our Arduino board. This is how I did it:

Arduino              TM1638 based board
3.3V   ------------------ VCC
GND    ------------------ GND
PIN #7 ------------------ STB
PIN #8 ------------------ DIO
PIN #9 ------------------ CLK

The setup function needs to activate and reset the board. For readability I created a helper function for sending commands and a separate function for resetting the board. Here is what the code to set up the board looks like:

const int strobe = 7;
const int clock = 9;
const int data = 8;

void sendCommand(uint8_t value)
{
  digitalWrite(strobe, LOW);
  shiftOut(data, clock, LSBFIRST, value);
  digitalWrite(strobe, HIGH);
}

void reset()
{
  sendCommand(0x40); // set auto increment mode
  digitalWrite(strobe, LOW);
  shiftOut(data, clock, LSBFIRST, 0xc0);   // set starting address to 0
  for(uint8_t i = 0; i < 16; i++)
  {
    shiftOut(data, clock, LSBFIRST, 0x00);
  }
  digitalWrite(strobe, HIGH);
}

void setup()
{
  pinMode(strobe, OUTPUT);
  pinMode(clock, OUTPUT);
  pinMode(data, OUTPUT);

  sendCommand(0x8f);  // activate and set brightness to max
  reset();
}

Here is how this works. The code starts executing from the setup function. First we set pins 7, 8 and 9 as output pins. Next we activate the board and set the brightness to the maximum value by sending 0x8f. Finally we reset the board by clearing all the memory locations. We do it by setting the board to the address auto increment mode (0x40), selecting 0 as the starting address (0xc0) and sending 0 sixteen times. Now that the board is ready to work with, let’s display something. An easy thing to display is 8. in the first and last digit positions while lighting the 3rd and 6th LEDs. To do that we will use the single address mode since the locations we are going to write to are not consecutive. Our loop function looks as follows:

void loop()
{
  sendCommand(0x44);  // set single address

  digitalWrite(strobe, LOW);
  shiftOut(data, clock, LSBFIRST, 0xc0); // 1st digit
  shiftOut(data, clock, LSBFIRST, 0xff);
  digitalWrite(strobe, HIGH);

  digitalWrite(strobe, LOW);
  shiftOut(data, clock, LSBFIRST, 0xc5); // 3rd LED
  shiftOut(data, clock, LSBFIRST, 0x01);
  digitalWrite(strobe, HIGH);

  digitalWrite(strobe, LOW);
  shiftOut(data, clock, LSBFIRST, 0xcb); // 6th LED
  shiftOut(data, clock, LSBFIRST, 0x01);
  digitalWrite(strobe, HIGH);

  digitalWrite(strobe, LOW);
  shiftOut(data, clock, LSBFIRST, 0xce); // last digit
  shiftOut(data, clock, LSBFIRST, 0xff);
  digitalWrite(strobe, HIGH);
}

You can find the entire sample in the repo on github.
Displaying 8. is cool but it would be even cooler if we knew the relation between the value sent to the board and what will be shown. The board is using the standard 7 segment coding, so the value sent to the board is a byte with bits coded as follows: [DP]GFEDCBA. Each bit will light one segment as per the image below:
7Seg
So, for instance if you wanted to display A you would have to write 0x77 at the corresponding location.
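Since each bit maps to one segment, the encoding can be built mechanically. Here is a hypothetical helper (not part of the sample code) that assembles a [DP]GFEDCBA byte from segment letters; for A (segments A, B, C, E, F, G) it produces the 0x77 mentioned above:

```cpp
#include <cstdint>
#include <string>

// Build a [DP]GFEDCBA byte from a string of segment letters.
// 'A'..'G' map to bits 0..6; 'P' stands for the decimal point (bit 7).
uint8_t segments(const std::string& segs)
{
    uint8_t value = 0;
    for (char c : segs)
    {
        if (c == 'P') value |= 0x80;           // decimal point
        else          value |= 1 << (c - 'A'); // segment A..G
    }
    return value;
}
```

Writing segments("ABCDEFGP"), i.e. 0xff, at a display location gives the 8. used in the earlier sample.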

Now we know how to control the LEDs and the display. But there is one more thing the board offers – buttons. Reading which buttons are depressed works a little differently from what we have seen so far. First we need to send the 0x42 command, then we set the data pin as an input pin. Finally we need to read 4 bytes from the board (bit by bit). The first byte contains the status for buttons S1 and S5, the second byte for buttons S2 and S6, and so on. If we | (i.e. bitwise `or`) all the bytes, each shifted left by its index, we end up with a byte where each bit corresponds to one button – a bit set to one means that the corresponding button is depressed. Here is a little program (I omitted the setup – it is identical to the first sample) where the board will light the LED over a button that is pressed.

uint8_t readButtons(void)
{
  uint8_t buttons = 0;
  digitalWrite(strobe, LOW);
  shiftOut(data, clock, LSBFIRST, 0x42);

  pinMode(data, INPUT);

  for (uint8_t i = 0; i < 4; i++)
  {
    uint8_t v = shiftIn(data, clock, LSBFIRST) << i;
    buttons |= v;
  }

  pinMode(data, OUTPUT);
  digitalWrite(strobe, HIGH);
  return buttons;
}

void setLed(uint8_t value, uint8_t position)
{
  pinMode(data, OUTPUT);

  sendCommand(0x44);
  digitalWrite(strobe, LOW);
  shiftOut(data, clock, LSBFIRST, 0xC1 + (position << 1));
  shiftOut(data, clock, LSBFIRST, value);
  digitalWrite(strobe, HIGH);
}

void loop()
{
  uint8_t buttons = readButtons();

  for(uint8_t position = 0; position < 8; position++)
  {
    uint8_t mask = 0x1 << position;

    setLed(buttons & mask ? 1 : 0, position);
  }
}
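The heart of readButtons is OR-ing the four response bytes, each shifted left by its index. That combining step can be sketched as a pure function (the name and the example byte values are mine), which makes it easy to reason about which bit ends up where:

```cpp
#include <cstdint>

// Combine the four TM1638 button-status bytes the way readButtons does:
// byte i is shifted left by i before being OR-ed into the result, so
// bit n of the result corresponds to button S(n+1).
uint8_t combineButtonBytes(const uint8_t bytes[4])
{
    uint8_t buttons = 0;
    for (uint8_t i = 0; i < 4; i++)
    {
        buttons |= bytes[i] << i;
    }
    return buttons;
}
```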

Again, you can find the entire example in my github repo.

These are the basic building blocks and you can use them to build something more useful. Here is a video of a little demo project I built:

The code for this demo is also on github.

Enjoy.

C++ Async Development (not only) for C# Developers Part II: Lambda functions – the most common pitfalls

Warning: Some examples in this post belong to the “don’t do this at home” category.

In the first part of C++ Async Development (not only) for C# Developers we looked at lambda functions in C++. They are cool, but as you can probably guess there are some “gotchas” you need to be aware of to avoid undefined behavior and crashes. Like many problems in C++ these are related to object lifetimes – a variable being used after it has already been destroyed. For instance, what do you think will be printed (if anything) if you run the following code:

std::function<int()> get_func()
{
    auto x = 42;

    return [&x]() { return x; };
}

int main()
{
    std::cout << get_func()() << std::endl;
    return 0;
}

?

On my box it prints -858993460 but it could print anything, or it could just as well crash. I believe a crash would actually have been better because I would not have carried on with a garbage result. The reason is simple. The variable x was captured by reference, meaning the code inside the lambda referred to the actual local variable. The lambda was returned from the get_func() function, extending the lifetime of the closure. However the local variable x went out of scope when we left get_func(), so the reference captured in the closure no longer referenced x but rather a random location in memory that had already been reclaimed and potentially reused. A variation of this problem can also be encountered when working with classes. Take a look at this code:

class widget
{
public:
    std::function<int()> get_func()
    {
        return [this]() { return m_x; };
    }

private:
    int m_x = 42;
};

std::function<int()> get_func()
{
    return widget{}.get_func();
}

int main()
{
    std::cout << get_func()() << std::endl;
    return 0;
}

It basically has the same flaw as the previous example. As you probably remember from the post about lambdas, the this pointer is always captured by value. So, the lambda captures a copy of the this pointer to be able to access class members, but by the time the lambda is actually invoked the object is gone, so the pointer points at God knows what…
Now let’s try a little game. Find one difference between this class:

class widget
{
public:
    std::function<int()> get_func()
    {
        return [=]() { return m_x; };
    }

private:
    int m_x = 42;
};

and the class from the previous example. If you have not found it yet, look at how we capture variables. Before, we captured the this pointer explicitly but now we don’t. So, we should be good, right? Not so fast. If you look at the lambda body you will see we are still accessing the m_x variable. This is a class member and class members cannot be accessed without an instance of the class they belong to. So, what happened is that when we captured variables implicitly the compiler captured the this pointer because we used a class member in the lambda body. Since the this pointer is always captured by value, the new class has the same problem as the previous one. To make things even more interesting, the result would be exactly the same if we captured variables implicitly by reference (i.e. if we used [&] to capture variables). This is because the compiler notices that we use class members in our lambda and therefore the this pointer needs to be captured. However the this pointer is always captured by value (notice that [&this] won’t compile), so even though we used [&] to request capturing variables by reference, the this pointer was still captured by value.
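To see the implicit capture of this in action in a safe setting (the lambda is invoked while the object is still alive), consider this sketch (the class and method names are mine):

```cpp
// A class whose lambda captures `this` implicitly via [=]; this is safe
// here only because the lambda is invoked before the object is destroyed.
class gadget
{
public:
    int call_now()
    {
        auto f = [=]() { return m_x; }; // compiler captures `this` by value
        return f();                     // object still alive: OK
    }

private:
    int m_x = 42;
};
```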
If you think that the scenarios shown above rarely happen in real life you might be right. However “real life” is a bit different when doing things asynchronously. In this case lambdas represent tasks that are to be run sometime in the future. In general you have little or no control over how or when these tasks will be run (there are some advanced settings you can pass when scheduling a task which may give you some control – I have not investigated them yet). Basically, when a thread becomes available a scheduled task will be run on this thread. This means that the task (i.e. the lambda) will run completely independently of the code that scheduled it and there are no guarantees that the variables captured when creating the task will still be around when the task actually runs. As a result you can encounter scenarios similar to the ones described above quite often. So, what to do? Over the past few months I developed a few rules and patterns which are helpful in these situations.

Do not capture implicitly
This is probably a little controversial but I like to capture variables explicitly by listing them in the capture list. This prevents me from introducing a bug where I capture the this pointer implicitly and use it even though I don’t have a guarantee that it will be valid when the lambda is invoked. This rule unfortunately has one drawback – sometimes after refactoring the code I forget to remove from the capture list a variable that is no longer used in the lambda. In general, if you understand when not to use the this pointer in your lambda, you should be fine capturing implicitly.

Try to capture variables by value if possible/feasible (if you can’t guarantee the lifetime)
If you look at the first case we capture the local variable x by reference but it is not necessary. Had we captured it by value we would have avoided all the problems. For instance we could do this:

std::function<int()> get_func()
{
    auto x = 42;

    return [x]() { return x; };
}

Sometimes you may want to capture by reference only to avoid copying. That is a valid reason and it is fine as long as you can guarantee that the variable captured in the closure outlives the closure itself. Otherwise it is better to be slower and correct than faster but returning invalid results or crashing.
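For example, capturing by reference is perfectly safe when the closure cannot outlive the captured variable, as in this sketch (the function name is mine) where the lambda is only ever invoked inside the scope that owns the variable:

```cpp
// Capturing by reference is fine when the lambda cannot outlive the
// variable: `total` is alive for every invocation of `add` below.
int sum_to(int n)
{
    int total = 0;
    auto add = [&total](int v) { total += v; }; // `total` still in scope
    for (int i = 1; i <= n; i++)
    {
        add(i);
    }
    return total; // the lambda is gone before `total` goes out of scope
}
```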
The same applies to capturing class variables. If you can capture them by value – do it. In the second case we could change the code as follows:

class widget
{
public:
    std::function<int()> get_func()
    {
        auto x = m_x;
        return [x]() { return x; };
    }

private:
    int m_x = 42;
};

Now instead of capturing the this pointer we copy the value of the class variable to a local variable and we capture it by value.
This works in simple cases where we don’t modify the variable’s value in the lambda and don’t expect the variable to be modified between the time it was captured and the time the lambda is invoked. Oftentimes this is not enough, however – we want to be able to modify the value of the variable captured in the closure and be able to observe the new value. What do we do in these cases?

Use (smart) pointers
The problems outlined above were caused by automatic variables whose lifetimes are tied to their scope. Once they go out of scope they are destroyed and must no longer be used. However, if we create the variable on the heap we control its lifetime. Since this is no longer 1998 we will not use raw pointers but smart pointers. Actually, there is an even more important reason not to use raw pointers – since we allocated memory we should release it. In simple cases this may not be difficult: you just delete the variable in the lambda and you are done with it. More often than not it is not that simple – you may want to capture the same pointer in multiple closures, someone else may still want to use it after the lambda was invoked, you may want to invoke the lambda multiple times, etc. In these cases the simplistic approach won’t work. Even if your code originally worked correctly, it is very easy to inadvertently introduce issues like these later during maintenance. Using smart pointers makes life much easier and safer. So, yeah, we will use smart pointers. For local variables it is as easy as:

std::function<int()> get_func()
{
    auto x = std::make_shared<int>(42);

    return [x]() { return *x; };
}

Notice that we create a shared_ptr variable and capture it by value. When a shared_ptr is copied its internal ref count is incremented, which prevents the value the pointer points to from being deleted. When a copy goes out of scope the ref count is decremented. Once the ref count reaches 0 the shared_ptr deletes the variable it tracks. In our case we create a closure (the ref count is incremented) and return it to the caller. The caller may extend its lifetime (e.g. by returning it to its own caller) or can just invoke it and let it go out of scope. One way or another the closure will eventually go out of scope, and when this happens the ref count will be decremented and – if no one else is using the shared_ptr – the memory will be released. One thing worth noting is how using shared_ptrs plays nicely with the previous rule about capturing variables by value.
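The ref count behavior can be observed directly with shared_ptr::use_count(). A sketch (the function name and out-parameter are mine, for illustration only):

```cpp
#include <functional>
#include <memory>

// The closure holds its own copy of the shared_ptr, so the int stays
// alive after this function returns and the local `x` is destroyed.
std::function<int()> get_func_counted(long* count_inside)
{
    auto x = std::make_shared<int>(42);
    auto f = [x]() { return *x; }; // copy into the closure: use_count is 2
    *count_inside = x.use_count();
    return f;                      // local x destroyed; closure's copy survives
}
```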
We can also use shared_ptrs to capture the current instance of a class as a substitute for capturing the this pointer. It’s however a bit more involved than it was in the case of local variables. First, we cannot just do shared_ptr(this). Doing so could lead to creating multiple independent shared_ptr instances for a single object (e.g. when the function creating a shared_ptr this way is called multiple times), which would lead to invoking the destructor of the object tracked by the shared_ptrs multiple times – which not only is undefined behavior but can also leave dangling pointers behind. (Note that this is not specific to the this pointer but applies to any raw pointer.) The way around it is to derive the class from the enable_shared_from_this<T> type. Unfortunately classes derived from enable_shared_from_this<T> are not very safe and using them incorrectly may put you into the realm of undefined behavior very quickly. The biggest issue is that to be able to create a shared_ptr from the this pointer of a class derived from enable_shared_from_this<T>, the instance must already be owned by a shared_ptr. The easiest way to enforce this is probably to not allow the user to create instances of the class directly by making all the constructors (including the ones created by the compiler) private and exposing a static factory method that creates a new instance of the class and returns a shared_ptr owning it. This is what code capturing a shared_ptr tracking this looks like:

class widget : public std::enable_shared_from_this<widget>
{
public:

    static std::shared_ptr<widget> create()
    {
        return std::shared_ptr<widget>(new widget);
    }

    std::function<int()> get_func()
    {
        auto w = shared_from_this();

        return [w]() { return w->m_x; };
    }

private:

    widget() 
    { }

    widget(const widget&) = delete;

    int m_x = 42;
};

std::function<int()> get_func()
{
    return widget::create()->get_func();
}

int main()
{
    std::cout << get_func()() << std::endl;
    return 0;
}

A word of precaution. One general thing you need to be aware of when using shared_ptrs is circular dependencies. If you create a circular dependency between two (or more) shared_ptrs their ref counts will never reach 0 because each shared_ptr is always being referenced by another shared_ptr. As a result the instances the shared_ptrs point to will never be released, causing a memory leak. If you cannot avoid a dependency like this you can replace one of the shared_ptrs with a weak_ptr, which will break the cycle. weak_ptrs, similarly to shared_ptrs, can be passed by value so they are easy to use with lambda functions. If you have a weak_ptr and you need a shared_ptr you just call the weak_ptr::lock() function. Note that you need to check the value returned by weak_ptr::lock() since it will return an empty shared_ptr if the instance was deleted because all the strong references went away in the meantime.
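A minimal sketch of the weak_ptr::lock() pattern (the helper function is hypothetical):

```cpp
#include <memory>

// weak_ptr::lock() returns an empty shared_ptr once the object is gone,
// so the caller must always check before dereferencing.
int read_or_default(const std::weak_ptr<int>& w, int fallback)
{
    if (auto s = w.lock()) // empty if all strong references are gone
    {
        return *s;
    }
    return fallback;
}
```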

This post was quite long and heavy, but knowing this stuff will be very helpful later when we dive into actual async programming, which I think we will start looking at in the next part.

C++ Async Development (not only) for C# Developers Part I: Lambda Functions

And now for something completely different…

I was recently tasked with implementing the SignalR C++ client. The beginnings were (are?) harder than I had imagined. I did some C++ development in the late nineties and early 2000s (yes, I am that old) but either I have forgotten almost everything since then or I never really understood some of the C++ concepts (not sure which is worse). In any case the C++ landscape has changed dramatically in the last 15 or so years. Writing tests is a common practice, the standard library became really “standard” and C++11 is now available. There is also a great variety of third party libraries making the life of a C++ developer much easier. One such library is cpprestsdk, codename Casablanca. Casablanca is a native, cross platform and open source library for asynchronous client-server communication. It contains a complete HTTP stack (client/server) with support for WebSockets and OAuth, supports JSON, and provides APIs for scheduling and running tasks asynchronously – similar to what is offered by C# async. As such cpprestsdk is a perfect fit for a native SignalR client, where the client communicates over HTTP using JSON and takes full advantage of asynchrony.
I am currently a couple of months into my quest and I would like to share the knowledge about C++ asynchronous development with the cpprestsdk I have gained so far. My background is mostly C# and it took me some time to wrap my head around native async programming even though C# async programming is somewhat similar to C++ async programming (with cpprestsdk). Two things I wish I had had when I started were to know how to translate patterns used in C# async programming to C++ and what the most common pitfalls are. In this blog series I would like to close this gap. Even though some of the posts will talk about C# quite a lot I hope the posts will be useful not only for C# programmers but also for anyone interested in async C++ programming with cpprestsdk.

Preamble done – let’s get started.

Cpprestsdk relies heavily on C++11 – especially on the newly introduced lambda functions. Therefore, before doing any actual asynchronous programming we need to take a look at and understand C++ lambda functions, especially since they are a bit different from (and in some ways more powerful than) .NET lambda expressions.
The simplest C++ lambda function looks as follows:

auto l = [](){};

It does not take any parameters, does not return a value and has an empty body, so it doesn’t do anything. A C# equivalent would look like this:

Action l = () => {};

The different kinds of brackets right next to each other in the C++ lambda function may seem a bit peculiar (I saw some pretty heated comments from long-time C++ users about how they feel about this syntax) but they do make sense. Let’s take a look at what they are and how they are used.

[]
Square brackets are used to define what variables should be captured in the closure and how they should be captured. You can capture zero (as in the example above) or more variables. If you want to capture more than one variable you provide a comma separated list of variables; note that each variable can be specified only once. In general you can capture any variable you can access in the scope and – except for the this pointer – they can be captured either by reference or by value. When a variable is captured by reference you access the actual variable from the lambda function body. If a variable is captured by value the compiler creates a copy of the variable for you and this copy is what you access in your lambda. The this pointer is always captured by value.

To capture a variable by value you just provide the variable name inside the square brackets; to capture a variable by reference you prepend the variable name with &. You can also capture variables implicitly – i.e. without enumerating the variables you want to capture – letting the compiler figure out which variables to capture by inspecting which variables are used in the lambda function. You still need to tell the compiler how you want to capture the variables though: to capture variables implicitly by value you use =, to capture variables implicitly by reference you use &. What you specify becomes the default way of capturing variables, which you can refine if you need to capture a particular variable differently – you do that just by adding the variable to the capture list.

One small wrinkle when specifying variables explicitly in the capture list is that if a variable is not used in the lambda you will not get any notification or warning. I assume that the compiler just gets rid of this variable (especially in optimized builds) but it would still be nice to have an indication when this happens. Here are some examples of capture lists with short explanations:

[] // no variables captured
[x, &y] // x captured by value, y captured by reference
[=] // variables in scope captured implicitly by value
[&] // variables in scope captured implicitly by reference
[=, &x] // variables in scope captured implicitly by value except for x which is captured by reference
[&, x] // variables in scope captured implicitly by reference except for x which is captured by value

[=, &] // wrong – variables can be captured either by value or by reference but not both
[=, x] // wrong – x is already implicitly captured by value
[&, &x] // wrong – x is already implicitly captured by reference
[x, &x] // wrong – x cannot be captured by reference and by value at the same time

How does this compare to C# lambda expressions? In C# all variables are implicitly captured by reference (the equivalent of [&] in C++ lambda functions) and you have no way of changing that. You can work around it by assigning the variable you want to capture to a local variable and capturing the local variable instead. As long as you don’t modify the local variable (e.g. you could create an artificial scope just for declaring the variable and defining the lambda expression – since the variable would not be visible outside of that scope it could not be modified) you would get something similar to capturing by value. This was actually a way to work around a usability issue where closing over a loop variable would sometimes give unexpected results (if you are using ReSharper you have probably seen the warning reading “Access to foreach variable in closure. May have different behaviour when compiled with different versions of compiler.” – this is it). You can read more about this here (btw. the behavior was changed in C# 5.0 and the results are now what most developers actually expect).

()
Parentheses are used to define the formal parameter list. There is nothing particularly fancy here – you follow the same rules as when you define parameters for regular functions.

Return type
The simplest possible lambda I showed above did not have this but sometimes you may need to declare the return type of the lambda function. In simple cases you don’t have to, and the compiler should be able to deduce the return type for you based on what your function returns. However, in more complicated cases – e.g. when your lambda function has multiple return statements – you will need to specify the return type explicitly. A lambda with an explicitly defined return type of int looks like this:

auto l = []() -> int { return 42; };

mutable
Again, something that was not needed for the “simplest possible C++ lambda” example but I think you will encounter it relatively quickly once you start capturing variables by value. By default all C++ lambda functions are const. This means that you can access variables captured by value but you cannot modify them. You also cannot call non-const functions on variables captured by value. Prepending the lambda body with the mutable keyword makes both of these possible. You need to be careful though because this can get tricky. For instance, what do you think the following function prints?

void lambda_by_value()
{
    auto x = 42;
    auto l = [x]() mutable {
        x++;
        std::cout << x << std::endl;
    };

    std::cout << x << std::endl;
    l();
    l();
    std::cout << x << std::endl;
}

If you guessed:

42
43
44
42

– congratulations – you were right and you can skip the next paragraph; otherwise read on.
Imagine your lambda was compiled to the following class (disclaimer: this is my mental model – things probably work a little differently and are far more complicated than this, but I believe this is what happens at a high level):

class lambda
{
public:
    lambda(int x) : x(x)
    {}

    void operator()()
    {
        x++;
        std::cout << x << std::endl;
    }

private:
    int x;
};

Since we capture x by value, the ctor creates a copy of the passed parameter and stores it in the class variable called x. Overloading the function call operator (i.e. operator()) makes the object callable, making it similar to a function, but the object can still maintain state – which is perfect in our case because we need to be able to access the x variable. Note that the body of the operator() function is the same as the body of the lambda function above. Now let’s create a counterpart of the lambda_by_value() function:

void lambda_by_value_counterpart()
{
    auto x = 42;
    auto l = lambda(x);
    std::cout << x << std::endl;
    l();
    l();
    std::cout << x << std::endl;
}

As you can see, the lambda_by_value_counterpart() function not only looks almost the same as the original lambda_by_value() (except that instead of defining the lambda inline we create the lambda class) but it also prints the same result. This shows that when you define a lambda the compiler creates a structure for you that maintains the state and is used across lambda calls. Now the mutable keyword should also make more sense. If the operator() function in our lambda class was const, we would not have been able to modify the class variable x. So either the function must not be const or the class variable x must be defined as (surprise!)… mutable int x;. (In the example I decided to make the operator() function non-const since it feels more correct.)

auto vs. var
Since C++ lambda definitions contain full type specifications for parameters and the return type, you can use auto and leave it to the compiler to deduce the type of the lambda variable. In fact, I don’t even know how to write the type of the lambda variable explicitly (when I hover over the auto in Visual Studio it shows something like class lambda []void () mutable->void but this cannot be used as the type of the variable). In C# you cannot assign a lambda expression to var for several reasons. You can read up on why here.

That’s more or less what I have learned about C++ lambda functions in the past few weeks, apart from… common pitfalls and mistakes, which I am thinking about writing about soon.

Crusader for trailing-whitespace-free code

For whatever reason I hate trailing whitespace. It might just be my pet peeve (although unjustified conversions with the ‘as’ operator in C# instead of a regular cast are the winner in this category) because ultimately it is not something very important but still, trailing whitespace annoys me. Sadly, even though I hate trailing whitespace, I tend to introduce it. One of my colleagues, who would flag all occurrences of trailing whitespace in my pull requests, even suspected I introduced it intentionally to check if code was being reviewed carefully. I am not that treacherous though and the reason was actually more mundane: I just did not see it. I did turn on the “View White Space” option in VS (which btw. made me hate trailing spaces even more) and started seeing all the trailing whitespace introduced by other people but not by myself. The truth is that if you set Visual Studio to use the dark theme it is often hard to see a single trailing whitespace even with the “View White Space” option turned on. You would think that fixing all the trailing whitespace in a project would fix the problem but I don’t think it actually would. Firstly, you typically don’t want to touch code that is not relevant to your change. Secondly, even if you fixed it as a separate change (which is actually not that difficult with regular expressions), you would inadvertently introduce a trailing whitespace or two in your very next commit. If you want to be careful and are using git you can run git diff {--cached} and it will highlight trailing whitespace like this:
git-diff
It’s not the best experience though. You need to leave your editor to see if you have any problems, and if you do, you need to “map” the diff back to your code to find all the lines that need to be fixed. This was driving me crazy so I decided to do something about it. The idea was to show trailing whitespace in a visible way directly in the editor as soon as it is introduced. This is actually quite easy to do for Visual Studio – you just create an extensibility project (note: you have to have the Visual Studio SDK installed) – New Project, Templates -> Visual C# -> Extensibility – select “Editor Text Adornment”, modify a few lines and you are done. It literally took me no more than 2 hours to hack together an extension that does it. It highlights all trailing whitespace and dynamically checks if you have any trailing whitespace in front of your cursor as you type – if you do, it will highlight that too. This is how your VS looks if you have trailing whitespace:
trailing-spaces-vs
I used this extension for the past three days (extensive QA rules!) and shared it with my colleague (the one who would always complain about me introducing trailing whitespace). It worked and did not feel obtrusive (granted, we do not have a lot of trailing whitespace in the projects we work on, but it turned out we had more than we thought). Now, if you are a crusader for trailing-whitespace-free code, you can try it too. (Actually, you can try it even if you aren’t.) I posted this extension on the VS gallery. You can just download the .vsix and double click it to install, or you can install it directly from VS by going to Tools -> Extensions and Updates, selecting “Online” on the left side, typing the name (or just “Flagger”) and clicking Install.
trailing-spaces-install
If you have too many trailing whitespaces or just think it does not work for you, you can uninstall or disable the extension by going to Tools -> Extensions and Updates, finding the Trailing Space Flagger among the installed extensions and clicking the “Remove” or “Disable” button:
uninstall-flagger
As usual, the code is open source – you can find it on github.

The final version of the Store Functions for EntityFramework 6.1.1+ Code First convention released

Today I posted the final version of the Store Functions for Entity Framework Code First convention to NuGet. The instructions for downloading and installing the latest version of the package to your project are as described in my earlier blog post, except that you no longer have to select the “Include Pre-release” option in the UI or use the -Pre option when installing the package from the Package Manager Console. If you installed a pre-release version of this package in your project and would like to update to this version, just run the Update-Package EntityFramework.CodeFirstStoreFunctions command from the Package Manager Console.

What’s new in this version?

This new version contains only one addition compared to the beta-2 version – the ability to specify the name of the store type for parameters. This is needed in cases where a CLR type can be mapped to more than one store type. In the case of the Sql Server provider there is only one type like this – the xml type. If you look at the Sql Server provider code (SqlProviderManifest.cs ln. 409) you will see that the store xml type is mapped to the EDM String type. This mapping is unambiguous when going from the store side. However, the type inference in the Code First Functions convention works from the other end: first we have a CLR type (e.g. string) which maps to the EDM String type, which is then used to find the corresponding store type by asking the provider. For the EDM String type the Sql Server provider will return (depending on the facets) one of the nchar, nvarchar, nvarchar(max), char, varchar, varchar(max) types, but it will never return the xml type. This made it basically impossible to use the xml type when mapping store functions with the Code First Functions convention, even though this is possible with Database First EDMX based models.
Because, in the general case, the type inference will not always work if multiple store types are mapped to one EDM type, I made it possible to specify the store type of a parameter using the new StoreType property of the ParameterTypeAttribute. For instance, if you had a stored procedure called GetXmlInfo that takes an xml typed in/out parameter and returns some data (kind of a more advanced (spaghetti?) scenario, but it came from a real world application where the customer wanted to replace EDMX with Code First – they decided to use Code First Functions to map store functions and this was the only stored procedure they had problems with) you would use the following method to invoke it:

[DbFunctionDetails(ResultColumnName = "Number")]
[DbFunction("MyContext", "GetXmlInfo")]
public virtual ObjectResult<int> GetXmlInfo(
    [ParameterType(typeof(string), StoreType = "XML")] ObjectParameter xml)
{
    return ((IObjectContextAdapter)this).ObjectContext
        .ExecuteFunction("GetXmlInfo", xml);
}

Because the parameter is in/out I had to use ObjectParameter to pass the value and to read the value returned by the stored procedure. Because I used ObjectParameter I had to use the ParameterTypeAttribute to tell the convention what the CLR type of the parameter is. Finally, I also used the StoreType property, which causes the convention to skip asking the provider for the store type and use the type I passed instead.

That would be it. See my other blog posts here and here if you would like to see other supported scenarios. The code and issue tracker are on codeplex. Use and enjoy.

The final version of the Second Level Cache for EF6.1+ available.

This is it! Today I pushed the final version of the Second Level Cache for Entity Framework 6.1+ to NuGet. Now you no longer need to use -Pre when installing the package from the Package Manager Console, nor remember to select the “Include Prerelease” option from the dropdown when installing the package using the “Manage NuGet Packages” window. The final version is functionally equivalent to the beta-2 version – there wasn’t a single change to the product code between beta-2 and this version. I would still encourage you to upgrade to the final version if you are using the beta-2 version.
There is also a new implementation of cache which uses Redis to store cached items. It was created by silentbobbert and you can get it from NuGet. I am pretty excited about this since it opens quite a few new possibilities. Try it out and see how it works.
Update/install, enjoy and file bugs (if you find any).

The Beta Version of Store Functions for EntityFramework 6.1.1+ Code First Available

This is very exciting! Finally, after some travelling and getting the beta-2 version of the Second Level Cache for EF 6.1+ out the door, I was able to focus on store functions for EF Code First. I pushed quite hard for the past two weeks and here it is – the beta version of the convention that enables using store functions (i.e. stored procedures, table valued functions, etc.) in applications that use the Code First approach and Entity Framework 6.1.1 (or newer). I am more than happy with the fixes and new features included in this release. Here is the full list:

  • Support for .NET Framework 4 – the alpha NuGet package contained only assemblies built against .NET Framework 4.5 and the convention could not be used when the project targeted .NET Framework 4. Now the package contains assemblies for both .NET Framework 4 and .NET Framework 4.5.
  • Nullable scalar parameters and result types are now supported
  • Support for type hierarchies – previously when you tried using a derived type the convention would fail because it was not able to find a corresponding entity set. This was fixed by Martin Klemsa in his contribution
  • Support for stored procedures returning multiple resultsets
  • Enabling using a different name for the method than the name of the stored procedure/function itself – a contribution from Angel Yordanov
  • Enabling using non-DbContext derived types (including static classes) as containers for store function method stubs and methods used to invoke store functions – another contribution from Angel Yordanov
  • Support for store scalar functions (scalar user defined functions)
  • Support for output (input/output really) parameters for stored procedures

This is a pretty impressive list. Let’s take a closer look at some of the items from the list.

Support for stored procedures returning multiple resultsets

Starting with version 5, the Entity Framework runtime has built-in support for stored procedures returning multiple resultsets (only when targeting .NET Framework 4.5). This is not a very well-known feature, which is not very surprising given that up to now it was practically unusable. Neither Code First nor EF Tooling supports creating models with store functions returning multiple resultsets. There are some workarounds – like dropping down to ADO.NET in the case of Code First (http://msdn.microsoft.com/en-us/data/JJ691402.aspx) or editing the Edmx file manually (and losing the changes each time the model is re-generated) for Database First – but they do not really change the status of the native support for stored procedures returning multiple resultsets as being de facto an unfeature. This is changing now – it is now possible to decorate the method that invokes the stored procedure with the DbFunctionDetails attribute and specify return types for subsequent resultsets, and the convention will pick this up and create the metadata EF requires to execute such a stored procedure.

Using a different name for the method than the name of the stored procedure

When the alpha version shipped, the name of the method used to invoke a store function had to match the name of the store function. This was quite unfortunate since, most of the time, naming conventions used for database objects are different from naming conventions used in code. This is now fixed: the function name passed to the DbFunction attribute is used as the name of the store function, so the method itself can be named differently.

Support for output parameters

The convention now supports stored procedures with output parameters. I have to admit it ended up a bit rough, because of how the value of the output parameter is set (at least in the case of Sql Server), but if you are in a situation where you have a stored procedure with an output parameter it is better than nothing. The convention will treat a parameter as an output parameter (in fact it will be an input/output parameter) if the type of the parameter is ObjectParameter. This means you will have to create and initialize the parameter yourself before passing it to the method. This is because the output value (at least for Sql Server) is set only after all the results returned by the stored procedure have been consumed, so you need to keep a reference to the parameter to be able to read the output value after you have consumed the results of the query. In addition, because the actual type of the parameter will be known only at runtime and not during model discovery, all ObjectParameter parameters have to be decorated with the ParameterTypeAttribute, which specifies the type that will be used to build the model. Finally, the name of the parameter in the method must match the name of the parameter in the database (yeah, I had to debug EF code to figure out why things did not work) – fortunately casing does not matter. As I said – it’s quite rough, but it should work once you align all the moving pieces correctly.

Example 1

The following example illustrates how to use the functionality described above. It uses a stored procedure with an output parameter which returns multiple resultsets. In addition, the name of the method used to invoke the stored procedure (MultipleResultSets) is different from the name of the stored procedure itself (CustomersOrdersAndAnswer).

internal class MultupleResultSetsContextInitializer : DropCreateDatabaseAlways<MultipleResultSetsContext>
{
    public override void InitializeDatabase(MultipleResultSetsContext context)
    {
        base.InitializeDatabase(context);

        context.Database.ExecuteSqlCommand(
        "CREATE PROCEDURE [dbo].[CustomersOrdersAndAnswer] @Answer int OUT AS " +
        "SET @Answer = 42 " +
        "SELECT [Id], [Name] FROM [dbo].[Customers] " +
        "SELECT [Id], [Customer_Id], [Description] FROM [dbo].[Orders] " +
        "SELECT -42 AS [Answer]");
    }

    protected override void Seed(MultipleResultSetsContext ctx)
    {
        ctx.Customers.Add(new Customer
        {
            Name = "ALFKI",
            Orders = new List<Order>
                {
                    new Order {Description = "Pens"},
                    new Order {Description = "Folders"}
                }
        });

        ctx.Customers.Add(new Customer
        {
            Name = "WOLZA",
            Orders = new List<Order> { new Order { Description = "Tofu" } }
        });
    }
}

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Order> Orders { get; set; }
}

public class Order
{
    public int Id { get; set; }
    public string Description { get; set; }
    public virtual Customer Customer { get; set; }
}

public class MultipleResultSetsContext : DbContext
{
    static MultipleResultSetsContext()
    {
        Database.SetInitializer(new MultupleResultSetsContextInitializer());
    }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Conventions.Add(
            new FunctionsConvention<MultipleResultSetsContext>("dbo"));
    }

    public DbSet<Customer> Customers { get; set; }
    public DbSet<Order> Orders { get; set; }

    [DbFunction("MultipleResultSetsContext", "CustomersOrdersAndAnswer")]
    [DbFunctionDetails(ResultTypes = 
        new[] { typeof(Customer), typeof(Order), typeof(int) })]
    public virtual ObjectResult<Customer> MultipleResultSets(
        [ParameterType(typeof(int))] ObjectParameter answer)
    {
        return ((IObjectContextAdapter)this).ObjectContext
            .ExecuteFunction<Customer>("CustomersOrdersAndAnswer", answer);
    }
}

class MultipleResultSetsSample
{
    public void Run()
    {
        using (var ctx = new MultipleResultSetsContext())
        {
            var answerParam = new ObjectParameter("Answer", typeof (int));

            var result1 = ctx.MultipleResultSets(answerParam);

            Console.WriteLine("Customers:");
            foreach (var c in result1)
            {
                Console.WriteLine("Id: {0}, Name: {1}", c.Id, c.Name);
            }

            var result2 = result1.GetNextResult<Order>();

            Console.WriteLine("Orders:");
            foreach (var e in result2)
            {
                Console.WriteLine("Id: {0}, Description: {1}, Customer Name {2}", 
                    e.Id, e.Description, e.Customer.Name);
            }

            var result3 = result2.GetNextResult<int>();
            Console.WriteLine("Wrong Answer: {0}", result3.Single());

            Console.WriteLine("Correct answer from output parameter: {0}", 
               answerParam.Value);
        }
    }
}

The first half of the sample is just setting up the context and is rather boring. The interesting part starts at the MultipleResultSets method. The method is decorated with two attributes – DbFunctionAttribute and DbFunctionDetailsAttribute. The DbFunctionAttribute tells EF how the function will be mapped in the model. The first parameter is the namespace, which in the case of Code First is typically the name of the context type. The second parameter is the name of the store function in the model; the convention also treats it as the name of the store function in the database. Note that this name has to match the name of the stored procedure (or function) in the database and also the name used in the ExecuteFunction call.

The DbFunctionDetailsAttribute is what makes it possible to invoke a stored procedure returning multiple resultsets. The ResultTypes parameter allows specifying multiple types, each of which defines the type of items returned in subsequent resultsets. The types have to be part of the model or, in the case of primitive types, they have to have an EDM primitive type counterpart. In our sample the stored procedure returns Customer entities in the first resultset, Order entities in the second resultset and int values in the third resultset. One important thing to mention is that the first type in the ResultTypes array must match the generic type of the returned ObjectResult. To invoke the procedure (let’s ignore the parameter for a moment) you just call the method and enumerate the results. Once the results are consumed you can move to the next resultset by calling the GetNextResult<T> method, where T is the element type of the next resultset. Note that you call GetNextResult<T> on the previously returned ObjectResult<> instance.

Finally, let’s take a look at the parameter. As described above, it is of the ObjectParameter type to indicate an output parameter. It is decorated with the ParameterTypeAttribute, which tells the convention the type of the parameter. Its name is the same as the name of the parameter in the stored procedure (casing aside). We create an instance of this parameter before invoking the stored procedure, enumerate all the results and only then read the value. If you tried reading the value before enumerating all the resultsets it would be null.

Running the sample code produces the following output:

Customers:
Id: 1, Name: ALFKI
Id: 2, Name: WOLZA
Orders:
Id: 1, Description: Pens, Customer Name ALFKI
Id: 2, Description: Folders, Customer Name ALFKI
Id: 3, Description: Tofu, Customer Name WOLZA
Wrong Answer: -42
Correct answer from output parameter: 42
Press any key to continue . . .

Enabling using non-DbContext derived types (including static classes) as containers for store function method stubs and methods used to invoke store functions

In the alpha version all the methods needed to handle store functions had to live inside a DbContext derived class (the type was the generic argument to the FunctionsConvention<T>, where T was constrained to be a DbContext derived type). While this is the convention used by EF Tooling when generating code for the Database First approach, it is not a requirement. Since it blocked some scenarios (e.g. using extension methods, which have to live in a static class), the requirement has been lifted by adding a non-generic version of the FunctionsConvention which takes the type where the methods live as a constructor parameter.

Support for store scalar functions

This is another not very well-known EF feature. EF actually knows how to invoke user defined scalar functions. To use a scalar function you need to create a method stub. Method stubs don’t have an implementation, but when used inside a query they are recognized by the EF Linq translator and translated to a UDF call.

Example 2
This example shows how to use non-DbContext derived classes for methods/method stubs and how to use scalar UDFs.

internal class ScalarFunctionContextInitializer : DropCreateDatabaseAlways<ScalarFunctionContext>
{
    public override void InitializeDatabase(ScalarFunctionContext context)
    {
        base.InitializeDatabase(context);

        context.Database.ExecuteSqlCommand(
            "CREATE FUNCTION [dbo].[DateTimeToString] (@value datetime) " + 
            "RETURNS nvarchar(26) AS " +
            "BEGIN RETURN CONVERT(nvarchar(26), @value, 109) END");
    }

    protected override void Seed(ScalarFunctionContext ctx)
    {
        ctx.People.AddRange(new[]
        {
            new Person {Name = "John", DateOfBirth = new DateTime(1954, 12, 15, 23, 37, 0)},
            new Person {Name = "Madison", DateOfBirth = new DateTime(1994, 7, 3, 11, 42, 0)},
            new Person {Name = "Bronek", DateOfBirth = new DateTime(1923, 1, 26, 17, 11, 0)}
        });
    }
}

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime DateOfBirth { get; set; }
}

internal class ScalarFunctionContext : DbContext
{
    static ScalarFunctionContext()
    {
        Database.SetInitializer(new ScalarFunctionContextInitializer());
    }

    public DbSet<Person> People { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Conventions.Add(
            new FunctionsConvention("dbo", typeof (Functions)));
    }
}

internal static class Functions
{
    [DbFunction("CodeFirstDatabaseSchema", "DateTimeToString")]
    public static string DateTimeToString(DateTime date)
    {
        throw new NotSupportedException();
    }
}

internal class ScalarFunctionSample
{
    public void Run()
    {
        using (var ctx = new ScalarFunctionContext())
        {
            Console.WriteLine("Query:");

            var bornAfterNoon =
               ctx.People.Where(
                 p => Functions.DateTimeToString(p.DateOfBirth).EndsWith("PM"));

            Console.WriteLine(bornAfterNoon.ToString());

            Console.WriteLine("People born after noon:");

            foreach (var person in bornAfterNoon)
            {
                Console.WriteLine("Name {0}, Date of birth: {1}",
                    person.Name, person.DateOfBirth);
            }
        }
    }
}

In the above sample the method stub for the scalar store function lives in the Functions class. Since this class is static it cannot be a generic argument to the FunctionsConvention<T> type. Therefore we use the non-generic version of the convention to register the convention (in the OnModelCreating method).
The method stub is decorated with the DbFunctionAttribute which tells the EF Linq translator what function should be invoked. The important thing is that scalar store functions operate at a lower level (they exist only in the S-Space and don’t have a corresponding FunctionImport in the C-Space) and therefore the namespace used in the DbFunctionAttribute is no longer the name of the context but always has to be CodeFirstDatabaseSchema. Another consequence is that the return and parameter types must be of a type that can be mapped to a primitive EDM type. Once all these conditions are met you can use the method stub in Linq queries. In the sample the function converts the date of birth to a string in a format which ends with “AM” or “PM”. This makes it possible to easily find people who were born before or after noon just by checking the suffix. All this happens on the database side – you can tell by looking at the output produced when running this code, which contains the SQL query the Linq query was translated to:

Query:
SELECT
    [Extent1].[Id] AS [Id],
    [Extent1].[Name] AS [Name],
    [Extent1].[DateOfBirth] AS [DateOfBirth]
    FROM [dbo].[People] AS [Extent1]
    WHERE [dbo].[DateTimeToString]([Extent1].[DateOfBirth]) LIKE N'%PM'
People born after noon:
Name John, Date of birth: 12/15/1954 11:37:00 PM
Name Bronek, Date of birth: 1/26/1923 5:11:00 PM
Press any key to continue . . .

That’s pretty much it. The convention ships on NuGet – the process of installing the package is the same as it was for the alpha version and can be found here. If you are already using the alpha version in your project you can upgrade the package to the latest version with the Update-Package command. The code (including the samples) is on CodePlex.
I would like to thank again Martin and Angel for their contributions.
Please play with the beta version and report any bugs before I ship the final version.

Second Level Cache Beta-2 for EntityFramework 6.1+ shipped

When I published the Beta version of EFCache back in May I intentionally did not call it Beta-1 since at that time I did not plan to ship Beta-2. Instead, I wanted to go straight to the RTM. Alas! It turned out that EFCache could not be used with models where CUD (Create/Update/Delete) operations were mapped to stored procedures. Another problem was that in scenarios where there were multiple databases with the same schema EFCache returned cached results even if the database the app was connecting to changed. I also got a pull request which I wanted to include. As a result I decided to ship Beta-2. Here is the full list of what’s included in this release:

  • support for models containing CUD operations mapped to stored procedures – invoking a CUD operation will invalidate cache items for the given entity set
  • CacheTransactionHandler.AddAffectedEntitySets is now protected – makes subclassing CacheTransactionHandler easier
  • database name is now part of the cache key – enables using the same cache across multiple databases with the same schema/structure
  • new Cached() extension method forces caching results for selected queries – results for queries marked with the Cached() method will be cached regardless of the caching policy, whether the query has been blacklisted, or whether it contains non-deterministic SQL functions. This feature started with a contribution from ragoster
  • new name – Second Level Cache for Entity Framework 6.1+ – (notice ‘+’) to indicate that EFCache works not only with Entity Framework 6.1 but also with newer point releases (i.e. all releases up to, but not including, the next major release)
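As a sketch of how the new extension method might be used (assuming Cached() is brought into scope by importing the EFCache namespace – check the package for the exact location):

```csharp
using System.Linq;
using EFCache;

// force caching for this one query, regardless of the caching policy
var people = ctx.People
    .Where(p => p.Name.StartsWith("J"))
    .Cached()
    .ToList();
```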

The new version has been uploaded to NuGet, so update the package in your project and let me know of any issues. If I don’t hear back from you I will release the final version in a month or so.

When 14 pins are not enough.

I had this little project in mind but after chewing on it for a while I figured that to achieve my goal I would need more pins than the Arduino Uno can offer. I started looking at what my options were and soon I found something that looked very promising – shift registers (yes, I realize that this is very basic stuff for people familiar with elementary electronics but sadly I am not one of them). Conceptually, the way shift registers work is simple – you feed a shift register bit by bit (serial-in) and then tell it to output all the data at once (parallel-out). (The opposite (i.e. parallel-in, serial-out) is also possible but it is out of the scope of this post.) Translating this to Arduino terms – with one output pin to send the data and a couple of additional output pins to control the shift register we can have tens of output pins. In the simplest case “tens” actually means 8, but shift registers can be cascaded and each additional register adds 8 more output pins. To try this out I got myself a few 74HC595 shift registers and a bunch of LEDs and resistors and built this circuit:
[Image: PinMultiplication – the breadboard circuit]
One thing that turned out to be very helpful was a small sheet I got when I bought the shift registers showing how to cascade them. Even though I was not cascading the shift registers (I was too cheap and bought just 10 LEDs) it helped me connect all the power and ground wires to the correct pins. I actually decided to scan the sheet and attach it to this post because I am sure I will lose it sooner rather than later and won’t be able to find it when I need it. Here it is:
[Image: ShiftRegister – the cascading diagram]
Then I had to breathe some life into all the wires and ICs so I wrote some code. I was surprised when it turned out that you literally need fewer than 10 lines of code (and some data) to get the whole thing up and running. Here is the code:

int clock = 8;  // to 74HC595 SH_CP (shift register clock)
int data = 4;   // to 74HC595 DS (serial data input)
int latch = 7;  // to 74HC595 ST_CP (storage register latch)

uint8_t patterns[] = 
{ 
  0x00, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80,
  0x00, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80,
  0x00, 0x00, 
  0x01, 0x03, 0x07, 0x0f, 0x1f, 0x3f, 0x7f, 0xff, 0x00,
  0x01, 0x03, 0x07, 0x0f, 0x1f, 0x3f, 0x7f, 0xff, 0x00,
  0x00, 0xff, 0x00, 0xff,
  0x00, 0x00,
  0x81, 0x42, 0x24, 0x18, 0x24, 0x42, 0x81, 
  0x81, 0x42, 0x24, 0x18, 0x24, 0x42, 0x81, 
  0x00, 0x00,
};

void setup() {                
  pinMode(clock, OUTPUT);
  pinMode(data, OUTPUT);
  pinMode(latch, OUTPUT);
}

void loop() {
  static int index = 0;

  digitalWrite(latch, LOW);
  shiftOut(data, clock, MSBFIRST, patterns[index]);
  digitalWrite(latch, HIGH);
  delay(300);

  index = (index + 1) % sizeof(patterns);
}

The code is simple. Since I have just 8 LEDs I can use a byte value to control which of the LEDs should be on. If a bit corresponding to an LED is set to 1 I will turn the LED on, otherwise I will turn it off. The values are stored in an array and every 300 milliseconds or so I take the next value from the array and use it to turn the LEDs on or off. When I reach the end of the array I start over from the first element. Even though the code is so simple there are two interesting bits there. The first one is controlling the shift register. The second one (related to the first one) is how the shiftOut function works. As I said above we need three wires to control the register. They are connected to the pins labeled latch, clock and data in the code. We set latch to LOW to tell the shift register that we are going to send data. Once we have sent all the data we set latch back to HIGH. To send the data we use the shiftOut function. What this function does is read the passed value bit by bit; for each bit it sets the clock pin to LOW, then sets the data pin to LOW or HIGH depending on the value of the bit, and then sets the clock pin to HIGH (the register reads the data pin on this rising edge). It is actually pretty easy to write a counterpart of the shiftOut function on your own. Here is what I came up with (I skipped the bitOrder parameter as I did not need it):

 
void myShiftOut(uint8_t dataPin, uint8_t clockPin, uint8_t val)
{
  // send the bits starting from the most significant one
  for(int mask = 0x80; mask > 0; mask >>= 1)
  {
    digitalWrite(clockPin, LOW);
    digitalWrite(dataPin, (val & mask) ? HIGH : LOW);
    digitalWrite(clockPin, HIGH);  // the register samples the data pin on this rising edge
    digitalWrite(dataPin, LOW);
  }
  digitalWrite(clockPin, LOW);
}

You can replace the call to the shiftOut function with a call to the myShiftOut function in the first snippet and it will continue to work the same way.

That’s more or less it. Simple but very powerful (and fun)!
