Do NOT forget to close() your WCF ServiceClients!

We had a very disturbing problem today. Our client was doing acceptance testing on the application we’re building for them, and had assembled a team of 15 people to do it. The application ran smoothly, until after a few minutes it came to a grinding halt and simply stopped responding. We checked the server, and the CPU was fine. For a while, that is; after a minute or so it went up to 100%.

The application is an ASP.NET MVC web client, talking to WCF services, which are talking to an SQL Server. Everything running on .NET 4 RC. Well, we recycled the application pool, and everything was fine again, for a few minutes…

Some profiling on the server told us that not only did the application freeze after a few minutes with the CPU at 100%, we also saw a shitload of exceptions. Well, to cut to the chase, some more testing revealed that the only page that was still responsive was the only page that didn’t talk to the WCF service at all. Aha!

In our application, we use the StructureMapControllerFactory from MvcContrib to let StructureMap inject dependencies into our controllers. Our controllers depend on repositories, which in turn depend on ServiceClients for the WCF services. So basically, we have a repository like this:

public class FooRepository : IFooRepository
{
    private ServiceClients.IFooService client;

    public FooRepository(ServiceClients.IFooService client)
    {
        this.client = client;
    }

    public Bar GetBar(int id)
    {
        return client.GetBar(id);
    }
}

And a StructureMap configuration like this:

ForRequestedType<ServiceClients.IFooService>()
    .TheDefault.Is.ConstructedBy(() => new ServiceClients.FooServiceClient());

See the problem? Yep, we never close the client. And I can even remember thinking to myself, back in December when we wrote this: “I wonder if the fact that we don’t close the clients will cause a problem… Well, we’ll cross that bridge when we get to it.”

Today we got to the bridge. And it was under enemy fire. So why didn’t we notice this earlier? This is my guess. We have a finite number of available connections to the WCF service. Let’s say 200. Every time we make a request, we create a new client, which takes one of these connections. But after a while, usually a minute, the connection times out and closes. So we need to make enough requests to use up all the available connections before they time out. And what happens then? The requests end up in a queue waiting for a connection, and the application stops responding.

So what is the solution? In the WCF textbook, you are supposed to use a service client like this (and some try and catch and other things, of course):

public Bar GetBar(int id)
{
    var client = new ServiceClients.FooServiceClient();
    var bar = client.GetBar(id);
    client.Close();
    return bar;
}
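The “try and catch and other things” are worth spelling out, because Close() can itself throw if the channel has faulted. Here is a sketch of the commonly recommended pattern (not our actual code): if Close() fails, fall back to Abort(), which tears the channel down without waiting.

public Bar GetBar(int id)
{
    var client = new ServiceClients.FooServiceClient();
    try
    {
        var bar = client.GetBar(id);
        client.Close();
        return bar;
    }
    catch (System.ServiceModel.CommunicationException)
    {
        // Close() throws on a faulted channel, so clean up with Abort() instead
        client.Abort();
        throw;
    }
    catch (TimeoutException)
    {
        client.Abort();
        throw;
    }
}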

But that doesn’t work too well for us, because we want to inject the service client into the constructor, in order to be able to mock it away when testing. So how do we ensure that the service client gets closed? Why, the destructor, of course! I added a destructor to each of my repositories, closing the client:

~FooRepository()
{
    var clientObject = client as ServiceClients.FooServiceClient;
    if (clientObject != null && clientObject.State == System.ServiceModel.CommunicationState.Opened)
    {
        clientObject.Close();
    }
}

First, we need to cast the client to the underlying class, because the interface doesn’t have the State property or the Close method. Then we check that it is not already closed. And then we close it.

I was actually not sure that this would work, because I wasn’t sure that the repositories would be garbage collected in time. But they were, and it did. So now we’re happy again!
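By the way, if relying on the garbage collector makes you nervous (finalizers run at the GC’s discretion, and calling into other managed objects from a finalizer is generally discouraged), a more deterministic variant is to implement IDisposable instead. This is only a sketch, and it assumes that something in the request pipeline actually disposes the repository, which you would need to verify:

public class FooRepository : IFooRepository, IDisposable
{
    private ServiceClients.IFooService client;

    public FooRepository(ServiceClients.IFooService client)
    {
        this.client = client;
    }

    // Called deterministically instead of waiting for the GC
    public void Dispose()
    {
        var clientObject = client as ServiceClients.FooServiceClient;
        if (clientObject != null && clientObject.State == System.ServiceModel.CommunicationState.Opened)
        {
            clientObject.Close();
        }
    }
}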

And why did the CPU go up to 100%? Well, when the requests started timing out, we started to get lots of exceptions, which our logger couldn’t handle. We’ll check on that tomorrow. :-)

Keeping things in sync part 2 – Dropbox and Junction

This is kind of a follow-up to my ancient post from November 2008, Keeping things in sync. I still use more than one computer, three to be precise: a Thinkpad X301 at work, a workstation of sorts at home, and a netbook in front of the TV. And I want the switch between them to go as smoothly as possible. So, just as I did in 2008, I use Dropbox to store all my documents. Great!

But, as it turns out, not all applications let you choose where to store their data. About 15 minutes ago, I sat down in front of my computer at home to start writing a blog post, a tutorial to Windows Workflow Foundation 4. I started Windows Live Writer (which is a great app for writing blog posts), and suddenly recalled that I had already started on that post, but on my laptop.

Live Writer stores its data (posts and drafts) in a folder called “My Weblog Posts” in the user’s “Documents” folder. That is not configurable. But I would really like to keep it in my Dropbox instead. If only there was a way…

Wait, there is! Junction to the rescue! As it turns out, Windows (or rather NTFS) has supported junction points, a kind of directory symbolic link, since Windows 2000. A junction is an alias for a directory in a different location, and to applications there is no difference between the junction and the actual directory. Unfortunately, older versions of Windows have no built-in tool for creating or managing these (Vista and later ship with mklink, which can create junctions with its /J switch). There is, however, a free downloadable tool from Sysinternals, called Junction.

So, here is what I did:

  1. I created a folder in My Dropbox called “Apps”, and under that I created a folder called “My Weblog Posts”.
  2. I moved all the content from “Documents\My Weblog Posts” to “My Dropbox\Apps\My Weblog Posts”.
  3. I deleted “Documents\My Weblog Posts”.
  4. I opened a command window, and executed the following command:
> junction.exe "C:\Users\Johan\Documents\My Weblog Posts" "C:\Users\Johan\Documents\My Dropbox\Apps\My Weblog Posts"

And voilà, I now have a junction in my Documents folder, pointing to the folder in My Dropbox. Rinse and repeat on my laptop, and suddenly my drafts are available on both!

This is, of course, not only usable for Windows Live Writer, but for any application that keeps its data files in some unconfigurable folder that you would like to have available on multiple computers.

Hmmm, maybe I should get back to writing that WF4 tutorial now…

Why Workflow Services Storing Their Physical Location In The Xamlx File Is A Very Bad Idea

Just now I was trying to debug a Workflow Service in WF4 that a colleague of mine had created. The strange thing was, even though I set a breakpoint, the debugger didn’t stop; it just returned the answer as if I wasn’t debugging at all. In the same project, I have other Workflow Services, and I had no problem debugging those (except that debugging workflows is slooow, but that’s beside the point).

I started looking at the Debug output, when this line caught my eye:

Instrumentation for debugger fails. Reason: Could not find file 'C:\TFS\Butler_WCF\EducationWorkflowServices\AktivitetService.xamlx'

“C:\TFS..” – hey, that’s not where I keep my project files! We, of course, use a source code repository for our code (TFS, actually), and every developer checks out the project to a location of his or her discretion. I, for example, use “C:\@Projects” as the root folder. My colleague, let’s call her Inger, because that’s her name, uses “C:\TFS”. But how would the debugger know that, and try to use her structure, just because she created the file, you might wonder.

So did I. A little investigation came up with this. In the Xamlx file for the Workflow Service, right at the top, I found this little nugget:

Why is the physical location of the xamlx file stored IN the file?
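In outline, the top of the Xamlx file contained something like this (a reconstruction from memory, so the exact namespace declarations may differ):

<WorkflowService
    xmlns:sad="clr-namespace:System.Activities.Debugger;assembly=System.Activities"
    sad:XamlDebuggerXmlReader.Filename="C:\TFS\Butler_WCF\EducationWorkflowServices\AktivitetService.xamlx"
    ...>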

Yes, it is true. WF4 keeps the physical location of the Workflow Service Xamlx in an attribute called sad:XamlDebuggerXmlReader.Filename in the Xamlx file itself! Naturally, my first instinct was just to remove the sad:XamlDebuggerXmlReader.Filename attribute. No luck, debugging didn’t work at all. So I changed the attribute to point to my file, in “C:\@Projects”. And behold – debugging works.

What were you thinking, Microsoft? Do you actually believe that every developer on a project has the same physical structure on their machine? Or do you think that there is always only one developer on a WF4 project? Do you think it’s a good idea that we have to remember to change the sad:XamlDebuggerXmlReader.Filename attribute every time we need to debug a Workflow Service?

All workflows and no play makes Johan a dull boy. But actually, I think I’ll write a tutorial to Workflow Foundation 4 soon. I’ll call it “how to actually use it”.

Fun with betas and RC of .NET 4 and AppFabric

UPDATE: Turns out there is a less difficult way to do this. The uninstaller just looks for the config files of .NET 4 beta 2 (v4.0.21006), so all you need to do is copy your machine.config and web.config from \Windows\Microsoft.NET\Framework(64)\v4.0.30128\Config to ..\v4.0.21006\Config (you probably have to create the folder), and then uninstall. It is explained in greater detail in this post. So my weekend was saved.
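For the 64-bit framework, the copy boils down to something like this (use Framework instead of Framework64 for 32-bit; the version numbers are the beta 2 and RC builds mentioned above):

> xcopy /E /I "C:\Windows\Microsoft.NET\Framework64\v4.0.30128\Config" "C:\Windows\Microsoft.NET\Framework64\v4.0.21006\Config"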

Although I wouldn’t dare complain about the fact that I get to use all the latest Microsoft-technology in my current project, sometimes it can be troublesome.

As I have mentioned in earlier posts, we are building an application in ASP.NET MVC on .NET 4, primarily because we want to use Workflow Foundation 4. Since we will be using long running workflows, we need to persist them, and it seems a good way to do that is to use the new Windows Server AppFabric (previously codenamed Dublin), which also gives us nice monitoring features for WCF.

Now, we started out using VS2010 and .NET 4 beta 2, and about a month ago we installed the beta 1 of AppFabric. As expected, betas are a little buggy, and when the Release Candidate for VS2010 and .NET 4 was released, naturally we wanted to upgrade. So we did. Everything went smoothly, some small changes in the MVC projects, but nothing major. Until we tried AppFabric. We kept getting this error in the AppFabric Dashboard in IIS:

“The configuration section ‘microsoft.applicationServer/monitoring’ cannot be read because it is missing a section declaration”

This rang a bell, since this is the exception you get if your application pool is not running .NET 4, but that was not the case. Well, to make a long story short, after some research, we came across a post on the MSDN AppFabric forum. Seems like AppFabric beta 1 won’t run on .NET 4 RC at all. It just isn’t supported. And a new beta that will run on RC will be released “soon”.

So basically, we have to make do without persistence until that happens. Our next sprint demo is on the 1st of March, and unless the new beta is released well before that, we will have to be very careful not to recycle our app pool during the demo! :-)

On a finishing note, since AppFabric didn’t work anyway, I tried to uninstall it. Unfortunately, that won’t work either. A helpful reply to my reply on the aforementioned post on the MSDN forum explained what I have to do:

  1. Uninstall VS2010 RC
  2. Install VS2010 beta 2
  3. Uninstall AppFabric beta 1
  4. Uninstall VS2010 beta 2
  5. Install VS2010 RC

Sounds like fun… Maybe I’ll try it this weekend. Nope, didn’t have to. See top. :-)

Intellisense for TDD in Visual Studio 2010

While I’m trying to get the time to write a longer post about lessons learned working with ASP.NET MVC 2 and VS2010, I thought I’d throw a shorter one out there in the meantime.

Last week I was at The Gu’s presentation in Stockholm, and while he said a lot of interesting things about ASP.NET 4 and ASP.NET MVC (and some rather uninteresting things in his sales pitch for Silverlight 4), one thing in particular caught my attention: a new Intellisense mode for TDD in Visual Studio 2010!

The standard intellisense in Visual Studio is a little “too good” to work well in a TDD scenario. When I write tests for classes and methods that I haven’t written yet, it happily suggests the closest match (like the test class itself for a class name).

No, stupid Intellisense! I don't want to create an instance of the test class!

Annoying. But in Visual Studio 2010, you can change the intellisense mode to “TDD friendly”, just by pressing Ctrl-Alt-Space (and back again, of course)! And instead of the annoying behaviour pictured above, you get this nice and helpful behaviour:

Yes, helpful Intellisense! I do want to create an instance of a class that doesn't exist, thanks for understanding!

And of course, if I wanted a FooControllerTests instance, I could just press the down arrow and Enter to select it. A small feature, but extremely helpful when doing TDD.

Man, I really suck at writing short posts…

Testing DataAnnotation-based validation in ASP.NET MVC

UPDATE: Aaron Jensen pointed out that this only works in .NET 4. The DataAnnotations Validator class mentioned in the post does not exist in .NET 3.5, so this method does not work there.

With .NET Framework 3.5 SP1 came DataAnnotations, and with ASP.NET MVC 2 Preview 1 came built-in support for DataAnnotation validation. This makes basic validation, like required fields, number intervals and so on, very simple. In order to be able to test this validation, though, you have to mimic some of the work that the ASP.NET MVC model binder does. This post will describe both how you get the validation working and how you can test it.

Decorating the properties

The first thing you have to do is add the metadata to your data classes. You can either do this in the actual class, or you can add a partial class to hold your metadata. Either way has its advantages; here I’ll show the simplest one, just adding the attributes to the class.

using System.ComponentModel.DataAnnotations;

public class User
{
    [Required(ErrorMessage = "Username is required")]
    [StringLength(25)]
    public string Username { get; set; }

    public string Alias { get; set; }
}

All the attributes can be found in System.ComponentModel.DataAnnotations.
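The partial class alternative mentioned above looks something like this. It is a sketch: it assumes User is declared partial, and uses the MetadataType attribute to point to a “buddy class” that holds the annotations.

[MetadataType(typeof(UserMetadata))]
public partial class User
{
}

public class UserMetadata
{
    [Required(ErrorMessage = "Username is required")]
    [StringLength(25)]
    public string Username { get; set; }
}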

Adding validation to the view

Now, we add some validation code to our view, so we can show the error messages. Actually, in ASP.NET MVC 2 Preview 2 (or later), this code is already present.

<%= Html.ValidationSummary("Create was unsuccessful. Please correct the errors and try again.") %>

<% using (Html.BeginForm()) { %>
    <fieldset>
        <legend>Fields</legend>
        <p>
            <label for="Username">Username:</label>
            <%= Html.TextBox("Username") %>
            <%= Html.ValidationMessage("Username", "*") %>
        </p>
        <p>
            <label for="Alias">Alias:</label>
            <%= Html.TextBox("Alias") %>
            <%= Html.ValidationMessage("Alias", "*") %>
        </p>
        <p>
            <input type="submit" value="Create" />
        </p>
    </fieldset>
<% } %>

Basically, what we do is add placeholders for the individual validation messages, and a validation summary. These helpers are built into HtmlHelper.

Checking the ModelState

Next, you have to make sure that your controller actually validates your model. Since we’re doing this TDD-style, first we’ll write a test for this.

[TestMethod]
public void Create_Will_Not_Accept_Empty_Username()
{
    var controller = new UserController();

    var user = new User();
    var result = controller.Create(user);

    Assert.IsFalse(controller.ModelState.IsValid);
    Assert.AreEqual(1, controller.ModelState.Count);
}

What we do here is simply pass an empty User to the Create method, and assert that the ModelState of the controller is invalid, which it should be since the User did not have a Username. We also check that there is exactly one model error, since the empty Alias property should not cause an error.

Now we have to implement the Create method in the UserController to satisfy this test.

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Create(User user)
{
    if (!ModelState.IsValid)
    {
        return View();
    }

    //Create logic goes here
    return RedirectToAction("Index");
}

Thanks to our DataAnnotations, validating the User object is really simple. The model binder will check the data annotations and update the ModelState of the controller. So all we have to do is check whether the ModelState is valid, and if not, return the create form view again. So now we run our test, and see if things work:

U R FAIL!!

Oops, guess not. Now why is this? Well, as I mentioned in the previous paragraph, the model binder will check the data annotations and update the ModelState. But when we call the Create method from our test, the model binder is never invoked, so no validation takes place. So in order to be sure that our Create method won’t accept any users without names, we need to mimic the behaviour of the model binder.

In order to do this, we need to create a ValidationContext (found in System.ComponentModel.DataAnnotations) for our model (the User), use the Validator class to validate the model, and finally add the errors to the ModelState of the controller. This somewhat cumbersome procedure turns out like this:

var validationContext = new ValidationContext(user, null, null);
var validationResults = new List<ValidationResult>();
Validator.TryValidateObject(user, validationContext, validationResults);
foreach (var validationResult in validationResults)
{
    controller.ModelState.AddModelError(validationResult.MemberNames.First(), validationResult.ErrorMessage);
}

So, we add this code after we create the user object, but before we call the Create method. And then we run the test again.

I love green tests!

Yay, it works! And if we try our code in the browser, we get a nice validation message next to the Username field.

Now, in an actual application, we of course won’t keep the model validation code in the test, but rather extract a method to a base class, as sketched below. But this should give you the idea of how it’s done. And if you want to make it even fancier, just add a few lines in the view, and you will get Ajax validation as well!
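A minimal sketch of such a base class helper (the naming is mine; note that passing validateAllProperties = true to TryValidateObject makes it check attributes like StringLength too, not just Required, which is closer to what the model binder does):

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.Web.Mvc;

public abstract class ControllerTestBase
{
    // Mimics the model binder: runs DataAnnotations validation on the model
    // and copies any errors into the controller's ModelState
    protected static void ValidateModel(Controller controller, object model)
    {
        var validationContext = new ValidationContext(model, null, null);
        var validationResults = new List<ValidationResult>();

        Validator.TryValidateObject(model, validationContext, validationResults, true);

        foreach (var validationResult in validationResults)
        {
            controller.ModelState.AddModelError(
                validationResult.MemberNames.First(),
                validationResult.ErrorMessage);
        }
    }
}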

Persistence in WF4 beta 2

Ok, this is my follow-up post to .Net Framework Profiles, where I very naïvely thought that persistence in WF4 was all about using the correct target framework. As it turned out, they changed the persistence model between beta 1 and beta 2, and the tutorial was apparently written for beta 1.

So here are my changes to the Microsoft tutorial How to: Create and Run a Long Running Workflow, in order to make it work for Windows Workflow Foundation 4 beta 2.

The persistence database

First of all, since they have replaced the SqlPersistenceProvider with SqlWorkflowInstanceStore, you have to generate a different persistence database. Instead of the scripts mentioned in the tutorial, you should use SqlWorkflowInstanceStoreSchema.sql and SqlWorkflowInstanceStoreLogic.sql to generate your database. They are still found in C:\Windows\Microsoft.NET\Framework<current version>\sql\en.
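Running the scripts from the command line can look something like this (assuming a local SQL Server Express instance and an already created database called WorkflowInstanceStore; adjust names as needed):

> sqlcmd -S .\SQLEXPRESS -d WorkflowInstanceStore -i SqlWorkflowInstanceStoreSchema.sql
> sqlcmd -S .\SQLEXPRESS -d WorkflowInstanceStore -i SqlWorkflowInstanceStoreLogic.sql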

Enable persistence in the workflow application

  1. Add the correct references. Disregard the references mentioned in the tutorial. Instead add references to System.Runtime and System.Activities.DurableInstancing. These are both included in the Client Profile, rendering my previous post completely obsolete (and it was only a couple of hours old).

  2. Yep, you should still add a connection string to your persistence database.

  3. Add a using statement for System.Activities.DurableInstancing instead of System.ServiceModel.Persistence.

  4. Next, add a SqlWorkflowInstanceStore to the workflow application:

    wfApp.InstanceStore = new SqlWorkflowInstanceStore(connectionString);
  5. In order to tell the workflow to persist when it goes idle, replace the Idle action with a PersistableIdle action.

//Remove this from the previous step
wfApp.Idle = delegate(WorkflowApplicationIdleEventArgs e)
{
    idleEvent.Set();
};

//And replace it with this
wfApp.PersistableIdle = delegate(WorkflowApplicationIdleEventArgs e)
{
    idleEvent.Set();
    return PersistableIdleAction.Persist;
};
  1. Yeah, same, same
  2. Since the database schema is different, instead of looking in the Instances table, look in the [System.Activities.DurableInstancing].[InstancesTable] table (yeah, redundancy is fun; always suffix your tables with Table and your databases with Database!).

Now, it should work. At least it did for me. But just to be safe, I’ll decorate this post with the official Works On My Machine seal of awesomeness.

.Net Framework Profiles

UPDATE 2: I have now written a post describing how I got persistence working in WF4 beta 2.

UPDATE: Seems that they have changed just about everything concerning persistence from WF4 beta 1 to beta 2. So the fourth part of the tutorial I’m referring to doesn’t seem to work at all for beta 2. I’ll have to do a follow-up on how it works once I’ve figured it out. /J

Since we are going to use Windows Workflow Foundation 4 in our new project, I thought I’d better learn the basics of it. So I found Microsoft’s Getting Started Tutorial, and started following it. The first three parts went well, although I suddenly remembered how much I hate drag-and-drop programming. But when I came to the fourth part (the most interesting one, I might add), How to Create and Run a Long Running Workflow, I ran into a small problem.

The tutorial told me to add references to System.WorkflowServiceModel and System.WorkflowServices. Problem is, when I opened the Add References dialog, those assemblies were nowhere to be found!

“How can this be?”, you might cry. Well, it turns out that Microsoft in their infinite wisdom* have introduced something called profiles in the target framework of the project. In .NET 4 there are two profiles: .NET Framework 4 and .NET Framework 4 Client Profile, the Client Profile being the default (in .NET 3.5 there is also a Server Core Profile). And the Client Profile does not include System.WorkflowServiceModel and System.WorkflowServices!

So, in the end the solution is simple: just change the target framework to the full .NET Framework 4, and everything will work fine. You could argue that they should have mentioned this in the tutorial, though. :-)

On a completely unrelated note, I just heard that I have been nominated for IT consultant of the year at the 2009 IT Business Awards, and even made it to the final three. Not sure what to make of that…

* I don’t mean this as a mockery, I actually think it’s a good idea.

Bleeding edge

Starting a new project at work today. For once this project will have to use the latest and greatest technology. We want to use the new Windows Workflow Foundation, which means that we have to use .NET 4 and Visual Studio 2010. We will also use ASP.NET MVC, and since we are using VS2010 beta 2, that means ASP.NET MVC 2 Preview 2. Nice!

There are some difficulties, though. For example, we are planning to use StructureMap for dependency injection, and since MvcContrib has a built-in controller factory for StructureMap, we wanted to use that too (it also has a lot of other neat features). Well, it turns out that the release versions of MvcContrib don’t support ASP.NET MVC 2. So I had to pull the latest nightly from source control. That solved the StructureMap problem, but will it yield new problems? Time will tell.

It’s always interesting to work on the bleeding edge of technology. My new motto is that anything out of beta is too old to be worth running! :-)

Almost equal

Had another interesting problem today. A test that really should have worked started failing most of the time. The test was designed to make sure that a certain DateTime value did not change on update under a special circumstance. It looked something like this:

[TestMethod]
public void Updating_Foo_Will_Not_Update_ChangeDate()
{
    //Create a Foo to work with
    Foo foo = new Foo()
    {
        Name = "Update test foo",
    };

    foo = repository.SaveFoo( foo );
    DateTime lastChanged = foo.Changed.Value;

    foo.Bar = new Bar { Id = 1 };
    repository.SaveFoo( foo );

    var updatedFoo = repository.GetAllFoo().WithId( foo.Id );

    Assert.AreEqual( lastChanged, updatedFoo.Changed.Value );
}

This assertion, as you might imagine, was not fulfilled. Stepping through the code showed that the SaveFoo method in the repository did not change the value of foo.Changed, just as it was not supposed to. It did, however, save the foo to the database. On first inspection, the two dates also seemed to be exactly the same, so the assertion should have held.

Or so I thought. When I looked closer at the dates, it turned out they weren’t exactly the same. Close, but not exactly. More specifically, their Ticks values differed. The value of lastChanged was 633922004866809617, while the value of updatedFoo.Changed.Value (which had made a round trip to the database) was 633922004866800000. Not a huge difference, less than 1 ms, but enough to make the test fail. How stupid of me, assuming that the precision of DateTime in .NET and datetime in SQL Server was the same! Further reading revealed that SQL Server has a datetime precision of about 3.33 ms, while .NET has a precision of 100 ns. So every time I saved the value to the database, it would change slightly!

The solution? Well, since I didn’t really care about differences of a few milliseconds, I decided to extend the DateTime struct with a brilliant new comparison method:

//Extension methods must live in a static class
public static class DateTimeExtensions
{
    public static bool IsAlmostEqualTo( this DateTime dateTime1, DateTime dateTime2 )
    {
        var diff = dateTime1 - dateTime2;
        return Math.Abs( diff.TotalMilliseconds ) <= 5;
    }
}

Because, hey, if I can’t feel the difference, there is no difference! This also made the Assert a little prettier:

Assert.IsTrue( lastChanged.IsAlmostEqualTo( updatedFoo.Changed.Value ) );

Problem solved! :-)