Johan Driessen

Unit testing saving pages in EPiServer CMS 6

I’ve been working quite a lot with EPiServer in the last six years or so, and unit testing has always been a pain in the ass (not that I was writing unit tests anyway six years ago, I hardly knew what I was doing back then… can’t believe I actually got paid! :-)). And it still is, even in the latest version.

Luckily there are some EPiServer developers that are either smarter than me, or just have more free time, for example Joel Abrahamson who has created a wrapper project, EPiAbstractions, for wrapping the unmockable EPiServer classes in mockable classes and interfaces. Wonderful!

We started using this in our current project recently, and mostly it works fine. Instead of using EPiServer.DataFactory inside our business classes, we now inject IDataFactoryFacade from EPiAbstractions, and use that instead. Like this:

//Old way
public class Foo
{
    public PageData GetPageById(int id)
    {
        return EPiServer.DataFactory.Instance.GetPage(
            new PageReference(id));
    }
}

//New way
public class Foo
{
    private IDataFactoryFacade dataFactory;

    public Foo(IDataFactoryFacade dataFactory)
    {
        this.dataFactory = dataFactory;
    }

    public PageData GetPageById(int id)
    {
        return dataFactory.GetPage(new PageReference(id));
    }
}

Simple enough. It lets us mock the DataFactory with Moq, and we can use an IoC container like StructureMap to inject the DataFactory class, or we can create a constructor overload that uses DataFactoryFacade.Instance.

The problem

So, with all these nice interfaces that we are able to mock, what is the problem? Well, I was trying to write a test for this method:

public void SetAnmälanTillåten(int verksamhetPageId, bool anmälanTillåten)
{
    var page = _dataFactory.GetPage(new PageReference(verksamhetPageId));
    if (page == null)
        return;

    var writeablePage = page.CreateWritableClone();
    writeablePage.Property["TillåtAnmälan"].Value = anmälanTillåten;
    _dataFactory.Save(writeablePage, EPiServer.DataAccess.SaveAction.Publish,
        EPiServer.Security.AccessLevel.NoAccess);
}

This is a pretty simple method, it gets a page from the DataFactory, sets a property, and saves it again. So we just need to mock GetPage and Save, right? I tried it, and I got this error:

The type initializer for 'EPiServer.Web.PermanentLinkMapStore' threw an exception.
System.TypeInitializationException
EPiServer.Web.PermanentPageLinkMap Find(EPiServer.Core.PageReference)
System.Guid get_GuidValue()
EPiServer.Core.PropertyData CreateWritableClone()
EPiServer.Core.PropertyDataCollection CreateWritableClone()
EPiServer.Core.PageData CreateWritableClone()

GetPage was mocked like this (somewhat simplified):

var page = new PageData();
page.Property.Add( "PageName", new PropertyString( "Name" ) );
page.Property.Add( "PageLink", new PropertyPageReference(
    new PageReference( 4711 ) ) );
page.Property.Add( "TillåtAnmälan", new PropertyBoolean() );
dataFactoryMock.Setup( d => d.GetPage(
    It.Is<PageReference>( p => p.ID == 4711 ) ) ).Returns( page );

So, what is the problem? Well, a look inside EPiServer.dll told me that when CreateWritableClone is called on the PageData object, it in turn calls CreateWritableClone for each property in the PropertyDataCollection. When it comes to the PageLink property, it calls PropertyPageReference.CreateWritableClone:

PropertyPageReference.CreateWritableClone() implementation from EPiServer CMS 6

This doesn’t look so bad, does it? But the problem is in this.GuidValue:

PropertyPageReference.GuidValue implementation from EPiServer CMS 6

So, unless the field _pageGuid has a non-empty value, it will call PermanentLinkMapStore.Find, and we will get the error. Unfortunately, since the setter won’t let us manually set the value without calling Modified(), which will also make external calls, we can’t use it to set the value.

The solution

System.Reflection to the rescue! With a little helper method for setting the private field, I was able to get the test working:

private static void SetPrivateField<T>( object o, string fieldName, T newValue )
{
    Type type = o.GetType();
    BindingFlags privateBindings = BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.Static;
    FieldInfo field = type.GetField( fieldName, privateBindings );
    field.SetValue( o, newValue );
}

//Mocking code
var page = new PageData();
page.Property.Add( "PageName", new PropertyString( pageName ) );
var propertyPageReference = new PropertyPageReference( new PageReference( 4711 ) );
SetPrivateField<Guid>( propertyPageReference, "_pageGuid", Guid.NewGuid() );
page.Property.Add( "PageLink", propertyPageReference );

dataFactoryMock.Setup( d => d.GetPage( It.Is<PageReference>( p => p.ID == 4711 ) ) )
    .Returns( page );

And, yay, we have a working test! (Of course, the actual test also had to do a setup for Save, but I’ll leave that as an exercise for the reader.)

Finally, there may well be an easier way to do this. If so, I would be most interested to hear of it! Now, I’ll get back to writing unit tests for my project (we still have pretty bad code coverage…). I have a feeling I’ll have to revisit this topic in the future, as more interesting mocking scenarios appear, though.

Powershell makes Windows Server AppFabric tolerable

I have been working quite a bit with Windows Server AppFabric lately, especially the WF Instancing. AppFabric is kind of nice, but unfortunately the GUI inside IIS is mind-numbingly slow! Luckily, there is a better and faster way of querying and working with AppFabric: Powershell.

It turns out that AppFabric comes with a whole bunch of Powershell cmdlets that allow you to do just about anything you can do in the GUI (and lots of stuff you can’t do in the GUI). If you start Powershell Modules, and type in

> Get-Command -Module ApplicationServer

you will get a long list of available commands (or cmdlets).

A long list of available commands

I won’t go in to details, but rather just give you a very useful example of what you can do. When developing workflows, you get a lot of broken workflow instances in your instance store. You often need to find these, and delete them. You can do this in the GUI, of course, but it takes a long time. In Powershell, you can use the commands Get-ASAppServiceInstance and Remove-ASAppServiceInstance. First, we get all the instances for a specific web site:

> Get-ASAppServiceInstance -Sitename "Default Web Site"

List of workflow instances

Now, if we just want to remove all persisted instances, we can do that by piping the results to Remove-ASAppServiceInstance:

> Get-ASAppServiceInstance -Sitename "Default Web Site" | Remove-ASAppServiceInstance

And just like that, all instances are gone! This is just a taste of what you can do, for more info on the commands, check out the documentation on MSDN. It’s still a bit painful to work with AppFabric and WF, but this at least makes it tolerable.

Resizing cross-domain Iframes

UPDATE 2013-03-20
This example is very obsolete, now that we have HTML5 and window.postMessage for sending messages between windows. Unless you are specifically targeting antique browsers like IE7, you should use that instead of this pile of garbage.
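For reference, here is a minimal sketch of the postMessage approach. The element id, message format, and function names are my own invention, not part of the original example; in real code you must also check the message origin:

```javascript
// In the framed page: report the content height to the parent window.
function reportHeight() {
    var height = document.body.clientHeight + 50;
    parent.postMessage(JSON.stringify({ type: 'resize', height: height }), '*');
}

// Pure helper, so the message handling is easy to test: returns the
// height to apply, or null if the message is not a resize message.
function heightFromMessage(data) {
    try {
        var msg = JSON.parse(data);
        return msg && msg.type === 'resize' ? msg.height : null;
    } catch (e) {
        return null;
    }
}

// In the top window: listen for resize messages and apply them.
if (typeof window !== 'undefined' && window.addEventListener) {
    window.addEventListener('message', function (e) {
        // In production, verify e.origin against the frame's domain!
        var height = heightFromMessage(e.data);
        if (height !== null) {
            document.getElementById('theFrame').height = height;
        }
    }, false);
}
```

No extra iframes, no callback pages: the browser delivers the message across domains for you.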

—————– WARNING: OBSOLETE JUNK FOLLOWS ————————-

Recently (last week, actually) I came across a problem that turned out to be quite a challenge for me. Possibly due to me lacking some javascript skillz. Anyway, I need to write it down for future reference, as I’m rather happy with the solution in the end.

We have an application that is shown in an iframe inside a SharePoint site. Initially it was deployed as an application within the SharePoint site on the same server, but for various reasons we are moving it to another server. Since it is an iframe, and we don’t want that to be visible, we need to resize the iframe whenever the content changes size. Now, if the top window and the page in the iframe are on the same server (same domain, actually), this is not a problem. We just call a resize method in the top window on load of the iframe, or whenever we load something ajaxy, like this:

//In the top window
resizeIframe = function(height) {
    var theFrame = document.getElementById('theFrame');
    theFrame.height = height;
};

//In the frame page, called whenever something changes the content
resize = function() {
    height = document.body.clientHeight + 50;
    top.resizeIframe(height);
};

Piece of cake! When we move the application to another domain, however, this won’t work anymore. We are not allowed to call top.resizeIframe from the frame page anymore, since that is considered cross-site scripting. What to do, what to do… Well, in this excellent article, Michael Mahemoff lays out a few options for handling this. The one that seemed to have the most appeal is what he calls “the Marathon version”, which leverages the fact that you can open another iframe inside your iframe, loading a page on the server containing the top page. This new page is allowed to call functions and access the DOM in the top page, and by passing parameters to this page you can set the height. This is basically how it works:

[Diagram: the framed page opens an invisible iframe pointing to xss.html on the same domain as the top window, and xss.html calls the resize function in the top page]

Of course, the new iframe is invisible, and is also removed after a few seconds. So, my application now contains this instead:

runXDomainScript = function(params) {
    var iframe = document.createElement("iframe");
    iframe.src = 'http://portal.example.com/xss.html?' + params;
    document.getElementById('xdomaincontainer').appendChild(iframe);
    setTimeout("document.getElementById('xdomaincontainer').innerHTML = ''", 2000);
};

resize = function() {
    height = document.body.clientHeight + 50;
    runXDomainScript('height=' + height);
};

And portal.example.com/xss.html is a really simple file that looks like this:

<html>
<head>
<meta http-equiv="cache-control" content="public">
<script type="text/javascript">
    //getQuerystring is defined in the example files
    var h = getQuerystring('height');
    top.setFrameHeight(h);
</script>
</head>
</html>

And yeah, getQuerystring is a utility method to read a parameter from the querystring which I will not clutter the code with here. It’s in the example files at the end of this already very long post. Now, when I want to resize my iframe, I just create a new iframe with the .src set to xss.html, with the height as a querystring parameter. And since xss.html is located on the same server as the top window, it is allowed to change the height!
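For completeness, here is roughly what such a helper might look like. This is my own sketch, not the version from the example files (which may differ); an explicit query string can be passed in, mainly to make it easy to test, and by default it reads window.location.search:

```javascript
// Hypothetical getQuerystring sketch; the one in the example files may differ.
function getQuerystring(key, search) {
    if (search === undefined) {
        search = window.location.search;
    }
    // Strip the leading '?' and split into name=value pairs
    var pairs = search.replace(/^\?/, '').split('&');
    for (var i = 0; i < pairs.length; i++) {
        var parts = pairs[i].split('=');
        if (decodeURIComponent(parts[0]) === key) {
            return decodeURIComponent(parts[1] || '');
        }
    }
    return '';
}
```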

Great! Only, there was another problem. In our application, we are using modal dialogs (taking advantage of the excellent nyroModal library). Now, these dialogs are automatically positioned in the center of the window, which works fine if you’re not in an iframe. So what we need to do is find out how far the user has scrolled the top window, and then move the modal dialog (let’s just call it the div) to the visible portion of the screen. So, we try a function like this:

//In the framed application
moveModal = function(id) {
    var offsetHeight = top.window.pageYOffset || top.document.documentElement.scrollTop || top.document.body.scrollTop;
    var topPos = Math.max(offsetHeight - 100, 20);
    document.getElementById(id).style.top = topPos + 'px';
};

Now, as you might guess, this won’t work. Why? Because we are not allowed to read the pageYOffset property from top.window, as that would be cross-site scripting (nor documentElement.scrollTop or document.body.scrollTop; those are only there to make this cross-browser, which I hate). So, now we have a double problem. We need to

  1. Call a function whenever a modal dialog is shown.
  2. Read a property from the top window.
  3. Do some calculations.
  4. Set a property in the framed page.

1 can only be done in the framed application, 2 can only be done in the top window domain (portal.example.com), 3 can be done anywhere and 4 can only be done in the framed application. How do we solve this? Why, the same way as we solved the resizing, only in more steps. Enter the xsscallback page! Here is how it will work:

[Diagram: xss.html on the portal site calculates the scroll offset, then opens another iframe pointing to xsscallback.html on the framed site, passing the result in the querystring]

Confusing? A little. But this is the same as earlier, only once we’re done calculating the offset in xss.html on the portal site, instead of just quitting, we open another iframe, back to the framed site, and open another static html file there, sending the result of the calculation back in the querystring. This html file, xsscallback.html, can then call the function in the original framed page to actually move the div. Code:

//In the framed page
centerModal = function(id) {
    //Pass the address of our callback page, so xss.html knows where to send the result
    runXDomainScript('cmd=offset&id=' + id +
        '&url=' + encodeURI('http://app.example.com/xsscallback.html'));
};

moveModalTo = function(id, offsetHeight) {
    var topPos = Math.max(offsetHeight - 100, 20);
    document.getElementById(id).style.top = topPos + 'px';
};
//In xss.html in the portal site
callback = function(params) {
    var url = decodeURI(getQuerystring('url'));
    var iframe = document.createElement("iframe");
    iframe.src = url + '?' + params;
    document.getElementById('xDomainScriptContainer').appendChild(iframe);
};

var cmd = getQuerystring('cmd');
if (cmd == 'offset') {
    var offsetHeight = top.window.pageYOffset || top.document.documentElement.scrollTop || top.document.body.scrollTop;
    var id = getQuerystring('id');
    callback('cmd=offset&h=' + offsetHeight + '&id=' + id);
}

And finally, the xsscallback.html file:

<html>
<head>
<meta http-equiv="cache-control" content="public">
<script type="text/javascript">
    var cmd = getQuerystring('cmd');
    if (cmd == 'offset') {
        var h = getQuerystring('h');
        var id = getQuerystring('id');
        parent.parent.moveModalTo(id, parseInt(h, 10));
    }
</script>
</head>
</html>

And voilà – the div gets moved. In order to make this work flawlessly, you should also make sure that the xss html files are cacheable and served as quickly as possible. But from my experience, this seems to work fine even on slow machines and slow networks. You could also increase the time until the iframes get removed.

Since I have omitted quite a few details in this post, I’ll end by giving you some working code. These are meant to be deployed under different domains in your web server, for example portal.local and app.local. They are not exactly like my examples here, but close enough.

Get the examples

By the way, I’ll leave it as an exercise for the reader to figure out why I couldn’t simply use position: fixed for the modal dialog.

Debugging Silverlight needs IE?

I’ve decided to learn at least a little Silverlight, since it seems to be pretty cool, and I feel the need to learn something new (yeah, yeah, I’ll learn Erlang or something equally hardcore next). Since I basically don’t know any Silverlight at all, I thought I’d start by following a tutorial of some kind. And since the Silverlight Training Course for Silverlight 4 on Channel 9 was announced pretty much the same day, I chose that one.

In Lab 3, I had some difficulties (probably due to my inability to press the correct keys on the keyboard), and felt that I needed to set a few breakpoints and debug. So I did, and pressed F5. But unfortunately, my breakpoints were not hit. Instead I got the always equally entertaining message

The breakpoint will not currently be hit. No symbols have been loaded for this document.

Ok, no biggie, we’ve all seen this before. Remove all bin and obj folders, clean, rebuild, shift-F5 to force reload in the browser – but no, still nothing.

However, one thing struck me. I had noticed that my default browser, Firefox, was bad at loading the latest version of the Silverlight app when I started debugging. So I just pasted the URL into Internet Explorer instead and, if necessary, pressed Shift-F5 to force a reload. But now I tried to actually set IE as the default browser (which is insane, I know), and lo and behold - debugging works!

So I draw the conclusion that you need to have IE as the default browser to get Silverlight debugging to work. If so, it should probably be pointed out somewhere, since pretty much every web developer has Firefox or Chrome as the default browser (can’t live without Firebug)…

A little more experimentation has led me to believe that IE does not need to be the default browser of the system, it just needs to be the default browser for the debug start page. If I right-clicked on the start page in the Web project and clicked “Browse with…”, and set IE as default, debugging works. If I set Firefox or Chrome as default, debugging doesn’t work, even if I open the page manually in IE.

I have yet to find out if this is by design or by accident…

Getting an ASP.NET 4 application to work on IIS6

Now, for the last 5 months or so, I’ve been involved in a project where we develop an application with ASP.NET MVC on .NET 4 (we started on the beta 2). The main part of this application is hosted on servers running Windows 2008 Server R2 and IIS 7.5. But unfortunately, one part of the application has to be deployed to servers running Windows 2003 Server, and thus IIS 6.0.

Turns out, it’s a bit tricky to get .NET 4 working on IIS 6.0.

I did the usual stuff, I installed .NET 4 Framework, restarted the server, created a new application in IIS, running in its own Application Pool, and changed the ASP.NET version to 4. Then I tried to access my application. I was greeted by this:

The page cannot be found - WTF??

Strange. I’m pretty sure this is not what should happen. Now, since I’m running ASP.NET MVC, and have to do some mapping stuff on IIS6 (described for example by Phil Haack back in 2008), I naturally suspected this to be the problem. So I created a new application, and just added a default.aspx that prints the current date. Exactly the same result. Weird.

After some extensive googling (Yes, googling. Not Binging.) I got the tip to look at my IIS logfile to see what the error was, and I found this row:

2010-04-13 14:27:05 W3SVC674436196 127.0.0.1 GET / - 80 - 127.0.0.1 Mozilla/4.0+(*snip*) 404 2 1260

So, the error is 404.2. A look at the IIS status codes told me what this means: 404.2 – Lockdown policy prevents this request. WTF? Well, it turns out that just because you install .NET 4 on Windows 2003 Server, you’re not automatically allowed to use it in IIS! The article provided a few useful links to the Knowledge Base:

  • 328505 How to list Web Server Extensions and Extension files in IIS 6.0
  • 328360 How to enable and disable ISAPI extensions and CGI applications in IIS 6.0

Following the instructions from the first of these, I checked what extensions were available on my server:

Picture of ASP.NET 4 not being allowed.

Notice that the Status for the .NET 4 ASP.NET ISAPI extension is 0? Well, that means that it is disabled. Now, if you read the other knowledge base article, it will tell you how to enable it. Just run the following command:

cscript iisext.vbs /EnFile C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\aspnet_isapi.dll

And if I now run the /ListFile command again, I will see that the ASP.NET 4 extension is now showing a little “1” as status, telling me that it is now enabled.

Yay, ASP.NET 4 is enabled!

And after this, my application worked fine. And not only the test application that printed today’s date, but even my ASP.NET MVC application!

Do NOT forget to close() your WCF ServiceClients!

We had a very disturbing problem today. Our client was doing acceptance testing on the application we’re building for them, and had assembled a team of 15 people to do it. The application ran smoothly, until after a few minutes it came to a grinding halt and simply stopped responding. We checked the server, and the CPU was fine for a while, but after a minute or so it went up to 100%.

The application is an ASP.NET MVC web client, talking to WCF services, which are talking to an SQL Server. Everything running on .NET 4 RC. Well, we recycled the application pool, and everything was fine again, for a few minutes…

Some profiling on the server told us that not only did the application freeze after a few minutes and the CPU go to 100%, but we also saw a shitload of exceptions. Well, to cut to the chase, some more testing revealed that the only page that was still responsive was the only page that didn’t talk to the WCF service at all. Aha!

In our application, we use the StructureMapControllerFactory from MvcContrib to let StructureMap inject dependencies into our controllers. And our controllers are depending on repositories, that are in turn depending on ServiceClients for the WCF services. So basically, we have a repository like this:

public class FooRepository : IFooRepository
{
    private ServiceClients.IFooService client;

    public FooRepository(ServiceClients.IFooService client)
    {
        this.client = client;
    }

    public Bar GetBar(int id)
    {
        return client.GetBar(id);
    }
}

And a StructureMap configuration like this:

ForRequestedType<ServiceClients.IFooService>()
    .TheDefault.Is.ConstructedBy(() => new ServiceClients.FooServiceClient());

See the problem? Yep, we never close the client. And I can even remember thinking to myself, back in December when we wrote this: “I wonder if the fact that we don’t close the clients will cause a problem… Well, we’ll cross that bridge when we get to it.”

Today we got to the bridge. And it was under enemy fire. So why didn’t we notice this earlier? This is my guess. We have a finite number of available connections to the WCF service. Let’s say 200. And every time we make a request, we create a new client, and use one of these. But, after a while, usually a minute, the connection times out and closes. So we need to make enough requests to use up all the available connections before they time out. And what happens then? The requests end up in a queue waiting for a connection, and the application stops responding.

So what is the solution? In the WCF textbook, you are supposed to use a service client like this (and some try and catch and other things, of course):

public Bar GetBar(int id)
{
    var client = new ServiceClients.FooServiceClient();
    var bar = client.GetBar(id);
    client.Close();
    return bar;
}

But that doesn’t work too well for us, because we want to inject the service client into the constructor, in order to be able to mock it away when testing. So how do we ensure that the service client gets closed? Why, the destructor, of course! I added a destructor to each of my repositories, closing the client:

~FooRepository()
{
    var clientObject = client as ServiceClients.FooServiceClient;
    if (clientObject != null && clientObject.State == System.ServiceModel.CommunicationState.Opened)
    {
        clientObject.Close();
    }
}

First, we need to cast the client to the underlying class, because the interface doesn’t have the State property or the Close method. Then, we check that it is not already closed. And then we close it.

I was actually not sure that this would work, because I wasn’t sure that the repositories would be garbage collected in time. But they were, and it did. So now we’re happy again!

And why did the CPU go up to 100%? Well, when the requests started timing out, we started to get lots of exceptions, which our logger couldn’t handle. We’ll check on that tomorrow. :-)

Keeping things in sync part 2 – Dropbox and Junction

This is kind of a follow-up to my ancient post from November 2008, Keeping things in sync. I still use more than one computer, three to be precise: a Thinkpad X301 at work, a workstation of sorts at home, and a netbook in front of the TV. And I want the switch between them to go as smoothly as possible. So, just as I did in 2008, I use Dropbox to store all my documents. Great!

But, as it turns out, not all applications let you choose where to store their data. About 15 minutes ago, I sat myself down in front of my computer at home to start writing a blog post, a tutorial to Windows Workflow Foundation 4. I started Windows Live Writer (which is a great app for writing blog posts), and suddenly recalled that I had already started on that post, but on my laptop.

Live Writer stores its data (posts and drafts) in a folder called “My Weblog Posts” in the user’s “Documents” folder. That is not configurable. But I really would like to keep it in my Dropbox instead. If only there was a way…

Wait, there is! Junction to the rescue! As it turns out, Windows (or rather NTFS) has supported directory junctions, a kind of symbolic link, since Windows 2000. A junction is an alias for a directory in a different location, and to applications there is no difference between the junction and the actual directory. Unfortunately, there is no built-in tool for creating or managing these in Windows. There is, however, a free downloadable tool, called Junction.

So, here is what I did:

  1. I created a folder in My Dropbox called “Apps”, and under that I created a folder called “My Weblog Posts”.
  2. I moved all the content from “Documents\My Weblog Posts” to “My Dropbox\Apps\My Weblog Posts”.
  3. I deleted “Documents\My Weblog Posts”.
  4. I opened a command window, and executed the following command
> junction.exe "C:\Users\Johan\Documents\My Weblog Posts"
"C:\Users\Johan\Documents\My Dropbox\Apps\My Weblog Posts"

And voilà, I now have a symlink in my Documents folder, pointing to the folder in My Dropbox. Rinse, and repeat this on my laptop, and suddenly my drafts are available on both!

This is, of course, not only useful for Windows Live Writer, but for all applications that keep their data files in some unconfigurable folder somewhere that you would like to have available on multiple computers.

Hmmm, maybe I should get back to writing that WF4 tutorial now…

Why Workflow Services Storing Their Physical Location In The Xamlx File Is A Very Bad Idea

Just now I was trying to debug a Workflow Service in WF4 that a colleague of mine had created. Strange thing was, even though I set a breakpoint, the debugger didn’t stop; it just returned the answer as if I wasn’t debugging at all. In the same project, I have other Workflow Services, and I had no problem debugging those (except that debugging workflows is slooow, but that’s beside the point).

I started looking at the Debug output, when this line caught my eye:

Instrumentation for debugger fails. Reason: Could not find file 'C:\TFS\Butler_WCF\EducationWorkflowServices\AktivitetService.xamlx'

“C:\TFS..” – hey, that’s not where I keep my project files! We, of course, use a source code repository for our code (TFS, actually), and every developer checks out the project to a location of his or her discretion. I, for example, use “C:@Projects” as the root folder. My colleague, let’s call her Inger, because that’s her name, uses “C:\TFS”. But how would the debugger know that, and try to use her structure, just because she created the file, you might wonder.

So did I. A little investigation came up with this. In the Xamlx file for the Workflow Service, right at the top, I found this little nugget:

Why is the physical location of the xamlx file stored IN the file?

Yes, it is true. WF4 keeps the physical location of the Workflow Service Xamlx in an attribute called sad:XamlDebuggerXmlReader.Filename in the Xamlx file itself! Naturally, my first instinct was just to remove the sad:XamlDebuggerXmlReader.Filename attribute. No luck, debugging didn’t work at all. So I changed the attribute to point to my file, in “C:@Projects”. And behold – debugging works.

What were you thinking, Microsoft? Do you actually believe that every developer on a project has the same physical structure on their machines? Or do you think that there is always only one developer on a WF4 project? Do you think it’s a good idea that we have to remember to change the sad:XamlDebuggerXmlReader.Filename attribute every time we need to debug a Workflow Service?

All workflows and no play makes Johan a dull boy. But actually, I think I’ll write a tutorial to Workflow Foundation 4 soon. I’ll call it “how to actually use it”.

Fun with betas and RC of .NET 4 and AppFabric

UPDATE: Turns out there is a less difficult way to do this. The uninstaller just looks for the config files of .NET 4 beta 2 (v4.0.21006), so all you need to do is copy your machine.config and web.config from \Windows\Microsoft.NET\Framework(64)\v4.0.30128\Config to ..\v4.0.21006\Config (you probably have to create the folder), and uninstall it. It is explained in greater detail in this post. So my weekend was saved.

Although I wouldn’t dare complain about the fact that I get to use all the latest Microsoft-technology in my current project, sometimes it can be troublesome.

As I have mentioned in earlier posts, we are building an application in ASP.NET MVC on .NET 4, primarily because we want to use Workflow Foundation 4. Since we will be using long running workflows, we need to persist them, and it seems the best way to do that is using the new Windows Server AppFabric (previously codenamed Dublin), which also gives us nice monitoring features for WCF.

Now, we started out using VS2010 and .NET 4 beta 2, and about a month ago we installed the beta 1 of AppFabric. As expected, betas are a little buggy, and when the Release Candidate for VS2010 and .NET 4 was released, naturally we wanted to upgrade. So we did. Everything went smoothly, some small changes in the MVC projects, but nothing major. Until we tried AppFabric. We kept getting this error in the AppFabric Dashboard in IIS:

“The configuration section ‘microsoft.applicationServer/monitoring’ cannot be read because it is missing a section declaration”

This rang a bell, since this is the exception you get if your application pool is not running .NET 4, but that was not the case. Well, to make a long story short, after some research, we came across a post on the MSDN AppFabric forum. Seems like AppFabric beta 1 won’t run on .NET 4 RC at all. It just isn’t supported. And a new beta that will run on RC will be released “soon”.

So basically, we have to make do without persistence until that happens. Our next sprint demo is on the 1st of March, and unless the new beta is released well before that, we will have to be very careful not to recycle our app pool during the demo! :-)

On a finishing note, since AppFabric didn’t work anyway, I tried to uninstall it. Unfortunately, that won’t work either. A helpful reply to my reply on the aforementioned post on the MSDN forum explained what I have to do:

  1. Uninstall VS2010 RC
  2. Install VS2010 beta 2
  3. Uninstall AppFabric beta 1
  4. Uninstall VS2010 beta 2
  5. Install VS2010 RC

Sounds like fun… Maybe I’ll try it this weekend. Nope, didn’t have to. See top. :-)

Intellisense for TDD in Visual Studio 2010

While I’m trying to get the time to write a longer post about lessons learned working with ASP.NET MVC 2 and VS2010, I thought I’d throw a shorter one out there in the meantime.

Last week I was at The Gu’s presentation in Stockholm, and while he said a lot of interesting things about ASP.NET 4 and ASP.NET MVC (and some rather uninteresting things in his sales pitch for Silverlight 4), one thing in particular caught my attention: a new intellisense mode for TDD in Visual Studio 2010!

The standard intellisense in Visual Studio is a little “too good” to work well in a TDD scenario. When I write tests for classes and methods that I haven’t written yet, it happily suggests the closest match (like the test class itself for a class name).

No, stupid Intellisense! I don't want to create an instance of the test class!

Annoying. But in Visual Studio 2010, you can change the intellisense mode to “TDD friendly”, just by pressing Ctrl-Alt-Space (and back again, of course)! And instead of the annoying behaviour pictured above, you get this nice and helpful behaviour:

Yes, helpful Intellisense! I do want to create an instance of a class that doesn't exist, thanks for understanding!

And of course, if I wanted a FooControllerTests instance, I could just press the down arrow and Enter to select it. A small feature, but extremely helpful when doing TDD.

Man, I really suck at writing short posts…