Applying MSBuild Config Transformations to any config file without using any Visual Studio extensions

As I have mentioned in previous posts, I frequently use a setup where our TeamCity server creates deploy packages that are semi-automatically deployed to the web servers. A great help in achieving that is Visual Studio's Web.config transformations. However, we frequently need to transform other config files as well, either because we're not in a web project, or because we simply have multiple config files.

I've had some success using a Visual Studio plug-in called SlowCheetah. Unfortunately it does not really play well with TeamCity. Sometimes it works, sometimes not. More the latter than the former. So recently I made an effort to solve this without using SlowCheetah, or any other extension. As it turns out, you can. And it's not even particularly difficult.

First of all, you need to have the Visual Studio Web Application build targets installed on your build server. This can be achieved either by installing an express version of Visual Studio or the Visual Studio Shell Redistributable Package.

Then, create an App.config file in your project. I placed it in a folder called "Config", to avoid any automatic behaviour from Visual Studio. Then, create your transform files: App.Debug.config, App.Release.config and whatever else you need (I usually don't use those, but rather Test, Prod, Acceptance etc). Now, these will all be placed beside App.config, and not linked to it as with Web.config transforms. Not to worry, we'll fix that shortly!

Next, unload your project, and edit the .csproj file. First we'll fix the linking of the files. This is done simply by adding a DependentUpon element inside each Item. Let's say you have this:

<None Include="Config\App.config" />
<None Include="Config\App.Debug.config" />

Simply change it to this:

<None Include="Config\App.config" />
<None Include="Config\App.Debug.config">
  <DependentUpon>App.config</DependentUpon>
</None>

Now, let's move on to the real trick. In order to make MSBuild transform your config file, we need to add a build target. At the end of the file, add:

<UsingTask TaskName="TransformXml"
           AssemblyFile="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v12.0\Web\Microsoft.Web.Publishing.Tasks.dll" />
<Target Name="AfterBuild">
  <TransformXml Source="Config\App.config"
                Transform="Config\App.$(Configuration).config"
                Destination="$(OutputPath)\$(AssemblyName).$(OutputType).config" />
</Target>

You need to make sure that the path in the AssemblyFile attribute of the UsingTask element matches the version of your Visual Studio build targets.

In this example, I have a console application, so I want the result of my transformation to end up in the output directory, and be named as AssemblyName.exe.config, e.g. bin\Debug\MyConsoleApplication.exe.config. In a web application where I have other config files, I would use something like

<TransformXml Source="Config\NLog.config"
              Transform="Config\NLog.$(Configuration).config"
              Destination="NLog.config" />

And if you have more than one config file that you would like transformed, you can of course add several TransformXml elements. After you're done, just reload the project, and hopefully everything works. At least it works on my machine!

Finally, I should add that I found another Visual Studio extension, called Configuration Transform, that seems to work better than SlowCheetah (at least sometimes) and would make this entire post unnecessary. On the other hand, this way there is less magic and more control, which I personally like. And if your extension suddenly breaks after an update, this might come in handy!

UPDATE 2014-03-20 -- I realised that unless your destination file is included in the project, or rather has a suitable Build Action, it will not be included in the deploy package, or deployed at all. Usually the build action should be "Content". You don't have to worry about the content of the destination file, though, as it will be replaced on every build. I prefer not to have it checked in to source control, though, since it would be pretty pointless to check in every change to an auto-generated file.
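In .csproj terms, "Build Action: Content" just means the destination file is listed as a Content item. For the NLog.config example above, it might look something like this (a sketch; the exact item group depends on your project):

```xml
<ItemGroup>
  <Content Include="NLog.config" />
</ItemGroup>
```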


Tags: continuous delivery

Visual Studio feature - Preview web.config transforms

On my way to continuous delivery, I often use Visual Studio's built-in support for Web.config transformations, at least for relatively simple situations. This allows you to automatically create variations of your web.config file for different deployment environments. And with the help of the excellent Visual Studio plug-in SlowCheetah, you can apply this to other config files as well.

This is all great, but I found it a little tricky to verify that my config transformations yielded the expected result; I basically had to create the deploy packages and check inside them. But then I noticed this little gem in Visual Studio:

Yes, there is a Preview Transform option in the context menu for config transform files!

If you right-click on a config transformation file, you get the "Preview transform" option. This was, I think, introduced in Visual Studio 2012. I was just a little slow to notice it. But it's great! If you select it, you get to see the original config file and the transformed file side by side, with all the changes highlighted.

Click me if you can't read! :-)

All the differences between the files are highlighted on the right, and also in the code window itself. This really makes it very easy to verify that the transforms are correct.

Tags: continuous delivery

Manipulating history with the HTML5 History API and AngularJS

As I've mentioned earlier, I've been working quite a lot with AngularJS lately, most recently on a search function on a website. Naturally, since this is an ajax application, the search result page never reloads when I perform a search. Nevertheless, I would like to

  1. Be able to go back and forth between my searches with the standard browser functions
  2. See my new query in the location bar
  3. Reload the page and have the latest query - not the initial one - execute again
  4. Make this invisible to the user, that means no hashbangs - only a nice ?query=likethis.

Fortunately, HTML5 is there for me, with the History API! This is supported in recent versions of Chrome, Firefox and Safari, as well as in Internet Explorer 10. Unfortunately, there is no support in IE9 or earlier. Anyway, in AngularJS, we don't want to access the history object directly, but rather use the $location abstraction.

The first thing we need to do is set AngularJS to use HTML5 mode for $location. This changes the way $location works, from the default hash mode to query-string mode.

angular.module('Foobar', [])
    .config(['$locationProvider', function($locationProvider) {
         $locationProvider.html5Mode(true);
    }]);

Note: Setting html5Mode to true seems to cause problems in browsers that don't support it, even though it's supposed to just ignore the setting and use the default mode. So it might be a good idea to check for support before turning it on, for example by checking Modernizr.history.
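A minimal sketch of such a check (an assumption on my part: it falls back to direct feature detection, so it also works when Modernizr isn't loaded; you'd pass in the real window object):

```javascript
// Returns true if the given window-like object supports the HTML5
// History API. Prefers Modernizr's verdict when available, otherwise
// falls back to checking for history.pushState directly.
function supportsHtml5History(win) {
    if (win.Modernizr && typeof win.Modernizr.history !== 'undefined') {
        return !!win.Modernizr.history;
    }
    return !!(win.history && typeof win.history.pushState === 'function');
}
```

In the config block above, you would then only call $locationProvider.html5Mode(true) when supportsHtml5History(window) returns true.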

Now, all I have to do whenever I perform a search is to update the location to reflect the new query.

$scope.search = function() {
    //Do magic search stuff

    var path = $location.path(); //Path without parameters, e.g. /search (without ?q=test)
    $location.url(path + '?q=' + $scope.query);
};

This makes the query string change when I perform a search, and it also takes care of the reloading of the page. It does not, however, make the back and forward buttons work. Sure, the location changes when you click back, but the query isn't actually performed again. In order to make this work, you need to do some work when the location changes.

In plain JavaScript, you would add a listener to the popstate event. But, you know, AngularJS and all that, so we want to use the $location abstraction. So instead, we create a $watch that checks for changes in $location.url().

$scope.$watch(function () { return $location.url(); }, function (url) {
    if (url) {
        $scope.query = $location.search().q;
        $scope.search();
    }
});

And that's pretty much it! Now you can step back and forth in history with the browser buttons, and have AngularJS perform the search correctly every time!
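A side note on the search function above: building the query string with plain concatenation ('?q=' + $scope.query) assumes the query never contains reserved characters like '&' or '='. Encoding the value avoids surprises (a sketch; in Angular you could also use $location.search('q', value) and let it handle the encoding):

```javascript
// Builds the search URL with the query value properly encoded,
// so queries like "foo & bar" don't break the query string.
function buildSearchUrl(path, query) {
    return path + '?q=' + encodeURIComponent(query);
}
```

The helper is framework-agnostic, so the same line would work inside the $scope.search function above.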

Tags: , , ,

Using an AngularJS directive to hide the on screen keyboard on submit

Recently, I’ve been working on a responsive web site running AngularJS. In this, we have a search form. As search forms usually work, you enter a query in a text field, and then you click the button to search. Or, more likely, you hit enter. Now, since this is a responsive site, this needs to work on a phone too. This is where it gets a bit more tricky.

A search form

When you click the text field, your on screen keyboard pops up. You enter your query, and click Enter, or Next, or Go or whatever the button is called on your keyboard. This submits the form, but unfortunately, the soft keyboard still lingers. If you had clicked the button instead, your keyboard would have disappeared. Why? Because when you click enter on your on screen keyboard, your text field still has focus!

So in order to rectify this, we need to make sure that the text field loses focus when the enter/go/next-button is clicked. It’s easy to make an object lose focus, you just call blur() on it. The problem is when to call it. What happens when you press the enter key on your on screen keyboard? Well, that depends.

Take this form for example:

<form>
  <input type="text" name="query" />
  <button>OK</button>
</form>

Now, in this case, when I press enter in the text field, two events will fire: the submit event on the form, and the click event on the button. The exact same thing happens when you click the button. Why? Because the default behaviour of a button in HTML5 is to submit the form. In order to prevent this, you need to use <button type="button">OK</button> instead.

If you do this, pressing enter in the text field will still submit the form, but not click the button. And vice versa, clicking the button will fire the click event on the button, but not the submit event on the form!

Since this is an AngularJS application, the easiest way is to specify a submit action for the form, and then make sure the form submits whether you click the button or press enter. Like this:

<form ng-submit="foo()">
  <input type="text" ng-model="query" name="query" />
  <button type="submit">OK</button>
</form>

A nice and pretty form, but this still leaves me with the original problem: how do I make sure that the text field loses focus whenever I click enter/ok/go on my soft keyboard? Or, to put it in more technical terms, how do I run blur() on the text field whenever the submit event fires? This is still an AngularJS app, so the answer is of course – a directive! I came up with this:

angular.module('Foobar', [])
.directive('handlePhoneSubmit', function () {
    return function (scope, element, attr) {
        var textFields = $(element).children('input[type=text]');
        
        $(element).submit(function() {
            console.log('form was submitted');
            textFields.blur();
        });
    };
});

Then, you just apply this to the form: <form handle-phone-submit>, and like magic, your soft keyboard disappears whether you press enter/ok/go or the button. There’s a jsfiddle with a working example.

I allowed myself to use jQuery within the directive, but naturally you could do this with just the jQuery lite implementation that always exists in Angular. In that case, the directive function could look something like this:

return function (scope, element, attr) {
    var textFields = element.find('input');
    
    element.bind('submit', function() {
        console.log('form was submitted');
        textFields[0].blur();
    });
};

There are some limitations there (jqLite's find() only supports tag-name selectors, for example, so this matches all input elements rather than just text fields), but it seems to work ok.

Creating an SSH tunnel on a Mac

A while ago I wrote a post about how to set up an SSH tunnel to access remote desktop. Since then I've started using a Macbook Air, and naturally I want to do the same.

As it turns out, however, I'm terrible at remembering the syntax. So here it is, for my own future reference:

I want to access Remote Desktop on RDPSERVER, port 3389, and map it to port 3391 on my local machine. To do this I will connect to SSHSERVER and tunnel through that.

> ssh -l username -L localhost:3391:RDPSERVER:3389 SSHSERVER

Easy, peasy. Now I will never forget it again!
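For good measure, the same tunnel can be saved in ~/.ssh/config, so there is no syntax to remember at all (a sketch, using the same placeholder names; "rdptunnel" is just a name I made up):

```
Host rdptunnel
    HostName SSHSERVER
    User username
    LocalForward 3391 RDPSERVER:3389
```

After that, `ssh rdptunnel` sets up the whole thing.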


Updated Anti-XSRF Validation for ASP.NET MVC 4 RC

In our project, we’ve been using Phil Haack’s method for preventing cross-site request forgeries for JSON posts by inserting the request verification token as a header in the request, and then using a custom ValidateJsonAntiForgeryToken attribute to validate it. And it’s been working just fine.

However, with the recent release of the ASP.NET MVC 4 RC, it didn’t work anymore. To my initial dismay, it didn’t even compile anymore. Turns out that the method, AntiForgery.Validate(HttpContextBase httpContext, string salt), that had been used to validate the tokens is now obsolete.

However, this turned out to be a good thing, as the MVC developers have made it easier to configure the anti-XSRF validation. You can now provide the tokens directly to the Validate method, and thus there is no need to create a wrapper for the HttpContext and the HttpRequest anymore.

Instead, you can just call the validate method with the proper tokens directly from your attribute:

[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, 
                AllowMultiple = false, Inherited = true)]
public sealed class ValidateJsonAntiForgeryTokenAttribute 
                            : FilterAttribute, IAuthorizationFilter
{
    public void OnAuthorization(AuthorizationContext filterContext)
    {
        if (filterContext == null)
        {
            throw new ArgumentNullException("filterContext");
        }

        var httpContext = filterContext.HttpContext;
        var cookie = httpContext.Request.Cookies[AntiForgeryConfig.CookieName];
        AntiForgery.Validate(cookie != null ? cookie.Value : null, 
                             httpContext.Request.Headers["__RequestVerificationToken"]);
    }
}

And, just to make this post complete, in case the original post is removed, this is the javascript required to add the header to the request:

postJsonWithVerificationToken: function (options) {
    var token = $('input[name="__RequestVerificationToken"]').val();
    var headers = {};
    headers['__RequestVerificationToken'] = token;

    $.ajax({
        cache: false,
        dataType: 'json',
        type: 'POST',
        headers: headers,
        data: options.jsonData,
        contentType: 'application/json; charset=utf-8',
        url: options.url,
        success: options.onSuccess,
        error: options.onError
    });
}

Finally, you just use it as you would the standard ValidateAntiForgeryToken attribute, by decorating your action method like this:

[HttpPost, ValidateJsonAntiForgeryToken]
public ActionResult DoSomething(Foo foo) {
    //Do something
}

And in your form, you just call @Html.AntiForgeryToken(), just as you would for a normal form post.

A lot cleaner than the previous haack (although it was very clever)!

Setting up an SSH tunnel to access Remote Desktop using Putty and SSHD on Linux

In Approach, we have servers (ya, rly!). From time to time I need to access them. Usually, I do this by connecting to our VPN. At times, however, that is not possible, like for instance when some overzealous firewall administrator has blocked outgoing PPTP. Then you might have the need for an alternate approach, like using an SSH tunnel (provided that the administrator didn’t block outgoing SSH as well…)

And yes, I am aware that there are plenty of guides on how to set up an SSH tunnel with Putty, but I found that they are either 1) overly verbose, 2) not exactly describing my problem, or 3) wrong.

The situation

  • I want to access a Windows Server using Remote Desktop
  • There is a Linux server running in the same network section. Mine is running Ubuntu, so all examples will be in Ubuntuish.
  • I have a Windows 7 client machine, with the Putty ssh client

 

On the Linux server

  1. Make sure that SSH is installed (it probably is, otherwise, how do you access the server?). If not, $ sudo apt-get install ssh.
  2. Edit /etc/ssh/sshd_config.
    Check that it says AllowTcpForwarding yes somewhere. If not, add it.
  3. Reload sshd.
    $ sudo /etc/init.d/sshd reload or
    $ sudo service ssh reload depending on your Ubuntu version.

 

On the Windows client

  1. If you don’t have Putty yet, get it.
  2. Configure putty to connect to your Linux server using SSH 2.
  3. Under Connection – SSH – Tunnels, map a local port to remote desktop on your Windows server. Usually, the remote port is 3389. The local port can be anything, except 3389 or 3390, which won’t work*.
    Configuration of SSH Tunnels in putty
  4. Save the session and connect to your linux server. You need to log on, and keep the connection open.
  5. Open a remote desktop session to the source port on localhost.
    Opening a new Remote Desktop connection
  6. Profit.

 

* Because 3389 is probably already running a RDP server, and 3390 is blocked in Windows 7 for some reason.
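For the record, you can skip the Putty GUI entirely and create the same tunnel with plink, Putty’s command-line companion (a sketch; the -L syntax mirrors OpenSSH, and the server names are the placeholders from above):

```
> plink -ssh -L 3391:RDPSERVER:3389 username@SSHSERVER
```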

Pass phrase generator in Swedish

Some time ago, the XKCD comic had a strip (below) about why pass phrases make a hell of a lot better passwords than the standard passwords that many systems force us to use. I really liked the idea, and tried to use phrases instead.

As it turns out though, it’s pretty hard to come up with random phrases on your own. They are not particularly random. Luckily, Jeff Preshing felt the same, and created passphra.se, where you can generate random pass phrases in English, Spanish or French. So I’ve been using that for a while.

But meanwhile, I was thinking “It would be neater to have this in Swedish”. And this weekend I finally managed to get away from doing other stuff long enough to create Lösenfras (Pass phrase).

First I had to find a suitable Swedish dictionary that was free to use and downloadable. I found Den Stora Svenska Ordlistan, with a CC license. I hacked together a small utility to extract just the words in json format, and found that it contained about 88 000 words. Awesome! Four random words from that dictionary give you 60 quintillion permutations. That should be secure enough!

So then I created a small web page that loads all the words, and suggests four random words. Unfortunately, the phrases it suggested were not particularly easy to remember. How about rullgardinsmeny dagligdags vänsterextremistisk naturahushållning or valutareglering finnighet proletariat queerfeministisk? Yes, Swedish has a lot of compound words.

So after some fiddling around and a good suggestion from my illegitimate wife, I decided to remove all personal names, all words with letters other than a-z and åäö, and all words with more than two syllables, which in Swedish pretty much equals two vowels, since we don’t really have syllables with more than one vowel. That left me with about 24 000 words. Still a lot, still gives 331 quadrillion permutations, and gives phrases that are a lot easier to remember.

So, there it is. Lösenfras. Enjoy. I know I will!

lösenfras

One might wonder why I would write a post about a Swedish pass phrase generator in English. Well, I just couldn’t be bothered to make the lang property of my html tag dynamic. And any non-Swedish speaking readers are welcome to use the pass phrase generator of course, although I believe the phrases will be just a tiny bit harder for you to memorize…

Rånmord granhäck kod anslå.

Fixing the blog preview theme in Windows Live Writer

I’ve been using Windows Live Writer to write my blog posts ever since I started blogging. Even though the 2011 update made it a little worse than before, it’s still a pretty nice tool for writing your posts. Especially with the PreCode plugin, which allows me to write code with support for SyntaxHighlighter.

However, since I changed the theme of my blog recently, something went awry with the preview theme in Live Writer. More specifically, the white background for the blog posts disappeared, so it looked like this:

before

You’ll have to excuse the GUI being in Swedish; the computer I took the screenshot on came preinstalled with Swedish, and although I’ve changed the system language to English, I’ve yet to find a way to change the language in Live Writer… Anyway, as you can see, it’s not optimal if you want to see what you’re writing.

I suppose I could change the HTML of my blog, but there’s really nothing wrong with it, and I would rather not change it just because Live Writer can’t interpret it correctly. Luckily, there’s another way. The theme files that Live Writer generates get stored in C:\Users\{username}\AppData\Roaming\Windows Live Writer\blogtemplates. In this folder, there are some more folders with Guid names. You just have to figure out which one is the correct one. In my case that’s easy, since I only have one blog set up.

Inside the Guid-named folder, you’ll find an index.htm file, along with some images and other stuff. When I opened the index.htm file, it looked like this:

<!DOCTYPE html><!-- saved from url=(0025)http://johan.driessen.se/ -->

<HTML lang="en"><HEAD><TITLE>All posts -- Johan Driessen's blog</TITLE>
<META charset="utf-8">
<META content="IE=edge,chrome=1" http-equiv="X-UA-Compatible">
<META name="viewport" content="width=device-width, initial-scale=1.0"><LINK title="Johan Driessen's blog (RSS)" rel="alternate" type="application/rss+xml" href="http://johan.driessen.se/feed/rss/"><LINK title="Johan Driessen's blog (Atom)" rel="alternate" type="application/atom+xml" href="http://johan.driessen.se/feed/atom/"><LINK title="RSD" rel="EditURI" type="application/rsd+xml" href="http://johan.driessen.se/rsd.xml"><LINK rel="icon" type="image/gif" href="http://johan.driessen.se/favicon.ico"><LINK rel="stylesheet" type="text/css" href="file:///C:/Users/jdr1/AppData/Roaming/Windows Live Writer/blogtemplates/3bd255a5-c119-4680-b880-561c5d8efdbd/8e466ec1-3cf5-4403-8f5c-96137b8aa6d3/combined_92C867FD2A8F31A0C.css"><LINK rel="stylesheet" href="file:///C:/Users/jdr1/AppData/Roaming/Windows Live Writer/blogtemplates/3bd255a5-c119-4680-b880-561c5d8efdbd/8e466ec1-3cf5-4403-8f5c-96137b8aa6d3/SmallScreen.css">

<meta http-equiv="CONTENT-TYPE" content="text/html; utf-8"></HEAD>
<BODY>
<DIV id="container"><H1><A href="http://johan.driessen.se/">{post-title}</A></H1>{post-body}</DIV></BODY></HTML>

Aha, lots of elements missing around the post. So I changed the important part to this:

<DIV id="container">
<section id="main">
    <article class="post">
        <H1><A href="http://johan.driessen.se/">{post-title}</A></H1>
        <section class="content">
            {post-body}
        </section>
    </article>
</section>
</DIV>

And after restarting Windows Live Writer, it looks like this instead:

after

Changing the html code in index.htm fixes the appearance of the Edit mode. In order to fix the appearance of the Preview mode, I had to change the index[1].htm file as well, but I just pasted the same html there, and everything looked as it should!

Repositories, can't live with them, can't live without them?

I always struggle with design. Systems design, that is, not graphical design (well, ok, that too…). One thing I never seem to be really happy with is Repositories. They often seem to be a lot of work, and never really seem to fit.

Once upon a time, I used to follow a strict chain of command in the design:
Controller –> Service –> Repository –> Database
This naturally led to quite a lot of useless service methods that did nothing but forward the call to the repository. Does this seem familiar?

public List<Bar> GetBarList() {
    return repository.GetBarList();
}

public void SaveFoo(Foo foo) {
    repository.SaveFoo(foo);
}

Not particularly useful. So when I started working on this blog engine (Alpha) almost a year and a half ago, I made the bold choice to allow the controller to speak directly to the repository, as well as the service! If nothing else, it spared me from writing a few brain-dead methods. The downside was that the responsibilities became a little less clear.

I also tried to avoid repeating myself in the repositories by creating a base repository class (EntityRepository), and letting the specific repositories inherit from that and extend it if necessary. And by the way, I’m using RavenDB for Alpha, so Session is an instance of IDocumentSession. And IEntity just makes sure that the entities have an Id property. Anyway, it looked something like this (check out the Alpha source at revision dc0837bac57e for the real code):

public interface IEntityRepository<T> where T : IEntity
{
    void Save(T entity);
    T GetById(string id);
}

public abstract class EntityRepository<T> : IEntityRepository<T> where T: IEntity
{
    //Ignoring how Session got there...

    public virtual void Save(T entity)
    {
        Session.Store(entity);
        Session.SaveChanges();
    }

    public virtual T GetById( string id )
    {
        return Session.Load<T>( id );
    }
}

public interface IFooRepository : IEntityRepository<Foo>
{
    Foo GetByBar(Bar bar);
}

public class FooRepository : EntityRepository<Foo>, IFooRepository
{
    public Foo GetByBar(Bar bar)
    {
        return Session.Query<Foo>().SingleOrDefault(x => x.Bar == bar);
    }
}

Now, that’s a little bit nicer than writing the exact same code for every different entity. But we still have these repositories that frankly don't do much. They are just ceremony. Sure, they abstract away the actual persistence mechanism; the interfaces could look the same if I used SQL Server. Except that they wouldn't, since I would be using an int instead of a string for Id. And I have never ever had to change the data store in a project. And if I did, I would probably rewrite a lot of stuff. There's really no point in abstracting away your data store; it only forces you to use the lowest common denominator.

So, what to do? Well, for now, I'm just getting rid of the repositories. I'm letting the controllers access the IDocumentSession directly. Since querying RavenDB is more or less the same as querying a generic list, I feel pretty safe letting the controllers do that, at least for the simple stuff. The more complicated stuff I'm moving to the services. So it will basically be the same architecture as before, just without the repositories.

But that's just the first step! After that, the services will have to go as well, in favor of intelligent queries and commands, in a more CQRS-ish way. I'm thinking something in line of what Rob Ashton outlined in a great blog post last summer. We'll see how it goes.

So far I've managed to get rid of all Repositories except the PostRepository. I'd give it a week, tops, then it's gone. So I guess you can live without them.

Disclaimer: I realize that there are scenarios where Repositories are useful. For example, in the project I'm currently getting paid to work on (unlike Alpha), we get our data from services, which we use WCF to communicate with. This means a lot of mapping between data contracts and entities, error handling and other fun stuff. In this case, the Repositories actually do stuff, and can be justified.
