Hello darkness, my old friend

If you are reading this on a computer[1] with a dark system theme, you might notice that this blog now also has a dark theme. Although dark themes seem to be all the rage nowadays, I’ve been using dark themes for quite some time, and I’ve been wanting to implement a dark theme option for my blog since forever. But I could never decide on whether it should be something that changed automatically according to the time of day, or whether the sun was up or not, or something the visitor could toggle.

Well, as it turns out, while I have been procrastinating, the browser vendors have solved the problem for me! Earlier this year a new CSS media query was introduced: prefers-color-scheme.

This little gem equals dark if the system has a dark color scheme, and light otherwise. And it is supported by the latest versions of Firefox, Chrome, Safari and even Edge[2]. It works something like this:

/* Default color scheme */
body {
    background-color: #fff;
    color: #000;
}

/* Color scheme for dark mode */
@media (prefers-color-scheme: dark) {
    body {
        background-color: #000;
        color: #555;
    }
}

If the browser does not support prefers-color-scheme, or if it has a preferred color scheme other than “dark” (i.e. light), it will just ignore the overrides in the media query. So this is basically all I needed to do (well, I had to make a few more changes) to make the theme of the site follow the system theme. Sweet!
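The "few more changes" mostly come down to not scattering colour values all over the stylesheet. One way to keep the dark-mode override small (a sketch, not the actual stylesheet of this blog; the property names are made up) is to put the colours in CSS custom properties and only swap those inside the media query:

```css
/* Sketch: centralise theme colours in custom properties (names are illustrative) */
:root {
    --bg-color: #fff;
    --text-color: #000;
    --link-color: #0645ad;
}

@media (prefers-color-scheme: dark) {
    :root {
        --bg-color: #000;
        --text-color: #ccc;
        --link-color: #8ab4f8;
    }
}

body {
    background-color: var(--bg-color);
    color: var(--text-color);
}

a {
    color: var(--link-color);
}
```

This way, every rule keeps referring to the same variables, and only the variable definitions change with the system theme.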


  1. A smartphone is a computer.
  2. According to caniuse.com

Some Problems and Solutions When Creating Xamarin Android Bindings

As announced in my last post, we recently created Xamarin Bindings for the Adyen Android SDK. In this post, I thought I would share some experiences from creating those bindings: what kind of problems we ran into, and how we fixed them.

The process of creating Xamarin bindings can be a bit tricky. It is documented at docs.microsoft.com, but I struggled for quite a while to get it to work.

First of all, you need the actual Android libraries that you want to create the bindings for. These are (at least in this case) available at jcenter. Then you need to figure out exactly which libraries you need. In order to do this you can look in the *.pom file for a specific library to find out what other libraries it depends on.
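For example, a *.pom file lists the libraries it depends on in a dependencies section. A hypothetical, abbreviated example (group ids, artifact ids and versions here are illustrative, not copied from the actual Adyen POMs):

```xml
<!-- Hypothetical excerpt from a library's .pom file -->
<dependencies>
  <dependency>
    <groupId>com.adyen.checkout</groupId>
    <artifactId>base-v3</artifactId>
    <version>3.0.0</version>
  </dependency>
  <dependency>
    <groupId>com.android.support</groupId>
    <artifactId>recyclerview-v7</artifactId>
    <version>28.0.0</version>
  </dependency>
</dependencies>
```

Walking these dependency sections recursively gives you the full list of libraries you need bindings (or NuGet packages) for.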

Adyen advocates the use of their Drop-in solution which includes all supported payment types, but this also means that we would have to create bindings for all those libraries. This would amount to about 25 different libraries! However, many of the payment types supported were not interesting to us, at least not right now. So instead we opted to use only the Card Component and the Redirect Component, which would only require us to create bindings for 7 libraries[1].

There are a couple of different ways to create bindings, but as Adyen provides AAR files, I basically followed the steps on the Binding an .AAR page. This means creating a separate Xamarin Bindings Library for each AAR file, and the easiest way is to start at the “bottom”: create a binding for the library that does not have any other java dependencies, in this case adyen-cse, and work your way up, adding references to the other bindings as you go along. The Android dependencies in the POM files can simply be added as NuGet package references. Then you compile the project.
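In a bindings project, this boils down to a few item groups in the csproj. A sketch of what that might look like (file names, package names and versions are illustrative, not taken from the actual project):

```xml
<!-- Sketch of the relevant items in a bindings .csproj (names/versions illustrative) -->
<ItemGroup>
  <!-- The AAR file being bound -->
  <LibraryProjectZip Include="Jars\base-v3-3.0.0.aar" />
</ItemGroup>
<ItemGroup>
  <!-- Android dependency from the POM, added as a NuGet package -->
  <PackageReference Include="Xamarin.Android.Support.v7.RecyclerView" Version="28.0.0" />
  <!-- Reference to the binding for the java library further down the chain -->
  <ProjectReference Include="..\AdyenCse\AdyenCse.csproj" />
</ItemGroup>
```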

It won’t compile!

Right. Most of the time, when you create a binding, add the AAR file and try to compile, it won’t work the first time. This could be due to a number of problems, but in this project I mainly ran into a handful of issues, which I’ll elaborate on below.

Problem 1 - Wrong return type

Sometimes the generated code will have the wrong return type. This is often because of the difference between how interfaces and generics work in Java and C#.

For example, in the original code for LogoConnection in base-v3, the call() method returns a BitmapDrawable, which is ok, since the class implements the interface java.util.concurrent.Callable<T>, which is a generic interface, so you can have call() return a specific type.

In Xamarin, however, the interface java.util.concurrent.Callable is not generic (I don’t know why), and thus LogoConnection.Call() must have a return type of Java.Lang.Object. In the generated code, however, the return type is still BitmapDrawable. Fortunately, this is an easy fix!

Every generated method and class has a method/class reference as a comment above it. This can be used to modify the generated code via the Metadata.xml file. One of the modifications that can be made is to change the return type. The following node changes the return type of the call method to Java.Lang.Object:

<attr path="/api/package[@name='com.adyen.checkout.base.api']/class[@name='LogoConnection']/method[@name='call' and count(parameter)=0]" name="managedReturn">Java.Lang.Object</attr>

The path is just copied from the comment above the method in the generated code, but it is pretty straightforward anyway.

Problem 2 - Wrong parameters

Another problem that can occur, and that is related to the previous one, is that sometimes generated methods have the wrong parameter types. This is not quite as easily fixed, as I have not found a way to modify the parameters of a method solely with a Metadata.xml node.

Example: In com.adyen.checkout.base.ui.adapter.ClickableListRecyclerAdapter, the onBindViewHolder method takes a generic ViewHolderT as the first parameter. But in the generated code, ClickableListRecyclerAdapter is no longer generic, so OnBindViewHolder instead takes a Java.Lang.Object, as can be seen in the snippet below:

// Metadata.xml XPath method reference: path="/api/package[@name='com.adyen.checkout.base.ui.adapter']/class[@name='ClickableListRecyclerAdapter']/method[@name='onBindViewHolder' and count(parameter)=2 and parameter[1][@type='ViewHolderT'] and parameter[2][@type='int']]"
[Register ("onBindViewHolder", "(Landroid/support/v7/widget/RecyclerView$ViewHolder;I)V", "GetOnBindViewHolder_Landroid_support_v7_widget_RecyclerView_ViewHolder_IHandler")]
public override unsafe void OnBindViewHolder (global::Java.Lang.Object viewHolderT, int position)
{
    const string __id = "onBindViewHolder.(Landroid/support/v7/widget/RecyclerView$ViewHolder;I)V";
    IntPtr native_viewHolderT = JNIEnv.ToLocalJniHandle (viewHolderT);
    try {
        JniArgumentValue* __args = stackalloc JniArgumentValue [2];
        __args [0] = new JniArgumentValue (native_viewHolderT);
        __args [1] = new JniArgumentValue (position);
        _members.InstanceMethods.InvokeVirtualVoidMethod (__id, this, __args);
    } finally {
        JNIEnv.DeleteLocalRef (native_viewHolderT);
    }
}

However, since ClickableListRecyclerAdapter inherits from Android.Support.V7.Widget.RecyclerView.Adapter, OnBindViewHolder needs to take a RecyclerView.ViewHolder as its first argument. The solution to this problem - and many others - is to remove the generated method in the Metadata.xml, and add a modified version in the Additions folder:

<remove-node path="/api/package[@name='com.adyen.checkout.base.ui.adapter']/class[@name='ClickableListRecyclerAdapter']/method[@name='onBindViewHolder' and count(parameter)=2 and parameter[1][@type='ViewHolderT'] and parameter[2][@type='int']]" />
//Namespace should match that of the generated class
namespace Com.Adyen.Checkout.Base.UI.Adapter
{
    //Note that this is a partial class
    public partial class ClickableListRecyclerAdapter
    {
        //This code is identical to the generated code above,
        //except for the type of the first parameter
        [Register("onBindViewHolder", "(Landroid/support/v7/widget/RecyclerView$ViewHolder;I)V", "GetOnBindViewHolder_Landroid_support_v7_widget_RecyclerView_ViewHolder_IHandler")]
        public override unsafe void OnBindViewHolder(RecyclerView.ViewHolder viewHolderT, int position)
        {
            const string __id = "onBindViewHolder.(Landroid/support/v7/widget/RecyclerView$ViewHolder;I)V";
            IntPtr native_viewHolderT = JNIEnv.ToLocalJniHandle(viewHolderT);
            try
            {
                JniArgumentValue* __args = stackalloc JniArgumentValue[2];
                __args[0] = new JniArgumentValue(native_viewHolderT);
                __args[1] = new JniArgumentValue(position);
                _members.InstanceMethods.InvokeVirtualVoidMethod(__id, this, __args);
            }
            finally
            {
                JNIEnv.DeleteLocalRef(native_viewHolderT);
            }
        }
    }
}

Problem 3 - Missing method

In at least one case, the generated code was simply missing a method that was required by the base class or interface. The fix is pretty much as described above, although you obviously don’t need to remove anything in Metadata.xml. You also have to figure out how the method should be implemented, but that is not as difficult as it sounds, as all implementations follow the same pattern.

In my case, the generated class Com.Adyen.Checkout.Card.CardListAdapter was missing the OnBindViewHolder method, which is required by the RecyclerView.Adapter base class, and is obviously present in the original code.

The solution, then, is to add a partial CardListAdapter class in the Additions folder, and add the OnBindViewHolder implementation to it. In this case it was very easy, since I could basically copy the OnBindViewHolder implementation from ClickableListRecyclerAdapter above (or any other class that has it).

Problem 4 - Other unfixable problem -> Kill it!

Sometimes you will run into another problem that is not as easy to fix, for whatever reason. In many cases, you can solve it by removing the offending method altogether. If it is not a method that you need to call directly from the app, and not a method that is required for implementing an interface or an abstract base class, you can probably remove it with a remove-node line in Metadata.xml and be done with it.
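Such a node looks just like the remove-node used for Problem 2. A hypothetical example (this XPath is made up for illustration; a real one would be copied from the comment above the generated method):

```xml
<!-- Hypothetical example: drop a method that cannot be bound -->
<remove-node path="/api/package[@name='com.adyen.checkout.base.api']/class[@name='LogoConnection']/method[@name='someProblematicMethod' and count(parameter)=0]" />
```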

The reason for this is, of course, that once the call to a native method has been made, for example with InvokeVirtualVoidMethod as above, subsequent calls will be completely native, so it doesn’t matter if the methods have .NET wrappers or not. At least that is my understanding of it.

Bug in the AAR file

When I tried to use the Card Component in the Demo App, I got the build error “Multiple substitutions specified in non-positional format; did you mean to add the formatted="false" attribute?”. Turns out there is (at least at the time of writing) a bug in strings.xml in the card-ui library.

<resources>
    <!-- snip -->
    <string name="expires_in">%s/%s</string>
    <!-- snip -->
</resources>

Turns out you can’t have multiple %s in a string resource because of reasons. If you do, you need to add formatted="false" to the node. I fixed this by editing the AAR file (it’s just a zip file, really), and adding the attribute in /res/values/values.xml (which is a squashed version of all xml files in the res folder).
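With the attribute added, the offending node becomes:

```xml
<string name="expires_in" formatted="false">%s/%s</string>
```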

Unfortunately, this means I had to check in the modified AAR file. For the rest of the files, I have a Cake build script that just downloads all the AAR files from jcenter. But hopefully it will be fixed in the next release of card-ui.

I hope someone who has to create Xamarin Bindings will find this rather long and unstructured post useful. If nothing else, it will help me remember the problems I had and how I solved them for the next time.


  1. Actually, I finished the first version of these bindings in June. Unfortunately, just as I thought I was done, I noticed that the Adyen developer documentation had changed substantially. While I was working on this they had released an RC of version 3.0, which was totally different from the version 2.4.5 that I had been working on. So I basically had to start all over again and create new bindings for v3. The old bindings are available at github (tag: 2.4.5), and also at NuGet (Approach.Adyen.UI.Droid), should anyone be interested. But it's probably better to use the new ones.

Announcing Xamarin Android Bindings for Adyen Checkout

I’ve been working on implementing Adyen payments for a customer lately. They have been using another PSP for many years, but are now switching to Adyen. This is super easy on the web site, but as it turns out, not so easy in the mobile app.

Adyen offers a lot of SDKs, including an Android SDK. The app I’m working on, however, is developed in Xamarin, and unfortunately, Adyen does not offer a Xamarin SDK. That means that in order to use the Android SDK, we have had to create Xamarin bindings for the Java SDK.

We have created a set of Xamarin Android Bindings for the Adyen Checkout components. So far, we have only implemented the Card Component and the Redirect Component, because that was all we needed at the time.

The components are available as NuGet packages:

The source code is available at our Github account, should you want to build your own components, or maybe fix a bug or two. There is also a Demo app in the github repository, which should help you use the components. So yeah, that’s our first official public open source project!

I have published a follow-up post where I delve a little deeper into the problems I ran into while creating Xamarin bindings for Android, and how to fix some of them. So check that out as well, if you’re into that kind of stuff!

Resolving ILogger with Nancy and TinyIoC

This is a shorter follow-up post to my recent post about configuring NLog and ILogger in ASP.NET Core. As I mentioned there, since we’re using Nancy for our project, we can’t just use the built-in dependency resolver in ASP.NET Core, since Nancy uses its own dependency resolution.

In most cases, we use Autofac and the Nancy Autofac bootstrapper, but in this case, we were using the default TinyIoC implementation, so that’s what I’ll write about in this post. I might write another follow-up post when I implement this for Autofac.

First of all, we need to pass the ILoggerFactory that we configured in the previous post. Since this is available in Startup.Configure we can just pass it on to our Nancy bootstrapper.

public class Startup
{
    public void Configure(IApplicationBuilder app, IHostingEnvironment env,
        ILoggerFactory loggerFactory, IConfiguration configuration)
    {
        app.UseOwin(x => x.UseNancy(new NancyOptions
        {
            Bootstrapper = new CustomBootstrapper(env, configuration, loggerFactory)
        }));
    }
}

Now, if we were content with just resolving the non-generic version of ILogger, this wouldn’t be much of a problem; we could just create a default logger and register that. But since we want to use the generic ILogger<T>, it’s a little more complicated.

So we can use this custom bootstrapper:

public class CustomBootstrapper : DefaultNancyBootstrapper
{
    //Of course we have a constructor that takes the arguments passed from Startup
    //and sets them as fields, but that seems obvious.

    protected override void ApplicationStartup(TinyIoCContainer container, IPipelines pipelines)
    {
        base.ApplicationStartup(container, pipelines);

        //Fallback for non-generic logger
        var defaultLogger = loggerFactory.CreateLogger("Default");
        container.Register<ILogger>(defaultLogger);
        //The generic constructor for Logger needs ILoggerFactory
        container.Register<ILoggerFactory>(loggerFactory);
        //Register generic logger as multi instance
        container.Register(typeof(ILogger<>), typeof(Logger<>)).AsMultiInstance();
        //TinyIoC cannot resolve ILogger<> directly in modules for some reason,
        //so we have to register this one manually.
        container.Register<ILogger<API.Modules.FooBarModule>>(
            (c, an) => loggerFactory.CreateLogger<API.Modules.FooBarModule>());
    }
}

Now, there are a couple of things that are important here:

  • We need to register ILoggerFactory even though we aren’t going to use it, since the generic constructor to ILogger needs it.
  • The generic logger needs to be registered with .AsMultiInstance(), otherwise it will be resolved only the first time, and the same (and wrong) generic instance will be re-used after that.
  • For some reason, the resolution of ILogger<> doesn’t seem to work in the modules themselves. This might have something to do with how Nancy auto-discovers the modules, or it might have something to do with TinyIoC, I don’t know. But since we generally do very little logging in the modules themselves, we just manually register the loggers that we need for the modules. Other options would be, for example, to
    • Use the non-generic ILogger in the modules
    • Use the ILoggerFactory in the module instead, and manually create a generic logger with loggerFactory.CreateLogger<FooBarModule>

I’m sure there are other, and probably better, ways to do this, but this seems to work well enough.

Repairing a cracked PCB in a Commodore 1901 monitor

This is the second and final part in a very short series where I improve and repair my Commodore 1901 monitor. In part 1 I added a SCART connector with analog RGB and audio support, but also discovered that the colours were a bit off – especially when using an RGBi input, such as CGA – and discovered a crack on the PCB. In this part, I will repair the PCB and hopefully fix the colours.

First I had to have a good look at the crack. It was in the lower left corner of the PCB, close to the potentiometers that adjust the color levels, as marked in the picture below.

After pulling the board out a bit and turning the monitor upside down, I could get a closer look at the crack.

That doesn’t look too fucking good. No less than nine (9) traces are broken. Fortunately, since this is an old monitor, the PCB is single layer, so there are no traces on the back, and no traces inside. The easiest way to repair a broken trace on a PCB is to find a solder joint on each side of the crack and solder a wire over the crack. But I also wanted to try another way. So for the first three traces, where there was enough space, I just scraped a bit of the outer layer off, and soldered a very short piece of wire right over the crack.

For the rest of the broken traces, there just wasn’t enough room to use this method, at least not with the tools and skill at my disposal. So I had to solder wires over the rest of the cracks.

After this I reassembled the monitor (well, actually, I finished the SCART mod as well) and connected my Bondwell Model 8 to the RGBi-input. To my great surprise everything worked perfectly! The lovely CGA palette of white, cyan and magenta was as vibrant as ever with no sign of the yellowish tint from before, and some careful banging on the side of the screen no longer causes the colors to change. So I have to label this a complete success!

I now have the perfect monitor for my small[1] collection of retro computers. It takes RGBi, SCART with analog RGB and separate Chroma and Luma input (like S-VIDEO). And it even has a built-in speaker! The only input I so far haven’t had much success with is composite. If I connect composite to the Luma input (the yellow RCA jack), I get a monochrome picture (not a great surprise). If I connect it to the Chroma instead, I get no picture at all. If I split the composite cable and connect it to both, I still only get monochrome. If anyone has a working way to connect a composite signal to separate luma and chroma inputs, I would be very interested. A minor annoyance though, as I can connect composite to a TV instead. So, yay, working Commodore 1901 monitor!

Finally, here is a picture of my five year old son playing Krakout on the repaired monitor!


  1. I would consider my collection small. There are others in my family who would voice a different opinion...

Properly configuring NLog and ILogger in ASP.NET Core 2.2

Ever since we started using dotnet core a couple of years ago, both for new projects and porting old projects, we’ve been struggling with configuration. Especially regarding logging. The official documentation has been – to put it mildly – confusing and inconsistent, and to make matters worse, we’ve been wanting to use NLog as well. In the old days (i.e. when we used .NET Framework 4.x) using NLog was pretty easy: we just added an NLog configuration section to web.config (or a separate file if we were being fancy), and then accessed the static instance of NLog with LogManager.GetCurrentClassLogger(). This, however, does not work particularly well in dotnet core, for the following reasons:

  • Dotnet Core does not like static accessors
  • Dotnet Core really would prefer if we used the ILogger interface to log stuff
  • We don’t have a web.config anymore

So, over the last few years I’ve tried different approaches to this, without ever being fully happy with the result. But with recent versions of dotnet, and after multiple more or less ugly attempts, I feel I finally have a pretty good grasp of how to set everything up properly, so I thought I’d better write it down for future reference before it slips my mind again (my mind is very good at remembering release years for old movies, but not so great at remembering dotnet configuration syntax).

So, first things first. We have an asp.net core web app targeting netcoreapp2.2, and in order to use NLog for the logging, we need two additional package references:

<PackageReference Include="NLog.Extensions.Logging" Version="1.5.0" />
<PackageReference Include="Nlog.Web.AspNetCore" Version="4.8.2" />

Then, we need to configure the app configuration in Program.cs. In older versions of dotnet core most of this setup was done in Startup.cs, but it has since mostly been moved to the Program class. Besides setting up the logging, we also configure the rest of the app configuration here, e.g. setting up appsettings.json. For more fundamental information about the Program.cs and Startup.cs classes, see docs.microsoft.com.

//This method is called from Main
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            var env = hostingContext.HostingEnvironment;

            //Read configuration from appsettings.json
            config
                .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json",
                    optional: true, reloadOnChange: true);
            //Add environment variables to config
            config.AddEnvironmentVariables();

            //Read NLog configuration from the nlog config file
            env.ConfigureNLog($"nlog.{env.EnvironmentName}.config");
        })
        .ConfigureLogging(logging =>
        {
            logging.ClearProviders();
            logging.AddDebug();
            logging.AddConsole();
            logging.AddNLog();
        });

The key here is of course the env.ConfigureNLog($"nlog.{env.EnvironmentName}.config") statement, which allows us to read the NLog configuration from a standard NLog configuration file, just as we did in the old .NET Framework. The ConfigureNLog extension method is provided by the Nlog.Web.AspNetCore package. In my example I have different nlog config files for different environments, just as I have different appsettings for different environments. The nlog.*.config files are automagically copied to the publish directory, just like the appsettings files. We also configure the different loggers, adding a Debug, a Console and an NLog logger, which will all receive the same logging data.
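For completeness, such an nlog config file might look something like this (a minimal sketch; the target name, file path and logging levels are made up for illustration, not taken from the actual project):

```xml
<?xml version="1.0" encoding="utf-8" ?>
<!-- Sketch of a minimal nlog.Development.config (names and paths illustrative) -->
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <target name="file" xsi:type="File"
            fileName="logs/app.log"
            layout="${longdate}|${level:uppercase=true}|${logger}|${message} ${exception:format=tostring}" />
  </targets>
  <rules>
    <!-- Silence noisy framework loggers below Warn -->
    <logger name="Microsoft.*" maxlevel="Info" final="true" />
    <logger name="*" minlevel="Debug" writeTo="file" />
  </rules>
</nlog>
```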

This also has the additional benefit of getting rid of a very annoying warning that you get if you still use the old method of adding loggers in Startup.cs:

‘ConsoleLoggerExtensions.AddConsole(ILoggerFactory)’ is obsolete: ‘This method is obsolete and will be removed in a future version. The recommended alternative is AddConsole(this ILoggingBuilder builder).’

And with this, we’re pretty much finished. All setup regarding logging and app configuration can be removed from Startup.cs, unless you need to do other fancy stuff there. Since IConfiguration and ILoggerFactory are already configured in Program.cs, you may have to inject them into Startup. This can be done in either the constructor or in the ConfigureServices or Configure methods. I really can’t say which is best.

public class Startup
{
    public Startup(IHostingEnvironment env, IConfiguration config)
    {
        //I guess you could store config as a field here and access it in the other methods
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env,
        ILoggerFactory loggerFactory, IConfiguration configuration)
    {
        //You can inject both ILoggerFactory and IConfiguration directly
        //into the configuration methods as well
    }
}

If you are using the standard asp.net core dependency resolution, this is it! You can inject ILogger or (preferably) the generic ILogger<FooBar> anywhere you want to log stuff, and just log away. In our case, we use Nancy and TinyIoC (or frequently Autofac) for dependency injection, which makes things a little more complicated, but that will make for an excellent post of its own!

Adding an RGB SCART connector to a Commodore 1901 monitor

So, this post is going to be a departure from most of the previous content on this blog. This may or may not be indicative of future posts.

A couple of months ago, I bought a cheap, used Commodore 1901 monitor[1] from Tradera. The Commodore 1901 has digital RGBi input using a D-SUB 9 connector, as well as separate Luminance and Chrominance inputs via RCA jacks (which is the same signal as S-Video, just different connectors). I thought this would be a good monitor for my old Bondwell Model 8 computer, which only has CGA output (and probably deserves a post of its own). It would probably also work with my Commodore 64. The Commodore 1901 also has a built-in speaker, which connects with yet another RCA plug, so I wouldn’t even need a separate speaker.

When I connected my Bondwell to the monitor, it was indeed glorious, as evident in the image below. What is harder to see is that the colours were a bit off; there was a bit of a yellow tint that kind of came and went.

I managed to find the service manual for the Commodore 1901 monitor, and found that there were a couple of potentiometers that could be adjusted if the colour was off. So I opened the monitor and adjusted the potentiometers which at least made the colour a little bit better. Unfortunately, I also noticed that the PCB had a small crack, which caused some bad connections, and was probably the cause of the colour problem. More about this later.

But what I also found, to my great surprise, was a number of solder points that looked like they could fit a SCART connector, and a matching hole in the metal backplate. What on earth could this be for? Maybe this monitor came in a different version[2], with a SCART connector? But if so, what kind of signals were used[3]? And did this version actually use those signals? Would it be possible to get analog RGB input by adding a SCART connector?

A bit of research indicated that yes, this might indeed be possible. I found a thread on amibay.com and a blog post by a Danish guy (unfortunately missing all images[4]) that discussed this. The problem seemed to be that the solder points for the SCART connector on the PCB are oriented backwards, so a standard 90-degree connector won’t fit. So the usual solution seems to have been to solder wires between the PCB and the SCART plug. However, I managed to find an angled SCART connector on eBay that seemed to be oriented the other way around. It sure looked like it would fit!

So, the first thing to do was to remove the solder blocking the holes. Time to heat up my solder sucker!

After this, it was simply a matter of fitting the SCART connector and soldering it in place. Or rather, it would have been, had the darn plug fit through the hole in the metal frame! When I had fitted the legs through the holes in the PCB, it was completely impossible to get the plug through the hole. In the end, I had to bring out a knife and go to town on the poor plug.

Finally, I was able to fit the SCART connector through the hole, and solder it in place.

And now, the moment of truth. Would this work? I have an Amiga 600 with a SCART cable that carries not only analog RGB video, but also sound. So maybe I would get sound through the built-in speaker as well? Time to connect a cable. Would it even fit the mangled SCART connector?

The answer to the last question is yes, it fits. And the answers to the rest of the questions are also yes; everything works perfectly! I get a crystal clear image from the Amiga, and I get the sound through the speaker! The only thing left to do was to make a hole in the plastic cover as well, which was easy since there was already an indication in the cover of where to cut.

So, after cutting a hole in the cover, it was just a matter of putting everything back, and look at the nice result:

And finally, here is a picture of the Amiga workbench on the Commodore 1901 monitor:

So hooray, everything is great! Except for the crack in the PCB, remember? Since I had the monitor open, and the soldering iron out, I decided to see if I could fix that as well. But I believe this post is long enough already, so that will have to wait until part 2.


  1. The Commodore 1901 monitor was a PAL-only monitor produced between 1986 and 1988, and was meant to be used together with the Commodore 128. It is not as famous as the 1084 monitor but, as we will see, with the SCART modification it is just as useful!
  2. The monitor was actually manufactured by Thompson. And Thompson did release their own version of it with a SCART connector, the Thompson 450G. Why the Commodore version came without it, I do not know.
  3. The SCART connector actually carries a lot of different signals. It can carry composite video, S-Video and RGB, and even YPbPr, as well as stereo sound. Wikipedia has a good article.
  4. While writing this post I checked the blog post again, and now it seems all images are back! This would have made it easier for me when I was actually working on the monitor!

The blog is now even more static than before!

This blog has been rather static lately. In fact, I haven’t written a new post since 2014. I’m sure that has nothing to do with the fact that I became a father for the second time in 2014. Obviously a coincidence.

But now it’s time for the blog to become static in a whole new way! For more than 8 years, this blog has been running on my own blog engine, but now the time has come to leave that behind and move to the wonderful world of static site generators. I know, I’m a couple of years late to the party, but better late than never!

After looking at a lot of alternatives, and sinking way too many hours into making a combination of gulp and metalsmith work the way I wanted it to, I finally decided on using Hexo to generate the site. That worked pretty much out of the box, although I still feel I have enough flexibility to make it work the way I want. Hexo generates a completely static site, and I just need a lightly configured Nginx server in front of it, mainly to keep some old links alive.

So now the source code for both the posts and the scaffolding for the blog lives on Github, and publishing a new post is a simple matter of writing it in markdown and pushing to Github.

Or, well, it will be once I finish the script that auto publishes the blog. So, any day now.

I also gave the blog a new coat of paint, which was sorely needed. Hopefully this will lower the bar for me to write new posts, which might mean that I will be able to produce more than one every five years. Fingers crossed.

Deploying to remote IIS with MsDeploy

We’ve been using MsDeploy to automate our web site deploys for some time. Our build server (running TeamCity) creates the deploy packages, and a PowerShell script on the production server downloads the packages and deploys them to IIS. Recently, we added a fallback-server in another physical location in case there is a problem with the normal server. Naturally, we want to make sure that all the web sites are up to date on the fallback server as well. And that means we want to make the scripts that deploy the site on the production server also deploy to the fallback server.

Now, MsDeploy has support for deploying to other servers, but as it turns out, it can be a little tricky to get working. One option is to use a Windows user with administrator privileges on the target server, but we didn’t really want to do that. The other option is to use an IIS Manager User. This option requires a couple of steps to get the authentication working.

1. Create a new IIS Manager User

The first thing you need to do is to create an IIS Manager User. This is done by opening the IIS Manager, clicking on the server node, and then Management - IIS Manager Users. Add a new user; let’s call it “deploy” with the password “password”.
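If you prefer to script this step, the same thing can be done from PowerShell through the IIS management API. This is a sketch, not something from our actual scripts; it assumes the IIS Management Service feature is installed, and the assembly path may differ on your system.

```powershell
# Load the IIS management assembly (installed with the IIS Management Service).
Add-Type -Path "$env:windir\System32\inetsrv\Microsoft.Web.Management.dll"

# Create the IIS Manager User, equivalent to Management - IIS Manager Users in the UI.
[Microsoft.Web.Management.Server.ManagementAuthentication]::CreateUser("deploy", "password")
```

Run it in an elevated prompt, since it writes to the server-level IIS configuration.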

2. Allow the IIS Manager User on the site

The next step is to give the user permissions to deploy on all the sites that are to be deployed this way. Click on the site node and then on IIS Manager Permissions. Under Actions, click on Allow User.

Select IIS Manager, and then click Select to find your user. Unfortunately, you have to repeat this process for each site.
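Since this has to be repeated for each site, it is a good candidate for scripting as well. A hedged sketch using the same management API as above; “SITENAME” is a placeholder for your actual site name:

```powershell
# Load the IIS management assembly (installed with the IIS Management Service).
Add-Type -Path "$env:windir\System32\inetsrv\Microsoft.Web.Management.dll"

# Allow the IIS Manager User on one site; repeat (or loop) for each site.
# The last argument is $true if the name refers to a role rather than a user.
[Microsoft.Web.Management.Server.ManagementAuthorization]::Grant("deploy", "SITENAME", $false)
```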

3. Give IIS Management Service permissions on site

A not so obvious step is that you need to make sure that the IIS Management Service has permissions to actually perform the deploy on each site. The easiest way to do this is to right-click on the site in IIS Manager, and select Edit Permissions. Under the Security tab, give Local Service “Full control”.

By default, this IIS Management Service runs as Local Service, but if you have changed that, you’ll have to use that account instead. It might work with only modify permissions, but it didn’t for me.
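The Edit Permissions step can also be done from the command line with icacls. A sketch, assuming the site lives under the default inetpub path; adjust the path to wherever your site’s files actually are:

```powershell
# Give Local Service full control over the site's files and subfolders.
# (OI)(CI) makes the grant inherit to files and subdirectories, F is full control.
icacls "C:\inetpub\wwwroot\SITENAME" /grant "LOCAL SERVICE:(OI)(CI)F" /T
```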

4. Run msdeploy with the correct parameters

Finally, the trickiest part is getting the parameters to msdeploy right! This is what we ended up using.

> msdeploy.exe -verb:sync -source:package="PACKAGE.zip" -dest:auto,computerName=https://FALLBACKSERVER:8172/msdeploy.axd?site=SITENAME,userName=deploy,password=PASSWORD,authType=basic -setParam:"IIS Web Application Name"="SITENAME" -allowUntrusted=true -skip:Directory="App_Data"

There are some things worth mentioning here. First, you need to use the full url to the server (including msdeploy.axd) with the sitename as a querystring parameter in order to be able to use an IIS Manager User, since such users only have permissions on individual sites. Otherwise the authentication will fail. Also, you need to set authType=basic, otherwise it will try to use a Windows user instead.
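Since we want the same package on both the production and the fallback server, the deploy script essentially runs this command once per target. A sketch of how that loop might look in PowerShell; the server names, package name and credentials are placeholders:

```powershell
# Deploy the same package to each target server in turn.
$servers = @("PRODSERVER", "FALLBACKSERVER")

foreach ($server in $servers) {
    # Full URL including msdeploy.axd and the site name, as required for IIS Manager Users.
    $dest = "auto,computerName=https://${server}:8172/msdeploy.axd?site=SITENAME," +
            "userName=deploy,password=PASSWORD,authType=basic"

    & msdeploy.exe -verb:sync -source:package="PACKAGE.zip" -dest:$dest `
        -setParam:"IIS Web Application Name"="SITENAME" `
        -allowUntrusted=true -skip:Directory="App_Data"
}
```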

Applying MSBuild Config Transformations to any config file without using any Visual Studio extensions

As I have mentioned in previous posts, I frequently use a setup where our TeamCity server creates deploy packages that are semi-automatically deployed to the web servers. A great help in achieving that is Visual Studio’s Web.config transformations. However, we frequently need to transform other config files as well, either because we’re not in a web project, or because we simply have multiple config files.

I’ve had some success using a Visual Studio plug-in called SlowCheetah. Unfortunately, it does not really play well with TeamCity. Sometimes it works, sometimes not. More the latter than the former. So recently I made an effort to solve this without using SlowCheetah or any other extension. As it turns out, you can. And it’s not even particularly difficult.

First of all, you need to have the Visual Studio Web Application build targets installed on your build server. This can be achieved either by installing an express version of Visual Studio or the Visual Studio Shell Redistributable Package.

Then, create an App.config file in your project. I placed it in a folder called “Config”, to avoid any automatic behaviour from Visual Studio. Then create your transform files: App.Debug.config, App.Release.config and whatever else you need (I usually don’t use those, but rather Test, Prod, Acceptance etc). Now, these will all be placed beside App.config, and not be linked to it as with Web.config transforms. Not to worry, we’ll fix that shortly!

Next, unload your project, and edit the .csproj file. First we’ll fix the linking of the files. This is done simply by adding a DependentUpon element inside each Item. Let’s say you have this:

<None Include="Config\App.config" />
<None Include="Config\App.Debug.config" />

Simply change it to this:

<None Include="Config\App.config" />
<None Include="Config\App.Debug.config">
<DependentUpon>App.config</DependentUpon>
</None>

Now, let’s move on to the real trick. In order to make MSBuild transform your config file, we need to add a build target. At the end of the file, add

<UsingTask TaskName="TransformXml"
AssemblyFile="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v12.0\Web\Microsoft.Web.Publishing.Tasks.dll" />
<Target Name="AfterBuild">
<TransformXml Source="Config\App.config"
Transform="Config\App.$(Configuration).config"
Destination="$(OutputPath)\$(AssemblyName).$(OutputType).config" />
</Target>

You need to make sure that the path in the UsingTask element matches the version of your Visual Studio build targets.

In this example, I have a console application, so I want the result of my transformation to end up in the output directory, and be named as AssemblyName.exe.config, e.g. bin\Debug\MyConsoleApplication.exe.config. In a web application where I have other config files, I would use something like

<TransformXml Source="Config\NLog.config"
Transform="Config\NLog.$(Configuration).config"
Destination="NLog.config" />

And if you have more than one config file that you would like transformed, you can of course add several TransformXml-lines. After you’re done, just reload the project, and hopefully everything works. At least it works on my machine!
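Putting that together, an AfterBuild target handling several config files might look something like this. A sketch; the Exists conditions are an optional addition of mine, so that the build still succeeds for configurations that have no transform file:

```xml
<Target Name="AfterBuild">
  <TransformXml Source="Config\App.config"
                Transform="Config\App.$(Configuration).config"
                Destination="$(OutputPath)\$(AssemblyName).$(OutputType).config"
                Condition="Exists('Config\App.$(Configuration).config')" />
  <TransformXml Source="Config\NLog.config"
                Transform="Config\NLog.$(Configuration).config"
                Destination="NLog.config"
                Condition="Exists('Config\NLog.$(Configuration).config')" />
</Target>
```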

Finally, I should add that I found another Visual Studio extension that seems to work better than SlowCheetah (at least sometimes), called Configuration Transform, which makes this entire post somewhat unnecessary. On the other hand, this way there is less magic and more control, which I personally like. And if your extension suddenly breaks after an update, this might come in handy!

*UPDATE 2014-03-20* – I realised that unless your destination file is included in the project, or rather has a suitable Build Action, it will not be included in the deploy package, or deployed at all. Usually the build action should be “Content”. You don’t have to worry about the content of the destination file, though, as it will be replaced on every build. I prefer not to have it checked in to source control, since it would be pretty pointless to check in every change to an auto-generated file.
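For the NLog example above, that means the generated file should appear in the .csproj as a Content item rather than None, along these lines:

```xml
<!-- Build Action "Content", so the transformed file ends up in the deploy package. -->
<Content Include="NLog.config" />
```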