Johan Driessen

Resolving ILogger with Nancy and TinyIoC

This is a shorter follow-up post to my recent post about configuring NLog and ILogger in ASP.NET Core. As I mentioned there, since we’re using Nancy for our project, we can’t just use the built-in dependency resolver in ASP.NET Core, because Nancy uses its own dependency resolution.

In most cases, we use Autofac and the Nancy Autofac bootstrapper, but in this case, we were using the default TinyIoC implementation, so that’s what I’ll write about in this post. I might write another follow-up post when I implement this for Autofac.

First of all, we need to pass along the ILoggerFactory that we configured in the previous post. Since it is available in Startup.Configure, we can just pass it on to our Nancy bootstrapper.

public class Startup
{
    public void Configure(IApplicationBuilder app, IHostingEnvironment env,
        ILoggerFactory loggerFactory, IConfiguration configuration)
    {
        app.UseOwin(x => x.UseNancy(new NancyOptions
        {
            Bootstrapper = new CustomBootstrapper(env, configuration, loggerFactory)
        }));
    }
}

Now, if we were content with just resolving the non-generic version of ILogger, this wouldn’t be much of a problem: we could just create a default logger and register that. But since we want to use the generic ILogger<T>, it’s a little more complicated.

So we can use this custom bootstrapper:

public class CustomBootstrapper : DefaultNancyBootstrapper
{
    //Of course we have a constructor that takes the arguments passed from Startup
    //and sets them as fields, but that seems obvious.

    protected override void ApplicationStartup(TinyIoCContainer container, IPipelines pipelines)
    {
        base.ApplicationStartup(container, pipelines);

        //Fallback for non-generic logger
        var defaultLogger = loggerFactory.CreateLogger("Default");
        container.Register<ILogger>(defaultLogger);
        //The generic constructor for Logger needs ILoggerFactory
        container.Register<ILoggerFactory>(loggerFactory);
        //Register generic logger as multi instance
        container.Register(typeof(ILogger<>), typeof(Logger<>)).AsMultiInstance();
        //TinyIoC cannot resolve ILogger<> directly in modules for some reason,
        //so we have to register this one manually.
        container.Register<ILogger<API.Modules.FooBarModule>>(
            (c, an) => loggerFactory.CreateLogger<API.Modules.FooBarModule>());
    }
}
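For completeness, a minimal sketch of what that constructor might look like (the field names are my own):

private readonly IHostingEnvironment env;
private readonly IConfiguration configuration;
private readonly ILoggerFactory loggerFactory;

public CustomBootstrapper(IHostingEnvironment env, IConfiguration configuration,
    ILoggerFactory loggerFactory)
{
    this.env = env;
    this.configuration = configuration;
    this.loggerFactory = loggerFactory;
}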

Now, there are a couple of things that are important here:

  • We need to register ILoggerFactory even though we aren’t going to use it directly, since the constructor of the generic Logger<T> needs it.
  • The generic logger needs to be registered with .AsMultiInstance(), otherwise it will only be resolved the first time, and the same (and wrongly typed) generic instance will be re-used after that.
  • For some reason, the resolution of ILogger<> doesn’t seem to work in the modules themselves. This might have something to do with how Nancy auto-discovers the modules, or it might have something to do with TinyIoC – I don’t know. But since we generally do very little logging in the modules themselves, we just manually register the loggers that we need for the modules. Other options would be, for example, to
    • Use the non-generic ILogger in the modules
    • Use the ILoggerFactory in the modules instead, and manually create a generic logger with loggerFactory.CreateLogger<FooBarModule>(), as in the sketch below
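As an illustration of that last option, here is a rough sketch of a module that creates its own generic logger from an injected ILoggerFactory (assuming Nancy 2.x route syntax – FooBarModule is just a made-up example):

public class FooBarModule : NancyModule
{
    public FooBarModule(ILoggerFactory loggerFactory)
    {
        //Create the generic logger manually instead of injecting ILogger<FooBarModule>
        var logger = loggerFactory.CreateLogger<FooBarModule>();

        Get("/foobar", args =>
        {
            logger.LogInformation("Handling /foobar");
            return "Hello!";
        });
    }
}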

I’m sure there are other, and probably better, ways to do this, but this seems to work well enough.

Repairing a cracked PCB in a Commodore 1901 monitor

This is the second and final part of a very short series where I improve and repair my Commodore 1901 monitor. In part 1 I added a SCART connector with analog RGB and audio support, but I also discovered that the colours were a bit off – especially when using an RGBi input, such as CGA – and found a crack on the PCB. In this part, I will repair the PCB and hopefully fix the colours.

First I had to have a good look at the crack. It was in the lower left corner of the PCB, close to the potentiometers that adjust the colour levels, as marked in the picture below.

After pulling the board out a bit and turning the monitor upside down, I could get a closer look at the crack.

That doesn’t look too fucking good. No less than nine (9) traces are broken. Fortunately, since this is an old monitor, the PCB is single layer, so there are no traces on the back, and no traces inside. The easiest way to repair a broken trace on a PCB is to find a solder joint on each side of the crack and solder a wire over the crack. But I also wanted to try another way. So for the first three traces, where there was enough space, I just scraped a bit of the outer layer off, and soldered a very short piece of wire right over the crack.

For the rest of the broken traces, there just wasn’t enough room to use this method, at least not with the tools and skill at my disposal. So I had to solder wires over the rest of the cracks.

After this I reassembled the monitor (well, actually, I finished the SCART mod as well) and connected my Bondwell Model 8 to the RGBi input. To my great surprise, everything worked perfectly! The lovely CGA palette of white, cyan and magenta was as vibrant as ever, with no sign of the yellowish tint from before, and some careful banging on the side of the screen no longer caused the colours to change. So I have to label this a complete success!

I now have the perfect monitor for my small[1] collection of retro computers. It takes RGBi, SCART with analog RGB, and separate Chroma and Luma input (like S-Video). And it even has a built-in speaker! The only input I haven’t had much success with so far is composite. If I connect composite to the Luma input (the yellow RCA jack), I get a monochrome picture (not a great surprise). If I connect it to the Chroma instead, I get no picture at all. If I split the composite cable and connect it to both, I still only get monochrome. If anyone has a working way to connect a composite signal to separate Luma and Chroma inputs, I would be very interested. This is a minor annoyance though, as I can connect composite to a TV instead. So, yay, working Commodore 1901 monitor!

Finally, here is a picture of my five-year-old son playing Krakout on the repaired monitor!


  1. I would consider my collection small. There are others in my family who would voice a different opinion...

Properly configuring NLog and ILogger in ASP.NET Core 2.2

Ever since we started using dotnet core a couple of years ago, both for new projects and for porting old projects, we’ve been struggling with configuration, especially regarding logging. The official documentation has been – to put it mildly – confusing and inconsistent, and to make matters worse, we’ve been wanting to use NLog as well. In the old days (i.e. when we used .NET Framework 4.x), using NLog was pretty easy: we just added an NLog configuration section to web.config (or a separate file if we were being fancy), and then accessed the static instance of NLog with LogManager.GetCurrentClassLogger(). This, however, does not work particularly well in dotnet core, for the following reasons:

  • Dotnet Core does not like static accessors
  • Dotnet Core really would prefer if we used the ILogger interface to log stuff
  • We don’t have a web.config anymore

So, over the last few years I’ve tried different approaches to this, without ever being fully happy with the result. But with recent versions of dotnet, and after multiple more or less ugly attempts, I feel I finally have a pretty good grasp of how to set everything up properly, so I thought I’d better write it down for future reference before it slips my mind again (my mind is very good at remembering release years for old movies, but not so great at remembering dotnet configuration syntax).

So, first things first. We have an asp.net core web app targeting netcoreapp2.2, and in order to use NLog for the logging, we need two additional package references:

<PackageReference Include="NLog.Extensions.Logging" Version="1.5.0" />
<PackageReference Include="Nlog.Web.AspNetCore" Version="4.8.2" />

Then, we need to configure the app configuration in Program.cs. In older versions of dotnet core most of this setup was done in Startup.cs, but it has since mostly been moved to the Program class. Besides setting up the logging, we also configure the rest of the app configuration here, e.g. setting up appsettings.json. For more fundamental information about the Program.cs and Startup.cs classes, see docs.microsoft.com.

//This method is called from Main
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            var env = hostingContext.HostingEnvironment;

            //Read configuration from appsettings.json
            config
                .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json",
                    optional: true, reloadOnChange: true);
            //Add environment variables to config
            config.AddEnvironmentVariables();

            //Read NLog configuration from the nlog config file
            env.ConfigureNLog($"nlog.{env.EnvironmentName}.config");
        })
        .ConfigureLogging(logging =>
        {
            logging.ClearProviders();
            logging.AddDebug();
            logging.AddConsole();
            logging.AddNLog();
        });

The key here is of course the env.ConfigureNLog($"nlog.{env.EnvironmentName}.config") statement, which allows us to read the NLog configuration from a standard NLog configuration file, just as we did in the old .NET Framework. The ConfigureNLog extension method is provided by the NLog.Web.AspNetCore package. In my example I have different nlog config files for different environments, just as I have different appsettings files for different environments. The nlog.*.config files are automagically copied to the publish directory, just like the appsettings files. We also configure the different loggers, adding a Debug, a Console and an NLog logger, which will all receive the same logging data.
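For reference, a minimal nlog.Development.config could look something like this (the file target and layout are just examples – see the NLog documentation for all the options):

<?xml version="1.0" encoding="utf-8"?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <!-- Write everything to a single log file -->
    <target name="logfile" xsi:type="File"
            fileName="logs/app.log"
            layout="${longdate} ${level} ${logger} ${message} ${exception}" />
  </targets>
  <rules>
    <logger name="*" minlevel="Debug" writeTo="logfile" />
  </rules>
</nlog>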

This also has the additional benefit of getting rid of a very annoying warning that you get if you still use the old method of adding loggers in Startup.cs:

'ConsoleLoggerExtensions.AddConsole(ILoggerFactory)' is obsolete: 'This method is obsolete and will be removed in a future version. The recommended alternative is AddConsole(this ILoggingBuilder builder).'

And with this, we’re pretty much finished. All setup regarding logging and app configuration can be removed from Startup.cs, unless you need to do other fancy stuff there. Since IConfiguration and ILoggerFactory are already configured in Program.cs, you can just inject them in Startup if you need them. This can be done either in the constructor or in the ConfigureServices or Configure methods. I really can’t say which is best.

public class Startup
{
    public Startup(IHostingEnvironment env, IConfiguration config)
    {
        //I guess you could store config as a field here and access it in the other methods
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env,
        ILoggerFactory loggerFactory, IConfiguration configuration)
    {
        //You can inject both ILoggerFactory and IConfiguration directly
        //into the configuration methods as well
    }
}

If you are using the standard asp.net core dependency resolution, this is it! You can inject ILogger or (preferably) the generic ILogger<FooBar> anywhere you want to log stuff, and just log away. In our case, we use Nancy and TinyIoC (or frequently Autofac) for dependency injection, which makes things a little more complicated, but that will make for an excellent post of its own!
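For example, injecting the generic logger into a class might look like this (FooBarService is of course just a made-up example):

public class FooBarService
{
    private readonly ILogger<FooBarService> logger;

    public FooBarService(ILogger<FooBarService> logger)
    {
        this.logger = logger;
    }

    public void DoImportantStuff()
    {
        //Log entries will be tagged with the full type name as the category
        logger.LogInformation("Doing important stuff");
    }
}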

Adding an RGB SCART connector to a Commodore 1901 monitor

So, this post is going to be a departure from most of the previous content on this blog. This may or may not be indicative of future posts.

A couple of months ago, I bought a cheap, used Commodore 1901 monitor[1] from Tradera. The Commodore 1901 has digital RGBi input via a D-SUB 9 connector, as well as separate Luminance and Chrominance inputs via RCA jacks (which carry the same signal as S-Video, just with different connectors). I thought this would be a good monitor for my old Bondwell Model 8 computer, which only has CGA output (and probably deserves a post of its own). It would probably also work with my Commodore 64. The Commodore 1901 also has a built-in speaker, which connects with yet another RCA plug, so I wouldn’t even need a separate speaker.

When I connected my Bondwell to the monitor, it was indeed glorious, as evident in the image below. What is harder to see is that the colours were a bit off; there was a bit of a yellow tint that kind of came and went.

I managed to find the service manual for the Commodore 1901 monitor, and found that there were a couple of potentiometers that could be adjusted if the colour was off. So I opened the monitor and adjusted the potentiometers which at least made the colour a little bit better. Unfortunately, I also noticed that the PCB had a small crack, which caused some bad connections, and was probably the cause of the colour problem. More about this later.

But what I also found, to my great surprise, was a number of solder points that looked like they could fit a SCART connector, and a matching hole in the metal backplate. What on earth could this be for? Maybe this monitor came in a different version[2], with a SCART connector? But if so, what kind of signals were used[3]? And did this version actually use those signals? Would it be possible to get analog RGB input by adding a SCART connector?

A bit of research indicated that yes, this might indeed be possible. I found a thread on amibay.com and a blog post by a Danish guy (unfortunately missing all images[4]) that discussed this. The problem seemed to be that the solder points for the SCART connector on the PCB are oriented backwards, so that a standard 90-degree connector won’t fit. So the usual solution seems to have been to solder wires between the PCB and the SCART plug. However, I managed to find an angled SCART connector on eBay that seemed to be oriented the other way around. It sure looked like it would fit!

So, the first thing to do was to remove the solder blocking the holes. Time to heat up my solder sucker!

After this, it was simply a matter of fitting the SCART connector and soldering it in place. Or rather, it would have been, had the darn plug fitted through the hole in the metal frame! Once I had fitted the legs through the holes in the PCB, it was completely impossible to get the plug through the hole. In the end, I had to bring out a knife and go to town on the poor plug.

Finally, I was able to fit the SCART connector through the hole, and solder it in place.

And now, the moment of truth. Would this work? I have an Amiga 600 with a SCART cable that carries not only analog RGB video, but also sound. So maybe I would get sound through the built-in speaker as well? Time to connect a cable. Would it even fit the mangled SCART connector?

The answer to the last question is yes, it fits. And the answer to the rest of the questions is yes, everything works perfectly! I get a crystal clear image from the Amiga, and I get the sound through the speaker! The only thing left to do was to make a hole in the plastic cover as well, which was easy since there was already an indication in the cover of where to cut.

So, after cutting a hole in the cover, it was just a matter of putting everything back, and look at the nice result:

And finally, here is a picture of the Amiga workbench on the Commodore 1901 monitor:

So hooray, everything is great! Except for the crack in the PCB, remember? Since I had the monitor open, and the soldering iron out, I decided to see if I could fix that as well. But I believe this post is long enough already, so that will have to wait until part 2.


  1. The Commodore 1901 monitor was a PAL-only monitor produced between 1986 and 1988, and was meant to be used together with the Commodore 128. It is not as famous as the 1084 monitor but, as we will see, with the SCART modification it is just as useful!
  2. The monitor was actually manufactured by Thomson. And Thomson did release their own version of it with a SCART connector, the Thomson 450G. Why the Commodore version came without it, I do not know.
  3. The SCART connector actually carries a lot of different signals. It can carry composite video, S-Video and RGB, and even YPbPr, as well as stereo sound. Wikipedia has a good article.
  4. While writing this post I checked the blog post again, and now it seems all the images are back! This would have made it easier for me when I was actually working on the monitor!

The blog is now even more static than before!

This blog has been rather static lately. In fact, I haven’t written a new post since 2014. I’m sure that has nothing to do with the fact that I became a father for the second time in 2014. Obviously a coincidence.

But now it’s time for the blog to become static in a whole new way! For more than 8 years, this blog has been running on my own blog engine, but now the time has come to leave that behind and move to the wonderful world of static site generators. I know, I’m a couple of years late to the party, but better late than never!

After looking at a lot of alternatives, and sinking way too many hours into making a combination of gulp and metalsmith work the way I wanted it to, I finally decided on using Hexo to generate the site. It worked pretty much out of the box, although I still feel I have enough flexibility to make it work the way I want. Hexo generates a completely static site, and I just need a lightly configured Nginx server in front of it, mainly to keep some old links alive.

So now the source code for both the posts and the scaffolding for the blog lives on GitHub, and publishing a new post is a simple matter of writing it in markdown and pushing to GitHub.

Or, well, it will be once I finish the script that auto publishes the blog. So, any day now.

I also gave the blog a new coat of paint, which was sorely needed. Hopefully this will lower the bar for me to write new posts, which might mean that I will be able to produce more than one every five years. Fingers crossed.

Deploying to remote IIS with MsDeploy

We’ve been using MsDeploy to automate our web site deploys for some time. Our build server (running TeamCity) creates the deploy packages, and a PowerShell script on the production server downloads the packages and deploys them to IIS. Recently, we added a fallback server in another physical location, in case there is a problem with the normal server. Naturally, we want to make sure that all the web sites are up to date on the fallback server as well. And that means that the scripts that deploy the sites on the production server also need to deploy them to the fallback server.

Now, MsDeploy has support for deploying to other servers, but as it turns out, it can be a little tricky to get it working. One option is to use a Windows user with administrator privileges on the target server, but we didn’t really want to do that. The other option is to use an IIS Manager user. This option requires a couple of steps to get the authentication working.

1. Create a new IIS Manager User

The first thing you need to do is to create an IIS Manager user. This is done by opening IIS Manager, clicking on the server node, and then Management – IIS Manager Users. Add a new user, let’s call it “deploy” with the password “password”.

2. Allow the IIS Manager User on the site

The next step is to give the user permissions to deploy on all the sites that are to be deployed this way. Click on the site node and then on IIS Manager Permissions. Under Actions, click on Allow User.

Select IIS Manager, and then click Select to find your user. Unfortunately, you have to repeat this process for each site.

3. Give IIS Management Service permissions on site

A not so obvious step is that you need to make sure that the IIS Management Service has permissions to actually perform the deploy on each site. The easiest way to do this is to right-click on the site in IIS Manager, and select Edit Permissions. Under the Security tab, give Local Service “Full control”.

By default, the IIS Management Service runs as Local Service, but if you have changed that, you’ll have to use that account instead. It might work with only Modify permissions, but it didn’t for me.

4. Run msdeploy with the correct parameters

Finally, the trickiest part is getting the parameters to msdeploy right! This is what we ended up using.

> msdeploy.exe -verb=sync -source:package="PACKAGE.zip" -dest:auto,computerName=https://FALLBACKSERVER:8172/msdeploy.axd?site=SITENAME,userName=deploy,password=PASSWORD,authType=basic -setParam:"IIS Web Application Name"="SITENAME" -allowUntrusted=true -skip:Directory="App_Data"

There are some things worth mentioning here. First, you need to use the full URL to the server (including msdeploy.axd), with the site name as a querystring parameter, in order to be able to use an IIS Manager user, since these users only have permissions on individual sites. Otherwise, the authentication will fail. Also, you need to set authType=basic, otherwise it will try to use a Windows user instead.
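For the record, the deploy script then simply runs msdeploy twice with the same package – once against the local IIS and once against the fallback server. Roughly like this (server, site and credentials are placeholders):

> msdeploy.exe -verb=sync -source:package="PACKAGE.zip" -dest:auto -setParam:"IIS Web Application Name"="SITENAME" -skip:Directory="App_Data"
> msdeploy.exe -verb=sync -source:package="PACKAGE.zip" -dest:auto,computerName=https://FALLBACKSERVER:8172/msdeploy.axd?site=SITENAME,userName=deploy,password=PASSWORD,authType=basic -setParam:"IIS Web Application Name"="SITENAME" -allowUntrusted=true -skip:Directory="App_Data"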

Applying MSBuild Config Transformations to any config file without using any Visual Studio extensions

As I have mentioned in previous posts, I frequently use a setup where our TeamCity server creates deploy packages that are semi-automatically deployed to the web servers. A great help in achieving that is Visual Studio’s Web.config transformations. However, we frequently need to transform other config files as well, either because we’re not in a web project, or because we simply have multiple config files.

I’ve had some success using a Visual Studio plug-in called SlowCheetah. Unfortunately, it does not really play well with TeamCity. Sometimes it works, sometimes not. More the latter than the former. So recently I made an effort to solve this without using SlowCheetah or any other extension. As it turns out, you can. And it’s not even particularly difficult.

First of all, you need to have the Visual Studio Web Application build targets installed on your build server. This can be achieved either by installing an express version of Visual Studio or the Visual Studio Shell Redistributable Package.

Then, create an App.config file in your project. I placed it in a folder called “Config”, to avoid any automatic behaviour from Visual Studio. Then, create your transform files: App.Debug.config, App.Release.config and whatever else you need (I usually don’t use those, but rather Test, Prod, Acceptance etc). Now, these will all be placed beside App.config, and not be linked to it as with Web.config transforms. Not to worry, we’ll fix that shortly!
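The transform files themselves use the usual XDT syntax. A trivial App.Debug.config might look something like this (the appSettings key is just an example):

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <!-- Replace the value of the matching key in App.config -->
    <add key="Environment" value="Debug"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>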

Next, unload your project and edit the .csproj file. First, we’ll fix the linking of the files. This is done simply by adding a DependentUpon element inside each item. Let’s say you have this:

<None Include="Config\App.config" />
<None Include="Config\App.Debug.config" />

Simply change it to this:

<None Include="Config\App.config" />
<None Include="Config\App.Debug.config">
<DependentUpon>App.config</DependentUpon>
</None>

Now, let’s move on to the real trick. In order to make MSBuild transform your config file, we need to add a build target. At the end of the file, add

<UsingTask TaskName="TransformXml"
AssemblyFile="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v12.0\Web\Microsoft.Web.Publishing.Tasks.dll" />
<Target Name="AfterBuild">
<TransformXml Source="Config\App.config"
Transform="Config\App.$(Configuration).config"
Destination="$(OutputPath)\$(AssemblyName).$(OutputType).config" />
</Target>

You need to make sure that the path in the UsingTask element matches the version of your Visual Studio build targets (v12.0 corresponds to Visual Studio 2013).

In this example, I have a console application, so I want the result of my transformation to end up in the output directory, and be named as AssemblyName.exe.config, e.g. bin\Debug\MyConsoleApplication.exe.config. In a web application where I have other config files, I would use something like

<TransformXml Source="Config\NLog.config"
Transform="Config\NLog.$(Configuration).config"
Destination="NLog.config" />

And if you have more than one config file that you would like transformed, you can of course add several TransformXml elements. After you’re done, just reload the project, and hopefully everything works. At least it works on my machine!

Finally, I should add that I found another Visual Studio extension, called Configuration Transform, that seems to work better than SlowCheetah (at least sometimes) and would make this entire post unnecessary. On the other hand, this way there is less magic and more control, which I personally like. And if your extension suddenly breaks after an update, this post might come in handy!

UPDATE 2014-03-20 – I realised that unless your destination file is included in the project – or rather, has a suitable Build Action – it will not be included in the deploy package, or deployed at all. Usually the build action should be “Content”. You don’t have to worry about the content of the destination file, though, as it will be replaced on every build. I prefer not to have it checked in to source control, since it would be pretty pointless to check in every change to an auto-generated file.
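In .csproj terms, that just means the destination file should be included something like this (using the NLog example from above):

<Content Include="NLog.config" />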

Visual Studio feature - Preview web.config transforms

On my way to continuous delivery, I often use Visual Studio’s built-in support for Web.config transformations, at least for relatively simple situations. This allows you to automatically create variations of your web.config file for different deployment environments. And with the help of the excellent Visual Studio plug-in SlowCheetah, you can apply this to other config files as well.

This is all great, but I found it a little tricky to verify that my config transformations yielded the expected result; I basically had to create the deploy packages and look inside them. But then I noticed this little gem in Visual Studio:

Yes, there is a “Preview transform” option in the context menu for config transform files!

If you right-click on a config transformation file, you get the “Preview transform” option. This was, I think, introduced in Visual Studio 2012. I was just a little slow to notice it. But it’s great! If you select it, you get to see the original config file and the transformed file side by side, with all the changes highlighted.


All the differences between the files are highlighted on the right, and also in the code window itself. This really makes it very easy to verify that the transforms are correct.

Manipulating history with the HTML5 History API and AngularJS

As I’ve mentioned earlier, I’ve been working quite a lot with AngularJS lately, most recently on a search function on a website. Naturally, since this is an ajax application, the search result page never reloads when I perform a search. Nevertheless, I would like to

  1. Be able to go back and forth between my searches with the standard browser functions
  2. See my new query in the location bar
  3. Reload the page and have the latest query - not the initial one - execute again
  4. Make this invisible to the user – that means no hashbangs, only a nice ?query=likethis.

Fortunately, HTML5 is there for me, with the History API! This is supported in recent versions of Chrome, Firefox and Safari, as well as in Internet Explorer 10. Unfortunately, there is no support in IE9 or earlier. Anyway, in AngularJS, we don’t want to access the history object directly, but rather use the $location abstraction.

The first thing we need to do is to set AngularJS to use HTML5 mode for $location. This changes the way $location works, from the default hash mode to querystring mode.

angular.module('Foobar', [])
    .config(['$locationProvider', function($locationProvider) {
        $locationProvider.html5Mode(true);
    }]);

Note: Setting html5Mode to true seems to cause problems in browsers that don’t support it, even though they are supposed to just ignore it and use the default mode. So it might be a good idea to check for support before turning it on, for example by checking Modernizr.history.
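For example, something like this (assuming Modernizr is loaded on the page):

angular.module('Foobar', [])
    .config(['$locationProvider', function($locationProvider) {
        //Only enable HTML5 mode if the browser actually supports the History API
        if (Modernizr.history) {
            $locationProvider.html5Mode(true);
        }
    }]);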

Now, all I have to do whenever I perform a search is to update the location to reflect the new query.

$scope.search = function() {
    //Do magic search stuff

    var path = $location.path(); //Path without parameters, e.g. /search (without ?q=test)
    $location.url(path + '?q=' + $scope.query);
};

This makes the querystring change when I perform a search, and it also takes care of the reloading of the page. It does not, however, make the back and forward buttons work. Sure, the location changes when you click back, but the query isn’t actually performed again. In order to make this work, you need to do some work when the location changes.

In plain javascript, you would add a listener to the popstate event. But, you know, AngularJS and all that – we want to use the $location abstraction. So instead, we create a $watch (http://docs.angularjs.org/api/ng.$rootScope.Scope#$watch) that checks for changes in $location.url().

$scope.$watch(function () { return $location.url(); }, function (url) {
    if (url) {
        $scope.query = $location.search().q;
        $scope.search();
    }
});

And that’s pretty much it! Now you can step back and forth in history with the browser buttons, and have AngularJS perform the search correctly every time!

Using an AngularJS directive to hide the on screen keyboard on submit

Recently, I’ve been working on a responsive web site running AngularJS. On this site, we have a search form. As search forms usually work, you enter a query in a text field, and then you click the button to search. Or, more likely, you hit enter. Now, since this is a responsive site, this needs to work on a phone too. This is where it gets a bit more tricky.

A search form

When you click the text field, your on screen keyboard pops up. You enter your query, and click Enter, or Next, or Go or whatever the button is called on your keyboard. This submits the form, but unfortunately, the soft keyboard still lingers. If you had clicked the button instead, your keyboard would have disappeared. Why? Because when you click enter on your on screen keyboard, your text field still has focus!

So in order to rectify this, we need to make sure that the text field loses focus when the enter/go/next button is clicked. It’s easy to make an object lose focus: you just call blur() on it. The problem is when to call it. What happens when you press the enter key on your on screen keyboard? Well, that depends.

Take this form for example:

<form>
    <input type="text" name="query" />
    <button>OK</button>
</form>

Now, in this case, when I press enter in the text field, two events will fire: the submit event on the form, and the click event on the button. The exact same thing happens when you click the button. Why? Because the default behaviour of a button in HTML5 is to submit the form. In order to prevent this, you need to use <button type="button">OK</button> instead.

If you do this, pressing enter in the text field will still submit the form, but not click the button. And vice versa, clicking the button will fire the click event on the button, but not the submit event on the form!

Since this is an AngularJS application, the easiest way is to specify a submit action for the form, and then make sure the form submits whether you click the button or press enter. Like this:

<form ng-submit="foo()">
    <input type="text" ng-model="query" name="query" />
    <button type="submit">OK</button>
</form>

A nice and pretty form, but this still leaves me with the original problem: how do I make sure that the text field loses focus whenever I click enter/ok/go on my soft keyboard? Or, to put it in more technical terms, how do I run blur() on the text field whenever the submit event fires? This is still an AngularJS app, so the answer is of course – a directive! I came up with this:

angular.module('Foobar', [])
    .directive('handlePhoneSubmit', function () {
        return function (scope, element, attr) {
            var textFields = $(element).children('input[type=text]');

            $(element).submit(function() {
                console.log('form was submitted');
                textFields.blur();
            });
        };
    });

Then, you just apply this to the form: <form handle-phone-submit>, and like magic, your soft keyboard disappears whether you press enter/ok/go or the button. There’s a jsfiddle with a working example.

I allowed myself to use jQuery within the directive, but naturally you could do this with just the jQuery lite implementation that is always available in Angular. In that case, the directive function could look something like this:

return function (scope, element, attr) {
    var textFields = element.find('input');

    element.bind('submit', function() {
        console.log('form was submitted');
        textFields[0].blur();
    });
};

There are some limitations there, but it seems to work ok.