Fixing the missing Azure Context in Azure PowerShell

So, there I was trying to run some command in Azure PowerShell, when suddenly I was greeted with the following error:

> Some-Az-Action-That-Is-None-Of-Your-Business

Your Azure credentials have not been set up or have expired, please
run Connect-AzAccount to set up your Azure credentials.

Alright, I thought, maybe my credentials actually have expired, better do what it says!

> Connect-AzAccount

WARNING: To sign in, use a web browser to open the page 
https://microsoft.com/devicelogin and enter the code DVS4FGXQR to authenticate.

Said and done, surely it will work now!?

> Some-Az-Action-That-Is-None-Of-Your-Business

No tenant found in the context.  Please ensure that the credentials you provided
are authorized to access an Azure subscription, then run Connect-AzAccount to login.

Now, this is weird. I’m pretty sure I just signed in with the correct Azure account. What just happened?

> Get-AzContext

Name                                     Account    SubscriptionName Environment TenantId
----                                     -------    ---------------- ----------- --------
Default                                  jd@approa…                  AzureCloud

Hey, that’s not correct! Where is my subscription and my tenant? This is getting curiouser and curiouser! I will spare you a couple of rounds of trying to sign in with different accounts, trying to manually set the tenant and duckduckgoing the shit out of this. Suffice it to say that, in the end, it seems this is just something that happens: your Azure Context becomes corrupted in some way. Fortunately, there is an easy way to fix it:

> Clear-AzContext
> Connect-AzAccount

Simply clear the Azure context and re-connect, and it should all start working again. Now, if you run Get-AzContext again you will hopefully see something more like this:

Name                                     Account    SubscriptionName Environment TenantId
----                                     -------    ---------------- ----------- --------
Betala per användning (0058c7f9-344f-4d… jd@approa… Betala pe…       AzureCloud  77eb242d…

Weirdly enough, this happened to me on two different computers, just two days apart. It didn’t take nearly as long to fix it the second time, but at least it made me write this blog post!

Calling the Swish Payment API from Azure AppService

For the last couple of months, I’ve been working on a new version of a site for a client that uses Swish for payments. This new version will be hosted as an Azure App Service. The Swish API is pretty nice and straightforward, but for some reason they have implemented authentication and security using client certificates instead of something like OAuth 2. This makes it a bit more difficult, especially in Azure, since neither the server certificate for the API nor the client certificates are signed by trusted authorities.

In fact, during my research[1] I found many claims that it simply does not work, and that you have to use a virtual machine in order to make the calls to the Swish API work. However, I also found an answer on Stack Overflow claiming that the trick was simply to upload all certificates to Azure, and this turned out to be true.

So, in order to remember this for the next time, and hopefully help anyone else with the same problem, I decided to write a more comprehensive guide on how to get this working.

1. Download the simulator certificates

All examples will be based on the Swish Test Merchant certificates, which can be downloaded from the Swish developer page (click View Guide in the Simulator Guide box[2]). Extract the downloaded file and locate the file named Swish_Merchant_TestCertificate_1234679304.p12 (or whatever they may have changed it to), and change the extension to pfx, since that is the extension that Azure will expect later[3].

2. Extract all the root certificates from the .p12/.pfx file

The .pfx file contains the whole certificate chain, and when working on a Windows machine, it is enough to install that to your CurrentUser or LocalMachine store (depending on how you run your application), but in Azure you will need to upload all certificates separately. It is therefore necessary to extract the certificates from the file. This can be done in a number of different ways. If you are working on a Windows machine, you could just install the certificate, and then go into the certificate store and export the resulting certificates.

However, you can also do it from the command line with openssl. Now, I’m no expert on openssl, and I’m sure there is a better way to do this (this answer on the Unix & Linux Stack Exchange, for example, suggests piping the result through sed), but we’re only doing this once, so this works well enough.

First, list the certs in the .pfx file and send the results to a text file[4].

openssl pkcs12 -nokeys -info -in ./Swish_Merchant_TestCertificate_1234679304.pfx -passin pass:swish > all_the_certs.txt

Then, open the text file in a text editor and locate the two last certificates (the first one is your actual client certificate; you can ignore that for now). Now, copy everything from and including -----BEGIN CERTIFICATE----- to (and still including) -----END CERTIFICATE----- and paste it into a new file called Swedbank_Customer_CA1_v1_for_Swish_Test.cer, which should then look like this:

-----BEGIN CERTIFICATE-----
MIIFvzCCA6egAwIBAgIQbPUmsaJntOLs9L2jgZxlQzANBgkqhkiG9w0BAQ0FADBq
MSswKQYDVQQDDCJTd2VkYmFuayBSb290IENBIHYyIGZvciBTd2lzaCBUZXN0MREw
DwYDVQQFEwhTV0VEU0VTUzEbMBkGA1UECgwSU3dlZGJhbmsgQUIgKHB1YmwpMQsw
CQYDVQQGEwJTRTAeFw0xODAzMjAxMjU0MjJaFw0zODAzMjAxMjIyMTJaMGoxKzAp
BgNVBAMMIlN3ZWRiYW5rIEN1c3RvbWVyIENBMSB2MSBmb3IgU3dpc2gxETAPBgNV
BAUTCFNXRURTRVNTMRswGQYDVQQKDBJTd2VkYmFuayBBQiAocHVibCkxCzAJBgNV
BAYTAlNFMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAl/wQeoczfPad
DPNIYJvhvwavDxIxGyGnQ2WDsXO7LQH7hz/TWNx0Ava6jRYT5pUubpBsALk/xEuA
uKjUEUAY5uumPn/nMNVeMmpnqMHs7A3kfkr8F+o4AUM3mIgVe3inMF/mvlASfpGp
4TS0ZCLZLE+OiG5REecVmgjn0i/JorXtXMGSWoqbAZCpqgS9uD3MQb9ua7TRTvkQ
knUH30/GaX5i8KK/r45SRXBRLTxxk0ySk4AcUR21TLb2WOMV9BbBwdq336mSErgz
wMy6G5EGlNhety3g+QRoc1ou0+oMw9tLdgsrhIx9opHF/M+E8bXXje5WZ8d9eyF/
Eq0kIqvWm4c3TzLPS43DNhJCOBGO6GMV8neuXM0TKiBrub3/yli5BapCH62SENEF
ZJG/ZB9fnRupDsn5vlt3MwtbipjnNhH4umcKacF/YtBdCUh1+CQ3U6lx2OolCYZG
cn1YDD7DNzb5I71TTaWyC+FgbXFtGX/48UXBIrqc9A054o76A4eYKM+GQPNz3B27
vZwzRC7Xjryg4uimmYzNRdJ3jz1q9PllUsOq/ElNr0ALj3MLKqOOtItb91SyE+NP
gKbf0MiTYS2k4aTxpDuh972cz1UPgHFuixat3Zj42ZVRLyX4IcaPk7Vbh3GjkO66
1qvgR6xInJoUts8J9HHbXgtjllmsf2UCAwEAAaNhMF8wDwYDVR0TAQH/BAUwAwEB
/zARBgNVHQ4ECgQITvODy60zFEAwFAYDVR0gBA0wCzAJBgcqhXCBbQEBMBMGA1Ud
IwQMMAqACEemPD8kK8dIMA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQ0FAAOC
AgEAdVwklqAdT3Ztn4RgLcg+hPKtu5wVYVqOnilVqQHSWx/ygcgUMspvZ+N6qYIQ
NgBXTKaIUX6vFK4wvTgEaZnBUkuyYR9p/ZrAz7AjfdeUi2RRt5OLFAWwdddUZHC7
qvRG6bzreUZvgrZ6lh0ercl07b9xSr8pc1lNy3ksbkW2lvTxBeRQDKJJQ53EYA9I
/fFJ7H6vsUhWdn4x3GRLuDoOp0x6BdKGuSR06p5kjgytPCXxJt1QCnCvCsWKUZfe
4XmASXG/Kku7dih9uDl2CWAfCWwu6eUSZxRLxWM0Htb1semJRo0AMsRW4Xb9uWtw
i3XKT9oZbRDjaOn5NPPfWfYufjzh6AodH6aftSWns7MKSvhif+2mBBcF8mVySD2u
SQZUNJ9YLBk26ii0jQ1k6ll5fCWutcEkvvdswq6R9cDm04DYKJionClg2isREy9m
9PGWUntX3qL17fRTAAvTP3I918QTnMDEjIq5PaOxAgcuhoepweVbFHmyWLO04xpf
DkHghRnId3UV2XELlKGcmuobuBsPWeSMEGDg96z49owzDBm2W6PFw0t7wwrbQN6J
VCSlxq1hRaehRdYfqIcnvCbRZtsR1p6oQMKUK17NUyyr7OLLm+BUBpi0g8fPuV5h
g7XJuK5Z3/D+uKRYfpw4SnSyk3s4RfDg4M2rgE80lplYnE8=
-----END CERTIFICATE-----

Repeat for the second certificate, but call the file Swedbank_Root_CA_v2_for_Swish_Test.cer.
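If you would rather script the extraction than copy and paste from a text editor, the same thing can be done with a few lines of C#. This is just a sketch, assuming the simulator file name and password from above; it writes each CA certificate (the ones without a private key) to its own .cer file:

using System;
using System.IO;
using System.Security.Cryptography.X509Certificates;

class ExportCaCerts
{
    static void Main()
    {
        //Import the whole chain from the .pfx file ("swish" is the simulator password)
        var collection = new X509Certificate2Collection();
        collection.Import("Swish_Merchant_TestCertificate_1234679304.pfx",
            "swish", X509KeyStorageFlags.DefaultKeySet);

        foreach (var cert in collection)
        {
            //Skip the client certificate; it is the one with a private key
            if (cert.HasPrivateKey) continue;

            //Export the certificate as Base64 (PEM) to a file named after its subject
            var pem = "-----BEGIN CERTIFICATE-----\n"
                + Convert.ToBase64String(cert.Export(X509ContentType.Cert),
                    Base64FormattingOptions.InsertLineBreaks)
                + "\n-----END CERTIFICATE-----";
            File.WriteAllText(cert.GetNameInfo(X509NameType.SimpleName, false)
                .Replace(' ', '_') + ".cer", pem);
        }
    }
}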

3. Upload the certificates to Azure and configure your App Service

Now you need to upload the certificates to your App Service in Azure. Log in to the Azure Portal and navigate to TLS/SSL settings in your App Service. Click on the Private Key Certificates (.pfx) tab, and then on Upload Certificate.

In case you don’t understand the instructions, this is where you should click!

Select the Swish_Merchant_TestCertificate_1234679304.pfx file[5] and enter the password (which is just swish for the simulator certificate). Click Upload and you should see the certificate in the Private Key Certificates list. If you do this right now, you will also get a warning that the certificate is about to expire, which is true, but there are no newer certs yet, so we’ll just have to repeat this process in a month or two.

Update 2020-05-15

New certificates for the Swish sandbox finally arrived (mss_test_1.8.1)! I have verified that this exact same procedure works for them as well. However, they are now issued by Nordea rather than Swedbank, so it would make sense to call the certificates something with Nordea instead! :-)

Make sure to change the thumbprints as well!

Next, you need to upload the public CA certs that you extracted earlier. Click on the Public Key Certificates (.cer) tab, and then on Upload Public Key Certificate. Select the Swedbank_Customer_CA1_v1_for_Swish_Test.cer file, and give it a name, something like Swedbank Customer CA1 v1 for Swish Test maybe? Click Upload and do the same for the other certificate.

In order for your App Service to be able to access the certificates, you need to set the environment variable WEBSITE_LOAD_CERTIFICATES. The value should be either * if you want to load all certificates, or a comma-separated list of thumbprints if you want to limit which certificates are available to your application. So go to Configuration -> Application settings and click New application setting to add the setting.
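For example, to load just the three simulator certificates, the value would be a comma-separated list of their thumbprints (the same thumbprints that show up in the Kudu listing further down):

WEBSITE_LOAD_CERTIFICATES = B71A8C2A7E0535AB50E4D9C715219DEEEB066FA4,88884499BFFEF7FA80AB2CE2336568AE49C6D313,76B6E2CB1BBA1BBC8A0C276AEF882B16AC48E7E0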

NOTE: If you have multiple slots in your App Service, you have to do this for all slots!

In order to make sure that the certificates are uploaded correctly, and available to your application, you can use the debug console. Go to Advanced Tools (under the Development Tools heading), and click Go to open it in a new window. Select Debug Console -> PowerShell in the top menu.

Kudu Remote Execution Console
Type 'exit' then hit 'enter' to get a new powershell process.
Type 'cls' to clear the console

PS D:\home> dir cert:/CurrentUser/my
dir cert:/CurrentUser/my

PSParentPath: Microsoft.PowerShell.Security\Certificate::CurrentUser\my

Thumbprint Subject
---------- -------
B71A8C2A7E0535AB50E4D9C715219DEEEB066FA4 C=SE, O=Swedbank AB (publ), SERIAL...
88884499BFFEF7FA80AB2CE2336568AE49C6D313 C=SE, O=Swedbank AB (publ), SERIAL...
76B6E2CB1BBA1BBC8A0C276AEF882B16AC48E7E0 CN=1233782570, O=5564010055, C=SE

You should be able to see all the certificates you have uploaded here. Take note of the thumbprints, you will need them shortly!

4. Use the client certificates when connecting to the Swish REST API

These examples will use the HttpClient in dotnet core 3.1, because that is what I’ve been using. What we need to do is create our HttpClient with a custom HttpClientHandler that includes all the client certificates. This also lets us handle the validation failure that will occur for the server certificate, since it is not signed by a trusted authority.

The first step is to load all the certificates. You will need to pass all the thumbprints, both for your private client certificate and for the public certs you extracted.

public X509Certificate2Collection GetCertificates(string[] thumbprints)
{
    var certStore = new X509Store(StoreName.My, StoreLocation.CurrentUser);
    certStore.Open(OpenFlags.ReadOnly);

    var certificates = new X509Certificate2Collection();

    foreach (var thumbprint in thumbprints)
    {
        var certs = certStore.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false);
        certificates.AddRange(certs);
    }

    return certificates;
}

The next step is to create our HttpClient with the custom HttpClientHandler:

public HttpClient CreateClient()
{
    var handler = new HttpClientHandler()
    {
        //The server certificate is not signed by a trusted authority,
        //so we need to override the validation
        ServerCertificateCustomValidationCallback = (sender, certificate, chain, sslPolicyErrors) =>
        {
            //If you want to, you can check that the server certificate
            //thumbprint matches the expected one (serverCertThumbprint being
            //a field holding that value). However, just recently they
            //changed the certificate without notifying anyone, so this
            //may cause problems. You could also just return true and be done with it.
            var match = certificate?.Thumbprint == serverCertThumbprint;
            return match;
        }
    };

    handler.ClientCertificateOptions = ClientCertificateOption.Manual;

    var clientCerts = GetCertificates(/* Pass your array of thumbprints here */);
    foreach (var cert in clientCerts)
    {
        handler.ClientCertificates.Add(cert);
    }

    return new HttpClient(handler);
}
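Finally, to put the pieces together, here is a minimal usage sketch. The MSS (simulator) URL and the payload fields are based on my reading of the Swish API documentation and may have changed, so verify them against the current docs; the callback URL is of course just a placeholder.

var client = CreateClient();

//Minimal payment request against the MSS (simulator) environment.
//The payee alias is the test merchant number from the certificate above.
var payload = new StringContent(
    "{ \"payeeAlias\": \"1234679304\", \"amount\": \"100\", " +
    "\"currency\": \"SEK\", \"callbackUrl\": \"https://example.com/swish-callback\" }",
    Encoding.UTF8, "application/json");

var response = await client.PostAsync(
    "https://mss.cpc.getswish.net/swish-cpcapi/api/v1/paymentrequests",
    payload);

//A created payment request should return 201 Created with a Location header
//pointing to the new payment request
Console.WriteLine(response.StatusCode);
Console.WriteLine(response.Headers.Location);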

And that should be it! Unless I forgot something. If you follow this guide, and still have problems, please comment below, and I will try to update the guide. I do have it working now, so I should be able to figure out the problem!


  1. Yes, that is what we call googling now!
  2. This will almost certainly be incorrect within minutes of publishing this, since they seem to change it between every time I look!
  3. p12 and pfx are just two different extensions for the same certificate format, PKCS12.
  4. The password for the simulator certificates really is just swish. For production certs, you will of course need to use the real password.
  5. You renamed it, remember?

Setting up a Compact Flash card with Classic Workbench and WHDLoad for Amiga 600/1200

About a year ago, I bought an Amiga 600. It did not have a hard drive, but all Amiga 600s have an IDE port, and you can use a Compact Flash card with a CF-IDE adapter as a hard drive. That worked fine, and I could install Workbench on the CF card and use it. But eventually I thought I should try WHDLoad, so that I could run more games directly from the hard drive. I found a video by Nostalgia Nerd on Youtube, where he goes through the process of installing Classic Workbench and WHDLoad on a Compact Flash card. Unfortunately, this video is (currently) four years old, and also very… quick? With the help of the comments[1] and some trial and error, I managed to get it working, though.

Now, recently I was generously given an Amiga 1200. This one actually had a hard drive, but who knows how long that will keep on working, so I thought I’d replace it with a Compact Flash card as well. This time, however, for the benefit of my readers[2], I thought I’d write down the process in a blog post. I thought it might be useful because a) I prefer written instructions, and b) it would give me a chance to update and correct the instructions so that they actually work.

The original video can be found here, along with instructions and download links.

I won’t go into the hardware side of this, because there’s really nothing to it. But you will need a Compact Flash card with a capacity of at least 4GB, some kind of CF-IDE adapter (not necessarily that one) and of course a Compact Flash reader for your computer. This tutorial also assumes that you are running Windows, although it should be possible to use pretty much the same procedure with FS-UAE on Linux or Mac.

Step 1 - Download stuff

First of all, you will need to download a bunch of software.

WinUAE - Amiga emulator

This tutorial uses the WinUAE Amiga emulator, which can be found on winuae.net. I was using version 4.0.1, although I now see that version 4.3.0 is available. From what I can tell, the differences seem to be minor, so it shouldn’t matter.

Kickstart / Workbench

You will need a copy of the Kickstart ROM and Workbench disk images, version 3.0 or 3.1. This is still under copyright, and at least the Workbench images can be bought from amigaforever.com. They can also be found on several places on the Internet, as usual DuckDuckGo is your friend.

PFS3 File System support

You probably want to use the PFS3 file system, and you will need to download the handler for that from http://aminet.net/package/disk/misc/pfs3aio. This is not strictly necessary, but it’s faster than the standard AFFS and supports larger partitions.

Classic WB

Hard disk images for Classic WB can be found at [classicwb.abime.net](http://classicwb.abime.net/). I used the LITE version for the Amiga 1200, but for an Amiga 600 you probably want the 68K version.

You will also need kickstart files for WHDLoad. These can be found for example at [paradise.untergrund.net](https://paradise.untergrund.net/tmp/PREMIUM/amiga_tools/); it’s the kickstarts.lha file you’re looking for.

Game and Demo packs

The original tutorial suggested that game packs could be downloaded from http://kg.whdownload.com/kgwhd/, but that doesn’t seem to work anymore. I downloaded both games and demo packs from ftp://grandis.nu/Commodore_Amiga/Retroplay/, but they are available from many more places, just search for whdload games pack.

Put everything in a folder somewhere on your PC.

Step 2 - Clean the CF card

In order to use the Compact Flash card in an Amiga, you need to remove all previous file system information from it. To do this, run diskpart in an administrative command prompt.

First, list your disks by entering list disk. This should give you a result something like this:

Next, select your compact flash card, in my case it’s disk 6: select disk 6. Be very, very sure that you select the correct disk. You will destroy everything on it.

If you want to be completely sure that you have selected the correct disk, you can run detail disk just to verify. When you are 100% sure, run clean. This completely wipes the file system information from the disk, making it ready to use in the Amiga. Finally, type exit to leave diskpart.

Step 3 - Configure virtual Amiga in WinUAE

Start WinUAE as Administrator. Now we need to configure the system, and make it a little faster than a real Amiga. Otherwise this process will take literally[3] forever.

CPU and FPU

  • CPU: 68040 (or maybe 68060) / JIT
  • FPU: CPU Internal / More Compatible
  • CPU Emulation speed: Fastest possible

Chipset

Uncheck cycle exact, otherwise leave the default values.

RAM

Add some Z3 Fast RAM, I used 64 MB, just as in the original video.

ROM

Use the appropriate Kickstart ROM (probably the same as in your Amiga). It needs to be version 3.0 or 3.1.

Floppy drives

Add the Workbench installer disk image to DF0:. It should not be write protected (use a copy if you don’t want to risk modifying your original image).

Hard drive (Compact Flash, really)

Now you need to add your compact flash card as a file system, as well as the pfs3 handler. Enter the CD & Hard drives section.

Add the pfs3aio archive as device f

If you want to use the PFS3 file system (which is recommended), you need to mount the archive with the handler as a file system as well:

  • Click “Add Directory or Archive”
  • Select “Archive or plain file”
  • Select pfs3aio.lha
  • Device name: f

Add the Compact Flash card

The next step is to add your Compact Flash card, and this is why you need to run WinUAE as administrator, otherwise it won’t work.

  • Click “Add Hard Drive”
  • Select the Compact Flash card as hard drive (it might be called something completely different on your machine)
  • Change from UAE to IDE (Auto)
  • Make sure Read/Write is checked
  • Click “Add hard drive”

Start the virtual machine!

Step 4 - Partition the Compact Flash card

Now we need to partition and format the Compact Flash card for use in an Amiga.

  • Open the Install disk and the HDTools drawer.
  • Start HDToolbox, you should see Interface SCSI, Address 0, LUN 0, Status Unknown
  • Click “Change drive type” -> “Define new” -> “Read configuration” -> “Continue” to configure the CF drive (ignore the values read; the Amiga does not really understand a 4 GB drive)
  • Click OK and go back to the list of hard drives in the system.
  • Click “Partition Drive”
  • Set up a small(ish) system partition, like 250 MB. Change the name to DH0.
  • Set up the rest of the CF Card as a partition, name it DH1.

Optional: Use the pfs3 file system

  • Check Advanced options and then click “Add/Update”
  • Click Add New File System
  • Enter filename f:pfs3aio (NOT pfs3_aio-handler as is claimed in the video, that is no longer correct) and click OK
  • Change DosType to 0x50465303 and remember to press Enter in the field
  • Click OK and OK to get back to your partitions
  • Select DH0, and click “Change” to change to the new file system
  • Select Custom File System or PFS/03 (depending on your Workbench version, I think)
  • Make sure Identifier says 0x50465303 (otherwise change it)
  • Change MaxTransfer to 0x1fe00 (and press enter)
  • Click OK
  • Repeat for DH1 (you don’t have to add the PFS3 file system again)

Now we’re done with the partitioning. Click OK, and then “Save changes to drive” (if you get an error here, you may want to try another Card Reader). Exit HDToolbox and reset the virtual Amiga.

Step 5 - Install Classic WB

Alright, if you’re still with me, it’s finally time to install Classic Workbench!

First, format the partitions by right clicking on them and selecting Icons -> Format disk from the menu. Name DH0 System and DH1 whatever you want (I just named mine Stuff). Make sure to use Quick Format. Confirm all warnings.

Then, press F12 to enter the WinUAE settings and go to CD & Hard Drives. Now you need to add the System.hdf file that you extracted from the Classic WB archive you downloaded in Step 1. Click Add Hardfile and select the System.hdf file. Make sure that the HD Controller is UAE, and name the device DH2. You should set boot prio to 1 (not 0).

You can remove the pfs3aio device, and then go to Floppy drives and eject all floppy drives. Restart the virtual machine.

It should now boot into the Classic WB installer. Follow the instructions (there are many, many options, and I have no good advice to give about them), and when prompted to insert a Workbench disk, press F12 to enter settings and do that. This is your chance to choose between Workbench 3.0 and 3.1.

After the installation is done, and you have restarted, you probably will not see your Compact Flash partitions. This is because the Amiga gets confused by the two System partitions. Rename the Classic WB partition to System2 (or something other than just System) and restart the virtual machine. You should now see all partitions.

Now you need to copy all the System/Workbench files from the System.hdf image to the System partition on the Compact Flash card. Start DOPUS by clicking RUN and selecting DOPUS. Select DH2: on the left (if DH2 does not appear in the list, you may have to type it in), and DH0: on the right. Select DH2 and click “All” to select all files, and then “Copy” to copy everything to the CF card. This will take a while.

After the copying is done, press F12 again to go into settings, and remove the System.hdf image from the hard disks. You should now only have your Compact Flash card left. Reset the virtual machine, and you should hopefully boot back into Classic Workbench.

Congratulations, you now have a working Compact Flash card for use in your Amiga. At this point, you could install it in the Amiga, start it, and everything should work. However, the point of Amiga is playing games, so we have one step left!

Step 6 - Copy Games and Demos for WHDLoad

First, we need to mount the folder where you put your games, demos and kickstarts as a file system in the virtual Amiga.

  • Go into WinUAE settings -> CD & Hard Drives and click “Add Directory or Archive”.
  • Click “Select Directory” and point to where your Games and Demos are.
  • Put PC as both Device name and Volume label. Uncheck bootable. Click OK, and reset the machine.
  • You should now see a drive called PC on your workbench.

Second, we need to copy all the kickstart files. WHDLoad uses these to emulate[4] the correct environment for the games and applications.

  • If you haven’t done so already, unpack the kickstarts.lha archive into a folder.
  • Open DOPUS again, and select PC for the left side, and navigate into where you unpacked your kickstarts.
  • Copy all the kickstart files to DH0:Devs/Kickstarts. Overwrite any files already there.

The Games and Demos need to be unpacked into individual folders grouped by initial. For example Games/A/AnotherWorld_v2.4_0425. For games beginning with a number, the folder should be called 0_9. This can be done on the PC, or you can unpack them using DOPUS (as long as you have grouped them by initial).

Depending on the size of your CF card, all games might not fit, or if you just don’t want that many, you can just select the ones you like. I think it’s fine to group them into fewer folders then, e.g. A_E, F_K et cetera. At least the demos I downloaded were grouped like that, and it seems to work fine.

Now, use DOPUS again to copy the files from PC to DH1. If you did not unpack the archives earlier you can use Arc Ext to extract all the archives, but you will have to do it folder by folder. I copied them to DH1:Games and DH1:Demos, but you can organise your files however you want.

Go back into settings, and remove all file systems except for the Compact Flash card. Reset the system, and it should boot back into Classic WB on your Compact Flash card.

Time to configure the system so that WHDLoad can find your games and demos!

  • Right click the top bar and select Workbench -> Startup from the drop down menu. Click Assign.
  • Change the locations for Games (and Demos) to where you put them. In my case, change the line that reads Assign >NIL: A-Games: SYS:Games to Assign >NIL: A-Games: DH1:Games (and likewise for demos).
  • Click the close icon in the top left corner and then click Save. Reset the machine again.

Finally, we need to add the games (and demos) to WHDLoad. Double click on the Files drawer in the bottom, and select AddGames. This may take some time. Do the same for AddDemos.

Now you can verify that the games are available. Right click on the desktop (of the Amiga!) and select RUN -> Games. This should bring up the GamesMenu where you now should see a long list of games.

Step 7 - Hardware install

There is not really much to this, and the video explains it pretty well. Use a CF-IDE adapter of some kind, and connect it to the IDE port of the Amiga. That’s it.

UPDATE: When I tried to put the CF card in my Amiga 1200, it didn’t recognize it, even though it had worked in my Amiga 600. I thought I had the same CF-IDE adapter, but on closer inspection it turned out they were not exactly the same. They both say CF-IDE44/2.0 ADAPTER, but the one that works has version V.H2, while the other one has version V.B1. And it seems that other people have had the same issue with the V.B1. So if you use this kind of CF-IDE adapter, make sure it says V.H2 and NOT V.B1!

Start the Amiga (the real one), and it should boot to your Compact Flash card. Bring up the RUN -> Games menu, and double click a game to start it!


  1. Youtube comments are more useful than their reputation would have you believe!
  2. That is, me, a few months from now.
  3. figuratively
  4. It might not technically be emulation, but I have a very rudimentary idea of how WHDLoad works...

Hello darkness, my old friend

If you are reading this on a computer[1] with a dark system theme, you might notice that this blog now also has a dark theme. Although dark themes seem to be all the rage nowadays, I’ve been using them for quite some time, and I’ve been wanting to implement a dark theme option for my blog since forever. But I could never decide on whether it should be something that changed automatically according to the time of day (or whether the sun was up or not), or something the visitor could toggle.

Well, as it turns out, while I have been procrastinating, the browser vendors have solved the problem for me! Earlier this year a new CSS media query was introduced: prefers-color-scheme.

This little gem equals dark if the system has a dark color scheme, and light otherwise. And it is supported by the latest versions of Firefox, Chrome, Safari and even Edge[2]. It works something like this:

/* Default color scheme */
body {
  background-color: #fff;
  color: #000;
}

/* Color scheme for dark mode */
@media (prefers-color-scheme: dark) {
  body {
    background-color: #000;
    color: #555;
  }
}

If the browser does not support prefers-color-scheme, or if it has a preferred color scheme other than “dark” (i.e. light), it will just ignore the overrides in the media query. So this is basically all I needed to do (well, I had to make a few more changes) to make the theme of the site follow the system theme. Sweet!


  1. A smartphone is a computer.
  2. According to caniuse.com

Some Problems and Solutions When Creating Xamarin Android Bindings

As announced in my last post, we recently created Xamarin Bindings for the Adyen Android SDK. In this post, I thought I would share some experiences from creating those bindings, like what kind of problems we ran into, and how we fixed them.

The process of creating Xamarin bindings can be a bit tricky. It is documented at docs.microsoft.com, but I struggled quite a while getting it to work.

First of all, you need the actual Android libraries that you want to create the bindings for. These are (at least in this case) available at jcenter. Then you need to figure out exactly which libraries you need. In order to do this you can look in the *.pom file for a specific library to find out what other libraries it depends on.

Adyen advocates the use of their Drop-in solution which includes all supported payment types, but this also means that we would have to create bindings for all those libraries. This would amount to about 25 different libraries! However, many of the payment types supported were not interesting to us, at least not right now. So instead we opted to use only the Card Component and the Redirect Component, which would only require us to create bindings for 7 libraries[1].

There are a couple of different ways to create bindings, but as Adyen provides AAR files, I basically followed the steps on the Binding an .AAR page. This means creating a separate Xamarin Bindings Library for each AAR file, and the easiest way is to start at the “bottom” and create a binding for the library that does not have any other Java dependencies, in this case adyen-cse, and work your way up, adding references to the other bindings as you go along. The Android dependencies in the POM files can simply be added as NuGet package references. Then you compile the project.

It won’t compile!

Right. Most of the time, when you create a binding, add the AAR file and try to compile, it won’t work the first time. This could be due to a number of things, but in this project I’ve mainly run into a handful of problems, which I’ll elaborate on below.

Problem 1 - Wrong return type

Sometimes the generated code will have the wrong return type. This is often because of the difference between how interfaces and generics work in Java and C#.

For example, in the original code for LogoConnection in base-v3, the call() method returns a BitmapDrawable, which is ok, since the class implements the interface java.util.concurrent.Callable<T>, which is a generic interface, so you can have call() return a specific type.

In Xamarin, however, the interface java.util.concurrent.Callable is not generic (I don’t know why), and thus LogoConnection.Call() must have a return type of Java.Lang.Object. In the generated code, however, the return type is still BitmapDrawable. Fortunately, this is an easy fix!

Every generated method and class has a method/class reference as a comment above it. This can be used to modify the generated code via the Metadata.xml file. One of the modifications that can be made is to change the return type. The following node changes the return type of the call method to Java.Lang.Object:

<attr path="/api/package[@name='com.adyen.checkout.base.api']/class[@name='LogoConnection']/method[@name='call' and count(parameter)=0]" name="managedReturn">Java.Lang.Object</attr>

The path is just copied from the comment above the method in the generated code, but it is pretty straightforward anyway.

Problem 2 - Wrong parameters

Another problem that can occur, related to the previous one, is that sometimes generated methods have the wrong parameter types. This is not quite as easily fixed, as I have not found a way to modify the parameters of a method solely through a Metadata.xml node.

Example: In com.adyen.checkout.base.ui.adapter.ClickableListRecyclerAdapter, the onBindViewHolder method takes a generic ViewHolderT as the first parameter. But in the generated code, ClickableListRecyclerAdapter is no longer generic, so OnBindViewHolder instead takes a Java.Lang.Object, as can be seen in the snippet below:

// Metadata.xml XPath method reference: path="/api/package[@name='com.adyen.checkout.base.ui.adapter']/class[@name='ClickableListRecyclerAdapter']/method[@name='onBindViewHolder' and count(parameter)=2 and parameter[1][@type='ViewHolderT'] and parameter[2][@type='int']]"
[Register ("onBindViewHolder", "(Landroid/support/v7/widget/RecyclerView$ViewHolder;I)V", "GetOnBindViewHolder_Landroid_support_v7_widget_RecyclerView_ViewHolder_IHandler")]
public override unsafe void OnBindViewHolder (global::Java.Lang.Object viewHolderT, int position)
{
    const string __id = "onBindViewHolder.(Landroid/support/v7/widget/RecyclerView$ViewHolder;I)V";
    IntPtr native_viewHolderT = JNIEnv.ToLocalJniHandle (viewHolderT);
    try {
        JniArgumentValue* __args = stackalloc JniArgumentValue [2];
        __args [0] = new JniArgumentValue (native_viewHolderT);
        __args [1] = new JniArgumentValue (position);
        _members.InstanceMethods.InvokeVirtualVoidMethod (__id, this, __args);
    } finally {
        JNIEnv.DeleteLocalRef (native_viewHolderT);
    }
}

However, since ClickableListRecyclerAdapter inherits from Android.Support.V7.Widget.RecyclerView.Adapter, OnBindViewHolder needs to take a RecyclerView.ViewHolder as its first argument. The solution to this problem (and many others) is to remove the generated method in Metadata.xml, and add a modified version in the Additions folder:

<remove-node path="/api/package[@name='com.adyen.checkout.base.ui.adapter']/class[@name='ClickableListRecyclerAdapter']/method[@name='onBindViewHolder' and count(parameter)=2 and parameter[1][@type='ViewHolderT'] and parameter[2][@type='int']]" />
//Namespace should match that of the generated class
namespace Com.Adyen.Checkout.Base.UI.Adapter
{
    //Note that this is a partial class
    public partial class ClickableListRecyclerAdapter
    {
        //This code is identical to the generated code above,
        //except for the type of the first parameter
        [Register("onBindViewHolder", "(Landroid/support/v7/widget/RecyclerView$ViewHolder;I)V", "GetOnBindViewHolder_Landroid_support_v7_widget_RecyclerView_ViewHolder_IHandler")]
        public override unsafe void OnBindViewHolder(RecyclerView.ViewHolder viewHolderT, int position)
        {
            const string __id = "onBindViewHolder.(Landroid/support/v7/widget/RecyclerView$ViewHolder;I)V";
            IntPtr native_viewHolderT = JNIEnv.ToLocalJniHandle(viewHolderT);
            try
            {
                JniArgumentValue* __args = stackalloc JniArgumentValue[2];
                __args[0] = new JniArgumentValue(native_viewHolderT);
                __args[1] = new JniArgumentValue(position);
                _members.InstanceMethods.InvokeVirtualVoidMethod(__id, this, __args);
            }
            finally
            {
                JNIEnv.DeleteLocalRef(native_viewHolderT);
            }
        }
    }
}

Problem 3 - Missing method

In at least one case, the generated code was simply missing a method that was required by the base class or interface. The method for fixing this is pretty much as described above, although you obviously don’t need to remove anything in Metadata.xml. You also have to figure out how the method should be implemented, but that is not as difficult as it sounds, as all implementations follow the same pattern.

In my case, the generated class Com.Adyen.Checkout.Card.CardListAdapter was missing the OnBindViewHolder method, which is required by the RecyclerView.Adapter base class, and is obviously present in the original code.

The solution, then, is to add a partial CardListAdapter class in the Additions folder, and add the OnBindViewHolder implementation to it. In this case it was very easy, since I could basically copy the OnBindViewHolder implementation from ClickableListRecyclerAdapter above (or any other class that has it), as sketched below.
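For reference, here is a sketch of what that addition could look like, modeled on the ClickableListRecyclerAdapter code above. The namespace matches the generated CardListAdapter class, but the JNI signature and handler name must of course be verified against the actual generated/Java code:

//Additions/CardListAdapter.cs
namespace Com.Adyen.Checkout.Card
{
    public partial class CardListAdapter
    {
        //Copied from the ClickableListRecyclerAdapter implementation above;
        //verify the JNI signature against the real Java method before using this
        [Register("onBindViewHolder", "(Landroid/support/v7/widget/RecyclerView$ViewHolder;I)V", "GetOnBindViewHolder_Landroid_support_v7_widget_RecyclerView_ViewHolder_IHandler")]
        public override unsafe void OnBindViewHolder(RecyclerView.ViewHolder holder, int position)
        {
            const string __id = "onBindViewHolder.(Landroid/support/v7/widget/RecyclerView$ViewHolder;I)V";
            IntPtr native_holder = JNIEnv.ToLocalJniHandle(holder);
            try
            {
                JniArgumentValue* __args = stackalloc JniArgumentValue[2];
                __args[0] = new JniArgumentValue(native_holder);
                __args[1] = new JniArgumentValue(position);
                _members.InstanceMethods.InvokeVirtualVoidMethod(__id, this, __args);
            }
            finally
            {
                JNIEnv.DeleteLocalRef(native_holder);
            }
        }
    }
}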

Problem 4 - Other unfixable problems -> Kill it!

Sometimes you will get another problem that is not as easy to fix, for whatever reason. In many cases, you can solve it by simply removing the offending method altogether. If it is not a method that you need to call directly from the app, and not a method that is required for implementing an interface or an abstract base class, you can probably remove it with a remove-node line in Metadata.xml and be done with it.

The reason for this is, of course, that once the call to a native method has been made, for example with InvokeVirtualVoidMethod as above, subsequent calls will be completely native, so it doesn’t matter if the methods have .NET wrappers or not. At least that is my understanding of it.

Bug in the AAR file

When I tried to use the Card Component in the Demo App, I got the build error Multiple substitutions specified in non-positional format; did you mean to add the formatted="false" attribute?. Turns out there is (at least at the time of writing) a bug in strings.xml in the card-ui library.

<resources>
  <!-- snip -->
  <string name="expires_in">%s/%s</string>
  <!-- snip -->
</resources>

Turns out you can’t have multiple %s in a string resource because of reasons. If you do, you need to add formatted="false" to the node. I fixed this by editing the AAR file (it’s just a zip file, really), and adding the attribute in /res/values/values.xml (which is a squashed version of all xml files in the res folder).

Unfortunately, this means I had to check in the modified AAR file. For the rest of the files, I have a Cake build script that just downloads all the AAR files from jcenter. But hopefully it will be fixed in the next release of card-ui.

I hope someone who has to create Xamarin Bindings will find this rather long and unstructured post useful. If nothing else, it will help me remember the problems I had and how I solved them for the next time.


  1. Actually, I finished the first version of these bindings in June. Unfortunately, just as I thought I was done, I noticed that the Adyen developer documentation had changed substantially. While I was working on this they had released an RC version of version 3.0, which was totally different from the version 2.4.5 that I had been working on. So I basically had to start all over again and create new bindings for v3. The old bindings are available at github (tag: 2.4.5), and also at NuGet (Approach.Adyen.UI.Droid), should anyone be interested. But it's probably better to use the new ones.

Announcing Xamarin Android Bindings for Adyen Checkout

I’ve been working on implementing Adyen payments for a customer lately. They have been using another PSP for many years, but are now switching to Adyen. This is super easy on the web site, but as it turns out, not so easy in the mobile app.

Adyen offers a lot of SDKs, including an Android SDK. The app I’m working on, however, is developed in Xamarin, and unfortunately, Adyen does not offer a Xamarin SDK. That means that in order to use the Android SDK, we have had to create Xamarin bindings for the Java SDK.

We have created a set of Xamarin Android Bindings for the Adyen Checkout components. So far, we have only implemented the Card Component and the Redirect Component, because that was all we needed at the time.

The components are available as NuGet packages.

The source code is available at our GitHub account, should you want to build your own components, or maybe fix a bug or two. There is also a demo app in the GitHub repository, which should help you use the components. So yeah, that’s our first official public open source project!

I have published a follow-up post where I delve a little deeper into the problems I ran into while creating Xamarin Bindings for Android, and how to fix some of them. So check that out as well, if you’re into that kind of stuff!

Resolving ILogger with Nancy and TinyIoC

This is a shorter follow-up post to my recent post about configuring NLog and ILogger in ASP.NET Core. As I mentioned there, since we’re using Nancy for our project, we can’t just use the built-in dependency resolver in ASP.NET Core, since Nancy uses its own dependency resolution.

In most cases, we use Autofac and the Nancy Autofac bootstrapper, but in this case, we were using the default TinyIoC implementation, so that’s what I’ll write about in this post. I might write another follow-up post when I implement this for Autofac.

First of all, we need to pass the ILoggerFactory that we configured in the previous post. Since this is available in Startup.Configure we can just pass it on to our Nancy bootstrapper.

public class Startup
{
    public void Configure(IApplicationBuilder app, IHostingEnvironment env,
        ILoggerFactory loggerFactory, IConfiguration configuration)
    {
        app.UseOwin(x => x.UseNancy(new NancyOptions
        {
            Bootstrapper = new CustomBootstrapper(env, configuration, loggerFactory)
        }));
    }
}

Now, if we were content with just resolving the non-generic version of ILogger, this wouldn’t be much of a problem: we could just create a default logger and register that. But since we want to use the generic ILogger<T>, it’s a little more complicated.

So we can use this custom bootstrapper:

public class CustomBootstrapper : DefaultNancyBootstrapper
{
    //Of course we have a constructor that takes the arguments passed from Startup
    //and sets them as fields, but that seems obvious.

    protected override void ApplicationStartup(TinyIoCContainer container, IPipelines pipelines)
    {
        base.ApplicationStartup(container, pipelines);

        //Fallback for non-generic logger
        var defaultLogger = loggerFactory.CreateLogger("Default");
        container.Register<ILogger>(defaultLogger);
        //The generic constructor for Logger needs ILoggerFactory
        container.Register<ILoggerFactory>(loggerFactory);
        //Register generic logger as multi instance
        container.Register(typeof(ILogger<>), typeof(Logger<>)).AsMultiInstance();
        //TinyIoC cannot resolve ILogger<> directly in modules for some reason,
        //so we have to register this one manually.
        container.Register<ILogger<API.Modules.FooBarModule>>(
            (c, an) => loggerFactory.CreateLogger<API.Modules.FooBarModule>());
    }
}

Now, there are a couple of things that are important here:

  • We need to register ILoggerFactory even though we aren’t going to use it directly, since the constructor of the generic Logger<T> needs it.
  • The generic logger needs to be registered with .AsMultiInstance(), otherwise it will be resolved only the first time, and the same (and wrong) generic instance will be re-used after that.
  • For some reason it seems the resolution of ILogger<> doesn’t work in the modules themselves. This might have something to do with how Nancy auto discovers the modules, or it might have something to do with TinyIoC, I don’t know. But since we generally do very little logging in the modules themselves, we just manually register the loggers that we need for the modules. Other options would be, for example, to:
    • Use the non-generic ILogger in the modules
    • Use the ILoggerFactory in the module instead, and manually create a generic logger with loggerFactory.CreateLogger<FooBarModule>()
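For completeness, consuming the manually registered logger in a module then looks something like this (FooBarModule being the same hypothetical module as in the bootstrapper above, and the route syntax assuming Nancy 2.x):

public class FooBarModule : NancyModule
{
    public FooBarModule(ILogger<FooBarModule> logger)
    {
        Get("/foobar", _ =>
        {
            //Resolved via the manual registration in the bootstrapper
            logger.LogInformation("FooBarModule was hit");
            return "Hello from FooBar!";
        });
    }
}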

I’m sure there are other, and probably better, ways to do this, but this seems to work well enough.

Repairing a cracked PCB in a Commodore 1901 monitor

This is the second and final part of a very short series where I improve and repair my Commodore 1901 monitor. In part 1 I added a SCART connector with analog RGB and audio support, but also discovered that the colours were a bit off (especially when using an RGBi input, such as CGA) and found a crack on the PCB. In this part, I will repair the PCB and hopefully fix the colours.

First I had to have a good look at the crack. It was in the lower left corner of the PCB, close to the potentiometers that adjust the color levels, as marked in the picture below.

After pulling the board out a bit and turning the monitor upside down, I could get a closer look at the crack.

That doesn’t look too fucking good. No less than nine (9) traces are broken. Fortunately, since this is an old monitor, the PCB is single layer, so there are no traces on the back, and no traces inside. The easiest way to repair a broken trace on a PCB is to find a solder joint on each side of the crack and solder a wire over the crack. But I also wanted to try another way. So for the first three traces, where there was enough space, I just scraped a bit of the outer layer off, and soldered a very short piece of wire right over the crack.

For the rest of the broken traces, there just wasn’t enough room to use this method, at least not with the tools and skill at my disposal. So I had to solder wires over the rest of the cracks.

After this I reassembled the monitor (well, actually, I finished the SCART mod as well) and connected my Bondwell Model 8 to the RGBi-input. To my great surprise everything worked perfectly! The lovely CGA palette of white, cyan and magenta was as vibrant as ever with no sign of the yellowish tint from before, and some careful banging on the side of the screen no longer causes the colors to change. So I have to label this a complete success!

I now have the perfect monitor for my small[1] collection of retro computers. It takes RGBi, SCART with analog RGB and separate Chroma and Luma input (like S-VIDEO). And it even has a built-in speaker! The only input I so far haven’t had much success with is composite. If I connect composite to the Luma input (the yellow RCA jack), I get a monochrome picture (not a great surprise). If I connect it to the Chroma instead, I get no picture at all. If I split the composite cable and connect it to both, I still only get monochrome. If anyone has a working way to connect a composite signal to separate luma and chroma inputs, I would be very interested. A minor annoyance though, as I can connect composite to a TV instead. So, yay, working Commodore 1901 monitor!

Finally, here is a picture of my five-year-old son playing Krakout on the repaired monitor!


  1. I would consider my collection small. There are others in my family who would voice a different opinion...

Properly configuring NLog and ILogger in ASP.NET Core 2.2

Ever since we started using dotnet core a couple of years ago, both for new projects and for porting old projects, we’ve been struggling with configuration, especially regarding logging. The official documentation has been – to put it mildly – confusing and inconsistent, and to make matters worse, we’ve been wanting to use NLog as well. In the old days (i.e. when we used .NET Framework 4.x), using NLog was pretty easy: we just added an NLog configuration section to web.config (or a separate file if we were being fancy), and then accessed the static instance of NLog with LogManager.GetCurrentClassLogger(). This, however, does not work particularly well in dotnet core, for the following reasons:

  • Dotnet Core does not like static accessors
  • Dotnet Core really would prefer if we used the ILogger interface to log stuff
  • We don’t have a web.config anymore

So, over the last few years I’ve tried different approaches to this, without ever being fully happy with the result. But with recent versions of dotnet, and after multiple more or less ugly attempts, I feel I finally have a pretty good grasp of how to set everything up properly, so I thought I’d better write it down for future reference before it slips my mind again (my mind is very good at remembering release years for old movies, but not so great at remembering dotnet configuration syntax).

So, first things first. We have an asp.net core web app targeting netcoreapp2.2, and in order to use NLog for the logging, we need two additional package references:

<PackageReference Include="NLog.Extensions.Logging" Version="1.5.0" />
<PackageReference Include="NLog.Web.AspNetCore" Version="4.8.2" />

Then, we need to set up the app configuration in Program.cs. In older versions of dotnet core most of this setup was done in Startup.cs, but it has since mostly been moved to the Program class. Besides setting up the logging, we also configure the rest of the app configuration here, e.g. setting up appsettings.json. For more fundamental information about the Program.cs and Startup.cs classes, see docs.microsoft.com.

//This method is called from Main
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            var env = hostingContext.HostingEnvironment;

            //Read configuration from appsettings.json
            config
                .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json",
                    optional: true, reloadOnChange: true);
            //Add environment variables to config
            config.AddEnvironmentVariables();

            //Read NLog configuration from the nlog config file
            env.ConfigureNLog($"nlog.{env.EnvironmentName}.config");
        })
        .ConfigureLogging(logging =>
        {
            logging.ClearProviders();
            logging.AddDebug();
            logging.AddConsole();
            logging.AddNLog();
        });

The key here is of course the env.ConfigureNLog($"nlog.{env.EnvironmentName}.config") statement, which allows us to read the NLog configuration from a standard NLog configuration file, just as we did in the old .NET Framework. The ConfigureNLog extension method is provided by the NLog.Web.AspNetCore package. In my example I have different nlog config files for different environments, just as I have different appsettings for different environments. The nlog.*.config files are automagically copied to the publish directory, just like the appsettings files. We also configure the different loggers, adding a Debug, a Console and an NLog logger, which will all receive the same logging data.

This also has the additional benefit of getting rid of a very annoying warning that you get if you still use the old method of adding loggers in Startup.cs:

ConsoleLoggerExtensions.AddConsole(ILoggerFactory)' is obsolete: 'This method is obsolete and will be removed in a future version. The recommended alternative is AddConsole(this ILoggingBuilder builder).

And with this, we’re pretty much finished. All setup regarding logging and app configuration can be removed from Startup.cs, unless you need to do other fancy stuff there. Since IConfiguration and ILoggerFactory are already configured in Program.cs, you may have to inject them in Startup. This can be done in either the constructor or in the ConfigureServices or Configure methods. I really can’t say which is best.

public class Startup
{
    public Startup(IHostingEnvironment env, IConfiguration config)
    {
        //I guess you could store config as a field here and access it in the other methods
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env,
        ILoggerFactory loggerFactory, IConfiguration configuration)
    {
        //You can inject both ILoggerFactory and IConfiguration directly
        //into the configuration methods as well
    }
}

If you are using the standard ASP.NET Core dependency resolution, this is it! You can inject ILogger or (preferably) the generic ILogger<FooBar> anywhere you want to log stuff, and just log away. In our case, we use Nancy and TinyIoC (or frequently Autofac) for dependency injection, which makes things a little more complicated, but that will make for an excellent post of its own!
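Just to show what that looks like, here is a minimal, made-up service with an injected generic logger:

public class FooBarService
{
    private readonly ILogger<FooBarService> _logger;

    public FooBarService(ILogger<FooBarService> logger)
    {
        _logger = logger;
    }

    public void DoStuff()
    {
        //This ends up in the Debug, Console and NLog providers alike
        _logger.LogInformation("Doing stuff at {Time}", DateTime.UtcNow);
    }
}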

Adding an RGB SCART connector to a Commodore 1901 monitor

So, this post is going to be a departure from most of the previous content on this blog. This may or may not be indicative of future posts.

A couple of months ago, I bought a cheap, used Commodore 1901 monitor[1] from Tradera. The Commodore 1901 has digital RGBi input using a D-SUB 9 connector, as well as separate Luminance and Chrominance inputs via RCA jacks (which is the same signal as S-Video, just with different connectors). I thought this would be a good monitor for my old Bondwell Model 8 computer, which only has CGA output (and probably deserves a post of its own). It would probably also work with my Commodore 64. The Commodore 1901 also has a built-in speaker, which connects with yet another RCA plug, so I wouldn’t even need a separate speaker.

When I connected my Bondwell to the monitor, it was indeed glorious, as evident in the image below. What is harder to see is that the colours were a bit off; there was a bit of a yellow tint that kind of came and went.

I managed to find the service manual for the Commodore 1901 monitor, and found that there were a couple of potentiometers that could be adjusted if the colour was off. So I opened the monitor and adjusted the potentiometers which at least made the colour a little bit better. Unfortunately, I also noticed that the PCB had a small crack, which caused some bad connections, and was probably the cause of the colour problem. More about this later.

But what I also found, to my great surprise, was a number of solder points that looked like they could fit a SCART connector, and a matching hole in the metal backplate. What on earth could this be for? Maybe this monitor came in a different version[2], with a SCART connector? But if so, what kind of signals were used[3]? And did this version actually use those signals? Would it be possible to get analog RGB input by adding a SCART connector?

A bit of research indicated that yes, this might indeed be possible. I found a thread on amibay.com and a blog post by a Danish guy (unfortunately missing all images[4]) that discussed this. The problem seemed to be that the solder points for the SCART connector on the PCB are oriented backwards, so that a standard 90-degree connector won’t fit. So the usual solution seemed to have been to solder wires between the PCB and the SCART plug. However, I managed to find an angled SCART connector on eBay that seemed to be oriented the other way around. It sure looked like it would fit!

So, the first thing to do was to remove the solder blocking the holes. Time to heat up my solder sucker!

After this, it was simply a matter of fitting the SCART connector and soldering it in place. Or rather, it would have been, had the darn plug fitted through the hole in the metal frame!! When I had fitted the legs through the holes in the PCB, it was completely impossible to get the plug through the hole. In the end, I had to bring out a knife and go to town on the poor plug.

Finally, I was able to fit the SCART connector through the hole, and solder it in place.

And now, the moment of truth. Would this work? I have an Amiga 600 with a SCART cable that carries not only analog RGB video, but also sound. So maybe I would get sound through the built-in speaker as well? Time to connect a cable. Would it even fit the mangled SCART connector?

The answer to the last question is yes, it fits. And the answer to the rest of the questions is also yes: everything works perfectly! I get a crystal clear image from the Amiga, and I get the sound through the speaker! The only thing left to do was to make a hole in the plastic cover as well, which was easy since there was already an indication in the cover of where to cut.

So, after cutting a hole in the cover, it was just a matter of putting everything back, and look at the nice result:

And finally, here is a picture of the Amiga workbench on the Commodore 1901 monitor:

So hooray, everything is great! Except for the crack in the PCB, remember? Since I had the monitor open, and the soldering iron out, I decided to see if I could fix that as well. But I believe this post is long enough already, so that will have to wait until part 2.


  1. The Commodore 1901 monitor was a PAL-only monitor produced between 1986 and 1988, and was meant to be used together with the Commodore 128. It is not as famous as the 1084 monitor but, as we will see, with the SCART modification it is just as useful!
  2. The monitor was actually manufactured by Thompson. And Thompson did release their own version of it with a SCART connector, the Thompson 450G. Why the Commodore version came without it, I do not know.
  3. The SCART connector actually carries a lot of different signals. It can carry composite video, s-video and RGB, and even YPbPr, as well as stereo sound. Wikipedia has a good article.
  4. While writing this post I checked the blog post again, and now it seems all images are back! This would have made it easier for me when I was actually working on the monitor!