markround.com

Code, Amiga, Sound Engineering and other things of interest...

Fuse, SOBJS and Multiple Assigns: Amiga OS 4 Learning Curve Part 1


A little while ago, an updated port of the FUSE ZX Spectrum emulator was uploaded to OS4Depot. My first computer was a ZX Spectrum 48K, although I eventually ended up with a +3 model before upgrading to my first Amiga. I was looking forward to getting this emulator installed so I could revisit some of the classic 8-bit games I used to play; my favourite game is still Manic Miner, although I never manage to get past the “Eugene’s Lair” level on my first attempt!

I hit a few stumbling blocks along the way though, so this “simple” exercise turned into a trip down the rabbit hole which ended up teaching me a lot about my new X5000 system and AmigaOS 4. What follows is pretty much a blow-by-blow account of what I discovered along the way. As I’m still pretty new to OS4, there’s probably a much neater solution to all this, so any comments and feedback are welcome!

Anyway, here’s the first problem I hit: I downloaded the ZIP archive for FUSE, unpacked it with UnArc and tried to launch it by double-clicking on the icon. Instead of the FUSE window appearing, I got this error message:

I remember having this kind of issue all the time on my classic Amigas – a program would require some library that I didn’t have installed. Back in the day, if I was lucky I’d get an error message telling me what to download, or a note in a README guide somewhere. When that failed, I’d usually break out the fantastic and indispensable SnoopDOS utility to find out what a program was looking for. In this case however, it was clearly telling me what file I needed, but I had no idea where to install it. On my classic Amiga systems, standard Amiga libraries (with the .library extension) simply got dropped into the LIBS: assign, which usually pointed to the Libs directory on your startup disk. But I had no idea what to do with ELF-format shared libraries.

Anyway, I searched on OS4Depot for “libpng”, and found this package. Looking at the contents of the archive, I could see this contained a libpng16.so.16.34.0 file. This looked to be exactly what I was after! I downloaded the libpng archive, and unpacked it to a new directory in my Downloads folder (I really dislike archives that unpack into the current directory; it makes cleaning up and keeping track of where files came from so much harder). This left me with a bunch of library files and an AutoInstall file. I looked at this, and it turned out to be a simple AmigaDOS script:

Delete SDK:Local/clib2/bin/libpng#? QUIET
Delete SDK:Local/newlib/bin/libpng#? QUIET

Delete SDK:Local/clib2/lib/pkgconfig/libpng#? QUIET
Delete SDK:Local/newlib/lib/pkgconfig/libpng#? QUIET

Delete SDK:Local/clib2/lib/libpng#? QUIET
Delete SDK:Local/newlib/lib/libpng#? QUIET

Delete SDK:Local/clib2/include/libpng#? ALL QUIET
Delete SDK:Local/newlib/include/libpng#? ALL QUIET

Delete SDK:Local/clib2/include/png.#? QUIET
Delete SDK:Local/clib2/include/pngconf.#? QUIET
Delete SDK:Local/clib2/include/pnglibconf.#? QUIET
Delete SDK:Local/newlib/include/png.#? QUIET
Delete SDK:Local/newlib/include/pngconf.#? QUIET
Delete SDK:Local/newlib/include/pnglibconf.#? QUIET

Copy Local/ ALL CLONE QUIET SDK:local/
CD SDK:local/newlib/lib
MakeLink libpng.so libpng16.so.16.34.0 SOFT
MakeLink libpng16.so.16 libpng16.so.16.34.0 SOFT
MakeLink libpng16.so libpng16.so.16.34.0 SOFT

Fairly rudimentary, but it’s clear what it’s doing: it first deletes any existing libpng-related files, then copies everything to the required locations and sets up some symbolic links in SDK:local/newlib/lib. If you’re familiar with how Unix systems handle shared libraries, this should look pretty similar. I also discovered from that script that AmigaDOS can make use of symbolic links – I’m not sure when support for them was added, but I don’t recall it being around on my classic Amigas.

The only problem was that I didn’t have an SDK: assign, or any directory structure resembling the archive’s, on my system. After some searching online, it appears this is created by the official SDK for AmigaOS 4, which can be downloaded from the Hyperion website. I picked the most recent version, SDK_53.30.lha (Software Development Kit for AmigaOS 4.1 Final Edition), and downloaded it.

After I’d unpacked it, I ran through the supplied installation utility, only changing the location to my standard Software: assign. I selected a “Full Install”, as this bundles a lot of very useful-looking GNU tools and utilities, as well as include files and compilers:

Definitely a lot to go exploring through on another occasion! I noticed that it had also modified my S:User-Startup file to make sure the SDK: assign is created, and it also dropped in another startup file that is run on boot:

;BEGIN AmigaOS 4.1 SDK
assign SDK: Software:SDK
execute SDK:S/sdk-startup
;END AmigaOS 4.1 SDK

Looking at SDK:S/sdk-startup, it appears that this sets up assigns and other things needed by the bundled compilers and utilities. In any case, I now had my SDK: assign, so I could go back to my libpng download directory and run execute AutoInstall from an AmigaDOS shell. This completed, and I could see that the libpng files had been copied to the correct destination under my SDK root. So, I clicked on the FUSE icon again, and… the same error: “Failed to load shared object libpng16.so.16.34.0”. Damn.

At this point, I discovered there was a SOBJS: assign pointing to the System:Sobjs directory, which seemed to be full of .so library files. I decided to try copying the libpng library there:

copy SDK:local/newlib/lib/libpng16.so.16.34.0 system:SObjs/

I then re-launched Fuse. Now I saw a different error – this time complaining about not being able to find libz.so.1. One step forward, two steps back… This libz.so.1 file was present in the SDK:local/newlib/lib directory, but I decided against copying everything in there over, as it seemed pretty hacky and a sure-fire way to screw something up. Besides, there had to be a way of adding this SDK lib directory to the system-wide library search path.

Now, on Linux or Unix systems, I’d have looked for a way to either re-compile Fuse with the correct -L and -R flags, or found a way of adding the SDK to the LD_LIBRARY_PATH variable. In this case though, I was totally stuck and in a very unfamiliar place. I shelved my “get FUSE installed” project for the time being, and instead took a brief diversion into getting Qt installed. Quite by chance, I noticed that it added something very interesting to the end of my S:User-Startup:
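To make the Unix analogy concrete, here’s the sort of thing I’d have done on Linux (the paths here are hypothetical, just to illustrate the idea of extending the loader’s search path rather than copying .so files around):

```shell
# Linux sketch (hypothetical paths): tell the runtime loader about an
# extra directory of shared objects instead of copying them into place.
SDK_LIB="/opt/sdk/lib"                                # hypothetical SDK lib dir
export LD_LIBRARY_PATH="${SDK_LIB}:${LD_LIBRARY_PATH:-}"
echo "$LD_LIBRARY_PATH"                               # new dir is searched first
```

As it turned out, AmigaOS 4 has a rather elegant equivalent of this, as I was about to discover.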

;BEGIN Qt-4.7
assign Qt: "Software:Programs/Qt-4.7"
path Qt:bin ADD
assign sobjs: Qt:lib ADD
;END Qt-4.7

Whoah. It looked like it was appending a path onto an existing assign – I had no idea that AmigaDOS supported this! I’ve used the ADD syntax with the path command many times when adding extra directories full of tools, but never thought to try it with an assign. It looks like it was added in AmigaDOS release 2, so it’s been available to me ever since I got my first A600. Another new thing I learned!
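As a quick sketch of what that multi-assign syntax looks like in an AmigaDOS shell (the paths here are hypothetical – DOS searches each directory in the assign in turn):

```
; Append a second directory onto an existing assign (hypothetical paths).
; After this, anything searching LIBS: also looks in Work:MyLibs.
Assign LIBS: Work:MyLibs ADD
Assign            ; listing the assigns now shows both directories for LIBS:
```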

So, I then added the following to the end of my S:User-startup:

; Needed for FUSE etc to pick up on new libraries
assign Sobjs: SDK:local/newlib/lib ADD

I rebooted, clicked on the Fuse icon, and…..(cue drum-roll)

Success! Although, I have to admit that at this point, I lost around 2-3 hours downloading and playing tons of classic games from the World Of Spectrum archives :)

However, this little distraction did teach me some new things:

  • AmigaDOS supports both hard links and soft (symbolic) links.
  • Assign can take an ADD parameter which appends directories onto a logical assign.
  • The SOBJS: assign appears to be where programs look for ELF-format shared libraries on startup.
  • The OS4 SDK contains some very useful CLI tools and is worth installing on any Amiga OS 4 system even if you don’t intend to develop software with it.
  • Even after 30-odd years, there’s still something new for me to learn about classic AmigaDOS commands!

Finally, I’d just like to note that shortly after I solved the FUSE problem, someone very helpfully posted a comment which highlights an alternative fix – simply copying the libpng library to the FUSE directory. Although I wish I had thought of trying that, this little diversion proved to be a very useful exercise and I’m glad I got to dig a little deeper “under the hood” of my new X5000.

Classic Amiga Emulation on the X5000


While I’ve been having a lot of fun with the new software written specifically for AmigaOS 4, the bulk of my software is still “classic” titles that used to run on my A1200. One of the first things I did when I set up my X5000 was to transfer my old Amiga’s hard drive over so I could continue running this library of software. I also wanted to set up an emulation of my A1200 so I can quickly launch a classic Workbench 3.9 session and pick up all my old projects and bits of code I’d written over the years.

Fortunately, the X5000 and AmigaOS 4 offer a variety of ways of running all your old software. For this blog post, I’ll break it down into four areas:

  • Running software from disk images (such as ADF, DMS etc.)
  • Running games and demos from WHDLoad
  • Full system emulation with E-UAE
  • Running 68k system software

Disk images

The first “mode” I wanted to explore was running classic Amiga games and demos from ADF disk images.

I discovered that thanks to the pre-installed RunInUAE tool, this was about as easy as it gets – you simply double-click a supported disk image filetype such as an ADF or DMS image, and RunInUAE then boots a pre-configured fullscreen classic Amiga environment with your disks attached.

It relies on the filename to select the model of classic Amiga to emulate, so if your disk image has the letters AGA in the filename, an A1200 will be emulated; otherwise, it falls back to a classic A500 setup. It also handles multi-disk demos and games very simply: if the filenames for your disk images are identical apart from a trailing numerical character, it will “insert” each disk into the system. As an example, here’s what my directory of classic “Grapevine” disk magazines looks like after I’d renamed them according to RunInUAE’s standards:
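For instance, a two-disk set might be named like this (hypothetical filenames, just following the pattern described above):

```
Grapevine_Issue21_1.adf
Grapevine_Issue21_2.adf
```

RunInUAE spots the trailing digits and inserts each numbered disk automatically.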

By default, the emulation runs in fullscreen mode. You can quit the emulation with Ctrl-Alt-q, and Ctrl-Alt-s switches out of fullscreen mode. You can also tweak the settings for an individual game by dropping it onto the RunInUAE window and right-clicking to see a menu where you can change the input devices (joysticks etc.), screenmode and other things. This creates a customised .uaerc file for that particular image, which you can see opened in MultiView below:

Here, I’ve switched to windowed mode instead of fullscreen. You can also see in the generated .uaerc that it mounts a Workbench volume containing a 3.1 installation on a virtual DH0:. This installation lives under System:Emulation/AmigaDisks/Workbench3.1 – this will be important later on… In addition, my usual volumes (Software, Work and System) are also mounted into the virtual Amiga in read-only mode.

A nice touch is that the DiskImageGUI utility also supports Amiga disk images, so you can mount them as virtual IDFn: volumes. As an example, I’ve mounted the Grapevine disk image with Mounter, and you can see it show up on my X5000 desktop, accessible just like a real floppy:

The DiskImageGUI runs as a commodity after you close the window, so you can always bring it back with the shortcut Ctrl-Alt-d.

WHDLoad

WHDLoad is an awesome tool developed for classic 68k Amigas that makes lots of games and demos runnable straight from a hard drive, even for programs that didn’t natively support a HD install option. There are lots of archives available on the Internet of thousands of classic games and demos, all pre-configured and ready to go. These can also run on the X5000 and AmigaOS 4 through RunInUAE, as it provides a WHDLoad “shim” which boots an emulated Amiga and runs the real WHDLoad from inside that.

However, the bundled WHDLoad binary is pretty old and unregistered, so before a game starts you are presented with a “please register” message and a timer that counts down before the game will actually start. Newer versions of WHDLoad include numerous improvements and also remove the registration requirement, so the first thing I did was update the installation. I went to the WHDLoad homepage and downloaded the latest “usr” package, which was a standard LHA archive. After extracting it, I ran through the installer utility, but (and this is important!) I selected the C directory under System:Emulation/AmigaDisks/Workbench3.1 as the destination. Do not install this into your AmigaOS 4 System volume, as it’ll overwrite the WHDLoad “shim”. You can see the correct path specified in the screenshot below:

Once this was done, all my WHDLoad games and demos worked exactly as they did on my A1200! Here’s what the emulation screen displays briefly before the game starts, showing the updated WHDLoad install:

Joystick tweaks

The only AmigaInput-supported gamepad I have for my X5000 doesn’t let me use the “hat” switchpad in most games. For some things this is fine, but for other games I prefer the feel of the classic pad to the joystick nubs. I found a fantastic tool on http://os4depot.net/ called AmigaInputAnywhere, which lets you use an AmigaInput device to trigger keypresses or other actions. With this, I could set my gamepad’s “hat” controller to trigger the numeric keys (8=Up, 2=Down, 4=Left, 6=Right, 5=Fire), which can then be selected as an emulated joystick in RunInUAE. For the games where I preferred the hat switch, I just opened up RunInUAE, dropped the game icons on the open window, right-clicked and selected Per-game settings –> Set joystick to –> kbd1 (number pad). In the next screenshot, you can see the AmigaInput prefs open, along with my AmigaInputAnywhere configuration and a test session in the shell where moving the hat switchpad generated a stream of number-pad keypresses:

Full system emulation

Although I could now run my old games and demos through RunInUAE, I also wanted to emulate my A1200 as a “full system” which boots into my Workbench 3.9 installation so I can continue with my old projects. I had the directories prepared with the contents of my A1200’s hard disk copied over; this was covered in my earlier blog post here.

To set up this emulation environment, I decided to use an updated E-UAE which has been released on OS4Depot here. This version of E-UAE has JIT support, which can add a significant speed boost, although it’s worth experimenting as some things can become unstable with it enabled. I also definitely didn’t want to interfere with my stock E-UAE installation – even though it’s old, it still seems to work fine for my disk images and WHDLoad games – so I unpacked this updated release into a separate directory at Software:Programs/E-UAE_1.0.0.

I then created the configuration file at Software:Programs/E-UAE_1.0.0/.uaerc based on a template from the Amigaone X5000 Blog. I made a few simple changes: disabled the floppy drive sound, enabled the JIT (after some experimentation, my installation seemed to run better with it) and changed the paths to my emulated hard drives and ROM image. For reference, my slightly modified .uaerc file is listed below:

kickstart_rom_file=A1200:rom/amiga-os-310-a1200.rom
kickstart_key_file=A1200:rom/rom.key

amiga.floppy_path=A1200:Floppies

floppy_speed=400
floppy0sound=0
scsi=true
scsi_device=true
show_leds=false
filesystem2=rw,DH0:System:A1200:Drives/System,1
filesystem2=rw,DH1:Software:A1200:Drives/Software,0
filesystem2=rw,DH2:Work:A1200:Drives/Work,0

joyport0=mouse
joyport1=joy0

cpu_type=68040
cpu_compatible=false
cpu_cycle_exact=false
cpu_speed=max

chipset=aga
immediate_blits=yes
collision_level=full

chipmem_size=16
bogomem_size=0

fastmem_size=0
z3mem_size=256
gfxcard_size=32

sound_output=exact
sound_channels=mixed
sound_frequency=44100
sound_latency=120
sound_max_buff=8192

amiga.screen_type=ask
amiga.publicscreen=ask
gfx_fullscreen_amiga=false

amiga.use_dither=false
gfx_framerate=1

# Display settings
gfx_width=1024
gfx_height=768
gfx_width_windowed=720
gfx_height_windowed=568
gfx_fullscreen_amiga=no
gfx_fullscreen_picasso=yes
gfx_lores=false
gfx_linemode=double
gfx_correct_aspect=no
gfx_center_horizontal=smart
gfx_center_vertical=smart

# Miscellaneous
use_debugger=no
ppc.use_tbc=false

# JIT
cachesize=8192
cpu_compatible=false
cpu_cycle_exact=false
blitter_cycle_exact=false
comp_constjump=yes
comp_trustbyte=indirect
comp_trustword=indirect
comp_trustlong=indirect
comp_optimize=true

I created a little script file to launch the emulation environment; this was a simple text file with the following lines:

cd Software:Programs/E-UAE_1.0.0
uae

I saved this as “a1200” in my Work:c directory, which I keep for my own scripts and tools (this is also added to my path in my S:User-Startup). I also ran Protect Work:c/a1200 +s to mark it as an executable script. So that I could also launch this from X-Dock, I copied a nice .info icon file to this directory and renamed it a1200.info. I then modified the tooltypes as per the screenshot below:

This sets the default tool to C:IconX, which launches script files, and also sets the window to the NIL: device so I don’t get a screen of UAE debugging output.
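For reference, the relevant icon settings looked something like this (reconstructed from the description above rather than copied from the screenshot):

```
Default Tool: C:IconX
Tool Types:   WINDOW=NIL:
```

IconX runs the file as an AmigaDOS script, and pointing the window at NIL: discards any console output the script would otherwise open a window for.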

With that in place, all I have to do now is click on the icon in my X-Dock, and I get a windowed emulation of my A1200 just as I’d left it!

I can easily put this into fullscreen mode by using the Ctrl-Alt-s shortcut, and I also discovered that Ctrl-Alt-g “grabs” the mouse pointer inside the emulation which is really useful when it’s running in windowed mode.

Classic 68k binaries

AmigaOS 4 also includes a built-in JIT 68k emulation layer called “Petunia” that allows classic system-legal 68k binaries and libraries to run unmodified. For these to run, they must only use OS system calls and avoid “hitting the hardware” – so anything that bangs on the custom chipset can’t be used (although there’s always RunInUAE for such things).

Many tools and utilities work just fine, however, especially more recent 68k binaries. In the screenshot below, I’m running the 68k binary of the Workbench 3.1 “calculator” tool, and also AmTelnet for some classic BBS action. AmTelnet is a particularly good example of what can run: it’s a full MUI program which also installs some custom 68k libraries. Apart from the occasional Grim Reaper crash on launch, it functions just like a PPC-native program. It’s fully integrated into the updated Workbench environment, and if I hadn’t looked at the task listing in SysMon and seen “68k” in the “Arch” column, I’d have thought it was a native application!

So there you have it! I hope that’s been of some help and interest :) I’ll continue to update this blog with more of my Amiga experiments, and feel free to leave a comment with any questions!

New Amiga X5000


As you may have seen with my latest music project, I’ve been getting back into the Amiga scene in a big way over the last year. Granted, a large part of this is nostalgia on my part; the Amiga was a lifeline to me during my teenage years and was responsible for starting my twin interests of computing and music. But I’ve always been amazed by the sheer tenacity of the Amiga scene – nearly 30 years on from when I first got my Amiga 600, the scene is still going (albeit fractured into different camps) and new systems are still being created for this legendary platform.

Sadly, my A1200 recently died and while I haven’t given up on it yet, it’s going to take me some time to source spare parts and experiment with fixing it. This left a large Amiga-shaped hole to be filled, and I’ve always been fascinated by the next-gen PPC systems. I finally took the plunge and ordered my maxed-out X5000 from AmigaKit in the UK, with the aim of getting a more modern, reliable system and moving all my projects from the A1200 over onto it.

Although I used OS 3.x pretty heavily “back in the day”, I jumped ship from the Amiga scene in the late 90s/early 2000s, before PPC accelerators had really become widespread. I’ve also never had access to the current breed of next-gen PPC systems, so this series of Amiga-related posts is intended to document my adventures and experiments with my new Amiga, from the perspective of an old Amigan getting back into the scene. Hopefully, once I’ve repaired my A1200, there will also be a series of posts on the classic 68k scene. I’m also a little rusty on all things Amiga, so if I’ve made any glaring mistakes please feel free to contact me or leave a comment and I’ll fix it!

While I’m at it, I should give a special mention to two fantastic blogs:

Both of these provided a wealth of information on next-gen systems, and I was particularly pleased to see Epsilon received his X5000 only a few days before I got mine. I know there will be a fair amount of overlap between my blog posts and theirs, but I hope my take on the X5000 will maybe help convince a few more old Amigans to join the party!

Anyway, back to the X5000. About a week after I ordered it, it had been built and tested (great communication from AmigaKit, by the way – they’re only a small operation, but I have to hand it to them for keeping the Amiga dream alive!) and I received a shipping notification and tracking number. The following day, this very exciting parcel turned up on my doorstep:

Very well packaged and protected during the journey.

And here’s the assorted collection of “extras” that came with this X5000. The brown cardboard box was taped inside the case, in one of the spare HD slots. Inside was a very useful collection of spare screws, bolts, blanking panels and a cable. I’m not entirely sure what the cable was for; it looked like something left over from the motherboard. I’ll keep it anyway, just in case it turns out to be useful!

I also received a USB drive pre-installed with the AmigaOS 4.1 FE pre-release for the X5000 and the A-Eon Enhancer Software. This can be used as a recovery drive, so the first thing I did while I was getting the rest of the kit ready was to copy its contents to my NAS using an Apple MacBook Pro and the dd command, so I have the USB image safe somewhere else in case I ever need it.
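For anyone wanting to do the same, here’s a sketch of the dd approach (the device name below is hypothetical – check with diskutil list first, and be very careful to pick the right disk before writing anything). The copy-and-verify mechanics are demonstrated on a scratch file:

```shell
# On a real Mac you would image the whole USB device, e.g.:
#   diskutil list                      # find the stick, e.g. /dev/disk2
#   diskutil unmountDisk /dev/disk2    # hypothetical device name!
#   sudo dd if=/dev/rdisk2 of=x5000-recovery.img bs=1m
# The same copy-and-verify mechanics, shown safely on a scratch file:
printf 'AmigaOS 4.1 FE recovery image' > /tmp/source.img
dd if=/tmp/source.img of=/tmp/backup.img bs=4096 2>/dev/null
cmp -s /tmp/source.img /tmp/backup.img && echo "backup verified"
```

Using the raw rdisk device on macOS is noticeably faster than the buffered disk device, which matters for a multi-gigabyte stick.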

As the USB recovery drive comes pre-installed with the Enhancer package, you can also continue to access your SFS/02 partitions and filesystems – apparently, SFS2 support has been dropped from recent updates of AmigaOS 4.1. It seems these days as though Hyperion isn’t really actively developing AmigaOS 4 any further, and therefore many of the updates to drivers and the rest of the OS are now coming directly from A-Eon instead. Having the USB drive pre-installed and ready to boot is a really nice touch, as it boots straight to a desktop like a Linux “live CD”, so it can be used for repair work in case you muck something up (like your startup-sequence) and need a way to edit it.

Along with the USB drive, I also received a registration card for AmigaOS, a DVD copy of the Enhancer Pack, and a printed manual for the motherboard. Creating an account and registering on the Hyperion website also gave me access to an ISO image download of “Amiga OS 4.1 Final Edition Pre-Release”, which I also downloaded to keep safe. I don’t think I’ll ever use this image as the USB recovery stick has everything I need on it, but it’s nice to have anyway.

Speaking of the motherboard, here’s a shot looking down on the inside of the X5000 after I removed the side panel to check everything was properly connected:

A nice, tidy case with plenty of room for expansion! There’s a single stick of RAM and a PCI sound card along with the PCIe RadeonHD graphics card pre-installed.

I then put the cover back on, connected my USB mouse and keyboard (Logitech K120) to the top two black USB sockets at the rear, hooked up power and turned the system on. I was greeted with the awesome animated boot logo:

Once the ball starts spinning, you can press the space key to get at the early-boot menu, or you can simply wait and after a few seconds it boots straight into AmigaOS. Under a minute later, I was looking at my brand new Workbench! An awesome moment, and I was really excited to start exploring the new OS and machine. Note that my “screenshots” at this stage are all taken with my phone camera, as I was going to reformat and re-partition my drives so didn’t yet want to mess around with taking screengrabs and worrying about how to transfer them across.

Anyway, here’s what I saw after first boot – a 1920x1080 HD Workbench with an animated “CANDI” background and a few widgets like the clock and calendar:

One thing to note: the keyboard didn’t function fully as expected at first. I could type OK, and it worked in the U-Boot environment, but the Windows keys weren’t recognised as “Amiga” keys, and other modifier keys like Ctrl and Alt also weren’t recognised properly. Moving the keyboard to one of the black USB ports at the rear resolved this.

I clicked around a few drawers and experimented with some of the included programs, but had to stop the fun as I had work to do installing an additional SSD before I could really continue. My plan was to keep the SSD shipped with the X5000 as a “System and Software” volume, and install a second SSD for my Work volume. I also planned on making room for some additional partitions which I may require later if I wanted to experiment with MorphOS or different filesystems.

Pretty much any SSD should work, but I decided on a Samsung EVO as I’ve had good experience with them in the past. Here’s my new Samsung SSD installed in one of the spare mounting brackets from the X5000:

I then had to remove the other side panel so I could access the bundle of SATA and power cables that had been cable-tied neatly together. The first step was to free these up by cutting some ties so I had enough room to move cables around to their desired positions. As the motherboard only has two SATA connectors, my plan was simply to disconnect the DVD drive (I very rarely use optical media these days) and use that SATA port for my new SSD. I’ll later add a supported PCI SATA controller so I can re-connect the DVD drive and maybe an additional drive.

Here’s my new drive in place with SATA and power connected:

There was, however, a small problem I encountered during this work!

When I looked closely at the SATA connectors, I noticed that glue had been applied around them; you can see this in the close-up image below. Presumably this was to stop the connectors coming loose in transit – a good idea, but it made things a little difficult when it came to moving things around inside.

Anyway, the connector plugged into the DVD drive was pretty firmly glued at the DVD end, so I decided instead to leave that end in place and remove the cable from the motherboard end. I carefully applied gentle pressure to pull the cable free from the motherboard and – disaster! The SATA socket actually lifted clear off the motherboard, still attached to the cable and leaving some very delicate-looking pins exposed. Argh! It seems the SATA connectors (and maybe others) are not as firmly held in place on the board as they are on most PC motherboards I’ve previously used.

I stopped what I was doing, calmed down, made myself a coffee and then went back to work (making very, very certain I was earthed with an anti-static wristband) and carefully pushed the SATA connector back onto the motherboard pins with the help of a magnifying glass and a pair of insulated tweezers I got from an old toolset. It eventually slotted home without damaging or bending any of the pins and I was good to go; a hair-raising moment though and I’d advise extreme caution when attempting this on any X5000 motherboard!

So, with my new SSD attached, I quickly checked everything looked OK from U-Boot. I simply pressed the space bar when the boing ball started spinning, and selected the System Information menu option. The following screen confirmed my new SSD was recognised:

So, now on to partitioning and laying things out to my usual preferences. On all my Amigas, I usually have three partitions:

  • DH0: (System) : Boot drive containing the OS and essential libs
  • DH1: (Software) : All programs, games, 3rd party libs etc.
  • DH2: (Work) : My equivalent of a “Home” directory. Pictures, music, downloads, code projects etc.

The X5000 came with a single 240GB SanDisk SSD installed, and this was the default partition layout:

There was one 20GB System partition, and the remainder of the disk was left as a single large “Work” partition. I wanted to reduce this in size to use as my Software volume, and as I mentioned earlier, also allow some space at the end of the drive for a later MorphOS partition. I have no plans right now to install MorphOS, but I can see that it’ll be something worth experimenting with, especially if the on-board graphics card gets driver support (currently, to use MorphOS on the X5000, you’ll need to replace the supplied graphics card). So, I reduced DH1 in size, leaving enough space to create some more partitions as needed:

I then also partitioned my new SSD and added a single large Work partition. A quick reboot and format of the drives later, and I was ready to start installing software, copying data over and customising the install!

So now my next step was to transfer all the data over from my A1200. I removed the internal CF card from it, and placed it in an external USB adapter that supports CF, SD and micro-SD. The adapter is a “Cateck Aluminum Superspeed USB 3.0 Multi-in-1 3-Slot Card Reader”, which I purchased from Amazon here. It works very nicely in Linux, Windows, Mac OS and AmigaOS! Here’s the A1200’s CF card inserted and ready for transfer:

I now faced another problem – I could mount the partitions using the “Mounter” utility after selecting the usbdisk.device device, but the A1200’s partitions were labelled DH0: to DH2:, which clashed with my internal drives. I found documentation saying that I should alter the tooltypes of the Mounter utility to include NOUNMOUNT, which should have renamed the partitions and prevented the internal drives from unmounting:

However, no matter what I did, I couldn’t get this to work. So I took a different approach instead – I simply opened the A1200’s drive in Media Toolbox, and changed the A1200 partitions to become DH3: to DH5:. With this done, I could mount them without the internal drives unmounting, and could start to copy everything over. I first created a new directory under my Work partition to hold all my A1200 files, and assigned A1200: to it:

makedir Work:A1200
Assign A1200: Work:A1200
makedir A1200:Drives
makedir A1200:Drives/System
makedir A1200:Drives/Software
makedir A1200:Drives/Work

And I also added the Assign command to my S:User-Startup for later use.

I started copying the files using Directory Opus, as you can see in this next screenshot, with the A1200 partitions showing up on the right-hand side of the screen with their OS3.x icons:

Later on, I reverted to a simple AmigaDOS shell session and did something like this for the remaining partitions:

copy DH4: A1200:Drives/Software all
copy DH5: A1200:Drives/Work all

After some more work customising my “classic” environment (details in a forthcoming post), here’s my desktop as it was at the end of my first day:

You can see I’ve tweaked it to my liking, added a classic Amiga wallpaper, set up X-Dock with a few useful shortcuts, and I also have my A1200 emulated in E-UAE, along with my old WHDLoad games ready for action. Setting up and tweaking my emulation configuration is definitely too much for this already very long post, so I’ll save those details for another time.

Until then, I hope this has been an interesting overview of my new system!

Flashback

| Comments

This is my rock/metal cover of a tune from the classic Mahoney & Kaktus Amiga demo, “Sounds of Gnome” (http://www.pouet.net/prod.php?which=5583). Specifically, the tune was called “Jobba” and the intro also borrows from the intro song on The Great Giana Sisters.

I loved this track when I was a kid, and watching M&K demos (along with the other late 80s/early 90s classics from Anarchy, Rebels, Animate!, D.O.C. etc.) was what drew me into the UK demoscene (shout out to any old members of NFA!). I remember watching this on a friend’s A500 and it was one of the first things I ran on my first Amiga (A600, eventually traded up to an A1200).

As well as getting me into the scene, the Amiga also kick-started my interest in making computer-based music. Mahoney (who wrote the original tune) was also responsible for NoiseTracker, which was my first introduction to making music for myself. I eventually ended up on OctaMed and now my Amiga is solely there for retro sessions as I’m on Cubase these days. But I still miss the DIY approach of Trackers, and I learnt so much from playing around with other people’s MODs.

The Amiga was a real life-line to a geeky kid like I was back then, and it set me on my path for both my career working in IT, and my hobby of making music. The Amiga scene gave me friendship, an escape from troubles of teenage life, and showed me that it’s only imagination that holds you back from achieving something that’s supposed to be “impossible”.

So this is a tribute to the Amiga scene really; I’m not exactly the world’s greatest guitarist but I had the time of my life recording this. I also stuck in a shot, right at the end, of my pimped-out A1200 (HXCE CF, Blizzard 1230, 16MB, Indivision fixer) playing the original demo and track.

Hope you enjoy, and remember : Only Amiga Makes It Possible!

The River

| Comments

Here’s my latest music project – a little progressive rock/metal track called “The River”. It’s based on some riffs that have been knocking around in my head for several years now, and I finally got them down in a form I’m pretty happy with!

Three Years of Tiller

| Comments

On July 18th, 2014 – 3 years to the day of writing this post – I pushed the first public code of Tiller to Github. Back then, it was just a simple little tool that I wrote (mostly as a learning exercise), found useful and thought others may like to use.

Since then, there have been:

  • Nearly 60,000 downloads from rubygems.org
  • 575 commits
  • 244 stars on Github
  • 54 releases
  • 52 issues closed
  • 16 new plugins created
  • 11 pull requests from other developers
  • Over 2,000 lines of documentation written

And countless emails, comments and discussions on the chatroom.

I’d be the first to admit that it’s certainly not the best code you’ll ever see, and in terms of project size it’s tiny compared to some of the other things you’ll see hosted on Github. But something that was once solely mine has clearly resonated with others (The 1.1.0 release actually contained more of other people’s code than my own changes!), spawned a community and has found a place as a building block in projects I’d never have imagined, from people I’ve never even met.

It’s a feeling I didn’t expect to experience outside of my musical endeavours. Just as when I’ve slaved over a mixing desk for days or spent hours trying to perfect a bass riff, it’s been a labour of love. For me, publishing a mastered musical track and pushing a new code release to rubygems.org produce the same nerve-racking sensation. I find myself second-guessing my decisions every time, and – despite working on these things because it’s like an itch I have to scratch – I can’t help but wonder whether anyone will like what I’ve just created.

Just like making music, I strongly believe that code can be an art form. It’s something that ends up reflecting a little of yourself – and whatever your endeavours, you’ll only ever get better as you continue, receive helpful criticism and words of encouragement.

So here we are, 3 years later from that first tentative code push. It’s immensely satisfying to see a community spring up around this little tool, and I want to take a moment to thank everyone who has ever emailed me, submitted a bug report, come up with suggestions, submitted code or otherwise joined in helping this project fill a niche. You’re my encouragement to keep up with this project, learn more, grow as a developer and above all – keep having fun!

Recent updates

Anyway, enough of my meandering musings. It’s been quite a while since I last posted about Tiller, but that doesn’t mean things have been quiet! On the contrary, development is busier than ever and there have been a lot of new features and improvements added over the last year. There is of course the Changelog where you can see all these updates, but I thought I’d take the opportunity to provide a quick round-up of some of the highlights.

0.9.0

Tiller 0.9.0 was released on the 10th of August 2016, and the 0.9 series included a lot of new features and some nice improvements:

Precedence changes

In previous versions of Tiller, global values were merged together in the order that plugins were loaded. Then, the same was done for template values. Finally, template values were merged over the top of global values. As had been discussed in the Gitter chatroom many times, this led to some counter-intuitive behaviour! Starting with Tiller 0.9.0, the behaviour has now been greatly simplified.

We now go through the plugins in order, and for each one we merge template values over global values, then proceed onto the next plugin. To put it simply: A template value will take priority over a global value, and any value from a plugin loaded later will take priority over any previously loaded plugins.
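To make the new behaviour concrete, here’s a minimal Ruby sketch of the merge order. This is illustrative only, not Tiller’s actual code, and the plugin names and keys are made up:

```ruby
# Illustrative sketch of the 0.9.0 precedence rules (not Tiller's code).
# For each plugin in load order, template values are merged over global
# values, and the result is merged over everything gathered so far.
def merge_plugin_values(plugins)
  plugins.inject({}) do |merged, plugin|
    merged.merge(plugin[:global].merge(plugin[:template]))
  end
end

# Two made-up plugins, loaded in this order:
plugins = [
  { global:   { 'greeting' => 'global-1', 'colour' => 'red' },
    template: { 'greeting' => 'template-1' } },
  { global:   { 'colour' => 'blue' }, template: {} }
]

merge_plugin_values(plugins)
# The template value "template-1" beats its plugin's global value, and
# the later plugin's "blue" beats the earlier plugin's "red".
```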

Vaults!

The 0.9.x series also saw two “Vault” plugins. The first supports Hashicorp’s Vault product and was provided in an awesome Pull Request including test cases and documentation from the fantastic liquid-sky.

The second plugin adds support for Ansible Vault – a simple way of encrypting a YAML file and passing it into a Docker container. I’m particularly fond of this plugin as it means it’s trivial to ship a safely encrypted set of YAML values or credentials inside your container, and unlock it at run-time. Even if you’re not using Ansible for orchestration or configuration management (and you should!), Ansible Vault is simple to use:

$ ansible-vault create my_secret_vars.yaml
Vault password: <enter password here>

And that’s it! You can then edit the file with ansible-vault edit and bundle it into your container. See the plugin documentation for more examples.

Config.d

Tiller 0.9.5 added support for merging configuration in a config.d directory structure (and 0.9.6 improved it). This was another simple change suggested by a user (thanks, rafik777!) but means it’s easier now to create layered containers and split configuration out over multiple files.
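As a rough sketch of the idea (this is not Tiller’s actual implementation, and the file names are hypothetical), merging a config.d directory boils down to loading each fragment in sorted filename order and merging it over the configuration built up so far:

```ruby
require 'yaml'
require 'tmpdir'

# Rough sketch of config.d merging (not Tiller's actual implementation):
# load each fragment in sorted filename order and merge it over the
# configuration built up so far, so later files override earlier ones.
def merge_config_d(base, dir)
  Dir.glob(File.join(dir, '*.yaml')).sort.inject(base) do |config, file|
    config.merge(YAML.load_file(file))
  end
end

# Demonstrate with two hypothetical fragments in a temporary directory:
Dir.mktmpdir do |dir|
  File.write(File.join(dir, '10-base.yaml'), "exec: ['true']\ndata_sources: ['file']\n")
  File.write(File.join(dir, '20-site.yaml'), "exec: ['nginx']\n")
  puts merge_config_d({}, dir) # exec ends up as ['nginx']
end
```

This is what makes layered containers easy: a base image can ship the 10-base fragment, and a derived image just drops an extra file into config.d to override the bits it needs.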

1.0.0

The big 1.0.0 release! This marked the start of using semantic versioning, and as always, a big priority is never to break existing installs. I now strongly recommend a modern Ruby installation (see the requirements page for more details) but will continue to test on older installations.

Apart from some nice new features such as exec on write and dynamic values, the big highlight for me of this series so far is the 1.1.0 release. As I mentioned above, for the first time since I started this little project, I made a release with more of other people’s changes in it than my own code!

The future

There’s several new features coming in future releases, and I’ve already started to lay the foundations for plugin versioning. This means I (and other plugin authors) can experiment with new features or radically alter the internal workings of Tiller without breaking compatibility with existing installations. The first thing that will use this will be a new feature to specify multiple “targets” per template. So you’ll be able to do something like this:

vhost_template.erb:
  - target: /etc/httpd/sites_available/my_site.conf
    user: root
    group: httpd
    perms: 0644
    values:
      site_name: "My site"
      site_fqdn: www.mysite.example.com
  - target: /etc/httpd/sites_available/other_site.conf
    user: root
    group: httpd
    perms: 0644
    values:
      site_name: "My other site"
      site_fqdn: www.myothersite.example.com

… and so on. As always, feel free to submit an issue, join the chatroom or send me an email with any suggestions or improvements you’d like to see. And keep having fun!

Tiller 0.8.0 Released!

| Comments

Tiller has just seen its 0.8.0 release, and as you’d expect from a major version bump, it’s a big one. There are some breaking changes if you write your own plugins – nothing too major, a quick search & replace should fix things for you. But, more importantly, there’s a major new feature which brings two big improvements.

I’ve added a “helper modules” feature, which lets you group together custom utility functions in Ruby that can be called from within templates. And I’ve included a built-in function called Tiller::render that lets you include and parse nested sub-templates from another template.

Sub-Templates

Now you can include other templates in your templates by using the built-in Tiller::render helper function. This may be useful if you have a large configuration file to generate: you can now split it up into separate files for each section and combine them together at run-time. Or, you may need to generate a lot of similar configuration blocks (for example, Nginx virtual hosts). You can now iterate over a list and render a single sub-template for each block.

For example, if you have a template called main.erb, you can include another template called sub.erb by calling this function inside main.erb:

main.erb
This is the main.erb template. 
This will include the sub.erb template below this line:
<%= Tiller::render('sub.erb') -%>

You can nest sub-templates as deeply as you wish, so you can have sub-templates including another sub-template and so on.
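Under the hood, this kind of nested rendering can be sketched with nothing more than Ruby’s stdlib ERB. The following is a simplified illustration, not Tiller’s actual implementation; the in-memory TEMPLATES hash stands in for Tiller’s template directory:

```ruby
require 'erb'

# Simplified sketch of nested template rendering, illustrative only.
# The in-memory TEMPLATES hash stands in for a directory of .erb files.
TEMPLATES = {
  'main.erb' => "Main template.\n<%= render('sub.erb') %>",
  'sub.erb'  => 'Hello from the sub-template!'
}

# Look a template up and evaluate it; a template can call render() again
# to pull in further sub-templates, nested as deeply as you like.
def render(name)
  ERB.new(TEMPLATES.fetch(name)).result(binding)
end

puts render('main.erb')
```

In the same spirit, a template can iterate over a list and call the render helper once per entry, which is exactly how you’d stamp out repeated blocks such as Nginx virtual hosts.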

Helper modules

The sub-template system builds on top of a new “helper module” feature. A helper module is intended to group together small blocks of code that you can include in your templates to perform mark-up or other utility functions. They aren’t intended to replace the existing Data- and Template-source plugins; if you need to get some values into your templates, or hook up to some external service, these are probably still the best way to go about it.

But you can see how if you had a more complicated transformation to do (e.g. convert markdown text into HTML) or needed to include some logic in a function, this would help clean up your templates, as well as keep a clean separation of code and configuration.

As an example, this is how you’d add a Lorem Ipsum generator for filling in place-holder text. We’ll simply wrap the excellent forgery gem, so first make sure you have it installed:

$ gem install forgery
Successfully installed forgery-0.6.0
Parsing documentation for forgery-0.6.0
Done installing documentation for forgery after 0 seconds
1 gem installed

I’ll also assume you are using the default directory for custom plugins; if you want to change this, use the -l/--lib-dir option to Tiller to point to somewhere else.

First, create a file named /usr/local/lib/tiller/helper/lorem_ipsum.rb, and put the following code inside:

lorem_ipsum.rb
require 'forgery'

module Tiller::LoremIpsum
  def self.words(num)
    Forgery(:lorem_ipsum).words(num)
  end
end

Note that I defined this inside a custom namespace so it doesn’t clash with anything else. You can then load this module by adding the following to the top-level of your common.yaml:

helpers: [ "lorem_ipsum" ]

Now, in your templates you can call this function like so:

This is some place-holder content : <%= Tiller::LoremIpsum.words(10) %>

When you run Tiller with the -v (verbose) flag, you’ll see Helper modules loaded ["lorem_ipsum"] amongst the output, and your template will contain the following text when generated :

This is some place-holder content : lorem ipsum dolor sit amet consectetuer adipiscing elit proin risus

Tiller 0.8 Changes for Custom Plugins

| Comments

Posted this in the Gitter chat but just to spread it to a wider audience : Tiller 0.8.x will be coming in a little bit; the reason for the 0.8 version bump is that there are some internal changes which will break any custom plugins you may have written. The relevant commit is here: markround/tiller@c2f6a4f.

In short, I’m moving away from per-instance @log and @config vars to a singleton pattern, so there are now single Tiller::log and Tiller::config module variables. A simple search-and-replace (e.g. s/\@log/Tiller::log/g and s/\@config/Tiller::config/g) on your custom plugins should be all you need once 0.8.x is released.
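For plugin authors wondering what the singleton pattern looks like in practice, here’s a minimal illustrative sketch (not Tiller’s real code; see the linked commit for that, and the lazy initialisation here is just for the example):

```ruby
require 'logger'

# Minimal illustrative sketch of module-level singletons (not the real
# Tiller code). Every caller gets the same shared log and config objects.
module Tiller
  def self.log
    @log ||= Logger.new($stderr)
  end

  def self.config
    @config ||= {}
  end
end

# Any plugin can now reach the shared state without per-instance vars:
Tiller::config['environment'] = 'development'
Tiller::log.info("Using environment #{Tiller::config['environment']}")
```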

The Pleiades

| Comments

And now a diversion from most of my geeky posts! I’ve just finished (well, as “finished” as most of my musical projects get) my latest track: “The Pleiades”, featuring the talents of my sister, psy-trance producer Spinney Lainey on flute. I’ve still got a long way to go on my journey through the world of music production, but this is the first thing I’ve felt more or less happy with and wanted to share it with the world. Hope you enjoy!

New Consul Plugin for Tiller

| Comments

It’s only a minor version number bump, but Tiller 0.7.8 now brings a major new plugin which I’m really excited about : It now supports the awesome Consul system from Hashicorp. Consul is a distributed key/value store and service discovery mechanism – the basic idea is that you have a Consul cluster consisting of multiple nodes for high availability and performance. You then place all your configuration values in it, and also register your services (like web server backends, databases, cache nodes and so on) with it. This means you can have a dynamic environment where components discover their configuration and other nodes in your infrastructure on-the-fly: no more hard-coding database URIs or load balancer pools!

This makes it an ideal fit for a “cloud” Docker environment using something like Swarm, Kubernetes or Mesosphere/Marathon. Your containers can run on any host, advertise whatever ports they like, and Consul will make sure that everything can find what it needs to. The only sticking point is how to get your configuration to your applications.

If you’re writing your own microservices from scratch, you can of course talk directly to the Consul API, but for other things that require configuration files (Nginx, Apache, Rails applications and so on) you need a tool to talk to Consul and generate the files for you. Hashicorp (the company behind Consul) do have a tool called consul-template which does this, but Tiller has (to my admittedly biased point of view!) several advantages, not least the ability to use straight-forward Ruby ERB templates and embedded Ruby code instead of Go template syntax, and the ability to load other data source plugins.

So although you can fetch everything from Consul, Tiller lets you do things like store templates in files or retrieve them from a HTTP web service, and then “chain” key/value stores : Provide values from defaults files first, then check Consul, and finally over-ride some settings from environment variables at run-time.

If you’re new to Tiller, I recommend having a quick look at the documentation, and following some of my Tiller blog posts, in particular this article which walks through a practical example of using Tiller inside Docker.

That said, here follows a quick example of using Tiller to fetch data from Consul. In the examples below, I’m just generating some demonstration template files for simplicity. In a real-world situation, these template files would be application configuration files like nginx.conf, mongod.conf and so on.

Getting started

The Tiller Consul plugin requires the diplomat Ruby gem to be installed, so assuming you have a working Ruby environment, this should be all you need:

$ gem install diplomat tiller
Successfully installed diplomat-0.17.0
Successfully installed tiller-0.7.8
2 gems installed

Start a Consul server

Go to the Consul downloads page, and grab both the Web UI and binary download for your platform, and unzip them in the same location. I’ll do this with shell commands below, for the Mac OS platform (replace “darwin” with “linux” if you are using a Linux system):

$ mkdir consul
$ cd consul
$ wget https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_web_ui.zip
$ wget https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_darwin_amd64.zip
$ for z in *.zip; do unzip $z; done

Now, start up Consul in stand-alone mode :

$ ./consul agent -server -bootstrap \
  -client=0.0.0.0 -data-dir=./data \
  -ui -ui-dir=. -advertise=127.0.0.1

You’ll see some startup messages :

==> WARNING: Bootstrap mode enabled! Do not enable unless necessary
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Consul agent running!
         Node name: 'mdr.local'
        Datacenter: 'dc1'
            Server: true (bootstrap: true)
       Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
      Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
             Atlas: <disabled>

==> Log data will now stream in as it occurs:

    2016/05/12 19:56:14 [INFO] raft: Node at 127.0.0.1:8300 [Follower] entering Follower state

Leave this process running in your shell, and with a browser check your server is up and running by visiting http://localhost:8500/ui. You should see a screen similar to the following :

Default consul screen

We’ll now populate it with some test data. I have a script to do this, so download and run it, passing the base URL to your Consul server as the only argument :

$ wget https://raw.githubusercontent.com/markround/tiller/master/features/support/consul_test_data.rb
$ chmod +x consul_test_data.rb
$ ./consul_test_data.rb http://localhost:8500
Populating Consul at http://localhost:8500 with test data

Now, if you visit your Consul page and click on the “Key / Value” link at the top, you’ll see a bunch of data under the /tiller path. Here’s the view of the global values :

Populated consul screen

And if you click around further, you can find the template definitions also stored in Consul :

Template view inside consul

Incidentally, this is the same data that is used in my automated tests, which check all the features of Tiller are working correctly whenever I make any changes. You can see the results of these tests over at Travis CI, or run them yourself if you clone the Git source and run bundle exec rake features.

Tiller setup

Now your Consul server is ready to go, so here’s how to hook Tiller up to it. Just create your common.yaml file with the following contents (just running “true” after we’ve finished for demonstration purposes – in a real Docker container, this would be your application or webserver binary etc.):

common.yaml
---
exec: ["true"]
data_sources: ["consul"]
template_sources: [ "consul" ]

consul:
  url: "http://127.0.0.1:8500"
  register_services: true
  register_nodes: true

And run Tiller against it, using the “development” environment :

$ tiller -b . -v -e development
tiller v0.7.8 (https://github.com/markround/tiller) <github@markround.com>
Using configuration from .
Using plugins from /usr/local/lib/tiller
Using environment development
Template sources loaded [ConsulTemplateSource]
Data sources loaded [ConsulDataSource]
Available templates : ["template1.erb", "template2.erb"]
...
...
Child process exited with status 0
Child process finished, Tiller is stopping.

And there you have it. You’ll have ended up with a couple of files : template1.txt and template2.txt in your current directory, which have been entirely populated with templates and values all served from Consul:

$ cat template1.txt

This is template1.
This is a value from Consul : development value from consul for template1.erb
This is a global value from Consul : consul global value
This is a per-environment global : This is over-written for template1 in development
If we have enabled node and service registration, these follow.
Nodes : {"mdr.local"=>"127.0.0.1"}
Services : {"consul"=>[#<OpenStruct Node="mdr.local", Address="127.0.0.1",
ServiceID="consul", ServiceName="consul", ServiceTags=[], ServiceAddress="",
ServicePort=8300, ServiceEnableTagOverride=false, CreateIndex=3, ModifyIndex=4>]}

Have a try at running Tiller with the “production” environment and see what changes. You can also try changing the values or even the templates themselves inside Consul to see the changes reflected whenever you run Tiller.

What next ?

So, assuming you’re using Tiller as a Docker CMD or ENTRYPOINT to generate your configuration files before handing over to a replacement process, you can now create dynamic containers that are populated with data entirely from a Consul cluster. Or you can chain one of the many other plugins (environment, environment_json, defaults, file and so on) together, so your container can get core values from a variety of sources.
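As an illustration of chaining (a hypothetical sketch, not taken from a real deployment), a common.yaml along these lines would layer environment variables over Consul, which in turn overrides the defaults plugin:

```yaml
# Hypothetical example: plugins are merged in order, so values from the
# environment plugin override Consul, which overrides the defaults files.
exec: ["nginx", "-g", "daemon off;"]
data_sources: ["defaults", "consul", "environment"]
template_sources: ["consul"]

consul:
  url: "http://127.0.0.1:8500"
```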

You can also make use of Consul’s service registration system (perhaps by using the fantastic Registrator) and populate your configuration files dynamically with auto-discovered microservice backends and much more.

Check out the full Consul plugin documentation for more information, read the rest of the docs or drop by the new Gitter chatroom for help and advice.

As always, all feedback and comments welcome!