If you ever get a chance to look through the classic Amiga OS source code still floating around some murky corners of the internet, you'll find it is a thing of beauty and an inspirational piece of computing history, with capabilities unmatched at the time. Remember, this all ran on a computer released in the 1980s with 512 KB of memory, a 7 MHz 68000 16-bit CPU, and a single floppy drive holding 880 KB. On these limited specs, AmigaOS provided a pre-empt...
I’ve long been a die-hard BeOS fan and have been running its open-source recreation, Haiku, for many years. I think it’s interesting to explore the “alternative OS” world and consider some great ideas that, for whatever reason, never caught on elsewhere. The way Haiku handles package management, and its alternative approach to an “immutable system”, is one of those ideas I find really cool.
Earlier this year, I finally discovered as an adult that I am “on the spectrum” with what used to be called Asperger’s Syndrome. The diagnosis helped make sense of a lot of things and has given me a greater insight into my “way of being in the world”. Whilst there are times I struggle with things that neuro-typical people usually find easy, or I find some situations draining, the condition has also brought me many positives which often get overlooked when talking about Autis...
In Part 3 I covered the backend server processes and protocols, CI/CD pipelines and unit tests I used to build the TNFS site. In this (much shorter) part, I’d like to take a step back from the hardcore geekery, and wrap up with my thoughts on the whole thing.
In Part 2 I discussed the server environment, as well as how I built and launched the first prototype version of the site. I hit some speedbumps along the way and quickly reached the limits of what I could do with a pure client-only 1980s BASIC codebase. In this part, I’ll look at how I moved to a backend API system and how all this is deployed and tested.
In Part 1, I explored the hardware and development environment. In this article, I’ll cover the server-side components as well as coding and launching the first iteration of the site along with some of the limitations I encountered when programming on such an old system.
When I was around 8-9 years old, I received a Sinclair ZX Spectrum home computer for my birthday. One of my earliest memories is sitting with my Dad, reading the manual to work out the magic commands to load the games from cassette tape. This series of articles covers the technical details and “behind the scenes” processes of building a Spectrum platform/application using modern toolsets.
After months of being on the waiting list, I recently received my Apollo Vampire V1200V2 accelerator! Since then, my Amiga has had a new lease of life so I thought I’d write an update covering all the stuff I would have wanted to know while I was waiting for the package to arrive. What follows is a review/retrospective on my first month or so of usage of this card - I’ll cover the ordering experience, hardware, installation, updates and software and wrap up with my thought...
Just looking through some old videos and found this footage of me going off on one at our gig in February earlier this year, before the pandemic had really hit in the UK.
Well, that went better than expected :)
A few months ago, reading the news about Linux dropping support for floppy disks set off a whole bunch of memories and emotions around this long-obsolete and “dead” data storage format. I hadn’t thought about or used floppy disks for around 20-odd years; at some point I must have copied my last disk, but I honestly can’t remember what it was or when. The floppy disk was one of those rare things that became part of my everyday life, like my keys or school pencil-case. It...
When the Amiga was introduced in the mid 80s, pretty much all displays available to consumers were 4:3 ratio TVs, and you plugged your computer into your TV via a RF modulator box. Later, dedicated monitors like the Philips 8833 Mk2 (which I originally had as a teenager in the 90s) became available and these offered a much improved, sharper image but were still in the 4:3 ratio as this is what the native Amiga, PC and gaming console screenmodes had been designed around.
It’s been a busy few months here, but I’ve still found time to enjoy my Amiga systems. I’ve been grabbing the odd hour here and there to continue my efforts setting up an Amiga development environment and “dip my toe in the water” again. My setcmd utility is progressing nicely and I’ve learned a lot about the tools like AmigaGuide and the Installer that I used on my A1200 back in the day. I thought since it’s been a while, I’d write a quick “brain dump” post and cover two ...
The A1200 lives!
As I mentioned in my last post, I’ve been developing a new “SetCmd” tool for my Amiga systems, both for my own practical usage and also to serve as a re-introduction to the Amiga developer environment. This series of blog posts will cover the progress of this tool, as well as explore the challenges and technologies from my perspective of a returning AmigaOS fan.
I’ve been having a lot of fun with my X5000 over the past few weeks (more blog posts to come!) but I’ve been working on something recently that I wanted to share. I’ve been enjoying re-learning AmigaDOS and as an exercise for myself, set about building a tool I plan on releasing in the near future. Inspired by some Linux distributions’ “alternatives” system, it’s called setcmd (short for “Set Command”) and lets you easily and quickly switch between different versions of a ...
Happy New Year everyone! I’ve got big plans for my Amiga projects in 2019, but thought I’d start off the New Year with a blog post on a not-particularly “exciting” topic, but an important one nonetheless: Backups. As I am experimenting more with my X5000 and Amiga OS 4.1, I’ve been getting particularly “twitchy” that I didn’t have a solid backup/restore plan in place, particularly as some of my experiments will invariably go wrong and I’ll need a way to roll back my change...
A little while ago, an updated port of the FUSE ZX Spectrum emulator was uploaded to OS4Depot. My first computer was a ZX Spectrum 48k, although I eventually ended up with a +3 model before I upgraded to my first Amiga. I was looking forward to getting this emulator installed so I could re-visit some of the classic 8-bit games I used to play; my favorite game is still Manic Miner although I never manage to get past the “Eugene’s Lair” level on my first attempt!
While I’ve been having a lot of fun with the new software written specifically for AmigaOS 4, the bulk of my software is still “classic” titles that used to run on my A1200. One of the first things I did when I set up my X5000 was to transfer my old Amiga’s hard drive over so I could continue running this library of software. I also wanted to set up an emulation of my A1200 so I can quickly launch a classic Workbench 3.9 session and pick up all my old projects and bits of ...
As you may have seen with my latest music project, I’ve been getting back into the Amiga scene in a big way over the last year. Granted, a large part of this is nostalgia on my part; the Amiga was a lifeline to me during my teenage years and was responsible for starting my twin interests of computing and music. But I’ve always been amazed by the sheer tenacity of the Amiga scene - nearly 30 years on from when I first got my Amiga 600, the scene is still going (albeit fract...
This is my rock/metal cover of a tune from the classic Mahoney & Kaktus Amiga demo, “Sounds of Gnome” (http://www.pouet.net/prod.php?which=5583). Specifically, the tune was called “Jobba” and the intro also borrows from the intro song on The Great Giana Sisters.
Here’s my latest music project - a little progressive rock/metal track called “The River”. Based on some riffs I had knocking around in my head for several years now, and I finally got them down in a form I’m pretty happy with!
On July 18th, 2014 - 3 years to the day of writing this post - I pushed the first public code of Tiller to Github. Back then, it was just a simple little tool that I wrote (mostly as a learning exercise), found useful and thought others may like to use.
Tiller has just seen its 0.8.0 release, and as you’d expect from a major version bump, it’s a big one. There’s some breaking changes if you write your own plugins - nothing too major, a quick search & replace should fix things for you. But, more importantly, there’s a major new feature which brings two big improvements.
Posted this in the Gitter chat but just to spread it to a wider audience : Tiller 0.8.x will be coming in a little bit; the reason for the 0.8 version bump is that there are some internal changes which will break any custom plugins you may have written. The relevant commit is here: markround/tiller@c2f6a4f.
And now a diversion from most of my geeky posts! I’ve just finished (well, as “finished” as most of my musical projects get) my latest track: “The Pleiades”, featuring the talents of my sister, psy-trance producer Spinney Lainey on flute. I’ve still got a long way to go on my journey through the world of music production, but this is the first thing I’ve felt more or less happy with and wanted to share it with the world. Hope you enjoy!
It’s only a minor version number bump, but Tiller 0.7.8 now brings a major new plugin which I’m really excited about: it now supports the awesome Consul system from Hashicorp. Consul is a distributed key/value store and service discovery mechanism - the basic idea is that you have a Consul cluster consisting of multiple nodes for high availability and performance. You then place all your configuration values in it, and also register your services (like web server backends...
Thanks to a nice suggestion by kydorn in issue #18, the file datasource supports a global_values: block, so you can now create defaults per environment (and use the defaults datasource to provide a default for all environments). This is available in Tiller 0.7.7, which was just released.
Another super-quick update : There’s now a Gitter chatroom for Tiller. Feel free to drop by for help and chat with other users!
I recently released Tiller 0.7.6 with a new feature for the environment_json plugin. This can be used to provide dynamic configuration for Docker containers, using JSON in an environment variable. I previously posted a complete example of how you’d make use of this. However, most Tiller plugins provide values as “global” sources, which will be over-ridden by local, template-specific values. The documentation makes note of this in the “Gotchas” section, but there’s now anot...
Yesterday, I released Tiller 0.7.0, and as you might imagine with a version “bump” from the 0.6.x series, there’s a few changes. Some of it is internal stuff that you’ll probably only notice if you’re building your own plugins for Tiller, but there is a fairly big new feature for end users: Single-file configuration.
If you’ve been using Docker for a while, you probably know that you can use Tiller to generate configuration files inside your containers. This means you can provide a single container for running in a variety of different environments (think of a web application that needs different database URIs and credentials depending on where it is run).
Just a quick “heads up” for users of Tiller - version 0.5.0 has just been released and has a few new features, and some other changes. Firstly, I have added support for per-environment common settings. Normally, you’d do things like enable the API, set the exec parameter and so on in common.yaml, but as per issue #10, you can now specify or override these settings in your environment configuration blocks. Simply drop them under a common: section, e.g.
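To illustrate, a per-environment override block might look something like this (a sketch based on the description above - the exact keys and values are my assumptions, not copied from the release):

```yaml
# Hypothetical common.yaml fragment: the settings under this
# environment's "common:" section override the top-level defaults.
common:
  exec: "nginx"

environments:
  production:
    common:
      exec: "/usr/sbin/nginx -g 'daemon off;'"
      api_enable: true
```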
Tiller v0.3.0 has just been released, which brings a couple of changes. The first is that the ordering of plugins specified in common.yaml is now significant. Tiller will run these plugins in the order that they are specified; this is important as before the order was effectively random, so your templates may change with this release if you rely on merging values from different sources (hence the major version bump indicating a breaking change).
A quick update for users of my Solaris 11 x86 packages. I’ve created a GNU Bash 4.3 package which includes the patch for the much-publicized Shellshock vulnerability. As the package name “bash” also matches the one provided by Oracle, as usual you’ll just need to specify the full FMRI when installing:
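By way of illustration, the install command would look something like this (the publisher name here is hypothetical - substitute the one my repository actually uses):

```shell
# Specify the full FMRI so IPS resolves the third-party package
# rather than Oracle's own "bash" (publisher name is illustrative):
pkg install pkg://markround/bash
```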
Tiller 0.2.2 now brings a simple HTTP API that allows you to query the state of the Tiller configuration inside running Docker containers. This may be particularly useful when you’re attempting to debug configuration issues; you can quickly and easily check the templates, global values and Tiller configuration.
Tiller 0.1.4 has just been released, and brings a few new improvements. Firstly, you can now use the -b, -l and -e command-line flags to set the tiller_base, tiller_lib, and environment values respectively. This makes things a little neater when debugging or testing new configurations on the command line.
After a recent query was posted on the Tiller issue tracker, I thought it might be useful to show a complete example of using Tiller and one of its plugins to create a dynamic Docker container. I assume you’re at least somewhat familiar with Tiller. If not, take a quick look through my documentation and other related blog posts.
Just a quick update : Tiller has just been updated to v0.0.7. I’ve just added a new EnvironmentDataSource which is super-simple, but very convenient. It makes all your environment variables accessible to templates (as global values) by converting them to lower-case, and prefixing them with env_.
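The transformation is simple enough to sketch in a few lines of Ruby. This is an illustrative re-implementation based on the description above, not Tiller's actual code:

```ruby
# Illustrative sketch of what the EnvironmentDataSource does: turn the
# process environment into Tiller-style global values by downcasing
# each key and prefixing it with "env_".
def environment_globals(env = ENV)
  env.to_h.each_with_object({}) do |(key, value), globals|
    globals["env_#{key.downcase}"] = value
  end
end

# For example, HOME=/root becomes the global value "env_home".
puts environment_globals({ "HOME" => "/root", "PATH" => "/usr/bin" })
```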
After several days of hacking on my Runner.rb project, I’m pleased to say that I’ve completed a much more polished and complete solution to shipping multiple configuration files in a single Docker container. The project is now known as Tiller and has been published as a Ruby gem so you can just run gem install tiller to install it. You can find a good overview of it, and how it works, over at the Github README.md, but it’s still essentially the same approach:
I’ve been using Docker containers on Linux systems for a while now, and have recently developed a neat solution to what I imagine is a fairly common problem.
Sensu is a monitoring framework written in Ruby. It’s small and very easy to extend, as well as being extremely scalable. It uses a message bus to communicate with its different components (clients, servers, api hosts) as well as external systems such as Graphite, which can be used to produce graphs and store metrics.
I had a bash at finishing off the Ruby gem dependencies for Sensu on Solaris 11 over the last few days (and a bunch of other stuff, but I’ll write about that a bit later).
My work on packaging up Sensu for Solaris 11 continues, and I’ve just tackled the Ruby 2.0.0 package and some associated Ruby Gems. These are all pre-compiled x86 Solaris 11 IPS packages and are currently available in the /dev branch of my repositories. To install everything in one go, just do the following:
As part of my work to package up the fantastic Sensu monitoring framework for Solaris 11, I have just uploaded a complete package of Redis 2.8.9 to my IPS package repositories (x86 only at the moment - see the docs linked below). This also includes a SMF manifest, so a pkg install redis should provide you with all you need to get going straight away:
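Assuming the repository publisher is already configured, getting Redis up is then a two-step affair (the SMF instance name here is my assumption based on the bundled manifest):

```shell
# Install the package, which delivers the SMF manifest alongside it...
pkg install redis
# ...then start the service via SMF (instance name is illustrative):
svcadm enable redis
# Check it has come online:
svcs redis
```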
Yesterday, Oracle announced Solaris 11.2 which includes a lot of interesting new features; not least of which is a full OpenStack distribution. There’s a lot of other improvements as well to all areas of the OS, from ZFS administration to the IPS system and Automated Installer. It also looks like Puppet is now included for systems management. All in all, a good release although it still pains me to see the “Oracle” logo slapped over everything, as well as the general lack ...
Well, it was about time. This blog has been stagnating for a long time, partly due to the clunky PHP-based system that used to run it. Needless to say, although I used to do a lot with PHP, I’m now old enough to know better. I’ve therefore done a “rip & replace” update on this site, switching instead to the wonderful, Ruby-based Octopress - a static blogging system built on top of Jekyll.
This is a bit of a departure from the rest of my blog posts, as it relates to my main hobby and current interest - bass guitar and amplification. I play in a band and have spent a lot of time building out my main rig for live shows and rehearsals, but I recently ran into a problem with the latest addition. I found a solution to it (and lots of people suffering from the same issue), so I’m posting it here in the hope that it might help someone else. I had just bought an Avid ...
I’ve spent the last week experimenting with IPv6; it now means that my whole home network and this website run over IPv6 as well as IPv4. As I’ve spent a while playing with this technology, I thought I’d write my notes up here in the hopes that it will help someone else.
I have finally got a working build environment for my SGI IRIX systems (an R14k Fuel and a dual R12k Octane2) and have packaged some open-source software for the fantastic Nekoware project. If you’re a fan of classic Unix systems, I strongly recommend heading over to their forums - there’s also a pretty strong Sun and HP contingent there among the SGI fanatics! Anyway - the two packages I have built so far are the fantastic pv (Pipe Viewer) tool and Mercurial DVCS. PV i...
I’ve finally had the chance to devote some time to experimenting with some of the new features in Solaris 11. This article is really just intended as a walk-through of my first few weeks using Solaris 11 - a "kick of the tyres", so to speak. There is far too much that is new for me to cover everything, so I’ll be adding to this article and updating this site as I go through it. I’m also assuming the reader is familiar with Solaris 10; if you feel some parts need clarificat...
Introduction I’ve been using and evaluating Citrix XenServer now for a while, and felt I should really post a review. I haven’t seen much detailed coverage of this product at the level I’m interested in, so what follows is my take on it from a Unix sysadmin’s perspective. There won’t be any funky screenshots or graphics; instead, I tried to cover the sort of things I wanted to know about when I was looking at it as a candidate for our virtualization solution at work.
We have recently started using Citrix Xenserver in production at work (fantastic product, see my review for more information) and needed a simple backup solution. Our VMs run from an iSCSI SAN and are backed up daily through various methods - e.g. Bacula for the Unix/Linux systems. However, we wanted the ability to quickly roll back to a previous VM snapshot, and get up and running quickly if our SAN failed for whatever reason. Our solution was to create a large shared NFS...
Well, that’s that, then. Solaris as we knew it is pretty much dead. I’ve suspected for a while now that Oracle’s intentions regarding Solaris were not what the community, or us "old-school" Solaris sysadmins wanted or had hoped for.
One of my favourite interview questions I used to ask candidates was a variation of "Desert Island Discs" : Imagine you are going off to be a sysadmin on a desert island, with no internet access, and further imagine that the previous sysadmin was a total fascist with a minimalist install policy. We’re talking a bare-bones "classic" Solaris installation, or a minimal Debian system here. You’ve got SSH installed, but not much else. Before you hop on the boat, however, you ar...
Thanks to the awesome work of Boogie Shafer, there is now a FreeBSD port of my iostat scripts and templates for Cacti. I have included the modified tarball that was sent to me; it is inside the archive as "cacti-iostat-1.x-boogie_freebsd_linux_changes.tar.gz".
I’ve just got a new array to play with at work for a small Xen virtualisation setup. It’s the Dell MD3000i, which I’ve seen a few posts about before, but thought I’d chime in with my experiences. It is a budget array, but I have to say for the price it’s not a bad bit of kit.
This is part 5 of a series on building a redundant iSCSI and NFS SAN with Debian.
Just a quick update to my Cacti iostat monitoring scripts and templates - thanks to the work of Marwan Shaher and Eric Schoeller, the package now supports Solaris! The updated package is available here : cacti-iostat-1.4.tar.gz. I have also updated the original blog post with the new package.
My response to today’s news:
I was talking with a friend a few days ago, and the subject of password security came up. Now, we all know that we’re supposed to pick a secure password, use at least 8 characters and never pick a word from the dictionary. But then I was asked how long it would take to brute-force a password using a dictionary attack, and I had to admit I had no idea. I knew it would only be a matter of minutes, but wanted to give it a try.
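The back-of-an-envelope maths is straightforward. As a sketch, assume a wordlist of around 250,000 entries and a rate of 1,000 guesses per second (both numbers are illustrative assumptions, not measurements from the experiment):

```ruby
# Rough worst-case time for a dictionary attack: every word in the
# list is tried once. The inputs are illustrative assumptions.
def worst_case_seconds(wordlist_size, guesses_per_second)
  wordlist_size.to_f / guesses_per_second
end

seconds = worst_case_seconds(250_000, 1_000)
puts "Worst case: #{seconds} seconds (~#{(seconds / 60).round} minutes)"
```

With those figures the whole dictionary is exhausted in a few minutes, which matches the intuition above.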
Just a quick post this time, as I thought this may help others in the same situation I found myself in recently. At work, we’ve been using OpenVPN which works a treat with Unix clients; Windows clients (Vista in particular) were more problematic, though. None of our regular users have admin privileges (for obvious reasons), but this caused problems with the routing setup: users could use the GUI tool, but could not create the necessary routes required to direct traffic ov...
This is part 4 of a series on building a redundant iSCSI and NFS SAN with Debian.
This is part 3 of a series on building a redundant iSCSI and NFS SAN with Debian.
I’ve been looking for ages for a tool to parse the output from "iostat" on Linux, and graph it in Cacti. I found a few scripts and templates that did some of what I was looking for (disk I/O etc.), but nothing that gave me the full set of statistics such as queue length, utilisation, service time etc. I finally got round to writing my own set of templates and a data gathering script to provide this information, and it seems to work very well. So that others can benefit, I’...
Earlier on today, the main Blastwave website got replaced by this message :
This is part 2 of a series on building a redundant iSCSI and NFS SAN with Debian.
It’s been a while now since I last updated this blog with any decent material so I thought I’d dust off some of my notes on building a redundant iSCSI and NFS SAN using Debian Etch.
As I’ve been investigating ZFS for use on production systems, I’ve been making a great deal of notes, and jotting down little "cookbook recipes" for various tasks. One of the coolest systems I’ve created recently utilised the zfs send & receive commands, along with incremental snapshots, to create a replicated ZFS environment across two different systems. True, all this is present in the zfs manual page, but sometimes a quick demonstration makes things easier to unders...
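The core of such a recipe is only a handful of commands. A minimal sketch, assuming a local pool called tank, a remote host called replica and a backup pool there (all names hypothetical):

```shell
# Take an initial snapshot and seed the remote copy with a full send:
zfs snapshot tank/data@monday
zfs send tank/data@monday | ssh replica zfs receive backup/data

# Later, snapshot again and ship only the delta between the two
# snapshots using an incremental (-i) send:
zfs snapshot tank/data@tuesday
zfs send -i tank/data@monday tank/data@tuesday | ssh replica zfs receive backup/data
```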
While browsing the ZFS man page recently, I made an interesting discovery: ZFS can export block devices from a zpool, which means you can separate "ZFS the volume manager" from "ZFS the filesystem". This may well be old news to many; however I haven’t seen many references to this on the web, so thought I’d post a quick blog update.
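A quick sketch of what that looks like in practice (pool and volume names are hypothetical):

```shell
# Carve a 10 GB block device (a "zvol") out of the pool:
zfs create -V 10g tank/myvolume

# The device nodes appear under /dev/zvol/, and can be treated like
# any other disk - for example, formatted with UFS instead of ZFS:
newfs /dev/zvol/rdsk/tank/myvolume
```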
I’ve recently been experimenting with ZFS in a production environment, and have discovered some very interesting performance characteristics. I have seen many benchmarks indicating that for general usage, ZFS should be at least as fast if not faster than UFS (directio notwithstanding - not that UFS directio itself is any faster, but anything that does its own memory management such as InnoDB or Oracle will suffer from the double-buffering effect unless ZFS has been tuned...
I’ve been investigating the new improved mod_proxy in Apache 2.2.x for use in our new production environment, and in particular the built-in load balancing support. It was always possible to build a load-balanced proxy server with Apache before, using some mod_rewrite voodoo, but having a whole set of directives that do all the hard work for you is a great feature. There is however, a catch. It won’t work out of the box with PHP sessions, or many other applications. I’ve ...
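The usual approach to the PHP sessions problem is sticky sessions: tag each backend with a route and tell the balancer to honour the PHP session cookie. A minimal sketch of the Apache 2.2 directives involved (hostnames and route names are illustrative, and this is not necessarily the exact configuration from the post):

```apache
# Define a balancer with two workers, each tagged with a route name:
<Proxy balancer://phpcluster>
    BalancerMember http://web1.example.com route=web1
    BalancerMember http://web2.example.com route=web2
</Proxy>

# Route requests carrying an existing session back to the worker
# that created it, by matching on the PHPSESSID cookie/parameter:
ProxyPass / balancer://phpcluster/ stickysession=PHPSESSID
```

Note that for the match to work, the session ID must carry a ".routename" suffix (e.g. PHPSESSID=abc123.web1); arranging for PHP to append it is the other half of the trick.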
At work, we just migrated a database server from a Sun Fire V240 to a Sun X4100. This makes it the first AMD64 system we’ve put into production, and the performance advantage is staggering. I could post the benchmarks and various statistics, but I believe the following graphs from the cut-over paint a far more interesting and convincing argument for the price/performance benefit of Sun’s AMD64 offerings…
Blastwave PHP4 packages are available in /testing. These bump PHP4 up to 4.4.3, which is primarily a bug fix and security upgrade. It also fixes an issue with the packaging of the current 4.4.2 packages, which resulted in a non-working PEAR installation due to an error in the upstream source tarball. I hope to get these packages released to unstable in the next few days - I’ve been running them for a few days here and there appears to be no issues, but as always...
At work, we’re developing a brand new in-house CMS based on the Symfony framework. As it uses no mod_rewrite rules or other Apache dependencies and is a "clean break" for us, I figured it would be an ideal candidate for benchmarking under LigHTTPd, comparing it to Apache 2.2 in order to give me some statistics to complement my last blog entry on the subject. The results from the "ab" Apache-benchmark tool are pretty stunning - although I’m still at a loss to explain ju...
In my role as a sysadmin, the bulk of the Unix systems I administer are web servers, running the now standard open-source stack of Apache, MySQL and PHP (note that whatever my personal misgivings may be about those elements, they are pretty much the standard now and what’s been mandated at work). If you’re using PHP on Unix, it’s pretty much taken for granted that you’ll be running it through Apache via mod_php. In fact, it almost goes without saying that if you’re doing a...
PostgreSQL 8.1.4 has just been released, and Blastwave has updated packages as usual. This is a major update security-wise; if you’re running a PostgreSQL server you really need to upgrade as soon as possible. The PostgreSQL site has a page detailing all you need to know about this particular problem, and how it may affect you. Packages will be making their way out to the mirrors shortly, and I’ll also be updating my PHP packages the moment the upstream source is released.
I found one of my favourite quotes from Carl Sagan again after losing it a while back and felt compelled to post it here. It just blows me away each time, particularly when you put it in context with this picture, taken by Voyager 1 when it was 4 billion miles from Earth and swung round for one last photo of "home".
Warning: Rant ahead.
Hot on the heels of the 8.1.2 package, 8.1.3 has been released. This fixes a serious security issue, and while it hasn’t yet made it into Blastwave’s “unstable” tree, you can grab it from our testing directory. Expect to see it available from our mirrors through pkg-get shortly.
New PostgreSQL and related packages (postgresql, libpq and postgresqlcontrib) have been released to the “unstable” tree. These packages bump the version to 8.1.2, and include a number of important bug fixes. Of particular importance is the resolution of a bug present in previous versions of PostgreSQL that could lead to data loss - for more details, please see the 8.1.2 release announcement.
While I’ve been preparing an update to the 2.2.6 Blastwave packages of nessus, Tenable just released their new 3.0 package - offering a whole host of enhancements, including a very funky looking RSS feed for plugin updating and major performance improvements, to name just two. Except this time, I’m not doing my usual w00t-dance, and I won’t be packaging it, or even running it, for that matter.
Yesterday, Sun announced the availability of their new CoolThreads powered servers. They’re powered by the latest incarnation of the UltraSPARC range of processors - so naturally, you get all the full binary compatibility assurances that brings. Sun are making much of the efficiency and "greenness" of these new boxes; but while I’m all for saving penguins and polar bears, what really stands out is the sheer performance these boxes bring. Check out the entry-level T1000, f...
I’m pleased to announce that PostgreSQL 8.1.0 packages are now available from Blastwave, and should be making their way out to the mirrors as I type. This set of packages includes libpq, the core postgresql package, contrib, and the updated set of JDBC drivers.
Finally got them off to the mirrors. And of course, life being what it is, the moment I do that, the PHP 4.4.1 source is released. Hey ho… Still, the official announcement is included in the full text of this post. Expect to see 4.4.1 packages in a few days… Oh, and PostgreSQL 8.1.0 packages as soon as the final release is announced. I’ve been building all the betas and RCs so am ready to get new packages out there as soon as it’s available. New PHP 4.4.0 pa...
Now the Blastwave “stable” freeze is over, there’s a further round of updates for my PHP4 packages which fix a few bugs and add a few new features. All available from http://www.blastwave.org/testing/ as usual.
Packages at http://www.blastwave.org/testing/ - just an incremental upgrade this time. From the Nessus news file:
My new PHP packages are ready for testing at the usual place. They are taken from the new 4.4.0 release - the full release announcement is available on the PHP site.
I have finished work on a bunch of new PHP packages - there are a lot of changes with these, so be warned that there will probably be a few tweaks needed here and there before they’re ready for the prime time. However, if you’re feeling brave and would like to test, I’d appreciate any comments or feedback. They are all available at the usual place, http://www.blastwave.org/testing/ . A full list of packages is included in the full body of this entry.
Well, by now, pretty much every news site and blog across the world has picked up on the fact that Apple are making the move to x86. It doesn’t really come as much surprise; IBM made promises it hasn’t met, and the G4/G5 line has stagnated while x86 romped past it. As a recent Mac convert, I can’t help but feel in some small way annoyed - but it’s mostly down to the fact that the resale value of my Powerbook just took a nosedive.
A bit of a bumper update from me… several packages are now available for testing at http://www.blastwave.org/testing. Any comments welcome - providing no major glitches are uncovered, these should make their way out to the main pkg-get repository and mirrors in a week or so.
Although things have been quiet on the Blastwave front recently, it hasn’t been because of lack of activity. I am currently working on a set of PostgreSQL 8.0.3 packages, which resolve several bugs as well as some fairly serious security issues, the full details of which can be found here.
Now I’ve got my Powerbook back from repairs, I decided to continue my experiment to get Solaris 10 installed under Virtual PC 7. I’ve finally managed to get a useable system, although it did take a fair amount of hacking. The end result is a fully useable Solaris 10 installation complete with zones and dTrace, and provides a useful base for installing Blastwave packages.
Despite Solaris 9 being out for some time now, I have yet to see many overviews of it. I could find technical documents and “executive summaries” all over the ‘net - but nothing that provided me with an idea of what the new OS would be like. In light of the fact that I’ve recently purchased another UltraSPARC machine for home use, I downloaded the 3 discs from Sun, grabbed a coke and settled down to install.