
Canonical intros Microcloud: Simple, free, on-prem Linux clustering


At an event in London yesterday, Canonical hosted an amusingly failure-filled demo of Microcloud, its new easy-to-install, Ubuntu-powered tool for building small-to-medium scale, on-premises high-availability clusters.

The intro to the talk leaned heavily on Canonical’s looming 20th anniversary, and with good reason. Ubuntu has carved out a substantial slice of the Linux market for itself on the basis of being easier to use than most of its rivals, at no cost – something that many Linux players still seem not to fully comprehend.

The presentation was as buzzword-heavy as one might expect, and it’s also extensively based on Canonical’s in-house tech, such as the LXD containervisor, Snap packaging, and, optionally, the snap-based immutable distro Ubuntu Core. (One buzzword was conspicuous by its absence, and we were pleased: it didn’t crop up until the Q&A session that Microcloud is not built on and doesn’t use Kubernetes, although you can run Kubernetes on it if you wish.) We’re certain this is going to alienate a lot of the more fundamentalist Penguinistas, but we are equally sure that Canonical won’t care. In the immortal words of Kevin Smith, it’s not for critics.

[YouTube video]

Microcloud combines several existing pieces of off-the-shelf FOSS tech to make it easy to link anywhere from three to 50 Ubuntu machines into an in-house, private high-availability cluster, with live migration and automatic failover. It uses Canonical’s own LXD containervisor to manage nodes and workloads, Ceph for distributed storage, OpenZFS for local storage, and OVN to virtualize the cluster interconnect. All the tools are packaged as snaps. It supports both x86-64 and Arm64 nodes, including Raspberry Pi kit, and clusters can mix both architectures.
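For the curious, bootstrapping is meant to be a two-step affair. Here’s a minimal sketch based on our reading of Canonical’s quick-start documentation – snap names and commands as published at launch, so do check the current docs before pasting:

    # On every machine that will join the cluster:
    sudo snap install lxd microceph microovn microcloud
    # Then, on any one machine, start the interactive bootstrap, which
    # discovers the other nodes and walks you through assigning disks
    # to Ceph and network interfaces to OVN:
    sudo microcloud init

After that, the result is an ordinary LXD cluster, driven with the usual lxc client commands.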

The event included several demonstrations using an on-stage cluster of three ODROID machines with “Intel N6005” processors, so we reckon they were ODROID H3+ units – which we suspect the company chose for their dual Ethernet connections. Sadly for Canonical – although amusingly for viewers – the first demo was bringing up a new cluster, and it failed, repeatedly. You can watch the demo for yourself, although because things went badly awry, the session over-ran from a planned 45 minutes to some 80. To give the team credit, they found and fixed the problem – a typo in a script, apparently – but the sequencing of the demos was disrupted as a result, and things got a little confusing.

There’s nothing profoundly ground-breaking here. All these components are out there and can be put together – but, as Linux.org’s introduction to clustering makes clear, doing that yourself is no small undertaking.

The article breaks Linux clusters down rather well, incidentally, and we recommend it. It splits them into four types: HA, load-balancing, distributed, and parallel (Beowulf-style supercomputer) clusters. These definitions are valid, but they also overlap. For instance, Kubernetes creates and distributes “pods” of containers across multiple nodes, mainly to build very scalable web-server infrastructure, but it incorporates both load-balancing and high-availability functionality too. Canonical already has its own Kubernetes distro, which it somewhat ambiguously calls MicroK8s. (We say this because we’ve heard it pronounced both micro-coo-ber-net-es and micro-kates.) Apparently, you can run MicroK8s on Microcloud.
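We haven’t tried that combination ourselves, but in principle it should be no harder than putting MicroK8s inside an ordinary LXD instance on the cluster – something like this sketch, with the instance name our own invention:

    # Launch an Ubuntu VM on the cluster, then drop MicroK8s into it
    lxc launch ubuntu:22.04 k8s-node --vm
    lxc exec k8s-node -- snap install microk8s --classic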

There are already many tools out there that claim to ease cluster creation and management. OpenStack is an obvious one, and of course Canonical already has an offering in that space. This vulture’s favorite description of OpenStack is that it lets you take a few racks full of servers in your garage and turn them into your own private AWS.

Microcloud, as its name implies, is aimed at significantly smaller clusters, but it also looks rather more flexible than most existing Kubernetes-based systems. Multiple vendors have tools for easily building Kubernetes clusters; for instance, in a prior role, this vulture wrote the original installation guide for SUSE’s CaaSP, since discontinued and replaced by its Rancher acquisition. The thing is, while Kubernetes can do other things, it is strongly aimed at microservices: lots of small containers, typically each holding a single app, all working together to provide web servers that can scale on demand very rapidly.

Microcloud is a way to build LXD clusters, and while LXD can support microservices, that’s not its primary focus. LXD is aimed at running “system containers” containing whole Linux installations, complete with their own init systems, sharing only the kernel. This in principle means that you could migrate an existing distro – any distro with any init system, not just Ubuntu – into an LXD container, so long as that distro is able to run on top of an Ubuntu kernel.
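In practice, that’s a one-liner per container. A quick sketch, using the community image server and a container name of our own choosing:

    # A full Debian userland in a system container, sharing the host kernel
    lxc launch images:debian/12 deb-demo
    # Shell in: it has its own init, services, and package manager
    lxc exec deb-demo -- bash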

If the distro requires its own kernel for some reason, that’s covered too: since version 4, LXD has been able to run full virtual machines as well, as we described when Canonical took LXD back in-house earlier this year. Once Canonical’s audibly stressed Thomas Parrott finally got the demo cluster working, he showed it running a full Ubuntu 22.04 GNOME desktop, as well as live-migrating a running NextCloud instance from one node to another.
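Under the hood, those party tricks are plain LXD operations. Roughly – the instance and cluster-member names here are ours, and live migration of a VM needs stateful migration enabled on the instance first:

    # The --vm flag gets a KVM virtual machine rather than a container
    lxc launch ubuntu:22.04 demo-desktop --vm
    # Allow stateful (live) migration, then move it to another member
    lxc config set demo-desktop migration.stateful=true
    lxc move demo-desktop --target node2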

As Canonical’s own documentation on HA clustering demonstrates, it’s no trivial matter. Other vendors have their own tooling to simplify it, such as SUSE’s HA extension to SLE, but this is an enterprise tool for its enterprise distro. Microcloud is free and clusters the free Ubuntu desktop or server distros.

Microcloud is an interesting pitch. Back in the early 1990s, The Reg FOSS desk was a sysadmin for a cluster of DEC VAX machines running OpenVMS, and remains somewhat nostalgic for the clustering and version-control features that were integral parts of that operating system. As we have previously noted, Linux is a UNIX™, and Unix doesn’t have anything like this – although its woefully neglected successor Plan 9 does. However, these days, such functionality has been implemented on top of Linux via multiple layers of abstraction and indirection – a true baklava code implementation. Just as it was complex to implement, it has also been complex to deploy.

Ubuntu has become popular not because of particular technical merit, but because it’s easy and it’s free. Over nearly two decades, its success as the go-to desktop distro led to it also becoming a player in the server space, as impecunious youths slowly graduated to working in IT and naturally reached for the distro they knew best. This vulture has worked for both of the main commercial enterprise Linux vendors, and came away with the impression they still don’t fully comprehend these key twin advantages – and their loyal fans certainly don’t, and will miss no chance to bash the opposition. Holy wars have been a profound, if regrettable, element of the Unix world since well before Intel released the 80286.

Lots of enthusiasts enjoy criticizing Snap, but then, they also enjoy attacking systemd, Wayland, and various other innovations – most of which are designed to make things easier, with scant concern for the feelings of hardcore shell junkies. Controversial or not, none of this matters to ordinary folks who just want things to be easy – such as an app store that lets them install Chrome and Steam.

Canonical offers a wide variety of desktop flavors, plus both Ubuntu Server and Ubuntu Core, supplemented by its decade-old Juju automation tooling. It’s notable that some of its tools simply avoid complexity that other vendors embrace and then try to hide.

Red Hat has been slow to adopt its own next-gen storage system, Stratis, although it has finally landed in RHEL 9.3. As a direct result of not having a file system capable of copy-on-write (COW) snapshots, its Flatpak cross-distro app format relies on some formidably complex underpinnings to deliver rollback: OStree, basically “Git for binaries” – and famously, nobody really understands Git.

Even so, Flatpak remains poor at handling command-line tools, and it can’t be used to build a distro – for that, you must tackle OStree head-on. SUSE, in contrast, has a much simpler transactional package-management system based around RPM, but only because it leans heavily on Btrfs instead – a filesystem that is itself complex and sometimes fragile.

Ubuntu dodges this complexity. Snap works by keeping each app in a single, compressed file, making transactionality easy without COW or anything resembling Git. For its next-gen filesystem, it uses ZFS, arguably the most mature filesystem in the Unix world. At the Ubuntu Summit, Ubuntu SABDFL Mark Shuttleworth told us that before Canonical adopted ZFS, he personally checked with Oracle and with Free Software Foundation legal counsel Eben Moglen and got their OK. So, despite licensing concerns, in the latest Ubuntu 23.10, root on ZFS reappeared as an experimental option. ZFS includes block-level deduplication, so in principle, some future release of Ubuntu could enable dedup and reduce the storage space needed by large monolithic Snap packages – especially when multiple versions of them are kept around. We have no inside info on whether that will happen.
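To be clear, that last bit is speculation on our part, but the mechanism itself is simple enough. A sketch – the dataset name is hypothetical, and ZFS dedup is famously RAM-hungry, so this is not something to switch on casually:

    # Hypothetical: enable block-level dedup on a dataset holding snaps
    sudo zfs set dedup=on rpool/var/lib/snapd
    # Check how much space the pool is actually saving
    zpool get dedupratio rpool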

Canonical has always avoided the ready-rolled server market – perhaps because there were already several contenders even back in 2010. We can’t think of any other vendors chasing the low-end clustering market, though: hitherto, that has been the big boys’ domain. The integration of LXD, Snap packaging, and advanced tools like Ceph, OVN, and ZFS could make for a compelling bundle. (It could also relegate the LXD fork Incus to relative irrelevancy.) If Ubuntu can bring its one-two punch of a free product and simple deployment to Linux clustering, it could have a hit on its hands.

®



