:: Re: [DNG] Incus
Author: Martin Steigerwald
Date:  
To: dng
Subject: Re: [DNG] Incus
Steve Litt - 29.06.24, 08:37:47 CEST:
> Simon Walter said on Fri, 28 Jun 2024 22:47:00 +0200
>
> >On 2024-06-28 04:18, o1bigtenor via Dng wrote:
> >> LXD was enough - - - - the joys I experienced there - - - well - - -
> >> the old
> >> saying is once burnt and twice shy!
> >
> >Ah... So it's a fork of LXD. OK. I proceed with more caution then.
>
> Is there something wrong with LXD?


Apparently so.

LXD is a Canonical project, and some LXD developers forked it as Incus
because they were not happy with Canonical taking more control over the
project. That is why they aim to provide a fully community-led
alternative to LXD. An excerpt from the announcement:

"The goal of Incus is to provide a fully community led alternative to
Canonical’s LXD as well as providing an opportunity to correct some
mistakes that were made during LXD’s development which couldn’t be
corrected without breaking backward compatibility."

"There is no clearly defined roadmap at this point. Incus will be tracking
changes happening in LXD and will likely in time diverge from it as
different decisions get made.
A stable release of Incus is likely at least a couple of months away so
existing LXD users shouldn’t rush to find a way to migrate quite yet!"

https://linuxcontainers.org/incus/announcement/

There have also been mailing list posts, I think, with more detailed
reasons for the fork.

That written, of course Incus currently is still largely LXD. So whatever
mistakes the Incus developers are referring to may still be in there,
given the short time frame since the fork.

As for practical reasons not to use LXD or Incus… I have no idea. The last
time I dealt with containers was a long time ago, except for the LXC
containers maintained by Proxmox VE.

I decided I did not want to do too much of the low-level work myself by
using LXC directly. I also had a discussion with a former co-worker who
has done a lot with LXC, and I read some blog posts. He uses LXC directly.
But I especially did not want to micro-manage the networking.

So for me it was either Proxmox VE, which currently relies heavily on
systemd, although even they switched away from recommending
systemd-timesyncd, for example; or Incus as an LXD fork, trusting the
Linux Containers community.

The decision was to use Incus. I created a runit service directory
template for it, and it is running just fine. My hoster would not support
VMs inside their VMs anyway, and even if they did, Incus can do VMs as
well.
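For reference, such a runit service directory might look roughly like the
sketch below. This is not Martin's actual template; the daemon path and
the admin group name are assumptions and may differ per distribution:

```shell
# Hypothetical runit service directory for incusd (a sketch, not the
# template described in the post; check your distro's incusd path).
mkdir -p /etc/sv/incusd/log /var/log/incusd

cat > /etc/sv/incusd/run <<'EOF'
#!/bin/sh
exec 2>&1
# Supervise the main Incus daemon in the foreground.
exec /usr/libexec/incus/incusd --group incus-admin
EOF

cat > /etc/sv/incusd/log/run <<'EOF'
#!/bin/sh
exec svlogd -tt /var/log/incusd
EOF

chmod +x /etc/sv/incusd/run /etc/sv/incusd/log/run
# Enable it; the runsvdir location varies (/var/service, /etc/service, ...).
ln -s /etc/sv/incusd /var/service/
```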

As for practical issues regarding LXD, I hand that over to Simon and
o1bigtenor. I never used LXD, having been very well aware that it
basically is a Canonical project, so I can say nothing about possible
issues with it. I do not trust Canonical very much.


So far Incus just does what I expect from it. Except maybe for a Linux
container having no PATH set, which broke starting services with runit in
a Devuan Daedalus container. I already referred to the bug report.
However, that may not even be Incus-specific behavior but the default of
the underlying LXC stack.
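Two possible workarounds for the missing PATH come to mind; both are
assumptions on my part, not what the bug report recommends, and the
container name "c1" is a placeholder:

```shell
# 1. Set PATH in the instance configuration via the environment.* keys,
#    so the container's init process inherits it:
incus config set c1 \
    environment.PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# 2. Or export PATH explicitly at the top of each runit run script
#    inside the container:
#      #!/bin/sh
#      export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
#      exec my-daemon
```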

Also, I am happy that I can run Incus with quite minimal dependencies,
and that on BTRFS I can snapshot or copy containers in an instant.
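On a BTRFS storage pool those operations are copy-on-write and therefore
near-instant. A sketch of the relevant commands (instance and snapshot
names are placeholders):

```shell
# Snapshot a container and roll back later if needed:
incus snapshot create web1 before-upgrade
incus snapshot restore web1 before-upgrade

# Copy an instance; on a BTRFS pool this is a cheap CoW clone:
incus copy web1 web1-test
```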

It was also easy enough, after some research, to tell Incus to forward
certain ports into certain containers, for example 80 and 443 into my
reverse proxy. And it does DNS and DHCP for the containers automatically
with dnsmasq. I assigned fixed and readable addresses like 10.10.10.20
and fd10:10:10:10::20, as I really like setups that have some aesthetic
aspect to them. But I can just use host names within the container
network, so I can change IP addresses around at will in Incus without
breaking anything.
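One way to do this, sketched below, uses proxy devices for the port
forwards and a device override for the fixed addresses. The container
name "rproxy" and the device names are placeholders; this is not
necessarily how Martin configured it (Incus also offers network
forwards as an alternative):

```shell
# Forward host ports 80/443 into the reverse-proxy container:
incus config device add rproxy http proxy \
    listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80
incus config device add rproxy https proxy \
    listen=tcp:0.0.0.0:443 connect=tcp:127.0.0.1:443

# Pin fixed addresses handed out by the managed dnsmasq:
incus config device override rproxy eth0 \
    ipv4.address=10.10.10.20 ipv6.address=fd10:10:10:10::20
```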

All in all I am quite happy so far. Whatever issues LXD may have, I have
had no major issues with Incus. I hope it stays that way, especially with
the next major upgrade. So far it is running the 6.0.0 Incus packages,
upgraded from, I think, 0.6.0, which I think was quite similar, so not
really a major upgrade but mainly a bump of the version number. That
upgrade went totally smoothly.

I have made a simple shell script which updates my containers, and with
the Alpine-based Nextcloud installation it took just about 5 minutes to
upgrade to their latest release, maybe even less: upgrade the Alpine
packages via the script, then manually inside the container run "occ
upgrade" and "occ db:add-missing-indices". Of course a major upgrade to a
newer PHP version would be some more work.

As for the web servers: maybe I split things up too much, having a
dedicated web server container even for some subdomains, but it was just
so easy to do. Copy an existing Nginx or Apache web server container,
adapt it, copy the web site files there. Done. That way I can use Apache
for some of my older legacy sites without having to worry about migrating
everything to Nginx, and I could still migrate a single Apache web server
at a time to Nginx later. Of course it means keeping more containers up
to date, but with Alpine it's a breeze. Dist-upgrades take less than a
minute with "apk update" and "apk upgrade". Their package manager goes at
Warp 9.9 or so.
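The update script mentioned above might look roughly like this. It is a
hypothetical reconstruction, not Martin's actual script, and it simply
skips containers that do not have apk (i.e. the non-Alpine ones):

```shell
#!/bin/sh
# Upgrade packages in every running container that has apk available.
for c in $(incus list status=running -c n -f csv); do
    # Only handle Alpine containers (those where apk exists).
    if incus exec "$c" -- sh -c 'command -v apk' >/dev/null 2>&1; then
        echo "=== $c ==="
        incus exec "$c" -- apk update
        incus exec "$c" -- apk upgrade
    fi
done
```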

I do not (yet) use Ansible on my private setup, and having containers
means I can migrate stuff anywhere easily. I have not yet tested all of
this, but I can tell Incus to create a backup of a certain container and
import it into another host. I could even cluster Incus, but I have no
plans to do so at the moment. I feel more independent from my hoster,
because migrating my setup somewhere else, or even distributing it over
more than one server, would be much less work than it was before, when
everything ran directly on the server. Also there is at least some
isolation between the different services.
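The backup-and-import path can be sketched as follows; instance, file,
and host names are placeholders:

```shell
# On host A: export a container to a tarball.
incus export nextcloud nextcloud-backup.tar.gz

# Copy it over and import it on host B:
scp nextcloud-backup.tar.gz hostB:
ssh hostB incus import nextcloud-backup.tar.gz
```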

It is also quite lightweight, especially with Alpine inside containers:

% free -m
         total     used      free    shared  buff/cache   available
Mem:      7946     2401       627        98        5320        5545
Swap:        0        0         0


That is for 12 containers, 11 Alpine and one Devuan, including a
Nextcloud with MariaDB, Nginx and Coturn for WebRTC video chat, a mail
server with Postfix, Dovecot and Redis-based Rspamd, and a Quassel core
server. The current server is quite a bit more powerful than what would
strictly be needed. But those >5 GiB of caches can certainly help
performance :). Even though BTRFS reaches scrubbing speeds of up to
4 GiB/s or even a bit more when the 84-core AMD EPYC server the VM is
hosted on is quiet enough. And even when it isn't, I easily get more than
1.5 GiB/s. So having everything in memory may not even matter all that
much.

So to put it short: Incus gets the job done for me.

--
Martin