Author: Martin Steigerwald
Date:
To: dng
Subject: Re: [DNG] Irony
Martin Steigerwald - 07.11.23, 22:37:47 CET:
> […] It was one of the reasons for switching to Devuan:
>
> I switched swap mounting to be label based in fstab but forgot to set
> the label on the swap volume. Systemd refused to boot. And even when
> booting into Debian's emergency mode back then, it refused to start the
> ssh daemon, because if the system does not start 100% correctly, then of
> course the ssh daemon cannot be started. And refusing to start the ssh
> daemon even took 90 seconds. You read that right: Systemd refused to
> execute "systemctl start ssh" because of a failed swap mount! Are you
> even kidding me? Of course the machine had more than enough RAM to start
> the ssh daemon without working swap!
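For reference, the label-based swap setup described above roughly amounts
to the following; the device name and label here are just placeholders,
not the actual ones from that machine:

  # give the swap volume a label (util-linux swaplabel):
  swaplabel -L swap0 /dev/sdXn
  # then reference the label in /etc/fstab instead of the device path:
  LABEL=swap0  none  swap  sw  0  0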
I have another ridiculous one, just experienced recently on Ubuntu Server
22.04 LTS.
For some reason, after an upgrade the ssh daemon only came up partially.
From the error message I thought it was due to some outdated crypto-related
library still in memory, but I am not completely sure, as a service restart
should cause it to load the new libraries. Maybe the upgrade broke the
running ssh daemon and left the systemd service in an inconsistent state,
which may have caused all of the following.
Now what did systemd-tmpfiles do when I attempted to restart the ssh daemon
in order to learn more about the issue?
It removed the directory "/run/sshd" when it stopped the service, but
because there was some error in bringing the ssh daemon up again, it did
not recreate it! Yet the ssh daemon process was running and complaining
about "/run/sshd" missing, which led to it refusing to accept new ssh
connections. I created "/run/sshd" manually and the ssh daemon immediately
accepted new connections again.
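For the record, the manual fix was nothing more than recreating sshd's
privilege separation directory. As far as I know 0755, owned by root, is
what the packaging normally sets up, but check your own system:

  mkdir -m 0755 /run/sshd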
Yes, you read that right: Systemd made ssh inoperable because it thought
the daemon did not come up okay, but despite the error the ssh daemon was
running and accepted new connections once I created "/run/sshd" again!
Lucky me that existing connections are not terminated on an ssh daemon
restart!
This all-or-nothing, binary approach to system and service state in
implicit Systemd policies causes all kinds of issues you would not have
without that nonsense!
You just don't do that. Ever.
So before rebooting the server, in order to be better safe than sorry, I
installed a cron job, a good old crond-based cron job, that creates
"/run/sshd" in case it does not exist, because as it was a remote server I
could not afford to lose ssh access.
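The cron job itself is a one-liner, something along these lines in a file
under /etc/cron.d (file name made up for this example), running every
minute as root:

  # /etc/cron.d/ensure-run-sshd
  * * * * *  root  [ -d /run/sshd ] || mkdir -m 0755 /run/sshd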
After the reboot that ssh daemon startup error was gone, but I still have
the cron job in place, with a longer interval now, *just in case*. Because
what do I know, it may happen again on another update!
Once again I thought: Are you even kidding me? You can't be serious about
that nonsense, or can you?