Author: Martin Steigerwald
Date:
To: dng
Subject: [DNG] [OT] Re: Another great article about overbloatatiousness and complexification
Hi Steve.

Albeit at least somewhat off-topic for the Devuan project, this is
still an interesting subject.

Steve Litt - 19.02.24, 06:18:11 CET:
> See https://www.theregister.com/2024/02/12/drowning_in_code/


Indeed, indeed. I can really recommend reading through Niklaus Wirth's
plea for lean software.

For a little educational fun project I tried assembling a hello world
example with the flat assembler from the Debian package fasm. The example
source is also in the package. The executable is less than 300 bytes. Yes,
you read that right: *bytes*. An unstripped hello world compiled from C
with gcc is about 15.5 KiB; even stripped it is about 14.1 KiB. So it
seems we waste about 15 KiB for every single executable in Debian/Devuan.
On my system that is already more than 4000 files in /usr/bin and
/usr/sbin alone.
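
For anyone curious, a minimal hello world in fasm syntax for x86-64 Linux
looks roughly like this. It is a sketch from memory, not a verbatim copy
of the example shipped in the package, but it assembles to a tiny ELF
executable in the same spirit:

    format ELF64 executable 3       ; 3 = Linux OSABI
    segment readable executable
    entry start
    start:
            mov edi, 1              ; fd 1 = stdout
            lea rsi, [msg]          ; pointer to the message
            mov edx, msg_size       ; number of bytes to write
            mov eax, 1              ; syscall 1 = write
            syscall
            xor edi, edi            ; exit code 0
            mov eax, 60             ; syscall 60 = exit
            syscall
    segment readable writeable
    msg db 'Hello world!', 10
    msg_size = $ - msg

No compiler, no linker: "fasm hello.asm hello" writes the executable
directly.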

While working on an Amiga-related book project, a German-language book
about AmigaOS 3.2, I spent quite a bit of time with AmigaOS again. Even
after decades I could still tell what every file is for. AmigaOS boots
from hard disk faster than Linux boots from SSD, and that on much slower
hardware. Granted, it had no memory protection, no multi-user support and
lacked some other features that I'd consider essential these days.
However… as Niklaus Wirth correctly states, that still does not explain
the bloat in current software. Even Minix could be booted from floppy disk
back in my Amiga 500 times. On the Amiga :).

What went wrong? Niklaus Wirth's essay at least pinpoints some possible
causes. A good one is a lack of taking the time to think things through
*before* and *during* writing code. Another is the extremely fast cycle of
software development: a new Linux kernel every three months. How insane
can it even get? No one really understands it anymore. Do you understand
the complete Linux kernel? Not a chance, I'd say. It would be very
beneficial to reduce the pace. Doing things more slowly is not a weakness
but a virtue.

It would be very challenging to pull off, but sometimes I think it would
be a good idea to look at everything that worked great, at all the design
mistakes and so on, and start from scratch. But one thing is certain: once
a standard has been established, usually based on the first thing that was
there rather than the best, it stays established. We are still adhering to
suboptimal keyboard layouts, a less than optimal CPU architecture like
x86… and so on and so forth.
During my little fun project I looked at how Linux
takes syscall parameters from x86_64 registers. It is complete insanity.
How can one design a processor and a syscall interface this way? Just the
order in which arguments are assigned to registers… When I compare this to
the cleanliness of Motorola 68k assembly, I don't really know what to say
anymore. Maybe I am just not clever enough to appreciate the genius of the
approach, but it really does not fit in my mind. I would never design
something like this.
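
For those who have not looked at it: the kernel expects the syscall number
in rax and the arguments in rdi, rsi, rdx, r10, r8 and r9, in that order.
Note the jump from rdx to r10 for the fourth argument, because the syscall
instruction itself clobbers rcx. A small sketch, reusing the msg buffer
from the hello world above and using pwrite64 (syscall 18, which takes
four arguments) purely as an illustration:

            mov edi, 1              ; arg 1: fd     -> rdi
            lea rsi, [msg]          ; arg 2: buf    -> rsi
            mov edx, msg_size       ; arg 3: count  -> rdx
            xor r10d, r10d          ; arg 4: offset -> r10, not rcx,
                                    ; as syscall clobbers rcx
            mov eax, 18             ; syscall number for pwrite64
            syscall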

I am using Linux because currently, IMHO, it is still one of the better
choices, not because it is what I'd consider optimal. It is one of the
better choices out of a handful of not really optimal ones.

IMHO there are leaner and better operating system designs out there. But
none of them has an application landscape and hardware support that would
make it practical to use as a desktop system.

Regarding clean and lean, Alpine Linux really is interesting. A basic
system installs to about 15-20 MiB of storage on an SSD. Yes, that is
right: MiB. And no systemd there either; they use OpenRC.

Ciao,
--
Martin