Author: Tor Myklebust
Date:
To: T.J. Duchene
CC: dng
Subject: Re: [Dng] [OT] Debian problems with Jesse - was simple backgrounds
On Mon, 2 Mar 2015, T.J. Duchene wrote:
> A respectable percentage of today's Linux distribution is a kludge of
> pre-existing hacks that do not always work well when layered. You have
> something like adduser or other command utilities written in Perl, which are
> then called by init scripts, which can then be called by still more scripting
> in the form of an X GUI application to manage users. At any stage it can be
> prone to unforeseen interpretation problems.
This is indeed true, but it seems like a social problem rather than a
technical problem. People can, and will, write garbage software no matter
what tools they have. It might pay to let them do this with as little
pain as possible so they can go back to working on the thing they were
actually interested in doing.
It sounds here like you don't like that your X application is running
scripts, probably incorrectly, to do basic stuff, and that running a
script can fail with error codes like "too many open files" or "too many
processes" or "not enough memory" when the task in question doesn't
obviously involve allocating memory or spinning up a process. If you're
in the mood to lay blame in a situation like this, I suggest directing it
at the X application's author rather than the author of the script the
application calls.
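
To make that concrete, here is a minimal C sketch (the function name is
mine, not from any particular application) of why "run a script" can fail
with resource errors that have nothing to do with the script itself:
fork() has to duplicate the calling process, so it can fail with "too
many processes" (EAGAIN) or "not enough memory" (ENOMEM) before the
script ever starts, and it is the caller's job to surface that sensibly.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Run an external script, reporting resource failures instead of
     * swallowing them. */
    int run_script(const char *path)
    {
        pid_t pid = fork();
        if (pid < 0) {
            /* fork() failed: EAGAIN ("too many processes") or ENOMEM
             * ("not enough memory").  The script never ran; this
             * failure belongs to the calling application. */
            fprintf(stderr, "fork: %s\n", strerror(errno));
            return -1;
        }
        if (pid == 0) {
            execl(path, path, (char *)NULL);
            /* execl() only returns on error, e.g. ENOENT or EACCES. */
            fprintf(stderr, "execl %s: %s\n", path, strerror(errno));
            _exit(127);
        }
        int status;
        if (waitpid(pid, &status, 0) < 0)
            return -1;
        return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
    }

An application that ignores the first two error paths is exactly the kind
that prints a baffling message, or nothing at all, when the system is
under load.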
> I'm just asking at what point would it be more beneficial to simply code a
> library in something like C, and be done with it?
I think the answer to this question is more complicated than can be
described by a "tipping point" in a cost-benefit analysis. I think it's
context-dependent and I think it can change over the lifetime of the
software.
Performance concerns, for instance, often crop up later in the lifetime of
a project once it has a sizable userbase (or once somebody asks for the
resource-intensive feature enough times). Should we code everything in C
from the start just so we don't have to handle the performance problems in
the unlikely event that the project succeeds? Maybe, but what if that
constraint makes the project take quite a bit longer? And what if that
reduces its chances of success?
Performance concerns can cut the other way, too, under the name
"scalability." Because it's easier to write really fast C code, you can
get a lot farther with your really fast C code before hitting a
performance wall. That sounds good, but it means your code can evolve
into a bigger mess by the time you have to address your choice of data
structures, or parallelisation, or distribution.
> Perl, Python, Java, and a number of other languages just do not function
> well for certain kinds of tasks that require efficient resource management
> over time, yet they are constantly being used by the open source community
> today in places where it might be advisable to reconsider.
This is true. I've found it's not that hard to avoid running crap Perl
and Python software.
It's interesting that you'd mention Java here. I don't much like the Java
language or the Java programming culture, but Java bytecode has the
interesting property that, with a little plumbing, one can send executable
code over the network and have it run on a remote machine. This actually
winds up being useful for large-scale data crunching, where you want to
move the code to the data rather than the data to the code wherever
possible. I wouldn't know how to build a system that does this in C (for
instance) that isn't brittle.
>> Why pick on perl?
>
> Only because Perl makes itself the biggest target for an example. I don't
> HATE Perl. I've even written a lot of Perl code in my day. I also
> recognize it for what it is. Perl is something that should be restricted
> to unimportant, small user jobs that won't be used too often, and most
> certainly never used with root permission. As everyone knows, Perl has a
> very ugly history of permission flaws that can rear their heads if someone
> does not compile it properly.
>
>> Why not pick on the huge number of low-quality C libraries that are out
>> there?
>
> Yes, there can be low-quality libraries in C. The main argument for
> using Perl, or other similar languages, instead of C is that there is
> less chance of errors (and thus better software) and you spend less time
> writing it.
>
> If, as you point out, you can have crappy code anywhere, that tosses out
> the first argument. The second, that Perl is a timesaver, is entirely
> subjective. With the right libraries, and enough actual use, anyone can
> write a small utility just as efficiently in C or a similar language.
It depends on what the utility is. C does not support certain useful
forms of abstraction available in other languages. (I'm not talking about
inheritance here. Generics and mixins are to my knowledge both impossible
to do in a performant and syntactically transparent way in C. Ditto for
anonymous functions. The way you emulate tagged unions in C---a struct
with a tag field and a union member---is a little scary in any large code
base because incomplete 'switch' statements won't raise compile-time
warnings.)
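
For concreteness, here is a minimal sketch of that emulation; the shape
names are invented for illustration.

    #include <stdio.h>

    enum shape_tag { SHAPE_CIRCLE, SHAPE_RECT };

    /* The usual C emulation of a tagged union: a tag field next to a
     * union, with the invariant "read only the member named by the
     * tag" enforced by convention alone. */
    struct shape {
        enum shape_tag tag;
        union {
            struct { double radius; } circle;
            struct { double w, h; } rect;
        } u;
    };

    double area(const struct shape *s)
    {
        switch (s->tag) {
        case SHAPE_CIRCLE:
            return 3.141592653589793
                   * s->u.circle.radius * s->u.circle.radius;
        /* Forget SHAPE_RECT here and the compiler says nothing by
         * default (gcc and clang need -Wswitch or -Wall to warn),
         * and nothing ever checks that the union member you read
         * matches the tag. */
        case SHAPE_RECT:
            return s->u.rect.w * s->u.rect.h;
        }
        return 0.0;
    }

    int main(void)
    {
        struct shape c = { .tag = SHAPE_CIRCLE, .u.circle.radius = 2.0 };
        printf("%f\n", area(&c));
        return 0;
    }

For comparison, languages with real tagged unions (the ML family, Rust)
check match exhaustiveness at compile time, so the forgotten case is
caught before the program ships.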
>> I don't think you said what you were trying to say. Judicious use of
>> abstraction is what lets us write useful software in the first place.
>
> True, however, there also comes a point where writing software in highly
> abstracted languages (usually the interpreted sort) has diminishing
returns. I feel that they have been overused in the Linux ecosystem as a
whole. My whole rant is really just chatting with the community and
> seeing if anyone else shares that opinion. If you don't personally,
> that's just fine by me. =)
I think you have aimed your criticism at the wrong target. It is annoying
that "new" and "user-friendly" have both become synonymous with "does not
work under heavy load or unusual conditions" in Linux. It wasn't always
that way. But I would look toward the people building brittle stuff
instead of the guy who wrote adduser if I wanted to diagnose the problem.