Author: Adam Borowski
Date:
To: dng
Subject: Re: [DNG] Studying C as told. (For help)
On Wed, Jun 22, 2016 at 12:35:22PM +0100, Simon Hobson wrote:
> Rainer Weikusat <rweikusat@???> wrote:
> > Lastly, if the target system is Linux (as here), one can safely assume
> > that EBCDIC won't be used.
> >
> > None of this matters anyhow for solving algorithm exercises in an
> > entry-level book about a programming language.
>
> On the other hand, it might just give a newbie to C (and others) some
> hints that might save their bacon (or that of others) later. When I first
> learned any programming, we too got to work on the basis that "letters are
> A-Z and a-z"
Two important points:
* this is true only in ASCII
but
* this _is_ true in ASCII.
Thus, you need to understand that you get only basic Latin[3] -- i.e.,
English[1] letters. But, considering that getting C to support non-ASCII
letters is non-trivial, this is a reasonable simplification, especially for
a newbie. Of course, you lose Polish letters like ą, ł, ż or ź, or more
foreign stuff like ᛞ or あ.
So here's a trick question: how many letters are there between 'Z' and 'А'?
(This is more insidious than it looks.)
[1]. Actually, the English alphabet used to have more letters: þ, ð, ƿ[2]
and ȝ, but they were dropped because early printing presses imported from
Germany lacked these characters. By the time the technology was copied and
fonts could be manufactured domestically, the English had orders of
magnitude more printed reading material lacking those letters than the
older handwritten works that used them. (This is a gross simplification.)
So keep this in mind when you skip support for non-modern-English
characters.
[2]. Ƿ got dropped earlier due to French-influenced orthography.
[3]. And even that is untrue: Latin never had w, j or u.
--
An imaginary friend squared is a real enemy.