Re: [DNG] Why C/C++ ?

From: Dan Purgert
Date:
To: dng
Subject: Re: [DNG] Why C/C++ ?
On Aug 11, 2024, Didier Kryn wrote:
> On 11/08/2024 at 15:38, Dan Purgert via Dng wrote:
> > On Aug 11, 2024, Didier Kryn wrote:
> > >     Anyone with a notion of mathematics will call you a liar. And it
> > > forces you to use == to mean equality, which has been the source of
> > > zillions of bugs, because, in C, an assignment is an expression.
> > Why? You're reassigning the variable 'x' to the value 2 ...
>
>     The thing is, there is the "equal" sign, which has a universal
> meaning and which is needed with this very meaning, in every computer
> language, just to mean, as it stands, "equal". It is just absurd to
> change its meaning to "becomes" and invent "equal equal" to mean
> "equal". FORTRAN uses ".eq.", IIRC, to mean "equal", which, at least,
> makes errors detectable at compile time.


Was it the same in C's precursors, or is it an oddity of C that's just
sort of stuck around? The statement "a=b;" meaning "set 'a' to the value
of 'b'" is pretty common syntax in the programming languages I'm familiar
with; likewise "a==b" being "test whether a is the same value as b".

Maybe I've only worked with things that've derived their general
behavior from C ...

> > >     In any human language, people count things by giving number 1
> > > to the first and number n to the nth. In C, the first has number 0
> > > and the nth has number n-1, just for the sake of pointer arithmetic.
> > ... and the languages that start at 1 provide nothing but off-by-one
> > errors :P
>     Dunno what you mean. Do you think in C so much that you don't count
> like humans? I've seen this already; you wouldn't be the zeroth (~:


At least when programming, anyway :D

That's not to say it didn't take "learning" and/or cause a lot of
"WHY??!?!?!" the night before homework was due.

> > >     In C, INT_MAX+1 == -1   ( INT_MAX is defined in <limits.h>)
> > Shouldn't that be "INT_MIN", when you overflow a signed integer?
> >
> > signed 8-bit INT_MAX is 0b01111111 (127)
> >               INT_MAX+1 = 0b10000000 (-128)
> >
> > Should hold true for 16-, 32-, and 64-bit (signed) integers ... or am
> > I having problems doing math this morning? 🙂
> >
>   You're right, INT_MAX + 1 == INT_MIN, that is 2147483647 + 1 ==
> -2147483648 on x86-64. In a better language, this would raise an exception.


Well, gcc does get mad and tells me I overflowed the integer...
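
To be fair, that's only because INT_MAX + 1 is a constant expression the
compiler folds at build time (gcc's -Woverflow, on by default). A minimal
sketch of the difference; the runtime case is my assumption about typical
gcc behavior, since signed overflow is undefined:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        int x = INT_MAX + 1;   /* constant: gcc warns at compile time  */

        int y = INT_MAX;
        y = y + 1;             /* runtime: undefined behavior, no      */
                               /* warning; wraps to INT_MIN on x86-64  */

        printf("%d %d\n", x, y);
        return 0;
    }

Building with -ftrapv or -fsanitize=signed-integer-overflow gets closer
to the "raise an exception" behavior Didier is asking for.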


--
|_|O|_|
|_|_|O| Github: https://github.com/dpurgert
|O|O|O| PGP: DDAB 23FB 19FA 7D85 1CC1 E067 6D65 70E5 4CE7 2860