Author: Martin Steigerwald
Date:  
To: Devuan ML
Subject: Re: [DNG] Max Load Average
Hi Simon, hi.

Simon Hobson - 16.07.24, 22:06:18 CEST:
> nisp1953 via Dng <dng@???> wrote:
> > Another question here. What is the max load average I can run on my
> > laptop? I am using Devuan 5.0 on a Lenovo T480S Thinkpad.
>
> Coming late to this …
>
> As others have said, there is no “correct” answer.
>
> I went for a look, and via StackExchange and ServerFault items came to
> this blog
> https://www.brendangregg.com/blog/2017-08-08/linux-load-averages.html
> which I think is a really interesting discussion on what it means - and
> its history.


Which is exactly the one I mentioned and explained in my first response to
this question. Did you read that?

> But everything is a case of “relative to what is normal FOR YOU”. And
> what is acceptable in terms of response times is also a matter of
> “what’s OK for you” - e.g. adding a few seconds response delay on a
> mail server probably won’t register, on an interactive terminal it will
> drive the users mad.


Important distinction.

If it's fast enough for you, it is fine. Simple as that.
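
To make the "relative to what is normal for you" point a bit more concrete,
here is a minimal sketch of my own (not from Brendan's article) that just
prints the load averages next to the number of online CPUs, since "load per
core" is the usual rough yardstick - keeping in mind that on Linux the load
also counts uninterruptible tasks, not only runnable ones:

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
        double load[3];
        long cpus = sysconf(_SC_NPROCESSORS_ONLN);

        /* getloadavg() returns how many samples it could obtain. */
        if (getloadavg(load, 3) < 3 || cpus < 1) {
                fprintf(stderr, "could not read load averages\n");
                return 1;
        }
        printf("load averages: %.2f %.2f %.2f on %ld CPUs\n",
               load[0], load[1], load[2], cpus);
        printf("1-minute load per CPU: %.2f\n", load[0] / (double)cpus);
        return 0;
}

A value well above 1.0 per CPU for a long time just means tasks are queueing
somewhere; whether that is actually a problem is, as above, up to you.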

> As to what is safe, in general you should be able to load up a processor
> fully and not suffer problems. Of course, if the design is ... a bit
> marginal, then there may be issues. Or there may be a fault. I had a
> laptop (Apple MacBook) some time ago which was dual-core (I think 8
> threads) but couldn’t run two cores at high load without shutting down
> for thermal protection. There was a known problem with that model where
> there could be inadequate contact between processor and heatsink - the
> cure being to take it apart, clean the faces, and reassemble with the
> right thermal compound/pad. Apple actually did that for me at one of
> their genius bars even though it was out of warranty :-) Before that,
> if I was going to do a CPU intensive task, I’d use a utility that was
> part of the developer tools to disable one core - running one core flat
> out was safe, running two at a high load was guaranteed to cause a
> shutdown which was “annoying”.


I'd indeed suspect a hardware issue in case a thermal shutdown happens just
from loading the CPU. Actually, with decent hardware I am sure you can run
stress -c 100 for days without any thermal shutdown. Should you? Again, no.
Why put unnecessary load onto a machine? Unless you like to test whether the
hardware delivers on the vendor's promises. I'd basically return a ThinkPad
if it cannot do stress -c 100 for a day without a thermal shutdown. But I do
not even see a need to test it: I am so certain that any decently made
laptop from any vendor would handle it without any issue.
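
If one really wants to check that promise, a little sketch like this one of
mine (the sysfs path is just the usual one on Linux laptops, zone numbering
differs per machine) can be left running next to stress -c 100 to watch the
temperature, stopping it with Ctrl-C:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
        for (;;) {
                FILE *f = fopen("/sys/class/thermal/thermal_zone0/temp", "r");
                long millideg;

                if (!f) {
                        perror("thermal_zone0");
                        return 1;
                }
                /* The value is reported in millidegrees Celsius. */
                if (fscanf(f, "%ld", &millideg) == 1)
                        printf("temperature: %.1f degC\n", millideg / 1000.0);
                fclose(f);
                sleep(1);
        }
}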

If you however set the internal or discrete GPU to maximum MHz for long
periods of time while it is 28 degrees Celsius with lots of humidity in
your room… you get what you ask for.

> * I once admined a SCO OpenServer system where the application software

[…]
> Of course, had this been Linux, we could have just stuffed a few G of
> RAM in the system and it would have held the entire database in RAM !


Interesting story.

What I wonder is how out-of-memory conditions are / were handled by
Unix-based operating systems, that is, once swap is exhausted as well.

I still have not gotten over the fact that in Linux the out-of-memory
killer just forcefully terminates processes until the situation is fine
again.

A reliable operating system should never *ever* forcefully kill a process
without the user asking it to. But as long as some apps allocate virtual
address space as if there were no tomorrow…

Do you know anything about that?
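
That virtual address space point is easy to demonstrate. A minimal sketch
of mine (assuming the default vm.overcommit_memory=0 heuristic) can reserve
far more address space than there is RAM plus swap, and nothing bad happens
until the pages are actually written to, which is exactly when the OOM
killer enters the picture:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        const size_t chunk = (size_t)1024 * 1024 * 1024; /* 1 GiB */
        size_t reserved = 0;

        /* Reserve address space 1 GiB at a time without ever touching it.
         * Cap at 256 GiB so the demo terminates even on huge machines. */
        while (reserved < (size_t)256 * chunk && malloc(chunk) != NULL)
                reserved += chunk;

        printf("reserved %zu GiB of virtual address space, untouched\n",
               reserved / chunk);
        /* memset()ing those chunks would be what actually invites the
         * OOM killer once RAM and swap are exhausted. */
        return 0;
}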

On the new ThinkPad T14 AMD Gen 5 there once was a reset of the internal
GPU of the AMD Ryzen processor while I was playing Supertuxkart, which
really plays with decent fps with all settings maxed out. This basically
quit the game as well. Also not what I would consider to be reliable
operation of processes.

Ciao,
--
Martin