Author: Simon Hobson
Date:  
To: dng
Old topics: Re: [DNG] if2mac init.d service for persistent network interface names
Subject: Re: [DNG] savings from parallelism (Was: if2mac init.d service for persistent network interface names)
Didier Kryn <kryn@???> wrote:

> Therefore I suspect the authors managed to launch several threads in order to save 0.01s of the boot time. Or to lose more, because thread scheduling might well consume more than what parallelism saves.


In the general case, parallelism only saves wall clock time if you have a number of processes that have to wait on outside events while not (significantly) using resources on the machine - or if they are so computationally intensive that spreading the work across multiple cores gives a saving (not common during startup). So it helps with things like bringing up interfaces - waiting for WiFi to connect and for DHCP to get an address, that sort of thing. But even then there's probably little to be saved, since you usually have most of the system waiting for the network to be up before it can proceed.
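
As a rough illustration of that first point (the task names and delays here are made up), two tasks that mostly sleep while waiting on outside events overlap almost completely when run in parallel, whereas CPU-bound work on a single core would not:

    # Minimal sketch: two boot-time tasks that mostly wait on outside
    # events (stand-ins for WiFi association and a DHCP lease).
    import threading
    import time

    def wait_for_wifi():
        time.sleep(2.0)   # pretend we are waiting on the access point

    def wait_for_dhcp():
        time.sleep(1.5)   # pretend we are waiting on a DHCP lease

    # Sequential: wall clock time is roughly the sum (~3.5 s).
    start = time.monotonic()
    wait_for_wifi()
    wait_for_dhcp()
    print(f"sequential: {time.monotonic() - start:.1f}s")

    # Parallel: the waits overlap, so wall clock time is roughly the
    # longest single wait (~2.0 s).
    start = time.monotonic()
    threads = [threading.Thread(target=wait_for_wifi),
               threading.Thread(target=wait_for_dhcp)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"parallel: {time.monotonic() - start:.1f}s")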
But otherwise, especially with a spinning disk, parallelism will slow things down, because you force the disk to seek here, there, and everywhere getting data for different processes. Not really applicable during startup, but there are memory considerations* too if the jobs are large. With an SSD this is much less of a problem.
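
To illustrate the disk contention point (again, a made-up sketch - the file paths are placeholders and page-cache effects are ignored), reading two large files one after the other versus concurrently will typically show the concurrent case losing on a spinning disk, because the head keeps seeking between the two files, while on an SSD the difference largely disappears:

    # Rough sketch: sequential vs. concurrent reads of two large files.
    import threading
    import time

    FILES = ["/path/to/big-file-1", "/path/to/big-file-2"]  # placeholders

    def read_whole(path):
        with open(path, "rb") as f:
            while f.read(1024 * 1024):   # read in 1 MiB chunks
                pass

    # One file after the other: the disk head moves mostly forward.
    start = time.monotonic()
    for p in FILES:
        read_whole(p)
    print(f"sequential reads: {time.monotonic() - start:.1f}s")

    # Both files at once: on a spinning disk the head has to seek back
    # and forth between them, which often makes this slower overall.
    start = time.monotonic()
    threads = [threading.Thread(target=read_whole, args=(p,)) for p in FILES]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"concurrent reads: {time.monotonic() - start:.1f}s")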


* As an aside, at a previous job many years ago, they got a network of Apollo workstations in for running engineering software. The whole thing was primarily driven by the naval architects for doing complex fluid dynamics and structural modelling - and at the time Apollo had the higher-spec number crunchers. For context, this was when a 286 with a couple of megabytes of RAM was considered high end - Apollo were using (from memory) processors from the Motorola 68000 range, and I think most of the workstations had 68020s. They had to stop people running their own jobs on the big machine, simply because if it was asked to run more than one it would slow to a crawl once it started swapping. But users were unable to grasp the concept of "wait your f'in turn" (some would even cancel other running jobs to get theirs to run faster) - so restrictions were imposed: only the admins could run jobs on it, and everyone else had to put their requests in a queue.

Simon