Author: Simon Hobson
Date:  
To: dng
Subject: Re: [DNG] [devuan-dev] Debian Buster release to partially drop non-systemd support
Steve Litt <slitt@???> wrote:

> "Multi-seat" makes little sense now that when you add a user you can give him or her a $400 computer with which he can share the server's data.


I would beg to disagree - at least for some workloads. I think "it depends" is often the answer to the question of "is multi-seat of any use?"

For your typical "office stuff" - WP, email, etc. - then I'd agree, a desktop PC with shared access to the file server is great.
But go back a couple of jobs ...
We ran a Unix system (SCO OpenServer, before SCO hit the self-destruct button) with a large number of users. The primary tool for most users was a single application which did all the sales, purchasing, stock control and so on. Mostly these were Wyse60 terminals - partly for historical reasons: the previous system was hard-coded for a Wyse 60. Running serial at 9600bps was quite adequate to give good response times, and almost all* of the time this worked fine with anything up to 100 users on a Pentium with 16G RAM.
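As a quick sanity check on that 9600bps figure - my own back-of-the-envelope arithmetic, not numbers from that installation, assuming ~10 bits per character on async serial (8 data bits plus start/stop) - a wholesale screen redraw is only about two seconds, and a typical field update far less:

# Rough arithmetic only; the 10 bits/char and 80x24 screen are assumptions.
SCREEN_CHARS = 80 * 24        # full text screen on a Wyse 60
BITS_PER_CHAR = 10            # 8 data bits plus start/stop on async serial
LINE_BPS = 9600

redraw_s = SCREEN_CHARS * BITS_PER_CHAR / LINE_BPS
print(f"full-screen redraw:   {redraw_s:.1f} s")   # ~2.0 s

# A transaction screen usually repaints only a few fields,
# so the perceived response time is a fraction of that.
update_s = 200 * BITS_PER_CHAR / LINE_BPS
print(f"200-character update: {update_s:.2f} s")   # ~0.21 s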
In terms of data, it would have made zero sense to do the processing remotely with the data shared off the server - the sales order detail (line items on sales orders) DB file would exceed 1G without any trouble. Since most of the work is database transactions, there is no sane alternative to a central DB server doing all the DB work. So even if you put PCs on every desktop, you are still down to the client being an "intelligent display".
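To put illustrative numbers on that (my own arithmetic with assumed link speeds, not measurements from the site): even a single pass over a 1G file is hopeless across the links of the day, so the data has to stay put and only keystrokes and screen updates cross the wire:

# Illustrative only: time to move a ~1G DB file once over various links.
DB_BYTES = 1 * 1024**3        # the sales order detail file, roughly 1G

for name, bps in [("9600bps serial ", 9600),
                  ("64k leased line", 64_000),
                  ("10Mbps Ethernet", 10_000_000)]:
    hours = DB_BYTES * 8 / bps / 3600
    print(f"{name}: {hours:8.1f} hours")
# 9600bps serial :    248.5 hours
# 64k leased line:     37.3 hours
# 10Mbps Ethernet:      0.2 hours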
As it happens, a later version of the program did have a native Windows client - basically a Window-ised version of the text interface. As it was, many users were migrating to Windows PCs anyway, using terminal sessions over Ethernet for the main system and Windows itself for the usual sort of "office stuff".

But back to the clients we used: there is absolutely nothing as simple to manage on the factory floor as a "green screen" terminal. It's really hard for hammer-fingered users to mess them up - and if they do, it's generally nothing more than swapping out the broken one for a good one, with zero config needed. Once you go to something more complicated, the management costs go up - regardless of what system you use, there is more work in either manually configuring systems or setting up an automated system to do it.

Oh yes, and did I mention that we ran across multiple sites? For a while we ran about 10 users across a 19.2k leased line - that got upgraded to 64k when one of the serial muxes died and we moved to IP networking and terminal servers.


* I said "almost all" the time. Any of you familiar with SCO OpenServer 5 will know that it has a link-time-configured disk buffer size with a maximum of 640,000 kbytes - and yes, the system failed to boot if set to 640,001 kbytes. And note what I said about one single DB file getting to over 1G; some of you will be ahead of me and know what's coming. The reporting tool that came with the package had an "interesting" feature in that it would suddenly stop using indexes for joins - you'd be developing a report and all would run fine, then a minor change and performance drops faster than a lead balloon. Non-indexed joins on files well over 1G with only 640M of disk cache - yup, the system slows to a crawl with 99 to 100% wio and a long disk queue. We'd know if someone ran this particular report during the working day when the phones rang to say the system was frozen - everyone got stuck waiting for disk i/o. That was with fast (for their day) wide SCSI drives, arrayed across busses for maximum performance.
In the end it got relegated to running over the weekend, and took 40 hours. At some point I rewrote it in Informix SQL, taking care over the use of indexes, and we could run it at any time without upsetting users and get the results in under 2 minutes.
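For anyone who hasn't been bitten by this failure mode, the indexed-versus-unindexed join difference is easy to demonstrate. The original was Informix plus the package's report writer; the sketch below is just an illustration using SQLite from Python, with hypothetical table and column names, and with SQLite's automatic transient indexes switched off so the planner behaves like an engine with no fallback:

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA automatic_index = OFF")  # no transient-index fallback
con.executescript("""
CREATE TABLE orders (order_no TEXT, customer TEXT);
CREATE TABLE order_detail (order_no TEXT, line_no INTEGER, qty INTEGER);
""")

query = """
SELECT o.order_no, d.line_no, d.qty
FROM orders AS o JOIN order_detail AS d ON d.order_no = o.order_no
"""

print("-- no index on order_detail(order_no):")
for row in con.execute("EXPLAIN QUERY PLAN " + query):
    print(row)   # expect a full scan of order_detail for every order

con.execute("CREATE INDEX idx_detail_order ON order_detail(order_no)")

print("-- with the index:")
for row in con.execute("EXPLAIN QUERY PLAN " + query):
    print(row)   # expect a cheap search via idx_detail_order instead

con.close()

The two plans tell the whole story: the unindexed version reads the entire detail table once per outer row, which on a file bigger than the disk cache means exactly the wio meltdown described above.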

I was looking forward to us upgrading, as a later version could run on Linux and thus make use of more memory - it would have been lovely to keep the DB in cache. It didn't happen while I was there: there was a bit of a business downturn and I was one of the ones who were paid off.