From: Simon
Date:
To: Devuan ML
Subject: Re: [DNG] Tmpfs
Kevin Chadwick via Dng <dng@???> wrote:
In one of life’s coincidences, in today’s (virtual) ManLUG meeting someone asked what the tmpfs mounts were about.
So we ended up having “a bit of a chat” about what they are, the pros and cons, etc.
And of course, during the trip down memory lane, we also reminisced about the days when you had to rebuild your /dev whenever you changed anything in the system. My first Unix was SCO Xenix and then OpenServer - you had to relink the kernel on those if you changed hardware (no dynamic module loading). Actually, I think you only had to think about changing anything and it needed a relink 8-O
>> Earlier versions of tmpfs (actually, ramfs) had a major drawback: non-
>> swappable. But that's long gone: on memory pressure the pages can get
>> swapped out exactly like userland pages.
>
> Well you get very slightly faster app openings if you don't blink but isn't
> there the potential for swapping to disk multiple times and at the worst
> possible time (under pressure)? I feel like an engineer deciding to use /tmp has
> decided I want this to be on disk and that is where it should stay, once?
Bear in mind that a tmpfs-mounted /tmp and the like became common before SSDs were common. The speed difference between a RAM-backed filesystem and spinning rust is considerable. And in the general case, these will be small, short-lived files.
So for most people, most of the time, it makes sense to use tmpfs. Without looking, I assume it’s possible to change this and stick to having them on disk if that suits you better.
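If memory serves, on Debian-derived systems using the classic initscripts (which includes Devuan) this is controlled from /etc/default/tmpfs - I haven’t checked a current install, so treat this as a sketch:

    # /etc/default/tmpfs
    # RAMTMP=no keeps /tmp on the root filesystem rather than on tmpfs
    RAMTMP=no

    # Or pin it down explicitly in /etc/fstab, with a size cap if you
    # do want tmpfs (the 2G figure is just an example):
    tmpfs  /tmp  tmpfs  nosuid,nodev,size=2G  0  0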
And as these discussions do, it tended to drift a little, and we also ended up discussing swap sizes. At one time there was a rule of thumb that, in the absence of a reason not to, swap should be twice the physical RAM size. Of course, going back, both RAM and disk were probably constrained (by the depth of your bank balance) - so you needed to think about it. These days resizing is generally easier, and both RAM and disk are (relatively) cheap. On most of my home systems these days, swap is only there as a backstop - mostly unused. But I do have one (a Xen host) where the RAM allocation is small (just what it needs and little more) but the swap allocation is significantly larger, so it’s unlikely to ever hit the OOM killer unless something drastic happens.
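And resizing really is easy now - no repartitioning needed if a swap file will do. Roughly (as root; the path and size are just illustrative):

    # create a 4 GiB swap file; dd is safer than fallocate on some filesystems
    dd if=/dev/zero of=/swapfile bs=1M count=4096
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile
    # and to make it survive a reboot:
    echo '/swapfile none swap sw 0 0' >> /etc/fstab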
> The strongest argument I can see for tmpfs is a default should maybe serve
> everyone and potentially that you might avoid wear on some SDs that do not have
> wear levelling and perhaps emmc with poor wear levelling but then under pressure
> are writes still lower or higher? I agree with the other poster that there
> should be no issue on an SSD.
I don’t think the wear issue was much of a consideration - at least not initially. As above, this became common when most people were still on spinning disks. That there’s a potential write-life benefit now that SSDs are almost ubiquitous is probably just an incidental bonus - in the grand scheme of things, I don’t think /tmp is a massive load compared with the rest of the activity on a typical system.
> I was surprised that OpenBSDs daily script periodically clears /tmp though that
> appears to be only certain files.
Ah, I remember the days when /tmp had to be cleaned by a script (periodically and/or on boot).
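Typically something along these lines, run from cron or an init script - a sketch from memory, and the real ones were fussier about lock files, sockets and the like:

    # remove regular files under /tmp untouched for more than 7 days;
    # -xdev keeps find from wandering onto other filesystems
    find /tmp -xdev -type f -atime +7 -exec rm -f {} +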