:: Re: [DNG] SSD Lifetime?
Author: Didier Kryn
Date:  
To: dng
Subject: Re: [DNG] SSD Lifetime?
On 05/12/2023 at 09:09, Martin Steigerwald wrote:
> terryc - 05.12.23, 04:18:40 CET:
>>> I put a Western Digital Black NVME 500G drive in my desktop system
>>> and ran it for about 3 years. At the end it had 65% life left which
>>> surprised me as my desktop doesn't do an awful lot of disk writing or
>>> even reading.
>> How did you determine this lifetime?
>>
>> I have a couple of systems with SSDs. One is /Everything and the other
>> has two under / and /home. They were rolled out about two years ago.
>>
>> FWIW HowToGeek on testing SSDs claims Backblaze claims SSDs will
>> outlast HDDs. Same rate of failure under 3 years and HDDs start
>> failing after 54 years, but SSDs go on further
> smartctl -x on a Samsung 980 Pro 2 TB SSD which is about 2 years old by
> now, in daily usage:
>
> Available Spare:                    100%
> Available Spare Threshold:          10%
> Percentage Used:                    1%
> […]
> Data Units Read:                    261.509.276 [133 TB]
> Data Units Written:                 73.925.789 [37,8 TB]
>
> Especially in case you leave some space free and use trimming, either by
> fstrim or, in case it's cleanly supported by your SSD, with the discard
> mount option, preferably async discard like in XFS or with discard=async
> in BTRFS, good SSDs should last a very long time. Of course you can still
> use "noatime" and, with a new enough kernel, also "lazytime". I just use
> "lazytime" nowadays on my laptops, together with the sysctl setting
>
> vm.dirtytime_expire_seconds = 7200
>
> so that it updates every 2 hours instead of, AFAIR, every 24 hours in
> case no other activity triggers an update.
>
> "Percentage Used: 1%" basically means 1% of the usable lifetime has
> expired by vendor estimate:
>
>> The wear level is given by the “Percentage Used” field, which is
>> specified as (page 184):
>>
>> Percentage Used: Contains a vendor specific estimate of the percentage
>> of NVM subsystem life used based on the actual usage and the
>> manufacturer’s prediction of NVM life. A value of 100 indicates that
>> the estimated endurance of the NVM in the NVM subsystem has been
>> consumed, but may not indicate an NVM subsystem failure. The value is
>> allowed to exceed 100. Percentages greater than 254 shall be
>> represented as 255. This value shall be updated once per power-on hour
>> (when the controller is not in a sleep state).
> https://unix.stackexchange.com/questions/652623/how-to-evaluate-the-wear-level-of-a-nvme-ssd
>
> Due to the way flash works, the best way to keep them alive for a long
> time is: use a bigger capacity than you need and leave some space free.
> With LVM I usually just do not allocate about 10-20% of the capacity. But
> even if you allocate all of the space for filesystems… I am not worried
> about SSD lifetime regarding wear leveling. Not at all. I have not seen
> any of my SSDs fail due to wear leveling issues. Not even close. Even with
> write-heavy systems like a Plasma desktop with PostgreSQL-based Akonadi,
> desktop search and all kinds of writes here and there.
>
> On any of my laptops I would not even consider putting in a hard disk to
> save SSD lifetime. And if I had a desktop computer, I probably would not
> do so either. I love totally quiet systems, happily using zcfan on my
> ThinkPad laptops. And since I use SSDs I have noticed how loud even 2.5
> inch hard disks can be. I still use those for backup purposes, because
> even with today's low SSD prices I prefer even cheaper hard disks for
> backups. But a 12 TB 3.5 inch hard disk monster in my living area or
> office? Not even a chance.


    Congrats Martin; you seem to have some expertise in filesystems and
how to trim them. Unfortunately such expertise isn't very common, and it
isn't my case :~). I used to always specify the noatime option, because I
have no application relying on file access time. Do you have any such
application installed?
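
    Just so I am sure I understand the options you mention: I suppose the
corresponding /etc/fstab entry would look something like the line below
(device, mount point and filesystem type are only placeholders, not my
actual setup), with noatime added or not according to taste:

    /dev/nvme0n1p2  /  ext4  defaults,lazytime,noatime  0  1

    and, without a discard mount option, fstrim can instead be run
periodically, e.g. "fstrim --all" from a weekly cron job.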

    I noticed your sysctl command, which, I guess, sets how long such data
can stay in the VFS buffers before it is actually written to disk. The
value is also available in /proc/sys/vm/dirtytime_expire_seconds and is
set to 43200 on my laptop. What is the goal of reducing it to 7200?
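
    For the record, I suppose trying your value would amount to something
like this (the file name under /etc/sysctl.d/ is only an example):

    # check the current value
    cat /proc/sys/vm/dirtytime_expire_seconds
    # change it on the running system
    sysctl -w vm.dirtytime_expire_seconds=7200
    # make the change persist across reboots
    echo 'vm.dirtytime_expire_seconds = 7200' > /etc/sysctl.d/lazytime.conf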

    What does the line "Percentage Used: 1%" in your diagnostic listing
mean? Because if only 1% of the disk is used, it should last longer than
if 99% were used, but not many people can afford disks 100 times bigger
than their storage needs.
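
    In case it helps to compare numbers, I suppose the same field can be
read on any NVMe disk with something like this (the device name is only a
placeholder):

    smartctl -a /dev/nvme0 | grep 'Percentage Used'
    # or, with nvme-cli:
    nvme smart-log /dev/nvme0 | grep percentage_used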

    You also mention the noise of a monster spinning disk used for backup,
but the recommendation for a backup disk is to *not* leave it powered all
the time. I've read that a rarely used spinning disk is considered the
most resilient backup storage, in contrast to SSDs, which lose data if
they aren't periodically powered.

Thanks.

-- Didier