The piece of information I couldn't find in your (John Morris') data is
how much of the 160 GiB is occupied by data. It makes a big difference.
Consider my SSD-hosted root partition, which is 4 or 5 years old:
================================================
[slitt@mydesk ~]$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 220G 23G 187G 11% /
[slitt@mydesk ~]$
================================================
I fstrim / about once a week and delete obviously gratuitous files, and
89% of the drive is still available. New writes are scattered across
that huge area.
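If you'd rather not remember to do that by hand, a weekly cron script is
one minimal sketch of the same habit (the path to fstrim varies by
distro, and util-linux ships an fstrim.timer for systemd folks that does
the same job):
================================================
#!/bin/sh
# /etc/cron.weekly/fstrim -- discard unused blocks on / once a week
# -v reports how much was trimmed
/sbin/fstrim -v /
================================================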
Now imagine my SSD were 90% occupied by needed data: a perfectly
usable situation with spinning rust drives. New writes would be
confined to the other 10%, which would quickly get written to death.
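Back-of-envelope, with made-up but plausible numbers: write 10 GiB a day
across 187 GiB of free space and each cell gets erased roughly
10/187 ~= 0.05 times a day; confine the same 10 GiB to 22 GiB of free
space and it's 10/22 ~= 0.45 times a day, about 8.5 times the wear per
cell. The controller's wear leveling and over-provisioning spread the
hit around, but they can't manufacture free space.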
By the way, since 2014 I've been using this setup: a 256G SSD as
root, with /home, /var, /tmp, and various data directories mounted on
spinning rust. The result is a fast computer, with everything
under /usr and /etc accessed at SSD speed, while sparing the SSD a lot
of repeated writes. And because of the spinning rust mounts, the SSD can
be small (read: cheap). I'd recommend this setup for anyone with
a desktop or server requiring more than a few GB of disk space, or
doing a lot of writes.
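For concreteness, the shape of that layout in /etc/fstab looks something
like this sketch (device names, filesystem types, and the data mount
point are placeholders, not my literal entries; use UUIDs on a real
system):
================================================
# /etc/fstab -- illustrative layout only; devices are placeholders
/dev/sda1  /      ext4  defaults,noatime  0 1   # 256G SSD: /, /usr, /etc
/dev/sdb1  /home  ext4  defaults          0 2   # spinning rust
/dev/sdb2  /var   ext4  defaults          0 2   # spinning rust
/dev/sdb3  /tmp   ext4  defaults          0 2   # spinning rust
/dev/sdb4  /d     ext4  defaults          0 2   # spinning rust, data dirs
================================================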
SteveT
Steve Litt
November 2019 featured book: Manager's Guide to Technical
Troubleshooting Second edition
http://www.troubleshooters.com/mgr
On Thu, 14 Nov 2019 15:20:11 -0600
John Morris <jmorris@???> wrote:
> On Wed, 2019-11-13 at 04:06 -0800, Bruce Ferrell wrote:
> > Well, I was thinking more along the lines of the "early" failure
> > rate for SSD and not so much the convenience of a thing as small as
> > my baby finger nail with insane amounts of
> > storage. I have active and still-in-use rotational media from the
> > '90s. SSD just can't do that, and flash... We don't need to go into
> > it. That's what started this thread.
>
> There is a big difference between SD cards, USB sticks, and real SSDs
> too. And there is another big difference between consumer SSDs and
> enterprise gear. Here is some real-world data. The drive has been in
> pretty much constant use in production at a public library, running the
> online catalog and in-house cataloging / automation / etc., since 2011.
>
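> (For reference, this is smartctl output; assuming the drive shows up as
> /dev/sda, something like "smartctl -a /dev/sda" prints the information
> section, the attribute table, and the error log that follow.)
>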
> === START OF INFORMATION SECTION ===
> Model Family: Intel X25-M SSD
> Device Model: INTEL SSDSA2M160G2GN
> Serial Number: CVPO0510036E160AGN
> Firmware Version: 2CV102HD
> User Capacity: 160,041,885,696 bytes
> Device is: In smartctl database [for details use: -P show]
> ATA Version is: 7
> ATA Standard is: ATA/ATAPI-7 T13 1532D revision 1
> Local Time is: Thu Nov 14 15:09:55 2019 CST
> SMART support is: Available - device has SMART capability.
> SMART support is: Enabled
>
> Vendor Specific SMART Attributes with Thresholds:
> ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
>   3 Spin_Up_Time            0x0020 100   100   000    Old_age  Offline -           0
>   4 Start_Stop_Count        0x0030 100   100   000    Old_age  Offline -           0
>   5 Reallocated_Sector_Ct   0x0032 100   100   000    Old_age  Always  -           21
>   9 Power_On_Hours          0x0032 100   100   000    Old_age  Always  -           71816
>  12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           148
> 192 Power-Off_Retract_Count 0x0032 100   100   000    Old_age  Always  -           101
> 225 Host_Writes_Count       0x0030 200   200   000    Old_age  Offline -           414459
> 226 Load-in_Time            0x0032 100   100   000    Old_age  Always  -           184
> 227 Torq-amp_Count          0x0032 100   100   000    Old_age  Always  -           0
> 228 Power-off_Retract_Count 0x0032 100   100   000    Old_age  Always  -           3027407118
> 232 Available_Reservd_Space 0x0033 099   099   010    Pre-fail Always  -           0
> 233 Media_Wearout_Indicator 0x0032 096   096   000    Old_age  Always  -           0
> 184 End-to-End_Error        0x0033 100   100   099    Pre-fail Always  -           0
>
> SMART Error Log Version: 1
> No Errors Logged
>
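> To put that in context: if, as on most Intel SSDs of this era,
> attribute 225 counts host writes in 32 MiB units, then
>
>   414459 * 32 MiB ~= 12.6 TiB
>
> written over 71,816 power-on hours (about 8 years), with the media
> wearout indicator still at 96 of 100.
>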
> So yeah, I trust SSDs in production workloads now. It is in a RAID1,
> though, so trust-but-verify is still the watchword. There are six of
> these drives in the three servers making up our Evergreen install, all
> bought at the same time and all still going strong. Unless something
> unusual happens, they are more likely to be taken out of service for
> being too small than for becoming unreliable.
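>
> For the "verify" half, assuming Linux md RAID1 and an array named md0,
> a periodic scrub is something like:
>
>   echo check > /sys/block/md0/md/sync_action
>   cat /proc/mdstat   # watch the check's progress
>
> Many distros already schedule a monthly check along these lines.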