How can microSD flash memory wear be minimized?

Hi @Nordrunner, for clarity, here is what I use and what it looks like:

RPi 3 Model B+ and a SanDisk 64GB microSD card (£14 on Amazon), per the photo below.

[photo: RPi 3 Model B+ with the SanDisk microSD card]

The latest Raspbian Stretch Lite image is flashed onto the microSD card using dd. On first boot the system automatically resizes the Linux partition to fill all the available space. The result is:

$ sudo fdisk --list /dev/mmcblk0
Disk /dev/mmcblk0: 59.5 GiB, 63864569856 bytes, 124735488 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x04c9550d

Device         Boot  Start       End    Sectors  Size Id Type
/dev/mmcblk0p1        8192      96663      88472 43.2M  c W95 FAT32 (LBA)
/dev/mmcblk0p2       98304  124735487  124637184 59.4G 83 Linux
$

$ sudo df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/root       61341180 1220716  57596328   3% /
/dev/mmcblk0p1     43539   22498     21042  52% /boot
$

No further changes to disk/partition layout, types, etc.

This is very simple to implement and, in my experience, it is very reliable long-term with moderate write rates. I like simple.

That’s a very interesting price for 64GB.
Do you know what its performance is in 4K random and sequential reads and writes?

Unfortunately, results for the SanDisk Ultra 64GB are not included here:
http://www.pidramble.com/wiki/benchmarks/microsd-cards

Hi @bluejedi, I have not done performance tests.

I can report that the dd command writing the 1.86GByte Raspbian Stretch Lite image to the microSD card on a CentOS 7 laptop takes 62 seconds, which works out to about 30MBytes per second. The microSD card was in an elago microSD/USB adapter, so it appeared on the laptop as /dev/sdb. I don't follow performance metrics closely, but that looks fast compared to the figures in the list from your post.
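For reference, a minimal sketch of that write (the image filename and device path are placeholders; confirm the target with lsblk first, as dd will happily overwrite the wrong disk):

$ time sudo dd if=raspbian-stretch-lite.img of=/dev/sdb bs=4M conv=fsync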


Hi @cultsdotelecomatgmai,

Performing the dd write on an SD card in a microSD/USB adapter on a laptop differs significantly from doing the same test with the same card in the SD slot of a Raspberry Pi, so the two do not make a reliable comparison.

The dd write test performs sequential writes of large blocks, but far more representative of operational performance are 4k random reads and writes. High sequential write throughput does not automatically imply high 4k random throughput (and vice versa). A good example is the Samsung Evo+ 32GB in the list, which shines in 4k random reads/writes but not in sequential writes.
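If you do want to measure a card yourself, a hedged sketch using fio (the test-file path and sizes are assumptions; point --filename at the mounted SD filesystem):

sudo apt-get install fio
fio --name=rand4k --filename=/home/pi/fio-test --size=256M \
    --bs=4k --rw=randrw --rwmixread=50 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --group_reporting
rm /home/pi/fio-test   # remove the test file afterwards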

I don’t think USB sticks have wear-leveling.

The main concern is flash in budget IoT: long-term, maintenance-free operation can be disrupted by flash wear.
SD cards, unless 'industrial', don't have any wear levelling and are prone to block wear.

Swap partitions are the worst offenders, which is why Raspbian uses a swap file instead, though a swap file is a close second.
Then come logs: the same area gets a lot of writes, and long-term you are playing the block-wear lottery once that area of flash is exhausted.

Zram, using compressed memory, can be an effective substitute for swap.
Log2Ram creates a RAM disk for log operations.

So both take the most frequent write operations and move them off the SD card and into RAM, often using compression techniques to help minimise RAM usage.
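To illustrate what these tools wrap, a minimal hand-rolled sketch of a compressed swap on zram (size and algorithm are assumptions; lzo is the kernel default):

sudo modprobe zram                         # creates /dev/zram0
sudo zramctl /dev/zram0 --algorithm lz4 --size 256M
sudo mkswap /dev/zram0
sudo swapon --priority 100 /dev/zram0      # prefer zram over any disk swap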

A big problem is that many of the posted solutions are actually broken, the worst being the zram-config-0.50.deb package; take a look at https://bugs.launchpad.net/ubuntu/+source/zram-config before you install it.

Even Log2Ram, to a much lesser extent, is flawed: it retains log rotation unnecessarily and writes out complete logs every hour on any change.
I started on this flash wear topic because, given the number of complaints about Log2Ram filling up, I thought a zram backing store would be beneficial.
I got no response, so I decided to implement it myself.
I have found that Armbian and DietPi have taken versions of zram-config-0.50.deb and Log2Ram and repackaged them as their own, with much tying them to their frameworks.
I'm not really a fan, as neither even seems to feed back to the original.

I have an alternative, zram-swap-config (as it should be named), at https://github.com/StuartIanNaylor/ and also what might be a much better version of Log2Ram.

Hopefully others will stop repackaging these as their own, and a wider group will concentrate on the maintenance of some common apps in the IoT/Maker arena.
I am not a fan of 'my precious'; dunno about you Hobbitses.

PS: there are two repo branches on GitHub; if anyone would kindly test them, maybe we can get them pushed up to master.
Also please fork, push/pull, comment, and maybe add some finesse :)
Stuart



I continue to use the most basic of approaches to microSD card wear on RPi with good results:

  • I disable swap (one command, /sbin/dphys-swapfile swapoff, in /etc/rc.local run at startup; see the sketch below).
  • I minimise syslogging (one edit to /etc/rsyslog.conf; also sketched below).
  • I use high-capacity, high-endurance microSD cards, e.g. SanDisk SDSDQQ-064G-G46A (designed for car dash-cams and other video recording, 64GBytes with wear-levelling).
  • Where appropriate, I use a separate high-capacity, high-endurance microSD card in an SD/USB adapter, formatted with an ext4 filesystem, as a data disk.

Result: no extra software, easy to implement, low delta-cost and high availability.
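For concreteness, a minimal sketch of the first two items (the rsyslog selector is only an illustration of the kind of edit; tune it to whatever is noisy on your own system):

# in /etc/rc.local, before the final "exit 0":
/sbin/dphys-swapfile swapoff

# in /etc/rsyslog.conf: discard chatty facilities instead of writing them to SD
daemon.*;cron.*    stop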


"SD cards, unless 'industrial', don't have any wear levelling …"

If you go back to my post of Oct 18th in this thread, you will see I debunked this myth. It took me less than 5 minutes, simply by going to the respective manufacturers' websites.


Since the gateway is network-connected anyway, I log fast-moving stuff on my NAS. A small edit in /etc/rsyslog.conf is all that is required (provided your NAS has a log server, that is).
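The edit is a one-liner (the address is a placeholder for your NAS; @@ means TCP, a single @ means UDP):

# /etc/rsyslog.conf
*.*  @@192.168.1.10:514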


SDHC cards have no wear levelling; with SDXC it is down to the manufacturer, and like cheap SSDs they are DRAM-less.
You didn't debunk anything: you posted nothing that says otherwise, only that you had been on websites that say otherwise, without giving any details.
That isn't debunking, but feel free to actually post some real info.

Anyway, I have done an all-in-one zram-config based on an /etc/ztab that allows any number of swaps and zram-backed directories, or a /var/log with log shipping to persistent storage.
It should help with block wear on SD cards, since even if writes are averaged on some SDXC cards there is still wear, and it can be mitigated via zram.
It should also provide near-RAM-speed working directories.
And it gives increased memory via compressed swap; avoiding wear is just one facet of its uses.

sudo apt-get install git
git clone https://github.com/StuartIanNaylor/zram-config
cd zram-config
sudo sh install.sh

As I said in my previous post, it took me less than 5 minutes to debunk. I assume it took you more than 5 minutes to write your post; a pity you did not invest that time in doing research.

Want proof without having to do your own research? Check out this SanDisk document: https://www.alliedelec.com/m/d/04db416b291011446889dbd6129e2644.pdf or just go to the websites of the three vendors I mentioned previously and read for yourself.

1.5.4 Wear Leveling
Wear leveling is an intrinsic part of the erase pooling functionality of cards in the
SanDisk microSD Card Product Family using NAND memory.

That is exactly my point: please do drill down and try to get specific information on the wear levelling, because there is no wear-levelling spec; even a bad-block list can be termed 'wear levelling' by a manufacturer.
You can carry on quoting; for my part, I find it quite amusing that you believe the generic 'snake oil salesman' blurb attached to what are extremely cheap, throwaway peripheral devices.
Even the industrial cards' static wear levelling is itself prone to wear, and is light years away from the static wear levelling of a decent consumer SSD.

It's hilarious: what you are saying basically boils down to 'the words appear on the manufacturer's site', yet when you drill down there is an absolute lack of technical detail, or even a spec.
It's also hilarious to think that cheap throwaway devices are designed for, or suited to, what can be extremely high-frequency, high-count writes.
Flash, and SD in particular, serves a specific market: even though write-to-failure counts have increased massively, wear is still an issue, because manufacturers push storage size and speed in a price compromise aimed at a very specific application and market.

Are we talking about most devices failing in a month, or even a year? With many modern cards, probably not, but it is still very dependent on the application, and IoT devices can be a real pain to access when they break.
Do you have to use relatively expensive 'industrial' cards, or ones branded with words such as 'extreme', which in reality might be several-thousand-write flash backed by a wear-averaging hardened block that itself tolerates hundreds of thousands of writes? Probably not.

eMMC has far better wear levelling and write tolerance, and NAND SSDs are generally on a different planet; quoting manufacturer sales pages on SD wear levelling leads into empty cul-de-sacs, 99.99% of which lack any corroborating technical spec or functional explanation of what wear levelling is actually employed.

Can you use cheap SD, and does SD need high-grade wear levelling and high write-count durability? Yes and no: that is simply not what SD is designed for, even if there is one born every day, apparently.
In most IoT/Maker projects a Linux image runs a single application, often with a single directory taking the bulk of the writes, or at least a root directory that covers the majority.

Am I saying this covers everything, that all applications need zram, in the same blanket way you are quoting 'wear levelling'?
No. I am aware of the plethora of differences and uses, but in many applications it is extremely beneficial simply not to chance what you don't need to chance, by moving high-write-frequency areas into volatile, durable RAM.
It can also be extremely beneficial to performance to move from what is still essentially slow SD to near-RAM-speed zram.
And if you have a memory-intensive application and a small memory footprint, memory compression can help massively.

Am I saying you need to buy anything, buy the best, or go industrial? Nope; I am saying the opposite, that you can use the dirtiest and cheapest cards, because zram is extremely easy to employ.
It's a rather nifty kernel module, written by the kernel developers; all you need to do is configure it, and it can be extremely beneficial for IoT/Maker devices: it massively helps with flash wear, gives near-RAM-speed drives, and greatly increases the effective memory pool.
But unlike the manufacturers' sales sites, I am not trying to sell you buzzwords of no meaning.

If you have an IoT/Maker device with medium-to-high-frequency writes and you want to place it and forget it, with no maintenance for many years, then install the above.
It is deliberately designed to make it extremely easy to shift logs, set up a few extremely durable high-speed directories, and add a swap device, all from a single simple config utility.
We are talking about low-end, low-cost IoT devices, as IoT generally should be; that is what it's for. It is very dependent on application, and the choice is yours: believe some sales blurb, or take a belt-and-braces approach and put volatile, resilient RAM to more productive use.

A little bit of knowledge can be an extremely dangerous thing, and when it comes to SD wear levelling, that is obviously all you have.
Zram can be extremely effective for many applications, and you will not find it on a manufacturer's sales website.
You don't need to install additional software to use it, you lose no logging, and you need no special devices; all zram-config does is configure and set up zram for those less tech-savvy than someone who can write and implement a boot script themselves.

Hi @stuartiannaylor, I’m a volume user of microSD ‘snake oil’ and I’m interested to know your views on how I’m getting such good results.

I build industrial edge-computing systems with LoRaWAN-uplink devices using RPi with microSD block storage. The data needs to persist across reboots, so I use SQLite3 databases on 64GByte SanDisk high-endurance microSD/ext4 block storage. The databases are small, typically 50KBytes, with up to 5,000 SQL inserts, updates and deletes per day, and the database size remains constant. Using the hdparm --fibmap command I can see that the SQLite3 database files are staying at the same ext4 Logical Block Addresses (LBAs), so SQLite3 and ext4 are not moving the files around on the block storage.
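For anyone who wants to repeat the check (the path is a placeholder for your own database file):

$ sudo hdparm --fibmap /data/gateway.db   # prints the extents/LBAs backing the file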

The days, weeks and months go by and turn into years, and the systems keep working. I think the ongoing reliability is due to efficient wear-levelling going on within the SanDisk microSD storage devices, per the SanDisk published documents.

However… I am always interested in alternative views so, @stuartiannaylor, I would really appreciate your views on how this is working if microSD wear-levelling is ‘snake oil’ and fake.


Yes, me too.
The problem is, how do you test a setup?

I agree that in my initial post I did not provide links to documentation to debunk the myth, and instead relied on the published specifications of the respective SD card vendors. When this was challenged, I posted the link to the SanDisk document and again referred to the respective vendors' websites for their specifications.

Am I naive in believing the vendors' claims? Probably, but I have no reason to doubt them, because my experience with developing, selling and supporting commercial SD-card-based data-logging systems, both with embedded controllers (no OS) and with Linux systems based on Raspberry Pis with configuration and operation profiles similar to those described by @cultsdotelecomatgmai, has been very successful. I am cognisant of the endurance characteristics of the media, and my Raspberry Pi based systems do use RAM disks for all temporary files to address this limitation.

Pointing out the obvious: you dismissed the claims of these vendors, yet provided no proof points of your own. Is this really any different from my first post?


sudo apt-get install iotop sysstat dstat   # iostat is provided by the sysstat package

Take your pick, then wait a long while with multiple devices under high SD I/O. As it is not me selling or providing professional, profit-based services, and as NAND is known to be prone to wear failure, it should be those making the claims who provide the conclusive test results or a factual specification to support them.
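For example, to watch the writes actually hitting the card over time (the device name assumes the Pi's SD slot; the interval is arbitrary):

iostat -d -m mmcblk0 60   # MB written to the SD card, reported every minute
iotop -ao                 # accumulated I/O per process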

Even in this thread we again get claims and figures, from those who are selling things, that actually mean nothing.
No actual I/O data is given, just emphasis on numbers that supposedly represent SD writes, while it is highly likely that a certain amount of in-memory and cache operation is taking place.
No actual code or I/O trace is given, and just like the empty words of sales blurb, the quoted figures mean nothing when no further details are provided.

It is quite fanciful to expect me, who is making no claim and selling nothing, to disprove their claims with the factual tests that their own claims fail to provide.
It is quite simple; all that is needed is logic: even with cheap-and-cheerful SD, moving working directories into RAM negates SD block wear.

In Linux any directory can be moved elsewhere via a bind mount, and RAM can then be mounted in its place.
Copy the data over on start, and copy it back on stop.
That is pure logic, and with zram, depending on the input and the algorithm used, compression can run from roughly 220% to 400%+, stretching your RAM much further.
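As a concrete sketch of that logic, using /var/log and plain tmpfs for simplicity (zram-config does the equivalent with a compressed zram device instead; the paths and size are illustrative):

sudo mkdir -p /opt/log.bind
sudo mount --bind /var/log /opt/log.bind        # keep a handle on the on-disk copy
sudo mount -t tmpfs -o size=50m tmpfs /var/log  # a RAM disk takes its place
sudo cp -a /opt/log.bind/. /var/log/            # seed the RAM copy
# ...and on stop, ship the logs back to persistent storage:
sudo cp -a /var/log/. /opt/log.bind/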

Practically every database I know of employs some form of buffering, caching and in-memory processing, with data compression techniques; even a low level of competence would make one aware of this.
I am afraid we have further snake oil in action, as figures are presented as raw disk writes with not even simple iostat output as backup.
You just cannot trust claims once that trust is broken by empty ones, and any further info must be greeted with scepticism; yet somehow the onus is reversed onto you to provide proof?!

Like I say, with most IoT/Maker projects it is quite simple and easy to move high-write operations into durable RAM; all that is needed is simple logic to make that proof factual, and you don't need to buy anything.

zram-config was created by an IoT/Maker making no claims and selling nothing, just frustrated that there seemed to be a lack of good, simple zram config utilities.
It is still a bit fresh, so please post any issues on the repo; I have been pressing Ubuntu to update what is a really poor version of zram-config that in reality should be named zram-make-some-illogical-swaps.

In this discussion we have not covered anything new or earth-shattering. Virtually every file system working on block storage, such as HDDs, SSDs, SD, eMMC etc., uses file buffers in RAM rather than writing directly to the media. User writes, and typically (but not necessarily) reads, directory updates and so on, go to the associated RAM buffers. Actual block writes occur only when a write crosses a block-sized boundary (the current block is flushed to the physical media and the next block read into the buffer), on file closure, on an explicit flush command, or when dirty buffers are flushed after some interval X, where X could be minutes, hours or days, depending on the application and the user's tolerance for data loss on power failure.
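That flush-after-X behaviour is visible in the Linux write-back knobs (these commands just read the current values; they are not recommendations):

sysctl vm.dirty_expire_centisecs      # how long dirty pages may age before being flushed
sysctl vm.dirty_writeback_centisecs   # how often the write-back thread wakes up
sync                                  # force all dirty buffers to the media now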

Yeah, exactly: there is nothing new, and we are still waiting for extended tests based on actual long-term I/O stats that would give common SD MTBF figures.
Until then there is nothing new in this conversation, and rather than chance application, write frequency and SD 'quality', you can move expected high write volumes into RAM, if you wish, without buying anything and without relying purely on trust.
Also, the caching and in-memory operation of databases is vastly more extensive than that of file systems; that is why they are databases and not just file systems.