How can microSD flash memory wear be minimized?

I agree with @Sandgroper.

I have a number of RPi systems logging data continuously, at 1-minute intervals, into small SQLite3 databases on the microSD cards. Old data is deleted automatically on a rolling 7-day cycle. The filesystem is regular Linux ext4. I am always careful to disable the paging file to avoid excessive writing to the microSD card.
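For anyone curious what the rolling deletion can look like, here is a sketch using the sqlite3 CLI from cron; the database path, table name "readings" and timestamp column "ts" are hypothetical stand-ins, not my actual schema:

```shell
#!/bin/sh
# Prune rows older than 7 days from a hypothetical logging database.
DB=/home/pi/data/log.db

sqlite3 "$DB" <<'SQL'
DELETE FROM readings WHERE ts < datetime('now', '-7 days');
-- Deliberately no VACUUM here: VACUUM rewrites the whole file, which is
-- extra flash wear; freed pages are simply reused by later inserts.
SQL
```

Run it daily from cron, e.g. `0 3 * * * /home/pi/prune.sh`.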

I always purchase major-brand microSD cards, usually SanDisk, and always purchase cards much bigger than I need, e.g. 32 or 64 GBytes. I believe this provides both wear-levelling and 10x or 20x the space I really need, giving the wear-levelling room to work.

No card failures yet, some systems have been running for 2+ years.

Good idea to use a larger card to minimise wear. A bigger card, however, adds substantially to the total price (until recently, at least).
Currently SSDs can be had for less than €25 (Kingston A400 SATA 120GB), which are a better alternative, but they make the RPi more bulky (and apparently no RPi cases exist for that yet).

Can that be done without any serious impact on the operation of the RPi?
Any side effects?

FYI, I experienced a Samsung Evo 16GB (major-brand) card in an RPi that got corrupted/damaged during normal operation.


Yes and no…

Swap was important when memory was scarce; it tends to be a thing of the past.
So if you have “enough” memory you don’t need swap.

If you don’t have a swap partition (or you have one and it is full), then the kernel will kill memory-hungry processes to survive, which is usually bad. On the other hand, if you have to swap on a device like an RPi, performance goes through the toilet, so it won’t help either.

On a gateway, memory should not be an issue, so you are better off without swap.

If you absolutely want to play it safe, you could have swap but reduce the swappiness to its minimum, so that it will only swap when there is no other option.
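On Raspbian that comes down to a single sysctl; the exact value is a judgement call (on modern kernels 0 means "only swap to avoid out-of-memory", 1 is the conservative minimum):

```shell
# Keep swap available but tell the kernel to avoid it until memory is
# nearly exhausted.
echo 'vm.swappiness=1' | sudo tee /etc/sysctl.d/99-swappiness.conf
sudo sysctl -p /etc/sysctl.d/99-swappiness.conf

# Verify the running value:
cat /proc/sys/vm/swappiness
```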

(I personally run without swap on all my RPis)


Hi @bluejedi, thanks for your post.

I’d like to point out that the RPi deployments I do are in a commercial/industrial environment, so I have to cost in the time of a skilled person to make complex per-device changes during provisioning, plus the cost of a technician visiting a failed system. The “substantial increase” in the total price of a much larger microSD card is actually, to me, a substantial cost reduction, as I can use regular Raspbian Lite OS with only simple scripted firstboot build config changes. A few minutes of skilled time is needed at provisioning to set credentials, etc., and I get long-term reliability and hence reduced technician time.

I have worked in the offshore oil industry for 30+ years, and if a small capex increase at the start can reduce long-term opex, it is always done. This capex/opex trading is very normal in commercial/industrial environments. RPi + Raspbian Lite is a great combination in this environment: it is now deployed in huge numbers and therefore very well proven and reliable, as well as having superb systemd and watchdog capabilities to maintain application and OS availability.

Regarding disabling paging/swap (/sbin/dphys-swapfile swapoff in /etc/rc.local): there appear to be no side-effects as long as everything runs within physical memory, so the system is configured and tuned for embedded/no-graphics use.
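For completeness, the rc.local approach can also be done through the package's own service, which keeps the swap file from ever being recreated; both ways are valid on Raspbian:

```shell
# One-off, as in /etc/rc.local above:
sudo /sbin/dphys-swapfile swapoff

# Permanent alternative: stop the service now and keep it from starting at boot.
sudo systemctl disable --now dphys-swapfile

# Verify: the Swap line should show 0B total.
free -h
```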


Hello. :slight_smile:

Check! aaaaaaaaannd check! Here we go on RasPi with SD cards.

  1. Can we agree on a basic configuration? Meaning:
  • RasPi 3 / RasPi 3+ or RasPi Zero. Those are the models that are current now, in 2018.
  • Raspbian Lite (Stretch) for the OS
  • an SD card of 8GB or larger, Class 10, from a name-brand producer like SanDisk

I found another guide on converting the filesystem on the RasPi to F2FS; you can find it here.

A statement from there:

Basically, F2FS was a file system designed from the ground up for NAND-based devices. Motorola even started using F2FS for their Android smartphones. It’s designed to help reduce wear on the device, and improve performance on this type of storage medium. Since the Raspberry Pi was designed to run off of an SD card, it makes it a perfect candidate to play around with F2FS. It’s currently supported in the 4.4.y kernel so no need to compile your own kernel this time.

So why don’t we use it for our DIY GWs?
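For anyone wanting to experiment, here is a rough sketch of putting F2FS on a spare data partition; the device name /dev/sdb1 is only an example, and converting the root filesystem in place is more involved (that is what the linked guide covers):

```shell
# Install the F2FS userland tools (Raspbian package: f2fs-tools).
sudo apt-get install f2fs-tools

# Format an example partition -- THIS DESTROYS ITS CONTENTS.
sudo mkfs.f2fs /dev/sdb1

# Mount it; 'discard' enables online TRIM if the card supports it.
sudo mkdir -p /mnt/data
sudo mount -t f2fs -o discard /dev/sdb1 /mnt/data

# Equivalent fstab entry for mounting at boot:
#   /dev/sdb1  /mnt/data  f2fs  defaults,discard,noatime  0  2
```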

Some questions that are not clear to me:

  • Swap? yes or no?

Ok, @Amedee gave good input on that. Thank you!

  • Put /tmp, /var/log or the like into RAM?

In some older contributions, from 2015 or so, some posters/bloggers advise not giving the full capacity of the block device to the filesystem. There should be a “spare area” at the end, sized at roughly 10% of the device capacity, to give the controller a chance to retire buggy/defective cells.

  • Is this right?

@cultsdotelecomatgmai Is that what you meant about space for wear-levelling?

Later in the guide the author writes, about the discard entry for fstab:

If your device does not support TRIM, you can remove the discard option. Typically, most SD cards/eMMC/SSDs will support this command, while USB flash drives may not. You’ll know during the mkfs.f2fs process as it will tell you whether your device does or not.

  • Is this right?

Hopefully I’ll get the answers.

Greets and have a nice day.

Hi @Nordrunner, for clarity, here is what I use and what it looks like:

RPi 3 model B+ and SanDisk microSD card (£14 on Amazon) per photo.


The latest Raspbian Stretch Lite image is flashed onto the microSD card using dd. On first boot the system automatically resizes the Linux partition to fill all the available space. The result is:

$ sudo fdisk --list /dev/mmcblk0
Disk /dev/mmcblk0: 59.5 GiB, 63864569856 bytes, 124735488 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x04c9550d

Device Boot Start End Sectors Size Id Type
/dev/mmcblk0p1 8192 96663 88472 43.2M c W95 FAT32 (LBA)
/dev/mmcblk0p2 98304 124735487 124637184 59.4G 83 Linux

$ sudo df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/root 61341180 1220716 57596328 3% /
/dev/mmcblk0p1 43539 22498 21042 52% /boot


No further changes to disk/partition layout, types, etc.

This is very simple to implement and, in my experience, very reliable long-term with moderate write rates. I like simple.

That’s a very interesting price for 64GB.
Do you know what its performance is in 4K random and sequential reads and writes?

Results for the SanDisk Ultra 64GB are unfortunately not included here:

Hi @bluejedi, I have not done performance tests.

I can report that the dd command to write the 1.86GByte Raspbian Stretch Lite image to the microSD card on a CentOS 7 laptop takes 62 seconds, which works out to about 30MBytes per second. The microSD card was in an elago microSD/USB adapter, so it appeared on the laptop as /dev/sdb. I don’t follow performance metrics, but that looks fast compared to the metrics in the list from your post.


Hi @cultsdotelecomatgmai,

Performing the dd write on an SD card in a microSD/USB adapter on a laptop is significantly different from doing the same test with the same SD card in the SD slot of a Raspberry Pi, so these will not make a reliable comparison.

The dd write test performs sequential writes of large files, but much more representative for the operational performance are the 4k random reads and writes. A high throughput for sequential writes does not automatically imply a high throughput for 4k random reads and writes (and vice versa). A good example is the Samsung Evo+ 32GB in the list which shines in 4k random reads/writes but not in sequential writes.

I don’t think USB sticks have wear-leveling.

Mainly it’s that, with flash in budget IoT, long-term maintenance-free operation can be disrupted by flash wear.
SD cards, unless they are ‘industrial’, don’t have any wear levelling and are prone to block wear.

Swap partitions are the worst (hence why Raspbian uses a swap file instead), but a swap file is a close second.
Then come logs, as the same area gets a lot of writes; long-term you are playing block-wear lotto when that flash area is exhausted.

Zram, using compressed memory, can be an effective alternative to swap.
Log2Ram creates a RAM disk for log operations.

So both take the most frequent write operations and move them off the SD card and into RAM, often using compression techniques to help minimise RAM usage.
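For the record, the bare kernel interface those tools wrap looks roughly like this; the sizes are illustrative, and lz4 is only available if your kernel supports it (read the comp_algorithm file to see the choices):

```shell
# Load the zram module with a single device.
sudo modprobe zram num_devices=1

# Pick a compressor, then give the device 256 MB of *uncompressed* capacity;
# thanks to compression the real RAM consumed is typically a fraction of that.
echo lz4  | sudo tee /sys/block/zram0/comp_algorithm
echo 256M | sudo tee /sys/block/zram0/disksize

# Turn it into swap with a higher priority than any disk-backed swap.
sudo mkswap /dev/zram0
sudo swapon -p 100 /dev/zram0
```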

The big problem is that much of what is posted as a solution is actually broken, the worst being the zram-config-0.50.deb package; take a look at it before you install.

Even Log2Ram, to a much lesser extent, is flawed: it retains log rotation unnecessarily and writes complete logs every hour on any change.
I started this flash-wear work by thinking, given the number of complaints about log2ram getting full, that a zram backing store would be beneficial.
I didn’t get a response, so I thought I would implement it myself.
I have found that Armbian and DietPi have taken versions of zram-config-0.50.deb and Log2Ram and repackaged them as their own, with much of it tying them to their framework.
Not really a big fan, as neither even seems to feed back to the original.

I have an alternative (zram-swap-config, as it should be named) and also what might be a much better version of log2ram.

Hopefully some others may stop repackaging these as their own, and a wider spread will concentrate maintenance on some common apps in the IoT/Maker arena.
I am not a fan of ‘My precious’; dunno about you Hobbitses.

PS: the two repo branches are on GitHub; if anyone would kindly test them, then maybe we can get them pushed up to master.
Also please fork, push/pull, comment and maybe add some finesse :slight_smile:


I continue to use the most basic of approaches to microSD card wear on RPi with good results:

  • I disable swap (one command, /sbin/dphys-swapfile swapoff, in /etc/rc.local, run at startup).
  • I minimise syslogging (one edit, /etc/rsyslog.conf).
  • I use high-capacity, high-endurance microSD cards, e.g. SanDisk SDSDQQ-064G-G46A (designed for car dash-cams and other video recording, 64GBytes with wear-levelling).
  • Where appropriate, I use a separate high-capacity, high-endurance microSD card in an SD/USB adapter, formatted with an ext4 filesystem, as a data disk.

Result: no extra software, easy to implement, low delta-cost and high availability.
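The first two bullets amount to only a couple of lines. The rsyslog rule shown here is just one common way to cut logging down, and the exact rules you keep will differ per system:

```shell
# 1. Disable swap at startup: add this line to /etc/rc.local (before 'exit 0').
/sbin/dphys-swapfile swapoff

# 2. Minimise syslogging: in /etc/rsyslog.conf, comment out rules you don't
#    need, or raise the threshold of the catch-all rule, e.g.:
#      *.warn  -/var/log/syslog
#    (the leading '-' is the classic "don't sync after every write" marker)
sudo systemctl restart rsyslog
```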


"SD cards unless it is ‘industrial’ don’t have any wear levelling …"

If you go back to my post of Oct 18th in this thread, you will see I debunked this myth. It took me less than 5 minutes to debunk by going to the respective manufacturers’ web sites.


Since the gateway is network connected anyway I log fast moving stuff on my NAS. A small edit in /etc/rsyslog.conf is all that is required (if your NAS has a log server that is).
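For anyone wanting to do the same, the classic rsyslog forwarding rule looks like this; the hostname and port are placeholders for your own NAS:

```shell
# In /etc/rsyslog.conf (or a drop-in under /etc/rsyslog.d/), forward all
# messages to a remote log server: one '@' means UDP, '@@' means TCP.
#   *.*  @@nas.local:514
# Then restart rsyslog so it picks up the change:
sudo systemctl restart rsyslog
```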


SDHC has no wear levelling; with SDXC it is down to the manufacturer, and like cheap SSDs they are DRAM-less.
You didn’t debunk anything: you posted nothing that says otherwise, only that you had been on websites that say otherwise, without posting any details.
That isn’t debunking, but feel free to actually post some real info.

Anyway, I have done an all-in-one zram-config based on an /etc/ztab that allows any number of swaps / zram-backed dirs, or a /var/log with log shipping to persistent storage.
It should help with block wear on SD cards, as even if writes are averaged on some SDXC cards there is still wear that can be mitigated via zram.
It should also provide near-RAM-speed working directories.
It also increases memory via compressed swap; no wear is just one facet of its various uses.

sudo apt-get install git
git clone
cd zram-config
sudo sh

As I said in my previous post, it took me less than 5 minutes to debunk. I assume it took you more than 5 minutes to write your post; a pity you did not invest that time in doing research.

Want proof without having to do your own research? Check out this SanDisk document, or just go to the websites of the three vendors I mentioned previously and read for yourself:

1.5.4 Wear Leveling
Wear leveling is an intrinsic part of the erase pooling functionality of cards in the
SanDisk microSD Card Product Family using NAND memory.

That is exactly it. Please do actually drill down and get specific information on the wear levelling, because there is no wear-levelling spec; even a bad-block list can be termed “wear levelling” by manufacturers.
You can carry on quoting; as for me, I am finding it quite amusing that you believe the general ‘snake oil salesman’ blurb about what are extremely cheap, throwaway peripheral devices.
Even the industrial cards have static wear levelling that is itself prone to wear and is light years away from the static wear levelling of a decent consumer SSD.

It’s hilarious what you are saying, which basically boils down to: the words appear on the manufacturer’s site, but when you drill down there is an absolute lack of any technical detail, or at least a spec.
It’s hilarious that you should think cheap throwaway devices are even designed for, or suitable for, what can be extremely high-frequency, high-count writes.
Flash, especially SD, serves a specific market; even though write-to-failure counts have massively increased, wear is still an issue, as manufacturers push storage size and speed in a price compromise for a very specific application and market.

Are we talking about most devices failing in a month or even a year? With many cards now, probably not. But it is still very dependent on the application, and IoT devices can be a real pain to access if they break.
Do you have to use relatively expensive ‘industrial’ cards, or ones labelled with words such as ‘extreme’ that in reality might be several-thousand-write flash backed by a write-averaged, hardened block that itself can take hundreds of thousands of writes? Probably not.

eMMC has far better wear levelling and write tolerance, and NAND SSDs are generally on a different planet. The manufacturer sales pages you quote on SD wear levelling are empty cul-de-sacs: 99.99% lack any corroborating technical spec of any kind, or any functional explanation of what wear levelling is employed.

Can you use cheap SD cards, and does SD need high-grade wear levelling and high write-count durability? The answer is yes and no, as that is not what it is designed for; it is that simple, even if there is one born every day, apparently.
In most IoT/Maker projects a Linux image is deployed where often a single application is in operation, which can even have a single directory where the bulk of writes take place, or at least a root directory that covers the majority.

Am I saying this covers everything, that all applications need zram, in the same way you are quoting ‘wear levelling’?
No, as I am aware of the plethora of differences and uses, but in many applications it can be extremely beneficial simply not to chance what you don’t need to chance, by moving high-write-frequency areas to volatile, durable RAM.
It can be extremely beneficial to performance to move from what is essentially still a slow SD card to near-RAM-speed zram.
Do you have a memory-intensive application and a small memory footprint? Then memory compression could help massively.

Am I saying you need to buy anything, buy the best, that you need industrial cards? Nope, I am saying the opposite: you can actually use the dirtiest and cheapest, as it is extremely easy to employ zram.
It is a rather nifty kernel module, by the kernel developers, and all you need to do is configure it. It can be extremely beneficial for IoT/Maker devices: it can massively help with flash wear, give near-RAM-speed drives and greatly increase memory pools.
But unlike the manufacturer sales sites, I am not trying to sell you buzzwords with no meaning.

If you have an IoT/Maker device that may have medium- to high-frequency writes taking place, and you want to place and forget with no maintenance for many years, then install the above.
It is deliberately created to make it extremely easy to shift logs, run a few extremely durable high-speed directories and have a swap device, all from a simple single config utility.
We are talking low-end/low-cost IoT devices, like IoT generally should be, and that is what it is for. It is very dependent on the application, and the choice is yours: you can believe some sales blurb, or go with a more belt-and-braces approach and use volatile, resilient RAM more productively.

A little bit of knowledge can be an extremely dangerous thing, and when it comes to SD wear levelling, that is obviously all you have.
Zram can be extremely effective for many applications, and you will not find it on a manufacturer’s sales web site.
You don’t need to install additional software to use it, and you lose no logging and need no special devices; all zram-config does is configure and set up zram for those less tech-savvy than those who can write and implement a boot script themselves.

Hi @stuartiannaylor, I’m a volume user of microSD ‘snake oil’ and I’m interested to know your views on how I’m getting such good results.

I build industrial edge-computing systems with LoRaWAN-uplink devices using RPi with microSD block storage. The data needs to be persistent through reboots, so I use SQLite3 databases on 64GByte SanDisk high-endurance microSD/ext4 block storage. The databases are small, typically 50KBytes, with up to 5,000 SQL inserts, updates and deletes per day. The database size remains constant. Using the hdparm --fibmap command I can see that the SQLite3 databases are staying in the same ext4 Logical Block Addresses (LBAs), so SQLite3 and ext4 are not moving the database files around on the block storage.
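For anyone wanting to reproduce that check, this is the sort of invocation involved; the path is a placeholder for wherever your database lives:

```shell
# Print the LBA extents backing a file (requires root). Run it again days
# later and diff the output: identical begin_LBA/end_LBA values mean ext4
# is not relocating the file on the card.
sudo hdparm --fibmap /path/to/database.sqlite3
```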

The days, weeks and months go by and turn into years, and the systems work. I think the ongoing reliability is due to efficient wear-levelling within the SanDisk microSD storage devices, per the published SanDisk documents.

However… I am always interested in alternative views so, @stuartiannaylor, I would really appreciate your views on how this is working if microSD wear-levelling is ‘snake oil’ and fake.
