You’re constantly plugging and unplugging (and mounting/unmounting) your flash drive. What can you do to minimize potential data loss?
Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.
The Question
SuperUser reader Peter wants to know what he can do to keep the file system and files of his flash drives intact. He writes:
Is there nothing he can do, or are there preventative steps he can take?
Which filesystem is the most robust? Which technologies or labels (xyz certified, etc) indicate that USB sticks supporting them are less likely to become corrupted? Is there something else to look out for?
The Answer
SuperUser contributor Breakthrough offers the following tips:
Sound advice for ensuring you get the maximum expected life out of your flash drive.
Commonly used file systems like FAT32 or NTFS don’t store any data validation information for your files (only for the file system’s own internal structures). Keep backups of your data, validate the data with checksums (you can generate MD5/SHA1 hashes of your files to check whether any data has been corrupted), and/or store recovery archives.
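As a minimal sketch of that checksum idea, the snippet below hashes a file in chunks so even large files on the drive can be verified without loading them into memory (the file name is hypothetical):

```python
import hashlib

def file_sha1(path, chunk_size=1 << 20):
    """Return the SHA-1 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Record the digest when you copy the file to the drive...
# original = file_sha1("backup.img")
# ...and compare it later to detect silent corruption:
# file_sha1("backup.img") == original
```

Store the recorded digests somewhere other than the flash drive itself, so a corrupted drive can’t also corrupt your reference values.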
And lastly, regardless of the filesystem, you should always properly unmount the drive. This ensures that any existing file reads/writes are completed, and any read/write buffers have been flushed.
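To illustrate what “flushing the buffers” means in practice, here is a small sketch (the path is hypothetical): `flush()` empties the program’s own write buffer, and `os.fsync()` asks the operating system to push its cached data out to the device.

```python
import os

def write_durably(path, data):
    """Write data and ask the OS to push it to the device before returning."""
    with open(path, "wb") as f:
        f.write(data)
        f.flush()              # empty the program's userspace buffer
        os.fsync(f.fileno())   # flush the OS page cache to the device
```

Note that even after `fsync`, the drive’s own internal cache may still hold data, which is why properly unmounting/ejecting the drive remains the safe final step.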
Which filesystem is the most robust?
Robustness comes at a price: compatibility. Arguably, you’d want a file system with built-in data validation and checksumming (or redundant data) like ZFS, but that isn’t very portable to Windows/OS X. If portability is a concern, you might want to try exFAT, which appears to be supported in most major operating systems out of the box or with some slight configuration.
Which technologies or labels (xyz certified, etc) indicate that USB sticks supporting them are less likely to become corrupted?
Anything that keeps the flash memory alive longer, most notably wear leveling and over-provisioning. If the drive supports wear leveling, a larger drive will have more spare sectors available as some wear out.
At the end of the day, flash memory doesn’t last forever. All current flash memory has a limited number of read/write cycles, which inherently causes data loss over time. You can mitigate this risk by taking regular backups, and validating your data with checksums to determine when a file has been corrupted.
It’s also possible to use a filesystem with built-in data integrity and recovery, but these are uncommon in many non-UNIX environments as of this writing. They may also be slower, and can actually wear out the drive faster, due to the overhead of storing additional checksums and redundant information for each file.
There’s a solution for each case; you just need to weigh the portability, integrity, and speed trade-offs.
Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.