Hi Corey,
My recollection of the disk drives I used to work with is that you could assume a block write was atomic (which makes some sense with an actual disk drive). I'm used to write-through vs. write-back, but I'd never heard of copy-on-write either. Regardless, with the MRAM, even using DMA, I don't think we can guarantee atomic writes larger than a byte. Since the MRAM really only writes one byte at a time, you can't guarantee which byte the write stopped on if the processor crashes.
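Something like this is what I have in mind -- just a sketch, with a made-up mram_write_byte() standing in for whatever the real driver interface is:

    #include <stddef.h>
    #include <stdint.h>

    void mram_write_byte(uint32_t addr, uint8_t val);  /* assumed primitive */

    /* A multi-byte record goes out one byte at a time, so a reset
     * anywhere in the loop leaves an unknown prefix of 'len' written. */
    void mram_write_block(uint32_t addr, const uint8_t *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            mram_write_byte(addr + i, buf[i]);
    }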
Perhaps the copy-on-write means "we write a block to one place, and then copy it." If the system crashes during the first write, no data is in its final location. If it crashes during the copy, we still have the original buffer and can redo the copy after the system restarts. (Maybe that is what you were saying in the next-to-last paragraph.)
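In other words, something like this (the names are all made up, not anything from Reliance Edge; the point is that the only write that has to be atomic is the one-byte flag, which the MRAM does give us):

    #include <stddef.h>
    #include <stdint.h>

    /* assumed MRAM primitives */
    void    mram_write(uint32_t addr, const void *buf, size_t len);
    void    mram_read(uint32_t addr, void *buf, size_t len);
    void    mram_write_byte(uint32_t addr, uint8_t val);
    uint8_t mram_read_byte(uint32_t addr);

    void block_update(uint32_t scratch_addr, uint32_t final_addr,
                      uint32_t flag_addr, const uint8_t *data, size_t len)
    {
        mram_write(scratch_addr, data, len);  /* stage the full block      */
        mram_write_byte(flag_addr, 1);        /* atomic: copy now pending  */
        mram_write(final_addr, data, len);    /* copy into final location  */
        mram_write_byte(flag_addr, 0);        /* atomic: copy complete     */
    }

    /* Run at startup: if we died between the two flag writes, the staged
     * copy is known-good, so just redo the copy. */
    void block_recover(uint32_t scratch_addr, uint32_t final_addr,
                       uint32_t flag_addr, uint8_t *tmp, size_t len)
    {
        if (mram_read_byte(flag_addr)) {
            mram_read(scratch_addr, tmp, len);
            mram_write(final_addr, tmp, len);
            mram_write_byte(flag_addr, 0);
        }
    }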
BTW, have you seen how much this thing costs? The fact that they require you to get a quote makes me think it is fairly high.
73,
Burns Fisher, WB1FJ *AMSAT(R) Engineering -- Flight Software*
On Mon, Dec 12, 2022 at 3:39 PM Corey Minyard [email protected] wrote:
On Mon, Dec 12, 2022 at 10:45:25AM -0500, Burns Fisher (AMSAT) wrote:
Corey, have you been able to get any of the data about this file system? In particular, I'm wondering if there is a minimum size for the MRAM to make it useful?
Nothing is documented except code and memory sizes, but for the MRAM sizes we are talking about there will be no issue. For a normal filesystem you need a few superblocks and base inodes at minimum, and assuming a 1K block size that's <10KB. It would be very odd for the minimum size to be beyond that. They also don't document a minimum block size; 1K is pretty standard, so hopefully it's not bigger than that.
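Back-of-the-envelope, with block counts that are my guesses rather than anything from the Datalight docs:

    #define BLOCK_SIZE    1024u  /* assumed 1K block size              */
    #define SUPERBLOCKS   2u     /* primary + backup                   */
    #define BITMAP_BLOCKS 1u     /* free-space map for a tiny volume   */
    #define INODE_BLOCKS  4u     /* root directory + a few base inodes */

    /* (2 + 1 + 4) * 1024 = 7168 bytes, under the 10KB figure above */
    #define MIN_FS_BYTES  ((SUPERBLOCKS + BITMAP_BLOCKS + INODE_BLOCKS) \
                           * BLOCK_SIZE)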
The bigger question in my mind is the filesystem overhead: how much space is required, beyond the data itself, to manage the filesystem (called "metadata" in filesystem parlance). From what I can tell, this is not filesystem technology I'm used to. The modern high-reliability filesystems I've used are journal-based, and I understand how those work.
Reliance Edge is a "Copy-on-Write" filesystem. That's a new term for me. Well, the term is commonly used in virtual memory systems, but that's something completely different. From what I can glean from the docs, they appear to do all the data writes in unused areas, update the metadata and free the old data. That's going to add overhead, but probably not a big issue. A journal would add just as much.
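My reading of it, boiled down to a sketch (this is my interpretation, not Reliance Edge code; alloc_block() and the other helpers are made up):

    #include <stdint.h>

    uint32_t alloc_block(void);                           /* pick an unused block  */
    void     write_block(uint32_t blk, const void *buf);  /* write one data block  */
    void     update_pointer(uint32_t slot, uint32_t blk); /* point metadata at blk */
    void     free_block(uint32_t blk);

    void cow_update(uint32_t slot, uint32_t old_blk, const void *new_data)
    {
        uint32_t new_blk = alloc_block();   /* never touches live data           */

        write_block(new_blk, new_data);     /* crash here: old data still valid  */
        update_pointer(slot, new_blk);      /* the commit point                  */
        free_block(old_blk);                /* old copy only released at the end */
    }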
It is unclear to me how they achieve atomicity with this scheme. It's also not clear how they keep from losing blocks on a failure, unless the filesystem startup code handles this (probably the case). With a journal it's clear how that's all done.
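For contrast, the journal approach I'm used to, boiled way down (again made-up helpers, not any particular filesystem):

    #include <stddef.h>

    void journal_append(const void *rec, size_t len); /* describe the update       */
    void journal_commit(void);                        /* single atomic commit mark */
    void apply_update(const void *rec, size_t len);   /* write the data in place   */
    void journal_clear(void);                         /* mark update fully applied */

    void journaled_update(const void *rec, size_t len)
    {
        journal_append(rec, len); /* crash here: uncommitted journal is ignored */
        journal_commit();         /* crash after: mount-time replay redoes it   */
        apply_update(rec, len);   /* must be idempotent so replay is safe       */
        journal_clear();
    }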
I've been busy with other things and I need to get back on this and document what I've found in the git repository.
-corey
73,
Burns Fisher, WB1FJ *AMSAT(R) Engineering -- Flight Software*
On Fri, Nov 18, 2022 at 1:25 PM Corey Minyard [email protected] wrote:
On Fri, Nov 18, 2022 at 11:25:29AM -0500, Rich Gopstein wrote:
I found this while poking around. They have both GPL and commercial options.
https://www.freertos.org/FreeRTOS-Plus/Fail_Safe_File_System/Reliance_Edge_F...
That's interesting. This is really what we need, since it's transactional. I'll look into it some more.
-corey
pacsat-dev mailing list -- [email protected]
View archives of this mailing list at https://mailman.amsat.org/hyperkitty/list/[email protected]
To unsubscribe send an email to [email protected]
Manage all of your AMSAT-NA mailing list preferences at https://mailman.amsat.org