ChironFS is a FUSE-based filesystem whose main purpose is to guarantee filesystem availability through replication. But it isn't a RAID implementation: RAID replicates DEVICES, not FILESYSTEMS.
Imagine two servers, Diamond and Sapphire, offering their services: Diamond is a web server and Sapphire is a file server. Dog and Cat are the clients.
What happens if Diamond becomes unavailable? It means no web server. It means the phone will be ringing soon, your users and your boss want your head, blah, blah, blah... a bad day!
Yeah! You have made your backups. Good! You even have hardware backups. So you just have to restore everything and your day will be good again! But how much time will you spend restoring all that stuff?
You can make Sapphire a hardware backup of Diamond (and yeah, Diamond can be a backup of Sapphire too). Then you can use Heartbeat to make each server monitor the other and act as a temporary replacement for the unavailable one.
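As a minimal sketch, a Heartbeat v1-style setup for this pair could look like the fragment below (the hostnames match the example, but the network interface, virtual IP address, and resource name are illustrative assumptions, not part of the original setup):

```
# /etc/ha.d/ha.cf -- same file on both Diamond and Sapphire (illustrative)
node diamond
node sapphire
bcast eth0            # link used for the heartbeat messages
auto_failback on      # give resources back when the owner recovers

# /etc/ha.d/haresources -- also identical on both nodes
# Diamond normally owns the virtual IP and the web server;
# Sapphire takes them over if Diamond dies.
diamond 192.168.1.100 apache
```

This only fails services over; as the text explains next, it does nothing about the data written on the dead server.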
Things are getting better! All the services are already installed and configured on all the servers: Diamond is prepared to act as a file server and Sapphire can be the web server.
Everything is done automatically! Except the data... Data written on one server is not available to the other until you do your backups.
Now, we will introduce more servers into the network. Mars, Venus and Mercury will be simple file servers. I said SIMPLE! They will have just a basic configuration. They will serve files only to Diamond and Sapphire. Dog and Cat are not allowed to talk to them. They will be servers' servers. They can serve their files with any protocol you want (NFS, SSH, etc.); the only requirement is that Diamond and Sapphire mount the filesystems served by Mars, Venus and Mercury.
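For example, Diamond and Sapphire could mount the back-end exports like this (a configuration sketch: the export paths and the choice of NFS and SSHFS are illustrative assumptions; any mountable protocol works):

```
# /etc/fstab on Diamond and on Sapphire (illustrative export paths)
mars:/export    /mars    nfs   defaults  0  0
venus:/export   /venus   nfs   defaults  0  0
# protocols may even be mixed, e.g. SSHFS for the third server:
# sshfs#backup@mercury:/export  /mercury  fuse  defaults  0  0
```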
So, this is the point where ChironFS starts its job. Diamond and Sapphire mount the filesystems served by Mars, Venus and Mercury. Let's say that they will be /mars, /venus and /mercury. Then, you mount the ChironFS as a combination of /mars, /venus and /mercury using the /chironfs mount-point. From now on, every write in the /chironfs subtree will be echoed to /mars, /venus and /mercury. Any read from /chironfs will be made from only one of the servers' servers (aiming toward load balance).
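The mount itself can be sketched like this (the `=`-separated replica list follows the ChironFS man page; check the exact syntax shipped with your version):

```
# Mount /mars, /venus and /mercury as one replicated tree on /chironfs
chironfs /mars=/venus=/mercury /chironfs

# ...or the equivalent /etc/fstab entry:
# chironfs#/mars=/venus=/mercury  /chironfs  fuse  defaults  0  0
```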
At this point you realize that you are free from single points of failure! If one of the servers' servers becomes unavailable, you can get the data from the others: ChironFS will detect that the data is unreachable on that server and try to retrieve it from another. If Diamond or Sapphire becomes unavailable, you can use Heartbeat to bring up the automatic temporary replacement server, but this time you will have access to all the data written by the dead server right up to the moment it died. No need to restore anything!
But why use ChironFS? Why not just use RAID over some network block device? Because a network block device is still a block device: if Diamond mounts it in RW mode, no other server will be able to mount it in RW mode at the same time. And this was just a simple example; your real network may have many servers and offer a variety of services. Keeping everything running can become a real nightmare!
• No single point of failure in your network;
• No downtime on a server crash;
• The data storage may be anything that can be mounted on your server, and different protocols may even be used at the same time;
• You can turn off any server for maintenance and its services will still be available;
• It's free (GPLv3 license).
• Some kind of filesystem sharing service up and running;
• Some kind of Heartbeat-style automatic replacement of dead servers. It is not REQUIRED, ChironFS will run without it, but why use ChironFS and then replace dead servers manually?
What's New in 1.0.0 Stable Release:
• This release integrates patches porting ChironFS to FreeBSD and NetBSD, and changes the debug code on the *BSD versions to gather the same system information that the Linux version gets.
• This is needed because the next ChironFS release will need that information in order to resynchronize failed replicas.
What's New in 1.1.1 Development Release:
• This release adds an option to mount a pseudo-filesystem (like /proc) that controls the behavior of the Chiron filesystem being mounted, allowing it to show and change the status of the replicas.
• Dynamically generated Nagios plugin scripts are also provided.
• The howto and man page were updated.
• A bug that made ChironFS fail to determine the correct path to the program that manages its control filesystem has been fixed.