The multisend project uses multicast to send files to multiple machines at the same time.
I wrote it for syncing VMware images out to a bunch of lab machines; 8 GB * 15 machines takes a while over HTTP or scp. It listens on the local machine and uses ssh to start the receiving end, which connects back for the TCP stream.
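The win with multicast is that one send reaches every receiver that has joined the group. A minimal, hypothetical Python demo of that idea, pinned to the loopback interface so it runs on a single host (the group address and port are arbitrary; this is not multisend's code):

```python
# One datagram sent to a multicast group is seen by every socket that joined
# the group -- the same principle multisend uses to feed many lab machines
# from a single stream. Hypothetical standalone demo, loopback only.
import socket
import struct

GROUP, PORT = "239.255.0.1", 50007  # arbitrary example group/port

# Receiver: bind the port and join the group on the loopback interface.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
recv_sock.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("127.0.0.1"))
recv_sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
recv_sock.settimeout(2)

# Sender: route the datagram via loopback and loop it back so local
# group members receive their own host's traffic.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                     socket.inet_aton("127.0.0.1"))
send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
send_sock.sendto(b"block 0 of the image", (GROUP, PORT))

data, _ = recv_sock.recvfrom(1024)
print(data.decode())
```

With more receivers joined to the same group, the single sendto() above would reach all of them, which is why the 8 GB image only has to cross the wire once.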
It's pretty functional; the default mode takes a source and a destination, so it works like scp with a bunch of targets. Or you can use it in sync mode (-S), which sends files to the same location as on the local machine. That's very handy for setting up one lab machine, then syncing all the relevant files out to the rest of the lab with a single command.
It sends md5sums and doesn't overwrite anything until the transfer has been verified, so it'll never corrupt data. It also falls back to TCP if for some reason multicast isn't working.
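The verify-before-overwrite step can be sketched roughly like this (hypothetical helper and names; multisend's actual implementation differs):

```python
# Receive into a temp file and only move it over the destination once the
# md5sum matches what the sender advertised; on a mismatch the old file is
# left untouched. Hypothetical sketch, not multisend's code.
import hashlib
import os
import tempfile

def commit_if_verified(payload: bytes, expected_md5: str, dest: str) -> bool:
    """Write payload next to dest; replace dest only if the md5 checks out."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(payload)
        if hashlib.md5(payload).hexdigest() != expected_md5:
            return False  # corrupt transfer: dest is never touched
        os.replace(tmp, dest)  # atomic rename on POSIX
        return True
    finally:
        if os.path.exists(tmp):
            os.remove(tmp)  # clean up only if we didn't rename
```

Because the rename is atomic, a receiver either keeps its old copy or gets a fully verified new one; there is no window where a half-written file sits at the destination path.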
It could use a little more polish, but mostly it needs some real documentation; I haven't worked on it in a while, and I'm the only person who really uses it. At some point I'll include some example scripts that make it easy to use.
Usage: multisend [OPTION] localfile remotefile
or: multisend [OPTION] localfile remotedir
Copies local file to remote file, or multiple files to remote directory
-t <timeout> specify a timeout for clients to connect, in seconds
-n <nclients> wait for a specific number of clients to connect before transferring
-r <rate> rate to transfer at, in kb, mb, or gb per second
-S, --sync sync mode
-s, --script file to execute to connect to clients
-g, --group multicast group to use
-a, --archive same as -dpR
-p preserve ownership, permissions, timestamps
-R, --recursive copy directories recursively
-x stay on this file system
-v, --verbose give more info
You must specify either -t or -n.
You should also specify a rate; otherwise most packets will be dropped.
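For reference, a rate argument like "10mb" could be handled along these lines (a sketch with assumed semantics; multisend's actual option parsing may differ):

```python
# Hypothetical parser for -r values such as "500kb", "10mb", or "1gb":
# returns the rate in bytes per second.
UNITS = {"kb": 1024, "mb": 1024 ** 2, "gb": 1024 ** 3}

def parse_rate(text: str) -> int:
    """Convert a rate string with an optional kb/mb/gb suffix to bytes/sec."""
    text = text.strip().lower()
    for suffix, factor in UNITS.items():
        if text.endswith(suffix):
            return int(float(text[: -len(suffix)]) * factor)
    return int(text)  # bare number: assume bytes per second

print(parse_rate("10mb"))  # 10485760
```

The sender would then pace its multicast writes to stay under this budget, since UDP receivers that can't keep up simply drop packets.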
What's New in This Release:
· The code is functional and has been tested, but could use more polish.
· There may be latent bugs, but it will not corrupt data.