A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.
This is going to sound somewhat weird, and I don't want to fill in too many details until I've fleshed this out completely in my own mind, but I've still got a question to ask. I was wondering what you guys think.
Would there be a way of using BitTorrent (or rather an application based on the BT protocol) to let a user upload a piece of a file from his own system while, at the same time, being forced to download a random piece of the same size from a different file on the other user's system?
Sure, you could write an app to do that; uploading and downloading different files is what web browsers and servers do every day, and you can use BitTorrent to upload and download different files at the same time. You're going to have to ask a more specific question to get a real answer. Now, if you're talking about forcing someone to download a virus while they think they're downloading something else, you'd have to break the SHA-1 hash BitTorrent uses on each piece.
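To illustrate that last point: a .torrent file carries a 20-byte SHA-1 digest for every piece, and clients throw away any piece whose data doesn't hash to the expected value. A minimal sketch of that check (the function name is my own, not from any particular client):

```python
import hashlib

def verify_piece(piece_data: bytes, expected_sha1: bytes) -> bool:
    """A BitTorrent client discards any downloaded piece whose SHA-1
    digest doesn't match the 20-byte hash in the .torrent metainfo."""
    return hashlib.sha1(piece_data).digest() == expected_sha1

piece = b"some downloaded piece data"
good_hash = hashlib.sha1(piece).digest()  # what the metainfo would store

verify_piece(piece, good_hash)             # accepted
verify_piece(b"tampered data", good_hash)  # rejected
```

So substituting different content for a piece means producing data with the same SHA-1 digest, i.e. breaking the hash.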
Friday, March 07, 2008
Caveat: I've not implemented the protocol, but you can read up on it at:
Wouldn't both files have to be in the torrent description being received from the tracker? I'm also not sure that this would "force" you to download the file, since the protocol implies that clients request pieces first; they're not forced upon them.
I suppose you could modify your client to accept "piece" messages that aren't the result of a request, but that would seem to run counter to the protocol's design. You'd have to modify the remote client to send these unasked-for pieces, and modify the local client to accept them and know what to do with them, given that they're not among the pieces listed in the original .torrent file.
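For reference, the request/piece exchange above maps to two length-prefixed messages in the BitTorrent wire protocol (BEP 3): `request` is `<len=13><id=6><index><begin><length>` and `piece` is `<len=9+X><id=7><index><begin><block>`. A sketch of building and parsing them (function names are my own):

```python
import struct

REQUEST, PIECE = 6, 7  # message IDs from the BitTorrent wire protocol

def build_request(index: int, begin: int, length: int) -> bytes:
    # request: <len=13><id=6><piece index><offset within piece><block length>
    return struct.pack(">IBIII", 13, REQUEST, index, begin, length)

def build_piece(index: int, begin: int, block: bytes) -> bytes:
    # piece: <len=9+len(block)><id=7><piece index><offset><block bytes>
    return struct.pack(">IBII", 9 + len(block), PIECE, index, begin) + block

def parse_message(data: bytes):
    # Returns (message_id, payload) for one length-prefixed message.
    (length,) = struct.unpack(">I", data[:4])
    return data[4], data[5:4 + length]
```

Nothing in the framing itself stops a peer from sending a `piece` it was never asked for; it's the receiving client that, following the protocol, only accepts pieces answering its own requests — hence the need to modify both ends.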
But in that situation you have full control over both ends, so you could just write your own protocol.
Friday, March 07, 2008
I don't believe you should even consider doing this from scratch. There are plenty of open-source tools for data synchronization, backup, distributed storage, and the like. As someone commented before, some P2P tools could be a good starting point.
For example, FreeNet seems to have a robust protocol and interesting tools (http://en.wikipedia.org/wiki/Freenet)
I have some experience with this kind of system (I'm a researcher in distributed systems), so if you can state your requirements in more detail, I could give you more precise advice on where to start looking.
Thanks Pablo. My idea is indeed for a distributed backup system. To use the service, a user would zip up the files they wanted backed up, split the archive into multiple pieces, and generate a hash of each one. The user would then connect to a network, and each piece would be uploaded to a random user (with redundancy), while the sender would receive pieces from other users on the network in return. To restore the backup, all the user would need is the hash of each chunk he uploaded. I like this idea because it's distributed and everyone who uses the service plays a part in its upkeep.
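The split-hash-restore part of that idea can be sketched in a few lines. Everything here is an assumption on my part (the chunk size, the choice of SHA-256, and the `split_and_hash`/`restore`/`fetch` names are all hypothetical), and the network side is abstracted behind a lookup function:

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB per chunk; an arbitrary choice

def split_and_hash(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Split an archive into fixed-size chunks; the ordered list of
    hashes is the user's restore key."""
    chunks = []
    for off in range(0, len(data), chunk_size):
        chunk = data[off:off + chunk_size]
        chunks.append((hashlib.sha256(chunk).hexdigest(), chunk))
    return chunks

def restore(manifest, fetch):
    """Rebuild the archive from the ordered hash list, using a
    fetch(hash) -> chunk function backed by the peer network."""
    out = b""
    for h in manifest:
        chunk = fetch(h)
        assert hashlib.sha256(chunk).hexdigest() == h  # integrity check
        out += chunk
    return out
```

One nice property: since chunks are addressed by their hashes, a peer returning corrupted data is detected automatically at restore time.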
I thought that BitTorrent was originally a distributed backup/filesystem. The problem was (as I recall) that the authors wanted to do some sort of online data marketplace (to trade disk space), but it never took off.
Tuesday, March 11, 2008
If you're thinking of what I think you're thinking of (in essence, BitTorrent-for-backup among random users), I can't see how something like this can realistically work.
Obviously you need at least some level of redundancy: each backup chunk must exist in at least X locations. That makes each person's storage requirement (their backup size × X). The bare minimum would be about X = 5, and that's probably still dangerously low. For 100 GB of backup, the average user needs 500 GB of *extra* space for it to work. That's not going to make people happy from the start.
Then there's reliability. As the number of chunks increases, the odds of the whole backup being available decrease. "Oh, use some parity scheme like PAR2?" Up go your storage requirements again. And still no certainty.
Availability? Few computers are on 24/7. "I'm sorry, your restore request will have to wait until Bob comes back from holiday".
The network traffic is obviously X times that of a regular backup system. On top of that, add the traffic and server load of telling clients when they can upload, and what they need to send to someone else because the other users holding a chunk aren't available and its replication needs restoring... ugh.
The whole thing just sounds like a nightmare. Just write a backup app wrapper around Amazon S3 or something.
Wednesday, March 12, 2008
This topic is archived. No further replies will be accepted.