Hey saspus,
uptime needs to be close to 100% for them to receive the payout for the contract, so any longer disconnection or leaving the network basically voids the contract.
So the user only pays if the contract is fulfilled, which means the provider stores the data and keeps it accessible on demand.
Checksumming is done in IPFS as well as in Filecoin itself:
- IPFS addresses data by the hash of all of its chunks, so it's guaranteed to be the same file you requested when it comes out of the IPFS daemon (see the sketch after this list).
- Filecoin doesn't tolerate bit rot either: if the data is altered, the deal is not fulfilled.
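To make the content-addressing idea more concrete, here's a toy sketch in Python. It is not the real CID algorithm (IPFS builds a Merkle DAG and encodes the root hash as a multihash CID), just the "the address is the hash of the chunks" principle:

```python
import hashlib

def toy_content_address(data: bytes, chunk_size: int = 256 * 1024) -> str:
    """Toy illustration of hash-based addressing: split the data into
    chunks, hash each chunk, then hash the list of chunk hashes.
    This is NOT a real IPFS CID, just the underlying idea."""
    chunk_hashes = []
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        chunk_hashes.append(hashlib.sha256(chunk).digest())
    # The address of the whole file is derived from all chunk hashes,
    # so flipping a single byte anywhere changes the address, and a
    # node can verify every chunk it receives against what it asked for.
    return hashlib.sha256(b"".join(chunk_hashes)).hexdigest()

print(toy_content_address(b"some file content" * 100_000))
```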
IPFS is more about distributing "hot" data, while Filecoin is more about archiving data cheaply on "cold" storage that is automatically checked for consistency from time to time, so you don't have to. And you don't pay people who lose your data.
Both protocols are made by the same community (backed by a startup, Protocol Labs, which raised money to develop Filecoin). IPFS and Filecoin are also being evaluated by the Internet Archive (IIRC) as new backend infrastructure.
IPFS itself has a companion application, IPFS Cluster, which offers a way to make sure certain data stays available on the network. It's kind of like sharing storage requirements with strangers who want to contribute to the cause, or within a company/organization that needs to spread a large amount of data across multiple servers.
There's a (pretty new) API which offers "pinning services" for cluster setups: you "store" the data on your local IPFS node and send a request to the cluster to "pin" it, and the cluster then asks the configured number of cluster members to store it locally.
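A minimal sketch of what a request against such a pinning service could look like, assuming a cluster that implements the IPFS Pinning Service API spec; the base URL, token, and CID below are placeholders:

```python
import requests

# Placeholders: your cluster's pinning-service endpoint and access token.
PINNING_API = "https://cluster.example.com"
TOKEN = "your-access-token"

def pin_to_cluster(cid: str, name: str) -> dict:
    """Ask the cluster to pin an object that already sits on the local
    IPFS node; the cluster then replicates it to the configured number
    of members."""
    response = requests.post(
        f"{PINNING_API}/pins",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"cid": cid, "name": name},
    )
    response.raise_for_status()
    return response.json()  # pin status object, e.g. "queued"/"pinned"

print(pin_to_cluster("QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG", "my-backup"))
```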
There are some "collaboration clusters" where, for example, the project websites are stored. I run one of them (but I'm not affiliated with the project itself in any other way) which offers an Arch Linux package mirror via a cluster of servers. That's kind of neat, as computers which are close to each other (by ping) try to fetch the data directly from each other. This way updates can be received at LAN speed from other computers on the local network where possible, while the rest is fetched from the cluster.
Here’s my project site:
A list of all collaborative clusters on IPFS: https://collab.ipfscluster.io/
The estuary.tech project, on the other hand, offers an API to store data on IPFS locally and then request to have it archived on Filecoin. The data is transferred to their servers and then spread across 6 different Filecoin nodes. The people running estuary.tech select a subset of Filecoin providers, to avoid picking 6 from the same company which could then "disappear" with your data.
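A hedged sketch of that flow, assuming Estuary's /content/add upload endpoint and an API key (the key and file name below are placeholders; check their docs for the current API):

```python
import requests

# Placeholder API key; Estuary issues these to registered users.
API_KEY = "EST-your-api-key"

def archive_on_filecoin(path: str) -> dict:
    """Upload a file to Estuary; it pins the data on IPFS and then
    makes Filecoin storage deals for it in the background."""
    with open(path, "rb") as f:
        response = requests.post(
            "https://api.estuary.tech/content/add",
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"data": f},
        )
    response.raise_for_status()
    return response.json()  # includes the resulting CID

print(archive_on_filecoin("backup.tar.zst"))
```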
Since there are considerable running costs involved in operating Filecoin storage (the Proof-of-Spacetime checks are pretty CPU/memory intensive), it's extremely unlikely that storage deals in general go unfulfilled: providers need to cover their operating costs, which they can only do by fulfilling their deals.
So no, no “best effort” or closet computing involved here.
The price to store data on Filecoin, on the other hand, is pretty cheap, as there are a lot of players around the world offering their spare storage capacity for Filecoin deals:
You're currently looking at $0.0000038 USD to store one GB of data for a year on Filecoin, so about 2 cents per TB per year if you want to store 6 copies. Source: https://file.app/
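Quick sanity check on that number (the rate is a snapshot from file.app and moves with the market):

```python
price_per_gb_year = 0.0000038  # USD per GB per year, snapshot from file.app
replicas = 6
gb_per_tb = 1000  # using 1 TB = 1000 GB

cost_per_tb_year = price_per_gb_year * gb_per_tb * replicas
print(f"${cost_per_tb_year:.4f} per TB per year")  # ~$0.0228, i.e. about 2 cents
```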
Apart from that, if a Filecoin deal fails because the data is no longer provided by one of the 6 Filecoin providers, estuary.tech will fetch the data from the other 5 and create a new deal for you.
Hope I could clear things up a bit