Interesting solution and fortunately I am based in Europe. Are you using SFTP to access? How does the connection time fee work out for you? Is that only being charged when you backup/restore etc.?
I’m using SFTP, and there’s no fee for bandwidth
Little update to this: I managed to overcome the 100 Mbit/s issue by using WebDAV instead of SFTP and setting up a VPN server on a cloud server in the same region as a relay to the storage box. I'm able to get about 50 MB/s at peak that way.
That's a pretty attractive offer if you manage to use an amount of storage close to the allocation, which is very hard to do. Generally, I'd avoid providers that charge a fixed fee regardless of whether you use that space or not.
I am using almost all of it, so that’s proven quite nice
Earlier this year I started a thread with almost the exact same title, so there might be some useful info there if folks are curious:
I ended up going with GCS using their autoclass setting. The advantages of this choice are that you get a high-quality storage provider (Google), you get access to "archive"-level (cold storage) pricing, and you don't have to worry about any settings to make this work. I back up to hot storage, and after it sits for a while, Google just moves it over and charges me less.
I backed up around 100 GB and I'm now in month 4. Currently I have about 2 GB in regional, 5 GB in nearline, and 93 GB in coldline.
This translates to rough costs as follows:
$2 for the first month
$1 for the 2nd and 3rd months
$0.60–0.65 or so for months 4 through 12 (where I am now)
$0.15–0.25 per month after that (my estimate)
Data transfer out of Google Cloud is $0.12/GB, so a restore would cost me about $12 (there are no "retrieval" fees, but these "transfer" fees apply).
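For anyone who wants to play with the numbers, here's a rough back-of-the-envelope sketch in Python. The per-GB prices are assumptions pulled from the public pricing page for one region at the time I looked and will vary by region, so treat it as an estimator, not gospel:

```python
# Rough estimator for GCS storage + restore cost.
# Prices are assumptions (USD); check the current pricing page before relying on them.

PRICE_PER_GB_MONTH = {
    "standard": 0.020,   # regional / "hot"
    "nearline": 0.010,
    "coldline": 0.004,
    "archive":  0.0012,
}
EGRESS_PER_GB = 0.12     # network egress to the internet

def monthly_storage_cost(gb_per_class: dict) -> float:
    """Sum storage cost for a mix like {'coldline': 93, 'nearline': 5, 'standard': 2}."""
    return sum(PRICE_PER_GB_MONTH[cls] * gb for cls, gb in gb_per_class.items())

def restore_cost(total_gb: float) -> float:
    """With autoclass there are no retrieval fees, so a restore is dominated by egress."""
    return EGRESS_PER_GB * total_gb

if __name__ == "__main__":
    mix = {"standard": 2, "nearline": 5, "coldline": 93}
    print(f"storage/month: ${monthly_storage_cost(mix):.2f}")   # roughly $0.46
    print(f"full restore:  ${restore_cost(100):.2f}")           # about $12
```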
There are cheaper options, and if you are routinely uploading a ton of data, this might not be ideal. But for my situation it was a nice solution.
Does it mean that paying more for newer, potentially transient data outweighs the 365-day minimum retention fee? (Meaning: why autoclass and not going direct to archive?)
Any data retrieval at all costs $0.05/GB, and GCS seems to define "retrieval" broadly:
A retrieval fee applies when you read, copy, move, or rewrite object data or metadata
My best guess is that if you set up a new bucket on GCS and designated it as archive class, such that ALL of your interactions with GCS were with that archive-class bucket, then each month you'd have to pay fees for interacting with that data. You would also certainly have to move it out of archive before trying to restore. I wasn't sure exactly how all those costs would add up, and I'd read horror stories about "gotchas" for people trying to use cold storage.
I don’t know for sure how it would all work out, and what kind of fees would hit me if I had gone with just an archive class bucket.
Ok. Makes sense.
There are no gotchas; all fees are spelled out on the pricing page. But generally, if you use archival storage, the expectation is that you plan to never restore, so the cost of retrieving data should be irrelevant.
Except with duplicacy you can't separate the working hot set from the rest of the data: metadata chunks are guaranteed egress if you delete the local cache.
So until that is fixed — I would not recommend using archival storage at all.
It should be pretty easy for duplicacy to separate metadata into a separate folder, so we could set the storage class based on prefix.
No idea why this hasn't been done, years later. Perhaps nobody really wants it.
Fairly recent discussion: Low cost / Archival storage discussion - #5 by saspus
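To make the prefix idea concrete, here's a hypothetical sketch with the google-cloud-storage Python client. It assumes duplicacy kept data chunks under chunks/ and metadata under its own folder (it doesn't today), and a client version recent enough to support the matches_prefix lifecycle condition; the bucket name is made up:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-duplicacy-bucket")  # hypothetical bucket name

# Push data chunks (assumed to live under "chunks/") to ARCHIVE after 30 days,
# while metadata kept under a separate prefix stays in standard storage.
bucket.add_lifecycle_set_storage_class_rule(
    storage_class="ARCHIVE",
    age=30,
    matches_prefix=["chunks/"],
)
bucket.patch()
```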
Yep, that was my approach going in. I never plan to restore. That said, this is a worst-case scenario backup, and I find it helpful to know ahead of time what a restore could cost should I ever need to use it.
Yes, exactly. This is why I didn’t go with a straight archival class bucket. Instead, as an experiment I turned on autoclass so that I could then watch and learn – and see how much data needs to stay in hot storage. The advantage is that rather than paying fees for egress out of hot storage, I just pay a small fee for data that stays in hot storage.
Yes, exactly!
This is why I’m doing autoclass. I’m 99% certain (from reading beforehand, and now watching my data) that I’ll pay LESS with autoclass than I would with just choosing hot storage. For me autoclass allows me to say: well, I’m just gonna do hot storage, but if google wants to move stuff to cheaper storage tiers behind the scenes, and save me money, great.
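For reference, turning autoclass on programmatically is a minimal sketch with the Python client, assuming a recent google-cloud-storage release that exposes the autoclass properties (the bucket name is made up):

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-duplicacy-bucket")  # hypothetical bucket name

bucket.autoclass_enabled = True
# On newer clients you can also pick the terminal class, i.e. how "cold"
# autoclass is allowed to go; ARCHIVE is what gets you the cheapest tier.
bucket.autoclass_terminal_storage_class = "ARCHIVE"
bucket.patch()
```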
Agree, 100%! If duplicacy ever implemented this, I’d change over to a strictly archival bucket. (And if restic implements it - it’s experimental now - then I’ll switch over to restic.)
I do!!! And I’d think a lot of others would as well.
Oh sure, I read that very carefully before embarking on this experiment.
I probably mentioned this in the original thread, but my bills on GCS have stabilized at around £1.10 per month for 565 GB.
Sounds like a really interesting option, especially since for me it is a DR copy as well. Do you have any influence on how fast data is migrated to cheaper tiers, or is that entirely up to Autoclass?
It's completely automatic, so out of the user's control. In my experience the migration happens right on the clock: at 30 days it moved to nearline, at 90 it moved to coldline, and I expect it to go to archive at 365 days.
In my duplicacy bucket I currently have a total of 50 GB: 1 GB is still in hot storage (regional), 2 GB is in nearline (that accurately reflects a couple of gigs of NEW data in my source folder from 6 weeks ago), and 47 GB is in coldline.
I have 100 GB total and my bill for July was 65 cents. It should be under 50 cents in August. The next big drop for me won't come until spring.
I wonder how pruning works with this setup? Does it have to unfreeze objects in order to delete?
Good question. I assume so, but I don't know. At $0.0012/GB per month, I assume it's cheaper to just leave the data there than to pay to delete it. So I'm not running any prunes, just checks.
Hmm you’re probably right. I asked Mr Gippty, apparently:
No retrieval (egress) is required to delete them, so there is no unfreezing process or retrieval cost just for deletion.
But:
Early deletion fees apply if you delete an object before the minimum storage duration for its tier:
- Nearline: 30 days
- Coldline: 90 days
- Archive: 365 days
So you could technically do it, but you need to be careful not to prune anything in those windows.
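If someone did want to prune against a plain tiered bucket, the check itself is simple enough; a hypothetical helper (not something duplicacy does) using the google-cloud-storage Python client:

```python
from datetime import datetime, timezone
from google.cloud import storage

# Published minimum storage durations per tier (days).
MIN_DAYS = {"NEARLINE": 30, "COLDLINE": 90, "ARCHIVE": 365}

def safe_to_delete(blob: storage.Blob) -> bool:
    """True if deleting the blob now should not trigger an early deletion fee."""
    min_days = MIN_DAYS.get(blob.storage_class, 0)  # STANDARD has no minimum
    # time_created is a reasonable proxy; the storage-class update time in the
    # object metadata would be more precise if a lifecycle rule changed the class.
    age_days = (datetime.now(timezone.utc) - blob.time_created).days
    return age_days >= min_days
```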
Personally, I'd wanna combine this with a local backup storage. This way, you can copy from GCS only the necessary chunks to repair a broken local storage.
Failing that, you could initialise a local storage and ‘pre-seed’ chunks from your source data, before copying from GCS to local to fill in the gaps. That’d be a pretty good strategy to keep the restore costs down IMO.
You will NOT encounter these fees with autoclass turned on, because autoclass does not even move the data into the colder tier until it has already met those minimums.
In other words: that data from my first backup went to regional (hot) storage, and didn’t move to nearline until day 31. And so on…
So on autoclass, all your data has already met the minimum for whatever tier it’s in. You shouldn’t be charged to delete it.
Maybe I’ll test this out by running a prune…
I would LOVE to be getting anything close to 100 Mbit/s. I'm in Poland with a 900/300 fiber connection, but the Hetzner storage box in Germany gives me 20-40 Mbit/s both for uploading and for downloading duplicacy backups.
Wait…what’s wrong with iDrive? I used their personal service for close to 10 years, tested data recovery multiple times and never had a problem. It was slow way back then, but just as fast as Crashplan and Mozy were at the same time.
Since then I’ve switched entirely to Linux on all of my personal machines and of course servers so the personal tier isn’t officially supported.
I literally just used their migration tool to pull 75 GB from B2 over to E2. I didn't watch the transfer rate, but this is what the completion notification said:
Start Date: Aug 06, 2025, 09:39 AM
Last updated: Aug 06, 2025, 10:01 AM
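Out of curiosity, those timestamps imply a pretty healthy effective rate; a quick sanity check (treating GB as 10^9 bytes, and assuming the tool ran for the whole 22 minutes):

```python
# Effective rate of the B2 -> E2 migration implied by the notification above.
gb_moved = 75
seconds = 22 * 60
mb_per_s = gb_moved * 1000 / seconds
print(f"{mb_per_s:.0f} MB/s (~{mb_per_s * 8:.0f} Mbit/s)")  # ~57 MB/s, ~455 Mbit/s
```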
From my point of view I found two things wrong with iDrive:
- It doesn't support incremental local backup on macOS. IMHO, that's just crazy.
- Cloud backup from my location is substantially slower than backup to B2; the latter manages to saturate my 200MB line for both backup and restore. I know some people don't care about backup speed, but I do.