Some beginning questions…

I’m looking to create backups of local files and would like to make 2 copies. So, let’s say, 3 different types of files (movies, pictures, and other data files). Each type is in multiple directories. I want to have all of the relevant directories backed up in their own repository so I can set each to have its own backup schedule (movies once a week, but data files every hour). All of this would be backed up to a local file storage… AND backed up to a second file storage so I can take those drives offsite. Then, once I feel comfortable, maybe to a cloud provider also.

A) Is this possible? And easily configurable?
B) For the two versions being backed up locally, do I have to configure it all separately or can I set the repository to back up to two different storage locations?
C) Will Duplicacy have a problem if the storage location is “missing” sometimes (when I remove the drives to take them offsite and only plug them in every month or so)?

Hopefully all of this made some sense!

A:

You don’t really need to create separate schedules. The simpler your setup is, the better.

There is no harm in backing up your movies, audio, or any other media every 15 minutes: this will have no impact on performance or storage. Movie files never change and will always be skipped.

While you can separate schedules by data type and configure filters accordingly, it only makes your life harder without any benefit in return.

B:

Each backup job targets one storage location. If you want to back up to two different places — you need to create two backup jobs.
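A minimal CLI sketch of that idea (the drive letters, storage names, and snapshot IDs below are made-up placeholders):

cd D:\Pictures
:: first storage: the always-connected local drive
duplicacy init -e pictures E:\duplicacy-storage
:: second storage: the drive you rotate offsite, made copy-compatible with the first
duplicacy add -e -copy default -bit-identical offsite pictures F:\duplicacy-storage
:: either run two backup jobs...
duplicacy backup -storage default
duplicacy backup -storage offsite
:: ...or back up once and replicate the snapshots between storages
duplicacy copy -from default -to offsite

The Web UI expresses roughly the same thing: one backup (or copy) job per storage, each on whatever schedule you like.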

C:

If the backup target location is unavailable at the time of backup, that backup run will simply fail. This is harmless.

D:

I strongly recommend reconsidering backing up to single HDDs in the first place. See the recent discussion (within the last week) on this specific topic.

If you still want to do it — at least enable erasure coding to make it marginally safer. But you will not have any guarantee of data viability either way; you’re only getting another chore of juggling and verifying disks. Hence…

… start with the cloud instead. I’m not sure how you can ever be comfortable storing data on rotting disks when cloud storage that guarantees data integrity is readily available and dirt cheap.
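(If you do still go the single-drive route, erasure coding is something you enable when a storage is first created; a hypothetical sketch, where the 5:2 data/parity split is just an illustration, not a recommendation:)

:: each chunk is stored as 5 data shards plus 2 parity shards
duplicacy init -e -erasure-coding 5:2 my-documents E:\duplicacy-storage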

Thank you for the information, it is very helpful. If it helps explain my situation, I do not have a NAS in any RAID configuration. I use DrivePool to create one “drive letter” that spans multiple actual drives.
This was done because it was easier to handle, since I have used Crashplan for many years (Mozy before that, etc.) to back up all of my data both to local disks and to their offsite locations.
Using something like Duplicacy, I hope to replicate what Crashplan gives me (a local copy that stays next to my device, a semi local copy that I can store close by, and a truly offsite cloud version).
This is definitely overkill but as the family history data storer, I can’t let this data get lost under any circumstances. Crashplan has been easy since I didn’t have to worry about setting up the cloud, etc. I think I can get there, I just don’t know anything about it yet. I figure I will start with getting my two “local” copies first while I learn about the other.
To that point, it sounds like I should not use DrivePool but, instead, figure out which backup sets will fit on which drives and set them up that way. I’ll have to think about that.

Thanks again for your assistance.

This is effectively JBOD and is no different than just a bunch of drives (which is in fact what that abbreviation stands for), with reliability worse than that of a single drive. You have likely already lost data, but unless you attempt to restore all of it you will never know. This is especially true for photos and other media that are written once and never touched again.

The dude in this video discusses and explains the data integrity issues that arise when you want long-term data storage, and even explicitly mentions your setup: RAID: Obsolete? New Tech BTRFS/ZFS and "traditional" RAID - YouTube

Please don’t use this contraption for any long term data retention, including as a backup target.

I hope to replicate what Crashplan gives me (a local copy that stays next to my device, a semi local copy that I can store close by, and a truly offsite cloud version).

You can, of course, but it is not clear to me how that is beneficial at all: to maintain the viability of your data you have to have storage with checksumming (for integrity validation) and redundancy (for healing), and you have to keep maintaining it. It’s expensive. Google, Amazon, and Microsoft do that at scale, much better, and cheaper. Why not use them?

Overkill implies way better protection than necessary. In your case — having local and semi-local storage without data integrity guarantees does not enhance reliability. It’s a pure waste of time.

Crashplan has been easy since I didn’t have to worry about setting up the cloud,

Crashplan was and is a waste of time and money: they never had redundancy. If their archive rots, they require you to still have your original data in order to heal it. This sort of nonsense defeats the purpose of a backup solution. It’s not a big secret; it’s right there in their support articles. If Crashplan were any good, we would not be talking here now :slight_smile:

setting up the cloud, etc. I think I can get there, I just don’t know anything about it yet.

It’s not a lengthy process; it’s just another account. Can you sign up for an email account? It’s the same thing. You create an account, pick a storage plan, generate access keys, and feed them to Duplicacy. You’ll be done with that before the first backup to HDD even completes.
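As a rough sketch, using Backblaze B2 as the example (the bucket name and snapshot ID are placeholders; the CLI prompts for the key ID and application key on first use):

cd D:\Documents
duplicacy init -e documents b2://my-duplicacy-bucket
duplicacy backup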

To that point, it sounds like I should not use DrivePool but, instead figure out which backup sets will fit on what drives and set them that way. I’ll have to think about that.

No, this is not what I meant. Generally, if a solution requires juggling data and fitting things onto drives — it’s a bad solution to begin with.

How much data do you have?

There are many cloud providers, including those designed purely for cloud storage. Depending on the amount of data you have, with a careful choice of provider you can keep storage costs under $12/month. The trick is to stick to the large players — Google Cloud or Google Drive, Azure, and AWS S3 are great, Backblaze B2 is fine; the smaller players are hit or miss. Mostly miss.

1 Like

The minimum I MUST keep is currently 8 TB, and if I include everything it is currently 22 TB. Crashplan has me at $10 a month for unlimited storage, but over the years they have shown that they really don’t want me as a customer any more: they moved from personal to business accounts, and things take FOREVER to upload.
Even B2 storage for the 8 TB is $40 a month, so that is 4X my current cost, and I am not sure I even want to contemplate the 22 TB.
Really a daunting feeling.

CrashPlan and other services whose main offering is fixed-price-for-unlimited-access-to-scarce-resource want you to pay but don’t want you to use any of said resource. If you complain about performance, they will point at the support article that says “Code42 app users can expect to back up about 10 GB of information per day”; i.e., roughly 120 kB/s. When asked for support, they say “we don’t guarantee any specific bandwidth, but we provide some bandwidth”. Contrast this with pay-for-what-you-use services, where incentives are aligned in your favor: when you tell them “hey, S3 is somewhat slow today”, they hear “there is an issue on your side preventing me from hauling carts of cash to you as fast as I’d like to”. It’s not surprising that they’d fix those issues before you even notice.

Crashplan’s lack of redundancy, and weeks-long maintenance out of the blue during which you can’t access your data, make this product a joke. I expressed my frustration here: Optimizing Code42 CrashPlan performance | Trinkets, Odds, and Ends

Look at Google Workspace Business Standard. At $12/month you get an advertised 2TB of space, but de facto it’s unlimited. Quotas are not, and never were, enforced. Search this and many other forums for confirmation. It has been a widely known secret for a long time, since G Suite’s early days. Many of us here use it successfully. (The limitation is 750GB of daily ingress – for most users it’s not a hindrance, thanks to the abysmal upstream bandwidth of their internet connections.)

If you want unlimited storage promised in print – Google Workspace Enterprise is $20/month. Unlimited even on paper. Note, however, that you are mostly paying for features such as vault and data monitoring; storage is not the main product you pay for here, it’s incidental to the main offering of apps and services.

And then there are other providers – Box.com offers unlimited storage at $15/month (you need to find 2 other friends to satisfy the 3-seat minimum team size), and Dropbox has a similar offering.

Then, for long-term data retention (or as a secondary destination) there is Google Cloud Archive storage or Amazon S3 Glacier Deep Archive. Cost is around $1/TB/month. You can keep a copy of your immutable media data there.

Oh yeah. Storing data reliably is very expensive… Thankfully there are still avenues available to take advantage of pooled storage where light users subsidize heavy ones without impact on performance (unlike with CrashPlan and similar services, where storage is the meat of what they sell).

(multiple edits, so many typos…)

1 Like

I also use DrivePool and it’s a marvellous product, but if I may offer up a suggestion… combine it with SnapRAID. Yes, you’ll need a spare drive to act as parity, but it’ll protect against a single HDD failure (any drive). It’s ideal for large TV/movie libraries which, I presume, take up most of your 22TB and rarely change, only get added to.
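To give a rough idea (a sketch only; the drive letters are hypothetical, and the hidden PoolPart.* folders are what DrivePool creates on each underlying disk), snapraid.conf would look something like this:

# one dedicated parity drive (at least as large as the biggest data drive)
parity D:\snapraid.parity
# keep several copies of the content file, on different disks
content C:\snapraid\snapraid.content
content E:\snapraid.content
content F:\snapraid.content
# the drives that hold the actual pooled data
data d1 E:\PoolPart.xxxx\
data d2 F:\PoolPart.xxxx\
data d3 G:\PoolPart.xxxx\

Then it’s just snapraid sync after files have been added, and an occasional snapraid scrub to verify against parity.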

However, with that amount of data, you really ought to think about building a server - to act as a NAS (i.e. not an off-the-shelf proprietary box) - and you could build one relatively cheaply. Depending on requirements and how much time you’re willing to spend researching, there’s a tonne of options. On the software side, you could stick with Windows+DrivePool+SnapRAID, or off-the-shelf Unraid, or go custom Linux+mergerfs+SnapRAID(+ZFS optionally) (example).

The key is to keep doing what you’re doing - multiple backup copies - preferably 3 (including - or excluding - the original, if you’re rich), on 2 different media, at least 1 off-site. But self-hosting, particularly with that amount of data, is really the only way to go - most cloud* options will be prohibitively expensive.

Regarding those media files - and especially if you do go down the route of getting a cloud provider to back up to - I’d actually suggest not using Duplicacy to ‘backup’ such files - it’s a lot of unnecessary overhead. Duplicacy is very suited to backing up all your regularly changing data, because it makes nice regular snapshots, but wrapping up multi-GB media files really isn’t ideal. Consider Rclone for that job, and the best thing about it is, you can mount the remote for easy access.
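A mount is, for example, a one-liner (the remote name and drive letter here are placeholders, and on Windows rclone mount needs WinFsp installed):

:: browse the encrypted remote as a read-only drive letter
rclone mount gdrivecrypt:/ X: --read-only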

Here’s a prior discussion about Rclone.

*Re cloud: Google Workspace is definitely worth a look. However, I strongly suspect they’ll start enforcing the 1/2TB limit on the Business editions, in about a year. Why do I think this? Well I’m on the legacy G Suite Business product (10TB+ stored in Google Drive thus far; $10/mo) and they’ve started emailing everyone about forcefully migrating us to Workspace - starting from January. You had to have 5x G Suite Business accounts in order to guarantee unlimited space (again: not enforced atm) but now you can get that guarantee on Workspace Enterprise - with a single account. So IMO, they will enforce the limits once the migrations are done (tho $20/mo is still pretty good for a reliable, unlimited, off-site backup).

Here’s a rabbit hole for building your own custom NAS hardware.

1 Like

Can you elaborate on why their finally moving users to the current product line necessitates changing the treatment of existing users on said product line? I fail to see the connection.

FWIW I still use legacy free g-suite, and it keeps working just fine, nobody pushes me anywhere, nothing gets enforced.

And they now have competitors that offer unlimited storage below $20 (e.g., Box.com specifically). With G Suite/Workspace, storage is incidental to the main SaaS that they provide: office, vaults, data protection and auditing, etc. They don’t even count data in shared drives toward your “allowance”. They have enough users (guesstimating here) that don’t use any storage at all to subsidize a dozen “backup aficionados” many times over. In other words, storage is not what Google Workspace is about. Also, letting those noisy folks discuss it nonstop on forums is not bad for brand awareness either.

Of course, neither of us knows what will happen tomorrow… Just my 2 cents.

1 Like

Very simply, it comes down to how much more popular Workspace will become amongst the datahoarding community (of which I consider myself one), once the changes are complete…

Google’s storage is perhaps one of the worst-kept secrets - and massively (ab)used - to the point where an incredible number of users think nothing of Rclone-mounting 100s of TBs of (fully encrypted) media on GCD to stream to their Plex boxes. As more and more gigabit connections roll out, that secret will turn into very common knowledge indeed. GCD isn’t just a good, very reliable, backup location.

Currently, the legacy products have an unenforced limit - very possibly due to the fact they simply didn’t get around to implementing special code to disable/enable it depending on the 5-user count, as is written on paper. That stuff sounds technically simple, which it is, but what happens when users repeatedly cross that threshold and get their quota locked out? Google presumably didn’t want to complicate matters, and irritate normal users, so they didn’t bother. By forcing migrations, Google won’t ever have to bother.

The new products - as written on paper - are now simpler to enforce. On Enterprise? You get unlimited. On Business? You only get 2/5TB. No minimum user counts, no more obstacles.

However, most of these datahoarders are on the Business tier, and 2/5TB simply isn’t enough. Imagine the state of play when everyone is on Workspace and all the datahoarders, with their growing 100TB+ (non-deduplicatable) stashes, are happy as Larry on a $10/mo plan. Google won’t let that stand. Storage is bloody expensive, no matter who you are. They’re losing heaps of cash as it is. Google can now lock out their quota, kindly notify them well in advance, and give them a viable path (upgrade to Enterprise).

It’s gonna happen - Business editions will lose unlimited.

2 Likes

Thank you for all of this great information! I have a lot to figure out and even more to digest and see if I understand. Without getting into the “create your own NAS server” stuff, I believe you said:
(assume I have the following disks)
D–>8 TB
E–>6 TB
F–>6 TB
G–>6 TB
H–>5 TB

  • I use DrivePool to make a K:\ out of E, F, G for a size of 18 TB for my large, unchanging, files
  • I use SnapRaid with Drive D:\ as my parity and the individual drives E, F, G as the data drives
  • Then use Rclone to back up K:\ (do I need to figure out encryption so that cloud providers don’t know what is on my K:\ drive?)
  • Use Duplicacy to back up my regular data on Drive H

I am sure I have a lot of that wrong since I only read it this morning. I am not that comfortable with a lot of CLI, but I can figure it out, along with how to automate/schedule the SnapRAID and Rclone stuff (maybe it is in the docs and I just haven’t read it yet). I also have to be able to make sure my wife can understand how it works (generally) in case something happens to me (or at least create a doc she can hand to someone to make it all work).

Thanks again for all of the ideas, I really appreciate your information

So, I could use Business until they notify me I need to change, and then just switch the account to Enterprise, saving $8 a month until they tell me. I think I can handle that.

1 Like

The datahoarding community is a drop in the ocean. It’s nothing. A blip. For comparison, long ago, just this one company I worked for, with 70k people, all moved to G Suite. I know for a fact they did not use any storage beyond trivial amounts for email and a few documents. There are hundreds of thousands of other organizations, schools, and universities that pay for accounts without utilizing any nontrivial amount of storage.

If Workspace were just storage space – then $12/month for 2TB would already be cheaper than the GCS it is being run on! But then there would be no need for Workspace – there is already Google Cloud Storage.

It’s a marketing tool, not really a secret. And 10 people on Reddit and the rclone forums knowing about it is not a representative set of Google customers.

Incredible in what sense? Dozens? Hundreds? Thousands? Still a drop in the ocean compared to all the other users with paid accounts who don’t use the storage.

You don’t need a gigabit connection for that; I’ve been using it on 50/10 internet just fine. And no, it will not become mainstream, because it requires some technical skills. The vast majority of people don’t possess those. You may think they do because you are part of that small tech circle – but for most people configuring rclone is beyond their capabilities or interests (and there is absolutely nothing wrong with that). Also, these are just consumers. Business users will go by the terms, and if they need huge amounts of storage for each employee – yes, they’ll likely pay for Enterprise. Perhaps they’ll do it regardless, due to the other services included in Enterprise; and that’s the point.

You can’t seriously think that! Google did not get around to turning on quotas? Come on! They built all that ultra-reliable and flexible infrastructure, but just can’t keep track of quotas?

But see, no: today Google Workspace does not enforce quotas either. They had plenty of opportunities to implement quotas during the rebranding. They did not. So we are back in the same state where, as you said:

In other words, nothing changes with the migration to Workspace with respect to their willingness or likelihood to enforce quotas.

This is not and never was an obstacle. if (domain.accounts.count > 5) domain.settings.quotas = ON is not rocket science. Box.com figured it out – do you think Google attracts worse talent? :slight_smile:

Again, I feel the number of data hoarders is minuscule, a drop in the ocean. It may not seem that way to you because you are part of that community. (Yes, I too keep tens of terabytes of encrypted media there.)

Of course they will. It’s free marketing. Even you think that there are many data hoarders because of how vocal that community is. It’s free advertisement for Google.

Overall the product line is successful. And the cost of the storage that hoarders use for free does not even begin to approach the value of the free marketing they provide.

Google builds reliable platforms. How do I know? I used them extensively in a way they were not designed for. You did too. Why? Because of this “secret”. And now I’ve just written this sentence, which is the best advertisement Google can ever get – a testimonial from an impartial user. If Google did not provide me with the opportunity to use their services – I would not have known how awesome they are. That’s the value they get from it. It is worth way more than you or I can ever manage to consume in storage costs.

This all would have been true even if Google were purely a cloud storage company. But it isn’t. Their income is advertising and B2B services. The whole of Google Cloud is responsible for under 10% of their revenue. Storage in Workspace accounts is essentially complimentary. The $12/month you are paying is not for storage, but for all the other stuff. In other words, the storage is free.

They could, and they did not. Do you think they made the same “oversight” twice? It’s clearly part of the plan. I’d like to see the expression on the face of Google’s chief of marketing if they happen to read this thread :slight_smile:

I’ve read somewhere (I think it was one of the Google support discussions, or maybe Reddit) that if you keep abusing the Business accounts (abusing meaning petabytes) they may ask you to move to Enterprise.

But most importantly, if you are that user who has, say, a 400TB dataset: does it really matter either to you or to Google whether you pay $20 or $10? It’s zero, for all intents and purposes. It makes no difference. So why bother, and risk pissing off a very vocal minority of customers?

Backblaze published a histogram of their users’ data usage. They too have data hoarders who figured out how to back up 100TB at $6/month. It’s a very steep curve: the vast majority of people pay $6 and use virtually nothing. And that’s a pure storage company. With Google – most people sign up to use spreadsheets and email, so the curve will be even steeper. Having that minuscule number of heavy users triple or 10x their usage has a negligible effect.

Noted. I think it won’t. Let’s revisit in 5 years :). I’ll set a reminder in my calendar :slight_smile:

That’s right. Your parity drive would have to be the 8TB. (Theoretically, you have <2TB-ish free disk space on that drive to put other stuff there, but that can get a bit complicated. :slight_smile: )

Depending on your directory structure, personally I’d include your H: drive into the pool too, and just set up a rule to put certain subfolders - say, at the root of the pool - on only H:, and everything else on all other drives.

This is what I do:

(screenshots of the DrivePool file placement rules omitted)

If it’s fast-moving data (not ideal for SnapRAID, but not impossible to make work), you could still exclude it from your SnapRAID array, but you can also have includes/excludes for that too.

Anyway, here’s a helper script for SnapRAID that’ll make scheduled syncs and scrubs a bit easier:
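Scripts like that typically wrap the two core SnapRAID commands with some safety checks and reporting; at its simplest, the scheduling boils down to something like this (the times, percentage, and paths below are arbitrary examples):

:: update parity after files have been added or changed
snapraid sync
:: verify roughly 8% of the array against parity on each run
snapraid scrub -p 8
:: e.g. schedule the sync nightly with Task Scheduler (run from an elevated prompt)
schtasks /Create /TN "SnapRAID Sync" /TR "C:\Scripts\snapraid-sync.bat" /SC DAILY /ST 03:00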

1 Like

I guarantee this community is a lot bigger than you think it is. I know several people who’ve hit 1PB on GCD, and 100TB+ are very common - all for the low low price of $10/mo. More importantly, it’s only growing, thanks to increased interest in self-hosting (mainly because streaming services are screwing everyone around lately).

As do I, on 72/18. Yet that’s a fifth of the amount of data I could upload against Google’s 750GB/day limit - if I only had a gigabit connection.

The more media that gets uploaded, the more feasible a gigabit connection becomes for streaming data from Plex - rather than keeping a local copy. Quite literally it’s a game changer - a threshold at which hosting Plex in the cloud becomes a possibility, as opposed to impossible and firmly in the self-hosted-only arena.

It makes complete sense: as these internet connections start to match LAN speeds, bottlenecks disappear.

Rclone is a piece of piss to set up and use - you just follow a bunch of instructions (there are plenty of ready-made setups for Plex on forums) and the interest is obvious: unlimited storage for cheap.

You completely missed the point. I already stated it wasn’t a technical barrier. It’s about reputation and honest advertising.

Enforcing quotas that rely on user count, to dissuade heavy usage, affects normal users too. Potentially pissing off ordinary enterprise customers just because they’re suddenly over their quota and suddenly cannot add more data to their drive is the obstacle.

With G Suite, there never was an unlimited tier for a fixed price - you had to buy 5 licences or more (at least on paper) to guarantee those terms. So asking normal users, who might have >1TB on any one of, or spread across, multiple accounts, to stay on 5 licences is problematic.

Removing the user requirement means the above scenario simply never occurs, and Google can usher such users onto a product that guarantees users’ needs.

For now. At some point, there’s a threshold at which the cost of storing petabytes of data for a handful of people becomes untenable. There’s also a low break-even point (probably around the 5TB mark) that an increasing number of users are exceeding, due to ease and popularity. In totality, this isn’t a small number of users, nor an insubstantial amount of data.

$20/mo is still pretty good value for guaranteed unlimited storage, and a majority of heavy users won’t be too bummed by having to pony up when Google enforces what’s written down in black and white. Making the change doesn’t harm their advertising at all; in fact, it brings clarity to a murky situation.

You bring up a very good point, but it perfectly illustrates the logic of my argument… Backblaze would lose too much by putting any kind of limit on their ‘unlimited’ product. The fallout from the knowledge that a product which is supposed to be unlimited actually isn’t - even if most users would never come close to abusing it - would damage that reputation in the long term. I’ve seen this happen way too much with ISP data ‘caps’.

The difference is, Google put it in plain black and white and have differentiated products: limited and unlimited. They’ll be free to enforce the distinction without losing reputation. Unlike Backblaze, Google loses too much money on heavy users not to flip that switch.

This is my concern. I don’t need to run my Plex data from the cloud. I have the storage space and I am fine to continue that. My concern is just keeping a copy, and while Rclone sounds like a good way to go, I am definitely more of a UI guy than a CLI guy. I can copy someone else’s scripts, but I may have to find someone younger and smarter than I am to get it all running (as noted, I use DrivePool but in its most basic form, and didn’t even follow some of the things @Droolio was showing). Once it is set up, I can usually follow what they did, but setting this stuff up from scratch is where I run into problems (old dog/new tricks deal).
It’s gonna take me a while to think it all through.

Yea, don’t get me wrong… Google’s unlimited capacity - however you can get it - is definitely a good route to take - even if I predict they’ll clamp down at some point. :wink: Metered cloud storage is just too expensive for us.

I’m not too worried; they usually give ample notice, it’s reliable, and there’s a reasonable upgrade path worst case scenario.

2 Likes

Yep, understandable. My limited bandwidth makes this undesirable for me too…

Rclone mounting is just one way people use it with the cloud and Plex streaming. I only mention it because mounts are a bit harder to ‘tweak’ for Plex usage, than regular 'ol copies or syncs.

Myself, I just Rclone copy my media into an encrypted remote - for pure backup. Plus, of course, Duplicacy for all other data. My media is hosted locally with Jellyfin on a HP microserver and Kodi as client.

This is the extent of my batch script:

:: limit uploads to 1 MB/s between 06:00 and 23:00, then unthrottle overnight
rclone -v copy "L:\TV" gdrivecrypt:/TV --exclude-from exclude-tv.txt --modify-window 1s --fast-list --bwlimit "06:00,1M 23:00,off"

etc.

As you can see, it’s not terribly complicated. :slight_smile: The initial setup is guided and has very good documentation.
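For completeness, the encrypted remote itself is created once with rclone config; the resulting rclone.conf ends up looking roughly like this (the remote names are whatever you pick, the values below are placeholders, and the passwords are stored in obscured form):

[gdrive]
type = drive
scope = drive
token = {"access_token":"..."}

[gdrivecrypt]
type = crypt
remote = gdrive:media
filename_encryption = standard
directory_name_encryption = true
password = <obscured>
password2 = <obscured>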

1 Like

Let me formulate my point in the most concise way I can:

Charging heavy [ab]users $20 vs $10 vs $0 makes no difference either to Google or to said users.

Are we in agreement on this?

Therefore, there is no benefit for Google in starting to enforce quotas for the sole reason of pushing abusers to the Enterprise plan. It solves nothing cost-wise, but can infuriate a few legit business users, as you pointed out.

Hence, that won’t happen.

That’s all I had to say.

We can continue with some details, though: thanks to the leverage provided by the light users, a nominal increase of account cost across the board ($0.0001, a placeholder for a very small number) can compensate for all abusers at once. As such, it’s already covered by the current cost structure. You can attribute the $10→$12 cost increase to withstanding abuse for the next 100 years.

If, in the future, in some hypothetical universe, users managed to upload enough data for Google to start thinking about storage cost (i.e., when the nominal cost increase across the board, or the storage cost decrease over time, no longer covers for the abusers) – they will have to bite the bullet and abolish unlimited plans, or bump the cost a few cents more.

However, this is an impossible scenario and cannot happen. Why? Because of a rate limiter that I had forgotten about, but you thankfully brought up.

Apparently, they figured that at 750 GB/day they can safely withstand abuse, with the current projected distribution of abusers over the user base, for the foreseeable future. And every day this gets easier and easier – as storage costs keep going down. I’m sure there are way more factors that they accounted for that we can’t think of here and now.

Agreed. However, in this universe that threshold is unattainable, because:

  • Rate at which storage can be consumed is capped
  • Cost of raw storage per TB is going down constantly
  • The number of abusers is negligible (projecting Backblaze’s published curve, which is the worst case scenario, because Google Workspace is not just storage, and as a result has many more users who pay for storage but never use it)

You know what, Google is a public company; we can just look at their financial statements. I might do that some day; it would be interesting to see how it works.

P.S. great discussion BTW, thanks for patience of all involved, readers and writers :slight_smile:

1 Like

You won’t be surprised to learn I don’t in fact agree on this… :wink: BUT, not because the principle is unsound. Put that way, I’d normally say you’d be right, but it very much depends on how we define “abusers”.

Perhaps it’s unfair to lump those petabyters in with the terabyters, but the fact of the matter is, it doesn’t take many TBs for those users to become loss-leaders for Google, and that’s where I think the balance breaks down.

The number of users in the 2TB-1PB range is, IMO, significant and growing, and the amount of accumulated data (non-deduplicatable, because the trend encourages encryption) is massive. When you factor in redundancy, Google is losing money on just about everyone storing more than a few TBs. (Funnily enough, this happens to be the mark where clear limited and unlimited plans now exist!)

Paying double for hosting 100TB+ is a no-brainer for the extremely heavy users, but those of us in the lower numbers might be put off from upgrading and be forced to limit what we put on there. Suddenly, those Plex libraries - with all the additional transit costs - aren’t feasible. Not for everyone, at least.

We’re not talking about the difference between $10 and $20 for individual users. You’re right, that’s insignificant. Storage and transit costs aren’t. One such user can cost Google hundreds of times that; or 100 users with 2TB+ can cause the same losses. Somewhere between those two is a line - where one group chooses to upgrade and the other to restrict their usage, because actually, $10/mo was decent for 20TB, but $20 isn’t. Market forces; supply and demand. There isn’t a line drawn yet, which makes one inevitable.

750GB is still a hefty number, and isn’t really a bottleneck to hosting Plex media, since it doesn’t apply on download. Most users can easily reach the break-even point of costing, rather than earning, Google revenue in the space of about a month or three. Datahoarders are a patient breed.

This also factors into it. Storage costs aren’t dropping at the pace we might expect; in fact, the decline is painfully slow right now. Against the ever-increasing demand for storage, I don’t think it can be taken for granted. Especially with a supply chain crisis going on.

Can’t find the exact story right now, but I read recently that Amazon AWS (or was it Azure?) had to put in temporary limits on how many VMs customers could spin up due to a lack of available storage. Coz pandemic. Quite scary when you consider that’s their job.

In light of Amazon ending their unlimited storage thing only a few years ago, and Google nixing unlimited Photo storage for new Pixel devices, I don’t think my prediction is tooo outlandish. :stuck_out_tongue:

Ditto. :slight_smile: