Migrating Google Photos into an iDrive e2 bucket

Hello,
I have several Google Photos accounts that I would like to migrate into iDrive e2 buckets.
I saw that I could mount the Google Photos accounts using rclone.
When I look at the files, I see them listed but without sizes.
I ran a Duplicacy backup job from the Google Photos account mounted with rclone, with iDrive e2 as the target.
It stays in the indexing stage and eventually finishes with an incomplete snapshot.
I have tried running it from a laptop and also from Docker on a Pi 4.
What could be the issue, and is this supported? Is there a better approach?

What are you trying to do: back up Google Photos to iDrive, or copy Google Photos to iDrive?

In the former case, yes: mount the storage with rclone and then back it up with Duplicacy to iDrive.

In the latter case, configure both the Google and iDrive storages in rclone and copy directly between them.
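For the copy case, the invocation might look like this (a sketch; the remote names `gphotos` and `e2` and the bucket name are assumptions, created beforehand with `rclone config`):

```shell
# One-time: configure both remotes interactively --
# a "google photos" remote (here gphotos) and an S3 remote
# pointed at the iDrive e2 endpoint (here e2).
rclone config

# Copy everything from Google Photos into an e2 bucket.
# --fast-list reduces listing round trips; a low --transfers value
# helps stay under Google's API quotas (the 429 RESOURCE_EXHAUSTED errors).
rclone copy gphotos:media/all e2:my-photos-bucket \
  --fast-list \
  --transfers 2 \
  -P
```

Note that the Google Photos backend exposes a virtual directory layout (`media/all`, `album/`, etc.), so the source path matters.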

Logs should indicate why the copy or backup was interrupted. What was in the logs?

I tried to back up Google Photos to iDrive.
When I do it via rclone alone it works pretty well, aside from some 429 RESOURCE_EXHAUSTED errors, which could be due to API limits.
When I do it through Duplicacy it stays in the indexing stage and never starts the backup; it runs for hours.

Running backup command from /root/.duplicacy-web/repositories/localhost/1 to back up /root/googlephotos_new
Options: [-log backup -storage e2_paris -threads 4 -stats]
2022-12-26 10:16:47.970 INFO REPOSITORY_SET Repository set to /root/googlephotos_new
2022-12-26 10:16:47.970 INFO STORAGE_SET Storage set to s3://e2@g.idrive.com
2022-12-26 10:16:48.664 INFO BACKUP_START No previous backup found
2022-12-26 10:16:48.665 INFO INCOMPLETE_LOAD Previous incomlete backup contains 193422 files and 0 chunks
2022-12-26 10:16:48.665 INFO BACKUP_LIST Listing all chunks
2022-12-26 10:16:48.731 INFO BACKUP_INDEXING Indexing /root/googlephotos_new
2022-12-26 10:16:48.731 INFO SNAPSHOT_FILTER Parsing filter file /root/.duplicacy-web/repositories/localhost/1/.duplicacy/filters
2022-12-26 10:16:48.731 INFO SNAPSHOT_FILTER Loaded 0 include/exclude pattern(s)
exit status 101

How are you mounting the Google storage? Are you using VFS, and are you specifying timeout values explicitly?

You can add the -d flag to get more information in the logs.

I am mounting Google Photos via rclone mount:
rclone mount googlephotos_old: /mnt/googlephotos.old --vfs-cache-mode full &

What is the host OS?

A few things here:

  1. Get rid of VFS caching entirely. There is no need for, nor benefit from, turning it on in this scenario. Delete the cache folder.
  2. You need to mitigate latency, especially if you have a lot of objects. Add something like --daemon-timeout 599s, and use the --fast-list flag with rclone.
  3. Don’t use rclone to mount in the first place. rclone relies on FUSE, which adds an unnecessary round trip through the kernel and the associated fragility. You could instead use the native Google Drive application, which serves files over SMB, or Mountain Duck, which uses NFS. Either will be much more stable.
  4. As was already suggested, add the -d flag to Duplicacy. At this point it’s not clear to me whether the issues are on the rclone side or the iDrive side.
  5. Lastly, reconsider using iDrive. There are much better providers out there, and while iDrive may not be the cause of this particular issue, you will hit their other problems eventually.
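Putting items 1 and 2 together, the mount invocation might look like this (a sketch; the remote name comes from earlier in the thread, and the timeout values are illustrative):

```shell
# Mount read-only, with no VFS caching, and with generous timeouts
# to absorb Google Photos API latency when listing many objects.
rclone mount googlephotos_old: /mnt/googlephotos.old \
  --read-only \
  --daemon-timeout 599s \
  --dir-cache-time 1h \
  --attr-timeout 10s &
```

Since the backup only reads from the mount, --read-only costs nothing and removes a class of failure modes.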

Thanks for your reply. I am using Docker on a Raspberry Pi 4 (8 GB model).
I found that rclone copy works well, so I am not sure rclone is the issue here.
When I used the -d flag I saw the following in the log:

2022-12-27 00:20:35.387 DEBUG PASSWORD_ENV_VAR Reading the environment variable DUPLICACY_MEDIA_S3_ID
2022-12-27 00:20:35.387 DEBUG PASSWORD_ENV_VAR Reading the environment variable DUPLICACY_MEDIA_S3_SECRET
2022-12-27 00:20:35.935 DEBUG STORAGE_NESTING Chunk read levels: [1], write level: 1
2022-12-27 00:20:35.974 INFO CONFIG_INFO Compression level: 100
2022-12-27 00:20:35.974 INFO CONFIG_INFO Average chunk size: 4194304
2022-12-27 00:20:35.974 INFO CONFIG_INFO Maximum chunk size: 16777216
2022-12-27 00:20:35.974 INFO CONFIG_INFO Minimum chunk size: 1048576
2022-12-27 00:20:35.974 INFO CONFIG_INFO Chunk seed: 6475706c6963616379
2022-12-27 00:20:35.974 TRACE CONFIG_INFO Hash key: 6475706c6963616379
2022-12-27 00:20:35.974 TRACE CONFIG_INFO ID key: 6475706c6963616379
2022-12-27 00:20:35.974 DEBUG BACKUP_PARAMETERS top: /mnt/googlephotos.old/media, quick: true, tag: 
2022-12-27 00:20:35.974 TRACE SNAPSHOT_DOWNLOAD_LATEST Downloading latest revision for snapshot media
2022-12-27 00:20:35.974 TRACE SNAPSHOT_LIST_REVISIONS Listing revisions for snapshot media
2022-12-27 00:20:36.041 INFO BACKUP_START No previous backup found
2022-12-27 00:20:36.041 INFO INCOMPLETE_LOAD Previous incomlete backup contains 26354 files and 0 chunks
2022-12-27 00:20:36.042 INFO BACKUP_LIST Listing all chunks
2022-12-27 00:20:36.042 TRACE LIST_FILES Listing chunks/
2022-12-27 00:20:36.107 INFO BACKUP_INDEXING Indexing /mnt/googlephotos.old/media
2022-12-27 00:20:36.107 INFO SNAPSHOT_FILTER Parsing filter file /cache/localhost/1/.duplicacy/filters
2022-12-27 00:20:36.108 DEBUG REGEX_DEBUG There are 0 compiled regular expressions stored
2022-12-27 00:20:36.108 INFO SNAPSHOT_FILTER Loaded 0 include/exclude pattern(s)
2022-12-27 00:20:36.108 DEBUG LIST_ENTRIES Listing 
2022-12-27 00:20:36.109 DEBUG LIST_ENTRIES Listing all/
exit status 101

Currently I picked iDrive just because of its price. I used their regular backup plan a decade ago and they were fine.

Is there any better option to mount Google Photos with NFS? I am not aware of any client that would run properly on Linux or in Docker on arm64.

Why Docker? Duplicacy can run natively.
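For reference, running the Duplicacy CLI natively (rather than the web UI in a container) might look like this; the storage URL pattern and mount point come from the logs above, while the snapshot ID and bucket name are illustrative:

```shell
# Initialize the mounted photos directory as a Duplicacy repository.
cd /mnt/googlephotos.old
duplicacy init media "s3://e2@g.idrive.com/my-bucket"

# Run the backup with debug logging (-d) enabled, so a hang during
# indexing shows exactly which file or listing call it is stuck on.
duplicacy -d -log backup -threads 4 -stats
```

Running natively also rules out Docker volume-mount interactions as a variable.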

rclone copy and rclone mount are not even in the same ballpark in terms of what’s involved.

This looks like a VFS failure to me, or some weird interaction with Docker volume mounts.

See items 1 and 2 in my previous comment.

The lowest-priced products and services offer the worst value. You should optimize for the big picture: your data durability, availability, and your time investment in support and maintenance. How much is your time worth, debugging issues just to save a few bucks a month? You’ll find the cheapest solutions cost the most. But that’s a conversation for a different topic.

P.S. I edited your comment for readability: I added three backticks before and after the log.

```
Logs go here
```

which renders as:

Logs go here

What would be the preferred way to mount Google Photos without losing image quality or metadata such as GPS coordinates?
When I mount OneDrive with rclone, for instance, the Duplicacy indexing takes ages, so I wonder if it’s the mount that I use. I understand there are better solutions that don’t rely on FUSE, but since I don’t have Windows or macOS to use Cyberduck / WinSCP / CloudMounter for OneDrive or Google Photos, I have to rely on rclone alone.

Incidentally, it’s my understanding that an rclone mount (or anything else, really) isn’t able to maintain 100% of the metadata or original-resolution downloads.

As far as I’m aware, Google Takeout is the only way to obtain an unmolested archive of Google Photos.

As @Droolio indicated above, it’s impossible, but in no way unexpected: Google really wants you to keep your photos with them and makes it extremely difficult to take your data elsewhere. Yes, you can use Takeout, but then go ahead and try to delete your photo library. You can’t use rclone for that, because of a (wink wink) “bug”, and you can’t do it via the web interface either. Go ahead, try it. And I’m not talking about 10–50 photos; I’m talking about hundreds of thousands, in bulk. You can’t do it. It’s obvious why: your data is what Google bartered from you in exchange for use of free photo management software; it’s the payment. It would be weird if they let you yank it back at a whim.

Do I need to state the obvious? The solution here is not to engineer more workarounds. The solution is to stop using free services in general, and Google’s in particular. These free services are very expensive; I’d say unaffordable to the vast majority of people. (There was a paper somewhere, which I’ll link if I find it: each free user (read: the data they bring) is worth around $40 to Google annually.)

On a separate note, I would strongly recommend searching for “iDrive” on this, and other forums.
