I am not able to sync the file

Hello Support,

I set up my S3 bucket as a storage in Duplicacy, and the config file was created in the bucket. Then I tried uploading a file from localhost and running a backup job. It keeps failing, saying my keys are unknown.

Running backup command from C:\Users\oodenike/.duplicacy-web/repositories/localhost/0 to back up C:/Users/oodenike/Downloads/MMSIM.tar
Options: [-log backup -storage Test -stats]
2019-10-17 10:20:16.882 INFO REPOSITORY_SET Repository set to C:/Users/oodenike/Downloads/MMSIM.tar
2019-10-17 10:20:16.884 INFO STORAGE_SET Storage set to s3://us-west-2@eftp-dt.s3-us-west-2.amazonaws.com/eftp-dt-BYTE/BYTE
2019-10-17 10:20:17.370 ERROR SNAPSHOT_LIST Failed to list the revisions of the snapshot Jobrun: NoSuchKey: The specified key does not exist.
status code: 404, request id: 0C0D63519152ACE9, host id: DkZ8PeFeDQIof9xbvh/xn+qhAh+mafBXXewgAzvUsQMX/1RGvQlJ54PmYGxSBX1HMrKX2DB2s/g=

That's the log I got.

The endpoint eftp-dt.s3-us-west-2.amazonaws.com seems unusual. Is eftp-dt the bucket name?

The bucket name is eftp-dt/BYTE

Does Amazon S3 allow a bucket name to contain a /? It looks like it doesn't.

Try to use s3-us-west-2.amazonaws.com as the endpoint. You’ll need to delete the current storage and create a new one with the same storage name.
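
For reference, the corrected storage URL should then look something like the line below (assuming eftp-dt is the bucket and BYTE is a directory inside it; double-check the exact format against the storage setup guide):

s3://us-west-2@s3-us-west-2.amazonaws.com/eftp-dt/BYTE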

Thank you. I have been able to fix it. Can you tell me how to read the logs to understand how much data was uploaded when backing up?

File chunks: 70 total, 310,912K bytes; 22 new, 132,019K bytes, 132,537K bytes uploaded

So the files are split into 70 chunks totaling about 310 MB, but only 22 of those chunks are new, and about 132 MB were actually uploaded.
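
Roughly, the fields in that line break down as follows (this is my reading of the stats, so take the exact accounting with a grain of salt):

70 total, 310,912K bytes      all chunks referenced by this backup (~310 MB)
22 new, 132,019K bytes        chunks that were not already in the storage (~132 MB)
132,537K bytes uploaded       bytes actually sent to S3 for those new chunks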

If you're concerned about deduplication efficiency, please take a look at Chunk size details.


Thanks for the update. So my question is: I noticed from the CLI that my files were split into several chunks, which I understand the reason for. What I'm trying to understand is: do they sync and become one block file in S3, and do I need to do anything to them if moving from S3 to an EBS volume?

No. S3 won't see the file as a single unit. It's the snapshot files that describe the relationship between the chunks in the storage.

Files backed up with Duplicacy are not stored as plain copies of the files; they're stored in Duplicacy's own chunk-based format.
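
To give a rough picture of that format (an illustration from memory, not an exact spec), a Duplicacy storage on S3 or on disk looks roughly like this:

config                                 storage-wide settings (chunk sizes, encryption)
chunks/                                deduplicated chunk files, grouped into subdirectories by hash prefix
snapshots/<snapshot id>/1, 2, 3, ...   one file per revision, describing which chunks make up each backed-up file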

Not that I can think of. You should just be able to move the entire duplicacy storage directory structure from S3 to EBS or anywhere else. But it might be safest to initialize a separate duplicacy storage on EBS and then use duplicacy copy to copy it from S3 to EBS.
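
If you go the copy route, the commands would look something along these lines (the storage names, snapshot ID, and EBS mount path below are placeholders; check the add and copy command references for the exact options):

# register the EBS-backed directory as a second, copy-compatible storage named "ebs"
duplicacy add -copy default ebs my-snapshot-id /mnt/ebs/duplicacy-storage

# copy existing revisions from the S3 storage (here named "default") to the new one
duplicacy copy -from default -to ebs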

That's nice. If that is the case, can I send directly to the EBS and bypass S3, since that's the final destination?

I think it's technically possible. Here's what comes to mind as potential options:

  1. Mount the EBS volume directly on the server Duplicacy is backing up from (if it's in EC2), and just use it as if it were a local directory
  2. Use an EC2 instance to run an SSH (SFTP) server, a WebDAV server, or possibly another server for a storage backend that Duplicacy supports (e.g., a Minio server to make your own S3, or OpenStack Swift), with the EBS volume as the block storage device; example storage URLs are sketched after this list
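
For illustration only (the host names and paths below are made up, and the exact URL formats should be verified against the storage backends guide), the storage URLs for those two options might look roughly like:

  # option 1: EBS volume mounted as a local directory on the EC2 instance
  /mnt/ebs/duplicacy-storage

  # option 2: SFTP into an EC2 instance that has the EBS volume mounted
  sftp://backup@ec2-example-host/duplicacy-storage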

Just out of curiosity, why are you using EBS instead of S3? I thought that EBS was generally more expensive than S3. And in this case, I can’t think of anything that EBS offers that would make it more ideal for duplicacy backups.

We actually have another storage vault in the cloud. S3 is used to receive the files from on-prem into the cloud. So can I use Duplicacy to upload files from on-prem to the EBS? I can mount the EC2 locally.