I just did an upgrade and all my settings got blown away. Please tell me there’s a way to get them all back!!!
Which OS are you running? Are you upgrading from 1.0.0?
Server 2016, and yes, I upgraded from 1.0.0.
Are you running the web GUI from the same user account? The configuration file is ~/.duplicacy-web/duplicacy.json, where ~ is the home directory, so if you’re using a different account a new configuration file may be created.
no, I ran the install under the same user…
I pointed the temp dir and log dir back to where Duplicacy was previously installed, which is \users\<username>\.duplicacy-web\repositories and \users\<username>\.duplicacy-web\logs, and restarted the service. It picked up my old config, but will there be a problem with the keyring or anything like that?
I would highly suggest documenting that the default install location changes between versions; I had no idea and freaked out when everything was blanked.
All settings are stored in duplicacy.json and settings.json; repositories and logs are just temporary or log directories. Without duplicacy.json and settings.json, I don’t know how it could pick up your old config.
There isn’t a change to the default config location in version 1.1.0. However, if you’re talking about the Windows service supported in 1.1.0: the service always runs under the LocalSystem account, and the default settings location for the service is C:\ProgramData\.duplicacy-web.
All I did was run the installer for 1.1.0, opened the web interface and it was starting fresh with no settings.
So I ran the backup and it decided it wanted to upload all my files again. Sounds like my backup is toast.
The log was over 10000 lines and truncated when sent over email.
I don’t even know what to do at this point.
This is super frustrating.
I don’t know how this could happen, but even with a new and empty duplicacy.json file, it is not hard to recreate the same storages and backups. If you set the storage directories and backup ids correctly, all the backups should go to the same locations and not upload existing files again.
YES, it should be documented in the release notes that (at least when running on a Windows machine):
- the installation path changed
- the former version has to be manually removed
- if now running as a Windows service, the configuration files need to be moved (stop the new service, move the files, start the service)
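For anyone else hitting this, the move boils down to copying the two settings files between the locations mentioned in this thread. A minimal sketch, assuming the per-user and ProgramData paths from the posts above (the helper name is mine, and the service must be stopped/started around the copy, e.g. via services.msc):

```python
import shutil
from pathlib import Path

# Paths from this thread: per-user settings written by 1.0.0 vs. the
# location the 1.1.0 Windows service (running as LocalSystem) reads from.
OLD_DIR = Path.home() / ".duplicacy-web"
NEW_DIR = Path(r"C:\ProgramData\.duplicacy-web")

def move_settings(old_dir: Path, new_dir: Path) -> list:
    """Copy duplicacy.json and settings.json to the new location.

    repositories/ and logs/ are disposable and will be recreated,
    so only the two settings files matter."""
    new_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for name in ("duplicacy.json", "settings.json"):
        src = old_dir / name
        if src.exists():
            shutil.copy2(src, new_dir / name)
            copied.append(name)
    return copied

# Stop the Duplicacy service first, run
# move_settings(OLD_DIR, NEW_DIR), then start the service again.
```

This is just a convenience over doing the same copy by hand in Explorer; the important part is the stop-copy-start order so the service doesn’t recreate empty settings underneath you.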
Just figured this out and fixed it, but that is learning it the quite hard way… it would have made for a much smoother user experience…
I moved my settings over and all seems to be working, but I don’t trust the backup anymore. The first backup job after moving the settings over uploaded like 39 or 40 GB of files that were already uploaded… This is messed up.
This shouldn’t happen.
Are the backup ids the same - you didn’t try to rename them?
What does your log of the backup say? Especially the BACKUP_STATS lines at the bottom.
Does your old or new .duplicacy-web folder still sit inside a repository? How big is this folder?
Run a check job, and if you don’t mind, post the log here and we may be able to figure out why those files were uploaded again.
My install of the new version went well. I just chose “Single User Mode” and it installed over the top of my old one; it has worked on 3 different servers.
gchen,
There’s over 10,000 entries with filenames uploaded. Can I send it to you personally through private message or email?
What I wanted is the log from the check job, not the backup job. Yes, please send it through private message.
So, an update: my settings were so screwed up that I ended up nuking everything, clearing out Backblaze, uninstalling Duplicacy, and starting from scratch. I just would never trust my backups with whatever happened.
It’s a shame this happened; I’m not sure whether it was caused by me or by the installer.
Now I have to wait two weeks for my NAS to be backed up again at the upload speed I have.
My guess is that you probably used a different backup id with the new settings, so all files had to be rescanned (in which case the upload would still be much faster than a completely new backup, since most chunks already existed in the storage). If there were a log from the check job, we would be able to diagnose what actually happened.
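The rescan-vs-reupload distinction can be shown with a toy model: chunks are addressed by their hash, so a new backup id forces everything to be rescanned, but any chunk whose hash already exists in storage is skipped. (Fixed-size chunking and the function names below are simplifications of mine; the real chunker is variable-size and content-defined.)

```python
import hashlib

def chunk_hashes(data: bytes, size: int = 4) -> list:
    # Toy fixed-size chunking: split the data and hash each piece.
    # Dedup works the same way with real content-defined chunks.
    return [hashlib.sha256(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)]

# Chunks already in storage from the first backup.
storage = set(chunk_hashes(b"files already backed up once"))

# A backup under a brand-new backup id rescans all the files...
rescanned = chunk_hashes(b"files already backed up once")

# ...but only chunks missing from storage actually get uploaded.
to_upload = [h for h in rescanned if h not in storage]
print(len(to_upload))  # 0 -- nothing is re-uploaded
```

That’s why a wrong backup id costs scan time and bandwidth for metadata, but not a full re-upload of unchanged file data.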
I still have the logs. I’ll PM them to you.
Your log shows that revision 79 contained 0 files. This can happen if the directory to be backed up is on a network share that wasn’t mounted correctly due to temporary network errors. As a result, revision 80 had to start from scratch (the equivalent of a new backup id), but as you can see from the log, only 14 GB out of 1.6 TB was re-uploaded. If there had been a revision 81, it would have acted normally, as it would use revision 80 as the basis for determining which files are new.
Did you try the -threads option to speed up the backup?
The files shouldn’t have moved or been unavailable; it was a locally mounted hard drive. Weird. And why would it only upload 14 GB out of the 1.7 TB?
Yeah, the -threads switch helped. My upload speed is saturated…
I know I might have been okay, but I was getting some other weirdness and I just decided to nuke and pave. E.g., I think I accidentally had the service and the executable running at the same time.