You mean rclone? Other people copy smaller numbers of files with rclone, or back up smaller amounts of data with duplicacy. 400k items is a hard limit and there is no way around it: it's there to prevent abuse (like using a Shared Drive as raw storage with backup tools; it's designed for sharing files, and for that 400k is way more than enough).
The number of files selected for backup does not correspond to the number of files duplicacy creates on the target: duplicacy makes one long sausage out of all your files and then shreds it into pieces (chunks). Each file can end up, and usually does end up, shredded into multiple pieces. You can look up "average chunk size" to get an idea of the typical size; it's usually a few megabytes, so a gigabyte file can easily end up split into hundreds of pieces, and at the default 4 MB average roughly 1.5 TB of unique data already puts you near 400k chunks, before even counting metadata chunks.
If your files are mostly huge, you can increase the chunk size duplicacy uses, reducing the number of chunks it creates at the expense of worse deduplication and more storage overhead.
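For example, a larger average chunk size can be set when the storage is created. A rough sketch, where the repository id and storage URL are placeholders and the sizes are just an illustration (chunk parameters are fixed at storage creation time, so this only applies to a new storage):

```
# Sketch: create a new storage with a larger average chunk size.
# -c sets the average chunk size (the default is 4M); -min/-max default to c/4 and c*4.
# "my-backups" and <storage url> are placeholders for your own repository id and target.
duplicacy init -c 32M my-backups <storage url>
```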
But this will be an uphill battle. If you have to use Google Drive, back up to My Drive or to an app folder instead; those are not subject to the Shared Drive item cap.
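If you do go that route, duplicacy's Google Drive backend uses gcd:// storage URLs. A minimal sketch, where the folder path is a placeholder (duplicacy will ask for the Google Drive token file described in its Google Drive setup instructions):

```
# Sketch: storage in a folder under My Drive instead of a Shared Drive.
# "backups/duplicacy" is a placeholder path.
duplicacy init my-backups gcd://backups/duplicacy
```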
I personally recommend against using drive services as backup targets for numerous reasons (you can search this forum for prior discussions). Instead, use S3-type storage. Some providers I can recommend: Storj, Backblaze B2, Wasabi (duplicacy does not support archival storage, so you can't use AWS archival tiers like Glacier with it).
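For illustration, initializing against Backblaze B2 or a generic S3-compatible endpoint looks roughly like this; the bucket, region and endpoint are placeholders, duplicacy will prompt for the access keys, and the exact URL format for each provider is in the storage backends guide:

```
# Backblaze B2 (placeholder bucket name; duplicacy prompts for the key id and application key)
duplicacy init -e my-backups b2://my-duplicacy-bucket

# Generic S3-compatible storage, e.g. Wasabi or Storj's S3 gateway
# (region, endpoint and bucket are placeholders)
duplicacy init -e my-backups s3://us-east-1@s3.wasabisys.com/my-duplicacy-bucket
```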