
I have a couple of legacy .bat scripts used for file synchronization. They use robocopy. According to the documentation, by default there is a retry mechanism: one million retries, with 30 seconds between retries.

So, if I understand correctly, if something goes wrong (for instance, not enough disk space in the destination folder), the script will run for approximately 347 days before it ends.
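(For reference, the arithmetic behind that figure: 1,000,000 retries × 30 s = 30,000,000 s, and 30,000,000 / 86,400 ≈ 347.2 days.)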

I appreciate that a retry mechanism exists, but I don't understand why the default behaviour is set that way.

Default parameter values are supposed to match common, basic use cases, and for a file copy I don't see the point of retrying almost forever. If it still doesn't work after, say, 5 attempts, something somewhere needs to be fixed (network down, dead disk...), and it is worth stopping and raising an error.

What could be the reasons for such default behaviour?


1 Answer


The answer to "What could be the reasons for such default behavior?" that I believe you're searching for is: poor design.

However, I would suggest that the intention behind this default behavior is the user's expectation that robocopy will have copied 100% of the files when it completes. Skipping a file means the copy is incomplete. The state of file permissions and locks is in the administrator's care to ensure success; where that isn't possible, the options are there to change the behavior. This command is not for general consumption and is targeted at admins.

To mitigate this issue, use the /r: and /w: options to change the retry count and wait time to something reasonable for your use case.

e.g.

robocopy c:\src c:\dest /r:3 /w:10

would copy c:\src to c:\dest, retrying at most 3 times and waiting 10 seconds between retries on any file it fails to copy.
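As a minimal sketch (reusing the c:\src and c:\dest paths from the example above), a wrapper .bat can also fail fast on robocopy's exit code instead of retrying forever; exit codes of 8 or higher indicate that at least one copy failed:

@echo off
rem Copy with at most 3 retries, waiting 10 seconds between attempts.
robocopy c:\src c:\dest /r:3 /w:10

rem Robocopy exit codes 0-7 indicate varying degrees of success;
rem 8 or above means at least one file or directory could not be copied.
if %ERRORLEVEL% GEQ 8 (
    echo Robocopy failed with exit code %ERRORLEVEL%
    exit /b 1
)
exit /b 0

That way the script surfaces an error to its caller as soon as the retry budget is exhausted, which is exactly the "stop and raise an error" behavior the question asks for.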

Your own documentation link shows these options.
