
In our application we frequently have to write large files (100 MB up to several GB) to remote machines (in the same LAN segment). This is the bottleneck of our application. The remote machines may be native Windows machines, but also Linux machines accessed via SMB.

We found that creating the files locally first and then copying them with the Windows API function CopyFile is MUCH faster than using CreateFile directly with a UNC path (or a drive letter) targeting the remote machine. But this still requires two writes, which seems far from optimal.

Inspired by the first comments on this question, I implemented the usage of FILE_FLAG_OVERLAPPED for CreateFile as discussed here and here:

    HANDLE hToken;
    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &hToken))
    {
        gConsoleAndLog << "OpenProcessToken failed with err " << GetLastError() << std::endl;
    }

    // TOKEN_PRIVILEGES declares only one LUID_AND_ATTRIBUTES entry, so
    // allocate enough storage for three privileges.
    BYTE tpBuffer[sizeof(TOKEN_PRIVILEGES) + 2 * sizeof(LUID_AND_ATTRIBUTES)];
    TOKEN_PRIVILEGES& tp = *reinterpret_cast<TOKEN_PRIVILEGES*>(tpBuffer);
    tp.PrivilegeCount = 3;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    tp.Privileges[1].Attributes = SE_PRIVILEGE_ENABLED;
    tp.Privileges[2].Attributes = SE_PRIVILEGE_ENABLED;

    if (!LookupPrivilegeValue(NULL, SE_MANAGE_VOLUME_NAME, &tp.Privileges[0].Luid))
        gConsoleAndLog << "LookupPrivilegeValue SE_MANAGE_VOLUME_NAME failed with err " << GetLastError() << std::endl;

    if (!LookupPrivilegeValue(NULL, SE_INCREASE_QUOTA_NAME, &tp.Privileges[1].Luid))
        gConsoleAndLog << "LookupPrivilegeValue SE_INCREASE_QUOTA_NAME failed with err " << GetLastError() << std::endl;

    if (!LookupPrivilegeValue(NULL, SE_ASSIGNPRIMARYTOKEN_NAME, &tp.Privileges[2].Luid))
        gConsoleAndLog << "LookupPrivilegeValue SE_ASSIGNPRIMARYTOKEN_NAME failed with err " << GetLastError() << std::endl;

    if (!AdjustTokenPrivileges(hToken, FALSE, &tp, 0, NULL, NULL) || GetLastError() != ERROR_SUCCESS)
    {
        gConsoleAndLog << "AdjustTokenPrivileges failed with err " << GetLastError() << std::endl;
    }
    else gConsoleAndLog << "AdjustTokenPrivileges SUCCESS" << std::endl;

Unlike in the second post, I cannot enable the privilege SE_ASSIGNPRIMARYTOKEN_NAME, even when starting as administrator. I don't know whether that makes a difference.

After opening the file with FILE_FLAG_NO_BUFFERING | FILE_FLAG_OVERLAPPED, the calculated size is pre-allocated:

    // Note: SetFilePointerEx returns a BOOL, not INVALID_SET_FILE_POINTER
    if (!SetFilePointerEx(hFile, endPosition, NULL, FILE_BEGIN))
    {
        CPrintWithOSError(NULL, 0, "SetFilePointerEx FAILED");
        return 1;
    }

    if (!SetEndOfFile(hFile))
    {
        CPrintWithOSError(NULL, 0, "SetEndOfFile FAILED");
        return 1;
    }

    if (!SetFileValidData(hFile, endPosition.QuadPart))
    {
        CPrintWithOSError(NULL, 0, "SetFileValidData FAILED");
        return 1;
    }

That works for local drives, but SetFileValidData fails on remote drives. The call fails with Windows error

    1314: A required privilege is not held by the client
  • How can this be fixed?
  • What are other ways to do this?
  • Is there a way to increase file buffering for appending writes using the WinAPI?
  • How much data are you writing in each of your `WriteFile` calls? – Matteo Italia Jan 26 '19 at 16:53
  • Not that much, as we are using e.g. Tifflib to write the files. Some KB possibly – this we cannot change. – RED SOFT ADAIR Jan 26 '19 at 16:55
  • So what is the current transfer speed? What is the network speed? Are the files compressed? – user7860670 Jan 26 '19 at 16:56
  • The most likely explanation for why creating a file remotely is slower than creating locally and copying is due to inefficient buffering when writing out the contents of the file. Tuning and optimizing the internal buffering, when generating the output, should fix that. A file copy uses large buffer sizes, to minimize the overhead. – Sam Varshavchik Jan 26 '19 at 17:12
  • A simple test to do would be to use the plain C API for the IO instead of `CreateFile`/`WriteFile`, using `setvbuf` to add a big (say, 32 MB? 128 MB?) intermediate buffer. – Matteo Italia Jan 26 '19 at 17:21
  • What about writing the file asynchronously? Might that possibly be even better? – RED SOFT ADAIR Jan 26 '19 at 20:14
  • Yes, the best choice is to use `FILE_FLAG_NO_BUFFERING | FILE_FLAG_OVERLAPPED`; it is also possible to just set the end of file and the valid data length, if it is known at the beginning. – RbMm Jan 26 '19 at 20:17
  • *`CopyFile` is MUCH faster than using `CreateFile` directly* – `CopyFile` calls `CreateFile` internally anyway, so the task is only about which options you pass to `CreateFile` and how you write the data to the file. – RbMm Jan 26 '19 at 20:19
  • I tried FILE_FLAG_OVERLAPPED and it won't work as expected. I found this here: https://stackoverflow.com/questions/49580652/windows-writefile-blocks-even-with-file-flag-overlapped. – RED SOFT ADAIR Jan 27 '19 at 07:52

1 Answer


If you have access to the tifflib source code, you should be able to solve this by buffering the data to be written to the output file until the buffer is full or the file is closed. A simple FILE * will do this; use setvbuf to set the buffer size to 1 MB or so.
