
A client of ours reported a very weird issue when our Swing application writes a file to the user's local machine via Windows Remote Desktop (the application is hosted on a terminal server that users connect to).

The flow is:

  • Users log on and run the application via Remote Desktop (with their C:\ included as a "Local resource")
  • While working they export data from the database into files
  • The user chooses what data to export
  • The user selects a destination file on their local computer like \\tsclient\C\Temp\TestFile.txt
  • Files can be large, so 1000 rows are fetched from the database and written to the file per batch
  • On the second batch, when Java opens the file and writes to it again, something really weird starts to happen!
    • The file increases rapidly in size and stops at around 2 GB
    • Then data continues to be written to the file

I'm not sure whether this is a problem in the core Java libraries, the Remote Desktop implementation, or a combination of the two. Our application is also hosted via Citrix, which works fine, and writing to a local disk or to UNC network paths works fine as well.

I've created an SSCCE demonstrating the problem: connect to a computer with Remote Desktop (make sure C:\ is a "Local resource") and run the program to see some really strange behavior. I'm using JDK 7u45.

import static java.nio.file.StandardOpenOption.APPEND;
import static java.nio.file.StandardOpenOption.CREATE;
import static java.nio.file.StandardOpenOption.TRUNCATE_EXISTING;
import static java.nio.file.StandardOpenOption.WRITE;

import java.io.BufferedWriter;
import java.io.File;
import java.io.IOException;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;
import java.nio.file.Files;
import java.nio.file.OpenOption;
import java.util.Collections;

/**
 * Demonstrates weird issue when writing (appending) to a file over TsClient (Microsoft Remote Desktop).
 * 
 * @author Martin
 */
public class WriteOverTsClientDemo
{
    private static final File FILE_TO_WRITE = new File("\\\\tsclient\\C\\Temp\\TestFile.txt");
    //private static final File FILE_TO_WRITE = new File("C:\\Temp\\TestFile.txt");

    private static final String ROW_DATA = "111111111122222222223333333333444444444555555555566666666667777777777888888888899999999990000000000";

    public static void main(String[] args) throws IOException
    {
        if (!FILE_TO_WRITE.getParentFile().exists())
        {
            throw new RuntimeException("\nPlease create directory C:\\Temp\\ on your local machine and run this application via RemoteDesktop with C:\\ as a 'Local resource'.");
        }
        FILE_TO_WRITE.delete();
        new WriteOverTsClientDemo().execute();
    }

    private void execute() throws IOException
    {
        System.out.println("Writing to file: " + FILE_TO_WRITE);
        System.out.println();

        for (int i = 1; i <= 10; i++)
        {
            System.out.println("Writing batch " + i + "...");
            writeDataToFile(i);
            System.out.println("Size of file after batch " + i + ": " + FILE_TO_WRITE.length());
            System.out.println();
        }
        System.out.println("Done!");
    }

    private void writeDataToFile(int batch) throws IOException
    {
        Charset charset = Charset.forName("UTF-8");
        CharsetEncoder encoder = charset.newEncoder();

        try(OutputStream out = Files.newOutputStream(FILE_TO_WRITE.toPath(), CREATE, WRITE, getTruncateOrAppendOption(batch));
            BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(out, encoder)))
        {
            writeData(batch, writer);
        }
    }

    private void writeData(int batch, BufferedWriter writer) throws IOException
    {
        for (String data : createData())
        {
            writer.append(Integer.toString(batch));
            writer.append(" ");
            writer.append(data);
            writer.append("\n");
        }
    }

    private Iterable<String> createData()
    {
        return Collections.nCopies(100, ROW_DATA);
    }

    /**
     * @return option to write from the beginning or from the end of the file
     */
    private OpenOption getTruncateOrAppendOption(int batch)
    {
        return batch == 1 ? TRUNCATE_EXISTING : APPEND;
    }
}
  • `On the second batch, when Java opens the file and write to it again ...`, so the .delete() in your SSCCE is needed ? – PeterMmm Dec 16 '13 at 07:47
  • No not really, it's just there to start without an existing file, shouldn't matter. – Uhlen Dec 16 '13 at 08:12
  • I am not sure, but it might be worth avoiding the buffered output stream and writing through a ByteChannel instead, to test whether the problem is caused by the combination of the output stream and the somewhat non-standard behaviour of files mapped via TsClient – JosefN Dec 16 '13 at 15:00
  • @Uhlen It might be silly, but try a manual `try-catch` rather than `try-with-resources`, as the exception from `try-with-resources` is sometimes suppressed in certain cases. I faced a similar case once, but there I wasn't closing the file, due to which the file size kept increasing. – Jatin Dec 17 '13 at 09:10
  • @Jatin Manual try-catch does not seem to help. – Uhlen Dec 17 '13 at 09:25
  • Any news? Did you find a workaround/ filed a bug for reference? – Jan Dec 26 '13 at 10:06

2 Answers


I do not have a setup (no Windows) to verify this effect, so these are just thoughts:

2 GB sounds like a filesystem-related maximum file size. A 32-bit Windows operating system on your client's side?

The behaviour sounds like clever filesystem caching on a bad-block FS: rapid remote writes of big blocks may cleverly pre-allocate the file in an attempt to speed up future writes by keeping its blocks together. Try a different FS to verify? Have you tried FreeRDP?

Keep the file open. Re-opening it to write huge blocks could hint clever systems to cache (pre-allocate), as sketched below.
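
To illustrate the "keep the file open" idea, here is a minimal sketch (my own variation of the SSCCE, class name made up, untested over RDP) that opens the stream only once with TRUNCATE_EXISTING and pushes every batch through the same writer, so APPEND is never used:

import static java.nio.file.StandardOpenOption.CREATE;
import static java.nio.file.StandardOpenOption.TRUNCATE_EXISTING;
import static java.nio.file.StandardOpenOption.WRITE;

import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Collections;

public class SingleOpenWriteDemo
{
    private static final Path FILE_TO_WRITE = Paths.get("\\\\tsclient\\C\\Temp\\TestFile.txt");

    public static void main(String[] args) throws IOException
    {
        // Open the file once and keep the writer for all batches,
        // so the problematic APPEND open option is never exercised.
        try (BufferedWriter writer = Files.newBufferedWriter(
                FILE_TO_WRITE, StandardCharsets.UTF_8, CREATE, WRITE, TRUNCATE_EXISTING))
        {
            for (int batch = 1; batch <= 10; batch++)
            {
                for (String row : Collections.nCopies(100, "row data"))
                {
                    writer.append(Integer.toString(batch)).append(' ').append(row).append('\n');
                }
                writer.flush(); // batch boundary without re-opening the file
            }
        }
    }
}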

Update:

FileChannelImpl.java:248

// in append-mode then position is advanced to end before writing
p = (append) ? nd.size(fd) : position0(fd, -1);

which finally leads to FileDispatcherImpl.java:136

static native long size0(FileDescriptor fd) throws IOException;

which, being native, can hide any kind of bug when it comes to the protocols in between. I would rather file this as a bug in nio/Windows, as they might not have foreseen anything funny like RDP underneath.

It looks like the returned size is Integer.MAX_VALUE and the file pointer is moved there…
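
If the append path really is the trigger, one workaround to try (just a sketch, class name made up, untested over RDP) is to avoid StandardOpenOption.APPEND altogether: open a FileChannel with WRITE only and move the position to the end of the file yourself before writing:

import static java.nio.file.StandardOpenOption.CREATE;
import static java.nio.file.StandardOpenOption.WRITE;

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ManualAppendDemo
{
    public static void main(String[] args) throws IOException
    {
        Path file = Paths.get("\\\\tsclient\\C\\Temp\\TestFile.txt");

        // Open WITHOUT StandardOpenOption.APPEND and emulate append by
        // positioning the channel at the current end of the file.
        try (FileChannel channel = FileChannel.open(file, CREATE, WRITE))
        {
            channel.position(channel.size());
            channel.write(ByteBuffer.wrap("appended line\n".getBytes(StandardCharsets.UTF_8)));
        }
    }
}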

Alternate implementation using java.io.FileWriter and no explicit encoding, to reduce the number of lines of code:

import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.util.Collections;

/**
 * Demonstrates weird issue when writing (appending) to a file over TsClient (Microsoft Remote Desktop).
 *
 * @author Martin
 */
public class WriteOverTsClientDemo
{
   // private static final File FILE_TO_WRITE = new File("\\\\tsclient\\C\\Temp\\TestFile.txt");
   private static final File FILE_TO_WRITE = new File("/tmp/TestFile.txt");

   private static final String ROW_DATA = "111111111122222222223333333333444444444555555555566666666667777777777888888888899999999990000000000";

   public static void main(final String[] args) throws IOException
   {
      if (!FILE_TO_WRITE.getParentFile().exists())
      {
         throw new RuntimeException("\nPlease create directory C:\\Temp\\ on your local machine and run this application via RemoteDesktop with C:\\ as a 'Local resource'.");
      }
      FILE_TO_WRITE.delete();
      new WriteOverTsClientDemo().execute();
   }

   private void execute() throws IOException
   {
      System.out.println("Writing to file: " + FILE_TO_WRITE);
      System.out.println();

      for (int i = 1; i <= 20; i++)
      {
         System.out.println("Writing batch " + i + "...");
         writeDataToFile(i);
         System.out.println("Size of file after batch " + i + ": " + FILE_TO_WRITE.length());
         System.out.println();
      }
      System.out.println("Done!");
   }

   private void writeDataToFile(final int batch) throws IOException
   {
      try (BufferedWriter writer = new BufferedWriter(new FileWriter(FILE_TO_WRITE, batch > 1)))
      {
         writeData(batch, writer);
      }
   }

   private void writeData(final int batch, final BufferedWriter writer) throws IOException
   {
      for (final String data : createData())
      {
         writer.append(Integer.toString(batch));
         writer.append(" ");
         writer.append(data);
         writer.append("\n");
      }
   }

   private Iterable<String> createData()
   {
      return Collections.nCopies(100, ROW_DATA);
   }

}
  • Thanks for the ideas. I'm using 64-bit Windows and NTFS on the client, still reproducible. I tested by keeping the file open (i.e. using TRUNCATE_EXISTING only once), which worked fine. And if I try with APPEND also on the first batch, I see the same weird behavior right away. So the problem is definitely related to StandardOpenOption.APPEND. Any more ideas? Ps. Keeping the file open might work, but is not an answer to my question. Ds. – Uhlen Dec 17 '13 at 09:24
  • I borrowed some Windows laptops and was able to reproduce this on two Windows 8 machines with NTFS. Unfortunately my rdesktop on Ubuntu was not connecting local drives, so I could not test a different RDP client. As it does not happen when executed on a local disk, even remotely, I suspect RDP rather than nio. Have you tried with normal Java io? – Jan Dec 17 '13 at 11:10
  • Added an alternate implementation. I cannot borrow the setup again from my colleagues. Give it a try @Uhlen to verify whether it is RDP's or nio's problem? Do you need `java.nio`? – Jan Dec 18 '13 at 10:37
  • java problem `FileWriter fw = new FileWriter("//tsclient/C/temp/massive.txt", true)` produces the same results – egerardus Sep 19 '16 at 02:16

We have this exact same problem: a customer reported that our Java application creates 2 GB files when writing to a TS client shared drive. We noticed that the problem happens only when appending data, both when using java.io.FileOutputStream and java.nio.file.Files.write.
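
For reference, a minimal sketch (my own reconstruction, class name made up, path is a placeholder) of the two append variants that show the problem for us:

import static java.nio.file.StandardOpenOption.APPEND;
import static java.nio.file.StandardOpenOption.CREATE;

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class AppendVariantsDemo
{
    public static void main(String[] args) throws IOException
    {
        byte[] data = "one more row\n".getBytes(StandardCharsets.UTF_8);

        // Variant 1: classic java.io append (second constructor argument = append)
        try (FileOutputStream out = new FileOutputStream("\\\\tsclient\\C\\Temp\\TestFile.txt", true))
        {
            out.write(data);
        }

        // Variant 2: java.nio.file append
        Files.write(Paths.get("\\\\tsclient\\C\\Temp\\TestFile.txt"), data, CREATE, APPEND);
    }
}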

We opened an issue, which you can find here:

https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8206888

However, after further investigation we tracked down the problem to improper behavior of the Windows WriteFile API, which in such an environment disregards what is written in the documentation:

(extracted from https://docs.microsoft.com/en-gb/windows/desktop/api/fileapi/nf-fileapi-writefile)

To write to the end of file, specify both the Offset and OffsetHigh members of the OVERLAPPED structure as 0xFFFFFFFF. This is functionally equivalent to previously calling the CreateFile function to open hFile using FILE_APPEND_DATA access.

The following C program can be used to reproduce the issue:


#include <windows.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        printf("Not enough args\n");
        return 1;
    }

    HANDLE hFile = CreateFile(argv[1], GENERIC_READ | GENERIC_WRITE, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE) {
        printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    DWORD nw;
    OVERLAPPED ov = {0};   /* zero-initialize the whole structure */
    /* Offset/OffsetHigh = 0xFFFFFFFF means "write at the end of file",
       as described in the documentation quoted above. */
    ov.Offset = (DWORD)0xFFFFFFFF;
    ov.OffsetHigh = (DWORD)0xFFFFFFFF;
    ov.hEvent = NULL;
    WriteFile(hFile, "a", 1, &nw, &ov);
    CloseHandle(hFile);

    return 0;
}