
I am developing a simple client-server program based on this link. My server is a Mac machine and the client is a Windows machine (Windows 10).

The client-server communication works fine. But when I examined the bytes with Wireshark, I saw that only one character is sent in the first TCP packet, and the rest of the data goes out in a second TCP packet.

That is, if I send "qwerty", the client sends "q", the server responds, and then the client sends "werty". Similarly, the server sends "Q", the client responds, and then the server sends "WERTY".

Here is my client and server code. I am flushing the stream after calling writeBytes().

How can I force the data to be sent in a single TCP packet?

Client Code:

import java.io.*;
import java.net.*;

class Client
{
    public static void main(String argv[]) throws Exception
    {
        String sentence;
        String modifiedSentence;
        BufferedReader inFromUser = new BufferedReader( new InputStreamReader(System.in));
        Socket clientSocket = new Socket("192.1.162.65", 6789);
        DataOutputStream outToServer = new DataOutputStream(clientSocket.getOutputStream());
        BufferedReader inFromServer = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
        System.out.println("Enter some text: " + inFromServer);
        sentence = inFromUser.readLine();
        outToServer.writeBytes(sentence + '\n');
        outToServer.flush();
        modifiedSentence = inFromServer.readLine();
        System.out.println("FROM SERVER: " + modifiedSentence);
        clientSocket.close();
    }
}

Server Code:

import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class Server {

    public static void createListener(ServerSocket serverSocket) throws Exception {

        String clientSentence;
        String capitalizedSentence;
        @SuppressWarnings("resource")
        ServerSocket welcomeSocket = new ServerSocket(6789);

        while (true) {
            @SuppressWarnings("resource")
            Socket connectionSocket = welcomeSocket.accept();
            BufferedReader inFromClient = new BufferedReader(new InputStreamReader(connectionSocket.getInputStream()));
            DataOutputStream outToClient = new DataOutputStream(connectionSocket.getOutputStream());
            clientSentence = inFromClient.readLine();
            System.out.println("Received: " + clientSentence);
            capitalizedSentence = clientSentence.toUpperCase() + '\n';
            outToClient.writeBytes(capitalizedSentence);
            outToClient.flush();
        }
    }

    public static void main(String[] args) throws Exception {
        Server.createListener(null);
    }
}

Edit 1: The reason behind this question is that I am trying to use the same server code to send data to another client that is not under my control. In that case, the above code sends only the first TCP packet; I do not see the second TCP packet in Wireshark. Any suggestions on how I can go about debugging this?

Anit

3 Answers


To my knowledge, you can't, and more importantly, you should not.

The point is: these libraries are intended to provide an abstraction for you. The TCP protocol is actually complex, and most likely you do not want to deal with all its subtle details.

So the non-answer here: unless you encounter real issues with this implementation (like an unacceptable performance hit), you should focus on writing clear, readable code instead of fiddling with TCP stack implementation details!

Example: your code doesn't deal with exceptions; you just let them pass through. Your code isn't closing streams, it contains @SuppressWarnings annotations, and it was obviously never written with unit tests in mind.

Such things matter. Whether the JVM sends one TCP packet or two does not!
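
For illustration, here is a minimal sketch of what "closing streams" could look like in the question's server loop, using try-with-resources so each accepted connection is cleaned up automatically (the variable names follow the question's code, the class name is only for the sketch; this is a sketch, not a drop-in replacement):

import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class ClosingServer {

    public static void main(String[] args) throws Exception {
        // try-with-resources closes the listening socket when the loop exits
        try (ServerSocket welcomeSocket = new ServerSocket(6789)) {
            while (true) {
                // The accepted socket and its streams are closed automatically
                // at the end of this try block, even if an exception is thrown.
                try (Socket connectionSocket = welcomeSocket.accept();
                     BufferedReader inFromClient = new BufferedReader(
                             new InputStreamReader(connectionSocket.getInputStream()));
                     DataOutputStream outToClient = new DataOutputStream(
                             connectionSocket.getOutputStream())) {
                    String clientSentence = inFromClient.readLine();
                    System.out.println("Received: " + clientSentence);
                    if (clientSentence != null) {
                        outToClient.writeBytes(clientSentence.toUpperCase() + '\n');
                        outToClient.flush();
                    }
                } // closing here also flushes buffered data and sends a FIN,
                  // so the peer sees a clean end-of-stream
            }
        }
    }
}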

GhostCat
  • Thanks. I am trying to use the same server code to send data to another client that is not in my control. In that case, the above code sends just the first TCP packet. – Anit Apr 14 '17 at 19:37
  • What happens when you close that socket? – GhostCat Apr 14 '17 at 19:42
  • Thanks! That helped. – Anit Apr 14 '17 at 19:49
  • That is what matters. And yes, I probably "misread" and didn't see your real issue first. Really glad that we could sort that out. Especially as your accept kicked me over the daily limit today! – GhostCat Apr 14 '17 at 19:50
  • Consider providing libraries that allow such fine-grained control over packets. It's definitely useful to know that this cannot be achieved with the standard libraries, but as it is, your answer doesn't actually help people who need such control. – Kröw Jan 29 '20 at 20:06
  • Links to libraries and such aren't necessarily helpful. They might quickly outdate. Libraries also depend on your requirements, so researching a library that really fits you is a very individual undertaking. – GhostCat Jan 30 '20 at 02:35

You can change part of this behavior for better performance; it's actually a common practice.

What you are experiencing is due to the stream-oriented nature of TCP (by the way, this will be useful when writing your server and client: How to read all of InputStream in Server Socket JAVA). When the client writes to the socket, the data is actually placed in a send buffer. TCP takes what it can from the send buffer, depending on several variables (see Are TCP/IP Sockets Atomic?), and sends it to the other side in a TCP segment.

In your case, TCP picks up the first character, most probably because it drains the send buffer faster than your application fills it, and sends a TCP segment containing a single byte. When an ACK comes back from the other end, TCP sends a second segment with the rest of the bytes (actually, it could send, for example, only two bytes in the second segment and then the rest later).
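
Related to that: DataOutputStream.writeBytes() hands the string to the socket one byte at a time, so the very first byte can reach TCP's send buffer on its own. Below is a minimal sketch of the question's client that encodes the whole line first and passes it to the socket in a single write() call; TCP may still split it, but it now sees the full payload at once (the IP and port are taken from the question, the class name is only for the sketch):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

class SingleWriteClient {
    public static void main(String[] args) throws Exception {
        BufferedReader inFromUser = new BufferedReader(new InputStreamReader(System.in));
        try (Socket clientSocket = new Socket("192.1.162.65", 6789)) {
            OutputStream outToServer = clientSocket.getOutputStream();
            BufferedReader inFromServer = new BufferedReader(
                    new InputStreamReader(clientSocket.getInputStream()));

            System.out.println("Enter some text: ");
            String sentence = inFromUser.readLine();

            // Encode the whole line once and write it in a single call,
            // instead of one write per character as writeBytes() does.
            byte[] payload = (sentence + '\n').getBytes(StandardCharsets.US_ASCII);
            outToServer.write(payload);
            outToServer.flush();

            System.out.println("FROM SERVER: " + inFromServer.readLine());
        }
    }
}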

You cannot change the fact that the first segment carries only the first character (and this should actually vary; sometimes it could carry more than one character). However, you can change the part where TCP waits for the ACK before sending the rest of the message. This behavior is mainly due to Nagle's algorithm, and you can disable it with:

clientSocket.setTcpNoDelay(true);

After this, you should see that the second segment does not wait for the ACK. In your test you will probably notice no difference, but if the client were on the US East Coast and the server on the West Coast, the second segment could be delayed by 20-40 ms. The delay could even reach 300 ms if the client were in China, India, or South America.
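
The same applies in the reply direction: your capture shows the server also sending "Q" first and "WERTY" afterwards, so, assuming you control the server, the option can be set on the accepted socket as well (a fragment using the question's variable names):

Socket connectionSocket = welcomeSocket.accept();
// Disable Nagle's algorithm for the reply as well, so the server's
// "WERTY" is not held back waiting for the ACK of "Q".
connectionSocket.setTcpNoDelay(true);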

Another factor that is useful to understand is the initial congestion window, though I don't think it affects your test. You can read more about initcwnd here: https://www.cdnplanet.com/blog/tune-tcp-initcwnd-for-optimum-performance/.

rodolk

You have no control over the way TCP packetizes the data! Stop thinking about it. This is all handled at a lower OS level.

As long as your message is successfully sent, you shouldn't worry.

Am_I_Helpful