
I'm currently working on a client-server application. The server renders images, compresses them using Nvidia's H264 encoder and sends them to a client. On the client side the image is decompressed and displayed. I'm using wrappers around Berkeley/Windows sockets for both TCP and UDP. UDP works just fine, all pictures are displayed properly. When switching to TCP transmission, random distortion appears. Not all the time, just on some occasions. The distortion usually looks like this:

http://de.tinypic.com/r/14sedf6/8

It stays for roughly 5 to 10 frames (at 60 fps). The site http://www.onsip.com/about-voip/sip/udp-versus-tcp-for-voip states that using TCP for audio transmission leads to a kind of jitter that is "unacceptable [...] for the end user". Do you think jitter is the reason for the observed image distortion, or do you have any other clues on this?

This is the code of the TCP wrapper. Functions like Bind() are implemented in the super-class and should work, because the UDP wrapper uses them too. :)

TcpSocket::TcpSocket()
{
    WSADATA wsaData;

    // WSAStartup initialises Winsock; a return value of 0 means success and
    // should ideally be checked before the socket is used.
    int i = WSAStartup(MAKEWORD(2,2), &wsaData);

    m_Sock = -1;
}

bool TcpSocket::Create()
{
    if ((m_Sock = socket(AF_INET, SOCK_STREAM, 0)) > 0)
        return true;
    return false;
}

bool TcpSocket::Listen(int que)
{
    if (listen(m_Sock, que) == 0)
        return true;
    return false;
}

bool TcpSocket::Accept(TcpSocket &clientSock)
{
    int size = sizeof(struct sockaddr);
    clientSock.m_Sock = accept(m_Sock,
        (struct sockaddr *) &clientSock.m_SockAddr, (socklen_t *) &size);
    if (clientSock.m_Sock == -1)
    {
        cout << "accept failed: " << WSAGetLastError() << endl;
        return false;
    }
    return true;
}

bool TcpSocket::Connect(string address, int port)
{
    struct in_addr *addr_ptr;
    struct hostent *hostPtr;
    string add;

    hostPtr = gethostbyname(address.c_str());
    if (hostPtr == NULL)
        return false;

    // take the first address in the list of host addresses
    addr_ptr = (struct in_addr *) *hostPtr->h_addr_list;

    // convert the address to the Internet address in standard dot notation
    add = inet_ntoa(*addr_ptr);
    if (add.empty())    // compare the string itself, not the pointer from c_str()
        return false;

    struct sockaddr_in sockAddr;
    sockAddr.sin_family = AF_INET;
    sockAddr.sin_port = htons(port);
    sockAddr.sin_addr.s_addr = inet_addr(add.c_str());
    if (connect(m_Sock, (struct sockaddr *) &sockAddr, sizeof(struct sockaddr)) == 0)
        return true;
    return false;
}

int TcpSocket::Receive(char *buff, int buffLen)
{
    return recv(m_Sock, buff, buffLen, 0);
}

int TcpSocket::Send(const char *buff, int len)
{
    return send(m_Sock, buff, len, 0);
}
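
A side note on Send(): it forwards straight to send(), which on a stream socket may accept fewer bytes than requested, especially for large frame buffers. A minimal sketch of a send-everything loop on top of the wrapper, where SendAll is a hypothetical helper and not part of the class above:

// Hypothetical helper: keeps calling Send() until the whole buffer has been
// handed to the TCP stack, or until an error occurs.
bool SendAll(TcpSocket &sock, const char *buff, int len)
{
    int total = 0;
    while (total < len)
    {
        int sent = sock.Send(buff + total, len - total);
        if (sent <= 0) // SOCKET_ERROR (or 0) means the transfer failed
            return false;
        total += sent;
    }
    return true;
}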

Thank you very much for any help, clues or suggestions!

Edit 1: I get the packets on the client side as follows:

//This is the TCP read() call, which should block until something is received
int i = server->Receive(serverMessage, 100000);
//Pass the received buffer to the decoder: sizeof(UINT8) skips an identifier that tells
//which kind of packet was received, sizeof(int) skips the length of the actual buffer
m_decoder->parseData((const unsigned char*)(serverMessage + sizeof(UINT8) + sizeof(int)), size);
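
One way to check whether a single Receive() call really delivers a complete frame is to compare its return value with the length field in the header. A rough diagnostic sketch in the same context as the snippet above, assuming the header layout just described (expectedSize is a hypothetical name, not part of the original code):

//i from the Receive() call above is the number of bytes this single recv() actually delivered;
//read the announced payload length that follows the one-byte identifier
//(assumes the header bytes arrived in this call and both sides use the same byte order)
int expectedSize = 0;
memcpy(&expectedSize, serverMessage + sizeof(UINT8), sizeof(int));
//If fewer bytes arrived than header + payload, the frame is incomplete
if (i < (int)(sizeof(UINT8) + sizeof(int)) + expectedSize)
    cout << "partial frame: got " << i << " of "
         << sizeof(UINT8) + sizeof(int) + expectedSize << " bytes" << endl;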
Christoph

1 Answer


As the VoIP page you linked to says, the only difference you will experience between UDP and TCP, assuming everything else is implemented correctly, is that TCP will occasionally delay your data. So the real question is: can a delay in incoming data cause your H264 decoder to draw half the screen in purple like your image shows? I don't know enough about this decoder to answer that.

To explain more where this occasional delay comes from: Networks drop packets from time to time. Maybe because a link is overloaded, or because of electrical noise, or cosmic rays; something like that.

So, when you are using UDP and a packet is dropped, the data stream is missing some data; your H264 decoder will reject it as corrupt, a new frame comes along 1/60th of a second later, and you don't notice it.

TCP, on the other hand, is compelled to deliver every packet correctly and in the right order. So when a packet is dropped, TCP waits a little bit, then re-sends the data that was lost. If two packets in a row are dropped, TCP doubles the wait time and tries again, and so on.

See similar info in answers to question TCP vs UDP on video stream

Bryan
  • I read this, but I was interested in opinions on whether the observed distortion comes from TCP's "slower" transmission or whether it is something else. And I couldn't find any page/paper/site that deals with the influence of TCP when streaming media data. I'll further explain my client-side decompression when using TCP: in every iteration I wait for a new compressed image by using the "read()" function of TCP. And I think this one is blocking by default, so when the function returns the image buffer should be filled correctly? – Christoph Jun 06 '14 at 09:54
  • OK, maybe you are assuming that a single read will return an entire frame? TCP does not deal in 'messages', just a stream of bytes, so it can split your video frame however it feels like it. You have to know at a higher level how many bytes you are expecting and repeatedly call `read()` until you get them all. – Bryan Jun 13 '14 at 12:41
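
To illustrate that last comment, here is a minimal sketch of a length-prefixed receive loop built on the wrapper's Receive(). ReceiveAll is a hypothetical helper, and the one-byte-type plus four-byte-length header layout is assumed from the question's Edit 1 (including the assumption that sender and receiver use the same int byte order):

// Hypothetical helper: keeps calling Receive() until exactly 'len' bytes have
// arrived, because TCP delivers a byte stream with no message boundaries.
bool ReceiveAll(TcpSocket *sock, char *buff, int len)
{
    int total = 0;
    while (total < len)
    {
        int got = sock->Receive(buff + total, len - total);
        if (got <= 0) // 0 = connection closed, SOCKET_ERROR = failure
            return false;
        total += got;
    }
    return true;
}

// Usage sketch for the framing described in Edit 1: first the type byte,
// then the payload length, then the payload itself.
// (A real implementation would also check that size fits into serverMessage.)
UINT8 type;
int size;
if (ReceiveAll(server, (char *) &type, (int) sizeof(UINT8)) &&
    ReceiveAll(server, (char *) &size, (int) sizeof(int)) &&
    ReceiveAll(server, serverMessage, size))
{
    m_decoder->parseData((const unsigned char *) serverMessage, size);
}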