
When I try to perform lots of synchronous HTTPS requests using curlpp, I get the following error:

Failed to connect to https://www.google.com/ port 443: No route to host

The code below is of course only a stripped-down, cleaned-up sample of the actual code. The URL is different, but the results are the same.

#include <curlpp/cURLpp.hpp>
#include <curlpp/Easy.hpp>
#include <curlpp/Options.hpp>
#include <curlpp/Exception.hpp>

#include <iostream>
#include <sstream>
#include <thread>

void ThreadFunction()
{
   try
   {
      curlpp::Cleanup myCleanup;
      std::ostringstream os;
      // Shortcut form: performs one HTTPS GET and streams the response into os.
      os << curlpp::options::Url("https://www.google.com/");
   }
   catch (curlpp::RuntimeError &e)
   {
      std::cout << e.what() << std::endl;
   }
   catch (curlpp::LogicError &e)
   {
      std::cout << e.what() << std::endl;
   }
}

int main(void)
{
   std::thread threads[10];

   int loops = 0;

   while (loops < 3000)
   {
      // Start a batch of 10 parallel requests...
      for (int i = 0; i < 10; i++)
      {
         threads[i] = std::thread(ThreadFunction);
      }

      // ...and wait for all of them before starting the next batch.
      for (int i = 0; i < 10; i++)
      {
         threads[i].join();
      }
      loops++;
   }

   return 0;
}

The errors do not occur right from the beginning, but only around loop ~1000. After that, each request generates this error. I have tried using different ports (like 80) for each thread, but curlpp only seems to perform HTTPS requests over port 443. When running only one thread everything seems fine (except that it takes way too long, hence the multiple threads).

So my theory A: It is actually possible to perform synchronous requests like this, but I'm missing some closing/cleaning of the port after each request. (After all, it works for the first ~1000 * 10 requests.)

Theory B: It is not possible to perform requests over the same port at the same time, but the threads just happen to use the same port at slightly different times (until they don't, at around loop ~1000).

Or C: It is simply not allowed; after a bunch of requests, some network overlord has had enough and blocks new requests.
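(For theory A: the kind of closing/cleaning or reuse I have in mind would look roughly like the untested sketch below. I am assuming here that a curlpp::Easy handle can be configured once per thread and perform() called on it repeatedly, and that curlpp::options::WriteStream is the right option for capturing the response; I have not verified either.)

// Untested sketch for theory A: each thread sets up one curlpp::Easy handle
// and reuses it for many requests instead of opening a new one per request.
// Assumes curlpp has been initialized once elsewhere (e.g. one curlpp::Cleanup).
#include <curlpp/cURLpp.hpp>
#include <curlpp/Easy.hpp>
#include <curlpp/Options.hpp>
#include <curlpp/Exception.hpp>

#include <iostream>
#include <sstream>

void ThreadFunctionReuse(int requestsPerThread)
{
   try
   {
      curlpp::Easy request;
      request.setOpt(new curlpp::options::Url("https://www.google.com/"));

      for (int i = 0; i < requestsPerThread; i++)
      {
         std::ostringstream os;
         request.setOpt(new curlpp::options::WriteStream(&os));
         request.perform();   // same handle (and hopefully connection) reused
      }
   }
   catch (curlpp::RuntimeError &e)
   {
      std::cout << e.what() << std::endl;
   }
   catch (curlpp::LogicError &e)
   {
      std::cout << e.what() << std::endl;
   }
}

If the handle really keeps its connection alive between perform() calls, this would at least cut down on how many ports get opened and closed per loop.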

I must admit my networking knowledge is too limited to pinpoint the cause here, or to find a solution for that matter. What is the best way to approach a task like this? Any help is appreciated.

JesseB1234
  • Your DNS limits requests? What if you call `gethostbyname` instead of curl in the threads? – KamilCuk Oct 12 '18 at 08:18
  • Can you print `h_errno` and `errno`? –  Oct 12 '18 at 12:54
  • I'd guess, without knowing anything about curlpp, that you have a problem with linger... –  Oct 12 '18 at 13:10
  • I got your code to work, but I only get the website's content for a couple of iterations and then I get `302 Moved`... I do not really understand what you are trying to accomplish. By the way, you only need to call `cURLpp::Cleanup myCleanup;` once, so please move it to your main (see the sketch after these comments). It calls `cURLpp::initialize();` in its constructor and `cURLpp::terminate()` in its destructor, and they should only be called once according to the guide on [github](https://github.com/jpbarrette/curlpp/blob/master/doc/guide.pdf), chapter 3. – krjw Oct 15 '18 at 12:04
  • What do you mean, it takes too long? For me cURLpp is really fast. Maybe the server you are trying to query is slow? Not Google, but you mentioned you use a different URL. – krjw Oct 15 '18 at 12:07
  • First of all, you don't have theories, you have hypotheses – Jules Oct 16 '18 at 13:57
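
To illustrate krjw's comment above: a minimal sketch of the suggested restructuring, with the single cURLpp::Cleanup object living in main so that cURLpp::initialize() and cURLpp::terminate() run exactly once for the whole program. The body of ThreadFunction is assumed to stay as in the question, minus its own Cleanup.

#include <curlpp/cURLpp.hpp>
#include <thread>

void ThreadFunction();   // as in the question, but WITHOUT its own curlpp::Cleanup

int main(void)
{
   // One Cleanup for the whole program: initialize() in its constructor,
   // terminate() in its destructor, each called exactly once.
   curlpp::Cleanup myCleanup;

   std::thread threads[10];

   for (int loops = 0; loops < 3000; loops++)
   {
      for (int i = 0; i < 10; i++)
         threads[i] = std::thread(ThreadFunction);

      for (int i = 0; i < 10; i++)
         threads[i].join();
   }

   return 0;
}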

0 Answers