I have the Apache JMeter tool on my PC and I am trying to stress-test my app deployed on GKE.

When sending requests concurrently or sequentially, I am noticing some weird behavior.

For example, if I make 100 requests against one of the endpoints, most of the requests succeed, but 3 or 4 out of the 100 may return the following message:

org.apache.http.conn.HttpHostConnectException: Connect to 35.246.108.130:31225 [/35.246.108.130] failed: Connection refused: connect
    at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156)
    at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl$JMeterDefaultHttpClientConnectionOperator.connect(HTTPHC4Impl.java:404)
    at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376)
    at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393)
    at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
    at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
    at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
    at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
    at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
    at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.executeRequest(HTTPHC4Impl.java:935)
    at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.sample(HTTPHC4Impl.java:646)
    at org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy.sample(HTTPSamplerProxy.java:66)
    at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1296)
    at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1285)
    at org.apache.jmeter.threads.JMeterThread.doSampling(JMeterThread.java:638)
    at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:558)
    at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:489)
    at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:256)
    at java.lang.Thread.run(Unknown Source)
Caused by: java.net.ConnectException: Connection refused: connect
    at java.net.DualStackPlainSocketImpl.connect0(Native Method)
    at java.net.DualStackPlainSocketImpl.socketConnect(Unknown Source)
    at java.net.AbstractPlainSocketImpl.doConnect(Unknown Source)
    at java.net.AbstractPlainSocketImpl.connectToAddress(Unknown Source)
    at java.net.AbstractPlainSocketImpl.connect(Unknown Source)
    at java.net.PlainSocketImpl.connect(Unknown Source)
    at java.net.SocksSocketImpl.connect(Unknown Source)
    at java.net.Socket.connect(Unknown Source)
    at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75)
    at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
    ... 19 more

Also, the number of requests returning errors may differ (smaller or bigger) when the tests are run from a different host machine.

Any ideas about what may cause this problem?

1 Answer

Knowing that you are performing a stress test, a possible explanation is that your application crashed at that particular moment and restarted. A 'Connection refused' error is usually thrown when no application is listening for incoming requests on the target port.
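A quick way to sanity-check that theory is to look at the pod's restart count and last termination state. This is a minimal sketch, assuming you have kubectl access to the cluster; my-app-pod is a hypothetical pod name:

    # Watch the RESTARTS column; a non-zero count means the
    # container crashed (or was killed) and was restarted.
    kubectl get pods -o wide

    # Shows the last termination state of the hypothetical pod,
    # e.g. an OOMKilled reason or a non-zero exit code.
    kubectl describe pod my-app-pod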

More details in this Stack Overflow thread.

Please check whether any APM software is collecting your application logs. You will get more hints about what happened at that moment.
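If no APM is in place, the raw container logs and cluster events are still a good starting point. Another sketch, again assuming kubectl access and the hypothetical pod name my-app-pod:

    # Stream the application logs while the JMeter test is running.
    kubectl logs -f my-app-pod

    # Fetch logs from the previous container instance, if it crashed.
    kubectl logs my-app-pod --previous

    # Cluster events record OOM kills, failed probes and restarts.
    kubectl get events --sort-by=.metadata.creationTimestamp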

Rahul Jadhav
  • Thank you so much for your answer! The problem is that when I run 100 requests sequentially, the Apache server answers most of them and refuses 3 or 4 of them, seemingly at random. If the server were unavailable, I think it would never answer at all. Also, it seems that if I rapidly increase the concurrency of the requests, it is not able to answer any of them successfully. Might it need more resources? – Kostas Tsakos May 20 '21 at 12:24
  • The Apache server (and most other web servers) is resilient, doing an automatic restart in the event of a crash. Failing under increased concurrency is an indication that the resources are not enough to fulfill incoming requests, but that may not be the only reason. You might be encountering an 'Out of Memory' exception, which can be a code issue. Check the system memory usage pattern during the test, and also the CPU (see the sketch after these comments). These are good hints on what is really happening on the server. – Rahul Jadhav May 20 '21 at 12:46
  • Yes, I have already done that. It seems that it doesn't hit any resource shortage. Also, it seems that the Apache k8s pod hasn't been restarted. – Kostas Tsakos May 20 '21 at 12:54
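For the resource check discussed in the comments above, a minimal sketch; it assumes the metrics-server addon is available, which GKE clusters typically have enabled:

    # CPU and memory usage per pod, sampled during the load test.
    kubectl top pods

    # Node-level usage, to rule out pressure on the node itself.
    kubectl top nodes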