
We recently upgraded to Play 2.7.0 and use play2-reactivemongo 0.16.2 with reactivemongo 0.16.3. We use reactivemongo-shaded-native, but have also tried without it.
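For reference, the relevant parts of our build.sbt look roughly like this (a sketch; the -play27 and -linux-x86-64 version suffixes follow the modules' published naming and may need adjusting for your platform):

    // build.sbt (sketch)
    libraryDependencies ++= Seq(
      "org.reactivemongo" %% "play2-reactivemongo" % "0.16.2-play27",
      // the shaded Netty native transport we also experimented with:
      "org.reactivemongo" % "reactivemongo-shaded-native" % "0.16.3-linux-x86-64"
    )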

We are connecting to a replica set with 3 nodes, MongoDB 3.6.10 on MongoDB Atlas.

The initial connection is fine and the service runs OK for a while.

But eventually we hit this error:

[error] 2019-02-15 09:40:30,466 r.api.Failover2 - [Supervisor-1/Connection-2] Got an error, no more attempts to do. Completing with a failure...

 reactivemongo.core.actors.Exceptions$PrimaryUnavailableException: MongoError['No primary node is available! (Supervisor-1/Connection-2)']
 Caused by: reactivemongo.core.actors.Exceptions$InternalState: null
    at reactivemongo.ConnectAll$IsMaster(400, {{NodeSet Some(PlayappDevelopment-shard-0) Node[playappdevelopment-shard-00-00-tedsb.mongodb.net:27017: Unknown (20/20 available connections), latency=9223372036854775807, authenticated={Authenticated(admin,playapp_rw)}] | Node[playappdevelopment-shard-00-01-tedsb.mongodb.net:27017: Secondary (20/20 available connections), latency=135, authenticated={Authenticated(admin,playapp_rw)}] | Node[playappdevelopment-shard-00-02-tedsb.mongodb.net:27017: Secondary (20/20 available connections), latency=140, authenticated={Authenticated(admin,playapp_rw)}] }})(<time:1550223615628>)
    at reactivemongo.IsMaster(400, {{NodeSet Some(PlayappDevelopment-shard-0) Node[playappdevelopment-shard-00-00-tedsb.mongodb.net:27017: Unknown (20/20 available connections), latency=9223372036854775807, authenticated={Authenticated(admin,playapp_rw)}] | Node[playappdevelopment-shard-00-01-tedsb.mongodb.net:27017: Secondary (20/20 available connections), latency=135, authenticated={Authenticated(admin,playapp_rw)}] | Node[playappdevelopment-shard-00-02-tedsb.mongodb.net:27017: Secondary (20/20 available connections), latency=21, authenticated={Authenticated(admin,playapp_rw)}] }})(<time:1550223615626>)
    at reactivemongo.ConnectAll$IsMaster(399, {{NodeSet Some(PlayappDevelopment-shard-0) Node[playappdevelopment-shard-00-00-tedsb.mongodb.net:27017: Unknown (20/20 available connections), latency=9223372036854775807, authenticated={}] | Node[playappdevelopment-shard-00-01-tedsb.mongodb.net:27017: Secondary (20/20 available connections), latency=135, authenticated={Authenticated(admin,playapp_rw)}] | Node[playappdevelopment-shard-00-02-tedsb.mongodb.net:27017: Secondary (20/20 available connections), latency=21, authenticated={Authenticated(admin,playapp_rw)}] }})(<time:1550223615623>)
    at reactivemongo.IsMaster(399, {{NodeSet Some(PlayappDevelopment-shard-0) Node[playappdevelopment-shard-00-00-tedsb.mongodb.net:27017: Unknown (20/20 available connections), latency=9223372036854775807, authenticated={}] | Node[playappdevelopment-shard-00-01-tedsb.mongodb.net:27017: Secondary (20/20 available connections), latency=17, authenticated={Authenticated(admin,playapp_rw)}] | Node[playappdevelopment-shard-00-02-tedsb.mongodb.net:27017: Secondary (20/20 available connections), latency=21, authenticated={Authenticated(admin,playapp_rw)}] }})(<time:1550223615622>)
    at reactivemongo.RefreshAll({{NodeSet Some(PlayappDevelopment-shard-0) Node[playappdevelopment-shard-00-00-tedsb.mongodb.net:27017: Primary (20/20 available connections), latency=30591, authenticated={Authenticated(admin,playapp_rw)}] | Node[playappdevelopment-shard-00-01-tedsb.mongodb.net:27017: Secondary (20/20 available connections), latency=17, authenticated={Authenticated(admin,playapp_rw)}] | Node[playappdevelopment-shard-00-02-tedsb.mongodb.net:27017: Secondary (20/20 available connections), latency=21, authenticated={Authenticated(admin,playapp_rw)}] }})(<time:1550223615486>)
    at reactivemongo.PrimaryUnavailable(<time:1550223615485>)
    at reactivemongo.ConnectAll({{NodeSet Some(PlayappDevelopment-shard-0) Node[playappdevelopment-shard-00-00-tedsb.mongodb.net:27017: Primary (20/20 available connections), latency=30591, authenticated={Authenticated(admin,playapp_rw)}] | Node[playappdevelopment-shard-00-01-tedsb.mongodb.net:27017: Secondary (20/20 available connections), latency=17, authenticated={Authenticated(admin,playapp_rw)}] | Node[playappdevelopment-shard-00-02-tedsb.mongodb.net:27017: Secondary (20/20 available connections), latency=21, authenticated={Authenticated(admin,playapp_rw)}] }})(<time:1550223615475>)
    at reactivemongo.ConnectAll$IsMaster(397, {{NodeSet Some(PlayappDevelopment-shard-0) Node[playappdevelopment-shard-00-00-tedsb.mongodb.net:27017: Primary (20/20 available connections), latency=30591, authenticated={Authenticated(admin,playapp_rw)}] | Node[playappdevelopment-shard-00-01-tedsb.mongodb.net:27017: Secondary (20/20 available connections), latency=17, authenticated={Authenticated(admin,playapp_rw)}] | Node[playappdevelopment-shard-00-02-tedsb.mongodb.net:27017: Secondary (20/20 available connections), latency=21, authenticated={Authenticated(admin,playapp_rw)}] }})(<time:1550223605498>)
    at reactivemongo.IsMaster(397, {{NodeSet Some(PlayappDevelopment-shard-0) Node[playappdevelopment-shard-00-00-tedsb.mongodb.net:27017: Primary (20/20 available connections), latency=30591, authenticated={Authenticated(admin,playapp_rw)}] | Node[playappdevelopment-shard-00-01-tedsb.mongodb.net:27017: Secondary (20/20 available connections), latency=17, authenticated={Authenticated(admin,playapp_rw)}] | Node[playappdevelopment-shard-00-02-tedsb.mongodb.net:27017: Secondary (20/20 available connections), latency=15, authenticated={Authenticated(admin,playapp_rw)}] }})(<time:1550223605498>)
    at reactivemongo.ConnectAll$IsMaster(396, {{NodeSet Some(PlayappDevelopment-shard-0) Node[playappdevelopment-shard-00-00-tedsb.mongodb.net:27017: Primary (20/20 available connections), latency=30591, authenticated={Authenticated(admin,playapp_rw)}] | Node[playappdevelopment-shard-00-01-tedsb.mongodb.net:27017: Secondary (20/20 available connections), latency=17, authenticated={Authenticated(admin,playapp_rw)}] | Node[playappdevelopment-shard-00-02-tedsb.mongodb.net:27017: Secondary (20/20 available connections), latency=15, authenticated={Authenticated(admin,playapp_rw)}] }})(<time:1550223605493>)

and this:

[error] 2019-02-15 09:40:25,157 r.c.a.MongoDBSystem - [Supervisor-1/Connection-2] Oups. 80193 not found! complete message is Response(MessageHeader(16775938,220053,80193,1), Reply(8,0,0,1), ResponseInfo(b0706d7f))
[error] 2019-02-15 09:40:25,720 r.c.a.MongoDBSystem - [Supervisor-1/Connection-2] Oups. 79580 not found! complete message is Response(MessageHeader(16776408,220065,79580,1), Reply(8,0,0,1), ResponseInfo(5c65523c))
[error] 2019-02-15 09:40:26,616 r.c.a.MongoDBSystem - [Supervisor-1/Connection-2] Oups. 80346 not found! complete message is Response(MessageHeader(16776173,220060,80346,1), Reply(8,0,0,1), ResponseInfo(5613ea02))
[error] 2019-02-15 09:40:26,702 r.c.a.MongoDBSystem - [Supervisor-1/Connection-2] Oups. 80457 not found! complete message is Response(MessageHeader(16776643,220079,80457,1), Reply(8,0,0,1), ResponseInfo(2fa23bb9))
[error] 2019-02-15 09:40:29,612 r.c.a.MongoDBSystem - [Supervisor-1/Connection-2] Oups. 80174 not found! complete message is Response(MessageHeader(16776878,220102,80174,1), Reply(8,0,0,1), ResponseInfo(4faa6eec))
[error] 2019-02-15 09:40:30,257 r.c.a.MongoDBSystem - [Supervisor-1/Connection-2] Oups. 80191 not found! complete message is Response(MessageHeader(16776643,220111,80191,1), Reply(8,0,0,1), ResponseInfo(1fb6b5e0))

It seems the application is not able to recover (completely): only about 1 in 20 requests succeeds.

2019-02-15 09:59:45,476 r.c.a.MongoDBSystem - [Supervisor-1/Connection-2] Node[playappdevelopment-shard-00-00-tedsb.mongodb.net:27017: \
  Unknown (20/20 available connections), latency=9223372036854775807, authenticated={Authenticated(admin,playapp_rw)}] \
  hasn't answered in time to last ping! Please check its connectivity

There is a slew of "No channel for request" messages:

2019-03-05 15:02:58,433 r.c.a.MongoDBSystem - [Supervisor-1/Connection-2] No channel for request: Request(48879, 0, Query(0,assets.$cmd,0,1), Primary, None)
2019-03-05 15:02:58,580 r.c.a.MongoDBSystem - [Supervisor-1/Connection-2] No channel for request: Request(48880, 0, Query(0,assets.$cmd,0,1), Primary, None)
2019-03-05 15:02:58,678 r.c.a.MongoDBSystem - [Supervisor-1/Connection-2] No channel for request: Request(48881, 0, Query(0,assets.$cmd,0,1), Primary, None)
2019-03-05 15:02:58,777 r.c.a.MongoDBSystem - [Supervisor-1/Connection-2] No channel for request: Request(48882, 0, Query(0,assets.$cmd,0,1), Primary, None)
2019-03-05 15:02:58,874 r.c.a.MongoDBSystem - [Supervisor-1/Connection-2] No channel for request: Request(48883, 0, Query(0,assets.$cmd,0,1), Primary, None)
2019-03-05 15:02:58,972 r.c.a.MongoDBSystem - [Supervisor-1/Connection-2] No channel for request: Request(48884, 0, Query(0,assets.$cmd,0,1), Primary, None)
2019-03-05 15:02:59,069 r.c.a.MongoDBSystem - [Supervisor-1/Connection-2] No channel for request: Request(48885, 0, Query(0,assets.$cmd,0,1), Primary, None)
2019-03-05 15:02:59,220 r.c.a.MongoDBSystem - [Supervisor-1/Connection-2] No channel for request: Request(48886, 0, Query(0,assets.$cmd,0,1), Primary, None)
2019-03-05 15:02:59,318 r.c.a.MongoDBSystem - [Supervisor-1/Connection-2] No channel for request: Request(48887, 0, Query(0,assets.$cmd,0,1), Primary, None)
2019-03-05 15:02:59,416 r.c.a.MongoDBSystem - [Supervisor-1/Connection-2] No channel for request: Request(48888, 0, Query(0,assets.$cmd,0,1), Primary, None)
2019-03-05 15:02:59,514 r.c.a.MongoDBSystem - [Supervisor-1/Connection-2] No channel for request: Request(48889, 0, Query(0,assets.$cmd,0,1), Primary, None)
2019-03-05 15:02:59,611 r.c.a.MongoDBSystem - [Supervisor-1/Connection-2] No channel for request: Request(48890, 0, Query(0,assets.$cmd,0,1), Primary, None)

This seems to be related to a high number of concurrent requests hitting the Play HTTP server, which probably saturates all available channels to the primary node, causing the isMaster refresh of the primary status to fail.

Apparently, we do not have this problem when connecting to a local replica set with only a single node.


The question is, how do I fix this?

Can anyone make sense of the provided log messages?

Is my analysis correct, i.e. can it happen that ReactiveMongo is unable to query the primary status because the channels are simply "busy"?


Update: we switched from ReactiveMongo (which had served us quite well over the past four years) to the official mongo-scala-driver. The migration was really painful, and still is (mainly because of the Registry concept, which offers no compile-time type safety and passes Class[T]s around, and so on), but we no longer see the connection problems.
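To illustrate that pain point: with the official driver, every case class has to be registered in a codec registry up front, roughly like this (a sketch with a hypothetical Asset case class; a missing registration only fails at runtime):

    import org.mongodb.scala.{MongoClient, MongoCollection}
    import org.mongodb.scala.bson.codecs.Macros._
    import org.bson.codecs.configuration.CodecRegistries.{fromProviders, fromRegistries}

    // Hypothetical document type, for illustration only.
    case class Asset(_id: String, name: String)

    // Each case class needs a provider derived from its Class[T];
    // nothing is checked at compile time.
    val registry = fromRegistries(
      fromProviders(classOf[Asset]),
      MongoClient.DEFAULT_CODEC_REGISTRY
    )

    val client = MongoClient("mongodb://localhost:27017")
    val assets: MongoCollection[Asset] =
      client.getDatabase("mydb")
        .withCodecRegistry(registry)
        .getCollection[Asset]("assets")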

  • Try release 0.16.6 (btw 200 channels per node seems "a lot" to me) – cchantep May 09 '19 at 07:36
  • @cchantep Thanks for your reply! Unfortunately, it is too late now, we already switched to the official driver and don't want to roll back (except using the reactivemongo-bson library maybe). 200 connections was just a desperate option for us, but currently w/ the official driver we have around 500 active connections on prod (2000 max for our cluster on MongoDB Atlas) after running for a while (connections are pooled but not proactively created unless demanded). For comparison, on dev we only have ~60 connections. – cbley May 14 '19 at 06:40

5 Answers


It works for us after upgrading ReactiveMongo to 0.17.1.
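(For the Play module that would be something like the following in build.sbt; the -play27 suffix is an assumption for the Play 2.7 flavor of play2-reactivemongo.)

    libraryDependencies += "org.reactivemongo" %% "play2-reactivemongo" % "0.17.1-play27"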

angelokh

It seems you have reached the limit of available connections: have you tried tuning the connection parameters (http://reactivemongo.org/releases/0.11/documentation/tutorial/connect-database.html)? Atlas usually supports more than 20 connections (https://docs.atlas.mongodb.com/connection-limits/).
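With ReactiveMongo those parameters go on the connection URI (or on mongodb.uri in the Play config); a sketch with placeholder host/credentials and illustrative values:

    import reactivemongo.api.MongoDriver

    val driver = new MongoDriver()

    // Placeholder host/credentials; rm.nbChannelsPerNode raises the
    // per-node channel pool, connectTimeoutMS the TCP connect timeout.
    val uri = "mongodb://user:pass@host-00:27017,host-01:27017,host-02:27017/mydb" +
      "?rm.nbChannelsPerNode=50&connectTimeoutMS=30000&authSource=admin"

    val connection = driver.connection(uri) // Try[MongoConnection]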

I solved a similar issue with the native Mongo Scala driver by setting a pool of 50 connections, a longer wait queue, and a longer wait time (due to the latency of connections to Atlas).
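For the official driver, that configuration looks roughly like this (a sketch; the connection string is a placeholder and the values should be adapted to your Atlas tier):

    import java.util.concurrent.TimeUnit
    import com.mongodb.ConnectionString
    import org.mongodb.scala.{MongoClient, MongoClientSettings}

    val settings = MongoClientSettings.builder()
      .applyConnectionString(
        new ConnectionString("mongodb+srv://user:pass@cluster0.example.mongodb.net/mydb"))
      .applyToConnectionPoolSettings(b => b
        .maxSize(50)                         // pool of 50 connections
        .maxWaitTime(30, TimeUnit.SECONDS))  // longer wait for Atlas latency
      .build()

    val client = MongoClient(settings)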

il.bert
  • We are running with 200 connections currently. The problem seems to remain (will have to investigate the logs). I'll try to configure the wait times as you suggest. – cbley Mar 07 '19 at 07:43
  • That did not solve it for me, I suspect there is indeed a bug in reactivemongo, as it is not able to query the primary node even if there is no load on the server with 200 steady connections to the nodes each. – cbley Mar 14 '19 at 11:13

Have you tried: http://reactivemongo.org/releases/0.1x/documentation/tutorial/connect-database.html#netty-native

We had a similar error and a colleague suggested the above. It seems to be working for us.

Ian McShane

Do ensure that your user has access to the database you are trying to connect to. I had the same problem: I was testing with a user that belonged to a different database. Once I created a user for my database, it worked fine.

Ashwani Garg
  • That clearly was not the problem here, since the initial connection was working fine; it only started to go awry after a few minutes. – cbley Nov 11 '19 at 09:39

I struggled with this same issue for a while. If you are using an M0, M2, or M5 instance, one possible cause is that the server bandwidth is extremely limited. It is very easy to issue queries that take a very long time on those instances.

It may be possible to increase timeout settings in your configuration so that ReactiveMongo doesn't give up on connections that don't return. In my case I needed to make that timeout at least 30 seconds. One way to diagnose this is to look at the metrics in your MongoDB Atlas console and see whether the network bandwidth is pinned (a flat line) during a period when you have problems. If it is, that is most likely your issue.
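With ReactiveMongo, one way to express that is the rm.failover option on the connection URI (a sketch; the <initial delay>:<retries>x<factor> syntax is per the ReactiveMongo failover documentation, and the values here are illustrative rather than my exact settings):

    // Keep retrying for roughly 30 seconds before Failover2 gives up,
    // instead of the default strategy (host/credentials are placeholders).
    val uri = "mongodb://user:pass@host:27017/mydb?rm.failover=1s:15x1.5"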

I upgraded my MongoDB Atlas server to an M10 and have not had a single problem since.

Dave DeCaprio
  • Actually, we had M10 instances to begin with (M30 on prod). That didn't make a difference at all for us. But thanks for sharing! – cbley Feb 11 '21 at 13:40
  • One update - I have seen this with the M10 instance when the primary node changes. It doesn't always happen, but whenever I get that error I can see in Atlas that the primary node was updated. – Dave DeCaprio Mar 05 '21 at 12:06