
I have a Play Framework application and want to connect simultaneously to Postgres with Slick and to Apache Phoenix over plain JDBC.

The connection to Postgres works fine, but I cannot connect to Phoenix from Play.

I tested the connection to Phoenix in a standalone Scala application, without Play, and that worked great.

Here is a code snippet from the standalone application:

import java.sql._

object TestPhoenix extends App {

  // ZooKeeper quorum plus the /hbase root znode
  val connectionString = "jdbc:phoenix:srv1,srv2,srv3,srv4:/hbase"

  val request = "select * from MY_TABLE limit 10"

  // Register the Phoenix JDBC driver
  Class.forName("org.apache.phoenix.jdbc.PhoenixDriver")

  val conn = DriverManager.getConnection(connectionString)
  try {
    val stmt = conn.prepareStatement(request)
    val rs = stmt.executeQuery()
    while (rs.next()) {
      println(rs.getBytes(1))
    }
  } finally conn.close() // release the Phoenix/ZooKeeper session
}

I tried to use the same code in a controller in the Play application, but it doesn't work there. Getting a connection through Slick fails as well:

import java.sql.Connection
import scala.concurrent.ExecutionContext
import slick.jdbc.JdbcBackend
import slick.jdbc.JdbcBackend.Database

class DbRequester(connectionString: String, request: String)(implicit val ec: ExecutionContext) {

  // Register the Phoenix JDBC driver
  Class.forName("org.apache.phoenix.jdbc.PhoenixDriver")

  // Profile-less Slick database, since Slick has no Phoenix profile
  val db: JdbcBackend.DatabaseDef =
    Database.forURL(connectionString, driver = "org.apache.phoenix.jdbc.PhoenixDriver")

  val conn: Connection = db.source.createConnection()

  val stmt = conn.prepareStatement(request)

  def sendRequest() = stmt.executeQuery()
}
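One detail I am not sure about (a guess, not a confirmed fix): the class above creates the connection and prepared statement eagerly in the constructor and never closes them, so every Play dev-mode reload could leave a ZooKeeper session dangling. A variant that opens and closes everything per request would look like this (hypothetical sketch, names chosen for illustration):

```scala
import java.sql.DriverManager
import scala.concurrent.{ExecutionContext, Future}

// Sketch: acquire and release the connection inside each request,
// so nothing outlives a dev-mode reload.
class PerRequestDbRequester(connectionString: String)(implicit ec: ExecutionContext) {

  def sendRequest(request: String): Future[Vector[String]] = Future {
    val conn = DriverManager.getConnection(connectionString)
    try {
      val stmt = conn.prepareStatement(request)
      try {
        val rs = stmt.executeQuery()
        val rows = Vector.newBuilder[String]
        while (rs.next()) rows += rs.getString(1) // first column only, as a String
        rows.result()
      } finally stmt.close()
    } finally conn.close()
  }
}
```

I have not verified that this changes the Phoenix behaviour, but at least it rules out leaked sessions as a cause.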

Here is the stack trace:

2018-01-17 08:26:38.690 [error] - org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher - hconnection-0x284bba4a-0x2009aa21ee3093c, quorum=srv1:2181,srv2:2181,srv3:2181,srv4:2181, baseZNode=/hbase Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase/table/MY_TABLE
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:354)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:624)
    at org.apache.hadoop.hbase.zookeeper.ZKTableStateClientSideReader.getTableState(ZKTableStateClientSideReader.java:185)
    at org.apache.hadoop.hbase.zookeeper.ZKTableStateClientSideReader.isDisabledTable(ZKTableStateClientSideReader.java:59)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.isTableOnlineState(ZooKeeperRegistry.java:127)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isTableDisabled(ConnectionManager.java:981)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.relocateRegion(ConnectionManager.java:1150)
    at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:303)
    at org.apache.hadoop.hbase.client.ScannerCallable.prepare(ScannerCallable.java:154)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.prepare(ScannerCallableWithReplicas.java:376)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:135)
    at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[INFO] [01/17/2018 08:26:37.033] [sbt-web-scheduler-1] [akka.actor.ActorSystemImpl(sbt-web)] starting new LARS thread
[ERROR] [SECURITY][01/17/2018 08:26:38.684] [sbt-web-scheduler-1] [akka.actor.ActorSystemImpl(sbt-web)] Uncaught error from thread [sbt-web-scheduler-1]: GC overhead limit exceeded, shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[sbt-web]
java.lang.OutOfMemoryError: GC overhead limit exceeded

[INFO] [01/17/2018 08:26:38.690] [Thread-2] [CoordinatedShutdown(akka://sbt-web)] Starting coordinated shutdown from JVM shutdown hook
[ERROR] [01/17/2018 08:26:38.684] [play-dev-mode-scheduler-1] [akka.actor.ActorSystemImpl(play-dev-mode)] exception on LARS’ timer thread
java.lang.OutOfMemoryError: GC overhead limit exceeded

2018-01-17 08:26:40.607 [error] - akka.actor.ActorSystemImpl - exception on LARS’ timer thread
java.lang.OutOfMemoryError: GC overhead limit exceeded
Uncaught error from thread [play-actors-scheduler-1]: GC overhead limit exceeded, shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for for ActorSystem[play-actors]
java.lang.OutOfMemoryError: GC overhead limit exceeded
Uncaught error from thread [play-dev-mode-scheduler-1]: GC overhead limit exceeded, shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for for ActorSystem[play-dev-mode]
java.lang.OutOfMemoryError: GC overhead limit exceeded
2018-01-17 08:26:46.072 [error] - akka.actor.ActorSystemImpl - Uncaught error from thread [play-actors-scheduler-1]: GC overhead limit exceeded, shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[play-actors]
java.lang.OutOfMemoryError: GC overhead limit exceeded
[INFO] [01/17/2018 08:26:40.601] [play-dev-mode-scheduler-1] [akka.actor.ActorSystemImpl(play-dev-mode)] starting new LARS thread
[ERROR] [SECURITY][01/17/2018 08:26:46.073] [play-dev-mode-scheduler-1] [akka.actor.ActorSystemImpl(play-dev-mode)] Uncaught error from thread [play-dev-mode-scheduler-1]: GC overhead limit exceeded, shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[play-dev-mode]
java.lang.OutOfMemoryError: GC overhead limit exceeded

2018-01-17 08:26:48.784 [warn] - com.zaxxer.hikari.pool.HikariPool - db - Thread starvation or clock leap detected (housekeeper delta=58s114ms).
[WARN] [01/17/2018 08:26:58.809] [play-dev-mode-shutdown-hook-1] [CoordinatedShutdown(akka://play-dev-mode)] CoordinatedShutdown from JVM shutdown failed: Futures timed out after [10000 milliseconds]
[WARN] [01/17/2018 08:26:58.809] [Thread-2] [CoordinatedShutdown(akka://sbt-web)] CoordinatedShutdown from JVM shutdown failed: Futures timed out after [10000 milliseconds]

Process finished with exit code 255
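(Aside: the repeated `GC overhead limit exceeded` errors may just mean the sbt/dev-mode JVM runs out of heap once the Phoenix fat client is on the classpath. Following the sbt setup notes, the heap can be raised via a `.sbtopts` file in the project root — the sizes below are guesses, and I have not confirmed this fixes the Phoenix behaviour:)

```
# .sbtopts (project root) — one JVM option per line
-J-Xmx4G
-J-Xss4M
```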

I also tried adding the jdbc dependency to my build.sbt, but that fails for the reason described here: https://www.playframework.com/documentation/2.6.x/PlaySlickFAQ#A-binding-to-play.api.db.DBApi-was-already-configured

And there is no way to create a connection with the standard Slick DatabaseConfig either, because Slick doesn't provide a profile for Apache Phoenix.
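One workaround I am considering (a sketch, not something I have verified against Phoenix): even without a profile, Slick 3.x can run raw JDBC through a profile-less `JdbcBackend.Database` using `SimpleDBIO`, which hands you the underlying `java.sql.Connection`. The URL and table name below are placeholders:

```scala
import scala.concurrent.Future
import slick.jdbc.JdbcBackend.Database
import slick.jdbc.SimpleDBIO

// Profile-less Slick database pointed at Phoenix
val db = Database.forURL(
  "jdbc:phoenix:srv1,srv2,srv3,srv4:/hbase",
  driver = "org.apache.phoenix.jdbc.PhoenixDriver"
)

// Raw JDBC wrapped in a Slick action; runs on Slick's own executor
def firstColumn(query: String): Future[Vector[String]] =
  db.run(SimpleDBIO { ctx =>
    val stmt = ctx.connection.prepareStatement(query)
    try {
      val rs = stmt.executeQuery()
      val rows = Vector.newBuilder[String]
      while (rs.next()) rows += rs.getString(1)
      rows.result()
    } finally stmt.close()
  })
```

That would at least keep connection pooling and execution inside Slick without needing a Phoenix profile.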

So, is there a way to connect to Phoenix while keeping Slick in my project?

Alex
StopKran
  • If this is running inside `sbt` you may want to follow the instructions here: http://www.scala-sbt.org/1.0/docs/Setup-Notes.html#JVM+heap%2C+permgen%2C+and+stack+sizes – Rich Dougherty Jan 17 '18 at 10:10

0 Answers