According to the docs, it is recommended to implement efficient serialization with Protobuf or similar for custom data types. However, I also noticed that the built-in data types (e.g., GCounter) extend ReplicatedDataSerialization (see code below), which the scaladoc describes as a "Marker trait for ReplicatedData serialized by akka.cluster.ddata.protobuf.ReplicatedDataSerializer."

I wonder whether I should implement my own serializer or simply reuse the one from Akka. What is the benefit of implementing my own? Since my custom data type (see code below) is really similar to a PNCounter, I feel the Akka one would work well for my case.
import akka.cluster.ddata.{GCounter, Key, ReplicatedData, ReplicatedDataSerialization, SelfUniqueAddress}

/**
 * Denotes a fraction whose numerator and denominator only ever grow.
 * Preferring such a custom ddata structure over two separate GCounters gives the best of both worlds:
 * it is as lightweight as a GCounter, yet both values can be updated/read together, as with a PNCounterMap.
 * Implementation-wise, it borrows heavily from PNCounter.
 */
case class FractionGCounter(
    private val numerator: GCounter = GCounter(),
    private val denominator: GCounter = GCounter()
) extends ReplicatedData
    with ReplicatedDataSerialization {

  type T = FractionGCounter

  def value: (BigInt, BigInt) = (numerator.value, denominator.value)

  def incrementNumerator(n: Int)(implicit node: SelfUniqueAddress): FractionGCounter =
    copy(numerator = numerator :+ n)

  def incrementDenominator(n: Int)(implicit node: SelfUniqueAddress): FractionGCounter =
    copy(denominator = denominator :+ n)

  override def merge(that: FractionGCounter): FractionGCounter =
    copy(
      numerator = this.numerator.merge(that.numerator),
      denominator = this.denominator.merge(that.denominator)
    )
}

final case class FractionGCounterKey(_id: String) extends Key[FractionGCounter](_id) with ReplicatedDataSerialization
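For context on the alternative I'm asking about: my understanding from the Akka serialization docs is that if I did write my own serializer, I would register it in application.conf roughly like this (the serializer class and binding name here are hypothetical, just to illustrate the wiring):

```hocon
akka {
  actor {
    serializers {
      # hypothetical custom serializer class for FractionGCounter
      fraction-gcounter = "com.example.FractionGCounterSerializer"
    }
    serialization-bindings {
      # bind the custom data type to that serializer
      "com.example.FractionGCounter" = fraction-gcounter
    }
  }
}
```

So the question is whether this extra wiring (plus the serializer implementation itself) buys anything over just mixing in ReplicatedDataSerialization, given how close FractionGCounter is to existing Akka types.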