
We use Nomad to deploy our applications, which provide gRPC endpoints, as tasks. The tasks are then registered with Consul via Nomad's service stanza.

Routing to our applications is handled by Envoy proxy. We run central Envoy instances load-balanced behind the IP 10.1.2.2.

The decision about which endpoint/task to route to is currently based on the host header, and every task is registered as a service under <$JOB>.our.cloud.
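
Roughly, the central Envoy route configuration has one virtual host per job, matched on the host header (a sketch with assumed names, in Envoy's YAML config format; our real config is generated automatically from Consul):

    route_config:
      virtual_hosts:
        # one virtual host per Nomad job, matched on the Host/:authority header
        - name: service_a
          domains: ["serviceA.our.cloud"]
          routes:
            - match: { prefix: "/" }
              route: { cluster: service_a }  # cluster backed by the Consul-registered tasks

This setup leads to two problems: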

  1. When accessing a service, its DNS name must resolve to the load balancer IP, which leads to /etc/hosts entries like

    10.1.2.2 serviceA.our.cloud serviceB.our.cloud serviceC.our.cloud
    

    This problem is partially mitigated by using dnsmasq (see the sketch after this list), but it is still a bit annoying when we add new services.

  2. It is not possible to have multiple services running at the same time that provide the same gRPC service. If we decide to test a new implementation of a service, for example, we need to run it in the same job under the same name, and all services defined in the gRPC service file need to be implemented.
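
For reference, the dnsmasq mitigation from problem 1 can be reduced to a single wildcard rule, so new services at least don't need individual entries (a sketch, assuming dnsmasq is the clients' resolver):

    # dnsmasq.conf: resolve our.cloud and all its subdomains to the Envoy load balancer
    address=/our.cloud/10.1.2.2

This helps with problem 1, but does nothing for problem 2.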

A possible solution we have been discussing is to use the service stanza's tags to declare the provided gRPC services, e.g.:

    service {
      tags = ["grpc-my.company.firstpackage/ServiceA", "grpc-my.company.secondpackage/ServiceB"]
    }

But this is not supported by Consul, whose documentation says:

Dots are not supported because Consul internally uses them to delimit service tags.

Now we are thinking about doing it with tags like grpc-my-company-firstpackage__ServiceA, ... This looks really ugly, though :-(
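
To make the encoding concrete, here is a sketch of it and its inverse (Go; the helper names are made up, and the scheme is only our own convention, not anything Consul or Nomad prescribe):

    package main

    import (
        "fmt"
        "strings"
    )

    // encodeTag turns a fully qualified gRPC service name into a Consul-safe
    // tag: dots become dashes, the package/service separator becomes "__".
    func encodeTag(grpcService string) string {
        pkg, svc, _ := strings.Cut(grpcService, "/")
        return "grpc-" + strings.ReplaceAll(pkg, ".", "-") + "__" + svc
    }

    // decodeTag reverses encodeTag. Note the scheme is lossy if a package
    // name itself contains dashes.
    func decodeTag(tag string) string {
        tag = strings.TrimPrefix(tag, "grpc-")
        pkg, svc, _ := strings.Cut(tag, "__")
        return strings.ReplaceAll(pkg, "-", ".") + "/" + svc
    }

    func main() {
        fmt.Println(encodeTag("my.company.firstpackage/ServiceA")) // grpc-my-company-firstpackage__ServiceA
        fmt.Println(decodeTag("grpc-my-company-firstpackage__ServiceA"))
    }

The lossiness around dashes is part of what makes it feel so fragile.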

So my questions are:

  • Has anyone ever done something like this?
  • If so, what are the recommendations for routing to gRPC services that are auto-discovered with Consul?
  • Does anyone have other ideas or insights into this?
  • How is this accomplished in, e.g., Istio?
– Dominik Sandjaja

2 Answers


I think this is a fully supported use case for Istio. Istio will help you with service discovery via Consul, and you can use route rules to specify which deployment provides the service. You can start exploring at https://istio.io/docs/tasks/traffic-management/
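
For example, in newer Istio versions the route rules are expressed as a VirtualService; a sketch with assumed names (the v1/v2 subsets would be defined in a matching DestinationRule):

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: service-a
    spec:
      hosts:
        - serviceA.our.cloud
      http:                      # gRPC is routed with HTTP/2 route rules
        - route:
            - destination:
                host: service-a
                subset: v1       # stable implementation
              weight: 90
            - destination:
                host: service-a
                subset: v2       # new implementation under test
              weight: 10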

– Joy Zhang

We do something similar to this, using our own product, Turbine Labs.

We're on a slightly different stack, but the idea is:

  • Pull service discovery information into our control plane. (We use Kubernetes but also support Consul.)
  • Organize this service discovery information by service and by version. We use the tbn_cluster, stage, and version tags (like here).

Since the version for us is the SHA of the release, we don't have formatting issues with it. The versions also don't have to be unique, because the tbn_cluster tag defines the first level of the hierarchy.
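
Mapped onto the Nomad/Consul setup from the question, the tagging could look something like this (a sketch; the key=value form is just a convention, Consul treats tags as opaque strings):

    service {
      name = "stats-service"
      port = "grpc"
      tags = [
        "tbn_cluster=stats_service",
        "stage=prod",
        "version=f4b8a12"  # SHA of the release (example value)
      ]
    }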

Once we have those, we use a UI / API to define all the routes (e.g. app.turbinelabs.io/stats -> stats_service). These rules include the tags, so when we deploy a new version (deploy != release), no traffic is routed to it. Releases are done by updating the rules.

(There are even some nice UI affordances for updating those rules for the common case of "release 10% of traffic to the new version," like a slider!)

Hopefully that helps! You might check out LearnEnvoy.io -- lots of tutorials and best practices on what works with Envoy. The articles on Service Discovery Integration and Incremental Blue/Green Releases may be helpful.

– TR.
  • That looks interesting, I will have an in-depth look later this week. The thing is that, so far, we don't have a fully functional control plane; the integration of Consul and Envoy is completely automated. But it looks like adding this controlling layer might be a good idea ... – Dominik Sandjaja Mar 27 '18 at 09:15