Traffic Director With TCP Services & TCP Route Using Envoy Proxies

Namrata D
4 min read · Jun 27, 2023


Traffic Director is Google Cloud’s fully managed application networking platform and service mesh. With it, we can deploy global load balancing across clusters and virtual machine (VM) instances in multiple regions, offload health checking from service proxies, and configure traffic control policies. In this post, we will set up Traffic Director with TCP services and a TCP route.

The Traffic Director configuration for TCP services and a TCP route is similar to the Envoy sidecar proxy configuration used for HTTP services. The difference is that the backend service provides a TCP service, and routing is based on TCP parameters.

Prerequisites:

  1. Enable billing.
  2. Enable the required APIs:

gcloud services enable \
osconfig.googleapis.com \
trafficdirector.googleapis.com \
compute.googleapis.com \
networkservices.googleapis.com

  3. Decide how you want to install Envoy.

  4. Grant the required permissions.

  5. If you are using Compute Engine, enable the Cloud DNS API and configure Cloud DNS.

  6. Ensure that the service account used by the Envoy proxies has sufficient permissions to access the Traffic Director API.

The next step involves setting up Envoy on VMs. On Compute Engine, we can automatically add Envoy to applications running on the VMs. We will use a VM template that installs Envoy, connects it to Traffic Director, and also configures the VM’s networking. Before we begin, we need to have the required permissions. It is strongly recommended to configure new Envoy deployments with xDS v3, or to migrate to xDS v3 if an existing deployment uses xDS v2. To grant the service account the required permissions, assign it the IAM Traffic Director Client role (roles/trafficdirector.client), which wraps them. We can use the following gcloud command for that purpose.

gcloud projects add-iam-policy-binding PROJECT \
--member serviceAccount:SERVICE_ACCOUNT_EMAIL \
--role=roles/trafficdirector.client
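To confirm the binding took effect, we can list the roles granted to the service account. This is a quick sketch; PROJECT and SERVICE_ACCOUNT_EMAIL are the same placeholders used above.

```shell
# List the roles bound to the service account; expect
# roles/trafficdirector.client to appear in the output.
gcloud projects get-iam-policy PROJECT \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:SERVICE_ACCOUNT_EMAIL" \
  --format="value(bindings.role)"
```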

How To Set Up TCP Services:

  1. First, create a file called mesh.yaml containing the Mesh resource specification.
name: sidecar-mesh

2. Use the mesh.yaml file to create the mesh resource.

gcloud network-services meshes import sidecar-mesh \
--source=mesh.yaml \
--location=global
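To verify that the mesh resource was created, we can describe it (an optional check, not part of the official steps):

```shell
# Confirm the Mesh resource exists and inspect its current state
gcloud network-services meshes describe sidecar-mesh \
  --location=global
```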

Configure the TCP server

For the purposes of this demo, we will create a backend service backed by a managed instance group of VMs that serve a test TCP service on port 80.

  1. Create a Compute Engine template with a service on port 80.
gcloud compute instance-templates create tcp-vm-template \
--scopes=https://www.googleapis.com/auth/cloud-platform \
--tags=allow-health-checks \
--image-family=debian-10 \
--image-project=debian-cloud \
--metadata=startup-script="#! /bin/bash
sudo apt-get update -y
sudo apt-get install netcat -y
while true;
do echo 'Hello from traffic Director' | nc -l -s 0.0.0.0 -p 80;
done &"

2. The next step is to create a managed instance group based on the template.

gcloud compute instance-groups managed create tcp-mig \
--zone=ZONE \
--size=1 \
--template=tcp-vm-template
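Before moving on, it can help to confirm that the group has actually provisioned its instance (a sanity check I like to add; not required by the setup):

```shell
# List the instances created by the managed instance group;
# expect one instance in status RUNNING.
gcloud compute instance-groups managed list-instances tcp-mig \
  --zone=ZONE
```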

3. Set the named ports on the created managed instance group to port 80.

gcloud compute instance-groups set-named-ports tcp-mig \
--zone=ZONE \
--named-ports=tcp:80

4. Create a health check for the instances.

gcloud compute health-checks create tcp tcp-health-check --port 80

5. Create a firewall rule to allow incoming health check connections to instances in your network.

gcloud compute firewall-rules create tcp-health-checks \
--network default \
--action allow \
--direction INGRESS \
--source-ranges=35.191.0.0/16,130.211.0.0/22 \
--target-tags allow-health-checks \
--rules tcp:80
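If health checks later report the backends as unhealthy, a common culprit is this firewall rule; it can be double-checked with:

```shell
# Verify the firewall rule allows TCP:80 ingress from the
# Google Cloud health-check ranges (35.191.0.0/16, 130.211.0.0/22)
gcloud compute firewall-rules describe tcp-health-checks
```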

6. The next step is to create a global backend service with a load-balancing scheme of type INTERNAL_SELF_MANAGED and attach the health check to it. Here we use the managed instance group that runs the sample TCP service we created earlier.

gcloud compute backend-services create tcp-service-mig \
--global \
--load-balancing-scheme=INTERNAL_SELF_MANAGED \
--protocol=TCP \
--health-checks tcp-health-check

7. Finally, we attach the managed instance group to the backend service.

gcloud compute backend-services add-backend tcp-service-mig \
--instance-group tcp-mig \
--instance-group-zone=ZONE \
--global
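At this point, it is worth checking that the backend reports healthy before wiring up routing; an unhealthy backend means Envoy will have nowhere to send traffic.

```shell
# Expect healthState: HEALTHY for the instance in tcp-mig
gcloud compute backend-services get-health tcp-service-mig \
  --global
```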

Setting up routing with TCP route

  1. In a file called tcp_route.yaml, create the TcpRoute specification.

name: tcp-route
meshes:
- projects/$PROJECT_NUMBER/locations/global/meshes/sidecar-mesh
rules:
- action:
    destinations:
    - serviceName: projects/$PROJECT_NUMBER/locations/global/backendServices/tcp-service-mig
  matches:
  - address: '10.0.0.1/32'
    port: '80'

2. Using the tcp_route.yaml specification, create the TcpRoute resource.

gcloud network-services tcp-routes import tcp-route \
--source=tcp_route.yaml \
--location=global
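As with the mesh, we can describe the imported route to confirm it is attached to sidecar-mesh and points at the right backend service:

```shell
# Inspect the TcpRoute; check the meshes and serviceName fields
gcloud network-services tcp-routes describe tcp-route \
  --location=global
```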

Create a TCP client with an Envoy sidecar to access the application

  1. Create an instance template, and then create a VM with an Envoy proxy that is connected to Traffic Director.

gcloud beta compute instance-templates create td-client-template \
--image-family=debian-10 \
--image-project=debian-cloud \
--service-proxy=enabled,mesh=sidecar-mesh \
--metadata=startup-script="#! /bin/bash
sudo apt-get update -y
sudo apt-get install netcat -y"


gcloud compute instances create td-vm-client \
--zone=ZONE \
--source-instance-template td-client-template

2. Log in to the VM that we created and verify connectivity to the test service using the netcat utility.

gcloud compute ssh td-vm-client
echo 'Hello from traffic Director' | nc 10.0.0.1 80

The test service should return the phrase Hello from traffic Director.
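If the call hangs or fails, one way to debug is to inspect the configuration the sidecar actually received from Traffic Director. This assumes the automated Envoy deployment exposes its admin interface on localhost:15000, the default admin port; adjust if your deployment differs.

```shell
# On td-vm-client: dump the first part of Envoy's dynamic config
# to check that the TCP listener and cluster were pushed down.
curl -s localhost:15000/config_dump | head -n 50
```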


Written by Namrata D

AWS Solutions Architect Associate, CKA, CKAD, CKS, Terraform & HashiCorp Vault Certified
