For the most part, Telepresence doesn't require any special
configuration in the cluster and can be used right away in any
cluster (as long as the user has adequate RBAC permissions
and the cluster's server version is
1.17.0 or higher).
However, some advanced features do require some configuration in the cluster.
For example, other applications in the cluster may expect to speak TLS to your intercepted application (perhaps you're using a service mesh that does mTLS). In order to use `--mechanism=http` (or any feature that implies `--mechanism=http`), you need to tell Telepresence about the TLS certificates in use. Do this by adjusting your workload's Pod template to set a couple of annotations on the intercepted Pods:
```diff
 spec:
   template:
     metadata:
       labels:
         service: your-service
+      annotations:
+        "getambassador.io/inject-terminating-tls-secret": "your-terminating-secret" # optional
+        "getambassador.io/inject-originating-tls-secret": "your-originating-secret" # optional
     spec:
+      serviceAccountName: "your-account-that-has-rbac-to-read-those-secrets"
       containers:
```
The `getambassador.io/inject-terminating-tls-secret` annotation (optional) names the Kubernetes Secret that contains the TLS server certificate to use for decrypting and responding to incoming requests.
When Telepresence modifies the Service and workload port definitions to point at the Telepresence Agent sidecar's port instead of your application's actual port, the sidecar will use this certificate to terminate TLS.
The `getambassador.io/inject-originating-tls-secret` annotation (optional) names the Kubernetes Secret that contains the TLS client certificate to use for communicating with your application.
You will need to set this if your application expects incoming requests to speak TLS (for example, your code expects to handle mTLS itself instead of letting a service-mesh sidecar handle mTLS for it, or the port definition that Telepresence modified pointed at the service-mesh sidecar instead of at your application).
If you do set this, you should set it to the same client certificate Secret that you configure the Ambassador Edge Stack to use for mTLS.
It is only possible to refer to a Secret that is in the same Namespace as the Pod.
The Pod will need to have permission to `watch` Secrets of `type: kubernetes.io/tls` and `type: istio.io/key-and-cert`, as well as any Secrets that it detects to be formatted as one of those types.
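As a sketch, a Role and RoleBinding granting that access might look like the following (the Role name and namespace are placeholders; the subject matches the `serviceAccountName` from the Pod template above):

```yaml
# Hypothetical Role granting the Pod's ServiceAccount read access to the
# TLS Secrets named in the annotations; adjust names to your environment.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tls-secret-reader        # placeholder name
  namespace: your-namespace      # must be the Pod's own namespace
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "watch"]
    resourceNames: ["your-terminating-secret", "your-originating-secret"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tls-secret-reader
  namespace: your-namespace
subjects:
  - kind: ServiceAccount
    name: your-account-that-has-rbac-to-read-those-secrets
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tls-secret-reader
```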
If your cluster is on an isolated network such that it cannot communicate with Ambassador Cloud, then some additional configuration is required to acquire a license key in order to use personal intercepts.
Go to the teams settings page in Ambassador Cloud and select Licenses for the team you want to create the license for.
Generate a new license (if one doesn't already exist) by clicking Generate New License.
You will be prompted for your Cluster ID. Ensure your kubeconfig context is using the cluster you want to create a license for, then run this command to generate the Cluster ID:

```shell
$ telepresence current-cluster-id
Cluster ID: <some UID>
```
Click Generate API Key to finish generating the license.
On the licenses page, download the license file associated with your cluster.
There are two separate ways you can add the license to your cluster: manually creating and deploying the license Secret, or having the Helm chart manage the Secret for you. You only need to do one of the two.
Use this command to generate a Kubernetes Secret config using the license file:

```shell
$ telepresence license -f <downloaded-license-file>
apiVersion: v1
data:
  hostDomain: <long_string>
  license: <longer_string>
kind: Secret
metadata:
  creationTimestamp: null
  name: systema-license
  namespace: ambassador
```
Save the output as a YAML file and apply it to your cluster with `kubectl apply -f <filename>`.
When deploying the `traffic-manager` chart, you must add the additional values when running `helm install` by putting the following into a file (for this example we'll assume it's called `license-values.yaml`):

```yaml
licenseKey:
  # This mounts the secret into the traffic-manager
  create: true
  secret:
    # This tells the helm chart not to create the secret since you've created it yourself
    create: false
```
Install the helm chart into the cluster:

```shell
helm install traffic-manager -n ambassador datawire/telepresence --create-namespace -f license-values.yaml
```
Ensure that you have the docker image for the Smart Agent (datawire/ambassador-telepresence-agent:1.11.0) pulled and in a registry your cluster can pull from.
Have users use the `images` config key so telepresence uses the aforementioned image for their agent.
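As a sketch, the client-side config might look like the following (the registry value is a placeholder for your mirror; consult the config reference for your Telepresence version for the exact key names):

```yaml
# ~/.config/telepresence/config.yml (client-side) -- a sketch, not a
# definitive config; key names can vary between Telepresence versions.
images:
  registry: your-private-registry.example.com/datawire  # placeholder mirror
  agentImage: ambassador-telepresence-agent:1.11.0
```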
Get the jwt token from the downloaded license file:

```shell
$ cat ~/Downloads/ambassador.License_for_yourcluster
eyJhbGnotarealtoken.butanexample
```
Create the following values file, substituting your real jwt token in for the one used in the example below (for this example we'll assume the following is placed in a file called `license-values.yaml`):

```yaml
licenseKey:
  # This mounts the secret into the traffic-manager
  create: true
  # This is the value from the license file you download. This value is an example and will not work
  value: eyJhbGnotarealtoken.butanexample
  secret:
    # This tells the helm chart to create the secret
    create: true
```
Install the helm chart into the cluster:

```shell
helm install traffic-manager charts/telepresence -n ambassador --create-namespace -f license-values.yaml
```
Users will now be able to use preview intercepts with the `--preview-url=false` flag. Even with the license key, preview URLs cannot be used without enabling direct communication with Ambassador Cloud, as Ambassador Cloud is essential to their operation.
If using Helm to install the server-side components, see the chart's README to learn how to configure the image registry and license secret.
Have clients use the `skipLogin` config key to ensure the CLI knows it is operating in an air-gapped environment.
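For example, the client config could include something like the following sketch (the placement of `skipLogin` under a `cloud` section is an assumption; check the config reference for your Telepresence version):

```yaml
# ~/.config/telepresence/config.yml (client-side) -- a sketch.
cloud:
  # Tell the CLI not to attempt a login to Ambassador Cloud.
  skipLogin: true
```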
By default, Telepresence updates the intercepted workload (Deployment, StatefulSet, ReplicaSet) template to add the Traffic Agent sidecar container and update the port definitions. If you use GitOps workflows (with tools like ArgoCD) to automatically update your cluster so that it reflects the desired state from an external Git repository, this behavior can make your workload out of sync with that external desired state.
To solve this issue, you can use Telepresence's Mutating Webhook alternative mechanism. Intercepted workloads will then stay untouched and only the underlying pods will be modified to inject the Traffic Agent sidecar container and update the port definitions.
Simply add the `telepresence.getambassador.io/inject-traffic-agent: enabled` annotation to your workload template's annotations:
```diff
 spec:
   template:
     metadata:
       labels:
         service: your-service
+      annotations:
+        telepresence.getambassador.io/inject-traffic-agent: enabled
     spec:
       containers:
```
A service port annotation can be added to the workload to make the Mutating Webhook select a specific port in the service. This is necessary when the service has multiple ports.
```diff
 spec:
   template:
     metadata:
       labels:
         service: your-service
       annotations:
         telepresence.getambassador.io/inject-traffic-agent: enabled
+        telepresence.getambassador.io/inject-service-port: https
     spec:
       containers:
```
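To illustrate why the port annotation is needed, a multi-port Service like the following sketch exposes two named ports; the `inject-service-port: https` annotation above tells the Mutating Webhook which one to intercept (the names and port numbers here are illustrative):

```yaml
# Illustrative multi-port Service; without the inject-service-port
# annotation the webhook could not tell which port to select.
apiVersion: v1
kind: Service
metadata:
  name: your-service
spec:
  selector:
    service: your-service
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
```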
When the `targetPort` of your intercepted service points at a port number, in addition to injecting the Traffic Agent sidecar, Telepresence will also inject an `initContainer` that reconfigures the pod's firewall rules to redirect traffic to the Traffic Agent.
For example, the following service is using a numeric port, so Telepresence would inject an initContainer into it:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: your-service
spec:
  type: ClusterIP
  selector:
    service: your-service
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-service
  labels:
    service: your-service
spec:
  replicas: 1
  selector:
    matchLabels:
      service: your-service
  template:
    metadata:
      annotations:
        telepresence.getambassador.io/inject-traffic-agent: enabled
      labels:
        service: your-service
    spec:
      containers:
        - name: your-container
          image: jmalloc/echo-server
          ports:
            - containerPort: 8080
```