Troubleshooting & Workarounds
For method-specific limitations see the documentation on the available proxying methods.
General limitations & workarounds
With --method vpn-tcp or --method inject-tcp, a container run via docker run will not inherit the outgoing functionality of the Telepresence shell.
If you want to use Telepresence to proxy a containerized application, you should use --method container.
localhost and the pod
With the inject-tcp or vpn-tcp methods, localhost or 127.0.0.1 will access the host machine, i.e., the machine where you ran telepresence.
If you're using the container method,
localhost will refer to your local container.
In either case,
localhost will not connect you to other containers in the pod.
This can be a problem in cases where you are running multiple containers in a pod and you need your process to access a different container in the same pod.
The solution is to use
--to-pod PORT and/or
--from-pod PORT to tell Telepresence to set up additional forwarding.
If you want to make connections from your local session to the pod, e.g., to access a proxy/helper sidecar, use --to-pod PORT.
On the other hand, if you want connections from other containers in the pod to reach your local session, use --from-pod PORT.
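The two flags can be combined in a single session. A sketch of such an invocation (the deployment name, ports, and run command below are assumptions for illustration, not values from this document):

```shell
# Swap out the "servicename" Deployment; forward local connections to a
# sidecar listening on pod port 9902 (--to-pod), and let other containers
# in the pod reach the local process on port 8080 (--from-pod).
telepresence --swap-deployment servicename \
  --to-pod 9902 \
  --from-pod 8080 \
  --run python3 app.py
```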
An alternate solution to connect to the pod is to access the pod via its IP, rather than at 127.0.0.1.
You can have the pod IP configured as an environment variable
$MY_POD_IP in the Deployment using the Kubernetes Downward API:
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: servicename
        image: datawire/telepresence-k8s:0.109
        env:
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
```
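With that Downward API entry in place, the process running under Telepresence can build the sidecar's address from the environment variable instead of hard-coding 127.0.0.1. A minimal sketch, assuming a hypothetical sidecar port of 9902 and falling back to localhost when the variable is unset:

```shell
# MY_POD_IP is injected by the Downward API snippet above; fall back to
# localhost when running outside the cluster or if it is unset.
POD_IP="${MY_POD_IP:-127.0.0.1}"
SIDECAR_URL="http://${POD_IP}:9902"   # 9902 is a hypothetical sidecar port
echo "$SIDECAR_URL"
# e.g. curl "$SIDECAR_URL/healthz" to reach a helper sidecar in the pod
```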
Fedora 18+/CentOS 7+/RHEL 7+ and firewalld
Fedora 18+/CentOS 7+/RHEL 7+ ship with firewalld enabled and running by default. In its default configuration, it will drop traffic on unknown ports originating from Docker's default bridge network, usually docker0 with subnet 172.17.0.0/16.
To resolve this issue, instruct firewalld to trust traffic from the 172.17.0.0/16 subnet:
```shell
sudo firewall-cmd --permanent --zone=trusted --add-source=172.17.0.0/16
sudo firewall-cmd --reload
```
For more details, see issue #464.