Telepresence Release Notes
Version 2.4.5 (October 15, 2021)
Feature: Get pod yaml with gather-logs command
Adding the flag --get-pod-yaml to your request will get the pod yaml manifest for all Kubernetes components you are getting logs for (the traffic-manager and/or pods containing a traffic-agent container). This flag is set to false by default.
Feature: Anonymize pod name + namespace when using gather-logs command
Adding the flag --anonymize to your command will anonymize your pod names and namespaces in the output file. We replace the sensitive names with simple names (e.g. pod-1, namespace-2) to preserve the relationships between objects without exposing their real names. This flag is set to false by default.
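Under the assumption of a connected Telepresence session, the two flags above can be combined on a single gather-logs call. The sketch below only assembles and prints the command, since actually running it requires a cluster:

```shell
# Assemble the gather-logs invocation described above. The command is only
# printed, not executed, because running it needs a connected session.
cmd="telepresence gather-logs --get-pod-yaml --anonymize"
echo "$cmd" > /tmp/gather-logs-cmd.txt
cat /tmp/gather-logs-cmd.txt
```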
Feature: Added context and defaults to ingress questions when creating a preview URL
Previously, we referred to OSI model layers when asking these questions, but this terminology is not commonly used. The questions now provide a clearer context for the user, along with a default answer as an example.
Feature: Support for intercepting headless services
Intercepting headless services is now officially supported. You can request a headless service on whatever port it exposes and get a response from the intercept. This leverages the same approach as intercepting numeric ports when using the mutating webhook injector.
Change: Use one tunnel per connection instead of multiplexing into one tunnel
We have changed Telepresence so that it uses one tunnel per connection instead of multiplexing all connections into one tunnel. This will provide substantial performance improvements. Clients will still be backwards compatible with older managers that only support multiplexing.
Bug Fix: Added checks for Telepresence kubernetes compatibility
Telepresence currently works with Kubernetes server versions 1.17.0 and higher. We have added logs in the connector and traffic-manager to let users know when they are using Telepresence with a cluster version it doesn't support.
Bug Fix: Traffic Agent security context is now only added when necessary
When creating an intercept, Telepresence will now only set the traffic agent's GID when strictly necessary (i.e. when using headless services or numeric ports). This mitigates an issue on OpenShift clusters, where the traffic agent could fail to be created because OpenShift's security policies ban arbitrary GIDs.
Version 2.4.4 (September 27, 2021)
Feature: Numeric ports in agent injector
The agent injector now supports injecting Traffic Agents into pods that have unnamed ports.
Feature: New subcommand to gather logs and export into zip file
Telepresence has logs for various components (the traffic-agents, the root and user daemons), which are integral for understanding and debugging Telepresence behavior. We have added the telepresence gather-logs command to make it simple to compile logs for all Telepresence components and export them in a zip file that can be shared with others and/or included in a GitHub issue. For more information on usage, run telepresence gather-logs --help.
Feature: Pod CIDR strategy is configurable in Helm chart
Telepresence now enables you to directly configure how to get pod CIDRs when deploying Telepresence with the Helm chart. The default behavior remains the same. We've also introduced the ability to explicitly set what the pod CIDRs should be.
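As a sketch of what explicitly set pod CIDRs might look like in a Helm values file; the `podCIDRStrategy` and `podCIDRs` key names and the subnet are assumptions, so consult the chart's values for the exact spelling:

```shell
# Hypothetical Helm values for explicitly set pod CIDRs; the key names
# (podCIDRStrategy, podCIDRs) and the subnet are assumed examples.
cat > /tmp/telepresence-values.yaml <<'EOF'
podCIDRStrategy: environment
podCIDRs:
  - 10.42.0.0/16
EOF
cat /tmp/telepresence-values.yaml
```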
Bug Fix: Compute pod CIDRs more efficiently
When computing subnets from the pod CIDRs, the traffic-manager now uses fewer CPU cycles.
Bug Fix: Prevent busy loop in traffic-manager
In some circumstances, the traffic-manager's CPU would max out and stay pinned at its limit, requiring a shutdown or pod restart to fix. We've added fixes to prevent the traffic-manager from getting into this state.
Bug Fix: Added a fixed buffer size to TUN-device
The TUN-device now has a max buffer size of 64K. This prevents the buffer from growing without bound until it receives a PSH, which could be a blocking operation when receiving lots of TCP packets.
Bug Fix: Fix hanging user daemon
When Telepresence encountered an issue connecting to the cluster or the root daemon, it could hang indefinitely. It now errors out correctly when it encounters that situation.
Bug Fix: Improved proprietary agent connectivity
To determine whether the environment cluster is air-gapped, the proprietary agent attempts to connect to the cloud during startup. To deal with a possible initial failure, the agent backs off and retries the connection with an increasing backoff duration.
Bug Fix: Telepresence correctly reports intercept port conflict
When creating a second intercept targeting the same local port, Telepresence now gives the user an informative error message. Additionally, it tells them which intercept is currently using that port, making it easier to remedy.
Version 2.4.3 (September 15, 2021)
Feature: Environment variable TELEPRESENCE_INTERCEPT_ID available in interceptor's environment
When you perform an intercept, we now include a TELEPRESENCE_INTERCEPT_ID environment variable in the environment.
Bug Fix: Improved daemon stability
Fixed a timing bug that sometimes caused a "daemon did not start" failure.
Bug Fix: Complete logs for Windows
Crash stack traces and other errors were incorrectly not written to log files. This has been fixed, so logs on Windows should now be at parity with those on macOS and Linux.
Bug Fix: Log rotation fix for Linux kernel 4.11+
On Linux kernel 4.11 and above, log file rotation now properly reads the birth-time of the log file. Older kernels continue with the old behavior of using the change-time in place of the birth-time.
Bug Fix: Improved error messaging
When Telepresence encounters an error, it tells the user where to look for logs related to the error. We have refined this so that it points users at the daemon logs only for issues that are actually logged there.
Bug Fix: Stop resolving localhost
When using the overriding DNS resolver, it will no longer apply search paths when resolving localhost, since that should be resolved on the user's machine instead of in the cluster.
Bug Fix: Variable cluster domain
Previously, the cluster domain was hardcoded to cluster.local. While this is true for many Kubernetes clusters, it is not for all of them. Now this value is retrieved from the traffic-manager.
Bug Fix: Improved cleanup of traffic-agents
Telepresence now uninstalls traffic-agents installed via the mutating webhook when running telepresence uninstall --everything.
Bug Fix: More large file transfer fixes
Downloading large files during an intercept will no longer cause timeouts and hanging.
Bug Fix: Setting --mount to false when intercepting works as expected
When passing --mount=false while performing an intercept, the file system was still mounted. This has been remedied so the intercept behavior respects the flag.
Bug Fix: Traffic-manager establishes outbound connections in parallel
Previously, the traffic-manager established outbound connections sequentially, so slow (or failing) Dial calls could block all outbound traffic from the workstation for up to 30 seconds. We now establish these connections in parallel so that won't occur.
Bug Fix: Status command reports correct DNS settings
telepresence status now correctly reports DNS settings for all operating systems, instead of reporting Local IP: nil, Remote IP: nil when they don't exist.
Version 2.4.2 (September 01, 2021)
Feature: New subcommand to temporarily change log-level
We have added a new telepresence loglevel subcommand that enables users to temporarily change the log-level for the local daemons and the traffic-agents. While the logLevels settings from the config will still be used by default, this can be helpful if you are currently experiencing an issue and want higher-fidelity logs without doing a telepresence connect. You can run telepresence loglevel --help for more information on the command's options.
Change: All components have info as the default log-level
We've now set all components of Telepresence (traffic-agent, traffic-manager, local daemons) to use info as the default log-level.
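For example, a config.yml that raises these defaults might look like the sketch below; the `userDaemon` and `rootDaemon` key names under `logLevels` are assumptions, not taken from these notes:

```shell
# Hedged sketch of a config.yml logLevels section; the userDaemon/rootDaemon
# key names are assumed, and the values are arbitrary examples.
cat > /tmp/telepresence-config.yml <<'EOF'
logLevels:
  userDaemon: debug
  rootDaemon: info
EOF
cat /tmp/telepresence-config.yml
```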
Bug Fix: Updating RBAC in helm chart to fix cluster-id regression
In 2.4.1, we enabled the traffic-manager to get the cluster ID by getting the UID of the default namespace. The Helm chart was not updated to give the traffic-manager those permissions, which has since been fixed. This impacted users who use licensed features of the Telepresence extension in an air-gapped environment.
Bug Fix: Timeouts for Helm actions are now respected
The user-defined timeout for Helm actions wasn't always respected, causing the daemon to hang indefinitely when failing to install the traffic-manager.
Version 2.4.1 (August 30, 2021)
Feature: External cloud variables are now configurable
We now support configuring the host and port for the cloud in your config.yml. These are used when logging in to utilize features provided by an extension, and are also passed along as environment variables when installing the `traffic-manager`. Additionally, we now run our test suite with these variables set to localhost, to continue to ensure Telepresence is fully functional without depending on an external service. The SYSTEMA_HOST and SYSTEMA_PORT environment variables are no longer used.
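A hedged sketch of the corresponding config.yml entries; the `cloud`, `systemaHost`, and `systemaPort` key names are assumptions inferred from the retired environment variables:

```shell
# Hypothetical config.yml cloud section; key names are assumed from the
# retired SYSTEMA_HOST / SYSTEMA_PORT environment variables.
cat > /tmp/telepresence-cloud.yml <<'EOF'
cloud:
  systemaHost: localhost
  systemaPort: 8080
EOF
cat /tmp/telepresence-cloud.yml
```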
Feature: Helm chart can now regenerate certificate used for mutating webhook on-demand.
You can now set agentInjector.certificate.regenerate when deploying Telepresence with the Helm chart to automatically regenerate the certificate used by the agent injector webhook.
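The dotted path in the note nests as you would expect in a values file; a minimal sketch:

```shell
# Minimal Helm values enabling webhook certificate regeneration, following
# the dotted path agentInjector.certificate.regenerate quoted above.
cat > /tmp/agent-injector-values.yaml <<'EOF'
agentInjector:
  certificate:
    regenerate: true
EOF
cat /tmp/agent-injector-values.yaml
```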
Change: Traffic Manager installed via helm
The traffic-manager is now installed via an embedded version of the Helm chart when telepresence connect is first performed on a cluster. This change is transparent to the user. A new configuration flag, timeouts.helm, sets the timeouts for all Helm operations performed by the Telepresence binary.
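A minimal config.yml sketch for the new setting; the `30s` value is an arbitrary example:

```shell
# Sketch of the timeouts.helm setting named above; 30s is an example value.
cat > /tmp/telepresence-timeouts.yml <<'EOF'
timeouts:
  helm: 30s
EOF
cat /tmp/telepresence-timeouts.yml
```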
Change: traffic-manager gets cluster ID itself instead of via environment variable
The traffic-manager used to receive the cluster ID as an environment variable, added via the Helm chart when running telepresence connect. This was clunky, so now the traffic-manager gets the value itself, as long as it has permissions to "get" and "list" namespaces (the Helm chart has been updated accordingly).
Bug Fix: Telepresence now mounts all directories from /var/run/secrets
In the past, we only mounted secret directories in /var/run/secrets/kubernetes.io. We now mount *all* directories in /var/run/secrets, which, for example, includes directories like eks.amazonaws.com used for IRSA tokens.
Bug Fix: Max gRPC receive size correctly propagates to all grpc servers
This fixes a bug where the max gRPC receive size was only propagated to some of the gRPC servers, causing failures when a message exceeded the default size.
Bug Fix: Updated our Homebrew packaging to run manually
We made some updates to the script that packages Telepresence for Homebrew so that it can be run manually. This will enable maintainers of Telepresence to run the script manually should we ever need to roll back a release and have latest point to an older version.
Bug Fix: Telepresence uses namespace from kubeconfig context on each call
In the past, Telepresence would use whatever namespace was specified in the kubeconfig's current-context for the entire time a user was connected. This led to confusing behavior when a user changed the context in their kubeconfig and expected Telepresence to acknowledge that change. Telepresence now uses the namespace designated by the context on each call.
Bug Fix: Idle outbound TCP connections timeout increased to 7200 seconds
Some users were noticing that their intercepts would start failing after 60 seconds. This was because the keep-alive for idle outbound TCP connections was set to 60 seconds, which we have now bumped to 7200 seconds to match Linux's default tcp_keepalive_time.
Bug Fix: Telepresence will automatically remove a socket upon ungraceful termination
When a Telepresence process terminated ungracefully, Telepresence would inform users that "this usually means that the process has terminated ungracefully" and imply that they should remove the socket. Telepresence will now automatically attempt to remove the socket upon ungraceful termination.
Bug Fix: Fixed user daemon deadlock
Remedied a situation where the user daemon could hang when a user was logged in.
Bug Fix: Fixed agentImage config setting
The config setting images.agentImage is no longer required to contain the repository; it will use the value at images.registry by default.
Version 2.4.0 (August 04, 2021)
Feature: Windows Client Developer Preview
There is now a native Windows client for Telepresence that is being released as a Developer Preview. All the same features supported by the macOS and Linux clients are available on Windows.
Feature: CLI raises helpful messages from Ambassador Cloud
Telepresence can now receive messages from Ambassador Cloud and raise them to the user when they perform certain commands. This enables us to send you messages that may enhance your Telepresence experience. The frequency of messages can be configured in your config.yml.
Bug Fix: Improved stability of systemd-resolved-based DNS
When initializing the systemd-resolved-based DNS, the routing domain is set to improve stability in non-standard configurations. This also enables the overriding resolver to properly take over once the DNS service ends.
Bug Fix: Fixed an edge case when intercepting a container with multiple ports
When specifying a port of a container to intercept, if there was a container in the pod without ports, it was automatically selected. This has been fixed so we'll only choose the container with "no ports" if there's no container that explicitly matches the port used in your intercept.
Bug Fix: $(NAME) references in agent's environments are now interpolated correctly.
If an environment variable $(NAME) in your workload referenced another variable, intercepts would not interpolate $(NAME) correctly. This has been fixed and interpolation now works automatically.
Bug Fix: Telepresence no longer prints INFO message when there is no config.yml
Fixed a regression that printed an INFO message to the terminal when there wasn't a config.yml present. The config is optional, so this message has been removed.
Bug Fix: Telepresence no longer panics when using --http-match
Fixed a bug where Telepresence would panic if the value passed to --http-match didn't contain an equals sign. The correct syntax is shown in the --help string.
Bug Fix: Improved subnet updates
The `traffic-manager` used to update subnets whenever the `Nodes` or `Pods` changed, even if the underlying subnet hadn't changed, which created a lot of unnecessary traffic between the client and the `traffic-manager`. This has been fixed so we only send updates when the subnets themselves actually change.
Version 2.3.7 (July 23, 2021)
Feature: Also-proxy in telepresence status
An also-proxy entry in the Kubernetes cluster config will now show up in the output of the telepresence status command.
Feature: Non-interactive telepresence login
telepresence login now has an --apikey=KEY flag that allows for non-interactive logins. This is useful for headless environments where launching a web browser is impossible, such as cloud shells, Docker containers, or CI.
Bug Fix: Mutating webhook injector correctly hides named ports for probes.
The mutating webhook injector has been fixed to correctly rename named ports for liveness and readiness probes.
Bug Fix: telepresence current-cluster-id crash fixed
Fixed a regression introduced in 2.3.5 that caused `telepresence current-cluster-id` to crash.
Bug Fix: Better UX around intercepts with no local process running
Requests would hang indefinitely when initiating an intercept before you had a local process running. This has been fixed and will result in an "Empty reply from server" until you start a local process.
Bug Fix: API keys no longer show as "no description"
New API keys generated internally for communication with Ambassador Cloud no longer show up as "no description" in the Ambassador Cloud web UI. Existing API keys generated by older versions of Telepresence will still show up this way.
Bug Fix: Fix corruption of user-info.json
Fixed a race condition where rapidly logging in and out could cause memory corruption or corruption of the user-info.json cache file used when authenticating with Ambassador Cloud.
Bug Fix: Improved DNS resolver for systemd-resolved
The systemd-resolved-based DNS resolver is now more stable, and if it fails to initialize, the overriding resolver will no longer cause general DNS lookup failures when Telepresence defaults to using it.
Bug Fix: Faster telepresence list command
The performance of telepresence list has been improved significantly by reducing the number of calls the command makes to the cluster.
Version 2.3.6 (July 20, 2021)
Bug Fix: Fix preview URLs
Fixed a regression introduced in 2.3.5 that caused preview URLs to not work.
Bug Fix: Fix subnet discovery
Fixed a regression introduced in 2.3.5 where the Traffic Manager's RoleBinding did not correctly reference the Role, preventing subnet discovery from working correctly.
Bug Fix: Fix root-user configuration loading
Fixed a regression introduced in 2.3.5 where the root daemon did not correctly read the configuration file, ignoring the user's configured log levels and timeouts.
Bug Fix: Fix a user daemon crash
Fixed an issue that could cause the user daemon to crash during shutdown, because it unconditionally attempted to close a channel that might already be closed.
Version 2.3.5 (July 15, 2021)
Feature: traffic-manager in multiple namespaces
We now support installing multiple traffic managers in the same cluster. This allows operators to install deployments of Telepresence that are limited to certain namespaces.
Feature: No more dependence on kubectl
Telepresence no longer depends on having an external kubectl binary, which might not be present for OpenShift users (who typically use oc instead).
Feature: Agent image now configurable
We now support configuring which agent image and registry to use in the config. This enables users working in air-gapped environments to create personal intercepts without requiring a login. It also makes it easier for those developing Telepresence to specify which agent image should be used. The TELEPRESENCE_AGENT_IMAGE and TELEPRESENCE_REGISTRY environment variables are no longer used.
Feature: Max gRPC receive size now configurable
The default max size of messages received through gRPC (4 MB) is sometimes insufficient. It can now be configured.
Feature: CLI can be used in air-gapped environments
While Telepresence will auto-detect whether your cluster is in an air-gapped environment, we've added an option users can set in their config.yml to ensure the CLI acts as if it is in an air-gapped environment. Air-gapped environments require a manually installed license.
Version 2.3.4 (July 09, 2021)
Bug Fix: Improved IP log statements
Some log statements were printing incorrect characters where they should have printed IP addresses. This has been resolved to provide more accurate and useful logging.
Bug Fix: Improved messaging when multiple services match a workload
If multiple services matched a workload when performing an intercept, Telepresence would crash. It now gives the correct error message, instructing the user on how to specify which service the intercept should use.
Bug Fix: Traffic-manager creates services in its own namespace to determine subnet
Telepresence will now determine the service subnet by creating a dummy-service in its own namespace, instead of the default namespace, which was causing RBAC permissions issues in some clusters.
Bug Fix: Telepresence connect respects pre-existing clusterrole
When Telepresence connects, if the clusterrole already exists in the cluster, Telepresence will no longer try to update it.
Bug Fix: Helm Chart fixed for clientRbac.namespaced
The Telepresence Helm chart no longer fails when installing with clientRbac.namespaced enabled.
Version 2.3.3 (July 07, 2021)
Feature: Traffic Manager Helm Chart
Telepresence now supports installing the Traffic Manager via Helm. This will make it easy for operators to install and configure the server-side components of Telepresence separately from the CLI (which in turn allows for better separation of permissions).
Feature: Traffic-manager in custom namespace
The traffic-manager can now be installed in any namespace via Helm, and Telepresence can be configured to look for the Traffic Manager in a namespace other than ambassador. This can be configured on a per-cluster basis.
Feature: Intercept --to-pod
telepresence intercept now supports a --to-pod flag that can be used to port-forward sidecars' ports from an intercepted pod.
Change: Change in migration from edgectl
Telepresence no longer automatically shuts down the old edgectl daemon. If migrating from such an old version of edgectl, you must now manually shut down the edgectl daemon before running Telepresence. This was already the case when migrating from newer versions of edgectl.
Bug Fix: Fixed error during shutdown
The root daemon no longer terminates when the user daemon disconnects from its gRPC streams; instead it waits to be terminated by the CLI. The previous behavior could cause problems with things not being cleaned up correctly.
Bug Fix: Intercepts will survive deletion of intercepted pod
An intercept will survive deletion of the intercepted pod provided that another pod is created (or already exists) that can take over.
Version 2.3.2 (June 18, 2021)
Feature: Service Port Annotation
The mutator webhook for injecting traffic-agents now recognizes a telepresence.getambassador.io/inject-service-port annotation that specifies which port to intercept, bringing the functionality of the --port flag to users who use the mutator webhook in order to control Telepresence via GitOps.
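As a sketch, the annotation goes on the workload's pod template; the Deployment name and port value below are hypothetical placeholders:

```shell
# Hypothetical Deployment fragment showing the inject-service-port
# annotation from the note above; name and port are placeholders.
cat > /tmp/inject-service-port.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  template:
    metadata:
      annotations:
        telepresence.getambassador.io/inject-service-port: "8080"
EOF
cat /tmp/inject-service-port.yaml
```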
Feature: Outbound Connections
Outbound connections are now routed through the intercepted Pods which means that the connections originate from that Pod from the cluster's perspective. This allows service meshes to correctly identify the traffic.
Change: Inbound Connections
Inbound connections from an intercepted agent are now tunneled to the manager over the existing gRPC connection, instead of establishing a new connection to the manager for each inbound connection. This avoids interference from certain service mesh configurations.
Change: Traffic Manager needs new RBAC permissions
The Traffic Manager requires RBAC permissions to list Nodes, Pods, and to create a dummy Service in the manager's namespace.
Change: Reduced developer RBAC requirements
The on-laptop client no longer requires RBAC permissions to list the Nodes in the cluster or to create Services, as that functionality has been moved to the Traffic Manager.
Bug Fix: Able to detect subnets
Telepresence will now detect the Pod CIDR ranges even if they are not listed in the Nodes.
Bug Fix: Dynamic IP ranges
The list of cluster subnets that the virtual network interface will route is now configured dynamically and will follow changes in the cluster.
Bug Fix: No duplicate subnets
Subnets fully covered by other subnets are now pruned internally and thus never superfluously added to the laptop's routing table.
Change: Change in default timeout
The trafficManagerAPI timeout default has changed from 5 seconds to 15 seconds, in order to accommodate the extended time it takes for the traffic-manager to do its initial discovery of cluster info as a result of the above bug fixes.
Bug Fix: Removal of DNS config files on macOS
On macOS, files generated under /etc/resolver/ as the result of using include-suffixes in the cluster config are now properly removed on quit.
Bug Fix: Large file transfers
Telepresence no longer erroneously terminates connections early when sending a large HTTP response from an intercepted service.
Bug Fix: Race condition in shutdown
When shutting down the user-daemon or root-daemon on the laptop, telepresence quit and related commands no longer return before everything is fully shut down. You can now count on all side effects on the laptop having been cleaned up by the time the command returns.
Version 2.3.1 (June 14, 2021)
Feature: DNS Resolver Configuration
Telepresence now supports per-cluster configuration of custom DNS behavior, which enables users to determine which local and remote resolvers to use and which suffixes should be ignored or included.
Feature: AlsoProxy Configuration
Telepresence now supports also proxying user-specified subnets so that they can access external services only accessible to the cluster while connected to Telepresence. These can be configured on a per-cluster basis and each subnet is added to the TUN device so that requests are routed to the cluster for IPs that fall within that subnet.
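Because this is per-cluster, the subnets live alongside the cluster entry in your kubeconfig. The sketch below assumes a `telepresence.io` extension holding an `also-proxy` list; the exact layout and the subnet are assumptions, not confirmed by these notes:

```shell
# Hedged kubeconfig sketch for an also-proxy subnet; the extension layout
# (telepresence.io / extension / also-proxy) is assumed, and the subnet
# and cluster name are arbitrary examples.
cat > /tmp/kubeconfig-fragment.yaml <<'EOF'
clusters:
  - name: example-cluster
    cluster:
      server: https://example.invalid
      extensions:
        - name: telepresence.io
          extension:
            also-proxy:
              - 10.202.0.0/16
EOF
cat /tmp/kubeconfig-fragment.yaml
```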
Feature: Mutating Webhook for Injecting Traffic Agents
The Traffic Manager now contains a mutating webhook to automatically add an agent to pods that have the telepresence.getambassador.io/traffic-agent: enabled annotation. This enables Telepresence to work well with GitOps CD platforms that rely on higher-level Kubernetes objects matching what is stored in git. For workloads without the annotation, Telepresence will add the agent the way it has in the past.
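A sketch of a pod template opting in to automatic agent injection; the annotation key is copied from the note above, while the Deployment name is a placeholder:

```shell
# Hypothetical pod-template fragment carrying the opt-in annotation quoted
# above; the Deployment name is a placeholder.
cat > /tmp/traffic-agent-enabled.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  template:
    metadata:
      annotations:
        telepresence.getambassador.io/traffic-agent: enabled
EOF
cat /tmp/traffic-agent-enabled.yaml
```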
Change: Traffic Manager Connect Timeout
The trafficManagerConnect timeout default has changed from 20 seconds to 60 seconds, in order to facilitate the extended time it takes to apply everything needed for the mutator webhook.
Bug Fix: Fix for large file transfers
Fixed a tun-device bug where large transfers from services on the cluster would sometimes hang indefinitely.
Change: Brew Formula Changed
Now that the Telepresence rewrite is the main version of Telepresence, you can install it via Brew like so: brew install datawire/blackbird/telepresence.
Version 2.3.0 (June 01, 2021)
Feature: Brew install Telepresence
Telepresence can now be installed via brew on macOS, which makes it easier for users to stay up-to-date with the latest Telepresence version. To install via brew, you can use the following command: brew install datawire/blackbird/telepresence2.
Feature: TCP and UDP routing via Virtual Network Interface
Telepresence will now perform routing of outbound TCP and UDP traffic via a Virtual Network Interface (VIF). The VIF is a layer 3 TUN-device that exists while Telepresence is connected. It makes the subnets in the cluster available to the workstation and will also route DNS requests to the cluster and forward them to intercepted pods. This means that pods with custom DNS configuration will work as expected. Prior versions of Telepresence would use firewall rules and were only capable of routing TCP.
Change: SSH is no longer used
All traffic between the client and the cluster is now tunneled via the traffic manager's gRPC API. This means that Telepresence no longer uses ssh tunnels and that the manager no longer has sshd installed. Volume mounts are still established using sshfs, but it is now configured to communicate using the sftp protocol directly, which means that the traffic agent also runs without sshd. A desired side effect of this is that the manager and agent containers no longer need a special user configuration.
Feature: Running in a Docker container
Telepresence can now be run inside a Docker container. This can be useful for avoiding side effects on a workstation's network, establishing multiple sessions with the traffic manager, or working with different clusters simultaneously.
Feature: Configurable Log Levels
Telepresence now supports configuring the log level for Root Daemon and User Daemon logs. This provides control over the nature and volume of information that Telepresence generates in
Version 2.2.2 (May 17, 2021)
For a detailed list of all the changes in past releases, please consult the CHANGELOG.