In my use of Google Cloud Platform (GCP), I often recreate virtual machines (VMs), which requires frequent updates to connection information. This process becomes cumbersome, especially when dealing with stale entries in the `known_hosts` file and managing DNS entries for a growing number of servers. Additionally, the Identity-Aware Proxy (IAP), which requires the `gcloud` CLI for secure access to these servers, adds another layer of complexity. I ended up leveraging my previous experience with SSH configuration files to streamline my workflow.
The Challenge
There were several issues I needed to solve:
- Frequent VM recreation leading to connection information changes
- Managing stale `known_hosts` entries
- DNS management for numerous servers
- The necessity of using IAP for secure access
- The complexity of the `gcloud compute ssh` command
The Solution
To implement my solution, I needed two key tools (a quick check for both follows the list):

- The `gcloud` CLI installed locally
- `nc` (netcat) installed on the remote servers to proxy the connection from IAP to the VM
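A minimal sanity check for both prerequisites, assuming a Debian-based VM and the placeholder names `server1`, `us-west1-b`, and `project1-id` used throughout this post (the netcat package name varies by distro):

```bash
# Locally: confirm the gcloud CLI is installed.
gcloud --version

# Remotely: confirm nc is present on the VM, installing it if missing.
gcloud compute ssh server1 --project project1-id --zone us-west1-b \
  --tunnel-through-iap \
  --command 'command -v nc || sudo apt-get install -y netcat-openbsd'
```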
`gcloud compute ssh` is a wrapper around `ssh` that helps with authentication and IP resolution. The command gets a bit unwieldy when combined with other SSH-based tools such as Ansible or rsync: to rsync between my local machine and a remote server, I had to pass the wrapper command into rsync, and the same for every other SSH-based tool (one such dance is sketched below). Having previously worked with Vault, where I adjusted the SSH configuration to execute a command per connection, I figured I could use the same trick here. I found a helpful starting point on StackOverflow and was able to adapt and extend it for my needs.
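To illustrate the friction: without any SSH configuration help, one workaround for an IAP-backed rsync is to open a tunnel with `gcloud compute start-iap-tunnel` (rather than wrapping `gcloud compute ssh` itself), then point rsync at it. A sketch using the hypothetical instance `server1` in zone `us-west1-b` of project `project1-id`:

```bash
# Step 1: open an IAP tunnel from a local port to the VM's SSH port.
gcloud compute start-iap-tunnel server1 22 \
  --project project1-id --zone us-west1-b \
  --local-host-port localhost:2222 &

# Step 2: aim rsync's underlying ssh at the tunnel for this one transfer.
rsync -av -e 'ssh -p 2222 -i ~/.ssh/gcp_key' ./local-dir/ me@localhost:/tmp/dir/
```

Two commands per server, per session, and every other SSH-based tool needs the same treatment.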
To maintain security without opening SSH ports to the public or assigning public IPs to each VM, I utilized GCP’s Identity-Aware Proxy for tunneling. The `gcloud` CLI makes this easy by providing the `--tunnel-through-iap` flag.
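On its own, the flag gives you a one-off tunneled session (placeholder instance, user, and project names again):

```bash
# A single IAP-tunneled SSH session; no public ports or IPs required.
gcloud compute ssh me@server1 --project project1-id --zone us-west1-b --tunnel-through-iap
```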
The Configuration
The final addition to my SSH configuration was the following:
```
Host *.gcloud
    ProxyCommand bash -c 'IFS=. read -r server zone project _ <<< "${1}"; gcloud compute --project "$project" ssh --zone "$zone" --ssh-key-file ~/.ssh/gcp_key --tunnel-through-iap "${2}@$server" --command="nc 0.0.0.0 22"' _ %h %r
    IdentityFile ~/.ssh/gcp_key
    # GCP handles the known hosts and host key checking separately, so we can ignore them here
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
```
This configuration takes a hostname I provide (e.g., `server1.us-west1-b.project1-id.gcloud`) and converts it into variables that are then passed to the `gcloud` CLI.
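The parsing itself is plain bash: setting `IFS=.` makes `read` split the hostname on dots. You can watch it work in isolation:

```bash
# What the ProxyCommand's read does with a *.gcloud hostname (%h).
IFS=. read -r server zone project _ <<< "server1.us-west1-b.project1-id.gcloud"
echo "$server"   # server1
echo "$zone"     # us-west1-b
echo "$project"  # project1-id
```

With the wildcard block in place, plain `ssh me@server1.us-west1-b.project1-id.gcloud` works, and so do rsync, Ansible, and any other SSH-based tool pointed at that hostname.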
I also wanted to forward some ports from inside GCP’s network to my local machine. To add customizations for a specific host, you can define a more specific Host block (without the wildcard) and add the options there; SSH will then also apply the settings defined in the wildcard block.
```
Host server1.us-west1-b.project1-id.gcloud
    LocalForward 3306 10.20.30.40:3306

# Host *.gcloud...
```
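With the forward defined, opening the connection exposes the remote endpoint locally. A sketch, assuming the 10.20.30.40:3306 address from the block above is a MySQL server and `me`/`app` are hypothetical SSH and database users:

```bash
# Hold a connection open purely for the port forward (-N: no remote command).
ssh -N me@server1.us-west1-b.project1-id.gcloud &

# The remote MySQL endpoint now answers on localhost:3306.
mysql --host 127.0.0.1 --port 3306 --user app --password
```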
Troubleshooting
If you are following along and encounter issues with this setup, ensure that you’ve added the IAP IP addresses to your GCP network ingress rules. You can find the list of IP addresses in the GCP documentation. Also, you may need to sign into the `gcloud` CLI, if you haven’t already.
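Both fixes are one-liners. The IAP TCP-forwarding source range below (35.235.240.0/20) matches the GCP documentation at the time of writing, but do verify it there; the rule name and project are placeholders:

```bash
# Allow IAP's TCP forwarding range to reach SSH on the VMs.
gcloud compute firewall-rules create allow-iap-ssh \
  --project project1-id \
  --direction INGRESS --action ALLOW \
  --rules tcp:22 \
  --source-ranges 35.235.240.0/20

# Sign in to the gcloud CLI if you haven't already.
gcloud auth login
```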
Conclusion
This setup has saved me a lot of time. I could manage DNS entries when creating the servers with infrastructure-as-code, but this approach eliminates that extra configuration (one more potential point of failure) and adds security by not opening any ports to the world.