Simplest way to set up a VPN in a cloud cluster?

I’ve got a pretty common case that haunts me way too often: I need a way to connect to resources in private subnets, a VPN or any kind of substitute. Most of my systems are hosted in AWS and they’re quite small, so they’re pretty cheap.

Right now I’m using an EC2 spot instance, automated with Ansible and Packer, running OpenVPN Server. It doesn’t fully satisfy me, but it mostly gets the job done and it’s cheap (a t3.nano spot instance costs me around $2 per month, which makes it much more attractive than AWS VPN at $0.10 per hour per endpoint).

At the moment I’m looking for something containerized that I could run within an ECS or EKS cluster and treat just like a stateful application. Why? Because I love the idea of having a single Terraform module that will bring up the VPN just like that. It would be great if it supported IPsec or PPTP.
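To illustrate what I mean, the module interface I’m imagining would look something like this; the source path and variable names are hypothetical, not an existing registry module:

```hcl
# Hypothetical interface, not a real module: the idea is that one
# `module` block deploys the containerized VPN into the cluster.
module "vpn" {
  source          = "./modules/container-vpn"   # hypothetical local module
  vpc_id          = module.vpc.vpc_id
  private_subnets = module.vpc.private_subnets
  ecs_cluster_arn = aws_ecs_cluster.main.arn
}
```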

So the question: what are you guys using? Do you have similar concerns?

Look at VyOS. It’s free (the upstream version) and has all the functions you need. I think you can treat VyOS like a stateless app because it uses a single config file for everything.

I personally use it in production in HA mode with VRRP and it works great.

I’ve run the gamut of anything OpenVPN based: fronting clusters with it on dedicated EC2 instances, running it statefully inside the cluster, OpenVPN Access Server, Pritunl, and AWS VPN. I’m slowly coming to the point where, when possible, I don’t bother with a VPN at all. OAuth + mutual TLS certs handle nearly all of my sensitive endpoints. Where possible, I use port-forwarding (via kubefwd) to avoid having to run a public ingress object in the first place. Yes, it sounds clunky (and it is), but it’s reasonably secure, dirt cheap, and low effort to implement. The downside is it really doesn’t scale well.

As a separate topic, I really dislike having to juggle multiple VPN solutions. I’ve done VPC peering, and find it obnoxious, especially if you have to peer across providers. The KISS solution, for me, was to simply automate hard-coding of routes in OpenVPN in split-tunnel mode. The only major drawback there is that you cannot dynamically update OpenVPN routes, since they get pushed by the server at login time. You could mitigate this entirely by running full-tunnel, but then your clients are piping everything through the VPN, which can rack up some non-trivial bandwidth charges. Still, at the end of the day, you’re trading something off somewhere: simplicity, convenience, reliability, security, cost, etc.
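For reference, those hard-coded split-tunnel routes live in the OpenVPN server config as `push` directives (the CIDRs below are examples); since the server sends them to clients at connect time, changing them means clients have to reconnect:

```
# server.conf excerpt: push only the private ranges (example CIDRs)
push "route 10.0.0.0 255.255.0.0"     # VPC in provider A
push "route 172.31.0.0 255.255.0.0"   # peered VPC in provider B
# Full-tunnel alternative (routes everything through the VPN):
# push "redirect-gateway def1"
```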

Finally, I looked hard at IPsec solutions and ultimately decided they weren’t a good fit. If I remember right, the implementation on non-Windows hosts was considered difficult, though I can’t really recall the details (this was like five years ago when I looked at it). OpenVPN is a robust, open-source, and actively developed solution, with both UDP and TCP flavors. It has its warts, but they’re not terrible once you get familiar with them. G’luck!

Anything wrong with the AWS Client VPN endpoint? It’s OpenVPN, native to AWS, and it just creates an endpoint inside your VPC, easy-peasy. They had a bug with the generated configs, but that’s been fixed. I’m about to roll it out to my org with one endpoint per environment, integrated into our AWS SSO, but you can just use mutual-auth certs if you only have a few users.

I’d like to know what issues you have with OpenVPN. That’s what we’re using, but we’re definitely not a super developed company on the devops side, so I’m wondering if we may run into issues with it later (we’re looking at at least moving it now, if not switching to a different service, so good timing).

What is your pain point with OpenVPN?

I use OpenVPN in a couple of different projects. My major issue with it was setup, so I initially wrote an Ansible playbook to launch a t2 instance and set up OpenVPN from scratch.

Later, I found this gem of an image: kylemanna/docker-openvpn on GitHub (an OpenVPN server in a Docker container, complete with an EasyRSA PKI CA). I’ve since modified the original playbook to simply set up this Docker image instead.
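From memory, the quickstart for that image looks roughly like this; check the repo’s README for the current commands, and `VPN.SERVERNAME.COM` / `CLIENTNAME` are placeholders:

```
# Create a volume for config + PKI, generate the config, init the CA
OVPN_DATA="ovpn-data"
docker volume create --name $OVPN_DATA
docker run -v $OVPN_DATA:/etc/openvpn --rm kylemanna/openvpn \
    ovpn_genconfig -u udp://VPN.SERVERNAME.COM
docker run -v $OVPN_DATA:/etc/openvpn --rm -it kylemanna/openvpn ovpn_initpki

# Run the server, then generate a client cert and retrieve its config
docker run -v $OVPN_DATA:/etc/openvpn -d -p 1194:1194/udp \
    --cap-add=NET_ADMIN kylemanna/openvpn
docker run -v $OVPN_DATA:/etc/openvpn --rm -it kylemanna/openvpn \
    easyrsa build-client-full CLIENTNAME nopass
docker run -v $OVPN_DATA:/etc/openvpn --rm kylemanna/openvpn \
    ovpn_getclient CLIENTNAME > CLIENTNAME.ovpn
```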

If you as a human just need to connect, you can use Systems Manager Session Manager to connect to an EC2 instance in your VPC through the AWS console.

The advantage of using Systems Manager is that your EC2 instance doesn’t need to allow ingress from the internet. It doesn’t even need a public IP address. Also, all the authentication is done through IAM, so you don’t have to manage SSH keys. You can also configure limits on sessions and audit who logged in with Session Manager.

Here’s a tutorial on setting up the Systems Manager agent: How to Remotely Run Commands on an EC2 Instance with AWS Systems Manager | AWS

This is how you would connect once Systems Manager is ready to use: Start a session - AWS Systems Manager

It’s also possible to run commands remotely with Systems Manager via the CLI.
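For reference, the CLI equivalents look like this (the instance ID is a placeholder, and `start-session` requires the Session Manager plugin for the AWS CLI):

```
# Interactive shell session over SSM (no open ports, no SSH keys)
aws ssm start-session --target i-0123456789abcdef0

# One-off remote command via Run Command
aws ssm send-command \
    --instance-ids i-0123456789abcdef0 \
    --document-name "AWS-RunShellScript" \
    --parameters 'commands=["uptime"]'
```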

I use WireGuard at work. It’s really easy to set up, and I even use vx3r/wg-gen-web on GitHub (a simple web-based configuration generator for WireGuard) in a dockerized setup to get a UI/REST API I can use to create key files. I’m happy with this setup overall.
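For anyone curious how small a WireGuard setup is, a complete client config is just a few lines; the keys, endpoint, and addresses below are placeholders:

```
# /etc/wireguard/wg0.conf (client side); all values are placeholders
[Interface]
PrivateKey = <client-private-key>
Address    = 10.8.0.2/24

[Peer]
PublicKey  = <server-public-key>
Endpoint   = vpn.example.com:51820
AllowedIPs = 10.0.0.0/16    # split tunnel: only the VPC range goes over the VPN
PersistentKeepalive = 25
```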

Did you run VyOS in a container in Kubernetes? If so, any issues connecting to it as a VPN endpoint? I’ve seen talk of running a VPN in a container, but I’d think it would be unstable. Looking for opinions from more experienced folks.

I mean, the cost? 73 bucks per month per VPC is crazy; for the same amount I get the EKS control plane. With the cluster costing around $150, paying over $70 for a VPN used twice a day is not the greatest deal :stuck_out_tongue:
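That $73 figure is just the endpoint’s hourly rate over a month; a quick sanity check, assuming the $0.10/hour rate quoted above:

```shell
# $0.10/hour for one endpoint, times ~730 hours in an average month
awk 'BEGIN { print 0.10 * 730 }'   # prints 73
```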

I run VyOS on bare metal and have never seen anyone use VyOS in k8s, but I think it will work well with the correct config applied.

You could always just spin it up/down with Terraform as needed? It does take ~10 minutes to spin up, so if you’re optimizing like that, something more custom that you can halt might make more sense.
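Spinning it up and down on demand is just the standard Terraform lifecycle; e.g., targeting a hypothetical `module.vpn`:

```
terraform apply   -auto-approve -target=module.vpn   # bring the VPN up (~10 min)
terraform destroy -auto-approve -target=module.vpn   # tear it down when done
```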