This is the fourth article in a series of posts about Google Cloud Platform security. The first two articles (Article I., Article II.) focused on hardening possibilities at the project level in Google Cloud Platform, and the third post highlighted built-in GCE security. In this fourth article, we will go through several network-level protection tools available for your Google Compute Engine instances. Also, don’t miss the last post of this series.
Use Google Cloud Load Balancer for Inbound HTTPS Traffic

It is always good practice to reduce your attack surface, and this method is an example of doing just that with incoming HTTP and HTTPS traffic. If your instance serves web requests, you should use the Google Cloud Load Balancer to route those requests to your instance and avoid opening the HTTP and HTTPS ports directly to the Internet. This is worthwhile even if you only use a single machine. The approach has two advantages. First, you do not disclose the individual public IP addresses of your web servers, making them harder to attack. Second, the Google Cloud Load Balancer applies some filtering to incoming HTTP requests, so many malicious requests will be dropped before they ever hit your instances. In addition, you can scale or replace your web servers while maintaining full availability of your services: you can change the backend servers behind the load balancer at any time, and multiple web instances can serve your sites simultaneously.
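As a rough sketch of this setup, the ingress firewall on the backend instances can be restricted to Google's documented external HTTP(S) load-balancer and health-check source ranges (130.211.0.0/22 and 35.191.0.0/16) instead of the whole Internet. The rule name and the `web-backend` target tag below are placeholders:

```shell
# Sketch: allow web traffic to backend instances only from Google's
# load-balancer / health-check ranges, instead of opening ports 80 and
# 443 to 0.0.0.0/0. "allow-lb-to-web" and "web-backend" are placeholders.
gcloud compute firewall-rules create allow-lb-to-web \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80,tcp:443 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=web-backend
```

With a rule like this in place, direct requests to the instances' public IPs on ports 80/443 are rejected, while traffic proxied through the load balancer still arrives.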
Restrict the Outbound Firewall Rules

Even if you block every incoming request other than web requests, and route those through the Google Cloud Load Balancer, you might still have a bug in the software of your web stack (e.g. Tomcat, NGINX) or in your application code. If an attacker manages to take control of the web processes and run arbitrary code on your web server, it is much harder to control that machine in the long term without communicating with it from a master node. For this reason, most attacks sidestep restrictive inbound firewall rules by opening an outgoing connection to an outside control host managed by the attacker: the connection is not incoming (where the firewall would block it) but outgoing (where the rules are usually much less restrictive). You can make this type of attack much harder by disabling new outgoing connections toward the Internet in your firewall rules (either at the OS level on the virtual machines or using the firewall provided by GCE). There are two ways to go about this. One is to block only newly established outgoing connections, so every reply packet for incoming connections can still leave the instance. A second, stricter way is to allow outgoing connections only to the IP addresses from which you want to access the machine. This way, only those machines can talk to the instance, and no other outside access is possible.
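Both variants can be sketched with GCE firewall rules. GCE's firewall is stateful, so reply packets for allowed inbound connections still pass even when new egress is denied; the rule names, the `web-backend` tag, and the management address below are placeholders:

```shell
# Sketch 1: deny all NEW outgoing connections to the Internet.
# GCE firewall rules are stateful, so replies to allowed inbound
# connections are unaffected.
gcloud compute firewall-rules create deny-all-egress \
    --direction=EGRESS \
    --action=DENY \
    --rules=all \
    --destination-ranges=0.0.0.0/0 \
    --priority=65000 \
    --target-tags=web-backend

# Sketch 2 (stricter variant): on top of the deny rule, allow egress
# only toward a known management host. 198.51.100.10 is a documentation
# example address, not a real endpoint.
gcloud compute firewall-rules create allow-egress-mgmt \
    --direction=EGRESS \
    --action=ALLOW \
    --rules=tcp:22 \
    --destination-ranges=198.51.100.10/32 \
    --priority=1000 \
    --target-tags=web-backend
```

The allow rule wins because its priority value (1000) is lower, and in GCE lower values take precedence.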
Set Up an HTTPS Proxy for Outbound Access

If you implement the rules from the previous section, you might inadvertently limit some legitimate functionality of your applications. If your application code accesses a third-party API during normal operation, you have to allow outgoing traffic in the firewall to the IP addresses of that third-party service; otherwise, the code cannot reach it. If your applications only use HTTP or HTTPS for outgoing connections, it is better to use an HTTPS (web) proxy for these requests. That way, you do not have to list every third-party API service IP in your firewall rules. Install a proxy server on a separate instance, allow that instance to open new connections to the whole Internet on the HTTP and HTTPS ports in the firewall, and then configure that instance's internal IP as the proxy server on your web servers. You keep access to any APIs over HTTP or HTTPS, while an attacker cannot open new outgoing web connections from your machine by default, at least not without knowledge of your network layout.
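On the web servers, pointing outbound web traffic at the proxy can be as simple as the standard proxy environment variables, which most HTTP clients (curl, wget, many language runtimes) honor. The internal IP 10.128.0.5 and port 3128 (Squid's default) are placeholders for your own proxy instance:

```shell
# Sketch: route outbound HTTP/HTTPS through an internal proxy instance.
# 10.128.0.5:3128 is a placeholder internal address; 3128 is the default
# Squid port. Many tools read these variables automatically.
export http_proxy=http://10.128.0.5:3128
export https_proxy=http://10.128.0.5:3128

# Verify that a third-party API is reachable through the proxy
# (api.example.com stands in for your real dependency):
curl -sI https://api.example.com/
```

An attacker who gains code execution can still find and use the proxy, so this is defense in depth rather than a hard boundary, which matches the "at least without knowledge of your network layout" caveat above.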
Refine All Inbound Firewall Rules Which Have 0.0.0.0/0 as a Source

If you have any inbound firewall rules whose allowed connection source is the whole Internet, you should reconsider them. If you allow SSH access from the whole Internet, you have a very large attack surface: you must rely on every user not to lose their SSH private keys, and your particular SSH server must be free of security flaws. There are multiple ways to remove, or at least reduce the number of, inbound rules with wide sources. If the specific port is only used by a limited number of employees or partners, it is better to specify the IP addresses of the offices the service is used from. If the access is for customers and the service is a web page or an API, use Google Cloud Load Balancer in front of your instances as described earlier. It may also be that you have a limited audience, but the requests are not web related and you cannot know the source IP address range of the audience in advance. In this case, try the methods described in the next section.
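Finding these wide-open rules can be scripted with `gcloud`. The filter expression below is an assumption about the list-field matching syntax; adjust it for your gcloud version if the output looks wrong:

```shell
# Sketch: list ingress firewall rules that accept traffic from the whole
# Internet, so they can be reviewed and tightened. The filter syntax for
# matching the sourceRanges list field is an assumption; tweak it if the
# command returns nothing on your gcloud version.
gcloud compute firewall-rules list \
    --filter="direction=INGRESS AND sourceRanges:0.0.0.0/0" \
    --format="table(name,network,sourceRanges.list())"
```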
Use VPN or a Jump Host for SSH or Special Port Accesses

If you have a specific port with a service intended for a limited audience, don’t open that service up to the whole Internet. If the audience’s IP addresses aren’t known in advance, you have two solutions to try: a VPN, or a jump host. A jump host is a dedicated instance that enables some kind of remote access (e.g. SSH, RDP) for the whole Internet, and from which the special service is accessible over the internal network. Your audience first connects to the jump host, then uses the service from that machine. For this approach to be secure, it has some prerequisites:
- The jump host’s IP address must not be publicly known.
- The jump host has to be the most up-to-date and best-secured machine in your whole infrastructure.
- Move the remote access from the well-known default port to a random one, known only to your audience.
- Apply advanced techniques like port knocking.
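For SSH specifically, the hop through the jump host can be done in one step with OpenSSH's ProxyJump option (`-J`). The addresses, user name, and non-default port 2222 below are placeholders; 203.0.113.10 is a documentation example address:

```shell
# Sketch: reach an internal-only instance through the jump host using
# OpenSSH's ProxyJump. The jump host listens on a non-default port
# (2222 here), and 10.128.0.12 is the target's internal IP, reachable
# only from inside the network.
ssh -J admin@203.0.113.10:2222 admin@10.128.0.12
```

The same relationship can be made permanent with `Host`/`ProxyJump` entries in `~/.ssh/config`, so users never need to know the internal topology.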