This document describes how to inspect outgoing Internet traffic through a FortiGate multi-zone HA cluster in Google Cloud Platform (GCP) using the network-tag feature, applied per Virtual Machine (VM). The use case helps customers who want to shift outgoing traffic to a FortiGate deployment in a VM-by-VM fashion. With this option, selected VMs' egress traffic can be inspected by FortiGate.
The official documentation for network tags can be found here.
In this scenario, a FortiGate active/passive (A/P) cluster operates as the Internet breakout firewall in a GCP project. A custom route is a GCP networking component that can be used to override the platform's default routing behavior. Under normal conditions, each subnet within a VPC uses a platform-assigned default route pointing to GCP's Internet services. Egress traffic to the Internet can instead be routed through the following next-hop options:
- Instance (available in GUI)
- IP address (available in GUI)
- VPN tunnel (available in GUI)
- Forwarding rule of internal TCP/UDP load balancer (available in GUI)
- Internal TCP/UDP load balancer IP address (not-available in GUI)
The last option above cannot be configured through the GCP GUI as of this writing (Q2'22). The configuration is done using gcloud CLI commands, as described below.
- FortiGate multi-zone HA A/P deployed in the project using deployment templates,
- Spoke VPCs and VMs deployed in each VPC,
- VPC peering established between the Spoke VPCs and the FortiGate internal VPC (configuring VPC peering)
- A NAT-enabled FortiGate security rule allowing specific egress services for the Spoke VMs
- A FortiGate static route for the Spoke VPC CIDRs
- (Optional) A FortiGate Fabric connector to import GCP objects (FortiGate GCP Fabric connector how-to)
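Of the prerequisites above, the VPC peering can also be established from the CLI. Below is a minimal sketch assuming networks named spoke1-vpc and fgt-internal-vpc in the same project (both names are placeholders, not values from this deployment):

```shell
# Peer the spoke VPC with the FortiGate internal VPC (one direction only;
# a matching peering must also be created from fgt-internal-vpc back).
gcloud compute networks peerings create spoke1-to-fgt \
    --network=spoke1-vpc \
    --peer-network=fgt-internal-vpc
```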
The following diagram illustrates the environment for this use case. As shown in the topology, an Internal Load Balancer (ILB) is placed behind the FortiGate HA cluster. The ILB's internal frontend IP address will be used as the next-hop IP address in the custom route configuration pointing to the Internet.
The GCP management console can be used to access the CLI by clicking the "Activate Cloud Shell" button at the top right.
After your Cloud Shell machine is provisioned, a terminal pane will appear at the bottom of the screen.
After accessing Cloud Shell, the following syntax can be used to create a custom route with a network tag specified. The same network_tag_value will be used by compute resources in the Spoke VPC.
gcloud compute routes create custom_route_name \
--network=name_of_spoke_vpc \
--destination-range=0.0.0.0/0 \
--next-hop-ilb=ip_address_of_ilb \
--priority=custom_route_priority \
--tags=network_tag_value
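As a worked example, the command below fills in sample values (the names spoke1-egress-via-fgt and spoke1-vpc, the ILB address 10.0.1.100, and the tag fgt-egress are placeholders for this sketch, not values from the deployment). Note that GCP resource names must consist of lowercase letters, digits, and hyphens, so a literal name such as custom_route_name would be rejected:

```shell
# Sketch: route all Internet-bound traffic from tagged VMs in spoke1-vpc
# through the ILB frontend IP that fronts the FortiGate cluster.
gcloud compute routes create spoke1-egress-via-fgt \
    --network=spoke1-vpc \
    --destination-range=0.0.0.0/0 \
    --next-hop-ilb=10.0.1.100 \
    --priority=900 \
    --tags=fgt-egress
```

A lower --priority value wins over the platform's default route (priority 1000), so a value such as 900 ensures tagged VMs prefer this route for 0.0.0.0/0.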
Navigate to the specific VM that needs egress inspection by FortiGate under "Compute Engine > VM Instances", then select the VM.
You can add the network tag value on the virtual machine's "Edit" screen. Find the "Network tags" section on the edit screen, as shown below.
Add the network_tag_value defined in the custom route. When you click "Save" at the bottom, the custom route takes effect and outgoing traffic is routed to the ILB IP address, as defined in the custom route.
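Alternatively, the same tag can be attached from Cloud Shell. This sketch assumes a VM named spoke1-vm in zone us-central1-a and the tag fgt-egress (all placeholder values):

```shell
# Attach the network tag so the tagged custom route applies to this VM
gcloud compute instances add-tags spoke1-vm \
    --zone=us-central1-a \
    --tags=fgt-egress
```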
A security rule allows outgoing traffic for specific source objects. In the example below, the Spoke1_VPC object is used.
Here is the navigation path for creating these:
- Objects: under "Policy & Objects > Addresses"
- Security rule: under "Policy & Objects > Firewall Policy"
In this demo environment, a Linux-based VM is deployed in the Spoke VPC. The serial console for this VM can be accessed through the GCP console GUI via "Compute Engine > VM Instances > Spoke-VM > Connect to Serial Console".
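The serial console can also be reached from Cloud Shell. The instance name and zone below are placeholders, and interactive serial-port access must be enabled on the instance (metadata key serial-port-enable set to TRUE):

```shell
# Connect to the VM's serial console without using the GUI
gcloud compute connect-to-serial-port spoke1-vm \
    --zone=us-central1-a
```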
To find the egress-NATed public IP address, curl ip.me can be used. The output shows the public IP address used by the Cloud NAT deployed via the FortiGate deployment template, which confirms that outgoing packets are traversing the FortiGate instances.
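If the observed public IP does not change, it can help to confirm that the tagged custom route is actually installed in the spoke network. A hedged check, assuming the placeholder network name spoke1-vpc:

```shell
# List routes in the spoke VPC; the custom 0.0.0.0/0 route with the
# network tag and ILB next hop should appear alongside the default route.
gcloud compute routes list \
    --filter="network:spoke1-vpc"
```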
The traffic log for the test above can be accessed through the FortiGate management GUI via "Log & Report > Forward Traffic".