Network Configuration And Access Control

A robust networking setup is essential for Kubernetes clusters in Google Cloud Platform (GCP) to ensure private communication and controlled access to external resources. This guide provides detailed instructions for configuring the critical networking components before deploying a private Kubernetes cluster, following Google Kubernetes Engine (GKE) best practices for a secure and scalable environment.

Step 1: Create a VPC Network

The VPC will act as the foundational network for your Kubernetes cluster.

  1. Navigate to the VPC Networks page in the GCP Console.
  2. Create the VPC:
    • Name: prod-0.
    • Subnet Creation Mode: Custom (required for the custom subnet defined in the next step; Automatic is only suitable when Google-managed IP ranges are acceptable).
  3. Add a Custom Subnet:
    • Name: prod-subnet-0.
    • Region: us-central1.
    • IPv4 Range: 10.2.204.0/22.
    • Toggle Private Google Access to ON to allow instances without external IPs to access Google APIs and services.
  4. Save and Apply.
tip

Refer to Google Cloud VPC Network documentation for detailed instructions.
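The console steps above can also be scripted with the gcloud CLI. The following is a minimal sketch, assuming the target project is already set via `gcloud config set project`:

```shell
# Create the VPC in custom subnet mode so the subnet below
# can be defined with an explicit IP range.
gcloud compute networks create prod-0 \
    --subnet-mode=custom

# Create the subnet with Private Google Access enabled, so nodes
# without external IPs can still reach Google APIs and services.
gcloud compute networks subnets create prod-subnet-0 \
    --network=prod-0 \
    --region=us-central1 \
    --range=10.2.204.0/22 \
    --enable-private-ip-google-access
```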

Step 2: Configure a Cloud Router

Cloud Router manages dynamic route advertisement and is required for Cloud NAT.

  1. Navigate to the Cloud Router page in the GCP Console.
  2. Create a Cloud Router:
    • Name: prod-router-0.
    • Region: us-central1.
    • Network: prod-0.
  3. Enable BGP (optional):
    • Set up BGP if required for dynamic routing with on-premises networks.
  4. Save Configuration.
tip

Refer to Cloud Router documentation for detailed instructions.
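The equivalent gcloud commands are sketched below; the ASN shown in the optional BGP step is an example private ASN, not a value mandated by this guide:

```shell
# Create the Cloud Router that the Cloud NAT gateway (Step 3) will attach to.
gcloud compute routers create prod-router-0 \
    --network=prod-0 \
    --region=us-central1

# Optional: assign a private ASN if BGP peering with on-premises
# networks is required (64512 is an example from the private ASN range).
# gcloud compute routers update prod-router-0 \
#     --region=us-central1 \
#     --asn=64512
```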

Step 3: Set Up Cloud NAT

Cloud NAT provides egress internet access for private Kubernetes nodes.

  1. Navigate to the Cloud NAT page in the GCP Console.
  2. Create a Cloud NAT Gateway:
    • Name: prod-gateway.
    • Region: us-central1.
    • Network: prod-0.
    • Router: prod-router-0.
  3. Specify NAT Mapping:
    • Recommended: Use automatic allocation of NAT IP ranges for simplicity.
  4. Enable Logging:
    • For monitoring purposes, enable NAT logging.
  5. Save and Deploy.
tip

Refer to Cloud NAT documentation for detailed instructions.
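As a gcloud sketch, the NAT gateway above can be created on the router from Step 2 with automatic IP allocation and logging enabled:

```shell
# Create the NAT gateway with automatic NAT IP allocation,
# covering all subnet ranges in the region, with logging on.
gcloud compute routers nats create prod-gateway \
    --router=prod-router-0 \
    --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges \
    --enable-logging
```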

Step 4: Access Control

GKE uses IAM service accounts attached to your nodes to handle essential system tasks like logging and monitoring. At a basic level, these node service accounts need the Kubernetes Engine Default Node Service Account role (roles/container.defaultNodeServiceAccount) in your project.

By default, GKE assigns the Compute Engine default service account, which is automatically created for your project, as the node service account. Refer to Kubernetes Engine Default Node Service Account and Compute Engine Default Service Account for detailed instructions.
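If you use a dedicated node service account instead of the Compute Engine default, the role grant can be sketched as follows; `PROJECT_ID` and the service account email are placeholders you must replace with your own values:

```shell
# Grant the minimal node role to the service account the nodes run as.
# PROJECT_ID and the --member email below are placeholders.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:my-node-sa@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/container.defaultNodeServiceAccount"
```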

Step 5: Verify and Integrate

After configuring the network, ensure integration with GKE by setting up your private Kubernetes cluster.

  1. Private Cluster Creation:
    • Use the GKE Cluster Creation Guide to set up a private cluster.
  2. Integrate with a Zero-Trust framework such as Cloudflare and add the following applications:
    • IDHub Admin:
      • Application URL: [IDHUB_FQDN]/admin
    • Keycloak Master Realm:
      • Application URL: [IDHUB_FQDN]/auth/admin/master/console/
note
  • IDHUB_FQDN is the fully qualified domain name of your IDHub application.
  • Please click here for a detailed explanation of what an FQDN is and how to configure it.
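The private cluster creation can be sketched with gcloud as below. The cluster name and the control-plane CIDR are example values, not prescribed by this guide; the Zero-Trust integration is configured separately in your Cloudflare dashboard:

```shell
# Create a private GKE cluster on the network and subnet from Step 1.
# Cluster name and --master-ipv4-cidr are example values.
gcloud container clusters create prod-cluster-0 \
    --region=us-central1 \
    --network=prod-0 \
    --subnetwork=prod-subnet-0 \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr=172.16.0.0/28
```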

Additional Considerations

  • Use Google Cloud Monitoring to track network traffic.
  • Enable logging for system components and workloads.
  • Enable Shielded GKE Nodes.
  • Enroll the cluster in the Stable release channel for managed upgrades.
  • Enable HTTP load balancing.
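Most of these considerations map to flags on the cluster creation command. A sketch, with the cluster name as a placeholder (these flags can be combined with the private-cluster flags used in Step 5):

```shell
# Example flags implementing the considerations above at creation time.
gcloud container clusters create prod-cluster-0 \
    --region=us-central1 \
    --logging=SYSTEM,WORKLOAD \
    --monitoring=SYSTEM \
    --enable-shielded-nodes \
    --release-channel=stable \
    --addons=HttpLoadBalancing
```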