Lab Adv 03: Deploy an OKS Cluster with Terraform

By the end of this lab, you will be able to:

  • Create an OKS project with Terraform
  • Deploy an OKS cluster with Terraform
  • Automatically generate the cluster kubeconfig
  • Interact with the cluster using kubectl

Before starting this lab, make sure you have the following:

  • Terraform installed
  • kubectl installed
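Before moving on, you can confirm both tools are available on your PATH with a quick shell check (a minimal sketch, not part of the lab itself):

```shell
# Quick sanity check that the required tools are on the PATH
for tool in terraform kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool found"
  else
    echo "$tool missing"
  fi
done
```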

Create the following structure:

oks-terraform/
├── main.tf
├── providers.tf
├── variables.tf
├── terraform.tfvars
└── outputs.tf

Create the providers.tf file:

terraform {
  required_version = ">= 1.0"
  required_providers {
    outscale = {
      source  = "outscale/outscale"
      version = "1.3.0"
    }
  }
}

provider "outscale" {
  access_key_id = var.access_key
  secret_key_id = var.secret_key
  region        = var.region
  endpoints {
    api = "api.${var.region}.outscale.com"
  }
}

Create the variables.tf file:

variable "access_key" {
  description = "Access Key"
  type        = string
}

variable "secret_key" {
  description = "Secret Key"
  type        = string
  sensitive   = true
}

variable "region" {
  description = "Region"
  type        = string
}

variable "project_name" {
  description = "OKS project name"
  type        = string
}

variable "project_cidr" {
  description = "OKS project CIDR"
  type        = string
}

variable "cluster_name" {
  description = "OKS cluster name"
  type        = string
}

variable "cidr_pods" {
  description = "Pod network CIDR"
  type        = string
}

variable "cidr_services" {
  description = "Service network CIDR"
  type        = string
}

variable "cluster_version" {
  description = "OKS cluster Kubernetes version"
  type        = string
}

variable "admin_whitelist" {
  description = "List of CIDRs allowed to access the Kubernetes API"
  type        = list(string)
}

variable "control_planes" {
  description = "Control Planes"
  type        = string
  default     = "cp.mono.master"
}

variable "cp_multi_az" {
  description = "Multi-AZ Control Plane"
  type        = bool
  default     = false
}

variable "cp_subregions" {
  description = "Control Plane Subregions"
  type        = list(string)
  default     = ["1a", "1b", "1c"]
}

Create the terraform.tfvars file:

access_key = ""
secret_key = ""
region = "eu-west-2"
project_name = "my-project"
project_cidr = "10.50.0.0/16"
cluster_name = "cluster01"
cluster_version = "1.32"
cidr_pods = "10.91.0.0/16"
cidr_services = "10.92.0.0/16"
admin_whitelist = ["46.231.147.8/32"]
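Rather than writing your credentials into terraform.tfvars, you can leave `access_key` and `secret_key` empty there and supply them through the environment: Terraform automatically maps `TF_VAR_<variable_name>` environment variables onto matching input variables. For example (the values below are placeholders):

```shell
# Terraform reads TF_VAR_<variable_name> for any declared input variable,
# so credentials can stay out of version-controlled files.
# Placeholder values -- replace with your own keys.
export TF_VAR_access_key="AK_PLACEHOLDER"
export TF_VAR_secret_key="SK_PLACEHOLDER"
```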

In the main.tf file, add:

resource "outscale_oks_project" "project01" {
  name   = var.project_name
  cidr   = var.project_cidr
  region = var.region
}

Still in main.tf:

resource "outscale_oks_cluster" "cluster01" {
  name            = var.cluster_name
  project_id      = outscale_oks_project.project01.id
  version         = var.cluster_version
  cidr_pods       = var.cidr_pods
  cidr_service    = var.cidr_services
  admin_whitelist = var.admin_whitelist
  control_planes  = var.control_planes
  cp_multi_az     = var.cp_multi_az
  cp_subregions   = var.cp_subregions
}

This configuration defines the control plane type, whether it is deployed in single-AZ or multi-AZ mode, and which subregions are used.
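As an illustration only, a multi-AZ deployment could be requested by overriding the defaults in terraform.tfvars. Note that the control-plane type name below is a placeholder, not a verified OKS offering; check the OKS catalog for the types actually available in your region:

```hcl
# Hypothetical multi-AZ override -- "cp.ha.masters" is a placeholder,
# and the subregion suffixes must match your region (here eu-west-2).
control_planes = "cp.ha.masters"
cp_multi_az    = true
cp_subregions  = ["2a", "2b"]
```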

In this lab, the OKS cluster kubeconfig is automatically retrieved using a Terraform data source and exposed through an output.

Create the outputs.tf file:

data "outscale_oks_kubeconfig" "cluster01" {
  cluster_id = outscale_oks_cluster.cluster01.id
}

output "kubeconfig" {
  description = "OKS cluster kubeconfig"
  value       = data.outscale_oks_kubeconfig.cluster01.kubeconfig
  sensitive   = true
}

Initialize the Terraform project:

terraform init

Review the execution plan:

terraform plan

Apply the configuration:

terraform apply

Once the deployment is complete, export the kubeconfig generated by Terraform:

terraform output -raw kubeconfig > kubeconfig
export KUBECONFIG=$PWD/kubeconfig

Check that the cluster is accessible:

kubectl get ns
kubectl get pods -A

Once the OKS cluster is deployed and the kubeconfig is configured, you need to create a NodePool to provision worker nodes for running Kubernetes workloads.

Create a nodepool.yaml file with the following content:

apiVersion: oks.dev/v1beta2
kind: NodePool
metadata:
  name: nodepool-01
spec:
  desiredNodes: 2
  nodeType: tinav7.c2r4p2
  zones:
    - eu-west-2a
  upgradeStrategy:
    maxUnavailable: 1
    maxSurge: 0
  autoUpgradeEnabled: true
  autoUpgradeMaintenance:
    durationHours: 1
    startHour: 12
    weekDay: Tue
  autoHealing: true

Apply the configuration with the following command:

kubectl apply -f nodepool.yaml

Verify that the NodePool was created successfully:

kubectl get nodepools

Provisioning the NodePool machines may take several minutes.
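Rather than polling manually, you can block until the workers have joined the cluster (a sketch using standard kubectl commands; the 15-minute timeout is an arbitrary choice):

```shell
# Block until every node reports Ready, then list the nodes.
# The timeout value is an example; adjust it to your environment.
kubectl wait --for=condition=Ready node --all --timeout=15m
kubectl get nodes -o wide
```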

Example of a Remote Terraform Backend Configuration (Optional)

This example is provided for reference only and is not required to complete this lab.

In this lab, the Terraform backend is intentionally kept local. The state file (terraform.tfstate) is therefore stored on the local machine.

In a production environment, it is recommended to use a remote backend to centralize the Terraform state, enable collaboration, and improve security.

Example backend.tf file:

terraform {
  backend "s3" {
    bucket                      = "terraform-bucket"
    endpoint                    = "https://oos.eu-west-2.outscale.com"
    key                         = "terraform/terraform.tfstate"
    profile                     = "my-profile"
    region                      = "eu-west-2"
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    skip_region_validation      = true
    skip_requesting_account_id  = true
    skip_s3_checksum            = true
  }
}
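If you later add such a backend.tf to a project that already has a local state file, Terraform must be re-initialized; the `-migrate-state` option copies the existing local state to the new backend:

```shell
# Re-initialize after adding backend.tf; -migrate-state copies the
# existing local terraform.tfstate into the remote backend.
terraform init -migrate-state
```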