Using a custom gateway on an OVHcloud Managed Kubernetes cluster
Objectives
In this tutorial we are going to use a custom gateway deployed in vRack with a Managed Kubernetes cluster.
Why?
By default, in a Kubernetes cluster, the Pods you deploy use their Node's IP for outgoing traffic.
So you have as many output IPs as Nodes. This is a problem when you need to maintain an IP whitelist and your cluster uses autoscaling (Nodes being created and deleted on the fly).
One solution is to use a custom gateway which will allow you to have a single output IP (your gateway).
You will:
create a private network
create subnets
create an OpenStack router in each region and link it to the external provider network and the subnets
create an OVHcloud Managed Kubernetes cluster with the private gateway
test the Pod's output IP
At the end of this tutorial you should have the following flow:
INFO
In this tutorial we show you how to create the private network in two regions, but you can use a single region if you prefer, GRA9 for example.
To set up a functional environment, you have to load the OpenStack and OVHcloud API credentials.
To help you, we have also created several useful scripts and templates.
First, create a utils folder in your environment/local machine.
Then, download the ovhAPI.sh script into it.
And then add execution rights to the ovhAPI.sh script:
chmod +x utils/ovhAPI.sh
You have to load the content of the given utils/openrc file to manage OpenStack, and the variables contained in the utils/ovhAPI.properties file to manage the OVHcloud API.
Create the utils/openrc file, or download it from your OpenStack provider. It should look like this:
export OS_AUTH_URL=https://auth.cloud.ovh.net/v3
export OS_IDENTITY_API_VERSION=3
export OS_USER_DOMAIN_NAME=${OS_USER_DOMAIN_NAME:-"Default"}
export OS_PROJECT_DOMAIN_NAME=${OS_PROJECT_DOMAIN_NAME:-"Default"}
export OS_TENANT_ID=xxxxxxxxxxxxxxxxxxxxx
export OS_TENANT_NAME="xxxxxxxxxxxxxxxxxx"
export OS_USERNAME="user-xxxxxxxxxxxxx"
export OS_PASSWORD="xxxxxxxxxxxxxxxxxx"
export OS_REGION_NAME="xxxx"
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
Create the utils/ovhAPI.properties with your generated keys and secret:
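As an illustration, the file is a set of key/value pairs holding the credentials generated from the OVHcloud API token creation page. The variable names below are assumptions; use whatever keys your utils/ovhAPI.sh script actually reads:

```
# Hypothetical variable names - adapt them to what utils/ovhAPI.sh expects
OVH_APP_KEY="xxxxxxxxxxxxxxxx"
OVH_APP_SECRET="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
OVH_CONSUMER_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```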
Note: To be clear, the parameter "noGateway": false means "Gateway": true. We want the subnet to explicitly use the first IP address of the CIDR range.
Then create subnets with appropriate routes, and finally get IDs (subnGRA9 & subnGRA11):
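Assuming utils/ovhAPI.sh wraps the OVHcloud API the same way as in the kubeconfig step later in this tutorial, and that it accepts a JSON body as its third argument (an assumption; adapt to your script), the GRA9 subnet creation could be sketched as follows, using the /cloud/project/{serviceName}/network/private/{networkId}/subnet endpoint:

```
# Hypothetical sketch: create a subnet on the GRA9 private network and keep its ID
export subnGRA9="$(utils/ovhAPI.sh POST /cloud/project/$OS_TENANT_ID/network/private/${pvnwGRA9Id}/subnet \
  '{"region": "GRA9", "network": "192.168.0.0/25", "start": "192.168.0.2", "end": "192.168.0.126", "dhcp": true, "noGateway": false}' \
  | jq -r .id)"
```

Repeat the same call with the GRA11 region and the 192.168.0.128/25 range to get subnGRA11.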
For now, it's not possible to add routes to the subnet via the API, so we must use the OpenStack CLI instead.
Bash
openstack --os-region-name=GRA9 subnet set ${subnGRA9} --host-route destination=192.168.0.0/25,gateway=192.168.0.1
openstack --os-region-name=GRA11 subnet set ${subnGRA11} --host-route destination=192.168.0.128/25,gateway=192.168.0.129
OpenStack router
Create the routers
We have the ability to create OpenStack virtual routers. To do this, we need to use the OpenStack CLI.
Create routers and get their IDs (rtrGRA9Id & rtrGRA11Id):
$ export rtrGRA9Id="$(openstack --os-region-name=GRA9 router create rtr-GRA9 -f json | jq -r .id)" && echo $rtrGRA9Id
26bf99c8-d6fa-4c5a-9d42-1358776ee0a2
$ export rtrGRA11Id="$(openstack --os-region-name=GRA11 router create rtr-GRA11 -f json | jq -r .id)" && echo $rtrGRA11Id
ResourceNotFound: 404: Client Error for url: https://network.compute.gra11.cloud.ovh.net/v2.0/routers, The resource could not be found.
INFO
For the moment you can only create a virtual router in the GRA9 and GRA11 regions, but this feature will be released in other regions in the coming weeks and months.
Now, you can display the information of your new virtual router on GRA9 in order to display its IP:
$ openstack --os-region-name=GRA9 router show $rtrGRA9Id -c id -c name -c status -c created_at -c external_gateway_info
+-----------------------+----------------------------------------------------------------------------------------------+
| Field                 | Value                                                                                        |
+-----------------------+----------------------------------------------------------------------------------------------+
| created_at            | 2022-07-25T07:32:06Z                                                                         |
| external_gateway_info | {"network_id": "b2c02fdc-ffdf-40f6-9722-533bd7058c06", "external_fixed_ips": [{"subnet_id":  |
|                       | "0f11270c-1113-4d4f-98de-eba83445d962", "ip_address": "141.94.209.244"}, {"subnet_id":       |
|                       | "4aa6cac1-d5cd-4e25-b14b-7573aeabcab1", "ip_address": "2001:41d0:304:400::917"}],            |
|                       | "enable_snat": true}                                                                         |
| id                    | 26bf99c8-d6fa-4c5a-9d42-1358776ee0a2                                                         |
| name                  | rtr-GRA9                                                                                     |
| status                | ACTIVE                                                                                       |
+-----------------------+----------------------------------------------------------------------------------------------+
As you can see, in this example, the IP of the gateway will be 141.94.209.244.
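If you want to capture that egress IP in a variable (for example, to add it to a whitelist), you can parse the JSON output with jq. The sketch below applies the filter to a canned sample of the router's external_gateway_info; on a live project you would pipe `openstack --os-region-name=GRA9 router show $rtrGRA9Id -f json` into the same filter:

```shell
# Sample of the JSON returned by 'openstack router show -f json' (truncated)
routerJson='{"external_gateway_info": {"external_fixed_ips": [{"ip_address": "141.94.209.244"}, {"ip_address": "2001:41d0:304:400::917"}]}}'

# Keep only the first (IPv4) fixed IP of the external gateway
gatewayIp="$(echo "$routerJson" | jq -r '.external_gateway_info.external_fixed_ips[0].ip_address')"
echo "$gatewayIp"
```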
Link the router to the external provider network
First, get the regional external network ID (extNwGRA9Id & extNwGRA11Id), then link the router to it:
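As a sketch (we assume here that the external provider network is the one flagged as external in your project, and the variable names are just conventions), the GRA9 commands look like this:

```
# Get the ID of the regional external provider network
export extNwGRA9Id="$(openstack --os-region-name=GRA9 network list --external -f json | jq -r '.[0].id')"

# Attach the external network as the router's gateway...
openstack --os-region-name=GRA9 router set ${rtrGRA9Id} --external-gateway ${extNwGRA9Id}

# ...and plug the router into the private subnet
openstack --os-region-name=GRA9 router add subnet ${rtrGRA9Id} ${subnGRA9}
```

Repeat the same three commands with the GRA11 region, router and subnet for the second region.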
Now the network is ready. Create an OVHcloud Managed Kubernetes cluster, specifying the use of the gateway defined on each subnet.
Note: For the rest of this tutorial we only use the GRA9 region, but you can repeat the exact same steps to create a cluster in the GRA11 region.
INFO
In this guide we use version 1.23 for the Kubernetes cluster, but you can use any other supported version.
First, get the private network IDs (pvnwGRA9Id & pvnwGRA11Id), then create the OVHcloud Managed Kubernetes Cluster, and finally get the cluster ID (kubeId):
Bash
API
OVHcloud Control Panel
Terraform
Create a tpl/data-kube.json.tpl file containing the cluster creation payload, and fill in the right parameters. The file should look like this:
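The exact payload depends on the API version; as a sketch under that assumption (the field names mirror the Terraform arguments shown later in this tutorial), tpl/data-kube.json.tpl could look like:

```
{
  "name": "demo",
  "region": "GRA9",
  "version": "1.23",
  "privateNetworkId": "${pvnwGRA9Id}",
  "privateNetworkConfiguration": {
    "defaultVrackGateway": "192.168.0.1",
    "privateNetworkRoutingAsDefault": true
  }
}
```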
Log in to the OVHcloud Control Panel, go to the Public Cloud section and select the Public Cloud project concerned.
Access the administration UI for your OVHcloud Managed Kubernetes clusters by clicking on Managed Kubernetes Service in the left-hand menu:
INFO
You can create your networks and subnets using Terraform by following this guide.
You need to create a file, let's name it kubernetes-cluster-test.tf with this content:
# Create your Kubernetes cluster
resource "ovh_cloud_project_kube" "cluster_terraform" {
  service_name       = "my_service_name"        # Replace with your OVHcloud project ID
  name               = "cluster_terraform"
  region             = "GRA9"
  private_network_id = "my_private_network_id"  # Replace with your private network id

  private_network_configuration {
    private_network_routing_as_default = true
    default_vrack_gateway              = "192.168.0.1"
  }
}

# Create your node pool and assign it to your cluster
resource "ovh_cloud_project_kube_nodepool" "node_pool" {
  service_name  = "my_service_name"  # Replace with your OVHcloud project ID
  kube_id       = ovh_cloud_project_kube.cluster_terraform.id
  name          = "node-pool-terraform"
  flavor_name   = "b3-8"             # Replace with the desired instance flavour
  desired_nodes = 3
  max_nodes     = 3
  min_nodes     = 3
}
You can create your resources by entering the following command:
terraform apply
Now wait until your OVHcloud Managed Kubernetes cluster is READY.
For that, you can check its status in the OVHcloud Control Panel:
Log in to the OVHcloud Control Panel, go to the Public Cloud section and select the Public Cloud project concerned.
Access the administration UI for your OVHcloud Managed Kubernetes clusters by clicking on Managed Kubernetes Service in the left-hand menu:
As you can see, your new cluster is attached to the demo-pvnw network.
Now click on your newly created demo Kubernetes cluster to see its status:
When your cluster's status is OK, you can go to the next section.
Get Kubeconfig file
To interact with the freshly created Kubernetes cluster, you must retrieve the kubeconfig file.
Bash
API
utils/ovhAPI.sh POST /cloud/project/$OS_TENANT_ID/kube/$kubeId/kubeconfig | jq -r .content > kubeconfig-demo
To use this kubeconfig file and access your cluster, you can follow our configuring kubectl tutorial, or simply add the --kubeconfig flag to your kubectl commands.
Test
List the running nodes in your cluster:
kubectl --kubeconfig=kubeconfig-demo get no -o wide
You should obtain a result like this:
$ kubectl --kubeconfig=kubeconfig-demo get no -o wide
NAME                                         STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP      OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
nodepool-8f0b4d98-874a-4cfd-b8-node-c74f26   Ready    <none>   56m   v1.23.6   192.168.0.71   141.94.215.23    Ubuntu 18.04.6 LTS   4.15.0-189-generic   containerd://1.4.6
nodepool-8f0b4d98-874a-4cfd-b8-node-c9bf60   Ready    <none>   57m   v1.23.6   192.168.0.96   141.94.208.78    Ubuntu 18.04.6 LTS   4.15.0-189-generic   containerd://1.4.6
nodepool-8f0b4d98-874a-4cfd-b8-node-e666f5   Ready    <none>   56m   v1.23.6   192.168.0.31   141.94.212.214   Ubuntu 18.04.6 LTS   4.15.0-189-generic   containerd://1.4.6
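If you only need the nodes' external IPs (for example, to compare them with your gateway IP later), you can filter the JSON output of kubectl with jq. The sketch below applies the filter to a canned, heavily truncated sample node object; on a live cluster you would pipe `kubectl --kubeconfig=kubeconfig-demo get no -o json` into the same filter:

```shell
# Sample of 'kubectl get no -o json' output (truncated to one node and its addresses)
nodesJson='{"items": [{"status": {"addresses": [{"type": "InternalIP", "address": "192.168.0.71"}, {"type": "ExternalIP", "address": "141.94.215.23"}]}}]}'

# Keep only the ExternalIP addresses of each node
echo "$nodesJson" | jq -r '.items[].status.addresses[] | select(.type == "ExternalIP") | .address'
```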
Now test the cluster by running a simple container that requests its published IP address.
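A minimal way to do this (the pod name and the ifconfig.me service are illustrative choices, and we assume the cluster can pull public images) is to run a one-shot curl pod:

```
kubectl --kubeconfig=kubeconfig-demo run test-output-ip \
  --image=curlimages/curl --rm -it --restart=Never \
  -- curl -s ifconfig.me
```

The printed IP should be the gateway's IP (141.94.209.244 in our example), not any of the Nodes' public IPs shown above.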
If you need training or technical assistance to implement our solutions, contact your sales representative or click on this link to get a quote and ask our Professional Services experts to assist you with your specific use case.