Deploying Kong Gateway on AKS with Pulumi: Streamlining API Management in Microservices


Introduction: 

Managing and securing APIs is crucial for successful application development in microservices and distributed systems. Kong, an open-source API gateway, has gained popularity for its robust feature set and scalability. Acting as a central entry point for all API traffic, Kong enables organizations to efficiently handle authentication, rate limiting, request/response transformations, and more. This article explores how to deploy Kong Gateway in traditional (database-backed) mode on Azure Kubernetes Service (AKS) using Pulumi, an Infrastructure as Code (IaC) tool. This approach enables consistent API management while leveraging Pulumi to automate infrastructure provisioning and configuration.

Why Pulumi?

Pulumi offers a simplified approach to managing and deploying infrastructure by treating it as code. With Pulumi, you can leverage familiar programming languages like JavaScript, Python, or Go to write and provision infrastructure. By utilizing Pulumi, you can automate the deployment of Kong, including its associated resources, in a consistent and repeatable manner. This automation reduces manual effort and enhances reproducibility, making infrastructure management more efficient and reliable.
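As a small illustration, a complete Pulumi program can be a single resource declaration. The snippet below is a minimal sketch (the resource group name demo-rg is hypothetical) that provisions an Azure resource group when you run pulumi up:

import * as resources from "@pulumi/azure-native/resources";

// One declaration is enough; `pulumi up` makes Azure match this code.
const demoGroup = new resources.ResourceGroup("demo-rg", { location: "westus2" });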

Prerequisites:

Before proceeding with the deployment of Kong Gateway on AKS using Pulumi, ensure that you have the following prerequisites in place:

  1. Basic understanding of Azure Kubernetes Service (AKS): Familiarity with AKS and its core concepts will be beneficial for comprehending the deployment process and managing the AKS cluster effectively.
  2. Knowledge of Pulumi: Pulumi is an Infrastructure as Code (IaC) tool that allows you to write infrastructure code using popular programming languages like JavaScript, Python, or Go. Familiarize yourself with Pulumi’s concepts and syntax to grasp the deployment workflow and effectively manage the infrastructure.
  3. Azure account with access to create an AKS cluster: You should have an active Azure account with appropriate permissions to create and manage resources in Azure, including the ability to create an AKS cluster.
  4. Pulumi installed on your local machine: Ensure that Pulumi is installed on your local development machine. Refer to the Pulumi documentation for installation instructions specific to your operating system (a quick way to verify the tooling is shown below).

By fulfilling these prerequisites, you will have the foundational knowledge and necessary tools to deploy Kong Gateway on AKS using Pulumi.
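A quick way to verify the tools are in place is to check each CLI from a terminal (versions will vary; Node.js is listed because the TypeScript template used below runs on it, and kubectl is optional but handy for inspecting the cluster later):

az --version
pulumi version
node --version
kubectl version --client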

The Pulumi documentation is available at https://www.pulumi.com/docs/.

Step 1: Setting up the Pulumi Project

The first step is to define the infrastructure as code using Pulumi. Create a new directory for your project, navigate into it, and initialize a new Pulumi project with the azure-typescript template (this article uses TypeScript; Pulumi also supports other languages):

$ mkdir quickstart && cd quickstart
$ pulumi new azure-typescript

# Install necessary dependencies by running
npm install @pulumi/azure-native @pulumi/kubernetes @pulumi/azuread @pulumi/kubernetes-cert-manager

Configure Pulumi to access your Microsoft Azure account and set the stack configuration:

az login

# pulumi config set
pulumi config set azure-native:location westus2
pulumi config set pulumi-poc:kubernetesVersion 1.25.6
pulumi config set pulumi-poc:nodeVmSize Standard_DS2_v2
pulumi config set pulumi-poc:numWorkerNodes "2"
pulumi config set pulumi-poc:resourceGroupName ac-rg
pulumi config set pulumi-poc:resourceName pulumi-poc
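
Note that the pulumi-poc: prefix on these keys is a config namespace: for new pulumi.Config() in the program below to pick them up, it must match the project name in Pulumi.yaml (an assumption here; adjust the prefix if your project is named differently). You can confirm what the stack will use:

# List the configuration values for the current stack
pulumi config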

Step 2: Configure Kong Deployment

Next, you’ll configure the Kong deployment within your Pulumi project. This involves defining the necessary resources such as Azure AD Application, Service Principal Password, AKS cluster, and kubeconfig required to run Kong. Define the configuration in your Pulumi project’s index.ts file.

import * as pulumi from "@pulumi/pulumi";
import * as resources from "@pulumi/azure-native/resources";
import * as azuread from "@pulumi/azuread";
import * as containerservice from "@pulumi/azure-native/containerservice";
import * as k8s from "@pulumi/kubernetes";
import * as certmanager from "@pulumi/kubernetes-cert-manager";
import { FileAsset } from "@pulumi/pulumi/asset";
import * as fs from "fs";

// Grab some values from the Pulumi stack configuration (or use defaults)
const projCfg = new pulumi.Config();
const numWorkerNodes = projCfg.getNumber("numWorkerNodes") || 3;
const k8sVersion = projCfg.get("kubernetesVersion") || "1.26.3";
const nodeVmSize = projCfg.get("nodeVmSize") || "Standard_DS2_v2";
const resourceGroupName = projCfg.get("resourceGroupName") || "ac-rg";
const clusterName = projCfg.get("resourceName") || "pulumi-poc";
const enterpriseLicense = "../license.json"; // path to the license file


// Look up the existing Azure resource group
const resourceGroup = pulumi.output(resources.getResourceGroup({
    resourceGroupName: resourceGroupName,
}));

// Look up the existing Azure AD application named "poc"
const adAppName = "poc";
const adApp = pulumi.output(azuread.getApplication({
    displayName: adAppName,
}));

// Retrieve the existing Azure AD service principal by its application ID
const servicePrincipal = adApp.apply(app => azuread.getServicePrincipal({
    applicationId: app.applicationId,
}));

// Create a new service principal password for the existing AD app "poc"
const spPassword = servicePrincipal.apply(sp => new azuread.ServicePrincipalPassword("servicePrincipalPassword", {
    servicePrincipalId: sp.id,
    endDate: "2099-02-02T00:00:00Z",
}));

// Create an Azure Kubernetes Service cluster
const aksCluster = new containerservice.ManagedCluster("aksCluster", {
    resourceName: clusterName, // set the resource name to "pulumi-poc"
    resourceGroupName: resourceGroup.name,
    location: resourceGroup.location,
    dnsPrefix: clusterName,
    agentPoolProfiles: [{
        availabilityZones: ["1", "2", "3"],
        count: numWorkerNodes,
        enableNodePublicIP: false,
        mode: "System",
        name: "systempool",
        osType: "Linux",
        osDiskSizeGB: 30,
        type: "VirtualMachineScaleSets",
        vmSize: nodeVmSize,
    }],
    kubernetesVersion: k8sVersion,
    servicePrincipalProfile: {
        clientId: adApp.applicationId,
        secret: spPassword.value,
    },
});

// Build a kubeconfig to access the cluster
const creds = pulumi.all([aksCluster.name, resourceGroup.name]).apply(([kubeconfigName, rgName]) =>
    containerservice.listManagedClusterUserCredentials({
        resourceGroupName: rgName,
        resourceName: kubeconfigName,
    }));

const encodedKubeconfig = creds.kubeconfigs[0].value!;
const kubeconfig = encodedKubeconfig.apply(kubeconfigYaml => Buffer.from(kubeconfigYaml, "base64").toString());

// Create the k8s provider using the kubeconfig from the AKS cluster
const k8sProvider = new k8s.Provider("k8s-provider", { kubeconfig });
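
// Optional: export the kubeconfig as a secret stack output so it can be fetched
// later with `pulumi stack output kubeconfigOutput --show-secrets`. The output
// name is arbitrary; `pulumi.secret` keeps it encrypted in the Pulumi state.
export const kubeconfigOutput = pulumi.secret(kubeconfig);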

// Create a namespace for Kong
const kongNamespace = new k8s.core.v1.Namespace("namespace", {
    metadata: {
        name: "kong",
    },
}, { provider: k8sProvider });

// Create a secret from the content of the license.json file
const licenseContent = fs.readFileSync(enterpriseLicense, "utf8");
const kongEnterpriseLicense = new k8s.core.v1.Secret("kong-enterprise-license", {
    metadata: {
        name: "kong-enterprise-license",
        namespace: kongNamespace.metadata.name,
    },
    data: {
        license: Buffer.from(licenseContent).toString("base64"),
    },
}, { provider: k8sProvider });

// Define the secret data
const sessionConfigSecretData: Record<string, string> = {
    "portal_session_conf": '{"storage":"kong","secret":"super_secret_salt_string","cookie_name":"portal_session","cookie_samesite":"off","cookie_secure":false, "cookie_domain": ".create9.io"}',
    "admin_gui_session_conf": '{"cookie_name":"admin_session","cookie_samesite":"off","secret":"super_secret_salt_string","cookie_secure":false,"storage":"kong", "cookie_domain": ".create9.io"}',
    "pg_host": "enterprise-postgresql.kong.svc.cluster.local",
    "kong_admin_password": "kong",
    "password": "kong",
};

// Create a Kubernetes secret with the values above, base64-encoded as the `data` field requires
const kongSessionConfig = new k8s.core.v1.Secret("kong-session-config", {
    metadata: {
        name: "kong-config-secret",
        namespace: kongNamespace.metadata.name,
    },
    data: Object.fromEntries(
        Object.entries(sessionConfigSecretData).map(
            ([key, value]) => [key, Buffer.from(value).toString("base64")])),
}, { provider: k8sProvider });
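
// To sanity-check the secret after deployment, you can decode a key from a shell, e.g.:
//   kubectl get secret kong-config-secret -n kong -o jsonpath='{.data.kong_admin_password}' | base64 -d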

// Cert-manager namespace
const certManagerNamespace = new k8s.core.v1.Namespace("cert-manager-namespace", {
    metadata: { name: "cert-manager" },
}, { provider: k8sProvider });

// Install cert-manager
const certManager = new certmanager.CertManager("cert-manager", {
    installCRDs: true,
    helmOptions: {
        name: "cert-manager",
        namespace: certManagerNamespace.metadata.name,
        values: {
            global: {
                operatorNamespace: certManagerNamespace.metadata.name,
                rbac: {
                    create: true,
                },
                logLevel: "debug",
                leaderElection: {
                    namespace: "kube-system",
                },
            },
            serviceAccount: {
                create: true,
                automountServiceAccountToken: true,
            },
            securityContext: {
                runAsNonRoot: true,
            },
            webhook: {
                enabled: true,
                namespace: certManagerNamespace.metadata.name,
                timeoutSeconds: 30,
                serviceAccount: {
                    create: true,
                    automountServiceAccountToken: true,
                },
                hostNetwork: false,
                serviceType: "ClusterIP",
            },
        },
    },
}, { provider: k8sProvider, dependsOn: certManagerNamespace });

// Create a self-signed root certificate authority issuer
const rootIssuer = new k8s.apiextensions.CustomResource("issuerRoot", {
    apiVersion: "cert-manager.io/v1",
    kind: "Issuer",
    metadata: {
        name: "quickstart-kong-selfsigned-issuer-root",
        namespace: kongNamespace.metadata.name,
    },
    spec: {
        selfSigned: {},
    },
}, { provider: k8sProvider, dependsOn: [certManager, kongNamespace] });

// CA certificate for the self-signed issuer
const selfSignedCa = new k8s.apiextensions.CustomResource("selfSignCertificateAuthority", {
    apiVersion: "cert-manager.io/v1",
    kind: "Certificate",
    metadata: {
        name: "quickstart-kong-selfsigned-issuer-ca",
        namespace: kongNamespace.metadata.name,
    },
    spec: {
        commonName: "quickstart-kong-selfsigned-issuer-ca",
        duration: "2160h0m0s",
        isCA: true,
        issuerRef: {
            group: "cert-manager.io",
            kind: "Issuer",
            name: "quickstart-kong-selfsigned-issuer-root",
        },
        privateKey: {
            algorithm: "ECDSA",
            size: 256,
        },
        renewBefore: "360h0m0s",
        secretName: "quickstart-kong-selfsigned-issuer-ca",
    },
}, { provider: k8sProvider, dependsOn: [certManager, kongNamespace] });

// Create the self-signed issuer backed by the CA above
const selfsignedIssuer = new k8s.apiextensions.CustomResource("selfSignIssuer", {
    apiVersion: "cert-manager.io/v1",
    kind: "Issuer",
    metadata: {
        name: "quickstart-kong-selfsigned-issuer",
        namespace: kongNamespace.metadata.name,
    },
    spec: {
        ca: {
            secretName: "quickstart-kong-selfsigned-issuer-ca",
        },
    },
}, { provider: k8sProvider, dependsOn: [certManager, kongNamespace] });

// Deploy Kong Gateway on AKS via the Helm chart
const kongHelmChart = new k8s.helm.v3.Release("quickstart", {
    namespace: kongNamespace.metadata.name,
    chart: "kong",
    name: "quickstart",
    version: "2.23.0",
    repositoryOpts: {
        repo: "https://charts.konghq.com",
    },
    skipAwait: false,
    valueYamlFiles: [new FileAsset("./quickstart-values.yaml")],
    timeout: 600,
}, { provider: k8sProvider, dependsOn: kongNamespace });
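
Before provisioning anything, it can be useful to do a dry run and review the resources Pulumi plans to create:

# Preview the planned changes without applying them
pulumi preview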

Step 3: Define the Kong configuration values in quickstart-values.yaml

Create a quickstart-values.yaml file in the project directory with the following contents; it drives the Helm release defined above.

admin:
  annotations:
    konghq.com/protocol: https
  enabled: true
  http:
    enabled: false
  ingress:
    annotations:
      konghq.com/https-redirect-status-code: "301"
      konghq.com/protocols: https
      konghq.com/strip-path: "true"
      kubernetes.io/ingress.class: default
      nginx.ingress.kubernetes.io/app-root: /
      nginx.ingress.kubernetes.io/backend-protocol: HTTPS
      nginx.ingress.kubernetes.io/permanent-redirect-code: "301"
    enabled: true
    hostname: kong-api.create9.io
    path: /
    tls: quickstart-kong-admin-cert
  tls:
    containerPort: 8444
    enabled: true
    parameters:
      - http2
    servicePort: 443
  type: LoadBalancer
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/instance
                operator: In
                values:
                  - dataplane
          topologyKey: kubernetes.io/hostname
        weight: 100
certificates:
  enabled: true
  issuer: quickstart-kong-selfsigned-issuer
  cluster:
    enabled: true
  admin:
    enabled: true
    commonName: kong.create9.io
  portal:
    enabled: true
    commonName: developer.create9.io
  proxy:
    enabled: true
    commonName: create9.io
    dnsNames:
      - '*.create9.io'
cluster:
  enabled: true
  labels:
    konghq.com/service: cluster
  tls:
    containerPort: 8005
    enabled: true
    servicePort: 443
  type: LoadBalancer
clustertelemetry:
  enabled: true
  tls:
    containerPort: 8006
    enabled: true
    servicePort: 443
  type: LoadBalancer
deployment:
  kong:
    daemonset: false
    enabled: true
enterprise:
  enabled: true
  license_secret: kong-enterprise-license
  portal:
    enabled: true
  rbac:
    admin_api_auth: basic-auth
    admin_gui_auth_conf_secret: kong-config-secret
    enabled: true
    session_conf_secret: kong-config-secret
  smtp:
    enabled: false
  vitals:
    enabled: true
env:
  admin_access_log: /dev/stdout
  admin_api_uri: https://kong-api.create9.io/
  admin_error_log: /dev/stdout
  admin_gui_access_log: /dev/stdout
  admin_gui_error_log: /dev/stdout
  admin_gui_host: kong.create9.io
  admin_gui_protocol: https
  admin_gui_url: https://kong.create9.io/
  cluster_data_plane_purge_delay: 60
  cluster_listen: 0.0.0.0:8005
  cluster_telemetry_listen: 0.0.0.0:8006
  database: postgres
  log_level: debug
  lua_package_path: /opt/?.lua;;
  nginx_worker_processes: "2"
  nginx_proxy_proxy_busy_buffers_size: 256k
  nginx_proxy_proxy_buffers: 4 256k
  nginx_proxy_proxy_buffer_size: 256k
  nginx_http_client_body_buffer_size: 256k
  nginx_proxy_large_client_header_buffers: 8 256k
  trusted_ips: "0.0.0.0/0,::/0"
  # real_ip_recursive: on
  # real_ip_header: X-Forwarded-For
  password:
    valueFrom:
      secretKeyRef:
        key: kong_admin_password
        name: kong-config-secret
  pg_database: kong
  pg_host:
    valueFrom:
      secretKeyRef:
        key: pg_host
        name: kong-config-secret
  pg_ssl: "off"
  pg_ssl_verify: "off"
  pg_user: kong
  plugins: bundled,openid-connect
  portal: true
  portal_api_access_log: /dev/stdout
  portal_api_error_log: /dev/stdout
  portal_api_url: https://developer-api.create9.io/
  portal_auth: basic-auth
  portal_cors_origins: '*'
  portal_gui_access_log: /dev/stdout
  portal_gui_error_log: /dev/stdout
  portal_gui_host: developer.create9.io
  portal_gui_protocol: https
  portal_gui_url: https://developer.create9.io/
  portal_session_conf:
    valueFrom:
      secretKeyRef:
        key: portal_session_conf
        name: kong-config-secret
  prefix: /kong_prefix/
  proxy_access_log: /dev/stdout
  proxy_error_log: /dev/stdout
  proxy_stream_access_log: /dev/stdout
  proxy_stream_error_log: /dev/stdout
  smtp_mock: "on"
  status_listen: 0.0.0.0:8100
  vitals: true
extraLabels:
  konghq.com/component: quickstart
image:
  repository: kong/kong-gateway
  tag: "3.1.1.1-alpine"
ingressController:
  enabled: true
  env:
    kong_admin_filter_tag: ingress_controller_default
    kong_admin_tls_skip_verify: true
    kong_admin_token:
      valueFrom:
        secretKeyRef:
          key: password
          name: kong-config-secret
    kong_admin_url: https://localhost:8444
    kong_workspace: default
    publish_service: kong/quickstart-kong-proxy
  image:
    repository: docker.io/kong/kubernetes-ingress-controller
    tag: "2.7"
  ingressClass: default
  installCRDs: false
manager:
  annotations:
    konghq.com/protocol: https
  enabled: true
  http:
    containerPort: 8002
    enabled: false
    servicePort: 443
  ingress:
    annotations:
      konghq.com/https-redirect-status-code: "301"
      kubernetes.io/ingress.class: default
      nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    enabled: true
    hostname: kong.create9.io
    path: /
    tls: quickstart-kong-admin-cert
  tls:
    containerPort: 8445
    enabled: true
    parameters:
      - http2
    servicePort: 443
  type: LoadBalancer
migrations:
  enabled: true
  postUpgrade: true
  preUpgrade: true
namespace: kong
podAnnotations:
  kuma.io/gateway: enabled
portal:
  annotations:
    konghq.com/protocol: https
  enabled: true
  http:
    containerPort: 8003
    enabled: false
    servicePort: 443
  ingress:
    annotations:
      konghq.com/https-redirect-status-code: "301"
      # konghq.com/protocols: https
      nginx.ingress.kubernetes.io/backend-protocol: HTTPS
      # konghq.com/strip-path: "false"
      kubernetes.io/ingress.class: default
    enabled: true
    hostname: developer.create9.io
    path: /
    tls: quickstart-kong-portal-cert
  tls:
    containerPort: 8446
    enabled: true
    parameters:
      - http2
    servicePort: 443
  type: LoadBalancer
portalapi:
  annotations:
    konghq.com/protocol: https
  enabled: true
  http:
    enabled: false
  ingress:
    annotations:
      konghq.com/https-redirect-status-code: "301"
      konghq.com/protocols: https
      konghq.com/strip-path: "true"
      kubernetes.io/ingress.class: default
      nginx.ingress.kubernetes.io/app-root: /
    enabled: true
    hostname: developer-api.create9.io
    path: /
    tls: quickstart-kong-portal-cert
  tls:
    containerPort: 8447
    enabled: true
    parameters:
      - http2
    servicePort: 443
  type: LoadBalancer
postgresql:
  enabled: true
  auth:
    database: kong
    username: kong
proxy:
  annotations:
    prometheus.io/port: "9542"
    prometheus.io/scrape: "true"
  enabled: true
  http:
    containerPort: 8080
    enabled: true
    hostPort: 80
  ingress:
    enabled: false
  labels:
    enable-metrics: true
  tls:
    containerPort: 8443
    enabled: true
    hostPort: 443
  type: LoadBalancer
replicaCount: 1
secretVolumes: []
status:
  enabled: true
  http:
    containerPort: 8100
    enabled: true
  tls:
    containerPort: 8543
    enabled: false
updateStrategy:
  rollingUpdate:
    maxSurge: "100%"
    maxUnavailable: "100%"
  type: RollingUpdate

# readinessProbe for Kong pods
readinessProbe:
  httpGet:
    path: "/status"
    port: status
    scheme: HTTP
  initialDelaySeconds: 5
  timeoutSeconds: 5
  periodSeconds: 10
  successThreshold: 1
  failureThreshold: 3

# livenessProbe for Kong pods
livenessProbe:
  httpGet:
    path: "/status"
    port: status
    scheme: HTTP
  initialDelaySeconds: 15
  timeoutSeconds: 15
  periodSeconds: 15
  successThreshold: 1
  failureThreshold: 3
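
If you would like to sanity-check this values file before handing it to Pulumi, one option (assuming the Helm CLI is installed, which Pulumi itself does not require) is to render the chart locally; no cluster changes are made:

helm repo add kong https://charts.konghq.com
helm template quickstart kong/kong --version 2.23.0 --namespace kong -f quickstart-values.yaml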

Step 4: Provision the Infrastructure

Once the Pulumi project is configured and the Kong resources and configurations are defined, you can provision the infrastructure. Pulumi will deploy the necessary resources and configure Kong according to your specifications.

Run the following command to deploy Kong on your AKS cluster using Pulumi:

# To deploy your stack
pulumi up

# To destroy the resources later, run:
pulumi destroy

Confirm the changes and wait for the deployment to complete.

Once the deployment finishes, retrieve the external IP addresses of the Kong services and map them to the DNS names referenced in the Helm values file (quickstart-values.yaml).
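
For example, the external IPs can be listed with kubectl (the service names follow from the quickstart release name used above):

# List the Kong services and their external IPs
kubectl get svc -n kong

# Fetch just the proxy's external IP
kubectl get svc quickstart-kong-proxy -n kong -o jsonpath='{.status.loadBalancer.ingress[0].ip}'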

Access the Kong Admin API and the configured services and routes.
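
As a quick smoke test, once DNS resolves kong-api.create9.io to the admin service's external IP, you can call the Admin API with the admin token defined in kong-config-secret (the header and token value are assumptions based on the RBAC settings above; -k skips verification of the self-signed certificate):

curl -k https://kong-api.create9.io/ -H 'Kong-Admin-Token: kong'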

Conclusion

This article demonstrated the deployment of Kong Gateway in traditional (database-backed) mode on an Azure Kubernetes Service (AKS) cluster using Pulumi. By harnessing Pulumi's infrastructure-as-code capabilities, you can streamline the provisioning and management of your AKS cluster while deploying Kong as an API gateway. This approach brings consistency, scalability, and reproducibility to managing both your infrastructure and your API gateway.

Combining Kong’s extensive feature set and Pulumi’s declarative approach empowers you to tackle your API management needs confidently. With Kong Gateway, you can efficiently handle authentication, rate limiting, request/response transformations, and other essential aspects of API traffic management. Pulumi’s infrastructure as code philosophy enables you to treat infrastructure provisioning and configuration as code, facilitating automation and reducing manual effort.

Adopting the deployment approach outlined in this article establishes a solid foundation for managing your microservices architecture and distributed systems. The seamless integration of Kong Gateway and Pulumi’s powerful toolset empowers you to achieve consistent and reliable API management while leveraging the scalability and flexibility of AKS. Embracing this robust solution ensures that your API management needs are met effectively and efficiently.
