Exposing Services on OKE: Three Ways


The OCI Always Free tier also includes a load balancer that we can use with our cluster. In this post I'll run you through two different ways to use it, plus a third option for exposing services that bypasses the load balancer entirely.

There are some limits to the Always Free tier to bear in mind when exposing services:

  • Always Free includes a Flex load balancer with 10 Mbps of up/down bandwidth
  • Always Free includes 10TB of outbound traffic per month

Practically speaking the 10TB limit is irrelevant: even if you max out your 10 Mbps connection for an entire month, that only comes to around 3.2TB (10 Mbps ≈ 1.25 MB/s, and 1.25 MB/s × 86,400 seconds × 30 days ≈ 3.24TB). For most hobbyists these limits should not be a problem, but be aware that if you deploy an image-heavy blog that becomes a smash hit, you might need to re-think your choice of cloud hosting!

Test Application

Throughout this series I'll be using Ghost as my example application. Ghost is a lightweight blogging platform that has no dependency on an external database (when run in development mode, anyway), so it's ideal for testing out OKE. The manifest below is what I use to deploy Ghost. You can of course use your own apps and manifests instead if you want.

💡
I don't intend to go into much detail about what these manifests do - if you don't understand the basics, I suggest working through an introduction to Kubernetes first so you have the background.
---
apiVersion: v1
kind: Namespace
metadata:
  name: ghost

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ghost
  namespace: ghost
  labels:
    app: ghost
spec:
  selector:
    matchLabels:
      app: ghost
  template:
    metadata:
      labels:
        app: ghost
    spec:
      containers:
        - name: ghost
          image: ghost:5.59.0-alpine
          ports:
            - containerPort: 2368
          env:
            - name: NODE_ENV
              value: development
            - name: url
              value: http://blog.example.com # use your own domain here

ghost.yaml

These create a very basic install of Ghost inside its own namespace, but they don't expose anything yet. Apply these using kubectl:

kubectl apply -f ghost.yaml
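
Before moving on, you can check that the pod has started correctly (the exact pod name and age will differ):

kubectl get pods -n ghost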

Note that you will need your own domain name to point to the external IPs where we will be exposing our services. If you don't have one you could use a dynamic DNS service like noip.com.

Using a Kubernetes Load Balancer

To create a load balancer we need an additional manifest of kind "Service". This will be annotated to tell OKE that we want the size and shape of LB that fits within the Always Free tier - the "flexible" shape with a minimum and maximum bandwidth of "10" (meaning Mbps):

---
apiVersion: v1
kind: Service
metadata:
  name: ghost
  namespace: ghost
  annotations:
    service.beta.kubernetes.io/oci-load-balancer-shape-flex-max: "10"
    service.beta.kubernetes.io/oci-load-balancer-shape-flex-min: "10"
    service.beta.kubernetes.io/oci-load-balancer-shape: "flexible"
    oci.oraclecloud.com/load-balancer-type: "lb"
spec:
  type: LoadBalancer
  ports:
    - name: ghost-http
      port: 80
      targetPort: 2368
  selector:
    app: ghost

load-balancer.yaml

Apply the configuration using kubectl:

kubectl apply -f load-balancer.yaml

It may take a few moments for the LB to be created, but when it comes up it will automatically be assigned an external IP address, which you can discover by running:

kubectl get svc -n ghost

And you should get a result like this:
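
The exact IPs and ages will differ - the values below are illustrative:

NAME    TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)        AGE
ghost   LoadBalancer   10.96.132.10   130.162.184.186   80:31234/TCP   2m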

The "EXTERNAL-IP" column shows you what your assigned public internet IP address is. Browse to http://130.162.184.186 (or whatever your assigned IP is) and you should see the Ghost sign-up screen. A good idea is to point your public DNS names to this address.

Ingress Nginx as a Load Balancer

Standard load balancers are fine, but they can be limiting: you need to deploy one load balancer per exposed application, which quickly gets expensive, and you still need another solution for features like TLS encryption.

Another approach is to use Ingress Nginx as an application load balancer, with traffic routed to the correct K8S workloads via an "Ingress" object. Ingresses allow a single load balancer to handle traffic for multiple web services on the cluster - for example, "blog.example.com" could be directed to your blog while "mailinglist.example.com" goes to a mailing list sign-up page.

Even if you only want to expose a single service to the internet, it's still a good idea to look at using an Ingress provider as they make it much easier to add TLS, authentication, traffic manipulation and all sorts of useful features over and above what is possible with a basic LB.

There are many options for ingress controllers in Kubernetes including Traefik and Contour, but my preferred option is Ingress Nginx which I will be using for the next part of this guide.

First, to stay within the Always Free limits, we need to delete the Load Balancer service that we created in the previous section, to ensure we don't get charged for having two:

kubectl delete -f load-balancer.yaml

To install Ingress Nginx you need to have the Helm package manager CLI installed - see the instructions at https://helm.sh/docs/intro/install/

Helm uses the same kubeconfig as kubectl, so all you need to do is run the following commands to deploy Nginx with an appropriately sized load balancer:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

helm repo update

helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace \
   --set controller.service.annotations.service\\.beta\\.kubernetes\\.io/oci-load-balancer-shape=flexible \
   --set controller.service.annotations.service\\.beta\\.kubernetes\\.io/oci-load-balancer-shape-flex-min=10 \
   --set controller.service.annotations.service\\.beta\\.kubernetes\\.io/oci-load-balancer-shape-flex-max=10

After a few seconds Ingress Nginx should be installed, and OCI should automatically provision a load balancer of the requested shape and assign it a public IP address. You can check that this has worked successfully by running the following kubectl command:

kubectl get svc ingress-nginx-controller -n ingress-nginx

You should see output like the below. The service "ingress-nginx-controller" is the one that will handle all traffic from the internet. Again the "EXTERNAL-IP" column tells you your public IP address - this is the address to use for all services that sit behind the LB when accessing your cluster from the internet.
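
The exact IPs and ages will differ - the values below are illustrative (note that the new LB will have a different IP from the one we deleted earlier):

NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.96.201.55   130.162.190.12   80:31573/TCP,443:30969/TCP   3m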

We then need to create a ClusterIP Service (replacing the LoadBalancer Service we deleted earlier) and an "Ingress" to allow traffic to pass from Ingress Nginx to our ghost service:

---
apiVersion: v1
kind: Service
metadata:
  name: ghost
  namespace: ghost
spec:
  type: ClusterIP
  ports:
    - name: ghost-http
      port: 80
      targetPort: 2368
  selector:
    app: ghost

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ghost
  namespace: ghost
spec:
  ingressClassName: nginx
  rules:
    - host: blog.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ghost
                port:
                  number: 80

ingress.yaml

Apply this with kubectl.

Because ingresses work at a higher layer of the OSI model than load balancers - Ingress Nginx routes based on the HTTP Host header rather than just forwarding packets - the hostname in your Ingress rule ("blog.example.com" in this case) must resolve to the load balancer's external IP. Ingress Nginx will then work out where your Ghost app is running and direct the traffic accordingly.
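
If the hostname doesn't route as expected, a quick sanity check is to confirm it resolves to the load balancer's IP (substitute your own domain):

dig +short blog.example.com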

Once you have done this, browse to "blog.example.com" and you should see the Ghost site.

💡
This is fine as a basic lab but you really should add TLS and set up authentication to prevent someone else hijacking the site.

Cloudflare Tunnels

One way to completely bypass the bandwidth and traffic limits of the Flex load balancer is to avoid using it entirely. Cloudflare is perhaps best known for its CDN capabilities, but it also provides a technology called Tunnels, which links a service inside your network directly to Cloudflare's edge platform without exposing the service to the internet - no load balancer or ingress required.

Using Cloudflare does mean you have to use their DNS services for your site, but in return you get TLS encryption without any certificate management, plus CDN caching that reduces the load on your actual webservers - helping you get the most from the limited power of your OKE nodes.

First, make sure you have signed up for Cloudflare and added your site's DNS zone to the platform.

Then download the cloudflared command line utility. This allows you to create new tunnels and save the credentials needed to connect Cloudflare to your OKE cluster:

# Download cloudflared binary
wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64 -O cloudflared
# Provide execute permissions
chmod +x cloudflared
# Move to pathed location
sudo mv cloudflared /usr/local/bin/

Next you need to log in to the Cloudflare service and request a new tunnel:

cloudflared tunnel login

A browser should open to let you complete the sign-in process within Cloudflare, including passing any MFA checks. Then create a tunnel:

cloudflared tunnel create oke-test
# Results:
Tunnel credentials written to /home/user/.cloudflared/54aabb5b-e690-4737-9783-5dda98e9b91b.json. cloudflared chose this file based on where your origin certificate was found. Keep this file secret. To revoke these credentials, delete the tunnel.

Created tunnel oke-test with id 54aabb5b-e690-4737-9783-5dda98e9b91b

Then link your newly created tunnel with a DNS domain or subdomain so that CF will know to send requests for that address down the tunnel to your service:

cloudflared tunnel route dns oke-test blog.mattscott.cloud
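
You can verify that the tunnel exists with:

cloudflared tunnel list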

To connect the tunnel to the service running in Kubernetes, we need to run cloudflared itself within the cluster. First create a ConfigMap to hold the tunnel's settings:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: cloudflared-config
  namespace: ghost
data:
  config.yaml: |
    tunnel: 54aabb5b-e690-4737-9783-5dda98e9b91b
    credentials-file: /etc/cloudflared/creds/credentials.json
    no-autoupdate: true
    ingress:
      - hostname: blog.mattscott.cloud
        service: http://ghost:80
      - service: http_status:404

configmap.yaml

We also need to add the credential file that we generated earlier to our Kubernetes cluster as a secret. Note that the secret name ("tunnel-credentials") and key ("credentials.json") must match what the ConfigMap above and the Deployment below expect:

kubectl create secret -n ghost generic tunnel-credentials \
  --from-file=credentials.json=/home/user/.cloudflared/54aabb5b-e690-4737-9783-5dda98e9b91b.json
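
To double-check that the key inside the secret matches the credentials-file path in the ConfigMap, you can inspect it:

kubectl describe secret tunnel-credentials -n ghost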

The last thing to create is a deployment that will run the cloudflared service within our cluster, and connect out to the CF platform to establish the tunnel and allow traffic in:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudflared
  namespace: ghost
spec:
  selector:
    matchLabels:
      app: cloudflared
  template:
    metadata:
      labels:
        app: cloudflared
    spec:
      containers:
        - name: cloudflared
          image: docker.io/cloudflare/cloudflared:2024.4.1
          args:
            - tunnel
            - --config
            - /etc/cloudflared/config/config.yaml
            - run
          volumeMounts:
            - name: config
              mountPath: /etc/cloudflared/config
              readOnly: true
            - name: creds
              mountPath: /etc/cloudflared/creds
              readOnly: true
      volumes:
        - name: creds
          secret:
            secretName: tunnel-credentials
        - name: config
          configMap:
            name: cloudflared-config
            items:
              - key: config.yaml
                path: config.yaml

deployment.yaml

Add the Deployment and ConfigMap to your cluster:

kubectl apply -f configmap.yaml -f deployment.yaml

After a few seconds the tunnel should be established, and you should be able to browse to your domain and see your site served via the tunnel.
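
If the site doesn't load, the cloudflared pod's logs are the first place to look - a healthy tunnel logs that it has registered connections to the Cloudflare edge (the exact wording varies by version):

kubectl logs -n ghost deployment/cloudflared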

Now you should have everything you need to start running your own services on OKE!