Implementing Knative Deployment and Routing for Public Exposure

Shantanu Dey Anik
6 min read · Mar 21, 2024


Knative revolutionizes serverless computing by seamlessly integrating with Kubernetes. It provides a powerful set of tools for building, deploying, and managing serverless applications, offering event-driven autoscaling and scale-to-zero capabilities. With Knative, developers can focus on innovation, leveraging its robust APIs and vibrant ecosystem for rapid application development.

Knative is best suited for microservices that are not frequently used: it scales up pod deployments only when requests arrive, and by scaling pods to zero when inactive, it conserves cluster resources. That's precisely why it's considered serverless.

In this write-up, we will see how to deploy Knative and Knative services to Kubernetes. Additionally, we will explore how to route the Knative services using Ingress, Kong, and Nginx.

First, we need to deploy Knative Serving. You can find the official documentation here: https://knative.dev/docs/install/yaml-install/serving/install-serving-with-yaml/ .

However, I’ll outline the steps I’ve followed.

Install the Knative Serving component

To install the Knative Serving component:

  1. Install the required custom resources by running the command:
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.13.1/serving-crds.yaml
  2. Install the core components of Knative Serving by running the command:
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.13.1/serving-core.yaml

Install a networking layer

The following commands install Istio and enable its Knative integration.

  1. Install a properly configured Istio by following the Advanced Istio installation instructions or by running the commands:
kubectl apply -l knative.dev/crd-install=true -f https://github.com/knative/net-istio/releases/download/knative-v1.13.1/istio.yaml
kubectl apply -f https://github.com/knative/net-istio/releases/download/knative-v1.13.1/istio.yaml
  2. Install the Knative Istio controller by running the command:
kubectl apply -f https://github.com/knative/net-istio/releases/download/knative-v1.13.1/net-istio.yaml
  3. Fetch the external IP address or CNAME by running the command:
kubectl --namespace istio-system get service istio-ingressgateway

NB: On clusters without a load balancer, the EXTERNAL-IP may remain pending; in that case, use a master or worker node IP together with the service's NodePort instead.
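When the EXTERNAL-IP stays pending, the gateway's NodePort can be extracted directly. Here is a small sketch; it assumes the gateway's HTTP port is named http2, which is the Istio default:

```shell
# Extract the NodePort that maps to the gateway's HTTP port
# (the port name "http2" is the Istio default; adjust if yours differs).
PORT=$(kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')

# Any node IP works for a NodePort service; 172.19.0.2 is the
# node IP used later in this article.
NODE_IP=172.19.0.2
GATEWAY="${NODE_IP}:${PORT}"
echo "Gateway reachable at ${GATEWAY}"
```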

In this write-up, I'll demonstrate routing services using Kong and Nginx. Therefore, I'll be referring to the 'No DNS' section under the 'Configure DNS' section in the original documentation.

You can verify that all Knative Serving pods are running by executing the following command and inspecting the output:

kubectl get pods -n knative-serving

Now that our Knative Serving is deployed in the cluster, let's deploy two example microservice applications to verify if Knative is functioning correctly. Additionally, we'll utilize these services for routing purposes.

For example-service-1.yaml

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-service-one
spec:
  template:
    spec:
      containers:
        - image: ghcr.io/knative/helloworld-go:latest
          ports:
            - containerPort: 8080
          env:
            - name: TARGET
              value: "This is service 1"

For example-service-2.yaml

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-service-two
spec:
  template:
    spec:
      containers:
        - image: ghcr.io/knative/helloworld-go:latest
          ports:
            - containerPort: 8080
          env:
            - name: TARGET
              value: "This is service 2"

Next, let's create a namespace and apply the manifest files:

kubectl create ns kn-test
kubectl apply -f example-service-2.yaml -n kn-test
kubectl apply -f example-service-1.yaml -n kn-test

Let's verify that the Knative services have been deployed:

kubectl get ksvc -n kn-test

Let's patch the URL domain according to our requirements. Suppose our domain is example.com:

kubectl patch configmap/config-domain \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"example.com":""}}'
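To confirm the patch took effect, the ConfigMap and the service URLs can be inspected. This is a quick check, not part of the original steps:

```shell
# The custom domain should now appear as a key in the ConfigMap data.
kubectl get configmap config-domain -n knative-serving -o jsonpath='{.data}'

# The ksvc URLs should also reflect the new domain.
kubectl get ksvc -n kn-test
```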

Now the service URLs will follow the pattern http://<service-name>.<namespace>.example.com, for example http://example-service-one.kn-test.example.com.

Initially, each service's deployment remains at zero replicas. When the service is called via its URL, Knative scales it up accordingly. Next, let's examine what happens when we send a request to the istio-ingressgateway service NodePort with the Host header of a newly deployed service. In my case, the istio-ingressgateway IP and NodePort are 172.19.0.2:32212.
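The request above can be sketched with curl. The gateway address is the one from this article; Knative routes on the Host header, so it must carry the ksvc hostname:

```shell
# NodePort address of the istio-ingressgateway service (from this cluster).
GATEWAY=172.19.0.2:32212

# Knative routes on the Host header, so set it to the ksvc hostname.
curl -s -H "Host: example-service-one.kn-test.example.com" "http://${GATEWAY}"
curl -s -H "Host: example-service-two.kn-test.example.com" "http://${GATEWAY}"
```

The first request may take a few seconds while the pod cold-starts; repeated requests are served immediately once the deployment is scaled up.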

During the initial pod creation, there may be a delay in processing requests. However, once the deployment is scaled up, it will operate smoothly. Additionally, if there are no requests during the Knative grace period, the deployment will be scaled down to zero.

Exposing the service publicly

Let's expose this service publicly using an Nginx reverse proxy.

knative.example.com.conf file:

upstream knative-test {
    server 172.19.0.2:32212;
}

server {
    listen 80;
    server_name knative.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name knative.example.com;
    ssl_certificate /etc/nginx/conf.d/ssl/knative.example.com.crt;
    ssl_certificate_key /etc/nginx/conf.d/ssl/knative.example.com.key;

    location / {
        default_type text/html;
        return 200 "Hello, the services are available at the /api/v1 route";
    }

    location /api/v1/service-1 {
        rewrite ^/api/v1/service-1(/.*)$ $1 break;
        proxy_pass http://knative-test;
        ## It's crucial to set the Host header to the URL obtained from the Knative Service (ksvc).
        proxy_set_header Host example-service-one.kn-test.example.com;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
    }

    location /api/v1/service-2 {
        rewrite ^/api/v1/service-2(/.*)$ $1 break;
        proxy_pass http://knative-test;
        ## It's crucial to set the Host header to the URL obtained from the Knative Service (ksvc).
        proxy_set_header Host example-service-two.kn-test.example.com;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
    }
}

Let's set up a host alias for this domain against the Nginx IP address and test the URL route.

[Screenshots: responses from example-service-1 and example-service-2]
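Concretely, the host alias and test might look like this; 203.0.113.10 stands in for the Nginx host's IP address, so replace it with yours:

```shell
# Map the domain to the Nginx host (hypothetical IP; use your own).
echo "203.0.113.10  knative.example.com" | sudo tee -a /etc/hosts

# -k skips certificate verification, useful with a self-signed cert.
curl -k https://knative.example.com/api/v1/service-1
curl -k https://knative.example.com/api/v1/service-2
```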

While the route through Nginx is functioning, adding extended features like authentication, rate limiting, and consumer management would be challenging with Nginx alone. Therefore, an API gateway like Kong is a better option for API routing.

Using Kong API gateway [Kong OSS Manager]

Here, we will utilize Kong Manager to add a gateway service. If you’re unsure how to access Kong Manager, you can refer to this guide: https://medium.com/@shantanudeyanik/install-kong-with-a-database-and-custom-values-using-helm-by-fetching-chart-locally-2a1f72e881fb

Add a new gateway service, naming it after the service itself, and set the Upstream URL to the URL obtained from the Knative Service earlier, excluding the base domain, in the following format:

http://<servicename>.<namespace>
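For readers who prefer the Kong Admin API over the Manager UI, the same gateway service can be created with curl. This assumes the Admin API is reachable at localhost:8001:

```shell
# Create a Kong gateway service pointing at the ksvc's in-cluster URL
# (service name and namespace, without the base domain).
curl -i -X POST http://localhost:8001/services \
  --data name=example-service-one \
  --data url=http://example-service-one.kn-test
```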

After saving the gateway service, navigate to the gateway service and go to the 'Routes' tab. Then, add a new route.

Similar to before, name the route as the service name, specify the path where we want the service, and add the hostname which will be matched by Kong to route to the service. Ensure 'strip path' is set to true and 'preserve host' is set to false.
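Via the Admin API, the equivalent route creation would be (again assuming the Admin API at localhost:8001):

```shell
# Attach a route to the gateway service. strip_path removes the /api/v1/...
# prefix before proxying, and preserve_host=false makes Kong send the
# upstream URL's host instead of the client's Host header.
curl -i -X POST http://localhost:8001/services/example-service-one/routes \
  --data name=example-service-one \
  --data 'paths[]=/api/v1/service-1' \
  --data 'hosts[]=knative.example.com' \
  --data strip_path=true \
  --data preserve_host=false
```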

Let's verify whether we can access the service through Kong.

Since I haven't routed service-2 yet, Kong is giving us an error.

Here is an extended service YAML file that includes security context and revision control.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-api-service
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "1"
    spec:
      containerConcurrency: 0
      containers:
        - name: example-api-service
          image: ghcr.io/knative/helloworld-go:latest
          ports:
            - containerPort: 8080
          env:
            - name: TARGET
              value: "This is Test service"
          envFrom:
            - configMapRef:
                name: example-api-cmp
            - secretRef:
                name: example-api-secret
          securityContext:
            allowPrivilegeEscalation: false
            runAsUser: 1000
            runAsGroup: 1000
            seccompProfile:
              type: RuntimeDefault
            capabilities:
              drop:
                - ALL
            runAsNonRoot: true
      enableServiceLinks: false
      timeoutSeconds: 300
  traffic:
    - latestRevision: true
      percent: 100
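Applying the manifest and checking the effect of the min-scale annotation might look like this (a quick check, assuming the manifest is saved as example-api-service.yaml):

```shell
kubectl apply -f example-api-service.yaml -n kn-test

# With autoscaling.knative.dev/min-scale: "1", one pod should remain
# running even when the service receives no traffic.
kubectl get pods -n kn-test -l serving.knative.dev/service=example-api-service
```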

Deploying services using Knative on Kubernetes offers a flexible and efficient solution, especially for microservices that are not frequently accessed. By scaling pods to zero during inactivity, Knative conserves system resources, making it a serverless option. While Nginx can handle basic routing, integrating advanced features like authentication and rate limiting can be cumbersome. Thus, employing an API gateway like Kong enhances API routing capabilities, ensuring better management and security for the deployed services.
