Service Catalog Installation and Troubleshooting on a Kubernetes Cluster Running on Bare Metal
The aggregation layer in a Kubernetes cluster enables third-party resource providers to integrate their own custom, Kubernetes-like extension API servers with the existing kube-apiserver. This gives them the flexibility to define their own API versions while staying loosely coupled with the kube-apiserver, so they can offer customized capabilities to end users.
Service Catalog builds on this mechanism and provides the functionality to integrate with the Open Service Broker API. In multi-cloud as well as on-premise scenarios, the Open Service Broker API gives you the flexibility to consume services from various service providers within the same K8S cluster.
I am not going to discuss the Open Service Broker API itself here, but for the Open Service Broker API to work, Service Catalog must first be installed in the existing Kubernetes cluster. In this post I will discuss how to set up the aggregation layer in a Kubernetes cluster, how to install Service Catalog in the cluster, the challenges I faced, and how I resolved them.
Service Catalog is an open source project which provides its own API server and controller manager, known as catalog-apiserver and catalog-controller-manager respectively.
catalog-apiserver is an extension API server which must be registered with the existing kube-apiserver for Service Catalog to work with the Open Service Broker API.
To enable the aggregation layer in kube-apiserver, we must add the following flags to the kube-apiserver manifest.
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
--requestheader-allowed-names=front-proxy-client
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
front-proxy is the proxy identity kube-apiserver uses when forwarding requests to an extension API server, so the extension API server can authenticate and authorize those proxied requests.
These flags enable the aggregation layer in kube-apiserver: every request for the aggregated API that reaches kube-apiserver is proxied to catalog-apiserver, and catalog-apiserver uses the front-proxy certificates to authenticate and authorize those proxied requests.
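On a kubeadm-provisioned cluster these flags usually live in the static pod manifest at /etc/kubernetes/manifests/kube-apiserver.yaml, which the kubelet watches and reapplies automatically. Below is a minimal sketch of the relevant part of that manifest; the image tag and certificate paths are assumptions taken from a default kubeadm layout and may differ in your cluster.

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver-amd64:v1.11.0   # example tag; use your cluster's version
    command:
    - kube-apiserver
    # ... keep your existing flags here ...
    # aggregation layer flags:
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User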
Now, to install service-catalog in the K8S cluster, we will use Helm charts.
From inside the catalog directory of the service-catalog Helm charts, execute helm install . --name catalog --namespace catalog. After this completes, you should see catalog-apiserver and catalog-controller-manager in Running status by executing
kubectl get pods -n catalog.
Both pods should be listed with status Running.
Challenges I faced installing service-catalog on bare metal:
- Catalog API Server uses an APIService object to register itself with kube-apiserver, so this APIService object must have status True for catalog-apiserver and catalog-controller-manager to work properly. To check it, execute kubectl describe apiservice v1beta1.servicecatalog.k8s.io. If the object shows status FailedDiscoveryCheck with the reason no response from https://10.233.11.240:443: Get https://10.233.11.240:443: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), it might be because the aggregation process is not able to register catalog-apiserver with kube-apiserver. To fix this issue I removed http_proxy, https_proxy and no_proxy from the env section of the kube-apiserver manifest; kube-apiserver must run without any proxy to register an extension apiserver with it. A sketch of this APIService object is shown after this list.
- If the APIService object registers successfully but kube-apiserver suddenly starts lagging and its performance is impacted, disable the TLS check in the APIService object by adding the insecureSkipTLSVerify: true flag and removing caBundle from the apiregistration.yml file of service-catalog, and add hostNetwork: true to catalog-apiserver-deployment.yml and catalog-controller-manager-deployment.yml. This fix removes the lagging issue and also stabilizes the catalog-controller-manager pod (see the second sketch after this list).
- If the catalog-apiserver and catalog-controller-manager pods are crashing continuously because of liveness and readiness probe failures, disable the health checks of these pods by changing the corresponding value to false in the values.yml file of the service-catalog Helm charts (see the values.yml sketch after this list). This stabilizes the catalog-apiserver and catalog-controller-manager pods by disabling their health checks.
- If you are using an overlay network, stick to Calico or Weave Net; I used Calico as the overlay network. Otherwise you might run into other network misconfiguration issues.
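For the first challenge, here is a rough sketch of the APIService object that registers catalog-apiserver. The service name, namespace and priority values are assumptions based on installing the chart with release name catalog into the catalog namespace; check the apiregistration.yml generated by your chart version for the exact values. Once registration succeeds, kubectl describe apiservice v1beta1.servicecatalog.k8s.io should report the Available condition as True.

apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.servicecatalog.k8s.io
spec:
  group: servicecatalog.k8s.io
  version: v1beta1
  service:
    name: catalog-catalog-apiserver   # assumed service name for release "catalog"
    namespace: catalog
  groupPriorityMinimum: 10000          # example priority values
  versionPriority: 20
  caBundle: <base64-encoded CA certificate>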
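For the second challenge, the changes look roughly like this. The field names follow the standard APIService and Deployment schemas, but the surrounding file layout depends on your service-catalog chart version, so treat this as a sketch rather than a drop-in patch.

In apiregistration.yml, remove the caBundle field and add:

spec:
  insecureSkipTLSVerify: true   # skip TLS verification of the proxied connection

In catalog-apiserver-deployment.yml and catalog-controller-manager-deployment.yml:

spec:
  template:
    spec:
      hostNetwork: true   # run the pods in the node's network namespace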
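For the third challenge, the health check toggle in values.yml looked roughly like this in the chart version I used; the exact key names (for example apiserver.healthcheck.enabled) are an assumption, so check your chart's values.yml before editing.

apiserver:
  healthcheck:
    enabled: false   # disables liveness/readiness probes for catalog-apiserver
controllerManager:
  healthcheck:
    enabled: false   # disables liveness/readiness probes for catalog-controller-manager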
I hope this helps you set up and troubleshoot service-catalog in a Kubernetes cluster. Queries and suggestions are welcome.
Thanks!!!