ClusterIP is one of the service types in Kubernetes. When creating a manifest file in YAML format, if the kind is Service and no type is specified, Kubernetes defaults to ClusterIP.
Let us assume there is a 2-tier application: a layer of 3 application servers (a replica count of 3) connecting to a next layer of 3 DB servers (a replica count of 3). In a typical traditional network, the 3 DB servers would sit behind a load balancer (LB), and each of the 3 application servers would access the LB, which routes each connection to one of the 3 target DB servers.
In the world of Kubernetes, each of these DB servers is a pod with its own IP address, and the IPs sitting behind the LB mentioned above can change whenever a DB pod goes down and the ReplicaSet replaces it with a new one.
In the world of Kubernetes, the ClusterIP service takes on the role of the LB in the architecture above: it discovers and tracks the IPs (including new ones) of the pods hosting the DB containers.
As an example, let us create a ReplicaSet that deploys 2 pods with the nginx image, carrying the label "theApp: sv1".
[root@kubmaster01 ~]# cat replicaset1.yml
=========
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myreplicaset
  labels:
    theApp: sv1
spec:
  template:
    metadata:
      name: mypod1
      labels:
        theApp: sv1
    spec:
      containers:
      - name: mypodcontainer
        image: nginx
  replicas: 2
  selector:
    matchLabels:
      theApp: sv1
=========
Now let us create a service with type ClusterIP.
[root@kubmaster01 ~]# cat service-clusterip.yml
=========
apiVersion: v1
kind: Service
metadata:
  name: front-end
spec:
  type: ClusterIP
  ports:
  - targetPort: 80
    port: 80
  selector:
    theApp: sv1
=========
[root@kubmaster01 ~]# kubectl create -f replicaset1.yml
[root@kubmaster01 ~]# kubectl create -f service-clusterip.yml
[root@kubmaster01 ~]# kubectl get all | grep front-end
service/front-end ClusterIP 10.106.148.176 <none> 80/TCP 18m
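The service finds the pod IPs through its label selector and records them in an Endpoints object of the same name, which can be inspected with kubectl. A sketch of what that looks like (the pod IPs shown are illustrative, not taken from this cluster):

```
[root@kubmaster01 ~]# kubectl get endpoints front-end
NAME        ENDPOINTS                   AGE
front-end   10.44.0.1:80,10.36.0.1:80   18m
```

If the ENDPOINTS column is empty, the service selector does not match any running pod's labels.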
Now that we have the ClusterIP, which is 10.106.148.176, let us check which nodes the pods created by the ReplicaSet are running on.
[root@kubmaster01 ~]# kubectl get pods -o wide
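The `-o wide` output lists, among other columns, each pod's IP and the node it was scheduled on. It looks roughly like this (pod names, IPs, and node names are illustrative):

```
NAME                 READY   STATUS    RESTARTS   AGE   IP          NODE
myreplicaset-7vx4m   1/1     Running   0          20m   10.44.0.1   kubworker01
myreplicaset-9kd2p   1/1     Running   0          20m   10.36.0.1   kubworker02
```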
If the 2 pods are created and running on both worker nodes, the service can be reached with the command below from either node.
[root@kubworker01 ~]# curl http://10.106.148.176
Note that the ClusterIP remains reachable from any node in the cluster even if both pods are scheduled onto a single node, because kube-proxy programs the service routing rules on every node.
Because the web page is accessed through the ClusterIP, it does not matter if the pods are recreated and their IPs change. This can be verified by scaling the ReplicaSet down to zero and then back up: as the pods are recreated their IPs change, but they can still be reached through the same ClusterIP.
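A sketch of that verification, using the ReplicaSet name from the manifest above:

```
[root@kubmaster01 ~]# kubectl scale replicaset myreplicaset --replicas=0
[root@kubmaster01 ~]# kubectl scale replicaset myreplicaset --replicas=2
[root@kubmaster01 ~]# kubectl get pods -o wide
[root@kubmaster01 ~]# curl http://10.106.148.176
```

The `get pods -o wide` output should show new pod names and new pod IPs, yet the curl against the unchanged ClusterIP still returns the nginx welcome page.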