Pod Affinity

Use Cases

  • If two pods each consume a lot of network and compute resources, you may not want them scheduled onto the same node.

  • If two pods depend on each other, you may want them scheduled onto the same node.


Hands-On

podAffinity:

apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: run
            operator: In
            values:
            - frontend
        topologyKey: kubernetes.io/hostname
  containers:
    - image: nginx
      name: nginx

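Here topologyKey: kubernetes.io/hostname makes the node the co-location domain. A sketch of the same rule at coarser granularity, assuming nodes carry the standard topology.kubernetes.io/zone label, would only require the pods to share a zone rather than a node:

```yaml
# Sketch: the same affinity term, but co-located per zone instead of per node.
# Assumes nodes are labeled with the standard topology.kubernetes.io/zone key.
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: run
          operator: In
          values:
          - frontend
      topologyKey: topology.kubernetes.io/zone
```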
First, create a frontend pod (kubectl run labels it run=frontend by default, which is exactly what the matchExpressions above select):

kubectl run frontend --image=nginx

After kubectl apply, both pods end up on the same node:

frontend-6c79f8b69d-kwkdp 1/1 Running 0 19s 192.168.172.247 ip-192-168-175-56.us-west-2.compute.internal <none> <none>

with-pod-affinity 1/1 Running 0 7s 192.168.136.147 ip-192-168-175-56.us-west-2.compute.internal <none> <none>

Anti-Affinity (delete the previous with-pod-affinity pod first, since this example reuses the name):

apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: run
              operator: In
              values:
              - frontend
          topologyKey: kubernetes.io/hostname
  containers:
    - image: nginx
      name: nginx
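preferredDuringSchedulingIgnoredDuringExecution is a soft rule: with weight 100 the scheduler tries hard to separate the pods, but will still co-locate them if no other node fits. A sketch of the hard variant, which leaves the pod Pending rather than ever co-locating it (note that a required term has no weight or podAffinityTerm wrapper):

```yaml
# Sketch: hard anti-affinity. If every schedulable node already runs a
# run=frontend pod, this pod stays Pending instead of landing next to one.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: run
          operator: In
          values:
          - frontend
      topologyKey: kubernetes.io/hostname
```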

After kubectl apply, the two pods land on different nodes:

with-pod-affinity 1/1 Running 0 11s 192.168.217.125 ip-192-168-209-134.us-west-2.compute.internal <none> <none>

frontend-6c79f8b69d-kwkdp 1/1 Running 0 3m10s 192.168.172.247 ip-192-168-175-56.us-west-2.compute.internal <none> <none>