How do I configure auditing for a standard Kubernetes cluster connected to the platform?

After a standard Kubernetes cluster is connected to the platform, you must update the cluster's audit-related configuration before the platform can collect the cluster's audit data.

The overall process is as follows:

  1. Create a policy.yaml file locally that contains the audit policy.

  2. Upload the policy.yaml file to the /etc/kubernetes/audit/ directory on every Master node in the cluster.

    Tip: the /etc/kubernetes/audit/ directory must be created manually.

  3. Edit the /etc/kubernetes/manifests/kube-apiserver.yaml file on every Master node to add or modify the audit-related flags and the volume mount configuration.

  4. Verify that the configuration has taken effect.

Procedure

  1. Copy the following YAML content and save it as a local file named policy.yaml.

    • If the Kubernetes cluster version is earlier than 1.24, use the following YAML file. (The audit.k8s.io/v1beta1 API was removed in Kubernetes 1.24; for version 1.24 and later, set apiVersion to audit.k8s.io/v1 instead.)
    apiVersion: audit.k8s.io/v1beta1 # This is required.
    kind: Policy
    # Don't generate audit events for all requests in RequestReceived stage.
    omitStages:
      - "RequestReceived"
    rules:
      # The following requests were manually identified as high-volume and low-risk,
      # so drop them.
      - level: None
        users:
          - system:kube-controller-manager
          - system:kube-scheduler
          - system:serviceaccount:kube-system:endpoint-controller
        verbs: ["get", "update"]
        namespaces: ["kube-system"]
        resources:
          - group: "" # core
            resources: ["endpoints"]
      # Don't log these read-only URLs.
      - level: None
        nonResourceURLs:
          - /healthz*
          - /version
          - /swagger*
      # Don't log events requests.
      - level: None
        resources:
          - group: "" # core
            resources: ["events"]
      # Don't log devops requests.
      - level: None
        resources:
          - group: "devops.alauda.io"
      # Don't log get list watch requests.
      - level: None
        verbs: ["get", "list", "watch"]
      # Don't log system's lease operation
      - level: None
        namespaces:
          [
            "kube-system",
            "cpaas-system",
            "alauda-system",
            "istio-system",
            "kube-node-lease",
          ]
        resources:
          - group: "coordination.k8s.io"
            resources: ["leases"]
      # Don't log access review and token review requests.
      - level: None
        resources:
          - group: "authorization.k8s.io"
            resources: ["subjectaccessreviews", "selfsubjectaccessreviews"]
          - group: "authentication.k8s.io"
            resources: ["tokenreviews"]
      # Secrets, ConfigMaps can contain sensitive & binary data,
      # so only log at the Metadata level.
      - level: Metadata
        resources:
          - group: "" # core
            resources: ["secrets", "configmaps"]
      # Default level for known APIs
      - level: RequestResponse
        resources:
          - group: "" # core
          - group: "aiops.alauda.io"
          - group: "apps"
          - group: "app.k8s.io"
          - group: "authentication.istio.io"
          - group: "auth.alauda.io"
          - group: "autoscaling"
          - group: "asm.alauda.io"
          - group: "clusterregistry.k8s.io"
          - group: "crd.alauda.io"
          - group: "infrastructure.alauda.io"
          - group: "monitoring.coreos.com"
          - group: "networking.istio.io"
          - group: "networking.k8s.io"
          - group: "portal.alauda.io"
          - group: "rbac.authorization.k8s.io"
          - group: "storage.k8s.io"
          - group: "tke.cloud.tencent.com"
          - group: "devopsx.alauda.io"
          - group: "core.katanomi.dev"
          - group: "deliveries.katanomi.dev"
          - group: "integrations.katanomi.dev"
          - group: "builds.katanomi.dev"
          - group: "operators.katanomi.dev"
          - group: "tekton.dev"
          - group: "operator.tekton.dev"
          - group: "eventing.knative.dev"
          - group: "flows.knative.dev"
          - group: "messaging.knative.dev"
          - group: "operator.knative.dev"
          - group: "sources.knative.dev"
          - group: "operator.devops.alauda.io"
      # Default level for all other requests.
      - level: Metadata
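Before uploading the file, a quick local lint can catch typos in the policy. The sketch below is an illustrative helper (not part of the platform tooling) that checks every `level:` value in the file against the four audit levels defined by the Kubernetes audit API:

```python
import re

# The four audit levels defined by the Kubernetes audit API.
VALID_LEVELS = {"None", "Metadata", "Request", "RequestResponse"}

def invalid_levels(policy_text: str) -> set[str]:
    """Return any `level:` values that are not valid audit levels."""
    found = set(re.findall(r"level:\s*(\w+)", policy_text))
    return found - VALID_LEVELS

# Illustrative excerpt of a policy file.
sample = """
rules:
  - level: None
  - level: Metadata
  - level: RequestResponse
"""
print(invalid_levels(sample))           # set() -> nothing invalid
print(invalid_levels("level: Detail"))  # {'Detail'} -> typo caught
```

To lint the real file, read your local policy.yaml and pass its contents to `invalid_levels`.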
  2. Upload the policy.yaml file to the /etc/kubernetes/audit/ directory on every Master node in the cluster.

    Note

    • When the cluster has multiple Master nodes, upload the policy.yaml file to every one of them.

    • The /etc/kubernetes/audit/ directory must be created manually.

  3. Update the /etc/kubernetes/manifests/kube-apiserver.yaml file on every Master node: modify or add the configuration items below, set the volume-mount parameters, and save the file.

    Note: when the cluster has multiple Master nodes, the kube-apiserver.yaml file on every node must be updated. If a configuration item is missing, add it by referring to the YAML example.

    The configuration to modify or add is described below.

    • Configuration items

      | Configuration item | Description |
      | --- | --- |
      | AdvancedAuditing feature gate | When the feature gate is present, its value must be true |
      | --audit-policy-file | Path to the audit policy file; the value must be /etc/kubernetes/audit/policy.yaml |
      | --audit-log-format | Output format of the audit log; the value must be json |
      | --audit-log-path | Storage path of the log file; the value must be /etc/kubernetes/audit/audit.log |
      | --audit-log-mode | Log mode; the value must be batch |
      | --audit-log-maxsize | Maximum size of an audit file, in MB; recommended value: 200 |
      | --audit-log-maxbackup | Number of audit files to retain; recommended value: 2 |
    • Volume mount configuration

      The volume mounts store the audit configuration file and the cluster's audit data.

      Add the following entries to the containers.volumeMounts and volumes parameters, respectively:

      volumeMounts:
      - mountPath: /etc/kubernetes/audit   # Mount path of the volume inside the container; do not change
        name: k8s-audit    # Volume name; must match the name configured under volumes
      volumes:
      - hostPath:
          path: /etc/kubernetes/audit    # Host directory mounted into the container as a volume; do not change
          type: DirectoryOrCreate      # Directory type
        name: k8s-audit      # Volume name; can be customized

    A complete example of the kube-apiserver.yaml file:

    apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: null
      labels:
        component: kube-apiserver
        tier: control-plane
      name: kube-apiserver
      namespace: kube-system
    spec:
      containers:
      - command:
        - kube-apiserver
        - --advertise-address=10.0.130.185
        - --allow-privileged=true
        - --authorization-mode=Node,RBAC
        - --client-ca-file=/etc/kubernetes/pki/ca.crt
        - --enable-admission-plugins=NodeRestriction
        - --enable-bootstrap-token-auth=true
        - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
        - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
        - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
        - --etcd-servers=https://127.0.0.1:2379
        - --insecure-port=0
        - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
        - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
        - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
        - --requestheader-allowed-names=front-proxy-client
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-username-headers=X-Remote-User
        - --secure-port=6443
        - --service-account-key-file=/etc/kubernetes/pki/sa.pub
        - --service-cluster-ip-range=10.96.0.0/12
        - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
        - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
        - --audit-log-format=json
        - --audit-log-maxbackup=2
        - --audit-log-maxsize=200
        - --audit-log-mode=batch
        - --audit-log-path=/etc/kubernetes/audit/audit.log
        - --audit-policy-file=/etc/kubernetes/audit/policy.yaml
        - --feature-gates=AdvancedAuditing=true
        image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.16.15
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 8
          httpGet:
            host: 10.0.130.185
            path: /healthz
            port: 6443
            scheme: HTTPS
          initialDelaySeconds: 15
          timeoutSeconds: 15
        name: kube-apiserver
        resources:
          requests:
            cpu: 250m
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certs
          readOnly: true
        - mountPath: /etc/pki
          name: etc-pki
          readOnly: true
        - mountPath: /etc/kubernetes/pki
          name: k8s-certs
          readOnly: true
        - mountPath: /etc/kubernetes/audit
          name: k8s-audit
      hostNetwork: true
      priorityClassName: system-cluster-critical
      volumes:
      - hostPath:
          path: /etc/ssl/certs
          type: DirectoryOrCreate
        name: ca-certs
      - hostPath:
          path: /etc/pki
          type: DirectoryOrCreate
        name: etc-pki
      - hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate
        name: k8s-certs
      - hostPath:
          path: /etc/kubernetes/audit
          type: DirectoryOrCreate
        name: k8s-audit
    status: {}
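Before relying on the kubelet to restart the static Pod, it can help to confirm that the edited manifest actually contains every required audit flag with the exact values listed in the table above. The sketch below is an illustrative check (not a platform tool); the inlined `manifest` string stands in for the contents of /etc/kubernetes/manifests/kube-apiserver.yaml on a real Master node:

```python
# Flags and values required by the audit configuration described above.
REQUIRED_FLAGS = [
    "--audit-policy-file=/etc/kubernetes/audit/policy.yaml",
    "--audit-log-format=json",
    "--audit-log-path=/etc/kubernetes/audit/audit.log",
    "--audit-log-mode=batch",
    "--audit-log-maxsize=200",
    "--audit-log-maxbackup=2",
]

def missing_flags(manifest_text: str) -> list[str]:
    """Return the required audit flags that do not appear in the manifest text."""
    return [flag for flag in REQUIRED_FLAGS if flag not in manifest_text]

# Illustrative excerpt of the command section of kube-apiserver.yaml.
manifest = """
- --audit-log-format=json
- --audit-log-maxbackup=2
- --audit-log-maxsize=200
- --audit-log-mode=batch
- --audit-log-path=/etc/kubernetes/audit/audit.log
- --audit-policy-file=/etc/kubernetes/audit/policy.yaml
"""
print(missing_flags(manifest))  # [] when all required flags are present
```

On a Master node, reading the real manifest file and passing its contents to `missing_flags` gives the same check; any flag it returns still needs to be added.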
  4. After the configuration has been modified and saved, check whether an audit.log file has been generated under the /etc/kubernetes/audit/ path on the cluster's Master nodes. If the file exists and contains audit log entries, the configuration has taken effect.
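Because --audit-log-format=json makes kube-apiserver write one JSON-encoded Event per line, each line of audit.log should parse as a JSON object whose kind is Event. The sketch below shows a minimal per-line check; the sample line is illustrative, and field values will differ on a real cluster:

```python
import json

def is_audit_event(line: str) -> bool:
    """True if the line parses as a JSON object whose kind is "Event"."""
    try:
        event = json.loads(line)
    except json.JSONDecodeError:
        return False
    return isinstance(event, dict) and event.get("kind") == "Event"

# Illustrative sample of a line written with --audit-log-format=json.
sample = '{"kind":"Event","apiVersion":"audit.k8s.io/v1beta1","level":"Metadata","stage":"ResponseComplete","verb":"get"}'
print(is_audit_event(sample))      # True
print(is_audit_event("not json"))  # False
```

On a Master node, the same check can be applied line by line to /etc/kubernetes/audit/audit.log to confirm the log contains well-formed audit events.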