Project Design

Requirement: an external profiling tool connects to the JVM of a Java pod running in a Kubernetes cluster. The JVM exposes its interface through JProfiler, so the tool can connect directly to the Java pod. The port mapping should be set up automatically, and for security it is cancelled automatically when it becomes inactive.

Problems to Solve

How to run JProfiler in a Kubernetes cluster:

  • Method 1: package JProfiler into the application Pod's container image
    • Drawback: requires intruding into the application image, which is inconvenient
  • Method 2: use an Init Container and copy the JProfiler installation into a volume shared between the Init Container and the other containers started in the Pod
  • Method 3: run JProfiler as a sidecar that shares namespaces with the application container (see the sketch after the next paragraph)
    • Drawback: the containers must share the process namespace, and because of how the jprofiler agent attaches, the /tmp directory has to be shared as well

JProfiler finds JVMs via the “Attach API” that is part of the JDK. Have a look at the $TMP/hsperfdata_$USER directory, which is created by the hot spot JVM. It should contain PID files for all running JVMs. If not, delete the directory and restart all JVMs.
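
To make method 3 concrete, here is a minimal sidecar sketch. It assumes the application image javaweb:3 used in the examples below and a JProfiler image such as the one built in the next section; container names and paths are illustrative. The two key settings are shareProcessNamespace: true, so the sidecar can reach the application JVM through the Attach API, and an emptyDir volume mounted at /tmp in both containers, so the hsperfdata files are visible to the sidecar:

yaml
apiVersion: v1
kind: Pod
metadata:
  name: jprofiler-sidecar-sketch
spec:
  # required so the sidecar can see the application JVM's process
  shareProcessNamespace: true
  volumes:
  - name: shared-tmp
    emptyDir: {}
  containers:
  - name: app
    image: javaweb:3                  # application image from the examples below
    volumeMounts:
    - name: shared-tmp
      mountPath: /tmp                 # the JVM writes /tmp/hsperfdata_<user> here
  - name: jprofiler-sidecar
    image: <JPROFILER_IMAGE:TAG>
    command: ["sleep", "infinity"]    # keep the sidecar alive; attach to the JVM from here
    volumeMounts:
    - name: shared-tmp
      mountPath: /tmp                 # must be the same path in both containers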

Implementation Steps Using an Init Container

Prerequisites

Assume a Deployment for the Java application already exists; we also need a JProfiler image. If you do not have one, here is a sample Dockerfile that can be used to build the image.

dockerfile
FROM centos:7

# Switch to root
USER 0

ENV \
 JPROFILER_DISTRO="jprofiler_linux_14_1_1.tar.gz" \
 STAGING_DIR="/jprofiler-staging" \
 HOME="/jprofiler"

LABEL \
 io.k8s.display-name="JProfiler from ${JPROFILER_DISTRO}"

RUN yum -y update \
 && yum -y install ca-certificates curl \
 && mkdir -p ${HOME} ${STAGING_DIR} \
 && cd ${STAGING_DIR} \
 # curl is expected to be available; wget would work, too
 # Add User-Agent header to pretend to be a browser and avoid getting HTTP 404 response
 && curl -v -OL "https://download-keycdn.ej-technologies.com/jprofiler/${JPROFILER_DISTRO}" -H "User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.62 Safari/537.36" \
 && tar -xzf ${JPROFILER_DISTRO} \
 && rm -f ${JPROFILER_DISTRO} \
 # Eliminate the version-specific directory
 && cp -R */* ${HOME} \
 && rm -Rf ${STAGING_DIR} \
 && chmod -R 0775 ${HOME} \
 && yum clean all

# chown and switch user as needed

WORKDIR ${HOME}

Configuring the Application Pod

Modify the application's Deployment configuration as follows:

  • If not already defined, add a "volumes" section under "spec.template.spec" and define a new volume:
yaml
volumes:
  - name: jprofiler-share-tmp
    emptyDir: {}

  • If not already defined, add "initContainers" under "spec.template.spec" (Kubernetes 1.6+) and, using the JProfiler image, define an Init Container that copies the files from the Init Container into the shared directory:

yaml
initContainers:
  - name: jprofiler-init
    image: <JPROFILER_IMAGE:TAG>
    command: ["/bin/sh", "-c", "cp -R /jprofiler/ /tmp/"]
    volumeMounts:
      - name: jprofiler-share-tmp
        mountPath: "/tmp"

Add the JProfiler agent to the JVM startup options. The path must point to the agent library inside the shared volume as seen from the application container (here /tmp/jprofiler), and the nowait option keeps the JVM from blocking at startup while it waits for the profiler GUI:

yaml
-agentpath:/tmp/jprofiler/bin/linux-x64/libjprofilerti.so=port=8849,nowait

Complete Deployment Example

yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jprofiler-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jprofiler
  template:
    metadata:
      labels:
        app: jprofiler
    spec:
      volumes:
      - name: jprofiler-share-tmp
        emptyDir: {}
      shareProcessNamespace: true
      initContainers:
      - name: jprofiler-init
        image: jprofiler:14_0
        command: ["/bin/sh", "-c", "cp -R /jprofiler/ /tmp/"]
        volumeMounts:
          - name: jprofiler-share-tmp
            mountPath: "/tmp"
      containers:
        - name: springboot-test
          image: javaweb:3
          imagePullPolicy: IfNotPresent
          volumeMounts:
          - name: jprofiler-share-tmp
            mountPath: /tmp
          env:
          # JAVA_TOOL_OPTIONS is read automatically by the JVM; JAVA_OPTS would be ignored
          # here because the command below runs "java -jar" directly
          - name: JAVA_TOOL_OPTIONS
          # nowait: do not wait for the profiler GUI at startup; without it the JVM blocks
          # on the JProfiler agent and the application container cannot start
          # -agentpath must be passed as a JVM option, not appended after the jar
          # (i.e. not "java -jar xxx -agentpath ...")
            value: "-agentpath:/tmp/jprofiler/jprofiler14.0/bin/linux-x64/libjprofilerti.so=port=8849,nowait"
          command: 
          - "java"
          - "-jar"
          - demo-0.0.1-SNAPSHOT.jar 
          args:
          - --server.port=8085

A Complete Deployment in Detail

yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: springboot-jprofiler
  name: springboot-jprofiler
  namespace: debug
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: springboot-jprofiler
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: springboot-jprofiler
        name: springboot-jprofiler
    spec:
      containers:
      - name: springboot-jprofiler-injection
        image: {your registry}
        imagePullPolicy: IfNotPresent
        env:
        - name: JAVA_OPTS
          value: -server
        - name: JAVA_TOOL_OPTIONS
          value: -XX:MinRAMPercentage=25.0 -XX:MaxRAMPercentage=85.0 -XX:InitialRAMPercentage=25.0
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 240
          timeoutSeconds: 1
        ports:
        - containerPort: 8080
          name: 8080tcp2
          protocol: TCP
        - containerPort: 8849
          name: jprofiler
          protocol: TCP
      imagePullSecrets:
      - name: harbor
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: springboot-jprofiler
  name: springboot-jprofiler
  namespace: debug
spec:
  ports:
    - name: tcp-80
      port: 80
      protocol: TCP
      targetPort: 8080 
  selector:
    app: springboot-jprofiler
  type: ClusterIP
  sessionAffinity: ClientIP

How to Map a Pod Dynamically

We need to connect to the jprofiler agent, which is started as part of the JVM process at startup, and then map the agent's port 8849 to the outside.

We use a tool, pod-proxier, which maps Pods through HAProxy and provides an HTTP API that controls the mappings and how long a mapping remains active.

The --jprofiler-port-name flag selects which port to map based on the port name configured on the application Pod, so traffic does not need to go through an ingress/gateway.
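
For illustration, assuming pod-proxier is started with --jprofiler-port-name=jprofiler (the exact value is an assumption), the application container only needs to declare the agent port under that name, as the second Deployment example above already does; the container name and image here are placeholders:

yaml
containers:
- name: springboot-app            # placeholder container name
  image: <APP_IMAGE:TAG>
  ports:
  - containerPort: 8080
    name: http
    protocol: TCP
  - containerPort: 8849           # JProfiler agent port
    name: jprofiler               # matched by --jprofiler-port-name
    protocol: TCP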

Below is the deployment manifest for the service:

yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-proxier-secret-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pod-proxier-rolebinding
subjects:
  - kind: ServiceAccount
    name: pod-proxier-secret-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: pod-proxier-secret-reader
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: kube-system
  name: pod-proxier-secret-sa
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-proxier
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pod-proxier
  template:
    metadata:
      labels:
        app: pod-proxier
    spec:
      serviceAccountName: pod-proxier-secret-sa
      hostNetwork: true
      nodeSelector:
        role: infra
        service: proxier
      containers:
        - name: haproxy
          # this container is the actual haproxy
          image: cylonchau/pod-proxier-ints:v2.6.1
          imagePullPolicy: IfNotPresent
          ports:
          - containerPort: 5555
            name: data-plan-port
            protocol: TCP
          - containerPort: 8404
            hostPort: 8404
            name: admin-port
            protocol: TCP
          - containerPort: 8849
            hostPort: 8849
            name: entry-port
            protocol: TCP
        - name: pod-proxier
          # the HTTP API that controls the mappings through the haproxy Data Plane API
          image: cylonchau/pod-proxier:0.4
          imagePullPolicy: IfNotPresent
          command:
          - "sh"
          - "-c"
          args:
          # with "sh -c" only the first argument is the command string; later list items
          # become positional parameters of the shell, so pass the whole command line as one string
          - sleep 2 && ./pod-proxier-gateway --apiAddr=http://127.0.0.1:5555 -v=4
          readinessProbe:
            failureThreshold: 5
            httpGet:
              path: /health
              port: 3343
              scheme: HTTP
          ports:
          - containerPort: 3343
            hostPort: 3343
            name: http
            protocol: TCP

Reference

How to Connect JProfiler to a JVM Running in Kubernetes Pod