
Background

As Kubernetes took off, it gradually dropped Docker as its container runtime in favor of containerd. A typical Kubernetes node therefore no longer has the docker command, yet building images in a CI pipeline traditionally depends on a Docker daemon on the host. This post looks at the other ways of building images when there is no Docker environment available.

DinD (Docker in Docker)

Not tried yet: run DinD as a sidecar container of the Pod.

Not tried yet: use a DaemonSet to deploy Docker on every containerd node.

This approach simply pulls the official docker image (docker pull docker); typical usage looks like this:

# The host's docker socket has to be mounted, so this really shares the host environment and is not isolated
$ docker run --rm \
-v /var/run/docker.sock:/var/run/docker.sock \
docker:latest sh
/ # docker version

# This variant needs a docker network created on the host first, so it was not tested here (the host has no docker command at all)
$ docker network create some-network
$ docker run -it --rm --network some-network \
-e DOCKER_TLS_CERTDIR=/certs \
-v some-docker-certs-client:/certs/client:ro \
docker:latest sh
/ # docker version

buildkit

BuildKit is a standalone tool that installs on Linux, macOS, and Windows (e.g. brew install buildkit). Installing it inside the build container (for example the gitlab runner or Jenkins container) should also work well for this scenario, but I have not tried it yet.
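
A minimal sketch with BuildKit's buildctl CLI (untested here; it assumes a buildkitd daemon is reachable, and the image name simply reuses the Harbor address from the Kaniko test below):

$ buildctl build \
  --frontend dockerfile.v0 \
  --local context=. \
  --local dockerfile=. \
  --output type=image,name=harbor.xxxtech.dev/test/aa:latest,push=true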

Kaniko

Test run

# Use the debug image because it drops you into a shell; the original registry gcr.io is unreachable, so it is replaced with gcr.lank8s.cn
➜ ~ docker run -it --entrypoint=/busybox/sh gcr.lank8s.cn/kaniko-project/executor:debug
# Create a Dockerfile with the following content
/workspace # vi Dockerfile
/workspace # cat Dockerfile
FROM nginx:alpine
/workspace # vi /kaniko/.docker/config.json
/workspace # cat /kaniko/.docker/config.json
{
"auths":{
"http://harbor.xxxtech.dev/v2/":{
"username":"harbor",
"password":"harbor123"
}
}
}
/workspace # /kaniko/executor --insecure-registry harbor.xxxtech.dev -d harbor.xxxtech.dev/test/aa:latest
INFO[0000] Retrieving image manifest nginx:alpine
INFO[0000] Retrieving image nginx:alpine from registry index.docker.io
INFO[0004] Built cross stage deps: map[]
INFO[0004] Retrieving image manifest nginx:alpine
INFO[0004] Returning cached image manifest
INFO[0004] Executing 0 build triggers
INFO[0004] Building stage 'nginx:alpine' [idx: '0', base-idx: '-1']
INFO[0004] Skipping unpacking as no commands require it.
INFO[0004] Pushing image to harbor.xxxtech.dev/test/aa:latest
INFO[0008] Pushed harbor.xxxtech.dev/test/aa@sha256:c20d8bd7e80b5ffa16019254427e3215b61b730db61a78c7b7b6be8d00acdded
# Test-run the image
➜ ~ docker run -p 8080:80 -d harbor.xxxtech.dev/test/aa:latest
Unable to find image 'harbor.xxxtech.dev/test/aa:latest' locally
latest: Pulling from test/aa
7264a8db6415: Pull complete
518c62654cf0: Pull complete
d8c801465ddf: Pull complete
ac28ec6b1e86: Pull complete
eb8fb38efa48: Pull complete
e92e38a9a0eb: Pull complete
58663ac43ae7: Pull complete
2f545e207252: Pull complete
Digest: sha256:c20d8bd7e80b5ffa16019254427e3215b61b730db61a78c7b7b6be8d00acdded
Status: Downloaded newer image for harbor.xxxtech.dev/test/aa:latest
3c2e345f52b4a06f9d4d57dcbb95b95876552d6d122536828123e66099c8310d
# Open http://localhost:8080/ and the nginx welcome page appears

Using it from gitlab runner (not tried yet)

Reference: Building Docker images with kaniko
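
An untested .gitlab-ci.yml sketch along the lines of the GitLab documentation (the CI_* variables are GitLab's built-in ones):

build-image:
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"username\":\"${CI_REGISTRY_USER}\",\"password\":\"${CI_REGISTRY_PASSWORD}\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context "${CI_PROJECT_DIR}" --dockerfile "${CI_PROJECT_DIR}/Dockerfile" --destination "${CI_REGISTRY_IMAGE}:latest"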

Jib

Reference: image building
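
Jib itself builds and pushes an image straight from Maven, without a Docker daemon or a Dockerfile. A minimal jib-maven-plugin sketch (the plugin version and image name here are assumptions, not from the original notes):

<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.3.2</version>
  <configuration>
    <to>
      <image>harbor.xxxtech.dev/test/aa:latest</image>
    </to>
  </configuration>
</plugin>

It would be run with mvn compile jib:build (plus -Djib.allowInsecureRegistries=true for a plain-HTTP Harbor). The steps below, however, record a plain Dockerfile + nginx workflow for the web project: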

  1. Create the Harbor repository.

  2. Build the web project locally.

  3. Add a Dockerfile to the project root:

    FROM nginx:alpine
    RUN rm /etc/nginx/conf.d/default.conf
    ADD default.conf /etc/nginx/conf.d/
    COPY build/ /usr/share/nginx/html/
  4. Add the nginx configuration file default.conf to the project root:

    # Upstream for the backend API. Frontend and backend live in the same namespace, so the backend is reached through its Service name and no external IP needs to be configured here.
    upstream api_server {
        # Service name of the backend
        server backend-application:8080;
    }
    server {
        listen 80;
        server_name localhost;
        underscores_in_headers on;

        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
            # Avoid nginx errors when a frontend route is entered directly in the address bar
            try_files $uri $uri/ /index.html =404;
        }

        # All backend routes start with /api; change the prefix to whatever your project uses
        location /api/ {
            rewrite ~/api/(.*)$ /$1 break;
            proxy_pass http://api_server/api/;
            proxy_set_header Host $host;
            proxy_pass_request_headers on;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_read_timeout 300;
            proxy_send_timeout 300;
        }

        error_page 500 502 503 504 /50x.html;

        location = /50x.html {
            root html;
        }
    }
  5. Build the image: in the project root run docker build --no-cache=true --build-arg JAR_FILE='*' -f Dockerfile -t harbor.iexxk.dev/base/test_web:latest .

  6. Push the image: in the project root run docker push harbor.iexxk.dev/base/test_web:latest

  7. Create a Deployment for it in Kuboard.

  8. Configure an Ingress mapping a custom domain to the frontend service on port 80.

Summary:

That completes the setup: the frontend is reachable through the custom domain, and backend requests go to the same domain plus /api, which the nginx bundled with the frontend image forwards to the backend.

Open question: I remembered the frontend having to be configured with the backend API address, but here nothing was configured and it still works, with requests simply going to <domain>/api. I am not sure what the frontend does to make that work.

Installation

For installing the IngressNginxController, see "Installing Kuboard on K8s".

Enabling WebSocket support

  1. Deploy the netty server; the code is in iexxk/springLeaning-netty.

  2. Edit the service's Ingress; the key point is adding the annotation nginx.org/websocket-services:

    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        nginx.org/websocket-services: netty # add this line; "netty" is the Service name used below
      labels:
        k8s.kuboard.cn/name: netty
      name: netty
      namespace: exxk
      resourceVersion: '919043'
    spec:
      ingressClassName: myingress
      rules:
        - host: netty.iexxk.io
          http:
            paths:
              - backend:
                  service:
                    name: netty
                    port:
                      number: 8080
                path: /test
                pathType: Prefix
              - backend:
                  service:
                    name: netty # Service name
                    port:
                      number: 8081 # netty port
                path: /ws # match rule
                pathType: Prefix
  3. Test with Postman: New -> WebSocket Request, enter netty.iexxk.io/ws, click Connect, and you can send messages.

Problems
  1. From the echoed messages you can see that when traffic goes through the Ingress, the client IP the server receives is the ingress-nginx pod's container IP, while going through hostPort/nodePort the real client IP comes through. Getting the real client IP via the Ingress therefore probably needs extra configuration (see the sketch after the log below).

    #--- via ingress --- netty.iexxk.io/ws ----------------
    received message: [10-234-216-40.ingress-nginx-controller-myingress.ingress-nginx.svc.cluster.local][12:29:59] ==> 123123
    sent message: 123123

    #--- via hostPort/nodePort --- 172.16.30.165:30081/ws ---------
    received message: [172.16.10.168][12:29:56] ==> 2312
    sent message: 2312
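
    A possible fix (untested sketch; the Service name is assumed from the log above): make the ingress-nginx controller Service keep the source address for NodePort traffic by setting externalTrafficPolicy to Local, e.g.

    kubectl -n ingress-nginx patch svc ingress-nginx-controller-myingress \
      -p '{"spec":{"externalTrafficPolicy":"Local"}}'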

Kuboard permissions (optional; you can just create groups)

Kuboard's own permissions control which clusters a user can see, i.e. cluster-management roles. There are two kinds of role settings:

  1. RoleBindings at global level, with three default roles:
    • administrator: administrator
    • sso-user
    • viewer: read-only user, cannot see Secrets
  2. ClusterRoleBinding: permissions within a specific cluster

Kuboard's own permissions can be left unset and everything controlled by the in-cluster permissions instead. If both are set and the in-cluster permission is narrower than Kuboard's, fine-grained control inside the cluster no longer works and you end up with two conflicting sets of permissions. When 1 and 2 are both configured, the user is offered both roles before entering the cluster and has to pick one.

Cluster Access Control

These are the in-cluster permissions, configured after selecting a cluster.

  1. phase1-auth: the first-phase authorization is the same thing as Kuboard's ClusterRoleBinding; both set the main role for the cluster.
  2. phase2-auth: the second-phase authorization is fine-grained control over which Kubernetes APIs can be accessed; here you can hide the ConfigMap, Secret and similar APIs, and each namespace gets its own configuration.

Comparison of the main schema-migration options

Manual

Manage the SQL scripts with a directory layout plus git version control and run them in version order. For incremental upgrades, a variable (or table) can record how far the last run got so that scripts are never executed twice; see the sketch after the directory tree below.

Example.DataSchema
├─V1.0
│ ├─Common
│ │ 001.Create.Table.Product.sql
│ │ 002.Create.Table.User.sql
│ ├─Enterprise
│ │ 001.Create.Table.Highland.sql
│ └─Professional
│ 001.Create.Table.Lowend.sql
├─V1.1
│ ├─Common
│ │ 001.Alter.Table.User.sql
│ │ 002.Drop.Function.USP_CleanFeedback.sql
│ ├─Enterprise
│ │ 001.Alter.Table.Highland.sql
│ └─Professional
│ 001.Alter.Table.Lowend.sql
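
A minimal version-tracking sketch for this manual approach (table and column names are assumptions, not from the original notes):

CREATE TABLE schema_version (
  version     VARCHAR(32)  NOT NULL,
  script_name VARCHAR(255) NOT NULL,
  applied_at  TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (version, script_name)
);

-- after running V1.1/Common/001.Alter.Table.User.sql, record it so it is never replayed:
INSERT INTO schema_version (version, script_name) VALUES ('V1.1', '001.Alter.Table.User.sql');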

Flyway

Related docs: Flyway quick-start tutorial

Pulled into Spring Boot as a dependency, with a matching Maven plugin; the applied SQL is recorded in a history table in the database. Migrations come in two flavours, V and R: R (repeatable) migrations can run more than once, V (versioned) migrations run exactly once.
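
A minimal sketch of what that looks like in a Spring Boot project (file names are examples; classpath:db/migration is Flyway's default location):

src/main/resources/db/migration/
├── V1__create_table_user.sql     # versioned: applied exactly once
├── V2__alter_table_user.sql
└── R__refresh_views.sql          # repeatable: re-applied whenever its checksum changes

spring:
  flyway:
    enabled: true
    locations: classpath:db/migration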

Liquibase

Used via a Maven plugin; more features, but also more complexity.

Bytebase

Alibaba DMS

Flyway - used as a Maven dependency
https://www.jianshu.com/p/567a8a161641

Liquibase - supports Maven and also ships a client
https://www.cnblogs.com/nevermorewang/p/16185585.html

Bytebase
https://www.modb.pro/db/621194

Basic command comparison

| Action | docker | ctr (containerd) | crictl (k8s) |
|---|---|---|---|
| List running containers | docker ps | ctr task ls / ctr container ls | crictl ps |
| List images | docker images | ctr image ls | crictl images |
| View container logs | docker logs | - | crictl logs |
| Inspect a container | docker inspect | ctr container info | crictl inspect |
| Container resource usage | docker stats | - | crictl stats |
| Start/stop an existing container | docker start/stop | ctr task start/kill | crictl start/stop |
| Run a new container | docker run | ctr run | - |
| Tag an image | docker tag | ctr image tag | - |
| Create a new container | docker create | ctr container create | crictl create |
| Import an image | docker load | ctr image import | - |
| Export an image | docker save | ctr image export | - |
| Remove a container | docker rm | ctr container rm | crictl rm |
| Remove an image | docker rmi | ctr image rm | crictl rmi |
| Pull an image | docker pull | ctr image pull | crictl pull |
| Push an image | docker push | ctr image push | - |
| Exec into a container | docker exec | - | crictl exec |
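
One point worth remembering (quick sketch; the image name is just an example): ctr is namespaced, and the images and containers Kubernetes manages live in the k8s.io namespace, so pass -n k8s.io or ctr will appear to see nothing:

ctr -n k8s.io image ls
ctr -n k8s.io image pull docker.io/library/nginx:alpine
# crictl always talks to the CRI plugin, so it only shows what kubelet manages
crictl ps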

Configuring image mirrors (acceleration)

Option 0

Use a mirror/proxy provider; usually you just rename the original image, e.g. change docker pull gcr.io/kaniko-project/executor:debug to docker pull gcr.lank8s.cn/kaniko-project/executor:debug.

  1. lank8s

    | Original registry | lank8s mirror |
    |---|---|
    | registry.k8s.io (formerly k8s.gcr.io) | registry.lank8s.cn |
    | registry.k8s.io | lank8s.cn |
    | gcr.io | gcr.lank8s.cn |

Option 1 (the one adopted)

  1. Edit /etc/containerd/config.toml: in endpoint = ["https://registry-1.docker.io"] add "https://xxx.mirror.aliyuncs.com" so it becomes endpoint = ["https://xxx.mirror.aliyuncs.com","https://registry-1.docker.io"]; putting it first makes the Aliyun mirror the preferred one.

    .......
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    systemdCgroup = true
    [plugins."io.containerd.grpc.v1.cri".registry]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://xxx.mirror.aliyuncs.com","https://registry-1.docker.io"]
  2. Restart the service: systemctl daemon-reload && systemctl restart containerd

Option 2 (fails with an error)

  1. Edit /etc/containerd/config.toml and add config_path = "/etc/containerd/certs.d" below the [plugins."io.containerd.grpc.v1.cri".registry] line. Example:
.......
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
systemdCgroup = true
[plugins."io.containerd.grpc.v1.cri".registry]
config_path = "/etc/containerd/certs.d"  # add this line
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
endpoint = ["https://registry-1.docker.io"]
  2. Create the directory /etc/containerd/certs.d/docker.io and the file /etc/containerd/certs.d/docker.io/hosts.toml:
[root@exxk ~]# cat /etc/containerd/certs.d/docker.io/hosts.toml
server = "https://docker.io"
[host."https://xxx.mirror.aliyuncs.com"]
capabilities = ["pull","resolve"]
  3. Restart the service: systemctl daemon-reload && systemctl restart containerd

  4. Other registries are accelerated the same way:

    $ tree /etc/containerd/certs.d
    /etc/containerd/certs.d/
    ├── docker.io
    │ └── hosts.toml
    └── quay.io
    └── hosts.toml

    $ cat /etc/containerd/certs.d/docker.io/hosts.toml
    server = "https://docker.io"
    [host."https://xxxx.mirror.aliyuncs.com"]

    $ cat /etc/containerd/certs.d/quay.io/hosts.toml
    server = "https://quay.io"
    [host."https://xxx.mirrors.ustc.edu.cn"]
  5. Running crictl pull nacos/nacos-server:v2.2.3 then fails (see the note after the error output):

    [root@exxk ~]# crictl pull docker.io/nacos/nacos-server:v2.2.3
    FATA[0000] validate service connection: CRI v1 image API is not implemented for endpoint "unix:///var/run/containerd/containerd.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService
    [root@exxk ~]# crictl pull nacos/nacos-server:v2.2.3
    FATA[0000] validate service connection: CRI v1 image API is not implemented for endpoint "unix:///var/run/containerd/containerd.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService
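
    A note on that error (not verified against this exact setup): "CRI v1 image API is not implemented" usually means the crictl version expects the CRI v1 API while the installed containerd is too old to serve it, or the cri plugin is disabled, rather than a problem with the certs.d configuration itself. Things worth checking:

    containerd --version
    grep disabled_plugins /etc/containerd/config.toml   # make sure "cri" is not listed
    crictl --version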

Configuring a private registry

  1. Edit /etc/hosts and map harbor.exxktech.dev to the internal IP of the Harbor service.

  2. Edit /etc/containerd/config.toml as below, then restart: systemctl daemon-reload && systemctl restart containerd

[plugins."io.containerd.grpc.v1.cri".registry]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://rq98iipq.mirror.aliyuncs.com","https://registry-1.docker.io"]
    # the lines below are newly added
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor.exxktech.dev"]
      endpoint = ["http://harbor.exxktech.dev"]
  [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.exxktech.dev".tls]
    insecure_skip_verify = true

Installation

  1. On a Mac or any other machine, install the management tool kuboard-spray:

    docker run -d \
    --privileged \
    --restart=unless-stopped \
    --name=kuboard-spray \
    -p 80:80/tcp \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ~/kuboard-spray-data:/data \
    eipwork/kuboard-spray:latest-amd64
  2. Open http://localhost/#/login, log in with user admin and the default password Kuboard123 to reach the Kuboard-Spray UI.

  3. Click Add Cluster Installation Plan, enter a cluster name, pick spray-v2.21.0c_k8s-v1.26.4_v4.4-amd64 and click OK.

  4. Click Add Node to add a node, tick control plane, etcd node and worker node, then click OK.

  5. On the right, enter the node's IP (172.16.3.165), port and password, enter the etcd member name etcd_exxk at the very bottom, and click Validate Connection.

  6. Finally click Save, then click the Install/Setup K8S Cluster button to start the installation.

  7. Wait for the installation to finish; if it fails, repeat step 6.

  8. Open http://172.16.3.165; the default user is admin and the default password is Kuboard123.

Configuration

Option 1: change the Kuboard port

  1. Find Kuboard's deployment manifest and edit it: vi /etc/kubernetes/manifests/kuboard.yaml

    - env:
        - name: KUBOARD_ENDPOINT
          value: "http://172.16.3.165:14001" # change 80 to 14001
      name: kuboard
      ports:
        - containerPort: 80
          hostPort: 14001 # change hostPort to 14001
          name: web
          protocol: TCP
  2. Save and wait for the automatic restart.

    Background: this is a static Pod. Static Pods are managed directly by the kubelet daemon on a specific node, without the API server supervising them, unlike Pods managed by the control plane (e.g. through a Deployment); the kubelet watches each static Pod and restarts it when it fails.

    Consequences: editing the manifest restarts the pod automatically, and the pod cannot be deleted through the API; it only goes away when its manifest file is moved out of the directory. The default static-pod directory is /etc/kubernetes/manifests.

Option 2: expose Kuboard through ingress-nginx (failed: the UI is reachable, but the shell/terminal features do not work)

  1. In the Kuboard UI, create a Service in the kuboard namespace:

    apiVersion: v1
    kind: Service
    metadata:
      labels:
        k8s.kuboard.cn/name: kuboard-v3
      name: kuboard-v3
      namespace: kuboard
    spec:
      ports:
        - protocol: TCP
          port: 80
          targetPort: 80
      selector:
        k8s.kuboard.cn/name: kuboard-v3
      type: ClusterIP

  2. In the Kuboard UI, create an Ingress in the kuboard namespace:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      labels:
        k8s.kuboard.cn/name: kuboard-v3
      name: kuboard
      namespace: kuboard
    spec:
      ingressClassName: myingresscontroller # the IngressNginxController must be installed first; use the name chosen at install time
      rules:
        - host: kuboard.iexxk.io # domain suffix chosen at install time
          http:
            paths:
              - backend:
                  service:
                    name: kuboard-v3
                    port:
                      number: 80
                path: /
                pathType: Prefix
  3. Find Kuboard's deployment manifest and edit it: vi /etc/kubernetes/manifests/kuboard.yaml

    .... other settings omitted
    - env:
        - name: KUBOARD_ENDPOINT
          value: "http://172.16.3.165:14001" # change the address (172.16.3.165:80) to kuboard.iexxk.io
      name: kuboard
      ports:
        - containerPort: 80
          # hostPort: 14001  # delete the hostPort line
          name: web
          protocol: TCP
    ..... omitted
  4. Save and wait for the automatic restart.

    Notes:

    1. I originally wanted to configure these as static-pod manifests too, but adding them to the manifest directory had no effect, so in the end they had to be created through the UI.

    2. The IngressNginxController is, in essence, just nginx: inside its pod you can see the usual nginx configuration, and whenever a service gets an Ingress, the matching nginx config is generated automatically inside the ingress pod. A sample follows (the command to dump it yourself is shown after the sample):

      ## start server web.iexxk.io
      server {
      server_name web.iexxk.io ;

      listen 80 ;
      listen [::]:80 ;
      listen 443 ssl http2 ;
      listen [::]:443 ssl http2 ;

      set $proxy_upstream_name "-";

      ssl_certificate_by_lua_block {
      certificate.call()
      }

      location / {

      set $namespace "exxk";
      set $ingress_name "web";
      set $service_name "web";
      set $service_port "80";
      set $location_path "/";
      set $global_rate_limit_exceeding n;

      rewrite_by_lua_block {
      lua_ingress.rewrite({
      force_ssl_redirect = false,
      ssl_redirect = true,
      force_no_ssl_redirect = false,
      preserve_trailing_slash = false,
      use_port_in_redirects = false,
      global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
      })
      balancer.rewrite()
      plugins.run()
      }

      # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
      # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
      # other authentication method such as basic auth or external auth useless - all requests will be allowed.
      #access_by_lua_block {
      #}

      header_filter_by_lua_block {
      lua_ingress.header()
      plugins.run()
      }

      body_filter_by_lua_block {
      plugins.run()
      }

      log_by_lua_block {
      balancer.log()

      monitor.call()

      plugins.run()
      }

      port_in_redirect off;

      set $balancer_ewma_score -1;
      set $proxy_upstream_name "exxk-web-80";
      set $proxy_host $proxy_upstream_name;
      set $pass_access_scheme $scheme;

      set $pass_server_port $server_port;

      set $best_http_host $http_host;
      set $pass_port $pass_server_port;

      set $proxy_alternative_upstream_name "";

      client_max_body_size 1m;

      proxy_set_header Host $best_http_host;

      # Pass the extracted client certificate to the backend

      # Allow websocket connections
      proxy_set_header Upgrade $http_upgrade;

      proxy_set_header Connection $connection_upgrade;

      proxy_set_header X-Request-ID $req_id;
      proxy_set_header X-Real-IP $remote_addr;

      proxy_set_header X-Forwarded-For $remote_addr;

      proxy_set_header X-Forwarded-Host $best_http_host;
      proxy_set_header X-Forwarded-Port $pass_port;
      proxy_set_header X-Forwarded-Proto $pass_access_scheme;
      proxy_set_header X-Forwarded-Scheme $pass_access_scheme;

      proxy_set_header X-Scheme $pass_access_scheme;

      # Pass the original X-Forwarded-For
      proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

      # mitigate HTTPoxy Vulnerability
      # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
      proxy_set_header Proxy "";

      # Custom headers to proxied server

      proxy_connect_timeout 5s;
      proxy_send_timeout 60s;
      proxy_read_timeout 60s;

      proxy_buffering off;
      proxy_buffer_size 4k;
      proxy_buffers 4 4k;

      proxy_max_temp_file_size 1024m;

      proxy_request_buffering on;
      proxy_http_version 1.1;

      proxy_cookie_domain off;
      proxy_cookie_path off;

      # In case of errors try the next upstream server before returning an error
      proxy_next_upstream error timeout;
      proxy_next_upstream_timeout 0;
      proxy_next_upstream_tries 3;

      proxy_pass http://upstream_balancer;

      proxy_redirect off;

      }

      }
      ## end server web.iexxk.io
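
      To dump this generated configuration yourself (a sketch; any ingress-nginx controller pod will do, substitute its name):

      kubectl -n ingress-nginx exec <ingress-nginx-controller-pod> -- nginx -T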

Installing the IngressNginxController

  1. In the cluster, go to Cluster Management -> Network -> IngressClass and click the "Install IngressNginxController and create IngressClass" button, entering the name myingresscontroller.

  2. Note the port hint shown in the UI:

    Load-balancer mapping
    It is recommended to use a load balancer outside the Kubernetes cluster and set up L4 forwarding (the source address then cannot be traced via X-FORWARDED-FOR) or L7 forwarding (which some load-balancer products make cumbersome to configure) for the ports below
    (ignore this message if forwarding is already set up).
    Forward load-balancer port 80 to port 32211 on any node of the Kubernetes cluster
    Forward load-balancer port 443 to port 31612 on any node of the Kubernetes cluster
  3. Option 1 (saves resources, but it occupies port 80, which can then not be used for anything else): change the container's hostPort to 80, then the domain can be accessed directly. (Changing the myingresscontroller nodePort 32211 to 80 is not possible, because the cluster's nodePort range is 30000-40000.)

  4. Option 2 (one extra nginx service to run, but much more flexible)

    Set up an external nginx by creating a static-pod nginx service.

    In the /etc/kubernetes/manifests/ directory create static-nginx.yaml:

    apiVersion: v1
    kind: Pod
    metadata:
      annotations: {}
      labels:
        k8s.kuboard.cn/name: static-nginx
      name: static-nginx
      namespace: ingress-nginx
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - name: web
              containerPort: 80
              hostPort: 80
              protocol: TCP
          volumeMounts:
            - mountPath: /etc/nginx/conf.d/default.conf
              name: nginx-conf
      volumes:
        - hostPath:
            path: "/root/static-nginx/nginx.conf"
          name: nginx-conf

    In the directory /root/static-nginx create nginx.conf:

    server {
        listen 80;
        server_name .iexxk.io;
        #access_log /var/log/nginx/hoddst.access.log main;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://172.16.3.165:32211/;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
  5. Create a test web service with its Ingress host set to web.iexxk.io and access it; remember to map the domain *.iexxk.io to the host 172.16.3.165.

    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      annotations: {}
      labels:
        k8s.kuboard.cn/name: web
      name: web
      namespace: exxk
      resourceVersion: '126840'
    spec:
      progressDeadlineSeconds: 600
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          k8s.kuboard.cn/name: web
      strategy:
        rollingUpdate:
          maxSurge: 25%
          maxUnavailable: 25%
        type: RollingUpdate
      template:
        metadata:
          creationTimestamp: null
          labels:
            k8s.kuboard.cn/name: web
        spec:
          containers:
            - image: 'nginx:alpine'
              imagePullPolicy: IfNotPresent
              name: web
              ports:
                - containerPort: 80
                  protocol: TCP
              resources: {}
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
          dnsPolicy: ClusterFirst
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
    status:
      availableReplicas: 1
      conditions:
        - lastTransitionTime: '2023-08-30T03:52:17Z'
          lastUpdateTime: '2023-08-30T03:52:17Z'
          message: Deployment has minimum availability.
          reason: MinimumReplicasAvailable
          status: 'True'
          type: Available
        - lastTransitionTime: '2023-08-30T03:52:16Z'
          lastUpdateTime: '2023-08-30T03:52:17Z'
          message: ReplicaSet "web-6f8fdd7f55" has successfully progressed.
          reason: NewReplicaSetAvailable
          status: 'True'
          type: Progressing
      observedGeneration: 1
      readyReplicas: 1
      replicas: 1
      updatedReplicas: 1

    ---
    apiVersion: v1
    kind: Service
    metadata:
      annotations: {}
      labels:
        k8s.kuboard.cn/name: web
      name: web
      namespace: exxk
      resourceVersion: '126824'
    spec:
      clusterIP: 10.233.80.181
      clusterIPs:
        - 10.233.80.181
      internalTrafficPolicy: Cluster
      ipFamilies:
        - IPv4
      ipFamilyPolicy: SingleStack
      ports:
        - name: gre3pw
          port: 80
          protocol: TCP
          targetPort: 80
      selector:
        k8s.kuboard.cn/name: web
      sessionAffinity: None
      type: ClusterIP
    status:
      loadBalancer: {}

    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations: {}
      labels:
        k8s.kuboard.cn/name: web
      name: web
      namespace: exxk
      resourceVersion: '128138'
    spec:
      ingressClassName: myingresscontroller
      rules:
        - host: web.iexxk.io
          http:
            paths:
              - backend:
                  service:
                    name: web
                    port:
                      number: 80
                path: /
                pathType: Prefix
    status:
      loadBalancer:
        ingress:
          - ip: 172.16.3.165
  6. Additionally, if different domains should map to different clusters, configure nginx as follows:

    server {
        listen 80;
        server_name .iexxk.io;
        #access_log /var/log/nginx/hoddst.access.log main;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://172.16.3.165:32211/;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
    server {
        listen 80;
        server_name .test.io;
        #access_log /var/log/nginx/hoddst.access.log main;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://172.16.3.160:32211/;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }

Basic concepts

  • o: organization (the company)
  • ou: organization unit (a department)
  • c: countryName (country)
  • dc: domainComponent (domain component)
  • sn: surname
  • cn: common name
  • dn: Distinguished Name (the unique identifying name)
  • uid: User ID
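
For example (a sketch using the domain and OU that appear later in this post), the user lisi created under the customer OU would have the DN:

  cn=lisi,ou=customer,dc=exxktech,dc=io   # dc=exxktech,dc=io corresponds to the domain exxktech.io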

Installation

Server side: install osixia/docker-openldap

docker pull osixia/openldap:1.5.0
# 389 = ldap (tcp), 636 = ldaps (comments moved here because a comment after a trailing backslash breaks the line continuation)
docker run \
-p 389:31236 \
-p 636:636 \
--volume /data/slapd/database:/var/lib/ldap \
--volume /data/slapd/config:/etc/ldap/slapd.d \
--env LDAP_ORGANISATION="exxk" \
--env LDAP_DOMAIN="exxktech.io" \
--env LDAP_ADMIN_PASSWORD="exxkTech@2023" \
--detach osixia/openldap:1.5.0
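
To verify the server is up, a quick ldapsearch sketch (assumes ldap-utils is installed on the client; bind DN and password match the container env above, and the host port should match however 389 was published):

ldapsearch -x -H ldap://localhost:389 \
  -D "cn=admin,dc=exxktech,dc=io" -w exxkTech@2023 \
  -b "dc=exxktech,dc=io"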


Client tools

Mac client management tool: Ldap Admin Tool

In it you can create users or groups and set passwords.

Test demo

application.yml configuration

spring:
  ldap:
    urls: ldap://172.1.1.44:31236
    base: dc=hcytech,dc=io
    username: cn=admin,dc=exxktech,dc=io
    password: exxkTech@2023

Add the dependencies to pom.xml

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-ldap</artifactId>
</dependency>
<dependency>
    <groupId>com.unboundid</groupId>
    <artifactId>unboundid-ldapsdk</artifactId>
    <scope>test</scope>
</dependency>

Customer.java

package com.exxk.ldaputil;

import org.springframework.ldap.odm.annotations.Attribute;
import org.springframework.ldap.odm.annotations.Entry;
import org.springframework.ldap.odm.annotations.Id;

import javax.naming.Name;

@Entry(base = "ou=customer,dc=exxktech,dc=io", objectClasses = "inetOrgPerson")
public class Customer {
    @Id
    private Name id;
    @Attribute(name = "cn")
    private String userName;

    @Override
    public String toString() {
        return "Customer{" +
                "id=" + id +
                ", userName='" + userName + '\'' +
                '}';
    }
}

TestController.java

package com.exxk.ldaputil;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.ldap.core.LdapTemplate;
import org.springframework.ldap.filter.EqualsFilter;
import org.springframework.ldap.query.LdapQuery;
import org.springframework.ldap.query.LdapQueryBuilder;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TestController {
    @Autowired
    LdapTemplate ldapTemplate;

    @GetMapping("/login")
    public String compressVideo(String username, String password) {
        String status = "ok";
        // look the user up by cn, then verify the password with an LDAP bind
        LdapQuery query = LdapQueryBuilder.query().where("cn").is(username);
        Customer customer = ldapTemplate.findOne(query, Customer.class);
        System.out.println("user: " + customer.toString());
        EqualsFilter filter = new EqualsFilter("cn", username);
        if (!ldapTemplate.authenticate("", filter.toString(), password)) {
            status = "wrong username or password!";
        }
        return status;
    }
}

Test it by visiting http://127.0.0.1:8080/login?username=lisi&password=111111

Common errors

  1. InvalidNameException: [LDAP: error code 34 - invalid DN]] with root cause

    Fix: change the value of spring.ldap.username from admin to cn=admin,dc=exxktech,dc=io

  1. Error message:

    Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = failed to get sandbox image "k8s.gcr.io/pause:3.8": failed to pull image "k8s.gcr.io/pause:3.8": failed to pull and unpack image "k8s.gcr.io/pause:3.8": failed to resolve reference "k8s.gcr.io/pause:3.8": failed to do request: Head "https://k8s.gcr.io/v2/pause/manifests/3.8": dial tcp 74.125.23.82:443: i/o timeout

    Cause: the key part is failed to pull image "k8s.gcr.io/pause:3.8", i.e. the image pull failed, because k8s.gcr.io resolves only to IPs that are unreachable from here.

    Option 1 (temporary fix):

    # pull the image from a reachable mirror
    crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8
    # retag it to the name kubelet expects
    ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8 k8s.gcr.io/pause:3.8

    Option 2 (permanent fix):

    vi /etc/containerd/config.toml
    # change the line: sandbox_image = "k8s.gcr.io/pause:3.8"
    # to:              sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8"
    systemctl daemon-reload
    systemctl restart containerd
    systemctl status containerd
  2. Requirement: get the Kubernetes node IP from inside a pod

    Reason: some services need to know the IP of the node they are running on.

    Solution:

    spec:
      containers:
        - env:
            - name: spring.profiles.active
              value: test
            # add the following entry
            - name: MY_POD_IP # MY_POD_IP is a custom name and can be changed
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.hostIP

    The equivalent settings in the UI:

    pP2bQfA.png

    Verification: run env inside the container and the MY_POD_IP environment variable now holds the host's IP.

Common issues

  1. When one image is shared by two services and deployed via the latest tag, only one of them updates successfully, so the update was changed to use an explicit version tag.

    Analysis: 1. In the Jenkins sh step, a triple-single-quoted '''...''' Groovy string does not interpolate environment variables; it has to be triple-double-quoted """...""" (see the small example after the snippet below).

    2. With the same image, when both services restart, the suspicion is that one of them restarts while its image pull is still in progress, because the image version is already reported as up to date once the other service has pulled it.

    sh """#!/bin/bash    
    curl -X PUT \
    -H "content-type: application/json" \
    -H "Cookie: KuboardUsername=admin; KuboardAccessKey=4xx" \
    -d '{"kind":"deployments","namespace":"base","name":"dev-system-business","images":{"harbor/business":"harbor/business:${env.gitlabMergeRequestTitle}"}}' \
    "http://172.1.1.44/kuboard-api/cluster/kubernates/kind/CICDApi/admin/resource/updateImageTag"
    """

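    A minimal illustration of the quoting point above (plain Groovy; the variable name is just an example):

    def tag = "v1.2.3"
    println '''image:${tag}'''   // prints literally: image:${tag}
    println """image:${tag}"""   // prints: image:v1.2.3
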
Environment:

| Software | Version |
|---|---|
| Kubernetes | 1.26.4 |
| Kuboard | 3.5.2.4 |
| Jenkins | 2.375.1 |
| Harbor | 2.0 |
| Gitlab | 10.0.0 |

Backend

Basic architecture:

graph LR
A[GitLab merge] -->|notify| B(Jenkins build)
B -->|1. push| D[harbor]
B -->|2. call| E[Kuboard restarts the workload]
D -->|when done| E

Detailed steps:

Configure the GitLab webhook
  1. In the project go to Settings --> Integrations.
  2. Enter the URL; the address comes from the Jenkins option "Build when a change is pushed to GitLab. GitLab webhook URL:", e.g. http://172.1.1.24:8080/project/xxxdemo
  3. Enter the Secret Token; it is generated in the Jenkins job under Configure --> General --> Secret token --> Generate, e.g. 035311df1e0bbedf1c1efb0cb5f5a630
  4. Under Trigger tick only Merge Request events; turn everything else off, including SSL verification.
  5. Click Add webhook.
Configure the Jenkins job
  1. In the Jenkins job go to Configure --> General --> Build Triggers.

  2. Tick "Build when a change is pushed to GitLab. GitLab webhook URL: http://172.1.1.24:8080/project/xxxdemo"

    Sub-options to tick: Accepted Merge Request Events

    Approved Merge Requests (EE-only)

    Click Advanced...

    Leave the rest unchanged and click Generate to create the Secret token.

  3. In the Pipeline Script field, enter the following script:

    pipeline {
    agent any
    tools {
    jdk 'jdk8'
    dockerTool 'docker'
    }
    environment {
    GITLAB_API_URL = 'http://172.1.1.2:9999/api/v4'
    GITLAB_PROJECT_ID = '138'
    GITLAB_PRIVATE_TOKEN = 'Ny9ywkxxggjo9CwfuWMz'

    DOCKER_REGISTRY = 'harbor.exxktech.dev'
    DOCKER_REGISTRY_URL = 'http://harbor.exxktech.dev'
    DOCKER_REGISTRY_CREDENTIALS = 'c7da3fce-7e0c-415e-a684-e49e17560120'
    DOCKERFILE_PATH = 'src/main/docker'

    projectVersion = ''
    projectName = ''
    projectVersion1 = ''
    projectName1 = ''

    }
    stages {
    stage('Checkout') {
    steps {
    git branch: 'release',
    credentialsId: '123456',
    url: 'http://172.1.1.2:9999/exxk_backend_project/exxk_center.git'
    }
    }

    stage('maven build') {
    steps {
    script {
    sh 'mvn -Dmaven.test.failure.ignore=true clean install'
    }
    }
    }

    stage('multi build') {
    parallel {
    // project 1
    stage('exxk_center_manager') {
    stages {
    stage('docker build') {
    steps {
    dir('exxk_center_manager') {
    script {
    projectVersion = sh(script: 'mvn -f pom.xml help:evaluate -Dexpression=project.version -q -DforceStdout', returnStdout: true).trim()
    projectName = sh(script: 'mvn -f pom.xml help:evaluate -Dexpression=project.artifactId -q -DforceStdout', returnStdout: true).trim()
    // run the Maven build
    sh 'mvn -Dmaven.test.failure.ignore=true clean package dockerfile:build'
    }
    }
    }
    }
    stage('Docker tags') {
    steps {
    // build and push the image via the Docker plugin
    script {
    def dockerImage = docker.image("${DOCKER_REGISTRY}/exxk_center/${projectName}:${projectVersion}")
    dockerImage.tag("${env.gitlabMergeRequestTitle}")
    }
    }
    }
    stage('Push Docker Image') {
    steps {
    // build and push the image via the Docker plugin
    script {
    def dockerImage = docker.image("${DOCKER_REGISTRY}/exxk_center/${projectName}:${env.gitlabMergeRequestTitle}")
    withDockerRegistry(credentialsId: "${DOCKER_REGISTRY_CREDENTIALS}", toolName: 'docker', url: "${DOCKER_REGISTRY_URL}") {
    dockerImage.push()
    }
    }
    }
    }
    stage('Docker latest tags') {
    steps {
    // build and push the image via the Docker plugin
    script {
    def dockerImage = docker.image("${DOCKER_REGISTRY}/exxk_center/${projectName}:${projectVersion}")
    dockerImage.tag("latest")
    }
    }
    }
    stage('Push latest Image') {
    steps {
    // build and push the image via the Docker plugin
    script {
    def dockerImage = docker.image("${DOCKER_REGISTRY}/exxk_center/${projectName}:latest")
    withDockerRegistry(credentialsId: "${DOCKER_REGISTRY_CREDENTIALS}", toolName: 'docker', url: "${DOCKER_REGISTRY_URL}") {
    dockerImage.push()
    }
    }
    }
    }
    }
    post {
    success {
    script {
    sh '''#!/bin/bash
    curl -X PUT \
    -H "Content-Type: application/yaml" \
    -H "Cookie: KuboardUsername=admin; KuboardAccessKey=4ip7hrrs6ias.2npbn4kc546tdxb8ew58nsdyz37j7cby" \
    -d '{"kind":"deployments","namespace":"arts-center","name":"exxk-center-manager"}' \
    "http://172.1.1.44/kuboard-api/cluster/kubernates/kind/CICDApi/admin/resource/restartWorkload"
    '''
    }
    }
    }
    }
    // project 2
    stage('exxk_center_application') {
    stages {
    stage('docker build') {
    steps {
    dir('exxk_center_application') {
    script {
    projectVersion1 = sh(script: 'mvn -f pom.xml help:evaluate -Dexpression=project.version -q -DforceStdout', returnStdout: true).trim()
    projectName1 = sh(script: 'mvn -f pom.xml help:evaluate -Dexpression=project.artifactId -q -DforceStdout', returnStdout: true).trim()
    // run the Maven build
    sh 'mvn -Dmaven.test.failure.ignore=true clean package dockerfile:build'
    }
    }
    }
    }
    stage('Docker tags') {
    steps {
    // build and push the image via the Docker plugin
    script {
    def dockerImage = docker.image("${DOCKER_REGISTRY}/exxk_center/${projectName1}:${projectVersion1}")
    dockerImage.tag("${env.gitlabMergeRequestTitle}")
    }
    }
    }
    stage('Push Docker Image') {
    steps {
    // build and push the image via the Docker plugin
    script {
    def dockerImage = docker.image("${DOCKER_REGISTRY}/exxk_center/${projectName1}:${env.gitlabMergeRequestTitle}")
    withDockerRegistry(credentialsId: "${DOCKER_REGISTRY_CREDENTIALS}", toolName: 'docker', url: "${DOCKER_REGISTRY_URL}") {
    dockerImage.push()
    }
    }
    }
    }
    stage('Docker latest tags') {
    steps {
    // build and push the image via the Docker plugin
    script {
    def dockerImage = docker.image("${DOCKER_REGISTRY}/exxk_center/${projectName1}:${projectVersion1}")
    dockerImage.tag("latest")
    }
    }
    }
    stage('Push latest Image') {
    steps {
    // build and push the image via the Docker plugin
    script {
    def dockerImage = docker.image("${DOCKER_REGISTRY}/exxk_center/${projectName1}:latest")
    withDockerRegistry(credentialsId: "${DOCKER_REGISTRY_CREDENTIALS}", toolName: 'docker', url: "${DOCKER_REGISTRY_URL}") {
    dockerImage.push()
    }
    }
    }
    }
    }
    post {
    success {
    script {
    sh '''#!/bin/bash
    curl -X PUT \
    -H "Content-Type: application/yaml" \
    -H "Cookie: KuboardUsername=admin; KuboardAccessKey=4ip7hrrs6ias.xxxbn4kc546tdxb8ew58nsdyz37j7cby" \
    -d '{"kind":"deployments","namespace":"arts-center","name":"exxk-center-application"}' \
    "http://172.1.1.44/kuboard-api/cluster/kubernates/kind/CICDApi/admin/resource/restartWorkload"
    '''
    }
    }
    }
    }
    }
    }
    }
    }

    Java project configuration

    File structure

    |-parent
    |-demo1
    |-src\main\docker\Dockerfile
    |-pom.xml
    |-demo2
    |-src\main\docker\Dockerfile
    |-pom.xml

    Dockerfile

    FROM harbor.exxktech.dev/base/java8:1.0.0

    ARG JAR_FILE
    ADD target/${JAR_FILE}.jar app.jar


    ENV JAVA_OPTS -Xms128m -Xmx256m
    ENV BOOT_PARAMS ""

    EXPOSE 8080

    ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS $JAVA_OPTS_AGENT -Djava.security.egd=file:/dev/./urandom -jar app.jar $BOOT_PARAMS" ]

    pom.xml

    <plugin>
        <groupId>com.spotify</groupId>
        <artifactId>dockerfile-maven-plugin</artifactId>
        <version>1.4.13</version>
        <executions>
            <execution>
                <id>default</id>
                <phase>none</phase>
            </execution>
            <execution>
                <id>after-deploy</id>
                <phase>deploy</phase>
                <goals>
                    <goal>build</goal>
                </goals>
            </execution>
        </executions>
        <configuration>
            <repository>harbor.exxktech.dev/exxk_center/${project.name}</repository>
            <tag>${project.version}</tag>
            <buildArgs>
                <JAR_FILE>${project.build.finalName}</JAR_FILE>
            </buildArgs>
            <dockerfile>src/main/docker/Dockerfile</dockerfile>
        </configuration>
    </plugin>

Micro-frontend

Architecture

graph LR
A[GitLab frontend project 1 merge] -->|notify| B(Jenkins build)
c[GitLab frontend project 2 merge] -->|notify| B(Jenkins build)
B -->|1. push| D[harbor]
B -->|2. call| E[Kuboard restarts the workload]
D -->|when done| E

Configuration

jenkins

pipeline {
agent any
tools {
dockerTool 'docker'
}
environment {
GITLAB_API_URL = 'http://172.1.1.25:9999/api/v4'
GITLAB_PROJECT_ID = '115'
GITLAB_PRIVATE_TOKEN = 'Ny9ywk6zggjo9CwfuWMz'

DOCKER_REGISTRY = 'harbor.exxktech.dev'
DOCKER_REGISTRY_URL= 'http://harbor.exxktech.dev'
DOCKER_REGISTRY_CREDENTIALS = 'c7da3fce-7e2c-415e-a684-e49e17560120'

NGINX_IMAGE = "nginx:latest"
IMAGE_NAME = 'harbor.exxktech.dev/art/web-art-center-main'
NGINX_CONFIG = 'default.conf'
}
stages {
stage('GitLab Checkout') {
steps {
dir('main') {
checkout([$class: 'GitSCM', branches: [[name: '*/release']], userRemoteConfigs: [[url: 'http://172.1.1.25:9999/exxk_frontend_project/performing-arts-center-system.git',credentialsId: 'e7a93679-a7f5-411f-823f-c3c5f467549b']]])
}
dir('sub') {
checkout([$class: 'GitSCM', branches: [[name: '*/release']], userRemoteConfigs: [[url: 'http://172.1.1.25:9999/exxk_frontend_project/performing-arts-center-business.git',credentialsId: 'e7a93679-a7f5-411f-823f-c3c5f467549b']]])
}
}
}
stage('Build') {
steps {
sh 'mkdir -p main_dist'
sh 'mkdir -p sub_dist'
// build the frontend projects; adjust to your project layout and build tool
sh 'cd main && yarn install && yarn build'
sh 'cd sub && yarn install && yarn build'
}
}

stage('Copy to Workspace') {
steps {
script {
// Copy dist contents to workspace's out directory
sh 'cp -r main/dist/* main_dist/'
sh 'cp -r sub/child/business/* sub_dist/'
sh 'cp main/Dockerfile .'
sh 'cp main/default.conf .'
}
}
}


stage('Build Image') {
steps {
script {
def dockerImage = docker.build("${IMAGE_NAME}:${env.gitlabMergeRequestTitle}")
withDockerRegistry(credentialsId: "${DOCKER_REGISTRY_CREDENTIALS}", toolName: 'docker', url: "${DOCKER_REGISTRY_URL}") {
dockerImage.push()
}
}
}
}

stage('Docker latest tags') {
steps {
// build and push the image via the Docker plugin
script {
def dockerImage = docker.image("${IMAGE_NAME}:${env.gitlabMergeRequestTitle}")
dockerImage.tag("latest")
}
}
}
stage('Push latest Image') {
steps {
// build and push the image via the Docker plugin
script {
def dockerImage = docker.image("${IMAGE_NAME}:latest")
withDockerRegistry(credentialsId: "${DOCKER_REGISTRY_CREDENTIALS}", toolName: 'docker', url: "${DOCKER_REGISTRY_URL}") {
dockerImage.push()
}
}
}

}

}
post {
success {
script {
sh '''#!/bin/bash
curl -X PUT \
-H "Content-Type: application/yaml" \
-H "Cookie: KuboardUsername=admin; KuboardAccessKey=4ip7hrrs6ias.2npbn4kc546tdxb8ew58nsdyz37j7cby" \
-d '{"kind":"deployments","namespace":"arts-center","name":"web-art-center"}' \
"http://172.1.1.44/kuboard-api/cluster/kubernates/kind/CICDApi/admin/resource/restartWorkload"
'''
}
}
}
}

Dockerfile

FROM nginx


RUN rm /etc/nginx/conf.d/default.conf

ADD default.conf /etc/nginx/conf.d/

COPY main_dist/ /usr/share/nginx/html/
COPY sub_dist/ /usr/share/nginx/html/child/busniess/

nginx default.conf

# Upstream for the backend API
upstream api_server {
    server deduction-center-manager:8555;
}
server {
    listen 80;
    server_name localhost;
    underscores_in_headers on;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }

    location /child/busniess {
        root html;
        index index.html index.htm;
        #try_files $uri $uri/ /child/busniess/index.html;
    }

    location /manager/api/ {
        rewrite ~/manager/api/(.*)$ /$1 break;
        proxy_pass http://api_server/manager/api/;
        proxy_set_header Host $host;
        proxy_pass_request_headers on;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 300;
        proxy_send_timeout 300;
    }

    error_page 500 502 503 504 /50x.html;

    location = /50x.html {
        root html;
    }
}