
  1. Error message:

    Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = failed to get sandbox image "k8s.gcr.io/pause:3.8": failed to pull image "k8s.gcr.io/pause:3.8": failed to pull and unpack image "k8s.gcr.io/pause:3.8": failed to resolve reference "k8s.gcr.io/pause:3.8": failed to do request: Head "https://k8s.gcr.io/v2/pause/manifests/3.8": dial tcp 74.125.23.82:443: i/o timeout

    Cause: the key part is failed to pull image "k8s.gcr.io/pause:3.8" — the image pull failed because k8s.gcr.io resolves only to overseas IPs that are unreachable from here.

    Option 1 (temporary fix):

    # Pull the image from a domestic mirror
    crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8
    # Re-tag it with the name the kubelet expects
    ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8 k8s.gcr.io/pause:3.8

    crictl pull registry.cn-hangzhou.aliyuncs.com/owater/cluster-proportional-autoscaler-amd64:1.8.3

    ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/owater/cluster-proportional-autoscaler-amd64:1.8.3 k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.5

    Option 2 (permanent fix):

    vi /etc/containerd/config.toml
    # Change this line:  sandbox_image = "k8s.gcr.io/pause:3.8"
    # to:                sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8"
    systemctl daemon-reload
    systemctl restart containerd
    systemctl status containerd
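
    The sandbox_image setting normally sits under the CRI plugin section; a minimal sketch of the relevant part of /etc/containerd/config.toml (the section name below follows the common containerd 1.6+ layout and may differ between versions):

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8"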
  2. Requirement: obtain the Kubernetes node IP

    Reason: some services need to know the IP of the node they are running on.

    Solution:

    spec:
      containers:
        - env:
            - name: spring.profiles.active
              value: test
            # Add the following entry
            - name: MY_POD_IP   # MY_POD_IP is a custom name and can be changed
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.hostIP

    The corresponding UI configuration:

    pP2bQfA.png

    Verification: run env inside the container and you will see that the MY_POD_IP environment variable already holds the host machine's IP.
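
    A quick way to check from outside the pod (pod name and namespace are placeholders):

    kubectl exec <pod-name> -n <namespace> -- env | grep MY_POD_IP
    # expected output: MY_POD_IP=<node IP>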

Common issues

  1. When one image is shared by two services and deployments are updated via the latest tag, only one of them updates successfully, so the update was switched to an explicit version tag.

    Analysis:
    1. Triple single quotes ''' in the Jenkins sh step do not interpolate environment variables; they must be changed to triple double quotes """ (see the sketch after the script below).
    2. With the same image, when both services are restarted it is suspected that one of them sees the image as already up to date (because the other restart has already pulled it) and restarts before its own pull has actually finished.

    sh """#!/bin/bash    
    curl -X PUT \
    -H "content-type: application/json" \
    -H "Cookie: KuboardUsername=admin; KuboardAccessKey=4xx" \
    -d '{"kind":"deployments","namespace":"base","name":"dev-system-business","images":{"harbor/business":"harbor/business:${env.gitlabMergeRequestTitle}"}}' \
    "http://172.1.1.44/kuboard-api/cluster/kubernates/kind/CICDApi/admin/resource/updateImageTag"
    """

Environment:

Software      Version
Kubernetes    1.26.4
Kuboard       3.5.2.4
Jenkins       2.375.1
Harbor        2.0
Gitlab        10.0.0

Backend

Basic architecture:

graph LR
A[GitLab merge] -->|notify| B(Jenkins build)
B -->|1. push| D[Harbor]
B -->|2. call| E[Kuboard restarts the workload]
D -->|when done| E

Detailed steps:

Configure the GitLab webhook
  1. In the project, open Settings --> Integrations
  2. Enter the URL. In Jenkins, ticking `Build when a change is pushed to GitLab. GitLab webhook URL:` shows the address, e.g. the URL is http://172.1.1.24:8080/project/xxxdemo
  3. Enter the Secret Token, generated in the Jenkins job under Configure --> General --> Secret token --> Generate, e.g. 035311df1e0bbedf1c1efb0cb5f5a630
  4. Under Trigger, tick only Merge Request events and turn everything else off, including SSL verification
  5. Click Add webhook
Configure the Jenkins job
  1. Open the Jenkins job's Configure --> General --> Build Triggers

  2. Tick Build when a change is pushed to GitLab. GitLab webhook URL: http://172.1.1.24:8080/project/xxxdemo

    Under it, tick: Accepted Merge Request Events

    Approved Merge Requests (EE-only)

    Click Advanced...

    Leave the rest unchanged and click Generate to create the Secret token

  3. In the Pipeline Script field, enter the following script

    pipeline {
    agent any
    tools {
    jdk 'jdk8'
    dockerTool 'docker'
    }
    environment {
    GITLAB_API_URL = 'http://172.1.1.2:9999/api/v4'
    GITLAB_PROJECT_ID = '138'
    GITLAB_PRIVATE_TOKEN = 'Ny9ywkxxggjo9CwfuWMz'

    DOCKER_REGISTRY = 'harbor.exxktech.dev'
    DOCKER_REGISTRY_URL = 'http://harbor.exxktech.dev'
    DOCKER_REGISTRY_CREDENTIALS = 'c7da3fce-7e0c-415e-a684-e49e17560120'
    DOCKERFILE_PATH = 'src/main/docker'

    projectVersion = ''
    projectName = ''
    projectVersion1 = ''
    projectName1 = ''

    }
    stages {
    stage('Checkout') {
    steps {
    git branch: 'release',
    credentialsId: '123456',
    url: 'http://172.1.1.2:9999/exxk_backend_project/exxk_center.git'
    }
    }

    stage('maven build') {
    steps {
    script {
    sh 'mvn -Dmaven.test.failure.ignore=true clean install'
    }
    }
    }

    stage('multi build') {
    parallel {
    // Project 1
    stage('exxk_center_manager') {
    stages {
    stage('docker build') {
    steps {
    dir('exxk_center_manager') {
    script {
    projectVersion = sh(script: 'mvn -f pom.xml help:evaluate -Dexpression=project.version -q -DforceStdout', returnStdout: true).trim()
    projectName = sh(script: 'mvn -f pom.xml help:evaluate -Dexpression=project.artifactId -q -DforceStdout', returnStdout: true).trim()
    // Run the Maven build
    sh 'mvn -Dmaven.test.failure.ignore=true clean package dockerfile:build'
    }
    }
    }
    }
    stage('Docker tags') {
    steps {
    // Build and push the image with the Docker plugin
    script {
    def dockerImage = docker.image("${DOCKER_REGISTRY}/exxk_center/${projectName}:${projectVersion}")
    dockerImage.tag("${env.gitlabMergeRequestTitle}")
    }
    }
    }
    stage('Push Docker Image') {
    steps {
    // Build and push the image with the Docker plugin
    script {
    def dockerImage = docker.image("${DOCKER_REGISTRY}/exxk_center/${projectName}:${env.gitlabMergeRequestTitle}")
    withDockerRegistry(credentialsId: "${DOCKER_REGISTRY_CREDENTIALS}", toolName: 'docker', url: "${DOCKER_REGISTRY_URL}") {
    dockerImage.push()
    }
    }
    }
    }
    stage('Docker latest tags') {
    steps {
    // Build and push the image with the Docker plugin
    script {
    def dockerImage = docker.image("${DOCKER_REGISTRY}/exxk_center/${projectName}:${projectVersion}")
    dockerImage.tag("latest")
    }
    }
    }
    stage('Push latest Image') {
    steps {
    // Build and push the image with the Docker plugin
    script {
    def dockerImage = docker.image("${DOCKER_REGISTRY}/exxk_center/${projectName}:latest")
    withDockerRegistry(credentialsId: "${DOCKER_REGISTRY_CREDENTIALS}", toolName: 'docker', url: "${DOCKER_REGISTRY_URL}") {
    dockerImage.push()
    }
    }
    }
    }
    }
    post {
    success {
    script {
    sh '''#!/bin/bash
    curl -X PUT \
    -H "Content-Type: application/yaml" \
    -H "Cookie: KuboardUsername=admin; KuboardAccessKey=4ip7hrrs6ias.2npbn4kc546tdxb8ew58nsdyz37j7cby" \
    -d '{"kind":"deployments","namespace":"arts-center","name":"exxk-center-manager"}' \
    "http://172.1.1.44/kuboard-api/cluster/kubernates/kind/CICDApi/admin/resource/restartWorkload"
    '''
    }
    }
    }
    }
    // Project 2
    stage('exxk_center_application') {
    stages {
    stage('docker build') {
    steps {
    dir('exxk_center_application') {
    script {
    projectVersion1 = sh(script: 'mvn -f pom.xml help:evaluate -Dexpression=project.version -q -DforceStdout', returnStdout: true).trim()
    projectName1 = sh(script: 'mvn -f pom.xml help:evaluate -Dexpression=project.artifactId -q -DforceStdout', returnStdout: true).trim()
    // Run the Maven build
    sh 'mvn -Dmaven.test.failure.ignore=true clean package dockerfile:build'
    }
    }
    }
    }
    stage('Docker tags') {
    steps {
    // Build and push the image with the Docker plugin
    script {
    def dockerImage = docker.image("${DOCKER_REGISTRY}/exxk_center/${projectName1}:${projectVersion1}")
    dockerImage.tag("${env.gitlabMergeRequestTitle}")
    }
    }
    }
    stage('Push Docker Image') {
    steps {
    // Build and push the image with the Docker plugin
    script {
    def dockerImage = docker.image("${DOCKER_REGISTRY}/exxk_center/${projectName1}:${env.gitlabMergeRequestTitle}")
    withDockerRegistry(credentialsId: "${DOCKER_REGISTRY_CREDENTIALS}", toolName: 'docker', url: "${DOCKER_REGISTRY_URL}") {
    dockerImage.push()
    }
    }
    }
    }
    stage('Docker latest tags') {
    steps {
    // Build and push the image with the Docker plugin
    script {
    def dockerImage = docker.image("${DOCKER_REGISTRY}/exxk_center/${projectName1}:${projectVersion1}")
    dockerImage.tag("latest")
    }
    }
    }
    stage('Push latest Image') {
    steps {
    // Build and push the image with the Docker plugin
    script {
    def dockerImage = docker.image("${DOCKER_REGISTRY}/exxk_center/${projectName1}:latest")
    withDockerRegistry(credentialsId: "${DOCKER_REGISTRY_CREDENTIALS}", toolName: 'docker', url: "${DOCKER_REGISTRY_URL}") {
    dockerImage.push()
    }
    }
    }
    }
    }
    post {
    success {
    script {
    sh '''#!/bin/bash
    curl -X PUT \
    -H "Content-Type: application/yaml" \
    -H "Cookie: KuboardUsername=admin; KuboardAccessKey=4ip7hrrs6ias.xxxbn4kc546tdxb8ew58nsdyz37j7cby" \
    -d '{"kind":"deployments","namespace":"arts-center","name":"exxk-center-application"}' \
    "http://172.1.1.44/kuboard-api/cluster/kubernates/kind/CICDApi/admin/resource/restartWorkload"
    '''
    }
    }
    }
    }
    }
    }
    }
    }

    Java project configuration

    File structure

    |-parent
      |-demo1
        |-src\main\docker\Dockerfile
        |-pom.xml
      |-demo2
        |-src\main\docker\Dockerfile
        |-pom.xml

    Dockerfile

    FROM harbor.exxktech.dev/base/java8:1.0.0

    ARG JAR_FILE
    ADD target/${JAR_FILE}.jar app.jar


    ENV JAVA_OPTS -Xms128m -Xmx256m
    ENV BOOT_PARAMS ""

    EXPOSE 8080

    ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS $JAVA_OPTS_AGENT -Djava.security.egd=file:/dev/./urandom -jar app.jar $BOOT_PARAMS" ]
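
    Since JAVA_OPTS and BOOT_PARAMS are plain environment variables, they can be overridden when the container is started; a hypothetical local test run (image name and tag are placeholders):

    docker run -d -p 8080:8080 \
      -e JAVA_OPTS="-Xms256m -Xmx512m" \
      -e BOOT_PARAMS="--spring.profiles.active=test" \
      harbor.exxktech.dev/exxk_center/<project-name>:<tag>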

    pom.xml

    <plugin>
        <groupId>com.spotify</groupId>
        <artifactId>dockerfile-maven-plugin</artifactId>
        <version>1.4.13</version>
        <executions>
            <execution>
                <id>default</id>
                <phase>none</phase>
            </execution>
            <execution>
                <id>after-deploy</id>
                <phase>deploy</phase>
                <goals>
                    <goal>build</goal>
                </goals>
            </execution>
        </executions>
        <configuration>
            <repository>harbor.exxktech.dev/exxk_center/${project.name}</repository>
            <tag>${project.version}</tag>
            <buildArgs>
                <JAR_FILE>${project.build.finalName}</JAR_FILE>
            </buildArgs>
            <dockerfile>src/main/docker/Dockerfile</dockerfile>
        </configuration>
    </plugin>

Micro frontend

Architecture

graph LR
A[GitLab frontend project 1 merge] -->|notify| B(Jenkins build)
c[GitLab frontend project 2 merge] -->|notify| B(Jenkins build)
B -->|1. push| D[Harbor]
B -->|2. call| E[Kuboard restarts the workload]
D -->|when done| E

Configuration

jenkins

pipeline {
agent any
tools {
dockerTool 'docker'
}
environment {
GITLAB_API_URL = 'http://172.1.1.25:9999/api/v4'
GITLAB_PROJECT_ID = '115'
GITLAB_PRIVATE_TOKEN = 'Ny9ywk6zggjo9CwfuWMz'

DOCKER_REGISTRY = 'harbor.exxktech.dev'
DOCKER_REGISTRY_URL= 'http://harbor.exxktech.dev'
DOCKER_REGISTRY_CREDENTIALS = 'c7da3fce-7e2c-415e-a684-e49e17560120'

NGINX_IMAGE = "nginx:latest"
IMAGE_NAME = 'harbor.exxktech.dev/art/web-art-center-main'
NGINX_CONFIG = 'default.conf'
}
stages {
stage('GitLab Checkout') {
steps {
dir('main') {
checkout([$class: 'GitSCM', branches: [[name: '*/release']], userRemoteConfigs: [[url: 'http://172.1.1.25:9999/exxk_frontend_project/performing-arts-center-system.git',credentialsId: 'e7a93679-a7f5-411f-823f-c3c5f467549b']]])
}
dir('sub') {
checkout([$class: 'GitSCM', branches: [[name: '*/release']], userRemoteConfigs: [[url: 'http://172.1.1.25:9999/exxk_frontend_project/performing-arts-center-business.git',credentialsId: 'e7a93679-a7f5-411f-823f-c3c5f467549b']]])
}
}
}
stage('Build') {
steps {
sh 'mkdir -p main_dist'
sh 'mkdir -p sub_dist'
// Build the frontend projects; adjust to the project structure and build tool in use
sh 'cd main && yarn install && yarn build'
sh 'cd sub && yarn install && yarn build'
}
}

stage('Copy to Workspace') {
steps {
script {
// Copy dist contents to workspace's out directory
sh 'cp -r main/dist/* main_dist/'
sh 'cp -r sub/child/business/* sub_dist/'
sh 'cp main/Dockerfile .'
sh 'cp main/default.conf .'
}
}
}


stage('Build Image') {
steps {
script {
def dockerImage = docker.build("${IMAGE_NAME}:${env.gitlabMergeRequestTitle}")
withDockerRegistry(credentialsId: "${DOCKER_REGISTRY_CREDENTIALS}", toolName: 'docker', url: "${DOCKER_REGISTRY_URL}") {
dockerImage.push()
}
}
}
}

stage('Docker latest tags') {
steps {
// Build and push the image with the Docker plugin
script {
def dockerImage = docker.image("${IMAGE_NAME}:${env.gitlabMergeRequestTitle}")
dockerImage.tag("latest")
}
}
}
stage('Push latest Image') {
steps {
// Build and push the image with the Docker plugin
script {
def dockerImage = docker.image("${IMAGE_NAME}:latest")
withDockerRegistry(credentialsId: "${DOCKER_REGISTRY_CREDENTIALS}", toolName: 'docker', url: "${DOCKER_REGISTRY_URL}") {
dockerImage.push()
}
}
}

}

}
post {
success {
script {
sh '''#!/bin/bash
curl -X PUT \
-H "Content-Type: application/yaml" \
-H "Cookie: KuboardUsername=admin; KuboardAccessKey=4ip7hrrs6ias.2npbn4kc546tdxb8ew58nsdyz37j7cby" \
-d '{"kind":"deployments","namespace":"arts-center","name":"web-art-center"}' \
"http://172.1.1.44/kuboard-api/cluster/kubernates/kind/CICDApi/admin/resource/restartWorkload"
'''
}
}
}
}

Dockerfile

FROM nginx


RUN rm /etc/nginx/conf.d/default.conf

ADD default.conf /etc/nginx/conf.d/

COPY main_dist/ /usr/share/nginx/html/
COPY sub_dist/ /usr/share/nginx/html/child/busniess/

nginx default.conf

# Upstream address of the backend API
upstream api_server {
server deduction-center-manager:8555;
}
server {
listen 80;
server_name localhost;
underscores_in_headers on;

location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html =404;
}

location /child/busniess {
root html;
index index.html index.htm;
#try_files $uri $uri/ /child/busniess/index.html;
}

location /manager/api/ {
rewrite ~/manager/api/(.*)$ /$1 break;
proxy_pass http://api_server/manager/api/;
proxy_set_header Host $host;
proxy_pass_request_headers on;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_read_timeout 300;
proxy_send_timeout 300;
}


error_page 500 502 503 504 /50x.html;

location = /50x.html {
root html;
}
}

Symptom:

When installing CentOS-7-x86_64-Minimal-2009.iso alongside the factory Windows 11 on a Lenovo Legion Y7000P IAH7, there was no Wi-Fi driver and therefore no network connection; switching to Ubuntu gave the same result, still no Wi-Fi driver.

[root@exxk ~]# lspci -v # inspect the wireless device and driver info; no driver is in use
00:14.3 Network controller: Intel Corporation Device 51f0 (rev 01)
Subsystem: Intel Corporation Device 0094
Flags: fast devsel, IRQ 16
Memory at 410317c000 (64-bit, non-prefetchable) [size=16K]
Capabilities: [c8] Power Management version 3
Capabilities: [d0] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [40] Express Root Complex Integrated Endpoint, MSI 00
Capabilities: [80] MSI-X: Enable- Count=16 Masked-
Capabilities: [100] Latency Tolerance Reporting
Capabilities: [164] Vendor Specific Information: ID=0010 Rev=0 Len=014 <?>
Kernel modules: iwlwifi
[root@exxk ~]# dmesg | grep iwlwifi
[ 32.584961] iwlwifi 0000:00:14.3: enabling device (0000 -> 0002)
[ 32.588729] iwlwifi 0000:00:14.3: firmware: failed to load iwlwifi-so-a0-gf-a0-72.ucode (-2)
[ 32.588794] iwlwifi 0000:00:14.3: firmware: failed to load iwlwifi-so-a0-gf-a0-72.ucode (-2)
[ 32.588840] iwlwifi 0000:00:14.3: Direct firmware load for iwlwifi-so-a0-gf-a0-72.ucode failed with error -2
......
[ 32.634708] iwlwifi 0000:00:14.3: Direct firmware load for iwlwifi-so-a0-gf-a0-39.ucode failed with error -2
[ 32.634709] iwlwifi 0000:00:14.3: minimum version required: iwlwifi-so-a0-gf-a0-39
[ 32.635165] iwlwifi 0000:00:14.3: maximum version supported: iwlwifi-so-a0-gf-a0-72
[ 32.635644] iwlwifi 0000:00:14.3: check git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git

Attempted fixes:

  1. Download the Intel® Wi-Fi 6 AX210 160MHz driver (this driver did not work)

    The laptop actually has an AX211, but the Intel site offers no AX211 Linux driver. Since the site ships the same Windows 11 driver for the AX211 and AX210, the AX210 Linux driver was tried instead.

  2. The AX210 Linux driver requires kernel 5.10+, so upgrade the kernel first

    Download the kernel packages

    kernel-ml-6.4.11-1.el7.elrepo.x86_64.rpm

    kernel-ml-devel-6.4.11-1.el7.elrepo.x86_64.rpm

    cp kernel* ~/rpm
    cd ~/rpm
    rpm -Uvh --force --nodeps *

    For the remaining steps see the earlier article: upgrading the kernel on CentOS 7.3

    After the kernel upgrade the wired NIC could no longer reach the network; switch back to the old kernel and run:

    # Install pciutils
    yum -y install pciutils
    # Run
    lspci -v
    # The last entry shows
    31:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)
    Subsystem: Lenovo Device 3938
    Flags: bus master, fast devsel, latency 0, IRQ 17
    I/O ports at 3000 [size=256]
    Memory at 5c204000 (64-bit, non-prefetchable) [size=4K]
    Memory at 5c200000 (64-bit, non-prefetchable) [size=16K]
    Capabilities: [40] Power Management version 3
    Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
    Capabilities: [70] Express Endpoint, MSI 01
    Capabilities: [b0] MSI-X: Enable+ Count=4 Masked-
    Capabilities: [100] Advanced Error Reporting
    Capabilities: [140] Virtual Channel
    Capabilities: [160] Device Serial Number 01-00-00-00-68-4c-e0-00
    Capabilities: [170] Latency Tolerance Reporting
    Capabilities: [178] L1 PM Substates
    Kernel driver in use: r8169
    Kernel modules: r8169
    # After switching to CentOS Linux (6.4.11-1.el7.elrepo.x86_64) 7 (Core), the driver is not loaded: the final line "Kernel driver in use: r8169" is missing
    rmmod r8169     # the module has to be removed first, otherwise loading fails
    modprobe r8169  # once the driver loads, the network works again; this is lost on reboot, so both commands must be re-run
    # Persistent loading: to load the driver automatically at every boot, append its name on a new line at the end of /etc/modules; it will then be loaded at the next startup
  3. Unpack the firmware: tar -zxvf iwlwifi-ty-59.601f3a66.0.tgz

  4. Install the driver: cp *.ucode /lib/firmware

  5. Reboot: reboot

  6. Configure the network with nmtui

  7. In /etc/systemd/logind.conf remove the leading # in front of HandleLidSwitch and change its value from suspend to ignore, then run systemctl restart systemd-logind for it to take effect

Option 1 (this one worked):

The firmware lives in https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/; pick the latest snapshot there, e.g. [linux-firmware-20230804.tar.gz](https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/snapshot/linux-firmware-20230804.tar.gz) (sig)

wget https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/snapshot/linux-firmware-20230804.tar.gz
tar -xzvf linux-firmware-20230804.tar.gz
cd linux-firmware-20230804
# Answer Y to overwrite everything
cp iwlwifi-* /lib/firmware
reboot
nmtui
# Back up the system
tar cvpzf backup.tgz / --exclude=/proc --exclude=/lost+found --exclude=/mnt --exclude=/sys --exclude=backup.tgz --warning=no-file-changed
# Restore
tar xvpfz backup.tgz -C /
mkdir proc
mkdir lost+found
mkdir mnt
mkdir sys
restorecon -Rv /

Option 2: conflicting NIC firmware files

# List PCI devices (NICs, sound cards, GPUs, disk controllers, USB controllers, ...)
lspci |grep -i network
# If lspci: command not found
yum -y install pciutils
# Enter the directory for the device and delete the extra firmware versions
cd /lib/firmware/ath10k/<NIC name>/<hardware version>/

Option 3: configure a wireless NIC on CentOS 7

# Check whether the wireless NIC is detected
lspci | grep Wireless
# Search for the driver
yum search kmod-wl
# Install the driver
yum install kmod-wl
# Enable the wireless radio
nmcli radio wifi on

References:

https://community.intel.com/t5/Wireless/AX211-wifi-does-not-work-on-Debian-Bullseye-Linux-kernel-6-NUC/td-p/1465779

CentOS 7: keep a laptop from suspending when the lid is closed

Recovering from a failed CentOS 7 kernel upgrade

  1. Create the Harbor project (skip if it already exists)

    pPeou38.png

  2. Check the push command to get the image name format: harbor.xxxtech.dev/backend/REPOSITORY[:TAG]

    pPmvTDx.png

  3. Add a hosts mapping so that harbor.xxxtech.dev resolves to host 172.16.10.49 (this step can be skipped once the domain resolves on its own)

  4. Package the Java project and build the Docker image: docker build --no-cache=true -f [path to Dockerfile] -t harbor.xxxtech.dev/backend/[project name]:[TAG] [build context directory]

    E.g.: docker build --no-cache=true -f docker/Dockerfile -t harbor.xxxtech.dev/backend/licensemanager:1.0 .

    pPmvj8H.png

  5. Push the image to Harbor.

    # Edit the Docker daemon configuration file and add the following entry
    "insecure-registries": [ "harbor.xxxtech.dev" ]
    # Log in, entering the username and password when prompted
    docker login harbor.xxxtech.dev
    # Push the image
    docker push harbor.xxxtech.dev/backend/licensemanager:1.0
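
    For completeness, a sketch of /etc/docker/daemon.json with that entry (keep any keys that already exist in the file), followed by the restart needed for it to take effect:

    {
      "insecure-registries": [ "harbor.xxxtech.dev" ]
    }

    systemctl daemon-reload
    systemctl restart docker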

    pPmvxxA.png

  6. Log in to Kuboard and create the namespace (skip if it already exists)

    pPmx9qP.png

  7. Configure the Harbor registry (skip if already configured)

    Enter the docker server: http://harbor.xxxtech.dev

    Enter the docker username: the Harbor username

    Enter the docker password: the Harbor password

    pPmxnrq.png

  8. Deploy the project by creating a Deployment

    pPmxtMR.png

    Set the workload name; the default of 1 replica is fine, increase it in production as needed

    pPmxgsI.png

    Add the work container

    Optional: add the two health-check endpoints /actuator/health/liveness and /actuator/health/readiness

    pPmxoWQ.png

    Configure the ingress access address
    pPmxXwV.png

    pPmzCl9.png

  9. Configure nginx for access (this step can be skipped if a shared nginx already exists)

    Run nginx: docker run --name xxx-nginx -v /Users/xuanleung/xxx/nginx.conf:/etc/nginx/conf.d/default.conf:ro -p 80:80 -d nginx

    server {
    listen 80;
    server_name .xxxtech.io;
    #access_log /var/log/nginx/hoddst.access.log main;

    location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://172.16.10.44:31407/;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    root /usr/share/nginx/html;
    }
    }

    Add a hosts mapping: 127.0.0.1 licensmanager.xxxtech.io

    Finally, open http://licensmanager.xxxtech.io

Installation steps

[root@exxk ~]# helm repo add harbor https://helm.goharbor.io
# Download the Harbor chart into the current directory
[root@exxk ~]# helm fetch harbor/harbor --untar
[root@exxk ~]# cd harbor/
[root@exxk harbor]# ls
Chart.yaml LICENSE README.md templates values.yaml
# Prerequisite: a persistent volume is needed
1. Create the NFS server, see https://www.iexxk.com/2022/07/26/k8s-nfs/?highlight=nfs#%E5%88%9B%E5%BB%BAnfs%E6%9C%8D%E5%8A%A1
2. Create a StorageClass
....
kind: StorageClass
metadata:
  annotations:
    k8s.kuboard.cn/storageType: nfs_client_provisioner
  name: nfs-172.16.30.165
reclaimPolicy: Delete
volumeBindingMode: Immediate
## Create the namespace
[root@exxk harbor]# kubectl create namespace harbor
## Edit values.yaml (see the sketch after this block)
# set expose:tls:enable to false              # disable TLS
# set ingress:hosts:core to harbor.iexxk.io   # use your own domain suffix
# set externalURL to the actual access address, harbor.iexxk.io
# set ingress:className to myingress          # match your ingressClassName
# set storageClass to the nfs-172.16.30.165 created above
[root@exxk harbor]# helm install my-harbor . -n harbor
NAME: my-harbor
LAST DEPLOYED: Wed Sep 27 14:47:16 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Please wait for several minutes for Harbor deployment to complete.
Then you should be able to visit the Harbor portal at https://core.harbor.domain
For more details, please visit https://github.com/goharbor/harbor
# Uninstall
[root@exxk ~]# helm uninstall my-harbor -n harbor
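
A sketch of the corresponding values.yaml fragments described above (key paths follow the public Harbor chart layout; verify them against the chart version actually downloaded, and the storageClass has to be set for each persistent component):

expose:
  type: ingress
  tls:
    enabled: false
  ingress:
    hosts:
      core: harbor.iexxk.io
    className: myingress
externalURL: http://harbor.iexxk.io
persistence:
  persistentVolumeClaim:
    registry:
      storageClass: nfs-172.16.30.165
    # likewise for jobservice, database, redis, trivy ...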

Access Harbor at harbor.iexxk.io (remember to add the domain mapping). The default username/password is admin/Harbor12345.

Issues

  1. Logging in with the correct password still reports a wrong password

    Request URL:
    http://harbor.iexxk.io/c/login
    Request Method:
    POST
    Status Code:
    403 Forbidden
    The API returns
    {"errors":[{"code":"FORBIDDEN","message":"CSRF token invalid"}]}

    Fix: change externalURL: https://core.harbor.domain to the actual access address `harbor.iexxk.io`

Notes on a production certificate expiry incident

Environment:

Kubernetes v1.18.6
KubeSphere v3.1.0
One host cluster with three member clusters underneath (the certificates of one member cluster expired and it lost contact)
The member cluster has three master nodes

Running kubectl get node returns the following error:

kubesphere Unable to connect to the server: x509: certificate has expired or is not yet valid

In the host cluster the member cluster shows as not ready and cannot be reached, although it can still be accessed directly via its IP.

Renew the certificates

Run the following on every master node:

# Confirm whether the certificates have expired
kubeadm alpha certs check-expiration
# Back up
mkdir pki230411
cp -rfa /etc/kubernetes/ pki230411/
ls pki230411/kubernetes/pki
# Renew the certificates
kubeadm alpha certs renew all
# Check that the renewal succeeded
kubeadm alpha certs check-expiration
# Restart kube-apiserver, kube-controller-manager, kube-scheduler and etcd
sudo docker ps |grep -E 'k8s_kube-apiserver|k8s_kube-controller-manager|k8s_kube-scheduler|k8s_etcd_etcd'|sudo xargs docker restart
# Check that they restarted
docker ps |grep -E 'k8s_kube-apiserver|k8s_kube-controller-manager|k8s_kube-scheduler|k8s_etcd_etcd'
# etcd runs on standalone nodes, so it has to be restarted on every etcd node
systemctl status etcd     # shows whether systemctl daemon-reload is needed
systemctl daemon-reload   # one node needed this before it could be restarted (it may be enough to run it on that node, without a restart)
systemctl restart etcd
systemctl status etcd
# Back up the kubectl kubeconfig, otherwise kubectl keeps using the old certificate
cp .kube/config ~/pki230411/
# Replace the kubeconfig with the newly generated one
cp /etc/kubernetes/admin.conf ~/.kube/config
# Once every master in the cluster has been updated, this returns the node list again
kubectl get node

Re-add the member cluster to the host cluster

  1. In KubeSphere, note down the original cluster information as a backup

  2. Delete the not-ready member cluster

  3. Re-add the member cluster

Add the dependency

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>

Consumer mode

Configuration

# Kafka bootstrap servers for the consumer
spring.kafka.consumer.bootstrap-servers=ip1:port,ip2:port
# Whether offsets are committed automatically
spring.kafka.consumer.enable-auto-commit=true
# Auto-commit interval
spring.kafka.consumer.auto-commit-interval=100ms
# Key deserializer
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
# Value deserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
# Session (heartbeat) timeout
spring.kafka.consumer.properties.session.timeout.ms=15000
# Consumer group id
spring.kafka.consumer.group-id=test-group-id

Code
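
A minimal consumer sketch matching the configuration above (class name and topic are placeholders for illustration):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class DemoConsumer {

    // Listens on the "test-topic" topic; the group id comes from spring.kafka.consumer.group-id
    @KafkaListener(topics = "test-topic")
    public void onMessage(String message) {
        System.out.println("received: " + message);
    }
}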

Installation

  1. On the Dashboard > Manage Jenkins > Plugins page, open Available plugins, search for NodeJS, install it and restart Jenkins.
  2. On the Dashboard > Manage Jenkins > Tools page, in the NodeJS section at the bottom, click Add NodeJS, set the name to nodejs and pick a version.

Configuration

  1. Dockerfile

    FROM nginx:alpine
    RUN rm /etc/nginx/conf.d/default.conf
    ADD ./nginx.conf /etc/nginx/conf.d/nginx.conf
    COPY ./dist /usr/share/nginx/html/
    CMD ["nginx","-g","daemon off;"]
  2. The nginx configuration, nginx.conf

    server {
    listen 80;
    server_name localhost;

    location / {
    root /usr/share/nginx/html;
    index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    root html;
    }

    }
  3. Jenkinsfile

    #!/usr/bin/env groovy Jenkinsfile

    pipeline {
    agent any
    parameters {
    choice(name: 'project_choice',
    choices: 'demo-pc',
    description: 'Which project do you want to build?')
    }
    stages {
    stage('npm build') {
    tools {
    nodejs "nodejs" //这里的node要和nodejs工具的配置里的name要一致
    }
    steps {
    dir("${params.project_choice}") { // change into the project directory
    echo "Start building project ${params.project_choice}"
    sh "npm install --registry=https://registry.npmmirror.com"
    sh "npm run build:stage" // compile and package
    }
    }
    }
    stage('docker build'){
    steps{
    dir("${params.project_choice}") { // change into the project directory
    sh "docker build --no-cache -t registry.cn-qingdao.aliyuncs.com/xxx/demo-pc:${env.BUILD_NUMBER} ."
    echo "Finished building image " + "ag/${params.project_choice}".toLowerCase()
    }
    }
    }
    stage('docker push'){
    steps{
    sh "docker login --username=zhangsan registry.cn-qingdao.aliyuncs.com -p mima"
    sh "docker push registry.cn-qingdao.aliyuncs.com/xxx/demo-pc:${env.BUILD_NUMBER}"
    echo "推送镜像" + "ag/${params.project_choice}".toLowerCase() + "完成"
    }
    }
    stage('docker run'){
    steps{
    sh "docker stop demo-pc"
    sh "docker rm demo-pc"
    sh "docker run -d -p 8911:80 --name demo-pc registry.cn-qingdao.aliyuncs.com/xxx/demo-pc:${env.BUILD_NUMBER}"
    echo "运行镜像" + "ag/${params.project_choice}".toLowerCase() + "完成"
    }
    }
    }
    }

Run

  1. Create the pipeline: Dashboard > New Item > Pipeline, enter the pipeline name and click OK.
  2. In the Pipeline section, set Definition to Pipeline script from SCM.
  3. In the SCM section select Git and enter the git project URL in Repository URL.
  4. Set the path to the Jenkinsfile inside the project and click Save.
  5. Click Build to run the pipeline.

The same approach works for Java projects

  1. Dockerfile

    FROM exxk/java:8-alpine-cst
    ADD ./target/demo-admin.jar /app.jar
    EXPOSE 8080
    ENTRYPOINT ["java","-jar","/app.jar"]
  2. Jenkinsfile

    #!/usr/bin/env groovy Jenkinsfile

    pipeline {
    agent any
    parameters {
    choice(name: 'project_choice',
    choices: 'demo-admin' +
    '\ndemo-framework' +
    '\ndemo-system' +
    '\ndemo-common',
    description: 'Which project do you want to build?')
    }
    stages {
    stage('maven build') {
    tools {
    maven "maven" //这里的maven要和maven工具的配置里的name要一致
    }
    steps {
    echo "开始构建${params.project_choice}项目"
    sh "mvn package -pl ${params.project_choice} -am" //构建
    }
    }
    stage('docker build'){
    steps{
    dir("${params.project_choice}") { // change into the project directory
    sh "docker build --no-cache -t registry.cn-qingdao.aliyuncs.com/xxx/demo-admin:${env.BUILD_NUMBER} ."
    echo "构建镜像" + "ag/${params.project_choice}".toLowerCase() + "完成"
    }
    }
    }
    stage('docker push'){
    steps{
    sh "docker login --username=zhangsan registry.cn-qingdao.aliyuncs.com -p mima"
    sh "docker push registry.cn-qingdao.aliyuncs.com/xxx/demo-admin:${env.BUILD_NUMBER}"
    echo "推送镜像" + "ag/${params.project_choice}".toLowerCase() + "完成"
    }
    }
    stage('docker run'){
    steps{
    sh "docker stop demo-admin"
    sh "docker rm demo-admin"
    sh "docker run -d -p 8910:8080 --name demo-admin registry.cn-qingdao.aliyuncs.com/xxx/demo-admin:${env.BUILD_NUMBER}"
    echo "运行镜像" + "ag/${params.project_choice}".toLowerCase() + "完成"
    }
    }
    }
    }

TDengine official documentation / GitHub source

Helm installation

# Download the TDengine chart (helm package)
wget https://github.com/taosdata/TDengine-Operator/raw/3.0/helm/tdengine-3.0.0.tgz
# If the above fails because of domestic network restrictions, try
wget --no-check-certificate https://raw.githubusercontents.com/taosdata/TDengine-Operator/3.0/helm/tdengine-3.0.0.tgz
# Install
helm install tdengine tdengine-3.0.0.tgz

Common SQL

taos # enter the database CLI
# List all databases
taos> SHOW DATABASES;
# Create the test database with 10 vgroups and a 10 MB cache
taos> create database if not exists test vgroups 10 buffer 10;

GUI tools

Official / TDengineGUI

Connecting DBeaver to TDengine (a DBeaver bug in step 4 keeps the driver class from being listed, so IDEA is used instead)

  1. Search for com.taosdata.jdbc in the Maven repository
  2. On the versions page find the matching version and download taos-jdbcdriver-3.0.0-dist.jar
  3. In DBeaver open Database -> Driver Manager -> New -> Libraries -> Add File and select the downloaded file
  4. Click Find Class, then select com.taosdata.jdbc.rs.RestfulDriver as the driver class

Connecting IDEA to TDengine

  1. Search for com.taosdata.jdbc in the Maven repository
  2. On the versions page find the matching version and download taos-jdbcdriver-3.0.0-dist.jar
  3. In IDEA's Database panel: + -> Driver -> Driver Files -> + -> Custom JARs, and select the downloaded file
  4. In the Class field select com.taosdata.jdbc.rs.RestfulDriver
  5. In the Database panel: + -> Data Source -> the driver just created, and in the URL field enter jdbc:TAOS-RS://<database ip>:6041/<database name>?user=root&password=taosdata (create the database first)
  6. Save the connection

Integrating with Spring Boot

Add the dependency

<dependency>
    <groupId>com.taosdata.jdbc</groupId>
    <artifactId>taos-jdbcdriver</artifactId>
    <version>3.0.0</version>
</dependency>
Native connection
REST connection (costs roughly 30% in performance)
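
A minimal JDBC sketch over the REST connection, using the URL format from the IDEA setup above (host, database and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TdengineRestDemo {
    public static void main(String[] args) throws Exception {
        // RESTful driver from taos-jdbcdriver; 6041 is the REST port
        String url = "jdbc:TAOS-RS://<db-ip>:6041/test?user=root&password=taosdata";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW DATABASES;")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}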

InfluxDB official documentation / GitHub source

Helm installation

helm upgrade --install my-influxdb influxdata/influxdb --set image.tag=2.5.1-alpine

Access: expose a NodePort and visit the ip:nodeport address.

Basic concepts

measurement: a table

point: a single row of data

time: the timestamp, a built-in field, in nanoseconds

tags: indexed attributes, generally used for metadata that is filtered or grouped on

fields: non-indexed attributes that usually change over time, e.g. latitude/longitude, temperature and other changing values

Annotated CSV data

CSV data format (the #group/#datatype/#default annotation rows, the header row and one data row, with the meaning of each column):

Column        #group  #datatype         example value                    meaning
result        false   string            (empty, #default: mean)          result name
table         false   long              0                                table index
_start        true    dateTime:RFC3339  2022-11-22T07:20:32.833674853Z   query start time
_stop         true    dateTime:RFC3339  2022-11-22T08:20:32.833674853Z   query end time
_time         false   dateTime:RFC3339  2022-11-22T08:00:30Z             time of the data point
_value        false   double            39.90786                         field value
_field        true    string            lat                              field key
_measurement  true    string            gps                              measurement
car           true    string            川A888888                        tag
Raw CSV data
#group,false,false,true,true,false,false,true,true,true
#datatype,string,long,dateTime:RFC3339,dateTime:RFC3339,dateTime:RFC3339,double,string,string,string
#default,mean,,,,,,,,
,result,table,_start,_stop,_time,_value,_field,_measurement,car
,,0,2022-11-22T07:20:32.833674853Z,2022-11-22T08:20:32.833674853Z,2022-11-22T08:00:30Z,39.90786,lat,gps,川A888888
,,0,2022-11-22T07:20:32.833674853Z,2022-11-22T08:20:32.833674853Z,2022-11-22T08:12:40Z,39.91786,lat,gps,川A888888
,,1,2022-11-22T07:20:32.833674853Z,2022-11-22T08:20:32.833674853Z,2022-11-22T08:00:30Z,116.510958,lon,gps,川A888888
,,1,2022-11-22T07:20:32.833674853Z,2022-11-22T08:20:32.833674853Z,2022-11-22T08:12:40Z,116.510928,lon,gps,川A888888

line protocol

measurementName,tagKey=tagValue fieldKey="fieldValue" 1465839830100400200
---------------,--------------- --------------------- -------------------
Measurement tags set fields set timestamp
eg:
gps,car=川A888888 lat=39.90786,lon=116.510958 1669104020000000000
gps,car=川A888888 lat=39.91786,lon=116.510928 1669104759000000000

Integrating with Spring Boot

Add the dependency

<dependency>
    <groupId>com.influxdb</groupId>
    <artifactId>influxdb-client-java</artifactId>
    <version>6.7.0</version>
</dependency>

Connection

@Configuration
public class InfluxdbConfig {

@Value("${spring.influx.url:''}")
private String influxDbUrl;
@Value("${spring.influx.token:''}")
private String influxDbToken;
@Value("${spring.influx.org:''}")
private String influxDbOrg;
@Value("${spring.influx.buket:''}")
private String influxDbBuket;


@Bean
InfluxDBClient influxDBClient(){
InfluxDBClient influxDBClient= InfluxDBClientFactory.create(influxDbUrl,influxDbToken.toCharArray(),influxDbOrg,influxDbBuket);
influxDBClient.setLogLevel(LogLevel.BASIC);
return influxDBClient;
}

}
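
The @Value placeholders above expect properties along these lines (values are placeholders; the key spelling follows the code, including spring.influx.buket):

spring.influx.url=http://<influxdb-host>:8086
spring.influx.token=<api-token>
spring.influx.org=<organization>
spring.influx.buket=<bucket-name>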

Read and write

@Repository
public class GpsSeriesDao {
private static final Logger LOG = Logger.getLogger(GpsSeriesDao.class.getName());
/**
* Measurement (table) name
*/
private static final String TABLE = "gps";
@Autowired
InfluxDBClient influxDBClient;
@Autowired
InfluxdbConfig influxdbConfig;

/**
* Read data
* @param carNum
* @param startTime
* @param endTime
* @return
*/
public List<GpsRecord> findTrackByCarNum(String carNum, String startTime, String endTime) {
String flux = "from(bucket: \"" + influxdbConfig.influxDbBuket + "\")\n" +
" |> range(start: " + startTime + ",stop:" + endTime + ")\n" +
" |> filter(fn: (r) => r[\"_measurement\"] == \"" + TABLE + "\")\n" +
" |> filter(fn: (r) => r[\"carNum\"] == \"" + carNum + "\")\n" +
" |> pivot(rowKey:[\"_time\"],columnKey: [\"_field\"],valueColumn: \"_value\") "+
" |> yield(name: \"mean\")";
LOG.info("query flux:\n" + flux);
QueryApi queryApi = influxDBClient.getQueryApi();
List<GpsRecord> gpsRecords = queryApi.query(flux, GpsRecord.class);
return gpsRecords;
}

/**
* Write data
* @param gpsRecord
*/
public void writeGpsRecord(GpsRecord gpsRecord) {
String timestamp;
if (gpsRecord.getTime() != null) {
timestamp = gpsRecord.getTime().toEpochMilli() + "000000"; // pad the 13-digit millisecond timestamp to 19 digits (nanoseconds) so the database accepts it
} else {
throw new BusinessException(ApiCode.PARAM_FORMAT_INCORR.getValue(), "invalid gps time format");
}
String data = TABLE + ",carNum=" + gpsRecord.getCarNum() + " lat=" + gpsRecord.getLat() + ",lon=" + gpsRecord.getLon() + " " + timestamp;
LOG.info("Line Protocol:\n" + data);
WriteApiBlocking writeApi = influxDBClient.getWriteApiBlocking();
writeApi.writeRecord(WritePrecision.NS, data);
}
}
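
queryApi.query(flux, GpsRecord.class) maps rows onto an annotated POJO; a sketch of what GpsRecord might look like, with field names inferred from the DAO above (the real class may differ):

import java.time.Instant;
import com.influxdb.annotations.Column;
import com.influxdb.annotations.Measurement;

@Measurement(name = "gps")
public class GpsRecord {
    @Column(timestamp = true)
    private Instant time;    // the point's _time
    @Column(tag = true)
    private String carNum;   // tag: car number
    @Column
    private Double lat;      // field: latitude
    @Column
    private Double lon;      // field: longitude

    public Instant getTime() { return time; }
    public void setTime(Instant time) { this.time = time; }
    public String getCarNum() { return carNum; }
    public void setCarNum(String carNum) { this.carNum = carNum; }
    public Double getLat() { return lat; }
    public void setLat(Double lat) { this.lat = lat; }
    public Double getLon() { return lon; }
    public void setLon(Double lon) { this.lon = lon; }
}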

Common InfluxDB Flux statements

Official documentation

Row/column transposition: pivot

In InfluxDB, when there are multiple fields, each field comes back as its own row in the query result; in that case add

|> pivot(rowKey:["_time"],columnKey: ["_field"],valueColumn: "_value")

After adding this, the multiple field rows are combined into one row.

Flux "in" queries

|> filter(fn: (r) =>contains(value: r["carNum"], set: ["01","02","03"]))  

The corresponding Java code:

String carNumsStr=carNums.stream().map(s->"\""+s+"\"").collect(Collectors.joining(","));
String flux = "from(bucket: \"" + influxdbConfig.influxDbBuket + "\")\n" +
" |> range(start:-15d)\n" +
" |> filter(fn: (r) => r[\"_measurement\"] == \"" + TABLE + "\")\n" +
" |> filter(fn: (r) => contains(value: r[\"carNum\"], set: ["+carNumsStr+"]))\n" +
" |> last()\n" +
" |> pivot(rowKey:[\"_time\"],columnKey: [\"_field\"],valueColumn: \"_value\") "+
" |> yield(name: \"mean\")";
LOG.info("query flux:\n" + flux);
QueryApi queryApi = influxDBClient.getQueryApi();
List<GpsRecord> gpsRecords = queryApi.query(flux, GpsRecord.class);

last: query the most recent data

|> last()  

Notes from reading the official Flux documentation

|> group(columns: ["host"], mode: "by") # group
|> sort(columns: ["host", "_value"]) # sort
|> aggregateWindow(every: 20m, fn: mean) # mean value every 20 minutes
|> map(fn: (r) => ({ r with _value: r._value * r._value })) # transform the data, here squaring the value
|> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value") # rows to columns
|> increase() # cumulative increase between successive values
|> movingAverage(n: 5) # average of the last n points, counting the current point
|> timedMovingAverage(every: 2m, period: 4m) # every 2 minutes, the average over the preceding 4 minutes
|> derivative(unit: 1m, nonNegative: true) # per-minute rate of change
|> histogram(
column: "_value",
upperBoundColumn: "le",
countColumn: "_value",
bins: [100.0, 200.0, 300.0, 400.0],
) # count how many values fall below 100, 200, 300 and 400 respectively
|> fill(usePrevious: true) # replace null values with the previous value; the first value may stay null because it has no predecessor
|> fill(value: 0.0) # replace null values with 0.0
|> median() # median value
|> quantile(q: 0.99, method: "estimate_tdigest")
|> cumulativeSum() # running total of the historical data
|> first() # earliest data point
|> last() # latest data point
|> filter(fn: (r) => exists r._value) # filter out null values
|> count()
|> count(column: "lat")


Common practical query examples

# First map every value to 1, then sum to count the number of points; the window can be per minute, day, month, etc. to count points in each interval
from(bucket: "transport")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "gps")
|> filter(fn: (r) => r["_field"] == "lat")
|> filter(fn: (r) => r["carNum"] == "719135")
|> map(fn: (r) => ({ r with _value: 1 }))
|> aggregateWindow(every: 1m, fn: sum, createEmpty: false)
|> yield(name: "mean")

Common integration architecture

graph LR

A[MQTT / IoT devices] -->B(EMQ / MQTT broker)
B --> c[Telegraf / data relay and processing]
c --> d[InfluxDB / storage]
d --> e[Grafana / visualization]

Common issues

  1. Time zone problem: the UI queries in UTC (zone 0) by default, and imported timestamps such as 2022-11-22T08:51:00Z use Z for UTC.

    Fix: change the container's time zone by adding a read-only mount:

    spec:
      volumes:
        - name: localtime
          hostPath:
            path: /etc/localtime
            type: ''
      containers:
        - volumeMounts:
            - name: localtime
              readOnly: true
              mountPath: /etc/localtime

    After adding it, be careful when importing data as well: switch to China time (+8), i.e. add the +08:00 offset to the RFC3339 timestamp and import it as 2022-11-22T08:51:00+08:00. When querying, the UI still displays times in UTC, which is hard to read; in Data Explorer -> Customize -> Time Format choose YYYY-MM-DD hh:mm:ss a ZZ.

References

Go - time.RFC3339 time formatting

A beginner's guide to building an MQTT + EMQ + InfluxDB + Grafana system