mirror of https://github.com/kubesphere/kubekey.git
synced 2025-12-25 17:12:50 +00:00

feat: add Chinese README and documentation updates (#2791)

- Introduced a new Chinese version of the README file (README_zh-CN.md) to improve accessibility for Chinese-speaking users.
- Updated the English README to reflect new features and installation instructions.
- Added detailed documentation for the project structure, playbooks, roles, tasks, and modules to improve user understanding and usability.

Signed-off-by: redscholar <blacktiledhouse@gmail.com>

parent f12dc62ae9
commit 192af7bb7e

README.md (145 changed lines)

@@ -1,29 +1,132 @@
# Background

In the current kubekey, adding or modifying a command requires committing code and cutting a new release, which makes it hard to extend.

1. Tasks are separated from the framework (more extensible by design, borrowing Ansible's playbook model)
2. Supports GitOps (automation tasks can be managed through git)
3. Supports connector extensions
4. Supports cloud-native automated batch task management

<div align=center><img src="docs/images/kubekey-logo.svg?raw=true"></div>

# Install kubekey

## Install in Kubernetes

[CI](https://github.com/kubesphere/kubekey/actions/workflows/golangci-lint.yaml?query=event%3Apush+branch%3Amain+workflow%3ACI)

> [English](README.md) | 中文
**👋 Welcome to KubeKey!**

KubeKey is an open-source lightweight task flow execution tool. It provides a flexible and fast way to install Kubernetes.

> KubeKey has passed the [CNCF Kubernetes Conformance Certification](https://www.cncf.io/certification/software-conformance/)

# New features in 3.x

1. Expanded from a Kubernetes lifecycle management tool into a general task execution tool (the flow design is inspired by [Ansible](https://github.com/ansible/ansible))
2. Supports multiple ways to manage task templates: git, local, etc.
3. Supports multiple node connection methods, including: local, ssh, kubernetes, prometheus.
4. Supports cloud-native automated batch task management
5. Advanced features: UI page (not yet available)

# Install kubekey

## Install in Kubernetes

Install kubekey via helm.

```shell
# install from a packaged chart
helm upgrade --install --create-namespace -n kubekey-system kubekey kubekey-1.0.0.tgz
# or install from the chart directory in this repository
helm upgrade --install --create-namespace -n kubekey-system kubekey config/kubekey
```

Then execute commands by creating `Inventory` and `Playbook` resources.

**Inventory**: the host list for task execution. It defines host-related variables that are independent of the task template. See [parameter definition](docs/zh/201-variable.md)

**Playbook**: the playbook configuration: which hosts to run on, which playbook file to execute, runtime arguments, and so on.
## Binary execution

Commands can be executed directly from the binary on the command line.

## Binary

Get the corresponding binary files from the [release](https://github.com/kubesphere/kubekey/releases) page.
# Deploy Kubernetes

- Supported deployment environments: Linux distributions
  - AlmaLinux: 9.0 (not fully tested)
  - CentOS: 8
  - Debian: 10, 11
  - Kylin: V10SP3 (not fully tested)
  - Ubuntu: 18.04, 20.04, 22.04, 24.04
- Supported Kubernetes versions: v1.23.x ~ v1.33.x

## Requirements

- One or more computers running a deb/rpm-compatible Linux operating system; for example, Ubuntu or CentOS.
- Each machine should have more than 2 GB of memory; applications will be constrained if memory is insufficient.
- Control plane nodes should have at least 2 CPUs.
- Full network connectivity among all machines in the cluster. You can use a public or private network.
## Define node information

kubekey uses the `inventory` resource to define node connection information.

You can use `kk create inventory` to generate the default inventory.yaml resource. The default `inventory.yaml` configuration is as follows:

```yaml
apiVersion: kubekey.kubesphere.io/v1
kind: Inventory
metadata:
  name: default
spec:
  hosts: # you can set all nodes here, or only set nodes in specific groups.
    # node1:
    #   connector:
    #     type: ssh
    #     host: node1
    #     port: 22
    #     user: root
    #     password: 123456
  groups:
    # all kubernetes nodes.
    k8s_cluster:
      groups:
        - kube_control_plane
        - kube_worker
    # control_plane nodes
    kube_control_plane:
      hosts:
        - localhost
    # worker nodes
    kube_worker:
      hosts:
        - localhost
    # etcd nodes when etcd_deployment_type is external
    etcd:
      hosts:
        - localhost
    # image_registry:
    #   hosts:
    #     - localhost
    # nfs nodes for registry storage and kubernetes nfs storage
    # nfs:
    #   hosts:
    #     - localhost
```
The inventory contains the following built-in groups:

1. k8s_cluster: the Kubernetes cluster. Contains two subgroups: kube_control_plane and kube_worker.
2. kube_control_plane: the control_plane node group of the Kubernetes cluster.
3. kube_worker: the worker node group of the Kubernetes cluster.
4. etcd: the node group for installing the etcd cluster.
5. image_registry: the node group for installing an image registry (harbor or registry).
6. nfs: the node group for installing nfs.
## Define key configuration information

kubekey uses the `config` resource to define the key configuration of the installation.

You can use `kk create config --with-kubernetes v1.33.1` to generate the default config.yaml resource. The default `config.yaml` configuration is as follows:
Default config configurations are provided as references for different Kubernetes versions:

- [Config for installing Kubernetes v1.23.x](builtin/core/defaults/config/v1.23.yaml)
- [Config for installing Kubernetes v1.24.x](builtin/core/defaults/config/v1.24.yaml)
- [Config for installing Kubernetes v1.25.x](builtin/core/defaults/config/v1.25.yaml)
- [Config for installing Kubernetes v1.26.x](builtin/core/defaults/config/v1.26.yaml)
- [Config for installing Kubernetes v1.27.x](builtin/core/defaults/config/v1.27.yaml)
- [Config for installing Kubernetes v1.28.x](builtin/core/defaults/config/v1.28.yaml)
- [Config for installing Kubernetes v1.29.x](builtin/core/defaults/config/v1.29.yaml)
- [Config for installing Kubernetes v1.30.x](builtin/core/defaults/config/v1.30.yaml)
- [Config for installing Kubernetes v1.31.x](builtin/core/defaults/config/v1.31.yaml)
- [Config for installing Kubernetes v1.32.x](builtin/core/defaults/config/v1.32.yaml)
- [Config for installing Kubernetes v1.33.x](builtin/core/defaults/config/v1.33.yaml)
## Install cluster

```shell
kk run -i inventory.yaml -c config.yaml playbook.yaml
kk create cluster -i inventory.yaml -c config.yaml
```

After running the command, the corresponding `Inventory` and `Playbook` resources are generated under the runtime directory of the working directory.

If `-i inventory.yaml` is not provided, the default inventory.yaml is used, and Kubernetes is installed only on the executing machine.

If `-c config.yaml` is not provided, the default config.yaml is used, and Kubernetes v1.33.1 is installed.
# Documentation (Chinese)

**[Project template writing specification](docs/zh/001-project.md)**

**[Template syntax](docs/zh/101-syntax.md)**

**[Parameter definition](docs/zh/201-variable.md)**

**[Cluster management](docs/zh/core/README.md)**

# Documentation

**[Project template writing specification](docs/en/001-project.md)**

**[Template syntax](docs/en/101-syntax.md)**

**[Parameter definition](docs/en/201-variable.md)**

**[Cluster management](docs/en/core/README.md)**

README_zh-CN.md

@@ -0,0 +1,132 @@
<div align=center><img src="docs/images/kubekey-logo.svg?raw=true"></div>

[CI](https://github.com/kubesphere/kubekey/actions/workflows/golangci-lint.yaml?query=event%3Apush+branch%3Amain+workflow%3ACI)

> [English](README.md) | 中文

**👋 Welcome to KubeKey!**

KubeKey is an open-source lightweight task flow execution tool. It provides a flexible and fast way to install Kubernetes.

> KubeKey has passed the [CNCF Kubernetes Conformance Certification](https://www.cncf.io/certification/software-conformance/)

# New features in 3.x

1. Expanded from a Kubernetes lifecycle management tool into a general task execution tool (the flow design is inspired by [Ansible](https://github.com/ansible/ansible))
2. Supports multiple ways to manage task templates: git, local, etc.
3. Supports multiple node connection methods, including: local, ssh, kubernetes, prometheus.
4. Supports cloud-native automated batch task management
5. Advanced features: UI page (not yet available)

# Install kubekey

## Install in Kubernetes

Install kubekey via helm.

```shell
helm upgrade --install --create-namespace -n kubekey-system kubekey config/kubekey
```

## Binary

Get the corresponding binary files from the [release](https://github.com/kubesphere/kubekey/releases) page.
# Deploy Kubernetes

- Supported deployment environments: Linux distributions
  - AlmaLinux: 9.0 (not fully tested)
  - CentOS: 8
  - Debian: 10, 11
  - Kylin: V10SP3 (not fully tested)
  - Ubuntu: 18.04, 20.04, 22.04, 24.04
- Supported Kubernetes versions: v1.23.x ~ v1.33.x

## Requirements

- One or more computers running a deb/rpm-compatible Linux operating system; for example, Ubuntu or CentOS.
- Each machine should have more than 2 GB of memory; applications will be constrained if memory is insufficient.
- Control plane nodes should have at least 2 CPUs.
- Full network connectivity among all machines in the cluster. You can use a public or private network.

## Define node information

kubekey uses the `inventory` resource to define node connection information.

You can use `kk create inventory` to generate the default inventory.yaml resource. The default `inventory.yaml` configuration is as follows:
```yaml
apiVersion: kubekey.kubesphere.io/v1
kind: Inventory
metadata:
  name: default
spec:
  hosts: # you can set all nodes here, or only set nodes in specific groups.
    # node1:
    #   connector:
    #     type: ssh
    #     host: node1
    #     port: 22
    #     user: root
    #     password: 123456
  groups:
    # all kubernetes nodes.
    k8s_cluster:
      groups:
        - kube_control_plane
        - kube_worker
    # control_plane nodes
    kube_control_plane:
      hosts:
        - localhost
    # worker nodes
    kube_worker:
      hosts:
        - localhost
    # etcd nodes when etcd_deployment_type is external
    etcd:
      hosts:
        - localhost
    # image_registry:
    #   hosts:
    #     - localhost
    # nfs nodes for registry storage and kubernetes nfs storage
    # nfs:
    #   hosts:
    #     - localhost
```
The inventory contains the following built-in groups:

1. k8s_cluster: the Kubernetes cluster. Contains two subgroups: kube_control_plane and kube_worker.
2. kube_control_plane: the control_plane node group of the Kubernetes cluster.
3. kube_worker: the worker node group of the Kubernetes cluster.
4. etcd: the node group for installing the etcd cluster.
5. image_registry: the node group for installing an image registry (harbor or registry).
6. nfs: the node group for installing nfs.

## Define key configuration information

kubekey uses the `config` resource to define the key configuration of the installation.

You can use `kk create config --with-kubernetes v1.33.1` to generate the default config.yaml resource. The default `config.yaml` configuration is as follows:

Default config configurations are provided as references for different Kubernetes versions:
- [Config for installing Kubernetes v1.23.x](builtin/core/defaults/config/v1.23.yaml)
- [Config for installing Kubernetes v1.24.x](builtin/core/defaults/config/v1.24.yaml)
- [Config for installing Kubernetes v1.25.x](builtin/core/defaults/config/v1.25.yaml)
- [Config for installing Kubernetes v1.26.x](builtin/core/defaults/config/v1.26.yaml)
- [Config for installing Kubernetes v1.27.x](builtin/core/defaults/config/v1.27.yaml)
- [Config for installing Kubernetes v1.28.x](builtin/core/defaults/config/v1.28.yaml)
- [Config for installing Kubernetes v1.29.x](builtin/core/defaults/config/v1.29.yaml)
- [Config for installing Kubernetes v1.30.x](builtin/core/defaults/config/v1.30.yaml)
- [Config for installing Kubernetes v1.31.x](builtin/core/defaults/config/v1.31.yaml)
- [Config for installing Kubernetes v1.32.x](builtin/core/defaults/config/v1.32.yaml)
- [Config for installing Kubernetes v1.33.x](builtin/core/defaults/config/v1.33.yaml)
## Install cluster

```shell
kk create cluster -i inventory.yaml -c config.yaml
```

If `-i inventory.yaml` is not provided, the default inventory.yaml is used, and Kubernetes is installed only on the executing machine.

If `-c config.yaml` is not provided, the default config.yaml is used, and Kubernetes v1.33.1 is installed.

# Documentation

**[Project template writing specification](docs/zh/001-project.md)**

**[Template syntax](docs/zh/101-syntax.md)**

**[Parameter definition](docs/zh/201-variable.md)**

**[Cluster management](docs/zh/core/README.md)**

@@ -0,0 +1,44 @@
# Project

The project stores the task templates to be executed, consisting of a series of YAML files.

To help users quickly understand and get started, kk's task abstraction is inspired by [ansible](https://github.com/ansible/ansible)'s playbook specification.

## Directory Structure

```text
|-- project
|   |-- playbooks/
|   |   |-- playbook1.yaml
|   |   |-- playbook2.yaml
|   |-- roles/
|   |   |-- roleName1/
|   |   |-- roleName2/
...
```
**[playbooks](002-playbook.md)**: The execution entry point. Stores a series of playbooks. A playbook can define multiple tasks or roles. When a workflow template is run, the defined tasks are executed in order.

**[roles](003-role.md)**: A collection of roles. A role is a group of tasks.

## Storage Locations

Projects can be built-in, stored locally, or stored on a Git server.

### Built-in

Built-in projects are stored in the `builtin` directory and integrated into the kubekey commands.

Example:

```shell
kk precheck
```

This runs the `playbooks/precheck.yaml` workflow file in the `builtin` directory.
### Local

Example:

```shell
kk run demo.yaml
```

This runs the `demo.yaml` workflow file in the current directory.

### Git

Example:

```shell
kk run playbooks/demo.yaml \
  --project-addr=$(GIT_URL) \
  --project-branch=$(GIT_BRANCH)
```

This runs the `playbooks/demo.yaml` workflow file from the Git repository at `$(GIT_URL)`, branch `$(GIT_BRANCH)`.

@@ -0,0 +1,63 @@
# Playbook

## File Definition

A playbook file executes multiple playbooks in the defined order. Each playbook specifies which tasks to run on which hosts.

```yaml
- import_playbook: others/playbook.yaml

- name: Playbook Name
  tags: ["always"]
  hosts: ["host1", "host2"]
  serial: 1
  run_once: false
  ignore_errors: false
  gather_facts: false
  vars: {a: b}
  vars_files: ["vars/variables.yaml"]
  pre_tasks:
    - name: Task Name
      debug:
        msg: "I'm Task"
  roles:
    - role: role1
      when: true
  tasks:
    - name: Task Name
      debug:
        msg: "I'm Task"
  post_tasks:
    - name: Task Name
      debug:
        msg: "I'm Task"
```
**import_playbook**: References another playbook file, usually by relative path. The file lookup order is: `project_path/playbooks/`, `current_path/playbooks/`, `current_path/`.

**name**: Playbook name, optional.

**tags**: Tags of the playbook, optional. They apply only to the playbook itself; roles and tasks under the playbook do not inherit them.

When running a playbook command, you can filter which playbooks to execute using tags. For example:

- `kk run [playbook] --tags tag1 --tags tag2`: Executes playbooks tagged with either tag1 or tag2.
- `kk run [playbook] --skip-tags tag1 --skip-tags tag2`: Skips playbooks tagged with tag1 or tag2.

Playbooks with the `always` tag always run. Playbooks with the `never` tag never run.

When the argument is `all`, all playbooks are selected. When the argument is `tagged`, only tagged playbooks are selected.
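The filtering rules above can be sketched as follows. This is a minimal illustration of the described semantics, not kubekey's actual implementation; the function name `selected` and the exact precedence between `always`/`never` and the flags are assumptions.

```python
def selected(tags, run_tags=(), skip_tags=()):
    """Decide whether a playbook with `tags` runs, given --tags / --skip-tags."""
    tags = set(tags)
    if "always" in tags:
        return True   # the "always" tag always runs
    if "never" in tags:
        return False  # the "never" tag never runs
    if skip_tags and tags & set(skip_tags):
        return False  # explicitly skipped
    if "all" in run_tags:
        return True   # "all" selects every playbook
    if "tagged" in run_tags:
        return bool(tags)  # "tagged" selects only playbooks that have tags
    if run_tags:
        return bool(tags & set(run_tags))  # any matching tag selects it
    return True       # no filter given: run everything
```

For example, `selected(["tag1"], run_tags=["tag1", "tag2"])` selects the playbook, while `selected(["tag1"], skip_tags=["tag1"])` skips it.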
**hosts**: Defines which machines to run on. Required. All hosts must be defined in the `inventory` (except localhost). You can specify host names or group names.

**serial**: Executes the playbook in batches. Can be a single value (string or number) or an array. Optional. Defaults to executing on all hosts at once.

- If `serial` is an array of numbers, hosts are grouped into batches of those fixed sizes. If there are more hosts than the defined `serial` values cover, the last `serial` value is reused.

  For example, if `serial = [1, 2]` and `hosts = [a, b, c, d]`, the playbook runs in 3 batches: [a], [b, c], [d].

- If `serial` contains percentages, the number of hosts per batch is calculated from the percentage (rounded down). If there are more hosts than the defined percentages cover, the last percentage is reused.

  For example, if `serial = [30%, 60%]` and `hosts = [a, b, c, d]`, the percentages translate to [1.2, 2.4], rounded down to [1, 2].

  Numbers and percentages can be mixed.
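The batching rule above can be sketched as follows; a minimal sketch of the assumed semantics (function name and the "never an empty batch" guard are assumptions, not kubekey's code):

```python
import math

def batch_sizes(serial, host_count):
    """Compute batch sizes: entries are ints or percentage strings like "30%";
    the last entry is reused once the list is exhausted."""
    if not isinstance(serial, list):
        serial = [serial]
    sizes, remaining, i = [], host_count, 0
    while remaining > 0:
        spec = serial[min(i, len(serial) - 1)]  # reuse the last value
        if isinstance(spec, str) and spec.endswith("%"):
            n = math.floor(host_count * float(spec[:-1]) / 100)  # round down
        else:
            n = int(spec)
        n = max(1, min(n, remaining))  # clamp: never empty, never past the end
        sizes.append(n)
        remaining -= n
        i += 1
    return sizes
```

With `serial = [1, 2]` and 4 hosts this yields batches of sizes `[1, 2, 1]`, matching the [a], [b, c], [d] example above.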
**run_once**: Whether to execute only once. Optional. Defaults to false. If true, the playbook runs only on the first host.

**ignore_errors**: Whether to ignore failed tasks under this playbook. Optional. Defaults to false.

**gather_facts**: Whether to gather system information. Optional. Defaults to false. Data is collected per host.

- localConnector: Collects release (/etc/os-release), kernel_version (uname -r), hostname (hostname), and architecture (arch). Currently supports only Linux.
- sshConnector: Collects release (/etc/os-release), kernel_version (uname -r), hostname (hostname), and architecture (arch). Currently supports only Linux.
- kubernetesConnector: Not supported yet.

**vars**: Defines default parameters. Optional. YAML format.

**vars_files**: Defines default parameters in YAML files. Optional. Fields in `vars` and `vars_files` must not overlap.

**pre_tasks**: Defines [tasks](004-task.md) to run before roles. Optional.

**roles**: Defines [roles](003-role.md) to run. Optional.

**tasks**: Defines [tasks](004-task.md) to run. Optional.

**post_tasks**: Defines [tasks](004-task.md) to run after roles and tasks. Optional.
## Playbook Execution Order

- Across playbooks: Executed in the defined order. If an `import_playbook` is included, the referenced file is expanded into playbooks in place.
- Within the same playbook: Execution order is pre_tasks -> roles -> tasks -> post_tasks.

If any task fails (excluding ignored failures), the playbook execution fails.

@@ -0,0 +1,43 @@
# Role

A role is a group of tasks.

## Defining a Role Reference in a Playbook

```yaml
- name: Playbook Name
  #...
  roles:
    - name: Role Name
      tags: ["always"]
      when: true
      run_once: false
      ignore_errors: false
      vars: {a: b}
      role: Role-ref Name
```

**name**: Role name, optional. This name is different from the role reference name in the playbook.

**tags**: Tags of the role, optional. They apply only to the role itself; tasks under it do not inherit them.

**when**: Execution condition; can be a single value (string) or multiple values (array). Optional. By default, the role is executed. The condition is evaluated separately for each host.

**run_once**: Whether to execute only once. Optional. Defaults to false. If true, the role runs only on the first host.

**ignore_errors**: Whether to ignore failures of tasks under this role. Optional. Defaults to false.

**role**: The reference name used in the playbook, corresponding to a subdirectory under the `roles` directory. Required.

**vars**: Defines default parameters. Optional. YAML format.
## Role Directory Structure

```text
|-- project
|   |-- roles/
|   |   |-- roleName/
|   |   |   |-- defaults/
|   |   |   |   |-- main.yml
|   |   |   |-- tasks/
|   |   |   |   |-- main.yml
|   |   |   |-- templates/
|   |   |   |   |-- template1
|   |   |   |-- files/
|   |   |   |   |-- file1
```

**roleName**: The reference name of the role. Can be a single-level or multi-level directory.

**defaults**: Default parameter values for all tasks under the role, defined in the `main.yml` file.

**[tasks](004-task.md)**: Task templates associated with the role. A role can include multiple tasks, defined in the `main.yml` file.

**templates**: Template files, which usually reference variables. Used by tasks of type `template`.

**files**: Raw files, used by tasks of type `copy`.

@@ -0,0 +1,73 @@
# Task

Tasks are divided into single-level tasks and multi-level tasks.

Single-level tasks contain module-related fields and do not contain the `block` field. A task can contain only one module.

Multi-level tasks do not contain module-related fields and must contain the `block` field.

When a task runs, it is executed separately on each defined host.

## File Definition

```yaml
- include_tasks: other/task.yaml
  tags: ["always"]
  when: true
  run_once: false
  ignore_errors: false
  vars: {a: b}

- name: Block Name
  tags: ["always"]
  when: true
  run_once: false
  ignore_errors: false
  vars: {a: b}
  block:
    - name: Task Name
      [module]
  rescue:
    - name: Task Name
      [module]
  always:
    - name: Task Name
      [module]

- name: Task Name
  tags: ["always"]
  when: true
  loop: [""]
  [module]
```
**include_tasks**: References another task template file from this task.

**name**: Task name, optional.

**tags**: Task tags, optional. They apply only to the task itself; roles and playbooks do not inherit them.

**when**: Execution condition; can be a single value (string) or multiple values (array). Optional. By default, the task is executed. Values use [template syntax](101-syntax.md) and are evaluated separately for each host.

**failed_when**: Failure condition. When a host meets this condition, the task is considered failed. Can be a single value (string) or multiple values (array). Optional. Values use [template syntax](101-syntax.md) and are evaluated separately for each host.

**run_once**: Whether to execute only once. Optional. Defaults to false. If true, the task runs only on the first host.

**ignore_errors**: Whether to ignore failures. Optional. Defaults to false.

**vars**: Defines default parameters. Optional. YAML format.

**loop**: Executes the module operation repeatedly. On each iteration, the value is passed to the module as `item: loop-value`. Can be a single value (string) or multiple values (array). Optional. Values use [template syntax](101-syntax.md) and are evaluated separately for each host.

**retries**: Number of times to retry the task if it fails.

**register**: A string value that registers the task result into a [variable](201-variable.md), which can be used in subsequent tasks. The registered value contains two subfields:

- stderr: Failure output
- stdout: Success output

**register_type**: Format in which `stderr` and `stdout` are registered.

- string: Default; registers them as strings.
- json: Registers them as JSON.
- yaml: Registers them as YAML.
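The registration behavior above can be sketched as follows. This is a hedged illustration of the assumed semantics (the helper name `register_result` is hypothetical, and the `yaml` case would parse the same way with a YAML loader, omitted here to stay dependency-free):

```python
import json

def register_result(stdout, stderr, register_type="string"):
    """Build the registered variable: raw strings by default, or
    structured data when register_type is "json"."""
    if register_type == "json":
        decode = lambda s: json.loads(s) if s else {}  # empty output -> {}
    else:
        decode = lambda s: s  # "string": keep raw text
    return {"stdout": decode(stdout), "stderr": decode(stderr)}
```

A later task could then read, e.g., `result.stdout.a` when the command printed `{"a": 1}` and `register_type` was `json`.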
**block**: A collection of tasks. Optional (required if no module-related fields are defined). Always runs.

**rescue**: A collection of tasks. Optional. Runs when the block fails (if any task in the block fails, the block fails).

**always**: A collection of tasks. Optional. Always runs after block and rescue, regardless of success or failure.

**module**: The actual operation to execute. Optional (required if no `block` field is defined). A map whose key is the module name and whose value is the arguments. Available modules must be registered in the project in advance. Registered modules include:

- [add_hostvars](modules/add_hostvars.md)
- [assert](modules/assert.md)
- [command](modules/command.md)
- [copy](modules/copy.md)
- [debug](modules/debug.md)
- [fetch](modules/fetch.md)
- [gen_cert](modules/gen_cert.md)
- [image](modules/image.md)
- [prometheus](modules/prometheus.md)
- [result](modules/result.md)
- [set_fact](modules/set_fact.md)
- [setup](modules/setup.md)
- [template](modules/template.md)
- [include_vars](modules/include_vars.md)

@@ -0,0 +1,48 @@
# Syntax

The syntax follows the `go template` specification, with function extensions provided by [sprig](https://github.com/Masterminds/sprig).

# Custom Functions

## toYaml

Converts a value into a YAML string. An optional argument specifies the number of leading spaces for indentation.

```yaml
{{ .yaml_variable | toYaml }}
```

## fromYaml

Parses a YAML string into a value.

```yaml
{{ .yaml_string | fromYaml }}
```

## ipInCIDR

Gets all IP addresses (as an array) within the specified IP range (CIDR).

```yaml
{{ .cidr_variable | ipInCIDR }}
```

## ipFamily

Determines the family of an IP or IP CIDR. Returns: Invalid, IPv4, or IPv6.

```yaml
{{ .ip | ipFamily }}
```

## pow

Performs exponentiation.

```yaml
# 2 to the power of 3, 2 ** 3
{{ 2 | pow 3 }}
```

## subtractList

Array exclusion.

```yaml
# Returns a new list containing elements that exist in a but not in b
{{ .b | subtractList .a }}
```

## fileExist

Checks whether a file exists.

```yaml
{{ .file_path | fileExist }}
```

@@ -0,0 +1,78 @@
# Variables

Variables are divided into static variables (defined before execution) and dynamic variables (generated at runtime).

The priority is: dynamic variables > static variables.

## Static Variables

Static variables include the inventory, the global configuration, and parameters defined in templates.

The priority is: global configuration > inventory > parameters defined in templates.

### Inventory

A YAML file without template syntax, passed in via the `-i` parameter (`kk -i inventory.yaml ...`) and effective on each host.

**Definition format**:
```yaml
apiVersion: kubekey.kubesphere.io/v1
kind: Inventory
metadata:
  name: default
spec:
  hosts:
    hostname1:
      k1: v1
      #...
    hostname2:
      k2: v2
      #...
    hostname3:
      #...
  groups:
    groupname1:
      groups:
        - groupname2
        # ...
      hosts:
        - hostname1
        #...
      vars:
        k1: v1
        #...
    groupname2:
      #...
  vars:
    k1: v1
    #...
```
**hosts**: The key is the host name; the value is the set of variables assigned to that host.

**groups**: Defines host groups. The key is the group name, and the value may include groups, hosts, and vars.

- groups: Other groups included in this group.
- hosts: Hosts included in this group.
- vars: Group-level variables, effective for all hosts in the group.

The total set of hosts in a group is the union of the hosts of its subgroups in `groups` and those listed under `hosts`.
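This membership rule can be sketched as follows, assuming recursive resolution of subgroups (the helper name and the order-preserving deduplication are illustrative assumptions):

```python
def hosts_in_group(name, groups):
    """A group's hosts are its own `hosts` plus the hosts of every
    subgroup listed under its `groups` key."""
    spec = groups.get(name, {})
    result = list(spec.get("hosts", []))
    for sub in spec.get("groups", []):
        for h in hosts_in_group(sub, groups):
            if h not in result:  # keep order, drop duplicates
                result.append(h)
    return result

# Illustrative inventory mirroring the built-in k8s_cluster layout
inventory_groups = {
    "k8s_cluster": {"groups": ["kube_control_plane", "kube_worker"]},
    "kube_control_plane": {"hosts": ["node1"]},
    "kube_worker": {"hosts": ["node1", "node2"]},
}
```

Here `hosts_in_group("k8s_cluster", inventory_groups)` resolves to `["node1", "node2"]`: node1 appears in both subgroups but is counted once.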
**vars**: Global variables, effective for all hosts.

Variable priority: host variables > group variables > global variables.
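The precedence above amounts to a layered merge; a minimal sketch, assuming later `update` calls override earlier ones (the function name is illustrative, not kubekey's code):

```python
def resolve_vars(host, inventory):
    """Merge variables for one host: host vars > group vars > global vars."""
    merged = dict(inventory.get("vars", {}))                 # global (lowest)
    for group in inventory.get("groups", {}).values():
        if host in group.get("hosts", []):
            merged.update(group.get("vars", {}))             # group overrides global
    merged.update(inventory.get("hosts", {}).get(host, {}))  # host wins
    return merged
```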
### Global Configuration

A YAML file without template syntax, passed in via the `-c` parameter (`kk -c config.yaml ...`) and effective on each host.

```yaml
apiVersion: kubekey.kubesphere.io/v1
kind: Config
metadata:
  name: default
spec:
  k: v
  #...
```

Parameters can be of any type.
### Parameters Defined in Templates

Parameters defined in templates include:

- Parameters defined in the `vars` and `vars_files` fields of playbooks.
- Parameters defined in `defaults/main.yaml` of roles.
- Parameters defined in the `vars` field of roles.
- Parameters defined in the `vars` field of tasks.

## Dynamic Variables

Dynamic variables are generated during node execution and include:

- Parameters defined by `gather_facts`.
- Parameters defined by `register`.
- Parameters defined by `set_fact`.

Priority follows the order of definition: later definitions override earlier ones.

@@ -0,0 +1,2 @@

# kubernetes cluster manager

@@ -0,0 +1,83 @@
# architecture

*(architecture diagram)*

## pre_hook

The pre_hook allows users to execute scripts on the corresponding nodes before creating the cluster.

Execution flow:

1. Copy local scripts to the remote nodes at `/etc/kubekey/scripts/pre_install_{{ .inventory_hostname }}.sh`
2. Set the script file permissions to 0755
3. Iterate over all `pre_install_*.sh` files in `/etc/kubekey/scripts/` on each remote node and execute them

> **work_dir**: the working directory; defaults to the directory from which the command is executed.
>
> **inventory_hostname**: the host name defined in the inventory.yaml file.
## precheck

The precheck phase verifies that cluster nodes meet the installation requirements.

**os_precheck**: OS checks, including:

- **Hostname check**: Verify that the hostname format is valid (contains only lowercase letters, digits, '.', or '-', and starts and ends with a letter or digit)
- **OS version check**: Verify that the current OS is in the supported distribution list, unless unsupported distributions are allowed
- **Architecture check**: Verify that the system architecture is supported (amd64 or arm64)
- **Memory check**:
  - Master nodes: verify memory meets the minimum master node requirement
  - Worker nodes: verify memory meets the minimum worker node requirement
- **Kernel version check**: Verify that the kernel version meets the minimum requirement
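The hostname rule described above can be sketched with a regular expression; a minimal sketch of the stated constraints (the 253-character cap is an assumption borrowed from the usual DNS name limit, not stated in this document):

```python
import re

# lowercase letters, digits, '.' and '-', starting and ending
# with a letter or digit
HOSTNAME_RE = re.compile(r"^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$")

def hostname_valid(name):
    return len(name) <= 253 and bool(HOSTNAME_RE.match(name))
```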
**kubernetes_precheck**: Kubernetes-related checks, including:

- **IP address check**: Verify that the node defines internal_ipv4 or internal_ipv6; at least one must be set and non-empty
- **KubeVIP check**: When kube_vip is used as the control plane endpoint, verify that the kube_vip address is valid and not in use
- **Kubernetes version check**: Verify that the Kubernetes version meets the minimum requirement
- **Installed Kubernetes check**: Verify whether Kubernetes is already installed; if it is, check that the version matches the configured kube_version

**network_precheck**: Network connectivity checks, including:

- **Network interface check**: Verify that the node has the configured IPv4 or IPv6 network interfaces
- **CIDR configuration check**: Verify that the Pod CIDR and Service CIDR are properly formatted (dual-stack is supported: ipv4_cidr/ipv6_cidr or ipv4_cidr,ipv6_cidr)
- **Dual-stack support check**: When dual-stack networking is used, verify that the Kubernetes version supports it (v1.20.0+)
- **Network plugin check**: Verify that the configured network plugin is supported
- **Network address space check**: Ensure the node has enough network address space to accommodate the maximum number of pods
- **Hybridnet version check**: When the Hybridnet network plugin is used, verify that the Kubernetes version meets its requirement (v1.16.0+)
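The CIDR-format part of the checks above can be sketched with the standard library; a hedged illustration assuming the stated comma-separated dual-stack form and that a valid dual-stack pair must contain exactly one IPv4 and one IPv6 network (the function name is illustrative):

```python
import ipaddress

def check_cidr_config(cidrs):
    """Validate a CIDR setting: either a single network, or an
    "ipv4_cidr,ipv6_cidr" dual-stack pair with one of each family."""
    nets = [ipaddress.ip_network(c.strip()) for c in cidrs.split(",")]
    if len(nets) == 1:
        return True  # single-stack: just has to parse
    return len(nets) == 2 and sorted(n.version for n in nets) == [4, 6]
```

A malformed CIDR raises `ValueError` from `ip_network`, which a caller would surface as a precheck failure.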
|
||||
**etcd_precheck**: etcd cluster checks, including:

- **Deployment type check**: Validate the etcd deployment type (internal or external); for external etcd, ensure the etcd group is not empty and contains an odd number of nodes
- **Disk IO performance check**: Use fio to test write latency on the etcd data disk, ensuring disk sync latency (e.g., WAL fsync) meets cluster requirements
- **Installed etcd check**: Check whether etcd is already installed on the host
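
The disk IO check above can also be reproduced by hand. As a sketch (this is not KubeKey's actual precheck task; the fio parameters follow the commonly recommended etcd disk benchmark), it could be expressed as a playbook task using the command module:

```yaml
# Illustrative only: run the recommended etcd disk benchmark via the
# command module and keep the output for inspection.
- name: benchmark etcd data disk fsync latency
  command: >-
    fio --rw=write --ioengine=sync --fdatasync=1
    --directory=/var/lib/etcd --size=22m --bs=2300
    --name=etcd-disk-check
  register: fio_result
```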
**cri_precheck**: Container runtime checks, including:

- **Container manager check**: Verify that the configured container manager is supported (docker or containerd)
- **containerd version check**: When containerd is used, verify that its version meets the minimum requirement

**nfs_precheck**: NFS storage checks, including:

- **NFS server count check**: Verify that there is only one NFS server node in the cluster, ensuring the NFS deployment is unique

**image_registry_precheck**: Image registry checks, including:

- **Required software check**: Verify that `docker_version` and `dockercompose_version` are configured and not empty. The image registry is installed via Docker Compose, so missing required software causes the installation to fail.
## init

The init phase prepares and constructs all resources required for cluster installation, including:

- **Package download**: Download binaries for Kubernetes, container runtimes, network plugins, and other core components
- **Helm chart preparation**: Fetch and verify Helm charts for subsequent application deployment
- **Container image pull**: Download the container images required by cluster components
- **Offline package construction**: For offline installation, bundle all dependencies (binaries, images, charts, etc.) into a complete offline package
- **Certificate management**: Generate the certificates required for cluster installation and inter-component communication, including CA and service certificates
## install

The install phase is the core of KubeKey. It deploys and configures the Kubernetes cluster on the nodes, including:

**install nfs**: Install the NFS service on nodes in the `nfs` group.

**install image_registry**: Install an image registry on nodes in the `image_registry` group. harbor and registry are currently supported.

**install etcd**: Install etcd on nodes in the `etcd` group.

**install cri**: Install a container runtime on nodes in the `k8s_cluster` group. docker and containerd are supported.

**kubernetes_install**: Install Kubernetes on nodes in the `k8s_cluster` group.

**install helm**: Install additional Helm applications on the existing Kubernetes cluster, including CNI plugins (calico, cilium, flannel, hybridnet, kubeovn, multus)
## post_hook

The post_hook phase executes after cluster installation, handling final configuration and validation.

Execution flow:

1. Copy local scripts to remote nodes at `/etc/kubekey/scripts/post_install_{{ .inventory_hostname }}.sh`
2. Set the script file permission to 0755
3. Iterate over all `post_install_*.sh` files in `/etc/kubekey/scripts/` on each remote node and execute them

> **work_dir**: working directory, defaults to the directory where the command is executed.

> **inventory_hostname**: the host name defined in the inventory.yaml file.
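As a sketch, a post-install hook placed under `/etc/kubekey/scripts/` might look like the following. Only the location and naming pattern come from the flow above; the script body and log path are illustrative:

```shell
#!/bin/bash
# Illustrative post_install script body: record that the hook ran on this node.
set -e
echo "post-install hook ran on $(hostname)" >> /tmp/kk-post-install.log
```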
# image_registry

The image_registry module allows users to install an image registry. Both `harbor` and `docker-registry` types are supported.

## Requirements

- One or more computers running a deb/rpm-compatible Linux OS, e.g., Ubuntu or CentOS.
- Each machine should have at least 8 GB of memory; insufficient memory may limit application performance.
- Control plane nodes should have at least 4 CPU cores.
- Full network connectivity between all machines in the cluster. A public or private network can be used.
- When using local storage, each machine should have 100 GB of high-speed disk space.

## Install Harbor

### Build Inventory
```yaml
apiVersion: kubekey.kubesphere.io/v1
kind: Inventory
metadata:
  name: default
spec:
  hosts: # You can set all nodes here or assign nodes to specific groups.
    # node1:
    #   connector:
    #     type: ssh
    #     host: node1
    #     port: 22
    #     user: root
    #     password: 123456
  groups:
    # all Kubernetes nodes
    k8s_cluster:
      groups:
        - kube_control_plane
        - kube_worker
    # control plane nodes
    kube_control_plane:
      hosts:
        - localhost
    # worker nodes
    kube_worker:
      hosts:
        - localhost
    # etcd nodes when etcd_deployment_type is external
    etcd:
      hosts:
        - localhost
    image_registry:
      hosts:
        - localhost
    # nfs nodes for registry storage and Kubernetes NFS storage
    # nfs:
    #   hosts:
    #     - localhost
```
Set the `image_registry` group.

### Installation

Harbor is the default image registry.

1. Precheck before installation
   ```shell
   kk precheck image_registry -i inventory.yaml --set harbor_version=v2.10.1,docker_version=24.0.7,dockercompose_version=v2.20.3
   ```

2. Installation
   - Standalone installation

     `image_registry` can be installed independently of the cluster.
     ```shell
     kk init registry -i inventory.yaml --set harbor_version=v2.10.1,docker_version=24.0.7,dockercompose_version=v2.20.3
     ```

   - Automatic installation during cluster creation

     When creating a cluster, KubeKey detects whether `harbor` is installed on the `image_registry` nodes; if not, it installs `harbor` based on the configuration.
     ```shell
     kk create cluster -i inventory.yaml --set harbor_version=v2.10.1,docker_version=24.0.7,dockercompose_version=v2.20.3
     ```
### Harbor High Availability

Harbor HA can be implemented in two ways:

1. All Harbor instances share a single storage service.

   This is the official method, suitable for installation within a Kubernetes cluster. It requires separate PostgreSQL and Redis services.
   Reference: https://goharbor.io/docs/edge/install-config/harbor-ha-helm/

2. Each Harbor instance has its own storage service.

   This is the KubeKey method, suitable for server deployment.

   - load balancer: implemented by deploying keepalived via Docker Compose.
   - harbor service: implemented by deploying Harbor via Docker Compose.
   - sync images: achieved using Harbor replication.

Installation example:
```shell
./kk init registry -i inventory.yaml --set image_registry.ha_vip=xx.xx.xx.xx --set harbor_version=v2.10.1,docker_version=24.0.7,dockercompose_version=v2.20.3 --set keepalived_version=2.0.20,artifact.artifact_url.keepalived.amd64=keepalived-2.0.20-linux-amd64.tgz
```

Steps:
1. Set multiple nodes in the `image_registry` group in the inventory.
2. Set `image_registry.ha_vip`, which is the load-balancing entry point.
3. Set `keepalived_version` and `artifact.artifact_url.keepalived.amd64`. Keepalived provides the load balancing. KubeKey does not provide a download address for it, so you need to package it manually.
   ```shell
   # download the keepalived image
   docker pull osixia/keepalived:{{ .keepalived_version }}
   # package the image
   docker save -o keepalived-{{ .keepalived_version }}-linux-{{ .binary_type }}.tgz osixia/keepalived:{{ .keepalived_version }}
   # move the image package to the work directory
   mv keepalived-{{ .keepalived_version }}-linux-{{ .binary_type }}.tgz {{ .binary_dir }}/image-registry/keepalived/{{ .keepalived_version }}/{{ .binary_type }}/
   ```
   - `binary_type`: machine architecture (currently amd64 and arm64, auto-detected via `gather_fact`)
   - `binary_dir`: software package storage path, usually `{{ .work_dir }}/kubekey`.

4. Set `harbor_version`, `docker_version`, and `dockercompose_version`. Harbor is installed via Docker Compose.
## Install Registry

### Build Inventory
```yaml
apiVersion: kubekey.kubesphere.io/v1
kind: Inventory
metadata:
  name: default
spec:
  hosts: # You can set all nodes here or assign nodes to specific groups.
    # node1:
    #   connector:
    #     type: ssh
    #     host: node1
    #     port: 22
    #     user: root
    #     password: 123456
  groups:
    k8s_cluster:
      groups:
        - kube_control_plane
        - kube_worker
    kube_control_plane:
      hosts:
        - localhost
    kube_worker:
      hosts:
        - localhost
    etcd:
      hosts:
        - localhost
    image_registry:
      hosts:
        - localhost
    # nfs:
    #   hosts:
    #     - localhost
```
### Build Registry Image Package

KubeKey does not provide an offline registry image package, so manual packaging is required.
```shell
# download the registry image
docker pull registry:{{ .docker_registry_version }}
# package the image
docker save -o docker-registry-{{ .docker_registry_version }}-linux-{{ .binary_type }}.tgz registry:{{ .docker_registry_version }}
# move the image package to the work directory
mv docker-registry-{{ .docker_registry_version }}-linux-{{ .binary_type }}.tgz {{ .binary_dir }}/image-registry/docker-registry/{{ .docker_registry_version }}/{{ .binary_type }}/
```
- `binary_type`: machine architecture (amd64 or arm64, auto-detected via `gather_fact`)
- `binary_dir`: software package storage path, usually `{{ .work_dir }}/kubekey`.

### Installation

Set `image_registry.type` to `docker-registry` to install the registry.

1. Precheck
   ```shell
   kk precheck image_registry -i inventory.yaml --set image_registry.type=docker-registry --set docker_registry_version=2.8.3,docker_version=24.0.7,dockercompose_version=v2.20.3
   ```

2. Installation
   - Standalone installation
     ```shell
     kk init registry -i inventory.yaml --set image_registry.type=docker-registry --set docker_registry_version=2.8.3,docker_version=24.0.7,dockercompose_version=v2.20.3 --set artifact.artifact_url.docker_registry.amd64=docker-registry-2.8.3-linux.amd64.tgz
     ```

   - Automatic installation during cluster creation
     ```shell
     kk create cluster -i inventory.yaml --set image_registry.type=docker-registry --set docker_registry_version=2.8.3,docker_version=24.0.7,dockercompose_version=v2.20.3 --set artifact.artifact_url.docker_registry.amd64=docker-registry-2.8.3-linux.amd64.tgz
     ```
### Registry High Availability

- load balancer: implemented by deploying keepalived via Docker Compose.
- registry service: implemented by deploying the registry via Docker Compose.
- storage service: Registry HA can be achieved with shared storage. Docker Registry supports multiple storage backends, including:
  - **filesystem**: local storage. By default, Docker Registry uses the local disk. For HA, mount the local directory on NFS or other shared storage. Example:
    ```yaml
    image_registry:
      docker_registry:
        storage:
          filesystem:
            rootdir: /opt/docker-registry/data
            nfs_mount: /repository/docker-registry # optional, mount rootdir on NFS
    ```
    Shared storage keeps the data consistent across all registry nodes.

  - **azure**: Azure Blob Storage as the backend. Suitable for Azure cloud deployments. Example:
    ```yaml
    image_registry:
      docker_registry:
        storage:
          azure:
            accountname: <your-account-name>
            accountkey: <your-account-key>
            container: <your-container-name>
    ```

  - **gcs**: Google Cloud Storage as the backend. Suitable for GCP deployments. Example:
    ```yaml
    image_registry:
      docker_registry:
        storage:
          gcs:
            bucket: <your-bucket-name>
            keyfile: /path/to/keyfile.json
    ```

  - **s3**: Amazon S3 or S3-compatible storage. Suitable for AWS or private clouds. Example:
    ```yaml
    image_registry:
      docker_registry:
        storage:
          s3:
            accesskey: <your-access-key>
            secretkey: <your-secret-key>
            region: <your-region>
            bucket: <your-bucket-name>
    ```

> **Note:**
> 1. For shared storage (NFS, S3, GCS, Azure Blob), deploy at least 2 registry instances behind load balancing (e.g., keepalived + nginx) for HA access.
> 2. Ensure all registry nodes have read/write permissions and network connectivity to the shared storage.
# add_hostvars Module

The add_hostvars module allows users to set variables that take effect on the specified hosts.

## Parameters

| Parameter | Description | Type | Required | Default |
|-----------|-------------|------|----------|---------|
| hosts | Target hosts on which to set the variables | String or array of strings | No | - |
| vars | Variables to set | Map | No | - |

## Usage Examples

1. Set a string variable
   ```yaml
   - name: set string
     add_hostvars:
       hosts: all
       vars:
         c: d
   ```

2. Set a map variable
   ```yaml
   - name: set map
     add_hostvars:
       hosts: all
       vars:
         a:
           b: c
   ```
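
A variable set this way can then be read on the targeted hosts. A sketch combining add_hostvars with the debug module (the group name `kube_worker` follows the inventory examples in these docs):

```yaml
# Illustrative: set a variable on one group, then read it back there.
- name: tag worker nodes
  add_hostvars:
    hosts: kube_worker
    vars:
      node_role: worker

- name: read the variable back on those hosts
  debug:
    msg: "{{ .node_role }}"
```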
# assert Module

The assert module allows users to perform assertions on conditions.

## Parameters

| Parameter | Description | Type | Required | Default |
|-------------|-------------|------|----------|---------|
| that | Assertion condition. Must use [template syntax](../101-syntax.md). | Array or string | Yes | - |
| success_msg | Message written to the task result stdout when the assertion evaluates to true. | String | No | True |
| fail_msg | Message written to the task result stderr when the assertion evaluates to false. | String | No | False |
| msg | Same as fail_msg, with lower priority than fail_msg. | String | No | False |

## Usage Examples

1. Assertion condition as a string
   ```yaml
   - name: assert single condition
     assert:
       that: eq 1 1
   ```
   Task execution result:
   stdout: "True"
   stderr: ""

2. Assertion condition as an array
   ```yaml
   - name: assert multi-condition
     assert:
       that:
         - eq 1 1
         - eq 1 2
   ```
   Task execution result:
   stdout: "False"
   stderr: "False"

3. Set custom success output
   ```yaml
   - name: assert is succeed
     assert:
       that: eq 1 1
       success_msg: "It's succeed"
   ```
   Task execution result:
   stdout: "It's succeed"
   stderr: ""

4. Set custom failure output
   ```yaml
   - name: assert is failed
     assert:
       that: eq 1 2
       fail_msg: "It's failed"
       msg: "It's failed!"
   ```
   Task execution result:
   stdout: "False"
   stderr: "It's failed"
# command (shell) Module

The command (or shell) module allows users to execute commands. How a command is executed is determined by the corresponding connector implementation.

## Parameters

| Parameter | Description | Type | Required | Default |
|-----------|------------|------|---------|---------|
| command | The command to execute. Template syntax can be used. | string | yes | - |

## Usage Examples

1. Execute a shell command
   When the connector type is `local` or `ssh`:
   ```yaml
   - name: execute shell command
     command: echo "aaa"
   ```

2. Execute a Kubernetes command
   When the connector type is `kubernetes`:
   ```yaml
   - name: execute kubernetes command
     command: kubectl get pod
   ```
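
A command's output can also be captured for later tasks. A sketch using `register` (which also appears in the Prometheus module examples in these docs); the `.stdout` field name on the registered result is an assumption:

```yaml
# Illustrative: capture a command's output, then print it.
- name: capture command output
  command: echo "hello"
  register: echo_result

- name: print the captured stdout
  debug:
    msg: "{{ .echo_result.stdout }}"
```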
# copy Module

The copy module allows users to copy files or directories to the connected target hosts.

## Parameters

| Parameter | Description | Type | Required | Default |
|-----------|------------|------|---------|---------|
| src | Source file or directory path | string | No (required if content is empty) | - |
| content | Content to write to the destination file | string | No (required if src is empty) | - |
| dest | Destination path on the target host | string | Yes | - |

## Usage Examples

1. Copy a file with a relative path to the target host
   Relative paths are resolved under the `files` directory of the current task. The current task path is specified by the task annotation `kubesphere.io/rel-path`.
   ```yaml
   - name: copy relative path
     copy:
       src: a.yaml
       dest: /tmp/b.yaml
   ```

2. Copy a file with an absolute path to the target host
   A local file with an absolute path:
   ```yaml
   - name: copy absolute path
     copy:
       src: /tmp/a.yaml
       dest: /tmp/b.yaml
   ```

3. Copy a directory to the target host
   Copies all files and subdirectories under the directory to the target host:
   ```yaml
   - name: copy dir
     copy:
       src: /tmp
       dest: /tmp
   ```

4. Copy content to a file on the target host
   ```yaml
   - name: copy content
     copy:
       content: hello
       dest: /tmp/b.txt
   ```
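
Building on example 4, `content` is convenient for small per-host files. A sketch; that `content` is rendered with template syntax like other string parameters is an assumption:

```yaml
# Illustrative: write a per-host marker file.
# Whether `content` is template-rendered is an assumption.
- name: write per-host marker file
  copy:
    content: "configured by kubekey on {{ .inventory_hostname }}"
    dest: /tmp/kk-marker.txt
```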
# debug Module

The debug module lets users print variables.

## Parameters

| Parameter | Description | Type | Required | Default |
|-----------|------------|------|---------|---------|
| msg | Content to print | string | Yes | - |

## Usage Examples

1. Print a string
   ```yaml
   - name: debug string
     debug:
       msg: I'm {{ .name }}
   ```
   If the variable `name` is `kubekey`, the output will be:
   ```txt
   DEBUG:
   I'm kubekey
   ```

2. Print a map
   ```yaml
   - name: debug map
     debug:
       msg: >-
         {{ .product }}
   ```
   If the variable `product` is a map, e.g., `{"name":"kubekey"}`, the output will be:
   ```txt
   DEBUG:
   {
     "name": "kubekey"
   }
   ```

3. Print an array
   ```yaml
   - name: debug array
     debug:
       msg: >-
         {{ .version }}
   ```
   If the variable `version` is an array, e.g., `["1","2"]`, the output will be:
   ```txt
   DEBUG:
   [
     "1",
     "2"
   ]
   ```
# fetch Module

The fetch module allows users to pull files from a remote host to the local machine.

## Parameters

| Parameter | Description | Type | Required | Default |
|-----------|------------|------|---------|---------|
| src | Path of the file on the remote host to fetch | string | Yes | - |
| dest | Path on the local machine where the fetched file is saved | string | Yes | - |

## Usage Examples

1. Fetch a file
   ```yaml
   - name: fetch file
     fetch:
       src: /tmp/src.yaml
       dest: /tmp/dest.yaml
   ```
# gen_cert Module

The gen_cert module allows users to validate or generate certificate files.

## Parameters

| Parameter | Description | Type | Required | Default |
|-----------|------------|------|---------|---------|
| root_key | Path to the CA certificate key | string | No | - |
| root_cert | Path to the CA certificate | string | No | - |
| date | Certificate validity duration | string | No | 1y |
| policy | Certificate generation policy (Always, IfNotPresent, None) | string | No | IfNotPresent |
| sans | Subject Alternative Names: the allowed IPs and DNS names | array of strings | No | - |
| cn | Common Name | string | Yes | - |
| out_key | Path where the certificate key is generated | string | Yes | - |
| out_cert | Path where the certificate is generated | string | Yes | - |

Certificate generation policies:

- **Always**: Always regenerate the certificate and overwrite existing files, regardless of whether `out_key` and `out_cert` exist.
- **IfNotPresent**: Generate a new certificate only if `out_key` and `out_cert` do not exist; if the files exist, validate them first and regenerate only if validation fails.
- **None**: If `out_key` and `out_cert` exist, only validate them without generating or overwriting; if the files do not exist, no new certificate is generated.

These policies allow flexible control of certificate generation and validation for different scenarios.

## Usage Examples

1. Generate a self-signed CA certificate
   When generating a CA certificate, `root_key` and `root_cert` should be empty.
   ```yaml
   - name: Generate root CA file
     gen_cert:
       cn: root
       date: 87600h
       policy: IfNotPresent
       out_key: /tmp/pki/root.key
       out_cert: /tmp/pki/root.crt
   ```

2. Validate or issue a certificate
   For non-CA certificates, `root_key` and `root_cert` should point to an existing CA certificate.
   ```yaml
   - name: Generate registry image cert file
     gen_cert:
       root_key: /tmp/pki/root.key
       root_cert: /tmp/pki/root.crt
       cn: server
       sans:
         - 127.0.0.1
         - localhost
       date: 87600h
       policy: IfNotPresent
       out_key: /tmp/pki/server.key
       out_cert: /tmp/pki/server.crt
     when: .groups.image_registry | default list | empty | not
   ```
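
The certificates the module produces can be inspected with standard tooling. A minimal sketch using openssl (not part of KubeKey; the paths are illustrative) that mirrors the generate-then-validate flow of example 1:

```shell
# Generate a self-signed CA, comparable to gen_cert with cn=root (illustrative).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=root" \
    -keyout /tmp/pki-demo-root.key -out /tmp/pki-demo-root.crt -days 3650
# Validation similar to policy=None: confirm the cert has not expired.
openssl x509 -in /tmp/pki-demo-root.crt -noout -checkend 0 && echo "cert valid"
```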
# image Module

The image module allows users to pull images to a local directory or push images to a remote registry.

## Parameters

| Parameter | Description | Type | Required | Default |
|-----------|------------|------|---------|---------|
| pull | Pull images from a remote registry to a local directory | map | No | - |
| pull.images_dir | Local directory in which to store images | string | No | - |
| pull.manifests | List of images to pull | array of strings | Yes | - |
| pull.auths | Authentication information for remote registries | array of objects | No | - |
| pull.auths.repo | Registry address the authentication applies to | string | No | - |
| pull.auths.username | Username for registry authentication | string | No | - |
| pull.auths.password | Password for registry authentication | string | No | - |
| pull.platform | Image platform/architecture | string | No | - |
| pull.skip_tls_verify | Skip TLS verification for the remote registry | bool | No | - |
| push | Push images from a local directory to a remote registry | map | No | - |
| push.images_dir | Local directory storing the images | string | No | - |
| push.username | Username for remote registry authentication | string | No | - |
| push.password | Password for remote registry authentication | string | No | - |
| push.skip_tls_verify | Skip TLS verification for the remote registry | bool | No | - |
| push.src_pattern | Regex to filter images in the local directory | map | No | - |
| push.dest | Template for the destination image in the remote registry | map | No | - |

Each image in the local directory corresponds to a dest image. The directory layout is:
```txt
|-- images_dir/
|   |-- registry1/
|   |   |-- image1/
|   |   |   |-- manifests/
|   |   |   |   |-- reference
|   |   |-- image2/
|   |   |   |-- manifests/
|   |   |   |   |-- reference
|   |-- registry2/
|   |   |-- image1/
|   |   |   |-- manifests/
|   |   |   |   |-- reference
```
For each src image there is a corresponding dest. The dest template supports the following variables:
- `{{ .module.image.src.reference.registry }}`: registry of the local image
- `{{ .module.image.src.reference.repository }}`: repository of the local image
- `{{ .module.image.src.reference.reference }}`: reference of the local image

## Usage Examples

1. Pull images
   ```yaml
   - name: pull images
     image:
       pull:
         images_dir: /tmp/images/
         platform: linux/amd64
         manifests:
           - "docker.io/kubesphere/ks-apiserver:v4.1.3"
           - "docker.io/kubesphere/ks-controller-manager:v4.1.3"
           - "docker.io/kubesphere/ks-console:3.19"
   ```

2. Push images to a remote registry
   ```yaml
   - name: push images
     image:
       push:
         images_dir: /tmp/images/
         dest: hub.kubekey/{{ .module.image.src.reference.repository }}:{{ .module.image.src.reference.reference }}
   ```
   For example:
   - docker.io/kubesphere/ks-apiserver:v4.1.3 => hub.kubekey/kubesphere/ks-apiserver:v4.1.3
   - docker.io/kubesphere/ks-controller-manager:v4.1.3 => hub.kubekey/kubesphere/ks-controller-manager:v4.1.3
   - docker.io/kubesphere/ks-console:3.19 => hub.kubekey/kubesphere/ks-console:3.19
# include_vars Module

The include_vars module allows users to apply variables to the specified hosts.

## Parameters

| Parameter | Description | Type | Required | Default |
|-----------|------------|------|---------|---------|
| include_vars | Path to the referenced file; must be in YAML/YML format | string | Yes | - |

## Usage Examples

1. Load variables from a file
   ```yaml
   - name: set other var file
     include_vars: "{{ .os.architecture }}/var.yaml"
   ```
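
As a sketch, the vars file referenced above is an ordinary YAML mapping whose keys become variables on the host. The file name assumes `{{ .os.architecture }}` resolved to `amd64`; the keys and values below are illustrative only:

```yaml
# amd64/var.yaml (illustrative contents)
binary_arch: amd64
download_mirror: https://example.com/mirror
```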
# Prometheus Module

The Prometheus module allows users to query metric data from a Prometheus server. It uses a dedicated Prometheus connector and supports running PromQL queries, formatting results, and fetching server information.

## Configuration

To use the Prometheus module, define the Prometheus host and connection info in the inventory:

```yaml
prometheus:
  connector:
    type: prometheus
    host: http://prometheus-server:9090 # URL of the Prometheus server
    username: admin                     # Optional: basic auth username
    password: password                  # Optional: basic auth password
    token: my-token                     # Optional: Bearer token
    timeout: 15s                        # Optional: request timeout (default 10s)
    headers:                            # Optional: custom HTTP headers
      X-Custom-Header: custom-value
```

## Parameters

| Parameter | Description | Type | Required | Default |
|-----------|------------|------|---------|---------|
| query | PromQL query statement | string | Yes (unless fetching server info) | - |
| format | Result format: raw, value, table | string | No | raw |
| time | Query time (RFC3339 or Unix timestamp) | string | No | current time |

## Output

The module returns query results or server information depending on the specified format:

- **raw**: returns the original JSON response
- **value**: extracts a single scalar/vector value if possible
- **table**: formats vector results as a table with columns for metric, value, and timestamp

## Usage Examples

1. Basic query:
   ```yaml
   - name: Get Prometheus metrics
     prometheus:
       query: up
     register: prometheus_result
   ```

2. With format option:
   ```yaml
   - name: Get CPU idle time
     prometheus:
       query: sum(rate(node_cpu_seconds_total{mode='idle'}[5m]))
       format: value
     register: cpu_idle
   ```

3. Specify time parameter:
   ```yaml
   - name: Get historical Goroutines count
     prometheus:
       query: go_goroutines
       time: 2023-01-01T12:00:00Z
     register: goroutines
   ```

4. Fetch Prometheus server information:
   ```yaml
   - name: Fetch Prometheus server info
     fetch:
       src: api/v1/status/buildinfo
       dest: info.json
   ```

5. Format results as a table:
   ```yaml
   - name: Get node CPU usage and format as table
     prometheus:
       query: node_cpu_seconds_total{mode="idle"}
       format: table
     register: cpu_table
   ```

## Notes

1. The `query` parameter is required when executing queries
2. Time must be in RFC3339 format (e.g., 2023-01-01T12:00:00Z) or a Unix timestamp
3. Table formatting only applies to vector results; other types return an error
4. For security, HTTPS connections to Prometheus are recommended
# result Module

The result module allows users to set variables that are displayed in the playbook's status detail.

## Parameters

| Parameter | Description | Type | Required | Default |
|-----------|------------|------|---------|---------|
| any | Any parameter to set | string or map | No | - |

## Usage Examples

1. Set string parameters
   ```yaml
   - name: set string
     result:
       a: b
       c: d
   ```
   The status of the playbook will show:
   ```yaml
   apiVersion: kubekey.kubesphere.io/v1
   kind: Playbook
   status:
     detail:
       a: b
       c: d
     phase: Succeeded
   ```

2. Set map parameters
   ```yaml
   - name: set map
     result:
       a:
         b: c
   ```
   The status of the playbook will show:
   ```yaml
   apiVersion: kubekey.kubesphere.io/v1
   kind: Playbook
   status:
     detail:
       a:
         b: c
     phase: Succeeded
   ```

3. Set multiple results
   ```yaml
   - name: set result1
     result:
       k1: v1

   - name: set result2
     result:
       k2: v2

   - name: set result3
     result:
       k2: v3
   ```
   All results are merged. For duplicate keys, the last value set takes precedence.
   The status of the playbook will show:
   ```yaml
   apiVersion: kubekey.kubesphere.io/v1
   kind: Playbook
   status:
     detail:
       k1: v1
       k2: v3
     phase: Succeeded
   ```
# set_fact Module

The set_fact module allows users to set variables that take effect on the currently executing host.

## Parameters

| Parameter | Description | Type | Required | Default |
|-----------|------------|------|---------|---------|
| any | Any parameter to set | string or map | No | - |

## Usage Examples

1. Set string variables
   ```yaml
   - name: set string
     set_fact:
       a: b
       c: d
   ```

2. Set map variables
   ```yaml
   - name: set map
     set_fact:
       a:
         b: c
   ```
# setup Module

The setup module is the underlying implementation of gather_fact; it allows users to retrieve information about hosts.

## Parameters

None.

## Usage Examples

1. Use gather_fact in a playbook

```yaml
- name: playbook
  hosts: localhost
  gather_fact: true
```

2. Use setup in a task

```yaml
- name: setup
  setup: {}
```
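The sort of host information a gather step returns can be illustrated with Python's standard library. The keys below are illustrative only and are not the module's real output format:

```python
import platform
import socket

def gather_facts():
    """Collect a few basic facts about the current host."""
    return {
        "hostname": socket.gethostname(),
        "os": platform.system(),
        "arch": platform.machine(),
    }

facts = gather_facts()
print(sorted(facts))  # ['arch', 'hostname', 'os']
```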
# template Module

The template module allows users to parse a template file and copy the result to the connected target host.

## Parameters

| Parameter | Description | Type | Required | Default |
|-----------|------------|------|---------|---------|
| src | Path to the source file or directory | string | No (required if content is empty) | - |
| content | Inline template content to render | string | No (required if src is empty) | - |
| dest | Destination path on the target host | string | Yes | - |

## Usage Examples

1. Copy a file by relative path to the target host

Relative paths are resolved under the `templates` directory of the current task. The current task path is specified by the task annotation `kubesphere.io/rel-path`.

```yaml
- name: copy relative path
  template:
    src: a.yaml
    dest: /tmp/b.yaml
```

2. Copy a file by absolute path to the target host

A local template file referenced by absolute path:

```yaml
- name: copy absolute path
  template:
    src: /tmp/a.yaml
    dest: /tmp/b.yaml
```

3. Copy a directory to the target host

All template files in the directory are parsed and copied to the target host:

```yaml
- name: copy dir
  template:
    src: /tmp
    dest: /tmp
```

4. Copy inline content to the target host

```yaml
- name: copy content
  template:
    content: hello
    dest: /tmp/b.txt
```
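The render-then-copy flow can be sketched with Python's `string.Template`. This is a minimal stand-in: KubeKey uses its own template engine and transfer logic, and the function name here is illustrative:

```python
import os
import tempfile
from string import Template

def render_to_dest(src_text, variables, dest):
    """Render a template string and write the result to the destination path."""
    with open(dest, "w") as f:
        f.write(Template(src_text).substitute(variables))

# Render into a temporary directory instead of a real remote host.
dest = os.path.join(tempfile.mkdtemp(), "b.txt")
render_to_dest("hello $name", {"name": "world"}, dest)
print(open(dest).read())  # hello world
```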

*(New asset: KubeKey logo SVG, `docs/images/kubekey-logo.svg`, 2.8 KiB.)*
The post_hook phase runs after the cluster installation is complete and is responsible for the cluster's final configuration and …

> **work_dir**: the working directory; defaults to the directory where the command is run.
> **inventory_hostname**: the name of the corresponding host as defined in the Inventory.yaml file.