Merge pull request #3205 from zhuxiujuan28/docs

[Documentation] Add offline installation documentation
KubeSphere CI Bot 2024-11-01 14:06:08 +08:00 committed by GitHub
commit c86a36036e
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
595 changed files with 25033 additions and 513 deletions

View File

@ -0,0 +1,35 @@
// :ks_include_id: 9b12ede280884331985685925cf5dfc4
* The pod list provides the following information:
+
--
[%header,cols="1a,4a"]
|===
|Parameter |Description
|Name
|The name of the pod.
|Status
|The status of the pod.
include::pods-para-podStatus_overview.adoc[]
// Pod statuses differ here
|Node
|The node where the pod is located and the IP address of the node.
include::pods-para-podIpPool.adoc[]
// |App
// |The app to which the pod belongs.
|Project
|The project to which the pod belongs.
|Cluster
|The cluster to which the pod belongs.
|Update Time
|The update time of the pod.
|===
--

View File

@ -0,0 +1,13 @@
// :ks_include_id: 8cc83a9c58b8460cbcf369b1a07288b1
* **Running**: The pod has been assigned to a node, all containers in the pod have been created, and at least one container is running, starting, or restarting.
* **Waiting**: The pod has been accepted by the system, but at least one container has not been created or is not running. In this state, the pod may be waiting to be scheduled or waiting for a container image to be downloaded.
* **Completed**: All containers in the pod have terminated successfully (with exit code 0) and will not be restarted.
* **Failed**: All containers in the pod have terminated, and at least one container terminated with a non-zero exit code.
* **Unknown**: The system cannot obtain the status of the pod, usually because the system failed to communicate with the host where the pod is located.
// "已完成" vs "成功完成": https://github.com/kubesphere/project/issues/3983#issuecomment-2246982909
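These states correspond to the standard Kubernetes pod phases (Pending, Running, Succeeded, Failed, Unknown). If you also have kubectl access to the cluster, you can cross-check a pod's phase as in the sketch below; the namespace and pod name are placeholders.

----
# List pods and their status in a project (namespace); "demo-project" is a placeholder.
kubectl get pods -n demo-project

# Print only the phase reported by the Kubernetes API for a single pod.
kubectl get pod demo-pod -n demo-project -o jsonpath='{.status.phase}{"\n"}'
----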

View File

@ -1,2 +1,2 @@
// :ks_include_id: 797c5d8830fe45bfb4452dd98086d8ed
本节介绍如何创建应用路由。
This section describes how to create an Ingress.

View File

@ -1,9 +1,9 @@
// :ks_include_id: 4f3a812c48b342fdb0cec7f38b00ce81
本节介绍如何删除应用路由。
This section describes how to delete an Ingress.
// Warning
include::../../../../_ks_components-en/admonitions/warning.adoc[]
删除应用路由后将无法通过应用路由访问其后端的服务,请谨慎执行此操作。
After deleting an Ingress, you will no longer be able to access its backend services through the Ingress. Please proceed with caution.
include::../../../../_ks_components-en/admonitions/admonEnd.adoc[]
include::../../../../_ks_components-en/admonitions/admonEnd.adoc[]

View File

@ -1,2 +1,2 @@
// :ks_include_id: 9a1f0d5fdb294c79a6051a90fe1a17be
本节介绍如何编辑应用路由注解。
This section describes how to edit Ingress annotations.

View File

@ -1,4 +1,4 @@
// :ks_include_id: ab9cb5143fe449bb900ce47e7fb62049
本节介绍如何编辑应用路由信息。
This section describes how to edit Ingress information.
您可以编辑应用路由的别名和描述。{ks_product-en}不支持编辑已创建应用路由的名称。
You can edit the alias and description of an Ingress. KubeSphere does not support editing the name of an already created Ingress.

View File

@ -1,2 +1,2 @@
// :ks_include_id: b4c404ff621146f799e720597d3aac84
本节介绍如何编辑路由规则。
This section describes how to edit routing rules.

View File

@ -1,4 +1,4 @@
// :ks_include_id: c69900173bca4b109a4b8a178ce15e64
本节介绍如何管理应用路由。
This section describes how to manage ingresses.
应用路由用于对服务进行聚合并提供给集群外部访问。每个应用路由包含域名及其子路径到不同服务的映射规则。来自客户端的业务流量先发送给集群网关或项目网关,集群网关或项目网关根据应用路由中定义的规则将业务流量转发给不同的服务,从而实现对多个服务的反向代理。
Ingresses are used to aggregate services and provide external access. Each ingress contains a domain name and its sub-paths mapped to different services. Business traffic from clients is first sent to the cluster gateway or project gateway, which then forwards the traffic to different services based on the rules defined in the ingress, thereby achieving reverse proxy for multiple services.
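For reference, each rule described above corresponds to a standard Kubernetes Ingress object. The following is a minimal, hypothetical sketch (namespace, names, host, path, and port are placeholders), applied with kubectl:

----
# Minimal Ingress sketch: requests for example.com/test are forwarded to port 80
# of the Service "demo-service". All names, the host, and the path are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: demo-project
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /test
        pathType: Prefix
        backend:
          service:
            name: demo-service
            port:
              number: 80
EOF
----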

View File

@ -1,2 +1,2 @@
// :ks_include_id: 74e87c7e7c4a42b59f6c9013b617a2f7
本节介绍如何查看应用路由列表。
This section describes how to view the Ingress list.

View File

@ -1,2 +1,2 @@
// :ks_include_id: c74f0c52dbf440a98ed71f677036f155
本节介绍如何查看应用路由详情。
This section describes how to view Ingress details.

View File

@ -2,6 +2,6 @@
// Note
include::../../../../_ks_components-en/admonitions/note.adoc[]
{ks_product-en}的集群网关和项目网关底层基于 Nginx Ingress Controller 实现。您可以在应用路由上设置注解控制网关的行为。有关更多信息,请参阅 link:https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/[Nginx Ingress Controller 官方文档]。
The cluster gateway and project gateway in KubeSphere are implemented based on Nginx Ingress Controller. You can set annotations on the Ingress to control the behavior of the gateway. For more information, see link:https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/[Nginx Ingress Controller Documentation].
include::../../../../_ks_components-en/admonitions/admonEnd.adoc[]

View File

@ -1,24 +1,24 @@
// :ks_include_id: 570405898db841389a0ce7ed42a9a8e3
. 在**基本信息**页签,设置应用路由的基本信息,然后点击**下一步**。
. On the **Basic Information** tab, set the basic information for the Ingress, then click **Next**.
+
--
[%header,cols="1a,4a"]
|===
|参数 |描述
|Parameter |Description
|名称
|应用路由的名称。名称只能包含小写字母、数字和连字符(-),必须以小写字母或数字开头和结尾,最长 253 个字符。
|Name
|The name of the Ingress. The name can only contain lowercase letters, numbers, and hyphens (-), must start and end with a lowercase letter or number, and can be up to 253 characters long.
|别名
|应用路由的别名。不同应用路由的别名可以相同。别名只能包含中文、字母、数字和连字符(-),不得以连字符(-)开头或结尾,最长 63 个字符。
|Alias
|The alias of the Ingress. Different Ingresses can have the same alias. The alias can only contain Chinese characters, letters, numbers, and hyphens (-), cannot start or end with a hyphen (-), and can be up to 63 characters long.
|描述
|应用路由的描述。描述可包含任意字符,最长 256 个字符。
|Description
|The description of the Ingress. The description can contain any characters and can be up to 256 characters long.
|===
--
. 在**路由规则**页签,点击**添加路由规则**,设置路由规则参数,然后点击**下一步**。
. On the **Routing Rules** tab, click **Add Routing Rule**, set the routing rule parameters, then click **Next**.
+
--
ifdef::multicluster[]
@ -28,14 +28,14 @@ endif::[]
include::routes-para-routingRules.adoc[]
--
. 在**高级设置**页签,为应用路由设置标签和注解,然后点击**创建**。
. On the **Advanced Settings** tab, set labels and annotations for the Ingress, then click **Create**.
+
--
* 点击**添加**可设置多条标签或注解。
* Click **Add** to set multiple labels or annotations.
* 在已创建的标签或注解右侧点击image:/images/ks-qkcp/zh/icons/trash-light.svg[trash-light,18,18]可删除标签或注解。
* Click image:/images/ks-qkcp/zh/icons/trash-light.svg[trash-light,18,18] on the right side of a created label or annotation to delete it.
include::routes-note-annotations.adoc[]
应用路由创建完成后将显示在应用路由列表中。
--
After the Ingress is created, it will be displayed in the Ingress list.
--

View File

@ -1,4 +1,4 @@
// :ks_include_id: 1e5380a648764bae9ac650a53316501d
. 在需要删除的应用路由右侧点击image:/images/ks-qkcp/zh/icons/more.svg[more,18,18],然后在下拉列表中选择**删除**。
. Click image:/images/ks-qkcp/zh/icons/more.svg[more,18,18] on the right side of the ingress you want to delete, then select **Delete** from the dropdown list.
. 在**删除应用路由**对话框,输入应用路由的名称,然后点击**确定**。
. In the **Delete Ingress** dialog, enter the name of the ingress, then click **OK**.

View File

@ -1,8 +1,8 @@
// :ks_include_id: 6ec380c8bcbe4e1589334e0b050b0b6c
. 选择需要删除的应用路由左侧的复选框,然后在应用路由列表上方点击**删除**。
. Select the checkbox on the left side of the Ingresses you want to delete, then click **Delete** above the ingress list.
. 在**批量删除应用路由**对话框,输入应用路由的名称,然后点击**确定**。
. In the **Delete Multiple Ingresses** dialog, enter the names of the Ingresses, then click **OK**.
+
--
include::../../note-separateNamesByComma.adoc[]
--
--

View File

@ -1,12 +1,12 @@
// :ks_include_id: 2ead4c416e934d44b2a8a404251bdfe8
. 在需要操作的应用路由右侧点击image:/images/ks-qkcp/zh/icons/more.svg[more,18,18],然后在下拉列表中选择**编辑注解**。
. Click image:/images/ks-qkcp/zh/icons/more.svg[more,18,18] on the right side of the ingress you want to edit, then select **Edit Annotations** from the dropdown list.
. 在**编辑注解**对话框,设置注解键值对,然后点击**确定**。
. In the **Edit Annotations** dialog, set the annotation key-value pairs, then click **OK**.
+
--
* 点击**添加**可设置多条注解。
* Click **Add** to set multiple annotations.
* 在已创建的注解右侧点击image:/images/ks-qkcp/zh/icons/trash-light.svg[trash-light,18,18]可删除注解。
* Click image:/images/ks-qkcp/zh/icons/trash-light.svg[trash-light,18,18] on the right side of a created annotation to delete it.
include::routes-note-annotations.adoc[]
--
--

View File

@ -1,17 +1,17 @@
// :ks_include_id: 9f1f9315fbc0466396a168bfa897683f
. 在需要操作的应用路由右侧点击image:/images/ks-qkcp/zh/icons/more.svg[more,18,18],然后在下拉列表中选择**编辑信息**。
. Click image:/images/ks-qkcp/zh/icons/more.svg[more,18,18] on the right side of the Ingress you want to edit, then select **Edit Information** from the dropdown list.
. 在**编辑信息**对话框,设置应用路由的别名和描述,然后点击**确定**。
. In the **Edit Information** dialog, set the alias and description of the ingress, then click **OK**.
+
--
[%header,cols="1a,4a"]
|===
|参数 |描述
|Parameter |Description
|别名
|应用路由的别名。不同应用路由的别名可以相同。别名只能包含中文、字母、数字和连字符(-),不得以连字符(-)开头或结尾,最长 63 个字符。
|Alias
|The alias of the ingress. Different ingresses can have the same alias. The alias can only contain Chinese characters, letters, numbers, and hyphens (-), and cannot start or end with a hyphen (-), with a maximum length of 63 characters.
|描述
|应用路由的描述信息。描述可包含任意字符,最多包含 256 个字符。
|Description
|The description information of the ingress. The description can contain any characters, with a maximum of 256 characters.
|===
--
--

View File

@ -1,8 +1,8 @@
// :ks_include_id: b0e0fbee5bf54cfda0ac0d8847b90185
. 在需要操作的应用路由右侧点击image:/images/ks-qkcp/zh/icons/more.svg[more,18,18],然后在下拉列表中选择**编辑路由规则**。
. Click image:/images/ks-qkcp/zh/icons/more.svg[more,18,18] on the right side of the ingress you want to edit, then select **Edit Routing Rules** from the dropdown list.
. 在**编辑路由规则**对话框,设置路由规则,然后点击**确定**。
. In the **Edit Routing Rules** dialog, set the routing rules, then click **OK**.
+
--
include::routes-para-routingRules.adoc[]
--
--

View File

@ -1,2 +1,2 @@
// :ks_include_id: 01c521d890b44fab91dfa6803d6a6bb5
. 在应用路由列表中点击一个应用路由的名称打开其详情页面。
. Click the name of an Ingress in the Ingress list to open its details page.

View File

@ -1,2 +1,2 @@
// :ks_include_id: a3a4d6c4c46d4b8dbf054da9c20cd804
* 在列表上方点击搜索框并设置搜索条件,可按名称搜索应用路由。
* Click the search box at the top of the list to search for Ingresses by name.

View File

@ -1,70 +1,71 @@
// :ks_include_id: 02ac1cebc06f4893a036c2e77c21d999
. 在应用路由详情页面左侧的**属性**区域查看应用路由的详细信息。
. On the Ingress details page, view the detailed information of the Ingress in the **Attributes** area on the left.
+
--
[%header,cols="1a,4a"]
|===
|参数 |描述
|Parameter |Description
// |集群
// |应用路由的所属集群。
// |Cluster
// |The cluster to which the Ingress belongs.
|项目
|应用路由的所属项目。
|Project
|The project to which the Ingress belongs.
|应用
|应用路由对应的应用。
|App
|The app corresponding to the Ingress.
// |网关地址
// |Gateway Address
// |
// include::../gatewaySettings/gatewaySettings-para-address.adoc[]
|创建时间
|应用路由的创建时间。
|Creation Time
|The creation time of the Ingress.
|创建者
|创建应用路由的用户。
|Creator
|The user who created the Ingress.
|===
--
. 在应用路由详情页面右侧的**资源状态**页签查看应用路由的路由规则。
. On the Ingress details page, view the routing rules of the Ingress in the **Resource Status** tab on the right.
+
--
**资源状态**页签显示当前应用路由的所有路由规则。
The **Resource Status** tab displays all routing rules of the current Ingress.
[%header,cols="1a,4a"]
|===
|参数 |描述
|Parameter |Description
|域名和端口
|应用路由的域名和节点端口号。节点端口号仅在集群网关或项目网关的外部访问为 NodePort 时显示。
|Domain and Port
|The domain name and node port number of the Ingress. The node port number is only displayed when the external access mode of the cluster gateway or project gateway is NodePort.
* 如果集群网关或项目网关的外部访问模式为 NodePort客户端需要通过 DNS 服务或本地 **hosts** 文件将域名解析为集群中任意节点的 IP 地址,并通过域名、路径和端口号(例如 **example.com/test:30240**)访问应用路由。
* If the external access mode of the cluster gateway or project gateway is NodePort, clients need to resolve the domain name to the IP address of any cluster node through a DNS service or the local **hosts** file, and access the Ingress through the domain name, path, and node port (for example, **example.com/test:30240**). See the example after this table.
* 如果集群网关或项目网关的外部访问模式为 LoadBalancer客户端需要通过 DNS 服务或本地 **hosts** 文件将域名解析为项目网关负载均衡器的 IP 地址,并通过域名和路径(例如 **example.com/test**)访问应用路由。
* If the external access mode of the cluster gateway or project gateway is LoadBalancer, clients need to resolve the domain name to the IP address of the project gateway load balancer through a DNS service or the local **hosts** file, and access the Ingress through the domain name and path (for example, **example.com/test**).
|协议
|应用路由支持的协议,取值为**HTTP** 或 **HTTPS**。
|Protocol
|The protocol supported by the Ingress, with values of **HTTP** or **HTTPS**.
|证书
|应用路由协议为 HTTPS 时,所使用的包含证书和私钥的保密字典的名称。仅在应用路由协议为 HTTPS 时显示。
|Certificate
|The name of the secret containing the certificate and private key used when the Ingress protocol is HTTPS. Only displayed when the Ingress protocol is HTTPS.
|路径
|域名的路径,每条路径对应一个服务。
|Path
|The path of the domain name, with each path corresponding to a service.
|服务
|域名路径所对应的服务的名称。
|Service
|The name of the service corresponding to the domain name path.
|端口
|域名路径所对应的服务的端口号。
|Port
|The port number of the service corresponding to the domain name path.
|===
在路由规则右侧点击**访问服务**可访问应用路由的后端服务。
Click **Access Service** on the right side of the routing rule to access the backend service of the Ingress.
--
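For example, assuming the rule **example.com/test:30240** shown above and a placeholder node IP address, a client machine could reach the Ingress as follows:

----
# Resolve the Ingress domain to a cluster node IP (192.168.0.10 is a placeholder).
echo "192.168.0.10 example.com" | sudo tee -a /etc/hosts

# NodePort gateway: include the node port in the request.
curl http://example.com:30240/test

# LoadBalancer gateway: resolve the domain to the load balancer IP instead;
# no extra port is needed.
curl http://example.com/test
----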
. 在应用路由详情页面右侧点击**元数据**页签查看应用路由的**标签**和**注解**。
. On the Ingress details page, click the **Metadata** tab on the right to view the **Labels** and **Annotations** of the Ingress.
. 在应用路由详情页面右侧点击**事件**页签查看应用路由的事件。
. On the Ingress details page, click the **Events** tab on the right to view events related to the Ingress.
+
--
include::../clusterManagement-para-eventsTab.adoc[]
--
--

View File

@ -1,27 +1,27 @@
// :ks_include_id: cd11a468685d4e6fadc53bf1c8827311
* 点击**添加路由规则**可设置路由规则。您可以设置多条路由规则,每条规则对应一个域名。
* Click **Add Routing Rule** to set routing rules. You can set multiple routing rules, each corresponding to a domain name.
* 将光标悬停在已创建的路由规则上然后在右侧点击image:/images/ks-qkcp/zh/icons/pen-light.svg[pen,18,18]可编辑路由规则的设置。
* Hover the cursor over a created routing rule, then click image:/images/ks-qkcp/zh/icons/pen-light.svg[pen,18,18] on the right side to edit the routing rule settings.
* 将光标悬停在已创建的路由规则上然后在右侧点击image:/images/ks-qkcp/zh/icons/trash-light.svg[trash-light,18,18]可删除路由规则。
* Hover the cursor over a created routing rule, then click image:/images/ks-qkcp/zh/icons/trash-light.svg[trash-light,18,18] on the right side to delete the routing rule.
[%header,cols="1a,4a"]
|===
|参数 |描述
|Parameter |Description
|域名
|用户自定义的域名。
|Domain
|The user-defined domain name.
|协议
|应用路由支持的协议,参数值可以为 **HTTP** 或 **HTTPS**。
|Protocol
|The protocol supported by the Ingress. The value can be **HTTP** or **HTTPS**.
|保密字典
|应用路由协议为 **HTTPS** 时,用于提供证书和密钥的保密字典。该保密字典必须包含 **tls.cert** 和 **tls.key** 字段,分别存储 Base64 编码的证书和私钥。
|Secret
|The Secret used to provide the certificate and key when the Ingress protocol is **HTTPS**. This Secret must contain the **tls.cert** and **tls.key** fields, which store the Base64-encoded certificate and private key, respectively. See the example command after this table.
|路径
|域名路径及其与服务端口的映射关系。
|Path
|The domain path and its mapping relationship with the service port.
* 点击**添加**可设置多条路径。
* Click **Add** to set multiple paths.
* 在已创建的路径右侧点击image:/images/ks-qkcp/zh/icons/trash-light.svg[trash-light,18,18]可删除路径。
* Click image:/images/ks-qkcp/zh/icons/trash-light.svg[trash-light,18,18] on the right side of a created path to delete the path.
|===
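For reference, a TLS Secret for an HTTPS Ingress can be created from an existing certificate and private key with kubectl, which stores both values Base64-encoded; the file names, Secret name, and namespace below are placeholders:

----
# Create a TLS Secret from an existing certificate and private key; kubectl stores
# both values Base64-encoded. File names, Secret name, and namespace are placeholders.
kubectl create secret tls example-com-tls \
  --cert=example.com.crt \
  --key=example.com.key \
  -n demo-project
----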

View File

@ -1,17 +1,17 @@
// :ks_include_id: f0f32c026c8a44b7ac18acbadf465ea5
. 在需要操作的服务右侧点击image:/images/ks-qkcp/zh/icons/more.svg[more,18,18],然后在下拉列表中选择**编辑设置**。
. Click image:/images/ks-qkcp/zh/icons/more.svg[more,18,18] on the right side of the service you want to operate, then select **Edit Settings** from the dropdown list.
. 在**编辑设置**对话框的**服务设置**页签,修改服务的设置。
. On the **Service Settings** tab of the **Edit Settings** dialog, modify the service settings.
* 对于内部访问模式为 ExternalName 的服务,您可以修改外部服务的地址。
* For services with an internal access mode of ExternalName, you can modify the address of the external service.
* 对于其他服务,您可以修改服务的内部访问模式、工作负载选择器和端口。
* For other services, you can modify the internal access mode, workload selector, and ports.
+
--
include::services-para-serviceSettings.adoc[]
--
. 在**编辑设置**对话框的**集群差异设置**页签,为不同集群中的服务基于端口进行差异化设置,然后点击**确定**。
. On the **Cluster Differences** tab of the **Edit Settings** dialog, configure port-based differentiated settings for the service in different clusters, then click **OK**.
+
--
// include::../../../multi-clusterProjectManagement/services/services-oper-setClusterDiff.adoc[]

View File

@ -1,5 +1,5 @@
// :ks_include_id: f8bbecbf87544c4f9173c8107364d8ee
. 在服务详情页面右侧点击**访问信息**页签查看服务的访问信息。
. On the service details page, click the **Access Information** tab on the right to view the service's access information.
. 点击页面右侧的image:/images/ks-qkcp/zh/icons/more.svg[more,18,18],然后从下拉列表中选择**编辑外部访问**。
. Click image:/images/ks-qkcp/zh/icons/more.svg[more,18,18] on the right side of the page, then select **Edit External Access** from the dropdown list.

View File

@ -1,2 +1,2 @@
// :ks_include_id: d085604adc244a4cbb580fb88485f275
* Click the search box at the top of the list and search for services by name.
* Click the search box at the top of the list to search for services by name.

View File

@ -1,19 +1,19 @@
// :ks_include_id: ec83e4ff0eb74cdaa02d3a52062d9bc5
. 在服务详情页面左侧的**属性**区域查看服务的资源属性。
. On the service details page, view the service's resource attributes in the **Attributes** area on the left.
+
--
[%header,cols="1a,4a"]
|===
|参数 |描述
|Parameter |Description
// |集群
// |服务所属的集群。
// |Cluster
// |The cluster to which the service belongs.
|项目
|服务所属的项目。
|Project
|The project to which the service belongs.
|类型
|Type
|
include::services-para-internalAccess.adoc[]
@ -21,94 +21,94 @@ include::services-para-virtualip-headless.adoc[]
include::services-para-externalName.adoc[]
|应用
|服务所属的应用名称。您可以创建一个包含多个服务的应用,每个服务都对应一个工作负载。
|App
|The name of the app to which the service belongs. You can create an app that includes multiple services, each corresponding to a workload.
// |虚拟 IP 地址
// |服务供集群内部访问的虚拟 IP 地址,仅对 VirtualIP 类型的服务显示。
// |Virtual IP Address
// |The virtual IP address for internal access within the cluster, only displayed for VirtualIP type services.
// |外部 IP 地址
// |服务供集群外部访问的 IP 地址,仅在服务启用外部访问时显示。
// |External IP Address
// |The IP address for external access outside the cluster, only displayed when the service has external access enabled.
// |会话保持
// |是否已启用会话保持功能,取值可以为:
// |Session Persistence
// |Whether session persistence is enabled, with possible values:
// * **已启用**:已启用会话保持。如果服务有多个容器组,在一定时间内(默认值为 10800 秒),来自相同客户端 IP 地址的请求将被转发给同一个容器组。
// * **Enabled**: Session persistence is enabled. If the service has multiple pods, requests from the same client IP address will be forwarded to the same pod within a certain time (default is 10800 seconds).
// * **未启用**:未启用会话保持。如果服务有多个容器组,来自相同客户端 IP 地址的请求将被随机转发给不同的容器组。
// * **Not Enabled**: Session persistence is not enabled. If the service has multiple pods, requests from the same client IP address will be randomly forwarded to different pods.
// |选择器
// |服务的容器组选择器。容器组选择器由一个多个容器组标签组成,服务会将客户端请求转发给具有全部指定标签的容器组。
// |Selector
// |The pod selector for the service. The pod selector consists of one or more pod labels, and the service will forward client requests to pods that have all the specified labels.
// |DNS
// |服务在集群内部的域名,可在集群内部访问。
// |The internal domain name of the service, accessible within the cluster.
// |端点
// |服务的目标容器组的虚拟 IP 地址和容器端口。
// |Endpoints
// |The virtual IP address and container port of the target pods for the service.
|创建时间
|服务的创建时间。
|Creation Time
|The creation time of the service.
|更新时间
|服务的最后更新时间。
|Update Time
|The last update time of the service.
|创建者
|创建服务的用户。
|Creator
|The user who created the service.
|===
--
. 在服务详情页面右侧的**资源状态**页签查看服务的容器组副本数量和容器组。
. On the service details page, view the number of pod replicas and pods for the service in the **Resource Status** tab on the right.
+
--
[%header,cols="1a,4a"]
|===
|参数 |描述
|Parameter |Description
|容器组副本数量
|设置每个集群的容器组副本数。
|Pod Replicas
|The number of pod replicas for each cluster.
|容器组
|Pods
|
服务中运行的所有容器组。展开下拉框可以选择查看特定集群中的容器组信息。
All pods running in the service. Expand the dropdown to view pod information for specific clusters.
include::../nodes/nodes-para-podList.adoc[]
|===
--
. 在服务详情页面右侧的**访问信息**页签查看服务的访问信息。
. On the service details page, view the service's access information in the **Access Information** tab on the right.
+
--
[%header,cols="1a,4a"]
|===
|参数 |描述
|Parameter |Description
|内部域名
|可通过 <service name>.<project name>.svc 格式的域名从集群内部访问服务。
|Internal Domain Name
|The service can be accessed from within the cluster using a domain name in the format <service name>.<project name>.svc. See the example after this table.
|虚拟 IP 地址
|服务供集群内部访问的虚拟 IP 地址。
|Virtual IP Address
|The virtual IP address for internal access within the cluster.
|端口
|为使容器能够被正常访问,{ks_product-en}平台上定义了以下端口类型:
|Ports
|To ensure that containers can be accessed normally, {ks_product-en} defines the following port types:
* 容器端口:容器中的应用程序监听的端口,只能在容器组内部访问。
* Container Port: The port on which the application in the container is listening, only accessible within the pod.
* 服务端口:服务虚拟 IP 地址的端口,只能在集群内部访问,发送到服务端口的请求将被转发给容器端口。
* Service Port: The port of the service's virtual IP address, only accessible within the cluster, and requests sent to the service port will be forwarded to the container port.
* 节点端口节点主机上的端口可以从集群外部访问发送到节点端口的请求将被转发给服务端口。NodePort 或 LoadBalancer 类型的服务具有节点端口。
* Node Port: The port on the node host, accessible from outside the cluster, and requests sent to the node port will be forwarded to the service port. NodePort or LoadBalancer type services have node ports.
// |工作负载
// |显示管理容器组的工作负载的名称、更新时间、类型、状态和当前修改记录。
// |Workload
// |Displays the name, update time, type, status, and current revision record of the workload that manages the pods.
// |容器组
// |Pods
// |
// include::../nodes/nodes-para-podList.adoc[]
|===
--
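For example, assuming a placeholder service named **demo-service** in project **demo-project** with service port 8080 and node port 30080, the access paths described in the table look like this:

----
# Inside the cluster (for example, from another pod): internal DNS name + service port.
curl http://demo-service.demo-project.svc:8080

# Outside the cluster (NodePort or LoadBalancer services only): node IP + node port.
# 192.168.0.10 is a placeholder node IP address.
curl http://192.168.0.10:30080
----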
. 在服务详情页面右侧点击**元数据**页签查看服务的标签和注解。
. On the service details page, click the **Metadata** tab on the right to view the service's labels and annotations.
. 在服务详情页面右侧点击**事件**页签查看服务相关的事件。
. On the service details page, click the **Events** tab on the right to view events related to the service.
+
--
include::../clusterManagement-para-eventsTab.adoc[]

View File

@ -1,24 +1,24 @@
// :ks_include_id: cad509443a554a38ab6ce4a11e4d2b73
* 工作负载列表提供以下信息:
* The workload list provides the following information:
+
--
[%header,cols="1a,4a"]
|===
|参数 |描述
|Parameter |Description
|名称
|工作负载的名称和描述信息。
|Name
|The name and description of the workload.
|状态
|Status
|
工作负载的正常容器组副本数、期望容器组副本数和运行状态。工作负载状态包括以下类型:
The number of normal pod replicas, the desired number of pod replicas, and the running status of the workload. The workload status includes the following types:
include::../workloads-para-workloadStatus.adoc[]
|应用
|工作负载所声明的应用。
|App
|The app declared by the workload.
|更新时间
|工作负载的更新时间。
|Update Time
|The update time of the workload.
|===
--
--

View File

@ -1,18 +1,18 @@
// :ks_include_id: cc570a193fc8465392e3f53790581f56
为容器挂载临时卷。临时卷具有以下特点:
Mount a temporary volume for the container. Temporary volumes have the following characteristics:
* 由系统自动在容器组所在的节点的存储系统中创建。
* Automatically created by the system in the storage system of the node where the pod is located.
* 由系统自动管理,容量上限为节点的存储容量。
* Automatically managed by the system, with a capacity limit equal to the storage capacity of the node.
* 无法保存持久化数据,容器组创建时由系统自动创建临时卷,容器组删除时由系统自动删除临时卷。
* Cannot save persistent data. The system automatically creates a temporary volume when the pod is created and automatically deletes the temporary volume when the pod is deleted.
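In Kubernetes terms, this behavior matches an **emptyDir** volume. A minimal, hypothetical pod manifest mounting such a volume is sketched below (the pod name, namespace, image, volume name, and mount path are placeholders):

----
# Sketch only: a pod mounting a temporary (emptyDir) volume. The volume is created
# on the node together with the pod and deleted when the pod is deleted.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: demo-project
spec:
  containers:
  - name: app
    image: nginx:1.25        # placeholder image
    volumeMounts:
    - name: temp-data        # placeholder volume name
      mountPath: /data       # placeholder mount path inside the container
  volumes:
  - name: temp-data
    emptyDir: {}
EOF
----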
[%header,cols="1a,4a"]
|===
|参数 |描述
|Parameter |Description
|卷名称
|临时卷的名称。
|Volume Name
|The name of the temporary volume.
include::workloads-para-volumeMountModePath.adoc[]

View File

@ -1,2 +1,2 @@
// :ks_include_id: 04ecee90589140c28c84ab59ddd6aeb7
This section explains how to adjust the replica count of pods in a workload.
This section explains how to adjust the number of pod replicas in a workload.
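If you also have kubectl access, the equivalent command-line operation is a scale command; the workload names, namespace, and replica count below are placeholders:

----
# Scale a Deployment to 3 replicas ("demo-app" and "demo-project" are placeholders).
kubectl scale deployment demo-app --replicas=3 -n demo-project

# StatefulSets are scaled the same way.
kubectl scale statefulset demo-db --replicas=3 -n demo-project
----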

View File

@ -1,11 +1,13 @@
// :ks_include_id: 436a1f9062db4517a30c7da15aad1061
. 在**基本信息**页签,设置工作负载的基本信息,然后点击**下一步**。
. On the **Basic Information** tab, set the basic information for the workload, then click **Next**.
. 在**容器组设置**页签,为工作负载管理的容器组设置副本数量、容器、更新策略、安全上下文、调度规则和元数据,然后点击**下一步**。
. On the **Pod Settings** tab, set the number of replicas, containers, update strategy, security context, scheduling rules, and metadata for the pods managed by the workload, then click **Next**.
. 在**存储设置**页签,为工作负载管理的容器挂载卷,然后点击**下一步**。
. On the **Storage Settings** tab, mount volumes for the containers managed by the workload, then click **Next**.
. 在**高级设置**页签,为工作负载管理的容器组指定节点,并设置工作负载的元数据。
. On the **Advanced Settings** tab, specify nodes for the pods managed by the workload and set the metadata for the workload.
. 在**集群差异设置**页签,为不同集群中的工作负载基于容器、端口和环境变量进行差异化设置,然后点击**创建**。工作负载创建完成后将显示在工作负载列表中。
. On the **Cluster Differences** tab, configure differentiated settings for the workload in different clusters based on containers, ports, and environment variables, then click **Create**.
+
After the workload is created, it will be displayed in the workload list.

View File

@ -1,2 +1,2 @@
// :ks_include_id: d908eb90806d4d4ba8b6cbb65a3b96e1
. 在**工作负载**页面,点击**部署**或**有状态副本集**打开工作负载列表。
. On the **Workloads** page, click **Deployments** or **StatefulSets** to open the workload list.

View File

@ -1,2 +1,2 @@
// :ks_include_id: 2f8f4e8c4cba43e0b81959fc339f9ca5
* 在列表上方点击搜索框并设置搜索条件,可按名称搜索工作负载。
* Click the search box at the top of the list to search for workloads by name.

View File

@ -1,23 +1,23 @@
// :ks_include_id: 0432dd129aa949c9b90b43831d5d0157
. 在应用详情页面右侧的**资源状态**页签查看组成应用的服务。
. On the app details page, view the services that make up the app in the **Resource Status** tab on the right.
+
--
**服务**区域提供以下信息:
The **Services** area provides the following information:
[%header,cols="1a,4a"]
|===
|参数 |描述
|Parameter |Description
|有状态服务
|组成该应用的有状态服务的名称。
|Stateful Service
|The name of the stateful service that makes up the app.
|无状态服务
|组成该应用的无状态服务的名称。
|Stateless Service
|The name of the stateless service that makes up the app.
|内部域名
|可通过 <service name>.<project name>.svc 格式的域名从集群内部访问服务。
|Internal Domain Name
|The service can be accessed from within the cluster using a domain name in the format <service name>.<project name>.svc.
|虚拟 IP 地址
|服务供集群内部访问的虚拟 IP 地址。
|Virtual IP Address
|The virtual IP address for internal access within the cluster.
|===
--

View File

@ -1,19 +1,19 @@
// :ks_include_id: 70f19c4ccbb54fb4ad10de0f5c8a4a1e
|项目
|应用所属的项目。
|Project
|The project to which the app belongs.
|应用
|应用的名称。
|App
|The name of the app.
|版本
|应用的版本。
|Version
|The version of the app.
|创建时间
|应用的创建时间。
|Creation Time
|The creation time of the app.
|更新时间
|应用的更新时间。
|Update Time
|The update time of the app.
|创建者
|创建应用的用户。
|Creator
|The user who created the app.

View File

@ -1,2 +1,2 @@
// :ks_include_id: 161aaad98ab04d7fa4240eec51d232f7
. 以具有pass:a,q[{ks_permission}]权限的用户登录{ks_product-en} Web 控制台并进入您的联邦项目。
. Log in to the {ks_product-en} web console with a user who has the pass:a,q[{ks_permission}] permission, and access your multi-cluster project.

View File

@ -1,2 +1,2 @@
// :ks_include_id: a7b11e38d6794c2692390f9d0afbb7df
您需要加入一个多集群项目并在项目中具有pass:a,q[{ks_permission}]权限。
You should join a multi-cluster project and have the pass:a,q[{ks_permission}] permission within the project.

View File

@ -1,2 +1,2 @@
// :ks_include_id: a7b11e38d6794c2692390f9d0afbb7df
您需要加入一个联邦项目并在对应企业空间中具有pass:a,q[{ks_permission}]权限。
You should join a multi-cluster project and have the pass:a,q[{ks_permission}] permission in the corresponding workspace.

View File

@ -1,3 +1,2 @@
// :ks_include_id: 479a0d3323374bee8e2220e0fdafd307
* 在**集群**区域,勾选项目所在的一个或多个集群,可为指定集群中的应用添加路由规则。
* In the **Cluster** area, select one or more clusters where the project belongs to add routing rules for the app in the specified clusters.

View File

@ -1,21 +1,21 @@
// :ks_include_id: d43be0d6bddf43e5aacfeed52c0fe32a
* 应用路由列表提供以下信息:
* The Ingress list provides the following information:
+
--
[%header,cols="1a,4a"]
|===
|参数 |描述
|Parameter |Description
|名称
|应用路由的名称和描述。
|Name
|The name and description of the Ingress.
|状态
|应用路由当前的状态。
|Status
|The current status of the Ingress.
|应用
|应用路由所对应的应用名称。
|App
|The name of the app to which the Ingress corresponds.
|创建时间
|应用路由的创建时间。
|Creation Time
|The creation time of the Ingress.
|===
--
--

View File

@ -1,14 +1,14 @@
// :ks_include_id: 1c90e483af564b3eb017afec1b5da0c8
[%header,cols="1a,4a"]
|===
|参数 |描述
|Parameter |Description
|容器设置
|在不同的集群中使用不同的容器设置。在多集群环境下,您可以为指定集群中的服务设置不同的容器镜像、容器类型和资源配置等。
|Container Settings
|Use different container settings in different clusters. In a multi-cluster environment, you can set different container images, container types, and resource configurations for the service in specified clusters.
|端口设置
|为不同集群中的容器设置不同的端口。在多集群环境下,您可以为指定集群中的服务设置不同的访问协议、容器端口和服务端口等。
|Port Settings
|Set different ports for containers in different clusters. In a multi-cluster environment, you can set different access protocols, container ports, and service ports for the service in specified clusters.
|环境变量
|为不同集群中的容器设置不同的环境变量。在多集群环境下,您可以为指定集群中的服务设置不同的环境变量。
|Environment Variables
|Set different environment variables for containers in different clusters. In a multi-cluster environment, you can set different environment variables for the service in specified clusters.
|===

View File

@ -1,6 +1,6 @@
// :ks_include_id: 417489540caa4044871d8fba1c13e801
. 在**存储设置**页签,为服务后端工作负载管理的容器挂载卷,然后点击**下一步**。
. On the **Storage Settings** tab, mount volumes for the containers managed by the service backend workload, then click **Next**.
. 在**高级设置**页签,为服务后端工作负载管理的容器组指定 IP 池和节点,设置服务的外部访问模式、会话保持设置和元数据,然后点击**下一步**。
. On the **Advanced Settings** tab, specify IP pools and nodes for the pods managed by the service backend workload, set the external access mode, session persistence settings, and metadata for the service, then click **Next**.
. 在**集群差异设置**页签,为不同集群中的服务基于容器、端口和环境变量进行差异化设置,然后点击**创建**。服务创建完成后将显示在服务列表中。
. On the **Cluster Differences** tab, configure differentiated settings for the service in different clusters based on containers, ports, and environment variables, then click **Create**. After the service is created, it will be displayed in the service list.

View File

@ -1,17 +1,17 @@
// :ks_include_id: 57816266c5504de8838e5d900bee849d
[%header,cols="1a,4a"]
|===
|参数 |描述
|Parameter |Description
|容器组副本数量
|各集群中工作负载的当前容器组副本数和期望容器组副本数。
|Pod Replicas
|The current and desired number of pod replicas for the workload in each cluster.
|端口
|工作负载管理的容器的端口名称、协议和端口号。
|Ports
|The port name, protocol, and port number of the containers managed by the workload.
|容器组
|Pods
|
工作负载中运行的所有容器组。展开下拉框可以选择查看特定集群中的容器组信息。
All pods running in the workload. Expand the dropdown to view pod information for specific clusters.
include::../../clusterManagement/nodes/nodes-para-podList.adoc[]
|===

View File

@ -1,3 +1,3 @@
// :ks_include_id: 1901acc4d08f4b24bff6496619ed61bc
. 在**部署**或**有状态副本集**页签,点击目标工作负载名称,进入工作负载详情页面。
. On the **Deployments** or **StatefulSets** tab, click the name of a workload to enter the workload details page.

View File

@ -1,10 +1,10 @@
// :ks_include_id: e767267c152f4de48a2d1585837e30e0
|容器设置
|在不同的集群中使用不同的容器设置。在多集群环境下,您可以为指定集群中的工作负载设置不同的容器镜像、容器类型和资源配置等。
|Container Settings
|Use different container settings in different clusters. In a multi-cluster environment, you can set different container images, container types, and resource configurations for the workload in specified clusters.
|端口设置
|为不同集群中的容器设置不同的端口。在多集群环境下,您可以为指定集群中的工作负载设置不同的访问协议、容器端口和工作负载端口等。
|Port Settings
|Set different ports for containers in different clusters. In a multi-cluster environment, you can set different access protocols, container ports, and workload ports for the workload in specified clusters.
|环境变量
|为不同集群中的容器设置不同的环境变量。在多集群环境下,您可以为指定集群中的工作负载设置不同的环境变量。
|Environment Variables
|Set different environment variables for containers in different clusters. In a multi-cluster environment, you can set different environment variables for the workload in specified clusters.

View File

@ -1,64 +1,64 @@
// :ks_include_id: 53806508deb8493a8bded94825780b98
. 在工作负载详情页面左侧的**属性**区域查看工作负载的资源属性。
. On the workload details page, view the workload's resource attributes in the **Attributes** area on the left.
+
--
[%header,cols="1a,4a"]
|===
|参数 |描述
|Parameter |Description
|项目
|工作负载所属的项目。
|Project
|The project to which the workload belongs.
|应用
|工作负载所属的应用名称。您可以创建一个包含多个服务的应用,每个服务都对应一个工作负载。
|App
|The name of the app to which the workload belongs. You can create an app that includes multiple services, each corresponding to a workload.
|创建时间
|工作负载的创建时间。
|Creation Time
|The creation time of the workload.
|更新时间
|工作负载的更新时间。
|Update Time
|The update time of the workload.
|创建者
|创建工作负载的用户。
|Creator
|The user who created the workload.
|===
--
. 在工作负载详情页面右侧的**资源状态**页签查看工作负载的容器组副本数量、容器端口和容器组。
. On the workload details page, view the number of pod replicas, container ports, and pods for the workload in the **Resource Status** tab on the right.
+
--
include::para-replicasPortsPods.adoc[]
--
. 在工作负载详情页面右侧点击**元数据**页签查看工作负载的标签和注解。
. On the workload details page, click the **Metadata** tab on the right to view the workload's labels and annotations.
// . 在工作负载详情页面右侧点击**监控**页签查看工作负载的实时资源使用情况。
// . On the workload details page, click the **Monitoring** tab on the right to view the workload's real-time resource usage.
// +
// --
// [%header,cols="1a,4a"]
// |===
// |参数 |描述
// |Parameter |Description
// |CPU 用量
// |工作负载管理的所有容器组的实时 CPU 用量。
// |CPU Usage
// |The real-time CPU usage of all pods managed by the workload.
// |内存用量
// |工作负载管理的所有容器组的实时内存用量。
// |Memory Usage
// |The real-time memory usage of all pods managed by the workload.
// |出站流量
// |工作负载管理的所有容器组的出站流量。
// |Outbound Traffic
// |The outbound traffic of all pods managed by the workload.
// |入站流量
// |工作负载管理的所有容器组的入站流量。
// |Inbound Traffic
// |The inbound traffic of all pods managed by the workload.
// |===
// * 在**监控**右侧的第一个下拉框可以选择查看指定集群的资源监控信息。
// * In the first dropdown on the right of **Monitoring**, you can select to view resource monitoring information for a specified cluster.
// include::../../../../_ks_components-en/oper-selectTimeRange.adoc[]
// include::../../../../_ks_components-en/oper-Autorefresh.adoc[]
// include::../../../../_ks_components-en/oper-refreshData.adoc[]
// --
. 在工作负载详情页面右侧点击**事件**页签查看工作负载相关的事件。
. On the workload details page, click the **Events** tab on the right to view events related to the workload.
+
--
include::../../clusterManagement/clusterManagement-para-eventsTab.adoc[]

View File

@ -2,17 +2,17 @@
[%header,cols="1a,4a"]
|===
|参数 |描述
|Parameter |Description
|名称
|应用的名称。名称只能包含小写字母、数字和连字符(-),必须以小写字母或数字开头和结尾,最长 63 个字符。
|Name
|The name of the app. The name can only contain lowercase letters, numbers, and hyphens (-), must start and end with a lowercase letter or number, and can be up to 63 characters long.
|版本
|用户自定义的应用版本。版本只能包含小写字母和数字,最长 16 个字符。
|Version
|The user-defined version of the app. The version can only contain lowercase letters and numbers and can be up to 16 characters long.
|应用治理
|是否为应用启用应用治理功能。开启应用治理后可以对应用使用流量监控、灰度发布和链路追踪功能。
|Application Governance
|Whether to enable application governance for the app. After enabling app governance, you can use traffic monitoring, grayscale release, and tracing features for the app.
|描述
|应用的描述信息。描述可包含任意字符,最长 256 个字符。
|Description
|The description of the app. The description can contain any characters and can be up to 256 characters long.
|===

View File

@ -1,3 +1,2 @@
// :ks_include_id: 282a1d4ff17c46e19164103e677b6b0d
您可以自定义应用的服务、工作负载和路由创建自制应用。相比基于模板的应用,自制应用支持应用治理,您可以为自制应用启用应用治理从而使用流量监控、灰度发布和链路追踪功能。
You can customize the services, workloads, and routes of an app to create a composed app. Compared to template-based apps, composed apps support app governance. You can enable app governance for a composed app to use traffic monitoring, grayscale release, and tracing features.

View File

@ -1,69 +1,66 @@
// :ks_include_id: c4590bcc1e7e440b8eaf162491107dc0
. 在应用详情页面左侧的**资源状态**页签查看组成应用的应用路由、服务和工作负载。
. On the app details page, view the Ingresses, services, and workloads that make up the app in the **Resource Status** tab.
* **应用路由**区域提供以下信息:
* The **Ingresses** area provides the following information:
+
--
[%header,cols="1a,4a"]
|===
|参数 |描述
|Parameter |Description
|名称
|应用路由的名称。
|Name
|The name of the Ingress.
|域名
|应用路由的域名。
|Domain
|The domain of the Ingress.
|URL
|应用路由所对应服务的访问地址。
|The access address of the service corresponding to the Ingress.
|===
在应用路由右侧点击**访问服务**可访问应用路由的后端服务。
Click **Access Service** on the right side of the Ingress to access the backend service of the Ingress.
--
* **服务**区域提供以下信息:
* The **Services** area provides the following information:
+
--
[%header,cols="1a,4a"]
|===
|参数 |描述
|Parameter |Description
|名称
|服务的名称。
|Name
|The name of the service.
|内部访问模式
|Internal Access Mode
|
include::../../clusterManagement/services/services-para-internalAccess.adoc[]
include::../../clusterManagement/services/services-para-virtualip-headless.adoc[]
// |应用治理
// |应用是否已启用应用治理。应用治理启用后,您可以使用{ks_product-en}提供的流量监控、灰度发布和链路追踪功能。
|虚拟 IP 地址
|服务供集群内部访问的虚拟 IP 地址。仅在服务的内部访问类型为 **VirtualIP** 时显示。
|Virtual IP Address
|The virtual IP address for internal access within the cluster. Only displayed when the service's internal access type is **VirtualIP**.
|===
--
* **工作负载**区域提供以下信息:
* The **Workloads** area provides the following information:
+
--
[%header,cols="1a,4a"]
|===
|参数 |描述
|Parameter |Description
|名称
|工作负载的名称。
|Name
|The name of the workload.
|类型
|工作负载的类型。
|Type
|The type of the workload.
|状态
|工作负载当前的状态。
|Status
|The current status of the workload.
include::../../clusterManagement/workloads/workloads-para-workloadStatus.adoc[]
|修改记录
|工作负载的当前修改记录。
|Revision Record
|The current revision record of the workload.
|===
--

View File

@ -1,8 +1,8 @@
// :ks_include_id: 6d568952e6604999a005dfff5d21d3d3
|集群
|应用所属的集群。
|Cluster
|The cluster to which the app belongs.
|项目
|应用所属的项目。
|Project
|The project to which the app belongs.
include::apps-para-status.adoc[]

View File

@ -1,13 +1,13 @@
// :ks_include_id: 9db80030fef4430e98fae7a372d67f6d
|状态
|应用当前的状态。
|Status
|The current status of the application.
* **创建中**:系统正在创建应用。
* **Creating**: The system is creating the application.
* **运行中**:应用运行正常。
* **Running**: The application is running normally.
* **升级中**:系统正在升级应用版本。
* **Upgrading**: The system is upgrading the application version.
* **删除中**:系统正在删除应用。
* **Deleting**: The system is deleting the application.
* **失败**:应用创建失败。
* **Failed**: The application creation failed.

View File

@ -1,7 +1,6 @@
// :ks_include_id: faff93159cca48358390bdd176c1577d
In KubeSphere, an application refers to a business program composed of one or more workloads, services, Ingresses, and other resources. Based on the creation method, apps in KubeSphere are divided into the following two types:
在{ks_product-en}平台,应用特指由一个或多个工作负载、服务、应用路由等资源组成的业务程序。根据应用的创建方式,{ks_product-en}平台上的应用分为以下两类:
* Template-based apps: Apps created using existing app templates. The app templates used to create apps can be app templates uploaded to the workspace, app templates published to the App Store, or app templates from third-party app repositories.
* 基于模板的应用:通过已有的应用模板创建的应用。创建应用所使用的应用模板可以为上传到企业空间的应用模板、已发布到应用商店的应用模板或第三方应用仓库中的应用模板。
* 自制应用:由用户手动编排工作负载、服务、应用路由等资源创建的应用。在创建自制应用时,您可以启用应用治理以使用{ks_product-en}提供的流量监控、灰度发布和链接追踪功能。
* Composed apps: Apps created by manually orchestrating workloads, services, Ingresses, and other resources. When creating a composed app, you can enable application governance to use the traffic monitoring, grayscale release, and tracing features provided by KubeSphere.

View File

@ -1,2 +1,2 @@
// :ks_include_id: c9236cd08c5e43f9a20e107705d04a48
. 在**灰度发布**页面,点击**发布任务**,然后点击一个灰度发布任务的名称打开其详情页面。
. On the **Grayscale Release** page, click **Release Tasks**, then click the name of a grayscale release task to open its details page.

View File

@ -1,24 +1,24 @@
// :ks_include_id: 24a43a70d2cc491b86afd4bda8e41b78
* 对于**蓝绿部署**,在新版本或旧版本右侧点击**接管**可将业务流量全部转发给该版本。
* For **Blue-Green Deployment**, click **Take Over** on the right side of the new or old version to forward all business traffic to that version.
* 对于**金丝雀发布**任务,您可以选择指定新旧版本的流量分配比例,或根据请求参数将请求转发给新版本或旧版本。
* For **Canary Release** tasks, you can specify the traffic distribution ratio between the new and old versions, or forward requests to the new or old version based on request parameters.
+
--
[%header,cols="1a,4a"]
|===
|参数 |描述
|Parameter |Description
|指定流量分配
|拖动滑块可设置新旧版本接收业务流量的百分比。
|Specify Traffic Distribution
|Drag the slider to set the percentage of business traffic received by the new and old versions (see the sketch after the note below).
|指定请求参数
|将参数满足特定条件的请求转发给新版本,其他请求转发给旧版本。
|Specify Request Parameters
|Forward requests with parameters meeting specific conditions to the new version, and other requests to the old version.
|===
--
// Note
include::../../../../_ks_components-en/admonitions/note.adoc[]
流量镜像任务将业务流量的副本发送给新版本进行测试,而不实际暴露新版本,所以不需要设置业务流量转发策略。
Traffic Mirroring tasks send a copy of the business traffic to the new version for testing without actually exposing the new version, so there is no need to set a business traffic distribution strategy.
include::../../../../_ks_components-en/admonitions/admonEnd.adoc[]
include::../../../../_ks_components-en/admonitions/admonEnd.adoc[]
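KubeSphere's grayscale release is implemented on top of Istio traffic management. Conceptually, the traffic split set with the slider corresponds to weighted routes in an Istio VirtualService, roughly as sketched below; all names and subsets are placeholder assumptions, not the exact resources KubeSphere generates:

----
# Conceptual sketch only: a 90/10 split between the old (v1) and new (v2) version
# of a service. KubeSphere manages the actual Istio resources; names are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demo-service
  namespace: demo-project
spec:
  hosts:
  - demo-service
  http:
  - route:
    - destination:
        host: demo-service
        subset: v1
      weight: 90
    - destination:
        host: demo-service
        subset: v2
      weight: 10
EOF
----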

View File

@ -1,9 +1,9 @@
// :ks_include_id: c831ace6bfe442abba34ed44c8c2ec4b
|流量
|新旧版本的每秒请求数量。
|Traffic
|The number of requests per second for the new and old versions.
|请求成功率
|新旧版本的成功请求百分比。
|Successful Request Rate
|The percentage of successful requests for the new and old versions.
|请求延迟
|新旧版本的平均请求延迟。
|Request Latency
|The average request latency for the new and old versions.

View File

@ -1,17 +1,17 @@
// :ks_include_id: 9dce53f38c804429a1d874c4d0f635a3
[%header,cols="1a,4a"]
|===
|参数 |描述
|Parameter |Description
|名称
|应用仓库的名称。
|Name
|The name of the app repository.
|URL
|Helm Chart 仓库的 URL。点击**验证**可测试 Helm Chart 仓库是否可用。
|The URL of the Helm Chart repository. Click **Validate** to test if the Helm Chart repository is available.
|同步周期
|应用仓库与 Helm Chart 仓库的自动同步周期。取值范围为 3 分钟到 24 小时。默认值 **0** 表示不自动同步。
|Sync Interval
|The automatic sync interval between the app repository and the Helm Chart repository. The value range is from 3 minutes to 24 hours. The default value **0** means no automatic sync.
|描述
|应用仓库的描述信息。描述可包含任意字符,最长 256 个字符。
|Description
|The description of the app repository. The description can contain any characters, with a maximum length of 256 characters.
|===

View File

@ -1,2 +1,2 @@
// :ks_include_id: 6c2cd879adcb4a5fa3abaf7929167ef7
. 在应用模板列表中点击一个应用模板的名称打开其详情页面。
. Click the name of an application template in the list to open its details page.

View File

@ -1,2 +1,2 @@
// :ks_include_id: 2f8bbec5d37b4c239396337ce7576a71
* 在列表上方点击搜索框并输入关键字,可搜索名称包含特定关键字的应用模板。
* Click the search box at the top of the list to search for application templates by name.

View File

@ -1,25 +1,25 @@
// :ks_include_id: a04eb03cbce9496996bd54443b6e4d64
. 在应用模板详情页面右侧点击**应用实例**页签,查看使用应用模板在{ks_product-en}平台安装的应用。
. Click the **Application Instance** tab on the right side of the details page to view the applications installed using the application template on the {ks_product-en} platform.
+
--
[%header,cols="1a,4a"]
|===
|参数 |描述
|Parameter |Description
|名称
|应用的名称。
|Name
|The name of the application.
include::../../projectManagement/apps/apps-para-status.adoc[]
include::appTemplates-para-version.adoc[]
|项目
|应用所属的项目。
|Project
|The project to which the application belongs.
|集群
|应用所属的集群。
|Cluster
|The cluster to which the application belongs.
|创建时间
|应用的创建时间。
|Creation Time
|The creation time of the application.
|===
--
--

View File

@ -1,2 +1,2 @@
// :ks_include_id: 97cb07634c9f4b08ab3ebea9f440d8e8
. 在应用模板详情页面左侧的**属性**区域,查看应用模板的资源属性。
. In the **Attributes** area on the left side of the details page, view the resource attributes of the application template.

View File

@ -1,2 +1,2 @@
// :ks_include_id: d78cb35f91534425932451fc1aeb33e2
. 在应用模板详情页面右侧的**版本**页签,查看应用模板中包含的应用版本。
. In the **Versions** tab on the right side of the details page, view the application versions included in the application template.

View File

@ -1,3 +1,3 @@
// :ks_include_id: de8b37eae7ea4bdba3f2534f9d1b19c2
|名称
|应用模板的名称、图标和描述信息。
|Name
|The name, icon, and description of the application template.

View File

@ -1,3 +1,3 @@
// :ks_include_id: e53d38c7198848ea827f13da3541565e
|创建时间
|应用模板的创建时间。
|Creation Time
|The creation time of the application template.

View File

@ -1,3 +1,3 @@
// :ks_include_id: 76c5e6ca0e0d40fb8a75e2ea04ba859a
|开发者
|上传应用版本的用户。
|Developer
|The user who uploaded the application version.

View File

@ -1,3 +1,3 @@
// :ks_include_id: 3348c4c6bb6b473e887f8a6b5d1883c9
|最新版本
|应用模板中 Helm Chart 的最新版本。每个应用模板可包含应用的多个版本。
|Latest Version
|The latest version of the Helm Chart in the application template. Each application template can contain multiple versions of the application.

View File

@ -1,9 +1,9 @@
// :ks_include_id: b1063d2135a7413f839d796f93c2afa0
|状态
|应用模板当前的状态。
|Status
|The current status of the application template.
* **未上架**:应用模板已创建成功,但是未上架到{ks_product-en}平台的应用商店。
* **Not Listed**: The application template has been successfully created but is not listed in the App Store.
* **已上架**:应用模板已创建成功,并且已上架到{ks_product-en}平台的应用商店。
* **Listed**: The application template has been successfully created and is listed in the App Store.
* **已下架**:应用模板上架到{ks_product-en}平台的应用商店后被应用商店管理员下架。
* **Suspended**: The application template was listed in the App Store but has been delisted by the App Store administrator.

View File

@ -1,3 +1,3 @@
// :ks_include_id: 933bd63e86ea4c958e7578f625e38dca
|类型
|应用模板的类型。
|Type
|The type of the application template.

View File

@ -1,3 +1,3 @@
// :ks_include_id: 942eb298f1394bef9c3269ba02cc1311
|版本
|Helm Chart 的版本。
|Version
|The version of the Helm Chart.

View File

@ -1,3 +1,3 @@
// :ks_include_id: 3c207e89fc77423187d81dd47480e0b4
|更新时间
|应用版本的更新时间。
|Update Time
|The update time of the application version.

View File

@ -1,2 +1,2 @@
// :ks_include_id: 869781900cdb48f19e54811ea9a8abcc
. 在应用模板详情页面右侧点击**应用信息**页签,查看应用模板的介绍、截图和版本信息。
. Click the **Application Information** tab on the right side of the details page to view the introduction, screenshots, and version information of the application template.

View File

@ -1,3 +1,3 @@
// :ks_include_id: d2cbf65cb5824a99bf21c210999ce5a7
|企业空间
|提交应用模板的企业空间。
|Workspace
|The workspace that submitted the application template.

View File

@ -1,6 +1,6 @@
// :ks_include_id: 41158ab30242438694d2437566046d38
|别名
|DevOps 项目的别名。别名只能包含中文、字母、数字和连字符(-),不得以连字符(-)开头或结尾,最长 63 个字符。
|Alias
|The alias of the DevOps project. Aliases can only contain Chinese characters, letters, numbers, and hyphens (-), cannot start or end with a hyphen (-), and can be up to 63 characters long.
|描述
|DevOps 项目的描述信息。描述可包含任意字符,最长 256 个字符。
|Description
|The description of the DevOps project. Descriptions can contain any characters and can be up to 256 characters long.

View File

@ -168,7 +168,7 @@ If the cluster nodes use other operating systems, replace **apt** with the corre
== Install Kubernetes
// ifeval::["{file_output_type}" == "pdf"]
// include::../../../_custom/installationAndUpgrade/installationAndUpgrade-oper-decompressInstallationPackage_new.adoc[]
// include::../../../_custom-en/installationAndUpgrade/installationAndUpgrade-oper-decompressInstallationPackage_new.adoc[]
// endif::[]
// ifeval::["{file_output_type}" == "html"]
@ -248,7 +248,6 @@ spec:
# Harbor does not support arm64. This parameter does not need to be configured when deploying in an arm64 environment.
type: harbor
# If you use kk to deploy harbor or other registries that require authentication, you need to set the auths of the corresponding registries. If you use kk to deploy the default docker registry, you do not need to configure the auths parameter.
# Note: If you use kk to deploy harbor, please set the auths parameter after creating the harbor project.
auths:
"dockerhub.kubekey.local":
username: admin # harbor default username

View File

@ -114,7 +114,7 @@ include::../../../../_ks_components-en/admonitions/admonEnd.adoc[]
----
 ./kk add nodes -f config-sample.yaml
----
// include::../../../_custom/installationAndUpgrade/installationAndUpgrade-code-addNodes.adoc[]
// include::../../../_custom-en/installationAndUpgrade/installationAndUpgrade-code-addNodes.adoc[]
--
. Execute the following command to view the nodes of the current cluster:

View File

@ -32,9 +32,6 @@ You should have the pass:a,q[{ks_permission}] permission on the {ks_product-en}
. Check the boxes next to the users you want to delete, then click **Delete** above the list.
. In the **Delete Multiple Users** dialog, enter the names of the users, then click **OK**.
+
include::../../../../_ks_components-en/admonitions/note.adoc[]
Please use a comma (,) or space to separate multiple names.
include::../../../../_ks_components-en/admonitions/admonEnd.adoc[]
--
include::../../../_custom-en/note-separateNamesByComma.adoc[]
--

View File

@ -54,7 +54,7 @@ include::../../../../_custom-en/clusterManagement/clusterMembers/clusterMembers-
|===
--
* Click the search box at the top of the list and search for cluster members by name.
* Click the search box at the top of the list to search for cluster members by name.
include::../../../../../_ks_components-en/oper-refreshListData.adoc[]

View File

@ -46,10 +46,7 @@ include::../../../_custom-en/platformManagement/platformManagement-oper-logIn.ad
. In the **Delete Multiple Workspaces** dialog, enter the name of the workspaces, then click **OK**.
+
[.admon.note,cols="a"]
|===
|Note
--
include::../../../_custom-en/note-separateNamesByComma.adoc[]
--
|Please separate multiple names using a comma (,) and a space.
|===

View File

@ -17,7 +17,7 @@ This section introduces how to add an application repository in the workspace.
* {empty}
include::../../../../_custom-en/workspaceManagement/workspaceManagement-prer-requiredPermission_v4.adoc[]
* The Helm Chart repository has been created in advance. For information on how to create a Helm Chart repository, please refer to the link:https://helm.sh/zh/docs/topics/chart_repository/[Helm Documentation].
* The Helm Chart repository has been created in advance. For information on how to create a Helm Chart repository, please refer to the link:https://helm.sh/docs/topics/chart_repository/[Helm Documentation].
== Steps

View File

@ -11,4 +11,4 @@ This section introduces how to manage application repositories in the workspace.
In KubeSphere, applications specifically refer to business programs composed of one or more workloads, services, ingresses, and other resources. The application repository in KubeSphere is based on Helm, defining the orchestration of applications through Helm Charts.
You can add a Helm Chart repository as an application repository to the workspace, allowing you to install applications from the application repository into projects within the workspace. For information on how to create a Helm Chart repository, please refer to the link:https://helm.sh/zh/docs/topics/chart_repository/[Helm Documentation].
You can add a Helm Chart repository as an application repository to the workspace, allowing you to install applications from the application repository into projects within the workspace. For information on how to create a Helm Chart repository, please refer to the link:https://helm.sh/docs/topics/chart_repository/[Helm Documentation].
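If you first need a chart repository to point to, the basic Helm workflow is sketched below (the chart directory, repository directory, and URL are placeholders); any static file server that serves the generated index.yaml and chart packages can act as the repository:

----
# Package a chart and generate the repository index (directory names and URL are placeholders).
helm package ./demo-chart -d ./repo
helm repo index ./repo --url https://charts.example.com

# Verify the repository before adding it to the workspace.
helm repo add demo-repo https://charts.example.com
helm search repo demo-repo
----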

View File

@ -15,13 +15,8 @@ This section introduces how to get an overview of the project.
== Prerequisites
:relfileprefix: ../../../
include::../../../_custom-en/projectManagement/projectManagement-prer-requiredPermission_new.adoc[]
:relfileprefix: ./
== Steps

View File

@ -1,6 +1,6 @@
---
title: "Create Stateful or Stateless Services"
linkTitle: "Create Stateful or Stateless Services"
title: "Create a Stateful or Stateless Service"
linkTitle: "Create a Stateful or Stateless Service"
keywords: "Kubernetes, KubeSphere, Project Management, Workloads, Services, Create Service, Create Stateful or Stateless Services"
description: "Introduces how to create stateful or stateless services."
weight: 01

View File

@ -0,0 +1,42 @@
---
title: "Overview"
keywords: "Kubernetes, {ks_product-en}, DevOps, Overview"
description: "Introduces the basic principles of DevOps."
weight: 01
---
DevOps provides a series of Continuous Integration (CI) and Continuous Delivery (CD) tools that automate processes between IT and software development teams. In CI/CD workflows, each integration is verified through automated builds, including coding, releasing, and testing, helping developers catch integration errors early and enabling teams to deliver internal software to production environments quickly, securely, and reliably.
However, the traditional Jenkins Controller-Agent architecture (where multiple Agents work for one Controller) has the following shortcomings:
- If the Controller crashes, the entire CI/CD pipeline collapses.
- Resource allocation is uneven, with some Agents' pipeline jobs queuing while others remain idle.
- Different Agents may have different configurations and require the use of different coding languages, bringing inconvenience for management and maintenance.
KubeSphere DevOps projects support source code management tools such as GitHub, Git, and Bitbucket, and enable you to build CI/CD pipelines either with a graphical editing panel (Jenkinsfile out of SCM) or from a Jenkinsfile stored in a code repository (Jenkinsfile in SCM).
== Features
DevOps offers the following features:
- Independent DevOps projects for CI/CD pipelines with access control.
- Out-of-the-box DevOps functionality without complex Jenkins configurations.
// - Source-to-image (S2I) and Binary-to-image (B2I) for rapid delivery of images.
- link:../03-how-to-use/02-pipelines/02-create-a-pipeline-using-jenkinsfile/[Jenkinsfile-based Pipelines] for a consistent user experience supporting multiple code repositories.
- link:../03-how-to-use/02-pipelines/01-create-a-pipeline-using-graphical-editing-panel/[Graphical Editing Panel] to create pipelines with a low learning curve.
- Robust tool integration mechanisms, such as link:../04-how-to-integrate/01-sonarqube/[SonarQube], for code quality checks.
- Continuous delivery capabilities based on ArgoCD for automated deployment to multi-cluster environments.
== DevOps Pipeline Workflows
The DevOps CI/CD pipeline runs on underlying Kubernetes Jenkins Agents. These Jenkins Agents can dynamically scale up or down based on job statuses. The Jenkins Controller and Agents run as Pods on KubeSphere nodes. The Controller runs on one of the nodes, with its configuration data stored in a persistent volume claim. Agents run on various nodes but may not always be active, as they are dynamically created and automatically removed based on demand.
When the Jenkins Controller receives a build request, it dynamically creates a Jenkins Agent running in a Pod based on labels and registers it with the Controller. After the Agent completes its job, it is released, and the related Pods are deleted.
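To observe this behavior, you can watch agent pods being created and removed while a pipeline runs. The namespace below is an assumption based on a default installation and may differ in your environment:

----
# Watch Jenkins agent pods appear while a pipeline runs and disappear after it finishes.
# "kubesphere-devops-worker" is an assumed default namespace; adjust it for your setup.
kubectl get pods -n kubesphere-devops-worker --watch
----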
== Dynamic Provisioning of Jenkins Agents
Dynamic provisioning of Jenkins Agents has the following advantages:
- **Reasonable resource allocation**: Agents are dynamically allocated to idle nodes to prevent job queues due to high resource utilization on a single node.
- **High scalability**: Supports adding nodes to the cluster when jobs queue for extended periods due to insufficient resources.
- **High availability**: In case of a Jenkins Controller failure, DevOps automatically creates a new Jenkins Controller container, mounts the persistent volume to the newly created container to ensure data integrity, achieving cluster high availability.

View File

@ -0,0 +1,50 @@
---
title: "Create DevOps Projects"
keywords: "Kubernetes, {ks_product-en}, Workspace, DevOps Projects, Create DevOps Projects"
description: "Learn how to create DevOps projects."
weight: 01
---
:ks_permission: **DevOps Project Creation**
:ks_navigation: **DevOps Projects**
This section explains how to create DevOps projects.
== Prerequisites
* {empty}
include::../../../../_custom-en/workspaceManagement/workspaceManagement-prer-requiredPermission_v4.adoc[]
* **DevOps** must have been installed and enabled.
== Steps
include::../../../../_custom-en/workspaceManagement/workspaceManagement-oper-openWorkspacePage.adoc[]
+
include::../../../../../_ks_components-en/oper-navigate.adoc[]
. On the **DevOps Projects** page, click **Create**.
. In the **Create DevOps Project** dialog, configure the parameters for the DevOps project and then click **OK**.
+
--
[%header,cols="1a,4a"]
|===
|Parameter |Description
|Name
|The name of the DevOps project. Names can only contain lowercase letters, numbers, and hyphens (-), must start with a lowercase letter, end with a lowercase letter or number, and can be up to 63 characters long.
include::../../../../_custom-en/workspaceManagement/devopsProjects/devopsProject-para-aliasAndDescription.adoc[]
|Cluster Settings
|The cluster available to the DevOps project. Resources in the DevOps project run on the cluster selected here.
|===
After creating the DevOps project, you can invite users to join the project and deploy business within the DevOps project.
--

View File

@ -0,0 +1,67 @@
---
title: "View DevOps Project List"
keywords: "Kubernetes, {ks_product-en}, Workspace, DevOps Projects, View DevOps Project List"
description: "Learn how to view the DevOps project list."
weight: 02
---
:ks_permission: **DevOps Project Viewing**
:ks_navigation: **DevOps Projects**
This section explains how to view the DevOps project list.
== Prerequisites
* {empty}
include::../../../../_custom-en/workspaceManagement/workspaceManagement-prer-requiredPermission_v4.adoc[]
* **DevOps** must have been installed and enabled.
== Steps
include::../../../../_custom-en/workspaceManagement/workspaceManagement-oper-openWorkspacePage.adoc[]
+
include::../../../../../_ks_components-en/oper-navigate.adoc[]
+
====
* The DevOps project list provides the following information:
+
--
[%header,cols="1a,4a"]
|===
|Parameter |Description
|Name
|The name of the DevOps project.
|Status
|The current status of the DevOps project.
* **Successful**: The DevOps project has been successfully created and is available.
* **Pending**: The DevOps project is being created.
* **Deleting**: The DevOps project is in the process of being deleted.
|Creator
|The user who created the DevOps project.
|Creation Time
|The creation time of the DevOps project.
|===
--
* Click the search box at the top of the list to search for DevOps projects by name.
include::../../../../../_ks_components-en/oper-refreshListData.adoc[]
include::../../../../../_ks_components-en/oper-customizeColumns.adoc[]
* Click the name of a DevOps project in the list to open the DevOps project management page. You can view and manage the resources in the DevOps project on the management page.
====


@ -0,0 +1,47 @@
---
title: "Edit DevOps Project Information"
keywords: "Kubernetes, {ks_product-en}, Workspace Management, DevOps Projects, Edit DevOps Project Information"
description: "Learn how to edit DevOps project information."
weight: 03
---
:ks_permission: **DevOps Project Management**
:ks_navigation: **DevOps Projects**
This section explains how to edit DevOps project information.
You can edit the alias and description of a DevOps project. KubeSphere does not support editing the name of an already created DevOps project.
== Prerequisites
* {empty}
include::../../../../_custom-en/workspaceManagement/workspaceManagement-prer-requiredPermission_v4.adoc[]
* **DevOps** must be installed and enabled.
== Steps
include::../../../../_custom-en/workspaceManagement/workspaceManagement-oper-openWorkspacePage.adoc[]
+
include::../../../../../_ks_components-en/oper-navigate.adoc[]
+
. On the right side of the DevOps project you want to edit, click image:/images/ks-qkcp/zh/icons/more.svg[more,18,18], and then select **Edit Information** from the dropdown list.
. In the **Edit Information** dialog, set the alias and description of the DevOps project, then click **OK**.
+
--
[%header,cols="1a,4a"]
|===
|Parameter |Description
include::../../../../_custom-en/workspaceManagement/devopsProjects/devopsProject-para-aliasAndDescription.adoc[]
|===
--


@ -0,0 +1,60 @@
---
title: "Delete DevOps Projects"
keywords: "Kubernetes, {ks_product-en}, Workspace Management, DevOps Projects, Delete DevOps Project"
description: "Learn how to delete DevOps projects."
weight: 04
---
:ks_permission: **DevOps Project Management**
:ks_navigation: **DevOps Projects**
This section explains how to delete DevOps projects.
// Note
include::../../../../../_ks_components-en/admonitions/note.adoc[]
Once a DevOps project is deleted, it cannot be recovered, and all resources within the DevOps project will also be deleted. Please proceed with caution.
include::../../../../../_ks_components-en/admonitions/admonEnd.adoc[]
== Prerequisites
* {empty}
include::../../../../_custom-en/workspaceManagement/workspaceManagement-prer-requiredPermission_v4.adoc[]
* **DevOps** must be installed and enabled.
== Delete a Single Project
include::../../../../_custom-en/workspaceManagement/workspaceManagement-oper-openWorkspacePage.adoc[]
+
include::../../../../../_ks_components-en/oper-navigate.adoc[]
+
. On the right side of the DevOps project you want to delete, click image:/images/ks-qkcp/zh/icons/more.svg[more,18,18], and then select **Delete** from the dropdown list.
. In the **Delete DevOps Project** dialog, enter the name of the DevOps project, then click **OK**.
== Delete Multiple Projects
include::../../../../_custom-en/workspaceManagement/workspaceManagement-oper-openWorkspacePage.adoc[]
+
include::../../../../../_ks_components-en/oper-navigate.adoc[]
+
. Select the checkbox on the left side of the DevOps projects you want to delete, then click **Delete** above the DevOps project list.
. In the **Delete Multiple DevOps Projects** dialog, enter the names of the DevOps projects, then click **OK**.
+
--
include::../../../../_custom-en/note-separateNamesByComma.adoc[]
--


@ -0,0 +1,11 @@
---
title: "Manage DevOps Projects"
keywords: "Kubernetes, {ks_product-en}, Workspace, DevOps Projects"
description: "Learn how to view and manage DevOps projects."
weight: 02
layout: "second"
---
This section covers managing DevOps projects.
DevOps projects provide users with Continuous Integration and Continuous Deployment (CI/CD) capabilities. You can integrate KubeSphere with third-party code repositories in a DevOps project and then automatically deliver source code changes to the target environment through pipelines or continuous deployments.


@ -0,0 +1,61 @@
---
title: "Create and Manage DevOps Projects"
keywords: "Kubernetes, {ks_product-en}, DevOps projects, DevOps project management"
description: "Demonstrates how to create and manage DevOps projects."
weight: 01
---
This section demonstrates how to create and manage DevOps projects.
== Prerequisites
* A workspace and a user (**project-admin**) have been created. Invite this user to the workspace and assign them the **workspace-self-provisioner** role. For more information, see link:../../../../02-quickstart/03-control-user-permissions[Control User Permissions].
* **DevOps** must have been installed and enabled.
== Create a DevOps Project
. Log in to the {ks_product-en} web console as the **project-admin** user and navigate to a workspace.
. Click **DevOps Projects** and then click **Create**.
. Enter the basic information for the DevOps project and click **OK**.
+
--
* **Name**: A concise name for the DevOps project, e.g., **demo-devops**.
* **Alias**: An alias for the DevOps project.
* **Description**: A brief introduction to the DevOps project.
* **Cluster Settings**: In the current version, a DevOps project cannot run across multiple clusters simultaneously. If there are multiple clusters, you must choose one cluster to run the DevOps project.
--
. Once the DevOps project is created, it will be displayed in the list on the DevOps Projects page.
== View the DevOps Project
Click the newly created DevOps project to navigate to its details page.
In a DevOps project, users can create CI/CD pipelines, credentials, and manage project members and roles. The actions that users can perform in a DevOps project vary based on their permissions.
* Pipelines
+
--
Pipelines are a suite of plugins that support continuous integration, testing, and building of code. Pipelines combine continuous integration (CI) and continuous delivery (CD) to provide streamlined workflows that automatically deliver your code to any target.
--
* Credentials
+
--
DevOps project users with appropriate permissions can configure credentials for pipelines to interact with external environments. After users add credentials in the DevOps project, the project can use these credentials to interact with third-party applications such as GitHub, GitLab, and Docker Hub. For more information, see link:../../03-how-to-use/05-devops-settings/01-credential-management[Credential Management].
--
* Members and Roles
+
--
Similar to projects, DevOps projects also need to assign roles to users so that they have different permissions within the DevOps project. Project administrators (e.g., **project-admin**) are responsible for inviting users and granting them different roles. For more information, see link:../../03-how-to-use/05-devops-settings/02-role-and-member-management[Role and Member Management].
--
== Edit or Delete a DevOps Project
. Click **Basic Information** under **DevOps Project Settings** to view an overview of the current DevOps project, including the number of project roles and members, project name, and project creator.
. Click the **Manage** button on the right to edit the basic information of this DevOps project or delete the DevOps project.


@ -0,0 +1,383 @@
---
title: "Create a Pipeline Using Graphic Editing Panels"
keywords: "Kubernetes, {ks_product-en}, DevOps projects, Using DevOps, Pipelines, Create Pipelines Using the Graphic Editing Panel"
description: "Introduces how to create pipelines using the graphic editing panel."
weight: 01
---
The graphic editing panel in DevOps includes all the necessary operations for Jenkins link:https://www.jenkins.io/en/doc/book/pipeline/#阶段[Stages] and link:https://www.jenkins.io/en/doc/book/pipeline/#步骤[Steps]. DevOps supports defining these stages and steps directly on the interactive panel without the need to create any Jenkinsfile.
This section demonstrates how to use the graphic editing panel to create pipelines in KubeSphere. Throughout the process, DevOps will automatically generate a Jenkinsfile based on the settings on the editing panel, eliminating the need to manually create a Jenkinsfile. Once the pipeline runs successfully, it will push the image to Docker Hub.
== Prerequisites
* **DevOps** must have been installed and enabled.
* You have an account on link:http://www.dockerhub.com[Docker Hub].
* A workspace, a DevOps project, and a user (e.g. **project-regular**) have been created, and the user has been invited to the DevOps project with the **operator** role. Refer to link:../../05-devops-settings/02-role-and-member-management[Role and Member Management].
* A dedicated CI node has been set up to run pipelines. Refer to link:../../05-devops-settings/04-set-ci-node[Set CI Nodes for Dependency Cache].
* An email server has been configured to receive pipeline notifications (optional). Refer to link:../09-jenkins-email[Set an Email Server for Pipelines].
* SonarQube has been configured to include code analysis in the pipeline (optional). Refer to link:../../../04-how-to-integrate/01-sonarqube/[Integrate SonarQube into Pipelines].
== Pipeline Overview
This example pipeline consists of the following stages:
[.admon.note,cols="a"]
|===
| Note
|
* **Stage 1: Checkout SCM**: Fetch the source code from the GitHub repository.
* **Stage 2: Unit Test**: The pipeline does not proceed to the next stage until the unit test passes.
* **Stage 3: Code Analysis**: Configure SonarQube for static code analysis.
* **Stage 4: Build and Push**: Build the image, tag it as **snapshot-$BUILD_NUMBER**, and push it to Docker Hub, where **$BUILD_NUMBER** is the run ID of the record in the pipeline run records.
* **Stage 5: Artifacts**: Generate an artifact (JAR package) and save it.
|===
== Step 1: Create Credentials
. Log in to the {ks_product-en} web console as the **project-regular** user.
. Click **Workspace Management** and navigate to your DevOps project. Under **DevOps Project Settings**, create the following credentials in the **Credentials** page. For more information on creating credentials, refer to link:../../05-devops-settings/01-credential-management[Credential Management].
+
--
[.admon.note,cols="a"]
|===
| Note
|
If your account or password contains special characters such as **@** and **$**, errors may occur during pipeline runs due to unrecognized characters. In such cases, encode your account or password on a third-party website (e.g., link:https://www.urlencoder.org[urlencoder]) and then copy and paste the encoded result as your credential information.
|===
[%header,cols="1a,2a,2a"]
|===
| Credential ID | Type | Where to use
| dockerhub-id
| Username and Password
| Docker Hub
|===
--
. Create another credential for SonarQube (**sonar-token**) for Stage 3 (Code Analysis). Choose the credential type **Access Token** and enter the SonarQube token in the **Token** field. Refer to link:../../../04-how-to-integrate/01-sonarqube/#_create_a_sonarqube_token_for_the_new_project[Create a SonarQube Token for the New Project]. Click **OK** to complete the process.
. Once created, you will see the credentials on the credentials page.
== Step 2: Create a Pipeline
. Log in to the {ks_product-en} web console as the **project-regular** user.
. Click **Workspace Management** and navigate to your DevOps project. Click **Pipelines** and then click **Create**.
. In the pop-up dialog, name it **graphical-pipeline** and click **Next**.
. On the **Advanced Settings** page, click **Add** to add the following string parameters. These parameters will be used for Docker commands in the pipeline. Once added, click **Create**.
+
--
[%header,cols="1a,2a,2a,2a"]
|===
| Parameter Type | Name | Value | Description
| String
| REGISTRY
| `docker.io`
| Image registry address. This example uses **docker.io**.
| String
| DOCKERHUB_NAMESPACE
| Your Docker ID
| Your Docker Hub account or organization name under the account.
| String
| APP_NAME
| `devops-sample`
| Application name. This example uses **devops-sample**.
|===
// note
[.admon.note,cols="a"]
|===
| Note
|
For other fields, use default values or refer to link:../05-pipeline-settings[Pipeline Settings] for custom configurations.
|===
--
== Step 3: Edit the Pipeline
. Click the pipeline name to enter its details page.
. To use the graphical editing panel, click **Edit Pipeline** under the **Pipeline Configurations** tab. In the pop-up dialog:
* Click **Custom Pipeline** and follow the steps to configure each stage.
* Alternatively, use the link:../03-use-pipeline-templates/[built-in pipeline templates] provided by DevOps.
. Click **Next** and then click **Create**.
[.admon.note,cols="a"]
|===
|Note
|
The **Sync Status** on the pipeline details page shows the synchronization result between KubeSphere and Jenkins. You can also click **Edit Jenkinsfile** to manually create a Jenkinsfile for the pipeline.
|===
=== Stage 1: Fetch Source Code (Checkout SCM)
The graphical editing panel consists of two areas: the **canvas** on the left and the **content** on the right. It automatically generates a Jenkinsfile based on your configurations for different stages and steps, providing a more user-friendly experience for developers.
[.admon.note,cols="a"]
|===
|Note
|
The pipeline includes link:https://www.jenkins.io/en/doc/book/pipeline/syntax/#declarative-pipeline[Declarative Pipeline] and link:https://www.jenkins.io/en/doc/book/pipeline/syntax/#scripted-pipeline[Scripted Pipeline]. Currently, creating Declarative Pipelines using this panel is supported. For more information on pipeline syntax, refer to the link:https://www.jenkins.io/en/doc/book/pipeline/syntax/[Jenkins Documentation].
|===
. On the graphical editing panel, select **node** from the **Type** dropdown list and **maven** from the **Label** dropdown list.
+
--
[.admon.note,cols="a"]
|===
|Note
|
**Agent** is used to define the execution environment. The **Agent** directive specifies where and how Jenkins executes the pipeline. For more information, see link:../10-choose-jenkins-agent/[Choose Jenkins Agent].
|===
image:/images/ks-qkcp/en/devops-user-guide/use-devops/create-a-pipeline-using-graphical-editing-panel/graphical_panel.png[,100%]
--
. Click the plus icon on the left to add a stage. Click the text box above **Add Step**, and in the **Name** field on the right, set the stage name (e.g., **Checkout SCM**).
+
image:/images/ks-qkcp/en/devops-user-guide/use-devops/create-a-pipeline-using-graphical-editing-panel/edit_panel.png[,100%]
. Click **Add Step**. Select **Git Clone** from the list to fetch sample code from GitHub. Fill in the required fields in the pop-up dialog. Click **OK** to confirm the operation.
+
--
* **URL**: Enter the GitHub repository link:https://github.com/kubesphere/devops-maven-sample.git[]. Note that this is a sample address; please use your own repository address.
* **Credential ID**: No need to input a credential ID in this example.
* **Branch**: Enter **v4.1.0-sonarqube**. Use the default v4.1.0 branch if the code analysis stage is not required.
image:/images/ks-qkcp/en/devops-user-guide/use-devops/create-a-pipeline-using-graphical-editing-panel/enter_repo_url.png[,100%]
--
=== Stage 2: Unit Test
. Click the plus icon to the right of Stage 1 to add a new stage for running unit tests in a container. Name it **Unit Test**.
+
image:/images/ks-qkcp/en/devops-user-guide/use-devops/create-a-pipeline-using-graphical-editing-panel/unit_test.png[,100%]
. Click **Add Step**, select **Specify Container** from the list. Name it **maven** and click **OK**.
+
image:/images/ks-qkcp/en/devops-user-guide/use-devops/create-a-pipeline-using-graphical-editing-panel/container_maven.png[,100%]
. Click the **maven** container step and **Add nesting steps**. Select **shell** from the list and enter the following command. Click **OK** to save it.
+
--
[,bash]
----
mvn clean test
----
[.admon.note,cols="a"]
|===
|Note
|
In the graphical editing panel, you can specify a series of link:https://www.jenkins.io/en/doc/book/pipeline/syntax/#steps[steps] to be executed within a given stage; a rough sketch of the Jenkinsfile fragment corresponding to this stage follows below.
|===
--
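For reference, the **Unit Test** stage configured above corresponds roughly to the following fragment in the Jenkinsfile that DevOps generates. This is only an approximate sketch; the generated file may differ in detail:
[,groovy]
----
stage('Unit Test') {
    steps {
        // The "Specify Container" step maps to a container block,
        // and the nested "shell" step maps to an sh step.
        container('maven') {
            sh 'mvn clean test'
        }
    }
}
----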
=== Stage 3: Code Analysis (Optional)
This stage uses SonarQube for code testing. If code analysis is not needed, this stage can be skipped.
. Click the plus icon to the right of **Unit Test** to add a stage for performing SonarQube code analysis in a container. Name it **Code Analysis**.
+
image:/images/ks-qkcp/en/devops-user-guide/use-devops/create-a-pipeline-using-graphical-editing-panel/code_analysis_stage.png[,100%]
. In **Code Analysis**, click **Add Step** and select **Specify Container**. Name it **maven** and click **OK**.
+
image:/images/ks-qkcp/en/devops-user-guide/use-devops/create-a-pipeline-using-graphical-editing-panel/maven_container.png[,100%]
. Click the **maven** container step and **Add nesting steps** to add a nesting step. Click **WithCredentials** and select SonarQube token (**sonar-token**) from the **Credential Name** list. Enter **SONAR_TOKEN** in the **Variable** and click **OK**.
+
image:/images/ks-qkcp/en/devops-user-guide/use-devops/create-a-pipeline-using-graphical-editing-panel/sonarqube_credentials.png[,100%]
. Under the **WithCredentials** step, click **Add nesting steps** to add another nesting step.
+
image:/images/ks-qkcp/en/devops-user-guide/use-devops/create-a-pipeline-using-graphical-editing-panel/nested_step.png[,100%]
. Click **WithSonarQubeEnv**, enter the name **sonar** in the pop-up dialog, and click **OK** to save it.
+
image:/images/ks-qkcp/en/devops-user-guide/use-devops/create-a-pipeline-using-graphical-editing-panel/sonar_env.png[,100%]
. Under the **WithSonarQubeEnv** step, click **Add nesting steps** to add another nesting step.
+
image:/images/ks-qkcp/en/devops-user-guide/use-devops/create-a-pipeline-using-graphical-editing-panel/add_nested_step.png[,100%]
. Click **shell** and enter the following command in the command line for SonarQube authentication and analysis. Click **OK** to complete the operation.
+
--
[,bash]
----
mvn sonar:sonar -Dsonar.login=$SONAR_TOKEN
----
--
. Click **Add nesting steps** under the **Specify Container** step (the third one), and select **Timeout**. Enter **1** in the time field, choose **hours** as the unit, and then click **OK** to complete the operation.
+
image:/images/ks-qkcp/en/devops-user-guide/use-devops/create-a-pipeline-using-graphical-editing-panel/add_nested_step_2.png[,100%]
+
image:/images/ks-qkcp/en/devops-user-guide/use-devops/create-a-pipeline-using-graphical-editing-panel/timeout_set.png[,100%]
. Click **Add nesting steps** under the **Timeout** step, select **waitForQualityGate**. Check **Abort the pipeline if quality gate status is not green** in the pop-up dialog. Click **OK** to save it.
+
image:/images/ks-qkcp/en/devops-user-guide/use-devops/create-a-pipeline-using-graphical-editing-panel/waitforqualitygate_set.png[,100%]
+
image:/images/ks-qkcp/en/devops-user-guide/use-devops/create-a-pipeline-using-graphical-editing-panel/sonar_ready.png[,100%]
=== Stage 4: Build and Push Image
. Click the plus icon to the right of the previous stage to add a new stage for building and pushing the image to Docker Hub. Name it **Build and Push**.
. In the **Build and Push** stage, click **Add Step**, select **Specify Container**, name it **maven**, and then click **OK**.
. Click **Add nesting steps** under the **maven** container step, select **shell** from the list, enter the following command in the pop-up window, and click **OK** to complete the action.
+
[source,bash]
----
mvn -Dmaven.test.skip=true clean package
----
. Again, click **Add nesting steps**, select **shell**. Enter the following command to build the Docker image based on the link:https://github.com/kubesphere/devops-maven-sample/blob/sonarqube/Dockerfile-online[Dockerfile].
+
--
[source,bash]
----
docker build -f Dockerfile-online -t $REGISTRY/$DOCKERHUB_NAMESPACE/$APP_NAME:SNAPSHOT-$BUILD_NUMBER .
----
image::/images/ks-qkcp/en/devops-user-guide/use-devops/create-a-pipeline-using-graphical-editing-panel/shell_command.png[100%]
--
. Once more, click **Add nesting steps**, select **WithCredential**. Fill in the following fields in the dialog that appears, and then click **OK**.
+
--
* **Credential Name**: Choose the Docker Hub credential you created, for example, **dockerhub-id**.
* **Username Variable**: Enter **DOCKER_USERNAME**.
* **Password Variable**: Enter **DOCKER_PASSWORD**.
image::/images/ks-qkcp/en/devops-user-guide/use-devops/create-a-pipeline-using-graphical-editing-panel/docker_credential.png[100%]
[.admon.note,cols="a"]
|===
|Note
|
For security reasons, account information is displayed as variables in the script. A simplified sketch of the corresponding Jenkinsfile step appears after the steps of this stage.
|===
--
. In the **WithCredential** step, click **Add nesting steps** (the first one). Select **shell** and enter the following command in the pop-up window to log in to Docker Hub. Click **OK** to confirm the operation.
+
--
[source,bash]
----
echo "$DOCKER_PASSWORD" | docker login $REGISTRY -u "$DOCKER_USERNAME" --password-stdin
----
image::/images/ks-qkcp/en/devops-user-guide/use-devops/create-a-pipeline-using-graphical-editing-panel/login_docker_command.png[100%]
--
. In the **WithCredential** step, click **Add nesting steps**. Select **shell** and enter the following command to push the SNAPSHOT image to Docker Hub. Click **OK** to complete the operation.
+
--
[source,bash]
----
docker push $REGISTRY/$DOCKERHUB_NAMESPACE/$APP_NAME:SNAPSHOT-$BUILD_NUMBER
----
image::/images/ks-qkcp/en/devops-user-guide/use-devops/create-a-pipeline-using-graphical-editing-panel/push_to_docker.png[100%]
--
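For reference, the **WithCredential** step configured above corresponds roughly to a `withCredentials` binding in the Jenkinsfile that DevOps generates. The following is only an approximate sketch; the credential ID and variable names follow the values used in this example:
[,groovy]
----
container('maven') {
    // Bind the Docker Hub credential to environment variables so that the
    // account and password do not appear in plain text in the script.
    withCredentials([usernamePassword(credentialsId: 'dockerhub-id',
            usernameVariable: 'DOCKER_USERNAME',
            passwordVariable: 'DOCKER_PASSWORD')]) {
        sh 'echo "$DOCKER_PASSWORD" | docker login $REGISTRY -u "$DOCKER_USERNAME" --password-stdin'
        sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/$APP_NAME:SNAPSHOT-$BUILD_NUMBER'
    }
}
----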
=== Stage 5: Artifacts
. Click the plus icon to the right of the **Build and Push** stage to add a new stage for storing artifacts, and name it **Artifacts**. In this example, a JAR file is used.
+
image:/images/ks-qkcp/en/devops-user-guide/use-devops/create-a-pipeline-using-graphical-editing-panel/add_artifact_stage.png[,100%]
. Select the **Artifacts** stage, click **Add Step**, choose **Archive artifacts**. In the pop-up dialog, enter **target/*.jar** to set the path for archiving artifacts in Jenkins. Click **OK** to complete the editing.
+
image:/images/ks-qkcp/en/devops-user-guide/use-devops/create-a-pipeline-using-graphical-editing-panel/artifact_info.png[,100%]
== Step 4: Run the Pipeline
. Pipelines created using the graphical editing panel need to be manually executed. Click **Run**, and a dialog will appear displaying the three string parameters defined in link:#_step_2_create_a_pipeline[Step 2: Create a Pipeline]. Click **OK** to run the pipeline.
+
image:/images/ks-qkcp/en/devops-user-guide/use-devops/create-a-pipeline-using-graphical-editing-panel/run_pipeline.png[,100%]
. Click the **Run Records** tab to view the running status of the pipeline and click a record to see details.
. If the pipeline reaches the **Push with Tag** stage, it will pause at this stage and require a user with approval permissions to click **Proceed**.
. Log in to the {ks_product-en} web console as the **project-admin** user, navigate to **Workspace Management**, access your DevOps project, and click the **graphical-pipeline** pipeline. Under the **Run Records** tab, click the record to be reviewed and click **Proceed** to approve the pipeline.
[.admon.note,cols="a"]
|===
|Note
|
To simultaneously run multiple pipelines that do not include multibranch configurations, select these pipelines on the **Pipelines** list page and click **Run** to run them in bulk.
|===
== Step 5: View Pipeline Details
. Log in to the {ks_product-en} web console as the **project-regular** user, navigate to **Workspace Management**, access your DevOps project, and click the **graphical-pipeline** pipeline.
. Under the **Run Records** tab, click a record under **Status** to access the details of the run record. If the task status is **Successful**, all stages of the pipeline will show **Successful**.
. Under the **Run Logs** tab, click each stage to view detailed logs. Click **View Full Logs** to troubleshoot and analyze issues based on the logs, which can also be downloaded for further analysis.
== Step 6: Download Artifacts
On the **Artifacts** tab of the run record details page, click the icon next to the artifact to download it.
== Step 7: View Code Analysis Results
Navigate to the **Code Check** page to view the code analysis results provided by SonarQube for this pipeline. This page will be unavailable if SonarQube has not been configured beforehand. For more information, refer to link:../../../04-how-to-integrate/01-sonarqube/[Integrate SonarQube into Pipelines].
== Step 8: Verify Kubernetes Resources
If each stage of the pipeline runs successfully, a Docker image will be automatically built and pushed to your Docker Hub repository.
. After a successful pipeline run, an image will be pushed to Docker Hub. Log in to Docker Hub to view the result.
+
image:/images/ks-qkcp/en/devops-user-guide/use-devops/create-a-pipeline-using-graphical-editing-panel/dockerhub_image.png[,100%]
. The application name is **APP_NAME**, which in this example is **devops-sample**. The tag value is **SNAPSHOT-$BUILD_NUMBER**, where **$BUILD_NUMBER** corresponds to the **Run ID** listed under the **Run Records** tab.


@ -0,0 +1,329 @@
---
title: "Create a Pipeline Using a Jenkinsfile"
keywords: "Kubernetes, {ks_product-en}, DevOps projects, using DevOps, pipelines, creating pipelines using Jenkinsfile"
description: "Introduction to creating pipelines using Jenkinsfile."
weight: 02
---
A Jenkinsfile is a text file that contains the definition of a Jenkins pipeline and is checked into a source code control repository. As it stores the entire workflow as code, the Jenkinsfile forms the basis for code reviews and pipeline iterations. For more information, refer to the link:https://www.jenkins.io/zh/doc/book/pipeline/jenkinsfile/[Jenkins Documentation].
This document demonstrates how to create a pipeline based on a Jenkinsfile from a GitHub repository.
[.admon.note,cols="a"]
|===
|Note
|
DevOps supports creating two types of pipelines: pipelines created based on a Jenkinsfile in SCM as described in this document, and link:../01-create-a-pipeline-using-graphical-editing-panel/[pipelines created through the graphical editing panel].
A Jenkinsfile in SCM means that the Jenkinsfile is stored in the Source Control Management (SCM) repository, that is, it must be part of the code repository. The DevOps system automatically builds the CI/CD pipeline based on the existing Jenkinsfile in the code repository. By defining workflows such as **stage** and **step**, specific build, test, and deployment requirements can be met. A minimal skeleton is sketched after this note.
|===
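As a minimal illustration only (the actual **Jenkinsfile-online** in the sample repository is more elaborate and defines all the stages described below), a Jenkinsfile stored in SCM might look like this:
[,groovy]
----
pipeline {
    agent {
        node {
            label 'maven'
        }
    }
    environment {
        // Credential IDs refer to credentials created in the DevOps project.
        DOCKER_CREDENTIAL_ID = 'dockerhub-id'
        REGISTRY = 'docker.io'
        APP_NAME = 'devops-maven-sample'
    }
    stages {
        stage('unit test') {
            steps {
                container('maven') {
                    sh 'mvn clean test'
                }
            }
        }
        // Further stages (code analysis, build & push, push latest, push with tag)
        // follow the same stage/steps pattern.
    }
}
----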
== Prerequisites
* **DevOps** must have been installed and enabled.
* You have a link:https://hub.docker.com[Docker Hub] account and a link:https://github.com[GitHub] account.
* A workspace, a DevOps project, and a user (e.g. **project-regular**) have been created, and the user has been invited to the DevOps project with the **operator** role. Refer to link:../../05-devops-settings/02-role-and-member-management[Role and Member Management].
* A dedicated CI node has been set up to run pipelines. Refer to link:../../05-devops-settings/04-set-ci-node[Set CI Nodes for Dependency Cache].
* SonarQube has been installed and configured (optional). Refer to link:../../../04-how-to-integrate/01-sonarqube/[Integrate SonarQube into Pipelines]. If you skip this, the **SonarQube Analysis** stage will be omitted.
== Pipeline Overview
This example pipeline consists of the following stages:
[.admon.note,cols="a"]
|===
| Note
|
* **Stage 1: Checkout SCM**: Fetch the source code from the GitHub repository.
* **Stage 2: Unit Test**: The pipeline does not proceed to the next stage until the unit test passes.
* **Stage 3: SonarQube Analysis**: The SonarQube code quality analysis.
* **Stage 4: Build and Push**: Build an image based on the selected branches in **Strategy Settings** and push the **SNAPSHOT-$BRANCH_NAME-$BUILD_NUMBER** tag to Docker Hub, where **$BUILD_NUMBER** is the run ID of the record in the pipeline run records.
* **Stage 5: Push Latest**: Tag the `v4.1.0-sonarqube` branch as **latest** and push it to Docker Hub.
* **Stage 6: Push with Tag**: Generate a tag and release it to GitHub, which will be pushed to Docker Hub.
|===
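The image tags described above are produced by ordinary `docker` commands inside the pipeline stages. The following is only a simplified sketch of how the snapshot, latest, and release tags might be tagged and pushed; it is not the exact content of **Jenkinsfile-online**:
[,groovy]
----
container('maven') {
    // Stage 4: push the per-run snapshot tag.
    sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/$APP_NAME:SNAPSHOT-$BRANCH_NAME-$BUILD_NUMBER'
    // Stage 5: retag the snapshot as "latest" and push it.
    sh 'docker tag $REGISTRY/$DOCKERHUB_NAMESPACE/$APP_NAME:SNAPSHOT-$BRANCH_NAME-$BUILD_NUMBER $REGISTRY/$DOCKERHUB_NAMESPACE/$APP_NAME:latest'
    sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/$APP_NAME:latest'
    // Stage 6: after manual approval, push the release tag supplied as $TAG_NAME.
    sh 'docker tag $REGISTRY/$DOCKERHUB_NAMESPACE/$APP_NAME:SNAPSHOT-$BRANCH_NAME-$BUILD_NUMBER $REGISTRY/$DOCKERHUB_NAMESPACE/$APP_NAME:$TAG_NAME'
    sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/$APP_NAME:$TAG_NAME'
}
----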
== Step 1: Create Credentials
. Log in to the {ks_product-en} web console as the **project-regular** user.
. Click **Workspace Management** and navigate to your DevOps project. Under **DevOps Project Settings**, create the following credentials in the **Credentials** page. For more information on creating credentials, refer to link:../../05-devops-settings/01-credential-management[Credential Management].
+
--
[.admon.note,cols="a"]
|===
| Note
|
If your account or password contains special characters such as **@** and **$**, errors may occur during pipeline runs due to unrecognized characters. In such cases, encode your account or password on a third-party website (e.g., link:https://www.urlencoder.org[urlencoder]) and then copy and paste the encoded result as your credential information.
|===
[%header,cols="1a,2a,2a"]
|===
| Credential ID | Type | Where to use
| dockerhub-id
| Username and Password
| Docker Hub
|github-id
| Username and Password
| GitHub
|===
--
. Create another credential for SonarQube (**sonar-token**) for Stage 3 (Code Analysis). Choose the credential type **Access Token** and enter the SonarQube token in the **Token** field. Refer to link:../../../04-how-to-integrate/01-sonarqube/#_create_a_sonarqube_token_for_the_new_project[Create a SonarQube Token for the New Project]. Click **OK** to complete the process.
. You also need to create a GitHub Personal Access Token (PAT) with the permissions shown in the following image. Then, in the DevOps project, use the generated token to create account credentials for GitHub authentication (e.g., **github-token**).
+
--
image:/images/ks-qkcp/zh/devops-user-guide/use-devops/create-a-pipeline-using-a-jenkinsfile/github-token-scope.png[,100%]
[.admon.note,cols="a"]
|===
|Note
|
To create a GitHub Personal Access Token, go to your GitHub account's **Settings**, click **Developer settings**, select **Personal access tokens**, and then click **Generate new token**.
|===
--
. Once created, you will see the credentials on the credentials page.
== Step 2: Modify the Jenkinsfile in your GitHub repository
. Log in to GitHub and fork all branches of the repository link:https://github.com/kubesphere/devops-maven-sample[devops-maven-sample] to your personal GitHub account.
. In your GitHub repository **devops-maven-sample**, switch to the `v4.1.0-sonarqube` branch and click on the file **Jenkinsfile-online** in the root directory.
. Click the edit icon on the right to edit the environment variables.
+
--
[%header,cols="1a,1a,2a"]
|===
|Entry |Value |Description
|DOCKER_CREDENTIAL_ID
|dockerhub-id
|The **name** of the credential created in KubeSphere for your Docker Hub account.
|GITHUB_CREDENTIAL_ID
|github-id
|The **name** of the credential created in KubeSphere for your GitHub account, used to push tags to your GitHub repository.
|REGISTRY
|docker.io
|It defaults to **docker.io**, used as the address to push images.
|DOCKERHUB_NAMESPACE
|your-dockerhub-id
|Replace it with your Docker Hub account name or the organization name under that account.
|GITHUB_ACCOUNT
|your-github-id
|Replace it with your GitHub account name. For example, if your GitHub URL is link:https://github.com/kubesphere/[], your GitHub account name is **kubesphere** or the organization name under that account.
|APP_NAME
|devops-maven-sample
|The application name.
|SONAR_CREDENTIAL_ID
|sonar-token
|The **name** of the SonarQube token credential created in KubeSphere, used for code quality checks.
|===
[.admon.note,cols="a"]
|===
|Note
|
In the Jenkinsfile, the **-o** parameter for the **mvn** command enables offline mode. Relevant dependencies have been downloaded in this tutorial to save time and accommodate network disruptions in certain environments. Offline mode is enabled by default.
|===
--
. After editing the environment variables, click **Commit changes** to update the file in the `v4.1.0-sonarqube` branch.
== Step 3: Create a Pipeline
. Log in to {ks_product-en} web console as the **project-regular** user.
. Click **Workspace Management** and navigate to your DevOps project. Click **Pipelines** and then click **Create**.
. In the pop-up dialog, name it **jenkinsfile-in-scm**.
. Under **Pipeline Type**, select **Multi-branch Pipeline**.
. Under **Code Repository**, choose a code repository and click **Next** to proceed.
+
--
If there are no available code repositories, click **Create a code repository** below. For more information, see link:../../04-import-code-repositories/[Import Code Repositories].
--
.. In the **Import Code Repository** dialog, enter a custom code repository name and click **Select a code repository**.
.. On the **GitHub** tab, select **github-token** from the **Credential** dropdown menu and click **OK**.
.. In the GitHub list, select your GitHub account, and all repositories associated with that token will be listed on the right. Choose **devops-maven-sample** and click **Select**.
.. Click **OK** to select your code repository.
. In **Advanced Settings**, check **Delete outdated branches**. In this tutorial, it is recommended to leave **Branch Retention Period (days)** and **Maximum Branches** at their default values.
+
--
Selecting **Delete outdated branches** means that the branch records will be discarded altogether. A branch record includes the console output, archived artifacts, and other relevant metadata of a specific branch. Keeping fewer branches saves the disk space used by Jenkins. KubeSphere provides two options to determine when old branches are discarded:
* Branch Retention Period (days). Branches that exceed the retention period are deleted.
* Maximum Branches. The earliest branch is deleted when the number of branches exceeds the maximum number.
[.admon.note,cols="a"]
|===
|Note
|
**Branch Retention Period (days)** and **Maximum Branches** apply to branches at the same time. As long as a branch meets the condition of either field, it is deleted. For example, if you specify 2 as the retention period and 3 as the maximum number of branches, any branch that exceeds either limit is deleted. DevOps prepopulates these two fields with 7 and 5 by default, respectively.
|===
--
. In **Strategy Settings**, DevOps offers four strategies by default. You can delete **Discover PRs from Forks**, as this strategy will not be used in this example. For the other strategies, there is no need to change the settings; you can use the default values directly.
+
--
[.admon.note,cols="a"]
|===
|Note
|
To enable **Strategy Settings** here, you should select GitHub as the code repository.
|===
When a Jenkins pipeline runs, a pull request (PR) submitted by developers is also treated as a separate branch.
**Discover Branches**
* **Exclude branches filed as PRs**. Source branches that have also been filed as PRs, such as the origin's master branch, are not scanned. These branches need to be merged.
* **Include only branches filed as PRs**. Only scan the PR branch.
* **Include all branches**. Pull all the branches from the repository origin.
**Discover PRs from Origin**
* **Pull the code with the PR merged**. A pipeline is created and runs based on the source code after the PR is merged into the target branch.
* **Pull the code at the point of the PR**. A pipeline is created and runs based on the source code of the PR itself.
* **Create two pipelines respectively**. Two pipelines are created, one is based on the source code after the PR is merged into the target branch, and the other is based on the source code of the PR itself.
--
. Scroll down to **Script Path** and set it to **Jenkinsfile-online**, which is the file name of the Jenkinsfile located in the root directory of the example repository. This field specifies the path of the Jenkinsfile in the code repository, relative to the repository's root directory. If the file location changes, the script path also needs to be changed.
. In **Scan Trigger**, select **Scan periodically** and set the interval to **5 minutes**. Click **Create** to finish.
+
[.admon.note,cols="a"]
|===
|Note
|
You can set a specific interval to allow pipelines to scan remote repositories, so that any code updates or new PRs can be detected based on the strategy you set in **Strategy Settings**.
|===
== Step 4: Run the pipeline
. After a pipeline is created, click its name to go to its details page.
+
--
[.admon.note,cols="a"]
|===
|Note
|
* On the **Pipelines** list page, click image:/images/ks-qkcp/zh/icons/more.svg[more,18,18] on the right of the pipeline, and select **Copy** to create a duplicate of that pipeline.
* To simultaneously run multiple pipelines that do not include multibranch configurations, select these pipelines on the **Pipelines** list page and click **Run** to run them in bulk.
* The **Sync Status** on the pipeline details page shows the synchronization result between KubeSphere and Jenkins. If the synchronization is successful, it will display **Successful** along with a green checkmark icon.
|===
--
. Under **Run Records**, multiple branches are being scanned. Click **Run** on the right, and the pipeline runs based on the strategy you set in **Strategy Settings**. Select **v4.1.0-sonarqube** from the drop-down list and add a tag number such as `v0.0.2`. Click **OK** to trigger a new run.
+
--
[.admon.note,cols="a"]
|===
|Note
|
* If you do not see any run records on this page, you need to refresh your browser manually or click **More > Scan Repository**.
* The tag name is used to refer to the newly generated release and image in GitHub and Docker Hub. Existing tag names cannot be reused for the **TAG_NAME** field; otherwise, the pipeline will not run successfully.
|===
--
. Wait for a while, then click a run record to view its details.
+
--
[.admon.note,cols="a"]
|===
|Note
|
Run failures may be caused by different factors. In this example, only the Jenkinsfile of the `v4.1.0-sonarqube` branch was changed when you edited the environment variables in the steps above, while these variables in the v4.1.0 branch remain unchanged (that is, they still point to the wrong GitHub and Docker Hub accounts). If you run the v4.1.0 branch, the run will fail. Other causes of failure include network issues, incorrect syntax in the Jenkinsfile, and so on.
In the **Run Logs** tab on the run record details page, you can view detailed information of the logs to troubleshoot and resolve issues.
|===
--
. If the pipeline reaches the **Push with Tag** stage, it will pause at this point and require a user with approval permissions to click **Proceed**.
+
--
In a development or production environment, someone with higher permissions (for example, a release manager) is required to review the pipeline, the images, and the code analysis results. They have the authority to decide whether the pipeline can proceed to the next stage. In the Jenkinsfile, the `input` section specifies who reviews the pipeline. To specify a user (for example, `project-admin`) as the reviewer, add a field in the Jenkinsfile. If there are multiple reviewers, separate them with commas as follows:
[,groovy]
----
input(id: 'release-image-with-tag', message: 'release image with tag?', submitter: 'project-admin,project-admin1')
----
--
. Log in to the {ks_product-en} web console as a user with pipeline approval permissions. Click **Workspace Management** and navigate to your DevOps project. Click the pipeline name to access its details page. Under the **Run Records** tab, click the record you want to review, then click **Proceed** to approve the pipeline.
+
[.admon.note,cols="a"]
|===
|Note
|
In KubeSphere, if you do not specify a reviewer, the user that can run a pipeline will be able to continue or terminate the pipeline. Additionally, the pipeline creator, users with the project administrator role, or any accounts specified by you also have the authority to continue or terminate the pipeline.
|===
== Step 5: Check Pipeline Status
. Under the **Pipeline** tab in the run records, check the running status of the pipeline. The pipeline may take a few minutes to initialize when first created.
// The sample pipeline consists of eight stages, each defined separately in the link:https://github.com/kubesphere/devops-maven-sample/blob/sonarqube/Jenkinsfile-online[Jenkinsfile-online].
. Click the **Run Logs** tab to view the pipeline's running logs. Click each stage to view detailed logs. Click **View Full Logs** to troubleshoot and resolve issues based on the logs, and you can also download the logs for further analysis.
== Step 6: Verify Results
. After a successful pipeline run, click **Code Check** to view the code analysis results provided by SonarQube. This page will be unavailable if SonarQube has not been configured beforehand.
. Following the definitions in the Jenkinsfile, the Docker image built by the pipeline has been successfully pushed to Docker Hub. In Docker Hub, you will see an image with the tag **v0.0.2**, specified before the pipeline runs.
. At the same time, a new tag and a new release have been generated in GitHub.


@ -0,0 +1,119 @@
---
title: "Create Pipelines Using Pipeline Templates"
keywords: "Kubernetes, {ks_product-en}, DevOps Projects, Using DevOps, Pipelines"
description: "Learn how to create pipelines using pipeline templates."
weight: 03
---
This document illustrates how to create pipelines using pipeline templates on KubeSphere.
DevOps provides a graphic editing panel that facilitates the definition of stages and steps in Jenkins pipelines through interactive operations. It includes various built-in pipeline templates like Node.js, Maven, and Golang, enabling users to swiftly create pipelines based on these templates. While DevOps also offers CI and CI & CD pipeline templates, they might not fully align with custom requirements. It is advisable to use other built-in templates or directly customize pipelines.
* CI Pipeline Template
+
--
The CI pipeline template comprises two stages. The **clone code** stage fetches the code, while the **build & push** stage builds the image and pushes it to Docker Hub. Prior to editing, create credentials for the code repository and Docker Hub repository, and then configure the URLs and credentials in the corresponding steps. Once editing is finalized, the pipeline can be initiated.
--
* CI & CD Pipeline Template
+
--
The CI & CD pipeline template consists of six stages. For detailed information on each stage, please refer to link:../02-create-a-pipeline-using-jenkinsfile/[Create a Pipeline Using a Jenkinsfile]. Prior to editing, create credentials for the code repository and Docker Hub repository, and then configure the URLs and credentials in the corresponding steps. Once editing is finalized, the pipeline can be initiated.
--
== Prerequisites
* **DevOps** must have been installed and enabled.
* A workspace, a DevOps project, and a user (e.g. **project-regular**) have been created, and the user has been invited to the DevOps project with the **operator** role. Refer to link:../../05-devops-settings/02-role-and-member-management[Role and Member Management].
== Steps
The following takes Node.js as an example to show how to use a built-in pipeline template. Steps for using Maven and Golang pipeline templates are analogous.
. Log in to the {ks_product-en} web console as the **project-regular** user.
. Click **Workspace Management** and navigate to your DevOps project. Click **Pipelines** and then click **Create**.
. In the pop-up dialog, input the pipeline name, click **Next**, and then click **Create**.
. Click the created pipeline, proceed to the **Pipeline Configurations** tab, and select **Edit Pipeline**.
. In the **Create Pipeline** dialog, select **Node.js**, and then click **Next**.
. On the **Parameter Configuration** tab, configure the following parameters according to the actual situation, and then click **Create**.
+
--
[%header,cols="1a,4a"]
|===
|Parameter |Description
|GitURL
|The URL of the project repository to be cloned.
|GitRevision
|The branch to be checked out.
|NodeDockerImage
|The Docker image version for Node.js.
|InstallScript
|Shell script to install dependencies.
|TestScript
|Shell script for project testing.
|BuildScript
|Shell script to build the project.
|ArtifactsLocation
|Path where artifact files are located.
|===
--
. By default, a series of steps has been added to the graphic editing panel on the left. Select **Add Step** or **Add Parallel Stage** to make adjustments.
. Click a step. On the right side of the page, you can:
+
--
* Modify the stage name.
* Delete the stage.
* Specify the agent type.
* Add conditions.
* Edit or remove a specific task.
* Add steps or add nesting steps.
//note
[.admon.note,cols="a"]
|===
|Note
|
Refer to link:../01-create-a-pipeline-using-graphical-editing-panel/[Create a Pipeline Using Graphic Editing Panels] to learn how to customize steps and stages in the pipeline template.
|===
--
. In the **Agent** section on the right, choose the agent type, defaulting to **kubernetes**, and click **OK**.
+
--
[%header,cols="1a,4a"]
|===
|Agent Type |Description
|any
|Uses the default base pod template to create a Jenkins agent for running pipelines.
|node
|Uses a pod template with the specific label to create a Jenkins agent for running pipelines. Available labels include base, java, nodejs, maven, go, and more.
|kubernetes
|Uses a standard Kubernetes pod template defined in a YAML file to create a Jenkins agent for running pipelines (see the sketch after these steps).
|===
--
. Review the details of the created pipeline template, and click **Run** to run the pipeline.
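When the **kubernetes** agent type is selected, the pipeline embeds a standard Pod template directly in its `agent` directive. The following minimal sketch is illustrative only; the container name and image are placeholders rather than values taken from the built-in templates:
[,groovy]
----
pipeline {
    agent {
        kubernetes {
            // Inline Pod template written in standard Kubernetes YAML.
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: nodejs
    image: node:18
    command: ["sleep"]
    args: ["infinity"]
'''
        }
    }
    stages {
        stage('Build') {
            steps {
                container('nodejs') {
                    sh 'node --version'
                }
            }
        }
    }
}
----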


@ -0,0 +1,159 @@
---
title: "Create a Multi-branch Pipeline with GitLab"
keywords: "Kubernetes, {ks_product-en}, DevOps project, use DevOps, pipeline"
description: "Learn how to create a multi-branch pipeline using GitLab."
weight: 04
---
link:https://gitlab.com/users/sign_in[GitLab] is a web-based Git repository management tool that supports public and private repositories, and provides comprehensive DevOps functionalities including source code management, code review, issue tracking, continuous integration, and more. With GitLab, teams can collaborate efficiently on a single platform to complete the entire software development process from coding to deployment.
{ks_product-en} supports creating multi-branch pipelines using GitLab in DevOps projects. This document demonstrates how to create a multi-branch pipeline with GitLab.
== Prerequisites
* **DevOps** must have been installed and enabled.
* A workspace, a DevOps project, and a user (e.g., **project-regular**) have been created, and the user has been invited to the DevOps project with the **operator** role. Refer to link:../../05-devops-settings/02-role-and-member-management[Role and Member Management].
* You have a link:https://gitlab.com/users/sign_in[GitLab] account and a link:https://hub.docker.com/[Docker Hub] account.
== Step 1: Create Credentials
. Log in to the {ks_product-en} web console as the **project-regular** user.
. Click **Workspace Management** and enter your DevOps project, then create the following credentials under **Credentials** in **DevOps Project Settings**. For more information on how to create credentials, refer to link:../../05-devops-settings/01-credential-management/[Credential Management].
+
--
//note
[.admon.note,cols="a"]
|===
|Note
|
If your account or password contains special characters such as **@** and **$**, errors may occur during pipeline runs due to unrecognized characters. In such cases, encode your account or password on a third-party website (e.g., link:https://www.urlencoder.org[urlencoder]) and then copy and paste the encoded result as your credential information.
|===
[%header,cols="1a,2a,2a"]
|===
| Credential ID | Type | Where to use
|dockerhub-id
|Username and password
|Docker Hub
|gitlab-id
|Username and password
|GitLab
|===
--
. After creation, you will see the created credentials on the credentials page.
== Step 2: Edit the Jenkinsfile in your GitLab Repository
. Log in to GitLab and create a public project. Click **New Project > Import Project**, select **Import repository from URL**, enter the URL of link:https://github.com/kubesphere/devops-maven-sample[devops-maven-sample], choose the visibility level **Public**, and then click **Create Project**.
. In the newly created project, create a new branch from the v4.1.0 branch, named **gitlab-demo**.
. In the **gitlab-demo** branch, click the **Jenkinsfile-online** file in the root directory.
. Click **Edit**, change **GITHUB_CREDENTIAL_ID**, **GITHUB_ACCOUNT**, and **@github.com** to **GITLAB_CREDENTIAL_ID**, **GITLAB_ACCOUNT**, and **@gitlab.com** respectively, and edit the entries listed in the table below. Also, change the value of **branch** in **push latest** to **gitlab-demo**.
+
--
[%header,cols="1a,2a,2a"]
|===
|Entry|Value|Description
|GITLAB_CREDENTIAL_ID
|gitlab-id
|The **name** of the credential created in KubeSphere for your GitLab account, used to push tags to your GitLab repository.
|DOCKERHUB_NAMESPACE
|your-dockerhub-id
|Replace with your Docker Hub account name, or the name of an organization under the account.
|GITLAB_ACCOUNT
|your-gitlab-id
|Replace with your GitLab account name, or the name of a user group under the account.
|===
//note
[.admon.note,cols="a"]
|===
|Note
|
For more information about environment variables in Jenkinsfile, refer to link:../02-create-a-pipeline-using-jenkinsfile/[Create a Pipeline Using a Jenkinsfile].
|===
--
. Click **Commit changes** to update the file.
== Step 3: Create a Pipeline
. Log in to the {ks_product-en} web console as the **project-regular** user.
. Click **Workspace Management** and enter your DevOps project, then click **Create** on the **Pipelines** page.
. In the pop-up dialog, name it **gitlab-multi-branch**.
. Under **Pipeline Category**, select **Multi-branch Pipeline**.
. Under **Code Repository**, select a code repository and click **Next** to continue.
+
--
If no code repository is available, click **Create a code repository** below. For more information, refer to link:../../04-import-code-repositories/[Import Code Repositories].
--
.. In the **Import Code Repository** dialog, enter a name for the code repository (customizable), then click to select the code repository.
.. On the **GitLab** tab, select the default option link:https://gitlab.com[] under **GitLab Server Address**, enter the name of the group the GitLab project belongs to in **Project Group/Owner**, then select the **devops-maven-sample** repository from the dropdown menu under **Code Repository**. Click image:/images/ks-qkcp/zh/icons/check-dark.svg[check,18,18] in the bottom right corner, then click **OK**.
+
--
//note
[.admon.note,cols="a"]
|===
|Note
|
To use a private GitLab repository, follow these steps:
* Go to **User Settings > Access Tokens** on GitLab, create a personal access token with API and read_repository permissions.
* link:../07-access-jenkins-console[Access Jenkins Dashboard], go to **Manage Jenkins > Manage Credentials**, create Jenkins credentials using your GitLab token for accessing GitLab. Then go to **Manage Jenkins > Configure System**, add the credentials in **GitLab**.
* In the DevOps project, select **DevOps Project Settings > Credentials**, create a credential using your GitLab token. When creating the pipeline, specify this credential in the **Credentials** under the **GitLab** tab so that the pipeline can pull code from your private GitLab repository.
|===
--
. On the **Advanced Settings** tab, change the **Script Path** to **Jenkinsfile-online** and then click **Create**.
+
--
//note
[.admon.note,cols="a"]
|===
|Note
|
This field specifies the path of the Jenkinsfile in the code repository, which represents the root directory of the repository. If the file location changes, the script path also needs to be changed.
|===
--
== Step 4: Run the Pipeline
. After the pipeline is created, it will be displayed in the list. Click the pipeline name to view its details page.
. Click **Run** on the right. In the pop-up dialog, select **gitlab-demo** from the dropdown menu and add a tag number, such as **v0.0.2**. Click **OK** to trigger a new run.
. Wait a moment, then click the run record to view the details.
. If the pipeline reaches the **Push with Tag** stage, it will pause at this stage and require a user with approval permissions to click **Proceed**.
== Step 5: Check Pipeline Status
. On the **Pipeline** tab of the run record, check the running status of the pipeline.
. Click the **Run Logs** tab to view the pipeline run logs. Click each stage to view its detailed logs. Click **View Full Logs** to troubleshoot and resolve issues based on the logs, or download the logs to your local machine for further analysis.
== Step 6: Verify Results
. As defined in the Jenkinsfile, the Docker image built by the pipeline has also been successfully pushed to Docker Hub. In Docker Hub, you will see the image with the tag **v0.0.2**, which was specified before the pipeline run.
. Meanwhile, a new tag has been generated in GitLab.


@ -0,0 +1,204 @@
---
title: "Pipeline Settings"
keywords: "Kubernetes, {ks_product-en}, DevOps project, use DevOps, pipeline"
description: "Learn how to customize pipeline configurations."
weight: 05
---
When creating a pipeline, you can customize the pipeline configuration through various settings.
After the pipeline is created, you can also edit its configuration by entering the pipeline details page and clicking **Edit Information** or **More > Edit Settings**.
This document details how to configure pipelines.
== Prerequisites
* **DevOps** must have been installed and enabled.
* A workspace, a DevOps project, and a user (e.g., **project-regular**) have been created, and the user has been invited to the DevOps project with the **operator** role. Refer to link:../../05-devops-settings/02-role-and-member-management[Role and Member Management].
== Basic Information
When creating a pipeline, on the **Basic Information** tab, you can customize the following information:
* **Name**: The name of the pipeline. Pipelines within the same DevOps project cannot have the same name.
* **DevOps Project**: The DevOps project to which the pipeline belongs.
* **Description**: Additional information describing the pipeline. The description should not exceed 256 characters.
* **Pipeline Type**: Regular pipeline or multi-branch pipeline. If you choose a multi-branch pipeline, you need to select a code repository.
* **Code Repository (Optional)**: Select a code repository as the code source for the pipeline. You can choose GitHub, GitLab, Bitbucket, and Git as the code source.
+
====
* GitHub
+
--
If you choose **GitHub**, you must specify the credentials for accessing GitHub. If you have already created credentials using your GitHub token, select the existing credentials from the dropdown menu, or click **Create Credential** to create new credentials. After selecting the credentials, click **OK** to choose your repository on the right. After completing all operations, click image:/images/ks-qkcp/zh/icons/check-dark.svg[check,18,18].
--
* GitLab
+
--
If you choose **GitLab**, you must specify the GitLab server address, group/owner, and code repository. If credentials are required to access the code repository, you need to specify a credential. After completing all operations, click image:/images/ks-qkcp/zh/icons/check-dark.svg[check,18,18].
--
* Bitbucket
+
--
If you choose **Bitbucket**, you need to enter your Bitbucket server address. Create a credential using your Bitbucket username and password in advance, or click **Create Credential** to create a new credential. After entering the information, click **OK** to choose your repository on the right. After completing all operations, click image:/images/ks-qkcp/zh/icons/check-dark.svg[check,18,18].
--
* Git
+
--
If you choose **Git**, you need to specify the repository URL. If credentials are required to access the code repository, you need to specify a credential, or click **Create Credential** to add a new credential. After completing all operations, click image:/images/ks-qkcp/zh/icons/check-dark.svg[check,18,18].
--
====
== Advanced Settings
=== Code Repository Specified
If you specify a code repository, you can customize the following configurations on the **Advanced Settings** tab:
* Branch Settings
+
--
**Delete outdated branches**: Automatically delete old branches together with their branch records. A branch record includes the console output, archived artifacts, and other metadata of a specific branch. Fewer branches mean less disk space used by Jenkins. KubeSphere provides two options to determine when old branches are discarded:
* **Branch Retention Period (days)**: Branches that exceed the retention period are deleted.
* **Maximum Branches**: The earliest branch is deleted when the number of branches exceeds the maximum number.
//note
[.admon.note,cols="a"]
|===
|Note
|
**Branch Retention Period (days)** and **Maximum Branches** apply to branches at the same time. As long as a branch meets the condition of either field, it is deleted. For example, if you specify 2 as the retention period and 3 as the maximum number of branches, any branch that exceeds either limit is deleted. By default, DevOps prepopulates these two fields with 7 and 5, respectively.
|===
--
* Strategy Settings
+
--
DevOps provides four default strategies in **Strategy Settings**. When a Jenkins pipeline runs, a pull request (PR) submitted by developers is also regarded as a separate branch.
[.admon.note,cols="a"]
|===
|Note
|
To enable **Strategy Settings** here, you should select GitHub as the code repository.
|===
**Discover Branches**
* **Exclude branches filed as PRs**: Source branches that have been filed as PRs, such as a branch in the origin repository with an open PR, are not scanned as branches. These branches need to be merged first.
* **Include only branches filed as PRs**: Only branches filed as PRs are scanned.
* **Include all branches**: Pull all branches from the repository origin.
**Discover Tags**
* **Enable tag discovery**: Tags in the code repository are scanned.
* **Disable tag discovery**: Tags in the code repository are not scanned.
**Discover PRs from Origin**
* **Pull the code with the PR merged**. A pipeline is created and runs based on the source code after the PR is merged into the target branch.
* **Pull the code at the point of the PR**. A pipeline is created and runs based on the source code of the PR itself.
* **Create two pipelines respectively**. Two pipelines are created, one is based on the source code after the PR is merged into the target branch, and the other is based on the source code of the PR itself.
**Discover PRs from Forks**
Pull Strategy:
* **Pull the code with the PR merged**. A pipeline is created and runs based on the source code after the PR is merged into the target branch.
* **Pull the code at the point of the PR**. A pipeline is created and runs based on the source code of the PR itself.
* **Create two pipelines respectively**. Two pipelines are created, one is based on the source code after the PR is merged into the target branch, and the other is based on the source code of the PR itself.
Trusted Users:
* **Contributors**: Users who have contributed to the PR.
* **Everyone**: Every user who can access the PR.
* **Users with admin or write permission**: Only users with admin or write permission on the repository.
* **None**: If you choose this option, PRs will not be discovered regardless of the option selected in **Pull Strategy**.
--
* Filter by Regex
+
--
Check the box to specify a regular expression to filter branches, PRs, and tags.
--
* Script Path
+
--
The **Script Path** parameter specifies the path of the Jenkinsfile relative to the root directory of the code repository. If the file location changes, the script path also needs to be updated.
--
* Scan Trigger
+
--
Check **Scan periodically** and set the scan interval from the dropdown list.
--
* Build Trigger
+
--
Check **Trigger through pipeline events** and select a pipeline from the dropdown lists of **Trigger on Pipeline Creation** and **Trigger on Pipeline Deletion** to automatically trigger tasks in the specified pipeline when a new pipeline is created or an existing pipeline is deleted.
--
* Clone Settings
+
--
* **Enable shallow clone**: If shallow clone is enabled, the cloned code will not include tags.
* **Clone Depth**: The number of commits to fetch during cloning.
* **Clone Timeout Period (min)**: The maximum time allowed for the cloning process to complete (in minutes).
--
* Webhook
+
--
**Webhook** allows the pipeline to discover changes in the remote code repository and automatically trigger a new run. A webhook should be the primary method for triggering Jenkins to automatically scan GitHub and Git (e.g., GitLab) repositories. For more information, refer to link:../06-pipeline-webhook/[Trigger a Pipeline Using a Webhook].
--
=== Code Repository Not Specified
If you do not specify a code repository, you can customize the following configurations on the **Advanced Settings** tab:
* Build Settings
+
--
**Delete outdated build records**: Specify when to delete build records under branches. A build record includes the console output, archived artifacts, and other metadata of a specific build. Fewer build records mean less disk space used by Jenkins. KubeSphere provides two options to determine when old build records are discarded:
* **Build Record Retention Period (Days)**: Build records that exceed the retention period are deleted.
* **Maximum Build Records**: When the number of build records exceeds the maximum number allowed, the earliest build record is deleted.
//note
[.admon.note,cols="a"]
|===
|Note
|
**Build Record Retention Period (Days)** and **Maximum Build Records** apply to build records at the same time. As long as a build record meets the condition of either field, it is deleted. For example, if you specify 2 as the retention period and 3 as the maximum number of build records, a build record is deleted once it has been retained for more than 2 days or the number of build records exceeds 3. By default, DevOps prepopulates these two fields with 7 and 10, respectively.
|===
* **No concurrent builds**: If this option is checked, multiple builds cannot run concurrently.
--
* Build Parameters
+
--
Parameterized build processes allow you to pass one or more parameters when starting a pipeline run. DevOps provides five default parameter types: **String**, **Multi-line string**, **Boolean**, **Options**, and **Password**. When a pipeline is parameterized, each run prompts the user to enter a value for every defined parameter.
--
* Build Trigger
+
--
**Build periodically**: Runs builds on a schedule. Enter a CRON expression to set the schedule; for example, `H 4 * * 1-5` runs one build at some point between 4:00 and 5:00 AM on every weekday.
--

View File

@ -0,0 +1,71 @@
---
title: "Trigger a Pipeline Using a Webhook"
keywords: "Kubernetes, {ks_product-en}, DevOps project, use DevOps, pipeline"
description: "Learn how to trigger pipelines using webhook in GitHub repositories."
weight: 06
---
If you create a Jenkinsfile-based pipeline from a remote code repository, you can configure a webhook in the remote repository so that the pipeline is automatically triggered when changes are made to the remote repository.
This tutorial demonstrates how to trigger a pipeline by using a webhook in GitHub.
== Prerequisites
* **DevOps** must have been installed and enabled.
* A workspace, a DevOps project, and a user (e.g., **project-regular**) have been created, and the user has been invited to the DevOps project with the **operator** role. Refer to link:../../05-devops-settings/02-role-and-member-management[Role and Member Management].
* You have created a Jenkinsfile-based pipeline from a remote code repository. For more information, refer to link:../02-create-a-pipeline-using-jenkinsfile/[Create a Pipeline Using a Jenkinsfile].
== Configure a Webhook
=== Get a webhook URL
. Log in to the {ks_product-en} web console as the **project-regular** user.
. Click **Workspace Management** and enter your DevOps project.
. On the **Pipelines** page, click a pipeline (e.g., **jenkins-in-scm**) to view its details page.
. Click **More** and select **Edit Settings** in the drop-down list.
. In the pop-up dialog box, scroll down to **Webhook** to get the webhook push URL.
=== Set a webhook in the GitHub repository
. Log in to GitHub and go to your own repository `devops-maven-sample`.
. Click **Settings** > **Webhooks**, and click **Add webhook**.
. Enter the webhook push URL of the pipeline for **Payload URL** and click **Add webhook**. This tutorial selects **Just the push event** for demonstration purposes. You can make other settings based on your needs. For more information, see link:https://docs.github.com/en/developers/webhooks-and-events/webhooks/creating-webhooks[GitHub Documentation].
. The configured webhook is displayed on the **Webhooks** page.
== Trigger the Pipeline Using the Webhook
=== Submit a Pull Request to the Repository
. On the **Code** page of the devops-maven-sample repository, click **master** and then select the **v4.1.0-sonarqube** branch.
. Go to **/deploy/dev-ol** and click the file **devops-sample.yaml**.
. Click image:/images/ks-qkcp/zh/icons/pen-light.svg[pen-light,18,18] to edit the file. For example, change the value of **spec.replicas** to **3**.
. Click **Commit changes**.
=== Check Webhook Delivery
. On the **Settings** > **Webhooks** page of the devops-maven-sample repository, click the created webhook.
. Click **Recent Deliveries**, then click a specific delivery record to view the details.
== Check Pipeline Status
. Log in to the {ks_product-en} web console as the **project-regular** user.
. Click **Workspace Management** and enter your DevOps project.
. On the **Pipelines** page, click a pipeline (e.g., **jenkins-in-scm**) to view its details page.
. On the **Run Records** tab, check whether the pull request submitted to the **v4.1.0-sonarqube** branch of the remote repository has triggered a new run.

View File

@ -0,0 +1,87 @@
---
title: "Access Jenkins Dashboard"
keywords: "Kubernetes, {ks_product-en}, DevOps project, use DevOps, access Jenkins"
description: "Learn how to access the Jenkins dashboard."
weight: 07
---
When DevOps is installed, the Jenkins dashboard is also installed by default. However, you need to configure it according to the following steps before you can access the Jenkins dashboard.
== Prerequisites
**DevOps** must have been installed and enabled.
== Steps
. Run the following commands on a cluster node to get the Jenkins address.
+
--
// Bash
[,bash]
----
export NODE_PORT=$(kubectl get --namespace kubesphere-devops-system -o jsonpath="{.spec.ports[0].nodePort}" services devops-jenkins)
export NODE_IP=$(kubectl get nodes --namespace kubesphere-devops-system -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
----
You will get output similar to the following:
[,bash]
----
http://10.77.1.201:30180
----
--
. Check `jenkins.securityRealm.openIdConnect.kubesphereCoreApi` and `jenkins.securityRealm.openIdConnect.jenkinsURL` in the DevOps extension configuration and make sure they are set to the actually accessible addresses of the kubesphere-console and devops-jenkins services, respectively. If not, modify them and wait for the extension to update.
+
[,yaml]
----
jenkins:
  securityRealm:
    openIdConnect:
      # The kubesphere-core api used for jenkins OIDC
      # If you want to access to jenkinsWebUI, the kubesphereCoreApi must be specified and browser-accessible
      # Modifying this configuration will take effect only during installation
      # If you wish for changes to take effect after installation, you need to update the jenkins-casc-config ConfigMap, copy the securityRealm configuration from jenkins.yaml to jenkins_user.yaml, save, and wait for approximately 70 seconds for the changes to take effect.
      kubesphereCoreApi: "http://192.168.1.1:30880"
      # The jenkins web URL used for OIDC redirect
      jenkinsURL: "http://192.168.1.1:30180"
----
. Check all addresses under `securityRealm.oic` in `jenkins_user.yaml` of the `jenkins-casc-config` ConfigMap and make sure they match those under `securityRealm.oic` in `jenkins.yaml` and point to the actually accessible address of kubesphere-console. If not, modify them and wait for the changes to take effect.
+
[,yaml]
----
securityRealm:
  oic:
    clientId: "jenkins"
    clientSecret: "jenkins"
    tokenServerUrl: "http://192.168.1.1:30880/oauth/token"
    authorizationServerUrl: "http://192.168.1.1:30880/oauth/authorize"
    userInfoServerUrl: "http://192.168.1.1:30880/oauth/userinfo"
    endSessionEndpoint: "http://192.168.1.1:30880/oauth/logout"
    logoutFromOpenidProvider: true
    scopes: openid profile email
    fullNameFieldName: url
    userNameField: preferred_username
----
. Check `authentication.issuer.url` in the `kubesphere-config` ConfigMap and make sure it is set to the actually accessible address of kubesphere-console. If not, modify it and restart the `ks-apiserver` deployment for the change to take effect.
+
--
[,yaml]
----
authentication:
  issuer:
    url: "http://192.168.1.1:30880"
----
[,bash]
----
kubectl -n kubesphere-system rollout restart deploy ks-apiserver
----
--
. Use the address http://NodeIP:30180 to access the Jenkins dashboard.
+
Jenkins is configured with KubeSphere LDAP, which means you can log in to Jenkins directly using your KubeSphere account (e.g., `admin/P@88w0rd`).
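If you prefer to verify these addresses from the command line before opening the dashboard, the following commands are a minimal sketch. They assume the default namespaces and ConfigMap names used in this document (the `kubesphere-config` ConfigMap typically resides in the `kubesphere-system` namespace); adjust them if your installation differs.
// Bash
[,bash]
----
# Print the OIDC-related addresses configured in the Jenkins CasC ConfigMap
kubectl -n kubesphere-devops-system get cm jenkins-casc-config -o yaml | grep -E "kubesphereCoreApi|jenkinsURL|ServerUrl|endSessionEndpoint"

# Print the issuer URL used by ks-apiserver
kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -A 2 "issuer:"
----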

View File

@ -0,0 +1,50 @@
---
title: "Jenkins System Settings"
keywords: "Kubernetes, {ks_product-en}, DevOps project, use DevOps, pipeline"
description: "Learn how to set up Jenkins and reload configurations on the Jenkins dashboard."
weight: 07
---
The DevOps system provides containerized CI/CD functionalities based on Jenkins. As the standard for CI/CD workflows, Jenkins is powerful and flexible. However, many plugins require users to perform system-level configurations before using Jenkins.
To provide a schedulable Jenkins environment, KubeSphere adopts the **Configuration as Code** approach for Jenkins system settings. Users need to log in to the Jenkins dashboard, modify the configurations, and then reload them.
This document demonstrates how to set up Jenkins and reload configurations on the Jenkins dashboard.
== Prerequisites
**DevOps** must have been installed and enabled.
== Jenkins Configuration as Code
KubeSphere installs the Jenkins Configuration as Code plugin by default, which supports defining the desired state of Jenkins through YAML files, making it easy to reproduce Jenkins configurations (including plugin configurations). Refer to link:https://github.com/jenkinsci/configuration-as-code-plugin/tree/master/demos[this directory] for specific Jenkins configurations and example YAML files.
Additionally, you can find the **formula.yaml** file in the link:https://github.com/kubesphere/ks-jenkins[ks-jenkins] repository to view plugin versions and customize these versions as needed.
== Modify ConfigMap
It is recommended to configure Jenkins in KubeSphere through Configuration as Code (CasC). The built-in Jenkins CasC file is stored as a ConfigMap.
. Log in to the {ks_product-en} web console as the **platform-admin** user.
. Click **Cluster Management** and enter a cluster.
. In the left navigation pane, select **Configuration** > **ConfigMaps**. On the **ConfigMaps** page, select **kubesphere-devops-system** from the list, then click **jenkins-casc-config**.
. On the details page, click **More** and select **Edit YAML** from the dropdown list.
. The configuration template for **jenkins-casc-config** is a YAML file located under **data:jenkins_user.yaml:**. In this ConfigMap, you can modify the container image, labels, resource requests (Request), and limits (Limit) of the Kubernetes Jenkins agent, or add containers to the `podTemplate` (a sample snippet is provided at the end of this document). After completing the operations, click **OK**.
. Wait for 1 to 2 minutes, and the new configuration will be reloaded automatically.
//note
[.admon.note,cols="a"]
|===
|Note
|
* For more information on how to configure Jenkins through CasC, refer to link:https://github.com/jenkinsci/configuration-as-code-plugin[Jenkins Documentation].
* In the current version, not all plugins support CasC settings. CasC only overrides plugin configurations set through CasC.
|===
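For orientation, the agent-related part of **jenkins_user.yaml** generally follows the structure shown below. This is only a hedged sketch: the template name, image, and resource values are illustrative, and the exact content differs between versions, so always start from what is already present in your **jenkins-casc-config** ConfigMap.
[,yaml]
----
jenkins:
  clouds:
    - kubernetes:
        templates:
          - name: "maven"                    # name of the Pod template (Kubernetes Jenkins Agent)
            label: "maven"                   # label referenced by pipelines
            containers:
              - name: "maven"                # container inside the agent Pod
                image: "kubesphere/builder-maven:v3.2.0"   # illustrative image; keep or replace the one in your ConfigMap
                resourceRequestCpu: "200m"   # resource request (Request)
                resourceRequestMemory: "400Mi"
                resourceLimitCpu: "4000m"    # resource limit (Limit)
                resourceLimitMemory: "8192Mi"
----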

View File

@ -0,0 +1,146 @@
---
title: "Use Jenkins Shared Libraries in a Pipeline"
keywords: "Kubernetes, {ks_product-en}, DevOps project, use DevOps, pipeline"
description: "Learn how to use Jenkins shared libraries in a pipeline."
weight: 08
---
For Jenkins pipelines that contain the same stages or steps, you can use Jenkins shared libraries in the Jenkinsfile to avoid pipeline code duplication.
This document demonstrates how to use Jenkins shared libraries in a DevOps pipeline.
== Prerequisites
* **DevOps** must have been installed and enabled.
* A workspace, a DevOps project, and a user (e.g. **project-regular**) have been created, and the user has been invited to the DevOps project with the **operator** role. See link:../../05-devops-settings/02-role-and-member-management[Role and Member Management].
* You have a usable Jenkins shared library. This tutorial uses the Jenkins shared library in the link:https://github.com/devops-ws/jenkins-shared-library[GitHub repository] as an example.
== Step 1: Configure Shared Libraries in the Jenkins Dashboard
. link:../07-access-jenkins-console[Log in to the Jenkins dashboard] and click **Manage Jenkins** in the left navigation pane.
. Scroll down and click **Configure System**.
. Scroll down to **Global Pipeline Libraries** and click **Add**.
. Configure the fields as follows.
* **Name:** Set a name for the shared library (e.g., `demo-shared-library`) so that you can import the shared library by referring to this name in a Jenkinsfile.
* **Default version:** Set a branch name of the repository where the shared library is located as the default branch to import the shared library. Enter `master` for this tutorial.
* Under **Retrieval method**, select **Modern SCM**.
* Under **Source Code Management**, select **Git** and enter the URL of the example repository for **Project Repository**. If you use your own repository and access to this repository requires credentials, you also need to configure **Credentials**.
. After editing, click **Apply**.
+
--
//note
[.admon.note,cols="a"]
|===
|Note
|
You can also configure link:https://www.jenkins.io/zh/doc/book/pipeline/shared-libraries/#folder-level-shared-libraries[folder-level shared libraries].
|===
--
== Step 2: Use Shared Libraries in a Pipeline
=== Create a Pipeline
. Log in to the {ks_product-en} web console as the **project-regular** user.
. Click **Workspace Management** and enter your DevOps project, then click **Create** on the **Pipelines** page.
. In the dialog that appears, name it **demo-shared-library** and click **Next**.
. In **Advanced Settings**, click **Create** directly to create the pipeline with the default settings.
=== Edit the Pipeline
. On the pipeline list page, click the pipeline name to enter its detail page, then click **Edit Jenkinsfile**.
. In the dialog that appears, add the following example Jenkinsfile. After editing, click **OK**.
+
--
[,json]
----
library identifier: 'devops-ws-demo@master', retriever: modernSCM([
    $class: 'GitSCMSource',
    remote: 'https://github.com/devops-ws/jenkins-shared-library',
    traits: [[$class: 'jenkins.plugins.git.traits.BranchDiscoveryTrait']]
])

pipeline {
    agent any
    stages {
        stage('Demo') {
            steps {
                script {
                    mvn.fake()
                }
            }
        }
    }
}
----
//note
[.admon.note,cols="a"]
|===
|Note
|
Specify a **label** for **agent** as needed.
|===
--
+
Alternatively, use a Jenkinsfile that starts with **@Library('<the configured shared library name>') _**. If you use this type of Jenkinsfile, you need to configure the shared library on the Jenkins dashboard in advance. In this tutorial, you can use the following example Jenkinsfile.
+
--
[,json]
----
@Library('demo-shared-library') _

pipeline {
    agent any
    stages {
        stage('Demo') {
            steps {
                script {
                    mvn.fake()
                }
            }
        }
    }
}
----
//note
[.admon.note,cols="a"]
|===
|Note
|
Use **@Library('demo-shared-library@<branch name>') _** to specify a specific branch.
|===
--
== Step 3: Run the Pipeline
. On the pipeline detail page, click **Run** to run the pipeline.
. Click the record under the **Run Records** tab to view the pipeline run details. Click **Run Logs** to view the log details.

View File

@ -0,0 +1,58 @@
---
title: "Set an Email Server for Pipelines"
keywords: "Kubernetes, {ks_product-en}, DevOps project, use DevOps, pipeline"
description: "Introduces how to set an email server for pipelines."
weight: 09
---
The built-in Jenkins cannot share the same email configuration with the notification system of KubeSphere. Therefore, you need to configure an email server separately for DevOps pipelines.
== Prerequisites
* **DevOps** must have been installed and enabled.
* You should have the **Cluster Management** permission on the {ks_product-en} platform.
== Steps
. Log in to the {ks_product-en} web console with an account that has the **Cluster Management** permission.
. Click **Cluster Management** and enter a cluster.
. In the left navigation pane, select **Application Workloads** > **Workloads**, and choose the **kubesphere-devops-system** project from the dropdown list. Click image:/images/ks-qkcp/zh/icons/more.svg[more,18,18] on the right side of **devops-jenkins** and select **Edit YAML**.
. Edit the fields in the YAML file as shown below. After making the changes, click **OK**.
+
--
//warning
[.admon.warning,cols="a"]
|===
|Warning
|
After modifying the email server configuration, the **devops-jenkins** deployment will restart. Therefore, the DevOps system will be unavailable for a few minutes. Please modify these configurations at an appropriate time.
|===
[%header,cols="1a,3a"]
|===
|Environment Variable Name |Description
|EMAIL_SMTP_HOST
|SMTP server address.
|EMAIL_SMTP_PORT
|SMTP server port (e.g., 25).
|EMAIL_FROM_ADDR
|Email sender address.
|EMAIL_FROM_NAME
|Email sender name.
|EMAIL_FROM_PASS
|Email sender password.
|EMAIL_USE_SSL
|Whether to enable SSL configuration.
|===
--
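For reference, the variables in the table correspond to entries under the `env` field of the Jenkins container in the **devops-jenkins** deployment. The snippet below is a hedged sketch: the container name, server address, and account values are placeholders, so edit the corresponding entries that already exist in your deployment instead of pasting this block as is.
[,yaml]
----
spec:
  template:
    spec:
      containers:
        - name: devops-jenkins            # the container name may differ; locate the Jenkins container in your deployment
          env:
            - name: EMAIL_SMTP_HOST
              value: "smtp.example.com"   # placeholder SMTP server address
            - name: EMAIL_SMTP_PORT
              value: "25"
            - name: EMAIL_FROM_ADDR
              value: "devops@example.com"
            - name: EMAIL_FROM_NAME
              value: "KubeSphere DevOps"
            - name: EMAIL_FROM_PASS
              value: "your-password"      # consider referencing a Secret instead of a plain value
            - name: EMAIL_USE_SSL
              value: "false"
----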

View File

@ -0,0 +1,216 @@
---
title: "Choose Jenkins Agent"
keywords: "Kubernetes, {ks_product-en}, DevOps project, use DevOps, pipeline"
description: "Introduces how to select Jenkins Agent."
weight: 10
---
The **agent** section specifies where the entire pipeline or a specific stage will be executed in the Jenkins environment, depending on where the `agent` section is placed. This section must be defined at the top level inside the **pipeline** block, but stage-level usage is optional. For more information, see link:https://www.jenkins.io/zh/doc/book/pipeline/syntax/#代理[Jenkins Documentation].
== Built-in podTemplate
podTemplate is a Pod template used to create Jenkins Agents. You can define podTemplates for the Kubernetes plugin to use.
During the pipeline run, each Jenkins Agent Pod must have a container named **jnlp** to facilitate communication between the Jenkins Controller and the Jenkins Agent. Additionally, you can add containers to the podTemplate to meet specific needs. You can use a custom Pod YAML to flexibly control the runtime environment and switch containers using the **container** command. The following is an example.
[,json]
----
pipeline {
    agent {
        kubernetes {
            //cloud 'kubernetes'
            label 'mypod'
            yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3.3.9-jdk-8-alpine
    command: ['cat']
    tty: true
"""
        }
    }
    stages {
        stage('Run maven') {
            steps {
                container('maven') {
                    sh 'mvn -version'
                }
            }
        }
    }
}
----
In the current version, KubeSphere provides four built-in podTemplates: **base**, **nodejs**, **maven**, and **go**, each providing an isolated Docker environment in the Pod.
You can use a built-in podTemplate by specifying the corresponding label for the agent. For example, to use the nodejs podTemplate, specify the label **nodejs** when creating the pipeline, as shown in the following example.
[,json]
----
pipeline {
    agent {
        node {
            label 'nodejs'
        }
    }
    stages {
        stage('nodejs hello') {
            steps {
                container('nodejs') {
                    sh 'yarn -v'
                    sh 'node -v'
                    sh 'docker version'
                    sh 'docker images'
                }
            }
        }
    }
}
----
* podTemplate base
+
--
[%header,cols="1a,4a"]
|===
|Name |Type/Version
|Jenkins Agent Label
|base
|Container Name
|base
|Operating System
|centos-7
|Docker
|18.06.0
|Helm
|2.11.0
|Kubectl
|Stable version
|Built-in Tools
|unzip, which, make, wget, zip, bzip2, git
|===
--
* podTemplate nodejs
+
--
[%header,cols="1a,4a"]
|===
|Name |Type/Version
|Jenkins Agent Label
|nodejs
|Container Name
|nodejs
|Operating System
|centos-7
|Node
|9.11.2
|Yarn
|1.3.2
|Docker
|18.06.0
|Helm
|2.11.0
|Kubectl
|Stable version
|Built-in Tools
|unzip, which, make, wget, zip, bzip2, git
|===
--
* podTemplate maven
+
--
[%header,cols="1a,4a"]
|===
|Name |Type/Version
|Jenkins Agent Label
|maven
|Container Name
|maven
|Operating System
|centos-7
|Jdk
|openjdk-1.8.0
|Maven
|3.5.3
|Docker
|18.06.0
|Helm
|2.11.0
|Kubectl
|Stable version
|Built-in Tools
|unzip, which, make, wget, zip, bzip2, git
|===
--
* podTemplate go
+
--
[%header,cols="1a,4a"]
|===
|Name |Type/Version
|Jenkins Agent Label
|go
|Container Name
|go
|Operating System
|centos-7
|Go
|1.11
|GOPATH
|/home/jenkins/go
|GOROOT
|/usr/local/go
|Docker
|18.06.0
|Helm
|2.11.0
|Kubectl
|Stable version
|Built-in Tools
|unzip, which, make, wget, zip, bzip2, git
|===
--

View File

@ -0,0 +1,81 @@
---
title: "Customize Jenkins Agent"
keywords: "Kubernetes, {ks_product-en}, DevOps project, use DevOps, pipeline"
description: "Introduces how to customize Jenkins Agent."
weight: 11
---
To use a Jenkins Agent that runs a specific environment (e.g., JDK 11), you can customize the Jenkins Agent on KubeSphere.
This document describes how to customize the Jenkins Agent on KubeSphere.
== Prerequisites
**DevOps** must have been installed and enabled.
== Customize Jenkins Agent
. Log in to the {ks_product-en} web console as the **admin** user.
. Click **Cluster Management** and enter a cluster.
. In the left navigation pane, select **Configuration** > **ConfigMaps**.
. On the **ConfigMaps** page, enter **jenkins-casc-config** in the search box and press **Enter**.
. Click **jenkins-casc-config** to enter its detail page, click **More**, and select **Edit YAML**.
. In the dialog that appears, search for **data:jenkins_user.yaml:jenkins:clouds:kubernetes:templates** and enter the following code below it, then click **OK**.
+
--
[,yaml]
----
- name: "maven-jdk11" # Customize the name of the Jenkins Agent.
label: "maven jdk11" # Customize the label of the Jenkins Agent. If you want to specify multiple labels, separate them with spaces.
inheritFrom: "maven" # The name of the existing Pod template from which the custom Jenkins Agent inherits.
containers:
- name: "maven" # The container name specified in the existing Pod template from which the custom Jenkins Agent inherits.
image: "kubespheredev/builder-maven:v3.2.0jdk11" # This image is for testing purposes only. Please use your own image.
----
//note
[.admon.note,cols="a"]
|===
|Note
|
Ensure that the indentation in the YAML file is correct.
|===
--
. Wait for 1 to 2 minutes for the new configuration to reload automatically.
. To use the customized Jenkins Agent, refer to the example Jenkinsfile below, specifying the label and container name corresponding to the customized Jenkins Agent when creating the pipeline.
+
--
[,json]
----
pipeline {
    agent {
        node {
            label 'maven && jdk11'
        }
    }
    stages {
        stage('Print Maven and JDK version') {
            steps {
                container('maven') {
                    sh '''
                    mvn -v
                    java -version
                    '''
                }
            }
        }
    }
}
----
--

View File

@ -0,0 +1,9 @@
---
title: "Pipelines"
keywords: "Kubernetes, {ks_product-en}, DevOps projects, Using DevOps, Pipelines"
description: "This section introduces how to use pipelines."
weight: 02
layout: "second"
---
This section introduces how to use the pipeline feature.

View File

@ -0,0 +1,223 @@
---
title: "Use GitOps to Achieve Continuous Deployment of Applications"
keywords: "Kubernetes, {ks_product-en}, DevOps project, use DevOps"
description: "Introduces how to create continuous deployment to achieve application deployment."
weight: 03
---
KubeSphere introduces GitOps, a philosophy for implementing continuous deployment of cloud-native applications. The core idea of GitOps is to maintain a Git repository that stores the declarative infrastructure and applications of the application system under version control. Combined with Kubernetes, GitOps can use an automated delivery pipeline to apply changes to any number of specified clusters, thus solving the consistency problem of cross-cloud deployment.
This document walks you through the process of deploying an application using continuous deployment.
== Prerequisites
* **DevOps** must have been installed and enabled.
* A workspace, a DevOps project, and a user (e.g. **project-regular**) have been created, and the user has been invited to the DevOps project with the **operator** role. See link:../05-devops-settings/02-role-and-member-management[Role and Member Management].
== Import a Code Repository
. Log in to the {ks_product-en} web console as the **project-regular** user.
. Click **Workspace Management** and enter your DevOps project.
. In the left navigation pane, click **Code Repositories**.
. On the right side of the code repositories page, click **Add**.
. In the **Import Code Repository** dialog, enter a code repository name, such as **open-podcasts**, and click **Select a code repository**. You can also set an alias and add a description for the code repository.
. In the **Select Code Repository** dialog, click **Git**, enter the repository address in the **Code Repository URL** area, such as link:https://github.com/kubesphere-sigs/open-podcasts[], and click **OK**.
+
--
//note
[.admon.note,cols="a"]
|===
|Note
|
The repository imported here is a public repository, so no credentials are needed. If you are adding a private repository, you need to create credentials. For more information on how to add credentials, see link:../05-devops-settings/01-credential-management/[Credential Management].
|===
--
== Create Continuous Deployment
. In the left navigation pane, click **Continuous Deployments**.
. On the right side of the **Continuous Deployments** page, click **Create**.
. On the **Basic Information** tab, enter a continuous deployment name, such as **open-podcasts**. In the **Deployment Location** area, select the cluster and project for continuous deployment. Click **Next**.
. On the **Code Repository Settings** tab, select the code repository created in the previous step, and set the branch or tag of the code repository and the path of the `Kustomization` manifest file (a minimal sample manifest is shown after these steps). Click **Next**.
+
--
[%header, cols="1a,3a"]
|===
|Parameter |Description
|Revision
|The commit ID, branch, or tag of the Git repository. For example, **master**, **v1.2.0**, **0a1b2c3**, or **HEAD**.
|Manifest File Path
|Set the path of the manifest file. For example, **config/default**.
|===
--
. On the **Sync Settings** tab, in the **Sync Strategy** area, select **Auto Sync** or **Manual Sync** as needed.
+
--
* **Auto Sync**: Automatically trigger application synchronization when a difference is detected between the manifest in the Git repository and the real-time state of the deployment resources, according to the sync options. The specific parameters are shown in the table below.
+
====
[%header, cols="1a,3a"]
|===
|Parameter |Description
|Prune resources
|If selected, resources that do not exist in Git will be deleted during automatic sync. If not selected, resources in the cluster will not be deleted when automatic sync is triggered.
|Self-heal
|If selected, when there is a deviation between the defined state in Git and the deployed resources, the defined state in Git will be enforced. If not selected, automatic sync will not be triggered when changes are made to the deployed resources.
|===
====
* **Manual Sync**: Manually trigger application synchronization according to the sync options.
--
. In the **Sync Settings** area, set the sync options as needed.
+
--
[%header, cols="1a,3a"]
|===
|Parameter |Description
|Skip schema validation
|Skip **kubectl** validation. When executing **kubectl apply**, add the **--validate=false** flag.
|Auto create project
|Automatically create a project for application resources if the project does not exist.
|Prune last
|Clean up resources after all other resources have been deployed and are in a healthy state.
|Apply out of sync only
|Only sync resources in the **out-of-sync** state.
|===
--
. In the **Prune Propagation Policy** area, select the dependency cleanup policy as needed.
+
--
[%header, cols="1a,3a"]
|===
|Parameter |Description
|foreground
|Delete dependent resources first, then delete the main resource.
|background
|Delete the main resource first, then delete the dependent resources.
|orphan
|Delete the main resource, leaving the dependent resource as an orphan.
|===
--
. In the **Replace Resource** area, select whether existing resources need to be replaced.
+
--
//note
[.admon.note,cols="a"]
|===
|Note
|
If checked, the **kubectl replace/create** command will be executed to sync resources. If unchecked, the **kubectl apply** command will be used to sync resources.
|===
--
. Click **Create**. The created continuous deployment will be displayed in the list.
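As mentioned in the **Code Repository Settings** step, the manifest file path should point to a directory in the repository that contains a `kustomization.yaml` file. A minimal example (the directory and resource file names are illustrative) looks like this:
[,yaml]
----
# config/default/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml   # workload manifests tracked by the continuous deployment
  - service.yaml
----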
== View the Created Continuous Deployment
. On the **Continuous Deployments** page, view the created continuous deployment information. The parameters are shown in the table below.
+
--
[%header,cols="1a,4a"]
|===
|Parameter |Description
|Name
|The name of the continuous deployment.
|Health Status
|The health status of the continuous deployment, which includes:
* **Healthy**: The resources are healthy.
* **Degraded**: The resources have been degraded.
* **Progressing**: The resources are being synchronized. This state is returned by default.
* **Suspended**: The resources have been paused and are waiting to be resumed.
* **Unknown**: The health status of the resources is unknown.
* **Missing**: The resources are missing.
|Sync Status
|The sync status of the continuous deployment, which includes:
* **Synced**: The resource sync has been completed.
* **Out of Sync**: The actual running status of the resources is inconsistent with the expected status.
* **Unknown**: The sync status of the resources is unknown.
|Deployment Location
|The cluster and project where the resources are deployed.
|Update Time
|The time when the resources are updated.
|===
--
. Click image:/images/ks-qkcp/zh/icons/more.svg[more,18,18] on the right side of the continuous deployment, and you can perform the following operations:
+
--
* **Edit Information**: Edit the alias and description of the continuous deployment.
* **Edit YAML**: Edit the YAML file of the continuous deployment.
* **Sync**: Trigger resource synchronization.
* **Delete**: Delete the continuous deployment.
//warning
[.admon.warning,cols="a"]
|===
|Warning
|
Deleting the continuous deployment will also delete the resources associated with it. Please proceed with caution.
|===
--
. Click the created continuous deployment to enter the detail page and view the sync status and results.
== Access the Created Application
. Enter the project to which the continuous deployment belongs, and in the left navigation pane, click **Application Workloads** > **Services**.
. On the **Services** page, find the deployed application and click image:/images/ks-qkcp/zh/icons/more.svg[more,18,18] on the right side, then select **Edit External Access**.
. Select **NodePort** in the **Access Mode**, and click **OK**.
. On the service list page, view the exposed port in the **External Access** column, and access the application through {Node IP}:{NodePort}.
+
--
//note
[.admon.note,cols="a"]
|===
|Note
|
Before accessing the service, please ensure that the port is open in the security group.
|===
--
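For example, assuming a node IP of **192.168.0.2** and an exposed NodePort of **30080** (both are placeholders for your own values), you can verify access from a terminal:
[,bash]
----
# Replace the IP address and port with your node IP and the port shown in the External Access column
curl http://192.168.0.2:30080
----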

Some files were not shown because too many files have changed in this diff.