reword the on-premises path
Signed-off-by: FeynmanZhou <pengfeizhou@yunify.com>
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
linkTitle: "Install on On-premise Kubernetes"
|
||||
linkTitle: "Installing on On-premises Kubernetes"
|
||||
weight: 2300
|
||||
|
||||
_build:
|
||||
@ -9,7 +9,7 @@ weight: 2112
|
|||
|
||||
In a production environment, a single-node cluster cannot satisfy most needs because it has limited resources and insufficient compute capacity. Thus, single-node clusters are not recommended for large-scale data processing. In addition, a cluster of this kind cannot provide high availability as it only has one node. By contrast, a multi-node architecture is the most common and preferred choice for application deployment and distribution.
|
||||
|
||||
This section gives you an overview of multi-node installation, including the concept, KubeKey and steps. For information about HA installation, refer to Installing on Public Cloud and Installing on On-premises Environment.
|
||||
This section gives you an overview of multi-node installation, including the concept, KubeKey and steps. For information about HA installation, refer to Installing on Public Cloud and Installing in On-premises Environment.
|
||||
|
||||
## Concept
@ -1,7 +1,9 @@
|
|||
---
|
||||
linkTitle: "Install on On-premise environment"
|
||||
linkTitle: "Install on On-premises environment"
|
||||
weight: 2200
|
||||
|
||||
_build:
|
||||
render: false
|
||||
---
|
||||
|
||||
In this chapter, we demonstrate how to use KubeKey or kubeadm to provision a new Kubernetes and KubeSphere cluster in on-premises environments, such as VMware vSphere, OpenStack and bare metal. You just need to prepare machines with a supported operating system before you start the installation. The air-gapped installation guide is also included in this chapter.
|
||||
@ -0,0 +1,465 @@
|
|||
---
|
||||
title: "VMware vSphere Installation"
|
||||
keywords: 'kubernetes, kubesphere, VMware vSphere, installation'
|
||||
description: 'How to install KubeSphere on VMware vSphere Linux machines'
|
||||
|
||||
|
||||
weight: 2260
|
||||
---
|
||||
|
||||
# Introduction
|
||||
|
||||
For a production environment, we need to consider the high availability of the cluster. If the key components (e.g. kube-apiserver, kube-scheduler, and kube-controller-manager) all run on the same master node, Kubernetes and KubeSphere will become unavailable once that master node goes down. Therefore, we need to set up a high-availability cluster by provisioning load balancers in front of multiple master nodes. You can use any cloud load balancer or any hardware load balancer (e.g. F5). In addition, Keepalived with [HAProxy](https://www.haproxy.com/), or NGINX, is also an alternative for creating high-availability clusters.
|
||||
|
||||
This tutorial walks you through an example of how to set up Keepalived and HAProxy, and use the load balancers to implement high availability of master and etcd nodes.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Please make sure that you already know how to install KubeSphere on a multi-node cluster by following the [guide](https://github.com/kubesphere/kubekey). For detailed information about the config yaml file used for installation, see Multi-node Installation. This tutorial focuses on how to configure load balancers.
|
||||
- You need a VMware vSphere account to create VMs.
|
||||
- Considering data persistence, for a production environment we recommend that you prepare persistent storage and create a StorageClass in advance. For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.
|
||||
|
||||
## Architecture
|
||||
|
||||

|
||||
|
||||
## Prepare Linux Hosts
|
||||
|
||||
This tutorial creates 9 virtual machines running **CentOS Linux release 7.6.1810 (Core)** with the default minimal installation, each with 2 CPU cores, 4 GB of memory and 40 GB of disk space.
|
||||
|
||||
|
||||
| Host IP | Host Name | Role |
|
||||
| --- | --- | --- |
|
||||
|10.10.71.214|master1|master1, etcd|
|
||||
|10.10.71.73|master2|master2, etcd|
|
||||
|10.10.71.62|master3|master3, etcd|
|
||||
|10.10.71.75|node1|node|
|
||||
|10.10.71.76|node2|node|
|
||||
|10.10.71.79|node3|node|
|
||||
|10.10.71.67|vip|vip|
|
||||
|10.10.71.77|lb-0|lb(keepalived + haproxy)|
|
||||
|10.10.71.66|lb-1|lb(keepalived + haproxy)|
|
||||
|
||||
Start the virtual machine creation process in the VMware Host Client. You use the New Virtual Machine wizard to create a virtual machine to place in the VMware Host Client inventory.
|
||||
|
||||

|
||||
|
||||
You use the Select creation type page of the New Virtual Machine wizard to create a new virtual machine, deploy a virtual machine from an OVF or OVA file, or register an existing virtual machine.
|
||||
|
||||

|
||||
|
||||
When you create a new virtual machine, provide a unique name for the virtual machine to distinguish it from existing virtual machines on the host you are managing.
|
||||
|
||||

|
||||
|
||||
Select the datastore or datastore cluster to store the virtual machine configuration files and all of the virtual disks in. You can select the datastore that has the most suitable properties, such as size, speed, and availability, for your virtual machine storage.
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
When you select a guest operating system, the wizard provides the appropriate defaults for the operating system installation.
|
||||
|
||||

|
||||
|
||||
Before you deploy a new virtual machine, you have the option to configure the virtual machine hardware and the virtual machine options.
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
On the Ready to complete page, review the configuration selections that you made for the virtual machine.
|
||||
|
||||

|
||||
|
||||
|
||||
## Keepalived+Haproxy
|
||||
### Yum Install
|
||||
|
||||
Run the following command on both load balancer hosts, lb-0 (10.10.71.77) and lb-1 (10.10.71.66):
|
||||
|
||||
```bash
|
||||
yum install keepalived haproxy psmisc -y
|
||||
```
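Optionally, you can confirm on each lb node that the packages were installed and that the HAProxy binary works (the exact version output will differ in your environment):

```bash
# Verify the packages installed by yum
rpm -q keepalived haproxy psmisc

# Print the HAProxy version to confirm the binary runs
haproxy -v
```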
|
||||
|
||||
### Configure Haproxy
|
||||
|
||||
On the servers with IP 10.10.71.77 and 10.10.71.66, configure HAProxy. The configuration is the same on both lb machines; just make sure the back-end server addresses point to your three master nodes.
|
||||
```bash
|
||||
#Haproxy Configure /etc/haproxy/haproxy.cfg
|
||||
global
|
||||
log 127.0.0.1 local2
|
||||
chroot /var/lib/haproxy
|
||||
pidfile /var/run/haproxy.pid
|
||||
maxconn 4000
|
||||
user haproxy
|
||||
group haproxy
|
||||
daemon
|
||||
# turn on stats unix socket
|
||||
stats socket /var/lib/haproxy/stats
|
||||
#---------------------------------------------------------------------
|
||||
# common defaults that all the 'listen' and 'backend' sections will
|
||||
# use if not designated in their block
|
||||
#---------------------------------------------------------------------
|
||||
defaults
|
||||
log global
|
||||
option httplog
|
||||
option dontlognull
|
||||
timeout connect 5000
|
||||
timeout client 5000
|
||||
timeout server 5000
|
||||
#---------------------------------------------------------------------
|
||||
# main frontend which proxys to the backends
|
||||
#---------------------------------------------------------------------
|
||||
frontend kube-apiserver
|
||||
bind *:6443
|
||||
mode tcp
|
||||
option tcplog
|
||||
default_backend kube-apiserver
|
||||
#---------------------------------------------------------------------
|
||||
# round robin balancing between the various backends
|
||||
#---------------------------------------------------------------------
|
||||
backend kube-apiserver
|
||||
mode tcp
|
||||
option tcplog
|
||||
balance roundrobin
|
||||
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
|
||||
server kube-apiserver-1 10.10.71.214:6443 check
|
||||
server kube-apiserver-2 10.10.71.73:6443 check
|
||||
server kube-apiserver-3 10.10.71.62:6443 check
|
||||
```
|
||||
Check the configuration syntax before starting HAProxy:
|
||||
|
||||
```bash
|
||||
haproxy -f /etc/haproxy/haproxy.cfg -c
|
||||
```
|
||||
Start HAProxy and enable it to start automatically at boot:
|
||||
|
||||
```bash
|
||||
systemctl restart haproxy && systemctl enable haproxy
|
||||
```
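You can confirm that HAProxy is running and set to start at boot (a quick check; the output varies by system):

```bash
systemctl status haproxy
systemctl is-enabled haproxy
```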
|
||||
To stop HAProxy, run:
|
||||
|
||||
```bash
|
||||
systemctl stop haproxy
|
||||
```
|
||||
### Configure Keepalived
|
||||
|
||||
Configure Keepalived on the MASTER node lb-0 (10.10.71.77) in `/etc/keepalived/keepalived.conf`:
|
||||
|
||||
```bash
|
||||
global_defs {
|
||||
notification_email {
|
||||
}
|
||||
smtp_connect_timeout 30
|
||||
router_id LVS_DEVEL01
|
||||
vrrp_skip_check_adv_addr
|
||||
vrrp_garp_interval 0
|
||||
vrrp_gna_interval 0
|
||||
}
|
||||
vrrp_script chk_haproxy {
|
||||
script "killall -0 haproxy"
|
||||
interval 2
|
||||
weight 2
|
||||
}
|
||||
vrrp_instance haproxy-vip {
|
||||
state MASTER
|
||||
priority 100
|
||||
interface ens192
|
||||
virtual_router_id 60
|
||||
advert_int 1
|
||||
authentication {
|
||||
auth_type PASS
|
||||
auth_pass 1111
|
||||
}
|
||||
unicast_src_ip 10.10.71.77
|
||||
unicast_peer {
|
||||
10.10.71.66
|
||||
}
|
||||
virtual_ipaddress {
|
||||
#vip
|
||||
10.10.71.67/24
|
||||
}
|
||||
track_script {
|
||||
chk_haproxy
|
||||
}
|
||||
}
|
||||
```
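The `chk_haproxy` script above relies on `killall -0 haproxy` (provided by the psmisc package installed earlier), which checks whether an haproxy process exists without sending it a real signal. You can try it manually on an lb node:

```bash
# Exit code 0 means an haproxy process is running; non-zero means it is not
killall -0 haproxy
echo $?
```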
|
||||
Configure Keepalived on the BACKUP node lb-1 (10.10.71.66) in `/etc/keepalived/keepalived.conf`:
|
||||
|
||||
```bash
|
||||
global_defs {
|
||||
notification_email {
|
||||
}
|
||||
router_id LVS_DEVEL02
|
||||
vrrp_skip_check_adv_addr
|
||||
vrrp_garp_interval 0
|
||||
vrrp_gna_interval 0
|
||||
}
|
||||
vrrp_script chk_haproxy {
|
||||
script "killall -0 haproxy"
|
||||
interval 2
|
||||
weight 2
|
||||
}
|
||||
vrrp_instance haproxy-vip {
|
||||
state BACKUP
|
||||
priority 90
|
||||
interface ens192
|
||||
virtual_router_id 60
|
||||
advert_int 1
|
||||
authentication {
|
||||
auth_type PASS
|
||||
auth_pass 1111
|
||||
}
|
||||
unicast_src_ip 10.10.71.66
|
||||
unicast_peer {
|
||||
10.10.71.77
|
||||
}
|
||||
virtual_ipaddress {
|
||||
10.10.71.67/24
|
||||
}
|
||||
track_script {
|
||||
chk_haproxy
|
||||
}
|
||||
}
|
||||
```
|
||||
Start Keepalived and enable it to start automatically at boot:
|
||||
|
||||
```bash
|
||||
systemctl restart keepalived && systemctl enable keepalived
systemctl stop keepalived        # stop the service if needed
systemctl start keepalived       # start the service again
|
||||
```
|
||||
|
||||
### Verify Availability
|
||||
|
||||
Use `ip a s` to view the VIP binding status on each lb node:
|
||||
|
||||
```bash
|
||||
ip a s
|
||||
```
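On the lb node that currently holds the VIP, the output should show the virtual address 10.10.71.67 attached to the `ens192` interface as a secondary address. A trimmed, illustrative example (interface details will differ in your environment):

```bash
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
    inet 10.10.71.77/24 brd 10.10.71.255 scope global ens192
    inet 10.10.71.67/24 scope global secondary ens192
```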
|
||||
|
||||
Stop HAProxy on the lb node that currently holds the VIP:
|
||||
|
||||
```bash
|
||||
systemctl stop haproxy
|
||||
```
|
||||
|
||||
Use `ip a s` again to check the VIP binding on each lb node and verify whether the VIP has drifted to the other node:
|
||||
|
||||
```bash
|
||||
ip a s
|
||||
```
|
||||
|
||||
Alternatively, use the `systemctl status -l keepalived` command to check the Keepalived status:
|
||||
|
||||
```bash
|
||||
systemctl status -l keepalived
|
||||
```
|
||||
|
||||
|
||||
|
||||
## Get the Installer Executable File
|
||||
|
||||
Download the KubeKey binary and make it executable:
|
||||
|
||||
```bash
|
||||
curl -O -k https://kubernetes.pek3b.qingstor.com/tools/kubekey/kk
|
||||
chmod +x kk
|
||||
```
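Optionally, confirm that the binary runs on your machine before proceeding (assuming the downloaded release supports the `version` subcommand):

```bash
./kk version
```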
|
||||
|
||||
## Create a Multi-Node Cluster
|
||||
|
||||
The advanced installation gives you more control to customize parameters when creating a multi-node cluster. Specifically, you create the cluster by specifying a configuration file.
|
||||
|
||||
### Install Kubernetes and KubeSphere with KubeKey
|
||||
|
||||
Generate a configuration file for a Kubernetes cluster with KubeSphere installed (e.g. `--with-kubesphere v3.0.0`):
|
||||
|
||||
```bash
|
||||
./kk create config --with-kubesphere v3.0.0 -f ~/config-sample.yaml
|
||||
```
|
||||
#### Modify the file config-sample.yaml according to your environment
|
||||
|
||||
Open and edit the file with `vi ~/config-sample.yaml`:
|
||||
|
||||
```yaml
|
||||
apiVersion: kubekey.kubesphere.io/v1alpha1
|
||||
kind: Cluster
|
||||
metadata:
|
||||
name: config-sample
|
||||
spec:
|
||||
hosts:
|
||||
- {name: master1, address: 10.10.71.214, internalAddress: 10.10.71.214, password: P@ssw0rd!}
|
||||
- {name: master2, address: 10.10.71.73, internalAddress: 10.10.71.73, password: P@ssw0rd!}
|
||||
- {name: master3, address: 10.10.71.62, internalAddress: 10.10.71.62, password: P@ssw0rd!}
|
||||
- {name: node1, address: 10.10.71.75, internalAddress: 10.10.71.75, password: P@ssw0rd!}
|
||||
- {name: node2, address: 10.10.71.76, internalAddress: 10.10.71.76, password: P@ssw0rd!}
|
||||
- {name: node3, address: 10.10.71.79, internalAddress: 10.10.71.79, password: P@ssw0rd!}
|
||||
roleGroups:
|
||||
etcd:
|
||||
- master1
|
||||
- master2
|
||||
- master3
|
||||
master:
|
||||
- master1
|
||||
- master2
|
||||
- master3
|
||||
worker:
|
||||
- node1
|
||||
- node2
|
||||
- node3
|
||||
controlPlaneEndpoint:
|
||||
domain: lb.kubesphere.local
|
||||
# vip
|
||||
address: "10.10.71.67"
|
||||
port: "6443"
|
||||
kubernetes:
|
||||
version: v1.17.9
|
||||
imageRepo: kubesphere
|
||||
clusterName: cluster.local
|
||||
masqueradeAll: false # masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode. [Default: false]
|
||||
maxPods: 110 # maxPods is the number of pods that can run on this Kubelet. [Default: 110]
|
||||
nodeCidrMaskSize: 24 # internal network node size allocation. This is the size allocated to each node on your network. [Default: 24]
|
||||
proxyMode: ipvs # mode specifies which proxy mode to use. [Default: ipvs]
|
||||
network:
|
||||
plugin: calico
|
||||
calico:
|
||||
ipipMode: Always # IPIP Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, vxlanMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Always]
|
||||
vxlanMode: Never # VXLAN Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, ipipMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Never]
|
||||
vethMTU: 1440 # The maximum transmission unit (MTU) setting determines the largest packet size that can be transmitted through your network. [Default: 1440]
|
||||
kubePodsCIDR: 10.233.64.0/18
|
||||
kubeServiceCIDR: 10.233.0.0/18
|
||||
registry:
|
||||
registryMirrors: []
|
||||
insecureRegistries: []
|
||||
privateRegistry: ""
|
||||
storage:
|
||||
defaultStorageClass: localVolume
|
||||
localVolume:
|
||||
storageClassName: local
|
||||
|
||||
---
|
||||
apiVersion: installer.kubesphere.io/v1alpha1
|
||||
kind: ClusterConfiguration
|
||||
metadata:
|
||||
name: ks-installer
|
||||
namespace: kubesphere-system
|
||||
labels:
|
||||
version: v3.0.0
|
||||
spec:
|
||||
local_registry: ""
|
||||
persistence:
|
||||
storageClass: ""
|
||||
authentication:
|
||||
jwtSecret: ""
|
||||
etcd:
|
||||
monitoring: true # Whether to install etcd monitoring dashboard
|
||||
endpointIps: 192.168.0.7,192.168.0.8,192.168.0.9 # etcd cluster endpointIps
|
||||
port: 2379 # etcd port
|
||||
tlsEnable: true
|
||||
common:
|
||||
mysqlVolumeSize: 20Gi # MySQL PVC size
|
||||
minioVolumeSize: 20Gi # Minio PVC size
|
||||
etcdVolumeSize: 20Gi # etcd PVC size
|
||||
openldapVolumeSize: 2Gi # openldap PVC size
|
||||
redisVolumSize: 2Gi # Redis PVC size
|
||||
es: # Storage backend for logging, tracing, events and auditing.
|
||||
elasticsearchMasterReplicas: 1 # total number of master nodes, it's not allowed to use even number
|
||||
elasticsearchDataReplicas: 1 # total number of data nodes
|
||||
elasticsearchMasterVolumeSize: 4Gi # Volume size of Elasticsearch master nodes
|
||||
elasticsearchDataVolumeSize: 20Gi # Volume size of Elasticsearch data nodes
|
||||
logMaxAge: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
|
||||
elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log
|
||||
# externalElasticsearchUrl:
|
||||
# externalElasticsearchPort:
|
||||
console:
|
||||
enableMultiLogin: false # enable/disable multiple sing on, it allows an account can be used by different users at the same time.
|
||||
port: 30880
|
||||
alerting: # Whether to install KubeSphere alerting system. It enables Users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
|
||||
enabled: false
|
||||
auditing: # Whether to install KubeSphere audit log system. It provides a security-relevant chronological set of records,recording the sequence of activities happened in platform, initiated by different tenants.
|
||||
enabled: false
|
||||
devops: # Whether to install KubeSphere DevOps System. It provides out-of-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image
|
||||
enabled: false
|
||||
jenkinsMemoryLim: 2Gi # Jenkins memory limit
|
||||
jenkinsMemoryReq: 1500Mi # Jenkins memory request
|
||||
jenkinsVolumeSize: 8Gi # Jenkins volume size
|
||||
jenkinsJavaOpts_Xms: 512m # The following three fields are JVM parameters
|
||||
jenkinsJavaOpts_Xmx: 512m
|
||||
jenkinsJavaOpts_MaxRAM: 2g
|
||||
events: # Whether to install KubeSphere events system. It provides a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
|
||||
enabled: false
|
||||
logging: # Whether to install KubeSphere logging system. Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
|
||||
enabled: false
|
||||
logsidecarReplicas: 2
|
||||
metrics_server: # Whether to install metrics-server. IT enables HPA (Horizontal Pod Autoscaler).
|
||||
enabled: true
|
||||
monitoring: #
|
||||
prometheusReplicas: 1 # Prometheus replicas are responsible for monitoring different segments of data source and provide high availability as well.
|
||||
prometheusMemoryRequest: 400Mi # Prometheus request memory
|
||||
prometheusVolumeSize: 20Gi # Prometheus PVC size
|
||||
alertmanagerReplicas: 1 # AlertManager Replicas
|
||||
multicluster:
|
||||
clusterRole: none # host | member | none # You can install a solo cluster, or specify it as the role of host or member cluster
|
||||
networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
|
||||
enabled: false
|
||||
notification: # It supports notification management in multi-tenant Kubernetes clusters. It allows you to set AlertManager as its sender, and receivers include Email, Wechat Work, and Slack.
|
||||
enabled: false
|
||||
openpitrix: # Whether to install KubeSphere Application Store. It provides an application store for Helm-based applications, and offer application lifecycle management
|
||||
enabled: false
|
||||
servicemesh: # Whether to install KubeSphere Service Mesh (Istio-based). It provides fine-grained traffic management, observability and tracing, and offer visualization for traffic topology
|
||||
enabled: false
|
||||
```
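Before creating the cluster, it can help to confirm that the machine running KubeKey can reach every host listed under `hosts` over SSH with the given credentials. A minimal pre-flight sketch, assuming `sshpass` is installed and all hosts share the same root password (adjust users and passwords to match your environment):

```bash
for host in 10.10.71.214 10.10.71.73 10.10.71.62 10.10.71.75 10.10.71.76 10.10.71.79; do
  # Print each host's hostname over SSH; a failure here means KubeKey will also fail to connect
  sshpass -p 'P@ssw0rd!' ssh -o StrictHostKeyChecking=no root@"$host" hostname
done
```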
|
||||
Create a cluster using the configuration file you customized above:
|
||||
|
||||
```bash
|
||||
./kk create cluster -f config-sample.yaml
|
||||
```
|
||||
|
||||
#### Verify the multi-node installation
|
||||
|
||||
Inspect the installation logs and wait a while:
|
||||
|
||||
```bash
|
||||
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
|
||||
```
|
||||
|
||||
If you see the welcome log returned as below, the installation is successful. You are ready to go.
|
||||
|
||||
```bash
|
||||
**************************************************
|
||||
#####################################################
|
||||
### Welcome to KubeSphere! ###
|
||||
#####################################################
|
||||
Console: http://10.10.71.214:30880
|
||||
Account: admin
|
||||
Password: P@88w0rd
|
||||
NOTES:
|
||||
1. After logging into the console, please check the
|
||||
monitoring status of service components in
|
||||
the "Cluster Management". If any service is not
|
||||
ready, please wait patiently until all components
|
||||
are ready.
|
||||
2. Please modify the default password after login.
|
||||
#####################################################
|
||||
https://kubesphere.io 2020-08-15 23:32:12
|
||||
#####################################################
|
||||
```
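In addition to the welcome log, you can confirm from a master node (or wherever kubectl is configured for this cluster) that all six nodes have joined and are Ready:

```bash
kubectl get node
```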
|
||||
|
||||
#### Log in the console
|
||||
|
||||
You will be able to use the default account and password `admin / P@88w0rd` to log in the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
|
||||
|
||||

|
||||
|
||||
#### Enable Pluggable Components (Optional)
|
||||
The example above demonstrates the process of a default minimal installation. To enable other components in KubeSphere, see [Enable Pluggable Components](https://github.com/kubesphere/ks-installer#enable-pluggable-components) for more details.
|
||||
|
||||
|
|
@ -16,7 +16,7 @@ KubeSphere delivers **consolidated views while integrating a wide breadth of eco
|
|||
|
||||
## Run KubeSphere Everywhere
|
||||
|
||||
As a lightweight platform, KubeSphere has become more friendly to different cloud ecosystems as it does not change Kubernetes itself at all. In other words, KubeSphere can be deployed **on any existing version-compatible Kubernetes cluster on any infrastructure** including virtual machine, bare metal, on-premise, public cloud and hybrid cloud. KubeSphere users have the choice of installing KubeSphere on cloud and container platforms, such as Alibaba Cloud, AWS, QingCloud, Tencent Cloud, Huawei Cloud and Rancher, and even importing and managing their existing Kubernetes clusters created using major Kubernetes distributions. The seamless integration of KubeSphere into existing Kubernetes platforms means that the business of users will not be affected, without any modification to their current resources or assets. For more information, see Installation.
|
||||
As a lightweight platform, KubeSphere has become more friendly to different cloud ecosystems as it does not change Kubernetes itself at all. In other words, KubeSphere can be deployed **on any existing version-compatible Kubernetes cluster on any infrastructure** including virtual machine, bare metal, on-premises, public cloud and hybrid cloud. KubeSphere users have the choice of installing KubeSphere on cloud and container platforms, such as Alibaba Cloud, AWS, QingCloud, Tencent Cloud, Huawei Cloud and Rancher, and even importing and managing their existing Kubernetes clusters created using major Kubernetes distributions. The seamless integration of KubeSphere into existing Kubernetes platforms means that the business of users will not be affected, without any modification to their current resources or assets. For more information, see Installation.
|
||||
|
||||
KubeSphere screens users from the infrastructure underneath and helps enterprises modernize, migrate, deploy and manage existing and containerized apps seamlessly across a variety of infrastructure types. This is how KubeSphere empowers developers and Ops teams to focus on application development and accelerate DevOps automated workflows and delivery processes with enterprise-level observability and troubleshooting, unified monitoring and logging, centralized storage and networking management, easy-to-use CI/CD pipelines, and so on.
|
||||
|
||||
|
|
|
|||
|
|
@ -1,6 +1,6 @@
|
|||
---
|
||||
title: "Multi-cluster Management"
|
||||
description: "Import a hosted or on-premise Kubernetes cluster into KubeSphere"
|
||||
description: "Import a hosted or on-premises Kubernetes cluster into KubeSphere"
|
||||
layout: "single"
|
||||
|
||||
linkTitle: "Multi-cluster Management"
|
||||
|
|
|
|||
|
|
@ -1,9 +1,9 @@
|
|||
---
|
||||
title: "Quick Start"
|
||||
title: "Quickstarts"
|
||||
description: "Help you to better understand KubeSphere with detailed graphics and contents"
|
||||
layout: "single"
|
||||
|
||||
linkTitle: "Quick Start"
|
||||
linkTitle: "Quickstarts"
|
||||
|
||||
weight: 1500
|
||||
|
||||
|
|
|
|||
|
|
@ -1,8 +1,252 @@
|
|||
---
|
||||
title: "Create Workspace, Project, Account, Role"
|
||||
keywords: 'kubesphere, kubernetes, docker, multi-tenant'
|
||||
description: 'Create Workspace, Project, Account, and Role'
|
||||
title: "Create Workspace, Project, Account and Role"
|
||||
keywords: 'KubeSphere, Kubernetes, Multi-tenant, Workspace, Account, Role, Project'
|
||||
description: 'Create Workspace, Project, Account and Role'
|
||||
|
||||
linkTitle: "Create Workspace, Project, Account, Role"
|
||||
linkTitle: "Create Workspace, Project, Account and Role"
|
||||
weight: 3030
|
||||
---
|
||||
|
||||
|
||||
## Objective
|
||||
|
||||
This guide demonstrates how to create roles and user accounts which are required for the following tutorials. Meanwhile, you will learn how to create projects and DevOps projects within your workspace where your workloads are running. After this tutorial, you will become familiar with KubeSphere multi-tenant management system.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
KubeSphere needs to be installed on your machine.
|
||||
|
||||
## Estimated Time
|
||||
|
||||
About 15 minutes.
|
||||
|
||||
## Architecture
|
||||
|
||||
The multi-tenant system of KubeSphere features **three** levels of hierarchical structure which are cluster, workspace and project. A project in KubeSphere is a Kubernetes namespace.
|
||||
|
||||
You can create multiple workspaces within a Kubernetes cluster. Under each workspace, you can also create multiple projects.
|
||||
|
||||
Each level has multiple built-in roles. Besides, KubeSphere allows you to create roles with customized authorization as well. The KubeSphere hierarchy is applicable for enterprise users with different teams or groups, and different roles within each team.
|
||||
|
||||
## Hands-on Lab
|
||||
|
||||
### Task 1: Create an Account
|
||||
|
||||
After KubeSphere is installed, you need to add users with different roles to the platform so that they can work at different levels on various resources. Initially, you only have one default account, `admin`, which is granted the role `platform-admin`. In the first task, you will create an account `user-manager`, and then use it to create more accounts.
|
||||
|
||||
1. Log in the web console as `admin` with the default account and password (`admin/P@88w0rd`).
|
||||
|
||||
{{< notice tip >}}
|
||||
|
||||
For account security, it is highly recommended that you change your password the first time you log in the console. To change your password, select **User Settings** in the drop-down menu at the top right corner. In **Password Setting**, set a new password.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
2. After you log in the console, click **Platform** at the top left corner and select **Access Control**.
|
||||
|
||||

|
||||
|
||||
In **Account Roles**, there are four available built-in roles as shown below. The account to be created next will be assigned the role `users-manager`.
|
||||
|
||||
| Built-in Roles | Description |
|
||||
| ------------------ | ------------------------------------------------------------ |
|
||||
| workspaces-manager | Workspace manager in the platform who manages all workspaces in the platform. |
|
||||
| users-manager | User manager in the platform who manages all users. |
|
||||
| platform-regular | Normal user in the platform who has no access to any resources before joining a workspace or cluster. |
|
||||
| platform-admin | Platform administrator who can manage all resources in the platform. |
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
Built-in roles are created automatically by KubeSphere and cannot be edited or deleted.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
3. In **Accounts**, click **Create**. In the pop-up window, provide all the necessary information (marked with *) and select `users-manager` for **Role**. Refer to the image below as an example.
|
||||
|
||||

|
||||
|
||||
Click **OK** after you finish. A newly-created account will display in the account list in **Accounts**.
|
||||
|
||||
4. Log out of the console and log back in with the account `user-manager` to create four accounts that will be used in the following tutorials.
|
||||
|
||||
{{< notice tip >}}
|
||||
|
||||
To log out, click your username at the top right corner and select **Log Out**.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
For detailed information about the four accounts you need to create, refer to the table below.
|
||||
|
||||
| Account | Role | Description |
|
||||
| --------------- | ------------------ | ------------------------------------------------------------ |
|
||||
| ws-manager | workspaces-manager | Create and manage all workspaces. |
|
||||
| ws-admin | platform-regular | Manage all resources in a specified workspace (This account is used to invite new members to a workspace in this example). |
|
||||
| project-admin | platform-regular | Create and manage projects and DevOps projects, and invite new members into the projects. |
|
||||
| project-regular | platform-regular | `project-regular` will be invited to a project or DevOps project by `project-admin`. This account will be used to create workloads, pipelines and other resources in a specified project. |
|
||||
|
||||
5. Verify the four accounts created.
|
||||
|
||||

|
||||
|
||||
### Task 2: Create a Workspace
|
||||
|
||||
In this task, you need to create a workspace using the account `ws-manager` created in the previous task. As the basic logical unit for managing projects, DevOps projects and organization members, workspaces underpin the multi-tenant system of KubeSphere.
|
||||
|
||||
1. Log in KubeSphere as `ws-manager` which has the authorization to manage all workspaces on the platform. Click **Platform** at the top left corner. In **Workspaces**, you can see there is only one default workspace **system-workspace** listed, where system-related components and services run. You are not allowed to delete this workspace.
|
||||
|
||||

|
||||
|
||||
2. Click **Create** on the right, name the new workspace `demo-workspace` and set the user `ws-admin` as the workspace manager shown in the screenshot below:
|
||||
|
||||

|
||||
|
||||
Click **Create** after you finish.
|
||||
|
||||
3. Log out of the console and log back in as `ws-admin`. In **Workspace Settings**, select **Workspace Members** and click **Invite Member**.
|
||||
|
||||

|
||||
|
||||
4. Invite both `project-admin` and `project-regular` to the workspace. Grant them the role `workspace-self-provisioner` and `workspace-viewer` respectively.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The actual role name follows a naming convention: `workspace name-role name`. For example, in this workspace named `demo`, the actual role name of the role `workspace-viewer` is `demo-workspace-viewer`.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||

|
||||
|
||||
5. After you add both `project-admin` and `project-regular` to the workspace, click **OK**. In **Workspace Members**, you can see three members listed.
|
||||
|
||||
| Account | Role | Description |
|
||||
| --------------- | -------------------------- | ------------------------------------------------------------ |
|
||||
| ws-admin | workspace-admin | Manage all resources under the workspace (We use this account to invite new members to the workspace). |
|
||||
| project-admin | workspace-self-provisioner | Create and manage projects and DevOps projects, and invite new members to join the projects. |
|
||||
| project-regular | workspace-viewer | `project-regular` will be invited by `project-admin` to join a project or DevOps project. The account can be used to create workloads, pipelines, etc. |
|
||||
|
||||
### Task 3: Create a Project
|
||||
|
||||
In this task, you need to create a project using the account `project-admin` created in the previous task. A project in KubeSphere is the same as a namespace in Kubernetes, which provides virtual isolation for resources. For more information, see [Namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/).
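A small aside: because each project is backed by a namespace, once `demo-project` is created in the steps below you can also see it with kubectl, assuming you have command-line access to the cluster (this is not required for the tutorial):

```bash
kubectl get namespace demo-project
```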
|
||||
|
||||
1. Log in KubeSphere as `project-admin`. In **Projects**, click **Create**.
|
||||
|
||||

|
||||
|
||||
2. Enter the project name (e.g. `demo-project`) and click **OK** to finish. You can also add an alias and description for the project.
|
||||
|
||||

|
||||
|
||||
3. In **Projects**, click the project created just now to view its detailed information.
|
||||
|
||||

|
||||
|
||||
4. In the overview page of the project, the project quota remains unset by default. You can click **Set** and specify resource requests and limits based on your needs (e.g. 1 core for CPU and 1000Gi for memory).
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
5. Invite `project-regular` to this project and grant this user the role `operator`. Please refer to the image below for specific steps.
|
||||
|
||||

|
||||
|
||||
{{< notice info >}}
|
||||
|
||||
The user granted the role `operator` will be a project maintainer who can manage resources other than users and roles in the project.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
#### Set Gateway
|
||||
|
||||
Before creating a route, you need to enable a gateway for this project. The gateway is an [NGINX Ingress controller](https://github.com/kubernetes/ingress-nginx) running in the project.
|
||||
|
||||
{{< notice info >}}
|
||||
|
||||
A route refers to Ingress in Kubernetes, which is an API object that manages external access to the services in a cluster, typically HTTP.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
6. To set a gateway, go to **Advanced Settings** in **Project Settings** and click **Set Gateway**. The account `project-admin` is still used in this step.
|
||||
|
||||

|
||||
|
||||
7. Choose the access method **NodePort** and click **Save**.
|
||||
|
||||

|
||||
|
||||
8. Under **Internet Access**, you can see that the Gateway Address and the NodePorts for http and https are displayed on the page.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
If you want to expose services using the type `LoadBalancer`, you need to use the [LoadBalancer plugin of cloud providers](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/). If your Kubernetes cluster is running in a bare metal environment, it is recommended you use [Porter](https://github.com/kubesphere/porter) as the LoadBalancer plugin.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||

|
||||
|
||||
### Task 4: Create a Role
|
||||
|
||||
After you finish the above tasks, you know that users can be granted different roles at different levels. The roles used in previous tasks are all built-in ones created by KubeSphere itself. In this task, you will learn how to define a role yourself to meet the needs in your work.
|
||||
|
||||
1. Log in the console as `admin` again and go to **Access Control**.
|
||||
2. In **Account Roles**, there are four system roles listed which cannot be deleted or edited. Click **Create** and set a **Role Identifier**. In this example, a role named `roles-manager` will be created.
|
||||
|
||||

|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
It is recommended you enter a description for the role as it explains what the role is used for. The role created here will be responsible for role management only, including adding and deleting roles.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
Click **Edit Authorization** to continue.
|
||||
|
||||
3. In **Access Control**, select the authorization that you want the user granted this role to have. For example, **Users View**, **Roles Management** and **Roles View** are selected for this role. Click **OK** to finish.
|
||||
|
||||

|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
**Depend on** means the major authorization (the one listed after **Depend on**) needs to be selected first so that the affiliated authorization can be assigned.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
4. Newly-created roles will be listed in **Account Roles**. You can click the three dots on the right to edit it.
|
||||
|
||||

|
||||
|
||||
5. In **Accounts**, you can add a new account and grant it the role `roles-manager` or change the role of an existing account to `roles-manager` by editing it.
|
||||
|
||||

|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The role `roles-manager` overlaps with `users-manager`, as the latter is also capable of user management. This example is only for demonstration purposes. You can create customized roles based on your needs.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
### Task 5: Create a DevOps Project (Optional)
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
To create a DevOps project, you need to install KubeSphere DevOps system in advance, which is a pluggable component providing CI/CD pipelines, Binary-to-image, Source-to-image features, and more. For more information about how to enable DevOps, see KubeSphere DevOps System.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
1. Log in the console as `project-admin` for this task. In **DevOps Projects**, click **Create**.
|
||||
|
||||

|
||||
|
||||
2. Enter the DevOps project name (e.g. `demo-devops`) and click **OK**. You can also add an alias and description for the project.
|
||||
|
||||

|
||||
|
||||
3. In **DevOps Projects**, click the project created just now to view its detailed information.
|
||||
|
||||

|
||||
|
||||
4. Go to **Project Management** and select **Project Members**. Click **Invite Member** to grant `project-regular` the role of `maintainer`, who is allowed to create pipelines and credentials.
|
||||
|
||||

|
||||
|
||||
Congratulations! You are now familiar with the multi-tenant management system of KubeSphere. In the next several tutorials, the account `project-regular` will also be used to demonstrate how to create applications and resources in a project or DevOps project.
|
@ -1,8 +0,0 @@
|
|||
---
|
||||
title: "Enable Pluggable Components"
|
||||
keywords: 'kubesphere, kubernetes, docker, multi-tenant'
|
||||
description: 'Enable Pluggable Components'
|
||||
|
||||
linkTitle: "Enable Pluggable Components"
|
||||
weight: 3060
|
||||
---
|
||||
|
|
@ -0,0 +1,152 @@
|
|||
---
|
||||
title: "Enable Pluggable Components"
|
||||
keywords: 'KubeSphere, Kubernetes, pluggable, components'
|
||||
description: 'Enable Pluggable Components'
|
||||
|
||||
linkTitle: "Enable Pluggable Components"
|
||||
weight: 3060
|
||||
---
|
||||
|
||||
This tutorial demonstrates how to enable pluggable components of KubeSphere both before and after the installation. KubeSphere features ten pluggable components which are listed below.
|
||||
|
||||
| Configuration Item | Corresponding Component | Description |
|
||||
| ------------------ | ------------------------------------- | ------------------------------------------------------------ |
|
||||
| alerting | KubeSphere alerting system | Enable users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from. |
|
||||
| auditing | KubeSphere audit log system | Provide a security-relevant chronological set of records, recording the sequence of activities that happen in the platform, initiated by different tenants. |
|
||||
| devops | KubeSphere DevOps system | Provide an out-of-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image and Binary-to-Image. |
|
||||
| events | KubeSphere events system | Provide a graphical web console for the exporting, filtering and alerting of Kubernetes events in multi-tenant Kubernetes clusters. |
|
||||
| logging | KubeSphere logging system | Provide flexible logging functions for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd. |
|
||||
| metrics_server | HPA | The Horizontal Pod Autoscaler automatically scales the number of pods based on needs. |
|
||||
| networkpolicy | Network policy | Allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods). |
|
||||
| notification | KubeSphere notification system | Allow users to set `AlertManager` as its sender. Receivers include Email, WeChat Work, and Slack. |
|
||||
| openpitrix | KubeSphere App Store | Provide an app store for Helm-based applications and allow users to manage apps throughout the entire lifecycle. |
|
||||
| servicemesh | KubeSphere Service Mesh (Istio-based) | Provide fine-grained traffic management, observability and tracing, and visualized traffic topology. |
|
||||
|
||||
For more information about each component, see Overview of Enable Pluggable Components.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- By default, the above components are not enabled except `metrics_server`. In some cases, you need to manually disable it by changing `true` to `false` in the configuration. This is because the component may already be installed in your environment, especially for cloud-hosted Kubernetes clusters.
|
||||
- `multicluster` is not covered in this tutorial. If you want to enable this feature, you need to set a corresponding value for `clusterRole`. For more information, see [Multi-cluster Management](https://kubesphere-v3.netlify.app/docs/multicluster-management/).
|
||||
- Make sure your machine meets the hardware requirements before the installation. Here is the recommendation if you want to enable all pluggable components: CPU ≥ 8 Cores, Memory ≥ 16 G, Disk Space ≥ 100 G.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
## Enable Pluggable Components before Installation
|
||||
|
||||
### Installing on Linux
|
||||
|
||||
When you install KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
|
||||
|
||||
1. In the tutorial of [Installing KubeSphere on Linux](https://kubesphere-v3.netlify.app/docs/installing-on-linux/introduction/multioverview/), you create a default file **config-sample.yaml**. Modify the file by executing the following command:
|
||||
|
||||
```bash
|
||||
vi config-sample.yaml
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
If you adopt [All-in-one Installation](https://kubesphere-v3.netlify.app/docs/quick-start/all-in-one-on-linux/), you do not need to create a config-sample.yaml file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable pluggable components in this mode (e.g. for testing purpose), refer to the following section to see how pluggable components can be installed after installation.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
2. In this file, enable the pluggable components you want to install by changing `false` to `true` for `enabled` (see the snippet after this list for an example). Here is [an example file](https://github.com/kubesphere/kubekey/blob/master/docs/config-example.md) for your reference. Save the file after you finish.
|
||||
3. Create a cluster using the configuration file:
|
||||
|
||||
```bash
|
||||
./kk create cluster -f config-sample.yaml
|
||||
```
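For example, to enable the KubeSphere DevOps system, the corresponding section of config-sample.yaml would look like this after the change (an illustrative snippet; other fields keep their generated values):

```yaml
devops:
  enabled: true             # changed from false to install the Jenkins-based DevOps system
  jenkinsMemoryLim: 2Gi
  jenkinsMemoryReq: 1500Mi
```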
|
||||
|
||||
### Installing on Kubernetes
|
||||
|
||||
When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) for cluster settings. If you want to install pluggable components, do not use `kubectl apply -f` directly on this file.
|
||||
|
||||
1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml). After that, to enable pluggable components, create a local file cluster-configuration.yaml.
|
||||
|
||||
```bash
|
||||
vi cluster-configuration.yaml
|
||||
```
|
||||
|
||||
2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) and paste it to the local file just created.
|
||||
3. In this local cluster-configuration.yaml file, enable the pluggable components you want to install by changing `false` to `true` for `enabled`. Here is [an example file](https://github.com/kubesphere/ks-installer/blob/master/deploy/cluster-configuration.yaml) for your reference. Save the file after you finish.
|
||||
4. Execute the following command to start installation:
|
||||
|
||||
```bash
|
||||
kubectl apply -f cluster-configuration.yaml
|
||||
```
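Right after applying the configuration, you can confirm that the installer picked it up by watching its pod come up in the `kubesphere-system` namespace (a quick check; pod names will differ):

```bash
kubectl get pod -n kubesphere-system
```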
|
||||
|
||||
Whether you install KubeSphere on Linux or on Kubernetes, you can check the status of the components you have enabled in the web console of KubeSphere after installation. Go to **Components**, and you will see a page similar to the image below:
|
||||
|
||||

|
||||
|
||||
## Enable Pluggable Components after Installation
|
||||
|
||||
The KubeSphere web console provides a convenient way for users to view and operate on different resources. To enable pluggable components after installation, you only need to make a few adjustments directly in the console. Those who are accustomed to the Kubernetes command-line tool, kubectl, will have no difficulty using KubeSphere, as the tool is integrated into the console.
|
||||
|
||||
1. Log in the console as `admin`. Click **Platform** at the top left corner and select **Clusters Management**.
|
||||
|
||||

|
||||
|
||||
2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detailed page.
|
||||
|
||||

|
||||
|
||||
{{< notice info >}}
|
||||
|
||||
A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes objects.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
3. In **Resource List**, click the three dots on the right of `ks-installer` and select **Edit YAML**.
|
||||
|
||||

|
||||
|
||||
4. In this yaml file, enable the pluggable components you want to install by changing `false` to `true` for `enabled`. After you finish, click **Update** to save the configuration.
|
||||
|
||||

|
||||
|
||||
5. You can use the web kubectl to check the installation process by executing the following command:
|
||||
|
||||
```bash
|
||||
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
|
||||
```
|
||||
|
||||
{{< notice tip >}}
|
||||
|
||||
You can find the web kubectl tool by clicking the hammer icon at the bottom right corner of the console.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
6. The output will display a message as below if the component is successfully installed.
|
||||
|
||||
```bash
|
||||
#####################################################
|
||||
### Welcome to KubeSphere! ###
|
||||
#####################################################
|
||||
|
||||
Console: http://192.168.0.2:30880
|
||||
Account: admin
|
||||
Password: P@88w0rd
|
||||
|
||||
NOTES:
|
||||
1. After logging into the console, please check the
|
||||
monitoring status of service components in
|
||||
the "Cluster Management". If any service is not
|
||||
ready, please wait patiently until all components
|
||||
are ready.
|
||||
2. Please modify the default password after login.
|
||||
|
||||
#####################################################
|
||||
https://kubesphere.io 20xx-xx-xx xx:xx:xx
|
||||
#####################################################
|
||||
```
|
||||
|
||||
7. In **Components**, you can see the status of different components.
|
||||
|
||||

|
||||
|
||||
{{< notice tip >}}
|
||||
|
||||
If you do not see relevant components in the above image, some pods may not be ready yet. You can execute `kubectl get pod --all-namespaces` through kubectl to see the status of pods.
|
||||
|
||||
{{</ notice >}}
@ -1,8 +1,62 @@
|
|||
---
|
||||
title: "Minimal KubeSphere on Kubernetes"
|
||||
keywords: 'kubesphere, kubernetes, docker, multi-tenant'
|
||||
description: 'Install a Minimal KubeSphere on Kubernetes'
|
||||
keywords: 'KubeSphere, Kubernetes, Minimal, Installation'
|
||||
description: 'Minimal Installation of KubeSphere on Kubernetes'
|
||||
|
||||
linkTitle: "Minimal KubeSphere on Kubernetes"
|
||||
weight: 3020
|
||||
---
|
||||
|
||||
In addition to installing KubeSphere on a Linux machine, you can also deploy it on existing Kubernetes clusters directly. This QuickStart guide walks you through the general steps of completing a minimal KubeSphere installation on Kubernetes. For more information, see [Installing on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/).
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- To install KubeSphere on Kubernetes, your Kubernetes version must be `1.15.x, 1.16.x, 1.17.x, or 1.18.x`;
|
||||
- Make sure your machine meets the minimal hardware requirement: CPU > 1 Core, Memory > 2 G;
|
||||
- A default Storage Class in your Kubernetes cluster needs to be configured before the installation;
|
||||
- The CSR signing feature is activated in kube-apiserver when it is started with the `--cluster-signing-cert-file` and `--cluster-signing-key-file` parameters. See [RKE installation issue](https://github.com/kubesphere/kubesphere/issues/1925#issuecomment-591698309).
|
||||
- For more information about the prerequisites of installing KubeSphere on Kubernetes, see [Prerequisites](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/prerequisites/).
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
## Deploy KubeSphere
|
||||
|
||||
After you make sure your machine meets the prerequisites, you can follow the steps below to install KubeSphere.
|
||||
|
||||
- Please read the note below before you execute the commands to start installation:
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- If your server has trouble accessing GitHub, you can copy the content in [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml) and [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) respectively and paste it into local files. You can then use `kubectl apply -f` on the local files to install KubeSphere.
|
||||
- In cluster-configuration.yaml, you need to disable `metrics_server` manually by changing `true` to `false` if the component has already been installed in your environment, especially for cloud-hosted Kubernetes clusters.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
```bash
|
||||
kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml
|
||||
```
|
||||
|
||||
```bash
|
||||
kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml
|
||||
```
|
||||
|
||||
- Inspect the logs of installation:
|
||||
|
||||
```bash
|
||||
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
|
||||
```
|
||||
|
||||
- Use `kubectl get pod --all-namespaces` to see whether all pods are running normally in relevant namespaces of KubeSphere. If they are, check the port (30880 by default) of the console through the following command:
|
||||
|
||||
```bash
|
||||
kubectl get svc/ks-console -n kubesphere-system
|
||||
```
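The output should show `ks-console` exposed as a NodePort service on 30880; an illustrative example (cluster IP and age will differ):

```bash
NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
ks-console   NodePort   10.233.xx.xx    <none>        80:30880/TCP   10m
```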
|
||||
|
||||
- Make sure port 30880 is opened in security groups and access the web console through the NodePort (`IP:30880`) with the default account and password (`admin/P@88w0rd`).
|
||||
- After logging in the console, you can check the status of different components in **Components**. You may need to wait for some components to be up and running if you want to use related services.
|
||||
|
||||

|
||||
|
||||
## Enable Pluggable Components (Optional)
|
||||
|
||||
The guide above walks you through the default minimal installation only. To enable other components in KubeSphere, see Enable Pluggable Components for more details.
|
||||
|
|
@ -10,16 +10,16 @@ description: 'How to install KubeSphere on VMware vSphere Linux machines'
|
|||
|
||||
This tutorial provides an example of how to use [keepalived + haproxy](https://kubesphere.com.cn/forum/d/1566-kubernetes-keepalived-haproxy) to load balance kube-apiserver and build a highly available Kubernetes cluster.

## 1. Prerequisites
## Prerequisites

- Please follow this [guide](https://github.com/kubesphere/kubekey) and make sure you already know how to install KubeSphere on a multi-node cluster. For detailed information about the config yaml file used for installation, see Multi-node Installation. This tutorial focuses on how to configure load balancers.
- You need a VMware vSphere account to create VMs.
- Considering data persistence, for a production environment we recommend that you prepare persistent storage and create a StorageClass in advance. For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.

## 2. Architecture
## Architecture

![](/images/docs/vsphere/Architecture.png)

## 3. Prepare Hosts
## Prepare Hosts

This example creates 9 virtual machines running **CentOS Linux release 7.6.1810 (Core)** with the default minimal installation; 2 CPU cores, 4 GB of memory and 40 GB of disk space per machine are sufficient.
|
||||
|
||||
|
|
@ -67,24 +67,26 @@ description: 'How to install KubeSphere on VMware vSphere Linux machines'
|
|||
|
||||

|
||||
|
||||
清单,确认无误后,点击确定。
|
||||
在`即将完成`页面上可查看为虚拟机选择的配置。
|
||||
|
||||

|
||||
|
||||
## 4. 部署 keepalived+haproxy
|
||||
### 1. yum 安装
|
||||
## 部署 keepalived+haproxy
|
||||
### yum 安装
|
||||
|
||||
在主机为lb-0和lb-1中部署keepalived+haproxy 即IP为10.10.71.77与10.10.71.66的服务器上安装部署haproxy、keepalived、psmisc
|
||||
|
||||
```bash
|
||||
#在主机为lb-0和lb-1中部署keepalived+haproxy
|
||||
#即IP为10.10.71.77与10.10.71.66的服务器上安装部署haproxy、keepalived、psmisc
|
||||
yum install keepalived haproxy psmisc -y
|
||||
```
|
||||
|
||||
### 2. 配置 haproxy
|
||||
### 配置 haproxy
|
||||
|
||||
在IP为 10.10.71.77 与 10.10.71.66 的服务器 ,配置 haproxy (两台 lb 机器配置一致即可,注意后端服务地址)。
|
||||
|
||||
Haproxy 配置 /etc/haproxy/haproxy.cfg
|
||||
|
||||
```bash
|
||||
#Haproxy 配置 /etc/haproxy/haproxy.cfg
|
||||
global
|
||||
log 127.0.0.1 local2
|
||||
chroot /var/lib/haproxy
|
||||
|
|
@ -126,127 +128,167 @@ backend kube-apiserver
|
|||
server kube-apiserver-2 10.10.71.73:6443 check
|
||||
server kube-apiserver-3 10.10.71.62:6443 check
|
||||
```
|
||||
|
||||
|
||||
|
||||
Check the syntax for errors before starting:

```bash
# Check the syntax for errors before starting
haproxy -f /etc/haproxy/haproxy.cfg -c
# Start Haproxy and enable it to start on boot
```

Start Haproxy and enable it to start on boot:
|
||||
|
||||
```bash
|
||||
systemctl restart haproxy && systemctl enable haproxy
|
||||
# Stop Haproxy
```

Stop Haproxy:
|
||||
|
||||
```bash
|
||||
systemctl stop haproxy
|
||||
```
|
||||
### 3. Configure keepalived
### Configure keepalived

Primary haproxy 77 lb-0-10.10.71.77 (/etc/keepalived/keepalived.conf)
|
||||
|
||||
```bash
|
||||
# Primary haproxy 77 lb-0-10.10.71.77
|
||||
#/etc/keepalived/keepalived.conf
|
||||
global_defs {
|
||||
notification_email {
|
||||
}
|
||||
smtp_connect_timeout 30 # connection timeout
router_id LVS_DEVEL01 ## a nickname for this server
|
||||
vrrp_skip_check_adv_addr
|
||||
vrrp_garp_interval 0
|
||||
vrrp_gna_interval 0
|
||||
notification_email {
|
||||
}
|
||||
smtp_connect_timeout 30 # connection timeout
router_id LVS_DEVEL01 ## a nickname for this server
|
||||
vrrp_skip_check_adv_addr
|
||||
vrrp_garp_interval 0
|
||||
vrrp_gna_interval 0
|
||||
}
|
||||
vrrp_script chk_haproxy {
|
||||
script "killall -0 haproxy"
|
||||
interval 2
|
||||
weight 2
|
||||
script "killall -0 haproxy"
|
||||
interval 2
|
||||
weight 2
|
||||
}
|
||||
vrrp_instance haproxy-vip {
|
||||
state MASTER # the primary server is MASTER
priority 100 # the primary server's priority must be higher than the backup's
interface ens192 # the network interface the instance is bound to
virtual_router_id 60 # defines a hot-standby group; think of it as group number 60
advert_int 1 # advertise to each other every second to check whether the peer is still alive
|
||||
authentication {
|
||||
auth_type PASS # authentication type
auth_pass 1111 # authentication password; acts as a shared secret
|
||||
}
|
||||
unicast_src_ip 10.10.71.77 # address of this machine
|
||||
unicast_peer {
|
||||
10.10.71.66 # addresses of the other machines in the peer group
|
||||
}
|
||||
virtual_ipaddress {
|
||||
# VIP address
|
||||
10.10.71.67/24
|
||||
}
|
||||
track_script {
|
||||
chk_haproxy
|
||||
}
|
||||
state MASTER # the primary server is MASTER
priority 100 # the primary server's priority must be higher than the backup's
interface ens192 # the network interface the instance is bound to
virtual_router_id 60 # defines a hot-standby group; think of it as group number 60
advert_int 1 # advertise to each other every second to check whether the peer is still alive
|
||||
authentication {
|
||||
auth_type PASS # authentication type
auth_pass 1111 # authentication password; acts as a shared secret
|
||||
}
|
||||
unicast_src_ip 10.10.71.77 # address of this machine
|
||||
unicast_peer {
|
||||
10.10.71.66 # addresses of the other machines in the peer group
|
||||
}
|
||||
virtual_ipaddress {
|
||||
# VIP address
|
||||
10.10.71.67/24
|
||||
}
|
||||
track_script {
|
||||
chk_haproxy
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Backup haproxy 66 lb-1-10.10.71.66 (/etc/keepalived/keepalived.conf)
|
||||
```bash
|
||||
# Backup haproxy 66 lb-1-10.10.71.66
|
||||
#/etc/keepalived/keepalived.conf
|
||||
global_defs {
|
||||
notification_email {
|
||||
}
|
||||
router_id LVS_DEVEL02 ## a nickname for this server
|
||||
vrrp_skip_check_adv_addr
|
||||
vrrp_garp_interval 0
|
||||
vrrp_gna_interval 0
|
||||
notification_email {
|
||||
}
|
||||
router_id LVS_DEVEL02 ##相当于给这个服务器起个昵称
|
||||
vrrp_skip_check_adv_addr
|
||||
vrrp_garp_interval 0
|
||||
vrrp_gna_interval 0
|
||||
}
|
||||
vrrp_script chk_haproxy {
|
||||
script "killall -0 haproxy"
|
||||
interval 2
|
||||
weight 2
|
||||
script "killall -0 haproxy"
|
||||
interval 2
|
||||
weight 2
|
||||
}
|
||||
vrrp_instance haproxy-vip {
|
||||
state BACKUP #备份服务器 是 backup
|
||||
priority 90 #优先级要低(把备份的90修改为100)
|
||||
interface ens192 #实例绑定的网卡
|
||||
virtual_router_id 60
|
||||
advert_int 1
|
||||
authentication {
|
||||
auth_type PASS
|
||||
auth_pass 1111
|
||||
}
|
||||
unicast_src_ip 10.10.71.66 #当前机器地址
|
||||
unicast_peer {
|
||||
10.10.71.77 #peer 中其它机器地址
|
||||
}
|
||||
virtual_ipaddress {
|
||||
#加/24
|
||||
10.10.71.67/24
|
||||
}
|
||||
track_script {
|
||||
chk_haproxy
|
||||
}
|
||||
state BACKUP #备份服务器 是 backup
|
||||
priority 90 #优先级要低(把备份的90修改为100)
|
||||
interface ens192 #实例绑定的网卡
|
||||
virtual_router_id 60
|
||||
advert_int 1
|
||||
authentication {
|
||||
auth_type PASS
|
||||
auth_pass 1111
|
||||
}
|
||||
unicast_src_ip 10.10.71.66 #当前机器地址
|
||||
unicast_peer {
|
||||
10.10.71.77 #peer 中其它机器地址
|
||||
}
|
||||
virtual_ipaddress {
|
||||
#加/24
|
||||
10.10.71.67/24
|
||||
}
|
||||
track_script {
|
||||
chk_haproxy
|
||||
}
|
||||
}
|
||||
```

Start Keepalived and enable it to start on boot:

```bash
systemctl restart keepalived && systemctl enable keepalived
```

To stop or start the Keepalived service manually:

```bash
systemctl stop keepalived
systemctl start keepalived
```

### Verify Availability

Use `ip a s` to check the VIP binding on each load balancer node:

```bash
ip a s
```

Stop HAProxy on the node that currently holds the VIP:

```bash
systemctl stop haproxy
```

Run `ip a s` again on each load balancer node and confirm that the VIP has floated to the other node:

```bash
ip a s
```

Alternatively, verify the failover from the Keepalived service status:

```bash
systemctl status -l keepalived
```
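
For reference, on the node that currently holds the VIP you should see the VIP bound as a secondary address on the interface configured in keepalived.conf (`ens192` in this guide). The output below is an illustrative, trimmed sketch rather than output captured from the environment above:

```bash
ip a s ens192
# 2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
#     inet 10.10.71.77/24 brd 10.10.71.255 scope global ens192
#     inet 10.10.71.67/24 scope global secondary ens192    <- the VIP
```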

## Get the Installer Executable

Download the installer to one of the target machines and make it executable:

```bash
curl -O -k https://kubernetes.pek3b.qingstor.com/tools/kubekey/kk
chmod +x kk
```
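
As a quick sanity check that the binary downloaded intact, you can print its version. This assumes the kk build in use supports the `version` subcommand:

```bash
./kk version
```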

## Create a Multi-node Cluster

You can use the advanced installation to control custom parameters or to create a multi-node cluster; specifically, the cluster is created from a configuration file.

### Deploy a Kubernetes Cluster with KubeKey

Generate an example configuration file that also includes the KubeSphere settings:

```bash
# Create an example configuration file that includes KubeSphere
./kk create config --with-kubesphere v3.0.0 -f ~/config-sample.yaml
```

#### Cluster Node Configuration

Edit the configuration file with `vi ~/config-sample.yaml`:

```yaml
# vi ~/config-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
# ... (the hosts, roleGroups, controlPlaneEndpoint, and most other settings are omitted in this excerpt)
spec:
  # ...
  servicemesh:          # Whether to install KubeSphere Service Mesh (Istio-based). It provides fine-grained traffic management, observability and tracing, and offers visualization of the traffic topology.
    enabled: false
```
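
Within config-sample.yaml, the setting that ties the cluster to the load balancers configured earlier is `controlPlaneEndpoint`: it should point at the Keepalived VIP rather than at any single master. A minimal sketch, assuming the default `lb.kubesphere.local` domain generated by KubeKey:

```yaml
controlPlaneEndpoint:
  domain: lb.kubesphere.local
  address: "10.10.71.67"   # the Keepalived VIP created above
  port: "6443"             # the port HAProxy listens on for kube-apiserver
```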

Create the cluster using the configuration file you customized above:

```bash
./kk create cluster -f config-sample.yaml
```

#### Verify the Installation Results

Inspect the installation logs and wait a while:

```bash
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```

If the cluster creation ends with `Welcome to KubeSphere`, the installation was successful:

```bash
**************************************************
...
https://kubesphere.io           2020-08-15 23:32:12
#####################################################
```

#### Log in to the Console

Open the access address printed above to reach the KubeSphere login page, and log in to the platform with the default account (username `admin`, password `P@88w0rd`).





#### Enable Pluggable Components (Optional)

The example above demonstrates a default, minimal installation. To enable other components in KubeSphere, see [Enable Pluggable Components](https://github.com/kubesphere/ks-installer/blob/master/README_zh.md#安装功能组件) for more details.


@ -0,0 +1,276 @@

---
title: "High Availability of KubeSphere on Alibaba Cloud ECS"
keywords: 'kubesphere installation, alibaba cloud, ECS, high availability, load balancer'
description: 'This tutorial describes how to install a high-availability KubeSphere cluster on Alibaba Cloud ECS'

weight: 2230
---

For a production environment, the high availability of the cluster has to be taken into account. This tutorial shows how to quickly deploy a high-availability production environment on Alibaba Cloud ECS instances.

To make the Kubernetes services highly available, kube-apiserver itself must be highly available. Two approaches are recommended:

1. Alibaba Cloud SLB (Server Load Balancer)
2. [Keepalived + HAProxy](https://kubesphere.com.cn/forum/d/1566-kubernetes-keepalived-haproxy), which load-balances kube-apiserver to achieve a high-availability Kubernetes cluster.

## Prerequisites

- Please follow the [guide](https://github.com/kubesphere/kubekey) and make sure you already know how to install KubeSphere on a multi-node cluster, including the details of the config.yaml file used for installation. This tutorial focuses on configuring a high-availability installation with the Alibaba Cloud load balancer service.
- Considering data persistence, we do not recommend OpenEBS for a production environment; use storage such as NFS or GlusterFS instead (it needs to be installed in advance). For development and testing, this article uses the integrated OpenEBS to provision LocalPV as the storage service directly.
- SSH access to all nodes.
- Time synchronization across all nodes.
- Red Hat includes SELinux in its Linux distributions; it is recommended to disable SELinux or switch it to Permissive mode (a sketch of these last two steps follows this list).
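
One way to satisfy the last two prerequisites on CentOS 7, assuming chrony is used for time synchronization; adjust to your own environment:

```bash
# Switch SELinux to Permissive mode for the current boot
setenforce 0
# Make the change persistent across reboots
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Keep the clocks of all nodes in sync with chrony
yum install -y chrony
systemctl enable chronyd && systemctl start chronyd
chronyc sources
```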

## Deployment Architecture



## Create Hosts

This example creates one SLB instance plus six **CentOS Linux release 7.6.1810 (Core)** virtual machines, each with 2 vCPUs, 4 GB of memory, and a 40 GB disk.

| Host IP | Hostname | Role |
| --- | --- | --- |
| 39.104.82.170 | EIP | slb |
| 172.24.107.72 | master1 | master1, etcd |
| 172.24.107.73 | master2 | master2, etcd |
| 172.24.107.74 | master3 | master3, etcd |
| 172.24.107.75 | node1 | node |
| 172.24.107.76 | node2 | node |
| 172.24.107.77 | node3 | node |

> Note: Because machines are limited, etcd is placed on the master nodes here. For a production environment, it is recommended to deploy etcd separately to improve stability.

## Deploy with Alibaba Cloud SLB

### Create the SLB Instance

Go to the Alibaba Cloud console, choose "Server Load Balancer" in the left-hand menu, select "Instance Management", and then click "Create SLB Instance" as shown below.



### Configure the SLB Instance

Choose a specification that matches your expected traffic.



The config.yaml used later must be configured with the address assigned to the SLB:

```yaml
controlPlaneEndpoint:
  domain: lb.kubesphere.local
  address: "39.104.82.170"
  port: "6443"
```

### Configure the SLB Backend Servers and Listeners

Add the three master hosts that need to be load-balanced to the server group, then configure a TCP listener on port 6443 (kube-apiserver) in the order shown below.









Repeat the steps above to configure an HTTP listener on port 30880 (ks-console), this time adding all host nodes as backend servers.



- <font color=red>The health check will fail for now because the master services have not been deployed yet, so the ports cannot be reached with telnet (see the probe example after this list).</font>
- Then submit the configuration for review.
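
Once the master nodes and KubeSphere have been deployed later in this guide, the listeners can be re-checked from any machine with public access. A couple of illustrative probes against the EIP used in this example:

```bash
# TCP listener for kube-apiserver
telnet 39.104.82.170 6443

# HTTP listener for ks-console (serves the console once KubeSphere is installed)
curl -I http://39.104.82.170:30880
```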

### Get the Installer Executable

```bash
# Download the kk installer to any one of the machines
curl -O -k https://kubernetes.pek3b.qingstor.com/tools/kubekey/kk
chmod +x kk
```

{{< notice tip >}}

You can use the advanced installation to control custom parameters or to create a multi-node cluster; specifically, the cluster is created from a configuration file.

{{</ notice >}}

### Deploy the Kubernetes Cluster and KubeSphere Console with KubeKey

```bash
# Create the configuration file config-sample.yaml (including KubeSphere) in the current directory
./kk create config --with-kubesphere v3.0.0 -f config-sample.yaml

# Alternatively, also install a storage plugin (supported: localVolume, nfsClient, rbd, glusterfs).
# You can specify multiple plugins separated by commas; the first one you add becomes the default storage class.
./kk create config --with-storage localVolume --with-kubesphere v3.0.0 -f config-sample.yaml
```

### Adjust the Cluster Configuration

```yaml
# vi ~/config-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: config-sample
spec:
  hosts:
  - {name: master1, address: 172.24.107.72, internalAddress: 172.24.107.72, user: root, password: QWEqwe123}
  - {name: master2, address: 172.24.107.73, internalAddress: 172.24.107.73, user: root, password: QWEqwe123}
  - {name: master3, address: 172.24.107.74, internalAddress: 172.24.107.74, user: root, password: QWEqwe123}
  - {name: node1, address: 172.24.107.75, internalAddress: 172.24.107.75, user: root, password: QWEqwe123}
  - {name: node2, address: 172.24.107.76, internalAddress: 172.24.107.76, user: root, password: QWEqwe123}
  - {name: node3, address: 172.24.107.77, internalAddress: 172.24.107.77, user: root, password: QWEqwe123}

  roleGroups:
    etcd:
    - master1
    - master2
    - master3
    master:
    - master1
    - master2
    - master3
    worker:
    - node1
    - node2
    - node3
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: "39.104.82.170"
    port: "6443"
  kubernetes:
    version: v1.17.9
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: ["https://*.mirror.aliyuncs.com"]   # input your registry mirrors
    insecureRegistries: []
  storage:
    defaultStorageClass: localVolume
    localVolume:
      storageClassName: local

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.0.0
spec:
  local_registry: ""
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  etcd:
    monitoring: true
    endpointIps: 172.24.107.72,172.24.107.73,172.24.107.74
    port: 2379
    tlsEnable: true
  common:
    es:
      elasticsearchDataVolumeSize: 20Gi
      elasticsearchMasterVolumeSize: 4Gi
      elkPrefix: logstash
      logMaxAge: 7
    mysqlVolumeSize: 20Gi
    minioVolumeSize: 20Gi
    etcdVolumeSize: 20Gi
    openldapVolumeSize: 2Gi
    redisVolumSize: 2Gi
  console:
    enableMultiLogin: false  # enable/disable multiple concurrent logins
    port: 30880
  alerting:
    enabled: false
  auditing:
    enabled: false
  devops:
    enabled: false
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    ruler:
      enabled: true
      replicas: 2
  logging:
    enabled: false
    logsidecarReplicas: 2
  metrics_server:
    enabled: true
  monitoring:
    prometheusMemoryRequest: 400Mi
    prometheusVolumeSize: 20Gi
  multicluster:
    clusterRole: none  # host | member | none
  networkpolicy:
    enabled: false
  notification:
    enabled: false
  openpitrix:
    enabled: false
  servicemesh:
    enabled: false
```

### Create the Cluster

```bash
# Create the cluster with the specified configuration file
./kk create cluster -f config-sample.yaml

# Watch the KubeSphere installation logs until the console access address and login account appear
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```

```bash
**************************************************
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://172.24.107.72:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After logging into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are ready.
  2. Please modify the default password after login.

#####################################################
https://kubesphere.io           2020-08-24 23:30:06
#####################################################
```

- After deployment, access the console at the public IP plus the port and log in with the default account (`admin/P@88w0rd`). This article performs a minimal installation; after logging in, click `Workbench` to see the list of installed components and the machine status, as shown below.



## Enable Pluggable Components

+ Click `Cluster Management` - `Custom Resources (CRD)` and enter `ClusterConfiguration` in the filter box, as shown below.



+ Open the `ClusterConfiguration` details, edit the `ks-installer` resource, then save and exit; the components are described in the [documentation](https://github.com/kubesphere/ks-installer/blob/master/deploy/cluster-configuration.yaml). A command-line alternative is sketched below.



## Installation Issues

> Tip: If you encounter `Failed to add worker to cluster: Failed to exec command...` during the installation, reset the affected node and retry:

```bash
kubeadm reset
```

@ -1,9 +1,9 @@
---
title: "Quickstarts"
description: "Help you to better understand KubeSphere with detailed graphics and contents"
layout: "single"

linkTitle: "Quickstarts"

weight: 1500