mirror of
https://github.com/kubesphere/website.git
synced 2025-12-26 00:12:48 +00:00
Merge pull request #302 from zryfish/api_documentation
Api documentation
This commit is contained in commit bcfb9a20fd
@ -0,0 +1,16 @@
---
title: "KubeSphere API"
description: "How to use the KubeSphere API to build your own applications"
layout: "single"
linkTitle: "API Documentation"
weight: 3100
icon: "/images/docs/docs.svg"
---

## [API Documentation](./kubesphere-api/)

The REST API is the fundamental fabric of KubeSphere. This page shows you how to access the KubeSphere API server.
@ -0,0 +1,73 @@
---
title: "API Glossary"
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'KubeSphere API Glossary documentation'
weight: 240
---

## DevOps

|English/英文|Chinese/中文|
|---|---|
|DevOps|DevOps 工程|
|Workspace|企业空间|
|Pipeline|流水线|
|Credential|凭证|
|Artifact|制品|
|Stage|流水线执行过程中的阶段|
|Step|阶段中的步骤|
|Branch|分支|
|SCM|源代码管理工具,例如 GitHub、GitLab 等|
|Sonar|代码质量分析工具 SonarQube|

## Monitoring

|English/英文|Chinese/中文|
|---|---|
|Metric|指标|
|Usage|用量|
|Utilisation|利用率|
|Throughput|吞吐量|
|Capacity|容量|
|Proposal|Etcd 提案|

## Logging

|English/英文|Chinese/中文|
|---|---|
|Fuzzy Matching|模糊匹配|

## Router

|English/英文|Chinese/中文|
|---|---|
|Gateway|网关|
|Route|应用路由|

## Service Mesh

|English/英文|Chinese/中文|
|---|---|
|Service Mesh|服务网格|
|Tracing|追踪(分布式追踪)|
|Canary Release|金丝雀发布|
|Traffic Mirroring|流量镜像|
|Blue-Green Release|蓝绿发布|

## Notification

|English/英文|Chinese/中文|
|---|---|
|Address List|通知地址列表|

## Multi-Cluster

|English/英文|Chinese/中文|
|---|---|
|Host Cluster|主集群/管理集群|
|Member Cluster|成员集群|
|Direct Connection|直接连接|
|Agent Connection|代理连接|
@ -0,0 +1,93 @@
---
title: "KubeSphere API"
keywords: 'Kubernetes, KubeSphere, API'
description: 'KubeSphere API documentation'
weight: 240
---

In KubeSphere v3.0, we moved the functionality of _ks-apigateway_ and _ks-account_ into _ks-apiserver_ to make the architecture more compact and straightforward. To use the KubeSphere API, you first need to expose _ks-apiserver_ to your client.

## Expose the KubeSphere API service

If you are going to access KubeSphere from inside the cluster, you can skip the following section and simply use the in-cluster endpoint **`http://ks-apiserver.kubesphere-system.svc`**.

Otherwise, you need to expose the KubeSphere API server endpoint outside the cluster first.

There are many ways to expose a Kubernetes service; for simplicity, we use _NodePort_ here. Change the `ks-apiserver` service type to NodePort with the following command, and you are done.

```bash
root@master:~# kubectl -n kubesphere-system patch service ks-apiserver -p '{"spec":{"type":"NodePort"}}'
root@master:~# kubectl -n kubesphere-system get svc
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
etcd           ClusterIP   10.233.34.220   <none>        2379/TCP       44d
ks-apiserver   NodePort    10.233.15.31    <none>        80:31407/TCP   49d
ks-console     NodePort    10.233.3.45    <none>        80:30880/TCP   49d
```

Now you can access `ks-apiserver` from outside the cluster through a URL like `http://[node ip]:31407`, where `[node ip]` is the IP address of any node in your cluster.
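The endpoint can also be assembled in a script instead of read off the console. A minimal sketch, where the IP and port values are placeholders standing in for your environment:

```shell
# The NodePort can be looked up against your cluster with:
#   kubectl -n kubesphere-system get svc ks-apiserver -o jsonpath='{.spec.ports[0].nodePort}'
# Placeholder values used here for illustration only:
NODE_IP="192.168.0.2"   # IP of any node in your cluster (placeholder)
NODE_PORT="31407"       # NodePort from the service listing above
KS_API="http://${NODE_IP}:${NODE_PORT}"
echo "${KS_API}"
```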

## Generate a token

There is one more thing to do before calling the API: authentication. Any client that talks to the KubeSphere API server needs to identify itself first; only after successful authentication will the server respond to the call.

Let's say a user `jeff` with password `P#$$w0rd` wants to generate a token. He or she can issue a request like the following:
```bash
root@master:~# curl -X POST -H 'Content-Type: application/x-www-form-urlencoded' \
 'http://[node ip]:31407/oauth/token' \
 --data-urlencode 'grant_type=password' \
 --data-urlencode 'username=jeff' \
 --data-urlencode 'password=P#$$w0rd'
```
If the credentials are correct, the server will respond with something like the following. The `access_token` is the token we need to access the KubeSphere API server.

```json
{
  "access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6ImFkbWluIiwidWlkIjoiYTlhNjJmOTEtYWQ2Yi00MjRlLWIxNWEtZTFkOTcyNmUzNDFhIiwidG9rZW5fdHlwZSI6ImFjY2Vzc190b2tlbiIsImV4cCI6MTYwMDg1MjM5OCwiaWF0IjoxNjAwODQ1MTk4LCJpc3MiOiJrdWJlc3BoZXJlIiwibmJmIjoxNjAwODQ1MTk4fQ.Hcyf-CPMeq8XyQQLz5PO-oE1Rp1QVkOeV_5J2oX1hvU",
  "token_type": "Bearer",
  "refresh_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6ImFkbWluIiwidWlkIjoiYTlhNjJmOTEtYWQ2Yi00MjRlLWIxNWEtZTFkOTcyNmUzNDFhIiwidG9rZW5fdHlwZSI6InJlZnJlc2hfdG9rZW4iLCJleHAiOjE2MDA4NTk1OTgsImlhdCI6MTYwMDg0NTE5OCwiaXNzIjoia3ViZXNwaGVyZSIsIm5iZiI6MTYwMDg0NTE5OH0.PerssCLVXJD7BuCF3Ow8QUNYLQxjwqC8m9iOkRRD6Tc",
  "expires_in": 7200
}
```

> **Note**: Please substitute `[node ip]:31407` with the real IP address.
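When scripting, the `access_token` field can be pulled out of the response without a dedicated JSON CLI, for example with `python3`. A sketch; the response string below is a shortened stand-in for the real payload:

```shell
# Trimmed stand-in for the token response shown above (not a real token)
RESPONSE='{"access_token":"eyJexample.token.value","token_type":"Bearer","expires_in":7200}'

# Extract access_token using python3's standard json module
TOKEN=$(printf '%s' "$RESPONSE" | python3 -c 'import sys, json; print(json.load(sys.stdin)["access_token"])')
echo "$TOKEN"
```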

## Make the call

Now you have everything you need to access the API server. Make the call using the access token you just acquired:

```bash
root@master1:~# curl -X GET -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6ImFkbWluIiwidWlkIjoiYTlhNjJmOTEtYWQ2Yi00MjRlLWIxNWEtZTFkOTcyNmUzNDFhIiwidG9rZW5fdHlwZSI6ImFjY2Vzc190b2tlbiIsImV4cCI6MTYwMDg1MjM5OCwiaWF0IjoxNjAwODQ1MTk4LCJpc3MiOiJrdWJlc3BoZXJlIiwibmJmIjoxNjAwODQ1MTk4fQ.Hcyf-CPMeq8XyQQLz5PO-oE1Rp1QVkOeV_5J2oX1hvU" \
  -H 'Content-Type: application/json' \
  'http://10.233.15.31/kapis/resources.kubesphere.io/v1alpha3/nodes'

{
  "items": [
    {
      "metadata": {
        "name": "node3",
        "selfLink": "/api/v1/nodes/node3",
        "uid": "dd8c01f3-76e8-4695-9e54-45be90d9ec53",
        "resourceVersion": "84170589",
        "creationTimestamp": "2020-06-18T07:36:41Z",
        "labels": {
          "a": "a",
          "beta.kubernetes.io/arch": "amd64",
          "beta.kubernetes.io/os": "linux",
          "gitpod.io/theia.v0.4.0": "available",
          "gitpod.io/ws-sync": "available",
          "kubernetes.io/arch": "amd64",
          "kubernetes.io/hostname": "node3",
          "kubernetes.io/os": "linux",
          "kubernetes.io/role": "new",
          "node-role.kubernetes.io/worker": "",
          "topology.disk.csi.qingcloud.com/instance-type": "Standard",
          "topology.disk.csi.qingcloud.com/zone": "ap2a"
        },
        "annotations": {
          "csi.volume.kubernetes.io/nodeid": "{\"disk.csi.qingcloud.com\":\"i-icjxhi1e\"}",
          "kubeadm.alpha.kubernetes.io/cri-socket": "/var/run/dockershim.sock",
          "node.alpha.kubernetes.io/ttl": "0",
          ....
```
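Because the response is plain JSON, individual fields are easy to pick out in a script. A sketch that prints node names, using a trimmed sample payload shaped like the output above:

```shell
# Trimmed stand-in for the /nodes response shown above
RESPONSE='{"items":[{"metadata":{"name":"node3"}},{"metadata":{"name":"node4"}}]}'

# Print each node name on its own line
NAMES=$(printf '%s' "$RESPONSE" | python3 -c 'import sys, json
for item in json.load(sys.stdin)["items"]:
    print(item["metadata"]["name"])')
echo "$NAMES"
```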

## API Reference

The KubeSphere API swagger JSON can be found in the repo https://github.com/kubesphere/kubesphere/blob/master/api/

- KubeSphere-specific API [swagger json](https://github.com/kubesphere/kubesphere/blob/master/api/ks-openapi-spec/swagger.json). It contains all the APIs that apply only to KubeSphere.
- KubeSphere-specific CRD [swagger json](https://github.com/kubesphere/kubesphere/blob/master/api/openapi-spec/swagger.json). It contains all the generated CRD API documentation, in the same style as Kubernetes API objects.
@ -1,5 +1,5 @@
---
title: "Nodes Management"
keywords: "kubernetes, StorageClass, kubesphere, PVC"
description: "Kubernetes Nodes Management"
@ -7,4 +7,32 @@ linkTitle: "Nodes"
weight: 200
---

Kubernetes runs your workload by placing containers into Pods to run on Nodes. A node may be a virtual or physical machine, depending on the cluster. Each node contains the services necessary to run Pods, managed by the control plane.

## Node Status

Cluster nodes are accessible only to cluster administrators. Administrators can find the cluster nodes page under _Cluster Administration_ -> _Nodes_ -> _Cluster Nodes_. Some node metrics are very important to cluster health; it is the administrators' responsibility to watch over these numbers and make sure nodes remain available.

![nodes](/images/docs/nodes.png)

- **Status**: The node's current status, indicating whether the node is available.
- **CPU**: The node's current CPU usage; these values are real-time numbers.
- **Memory**: Current memory usage; like the _CPU_ stats, these are also real-time numbers.
- **Allocated CPU**: Calculated by summing the CPU requests of all pods on this node. It indicates how much CPU is reserved for workloads on this node, even if the workloads are actually using less. This metric is vital to the Kubernetes scheduler: in most cases, kube-scheduler favors nodes with lower _Allocated CPU_ when scheduling a pod. For more details, refer to [managing resources for containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
- **Allocated Memory**: Calculated by summing the memory requests of all pods on this node; same as _Allocated CPU_.
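The _Allocated CPU_ arithmetic can be illustrated directly: it is the sum of the CPU requests of every pod on the node, independent of actual usage. A sketch with made-up request values in millicores:

```shell
# Hypothetical CPU requests (in millicores) of three pods on one node
POD_REQUESTS_M="250 500 100"

TOTAL=0
for r in $POD_REQUESTS_M; do
  TOTAL=$((TOTAL + r))   # sum of requests, not of real-time usage
done
echo "Allocated CPU: ${TOTAL}m"
```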

> **Note:** _CPU_ and _Allocated CPU_ differ most of the time, and the same holds for memory; this is normal. As a cluster administrator, you should watch both kinds of metrics rather than just one. It is always good practice to set resource requests and limits for each pod to match its real usage: over-allocating leads to low cluster utilization, while under-allocating puts high pressure on the cluster and can even make it unhealthy.

## Nodes Management

![node cordon](/images/docs/node_cordon.png)

- **Cordon/Uncordon**: Marking a node as unschedulable is very useful during a node reboot or other maintenance. The Kubernetes scheduler will not schedule new pods onto a node that has been marked unschedulable, but this does not affect workloads already running on the node. In KubeSphere, you mark a node as unschedulable by clicking the _Cordon_ button on the node detail page; clicking the button again makes the node schedulable again.

- **Labels**: Node labels can be very useful when you want to assign pods to specific nodes. Label the nodes first, for example, label GPU nodes with `node-role.kubernetes.io/gpu-node`; then, when creating workloads with the label `node-role.kubernetes.io/gpu-node`, you can assign pods to the GPU nodes explicitly.

![node label](/images/docs/node_label.png)

![node label pod](/images/docs/node_label_pod.png)

- **Taints**: Taints allow a node to repel a set of pods; see [taints and tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/). You can add or remove node taints on the node detail page, but be careful: taints can cause unexpected behavior and may lead to service unavailability.
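Putting the two ideas together, a pod spec can target labeled nodes with `nodeSelector` and opt in to tainted nodes with `tolerations`. A minimal sketch; the pod name, image, and the `dedicated=gpu:NoSchedule` taint are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload                         # hypothetical pod name
spec:
  nodeSelector:
    node-role.kubernetes.io/gpu-node: ""     # label from the example above
  tolerations:
  - key: "dedicated"                         # hypothetical taint on the GPU nodes
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx                             # placeholder image
```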