Merge pull request #2463 from Santosh1176/porterlb

Resolves issue #2450 Corrected broken links of porterlb.io
This commit is contained in:
rayzhou2017 2022-06-12 18:47:00 +08:00 committed by GitHub
commit 3ab99dc86f
12 changed files with 165 additions and 16 deletions


@@ -90,7 +90,7 @@ section4:
- name: Multiple Storage and Networking Solutions
icon: /images/home/multi-tenant-management.svg
content: Support GlusterFS, CephRBD, NFS, and LocalPV solutions, provide CSI plugins to consume storage from multiple cloud providers, provide a <a class='inner-a' target='_blank' href='https://porterlb.io'>load balancer OpenELB</a> for bare metal Kubernetes, and offer network policy management with support for the Calico and Flannel CNIs
content: Support GlusterFS, CephRBD, NFS, and LocalPV solutions, provide CSI plugins to consume storage from multiple cloud providers, provide a <a class='inner-a' target='_blank' href='https://openelb.github.io/'>load balancer OpenELB</a> for bare metal Kubernetes, and offer network policy management with support for the Calico and Flannel CNIs
features:


@@ -53,4 +53,4 @@ For detailed information about the architecture and principle, please refer to [
## Related Resources
- [Porter: A Promising Newcomer in CNCF Landscape for Bare Metal Kubernetes Clusters](https://dzone.com/articles/porter-an-open-source-load-balancer-for-kubernetes)
- [Porter Website](https://porterlb.io/)
- [Porter Website](https://openelb.github.io/)


@@ -60,7 +60,9 @@ For the first problem, Ingress can be used for L4 but the configuration of Ingre
## Porter Introduction
[Porter](https://porterlb.io) is an open source cloud native load balancing plugin designed by the KubeSphere development team based on Border Gateway Protocol (BGP). It mainly features:
[Porter](https://openelb.github.io/) is an open source cloud native load balancing plugin designed by the KubeSphere development team based on Border Gateway Protocol (BGP). It mainly features:
1. ECMP routing load balancing
2. BGP dynamic routing configuration
@@ -144,4 +146,4 @@ There are two advantages in this method:
## Related Resources
- [KubeCon Shanghai: Porter - An Open Source Load Balancer for Bare Metal Kubernetes](https://www.youtube.com/watch?v=EjU1yAVxXYQ)
- [Porter Website](https://porterlb.io)
- [Porter Website](https://openelb.github.io/)


@@ -24,7 +24,7 @@ groups:
children:
- title: OpenELB
icon: 'https://pek3b.qingstor.com/kubesphere-docs/png/20200608102707.png'
link: 'https://porterlb.io'
link: 'https://openelb.github.io/'
description: OpenELB is an open source load balancer designed for bare metal Kubernetes clusters. It's implemented with physical switches, and uses BGP and ECMP to achieve optimal performance and high availability.
- name: Installer


@@ -90,7 +90,7 @@ section4:
- name: Support for Multiple Storage and Networking Solutions
icon: /images/home/multi-tenant-management.svg
content: Supports GlusterFS, Ceph, NFS, and LocalPV, with multiple CSI plugins for public cloud and enterprise storage; provides <a class='inner-a' target='_blank' href='https://porterlb.io'>OpenELB</a>, a load balancer for bare metal Kubernetes environments; supports network policy visualization and network plugins such as Calico, Flannel, Cilium, and Kube-OVN
content: Supports GlusterFS, Ceph, NFS, and LocalPV, with multiple CSI plugins for public cloud and enterprise storage; provides <a class='inner-a' target='_blank' href='https://openelb.github.io/'>OpenELB</a>, a load balancer for bare metal Kubernetes environments; supports network policy visualization and network plugins such as Calico, Flannel, Cilium, and Kube-OVN
features:
- name: Kubernetes DevOps System


@@ -59,7 +59,7 @@ Ingress is not an exposure method provided by the Kubernetes Service itself, but rather relies on
## Porter Introduction
[Porter](https://porterlb.io) is an open source cloud native load balancer plugin based on BGP, developed by the KubeSphere team. Its main features are:
[Porter](https://openelb.github.io/) is an open source cloud native load balancer plugin based on BGP, developed by the KubeSphere team. Its main features are:
1. Load balancing based on router ECMP
2. Dynamic route configuration based on BGP
@@ -141,4 +141,4 @@ All resources in Porter are CRDs, including VIP, BGPPeer, BGPConfig, etc. For
## Related Resources
- [KubeCon Shanghai: Porter - An Open Source Load Balancer for Bare Metal Kubernetes](https://www.youtube.com/watch?v=EjU1yAVxXYQ)
- [Porter Website](https://porterlb.io)
- [Porter Website](https://openelb.github.io/)


@@ -69,7 +69,7 @@ A: Yes, but sharing is not recommended, to avoid too large a blast radius.
### Q6: Can OpenELB be used in production?
A: Yes. The [official website](https://porterlb.io/about/) already lists some production cases.
A: Yes. The [official website](https://openelb.github.io/) already lists some production cases.
### Q7: How is APISIX exposed externally?


@@ -23,7 +23,7 @@ groups:
children:
- title: Porter Load Balancer
icon: 'https://pek3b.qingstor.com/kubesphere-docs/png/20200608102707.png'
link: 'https://porterlb.io'
link: 'https://openelb.github.io/'
description: A load balancer plugin for Kubernetes in physical deployments. Porter is implemented with physical switches and uses BGP and ECMP to achieve optimal performance and high availability, giving users who expose LoadBalancer Services in physical environments an experience consistent with the cloud.
- name: Installation and Deployment


@@ -12,10 +12,10 @@ footer:
link: observability/
- content: Bare Metal LoadBalancer
link: 'https://porterlb.io/'
link: 'https://openelb.github.io/'
- content: Functions-as-a-Service Platform and Serverless
link: 'https://github.com/OpenFunction/OpenFunction'
- content: Multi-cloud Apps Mgmt
link: 'https://github.com/openpitrix/openpitrix'
@@ -56,7 +56,7 @@ footer:
link: https://kubesphere.io
- content: China Site
link: https://kubesphere.com.cn/
- title: Products and Services
list:
- content: KubeSphere on AWS


@@ -12,7 +12,7 @@ footer:
link: observability/
- content: Exposing Services on Bare Metal K8s
link: 'https://porterlb.io'
link: 'https://openelb.github.io/'
- content: One-click K8s Deployment and O&M
link: 'https://github.com/kubesphere/kubekey'
@@ -52,7 +52,7 @@ footer:
- content: Contribute
link: contribution/
- content: Community Events
link: live/
- content: Case Studies
link: case/
- content: Partners
@@ -83,6 +83,6 @@ footer:
- content: Cloud Native Application Service Platform
link: 'https://kubesphere.cloud'
- content: Technical Support Services
link: 'https://kubesphere.cloud/ticket/'
- content: Learn about Commercial Products and Consulting Partnerships
link: 'https://jinshuju.net/f/C8uB8k'

porter.md (new file, 147 lines)

@@ -0,0 +1,147 @@
---
title: 'Porter: An Open Source Load Balancer for Kubernetes in a Bare Metal Environment'
author: 'Xuetao Song'
createTime: '2019-06-25'
---
In a Kubernetes cluster, a Service of type LoadBalancer is the standard way to expose backend workloads externally. Cloud providers often offer cloud LoadBalancer plugins, but this requires the cluster to be deployed on a specific IaaS platform. However, many enterprise users deploy Kubernetes clusters on bare metal, especially for production. For a local bare metal cluster, Kubernetes provides no LoadBalancer implementation. Porter is an open source load balancer designed specifically for bare metal Kubernetes clusters, and it serves as an excellent solution to this problem.
## Kubernetes Service Introduction
In a Kubernetes cluster, the network is a fundamental and critical part. Ensuring connectivity and efficiency across large numbers of nodes and containers calls for a complicated and delicate design. What's more, IP addresses and ports need to be assigned and managed automatically, with a user-friendly approach in place for direct and quick access to applications.
Kubernetes has made great efforts in this regard. With CNI, Service, DNS and Ingress, it has solved the problems of service discovery and load balancing while remaining easy to use and configure. Among them, Service underlies Kubernetes microservices, and Services are implemented by kube-proxy.
This component runs on each node, watching Service objects in the API server for changes and programming iptables to forward traffic accordingly. Users can create different forms of Services, such as those based on a label selector, Headless Services, or ExternalName Services. For each ordinary Service, kube-proxy creates a virtual IP (the cluster IP) for access from within the cluster.
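As a minimal sketch (all names hypothetical), a label-selector Service for which kube-proxy programs a cluster IP might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app           # hypothetical Service name
spec:
  selector:
    app: my-app          # matches Pods labeled app=my-app
  ports:
    - port: 80           # port on the cluster IP
      targetPort: 8080   # port on the backend containers
```

kube-proxy then forwards traffic hitting the cluster IP on port 80 to one of the matching Pods on port 8080.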
## Three Methods to Expose Services
If access is required from outside the cluster, or the service is to be exposed to users, Kubernetes Service provides two methods: NodePort and LoadBalancer. Besides these, Ingress is also a very common option for exposing services.
### NodePort
If the Service type is set to NodePort, kube-proxy allocates a port for the Service from a range above 30000 (30000-32767 by default) and configures iptables rules on every host in the cluster. Users can then access the Service through any node in the cluster on the assigned port. Please see the image below:
![NodePort](https://pek3b.qingstor.com/kubesphere-docs/png/20200611115837.png)
NodePort is the most convenient way to expose services, but it has obvious shortcomings:
1. Because of SNAT, the real source IP is not visible to the Pod.
2. A single host in the cluster acts as a jump server for the backend service, so all traffic goes through that host first. This easily leads to performance bottlenecks and a single point of failure, making it hard to use in production.
3. NodePort generally uses large port numbers, which are hard to remember.
NodePort was never designed for exposing services in production, which is why large port numbers are used by default.
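A minimal NodePort Service (hypothetical names; the explicit nodePort can be omitted to let Kubernetes pick one):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080   # must fall within the node port range (30000-32767 by default)
```

After applying this, `<any-node-ip>:30080` reaches the backend Pods from outside the cluster.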
### LoadBalancer
LoadBalancer is the way Kubernetes prefers to expose services. However, it depends on a load balancer offered by a cloud provider, which means the Kubernetes cluster has to be deployed in the cloud. Here is how LoadBalancer works:
![LoadBalancer](https://pek3b.qingstor.com/kubesphere-docs/png/20200611115859.png)
The LoadBalancer Service is implemented through the LB plugin offered by the cloud provider. The k8s.io/cloud-provider package chooses the appropriate backend Service and exposes it to the LB plugin, which creates a load balancer accordingly. Network traffic is then distributed by the cloud service, avoiding the single point of failure and performance bottlenecks of NodePort. As mentioned above, LoadBalancer is the preferred Kubernetes solution for service exposure, but it is limited to the Kubernetes services offered by cloud providers. For a Kubernetes cluster deployed on bare metal or in another non-cloud environment, this approach does not apply.
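Declaring the Service looks the same on every platform; what differs is who fulfills it, a cloud LB plugin or, on bare metal, a component such as Porter. A hypothetical manifest:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer   # the cloud provider (or Porter) provisions an external IP
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

On a supported platform, the Service's `status.loadBalancer.ingress` field is filled with the externally reachable address once the LB is ready.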
### Ingress
Kubernetes itself does not implement Ingress exposure. Rather, an Ingress exposes multiple services simultaneously with the help of an Ingress Controller application that acts like a router. The controller distinguishes services by domain and uses annotations to control how they are exposed externally. Here is how it works:
![Ingress](https://pek3b.qingstor.com/kubesphere-docs/png/20200611115920.png)
Ingress is used more often in business environments than NodePort and LoadBalancer. The reasons include:
1. Compared with the load balancing of kube-proxy, an Ingress Controller is more capable (e.g. traffic control and security policies).
2. Identifying services by domain is more intuitive, and the large port numbers of NodePort are not needed.
Nevertheless, the following problems need to be solved for Ingress:
1. Ingress is mostly used for L7, with limited support for L4.
2. All traffic goes through the Ingress Controller, so an LB is still required to expose the Ingress Controller itself.
For the first problem, Ingress can be used for L4, but its configuration is too complicated for L4 applications; the best practice is to expose them directly through an LB. For the second problem, the Ingress Controller can be exposed with NodePort (or hostNetwork) in a test environment, but this inevitably reintroduces a single point of failure and performance bottlenecks, and fails to exploit the HA of the Ingress Controller.
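A minimal host-based Ingress rule (hypothetical names; written against the networking.k8s.io/v1 API, which postdates this article):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: app.example.com       # the controller routes requests by this domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app      # hypothetical backend Service
                port:
                  number: 80
```

The Ingress Controller watches these objects and reloads its proxy configuration; the controller itself still needs to be exposed, which is where an LB such as Porter fits in.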
## Porter Introduction
[Porter](https://openelb.github.io/) is an open source cloud native load balancing plugin based on BGP (Border Gateway Protocol), designed by the KubeSphere development team. It mainly features:
1. ECMP routing load balancing
2. BGP dynamic routing configuration
3. VIP management
![Porter](https://pek3b.qingstor.com/kubesphere-docs/png/20200611120450.png)
All of Porter's code and documentation is open source and available on [GitHub](https://github.com/kubesphere/porter). You are welcome to star and use it.
## How to Install Porter
Porter has so far been deployed and tested in the two environments below. The links provide more details about deployment, testing, and the underlying process. You are encouraged to try it out:
- [Deploy Porter on Bare Metal Kubernetes Cluster](https://github.com/kubesphere/porter/blob/master/doc/deploy_baremetal.md)
- [Test in the QingCloud Platform Using a Simulated Router](https://github.com/kubesphere/porter/blob/master/doc/simulate_with_bird.md)
## Principle
### ECMP
Equal-Cost Multi-Path (ECMP) means that packets toward a single destination can be forwarded along multiple paths of equal cost. When a device supports ECMP, Layer 3 traffic sent to a target IP or subnet can be distributed across those paths, achieving network load balancing. Moreover, if one path fails, the remaining paths take over the forwarding, serving as routing redundancy. Please refer to the image below:
![ECMP Principle](https://pek3b.qingstor.com/kubesphere-docs/png/20200611115936.png)
With ECMP, the router selects the next hop (a node hosting a Pod) for a given IP (the Service's VIP) from the available equal-cost paths using a hash algorithm. This is how load balancing is achieved. Since routers generally support ECMP, Porter only needs to watch the Kubernetes API server and deliver the information about a Service's backend Pods to the router.
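The hash-based next-hop selection can be sketched in a few lines of Python (a toy model, not Porter code; real routers compute the flow hash in hardware):

```python
import hashlib

def ecmp_next_hop(src_ip, src_port, dst_ip, dst_port, proto, next_hops):
    """Pick a next hop by hashing the flow's 5-tuple, as an ECMP router does."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return next_hops[digest % len(next_hops)]

# Two cluster nodes advertised as equal-cost next hops for the VIP 1.1.1.1
nodes = ["192.168.0.2", "192.168.0.3"]
hop = ecmp_next_hop("10.0.0.7", 52113, "1.1.1.1", 80, "tcp", nodes)
assert hop in nodes
# Packets of the same flow always hash to the same node, preserving ordering
assert hop == ecmp_next_hop("10.0.0.7", 52113, "1.1.1.1", 80, "tcp", nodes)
```

Because the hash is deterministic per flow, one TCP connection sticks to one path while different flows spread across all equal-cost paths; when Porter withdraws a node's route, the router simply rehashes over the remaining next hops.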
### BGP
In Kubernetes, a Pod may be rescheduled to another node, so from a router's perspective the next hop of a Service VIP is not fixed and the equal-cost routing information must be updated frequently. Calico, for example, uses BGP (Border Gateway Protocol) to advertise routes. BGP is the essential decentralized protocol for exchanging routing information among autonomous systems on the Internet. Unlike interior routing protocols, BGP runs over L4 (TCP) to secure routing updates. And because BGP is decentralized, it is very easy to build a highly available routing layer that keeps the network reachable.
![BGP](https://pek3b.qingstor.com/kubesphere-docs/png/20200611120800.png)
The image above briefly demonstrates how BGP works in Porter. The bottom-left corner shows a two-node Kubernetes cluster, with two routers (Leaf1 and Leaf2) above it. These two routers are connected to two core switches (the Spine layer). Users are on the right side; their routers, Border1 and Border2, are also connected to the Spine layer.
Layer 3 connectivity exists between the users and the Kubernetes servers. When a Service is created in the cluster with Porter in place, Porter assigns it a VIP (or another manually assigned IP), 1.1.1.1, and advertises to Leaf1 and Leaf2 through BGP that the next hop for 1.1.1.1 can be Node1 or Node2. The Leaf layer in turn advertises to the Spine layer, which likewise learns that the next hop for 1.1.1.1 can be Leaf1 or Leaf2.
By the same logic, the routing information propagates to the Border routers, completing the path from users to 1.1.1.1. Since every layer in the image is highly available, a total of 16 (`2*2*2*2`) equal-cost paths are available for external access. Traffic is distributed across the whole network, and a router failure at any layer will not affect users' access.
## Architecture
![Porter Architecture](https://pek3b.qingstor.com/kubesphere-docs/png/20200611120827.png)
Porter has two components: a core controller, and an agent deployed on each node. The main functions of the controller include:
1. Watching cluster Services and their endpoints, and acquiring the scheduling information of Pods
2. Storing and assigning VIPs
3. Establishing BGP sessions and advertising routes
![Porter Logic](https://pek3b.qingstor.com/kubesphere-docs/png/20200611120857.png)
The image above shows the working principle of Porter's core controller.
The agent is a lightweight component that watches VIP resources and adds iptables rules to allow external access to the VIP; by default, the kernel's FORWARD table would drop any external traffic to the VIP.
## Designed for Cloud Natives
All resources in Porter are CRDs, including VIP, BGPPeer and BGPConfig. Users familiar with kubectl will find Porter very easy to use. Advanced users who want to customize Porter can call the Kubernetes API directly for their own development. High availability (HA) for Porter's core controller is coming soon.
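To give a feel for this declarative style (the API group, version, and field names below are illustrative, not taken from the Porter repository; consult the project documentation for the real schema), a BGP peer might be declared like:

```yaml
# Illustrative sketch only; see the Porter/OpenELB docs for the actual CRD schema.
apiVersion: network.example.io/v1alpha1   # hypothetical API group/version
kind: BgpPeer
metadata:
  name: leaf1
spec:
  peerAs: 65001             # the router's AS number
  peerAddress: 192.168.0.1  # the router's BGP endpoint
```

Since these are ordinary Kubernetes objects, they can be created, inspected, and version-controlled with kubectl like any other resource.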
## Cautions
Under BGP, user traffic to the VIP arrives at one of the nodes in the Kubernetes cluster, because the routes Porter advertises point to nodes rather than Pod IPs, which are not reachable externally. The path from a node to a Pod is maintained by kube-proxy, as below:
![Cautions](https://pek3b.qingstor.com/kubesphere-docs/png/20200611120948.png)
After SNAT, the traffic is sent to a random Pod. Since Porter adjusts routes based on the dynamic changes of a Service's endpoints, ensuring that each advertised next-hop node has a local Pod, the default kube-proxy behavior can be changed. Setting **externalTrafficPolicy: Local** on a Service produces the result shown below:
![ExternalTrafficPolicy](https://pek3b.qingstor.com/kubesphere-docs/png/20200611121114.png)
This method has two advantages:
1. The source IP is preserved, as the traffic is not SNATed
2. Traffic stays local to the node, saving one hop in the network
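The setting is a single field on the Service (hypothetical names):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # only nodes with local endpoints receive traffic
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

With this policy, kube-proxy forwards external traffic only to Pods on the receiving node, which is why Porter must keep the advertised next hops limited to nodes that actually host an endpoint.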
## Future Plans
1. Support for other simple routing protocols
2. More convenient VIP management
3. BGP policy support
4. Integration into KubeSphere with a UI provided
## Related Resources
- [KubeCon Shanghai: Porter - An Open Source Load Balancer for Bare Metal Kubernetes](https://www.youtube.com/watch?v=EjU1yAVxXYQ)
- [Porter Website](https://openelb.github.io/)