Delete a namespace in a perpetual Terminating state

21 Mar, 2019

The error: a namespace stuck in a perpetual Terminating state:

NAME             STATUS        AGE
cert-manager     Terminating   3h
default          Active        1y
kube-public      Active        1y
kube-system      Active        1y

First, try to clean the namespace:

kubectl delete all -n cert-manager --all --force --grace-period=0
kubectl delete ns cert-manager --force --grace-period=0

Set variables for the configuration:

export NAMESPACE_TO_DELETE="cert-manager"
export CLUSTER_NAME="gke_PRO-ID_ZONE-GCP_NAME-CLUSTER"
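
If you are not sure of the exact cluster name, you can list the names known to kubectl:

kubectl config get-clusters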

Create a service account with permissions (a binding sketch follows the manifest):

kubectl create -f - -o yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tmpadmin
EOF
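
By itself the account has no rights. Assuming RBAC is enabled on the cluster, grant it permissions with a binding; a minimal sketch using cluster-admin (the name tmpadmin-binding is chosen here for illustration):

# Bind the tmpadmin service account (default namespace) to cluster-admin
kubectl create clusterrolebinding tmpadmin-binding \
  --clusterrole=cluster-admin \
  --serviceaccount=default:tmpadmin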

Save the namespace definition so you can edit it:

kubectl get namespace $NAMESPACE_TO_DELETE -o json > tmp.json

Edit:

    "spec": {
        "finalizers": [
            "kubernetes"
        ]
    },

To:

    "spec": {
        "finalizers": []
    },
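
If jq is available, the same edit can be scripted (this writes to a second file, so pass tmp_finalize.json to the curl call below instead of tmp.json):

jq '.spec.finalizers = []' tmp.json > tmp_finalize.json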

Create the following variables:

APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$CLUSTER_NAME\")].cluster.server}")
TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='tmpadmin')].data.token}" | base64 -d)

Test the token:

curl -X GET $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure

Update the namespace:

curl -X PUT $APISERVER/api/v1/namespaces/$NAMESPACE_TO_DELETE/finalize -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" --data-binary @tmp.json  --insecure

After this, the namespace is deleted.
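
You can verify that it is gone:

kubectl get namespace $NAMESPACE_TO_DELETE
# Expected: Error from server (NotFound): namespaces "cert-manager" not found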

Clean up the service account:

kubectl delete sa tmpadmin
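
If you created the tmpadmin-binding sketched above, remove it as well:

kubectl delete clusterrolebinding tmpadmin-binding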

External Load Balancer for Kubernetes - HAProxy

10 Jun, 2017

You need:

  • A Kubernetes cluster
  • A new node for HAProxy

Installation on the HAProxy node

Install the basic software

yum install epel-release
yum install haproxy git socat python-pip
pip install jinja2
pip install deepdiff

Clone the repository into / (or another path) for the dynamic configuration of HAProxy

git clone https://github.com/Tedezed/Celtic-Kubernetes.git

Create the error HTML pages for the HAProxy service

mkdir /etc/haproxy/errors/
cp /Celtic-Kubernetes/external_loadbalancer_hap/errors/* /etc/haproxy/errors/
cp /Celtic-Kubernetes/external_loadbalancer_hap/system/haproxy.cfg /etc/haproxy/

Create the global state file

mkdir -p /var/state/haproxy/
touch /var/state/haproxy/global

Enable HAProxy

systemctl enable haproxy
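
Enabling only arranges the service to start on boot; to run HAProxy now as well:

systemctl start haproxy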

Test

python hap_manager_daemon.py start
python hap_manager_daemon.py stop
sh haproxy_reload

HAP Manager

You need the repository https://github.com/Tedezed/Celtic-Kubernetes.git

Modify configuration.json for hap_manager

{
  "kube_api": "morrigan:8080",
  "version": "v1",
  "file_conf": "template.cfg",
  "stats": true,
  "sleep": 3
}

  • Kube API master, the address of the Kubernetes API server:

      "kube_api": "ip_kube_api_server:port_http"
    
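Before starting the manager, you can check that the endpoint is reachable (here assuming the same morrigan:8080 address used in the example above):

curl http://morrigan:8080/api/v1/services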

systemd unit for hap_manager

Copy the hap_manager.service file

cp /Celtic-Kubernetes/external_loadbalancer_hap/system/hap_manager.service /lib/systemd/system/hap_manager.service

Set the permissions of the unit file

chmod 644 /lib/systemd/system/hap_manager.service

Reload the systemd daemon to pick up the new unit

systemctl daemon-reload

Start and enable hap_manager.service

systemctl start hap_manager.service

systemctl enable hap_manager.service

Check the generated settings

cat /etc/haproxy/haproxy.cfg | grep acl

Define services

Example rc (ReplicationController)

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80

Example svc; the type NodePort is required

apiVersion: v1
kind: Service
metadata:
  name: nginx-service-domain
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    name: http
  selector:
    name: nginx
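
Assuming the two manifests above are saved as nginx-rc.yaml and nginx-svc.yaml (filenames chosen here for illustration), create them with:

kubectl create -f nginx-rc.yaml
kubectl create -f nginx-svc.yaml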

Access the service at http://IP-SERVER-HAP/NAME-SERVICE/

Need a domain for the service? No problem: you can use the “domain” label.

Example svc with a domain

apiVersion: v1
kind: Service
metadata:
  name: nginx-service-domain
  labels:
    app: nginx
    domain: www.test-domain.com
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    name: http
  selector:
    name: nginx
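
With the domain label set, the balancer can route on the Host header; a quick test from any client (assuming the HAProxy node answers for that domain):

curl -H "Host: www.test-domain.com" http://IP-SERVER-HAP/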

Do not repeat the domain name

You can use manager_tools.py (the constraint_domain function) to avoid repeating a domain name. If it returns True, the domain name is already in use.

Example

constraint_domain("morrigan:8080","v1","www.test-domain.com")
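
Assuming manager_tools.py is importable from the repository directory, the same check can be run from the shell:

cd /Celtic-Kubernetes/external_loadbalancer_hap
python -c "from manager_tools import constraint_domain; print(constraint_domain('morrigan:8080', 'v1', 'www.test-domain.com'))"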

Galera Cluster on Debian 8

5 May, 2017

For this stage you need two nodes running Debian to deploy a MariaDB Galera Cluster. The next entry adds another two nodes with HAProxy and a VIP to load-balance the cluster.

The first step is to add the MariaDB repository to Debian (node01 and node02):

sudo apt-get install software-properties-common
sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 0xcbcb082a1bb943db
sudo add-apt-repository 'deb [arch=amd64,i386] http://mariadb.kisiek.net/repo/10.0/debian jessie main'
sudo apt-get update
sudo apt-get upgrade

When that is done, install the software for the Galera cluster (node01 and node02):

apt-get install -y rsync galera mariadb-galera-server

The next step is to edit the Galera configuration file:

For node01:

echo '[mysqld]
# MySQL Configuration
query_cache_size=0
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
query_cache_type=0
bind-address=0.0.0.0

# Galera Provider Configuration
wsrep_provider=/usr/lib/galera/libgalera_smm.so
#wsrep_provider_options="gcache.size=32G"

# Galera Cluster Configuration
wsrep_cluster_name="test_cluster"
wsrep_cluster_address="gcomm://192.168.30.11:4567,192.168.30.12:4567"

# Galera Synchronization Configuration
wsrep_sst_method=rsync
#wsrep_sst_auth=user:pass

# Galera Node Configuration
wsrep_node_address="192.168.30.11"
wsrep_node_name="node01"' > /etc/mysql/conf.d/galera.cnf ; chmod 770 /etc/mysql/conf.d/galera.cnf

For node02:

echo '[mysqld]
# MySQL Configuration
query_cache_size=0
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
query_cache_type=0
bind-address=0.0.0.0

# Galera Provider Configuration
wsrep_provider=/usr/lib/galera/libgalera_smm.so
#wsrep_provider_options="gcache.size=32G"

# Galera Cluster Configuration
wsrep_cluster_name="test_cluster"
wsrep_cluster_address="gcomm://192.168.30.11:4567,192.168.30.12:4567"

# Galera Synchronization Configuration
wsrep_sst_method=rsync
#wsrep_sst_auth=user:pass

# Galera Node Configuration
wsrep_node_address="192.168.30.12"
wsrep_node_name="node02"' > /etc/mysql/conf.d/galera.cnf ; chmod 770 /etc/mysql/conf.d/galera.cnf

BONUS configuration, e.g. appended to the same galera.cnf:

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

Configure the /etc/hosts file (node01 and node02):

192.168.30.11 node01
192.168.30.12 node02

Copy the file /etc/mysql/debian.cnf from node01 to node02.
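
For example, assuming root SSH access between the nodes:

scp /etc/mysql/debian.cnf root@node02:/etc/mysql/debian.cnf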

Stop the MySQL service (node01 and node02):

service mysql stop

Execute the next command to create the new cluster (node01):

service mysql start --wsrep-new-cluster

If you get the error WSREP: gcs connect failed: Connection timed out, the solution is to execute:

service mysql bootstrap

Restart all MySQL and Galera services (node01 and node02).

The following query returns the number of nodes in the Galera cluster:

mysql -u root -e 'SELECT VARIABLE_VALUE as "cluster size" FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME="wsrep_cluster_size"' -p
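
With both nodes joined, the output should look like this:

+--------------+
| cluster size |
+--------------+
| 2            |
+--------------+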

This finishes the first part.