Sunday 22 November 2020

etcd cluster

 

{
  export ETCD_VER=v3.4.10
  wget -q "https://github.com/etcd-io/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz"
  tar zxf etcd-${ETCD_VER}-linux-amd64.tar.gz
  mv etcd-${ETCD_VER}-linux-amd64/etcd* /usr/local/bin/
  rm -rf etcd-${ETCD_VER}-linux-amd64*
  PATH=$PATH:/usr/local/bin
}
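A quick sanity check that the binaries landed on the PATH (optional step, not in the original notes):

etcd --version
etcdctl version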

Change the name below to etcd1, etcd2 or etcd3 to match the node you are on
ETCD_NAME="etcd3"
NODE_IP=$(hostname -i)   # assumes hostname -i resolves to this node's single private IP

ETCD1_IP="172.31.8.194"
ETCD2_IP="172.31.12.116"
ETCD3_IP="172.31.13.204"


cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=etcd

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --initial-advertise-peer-urls http://${NODE_IP}:2380 \\
  --listen-peer-urls http://${NODE_IP}:2380 \\
  --advertise-client-urls http://${NODE_IP}:2379 \\
  --listen-client-urls http://${NODE_IP}:2379,http://127.0.0.1:2379 \\
  --initial-cluster-token etcd-cluster-1 \\
  --initial-cluster etcd1=http://${ETCD1_IP}:2380,etcd2=http://${ETCD2_IP}:2380,etcd3=http://${ETCD3_IP}:2380 \\
  --initial-cluster-state new
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
{
  systemctl daemon-reload
  systemctl enable --now etcd
}
ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:2379 member list
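Once all three nodes are up, health can be checked across every member (assuming the ETCD1_IP/ETCD2_IP/ETCD3_IP variables from above are still set in the shell):

ETCDCTL_API=3 etcdctl --endpoints=http://${ETCD1_IP}:2379,http://${ETCD2_IP}:2379,http://${ETCD3_IP}:2379 endpoint health
ETCDCTL_API=3 etcdctl --endpoints=http://${ETCD1_IP}:2379,http://${ETCD2_IP}:2379,http://${ETCD3_IP}:2379 endpoint status --write-out=table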
 
The etcd cluster is now ready.
 
How to use this cluster with kubeadm?
 {

ETCD1_IP="172.31.8.194"
ETCD2_IP="172.31.12.116"
ETCD3_IP="172.31.13.204"

cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: "192.168.0.0/16"
etcd:
  external:
    endpoints:
    - http://${ETCD1_IP}:2379
    - http://${ETCD2_IP}:2379
    - http://${ETCD3_IP}:2379
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "172.16.16.100"   # set this to the control-plane node's own IP
EOF

} 
 
kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=all
kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
kubeadm token create --print-join-command
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config  
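To confirm the control plane is really using the external etcd (optional checks, not in the original notes): the apiserver manifest should list the three endpoints, and Kubernetes keys should be visible from any etcd node:

grep etcd-servers /etc/kubernetes/manifests/kube-apiserver.yaml
ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:2379 get /registry/namespaces/default --keys-only   # run on an etcd node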

How to delete etcd data in the cluster?
Stop etcd first, then wipe the config and data directories on each node:
# systemctl stop etcd
# rm -rf /etc/etcd/*
# rm -rf /var/lib/etcd/*
 
 

Sunday 15 November 2020

Login and reboot records

/var/run/utmp (virtual file)
Read by the who, w and uptime commands

who command output format:
<username> <terminal device file> <login date> <login time> (<IP address or display the user logged in from>)
vvdn     tty7         2020-11-15 15:02 (:0)
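For past logins and reboots (kept in /var/log/wtmp rather than utmp), the last command is the usual tool; for example (not in the original notes):

last            # recent logins, from /var/log/wtmp
last reboot     # reboot records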

 

What is the difference between pts and tty?
tty - virtual terminals
https://www.youtube.com/watch?v=vAr9PM9dEtE
Used for executing commands and giving input.
You cannot use the mouse in a virtual terminal.
Enables different users to work on different programs at the same time on the same computer.
Enter a virtual terminal: Ctrl+Alt+F1
There are 6 virtual terminals: Ctrl+Alt+F1 to F6
To go back to the main (graphical) screen: Ctrl+Alt+F7
tty is the teletype number
pts - pseudo-terminals: the terminal devices allocated to terminal-emulator windows and SSH sessions (e.g. /dev/pts/0), as opposed to the real virtual consoles /dev/tty1 to /dev/tty6.

Some useful commands
reset: reset the terminal
history: list of commands executed by the user
Ctrl+d: log out of the terminal
Ctrl+Alt+Del: reboot the system




Virtual IP:
VRRP: Virtual Router Redundancy Protocol
keepalived is the software that implements VRRP on Linux, floating a virtual IP between nodes.

/etc/keepalived/keepalived.conf
vrrp_instance httpd2 {
 state BACKUP
 interface eth0
 virtual_router_id 101
 priority 100
 authentication {
  auth_type PASS
  auth_pass 1234
 }
 virtual_ipaddress {
  192.168.254.100
 }
}
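The config above is for the BACKUP node. A minimal sketch of the matching MASTER config for the other node (same virtual_router_id and password, higher priority so it holds the VIP; interface and values assumed from above):

vrrp_instance httpd2 {
 state MASTER
 interface eth0
 virtual_router_id 101
 priority 150
 authentication {
  auth_type PASS
  auth_pass 1234
 }
 virtual_ipaddress {
  192.168.254.100
 }
}

After systemctl enable --now keepalived on both nodes, ip addr show eth0 on the master should list 192.168.254.100.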

Installing pcs cluster...
Check the selinux status: sestatus

yum repo:
[rhel]
name=redhatrepo
baseurl=file:///directory
enabled=1
gpgcheck=0

createrepo /directory

yum install pcs pacemaker fence-agents lvm2-cluster resource-agents psmisc policycoreutils-python gfs2-utils -y

Check password expiry of a user:
chage -l <username>
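Before the pcs cluster auth step below will work, pcsd must be running and the hacluster user needs a password on both nodes (the usual prep on RHEL/CentOS 7, not in the original notes):

systemctl enable --now pcsd
echo redhat | passwd --stdin hacluster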

pcs cluster auth ip-172-31-14-80.ap-south-1.compute.internal ip-172-31-1-112.ap-south-1.compute.internal
When prompted, enter:
username: hacluster
password: redhat

Create cluster
pcs cluster setup --start --name dheerajPCScluster ip-172-31-14-80.ap-south-1.compute.internal ip-172-31-1-112.ap-south-1.compute.internal --force


pcs cluster start --all

Checking status
systemctl status pacemaker
systemctl status corosync
pcs status
crm_mon -r1
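With no fence devices configured, pcs status will warn about STONITH; in a throwaway lab like this it is often disabled (never in production):

pcs property set stonith-enabled=false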


pcs cluster destroy