internal DNS resolution with neutron network

The Networking service (Neutron) lets users control the DNS name assigned to ports by the internal DNS. In this post we enable internal DNS resolution with Neutron on an OpenStack cloud and look at the internal DNS functionality offered by the Networking service and its interaction with the Compute service. Related capabilities include:

  • Integration of the Compute service and the Networking service with an external DNS-as-a-Service (DNSaaS).
  • User control over the behavior of the Networking service with regard to DNS, using two attributes associated with ports, networks, and floating IPs.

Dnsmasq provides DNS caching and DHCP services. It handles DHCP, DNS, DNS caching, and TFTP, so it is effectively four servers in one. As a DNS server it can cache queries to speed up connections to previously visited sites, and as a DHCP server it can hand out internal IP addresses and routes to computers on a LAN. Either or both of these services can be used. dnsmasq is lightweight and easy to configure.

Steps to enable internal DNS resolution with neutron network

Edit the neutron.conf file and assign a value other than openstacklocal (its default) to the dns_domain parameter in the [DEFAULT] section. As an example:

vi /etc/neutron/neutron.conf

dns_domain = example.org.

Add dns to extension_drivers in the [ml2] section of ml2_conf.ini. As an example:

vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
extension_drivers = port_security,dns
Then complete the following steps (see the CLI sketch below for an alternative to Horizon):

  • Restart the neutron services and the dnsmasq daemon.
  • Create a new private network and subnet.
  • Note the IP address of the subnet's DHCP port.
  • Edit the new private network's subnet and set the noted DHCP port IP as its DNS name server, so that instances use the dnsmasq on that port for name resolution.
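If you prefer the CLI, here is a minimal sketch of the same flow (this assumes a recent python-openstackclient; the network name, subnet range, image, flavor, and angle-bracket placeholders are examples, not values from this setup):

openstack network create private-net
openstack subnet create private-subnet --network private-net --subnet-range 192.168.100.0/24
openstack port list --network private-net        # note the IP of the network:dhcp port
openstack subnet set private-subnet --dns-nameserver <dhcp-port-ip>
openstack server create --image cirros --flavor m1.tiny --nic net-id=<private-net-id> vm1
openstack port show <vm1-port-id>                # check the dns_name and dns_assignment fields

The dns_assignment field on the instance's port should show the hostname combined with the dns_domain configured above.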

 

Create a new instance and check the internal DNS resolution from inside it.

cloud-init and key pair error in OpenStack

Recently we received the following errors on the OpenStack console: the SSH key could not be injected and cloud-init could not contact the metadata server.

[ 43.395174] vdb: unknown partition table
[ 113.888499] cloud-init[774]: 2016-08-29 20:01:03,415 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 164.924980] cloud-init[774]: 2016-08-29 20:01:54,454 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [101/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 182.945299] cloud-init[774]: 2016-08-29 20:02:12,474 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 183.947488] cloud-init[774]: 2016-08-29 20:02:13,475 - DataSourceEc2.py[CRITICAL]: Giving up on md from ['http://169.254.169.254/2009-04-04/meta-data/instance-id'] after 120 seconds
[ 183.950442] cloud-init[774]: 2016-08-29 20:02:13,479 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [0/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 184.953521] cloud-init[774]: 2016-08-29 20:02:14,482 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [1/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 185.956850] cloud-init[774]: 2016-08-29 20:02:15,486 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [2/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 186.959196] cloud-init[774]: 2016-08-29 20:02:16,488 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [3/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 187.962298] cloud-init[774]: 2016-08-29 20:02:17,491 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [4/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 188.965372] cloud-init[774]: 2016-08-29 20:02:18,494 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [5/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 190.970043] cloud-init[774]: 2016-08-29 20:02:20,499 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [7/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 192.974139] cloud-init[774]: 2016-08-29 20:02:22,503 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [9/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 194.978289] cloud-init[774]: 2016-08-29 20:02:24,507 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [11/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 196.982307] cloud-init[774]: 2016-08-29 20:02:26,511 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [13/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 198.986454] cloud-init[774]: 2016-08-29 20:02:28,515 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [15/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 201.991528] cloud-init[774]: 2016-08-29 20:02:31,520 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [18/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 204.996829] cloud-init[774]: 2016-08-29 20:02:34,526 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [21/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 208.001978] cloud-init[774]: 2016-08-29 20:02:37,531 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [24/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 211.007219] cloud-init[774]: 2016-08-29 20:02:40,536 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [27/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 214.012246] cloud-init[774]: 2016-08-29 20:02:43,541 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [30/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 218.018210] cloud-init[774]: 2016-08-29 20:02:47,547 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [34/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 222.024298] cloud-init[774]: 2016-08-29 20:02:51,553 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [38/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 226.030538] cloud-init[774]: 2016-08-29 20:02:55,559 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [42/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 230.036614] cloud-init[774]: 2016-08-29 20:02:59,566 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [46/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 234.042767] cloud-init[774]: 2016-08-29 20:03:03,572 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [50/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 239.050140] cloud-init[774]: 2016-08-29 20:03:08,579 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [55/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 244.058137] cloud-init[774]: 2016-08-29 20:03:13,587 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [60/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 249.065886] cloud-init[774]: 2016-08-29 20:03:18,595 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [65/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 254.073089] cloud-init[774]: 2016-08-29 20:03:23,602 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [70/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 259.080255] cloud-init[774]: 2016-08-29 20:03:28,609 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [75/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 265.088215] cloud-init[774]: 2016-08-29 20:03:34,617 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [81/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 271.096308] cloud-init[774]: 2016-08-29 20:03:40,625 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [87/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 277.104435] cloud-init[774]: 2016-08-29 20:03:46,633 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [93/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 283.112577] cloud-init[774]: 2016-08-29 20:03:52,642 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [99/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 289.120693] cloud-init[774]: 2016-08-29 20:03:58,650 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [105/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 296.126349] cloud-init[774]: 2016-08-29 20:04:05,655 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [112/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 303.135384] cloud-init[774]: 2016-08-29 20:04:12,664 - url_helper.py[WARNING]: Calling 'http://10.0.0.2//latest/meta-data/instance-id' failed [119/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
[ 310.144137] cloud-init[774]: 2016-08-29 20:04:19,672 - DataSourceCloudStack.py[CRITICAL]: Giving up on waiting for the metadata from ['http://10.0.0.2//latest/meta-data/instance-id'] after 126 seconds
[ 310.794152] cloud-init[9076]: Cloud-init v. 0.7.5 running 'modules:config' at Mon, 29 Aug 2016 20:04:20 +0000. Up 310.73 seconds.
[ 311.130418] cloud-init[9096]: Cloud-init v. 0.7.5 running 'modules:final' at Mon, 29 Aug 2016 20:04:20 +0000. Up 311.07 seconds.
ci-info: no authorized ssh keys fingerprints found for user centos.
[ 311.154977] cloud-init[9096]: ci-info: no authorized ssh keys fingerprints found for user centos.
ec2:

As per the OpenStack docs: “Key pairs are SSH credentials that are injected into an instance when it starts. You can create or import key pairs. You must provide at least one key pair for each project.”

Solution:

This issue was fixed by adding the following setting to /etc/nova/nova.conf on the compute host and restarting the Nova services:

force_config_drive = true

Key pair injection is working fine for new virtual machines now.
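With a config drive forced, Nova attaches the instance metadata (including the SSH public key) as a small disk, so cloud-init no longer depends on the 169.254.169.254 metadata service that was timing out above. For reference, a minimal sketch of where the option goes and how to apply it (the service unit name assumes an RDO/CentOS compute node):

vi /etc/nova/nova.conf

[DEFAULT]
force_config_drive = true

# systemctl restart openstack-nova-compute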

 

 

Enable nested virtualization in OpenStack cloud

With nested virtualization, the guest operating system is itself a hypervisor that virtualizes not just processors and memory but also storage, networking hardware assists, and other resources. Practical nested virtualization with hardware assists for performance is now available in several hypervisors, though not always as efficiently as it could be. Linux KVM supports nesting on recent virtualization-enabled processors. How do you enable nested virtualization in an OpenStack cloud?

You need the vmx CPU flag to be exposed inside your instances.

How to enable Nested virtualization in OpenStack Cloud

I have installed OpenStack using Packstack on a physical server and verified that the setup works.

First, enable nested virtualization at the kernel (KVM module) level on the compute host:

[[email protected]]#echo "options kvm-intel nested=y" >> /etc/modprobe.d/dist.conf
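If no instances are running on the host, you can also reload the KVM module right away instead of waiting for the reboot mentioned below (a sketch; kvm_intel applies to Intel CPUs, use kvm_amd with its equivalent option on AMD hosts):

# modprobe -r kvm_intel
# modprobe kvm_intel nested=1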

Modify the following settings in the nova.conf file:

virt_type=kvm
...
cpu_mode=host-passthrough

host-passthrough – use the host CPU model exactly. This causes libvirt to tell KVM to pass through the host CPU with no modifications. The difference from host-model is that instead of just matching feature flags, every last detail of the host CPU is matched.

host-model – clone the host CPU feature flags.

Reboot your compute host.

Validate that nested virtualization is enabled at the kernel level:

[[email protected]]# cat /sys/module/kvm_intel/parameters/nested
Y

Launch a new instance on this node and validate that the vmx CPU flag is enabled inside the instance:

[[email protected] ~]# cat /proc/cpuinfo | grep vmx
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl eagerfpu pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm tpr_shadow vnmi flexpriority ept fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl eagerfpu pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm tpr_shadow vnmi flexpriority ept fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt
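You can also confirm from the compute host which CPU definition libvirt applied to the guest (a sketch; instance-00000001 is just an example libvirt domain name, use virsh list to find yours):

# virsh list
# virsh dumpxml instance-00000001 | grep -A2 '<cpu'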

That's all. The new virtual machine can now itself act as a hypervisor.

Increase VM root partition size in ESXi host

Use the following steps to extend a root partition that resides in a logical volume created with Logical Volume Manager (LVM) in a Red Hat/CentOS virtual machine. The root partition can be grown without rebooting the server. These simple steps increase a VM's root partition size on an ESXi host.

If possible, take a complete backup of the virtual machine prior to making these changes.

Current root partition size in our test VM.

[[email protected] ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 98G 972M 97G 1% /

How to increase the root partition size for Red Hat/CentOS VMs on an ESXi host?

We are going to increase the size of the root partition to 200 GB. Log in to your ESXi host, open the VM's Edit Settings page, and increase the size of Hard disk 1.

Once done, log in to the VM and rescan the SCSI bus.

First, check the name(s) of your scsi devices.
[[email protected] ~]# ls /sys/class/scsi_device/

Then rescan the SCSI bus. Below, replace '0\:0\:0\:0' with the actual SCSI bus name found with the previous command. Each colon is escaped with a backslash, which is what makes it look odd.

[[email protected] ~]# echo 1 > /sys/class/scsi_device/0\:0\:0\:0/device/rescan

That rescans the current SCSI bus, and the changed disk size will show up.
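If the new size still does not appear, you can also rescan the SCSI host adapters themselves (a sketch; the host* entries under /sys/class/scsi_host vary per system):

# for h in /sys/class/scsi_host/host*; do echo "- - -" > "$h/scan"; done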

Check your new disk size

[[email protected] ~]# fdisk -l

Disk /dev/sda: 214.7 GB, 214748364800 bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b817c

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 4196351 2097152 83 Linux
/dev/sda2 4196352 209715199 102759424 8e Linux LVM

Create a new partition using fdisk

[[email protected] ~]# fdisk /dev/sda
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type:
p primary (2 primary, 0 extended, 2 free)
e extended
Select (default p):
Using default response p
Partition number (3,4, default 3):
First sector (209715200-419430399, default 209715200):
Using default value 209715200
Last sector, +sectors or +size{K,M,G} (209715200-419430399, default 419430399):
Using default value 419430399
Partition 3 of type Linux and of size 100 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[[email protected] ~]# partprobe

[[email protected] ~]# fdisk -l

Disk /dev/sda: 214.7 GB, 214748364800 bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b817c

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 4196351 2097152 83 Linux
/dev/sda2 4196352 209715199 102759424 8e Linux LVM
/dev/sda3 209715200 419430399 104857600 83 Linux

Create a new PV from the new partition

[[email protected] ~]# pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name centos
PV Size 98.00 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 25087
Free PE 0
Allocated PE 25087
PV UUID zQUIbS-RGYr-5XES-TwoT-i6SN-4Mv1-6mT2Lq

[[email protected] ~]# pvcreate /dev/sda3
Physical volume “/dev/sda3” successfully created

[[email protected] ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 centos lvm2 a– 98.00g 0
/dev/sda3 centos lvm2 a– 100.00g 1020.00m

Attach the new PV to the existing root VG

[[email protected] ~]# vgdisplay
--- Volume group ---
VG Name centos
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 98.00 GiB
PE Size 4.00 MiB
Total PE 25087
Alloc PE / Size 25087 / 98.00 GiB
Free PE / Size 0 / 0
VG UUID nTczj7-qB13-nIdw-e2Lk-fHkh-lttZ-EBMAK6

[[email protected] ~]# vgextend centos /dev/sda3
Volume group “centos” successfully extended

[[email protected] ~]# vgs
VG #PV #LV #SN Attr VSize VFree
centos 2 1 0 wz–n- 197.99g 100.00g

Extend the LV size

[[email protected] ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root centos -wi-ao—- 98.00g

[[email protected] ~]# lvextend -L+99G /dev/centos/root
Size of logical volume centos/root changed from 98.00 GiB (25087 extents) to 197.00 GiB (50431 extents).
Logical volume root successfully resized.

[[email protected] ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root centos -wi-ao—- 197.00g

We are using the XFS filesystem, so the filesystem itself must be grown with xfs_growfs. The '-d' option grows it to the maximum available size.

[[email protected] ~]# xfs_growfs -d /
meta-data=/dev/mapper/centos-root isize=256 agcount=4, agsize=6422272 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=25689088, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal bsize=4096 blocks=12543, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 25689088 to 51641344

[[email protected] ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 197G 972M 196G 1% /
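For reference, if the root filesystem were ext3/ext4 instead of XFS, the equivalent last step would be resize2fs (a sketch, not used in this setup):

# resize2fs /dev/mapper/centos-root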

Alternatively, you can add a new virtual disk to the VM and create the PV on that disk instead of creating a new partition.

Install docker swarm and configure cluster

Docker Swarm is native clustering for Docker. The best part is that it exposes the standard Docker API, meaning that any tool you use to communicate with Docker (Docker CLI, Docker Compose, Dokku, Krane, and so on) works equally well with Docker Swarm. That is both an advantage and a disadvantage. Being able to use the familiar tools of your choosing is great, but for the same reason we are bound by the limitations of the Docker API: if the API doesn't support something, there is no way around it through the Swarm API, and some clever tricks need to be performed.

Installing Docker Swarm and configuring a cluster is easy, straightforward, and flexible. All we have to do is install one of the service discovery tools and run the swarm container on all nodes. The first step in creating a swarm on your network is to pull the Docker Swarm image. Then, using Docker, you configure the swarm manager and all the nodes to run Docker Swarm.

This method requires that you:

  • open a TCP port on each node for communication with the swarm manager
  • install Docker on each node
  • create and manage TLS certificates to secure your swarm

How to install docker swarm and configure cluster

Install Docker on all the nodes and start the Docker daemon with its remote API exposed, using the following command (it is better to run this from a screen session). I have used three node servers in my environment.

Master/node1 : ip-10-0-3-227
node2 : ip-10-0-3-226
node3 : ip-10-0-3-228

Log in to all your servers and start Docker with the API exposed.

#docker -H tcp://0.0.0.0:2375 -d &
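On Docker packages where the daemon is managed by systemd, the -d flag may not be accepted; in that case you can expose the API through the daemon options instead (a sketch assuming the /etc/sysconfig/docker file shipped with the CentOS docker package):

vi /etc/sysconfig/docker

OPTIONS='--selinux-enabled -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock'

# systemctl restart docker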

Install Docker swarm on the master node and create a swarm token using the following command.

[[email protected] ~]# docker -H tcp://10.0.3.227:2375 run --rm swarm create 

f63707621771250dc3925b8f4f6027ae

Note down the swarm token generated by the above command, as you need it for the entire cluster setup.

Now log in to all your node servers and execute the following command to join them to the swarm.

Node1

Syntax Example

docker -H tcp://<node_ip>:2375 run -d swarm join --addr=<node_ip>:2375 token://<cluster_token>

[[email protected] ~]#docker -H tcp://10.0.3.226:2375 run -d swarm join --addr=10.0.3.226:2375 token://f63707621771250dc3925b8f4f6027ae
 Unable to find image 'swarm:latest' locally
 latest: Pulling from docker.io/swarm
 ff560331264c: Pull complete
 d820e8bd65b2: Pull complete
 8d00f520df22: Pull complete
 e006ebc1de3a: Pull complete
 7390274120a7: Pull complete
 0036abe904ed: Pull complete
 bd420ed092aa: Pull complete
 8db3c7d27267: Pull complete
 docker.io/swarm:latest: The image you are pulling has been verified. Important: image verification is a tech preview
 feature and should not be relied on to provide security.
 Digest: sha256:e72c009813e43c68e01019df9d481e3009f41a26a4cad897a3b832100398459b
 Status: Downloaded newer image for docker.io/swarm:latest
 d04d00d5afacc37f290b92ed01658eca147c5510533d9cb0a0dfc1aa20edfcef

Node2

[[email protected] ~]# docker -H tcp://10.0.3.228:2375 run -d swarm join --addr=10.0.3.228:2375 token://f63707621771250dc3925b8f4f6027ae

Verify the swarm setup on your node server using the following command.

[[email protected] ~]# docker -H tcp://10.0.3.226:2375 ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d04d00d5afac swarm "/swarm join --addr= 2 minutes ago Up 2 minutes 2375/tcp 
sleepy_engelbart

Replace the IP address and check all the node servers the same way.

Now that all the nodes have joined the cluster, set up the swarm manager on the master node using the following command.

[[email protected] ~]# docker -H tcp://10.0.3.227:2375 run -d -p 5000:5000 swarm manage token://f63707621771250dc3925b8f4f6027ae

To list all the nodes in the cluster, execute the following Docker command from the docker client node.

[[email protected] ~]# docker -H tcp://10.0.3.227:2375 run --rm swarm list token://f63707621771250dc3925b8f4f6027ae
10.0.3.227:2375
10.0.3.226:2375
10.0.3.228:2375

Execute the following command from the client and it will show the node server details.

Syntax

docker -H tcp://<node_ip>:2375 info

[[email protected] ~]#docker -H tcp://10.0.3.226:2375 info

Next, test your cluster setup by deploying a container onto it. For example, run a test busybox container from the Docker client using the following command.

[[email protected] ~]# docker -H tcp://10.0.3.227:2375 run -dt --name swarm-test busybox /bin/sh

Now list the running Docker containers using the following command.

[[email protected] ~]# docker -H tcp://10.0.3.227:2375 ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS  NAMES 
6aaec7894903 busybox "/bin/sh" 2 hours ago Up 2 hours swarm-test 
7d1e74741eb1 swarm "/swarm manage token 2 hours ago Up 2 hours 2375/tcp, 0.0.0.0:5000->5000/tcp goofy_lalande 
f0b654832976 swarm "/swarm join --addr= 2 hours ago Up 2 hours 2375/tcp sharp_carson

That's it. These are the steps to install Docker Swarm and configure the cluster.

Install openstack liberty using packstack

Packstack is a utility that uses Puppet modules to automatically deploy various parts of OpenStack on multiple pre-installed servers over SSH. The utility is still in its early stages and many configuration options have yet to be added. Currently Fedora, Red Hat Enterprise Linux (RHEL), and compatible derivatives of both are supported. Here we discuss how to install OpenStack Liberty using Packstack on a CentOS server.

How to install OpenStack liberty using packstack in centos 7

 

Update all your existing packages.

#yum update -y

Install all other useful tools

#yum install -y wget net-tools mlocate

Flush yum cache

#yum clean all
#yum repolist

Set SELinux to permissive mode

# setenforce 0

Disable the firewalld and NetworkManager services

# systemctl stop firewalld
# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.

# systemctl stop NetworkManager
# systemctl disable NetworkManager
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.

 

Install the RDO repository for Liberty.

#wget https://repos.fedorapeople.org/repos/openstack/openstack-liberty/rdo-release-liberty-2.noarch.rpm
#rpm -ivh rdo-release-liberty-2.noarch.rpm

Install openstack packstack

yum install -y openstack-packstack

Generate an OpenStack answer file and customize it to enable or disable components; also make sure to update the management IP address.

#packstack --gen-answer-file=youranwserfile.packstack

NOTE: If you want SSL support for Horizon, install your certificates into /etc/ssl/certs and enable SSL in the answer file:
CONFIG_HORIZON_SSL=y

Once the modifications are done, run Packstack with the answer file.

#packstack --answer-file=youranwserfile.packstack

It will take a few minutes to complete the installation, which also creates the admin and demo user credentials.

 

**** Installation completed successfully ******

Additional information:
* A new answerfile was created in: /root/packstack-answers-20160105-040349.txt
* Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
* Warning: NetworkManager is active on 127.0.0.1. OpenStack networking currently does not work on systems that have the Network Manager service enabled.
* File /root/keystonerc_admin has been created on OpenStack client host 127.0.0.1. To use the command line tools you need to source the file.
* To access the OpenStack Dashboard browse to http://127.0.0.1/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
* To use Nagios, browse to http://127.0.0.1/nagios username: nagiosadmin, password:
* The installation log file is available at: /var/tmp/packstack/20160105-040348-S2GgMl/openstack-setup.log
* The generated manifests are available at: /var/tmp/packstack/20160105-040348-S2GgMl/manifests

 

Once the installation is completed, continue with the post-install configuration below.

Setup network bridge for external network

In order to connect OpenStack to an external network, you need to configure a network bridge (br-ex) on your server and attach the public interface to it.

Next add the following to the /etc/neutron/plugin.ini file.

network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-ex
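For the bridge itself, a minimal sketch of the ifcfg files (this assumes the Open vSwitch network-scripts integration that RDO installs; the interface name ens192 and the address placeholders are examples to adapt to your host):

vi /etc/sysconfig/network-scripts/ifcfg-br-ex

DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=<your_public_ip>
NETMASK=<your_netmask>
GATEWAY=<your_gateway>
ONBOOT=yes

vi /etc/sysconfig/network-scripts/ifcfg-ens192

DEVICE=ens192
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes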

Restart the network and neutron services.

Set up cinder-volumes on your secondary drive

By default, Packstack creates a 20 GB cinder-volumes volume group. If you want to back Cinder with a secondary drive instead, recreate the volume group as follows.

Remove the old cinder-volumes volume group:

#vgremove cinder-volumes

Create a physical volume from your secondary drive:

#pvcreate /dev/sdb

Create volume group using that physical volume.

#vgcreate cinder-volumes /dev/sdb

That’s it.

 

Verify your installation and admin credentials. Packstack creates the keystonerc_admin and keystonerc_demo files in /root; source the admin file to use the command line tools.

[[email protected] ~(keystone_admin)]#source /root/keystonerc_admin
[[email protected] ~(keystone_admin)]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+--------+--------+--------+
| 0aecad86-309f-43fc-925c-a6c9bba81b6f | cirros | ACTIVE | |
+--------------------------------------+--------+--------+--------+
[[email protected] ~(keystone_admin)]# nova hypervisor-list
+----+--------------------------------+-------+---------+
| ID | Hypervisor hostname | State | Status |
+----+--------------------------------+-------+---------+
| 1 | openstack-liberty.apporbit.com | up | enabled |
+----+--------------------------------+-------+---------+

That's it. You can now log in to the OpenStack Horizon dashboard.

http://10.47.13.196/dashboard/

 

Errors:
1) ERROR : Error appeared during Puppet run: 10.47.13.196_ring_swift.pp
Error: /Stage[main]/Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[object]/Exec[rebalance_object]: Failed to call
refresh: swift-ring-builder /etc/swift/object.builder rebalance returned 1 instead of one of [0]

Solution

Remove everything under /etc/swift/ and then run Packstack again.

2) Error: Unable to retrieve volume limit information.

Solution

vi /etc/cinder/cinder.conf
[keystone_authtoken]
auth_uri = http://192.168.1.10:5000
auth_url = http://192.168.1.10:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = services
username = cinder
password = eertr6645643453

 

Enable thin provisioning for the cinder volume

Add the following entries under your driver section, i.e. [lvm]:

vi /etc/cinder/cinder.conf

volume_clear = none
lvm_type = thin
volume_clear_size = 0

Change the following values in the Nova configuration:

vi /etc/nova/nova.conf

volume_clear=none
volume_clear_size=0

Restart both the Nova and Cinder services.
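On an RDO/Packstack install, the relevant units can be restarted like this (the unit names are an assumption for a CentOS 7 RDO setup):

# systemctl restart openstack-nova-compute
# systemctl restart openstack-cinder-api openstack-cinder-scheduler openstack-cinder-volume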

 

 

kubernetes installation and configuration on centos 7

Kubernetes is an open-source platform for automating deployment, scaling, and operations of application containers across clusters of hosts. It aims to provide better ways of managing related, distributed components across varied infrastructure.

Kubernetes is,

  • lean: lightweight, simple, accessible
  • portable: public, private, hybrid, multi-cloud
  • extensible: modular, pluggable, hookable, composable
  • self-healing: auto-placement, auto-restart, auto-replication

Kubernetes has several components and it works in server-client setup, where it has a master providing centralized control for a number of minions.

etcd – A highly available key-value store for shared configuration and service discovery.
flannel – an overlay network fabric enabling container connectivity across multiple servers.
kube-apiserver – Provides the API for Kubernetes orchestration.
kube-controller-manager – Enforces Kubernetes services.
kube-scheduler – Schedules containers on hosts.
kubelet – Processes a container manifest so the containers are launched according to how they are described.
kube-proxy – Provides network proxy services.
Docker – An API and framework built around Linux Containers (LXC) that allows for the easy management of containers and their images.


How to install Kubernetes and set up minions on CentOS 7

We are using the following example master and minion hosts. You can add more nodes using the same installation procedure as for the Kubernetes minion nodes.

kub-master = 192.168.1.10
kub-minion1 = 192.168.1.11
kub-minion2 = 192.168.1.12

Prerequisites

1) Configure the hostnames in the /etc/hosts file on all nodes.

2) Disable the firewall on all nodes to avoid conflicts with Docker's iptables rules:

# systemctl stop firewalld
# systemctl disable firewalld
3) Install and enable NTP on all nodes:

# yum -y install ntp
# systemctl start ntpd
# systemctl enable ntpd

Setting up the Kubernetes Master server

4) Install etcd and Kubernetes through yum:

# yum -y install etcd kubernetes docker
5) Configure etcd to listen to all IP addresses.

# vi /etc/etcd/etcd.conf

ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"

6) Configure Kubernetes API server

vi /etc/kubernetes/apiserver

KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet_port=10250"
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""

7) Use the following loop to start and enable the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler services.

# for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
 systemctl restart $SERVICES
 systemctl enable $SERVICES
 systemctl status $SERVICES 
done

8) Install and configure the flannel overlay network fabric so that containers on different minions can communicate with each other:

# yum -y install flannel

Configure the private network range for flannel in etcd:

# etcdctl mk /coreos.com/network/config '{"Network":"10.10.0.0/16"}'

That's it.

Setting up the Kubernetes minion node servers

1) Log in to your minion server and install flannel, Kubernetes, and Docker using yum:

# yum -y install docker flannel kubernetes
2) Point flannel to the etcd server.

vi /etc/sysconfig/flanneld

FLANNEL_ETCD="http://192.168.1.10:2379"

3) Update the Kubernetes config to point to the Kubernetes master API server:

vi /etc/kubernetes/config

KUBE_MASTER="--master=http://192.168.1.10:8080"

4) Configure kubelet service

vi /etc/kubernetes/kubelet

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
# change the hostname to this host’s IP address
KUBELET_HOSTNAME="--hostname_override=192.168.1.11"
KUBELET_API_SERVER="--api_servers=http://192.168.1.10:8080"
KUBELET_ARGS=""

That's it. Repeat the same steps on all your minions.

5) Start and enable all the services:

for SERVICES in kube-proxy kubelet docker flanneld; do
 systemctl restart $SERVICES
 systemctl enable $SERVICES
 systemctl status $SERVICES 
done

Verify your flannel network interface.

#ip a | grep flannel | grep inet

Now log in to the Kubernetes master node and verify the minions' status:

#kubectl get nodes
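As an optional smoke test, you can schedule a container from the master and check which minion it lands on (a sketch; nginx is just an example image, not part of this setup):

#kubectl run nginx --image=nginx
#kubectl get pods -o wide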

That's it. Verify that your minion nodes are running fine.

SaltStack installation on Centos 7

SaltStack, or Salt, is open source configuration management software and a remote execution engine that also handles code deployment and flexible communication topologies. Salt competes with popular configuration management tools like Chef and Puppet and claims to scale to tens of thousands of servers. Salt has been designed to be highly modular and easily extensible, with the goal of making it easily moldable to diverse applications.

  • There is a master server and it connects to the agent servers (called minions) in your infrastructure.
  • The master can run commands on the minions in parallel, which is what makes Salt very fast.
  • The minions execute the commands sent by the master and return the results.

SaltStack installation on a CentOS 7 server.

Log in to your master server.

Master – 192.168.1.5

To install using the SaltStack repository

rpm --import https://repo.saltstack.com/yum/redhat/7/x86_64/latest/SALTSTACK-GPG-KEY.pub

vi /etc/yum.repos.d/saltstack.repo

[saltstack-repo]
name=SaltStack repo for RHEL/CentOS $releasever
baseurl=https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest
enabled=1
gpgcheck=1
gpgkey=https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest/SALTSTACK-GPG-KEY.pub
#yum clean expire-cache
#yum update

Install the salt-minion, salt-master, or other Salt components:

yum install salt-master
yum install salt-minion
yum install salt-ssh
yum install salt-syndic
yum install salt-cloud
#chkconfig salt-master on
#service salt-master start

Configure the Salt master

Salt configuration is very simple. The default configuration for the master will work for most installations and the only requirement for setting up a minion is to set the location of the master in the minion configuration file.

The configuration files will be installed to /etc/salt and are named after the respective components, /etc/salt/master, and /etc/salt/minion.

By default the Salt master listens on ports 4505 and 4506 on all interfaces (0.0.0.0). To bind Salt to a specific IP, change the interface setting in /etc/salt/master.

Find:

# The address of the interface to bind to
#interface: 0.0.0.0

Replace with:

# The address of the interface to bind to
interface: youripaddress

In my case, I have set interface: 192.168.1.5.

Setting the states file_roots directory

All of salt’s policies or rather salt “states” need to live somewhere. The file_roots directory is the location on disk for these states. For this article we will place everything into /salt/states/base.

Find:

#file_roots:
#  base:
#    - /srv/salt

Replace with:

file_roots:
  base:
    - /salt/states/base

Setting the pillar_roots

The last item that we need for now is the pillar_roots dictionary. The pillar system is used to store configuration data that can be restricted to certain nodes. This allows us to customize behavior and to prevent sensitive data from being seen by infrastructure components not associated with it. The format mirrors file_roots exactly. The location of our pillar data will be /salt/pillars/base:

Find:

#pillar_roots:
#  base:
#    - /srv/pillar

Replace with:

pillar_roots:
  base:
    - /salt/pillars/base

 

Create those folders:

# mkdir /salt/pillars/base
# mkdir /salt/states/base

Restart the salt-master service

# service salt-master restart
Redirecting to /bin/systemctl restart salt-master.service

That's it.

Configure the Salt minion

minion – 192.168.1.6

Install the SaltStack repository and update the repos as before, using the same /etc/yum.repos.d/saltstack.repo.

Install salt minion

#yum install salt-minion

Update your salt master connection details.

# vi /etc/salt/minion

Find:

#master: salt

Replace with:

master: yourmasterip

In my case, I have set master: 192.168.1.5.

# service salt-minion restart
Redirecting to /bin/systemctl restart salt-minion.service

That's it. Once the salt-minion service is restarted, the minion will start trying to communicate with the master. Go to the master node and accept the minion's key.

List the available keys

[[email protected] ~]# salt-key -L
Accepted Keys:
Denied Keys:
Unaccepted Keys:
192.168.1.6
Rejected Keys:

Accept the minion key

[[email protected] ~]# salt-key -a 192.168.1.6
The following keys are going to be accepted:
Unaccepted Keys:
209.205.208.100
Proceed? [n/Y] y
Key for minion 192.168.1.6 accepted.

To list all the accepted keys

#salt-key --list all
Sending commands

Communication between the Master and a Minion may be verified by running the test.ping command:

# salt 192.168.1.6 test.ping
192.168.1.6:
 True

To ping all minions:

# salt '*' test.ping
192.168.1.6:
 True
minion2:
 True
minion3:
 True
minion4:
 True

To check a minion's disk usage:

# salt '192.168.1.6' disk.usage
192.168.1.6:
 ----------
 /:
 ----------
 1K-blocks:
 37329092
 available:
 36223528
 capacity:
 3%
 filesystem:
 /dev/mapper/centos-root
 used:
 1105564
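Arbitrary shell commands can be run the same way through the cmd module (uptime is just an example command):

# salt '*' cmd.run 'uptime'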

 

 

Enable Outbound simple NAT on FirewallD

Firewalld

firewalld provides a dynamically managed firewall with support for network “zones” to assign a level of trust to a network and its associated connections and interfaces. It has support for IPv4 and IPv6 firewall settings. It supports Ethernet bridges and has a separation of runtime and permanent configuration options. It also has an interface for services or applications to add firewall rules directly.

Network Zones

Firewalls can be used to separate networks into different zones based on the level of trust the user has decided to place on the devices and traffic within that network. NetworkManager informs firewalld to which zone an interface belongs. An interface’s assigned zone can be changed by NetworkManager or via the firewall-config tool which can open the relevant NetworkManager window for you.

The zone settings in /etc/firewalld/ are a range of preset settings which can be quickly applied to a network interface.

  • drop: The lowest level of trust. All incoming connections are dropped without reply and only outgoing connections are possible.
  • block: Similar to the above, but instead of simply dropping connections, incoming requests are rejected with an icmp-host-prohibited or icmp6-adm-prohibited message.
  • public: Represents public, untrusted networks. You don’t trust other computers but may allow selected incoming connections on a case-by-case basis.
  • external: External networks in the event that you are using the firewall as your gateway. It is configured for NAT masquerading so that your internal network remains private but reachable.
  • internal: The other side of the external zone, used for the internal portion of a gateway. The computers are fairly trustworthy and some additional services are available.
  • dmz: Used for computers located in a DMZ (isolated computers that will not have access to the rest of your network). Only certain incoming connections are allowed.
  • work: Used for work machines. Trust most of the computers in the network. A few more services might be allowed.
  • home: A home environment. It generally implies that you trust most of the other computers and that a few more services will be accepted.
  • trusted: Trust all of the machines in the network. The most open of the available options and should be used sparingly.

To use the firewall, we can create rules and alter the properties of our zones and then assign our network interfaces to whichever zones are most appropriate.

How to enable simple NAT with FirewallD on CentOS 7

You can enable outbound NAT (masquerading) with FirewallD on a CentOS 7 server as follows.

Start your Firewalld

# systemctl start firewalld.service

 

Enable IP Forwarding

sysctl -w net.ipv4.ip_forward=1
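To make IP forwarding survive a reboot, also persist it in a sysctl configuration file (a sketch; the file name is just a convention):

# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.d/99-ipforward.conf
# sysctl -p /etc/sysctl.d/99-ipforward.conf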

Check your network interfaces (ifconfig output shown below).

ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.10 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::20c:29ff:fecc:ac0c prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:cc:ac:0c txqueuelen 1000 (Ethernet)
RX packets 354931 bytes 23015677 (21.9 MiB)
RX errors 0 dropped 52 overruns 0 frame 0
TX packets 6896 bytes 626333 (611.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

ens192: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 104.24.101.150 netmask 255.255.255.224 broadcast 104.24.101.145
inet6 fe80::20c:29ff:fecc:ac16 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:cc:ac:16 txqueuelen 1000 (Ethernet)
RX packets 537458 bytes 41460161 (39.5 MiB)
RX errors 0 dropped 59 overruns 0 frame 0
TX packets 195260 bytes 47842690 (45.6 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
We are using ens160 as the private network interface and ens192 as the public interface.

Check the current firewalld zone configuration:

# firewall-cmd --list-all
# firewall-cmd --list-all --zone=external

Assign the public interface to the external zone permanently.

# firewall-cmd --change-interface=ens192 --zone=external --permanent

Reload firewalld

# firewall-cmd --complete-reload
# firewall-cmd --list-all --zone=external

Assign the private interface to the internal zone permanently.

# firewall-cmd --change-interface=ens160 --zone=internal --permanent

Set the internal zone as the default

# firewall-cmd --set-default-zone=internal

Reload firewalld

# firewall-cmd --complete-reload

Add the DNS service to the internal zone permanently

# firewall-cmd --zone=internal --add-service=dns --permanent
# firewall-cmd --complete-reload

Verify the external zone (masquerading should be active):

# firewall-cmd --list-all --zone=external

Done.

Log in to your private network server and configure the ens160 IP address (192.168.1.10) as its gateway.

# ssh [email protected]

Ping an external host to verify that NAT is working:

ping google.com


What is the VMware memory balloon driver

The memory balloon driver (vmmemctl) collaborates with the server to reclaim pages that are considered least valuable by the guest operating system. The driver uses a proprietary ballooning technique that provides predictable performance that closely matches the behavior of a native system under similar memory constraints. This technique increases or decreases memory pressure on the guest operating system, causing the guest to use its own native memory management algorithms. When memory is tight, the guest operating system determines which pages to reclaim and, if necessary, swaps them to its own virtual disk.


Put simply, ballooning is the process by which the hypervisor reclaims memory from a virtual machine. It happens when the ESXi host is running low on physical memory because the memory demand of the virtual machines is higher than the host can satisfy.

First, you need to install VMware Tools on your VM for this to work properly.

Install VMware Tools (open-vm-tools) on CentOS:

#yum install open-vm-tools

or you can install it via the vSphere client or web client.
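After installing open-vm-tools, make sure its service is running (a sketch assuming the CentOS 7 open-vm-tools package, which ships the vmtoolsd unit):

# systemctl start vmtoolsd
# systemctl enable vmtoolsd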

You can use esxtop and follow the steps below to check ballooning activity.

1. Connect to the ESXi server via SSH and type esxtop. By default it shows the CPU stats; switch to the memory stats by pressing "m".

2. By default, the memory stats view will not show the balloon driver stats. To add the field, press "f".

3. Press "j" to add the MCTL stats and press "Enter" to switch back to the memory stats view.

4. Now look at the MCTL value: "Y" means the balloon driver is enabled and running, and "N" means it is not running.

If you only want to see the virtual machines, press "V".

Monitor memory ballooning while you run applications on your VM.