Configure a Docker Local Registry Proxy Cache

If you run multiple servers with the Docker daemon, each daemon goes out to the internet and fetches any image it doesn't have locally from the Docker repository or your private Docker registry. This consumes extra internet bandwidth and server resources. To avoid this, you can configure a local Docker registry proxy cache mirror and point all of the servers' Docker daemons at it to pull images.

It is possible to set up a local Docker registry that acts as a cache for already-pulled images. If an image is not cached, the proxy pulls it from the public Docker registry and stores it locally before handing it back to you. On subsequent requests, the registry mirror serves the image to clients from its own storage.

Docker Registry Proxy Cache Mirror

How to configure a Registry as a pull-through cache

The easiest way to run a registry as a pull-through cache is to run the official Registry image and set proxy.remoteurl in /etc/docker/registry/config.yml, as described below.

First, extract the default config.yml from the registry image:

docker run -it --rm --entrypoint cat registry:2 /etc/docker/registry/config.yml > /var/lib/registry/config.yml

To configure the Registry to run as a pull-through cache, add a proxy section to the config file config.yml:

proxy:
  remoteurl: https://registry-1.docker.io
  username: [username]
  password: [password]

The ‘username’ and ‘password’ settings are optional.

The proxy structure allows a registry to be configured as a pull-through cache to Docker Hub.

# vi /var/lib/registry/config.yml

# Example configuration file:

version: 0.1
log:
  fields:
    service: registry
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
  headers:
    X-Content-Type-Options: [nosniff]
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
proxy:
  remoteurl: https://registry-1.docker.io

Start your registry proxy cache container

# docker run -d --restart=always -p 5000:5000 --name registry-mirror -v /var/lib/registry:/var/lib/registry registry:2 /var/lib/registry/config.yml
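
Alternatively, you can mount the edited config over the image's default path so no extra argument is needed; a sketch assuming the same host paths as above:

# docker run -d --restart=always -p 5000:5000 --name registry-mirror \
    -v /var/lib/registry:/var/lib/registry \
    -v /var/lib/registry/config.yml:/etc/docker/registry/config.yml:ro \
    registry:2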

Verify your registry proxy cache is up and running on your server.

# curl localhost:5000/v2/_catalog
{"repositories":[]}

Configure the Docker daemon with registry mirror

Log in to your remote Docker server.

Either pass the --registry-mirror option when starting dockerd manually, or edit /etc/docker/daemon.json and add the registry-mirrors key and value to make the change persistent:

{
  "registry-mirrors": ["http://<registry-mirror-host>:5000"]
}

Save the file and reload Docker for the change to take effect.
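
For example, on a systemd-based host (assuming Docker runs as the docker service):

# systemctl restart docker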

Or, you can configure the Docker daemon with the --registry-mirror startup parameter:

# dockerd --registry-mirror=http://registry-mirror-host:5000

For our Docker version 1.12.5, we added the registry mirror in /etc/sysconfig/docker:

# vi /etc/sysconfig/docker

Add --registry-mirror=http://registry-mirror-host:5000 to OPTIONS:

OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=http://registry-mirror-host:5000'
# systemctl daemon-reload
# systemctl restart docker.service

Test your Docker registry proxy cache

Pull an image from Docker Hub that you do not currently have stored locally, for example the ubuntu:latest image:

# docker pull ubuntu
registry-proxy-mirror

Check the catalog to verify that the image is now cached:

# curl registry-mirror-host:5000/v2/_catalog
{"repositories":["library/ubuntu","library/wordpress"]}

Extend Oracle Instance Root Volume Size

Oracle Cloud Infrastructure provides an option to extend both the root (boot) volume and block volume sizes. Use the steps below to increase an Oracle Cloud instance's root volume size on Linux.

Oracle Instance boot volume

How to Increase the Oracle Cloud Instance root volume size

Step 1:

  • Login your Oracle Cloud console.
  • Stop the instance
  • Detach the boot volume
  • Open the navigation menu. Under Core Infrastructure, go to Compute and click Boot Volumes.
  • In the Boot Volumes list, click the boot volume you want to resize.
  • Click Resize.
Oracle Instance root volume resize

  • Specify the new size and click Resize.
    – You must specify a larger value than the boot volume’s current size.
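
The same resize can also be done with the OCI CLI instead of the console; a hedged sketch, where the boot volume OCID and the new size in GB are placeholders you must substitute:

# oci bv boot-volume update --boot-volume-id <boot_volume_ocid> --size-in-gbs <new_size>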

Step 2:

Next, attach the boot volume to a second instance as an additional data volume, so that you can extend the partition and grow the file system there.

  • Extend the partition and grow the file system.

How to Extend the Root Partition on a Linux Instance

After attaching the boot volume as an additional data volume to the second instance, connect to this instance and perform the following steps to extend the partition.

Run the following command to list the attached block volumes and identify the volume whose partition you want to extend:

# lsblk

Run the following command to edit the volume's partition table with parted:

# parted <volume_id>

<volume_id> is the volume identifier, for example /dev/sdc.

When you run parted, you may encounter the following error message:

Warning: Not all of the space available to <volume_id> appears to be used,
you can fix the GPT to use all of the space (an extra volume_size blocks)
or continue with the current setting?

You are prompted to fix the error or ignore it and continue with the current setting. Choose the option to fix the error.

Run the following command to change the display units to sectors so you can see the precise start position for the volume:

(parted) unit s

Run the following command to display the current partitions in the partition table:

(parted) print

Make note of the values in the Number, Start, and File system columns for the root partition.

Run the following command to remove the existing root partition:

(parted) rm <partition_number>

<partition_number> is the value from the Number column.

Run the following command to recreate the partition:

(parted) mkpart

At the Start? prompt, specify the value from the Start column. At the File system type? prompt, specify the value from the File system column. Specify 100% for the End? prompt.

Run the following command to exit parted:

(parted) quit

The rm and mkpart steps rewrite the partition table with the new partition settings you specified.
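
The same delete-and-recreate sequence can be scripted non-interactively; a minimal sketch, assuming the device is /dev/sdc, the root partition is number 3, and its original start sector is 17827840 (the start sector shown in the fdisk transcript under Errors below):

# parted -s /dev/sdc rm 3
# parted -s /dev/sdc mkpart primary xfs 17827840s 100%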

Run the following command to list the attached block volumes to verify that the root partition was extended:

# lsblk

After you extend the root partition you need to grow the file system.

The steps in the following procedure apply only to xfs file systems.

# xfs_repair <partition_id>
# xfs_growfs -d <partition_id>

<partition_id> is the partition identifier, for example /dev/sdc3. See Checking and Repairing an XFS File System for more information.
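
If the volume carries an ext4 file system instead of xfs, resize2fs performs the equivalent grow step (an aside; not part of the xfs procedure above):

# resize2fs <partition_id>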

Run the following command to display the file system details and verify the new file system size:

# df -lh

Step 3:

Once you have extended the partition and grown the file system, you can restart the original instance with the boot volume.

  • Disconnect the volume from the second instance.
  • Detach the volume from the second instance.
  • Attach the volume to the original instance as a boot volume.
  • Restart the instance.

 

Errors:

If you get the following error and are unable to increase the partition size with parted, use growpart or fdisk instead.

Partition number? 3
Error: Partition /dev/sda3 is being used. You must unmount it before you modify it with Parted.

Fix:

Method 1: Use growpart to extend the partition.
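
On CentOS, RHEL, and Oracle Linux, growpart comes from the cloud-utils-growpart package, so install it first if the command is missing:

# sudo yum install cloud-utils-growpart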

# sudo growpart /dev/sda 3

# sudo xfs_growfs /dev/sda3

Method 2: Use fdisk to extend the partition.

# fdisk /dev/sda
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): d
Partition number (1-3, default 3):
Partition 3 is deleted

Command (m for help): n
Partition number (3-128, default 3):
First sector (34-419430366, default 17827840):
Last sector, +sectors or +size{K,M,G,T,P} (17827840-419430366, default 419430366):
Created partition 3


Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
# xfs_growfs /dev/sda3
meta-data=/dev/sda3              isize=512    agcount=4, agsize=2495232 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=9980928, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=4873, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 9980928 to 50200315
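
If the kernel keeps using the old partition table (the "error 16: Device or resource busy" warning above), partprobe asks it to re-read the table without a reboot:

# partprobe /dev/sda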

Install and Configure GNOME with VNC server on CentOS and RHEL

By default, CentOS 7 installs as a minimal server, and user intervention is required to change the installation type. VNC (Virtual Network Computing) allows remote desktop sharing using a VNC viewer. On CentOS 7 and RHEL 7, the package named “tigervnc-server” must be installed to set up the VNC server.

These are the steps to Install and Configure GNOME with VNC server on CentOS 7 / RHEL 7.

Configure the YUM repository on CentOS 7 / RHEL 7.

Run the following command to list the available yum package groups:

# yum group list

Make sure “GNOME Desktop” is available in the list.

Install Gnome GUI packages using the YUM command.

CentOS 7:

# yum groupinstall "GNOME Desktop"

RHEL 7:

# yum groupinstall "Server with GUI"

 

Install TigerVNC Packages

# yum install tigervnc-server xorg-x11-fonts-Type1

Create VNC user account

# adduser vncuser
# passwd vncuser

Always use a strong password for the user account.

Setup VNC Server Configuration File

Copy the VNC config file “/lib/systemd/system/vncserver@.service” to “/etc/systemd/system/vncserver@:<Port_Number>.service”.

Here we use display number 3, so VNC will listen on port “5903”. When connecting to the VNC server you can then specify <IP_Address_VNC_Server>:3 or <IP_Address_VNC_Server>:5903.

# cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:3.service

Once copied, edit the VNC server configuration file and update the user account.

# vi /etc/systemd/system/vncserver@:3.service

GNOME with VNC

Replace “<USER>” with the user account. In my case, the “vncuser” user will be able to control and manage its desktop session using remote VNC clients.
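
After the substitution, the two lines that reference the user should look roughly like this (based on the stock CentOS 7 tigervnc unit template; adjust if your home directory differs):

ExecStart=/usr/sbin/runuser -l vncuser -c "/usr/bin/vncserver %i"
PIDFile=/home/vncuser/.vnc/%H%i.pid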

Set the VNC password for the User Account.

Switch to the user “vncuser” and run the vncserver command to set the password, as shown below:

# su - vncuser
[vncuser@centos7-test1 ~]$ vncserver
You will require a password to access your desktops.

Password:
Verify:
Would you like to enter a view-only password (y/n)? n
A view-only password is not used
xauth: file /home/vncuser/.Xauthority does not exist

New 'centos7-test1:1 (vncuser)' desktop is centos7-test1:1

Creating default startup script /home/vncuser/.vnc/xstartup
Creating default config /home/vncuser/.vnc/config
Starting applications specified in /home/vncuser/.vnc/xstartup
Log file is /home/vncuser/.vnc/centos7-test1:1.log

Start and Enable the VNC Service

# systemctl daemon-reload
# systemctl start vncserver@:3.service
# systemctl enable vncserver@:3.service
Created symlink from /etc/systemd/system/multi-user.target.wants/vncserver@:3.service to /etc/systemd/system/vncserver@:3.service.

Enable Firewall Rule

# firewall-cmd --permanent --zone=public --add-port=5903/tcp
# firewall-cmd --reload
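
Verify that the port is open:

# firewall-cmd --zone=public --list-ports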

Connect Remote Desktop Session

Here we have installed a VNC viewer on the client machine and connected to:

<ipaddress>:5903
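
For example, using the TigerVNC client from the remote machine:

# vncviewer <ipaddress>:5903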