Easy Install Nagios vShell on CentOS

If you have already set up Nagios on your server and want to install vShell so that it looks better, just follow this guide!

Download vShell

The first thing you need to do is download vShell:

wget http://assets.nagios.com/downloads/exchange/nagiosvshell/vshell.tar.gz

You can run this from any directory, as long as the download completes.

Install vShell

Now extract vShell and change into its directory:

tar -zxvf vshell.tar.gz
cd vshell

Before you start installing, check your configuration:

vi install.php

and make sure the paths are correct:


//target directory where vshell's web files will be stored  
//target directory where your current apache configuration directory is located
//default for ubuntu/debian installs 

Since I'm on CentOS with Apache, the defaults were already correct for me! So all I did was run the install this way:


And done! You can access vShell, just like Nagios, by browsing to http://localhost/vshell
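If you want to confirm Apache is actually serving vShell before opening a browser, here is a quick sketch (assuming the default http://localhost/vshell URL from above):

```shell
# Quick sanity check that Apache is serving vShell.
# Expect an HTTP status line back (a 401 is fine too, since vShell
# sits behind the same basic auth as Nagios).
curl -I http://localhost/vshell
```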


Important vShell path

There are a few things you need to know, since everything above is quite abstract.

/etc/httpd/conf.d/vshell.conf  #apache vshell setup
/etc/vshell.conf #vshell configuration

Now make sure that vshell.conf points at the same htpasswd.users file as your Nagios setup, or else you might not be able to log in to vShell!


Once you have done the above, you might face a few problems, like the following.

Unable to log in to vShell

Open up /etc/httpd/conf.d/vshell.conf and /etc/httpd/conf.d/nagios.conf, and change the AuthUserFile in vshell.conf to match the one in nagios.conf, e.g. from

AuthUserFile /usr/local/nagios/passwd

to

AuthUserFile /etc/nagios/passwd

The exact paths might vary, but keep that in mind.

Unable to open '/usr/local/nagios/var/objects.cache' file!

If you see the above error after logging in, open /etc/vshell.conf and /etc/nagios/nagios.cfg, and change OBJECTSFILE in vshell.conf from

; Full filesystem path to the Nagios object cache file
OBJECTSFILE = "/usr/local/nagios/var/objects.cache"

to

; Full filesystem path to the Nagios object cache file
OBJECTSFILE = "/var/log/nagios/objects.cache"

where the correct value is whatever object cache path your nagios.cfg uses.

Unable to open '/usr/local/nagios/var/status.dat' file!

If you see the above error after logging in, open /etc/vshell.conf and /etc/nagios/nagios.cfg, and change STATUSFILE in vshell.conf from

; Full filesystem path to the Nagios status file
STATUSFILE = "/usr/local/nagios/var/status.dat"

to

; Full filesystem path to the Nagios status file
STATUSFILE = "/var/log/nagios/status.dat"

where the correct value is whatever status file path your nagios.cfg uses.

Unable to open '/usr/local/nagios/etc/cgi.cfg' file!

If you see the above error after logging in, open /etc/vshell.conf and /etc/nagios/nagios.cfg, and change CGICFG in vshell.conf from

; Full filesystem path to the Nagios CGI permissions configuration file
CGICFG = "/usr/local/nagios/etc/cgi.cfg"

to

; Full filesystem path to the Nagios CGI permissions configuration file
CGICFG = "/etc/nagios/cgi.cfg"

where the correct value is whatever cgi.cfg path your Nagios install uses.

Easy Install Nagios on CentOS 6 via yum

OK, I wrote the longer version back when I was still using CentOS 5. Recently I went back to that article and found out there is actually a much shorter way to set up EVERYTHING. So here I am, writing a shorter version for setting up Nagios on CentOS 6.

Setup Nagios Server

This is the server that will run vShell and the Nagios web interface. All you have to do is install epel-release for your CentOS release:

yum install -y epel-release

and then install Nagios via yum:

yum install -y nagios nagios-devel nagios-plugins* gd gd-devel httpd php gcc glibc glibc-common openssl

Now we need to set nagiosadmin as our username and give it a password:

htpasswd -c /etc/nagios/passwd nagiosadmin

You can adjust all the Nagios CGI configuration at /etc/nagios/cgi.cfg, and if you would like to change the Apache configuration, it is located at /etc/httpd/conf.d/nagios.conf.
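Whenever you touch those files, it's worth validating before restarting anything; Nagios ships a preflight check (the config path below is the CentOS yum-package default used in this guide):

```shell
# Verify the Nagios configuration before (re)starting the service.
# Exits non-zero and prints the offending line if something is wrong.
nagios -v /etc/nagios/nagios.cfg
```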

Once you have installed Nagios, remember to start Nagios and Apache, and make them come up on every reboot:

service httpd restart
chkconfig httpd on

service nagios restart
chkconfig nagios on

Now you can access it via http://localhost/nagios with the username and password you have just set up above!!! Pretty easy, ya!


Next, you will want to install NRPE on each server you wish to monitor.

Installing nrpe with nagios-plugins on each server

Now, do you really want to build all of this from source when you have, like, 20 servers? It would be a nightmare (which is what I did last time instead of writing a script; yeah, I'm dumb, I know). All you need to do via yum is:

yum install nrpe nagios-plugins-all

And configure nrpe via

vi /etc/nagios/nrpe.cfg

adding your Nagios server's IP to it so that your Nagios server is allowed to query each 'slave' you have, lol.
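For reference, the relevant line in nrpe.cfg is allowed_hosts; the 10.0.0.5 address below is a placeholder for your actual Nagios server IP:

```shell
# /etc/nagios/nrpe.cfg
# Comma-separated list of hosts allowed to talk to this NRPE agent.
allowed_hosts=127.0.0.1,10.0.0.5
```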


Now all you need to do is set up all the checks you wish to let your Nagios server run on your 'slaves':

command[check_users]=/usr/lib64/nagios/plugins/check_users -w 5 -c 10
command[check_load]=/usr/lib64/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
command[check_hda1]=/usr/lib64/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1
command[check_zombie_procs]=/usr/lib64/nagios/plugins/check_procs -w 5 -c 10 -s Z
command[check_total_procs]=/usr/lib64/nagios/plugins/check_procs -w 150 -c 200 

These are set up in the same file, /etc/nagios/nrpe.cfg, and you can add more 'actions' to it if you want.

And remember to set up NRPE to run on startup:

service nrpe start
chkconfig nrpe on
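From the Nagios server you can then confirm each client answers; a quick sketch, where 10.0.0.21 is a placeholder for one of your monitored 'slaves':

```shell
# Run from the Nagios server: ask the remote NRPE agent for its version.
# A reply like "NRPE v2.x" means the agent is reachable and allows us in.
/usr/lib64/nagios/plugins/check_nrpe -H 10.0.0.21
```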

Setup OpenVZ NFS Server on CentOS

I thought I'd write down how I set up my own OpenVZ container to serve as an NFS server. Installing it is pretty easy, until a lot of errors start popping up when you try to start it, so I'm writing them down just in case.

To install the NFS server inside the container, all you need to do is:

yum install nfs* -y

and all the NFS libraries will be installed in your container. Next, start everything:

 service rpcbind start
 chkconfig rpcbind on
 service nfs start
 chkconfig nfs on

And this is what I got:

Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon: rpc.nfsd: Unable to access /proc/fs/nfsd errno 2 (No such file or directory).
Please try, as root, 'mount -t nfsd nfsd /proc/fs/nfsd' and then restart rpc.nfsd to correct the problem

Not as smooth a journey as I thought!

Different types of NFS errors

So there are a few kinds of errors you might see when setting up an NFS server in an OpenVZ container, and you can find them in the openvz.org NFS server article.

My issue was apparently this one:

If you see this:

# service nfs start
Starting NFS daemon: rpc.nfsd: Unable to access /proc/fs/nfsd errno 2 (No such file or directory).
Please try, as root, 'mount -t nfsd nfsd /proc/fs/nfsd' and then restart rpc.nfsd to correct the problem
# mount -t nfsd nfsd /proc/fs/nfsd
mount: unknown filesystem type 'nfsd'
It means you haven't loaded the nfsd kernel module on the host before starting the container.

Well... this means that I need to do more things!

Kernel NFS server

The kernel-space NFS server is supported by the latest RHEL5- and RHEL6-based kernels, and by vzctl since 3.0.24. Currently only NFSv3 is supported; there is no NFSv4 support yet. More info here: http://forum.openvz.org/index.php?t=msg&goto=46174&. NFSv3 is notorious for leaving hanging file locks, and in my opinion it should not be used in file-intensive setups, but that's what I will need to live with for the time being.

NFS Openvz Prerequisites

In order to run an NFS server inside a container, make sure:

  • nfsd kernel module is loaded on host system before starting a container
  • nfsd feature for a container is turned on


  • Make sure that rpcbind service is started before nfs service:
chkconfig rpcbind on && service rpcbind start
  • Disable NFSv4 and nfsd module loading warnings in /etc/sysconfig/nfs by uncommenting the following lines:
  • Start NFS service:
chkconfig nfs on && service nfs start
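For the 'uncommenting' step above, these are the lines I'd expect in a stock CentOS 6 /etc/sysconfig/nfs (variable names assumed from that file; verify against your own copy):

```shell
# /etc/sysconfig/nfs
# Turn off NFSv4, since the OpenVZ kernel NFS server only does v3.
RPCNFSDARGS="-N 4"
# Stop the init script from trying to load the nfsd module inside
# the container (the host loads it instead).
NFSD_MODULE="noload"
```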

Host node

Once you have done that, remember to activate NFS inside the container by issuing the command below:

vzctl set $CTID --feature nfsd:on --save

and ensure the nfs and nfsd modules are loaded:

modprobe nfsd
modprobe nfs

All that's left to do is start your container and see whether the error is gone. If it still doesn't work, check out the forum thread where most of the information above was retrieved from (credit goes to them).


Oops, I forgot: in /etc/exports you need to state the directory that you want clients to mount over NFS. In my case it's /mnt/nfs, so I did this:

mkdir -p /mnt/nfs

then in /etc/exports I added this:


Take note of the IP range that will be allowed to mount my NFS directory.
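In case you want a concrete example, an /etc/exports line usually looks like the sketch below; the 192.168.1.0/24 range and the options are placeholders, not my actual values:

```shell
# /etc/exports
# Export /mnt/nfs read-write to one client subnet.
/mnt/nfs 192.168.1.0/24(rw,sync,no_subtree_check)
```

After editing, `exportfs -ra` reloads the export table without restarting the nfs service.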

Firewall Host

After everything above is done, just remember to update your firewall to allow the NFS ports through to your NFS server. Open /etc/sysconfig/nfs, uncomment LOCKD_TCPPORT, MOUNTD_PORT, STATD_PORT and LOCKD_UDPPORT, and allow the ports as written below (you will notice this is done on a CentOS machine):

  1. Allow TCP and UDP port 2049 for NFS.
  2. Allow TCP and UDP port 111 (rpcbind/sunrpc).
  3. Allow the TCP and UDP port specified with MOUNTD_PORT="port"
  4. Allow the TCP and UDP port specified with STATD_PORT="port"
  5. Allow the TCP port specified with LOCKD_TCPPORT="port"
  6. Allow the UDP port specified with LOCKD_UDPPORT="port"

and in case you need the iptables commands, here you go (you are welcome):

-A PREROUTING -d -i vmbr0 -p tcp -m tcp --dport 2925 -j DNAT --to-destination
-A PREROUTING -d -i vmbr1 -p tcp -m tcp --dport 32803 -j DNAT --to-destination
-A PREROUTING -d -i vmbr1 -p tcp -m tcp --dport 892 -j DNAT --to-destination
-A PREROUTING -d -i vmbr1 -p tcp -m tcp --dport 662 -j DNAT --to-destination
-A PREROUTING -d -i vmbr1 -p tcp -m tcp --dport 2049 -j DNAT --to-destination
-A PREROUTING -d -i vmbr1 -p tcp -m tcp --dport 111 -j DNAT --to-destination
-A PREROUTING -d -i vmbr1 -p udp -m udp --dport 892 -j DNAT --to-destination
-A PREROUTING -d -i vmbr1 -p udp -m udp --dport 662 -j DNAT --to-destination
-A PREROUTING -d -i vmbr1 -p udp -m udp --dport 111 -j DNAT --to-destination
-A PREROUTING -d -i vmbr1 -p udp -m udp --dport 2049 -j DNAT --to-destination
-A PREROUTING -d -i vmbr1 -p udp -m udp --dport 32769 -j DNAT --to-destination
-A PREROUTING -d -i vmbr1 -p tcp -m tcp --dport 8000 -j DNAT --to-destination

just port forwarding for both UDP and TCP 😉

and remember to open up all the ports in /etc/sysconfig/nfs

# Define which protocol versions mountd
# will advertise. The values are "no" or "yes"
# with yes being the default
# Path to remote quota server. See rquotad(8)
# Port rquotad should listen on.
# Optional options passed to rquotad
# Optional arguments passed to in-kernel lockd
# TCP port rpc.lockd should listen on.
# UDP port rpc.lockd should listen on.
# Optional arguments passed to rpc.nfsd. See rpc.nfsd(8)
# Turn off v2 and v3 protocol support
# Turn off v4 protocol support
# Number of nfs server processes to be started.
# The default is 8.
# Stop the nfsd module from being pre-loaded
# Set V4 and NLM grace periods in seconds
# Warning, NFSD_V4_GRACE should not be less than
# NFSD_V4_LEASE was on the previous boot.
# To make NFSD_V4_GRACE shorter, with active v4 clients,
# first make NFSD_V4_LEASE shorter, then restart server.
# This will make the clients aware of the new value.
# Then NFSD_V4_GRACE can be decreased with another restart.
# When there are no active clients, changing these values
# can be done in a single server restart.
# Optional arguments passed to rpc.mountd. See rpc.mountd(8)
# Port rpc.mountd should listen on.
# Optional arguments passed to rpc.statd. See rpc.statd(8)
# Port rpc.statd should listen on.
# Outgoing port statd should used. The default is port
# is random
# Specify callout program
# Optional arguments passed to rpc.idmapd. See rpc.idmapd(8)
# Set to turn on Secure NFS mounts.
# Optional arguments passed to rpc.gssd. See rpc.gssd(8)
# Optional arguments passed to rpc.svcgssd. See rpc.svcgssd(8)
# To enable RDMA support on the server by setting this to
# the port the server should listen on

Firewall Guest

Remember to turn off the firewall, or allow those ports, on your guests as well if you are on CentOS.

chkconfig iptables off
service iptables stop

I prefer to turn it off entirely.
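If you would rather keep iptables running on the guest, here is a sketch of allow rules for the NFS ports listed earlier (the mountd/statd/lockd port numbers assume the values used in this article; match them to your /etc/sysconfig/nfs):

```shell
# Allow NFS, rpcbind, mountd, statd and lockd through the guest firewall
# instead of disabling iptables entirely.
iptables -A INPUT -p tcp -m multiport --dports 111,2049,892,662,32803 -j ACCEPT
iptables -A INPUT -p udp -m multiport --dports 111,2049,892,662,32769 -j ACCEPT
# Persist the rules across reboots (CentOS).
service iptables save
```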


  • https://www.howtoforge.com/setting-up-an-nfs-server-and-client-on-centos-6.3
  • http://forum.proxmox.com/threads/9509-NFS-inside-OpenVZ-container
  • http://www.unixmen.com/nfs-server-installation-and-configuration-in-centos-6-3-rhel-6-3-and-scientific-linux-6-3/
  • https://openvz.org/NFS_server_inside_container
  • http://www.linuxquestions.org/questions/linux-server-73/nfs-share-setup-issue-mountd-refused-mount-request-unmatched-host-924105/
  • https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/s2-nfs-nfs-firewall-config.html

How to setup Internet on an OpenVZ container with a Private IP

OK, so I don't want to waste IP addresses, so I need to set up multiple OpenVZ containers with private IPs behind a single public host IP. If you have read the article on openvz.org about using NAT for containers with private IPs but were unable to set it up properly, or the internet isn't reaching your container, it is most likely due to a setup issue (which is what happened to me, and why I am writing this up!)

Assuming I have 3 containers with the following IPs (please set these IPs up on your containers)


and a single public-IP host machine that all these 3 containers sit on, with the following IP


I am going to set up and explain what is going to be done with the above machines.

OpenVZ Host

Most of the work, or to be exact, all of the work will need to be done on the OpenVZ host machine, in this case the machine with the public IP. Make sure the following things are done on your host machine:

IP forwarding should be turned on on the hardware node in order for container networking to work. Make sure it is turned on:

$ cat /proc/sys/net/ipv4/ip_forward
Output should be '1'. If it is '0', enable IP forwarding as it is described in Quick installation#sysctl.
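Enabling IP forwarding, as referenced above, usually comes down to the sketch below (the generic Linux syntax, not the Ubuntu-specific variant the note below mentions):

```shell
# Enable IP forwarding right now...
sysctl -w net.ipv4.ip_forward=1
# ...and persist it across reboots.
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p
```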

NOTE: Ubuntu made some changes to the syntax for NAT. See this link if you need to enable NAT on an Ubuntu host:


The syntax of /etc/sysctl.conf has changed to :


Once the above is done, it's time to set up iptables to forward internet traffic for our containers. All you need to do is set up iptables with the range of IPs that should be translated.

To enable the containers, which have only internal IP addresses, to access the Internet, SNAT (Source Network Address Translation, also known as IP masquerading) should be configured on the Hardware Node. This is ensured by the standard Linux iptables utility. To perform a simple SNAT setup, execute the following command on the Hardware Node:

# iptables -t nat -A POSTROUTING -s src_net -o eth0 -j SNAT --to ip_address
where src_net is a range of IP addresses of containers to be translated by SNAT, and ip_address is the external IP address of your Hardware Node. The format of src_net is xx.xx.xx.xx/xx (CIDR notation). For example:

# iptables -t nat -A POSTROUTING -s -o eth0 -j SNAT --to ip_address
Multiple rules are allowed, for example, in case you wish to specify several ranges of IP addresses. If you are using a number of physical network interfaces on the Node, you may need to specify a different interface for outgoing connections, e.g. -o eth2.

To make all IP addresses to be translated by SNAT (not only the ones of containers with private addresses), you should type the following string:

# iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to ip_address

Let me explain a little about what is really needed. You will want to enter the statement below:

iptables -t nat -A POSTROUTING -s -o eth0 -j SNAT --to

What the above does is tell iptables to allow all private IPs in that range to connect to the internet through the public address. But bear in mind which ethernet interface you are using; it can be eth0 or vmbr0, depending on how you set things up. Just fire up ifconfig and you should see which interface your public IP is attached to.

And while you are still at it, do set up your firewall as well.

For Debian hardware node, you may need to allow a forward rule. The table still being the default table (filter) but the chain is FORWARD:

# iptables -A FORWARD -s -j ACCEPT
# iptables -A FORWARD -d -j ACCEPT
For default RedHat/CentOS firewall, allow outgoing connections from your containers, for example:

# iptables -A RH-Firewall-1-INPUT -s -j ACCEPT
# iptables-save > /etc/sysconfig/iptables
# service iptables restart

The above forwards all traffic and accepts connections from the entire range, thanks to the /24.

Once you are done with the above, test it out.

Now you should be able to reach internet from your container:

# vzctl exec $CTID ping openvz.org

where $CTID is the OpenVZ container ID

If you don't get 'unknown host' and you get a response back, you have just set up internet access on a machine with an internal IP!

Easy Resize Linux KVM on Proxmox VE 3.3

OK, I lied, there is no easy way to resize a KVM, but it's pretty fast if you know what you are doing. Finding Proxmox VE resources isn't very easy, since not many people are writing about it to share their knowledge. I figured I should write down how I resized a KVM VM running CentOS 6.6, to better illustrate how this can be done quickly.


Before I start, let me explain what you need to have in your disk setup within your Linux VM. Your hard disk must be set up with LVM, also known as the Logical Volume Manager. At least, this is how I set up my hard disk; if not, you will need to do a huge amount of extra work just to resize your KVM environment.

Resizing Linux KVM

On Proxmox it's pretty simple. If you want to resize a particular VM, just click on it and hit 'Hardware' as shown below.

[Screenshot: the VM's Hardware tab in the Proxmox web UI]

Make sure your VM is off, hit 'Resize disk', and this will pop up:

[Screenshot: the Proxmox 'Resize disk' dialog]


And you will notice that nothing happens! Ha ha! Just kidding! But seriously, nothing will appear to change if you start your VM and look at your disk size. Still, start your VM and run:

df -kh

and you will see my initial hard disk space

[Screenshot: df -kh output showing 45G on /dev/mapper/VolGroup-lv_root]

45G is my initial hard disk space, and /dev/mapper/VolGroup-lv_root is my logical volume. Now, before I go crazy, I need to check whether the 10G I just added is actually visible to the machine. I can do this by running:

fdisk -l

and you will see the following

[Screenshot: fdisk -l output showing the enlarged /dev/vda]

which indicates that my 10GB has been added to the machine (yeah, I had 50GB and added 10GB, so I should have 60GB). Do take note of my drive name, '/dev/vda'.

Partitioning the new disk

We want to create a new partition by using the disk utility below,

fdisk /dev/vda

which will provide you with an interactive console that you will use to create the partition.
Enter the commands in the following sequence:

Command (m for help): n

Command action
   e   extended
   p   primary partition (1-4)

Partition number (1-4): 4

First cylinder (1-1305, default 1): 1

Last cylinder, +cylinders or +size{K,M,G} (1-1305, default 1305): +9500M

Command (m for help): w

Since I already have partitions 1-3, I create partition number 4. From there, just use the default first cylinder, and for the last cylinder enter the size to add; in this case it is +9500M, roughly 9.5G, as about 0.5G just gets sucked off somewhere.
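If you ever need to repeat this on many VMs, the interactive answers above can be fed to fdisk non-interactively; a fragile sketch (blank line takes the default first cylinder), so try it on a throwaway VM first:

```shell
# Same answers as the interactive session: new, primary, partition 4,
# default first cylinder, +9500M, then write.
printf 'n\np\n4\n\n+9500M\nw\n' | fdisk /dev/vda
```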

Now executing the following will show you that our changes have been made to the disk:

fdisk -l /dev/vda

Now you will see the new partition:

[Screenshot: fdisk -l output showing the new /dev/vda4 partition]

Looking at partition 4, you will see the new 9500M partition (but do not delete existing partitions to create the new drive, or you will most likely see hell). Now you will need to reboot your VM!

Initializing the new partition for use with LVM

Once you have rebooted your VM, we have a new partition; let's initialize it for use with LVM:

pvcreate /dev/vda4

once this is done, you will get the following message

Physical volume "/dev/vda4" successfully created

Now, if you run

vgdisplay

it will display the volume group details, and if you run

lvdisplay

it will display the logical volume details. These are needed for you to find out where your root partition is located.

Extending the logical volume

Now, to give the logical volume room to grow, we first extend the volume group:

vgextend VolGroup /dev/vda4

This will add the new disk/partition to our intended volume group, "VolGroup". Double-check by running

vgdisplay

again, which will now display the new parameters, including the number of free PE (physical extents). Now we can increase the size of the logical volume our root partition is on, as shown below,

[Screenshot: df -kh output showing /dev/mapper/VolGroup-lv_root]

by using the command,

lvextend -L +9.5G /dev/mapper/VolGroup-lv_root

We are almost done now: we just need to tell the guest that the root partition has increased in size, and this can be done live since we are using LVM! Now we resize the filesystem on the logical volume:

resize2fs /dev/mapper/VolGroup-lv_root

And we are done! Now check it out with:

df -kh

and our KVM has been resized!
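As a side note, recent versions of lvextend can fold the last two steps into one with the -r (--resizefs) flag, which runs the filesystem resize for you; a sketch using the same volume path as above:

```shell
# Extend the LV and grow the ext filesystem in a single step.
lvextend -r -L +9.5G /dev/mapper/VolGroup-lv_root
```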