Installing Zabbix Server on CentOS 6

Zabbix, as stated on the official website, is "enterprise-level software designed for monitoring availability and performance of IT infrastructure components".
It's handy if you need to monitor a handful of servers, and it supports most major Linux distributions, Windows, VMware and Java applications. It's also fairly easy to set up and understand, with a short learning curve, many built-in "scenarios" and fancy graphs. I personally prefer it over similar solutions like Nagios.
So let's go through installing and configuring it. There are two options: you can either follow the official guide, where you'll have to go through all the steps and set things up manually, or use a fancy Ansible script and do things automagically! We'll use the Ansible script to make our lives easier.
Ansible is an automation tool that can deploy and configure software on multiple hosts simultaneously. Compared to similar tools (Chef, Puppet, etc.) I find it much easier to read, write and maintain; it has a big, ever-expanding set of pre-made scripts at Ansible Galaxy; and it is agentless: all you need is SSH access to your machines. So let's go through installing Ansible and running our playbook.
You can install Ansible on any machine you'd like; we'll run the playbooks from there. On Fedora you can install it through the official repos with:

sudo yum install ansible -y

We'll use a pre-made role that I found on Ansible Galaxy, which installs the Zabbix server and agent on a CentOS 6 system with MySQL and Apache. First, fetch it:

sudo ansible-galaxy install akamine.c6-zabbix

Then edit the role's default configuration in /etc/ansible/roles/akamine.c6-zabbix/defaults/main.yml.
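
The exact variables depend on the role, so check that file for the real names. A hypothetical excerpt of the kind of defaults you might override (the variable names below are illustrative, not necessarily the role's actual ones):

# Illustrative only -- override the variables actually defined
# in the role's defaults/main.yml
zabbix_db_name: zabbix
zabbix_db_user: zabbix
zabbix_db_password: changeme   # pick a real password

With the defaults adjusted, follow the steps below.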

mkdir zabbix-ansible
cd zabbix-ansible
vi hosts
vi zabbix.yml
ansible-playbook -i hosts zabbix.yml

In the hosts file, list the hosts you want the playbook to run against.
An example is as follows:

[zabbix-server]
my.leet.host

And in zabbix.yml:

---
- hosts: all
  roles:
  - akamine.c6-zabbix
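
Since the inventory defines a zabbix-server group, you could also scope the play to just that group instead of all hosts:

---
- hosts: zabbix-server
  roles:
  - akamine.c6-zabbix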

When running the playbook you should see it output what it is doing, similar to this:

akis@localhost zabbix-ansible$ ansible-playbook -i hosts zabbix.yml 

PLAY [all] ******************************************************************** 

GATHERING FACTS *************************************************************** 
ok: [my.leet.host]

TASK: [akamine.c6-zabbix | install libselinux-python] ************************* 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | install MySQL-python] ****************************** 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | change selinux mode to permissive] ***************** 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | disable iptables] ********************************** 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | install ntp] *************************************** 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | Start and enable ntp service] ********************** 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | install epel repo rpm key] ************************* 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | setup epel repo] *********************************** 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | install htop] ************************************** 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | install tmux] ************************************** 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | install mysql server] ****************************** 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | enable mysql server] ******************************* 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | set innodb_buffer_size in /etc/my.cnf] ************* 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | enable event_scheduler in /etc/my.cnf] ************* 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | install zabbix rpm key] **************************** 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | setup zabbix repo] ********************************* 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | install zabbix-server-mysql] *********************** 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | install zabbix-web-mysql] ************************** 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | install zabbix-agent] ****************************** 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | create database zabbix] **************************** 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | ensure mysqld is running] ************************** 
ok: [my.leet.host]

TASK: [akamine.c6-zabbix | initialize zabbix database] ************************ 
changed: [my.leet.host] => (item=schema.sql)
changed: [my.leet.host] => (item=images.sql)
changed: [my.leet.host] => (item=data.sql)

TASK: [akamine.c6-zabbix | copy sql scripts for database partitioning] ******** 
changed: [my.leet.host] => (item=alter_table_indexes.sql)
changed: [my.leet.host] => (item=stored_procedures.sql)
changed: [my.leet.host] => (item=event_schedule.sql)

TASK: [akamine.c6-zabbix | alter table indexes] ******************************* 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | add stored procedures for partitioning] ************ 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | add partitioning event to schedule] **************** 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | create mysql user] ********************************* 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | modify /etc/zabbix/zabbix_server.conf] ************* 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | modify /etc/httpd/conf.d/zabbix.conf] ************** 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | add e-mail sending script to /usr/lib/zabbix/alertscripts] *** 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | enable httpd] ************************************** 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | enable zabbix server] ****************************** 
changed: [my.leet.host]

TASK: [akamine.c6-zabbix | enable and start zabbix agent] ********************* 
changed: [my.leet.host]

NOTIFIED: [akamine.c6-zabbix | restart zabbix server] ************************* 
changed: [my.leet.host]

NOTIFIED: [akamine.c6-zabbix | restart httpd] ********************************* 
changed: [my.leet.host]

NOTIFIED: [akamine.c6-zabbix | restart mysqld] ******************************** 
changed: [my.leet.host]

PLAY RECAP ******************************************************************** 
my.leet.host    : ok=37   changed=35   unreachable=0    failed=0

Next, all you need to do is log in to your new Zabbix server instance at http://my.leet.host/zabbix with username Admin and password zabbix.
I'll go through configuring the server and installing agents in a later post.
Happy playing around with Zabbix!

PS: Some quick notes...
The Ansible playbook installs mysql-server with default options, meaning no root password.
You can go further by editing the playbook and adding these tasks to zabbix-database-init.yml:

- name: Update mysql root password for all root accounts
  mysql_user: name=root host={{ item }} password=huhu
  with_items: [ '{{ ansible_hostname }}', '127.0.0.1', '::1', 'localhost' ]

- name: Delete anonymous mysql user
  mysql_user: user="" host={{ item }} state=absent
  with_items: [ '{{ ansible_hostname }}', 'localhost' ]

- name: Remove test database
  mysql_db: db=test state=absent

This will set up a password for the root account and remove the test database. Note that once root has a password, later runs of these tasks will need credentials (for example via the mysql modules' login_user/login_password parameters, or a ~/.my.cnf).
I'll probably fork the repo, add some bits and pieces to it and update this post, but for the time being this should do.

#alternative, #centos, #linux, #monitor, #monitoring, #nagios, #server, #zabbix

Enabling automatic updates on CentOS 6.

First install package yum-cron.

sudo yum -y install yum-cron

Then edit /etc/sysconfig/yum-cron if needed; the options are self-explanatory.
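
For example, these are the sort of fields you may want to touch (a sketch from memory of the CentOS 6 package; double-check the names in your own file):

# /etc/sysconfig/yum-cron (excerpt, values are examples)
CHECK_ONLY=no      # yes = only check and report, don't apply updates
DOWNLOAD_ONLY=no   # yes = download updates but don't install them
MAILTO=root        # where the nightly report is mailed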
Finally, to start the service and enable it to auto-start on boot, issue the following commands.

sudo service yum-cron start
sudo chkconfig yum-cron on
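
You can verify it is scheduled for the usual runlevels with:

chkconfig --list yum-cron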

Updates will be run daily.
Done!

#centos, #linux, #updates

Bash: A very basic folder and MySQL backup script.

Here is a basic bash script to back up a folder and a MySQL database.

#!/bin/bash
# Backup script for www and mysql.

DATE=$(date +%Y%m%d)
backup_path="/path/to/where/iwant/mybackup/stored/"
backup_source="/path/of/myfolder/iwant/to/backup/"
user="db_username"
password="db_password"
host="db_host"
db_name="db_name"


# The function that actually does the backup.
function dobackup {
	echo "Starting backup for $user"
	echo "Backing up $backup_source"
	tar -cvzPf "$backup_path$DATE-$user.tar.gz" "$backup_source"
	echo "Backing up MySQL database.."
	mysqldump --user="$user" --password="$password" --host="$host" "$db_name" > "$backup_path/$db_name-$DATE.sql"
	gzip "$backup_path/$db_name-$DATE.sql"
	echo "Backup done"
}

# The script should be run with the argument dobackup to take a backup,
# or delete to remove today's backup.
if [ "$1" = "dobackup" ] ; then
	dobackup
elif [ "$1" = "delete" ] ; then
	rm -f "$backup_path$DATE-$user.tar.gz"
	echo "Deleted old backup"
fi

Then we schedule it with crontab accordingly, as in the sketch below.
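
A minimal crontab sketch, assuming the script is saved at the hypothetical path /path/to/backup.sh:

# Take a backup at 02:00 every night...
0 2 * * * /path/to/backup.sh dobackup
# ...and remove that day's backup at 23:30, before the next one runs
30 23 * * * /path/to/backup.sh delete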

Alternatively, you could add something like this to delete files older than two days in the backup path.

#Delete files older than 2 days
find "$backup_path" -type f -mtime +2 -exec rm {} \;

#backup, #bash-2, #mysql, #script

Migrating Subversion (SVN) to Git

We decided to move from the local Subversion server we were using to Git, hosted on Bitbucket. Why move from SVN to Git is a matter of taste, IMHO; whatever suits you best, there will be pros and cons to each. We did it because the programmers liked working with Git and we didn't want to maintain an SVN server ourselves. Bitbucket vs GitHub is all about the cost; in the end they are both Git. In our case Bitbucket was cheaper: we are a small group of developers with a large number of repos, so Bitbucket's per-developer pricing is just right for us.

Bitbucket offers migration through their web page, but that wasn't an option for us because we needed our commit history intact, so we went looking for alternatives. Our SVN server was also pretty old, so we had to do some extra steps before migrating. Let's go through them.

Step 1

Install SVN on a local machine.

sudo yum install mod_dav_svn subversion

For the rest you can follow this guide up to the point of creating a testrepo. Instead of creating a testrepo, we are going to take our own repos from the remote SVN server and put them in /var/www/svn.

Step 2
On the remote server.

tar -zcvf svnbackup.tar.gz /var/www/svn

Then copy this to your local SVN machine and extract it accordingly.

Step 3

Give the proper user rights:

chown -R apache:apache *
chcon -R -t httpd_sys_rw_content_t /var/www/svn/*

Then go to your backed-up SVN folders and perform an svnadmin upgrade, like this:

svnadmin upgrade /var/www/svn/myproject

Step 4

Get a list of the authors so our commits get linked to them, using this trick:

svn log --quiet http://localhost/svn/myproject \
| grep '^r' | awk '{print $3}' | sort | uniq > authors.txt
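
git svn expects each line of the authors file to map an SVN username to a Git identity, so edit the generated file into that form (the names and e-mails below are placeholders):

jdoe = John Doe <jdoe@example.com>
asmith = Alice Smith <asmith@example.com>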

Step 5

Now the rest is simple, following this guide:

git svn clone http://localhost/svn/svn.myrepo/ -A authors.txt my_repo
git init --bare my_bare_repo
cd my_bare_repo
git symbolic-ref HEAD refs/heads/trunk
cd ../my_repo
git remote add bare ../my_bare_repo
git config remote.bare.push 'refs/remotes/*:refs/heads/*'
git push bare
cd ../my_bare_repo
git branch -m git-svn master
git remote add origin https://me@bitbucket.org/me/myrepo.git
git push origin master
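
Before (or right after) pushing, it's worth sanity-checking that the history and the author mapping survived the conversion:

git log --oneline | head
git shortlog -sn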

Bumps and how you can manage them!

    1. After git svn clone you may end up with this error:
fatal: refs/remotes/trunk: not a valid SHA1
update-ref HEAD refs/remotes/trunk: command returned error: 128

In most cases the command below should do the job. Otherwise do a git branch -a, see your remotes, and set master to follow one of them.

git update-ref refs/heads/master refs/remotes/git-svn

#bitbucket, #git-2, #github, #migration, #svn-2

Otenet outgoing SMTP server blacklisted (mailgate.otenet.gr) and how to fix it.

If you use an email client for sending emails, the default outgoing SMTP server for Otenet is mailgate.otenet.gr, according to this.

From what I've figured out, this acts as a gate that forwards you to one of Otenet's SMTP servers.
The thing is that Otenet's SMTP servers tend to get blacklisted every now and then, and there is nothing you can do about it. Mails bounce back from the server with an error like this:

This is the mail system at host medusa.otenet.gr.
I’m sorry to have to inform you that your message could not
be delivered to one or more recipients. It’s attached below.
For further assistance, please send mail to postmaster.
If you do so, please include this problem report. You can
delete your own text from the attached returned message.
The mail system

<some@client.gr>: host
mx.some.provider.gr [1.1.1.1]
said: 554 5.7.1 Service unavailable; Client host [83.235.69.31] blocked
using bl.spamcop.net; Blocked – see
http://www.spamcop.net/bl.shtml?83.235.69.31 (in reply to RCPT TO command)

What this informative yet unhelpful error message says is that 83.235.69.31, which resolves to smtp-out31.otenet.gr, is blacklisted by SpamCop. What you can do is tell SpamCop that this is not spam through their page, but even if you do, you still have to wait for them to approve it and so on, which in the best case takes some hours.

So the alternative is to use one of Otenet's SMTP servers directly instead of going through the mailgate.
To do so, you can use any of these four IPs, making sure they are not blacklisted.

83.235.69.31 smtp-out31.otenet.gr
83.235.69.32 smtp-out32.otenet.gr
83.235.69.33 smtp-out33.otenet.gr
83.235.69.34 smtp-out34.otenet.gr

You can check their blacklist status at http://mxtoolbox.com/blacklists.aspx, or from a shell as shown below.
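
A quick manual check against SpamCop's DNSBL, assuming the host tool is installed: reverse the IP's octets and query the bl.spamcop.net zone. An answer in 127.0.0.x means the IP is listed; NXDOMAIN means it is clean.

host 31.69.235.83.bl.spamcop.net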

Migrating VMware ESXi

Hello.
Recently one of our VMware ESXi servers started acting up.
The situation was that we had no means of accessing ESXi through SSH (the user/pass got refused although it was the right one), and the console was also inaccessible for each VM.
After investigating a bit through the vSphere Client, it appeared that the system couldn't find the necessary PAM modules, hence no authentication for us.
Thankfully we at least had access through the vSphere Client and also to the web-based datastore browser.
Anyway, to make a long story short, we had to find a way of getting our VMs out of there and moving them to another machine.
Typically, and AFAIK, we would simply log in on our ESXi console and scp our files between the servers. In our case that wasn't an option since we had no SSH access at all. So we came up with two solutions.

General Prerequisites 

  1. Power off your VM while transferring.
  2. Make sure there are no snapshots.
  3. If you have snapshots, take care of them and delete them. You can't move snapshots; you can only keep "the current working branch". If you do have shell access, see the sketch below this list.
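
A quick way to clean snapshots up, assuming you do have shell access on the source host (the VM id 42 below is hypothetical; take the real one from the listing):

# List the registered VMs to find the id of the one you're moving
vim-cmd vmsvc/getallvms
# Remove all of its snapshots, consolidating into the current disk
vim-cmd vmsvc/snapshot.removeall 42

Otherwise, use the vSphere Client's Snapshot Manager.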

Transferring (having shell access)

  1. scp [[user@]from-host:]source-file [[user@]to-host:][destination-file]  
  2. Example: scp root@myno1.esx.com:/vmfs/volumes/datastore1/linux/linux.vmdk root@myno2.esx.com:/vmfs/volumes/datastore1/linux

Transferring (alternative way, no SSH required)

    1. I've found this very handy little script: http://blogs.vmware.com/vsphere/2010/01/scripting-datastore-access.html. What it does is access your datastore through the web interface and download the files locally.
    2. So, to make things faster, we can have this script download the files onto a remote box and then transfer them from there to our new ESXi server.
#!/bin/bash

CURL_ARGS="--insecure"
# Change if you want to use an alternate user
# (you'll be prompted for the password each time)
USER=root

if ! which curl > /dev/null; then
    echo "ERROR: curl not found in your path" >&2
    echo "" >&2
    echo "You'll need to install curl on your system for this script to work." >&2
    exit 1
fi

usage() {
    echo "USAGE: $0 <get|put> <hostname> <ds> <ds path> <local_path> [thread_count]" >&2
    echo "" >&2
    echo "one of source or target must be a datastore path" >&2
    echo "Example: $0 put hostname datastore /file.iso ./file.iso" >&2
    echo "" >&2
    echo "If you set the thread_count then this script will" >&2
    echo "use that many parallel threads when downloading the file." >&2
    echo "Warning: you'll need ~2x the file size in available space locally for this approach" >&2
}

urlescape() {
    TMP=$(echo $1 | sed \
        -e ' {
        s/%/%25/g
        s/ /%20/g
        s/</%3C/g
        s/>/%3E/g
        s/#/%23/g
        s/{/%7B/g
        s/}/%7D/g
        s/|/%7C/g
        s/\\/%5C/g
        s/\^/%5E/g
        s/~/%7E/g
        s/\[/%5B/g
        s/\]/%5D/g
        s/`/%60/g
        s/;/%3B/g
        s|/|%2F|g
        s/?/%3F/g
        s/:/%3A/g
        s/@/%40/g
        s/=/%3D/g
        s/&/%26/g
        s/\$/%24/g
        }' )
    echo ${TMP}
}

if [ $# -lt 5 ] ; then
    usage
    exit 1
fi

OPERATION=$1
HOSTNAME=$2
DATASTORE=$(urlescape "$3")
REMOTE_PATH=$(urlescape "$4")
LOCAL_PATH=$5

if [ $# == 6 ] ; then
    THREADS=$6
else
    # Default to 4 threads for better performance
    THREADS=4
fi

URL="https://${HOSTNAME}/folder/${REMOTE_PATH}?dcPath=ha-datacenter&dsName=${DATASTORE}"

if [ "${OPERATION}" == "get" ]; then
    if [ ${THREADS} -gt 1 ] ; then
        echo -n "Enter password for ${USER}@${HOSTNAME}: "
        stty -echo
        read PASSWORD
        stty echo
        echo ""
        # Note: This is somewhat insecure as the password will show up on the
        #    command line. Consider switching to use netrc or SSPI
        #    see the curl man page for more details.
        #
        #    This first curl invocation grabs just the header to get the size
        LENGTH=$(curl ${CURL_ARGS} -s -u "${USER}:${PASSWORD}" "$URL" -I | awk '/Content-Length:/ { print $2 }' | sed -e "s/\r//g")
        CHUNK=$((LENGTH / THREADS + 1))
        START=0
        echo "Starting download (${THREADS} parallel threads)..."
        COUNT=1
        while [ ${COUNT} -le ${THREADS} ] ; do
            END=$((START + CHUNK - 1))
            # Progress reporting with multiple threads gets jumbled up, so be silent
            curl -s ${CURL_ARGS} -u "${USER}:${PASSWORD}" --create-dirs --range ${START}-${END} "$URL" -o "${LOCAL_PATH}.${COUNT}" &
            COUNT=$((COUNT + 1))
            START=$((END + 1))
        done
        wait
        # Merge the files back together
        rm -f "${LOCAL_PATH}"
        touch "${LOCAL_PATH}"
        COUNT=1
        while [ ${COUNT} -le ${THREADS} ] ; do
            cat "${LOCAL_PATH}.${COUNT}" >> "${LOCAL_PATH}"
            rm -f "${LOCAL_PATH}.${COUNT}"
            COUNT=$((COUNT + 1))
        done
        echo "Done"
    else
        curl ${CURL_ARGS} -u ${USER} --create-dirs "$URL" -o "${LOCAL_PATH}"
    fi
elif [ "${OPERATION}" == "put" ]; then
    if [ ! -f ${LOCAL_PATH} ] ; then
        echo "ERROR: ${LOCAL_PATH} does not exist" >&2
        echo "" >&2
        usage
        exit 1
    fi
    curl ${CURL_ARGS} -u ${USER} -T "${LOCAL_PATH}" "$URL"
else
    usage
    exit 1
fi
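
For example, assuming the script is saved as datastore-access.sh (a name I picked; the hostnames and paths below are placeholders too), you would pull a file from the old host and push it to the new one like this:

# Download with 4 parallel threads, then upload to the new host
./datastore-access.sh get old-esxi.example.com datastore1 /linux/linux.vmdk ./linux.vmdk 4
./datastore-access.sh put new-esxi.example.com datastore1 /linux/linux.vmdk ./linux.vmdk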

Configuration of the VM to the new ESXi

  1. After having transferred all our files (*-flat.vmdk, *.vmdk, *.vmx) we now have to create a new VM on the new server.
  2. Create a new VM using the exact same options as on the old ESXi.
  3. Choose Advanced options and select an existing hard disk.
  4. Finish and boot.

Final Configuration

  1. Since you are now probably on a new network, you need to reconfigure your network and your VMAC address.
  2. Create a VMAC address from your host panel.
  3. On CentOS, go to /etc/udev/rules.d/70-persistent-net.rules and edit it, making sure eth0 uses the newly created VMAC.
  4. Go to /etc/sysconfig/network-scripts and edit ifcfg-eth0 accordingly with your new IP, gateway and hwaddr, as in the sketch below this list.
  5. Edit /etc/sysconfig/network-scripts/route-eth0 and configure your gateway.
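
A sketch of what ifcfg-eth0 might end up looking like (all the addresses below are placeholders):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=203.0.113.10        # your new IP
NETMASK=255.255.255.0
GATEWAY=203.0.113.1        # your new gateway
HWADDR=00:50:56:AA:BB:CC   # the newly created VMAC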


Et voilà! You have now successfully migrated! 🙂

Suggestions or alternative methods are really appreciated 🙂

#centos, #datastore, #esxi, #esxi4, #esxi5, #migration, #moving, #vmware-2

Git: Cherry-picking from another repository

Quite often when working with an Android kernel there are some nifty commits you'd wish to add to your local repo. So here is the way to do it.

In my case I am working on the Huawei kernel for the G300, which involves the msm7627a chipset. So let's patch it a bit with some commits from the Code Aurora Forum (CAF).

First, let's fetch our project locally:

git clone https://github.com/ezeteze/android_kernel_huawei_u8815 g300-kernel

Okay, now let's add our remote and fetch from it.

git remote add msm git://codeaurora.org/kernel/msm.git

git fetch msm

Cool! Now git branch -a should show your branches and all the remote-tracking branches. You are now ready! You can either cherry-pick your commits or, if you are brave enough, do a full merge, which will probably result in countless conflicts!

Let's try out a cherry-pick:

akis@localhost:~/kernel/huawei/u8815$ git cherry-pick 4e17bc5b7846448344c262c89ce769f756864b62
[detached HEAD 7ce9f8c] msm: qdsp6v2: Reduce the step size of soft volume and soft pause.
Author: Deepa Madiregama <dmadireg@codeaurora.org>
2 files changed, 8 insertions(+), 2 deletions(-)

Success! Happy cherry-picking! 🙂
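
Bonus: if you want a run of consecutive commits, you can also cherry-pick a range (supported since Git 1.7.2; the SHAs below are placeholders):

# Apply every commit after <start-sha> up to and including <end-sha>
git cherry-pick <start-sha>^..<end-sha>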

#android, #cherry-pick, #cherry-picking, #code-aurora-forum, #github, #kernel, #msm