Archive for the ‘Linux’ Category

Centos and VMware tools

May 1, 2012

I found a funky blog post detailing the steps to get VMware Tools from the official VMware repo. The details below are for 4.1, but it should be easy enough to tweak for other versions of ESX…

Thanks Emanuelis!

Add VMware GPG keys:

rpm --import
rpm --import

Edit /etc/yum.repos.d/vmware-tools.repo:

name=VMware Tools

Install VMware Tools:

yum install vmware-open-vm-tools-nox



Gentoo, Pacemaker, and Apache

March 22, 2011

I’ve been playing around with creating HA Load Balancing Proxy servers with Apache on top of Pacemaker today.

Since Gentoo does its configuration a little differently from most distributions, this hit a hurdle.

Gentoo puts some Apache command-line options in the file /etc/conf.d/apache2; these decide the vhosts that start and other “-D” defines. Without these, Apache will fail to start.

As Pacemaker doesn’t know about this file, or these values, Apache was failing to spawn, and I was getting an error.

Simply copying the -D values into the HTTPOPTS variable in the /usr/lib/ocf/resource.d/heartbeat/apache file fixed the problem:
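As a sketch of the change (the exact -D defines here are examples; yours will match whatever your /etc/conf.d/apache2 actually contains):

```shell
# Gentoo keeps Apache's -D defines in /etc/conf.d/apache2, e.g.:
APACHE2_OPTS="-D DEFAULT_VHOST -D SSL -D SSL_DEFAULT_VHOST"

# The OCF resource agent never reads that file, so the same defines
# get copied into the HTTPOPTS variable in
# /usr/lib/ocf/resource.d/heartbeat/apache:
HTTPOPTS="-D DEFAULT_VHOST -D SSL -D SSL_DEFAULT_VHOST"
```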


Now I have two load balancers, running in an active/passive configuration…

Why I love Virtualisation…

February 18, 2011

I love VMware products in general, and ESX in particular. I just solved a problem in under 30 minutes, without any service interruption, that on the same physical box would have required a reboot… Amazing, you say? Well yes, when the /var partition on your mail gateway is dying, it is!

So our /var was getting full, and had been on and off for a while; it needs to be bigger, about double its current 4GB. What to do? Well, fire up the VI Client (yes, we are still running ESX 3.5i) and add a 10GB disk to the VM.

Now, this is Linux, so the drive doesn’t automatically show up. What to do next? Well, a quick Google said:

echo "- - -" > /sys/class/scsi_host/host0/scan

Sure enough, an ls of /dev now shows an sdb that wasn’t there before!

So, we add it into lvm:

pvcreate /dev/sdb
vgextend vg /dev/sdb

That gets us lots more space available, but I don’t really want /var split over two disks, so we will dedicate this pv to /var, and leave some free space on the old pv…

pvmove -n var /dev/sda4 /dev/sdb

That does the trick: /var is now on the new pv, but still only 4GB… I’ve never managed to make the lvextend command automatically use all of the available free space, so I looked at how many extents were free in pvdisplay -m and then used that figure to do the extension.

lvextend -l+1535 /dev/vg/var
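For the record, the extent count maps to size like this, assuming the usual 4 MiB physical extent size that vgdisplay reports by default:

```shell
# 1535 free physical extents at the (assumed) default 4 MiB PE size:
pe_mib=4
free_extents=1535
added_mib=$((pe_mib * free_extents))
echo "${added_mib}"   # 6140 MiB, i.e. roughly the 6GB needed to take /var from 4GB to 10GB
```

Newer LVM2 releases also accept percentage syntax, e.g. lvextend -l +100%FREE /dev/vg/var, which avoids counting extents by hand.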

OK, so the logical volume is now 10GB, but the file system (ext3) is only 4GB…

resize2fs /dev/vg/var

And there we go, all done!
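For future reference, the whole dance end to end (device and volume names as above; this needs root and the right devices, so treat it as a transcript rather than a script):

```shell
echo "- - -" > /sys/class/scsi_host/host0/scan   # make the new disk appear
pvcreate /dev/sdb                                # initialise it for LVM
vgextend vg /dev/sdb                             # add it to the volume group
pvmove -n var /dev/sda4 /dev/sdb                 # migrate just the var LV
lvextend -l +1535 /dev/vg/var                    # grow the LV by the free extents
resize2fs /dev/vg/var                            # grow the ext3 filesystem to match
```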

Edit: Apparently the rescan above only picks up new disks. It won’t notice a change on old disks, so if you increase a LUN and want Linux to pick up the change, you need to do:

echo 1 > /sys/bus/scsi/devices/3:0:7:1/rescan

where 3:0:7:1 is the device’s SCSI ID (host:channel:target:lun).
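If you’re not sure which host:channel:target:lun tuple belongs to which disk, you can list the rescan-capable devices directly, or use lsscsi if it’s installed:

```shell
# Each SCSI device exposes a per-device rescan hook; the directory name
# is the host:channel:target:lun tuple used above.
ls /sys/bus/scsi/devices/*/rescan

# With lsscsi installed, the same IDs are printed next to the block devices,
# so you can match e.g. [3:0:7:1] to /dev/sdb.
lsscsi
```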

Gentoo, WGET, and IPv6

February 15, 2011

Well, I may as well make use of my IPv6 Tunnel, but how to get Gentoo to prefer IPv6 where it exists?

My /etc/make.conf has lines for wget, but apparently they are out of date and have invalid escaping, leading to an error:

ValueError: No escaped character

Looking at a newer make.conf.example (hidden away in /usr/share/portage/config/make.conf.example) I see that the way file paths are written has changed. Merging the command to prefer IPv6 and the new format gives:

FETCHCOMMAND="/usr/bin/wget --prefer-family=IPv6 -t 5 --passive-ftp -O \"\${DISTDIR}/\${FILE}\" \"\${URI}\""
RESUMECOMMAND="/usr/bin/wget -c --prefer-family=IPv6 -t 5 --passive-ftp -O \"\${DISTDIR}/\${FILE}\" \"\${URI}\""

Which works like a charm….

Why I learnt to stop worrying and love LVM

May 15, 2010

A couple of days ago my home server fell over. It’s a Gentoo box running software RAID1 with LVM on top, and it acts as a web server, mail server, iSCSI server, IPv6 test bed, etc. Basically it does everything I’ve needed to do on Linux at work over the years, and needed to test first.

The poor thing often falls over; it sits in a small, unventilated room and doesn’t have the best supply of power. I thought nothing of it when it went down again. PITA, but just a reboot when I got home from work. Well, it turns out the drives were failing: mdadm was reporting degraded RAID sets.

First things first, I pulled the most obviously bad disk. Rebooted. Success! Stayed up another day. Then another failure; it looked like it was still having trouble reading from disk. Problem, as I had no spare disks. I did have a laptop drive from a previous machine, which I use for portable storage in a small USB caddy.

So last night I set to getting the server back up. I copied everything off the laptop drive, plugged it into the server, and booted off a Gentoo install disk.

Plus point no. 1: 2.5″ SATA drives use the same power/data connections as their bigger brothers; no converters needed!

The next job was to mirror, as best I could, the original drive’s partition table; not easy, as the laptop drive was 30% smaller! Fortunately the iSCSI partition is OK staying on the old disk until I get a chance to move it (it won’t degrade with the power off), and the rest will fit.

The first three partitions (boot, swap and root) were created exactly the same size as the originals and dd’d across. That gave me a good base, and a quick fsck showed the copy had gone well. The final stage was to pull off the LVM magic.

The first step was to bring up the old LVM partition and activate it. Unfortunately, vgscan didn’t see it. I scratched my head, then decided to mount the old root and see if I had an LVM backup config I could use. Interestingly, the partition wouldn’t mount; it stated that the filesystem was unrecognised. Then I remembered the RAID. Software RAID puts a superblock on the partition, enough to confuse the filesystem autodetect, but not enough to corrupt anything. Using the -t ext3 option mounted the drive.

This made me think: perhaps LVM was having the same problem? So I fired up mdadm and assembled a degraded RAID:

mdadm --assemble /dev/md4 /dev/sdb4 none
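A quick way to confirm the degraded array actually came up before poking LVM again (standard commands; the exact output depends on your array):

```shell
# The array should appear with one active member and a [_U]-style status:
cat /proc/mdstat

# Reports the array state (including "degraded") and which member is active:
mdadm --detail /dev/md4
```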

Then repeated the vgscan, and it found the volume group.

Now that I had the old volume group up and running, I could create a new pv on the new partition, and add it to the group.

The final job was to move the vital extents to the new drive. A couple of things could just be dropped (/usr/portage/distfiles, for example) and the rest were moved over a group of extents at a time. The iSCSI partition was left where it was for the moment.

A few fscks later, I had all of the new partitions mounted, and a chroot. GRUB was set up, as it was a new drive, and grub.conf altered to reflect the new non-RAID status. Now the server is running, albeit slower and with no RAID, until I get a couple of new disks.

Sometimes abstractions can be bloody useful.