Hardcore Linux

Anything about Ubuntu, CentOS, openSUSE and Fedora

Category Archives: Virtualization

KVM guest not automatically starting at boot sequence

I don’t know why, but I encountered this problem using Ubuntu 12.04 as a KVM host: even though I had already added the guest VM as a symlink in /etc/libvirt/qemu/autostart, the guest machine was still not started automatically during the host boot sequence.

Here’s a quick fix that works for me:

1. Modify /etc/init/libvirt-bin.conf and look for the line containing:

start on runlevel [2345]

2. Replace that with:

start on (runlevel [2345] and net-device-up IFACE=br0)

3. Done.

Please note that you should use your own bridge network device; in my example it is br0.
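As a side note, the cleanest way to create that autostart symlink in the first place is virsh itself, so it’s worth double-checking that the guest is really flagged for autostart. A quick sketch, assuming a guest named vm1 (substitute your own domain name):

# Flag the guest for autostart; this creates the symlink in
# /etc/libvirt/qemu/autostart.
virsh autostart vm1

# Verify: the output should include "Autostart: enable".
virsh dominfo vm1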


Installing KVM with Open vSwitch on Ubuntu 12.04

Here’s a good article from http://blog.allanglesit.com/. I tried it myself on a test server and it’s working great, though I’m still new to Open vSwitch.

The article was written for Ubuntu 12.04. I also found that the KVM version currently available in Ubuntu 12.04 performs better than the one in 10.04, which I think is a good sign when planning to deploy a KVM host for your VMs.

The actual URL: http://blog.allanglesit.com/2012/03/linux-kvm-ubuntu-12-04-with-openvswitch/

Done.
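To give a flavor of what the article walks through, here’s a minimal sketch of creating an Open vSwitch bridge and attaching the physical NIC to it (assuming eth0 is your uplink; the article covers the full KVM integration):

# Create the bridge and add the physical interface to it.
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0

# Inspect the resulting configuration.
ovs-vsctl show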

Automatically shutdown KVM Virtual Machines

Currently there’s no built-in tool to properly shut down VMs when the KVM host itself shuts down. So after a couple of hours of googling I got some ideas from this link and made some changes to simplify it. Here’s my version:


#!/bin/bash

# Gracefully shut down all running KVM guests, then force off any
# guest that is still running after the timeout.

LIST_VM=`virsh list | grep running | awk '{print $2}'`
TIMEOUT=90
DATE=`date -R`
LOGFILE="/var/log/shutdownkvm.log"

# Nothing to do if no guest is running.
if [ "x$LIST_VM" = "x" ]
then
    exit 0
fi

for activevm in $LIST_VM
do
    # Rough match: find the qemu/kvm process that carries the VM name.
    PIDNO=`ps ax | grep $activevm | grep kvm | cut -c 1-6 | head -n1`
    echo "$DATE : Shutdown : $activevm : $PIDNO" >> $LOGFILE
    virsh shutdown $activevm > /dev/null
    COUNT=0
    while [ "$COUNT" -lt "$TIMEOUT" ]
    do
        # COUNT=110 is a sentinel meaning "guest is down".
        ps --pid $PIDNO > /dev/null
        if [ "$?" -eq "1" ]
        then
            COUNT=110
        else
            sleep 5
            COUNT=$(($COUNT+5))
        fi
    done
    # Sentinel never set: the guest ignored the shutdown request.
    if [ $COUNT -lt 110 ]
    then
        echo "$DATE : $activevm did not shut down gracefully, forcing destroy" >> $LOGFILE
        virsh destroy $activevm > /dev/null
    fi
done
  1. Save the code in /etc/init.d/shutdownvm
  2. Then make it executable
    chmod 755 /etc/init.d/shutdownvm
  3. Create links in both rc0.d and rc6.d
    cd /etc/rc0.d
    ln -s ../init.d/shutdownvm K18shutdownvm
    cd /etc/rc6.d
    ln -s ../init.d/shutdownvm K18shutdownvm
  4. Done.
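As an aside, on Debian/Ubuntu you can let update-rc.d create those kill links instead of symlinking by hand; a sketch of the equivalent legacy invocation:

# Create K18 links for runlevels 0 (halt) and 6 (reboot).
update-rc.d shutdownvm stop 18 0 6 .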

Installing Virt Manager for Ubuntu 10.04

After setting up my KVM host on another machine, it’s time to configure your remote KVM guest manager, if you prefer GUI management. Here’s the quickest way to do it:

  1. Execute this in the console:
    $> sudo apt-get install ubuntu-virt-mgmt
  2. After that you can add the remote KVM host and manage your VMs (see the example below).
  3. Done.
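For the remote connection itself, virt-manager can talk to the host’s libvirt daemon over SSH. A minimal sketch, assuming the host is reachable as kvmhost and you can SSH in as root (substitute your own hostname and user):

# Open virt-manager connected to the remote libvirt daemon via SSH.
virt-manager -c qemu+ssh://root@kvmhost/system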

Backup Whole System via Network

You can easily perform a full backup, or even store a system as a raw image which can then be used as a guest machine in KVM. This takes a couple of hours (depending on the size of the source Linux system).

Here’s how:

  1. On the target machine, boot from a LiveCD or a rescue Linux distro such as SystemRescueCD, and from the console check that it detects the currently attached storage device:
    #> fdisk -l
  2. If it sees your hard drive, use netcat on the target to listen on port 12345 and write whatever arrives to the disk:
    #> nc -l 12345 | dd of=/dev/sda
  3. On the source machine, you can now execute the command:
    #> dd if=/dev/sda | nc 192.168.1.20 12345
  4. Note the port (12345) and the target IP (192.168.1.20) used in the example; adjust them to your own setup.
  5. You can also store the dump as a raw image using this command:
    #> nc -l 12345 | dd of=myimage.img
  6. Done.
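If the link between the two machines is slow, compressing the stream helps a lot. A minimal sketch using gzip on both ends, with the same hypothetical IP and port as above:

# On the target: listen, decompress, and write to the image file.
nc -l 12345 | gunzip -c | dd of=myimage.img

# On the source: read the disk, compress, and send.
dd if=/dev/sda | gzip -c | nc 192.168.1.20 12345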

LXC on Ubuntu 10.04

LXC (LinuX Containers) is a lightweight virtualization technology that lets you run isolated processes with their own resources. You can run these so-called “containers” inside your Linux host and use its resources without the need for paravirtualized drivers; performance is nearly native.

How to install?

1. First install the necessary packages.

$> sudo apt-get install lxc bridge-utils debootstrap

2. Mount the cgroup filesystem, which LXC uses to regulate and limit resources for the containers. Modify your /etc/fstab and add the following (then mount it, as shown below):

none  /cgroup  cgroup  defaults  0 0
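The /cgroup mount point doesn’t exist by default, so create it and mount everything from fstab; a quick sketch:

sudo mkdir -p /cgroup
sudo mount -a  # mounts every entry in /etc/fstab, including /cgroup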

3. Then configure your network interface for bridged mode.

auto lo
iface lo inet loopback

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_maxwait 5
    post-up /usr/sbin/brctl setfd br0 0

4. You can also set it to a static IP.

auto lo
iface lo inet loopback

auto br0 
iface br0 inet static
   address 192.168.1.10
   netmask 255.255.255.0
   broadcast 192.168.1.255
   gateway 192.168.1.1
   bridge_ports eth0
   bridge_stp off
   bridge_maxwait 5
   post-up /usr/sbin/brctl setfd br0 0

5. Reboot and you’re set. You can now create your own containers or use the pre-configured ones from this site; basic usage is sketched below. I myself have still failed to create my own container templates, so for now I’m using what bodhizazen has created.
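Once a container is in place, day-to-day usage boils down to a few commands. A minimal sketch, assuming a container named vm0 (substitute your own container name):

# Start the container in the background.
sudo lxc-start -n vm0 -d

# Attach to its console (Ctrl-a q detaches).
sudo lxc-console -n vm0

# List containers, then stop the one we started.
sudo lxc-ls
sudo lxc-stop -n vm0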

VMware Server works fine on CentOS 5.5

Forget all the previous problems with VMware Server 2.0.2 on CentOS 5.4; with the release of CentOS 5.5, they are all gone. VMware Server can run smoothly again.

Updating to 5.5 is also a breeze; just run these commands from a root console:

1. First check your current CentOS version

 rpm -q centos-release

2. Then, check the list of packages to be updated

 yum list updates 
 yum list updates | grep centos-release

3. Then, if all is good to go:

 yum update

4. You just need to reconfigure VMware Server via

 vmware-config.pl

5. Follow the same installation pattern as before and it’s done.

VMware to VirtualBox Migration

Most of us already know the power of VirtualBox, and with the release of 3.0.2 (July 10, 2009) it is now more stable than before. With all the fuss around VMware Server 2.x, I think it’s time for me to migrate from VMware to VirtualBox. But the question is: how can we move away from VMware and take our current VMs with us?

Here’s how:

1. Please note this! Remove VMware Tools from the guest OS/VM first, before doing the next step. Leaving VMware Tools installed might make this procedure fail.

2. Say your VM’s folder is called “myVM”; from the console, change into that folder:

cd myVM

3. A VMware disk split across multiple files needs to be prepared first, before the migration. Perform the following to consolidate it into a single file:

vmware-vdiskmanager -r mydisk.vmdk -t 0  output.vmdk

4. Then follow up with the VirtualBox clone command:

VBoxManage clonehd output.vmdk mydisk.vdi

5. Finished! If you are on Linux, the VDI file will be created under ~/.VirtualBox/HardDisks/

6. To complete the process, create your new VirtualBox VM using the migrated VDI disk (see the sketch below).

7. Done.
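For the command-line inclined, here’s a rough sketch of wrapping the migrated disk in a new VM using recent VBoxManage syntax (the 3.0-era commands differed; myVM and the ostype are placeholders):

# Create and register the VM (adjust --ostype to your guest).
VBoxManage createvm --name myVM --ostype Linux26 --register

# Add a SATA controller and attach the migrated disk to it.
VBoxManage storagectl myVM --name "SATA" --add sata
VBoxManage storageattach myVM --storagectl "SATA" --port 0 --device 0 --type hdd --medium mydisk.vdi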

How to resize a VMware virtual disk

More often than not, the initial disk space for your VMware virtual disks is not enough in the long run, and you’ll wish to extend their capacity. I’ve tried a few tutorials I googled, but most of them either didn’t work for me or were too complex to follow, and some of the commands were really unnecessary.

So here’s my version of growing a VMware virtual disk from Linux, for a VMware Linux guest OS.

  1. First, check the current capacity of your VMware disk. Start your guest OS and on the console perform the following:

fdisk -l /dev/sda # or the device address of your VMware vdisk on your guest OS

It will respond with something like this:

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14          77      514080   82  Linux swap / Solaris
/dev/sda3              78        1044     7767427+  8e  Linux LVM

2.  Keep this output for future reference, then shut down the guest OS.

shutdown -h now

3.  On your host OS (the one that serves your virtual machines), perform the following as root:

vmware-vdiskmanager -d myvirtual/myvirtual.vmdk # this will defrag the virtual disk.

followed by:

vmware-vdiskmanager -x 30GB myvirtual/myvirtual.vmdk # grow the virtual disk to 30 GB

4.  Now boot your guest OS, confirm that it still boots normally, and on the console enter:

fdisk -l /dev/sda

5.   Compare the old values to the new ones and confirm that the disk was successfully resized. Note that only the disk itself grew; the partitions inside it still have their original size.

6.  In my case, an LVM setup, you can perform the following to check the current status of your volumes:

pvdisplay # check the LVM physical volumes

vgdisplay # check the volume groups

lvdisplay # check the logical volumes

7.  When creating servers on VMware, putting them on LVM is a good way to safely increase or decrease the storage capacity of your guest OS.

8. Create an LVM physical partition in the newly added space (in my case it will be /dev/sda4) and add it to the current LVM volume group, as sketched below.
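A rough sketch of that step, assuming the volume group is named base (inferred from /dev/base/system below) and the new partition ends up as /dev/sda4:

# Create /dev/sda4 with fdisk in the newly added space (partition
# type 8e, Linux LVM), then reboot or re-read the partition table.

# Initialize the new partition as an LVM physical volume.
pvcreate /dev/sda4

# Add it to the existing volume group.
vgextend base /dev/sda4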

9. Now you can run lvextend to grow your current logical volume.


lvextend -L+10G /dev/base/system
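One thing lvextend doesn’t do is grow the filesystem inside the logical volume, so that has to be resized as well. A minimal sketch, assuming an ext3 filesystem on /dev/base/system:

# Grow the filesystem to fill the enlarged logical volume.
resize2fs /dev/base/system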

10. Restart your guest OS and it’s done.
