So, as per my previous post, I was searching for an upgrade path for my Thinkpad R60 virtualisation server running Debian Lenny. I'd tried upgrading to Squeeze a few times … and didn't like it … always reverting back to Lenny.
So after trying a few different distros/virtualisation hypervisors (using a spare disk), I settled on installing Scientific Linux 6.0 64 bit. Key reasons are:
- It's effectively Redhat 6.0, and Redhat targets stability rather than 'latest and greatest', which is what I'm after with a virtualisation box.
- SL 6.0's KVM stack ships a version of libvirt that lets you disable the memory balloon (the one in Squeeze always has the memory balloon enabled). The memory balloon stuff is more annoying than useful on my little laptop server.
- It didn't seem too annoying. Well, that's what I thought. Less annoying than Debian Squeeze, I guess.
I'd been trying out SL 6 on a spare disk, so now I thought I'd put it 'properly' on the two 500GB drives in the R60. I'd always run Lenny with a mirrored root and /boot, so I wanted to do the same with SL 6.0. I'd already split the mirrors on the two 500GB drives. The original md devices I used on the first disk were md2 and md3; they currently held an old Debian Squeeze install with a deliberately failed mirror. The other disk had md20 and md30, holding the original Lenny install that I had 'split off' prior to my last Squeeze upgrade (md20 and md30 also being a degraded mirror). I wanted to keep md20 and md30 for now in case I wanted to go back to Lenny, and would overwrite md2 and md3 with SL 6.0. I still wanted to put SL onto RAID1 mirror devices, because at some point I would kill md20 and md30 on the 2nd disk and properly mirror md2 and md3 again.
So I wanted to install SL 6.0 onto md2 and md3, which were both currently failed mirrors. Do you think it would let me do this? Nope. It seems the anaconda installer won't let you install onto a degraded RAID1 … which is very very sad. To me it seems an obvious thing that you'd want to do. I found redhat bug 188314, which is basically a depressing read.
So, I dug out yet another spare disk (an old 30GB IDE), removed the 2nd 500GB drive, repartitioned the 30GB drive to match the root and /boot partition sizes, and then used mdadm to add in the extra partitions until eventually md2 and md3 were active RAID1 mirrors.
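For the record, rebuilding the degraded arrays went roughly like this (a sketch — the md device names are from my setup, and I'm assuming the spare disk shows up as /dev/sdb with matching partitions sdb1 and sdb2):

```shell
# Partition the spare disk to match the root and /boot partition sizes
# (done interactively with fdisk beforehand), then add the new
# partitions into the degraded arrays:
mdadm /dev/md2 --add /dev/sdb1
mdadm /dev/md3 --add /dev/sdb2

# Watch the resync; both arrays should eventually show [UU]
cat /proc/mdstat
```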
Then I tried to install again (luckily you can boot off a USB-attached DVD drive on the R60). This time I got past the partitioning hiccup and had it installed. I used the SL 6.0 x86_64 LiveMiniCD … which doesn't ask too many questions and gives you a reasonable (for 2011) size install. One thing I learnt from previous attempts on my R60: at the initial GRUB screen, press a key to get the boot menu, cursor down to 'install', press tab, edit the 2nd line, and add the word 'nomodeset' onto the end of the main kernel boot line. The 'nomodeset' turns off KMS and prevents the extremely annoying flickering I would previously get.
Once I had it installed, I then went about getting it sorted to work much the same as my Lenny setup. I’ll include some of my notes further down for reference, and just discuss some of the aspects of KVM setup right now.
So in order to get KVM set up right, I installed the main virtualisation stuff:
yum groupinstall Virtualization
yum groupinstall "Virtualization Client"
yum groupinstall "Virtualization Platform"
Then I had to edit /etc/libvirt/libvirtd.conf in order to get rid of the dependence on PolicyKit (or attempt to get rid of the dependence). Then I copied in the xml libvirt files from my Lenny install and started modifying them. I changed the kvm binary references to /usr/libexec/qemu-kvm and as I mentioned in my previous post, my Windows 7 32 bit VM always crashed on shutdown until I switched the cpu from i686 to x86_64 (as I had recently upgraded the CPU in my R60 from a 32 bit T2400 to a 64 bit T7200). The machine type in the xml files is also a puzzle. Lenny has a completely different set of them compared to Redhat/SL. Lenny had ‘pc’ as the main machine type, and on most of my attempted Squeeze upgrades, somehow the upgrade process seemed to think it was smart to change all my ‘pc’ machine references to ‘pc-0.12’. I remember noticing that this makes Windows think it has a new network card, new hard drive etc….
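From memory, the libvirtd.conf changes were along these lines (treat the exact values as an assumption — the point is switching the unix socket auth away from PolicyKit):

```
# /etc/libvirt/libvirtd.conf
unix_sock_group = "kvm"
unix_sock_rw_perms = "0770"
auth_unix_rw = "none"
```

Then a `service libvirtd restart` for it to take effect.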
With Redhat/SL the QEMU/KVM machine types are pc, rhel6.0.0, rhel5.5.0, rhel5.4.0. It says ‘pc’ is the same as ‘rhel6.0.0’, so if I’d left my libvirt xml files all with ‘pc’ they’d be automatically upgraded … just like Squeeze did. I think I currently have rhel5.4.0 as the choice for my Windows 7 VM for lack of a better choice.
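For reference, the machine type (and the qemu-kvm binary path mentioned above) live in each guest's libvirt XML, roughly like this:

```xml
<os>
  <type arch='x86_64' machine='rhel5.4.0'>hvm</type>
</os>
<devices>
  <emulator>/usr/libexec/qemu-kvm</emulator>
  <!-- rest of the devices section omitted -->
</devices>
```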
Anyway, in the process of getting Windows 7 going again, I got the dreaded 'Windows Activation error'. Sure enough, it now thought that not only had the network card and the hard drive changed, it also had a new CPU. I thought one of the 'features' of virtualisation is the ability to have a 'constant' hardware abstraction.
So I try to activate Windows 7 online. That fails, so I have to ‘ring MS$’ which is definitely the low point of the week. I type in a very long string of numbers on the phone keypad, only to be told that I needed to be put through to an operator. I read the same long series of numbers to a human, who then reads a similarly long series of numbers back to me … which finally does the trick.
I'm thinking at this point … that maybe I should listen to all the people that comment on my ESXi posts, ditch KVM and just switch to ESXi. ;-)
But I am a glutton for punishment.
Anyway, it is all kind of working OK now. I added in the line to disable the memory balloon for my KVM guests, and also found there were some ksm daemons that needed to be disabled, as they sporadically ate lots of CPU as well.
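The balloon change itself is a one-liner in each guest's XML:

```xml
<!-- inside the <devices> section of the guest XML -->
<memballoon model='none'/>
```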
# Get rid of the leftovers from doing the LiveCD install
chkconfig --del livesys
chkconfig --del livesys-late
# Turn SSH on
chkconfig --level 345 sshd on
service sshd start
# Turn NetworkManager off.
chkconfig NetworkManager off
service NetworkManager stop
# Updated eth0 and br0 network configs
edit /etc/sysconfig/network-scripts/ifcfg-eth0 and ifcfg-br0
chkconfig --level 345 network on
service network start
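The eth0/br0 configs end up looking roughly like this (the addresses here are made-up examples, not my actual ones):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DELAY=0
```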
# The LiveCD never seems to set the hostname
set HOSTNAME in /etc/sysconfig/network
Add our own hostname and IP to /etc/hosts
Comment out hiddenmenu in /boot/grub/menu.lst. Also add in my Lenny boot menu item
# Fix up some sudo rules
visudo
sudo yum upgrade
# Add in all the virtualisation stuff
yum groupinstall Virtualization
yum groupinstall "Virtualization Client"
yum groupinstall "Virtualization Platform"
# Turn off ksm
chkconfig ksm off
chkconfig ksmtuned off
service ksmtuned stop
service ksm stop
# LiveCD leaves firewall on, so need to do some updates to /etc/sysconfig/iptables and restart the iptables service
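The iptables update just means adding ACCEPT lines before the final REJECT rule in /etc/sysconfig/iptables, something like (the port here is an example — 3389 being for xrdp later on):

```
-A INPUT -m state --state NEW -m tcp -p tcp --dport 3389 -j ACCEPT
```

then `service iptables restart`.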
#I installed the development tools and some other stuff
yum groupinstall "Development tools"
yum install libX11-devel
yum install sg3_utils
yum install expect
yum install libXfont-devel
#downloaded EPEL release 6.5 rpm to get EPEL set up
#Compiled X11rdp and xrdp from svn and cvs respectively. Needed automake1.9 for X11rdp. Downloaded a srpm for it and compiled by hand. xrdp 0.6 has changed a bit. Still has the bug where it eats heaps of CPU though, so added the changes mentioned on this page; http://sourceforge.net/projects/xrdp/forums/forum/389417/topic/3706601
# Quite a few deps to handle to do the xrdp compile. Can't remember all of them
yum install pam-devel
yum install libXfixes-devel