VMware Player performance

Well, after my excitement over KVM, I somehow found myself looking at VMware’s desktop products again (Workstation, Server, Player and Fusion). I use Fusion on my MacBook and it has always worked great; I’ve never even had to consider tweaking anything to improve its performance. Contrast that with running Server or Player on my Core 2 Duo Linux box. As I said in an earlier post, I’ve never been able to get them to work very well, especially when trying to run more than one virtual machine. This machine has 2GB of RAM, and the performance degradation with two busy VMs running on it was pretty bad. I’m sure I’ve looked at MemTrimRate and various other tweakable parameters and never really found anything that helped.

So recently I got sidetracked looking at this. It requires using one of VMware’s desktop products, so I started having a look at VMware Player on Linux (v2.5.2 is the latest). It didn’t take me long to get it going, and I must admit it works pretty damn well, even though I never bothered to set up sound (there are some good hints further on in that thread about key repeat issues). But of course, I wanted to try running another Player session at the same time, and I was promptly reminded how virtualization turns to crap on this decently specced machine as soon as you run two or more VMs under VMware Player.

So I started googling … and there’s a lot of info out there, as VMware Player/Workstation/Server all exhibit much the same behaviour under Linux. Much of the good info relates to VMware Server. The most useful post was this one, which talks about a few things to do. And this one and this one were useful too. I ended up using a combination of some of the settings they mentioned. The main ones that I found particularly useful were:

In /etc/vmware/config:

tmpDirectory = "/tmp/vmware"
mainMem.useNamedFile = "FALSE"

And then mkdir /tmp/vmware and mount it as tmpfs, with the following line in my /etc/fstab:

tmpfs /tmp/vmware tmpfs defaults,size=3G 0 0
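
For reference, the actual setup commands look something like this (a sketch, assuming the fstab entry above and a root shell):

# create the mount point and mount the tmpfs entry from /etc/fstab
mkdir -p /tmp/vmware
mount /tmp/vmware
# sanity check that it really is a tmpfs with the 3G cap
df -h /tmp/vmware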

And also add the following settings to /etc/sysctl.conf:

vm.swappiness = 0
vm.overcommit_memory = 1
vm.dirty_background_ratio = 5
vm.dirty_ratio = 100    (actually, later I ended up changing this to 80)
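
To make those take effect without a reboot, something like the following should do it (run as root):

# reload all the settings from /etc/sysctl.conf
sysctl -p
# or poke an individual setting, e.g.
sysctl -w vm.swappiness=0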

Those posts obviously mention a few other settings. Quite a few relate to VMware’s tricks for sizing guest memory up and down, so that a VM configured with 1GB of RAM doesn’t necessarily consume a full 1GB on the host. I was actually keen to keep some of that automatic memory resizing, even at the expense of some performance.

Well, these changes do make a marked difference for me. I can even convince it to run two VMs, each with 1GB of RAM, on my 2GB server … and it a) lets me do this and b) doesn’t go into the swapping madness it used to.

There’s actually an interesting comment in that last link I mentioned, from someone who works on the memory subsystem (for VMware, I assume), which might explain why VMs seem ‘much smoother on OS X’ than they do on Linux:

The big-picture answer is that the Linux virtual memory subsystem is simply not tuned for running VMs

I have to admit Linux’s swapping behaviour does seem somewhat more painful than other OSes’ at times.

Alas, being a glutton for punishment, I wanted to push things, so I ran a 2nd VM (Windows 7 RC 64-bit) with 1GB of configured RAM and a 3rd VM (Ubuntu 9.04 64-bit) with 512MB of RAM. That meant 3 guests with a configured total of 2.5GB of guest RAM on my Core 2 Duo with 2GB of RAM. Yes, I didn’t think the outcome would be pretty, but I thought VMware might be able to handle it.

Now, I only lightly loaded these VMs, and it wasn’t too long before my box went into paging hell. Some interesting commands to run are:

vmstat 5

And watch the si (swap-in) and so (swap-out) columns. And also:

watch grep -A 1 dirty /proc/vmstat

I picked up the latter command from this interesting performance troubleshooting page, which does a better job of explaining how that output relates to dirty_ratio and dirty_background_ratio.
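
As a rough back-of-envelope for my 2GB box (only approximate, since the kernel calculates the thresholds against ‘dirtyable’ memory rather than MemTotal):

# where the writeback thresholds roughly sit with the ratios above
total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
echo "background writeback starts at ~$(( total_kb * 5 / 100 / 1024 ))MB dirty"   # dirty_background_ratio = 5
echo "writers forced to flush at ~$(( total_kb * 80 / 100 / 1024 ))MB dirty"      # dirty_ratio = 80

On 2GB that works out to roughly 100MB of dirty pages before background writeback kicks in, and around 1.6GB before processes doing the writing get throttled and forced to flush.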

Even though the 3 VMs were operational, the major problem was that occasionally one of them would simply disappear. A closer look at dmesg revealed that the OOM killer had stepped in. I’d never heard of the OOM killer before. Basically, on 32-bit Linux the kernel keeps watch over what’s left of ‘low memory’, which is roughly the first 900MB or so of your system’s RAM. It has some special purposes for the kernel, and it’s important that you never run it down to nothing. So the kernel will go out and kill the process hogging the most ‘low memory’ if it detects that it might run out. You can apparently turn it off, but the bigger problem is that you can run out of ‘low memory’ in the first place, regardless of how much RAM you have. A key workaround is to move to 64-bit Linux. My server had run 32-bit Debian (first Etch and later Lenny) for quite some time, and only recently had I been looking at Ubuntu 9.04 64-bit. I was quite enjoying Ubuntu for a while, but bit by bit a few things started to annoy me about it, so I switched back to 32-bit Lenny to do all this VMware testing.
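
A couple of commands that I believe show how ‘low memory’ is tracking (the low/high breakdown only appears on 32-bit kernels with highmem):

# low/high memory breakdown on a 32-bit kernel
free -lm
grep -i 'low\|high' /proc/meminfo
# and check whether the OOM killer has been firing
dmesg | grep -i 'oom\|out of memory'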

So I haven’t really solved this lowmem problem. I could just switch to 64-bit Linux and see how it goes. In the meantime, I’m actually going to get some more RAM for this box; I’m interested to see how it performs when there is more RAM available than the total required by the guests. In theory I might still hit lowmem problems, but we’ll see.