I’ve been running KVM for quite a while on my lab server. It has been running without issue, but with the release of vSphere/ESXi 6.0 I felt it was time to move back to VMware.
I wanted to preserve the virtual machines already running, so I set out to move them to ESXi. I ran into some issues, which I’m not sure are a generic problem or specific to ESXi 6.0, but I’ll describe what I did.
To convert the existing disk images to VMware’s vmdk format, use the qemu-img program from the qemu-utils package (in Ubuntu).
The process is straightforward:
$ sudo qemu-img convert -p -O vmdk DiskImage.img DiskImage.vmdk
- Transfer the disk image to ESXi, either with scp (enable SSH on the ESXi host first) or over NFS (as I did)
- Create a new virtual machine with custom options and add the converted disk
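For the scp route, the copy looks roughly like this; the host name and datastore path are placeholders for your environment:

```shell
# Copy the converted disk onto an ESXi datastore over SSH.
# "esxi-host" and "datastore1" are placeholders for your setup.
scp DiskImage.vmdk root@esxi-host:/vmfs/volumes/datastore1/
```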
Unfortunately, this did not work as expected: when booting the converted images, the Linux instances inside all crashed during boot with this error message (or something similar)
It turns out two steps were missing. After transferring the converted disk image to ESXi, do the following from the ESXi CLI (via SSH).
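The step here is vmkfstools’ clone operation, run on the ESXi host itself; a sketch, assuming the transferred image is DiskImage.vmdk (the output filename is a placeholder):

```shell
# Clone the uploaded vmdk into a native ESXi disk.
# The output name "DiskImage-esxi.vmdk" is a placeholder.
vmkfstools -i DiskImage.vmdk DiskImage-esxi.vmdk -d thin
```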
-d is the output format, which can be zeroedthick, eagerzeroedthick or thin.
Now open the newly created vmdk descriptor file in vi and change the ddb.adapterType line from "ide" to "lsilogic".
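If you would rather script that edit than open vi, sed can make the same change; the filename here is a placeholder for your descriptor file:

```shell
# Swap the adapter type in the vmdk descriptor from ide to lsilogic.
# "DiskImage.vmdk" is a placeholder for your descriptor file.
sed -i 's/ddb.adapterType = "ide"/ddb.adapterType = "lsilogic"/' DiskImage.vmdk
```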
After doing this, add the image(s) to a newly created VM and boot.
(This was done in Ubuntu but will work on any Linux variant with qemu-img. If you want to do this on Windows, StarWind V2V Converter is said to be able to do the job.)