Using Windows or Linux on FreeBSD's vm-bhyve

I've been finding Windows 10 in FreeBSD's version of VirtualBox unbearably slow, so I decided to give bhyve a try. Much of this is taken from vm-bhyve's Quick start guide.

The first steps are fairly straightforward. Install vm-bhyve. You can use pkg for this.
pkg install vm-bhyve bhyve-firmware 

The firmware package is recommended if you'll have a guest, such as Windows, that can use UEFI. It's a meta-package that also installs a couple of other UEFI bhyve packages.

Create your vm directory. We'll just call it vm for the purposes of this tutorial. If using ZFS, use zfs create pool/vm; if using UFS, mkdir /vm will do. Now that we have our main directory, we can add these lines to /etc/rc.conf. (In this case, let's assume we used UFS and the directory is /vm.)
vm_enable="YES"
vm_dir="/vm"

Now run vm init. This will create the required subdirectories under /vm. Next we want the templates. We could just get the Windows one, but we may as well copy all the samples that were installed along with vm-bhyve.
cp /usr/local/share/examples/vm-bhyve/* /vm/.templates

Note the dot in front of .templates; it's a hidden directory.

Now we have to create a switch which the vm will use for networking. We'll keep this simple: the switch will be called public and attached to our main network card. Say the card connecting you to the Internet is em0.
vm switch create public
vm switch add public em0
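You can verify the result with vm switch list, which shows each switch and the ports attached to it.
vm switch list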

We need a Windows 10 (or whatever; in this case, we'll assume Windows 10) iso. You can put it wherever you like; for convenience, I would put it in the same directory that you have for the vms. We'll assume it's called Windows-10.iso.

We can now create the guest. The -t refers to type (the man page says -t refers to template), so in this case we use -t windows. The -s refers to size; the default is 20G for Windows, but we'll make it 40G. You can call the guest whatever you want; in this case, we'll call it winguest.
vm create -t windows -s 40G winguest

If you wish to change the defaults, now run vm configure winguest and a file will open showing them. For example, memory is set at 2G; you may wish to raise this, or the number of processors, which is set to 2.
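The sample windows template's defaults look roughly like this (check your own file, as the samples change over time); to give the guest more resources, just edit the cpu and memory lines.
loader="uefi"
graphics="yes"
xhci_mouse="yes"
cpu=2
memory=2G
network0_type="e1000"
network0_switch="public"
disk0_type="ahci-hd"
disk0_name="disk0.img"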

Once the guest is created, we can begin the installation. To view it, we can use a vncviewer, e.g., remmina, tightvnc, or anything else you prefer.

vm install winguest /vm/Windows-10.iso

You will see a message that it has booted. To see what's going on, you will need a vncviewer. The guest's display is on port 5900, so if you were using tightvnc's vncviewer, the command would be vncviewer localhost:5900 or vncviewer localhost:0; either should work. This should take you to the Windows install. It will reboot once or twice.

When complete, it reboots again. (You will probably have to reconnect the vncviewer after each reboot.) This time it will go through setup, asking location, choice of keyboard, and so on. When that's done, you should have a working Windows 10 installation.

In the future, when you want to run it, you can start it with the command vm start winguest. You will need to run a vncviewer on port 5900 to view its screen. I have found it to run far more smoothly than Windows in FreeBSD's VirtualBox. There are various improvements you can make, such as replacing the default e1000 virtual NIC with virtio, but as I only need Windows for a few quick things, I haven't looked into that.
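If you do want to try virtio, it's a one-line change in the guest config. Note, though, that Windows doesn't ship with virtio drivers; you'd have to install them in the guest (e.g., from the virtio-win driver ISO) before switching.
network0_type="virtio-net"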

Speaking of VirtualBox, you can have it installed on the same machine as vm-bhyve. However, you can't run both at the same time. Before starting a VirtualBox machine, you first have to unload the vmm module as root or with root privileges.
kldunload vmm

Once VirtualBox has stopped, you should be able to run any vm-bhyve machine again without having to load modules yourself. If the VirtualBox machine used bridged networking, I have sometimes, though not consistently, found that I had to run service vboxnet stop before being able to reach the bhyve vm.
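Putting it together, switching between the two looks something like this (the guest name is just an example):
vm stop winguest     # bhyve guests must be stopped; vmm won't unload while in use
kldunload vmm
# ... run and then shut down your VirtualBox machine ...
vm start winguest    # vm-bhyve loads vmm again on its own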

bhyve can also run Linux VMs. RedHat and its clones are installed in the same way, but for the type use centos7.
vm create -t centos7 -s 20G redhat9

Make the size whatever you wish, but use centos7 for the type as long as the guest is RHEL7 or later. It works fine with RHEL9, Rocky, and other RH types. (I'll get to Fedora in a bit.)

However, most other versions of Linux need you to run vm configure <machine_name> to change their defaults. There is a wiki which has a guest example section. I found that for a Debian install, if I wanted a GUI, I had to change the Debian default of loader="grub" to loader="uefi" and add a line graphics="yes"; otherwise the built-in vnc server wouldn't work, although I could do an install by using vm console debian. In contrast, the console doesn't work with a RH install that boots into a GUI.
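In other words, the changed lines in the Debian guest's config end up as:
loader="uefi"
graphics="yes"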

Apparently the VNC only working with uefi is something most people know; I'm sure it's in the docs somewhere, but I've missed it. Basically, I've found that if you want to run any Linux with a GUI, the best thing to do is use the centos7 template, increasing the memory if booting from the install image doesn't work. For example, in my experience, a RedHat or clone server might need 2048 MB of memory; this wasn't consistent, e.g., it was true with genuine RH but not with AlmaLinux. For RHEL9 and clones, I used the centos7 template, which defaults to 512 MB of memory.
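Raising the memory is just a matter of editing that one line in the guest's config (via vm configure <machine_name>):
memory=2048M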

I also found (I think this is in the documentation) that uefi-booting installs only take up to 3 additional disks; with grub or other boot methods, you don't have this limitation. However, as of May 2023, I have been able to add more than three disks to a Rocky9 bhyve install. Note that each time I added a disk, the guest's network interface name would change: with one disk it was enp0s3, with one more disk it became enp0s4, and so on. So if you add a disk, at least to a uefi boot, I recommend connecting afterwards with a vncviewer and checking whether you need to alter the network configuration.
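For reference, adding a disk is a pair of lines in the guest config plus an image file; the type, size, and guest name here are just examples (vm-bhyve also has a vm add subcommand that can do this for you).
disk1_type="virtio-blk"
disk1_name="disk1.img"
truncate -s 10G /vm/rocky9/disk1.img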

As bhyve matures, I'm sure there will be more documentation and more tools like vm-bhyve for handling bhyve virtual machines.

In September of 2023, an issue arose with bhyve machines that use uefi, which includes Windows and RedHat bhyve VMs: the machines would fail to start. A Windows VM gave an error (not a BSOD, but an error showing a problem); an RH VM seemed to be starting but never reached the login screen. The solution is given in this freebsdforums thread by user vermaden in post #7. With vm-bhyve, just run vm configure <vm_name> and at the end of the file add
bhyve_options="-A"

Interested readers can investigate the links in the thread for further information, but the fix is to add that -A to your vm's config options. This is necessary with Windows, RH, and RH clones, but I haven't needed it with Fedora or Arch.

I have also found this unnecessary in recent VMs of RH and clones, as of November 2024. In January 2024, I went to install Fedora and found that the mouse wouldn't work with the GUI install. I also tried installing Arch, and found that once I installed X and ran startx, I had no keyboard or mouse input. I asked on freebsdforums and user _al pointed me to a thread with a solution. That thread did a bit more, but in my case, all I had to do was comment out the line reading xhci_mouse="yes" in the default centos7 template I was using, and mouse and keyboard worked in these Linux installs with X.
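That is, the line in the guest's config simply becomes:
#xhci_mouse="yes"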

Hopefully, this will get fixed soon. See the FreeBSD bug report. There's already a working patch to fix it that should be in packages soon.

This also, as of November 2024, no longer seems to be the case. The config can have the xhci_mouse="yes" line without it causing a problem.

With some distributions, I've had other issues when using uefi boot. The problem is that if you want a GUI, you have to choose UEFI. What happens is that once everything's installed, the reboot may go smoothly, but if you then power down the vm, the next time you start it you'll see something like
BdsDxe: failed to load Boot0001 "UEFI BHYVE SATA DISK
BHYVE-48FF-992B-D5E0" from
PciRoot(0x0)/Pci(0x4,0x0)/Sata(0x0,0xFFFF,0x0): Not Found
>>Start PXE over IPv4.

Most of what I have here is taken from an article at davidschlachter.com.

It can't find the EFI bootloader, so it's trying to boot over the network. Now, if you wait (and you may be waiting for a couple of minutes), it eventually goes to an EFI shell. At that shell's prompt, type
FS0:

If your fonts don't make that clear, it's F, S, zero, and a colon. Your screen should then show FS0:\> and at that point various commands, such as ls, will work. You're looking for the directory holding the grubx64.efi file. On the davidschlachter.com page he's using Debian, so it's in EFI/debian. Once you've found grubx64.efi, you can cd into its directory, type grubx64.efi, and it will boot.
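For example, the session might look something like this (the paths will differ by distribution):
FS0:\> ls EFI
FS0:\> cd EFI\debian
FS0:\EFI\debian\> grubx64.efi

That page also gives a suggestion for fixing this permanently on a Debian system (it also works for Devuan); just run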
grub-install --bootloader-id=boot

which will install it into EFI/boot instead of EFI/debian. The article also mentions renaming the grubx64.efi file to bootx64.efi. (It says you'll probably have to rename it, but there's no probably about it.) Rename it.
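Assuming the ESP is mounted at /boot/efi (the Debian default), the rename would be something like:
# path assumes Debian's default ESP mountpoint of /boot/efi
mv /boot/efi/EFI/boot/grubx64.efi /boot/efi/EFI/boot/bootx64.efi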

With Fedora and other RH-based distributions it just works; you don't have to worry about EFI installs not booting. (Don't forget, with Fedora you do have to comment out the xhci mouse line.)

With ArchLinux, one can modify the solution from davidschlachter.com. When installing grub on Arch, if one follows the wiki, they suggest the command
grub-install --target=x86_64-efi --efi-directory=esp --bootloader-id=GRUB

with esp being the directory you're installing into, usually /boot. The wiki does point out that the bootloader-id of GRUB can be changed. So, even if you've already installed grub, you can reinstall it with
grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=boot

This puts the efi file in /boot/EFI/boot. Then, as with Debian and Devuan, rename grubx64.efi to bootx64.efi and it will boot without problems. I only installed Arch to play around with it a bit, so I don't know how one deals with kernel updates; I imagine you might just have to reinstall grub.
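That is:
# rename so the firmware finds the fallback path EFI/boot/bootx64.efi
mv /boot/EFI/boot/grubx64.efi /boot/EFI/boot/bootx64.efi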

Alpine Linux also works well, but not with the suggested templates. Rather than use grub as the bootloader, choose uefi. If you do this, you don't have to worry about the various grub entries in the default template. In this example I'm using 4G of memory, but you can use whatever you want; it's a very light system.
loader="uefi"
cpu=1
memory=4096M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"

The various Alpine configs on the vm-bhyve wiki use grub, and some other people have mentioned using them after changing the word "vanilla" in the config to "lts". However, when I tried that, it didn't work. The config I list above is thanks to user ArgentoSoma on the FreeBSD forums. With this config, I was able to install with the console; note that though it says uefi, it doesn't have the typical line for a GUI, graphics="yes". Once installed, I could access it either with sudo vm console alpine or just by ssh'ing into it.
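With the config above saved as, say, /vm/.templates/alpine.conf, creating and installing the guest follows the same pattern as before (the ISO name is just an example; use whichever Alpine image you downloaded).
vm create -t alpine -s 10G alpine
vm install alpine alpine-virt-3.19.1-x86_64.iso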

With OpenBSD, if one goes to vm-bhyve's default templates, you'll see that it's been changed from booting with grub to booting with uefi. (As of January 2024, the default OpenBSD template that comes with the vm-bhyve package still uses grub.) So go get the new template and use it. My template reads
loader="uefi"
cpu=1
memory=2048M
network0_type="virtio-net"
network0_switch="public"
disk0_type="nvme"
disk0_name="disk0.img"
graphics="yes"
uuid="45ac0f0a-bad7-11ee-abcc-b8ca3abc453f"
network0_mac="58:9c:fc:0f:1f:f7"

I do keep the graphics="yes" lines in, though, as the vncviewer method of installing works well for me. Some have had success with just using vm console, and if I figure that out, I'll post it.

If installing with the uefi template, when it asks which disk to configure, choose disk 1, not 0, as 0 seems to be the image used during installation. (The default is usually sd0; choose sd1.) When it asks if you want to use the whole disk, GPT, or the OpenBSD area, choose the default of GPT; otherwise, when it boots after the install, it won't be able to find the efi file and won't boot.

Also, when you've finished installing, shut the vm down and then start it again with vm start, rather than rebooting, as a reboot takes you back to the installation. Unfortunately, the only way I've found to do this is vm poweroff openbsd; vm start openbsd. Before doing that, though, choose S for shell and run
halt -p

While that won't turn the vm off, at least it puts everything in a state where it's unlikely to be damaged by vm poweroff.

When I do an install, I usually choose to include xbase74.tgz, as otherwise I find I'm missing something when I go to install some scanning tools. If you don't want some, or any, of the X sets, the way to remove selected sets is to type -<setname>; for example, after it lists the sets, to get rid of xbase, type in
-xbase74.tgz

(That information is on the console screen that lists the sets.) You can do that for all the sets beginning with x. Then you can either use ssh or vm console. (You can use vm console even if you include all the x sets; you don't have to use a vncviewer or a GUI.)

During the install, there's a point where it asks if you want com0 to be the default console, with a default of no. Choose yes, and you can accept the default of 9600. This way, if you need to use vm console, it will work properly.

I'll mention this last part in case someone runs into it. Recently, I did a fresh install of FreeBSD, leaving my VMs in place on a different hard drive. I then installed vm-bhyve and saw that all my machines were there, but none of them had a network. So I just destroyed and recreated the public switch with vm switch destroy public followed by vm switch create public; vm switch add public em0, and all was well again. I don't think it's a typical situation, but perhaps the reader may find themselves with the same problem.