“ZFS 2.0.0 final” install on Proxmox 6.x


If the following guide gives you any trouble, find me on Discord and we can have a chat about it, possibly fixing your issue: https://discord.gg/H9uYucQ

A bit of an update to my previous post, which installed the release candidate versions of ZFS 2.0. This guide is for the final 2.0.0 release.

ZFS 2.0 is not officially supported by Proxmox at the current time, but can be installed using this guide. Official ZFS 2.0 support will arrive in Proxmox 6.4.

If you use systemd, you are cursed and this may not work for you!

This guide can also be used for Debian 10.5+ by replacing the pve-headers package with:

linux-headers-$(uname -r)
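If the same notes get reused on both platforms, the headers package name can be picked automatically. A minimal sketch, assuming Proxmox kernel releases always contain "pve" in `uname -r` (the sample kernel strings are illustrative, not from this post):

```shell
#!/bin/sh
# Pick the matching kernel-headers package: Proxmox kernels carry "pve"
# in their release string, stock Debian kernels do not.
kernel="$(uname -r)"
case "$kernel" in
  *pve*) headers="pve-headers-$kernel" ;;
  *)     headers="linux-headers-$kernel" ;;
esac
echo "would install: $headers"
# apt-get install "$headers"   # uncomment to actually install
```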

Commands to enter:

apt-get update

apt-get upgrade

apt-get dist-upgrade

apt-get install build-essential autoconf automake libtool gawk alien fakeroot dkms libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev libelf-dev pve-headers python3 python3-dev python3-setuptools python3-cffi libffi-dev git

git clone https://github.com/zfsonlinux/zfs

cd zfs

git checkout zfs-2.0.0

sh autogen.sh

./configure

make deb

apt-get remove zfsutils-linux

rm kmod-zfs-devel_2.0.0-1_amd64.deb

dpkg -i *.deb

apt-get install zfsutils-linux


A word of caution: when updating Debian or Proxmox after installing this, you might have to run the above steps again if the kernel version changes or the ZFS packages are updated.
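A quick way to tell whether an update has broken the match is to compare the userland tools against the loaded kernel module. A sketch, assuming the usual two-line `zfs version` output (e.g. `zfs-2.0.0-1` / `zfs-kmod-2.0.0-1`; these version strings are examples):

```shell
#!/bin/sh
# Compare the ZFS userland version against the loaded kernel module.
# `zfs version` prints the userland version on its first line;
# /sys/module/zfs/version holds the loaded module's version string.
userland="$(zfs version 2>/dev/null | sed -n '1s/^zfs-//p')"
kmod="$(cat /sys/module/zfs/version 2>/dev/null)"
if [ -n "$userland" ] && [ "$userland" = "$kmod" ]; then
  echo "OK: userland and kernel module both at $userland"
else
  echo "MISMATCH: userland=$userland kmod=$kmod -- re-run the build steps above"
fi
```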


10 thoughts on ““ZFS 2.0.0 final” install on Proxmox 6.x”

  1. Pingback: ZFS 2.0 install on Debian 10.5 /Proxmox 6.2 | Happy Administrator

  2. Followed your steps and 2.0.1 doesn’t seem active on PM 6.3.
    zfs version shows
    and zpool create -O compression=zstd reports the error “‘compression’ must be one of ‘on | off | lzjb | gzip | gzip-[1-9] | zle | lz4’”, indicating 0.8.5-pve1 is being used.
    Any extra steps required to set 2.0 as the default? Thanks!


  3. Followed it exactly, except I used the latest 2.0.1 instead of 2.0.0.
    Do you also have the previous version showing, or only 2.0.0, using zfs --version?
    apt install zfsutils-linux removes some 2.0 packages according to your non-final guide,
    so I’m wondering if that might have caused the issue. I can redo it without the last step and try to set ZFS to load at boot manually; I’ve run across the commands somewhere, I think on the Arch Linux wiki, but thought you might have a quick answer.


  4. Confirmed the issue is with the last step, the install of zfsutils-linux. Skipping that install leaves
    zfs --version
    and creating a zpool with zstd works without issue, indicating 2.0.1 is indeed active. Pools, as expected, no longer mount on boot. Trying to follow the Arch wiki https://wiki.archlinux.org/index.php/ZFS#Automatic_Start but I’m not familiar enough with Proxmox/Linux to see why the systemd commands are masked etc. when trying to mount at boot. Importing the pool after logging in works without issue, however.
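For the boot-mount problem described above: the locally built debs ship the standard OpenZFS systemd units, but they may end up masked or disabled. A sketch of re-enabling the import/mount chain (unit names as shipped by OpenZFS packaging, assuming a cachefile-based import):

```shell
# List the ZFS units and see whether any are masked
systemctl list-unit-files 'zfs*'
# Unmask, then enable the chain that imports and mounts pools at boot
systemctl unmask zfs-import-cache.service zfs-mount.service zfs.target
systemctl enable zfs-import-cache.service zfs-mount.service \
                 zfs-import.target zfs.target
```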


    • I never tested zstd compression. I also do not run systemd. I use lz4.
      And you are correct about the versioning, which is how I have run it since September with various RCs. I did this mainly to save my SSDs from being killed by a non-persistent L2ARC. (With the above guide, the L2ARC is persistent.)
      root@pve02:~# zfs version

      Maybe this will help you; until Proxmox 6.4 we are probably left with this half-arsed MacGyver solution :/
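On the persistent-L2ARC point: in ZFS 2.0 the rebuild-across-reboots behaviour is governed by a module parameter. A sketch to verify it (the pool name and device path are placeholders, not from this post):

```shell
# 1 means L2ARC contents are rebuilt (persist) across reboots (2.0 default)
cat /sys/module/zfs/parameters/l2arc_rebuild_enabled
# Adding a cache device to a hypothetical pool "tank":
# zpool add tank cache /dev/disk/by-id/EXAMPLE-DEVICE
```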



  5. I got zfsutils-linux to install version 2.0.1 with zfs-zed by adding the sid repository on a fresh system, but couldn’t get the OpenZFS kmod installed at the same time because of dependencies. Tried again, but now this guide is breaking on ./configure, giving me “cannot find UTS_RELEASE definition”. Not sure what changed; it was working fine to compile the debs, now there’s a mismatch apparently, and I didn’t keep my previous debs.


  6. Found my old git download and it compiles and installs successfully, so something changed in recent git giving the UTS_RELEASE error. Interesting thing on the new system: with zfs version both lines show 2.0.1-1, when on the old system only the second line showed as updated.
    sudo zfs --version
    Haven’t tried using the sid repo yet to actually get an updated zfsutils-linux, and nothing is auto-mounting, so I’m assuming it’s still 0.8.5-pve1.
    6.4 could be a long way off haha


  7. I came across the UTS_RELEASE error myself with the following versioning:

    Linux host 5.4.73-1-pve #1 SMP PVE 5.4.73-1
    git checkout zfs-2.0.3

    When I follow the instructions in the blog post exactly, I hit the UTS_RELEASE error. When I add an additional installation:

    apt install pve-headers-$(uname -r)

    It gets past the UTS_RELEASE error. I think the pve-headers package by itself doesn’t fetch all of the required headers.

    Then instead of the `rm …` command, I needed to use:

    # because of having multiple kmod-zfs-devel packages
    rm kmod-zfs-devel*.deb

    Full install script here: https://gist.github.com/zph/01119c836dca98293581c0e20f07c927

