Note: This article was historically maintained by the community and may still contain some useful information. See Storage: ZFS over iSCSI for the official article in the Proxmox VE reference documentation.
Technology and features
As of Proxmox 3.3 the ZFS storage plugin is fully supported, which means the ability to use external storage based on ZFS via iSCSI. The plugin seamlessly integrates the ZFS storage as a viable storage backend for creating VMs using the normal VM creation wizard in Proxmox.
When Proxmox creates the raw disk image it will use the plugin to create a ZFS volume as the storage which contains the disk image. That is, a ZFS volume will be created for every disk image, e.g. tank/vm-100-disk-1. Being a native ZFS volume also means that Proxmox can offer users live snapshots and cloning of VMs using ZFS' native snapshot and volume reference features.
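The volume naming scheme the plugin follows can be sketched as a small helper (the function name is hypothetical, for illustration only):

```shell
# Hypothetical helper illustrating the zvol naming scheme described above:
# <pool>/vm-<vmid>-disk-<n>
zvol_name() {
  pool=$1; vmid=$2; disk=$3
  echo "${pool}/vm-${vmid}-disk-${disk}"
}

zvol_name tank 100 1   # -> tank/vm-100-disk-1
```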
Since ZFS is available on several platforms using different iSCSI target implementations, the plugin has a number of helper modules, each providing the needed iSCSI functionality for the specific platform. For now iSCSI modules exist for the following platforms:
- Solaris based platforms using Comstar. Tested on OmniOS and NexentaStor. For a GUI use napp-it or Nexenta.
- BSD based platforms using istgt. Tested on FreeBSD 8.3, 9.0, 9.1. For a GUI use zfsguru.
- Linux based platforms with zfsonlinux using IET. Tested on Debian Wheezy. I have no knowledge of available GUIs. Edit 2013-10-30: I have begun developing a ZFS plugin for OpenMediaVault in collaboration with the OpenMediaVault team. A beta release of the plugin is scheduled for the end of next month (November 2013).
A word of warning: for enterprise use cases I would only recommend Solaris based platforms with Comstar. Linux based platforms can IMHO be used in a non-enterprise setup which requires running HA. I cannot recommend BSD based platforms for enterprise and/or HA setups due to limitations in the current iSCSI target implementation: istgt requires a restart of the daemon every time a LUN is deleted or updated, which means dropping all existing connections. Work has begun to provide a native iSCSI target for FreeBSD 10, which will hopefully resolve this inconvenience. NOTE: This is fixed in FreeBSD 10.x URL
Platform notes
- On all ZFS storage nodes the following should be added to /etc/ssh/sshd_config:

For old ssh from a Solaris based OS:

LookupClientHostnames no
VerifyReverseMapping no
GSSAPIAuthentication no

For OSes which use OpenSSH:

UseDNS no
GSSAPIAuthentication no
- For all storage platforms the distribution of root's ssh keys is handled by Proxmox's cluster-wide file system, which means you must create this folder: /etc/pve/priv/zfs. In this folder you place the ssh key to use for each ZFS storage; the name of the key follows this naming scheme: <portal>_id_rsa. Portal is entered in the GUI wizard's field portal, so if a ZFS storage is referenced via the IP 192.168.1.1 then this IP is entered in the field portal, and the key will therefore have this name: 192.168.1.1_id_rsa. Creating the key is simple. As root, do the following:
mkdir /etc/pve/priv/zfs
ssh-keygen -f /etc/pve/priv/zfs/192.168.1.1_id_rsa
ssh-copy-id -i /etc/pve/priv/zfs/192.168.1.1_id_rsa.pub root@192.168.1.1
- Log in once to the ZFS SAN from each Proxmox node:
ssh -i /etc/pve/priv/zfs/192.168.1.1_id_rsa root@192.168.1.1
The authenticity of host '192.168.1.1 (192.168.1.1)' can't be established.
RSA key fingerprint is 8c:f9:46:5e:40:65:b4:91:be:41:a0:25:ef:7f:80:5f.
Are you sure you want to continue connecting (yes/no)? yes
If you are logged in without errors you are ready to use your storage.
- The key creation is only needed once for each portal, so if the same portal provides several targets which are used for several storages in Proxmox, you only create one key.
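The portal-to-key mapping described above can be sketched as follows (the helper name is hypothetical; only the path and naming scheme come from the article):

```shell
# Hypothetical helper: derive the ssh key path Proxmox expects for a portal.
# Keys live on the cluster-wide file system under /etc/pve/priv/zfs,
# named <portal>_id_rsa.
key_for_portal() {
  portal=$1
  echo "/etc/pve/priv/zfs/${portal}_id_rsa"
}

key_for_portal 192.168.1.1   # -> /etc/pve/priv/zfs/192.168.1.1_id_rsa
```

Because the key is looked up by portal, several storages sharing one portal resolve to the same key file.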
- Solaris: Apart from performing the steps above, nothing else needs to be done.
- BSD: Apart from performing the steps above, the following is required: since istgt must have at least one LUN before a target can be enabled, you will have to create one LUN manually. The size is irrelevant, so a LUN referencing a volume of size 1MB is sufficient, but remember to name the volume something different from the Proxmox naming scheme to avoid having it show up in the Proxmox content GUI.
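A rough sketch of that bootstrap LUN; the zvol name, target name, and group names are assumptions for illustration, not values from the original article:

```
# Create a tiny placeholder zvol, named outside the vm-<vmid>-disk-<n>
# scheme so Proxmox ignores it (1MB is enough):
#   zfs create -V 1M tank/istgt-bootstrap
#
# Then reference it in istgt.conf (names below are assumptions):
[LogicalUnit1]
  TargetName tank1
  Mapping PortalGroup1 InitiatorGroup1
  UnitType Disk
  LUN0 Storage /dev/zvol/tank/istgt-bootstrap Auto
```

After editing, istgt must be restarted for the change to take effect, which, as noted above, drops existing connections.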
- Linux: Apart from performing the steps above, nothing else needs to be done.
- Nexenta: Apart from performing the steps above, the following is required: rm /root/.bash_profile. This avoids dropping into the nmc console by default.
Note: The ssh key must not be password protected, otherwise it will not work.
Proxmox configuration
Use the GUI (Datacenter/Storage: Add ZFS), which will add configuration like the below to /etc/pve/storage.cfg:
zfs: solaris
        blocksize 4k
        target iqn.2010-08.org.illumos:02:b00c9870-6a97-6f0b-847e-bbfb69d2e581:tank1
        pool tank
        iscsiprovider comstar
        portal 192.168.3.101
        content images

zfs: BSD
        blocksize 4k
        target iqn.2007-09.jp.ne.peach.istgt:tank1
        pool tank
        iscsiprovider istgt
        portal 192.168.3.114
        content images

zfs: linux
        blocksize 4k
        target iqn.2001-04.com.example:tank1
        pool tank
        iscsiprovider iet
        portal 192.168.3.196
        content images
Then you can simply create disks with the Proxmox GUI.
- Thin provision: When this option is checked, volumes will only use actual space and grow as needed until the limit is reached.
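On the ZFS side this maps to sparse versus fully reserved zvols; a sketch, with volume name and size chosen as examples:

```
# Thick (option unchecked): space is reserved up front
zfs create -V 32G tank/vm-100-disk-1

# Thin (option checked): sparse zvol (-s), space allocated as data is written
zfs create -s -V 32G tank/vm-100-disk-1
```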
- Write cache: When this option is unchecked the iSCSI write cache is disabled. Disabling the write cache makes every write to the LUN synchronous, thus reducing write performance, but it ensures data is persisted after every flush request made by the VM (if the volume has sync disabled, data is only flushed to the log!). If the write cache is enabled, data persistence is left to the ZFS volume's sync setting to decide when data should be flushed to disk. When the iSCSI write cache is enabled your volume should have sync=standard or sync=always to guard against data loss. Write cache is only configurable with Comstar. For istgt and IET the write cache is disabled in the driver and cannot be enabled.
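The interaction between the iSCSI write cache and the zvol sync property can be summarised as a small decision table; this is a sketch of the rules above, and the function name and labels are invented:

```shell
# Sketch of the data-persistence rules described above.
# $1: iSCSI write cache (on/off)
# $2: zvol sync property (disabled/standard/always)
persistence_mode() {
  if [ "$2" = disabled ]; then
    echo "at-risk"          # flushes only reach the log; data can be lost
  elif [ "$1" = off ]; then
    echo "synchronous"      # every write persisted before it is acknowledged
  else
    echo "flush-on-request" # data persisted when the VM issues a flush
  fi
}

persistence_mode off standard   # -> synchronous
persistence_mode on  always     # -> flush-on-request
persistence_mode on  disabled   # -> at-risk
```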
- Host group and target group: If your storage node is configured to restrict access by host and target group, this is where you should enter the required information.
Note: iSCSI multipath does not work yet, so use only the portal IP for the iSCSI connection.