| Carl Friedrich Gauß Faculty | Department of Computer Science

Storage Server

Author: Frank Steinberg

Almost all data processed by IBR services, hosts and users is stored on a single storage server, zfs1.ibr.cs.tu-bs.de:

  • Ubuntu 16.04
  • Supermicro server chassis with 24 HDD slots
  • Intel(R) Xeon(R) CPU E5-1650 v4 @ 3.60GHz
  • 19 SAS 6TB HDDs
  • 2 fast 400GB NVMe SSDs for read and sync-write caching
  • 2 small mirrored system SSDs
  • 40GBit/s, 10GBit/s and 1GBit/s links for LAN and SAN connections
  • redundant power supplies, connected to UPS


The server supplies the following services:

  • NFSv3 via LAN and SAN to other servers within the same room
  • NFSv4 with Kerberos security to the whole LAN and via VPN
  • Samba file service to the LAN and via VPN
  • iSCSI to other servers, primarily for VMs (not yet)
  • iSCSI to IBR users if requested (not yet)
  • IBR web server backend ("xcmd")
  • IBR file alternation watchdog(s)


ZFS

ZFS provides the powerful layer between the raw capacity of >100TB of HDD and SSD space and ~60TB of usable storage space, featuring a striped layout for better performance, two parity disks per stripe for reliability and spare disk(s) for easy replacement in case of failing disks. The storage can be used as filesystems with quota limits and as iSCSI volumes. User-accessible snapshots allow easy restoration and are the basis for a powerful send/recv based backup to other ZFS backup systems (one in house, a second located at GITZ (not yet)).
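
The layout described above could be assembled roughly as follows. This is an illustrative sketch only: the device names, the exact vdev split and the SSD partitioning are assumptions, not the server's actual configuration.

```shell
# Hypothetical pool layout: 19 data HDDs as two RAID-Z2 vdevs plus a hot
# spare, the two NVMe SSDs partitioned into a mirrored sync-write log
# (SLOG) and a read cache (L2ARC). All device names are made up.
zpool create pool1 \
    raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi \
    raidz2 sdj sdk sdl sdm sdn sdo sdp sdq sdr \
    spare  sds \
    log    mirror nvme0n1p1 nvme1n1p1 \
    cache  nvme0n1p2 nvme1n1p2

# Datasets are then created with individual quotas, e.g.:
zfs create -p -o quota=250G pool1/ibr/home/steinb
```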

As of May 2017 these datasets are in use:

/pool1              the root ZFS pool
/pool1/ibr          toplevel exported filesystem
/pool1/ibr/ftp      IBR FTP stuff
/pool1/ibr/mirror   mirrors of public FTP/HTTP file resources
/pool1/ibr/web      web server document spaces
/pool1/ibr/tmp      space for temporary stuff, auto-deleted after 6 months
/pool1/ibr/home     contains sub-datasets for each regular IBR user
/pool1/ibr/y-home   contains all home directories of y-users
/pool1/iscsi        contains iSCSI volumes

In most cases you don't need to care about the different datasets; you can navigate seamlessly within the complete /ibr tree.


NFS

Most ZFS filesystem datasets are exported via NFS. Hosts that are located in the same room and maintained exclusively by the IBR admins get a plain NFSv3 export. Most servers use NFS-over-RDMA via a separate SAN, which allows the clients' 10GBit/s links to be filled to more than 90% (with appropriate access patterns).

Other machines in the LAN or even clients via VPN access the datasets via NFSv4 with Kerberos5 authentication and encryption ("sec=krb5p").

Common IBR client configuration

Most IBR Linux systems mount the NFS volumes automatically via autofs. The complete /ibr tree is available; access restrictions apply, of course.
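
Such an autofs setup typically consists of a master map entry plus a map for the /ibr tree. The fragment below is illustrative; IBR's actual map files are not reproduced here, and the map file name is assumed.

```shell
# /etc/auto.master (fragment, assumed):
#   /misc/ibr  /etc/auto.ibr
#
# /etc/auto.ibr (fragment, assumed) -- any path below /misc/ibr is
# mounted from the server on first access:
#   *  -fstype=nfs4,sec=krb5p  zfs1.san.ibr.cs.tu-bs.de:/ibr/&
```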


Samba

User home directories and the "/ibr" dataset tree are also exported via SMB/CIFS. Please note that the filesystems are based on UNIX permissions, which Windows clients cannot map correctly. Users who can access the data via a UNIX/Linux system (e.g. by SSHing to an IBR Linux system and working on the data remotely) are therefore encouraged to choose that way!


iSCSI

As a first step, some VMs will be switched from local disk images to iSCSI volumes. Especially iSCSI over RDMA (iSER) looks quite promising. Later, we will probably supply iSCSI volumes to IBR users on demand. More to follow...


Snapshots

ZFS allows taking snapshots of datasets. For filesystems there is a hidden .zfs directory which does not show up in directory listings but can be accessed explicitly. Regular snapshots are taken automatically. Users who have write access to a dataset's root directory (e.g. an IBR user's home directory) can take a snapshot with ibr-snapshot.

steinb@x1 ~/ 502 $ pwd
steinb@x1 ~/ 503 $ date
Fri May 26 21:00:49 CEST 2017
steinb@x1 ~/ 504 $ ibr-snapshot 
took snapshot /ibr/home/steinb/.zfs/snapshot/steinb-20170526-210054
steinb@x1 ~/ 505 $ ls -l /ibr/home/steinb/.zfs/snapshot/
total 0
dr-xr-xr-x 1 root root 0 May 26 10:48 backup-20170526-090008
dr-xr-xr-x 1 root root 0 May 26 21:01 backup-20170526-150006
dr-xr-xr-x 1 root root 0 May 26 10:48 steinb-20170526-104656
dr-xr-xr-x 1 root root 0 May 26 10:48 steinb-20170526-104754
dr-xr-xr-x 1 root root 0 May 26 14:56 steinb-20170526-111543
dr-xr-xr-x 1 root root 0 May 26 14:56 steinb-20170526-111551
dr-xr-xr-x 1 root root 0 May 26 14:56 steinb-20170526-144931
dr-xr-xr-x 1 root root 0 May 26 14:56 steinb-20170526-145632
dr-xr-xr-x 1 root root 0 May 26 21:01 steinb-20170526-210054
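
The snapshot names above embed the creation time. A name in the same format can be generated like this (a sketch only; the internals of the real ibr-snapshot tool are not shown here, and the user name is hypothetical):

```shell
# Build a snapshot name of the form <user>-YYYYMMDD-HHMMSS,
# matching the entries listed above. (Illustrative only.)
user="steinb"                 # hypothetical user name
stamp=$(date +%Y%m%d-%H%M%S)  # e.g. 20170526-210054
snapname="${user}-${stamp}"
echo "$snapname"
```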

Quota Limits

Each regular IBR user has their own "home" dataset. This allows simply using df to show its current capacity:

steinb@x1 ~/ 506 $ df -h .
Filesystem                                       Size  Used Avail Use% Mounted on
zfs1.san.ibr.cs.tu-bs.de:/pool1/ibr/home/steinb  250G  184G   67G  74% /misc/ibr/home/steinb

y-accounts do not have individual datasets. However, all users (incl. y-users) can query their personal quota limits with ibr-quota:

steinb@x1 ~/ 510 $ ibr-quota 
NAME                   PROPERTY          VALUE             SOURCE
pool1/ibr/home/steinb  quota             250G              local
pool1/ibr/home/steinb  used              184G              -
pool1/ibr/home/steinb  userquota@steinb  none              local
pool1/ibr/home/steinb  userused@steinb   172G              local

New IBR user home datasets are usually initialized with a maximum size of 20GB, but they can easily be enlarged upon request. The common quota limit for y-users is 10GB. (Until May 2017 it was 1GB.)

Note that those filesystem blocks of snapshots that differ from the current filesystem content consume part of the quota limit. This means that filesystems with heavy fluctuation may appear more occupied than you would expect, due to (backup) snapshots. If this is the case for your home directory, you may wish to remove some old snapshots. This may be achieved with ibr-backup-clean [keep] to remove all but a remaining number of backup snapshots. To wipe all personal snapshots, you may use ibr-snapshot-clean. These commands do not affect the safety of your backups! They only affect the archiving of former snapshots; data on the backup servers will not be touched in any way.
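
The retention idea behind ibr-backup-clean amounts to keeping only the newest snapshots. The self-contained sketch below demonstrates it on plain directories in a temp dir; the names and the keep count are examples, and the real tool operates on ZFS snapshots, not directories.

```shell
# Keep only the newest $keep "backup" entries; delete the older ones.
# Demonstrated on ordinary directories, NOT on real ZFS snapshots.
keep=2
snapdir=$(mktemp -d)
for ts in 20170524-090000 20170525-090000 20170526-090008; do
    mkdir "$snapdir/backup-$ts"
done
# The timestamped names sort chronologically, oldest first:
for old in $(ls -1 "$snapdir" | head -n -"$keep"); do
    rmdir "$snapdir/$old"   # the real tool would destroy a ZFS snapshot here
done
ls -1 "$snapdir"            # only the two newest entries remain
```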


Backup

A regular backup is realized with ZFS's send/recv feature: complete, identical copies of regular snapshots of all datasets are maintained by our own simple script. Currently we mirror 200+ dataset snapshots multiple times each day to a separate host located at IBR and to another ZFS backup host located at the GITZ. [As of 2017-05-29 the first full copy to GITZ is running and takes a few days. During this time backups to the first machine are paused.]
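
The mechanism can be sketched as follows. These are illustrative commands: the backup host name and the snapshot names are made up, and the actual script is IBR-internal.

```shell
# Initial run: transfer a full copy of one snapshot to the backup host.
zfs send pool1/ibr/home/steinb@backup-20170526-090008 \
    | ssh backuphost zfs recv -F pool1/ibr/home/steinb

# Later runs: send only the blocks that changed between two snapshots.
zfs send -i @backup-20170526-090008 \
    pool1/ibr/home/steinb@backup-20170526-150006 \
    | ssh backuphost zfs recv pool1/ibr/home/steinb
```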

Volumes not (yet) located on the ZFS server are integrated into an rsnapshot backup system. This will be integrated with the ZFS system soon.

Note that the formerly accessible /ibr/backup is no longer required, due to the more powerful snapshot features.

Accessing filesystems from your own client host

Accessing your home directory and the /ibr tree via Samba is straightforward: Connect to \\zfs1.ibr.cs.tu-bs.de\username or \\zfs1.ibr.cs.tu-bs.de\ibr and use your personal credentials to authenticate.
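
From a Linux client the same shares can be reached with the usual SMB tools. The commands below are illustrative; the mount point is an example, and "username" stands for your own account name.

```shell
# Browse a share interactively:
smbclient //zfs1.ibr.cs.tu-bs.de/username -U username

# Or mount it (as root):
mount -t cifs //zfs1.ibr.cs.tu-bs.de/username /mnt/ibr-home -o username=username
```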

TBD: Samba with UNIX extensions...

However, using NFS might be preferable. For security reasons you have to use NFSv4 and Kerberos, which means you need:

  • a Kerberos client configuration, e.g. this /etc/krb5.conf,
  • a host registration with IBR's LDAP. You can request a host with the dirac tool ("hosts", then "create", ...), so that you can get:
  • a /etc/krb5.keytab file for your client host Kerberos principal. You can retrieve it as a host's supervisor with "ibr-keytab [hostname]",
  • your personal principal's credentials.
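
With those pieces in place, a session could look like this (an illustrative sequence; the mount point is an example, and "username" stands for your own principal):

```shell
kinit username          # obtain your personal Kerberos ticket
klist                   # verify the acquired ticket
sudo mount -t nfs4 -o sec=krb5p zfs1.ibr.cs.tu-bs.de:/ibr /mnt/ibr
```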

last changed 2019-10-28, 08:29 by Frank Steinberg