30 July 2008

How to install FreeBSD under ZFS (including root) (Part 1)

This is my first post about BSD systems, so I will talk about my first (bleeding edge) experience running tests on FreeBSD (FreeBSD 8.0-CURRENT).


Update: I forgot the credits. I took most of this from the ZFS FAQ, just so I don't have to write everything again and only to exemplify. Thanks, Jerry Cornell.






Why FreeBSD?


Disregarding the bad points of FreeBSD (yes, there are a lot of bad points, but strangely FreeBSD looks untouchable by criticism; sorry, but I'm kind of a realist), it's a good Unix environment, stable (not in my case, I'm bleeding edge at home, stable-paranoid-guy at work), and it looks a lot like Gentoo in many ways (yeah yeah, I know, I know).


WTF is ZFS?


ZFS is a file system designed by Sun Microsystems for the Solaris Operating System.


Why ZFS?


ZFS is known as one of the fastest filesystems on earth, so I decided to give it a try. But not just for a single filesystem: I want the whole OS inside ZFS, so I can measure more precisely.


What are the advantages?
  • Support for high storage capacities
  • Integration of the concepts of filesystem and volume management, snapshots and copy-on-write clones
  • On-line integrity checking and repair, and RAID-Z.
  • Increased reliability through checksums, multiple copies of data and self-healing RAID.
  • Very similar to LVM (snapshots, rollbacks, etc), the partitions can be resized at any time, and in fact can be allocated up to the full size of the storage media.
  • Built-in compression and encryption, as well as NFS file sharing.
  • Easy toolset for creation and manipulation.
  • Many more
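To give a feel for that "easy toolset", here is a sketch of turning some of those features on. The pool name `tank` and the dataset names are hypothetical, just for illustration:

```shell
# Create a dataset and enable built-in compression on it
# ("tank" is an assumed pool name; pick your own).
zfs create tank/home
zfs set compression=on tank/home

# Share the dataset over NFS directly from ZFS:
zfs set sharenfs=on tank/home

# Inspect the properties you just set:
zfs get compression,sharenfs tank/home
```

One command per feature, no editing of /etc/exports or separate volume-manager configuration — that is the point of the toolset.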


How does it work?


Unlike traditional file systems, which reside on single devices and thus require a volume manager to use more than one device, ZFS filesystems are built on top of virtual storage pools called zpools. A zpool is constructed of virtual devices (vdevs), which are themselves constructed of block devices: files, hard drive partitions, or entire drives, with the last being the recommended usage. Block devices within a vdev may be configured in different ways, depending on needs and space available: non-redundantly (similar to RAID 0), as a mirror (RAID 1) of two or more devices, as a RAID-Z group of three or more devices, or as a RAID-Z2 group of four or more devices. The storage capacity of all vdevs is available to all of the file system instances in the zpool.
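The vdev layouts described above map directly onto `zpool create` invocations. A minimal sketch, assuming FreeBSD-style device names (`ad0`, `ad1`, ...) and a pool called `tank` — adjust both to your hardware:

```shell
# Non-redundant pool from two whole drives (similar to RAID 0):
zpool create tank ad0 ad1

# Mirror of two drives (RAID 1):
zpool create tank mirror ad0 ad1

# RAID-Z group of three drives (single parity):
zpool create tank raidz ad0 ad1 ad2

# RAID-Z2 group of four drives (double parity):
zpool create tank raidz2 ad0 ad1 ad2 ad3

# Show the vdev layout and health of the pool:
zpool status tank
```

Only one of the `create` commands would be run, of course; every filesystem later created in `tank` draws from the same pooled capacity.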


A quota can be set to limit the amount of space a file system instance can occupy, and a reservation can be set to guarantee that space will be available to a file system instance.
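Quotas and reservations are just dataset properties. A small sketch, again with hypothetical dataset names under an assumed pool `tank`:

```shell
# Cap how much space the ports tree can ever occupy:
zfs set quota=10G tank/usr/ports

# Guarantee that logs always have 1 GB available,
# even if other datasets fill the pool:
zfs set reservation=1G tank/var/log

# Verify both properties:
zfs get quota,reservation tank/usr/ports tank/var/log
```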


How about the capacity?


ZFS is a 128-bit file system, so it can store 18 billion billion (1.84 × 10¹⁹) times more data than current 64-bit systems. The limitations of ZFS are designed to be so large that they will not be encountered in practice for some time. Some theoretical limits in ZFS are:
  • 2⁶⁴ — Number of snapshots of any file system
  • 2⁴⁸ — Number of entries in any individual directory
  • 16 EiB (2⁶⁴ bytes) — Maximum size of a file system
  • 16 EiB — Maximum size of a single file
  • 16 EiB — Maximum size of any attribute
  • 256 ZiB (2⁷⁸ bytes) — Maximum size of any zpool
  • 2⁵⁶ — Number of attributes of a file (actually constrained to 2⁴⁸ for the number of files in a ZFS file system)
  • 2⁵⁶ — Number of files in a directory (actually constrained to 2⁴⁸ for the number of files in a ZFS file system)
  • 2⁶⁴ — Number of devices in any zpool
  • 2⁶⁴ — Number of zpools in a system
  • 2⁶⁴ — Number of file systems in a zpool


The Model


ZFS uses a copy-on-write transactional object model.
All block pointers within the filesystem contain a 256-bit checksum of the target block which is verified when the block is read. Blocks containing active data are never overwritten in place; instead, a new block is allocated, modified data is written to it, and then any metadata blocks referencing it are similarly read, reallocated, and written. To reduce the overhead of this process, multiple updates are grouped into transaction groups, and an intent log is used when synchronous write semantics are required.
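A practical consequence of this copy-on-write model is that snapshots and clones are nearly free: a snapshot just pins the current block tree, and new writes go to fresh blocks. A sketch of what that looks like in practice (pool and dataset names are assumed):

```shell
# Point-in-time copy; no data is duplicated at creation time:
zfs snapshot tank/home@today

# Snapshots consume space only as the live data diverges from them:
zfs list -t snapshot

# Discard every change made since the snapshot:
zfs rollback tank/home@today

# Or branch off a writable clone that shares all unchanged blocks:
zfs clone tank/home@today tank/home-test
```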


To be continued...
