GEOM and the file systems do all IO access in byte offsets, so the underlying sector size isn't important as long as it's addressable. For instance, a command like "dd if=/dev/da0 bs=10 count=1 of=/tmp/foo" will fail, since /dev/da0 is a raw device which cannot be read in increments of 10 bytes - only in multiples of the physical sector size, which is 512 (change bs=10 to bs=512). This means that any file system that can do most of its IO 4K-aligned will automagically work "well enough" on a drive which can do 512-byte accesses but performs optimally with 4K accesses. Luckily, both UFS and ZFS can do all their accesses 4K-aligned, so they can work even on "pure" 4K drives. In fact, if the drive advertises 4K as its one true sector size, both file systems will do fine by default, without any additional configuration. The rest of this text applies to the situation where the drives advertise both sizes.
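The contrast with the dd example above can be seen by reading a regular file instead of a raw device: file IO goes through the file system, which is byte-granular, so an odd block size like bs=10 works fine there. A small sketch (the /tmp file names are just for illustration):

```shell
# Regular files are byte-addressable through the file system, so a
# 10-byte read succeeds here; the same bs=10 against raw /dev/da0 fails.
printf 'abcdefghijklmnop' > /tmp/demo.in
dd if=/tmp/demo.in of=/tmp/demo.out bs=10 count=1 2>/dev/null
wc -c < /tmp/demo.out   # 10 bytes were copied
```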
1. Aligning partitions
The first step is to align the partitions to 4K offsets. Though, again, GEOM doesn't care, both the fdisk and disklabel tools still behave as if they were bound by old, defunct CHS requirements and by default produce silly offsets like "63 sectors".
The default sysinstall configuration inherits these offsets, so if installing on a 512/4K drive, manual intervention is required to align the partitions to 4K (or in other words, to 8 512-byte sectors). A starting offset of 64 sectors is properly aligned; 63 sectors is not.
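To see why 64 is a good starting offset and 63 is not, check whether the partition's starting byte offset is a multiple of 4096 - a quick arithmetic sketch (the sector numbers are just examples):

```shell
# A partition starting at sector N (in 512-byte units) is 4K-aligned
# iff N * 512 is a multiple of 4096, i.e. iff N is a multiple of 8.
for start in 63 64 2048; do
  if [ $(( start * 512 % 4096 )) -eq 0 ]; then
    echo "start sector $start: 4K-aligned"
  else
    echo "start sector $start: NOT 4K-aligned"
  fi
done
```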
UFS formatting (newfs) requires only that the fragment size be set to 4096:
newfs -U -f 4096 /dev/da0
This will basically ensure that most of the IO is done at 4K-aligned offsets. FreeBSD 9 will use 4K fragments by default.
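The fragment is the smallest unit UFS allocates and writes, so with -f 4096 every fragment spans exactly eight of the drive's 512-byte logical sectors, and fragment-sized IO is always a whole 4K physical sector (provided the partition itself starts 4K-aligned, as above):

```shell
# One 4096-byte fragment covers this many 512-byte logical sectors:
echo $(( 4096 / 512 ))
```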
ZFS requires basically the same thing as UFS, except that the "zpool create" command does not accept an argument specifying what the alignment should be (the "ashift"). To make ZFS issue 4K-aligned IO, the "zpool create" phase needs to be tricked into thinking the drive has 4K physical sectors. This can be done by using gnop - a GEOM class which is used for testing and which normally does not produce permanent changes (i.e. it doesn't write its metadata).
For each device which will be made a part of the pool, a gnop "chained" device needs to be created, and then those devices need to be added to a pool:
gnop create -S 4096 /dev/da0
gnop create -S 4096 /dev/da1
zpool create data mirror /dev/da0.nop /dev/da1.nop
This will create the "data" pool with the *.nop devices, making ZFS think the drives have 4K physical sectors. Next, the pool needs to be exported and the gnop devices removed:
zpool export data
gnop destroy /dev/da0.nop /dev/da1.nop
Next, the pool can be imported from the "raw" devices:
zpool import data
You can check the configuration of the pool by using the "zdb" command on the pool:
zdb -C data | grep ashift
The ashift should be "12" for 4K alignment. This works because ZFS writes the ashift value in its metadata.
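The ashift value is simply the base-2 logarithm of the sector size ZFS assumes, so ashift 12 corresponds to 2^12 = 4096-byte alignment, while the 512-byte default corresponds to ashift 9:

```shell
# Sector size implied by an ashift value is 2^ashift:
echo $(( 1 << 12 ))   # ashift=12 -> 4096-byte (4K) sectors
echo $(( 1 << 9 ))    # ashift=9  -> 512-byte sectors (the old default)
```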
There could be problems with booting from such drives, but apparently there are patches to fix those.
For compatibility with older machines (which cannot boot from "pure" 4K drives) and operating systems (which cannot parse them), the 512/4K dual identification will probably remain with us for some time as one of the "legacy" technologies common in the PC world. It is because of this that the steps described above are necessary - if drives appear which advertise only the 4K size, and the BIOS (or EFI) and the controllers support it, FreeBSD will handle them automagically, without the configuration dance described above.
The described methods are "future proof" even if the drive, the controller, the BIOS or the OS decide one day to ignore the 512-byte advertisements and pronounce the drives as "pure 4K".
The changes are also, of course, irreversible. If it is discovered that the drives need to be used as 512-byte drives after all (e.g. because of booting problems), the only way to do so is to reformat them (i.e. destroy the ZFS pools, or run newfs again).