The ``Martian NetDrive'' from http://www.martian.com/ is a trivial-to-set-up SMB fileserver that is also silent and small, suitable for closets or obscure shelves. One model even has 802.11, so it doesn't even need wiring - just unpack the shipping crate, plug it in, hit the on switch, and wait for it to show up under Rendezvous (http://www.apple.com/macosx/jaguar/rendezvous.html) in your browser (or find it in your Windows Network Neighborhood and look for the setup.html... or, if you're One Of Us, find the address in the DHCP log when you first bring it up :)
It's a really cool ``instant infrastructure'' idea; if http://www.mekinok.com/ and http://www.boxedpenguin.com/ were still going concerns, we'd probably be working with them. In the mean time, it's a pretty good ``I need a hundred or so gig in a fileserver *now*'' item... as long as you're heavily firewalled, of course!
http://www.openafs.org/ gives the details, but basically, OpenAFS is the modern production version of what was once the ``Andrew File System'', a secure (kerberized) distributed network filesystem popular among educational institutions and some large-infrastructure companies (including MIT, CMU, UMich, and Morgan Stanley.) There are clients for every OS that anyone takes at all seriously.
One of the particularly convenient features of OpenAFS is how you add more space to it. You simply add new space (install a new machine, or connect disks to an existing one) and restart the fileserver process on that machine, and your new partitions are available. This doesn't matter to the end-users, who see a unified filesystem under /afs/site.domain.name/ regardless of where the data is; the administrator can then ``behind the scenes'' migrate data to the new server, or add replicas of existing volumes there, or simply start creating new volumes there (and ``mounting'' them in the global namespace.)
http://www.thok.org/ has been an OpenAFS cell (/afs/thok.org/) for years, and has three machines in it. The machines are cheap half-GHz Celerons picked up over the last few years; they work, but they're noisy and take up a lot of space. The advent of quiet drives with fluid bearings and no fans led me to realize that I could get a bunch of office space back if I upgraded the important machines to something quiet, and there have been a bunch of interesting ITX-based system designs that would work well at ``reducing the ambient clutter of machines''.
Along comes the NetDrive, which does all of this ``in a box.'' The 40G version seemed a bit small to bother with, what with 250G drives shipping from Western Digital already, but then the 120G version came out... A few questions to their support list gave me enough hints to think I could probably make this work in my environment, and that it would at least be interesting to try.
The box showed up promptly, with a very simple instruction pamphlet. Plugged it in, it got an address from my open wireless net, popped up a web interface, and I could set the name and do a few simple things from there. It's still clearly all in the clear - it doesn't use SSL at all - though the usual server certificate model isn't going to work here anyway. Still, it did work as described, and the risk is limited to access to the configuration of the SMB server - the operating system is locked down quite tightly, all things considered.
In my position as a long-time security specialist, this was of course not nearly enough. One of the additional reasons to use OpenAFS is that not only does it use Kerberos for authentication, the data transfers are also encrypted. So, once it was clear that it worked as advertised (which it certainly does), it was time to start poking at it.
While the instructions don't involve any more connections than power, the box has an obvious full set of rear connectors. This means the box will probably have a bunch of other uses, but for starters, it means hooking up a PS/2 keyboard and VGA monitor. Sure enough, there's a login prompt. Hitting Ctrl-Alt-Del performs a normal Linux-style shutdown and reboots; after an interesting netboot option menu (it appears the system uses Intel's PXE) it drops to a LILO prompt.
At this point, typing martian init=/bin/sh at the LILO prompt lets the kernel start, mount the root partition, and drop us into a shell to poke around. (Note that if you're following this in order to try it at home on your own box, you don't have to go this path - this is just the way to explore an unknown system.) During this ``poking'' process, we discover a few things:
* the box runs Debian, so apt-get update is useful... and it will be useful, since samba has had several security updates post-3.0.
Continuing along the exploration path, it is prudent to save enough of the original installation out of the way, so as to be able to restore things to the original Martian configuration later on. At the same time, after checking that root's dotfiles aren't booby-trapped, it's clearly easier and safer to do this from a real login, so this is a good time to reboot and then just log in as root. A simple way to do this is to copy root's dotfiles aside and then just tar everything up, carefully excluding things that don't pack well and aren't worth saving:

# mkdir .orig
# cp -a .??* .orig/
# tar --exclude /root --exclude /swapfile --exclude /dev \
      --exclude /proc --exclude /home/ \
      -czvf .orig/everything.tar.gz /
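The --exclude behavior is easy to convince yourself about on a scratch tree before trusting it with a whole root filesystem; this sketch (paths invented for illustration) shows that an excluded subtree really stays out of the archive:

```shell
# Toy version of the backup above: archive a small tree, excluding
# one subtree the way /proc and /dev are excluded on the real box.
rm -rf /tmp/marsdemo
mkdir -p /tmp/marsdemo/etc /tmp/marsdemo/proc
echo conf > /tmp/marsdemo/etc/passwd
echo junk > /tmp/marsdemo/proc/junk
tar --exclude /tmp/marsdemo/proc \
    -czf /tmp/marsdemo.tar.gz /tmp/marsdemo 2>/dev/null
# List what actually landed in the archive:
tar tzf /tmp/marsdemo.tar.gz
```

The listing should show etc/passwd but nothing under proc/, which is exactly the property you want before relying on everything.tar.gz as a restore source.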
Next, now that you have a backup of the original /etc/passwd, change the root password. If you're going to bring this machine up on the net with any remote access features, you want to be sure someone else (or some tool) doesn't just try it.
Having done that, you may find it useful to enable
sshd so that you
can get in from somewhere more comfortable. Along with changing the
password, you'll want to change the host keys, since they're also
duplicated on other machines out there:
# cd /etc/ssh
# rm sshd_not_to_be_run
# ssh-keygen -N '' -f ssh_host_rsa_key -t rsa
# ssh-keygen -N '' -f ssh_host_dsa_key -t dsa
# /etc/init.d/ssh restart
* find http://www.toonopedia.com/martian.htm and be amused
Since I'm not going to run any inherently insecure services on the box, I'll start with turning off samba and thttpd. Note that these aren't bad implementations - they are simply used in insufficiently secure ways for me to be willing to expose them to the network:

# /etc/init.d/samba stop
# /etc/init.d/thttpd stop
# update-rc.d -f thttpd remove
# update-rc.d -f samba remove
Note that /etc/init.d/samba contains some changes to handle a Martian-specific samba.martian.wrapper; since Debian marks this as a conffile, these changes are protected from upgrades. Also,
/etc/init.d/thttpd has code that stuffs the IP address of the
machine into the setup.html that is visible from the SMB share
so that non-Rendezvous-enabled users can still find the box.
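That IP-stuffing trick is simple enough to sketch; this is not the actual Martian script, just an illustration of the idea, with the address and file path made up:

```shell
# Hypothetical stand-in for the thttpd init script's rewrite step:
# substitute the box's current address into a template page.
ADDR=192.168.1.50    # in the real script this would come from ifconfig
printf 'Configure this box at http://ADDRESS/\n' > /tmp/setup.html
sed -i "s,ADDRESS,$ADDR,g" /tmp/setup.html
cat /tmp/setup.html
```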
Next, we need to fix /etc/apt/sources.list by changing the security line to
deb http://security.debian.org/debian-security stable/updates main contrib non-free
so we can take the outstanding security updates:
# apt-get update
# apt-get -u upgrade
The following packages will be upgraded
  libcupsys2 libldap2 libpng2 libssl0.9.6 perl perl-base perl-modules
  samba samba-common wget
We want these (I'm a little surprised that Martian hadn't pushed out a samba update already, actually) and if we do need the old ones back for some reason, we still have .orig/everything.tar.gz from above.
While we're at it, there are a bunch of other packages I tossed in during the course of this effort; you'll be happier pulling them in up front instead of going back one by one.
(strace is already present.)
# hdparm -q [PUT ARGS HERE] true
hdparm -u 1 -d 1 /dev/hda1
(and reboot, or run that command directly) to get significantly improved disk performance.
As a server you find with Rendezvous or SMB solely on a local
net, you don't need a fixed address and hostname. However, as an
OpenAFS server, you need to have an address that stays fixed so that
the Volume Location Server can tell clients where to find the data
they've requested. If you run ifconfig you should see entries for both the wired and wireless interfaces. Both of these list an HWaddr entry; this is the MAC address which you need to tell your DHCP server about. Note that the
machine will always try to bring up both interfaces; you may wish to
adjust /etc/network/interfaces to let you switch it from one to the
other manually. I found it simplest to have both interfaces listed in
the DHCP server with duplicate
host stanzas, as I'd only use one
mode at a time - wired for initial large fast data transfers, and then
wireless for when I want to put it on a shelf somewhere out of the way
(the ultimate goal.) This approach could use work, but I'll need to
see what my use cases work out to in practice.
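For ISC dhcpd, the duplicate host stanzas I mean look something like this (the MAC addresses here are invented - use the HWaddr values from ifconfig):

```
host martian-wired {
    hardware ethernet 00:11:22:33:44:55;
    fixed-address martian-afs-server.example.com;
}
host martian-wireless {
    hardware ethernet 00:11:22:33:44:66;
    fixed-address martian-afs-server.example.com;
}
```

Since only one interface is up at a time, handing both the same fixed address keeps the Volume Location database happy either way.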
Be sure to set up a DNS entry as well, and a reverse-address listing, so that when you later configure Kerberos it has a chance of working.
Since the OpenAFS client includes kernel modules, we need ``kernel
stuff'' around to build against. Looking at
dpkg -l "kernel*" shows nothing installed, which is odd because the running kernel clearly came from somewhere. However, the kernel in use is properly packaged, and there is a /boot/config-2.4.20-martian so we can work with it, even if they've
failed to correctly walk the two obvious GPL paths (``promise in
writing for 3 years'' or ``sources included with''.) So, assuming they
don't have any patches (or rather, that we can build the module without caring about any patches; I haven't looked at the wireless support closely other than to note that it uses a loadable module), download 2.4.20 from kernel.org:
http://kernel.org/pub/linux/kernel/v2.4/linux-2.4.20.tar.bz2
Then get the debian-packaged OpenAFS sources:
apt-get install openafs-modules-source
Unpack the kernel source, so that we can at least configure it:
cd /usr/src
tar xjf linux-2.4.20.tar.bz2
Pull in the existing config (the /boot/config-* convention, all by itself, is a reason to use debian :)
cp /boot/config-2.4.20-martian linux-2.4.20/.config
Follow the instructions in /usr/share/doc/openafs-modules-source/README.modules:
tar xzf openafs.tar.gz
cd linux-2.4.20
make-kpkg configure
make-kpkg modules_image
dpkg -i ../openafs-modules-2.4.20_1.2.3final2-6+10.00.Custom_i386.deb
And now we do the normal client configuration.
apt-get install -u openafs-client openafs-krb5
Answer the debconf questions as they come up.
Finally, start the client:

# /etc/init.d/openafs-client start
This takes a while the first time, because it is initializing the cache (/var/cache/openafs.)
In order to load the relevant keys, it is easier to get Kerberos set
up right up front. Since I also use
ssh-krb5 for everything, we
take care of that too.
First we install
krb5-user because it has
kadmin in it:
apt-get install krb5-user
Answer the config questions the first time, but note that it turns out that this isn't enough to make kadmin work; we need to answer some of the higher priority questions. Instead of changing the priority just this once, we explicitly run dpkg-reconfigure and answer the admin server question with kerberos-1 or something that explicitly always gets the master server.
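What those answers end up writing is, in essence, a /etc/krb5.conf along these lines (realm and hostnames taken from the examples used below; kerberos-1 is the hypothetical master):

```
[libdefaults]
    default_realm = EXAMPLE.COM

[realms]
    EXAMPLE.COM = {
        kdc = kerberos-1.example.com
        admin_server = kerberos-1.example.com
    }

[domain_realm]
    .example.com = EXAMPLE.COM
```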
Before we try to use Kerberos, we need to get the clock in sync. You
could set it manually (it only needs to be within about 5 minutes) but
rather than leave a lurking problem, I just install the NTP tools. Either way, you want to first run tzconfig to set the timezone so that you don't get confused about the resulting values.
apt-get install ntpdate ntp-simple
If you don't know an ntp server offhand, you may be able to use time.<your domain> or time.<your ISP>. Also, unless you know otherwise (i.e. you explicitly configure broadcast NTP) I'd suggest letting debconf overwrite the config file.
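The resulting /etc/ntp.conf only needs a line or two for the non-broadcast case (the server name here is assumed):

```
# minimal client-only configuration
server time.example.com
driftfile /var/lib/ntp/ntp.drift
```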
Once the clock is in sync, we can configure the keys we need (if you don't also run the Kerberos realm, just ask the admin for a host/whatever key.) In the examples below, assume that your realm is EXAMPLE.COM (case sensitive, but the convention is for it to be the all-caps duplicate of the DNS domain.)
# kadmin -p you/admin
kadmin: ank -randkey host/martian-afs-server.example.com
WARNING: no policy specified for host/martian-afs-server.example.com@EXAMPLE.COM; defaulting to no policy
Principal "host/martian-afs-server.example.com@EXAMPLE.COM" created.
kadmin: xst host/martian-afs-server.example.com
Entry for principal host/martian-afs-server.example.com with kvno 3, encryption type Triple DES cbc mode with HMAC/sha1 added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/martian-afs-server.example.com with kvno 3, encryption type DES cbc mode with CRC-32 added to keytab WRFILE:/etc/krb5.keytab.
Now that we have a key, make it useful. Edit /etc/hostname to contain the full name, and also run
# hostname martian-afs-server.example.com
(/etc/init.d/hostname.sh takes care of this on reboot.)
Now install ssh-krb5 itself, and give yourself permission to log in as root using Kerberos (you may wish to use a more restricted principal instead, depending on your site policy.)
# apt-get install ssh-krb5
# echo you@EXAMPLE.COM > ~root/.k5login
Finally, confirm that it works, by running
$ ssh root@martian-afs-server.example.com
from some other machine on which you have the relevant tickets. (At this point, you can also go sit somewhere comfortable and work from your laptop, instead of working at whatever keyboard and monitor you've hooked up... in fact, you shouldn't need local access at any further point.)
While you're in there, you'll want to fix the first line of /etc/hosts to only mention localhost, and not the leftover martian entries - change

127.0.0.1 debian localhost martian

to

127.0.0.1 localhost
(I noticed and fixed this while looking for another problem, which turned out to be clock-skew related; I don't know if this fix is actually needed, since you'll be taking care of the clock early enough to avoid that problem. But do it anyway :)
It turns out that OpenAFS really expects to own the partition it puts volumes on. It does use the filesystem there, and I suspect the restriction is mostly a carryover from the non-Linux platforms that OpenAFS supports, but it does make sense to keep the OS and data partitions distinct, if nothing else because it can make fsck faster, and if you end up needing to replace the OS for some reason, the data should survive untouched.
You can actually do this step much earlier in the process, but you don't need it until you actually want an OpenAFS server running.
First, reboot single user (you should simply use martian single at the LILO prompt, since you now know the root password, rather than the init=/bin/sh trick from earlier.)
Next, do the easiest step: shrink the root to something small (plenty
of room for the OS and caches, but otherwise leaving most of the disk
for filesystem use). Note that I didn't run
lilo (to correct
lilo's list of blocks for finding the kernel) when I did this - I
believe it worked because none of the kernel was in the space above
the reduced extent of the partition. (I have run
lilo for other
reasons, long after this, and it did work, so the config file present
is sufficient - but I think you'd actually have to reboot here before
lilo anyhow to get back in sync with the on-disk form of
the partition, as well as to remount it writable.)
# ext2resize /dev/hda1 4G
# reboot
(Later steps don't work any *better* in single user mode, but this one did need the root to actually be readonly and stable.) Once it comes up, we repartition. Unfortunately, the default partitioning assumes a different geometry than the kernel comes up with by default, so the simple approach leaves the machine unbootable...
I haven't yet had a chance to test this procedure on a second box so be very cautious, and prepared to hook up a CDROM drive to boot a rescue disk from (or figure out how to use PXE to netboot something; I'd welcome tested pointers for inclusion here.)
First, here's the partitioning I ended up with:
Disk /dev/hda: 255 heads, 63 sectors, 232581 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hda1             1       536   4305388+  83  Linux
/dev/hda2           537       667   1052257+  83  Linux
/dev/hda3           668     14593 111860595   83  Linux
Command (m for help): x
Expert command (m for help): p
Disk /dev/hda: 255 heads, 63 sectors, 232581 cylinders
Nr AF  Hd Sec  Cyl  Hd Sec  Cyl     Start      Size ID
 1 00   1   1    0 254  63  535        63   8610777 83
 2 00   0   1  536 254  63  666   8610840   2104515 83
 3 00   0   1  667 254  63 1023  10715355 223721190 83
 4 00   0   0    0   0   0    0         0         0 00
Note that /dev/hda2 is actually a rescue install I made so that I could fix some of the unsuccessful attempts - but for an OpenAFS client, it might be useful to just remount it on /var/cache/openafs for a 1G fixed cache.
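It's worth checking that the expert-mode numbers above actually tile the disk: the Start and Size columns (in 512-byte sectors) should chain together, and each partition's sector count should be twice the 1K block count from the ordinary p listing. A quick shell-arithmetic check:

```shell
# Start/Size values from the expert-mode listing above, in sectors.
start1=63;       size1=8610777
start2=8610840;  size2=2104515
start3=10715355; size3=223721190
# Each partition should begin right where the previous one ends:
[ $((start1 + size1)) -eq $start2 ] && echo "hda1/hda2 boundary ok"
[ $((start2 + size2)) -eq $start3 ] && echo "hda2/hda3 boundary ok"
# Sectors are 512 bytes, fdisk blocks are 1K, so size/2 = blocks:
[ $((size3 / 2)) -eq 111860595 ] && echo "hda3 block count matches"
```

(The trailing + on hda1's block count in the p listing is fdisk's way of noting the odd half-block remainder, which this arithmetic also predicts.)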
The critical bit is that
Disk /dev/hda: 255 heads, 63 sectors, 232581 cylinders
says 255 heads; apparently what caused the trouble involved
thinking there were only 63 heads, with the ensuing problems. To be
sure things work out, run fdisk /dev/hda and then p to print it again; it shouldn't change the output, but if it does, make sure you understand it.
n 1 r +4096k
n 1 r 536
n 2 r +1024k
n 3 r RET RET
Check that the Hd, Sec, and Cyl columns have the same values (the second triplet will be different and reflect your partitioning above, namely they should be 254,63,536.) If the first triplet isn't 1,1,0 then you need to figure out what's going on, assuming that's what you saw the first time.
Compare against the x then p output from previously.
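The head/sector/cylinder triplets and the Start column are related by the usual CHS arithmetic, which is a handy cross-check when you're worried that fdisk picked up the wrong geometry; a sketch using the 255-head geometry from above:

```shell
# Absolute sector = (cyl * heads + head) * sectors_per_track + (sec - 1)
heads=255; spt=63
chs_to_sector() { echo $(( ($3 * heads + $1) * spt + $2 - 1 )); }
# args: head sec cyl, from the starting triplets in the expert listing
chs_to_sector 1 1 0      # hda1: prints 63
chs_to_sector 0 1 536    # hda2: prints 8610840
chs_to_sector 0 1 667    # hda3: prints 10715355
```

If the geometry were wrong (say, 63 heads), the same triplets would map to entirely different sectors, which is exactly the sort of trouble described above.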
If the start of the partition isn't correct, I recommend looking at
sfdisk, which is included with
fdisk and gives you complete raw
control; in particular, I used a command close to
echo 1,8610839,83 | sfdisk /dev/hda -N1
This tells sfdisk to operate on disk /dev/hda, partition number 1, and then it reads values from standard input, namely the start, size, and type. Check that sfdisk -n -l /dev/hda does list the same values. Note that sfdisk doesn't apply fdisk's sanity checks; I had to do a little arithmetic and go back and forth to get this right.
Now that the partition table looks good, reboot. Note that whatever happens, lilo will still boot the kernel because it has explicit block numbers for it; if it all worked, you'll be able to log in as
usual, if it didn't work, you'll get an obscure
error message, simply because it'll try all of the available partition
types and only complain about the last one that failed. In that
case, you're stuck with opening the box, installing a CDROM drive (the
only trick here is that you'll need a power-splitter, or ``Y''
connector, because there's only one old-style fat power cable end.
You can use any random IDE CDROM drive, as long as you don't want to
close the case up.) I recommend just using debian media to install
onto /dev/hda2 so you can look at man pages and work with
debugfs and the
offset, since that
partition will work fine, while you poke at the primary partition.
If it did succeed, now you can do the usual partition creation, and add the journal:
# mke2fs /dev/hda3
# tune2fs -j -c 127 -i 12m /dev/hda3
# mkdir /vicepa
Finally, add the /etc/fstab entry,
/dev/hda3 /vicepa ext3 defaults 0 3
and run mount /vicepa (or reboot) to make it available.
Finally, we actually install the server. Having previously mounted /vicepa, all we need to do is copy over the server config and start things up:
# rsync -e ssh -avu otherserver:/etc/openafs/server /etc/openafs/
Note the importance of copying this securely; /etc/openafs/server/KeyFile itself is the thing that needs protection, as it is the shared key among all of the servers.
# apt-get install openafs-fileserver
# bos create martian-afs-server fs fs \
    -cmd /usr/lib/openafs/fileserver \
    -cmd /usr/lib/openafs/volserver \
    -cmd /usr/lib/openafs/salvager -localauth
Confirm that it worked:
# bos status localhost -noauth
Instance fs, currently running normally.
    Auxiliary status is: file server running.
At this point, you can do your normal OpenAFS administration; I generally start by (using tokens with the relevant privileges, on another machine - you don't need to ever log in to the Martian box again, other than to pull in Debian security updates, at this point) adding an entry to the ``at-a-glance'' partition directory:
$ vos create martian-afs-server a disk.martian-afs-server.a
$ fs mkm /afs/example.com/service/partitions/martian-afs-server.a disk.martian-afs-server.a
$ fs sa /afs/example.com/service/partitions/martian-afs-server.a you rl
And then check it:
$ fs df /afs/example.com/service/partitions/martian-afs-server.a
Volume Name                   kbytes     used      avail  %used
disk.martian-afs-server.a  104511260 22772468   81738792    22%
(Ok, you got me, I've used it a bunch already...)
This page was produced in emacs 21 using Perl Pod format under Mac OS X. HTML produced by pod2html 1.03. Copyright 2003, Mark W. Eichin <email@example.com>. Republication permission available; otherwise, href is free - Link, don't Copy.