This is one of a series of computer configuration stories, in reverse chronological order. Others are accessible from the home page for this activity.
Migrate to a newer hardware platform with SATA disk support in order to expand storage capacity.
The first install with Ubuntu 12.04.1 worked at first, but then wouldn't reboot. Among other on-screen errors (seen after plugging in a monitor and finding the boot stuck at the GRUB splash), the swap file wasn't ready when needed and the (initially wireless) network configuration wouldn't initialize. After some Googling, and reading that cryptoswap setup might be responsible for the former error, I reinstalled without an encrypted home directory. As a separate step, I then encrypted my home directory but explicitly declined to encrypt the swap.
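The encrypt-homedir-later step can be sketched roughly as follows, assuming the stock ecryptfs-utils tooling and a fully logged-out target user (the username "jlinn" is taken from later in these notes; adjust as needed):

```shell
# Encrypt an existing home directory after installation, without
# touching swap. The user must be completely logged out first.
sudo apt-get install ecryptfs-utils
sudo ecryptfs-migrate-home -u jlinn
# Deliberately do NOT run ecryptfs-setup-swap, so swap stays
# unencrypted and the cryptoswap-related boot hang is avoided.
```
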
There's apparently a nasty bug in Ubuntu 12.04 Server: the presence of "wins" in the active /etc/nsswitch.conf, added to enable access via the unqualified hostname, causes the boot sequence to freeze! It works as intended within the session where it's set, though. For now, I've deleted the offending element, and the system comes up. No fun having to find out about this and fix it from a recovery shell!
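For the record, the change amounts to editing one line of /etc/nsswitch.conf; the surrounding entries shown here are typical 12.04 defaults and are an assumption, with removal of "wins" being the actual fix:

```
# "hosts" line as it caused the boot hang:
hosts: files mdns4_minimal [NOTFOUND=return] dns wins
# "hosts" line after the workaround (boot proceeds):
hosts: files mdns4_minimal [NOTFOUND=return] dns
```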
After some experimenting and reference to SAMBA documents, I've actually been able to get SAMBA shares working with GO-NG alone and also when GORILLA is live and coexisting! The smb.conf, which started from GORILLA's version, is saved for reference at /etc/samba/smb-ref.conf.
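The actual working configuration is saved at /etc/samba/smb-ref.conf; as a hedged illustration only, a minimal share definition of the kind involved looks something like this (the share name and path here are assumptions, not the real ones):

```ini
[global]
   workgroup = WORKGROUP
   server string = GO-NG file server
   security = user

[gong-share]
   path = /srv/samba/share
   read only = no
   browseable = yes
```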
Unsurprisingly, wireless connectivity is slower than wired. Roughly, transferring a 1GB file via rsync from the TADPOLES-NG desktop (i.e., with wireless links to the access point and back) took about 25 minutes or 40MB/minute, via 802.11g connections varying around indicated values of 50Mb/sec and without much other congestion. As a rough benchmark, sha1sum-ming the same file on both platforms had the much newer desktop faster by about a factor of 2 (or, by a factor of 4 relative to the even earlier CPU in the original GORILLA), but it probably didn't get to use multiple cores for the purpose.
What a difference a month makes in home technology! I started this update by editing this file through an AFP share onto GO-NG from a MacBook Pro. GO-NG is now populated with 4TB of disk (1x 1TB, 1x 3TB); once I saw web wisdom re disk consumption in TimeMachine, I decided I'd take the plunge now rather than risk a possibly destabilizing reinstall later. This led me down the learning path of formatting (and seeing all of) a 3TB disk with a GPT partition table. I wouldn't expect that the BIOS in a motherboard from ca. 2004 would be able to boot from it, or that the motherboard's designers ever thought its SATA ports would be driving this much storage, but the Linux kernel is happy to access it as a second drive and it's working well so far.
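Labeling a >2TB disk with GPT and carving out one large partition can be sketched as below; the device name /dev/sdb is an assumption, and mklabel destroys existing data, so verify with lsblk before running anything like this:

```shell
# Put a GPT label on the disk (required to address >2TB),
# create one partition spanning it, and make a filesystem.
sudo parted /dev/sdb mklabel gpt
sudo parted -a optimal /dev/sdb mkpart primary ext4 0% 100%
sudo mkfs.ext4 /dev/sdb1
```
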
It took most of an extended weekend day to transfer the prior Gorilla's files via WLAN, partly because rsync transfers kept aborting (see: "Broken Pipe" errors on ssh connections) and maybe also because the WLAN had reverted to a slow speed. Now, alongside the wired Ethernet, the WLAN is showing 1 Mb/sec, which would certainly impede progress. After a few days' successful operation, I commented out the wlan startup from /etc/network/interfaces in favor of the wired port; it's faster and won't cause contention (or, maybe even, slowdown of faster devices) on the local wireless network.
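The resulting /etc/network/interfaces would look roughly like this; interface names and the wireless credentials shown are assumptions for illustration:

```
auto lo
iface lo inet loopback

# Wired port, now the sole active interface
auto eth0
iface eth0 inet dhcp

# wlan startup commented out in favor of the wired port
# auto wlan0
# iface wlan0 inet dhcp
#     wpa-ssid myssid
#     wpa-psk  mypassphrase
```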
A glance at a power meter shows 72 watts at idle. That means roughly 13.9 hours to consume a kWh, or about 1.7 kWh per day. With power at about $0.10/kWh (to go up, assuming I continue as one of the handful paying the NSTAR Green surcharge!), that would be 17 cents per day, or about $5.00/month if I were to keep the server on 24x7. I wouldn't do that (not only for reasons of economy, but also for energy conservation and for limiting the online server's exposure to attack), but it would hardly be ruinous if I did.
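The arithmetic checks out; for example, via awk:

```shell
# 72 W idle, $0.10 per kWh, 24x7 operation:
# hours per kWh, kWh per day, cost per day, cost per 30 days.
awk 'BEGIN {
  watts = 72; rate = 0.10
  hours_per_kwh = 1000 / watts          # ~13.9 hours
  kwh_per_day   = watts * 24 / 1000     # ~1.73 kWh
  cost_per_day  = kwh_per_day * rate    # ~$0.17
  printf "%.1f %.2f %.2f %.2f\n", hours_per_kwh, kwh_per_day,
         cost_per_day, cost_per_day * 30
}'
# prints: 13.9 1.73 0.17 5.18
```
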
Quick performance test: having populated the file tree onto one disk, I ran rsync to shadow a copy of it onto the other disk. Hardware diversity is your friend if one component fails! Transferring 194 GB took 115 minutes, with some other activities taking place in parallel, or 1.68 GB/minute, or 28 MB/second (aka 224 Mbit/sec.). That doesn't seem so bad for an almost decade-old processor. I imagine that it would be notably slower for a case that would require it to encrypt or decrypt the transfer (maybe twice, if for both ecryptfs and ssh layers), but haven't specifically timed that yet.
Preliminary testing with the old Gorilla server suggests that an upgrade from Ubuntu 12.04 to 14.04 is generally smooth, but that some refitting will be needed to make the Apache configuration work again. I'll probably need to figure this out alongside another system where I've configured Apache from the start within 14.04.
After having watched external disk backups take many hours when written onto an NTFS-formatted portable drive, I decided to get a new drive and format it as Linux-native, using the default type selection, which yielded ext2. I'd noticed that the ntfs module was often visible high up in top's process list.
Having done so, an rsync of 106GB of unencrypted data onto a clear drive took from 13:57 to 15:06, about 70 minutes or 1.5 GB/min, which seems substantially faster. A further 101GB of decrypted data took from 16:06 to 17:06, apparently even faster at 1.68 GB/min despite decryption processing, so the limiting factor may be USB (2.0?) throughput. As of 12 March, copying my user data (86GB) from one area of the external drive to another using cp -r took about 125 minutes, or 0.69 GB/min, close to what one would expect per the above for two passes through the USB, from and to the drive.
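Formatting the new drive as Linux-native ext2 is a one-liner; the device name /dev/sdc1 is an assumed example, so check lsblk before running anything destructive:

```shell
# Make an ext2 filesystem on the portable drive's partition
# and mount it for backups.
sudo mkfs -t ext2 /dev/sdc1
sudo mount /dev/sdc1 /mnt/backup
```
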
Revisited 28 March 2014, back to an NTFS drive but writing the decrypted and shadow data to empty disk space rather than rsync'ing into an existing copy of the directory tree. The unencrypted portion (from user …) took from 8:44 to 9:54, the same 70 minutes as seen above for writing to ext2. New working conclusion: both NTFS and ext2 may be able to reach the same USB speed limit, but writing to a clear disk area may be important for performance.
I might also try to reformat one or both of the existing WD Passports in my offsite backup rotation, but see web wisdom indicating that they've complicated reformatting with hidden partitions. The Toshiba that I'm now testing, after Staples was out of the Seagate model that I'd planned to purchase, seems to format and accept data smoothly.
I've also revised go-ng's shadow script files, using rsync -L ... on the external copy of the general file system along with exclusion expressions, so that the encrypted versions of home directories aren't serialized into the external copy though they remain on the internal disk shadows. The decrypted copy of jlinn's homedir uses rsync --safe-links ... instead. This combination (along with separately decrypted copies of the encrypted homedirs) seems to yield a total size close in GBs to that of the original file system, so I assume that at least most relevant contents are present on the copy, though there's a glaring logged example of infinite misrecursion in X11 subdirectories. I flushed some non-personal directories (from a wart subtree), which should get rid of some, but I see that it's there for the base /usr/bin as well. I think I'll go back from --safe-links and look more carefully at what that skips.
A convenient way to avoid "broken pipe" errors, as noted below, on ssh connections is to run backup scripts under at, as with at now + 2 minutes if one wants to start the job shortly. Unfortunately, this isn't friendly to scripts that require sudo. Curiously, jobs sometimes seem to disappear from the list as shown by atq even while they continue to run; top is your friend, as well as writing meaningful status information into
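Queueing a backup this way looks like the following; the script path is a hypothetical placeholder:

```shell
# Hand the job to atd so it survives a dropped ssh session.
echo "/usr/local/sbin/shadow-backup.sh" | at now + 2 minutes
# List pending jobs; note that entries can vanish from this
# list while the job is still running.
atq
```
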