
23 November 2011 @ 11:09 am


Assuming ZFS version > 19:

zpool detach tank label/Zil_A
zpool remove tank label/Zil_B


All of the documentation says that recent versions of ZFS support removal of Intent Log devices; however, the standard zpool remove just doesn't work against a mirrored log device.


To establish a common basis, set up a 3-disk RaidZ with a mirrored ZIL and a pair of cache disks. For reference, this was originally performed on FreeBSD 9.0-rc1 with ZFS Pool Version 28.

zpool create tank raidz label/Disk_1 label/Disk_2 label/Disk_3
zpool add tank log mirror label/Zil_1 label/Zil_2
zpool add tank cache label/Cache_1 label/Cache_2

# Play with the pool a bit
bonnie++

# Remove the cache devices
zpool remove tank label/Cache_1
zpool remove tank label/Cache_2

# And remove the log devices
zpool remove tank label/Zil_1
# FAILURE: cannot remove label/Zil_1: operation not supported on this type of pool
zpool remove tank label/Zil_2
# FAILURE: cannot remove label/Zil_2: operation not supported on this type of pool
zpool remove tank label/Zil_1 label/Zil_2
# FAILURE:
# cannot remove label/Zil_1: operation not supported on this type of pool
# cannot remove label/Zil_2: operation not supported on this type of pool

This seems to be contrary to all of the existing documentation and marketing that touts removable log devices. Removal seems to work fine for the Cache devices, which wasn't really possible in older pool versions, and FreeBSD doesn't exactly make any significant modifications to the ZFS codebase. Do I need to destroy my pool and reload all of my terabytes of data?

As it turns out, it is necessary to break the mirror first:

zpool detach tank label/Zil_1 # Success
zpool detach tank label/Zil_2 # FAILURE
# cannot detach label/Zil_2: only applicable to mirror and replacing vdevs

It also turns out that the log is no longer a mirror once a device has been detached. The remaining device may now be removed like a normal log device:

zpool remove tank label/Zil_2
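
To confirm the pool is back to normal, a quick status check (the logs section should no longer appear in the pool configuration):

zpool status tank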

Overall, just a little something to keep in mind when playing with your pools. It's arguably obvious in hindsight: the log device is just another mirror until it isn't a mirror anymore, at which point it can be removed with remove. The "operation not supported on this type of pool" errors don't go a long way towards helping, though. The ZFS docs and error messages usually tend to be much more useful.

Current Location: The Office
Current Mood: embarrassed
20 November 2011 @ 06:05 pm

Quick Solution

Go to the All Computers list and delete everything. Try administering the new computer again.

This is a bit of a shotgun approach, but it worked for me. Beware that this will remove all of the systems and configuration information from your manually-assembled computer groups. If you are experiencing this issue, see if it is possible to remove just the system in question from the All Computers list.

The Problem

Every now and then, after imaging a fresh computer, it is impossible to connect to it over Apple Remote Desktop. Double-clicking the machine's entry in the Scanner does nothing. Upon closer inspection, the following appears in the logs:

Remote Desktop: kCGErrorIllegalArgument: CGSSetWindowShadowAndRimParametersWithStretch: Invalid window 0xffffffff

Attempted Solutions

  • Restart both the local and remote systems
  • Software updates on local and remote systems
  • Disable and re-configure remote administration from Preference Pane
  • Kill ARD daemons and restart ARD with Kickstart utility
  • Clear out ARD configuration plists
Current Location: The Office
Current Mood: indifferent
Current Music: Bastion OST
24 June 2011 @ 06:17 pm


The internally-hosted Software Update Service (SUS) is not downloading the latest updates. The logs say that there was an error downloading the .smd file for a given product:

swupd_syncd[12345] : Unable to download or create .smd file for product


A quick and dirty solution is to recreate the swupd directory, forcing a redownload of all the catalogs and packages. The SUS will be unreachable during this period. A shell sketch of the key steps follows the list.

  1. Open the Server Administrator tool
  2. Connect to the server hosting your SUS
  3. Click the Software Update entry in the list
  4. Select the Settings tab, and take note of the "Store Updates In" directory
  5. Stop the Software Update service
  6. In the console, as root, rename or delete the updates directory
  7. Recreate the updates directory
  8. chown the updates directory to _softwareupdate:_softwareupdate
  9. Restart the Software Update Service
  10. Wait while updates are redownloaded
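
For reference, here is a sketch of steps 5 through 9 from the shell, assuming the default /var/db/swupd store location (substitute the directory noted in step 4) and the swupdate service name used by serveradmin:

sudo serveradmin stop swupdate
sudo mv /var/db/swupd /var/db/swupd.old
sudo mkdir /var/db/swupd
sudo chown _softwareupdate:_softwareupdate /var/db/swupd
sudo serveradmin start swupdate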

Better Solution

It might be possible to get by with simply deleting the catalogs. This option is currently untested, but could potentially avoid a full redownload of all 25 GB of the current updates.
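
An untested sketch of that approach, assuming the catalogs are the .sucatalog files under the html subdirectory of the update store:

sudo serveradmin stop swupdate
sudo rm /var/db/swupd/html/*.sucatalog
sudo serveradmin start swupdate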

Current Music: Woe is Me

Solution Summary

After following the initial GPTZFSBoot on Mirror doc on the FreeBSD wiki, the Mac Pro's firmware fails to find a bootable OS. It turns out that the MBR needs to be updated. The partitioning tool on the rEFIt boot disc will handle this automatically. Run it, reboot the system, then run it again to take care of the other half of the mirror.

This works without issue, but has not undergone rigorous testing.


I'm currently examining the possibility of repurposing some of our Mac Pros for use as storage and backup servers. OpenSolaris is no longer an option, but FreeBSD now has production-ready ZFS support and AFP/CIFS support via Netatalk and Samba.

Problem Statement

Install FreeBSD on a ZFS Mirror pool on a pair of mismatched disks in a 2008 Mac Pro. Following the GPTZFSBoot on Mirror document leaves the system in an unbootable state.

Note that cost and performance are not a factor at this time, as we are utilizing spare hardware and have space in the server room. Booting into OSX is also not a goal.


Run the Partitioning tool from the rEFIt boot disk. This will fix up the MBR to include the partitions defined in the GPT. Fortunately, the GPTZFS tutorial includes exactly three partitions, totalling four with the EFI Guard Entry.

After running the partition tool, reboot the system from the rEFIt shortcut. It will be necessary to run the partition tool a second time, followed by another reboot, in order to update the MBR on the second disk in the mirror. Repeat as many times as necessary for all the disks in your boot pool, just in case a disk happens to fail somewhere down the line. This is why the documentation goes through installing a boot loader on all of the disks in the Root Pool.
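
To sanity-check the result from the FreeBSD side, the MBR view of each disk can be inspected with fdisk (a sketch; device names such as ada0 and ada1 will vary):

# The hybrid MBR written by the rEFIt tool should now mirror the GPT entries
fdisk ada0
fdisk ada1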

This solution unfortunately does not include support for a native EFI boot process, instead relying on the Mac's BIOS emulation that is normally used for Boot Camp. If time permits, it would be preferable to switch to a rEFIt-based boot solution on the OSX EFI partition, or even the experimental FreeBSD EFI Bootloader.


  • HPLogsdon - Used rEFIt in a multiboot situation.
Current Mood: accomplished

This is based off of Bombich's rsync with OSX Enhancements, substituting the official rsync and rsync-patches git repositories (as of v3.0.8).

Prerequisites for this install are OSX, Developer Tools, Xcode4/Clang, and Git (or just use release snapshots of the tree). Note that some more testing and formatting is still to come.

git clone http://git.samba.org/rsync.git
git clone http://git.samba.org/rsync-patches.git

cd rsync-patches
git reset --hard v3.0.8

cd ../rsync
git checkout -b bombich v3.0.8

# Apply patches
git apply ../rsync-patches/fileflags.diff
git apply ../rsync-patches/crtimes.diff
git apply ../rsync-patches/hfs-compression.diff

# Compile sources using clang, adding our own suffix
# Note that it is no longer possible to make a static compile on OSX
CC=clang ./configure
make EXEEXT=-3.0.8

# You can now move the binary to
# another machine, or run make install
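
As a quick sanity check, the suffixed binary can be run in place before moving it anywhere (the EXEEXT suffix names the result rsync-3.0.8 in the build directory):

./rsync-3.0.8 --version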

Removing Managed Prefs for Admin Users

This has been lightly tested against a 10.6.6 client. More testing is pending.

To disable Parental Controls without going through the UI, wipe out all of the user's Managed Preferences and delete their Managed Preferences directory.

sudo dscl . -mcxdelete /Local/Default/Users/$user
sudo rm -rf /Library/Managed\ Preferences/$user

This destroys any managed preferences that aren't automatically re-created. If System Preferences is open, close and re-open it to force changes to take effect.


A little while back, I was faced with a request to remove the Time Machine icon from the Menu Bar. My initial solution was to use Jamf Casper to push a Managed Policy (applied once, on next login) out to all of our Macs. Overall, this worked out quite well, and survived a fair bit of usage. There weren't even any complaints from users or our other IT staff (which includes myself).

In most situations, this wouldn't be a major issue. Regular users with management are regular users: they can't be promoted to Admins from the UI, but that rarely matters for them. It does, however, turn out to be a problem when your entire organization has local admin. New users must be created correctly, lest they become unmodifiable.

This situation proves to be quite a nuisance, but it is fortunately non-fatal. Restoring normalcy also proves to be a nuisance, as simply making the preference unmanaged doesn't fix the problem.

Problem Statement

Managed Preferences are causing Admin Users to have "Parental Controls" checked in the Accounts UI. This prevents toggling the user's Administrator status, as Parental Controls prevent modification of the "Allow user to administer this computer" checkbox. As the users are admins, they also don't show up in the Parental controls pref pane.


Undoing the confusion requires figuring out what the UI is doing, and what assumptions each component makes when reading the configuration. Most of this was gleaned from a mix of research and trial and error.

To prevent the problem from resurfacing, the Managed Preference must first be removed from the source (in this case, a bad Casper policy). It can be explicitly marked as Unmanaged, or could possibly be removed from the list of enforced entries.

Simply enabling Parental Controls causes entries to be added to a user's Directory Services entry. Upon login, these are written to a directory inside /Library/Managed Preferences/.
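
To see what is actually stored, the MCX attributes can be read back from the user's record (a quick check; MCXFlags and MCXSettings are the attributes involved):

dscl . -read /Local/Default/Users/$userName MCXFlags MCXSettings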

The Managed Preferences can be deleted with a simple sudo dscl . -mcxdelete /Local/Default/Users/$userName at the terminal. This is unfortunately very heavy-handed, as it wipes out all of the user's managed preferences. This might not be an issue, as the MCX preferences are usually rebuilt at every login.

The local cache can then be removed by deleting their /Library/Managed Preferences/$userName directory. The checkbox should now be cleared when the User prefs pane is reopened.

Current Mood: tired
18 January 2011 @ 12:04 am


There is an incredible range of ways to host your own website, each requiring different levels of attention and technical skill. I've dealt with nearly the entire spectrum at various points, all the way from GeoCities to my own servers running co-located DNS, Mail, Web, and various other services. The former proved extremely stifling, and the latter required too much attention.

After several tries, and the introduction of some new services, the best compromise of simplicity and attention was a combination of Amazon Route 53 and Google Sites.


  • Domain Name: Already exists, but is inactive
  • DNS: Able to customize and insert arbitrary records fairly easily
  • Website: Easy to edit, wiki-like
  • E-mail: Not required, sticking with my primary address, but could be useful later
  • Cost: Free, or extremely cheap
  • Maintenance overhead: Minimal


For DNS, Route 53 proved to be the most effective solution. I am not keen on running a DNS server again, especially given that my IP is only potentially static. Dealing with BIND or DJBdns is a further disincentive. Route 53 allows Amazon to host my DNS zone as primary and secondary (also tertiary and quaternary), piggybacking on my existing AWS account. While it does cost about $1.50/month ($2 if I see more than ~2M unique visits per month), I do have the ability to use AAAA records and a CNAME record to point to a dyndns entry, or even set up my own Dynamic DNS service.
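
For illustration, the records in question look something like this in standard zone-file notation (hypothetical names and documentation addresses):

example.com.       3600 IN A     203.0.113.10
example.com.       3600 IN AAAA  2001:db8::10
home.example.com.  3600 IN CNAME myhost.dyndns.example.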

Overall, Google Sites appears to be easy enough to use. It offers wiki-like functionality with history and the potential for collaborative editing. Generally, I don't have much need for anything fancy like JavaScript or raw HTML, so the simple Google Docs-style interface is sufficient. Site hierarchy and navigation are automatically generated, which simplifies user interaction. It saves having to download and maintain a wiki of my own on PHP and Apache or JBoss. It might be advisable to migrate the site over to Apps for the Domain to set up a more complete e-mail and docs service, should that ever become desirable.


Route 53 and Sites work out to being a decent compromise between flexibility and maintenance overhead. Route 53 + Apps for Domain would be a better compromise for a startup with multiple users or people without a footprint to migrate.

Current Location: home


Fear, Uncertainty, and Doubt (commonly known as FUD) is a tactic used to frighten potential consumers out of selecting a competing product by providing inaccurate, out-of-context, or outright fabricated information. It's quite rare to see such tactics employed against a company's own products and recent acquisitions; watching it happen is surprisingly frightening and unsettling, and I am frankly doubtful about the future.


On April 20th, 2009, Oracle Corporation announced its acquisition of Sun Microsystems and all of its associated staff and properties, including OpenOffice, OpenSolaris, Java, MySQL, and Glassfish. Since the acquisition, the OpenOffice community has fractured. The OpenSolaris effort was left to die, never quite euthanized, with its core pieces becoming effectively unavailable to anyone but the small enterprise. MySQL support costs have more than doubled for the low-end tier, and InnoDB has been pulled from the standard distribution. (EDIT 2011-02-19: InnoDB has been removed from a version of MySQL bearing the misleading name "MySQL Classic Edition", intended for use in embedded contexts. Anyone using the fully ACID-compliant InnoDB backend in that context might be doing something wrong.) Key members of Sun have left the company with varying levels of anger, or have been put in positions of no power. Google has been sued over various bits of Android. (TODO: Linkage to Google suits, Gosling's exit e-mail, various other exit emails)

The latest in the series of Oracle's arguably disastrous moves and side effects has been the statement that Oracle will create tiered JVM pricing for the reference JVM (now OpenJDK), with free and paid editions.

Why the Uproar?

At present, tiered pricing is a fairly common idea in the Free/Open Source Software world. It has been quite effective for Nokia's Qt framework, and worked well for MySQL even before the Oracle acquisition. Furthermore, commercial plugins for free software have also been widely used and accepted, such as FDT on Eclipse. Even for free software without a commercial license, paid support is often available for those desiring it. There are even commercial JVM offerings from IBM and (notably) Oracle.

So, why is Oracle any different in this case? Given the current acceptance of the myriad commercial offerings, paid add-ons, and business dealings, why should the announcement of a new commercial JVM be an issue? Why should a tiered JVM be a matter for concern, rather than just another healthy competitor with its own little niche?

The reasons to be concerned are two-fold. Firstly, Oracle is in fairly direct control of the Java platform, meaning that they are capable of manipulating it as per their business needs. Secondly, they have repeatedly shown themselves to be poor stewards of their recent acquisitions. The exact effects of this announcement are still to be determined, though Oracle's history does not inspire confidence.

EDIT 2011-02-19: Oracle has stated that it will leave the current JDK and JRE available with the existing free license models, and that things already marketed as premium extras will likely continue to be marketed as premium extras. I remain concerned that this does not make any statement about future releases, and that there is still significant opportunity to mishandle OpenJDK.

Doom, or just Gloom?

At this point, nobody outside Oracle can say exactly what will happen -- any outside thoughts are idle speculation. An Oracle representative has clearly stated that "there will always be a high-performance gratis JVM", which should imply that Java will always be available to those who need it. This statement fails to instill much confidence given the historical examples; it is fully conceivable that Oracle could marginalize the free JVM by sequestering new and useful features inside the paid JVM. This could lead to stagnation of the entire Java platform.

Even given the potential stagnation of the platform, it is likely that the Open Source community will be able to compete with Oracle's hypothetical feature set -- unless Oracle chooses to shun contributions that could compete with or resemble the official commercial offering, as it has already done to Google's Android and Apache Harmony.

Perhaps my fears are completely unjustified, and I have misjudged Oracle's intentions towards the betterment of the Java Ecosystem and Community -- I would be happy to do an about-face on the matter, but it's up to Oracle to do something right for the world at large, and it's not looking good so far.

Why Does it Matter?

If Oracle fails the Java community (or chooses to destroy it), and the community is unable to recover, the entire body of work and investment in the Java platform could be lost. If the Java community is not properly led, it could tear itself apart under its own power. This may register as a minor bump in Oracle's profit margins, but it would be a terrible loss to the programming community at large.

Where To Next?

The next steps aren't exactly clear, as the timescales involved are quite long by technology standards. The platform's momentum is significant, the major players are slow-moving behemoths, and Java is incredibly well-entrenched. If it were to suddenly die today, there would still be plenty of work in the field. The really cool projects would still be ongoing, and odds are that they would all try to switch to an LLVM or Parrot backend.

My response is to pick some new and interesting technologies and study them to build a more diverse set of personal options. I am nominating Shell Scripting, LLVM, Ruby on Rails, Objective-C/Cocoa, and C/C++/Qt. Maybe a smattering of JavaScript/HTML/CSS and C#/MVC in case things get tough. This group of technologies covers nearly all of my personal interests in large-scale cross-platform software development for both open-source and closed-source contexts.

Your choice of stack may, of course, vary.

Final Thoughts

Oracle has poisoned the well. The extent of the damage is unclear. There might be somewhere else to get water, even if it is from a distant stream.

Poison or not, Java ain't dead yet. The ecosystem is happy, and is still getting better. Even if it dies today, it is going the way of successful dead languages like COBOL. There is still time to use it and push it towards a good future. In the worst case, there is an opportunity for Java polyglots to start porting existing code to new platforms.

There are still a few really good years left, if not many more. I want to keep using the platform until the bitter end, or until something else more compelling comes along.

Current Mood: anxious
Current Music: Champion - No Heaven
18 July 2010 @ 01:03 am

It's always fun to dabble with new technologies, but it is outright boring to do so without an actual goal in mind. You've done the Hello World... so now what? Problems must be solved. Value must be created. Fun must be had! Hence, a new project.

Buzzword Barrage

Buzzword Barrage is an attempt to combine numerous new and popular technologies, practices, and ideologies in a single place. Some entertaining ones that come to mind include: Java, Groovy, Grails, OSGi, Spring-DM, Facebook, Cloud Computing, Content Distribution Networks, and Internationalized/Localized Content Management Workflows.

Given the above, the only sensible deliverable is an enterprise grade Facebook application framework supporting application and fan page management for an international audience, all backed by a cloud computing platform.


The Buzzword Barrage project has the following immutable constraints:

  • Open-Source
  • Fun (at least for me)
  • Free, as in Beer, or at least similarly inexpensive
  • Core implementation should take about a month

Initial Design Notes

The system's functionality can be divided into areas of Facebook posting, Facebook monitoring, data storage, content management, and content display. The first three are the most interesting, and seem achievable during the core development phase. Furthermore, these components are directly usable on the Google App Engine platform, enabling free development and usage. These classes can then be used in the context of an OSGi container, which will require a switch to Amazon EC2.

Switching to EC2 will allow for usage of the full AWS platform, including S3 for dynamic content management (large assets) and CloudFront for high-performance content distribution and streaming. This will unfortunately require an ongoing investment in AWS to develop properly.

Current Mood: determined
28 March 2010 @ 11:29 pm

The Problem

OpenSolaris Zone networking fails after upgrade to the Development tree. After the upgrade, shared-IP zones seem to be unable to connect to the internet, with unusual errors like "ping: unknown host" for hosts that are reachable by the global zone.

The Solution

In short, the solution is to use Crossbow to provide a VNIC for the zone. Set the zone up with ip-type=exclusive, and everything should work as normal.

The entire process of zone setup is outlined in Brian_Leonard's Creating/Cloning a Zone Demo article. Finding this document was unfortunately a challenge, as there were no keywords linking my issue to what was being described.

The Specifics

Begin by setting up a new Virtual NIC for the "myzone" zone:
pfexec dladm create-vnic -l rge0 myzone0

Configure the zone (note the end and commit needed to close out the net resource and save the configuration):
pfexec zonecfg -z myzone
create
set zonepath=/rpool/zones/myzone
set ip-type=exclusive
add net
set physical=myzone0
end
commit
exit
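
Once the configuration is committed, the zone still needs to be installed and booted (standard zone workflow, sketched here for completeness):

pfexec zoneadm -z myzone install
pfexec zoneadm -z myzone boot
pfexec zlogin -C myzone   # finish first-boot setup on the zone console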

The rest of the zone configuration is fairly straightforward, and is generally in line with a standard zone setup. For those who haven't set up an exclusive-IP zone before, expect the following key differences:

  • 30-second wait when adding IP, routing information, and IPv6

  • It seems to help if the default router/gateway is set explicitly, rather than autodetected (see the one-liner after this list)

  • DHCP might be usable in this context
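
For the default-router item above, pinning the gateway explicitly inside the zone is a one-liner (hypothetical address shown):

echo 192.0.2.1 > /etc/defaultrouter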

Current Mood: geeky