My favorite fonts, gender typing in typography, and truthiness.

I am a self-confessed font nazi. I love fonts. Andy, a good friend and housemate, recently asked for some font advice on Facebook, and used the word "masculine" to describe the kind of font he was looking for. This created... a hubbub among some folks in the gay community, and I posted the following.

Univers, Frankfurt Airport, where I first fell in love.

First, Frutiger's font styling is some of the best out there, and my all-time favourite sans is Univers. I own this font; I've spent about $1000 on it over the years for its various weights. It's clean, easy to read, works on everything from signage down to text, and it's gorgeous, gorgeous, gorgeous....

Univers® font family | Linotype.com


Now, I prefer it by far to the alternatives - Helvetica, Arial, and their simulacra. They just feel stranger, less formal, somehow more awkward. Disney, however, is currently moving to Avenir, another Frutiger face. Avenir Next, specifically, is slightly more informal, a little more streamlined and modernist.

Linotype Font Collections - Avenir Next


To get something to feel historical, and to lend the feeling of truthiness, you might opt for a historically-based serif like Baskerville.

For a more feminine take - and I'm using that word the same way I think you're imagining it, in this case for illustrative purposes - see Mrs. Eaves (the typeface).

Baskerville font family | Linotype.com


Century Schoolbook is an old hand among serifs; it's considered highly legible, even for a serif. (I have a strong bias against serifs - I just think they're frilly and superfluous - but they do something for lots of readers.) It lends a weird familiarity to the reading experience, mostly because readers have seen it a million times before but can never put their finger on where.

It's widely available. It's a bit... dare I say, basic. But it's good, it's honest, and it's not Times Fucking New Roman.

Century Schoolbook font family | Linotype.com


This is how ugly it used to be.

Emulating an Android feel requires Roboto. It's a weird, weird frankenfont, with the look and feel - you might argue "the best", and you might well bloody not - of several other fonts: Helvetica, Myriad, Univers, FF DIN, and Ronnia. It's an interesting study in the strangeness that is Google.

Of course, many of the original issues have since been fixed, but its roots are still deeply embedded in the patchwork that birthed it.

It is, now, however, a very good font worthy of consideration.

Roboto | Google Fonts

Roboto Is Was a Four-Headed Frankenstein | Typographica


As with many derived types, the differences can be subtle.  Thank you, Ive, for not putting little hearts above the lowercase i.

Then there's our more elegant and beautiful cousin. Emulating the feel of Apple devices requires San Francisco these days. The San Francisco fonts are interesting yet utterly derivative of their Geneva and Helvetica Neue past, Apple's previous system fonts. When you see them elsewhere, you see lazy. Unless the designer is trying to associate their product with Apple. Then you see brazen manipulation of the reader.

San Francisco | Fonts - Apple Developer


Consolas was one of the greatest gifts to the programming community. Microsoft's original is still an incredibly beautiful monospaced font - perfect for everything from terminals to the "full screen writing" or "distraction free writing" tools that I occasionally drop into.

Consolas | Microsoft Typography

And that's great, if you're on Windows. But to be frank, there's really no need to use it anymore. Inconsolata is an incredibly productive and useful alternative, which has been refined over time and is now remarkably complete and beautiful. It still lacks some of Consolas' polish, but it's free, and no one has an excuse not to switch over to it right now.

Inconsolata | Google Fonts


So, little-known fact: fonts often have popular pairings, and it's not uncommon to set headings and subheadings in a serif and then drop to a sans-serif for body text.

One of my favorite pairings in the free community is Merriweather with Open Sans; the former is a good, solid, free serif, and the latter is a nice, open-body sans that's round and full-bodied.

Merriweather | Google Fonts

Open Sans | Google Fonts


Because nobody gives a fuck about impact.  Really.

Sometimes you want a display font - something for logos and punchy intros. Something that isn't IMPACT, because you shouldn't ever use IMPACT: it's so overused it doesn't have any impact anymore.

For the rest of us, there's Hudson NY.

Hudson NY | MyFonts


You can almost see the 80's synthesizer blinking.

Sometimes you need a human touch in fonts. Hand-drawn fonts can be everything from sketchy to industrial, and there are more than a few drafter's-hand letterforms out there.

Me, I'm a child of the 80's. Every time I see Imogen Agnes I want to dress up in a Miami Vice salmon jacket and throw money at hookers.

Imogen Agnes | CREATIVEMARKET.COM


I don't always use handwritten fonts, but when I do, I use Might Could Pencil.

Now, if you think you need Comic Sans, you don't. You need Might Could Pencil. Proof that you can write like a cynical bitch and still look at the world with childlike wonder while you do it.

Might Could Pencil | CREATIVEMARKET.COM


Chronicle.  Your basic mid-newspaper glossy insert font.

For magazine style layouts, sometimes you want something that just kind of bleeds elegant fashion.

Personally, I'm a fan of Chronicle Display and its close cousin, Chronicle Text. The latter I almost never use for anything, because, you know, I hate serifs, but the former has a place in any designer's book.

Chronicle Display Fonts | Hoefler & Co.

Chronicle Text Fonts | Hoefler & Co.


Mercury and Esquire: Making news sexy since 1996.

Sometimes you need something less newsy but still serif'd, stylish yet bold. Mercury was designed for Esquire magazine, and it's the heart of their signature look. You see it instantly when you know; it's subtly there, influencing the mind of the reader, when you don't. You've seen it a million times on the cover, and somewhere deep in your soul, you make the connection. Because that's what good fonts do.

Mercury Display Fonts | Hoefler & Co.


Somewhere between the American Gothic sans serifs of old - News Gothic and its ilk - and the postmodern "humanist" fonts of Frutiger that just scream airport signage at you (because, yes, that's probably where you've seen them before) is Whitney. This one was designed for New York's Whitney Museum.

Whitney Fonts | Hoefler & Co.


Of course, I suppose I should wade into the "gender typing" war.

However you feel about the subject of gender typing, the goal of a designer is to evoke a sensation within the masses - to communicate. You do so using stereotypes: idioms and identifiable thought patterns you know exist in the mind of the reader.

It seems appropriate, on the day that Trump becomes President, to remind readers that we are ingrained with idiotic, pointless, stupid, foolish, and downright wrong stereotypes from birth. All the fucking time.

Your job, as a designer, is to pull every lever, to push every button, to silently shape the image you're conveying through both liminal and subliminal cues to deliver your message. This isn't just about what you write, or the images you use, but the shapes, the lines, the foundational design elements, the design language of the piece.

Design requires a deep understanding of the psyche of the individual you're designing for. So describing things as "masculine" or "feminine" is as real as the stereotype that exists in the mind of the reader - and believe you me, most people have that stereotype so well ingrained that you can use it.

Leveraging Stereotypes in Design: Masculine vs. Feminine Typography | DESIGNSHACK.NET


Last, a reminder: yes, I worked in display advertising for years and years. I've always been a fan of the business - Saatchi & Saatchi and Tomato are probably my favorite design houses.

Design, especially within advertising, is often about communicating using every possible element - constructing tiny universes where every single visible element is there to deliver and reinforce a message and identify with a target audience.

But that's design's goal - to build things that connect with humans, to build associations. It's the reason people care about their typefaces; they're literally part of the identity of the brand.

Apple is San Francisco. Android is Roboto. They're not just typefaces, they're full-blown wars over what makes something readable, about what humans like to see, about whether people are more likely to trust what you say when you use one font over another.

For one of the world's most fascinating experiments on this, read The Baskerville Experiment.

The Baskerville Experiment: Font and its influence on our perception of truth | MARKETINGEXPERIMENTS.COM

And with that, I conclude. Thank you, dear reader. ;)

Modelling the bodybuilding process

One of the biggest challenges in starting the process of bodybuilding is looking in the mirror every day. You see the person in the mirror, weighed down by the baggage of the person you've been seeing for years - not the person you're intending to become.

Visualisation is key to motivation.

To help with that, I've been working on some... projections of the effects of my workouts, plotting ahead towards the 9/1 deadline I've set for myself for Burning Man 2015.


Improving the model of base card value

So while the previous model had a good fit, we can do better.

First, it's worth noting that we were training only on creatures; weapons with no card text weren't included in the fit, and we were valuing them using the creature model.

By adding a categorical value for the card type, we can include both sets of data in the model, and provide a better prediction for both creatures and weapons.

Lastly, we add a term to the model to account for card balance, penalizing cards that are "lopsided" towards attack or health.
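Putting those pieces together, the call probably looks something like the sketch below. The formula is reconstructed from the coefficient names in the summary that follows; the dataset name and subset filter are assumptions carried over from the earlier fits in this series.

# Sketch: card type as a factor, plus a sqrt(abs(Attack - Health))
# "lopsidedness" penalty, interacted with the base Attack/Health terms.
# The CardType_q2 term in the output below suggests CardType was already
# a factor in the real data; factor() here is my stand-in.
fit2 <- glm(
  Mana ~ (Attack + Health) * sqrt(abs(Attack - Health)) * factor(CardType),
  family = poisson(link = "identity"),
  data = dataset,
  subset = CardText == "" & Mana > 0
)
summary(fit2)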

Predicted vs actual for fitted values.  Remember that horizontal values (actual) are quantized integers; vertical values (predicted cost) are not.


An updated poisson(link="identity") model, including the card type as a categorical (factor) attribute of the model.

Deviance Residuals: 
     Min        1Q    Median        3Q       Max  
-0.80723  -0.05861   0.03219   0.11557   0.88626  

Coefficients:
                                              Estimate Std. Error z value Pr(>|z|)
(Intercept)                                    0.05952    0.45112   0.132    0.895
Attack                                         0.06746    1.17126   0.058    0.954
Health                                         0.81846    1.14619   0.714    0.475
sqrt(abs(Attack - Health))                    -0.29185    0.98007  -0.298    0.766
CardType_q2                                   -2.39374    3.99784  -0.599    0.549
Attack:sqrt(abs(Attack - Health))              0.25817    0.53455   0.483    0.629
Health:sqrt(abs(Attack - Health))             -0.18654    0.64174  -0.291    0.771
Attack:CardType_q2                             0.07223    2.55269   0.028    0.977
Health:CardType_q2                             0.98153    2.49477   0.393    0.694
sqrt(abs(Attack - Health)):CardType_q2         0.73281    2.73863   0.268    0.789
Attack:sqrt(abs(Attack - Health)):CardType_q2  0.46050    1.66056   0.277    0.782
Health:sqrt(abs(Attack - Health)):CardType_q2 -0.67290    1.20226  -0.560    0.576

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 59.0578  on 45  degrees of freedom
Residual deviance:  3.7054  on 34  degrees of freedom
AIC: 148.7

Number of Fisher Scoring iterations: 6

...and for a preview of just how much card mechanics beyond the Mana/Attack/Health relationship affect the true card cost, here's a plot of the predicted base vs actual for all minions and weapons.

Predicted "base value" vs the actual value, highlighting the effects of other card mechanics on the true cost of a card.

Base Card Value: Gaussian vs Poisson

All of these mathematical endeavours begin with the presumption that Blizzard has a secret formula it uses to compute the amount of mana a card ought to cost. The value printed on the card will differ from that formula's output for one of three reasons: the influence of mechanics, the effects of rounding (because a card costs 1 or 2 mana, not 1.5), and tweaking based on how it plays (variance introduced through human evaluation).
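Informally - and this notation is mine, not Blizzard's - the working assumption is:

  printed mana = round(base(Attack, Health) + mechanics value) + human tweak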

What we're ultimately trying to build (for the base card value) is a model of the basic relationships between the values on the card (Attack, Health, and Mana) and these other effects, to 'reverse-engineer' the basic formula.

A standard lm fit analyzes the variance of a dataset under the assumption that the errors follow a gaussian distribution, and produces a continuous linear predictor for those values.

As always, model fitting is part art, part science, and part throwing shit at the wall to see what works. Or, it is when I do it, at any rate.

So.

Given our belief that there's a direct, additive relationship between the base card value and the Attack/Health values on the card, a glm with a poisson family should, in theory, get you a better fit than the gaussian; it better represents the expected behaviour of a count variable. But is it true?

First, the original, gaussian LM fit from last time.

Call:
lm(formula = Mana ~ Attack + Health, data = dataset,
    subset = CardType == 1 & CardText == "" & Mana > 0)

Residuals:
    Min      1Q  Median      3Q     Max 
-1.9940 -0.2844  0.1968  0.2218  0.7611 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -0.16376    0.14999  -1.092    0.283    
Attack       0.50626    0.06172   8.202 2.28e-09 ***
Health       0.43566    0.06107   7.134 4.27e-08 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.4999 on 32 degrees of freedom
Multiple R-squared:  0.9359,    Adjusted R-squared:  0.9319 
F-statistic: 233.6 on 2 and 32 DF,  p-value: < 2.2e-16

The autoplot for lm(formula = Mana ~ Attack + Health), showing a fit using the gaussian family.

Now let's change tack.

If we presume that our outcome value is "count"-like - additive in nature from the base values on the card - we can switch to the generalized linear model with a poisson distribution, with intriguing results.
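A minimal sketch of that switch (the printed call below shows family = family, i.e. the family was passed in as a variable; the default log link is my assumption here):

fit_pois <- glm(
  Mana ~ Attack + Health,
  family = poisson,  # assumption: default log link
  data = dataset,
  subset = CardType == 1 & CardText == "" & Mana > 0
)
summary(fit_pois)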

glm(formula = Mana ~ Attack + Health, family = family, data = dataset, subset = CardType == 1 & CardText == "" & Mana > 0)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-1.0093  -0.2435  -0.1382   0.2194   0.9077  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)  
(Intercept) -0.17849    0.23141  -0.771   0.4405  
Attack       0.14891    0.06234   2.389   0.0169 *
Health       0.16466    0.06622   2.487   0.0129 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 44.9140  on 34  degrees of freedom
Residual deviance:  4.4248  on 32  degrees of freedom
AIC: 101.1

Number of Fisher Scoring iterations: 4

The autoplot for glm(formula = Mana ~ Attack + Health, family = poisson), showing a fit using the poisson family.

Note the differences in the two sets of graphs:

  • Residuals are healthier. The residuals on the poisson distribution have better deviations; LM range was -1.9940 to 0.7611 (~2.75); our Poisson GLM is -1.0093 to 0.9077 (~1.9).

  • Residual QQ is better. The two graphs are day and night; you now see a nice, clean, quantized Q-Q for the residuals, showing the stair-stepping you'd expect to see when you know that whatever magic formula exists has the effects of rounding (because mana is an integer) applied.

  • Cook's Distance improved. We go from having some pretty strange outliers to being within 0.06 on all modelled values.

Scale-Location and Residuals vs Leverage also see huge improvements over their normal counterparts.

In short, the poisson family appears to do a much better job of estimating the base value of the card than the normal family.

pMana and Base Card Value

So, once you've scrubbed zero-mana cards out of scoring (they're a problem disconnected from value - a zero-mana card costs you a deck slot, but in isolation it's always a pretty good deal), you clean up a few of the outliers and end up with a pretty solid qqplot.
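In R terms, the scrub and the plot are a couple of lines - a sketch, assuming fit_base is the vanilla-minion lm described in the mana-cost post further down:

scored <- subset(dataset, Mana > 0)                       # drop zero-mana cards
qqplot(scored$Mana, predict(fit_base, newdata = scored))  # actual vs. predicted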


QQ plot of predicted vs. actual, monsters > 0 mana with no additional mechanics.

QQ plot of predicted vs. actual, for *all* monsters and weapons (including those with mechanics)


Our goal in computing a base card value is to look purely at the numbers on the card, without considering the effects of the card mechanics, the rule violations that appear on the card. You do this for a few reasons:

  • Many mechanical effects are situational, and may just not go off the way you hoped they would when you built the deck.
  • The "Silence" mechanic wipes a card's text, leaving you with just the base minion.
  • Dependencies on combos and card synergies require careful deck engineering.

Most of your deck should be stable, dependable, and work towards a single goal; every card in your deck should help you reach that goal. Like building your first boat, or house, the temptation is to throw every cool thing you ever saw into a deck; what usually happens next is a catastrophic chain of losses.

What makes building a valuation like this interesting is the stuff at either end, the outliers on the outskirts of Value Town. I'm also pleasantly surprised to find a fairly normal distribution.

The outliers are more-or-less who you'd expect them to be.

At the bottom of the value pile is the Molten Giant, at a mana cost of 20 for an 8/8 creature that's really only worth 7-8 mana. It's all in the rule violation of the card text: Costs (1) less for each damage your hero has taken.

At the top of the heap is Mukla's Big Brother... a massive 10/10 creature that costs 6 mana but should cost 9. Again, the card text says it all: So strong! And only 6 Mana?!

Two more cards without card text do well here:

  • Emerald Drake, costing 4 mana for a 7/6 creature, is well known to be great value, and at a 92/100 rating here it's a great choice.
  • Blood Fury is great value at 3 mana for a 3/8 weapon.

Their brethren above 90% are all either great value or have debilitating problems with their additional mechanics.

  • Ancient Watcher, at 2 mana for a 4/5, looks good until you see its big restriction: it can't attack.
  • Injured Blademaster has a debilitating Battlecry that deals 4 damage to himself; the 4/7 creature becomes a 4/3 when you play him.
  • Earth Elemental is great value at 5 for a 7/8, but its Overload: (3) locks three of your mana crystals for a turn.

Millhouse Manastorm and Flame Imp are right up there with Ancient Watcher, but again, those debilitating mechanical violations come in to destroy their utility.

Even the Oasis Snapjaw and Chillwind Yeti appear in the correct order. (Yeti > Snapjaw, in case you didn't know.)

A cursory review of cards shows a pretty good match for general consensus on perceived value, so I'm going to roll with this for a V1.

Histogram of card$baseScore generated from linear regression

Note that it's hard to read too much from the distribution here; this is data that came out of the model, which itself presumes a normal distribution of the source data. I'd have to switch to a glm or bayesian model to avoid making that assumption about the source data...

Which is something I'll look at in future passes. For now, I'm comfortable that the fit passes basic sniff tests.

Hearthstone: Mana cost

When evaluating the base cost of a card, you might be tempted to say that most of the cost lives in the base attributes; so how would you evaluate that statement for truthiness?

Looking at a linear regression fit from all Minions with an expressed cost >0:

Linear regression model of all Minion cards with mana cost > 0
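In R, that fit is a one-liner; the dataset name and the CardType == 1 minion filter here are assumptions carried over from the other posts in this series.

fit_all <- lm(Mana ~ Attack + Health, data = dataset,
              subset = CardType == 1 & Mana > 0)
summary(fit_all)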

First, just look at the quantization in the residuals-vs-fitted. Pretty, isn't it? That suggests that the mechanics associated with these cards have clear, distinguishable values; this is Blizzard's own statisticians at work.

Next is the fit to a normal distribution; not bad, and as you'd expect, the outliers are the ones whose mechanics strongly influence mana cost (in either direction).

Residuals:
    Min      1Q  Median      3Q     Max 
-4.7648 -0.5829 -0.1133  0.4963 11.3935 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -0.02670    0.14274  -0.187    0.852    
Attack       0.54873    0.04468  12.282   < 2e-16 ***
Health       0.53042    0.04092  12.962   < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.21 on 270 degrees of freedom
Multiple R-squared:  0.7617,    Adjusted R-squared:   0.76 
F-statistic: 431.6 on 2 and 270 DF,  p-value: < 2.2e-16

So a basic LM fit is surprisingly expressive - more so by far than I was expecting - and it matches Trump's (the Hearthstone streamer's) view that the base cost of a card is a very important factor. In fact, even without filtering out all of the cards that represent more unusual cases, it covers more than 76% of the variance in the dataset.

We can do better, though - if we're looking to fit a model for base cost, let's restrict the model to cards that don't express any other mechanics.

In other words, let's go build a linear model that fits only the relationship between Mana, Attack, and Health for minions with no other mechanics.
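As a sketch, with the same assumed column names, and an empty CardText standing in for "no other mechanics":

fit_base <- lm(Mana ~ Attack + Health, data = dataset,
               subset = CardType == 1 & CardText == "" & Mana > 0)
summary(fit_base)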

Linear regression model of all Minion cards with mana cost > 0 and no other mechanics.

The resulting fit is better, too - we're at 93% of the variance of the data covered by the model.

Residuals:
    Min      1Q  Median      3Q     Max 
-1.9916 -0.2755  0.2203  0.2287  0.7730 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -0.17189    0.13566  -1.267    0.213    
Attack       0.50416    0.05960   8.460 3.56e-10 ***
Health       0.43905    0.05918   7.419 7.90e-09 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.4879 on 37 degrees of freedom
Multiple R-squared:  0.9338,    Adjusted R-squared:  0.9302 
F-statistic: 261.1 on 2 and 37 DF,  p-value: < 2.2e-16

Of course, the problem with this is that we're now restricted to 37 degrees of freedom, and there's still quite a bit of scatter between the fit and residuals.

In fact, if you take this model and use it to predict the mana cost of every card, as if each card had no mechanics beyond its base values, you get something like this:
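(That scoring pass, as a hypothetical sketch - the baseScore column name is mine:

dataset$baseScore <- predict(fit_base, newdata = dataset)

)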

It bodes well for my tuning of cost evaluation models for Deckalytics.

Weathrman is no more...

Weathrman was built around a simple idea; search Flickr for a photograph taken near you, showing weather conditions and time of day roughly similar to your current location's weather conditions and time of day.

For the most part, it worked surprisingly well, given the constraints: Flickr's API offered me limited search capabilities, time-of-day is badly recorded in images, most of Flickr's images weren't geocoded well, and the API is horribly slow. And although the app starts searching hyperlocal and steps outwards until it finds something, each of those queries runs sequentially (due both to Flickr's daily query limits and to the cost of AppEngine to me).

It's been many years since I seriously looked at the codebase; the app still worked, but the quality of the images it pulled had steadily decreased - people aren't really using Flickr like they used to.

The net effect is that the app was old (written for Froyo, and last updated in 2011), had bad image choices due to limited API control... and for a while, was actually costing me money to run, as it had enough users to hit AppEngine's limit on free CPU time.

But no longer; it's fallen out of use and gets few requests. The app has bitrotted, people have moved on from being endlessly fascinated by live wallpapers, and there's no point leaving it up.

So down it comes, three years after my last update.

If it brought you joy, thank you; if it brought you tears, I apologize.

It's ALIVE!!!!

AdSense Dashboard 3.1 is alive!!!

For those who've been wondering, life's been a little busy lately. Since moving to Mountain View, I've taken on the role of TL and Manager on the advertiser-side frontend of our Display business at Google - what we call the 'Content AdWords Frontend'.

That's left me with precious little time; I knew about 1.2 six months or more before it was released, and knew I'd have to migrate the dashboard to the new API - but I ended up moving to Mountain View to take on this role before I had the chance to do so.

Now that my other major project, Search and Display Select, has launched, I have a moment to take a breath and fix the dashboard.

Now, keep in mind I still want everyone to move over to the official app - but it's also true that it's 4.0+ only, and I supported folks on Froyo.

Froyo is no more. If you're still on Froyo, go buy a phone, the OS is much nicer now and you'll be happy you did.

So the AdSense Dashboard is Gingerbread or later now. I've gone and changed a few things to make it easier to maintain and take some of the pain away - including moving to Play Services for authentication. (Auth used to be particularly ugly under the hood.)

New navigation hierarchy

  • Local TimeZone everywhere. Everyone but Google thinks in their local timezone, so timezone reporting isn't optional; the app works in the timezone you gave AdSense.
  • A new widget that supports resizing and lockscreen use.
  • Goodbye, ViewPager. It was broken anyway, and we now have way too many reports to just blindly page through.
  • Hello, Navigation. The new design paradigm on Android is an ActionBar button linked to a navigation drawer; now that we have navigation, we've added more reporting.
  • New reports. Ad unit and site reporting have been added.
  • More data. A full set of metrics on all of the reports we show.
  • Pull To Refresh. Because I was wrong, Nick.
  • New icons. While playing with the navigation drawer I found we needed some kind of visual indicator. I wanted scalable icons that worked at all DPIs, but was way too lazy to actually go and make icons at all of those sizes. Enter FontAwesome, a font with a host of icons of just the right style and use case; that, plus a customised TextView that supports specifying a font, and a bit of aggressive caching of typefaces, and we've got some icons in the app now.
  • Use of typography. I switched everything over to Roboto, the font used in Jelly Bean and KitKat. This is temporary, until I can find (or get around to buying for app embedding) something like Trafalgar or Requiem.
  • API 1.4. The new version introduced a change I've been begging for since the first version of the API; at least some of this happened because the team are now seeing these problems for themselves, as users of the API.
  • Rewrite of networking. I rewrote the networking to make a single batch request at the same time I moved to Play Services for authentication; the refactor cut the amount of code in the app by more than half.
  • Use of Play Services SDK. Play Services adds a lot of critical support for doing auth properly across a wider range of devices.

And, of course, moving off of the v1.2 API, which is what broke the app for all of October and November.

I did this in two stages - a 3.0 release in November, and a 3.1 release just a few days ago to make use of some of the earlier cleanup.

Along the way, I cleaned up a bunch of code, imported the 1.4 libraries, followed the daisy chain of required updates, moved to Android Studio, deleted all of that and switched to the maven repository, fixed all the maven conflicts, updated to later versions of support libraries, rewrote a bunch of stuff that the support library changes broke, etc. This has been, undoubtedly, a massive yak shaving exercise; but it's better off now.

We'll see where things go next; as always, send your feature requests, complaints, and general chatter to the support address.