Wise thoughts on “literally” vs. “figuratively”

In the Salon article by Mary Elizabeth Williams, which I’ve already written two posts about but I promise this is the last one, this comment by machineghost struck me as entirely reasonable and wise:

So the problem with literally becoming *yet another* alternative to “very” is that we no longer have a word which means “no really, I mean this thing actually happened, it’s not just a strong metaphor”. So our language has become less expressive because [of] this particular modification.

This is a good and wise restriction (or, gasp!, prescription) on word meaning and usage, the violation of which does impair clear communication and is entirely unnecessary. As machineghost points out, we already have numerous words and phrases to give emphasis to a statement, even to the point of hyperbole, and using literally to do so degrades the impact of this word when it is appropriate.

The non-necessity of using literally to give emphasis to a statement via hyperbole reminds me of Mark Twain’s admonition against using very to give emphasis to your writing. Good writing typically gets its point across effectively without adding this emphasizer, so it is either unnecessary or a sign of weak writing.

The same lesson can be applied to hyperbolic, non-literal literally in probably every imaginable context. Take the much written-about instance of British soccer commentator Jamie Redknapp saying “David Silva literally floats around the pitch.” Or another one mentioned in that article, “literally in another galaxy”. The columnist uses history to justify his concession that hyperbolic literally is right and proper, but I’ve always seen history as only a part of any discussion about grammar and usage. Luckily, we in our wisdom can see the error of the ways of those writers of centuries past and how the misuse of literally impairs usage by weakening its impact in proper contexts. Writers who used it, or continue to use it, for hyperbolic, metaphoric effect were and are writing ineffectively and impairing the effectiveness of this word in literal contexts.

On top of weakening literally in its correct uses and leaving us with no word to take its place as meaning “no really, I mean this thing actually happened, it’s not just a strong metaphor”, hyperbolic literally is literally entirely unnecessary. If you’re using literally in an obviously hyperbolic, unrealistic, out-of-this-world, impossible, or fantastical metaphor, then the metaphor should do all the work of having a strong impact on the audience, without being “strengthened” by the misleading addition of literally. What, if you say an athlete floats around the pitch or that some people seem to be in another galaxy in a political debate, do these not have a strong enough impact on the audience, so that you have to add literally to sort of fool the audience into thinking, “Oh!—maybe he does mean literally—oh, no, of course he doesn’t, but he must be really, really, doubly, triply, extra serious, then!”

This is weak hyperbole for writers and speakers who either can’t write or speak effectively or are so dense/see their audience as so dense that they cannot be satisfied with mere hyperbole. Stan Carey writes about this linguistic inflation and sort of manages to come down on the right side, by the end of the column. He does quote writer/composer Anthony Burgess with a sentiment echoed by machineghost above:

A ‘colossal’ film can only be bettered by a ‘super-colossal’ one; soon the hyperbolic forces ruin all meaning. If moderately tuneful pop songs are described as ‘fabulous’, what terms can be used to evaluate Beethoven’s Ninth Symphony?

This page, Not everything is epic, is amusing and spot on.

The literally vs. figuratively debate provides a good example of wise, beneficial prescriptive rules. The rule is: don’t use literally when you mean its exact opposite, figuratively. Breaking it encourages sloppy thinking, writing, and speaking; it insults the audience, as if they aren’t smart enough to recognize the hyperbole; and it reduces the effectiveness of literally in its correct uses.

Posted in Language, Writing | Comments Off on Wise thoughts on “literally” vs. “figuratively”

Anti-Hopefully crusaders are hopeless

Some of the comments to the (in)famous Mary Elizabeth Williams column that I blagged about yesterday are so stupid they practically drool. It’s a shame I didn’t read this column (or, actually, its comments) earlier, because I could have ranted and raved against their stupidity directly in response to them, instead of here, where they’ll never see it (nor will many others). But this blag’s for my own enjoyment and an outlet for my own thoughts and frustrations, so writing about those comments and their ignorance here is good enough.

(To review, Mary Elizabeth Williams sort of lamented, sort of understood, and sort of threw up her hands in surrender at the Associated Press’s official de-banishment of the sentence-modifying adverb Hopefully, as in Hopefully, the AP won’t make future concessions that actually do harm usage. The objection is that hopefully should only modify specific verbs, describing how an action was done, not modify whole clauses or sentences, because sentential hopefully injects the writer’s own personal perspective (his hopes) into the sentence, where it doesn’t belong.)

The worst offender in the six pages of comments, which were a mix of sensible, smart, stupid, ignorant, superstitious, interesting, boring, and indifferent, was named Francis E. Dec. In response to a professional editor who wrote, somewhat tongue-in-cheek, “Hopefully, we’ll all get over it,” Francis E. Dec responded,

(Hopefully, we’ll all get over it.)

What does that sentence mean? Does that mean you are hopeful that some other group accepts this change? You use the term “we,” but prior to that seem to exclude yourself from those who need to “get over it.” Or does it mean that those who do not accept the change are hopeful?

Perhaps you begin to see the problem with using an adverb when there is ambiguity concerning the verb it modifies? I don’t know who is hoping in that sentence. Do you?

That you might be someone who has worked professionally for thirty years does not shock me. It is, after all, the declining standards of professionals such as yourself that has led to this change in the AP style guide.

Francis is an idiot or, more likely, is being intentionally obtuse to try in vain to make a point. It is obvious who is doing the hoping: the author. Stop being a smug, pedantic little twit. You, I, and everyone who reads that sentence know exactly who is hoping. It is the person who typed it. Stop pretending it isn’t obvious.

Further down, in response to the column itself, Francis E. Dec wrote,

The AP Stylebook editors did this to sell copies. They have no requirement to yield to changes in common usage as they claim. They publish a stylebook, not a dictionary. The job of the style guide is to provide a universal standard for writers in a particular profession. You can bet the MLA style guide, APA guide, and other professional style guides won’t change. (One hopes.)

One hopes? Who is “one”? Which one? You? So you’re filling in your own hope for the unknown hopes of your nameless, anonymous readers, who may or may not share yours? Are you assuming all of your readers’ hopes are one with yours? How are we supposed to interpret “One hopes” if not as “Everyone hopes”? How about “It is hoped”? Well, hoped by whom? Same problem. Why not write “I hope”? Because the first person is forbidden in your weird, stupid fantasy land and we’re supposed to pretend those words just appeared on the page with your name above them? You can’t bear to use an alternative wording that’s at least as clear (“Hopefully” or “I hope”), so you choose to say that “one hopes” instead of saying you hope it, thereby committing in equal severity exactly the same insidious personal perspective injection that you’re trying oh-so-smugly to shoot down.

People like Francis E. Dec are hopeless, self-blinding, insufferable, willfully ignorant, pedantic, annoying little twits who seem invariably to make the same “mistakes” they rail against or worse ones. At best they’re a nuisance that we ignore or brush away occasionally like a persistent mosquito, and at worst they’re a scourge on English usage everywhere, only impeding clear writing with their inane, groundless, ineffective pseudo-rules and harming the credibility of the purveyors of beneficial usage rules because they make us all look like crazy, superstitious pedants.

There are good grammar/vocabulary/usage rules, and there are bad ones, and those who push bad ones that aren’t grounded in logic, aren’t backed by history, aren’t observed by esteemed writers, and don’t improve style or clarity are harming their own language far more than any sentential adverb could.

Later, a commenter named G. I. wrote a thoughtful, sensible comment that was marred by (what else?) a weak objection to sentential Hopefully:

So it is really pretty simple unless we choose to be perverse about it. That said, I would make a distinction between spoken language and written language re “hopefully.” In spoken language, it is, as its defenders say, merely a short form of “It is to be hoped that.” Like much informal speech, it is inelegant but unlikely to be misunderstood. In written language, it’s wrong for the simplest of reasons: it can’t be diagrammed–it can’t be attached to any word that it modifies with one of those little diagonal stems. (The phrase used as a sentence for effect CAN be parsed: the omitted part of the complete sentence is clearly implied and indicated as such in the diagram.)

Um, learn how to diagram better? I mean, come on, these aren’t fucking Feynman diagrams that illustrate a fundamental force, movement, or interaction of particles and energy under the immutable laws of physics; they are human-designed markings on paper to illustrate the uses of and relationships between human-determined, human-evolved, and human-implemented language components. If your precious diagrams are unable to parse sentential Hopefully, then you and your diagrams are the problem, not the word that makes perfect sense to everyone and has a clear, unambiguous meaning. The nature of sentence diagrams is not constrained by some universal laws; if they have a shortcoming, modify the diagrams, not the language! Sheesh!

In a sentence that begins with Hopefully, the subject doing the hoping is understood (I). Note another type of sentence that has an understood subject: imperative sentences, also known as commands. Go to the store. Read this article. Stop doing that. Their understood subject is You. Sentence diagrams are perfectly capable of handling those understood subjects, so no, I doubt they are simply inadequate to handle the understood subject of a sentence with sentential hopefully.

Finally, in an exchange that was pretty well handled by the more sensible of the two participants, one Rrhain tries to explain that adverbs can only modify adjectives, verbs, or other adverbs, and “They don’t modify phrases unless those phrases function as one of those parts of speech.” (This is simply a false statement.) He uses specific example sentences posted by a previous commenter and tries to rearrange them to make his point, to his own detriment:

Specifically, the adverb is not modifying the clause but rather a specific part of the sentence, either the verb or an adjective. It has simply been separated from it. You can recast each of those sentences to place the adverb directly next to the word it is modifying and remove the comma:

I actually do live in Chicago.
I do actually live in Chicago.
Mrs. Brown arrived surprisingly late despite her usual punctuality.
I usually eat at home.

He is right that many, if not most, sentential adverbs can be moved to the interior of the sentence, adjacent to the verb, adjective, or adverb they actually modify, but interestingly, he misunderstands the surprisingly example not once but twice. Here, he doesn’t understand that the original sentence that another commenter wrote, “Surprisingly, Mrs. Brown arrived late despite her usual punctuality,” has Surprisingly modifying the entire statement, not the arrived or the late. Due to his misunderstanding, he makes surprisingly modify late in clear contrast to the meaning of the original commenter’s sentence. After another commenter explains this to him, Rrhain still refuses to understand:

Whether “surprisingly” is modifying “arrived” or “late,” it is modifying a specific part of speech: The verb or the adjective. It is not modifying an entire sentence. To reduce your sentence down to its bare bones:

Mrs. Brown surprisingly arrived.

This was especially enjoyable and rewarding to me, somehow. This ignorant anti-Hopefully pedant misunderstands his own rules and admonitions, and misuses surprisingly not once but twice, all because he has set himself on this crusade against sentential adverbs and refuses to acknowledge that not just hopefully but many, many more adverbs, such as surprisingly, are frequently, clearly, and correctly used as sentence-modifying words.

With that last surprisingly example, Rrhain insists that Mrs. Brown’s arrival can legitimately be described with the adverb surprisingly only because this adverb modifies a specific verb in the sentence, arrived.

Um, no, it doesn’t. Let’s explore why.

Superstitious pedants like Rrhain love to repeat the mantra that hopefully shouldn’t be used unless the author means that a person was hopeful while doing an action; they performed that verb in a hopeful manner. Therefore, Rrhain and his ilk ought to know how to put verbs and adverbs together and what it means when a verb is modified directly by an adverb like hopefully or surprisingly; after all, they remind us every chance they get that that’s the only truly correct way to use these adverbs. Yet here, Rrhain has used surprisingly incorrectly to modify a verb it has no business modifying! Mrs. Brown did not arrive in a surprising manner; the effect that her arriving itself had on others was not to evoke surprise; she did not perform the arriving in a surprising way. Rather, the whole combined fact of her arriving late despite her usual punctuality is what’s surprising! That’s why Surprisingly should be used at the beginning of the sentence in a sentential role, to modify the whole sentence! However, because Rrhain refuses to allow sentential adverbs, we see him put surprisingly right before the verb, which changes what it describes from the whole fact of Mrs. Brown’s late arrival to the manner of her arrival.

It’s almost ironic, except willful ignorance, inconsistency, hypocrisy, and plain stupidity are expected from anti-Hopefully crusaders, so I don’t find his misunderstanding and misuse of surprisingly surprising at all.

He also makes the same mistake in his first post, in a slightly less obvious way:

For example: “I’ll go in a bit.” “In a bit,” while not containing any adverbs itself, is functioning as an adverb, modifying “go” to indicate time. You could use “hopefully” in this sentence: “I’ll go hopefully in a bit.” And in this case, you could easily move “hopefully” to the front of the sentence: “Hopefully, I’ll go in a bit.”

He thinks that “going hopefully” is the same thing as “Hopefully, I’ll go”! What phenomenal ignorance from someone claiming to guard English against degradation and criticizing others for their wrongheadedness! They are not the same. “Hopefully, I’ll go in a bit” means “I hope that I’ll go in a bit” or “I hope that in a bit, I’ll (be able to) go.” In contrast, “I’ll go hopefully in a bit” means that when I go in a bit, which is certain to occur, it will be in a hopeful manner. He thinks that putting hopefully in that spot makes it modify the adverbial phrase in a bit, but it does not. It is modifying the verb go in that sentence. If he meant to express the idea that he will go (for certain) and that he hopes this going will occur in a bit, he needs a well-placed comma: “I’ll go, hopefully in a bit.” Another punctuation change could express the same idea: “I’ll go (hopefully in a bit).” And this sentence can be rephrased, “I will go, and hopefully, this will occur in a bit.” The placement of hopefully and commas in his example can create sentences with three distinct meanings, all of which are, or should be, clear to all native English speakers and which exemplify the glorious flexibility and versatility of the English language. Let’s list them for ease of understanding:

1. “Hopefully, I’ll go in a bit.” Or: “I’ll go in a bit, hopefully.” (I hope that I’ll (be able to) go in a bit.)
2. “I’ll go hopefully in a bit.” (I will definitely go in a bit, and I will perform this going in a hopeful manner.)
3. “I’ll go, hopefully in a bit.” Or: “I’ll go (hopefully in a bit).” (I will definitely go, and I hope this will occur in a bit.)

Now that I’ve corrected his punctuation and adverb usage, we can plainly see that Rrhain is trying to use hopefully to express his own hope (example 1 or 3), and he justifies this by noting that in a bit is an adverb and then claiming that hopefully is modifying this adverb.

Again, no, it isn’t, except in my third example above, which is the only one he failed to type and which I had to write for him. In the first example, hopefully is modifying the whole ensuing clause, to express the speaker’s/author’s hope that the whole clause will come true. Rrhain refuses to accept this, so he tries rearranging the words to show that the sentential Hopefully in example 1 is actually modifying the adverb in a bit, as in example 3, but these sentences mean different things. Hopefully absolutely does not apply to the same group of words in examples 1 and 3. The only way to express the hope that “I’ll go in a bit” will come true using the word “hopefully” is by using it in a sentential role. Any other role applies hope to only a subset of the words, changing the meaning of the sentence.

I don’t know exactly why a preposition (in), an article (a), and a noun (bit) are perfectly fine to function together as a phrasal adverb but hopefully is not acceptable as a sentential adverb. I also don’t know why using Hopefully to express hope that an entire clause or sentence will come true is wrong but using hopefully to express hope that a time-adverb (such as in a bit) will prove true is OK. Either way, the speaker/author is injecting his own personal perspective into the sentence in a “sneaky” way without using the word I (or the far inferior One), which some people object to, possibly Rrhain. I think this “adverbs can only modify verbs, adjectives, and adverbs” rule sounds more like a post-hoc rationalization of their specific opposition to sentential hopefully, extended into a general rule that applies to all adverbs. They rail against sentential hopefully, they realize other sentential adverbs are used in identical ways, and they want to at least appear consistent and logical, so they create a rule out of thin air that extends their fetish to other adverbs in the name of consistency, against all evidence.

This rule is unnecessarily limiting to the English language and inhibits, not enhances, clarity.

As the coup de grace in my exposure of Rrhain as a cartoonishly ignorant, illogically pedantic anti-Hopefully fetishist, Rrhain uses the adverb Specifically in a sentential role, not modifying any specific verb, adjective, or (phrasal) adverb, in one of his own sentences in the natural course of his writing, i.e., outside of his fabricated example sentences. He writes: “Specifically, the adverb is not modifying the clause but rather a specific part of the sentence, either the verb or an adjective.” There is no way to construe that sentence to claim that Specifically is modifying either is or modifying or any other specific word or phrase. Rather, it is modifying the entire thought conveyed by the sentence. This sentential adverb means To be more specific, and what follows is Rrhain being more specific. The entire sentence and not any one verb is described by Specifically.

This use of sentential Specifically is perfectly clear, logical, and grammatical—features it shares with sentential Hopefully and every other sentential adverb and adverbial phrase that good writers and speakers have used for hundreds of years. And even bad writers and armchair grammarians, apparently.

At the end of his first comment, Rrhain does make a good point that hopefully’s sentential use is more of an interjection, sort of prefacing the whole sentence, but he is very confused about the roles that adverbs play in his own examples.

Posted in Grammar, Language, Morans | 3 Comments

Any writer who rails against sentential Hopefully and the passive voice in general is not worth listening to

Look. I sympathize with Mary Elizabeth Williams. I really do. I hate it when people misuse a word, and I hate the idea of English speakers separated by time and space meaning different things when they use the same words or being entirely unable to understand each other. I have my own grammar and usage “rules” that I always follow and that I think make for more effective writing, but I don’t write about them or mention them to anyone because descriptivists will point out that the rule is arbitrary, has been broken by esteemed writers for centuries, or isn’t usually necessary for clarity. So I stopped caring about them as rules and stopped labeling transgressors as wrong. I just judge them silently and collect examples Garner-style, possibly to be blagged about one day but most probably not.

Plus, if you write about a usage rule or trend that you haven’t studied like a linguist, you’re bound to make material errors much worse than using a word supposedly incorrectly. For example, Mary Elizabeth Williams and her compatriots lament sentence-initial Hopefully because the AP stylebook recently un-banished it, but there is no doubt that every single one of those annoying, smug little pedants has used dozens of different sentential adverbs before, in writing, without batting an eye. Frankly, Obviously, Surprisingly, Next, Additionally, Also, Luckily, Happily, Fortunately, Unfortunately, First, Maybe, etc., etc. (the last two by Williams herself in that very column). This is the ONLY ONE they have a problem with, because their parents or teachers had a problem with it and instilled this attitude in them, and they like enforcing rules because rules are good, not because the particular rule in question is good.

It is perfectly reasonable to object to sentential Hopefully on the grounds that it unduly injects the author’s opinion into the sentence either inaccurately (does every reader also hope what the writer hopes?) or subversively (hey, if you want to state your own perspective, write “I hope”; if personal pronouns aren’t allowed in this publication, you should revise your statements to meet the literary or logical rigor it requires, without just asserting things because you hope everyone else agrees). I see nothing wrong with objecting to this sloppiness or personal perspective injection when such objections are warranted. But it’s important to remember that everything that was ever written, even primary scientific literature, was thunk up and written down by a human(s), who had opinions and hopes and functioning nervous systems, so it is pretty rare for an injection of the authors’ hopes to be completely inappropriate and unacceptable. (Even in primary scientific literature, a sentence near the end of the paper could easily and legitimately begin with “Hopefully,” to express the authors’ hope about future studies, discoveries, or advancements. There’s nothing about “We hope” that is superior to “Hopefully”.)

Therefore, while the objection to undue perspective-injecting on the part of the author is understandable, the objection only holds water on rare occasions. In the contexts in which an author would be tempted to begin a sentence with “Hopefully”, chances are exceedingly, vanishingly small that the context is inappropriate for that word. (For instance, in a news article about what terrorist group did what or who died where, I don’t see an AP correspondent writing, “Hopefully, no more dead bodies will be discovered.”)

So the word and its sentential use are not the problem; inappropriate context is, and I challenge any Hopefully pedant to find a single instance of inappropriate, unacceptable, context-ignorant sentential Hopefully in any half-respectable publication.

Now, on to Mary Elizabeth Williams’s interjection against passive voice: “Language is meant to be subverted. (Note bold use of passive voice!)” I echo John McIntyre’s call to that famous passive-voice-misunderstanding-exposer Geoffrey Pullum to explain whether Williams is right in calling that sentence passive, but I’m going to take my own stab at it and say No. I think McIntyre is right when he writes,

Neither bold, nor passive, I think. We could argue whether a past participle following a form of to be is a true passive or merely serving the same function as an adjective in a copular construction, as in the sentence “Ms. Williams’s argument is impaired.”

How about changing one word in Williams’s sentence (admittedly the most important word, but keeping it a past participle verb): “Language is supposed to be subverted.” How about “I am surprised at the level of ire she musters”? These both have the structure [noun] [to be conjugate] [participle], just like her supposed passive sentence. The thing is, meant is being used more like an adjective than a verb; there is no implied actor performing the verb; there is no meanor in her sentence or supposer or surpriser in my sentences. Passive, I think not. Language is the noun doing the action in the sentence; it is the thing that’s meant to be subverted, and it’s doing so right there.

Before hearing about Geoffrey Pullum and, shortly afterward, encountering his screeds against people who mistakenly criticize the passive voice while completely mis-identifying the passive voice in their examples, I never would have guessed that so many language commentators who rail against the passive voice would so often mis-identify that very voice. But they do. Ms. Williams does it here. Dr. Pullum has catalogued probably hundreds of examples, by now, of passive voice bashers bashing something that isn’t even the passive voice. It’s quite astounding.

We should criticize the passive voice when it obscures actors and meaning, so it’s good to avoid it as a general default approach, but the passive voice is frequently extremely useful. Further, if “Language is meant to be subverted” is the passive voice (which I doubt), then obviously any “rule” or even guideline that would bar the use of that phrasing is stupid because that sentence is perfectly fine, clear, and direct! How else are you going to express that idea? Any “rule” that would label that sentence as grammatically “undesirable” or “improvable” is a stupid rule based in ignorance.

Finally, did you ever notice that some (hell, probably all) of the very people who rail against the passive voice are also the people who rail against sentential Hopefully, and that they are being laughably inconsistent in these two crusades? They hate the passive voice because it is supposedly just inherently bad—less forceful, less clear, less direct. But then they go and say that injecting the speaker’s perspective into an article or essay by writing “Hopefully” is inappropriate because not everyone necessarily hopes that. Well, the only two alternatives for expressing the author’s hope are the much more personal “I hope” and the heavily passive and ridiculous-sounding “It is (to be) hoped that”. The former must be far worse than mere “Hopefully” on the personal-perspective-injection scale, and the latter is about the most clunky, stodgy, awkward, unnecessarily passive clause I can imagine. If an author wants to express the meaning “I hope that…”, without starting with “Hopefully”, then he can either use the first-person pronoun, which seems worse than implying his opinion with the word “Hopefully”, or he can write “It is to be hoped”, so the anti-passive, anti-Hopefully crusader is left with either “I hope” or hypocrisy.

Posted in Language, Morans, Writing | 2 Comments

Yu Darvish and Craig Kimbrel will injure their elbows

The first thing that struck me when I saw footage of Yu Darvish pitching last year was that he uses a lot of arm to throw that hard and that his mechanics are similar to those of Kerry Wood, Mark Prior, and Adam Wainwright, all of whose faulty mechanics resulted in elbow injuries that required Tommy John surgery.

The excellent website of Chris O’Leary, a St. Louis–based pitching and hitting instructor, has dozens of helpful and informative animated GIFs, including ones of the three pitchers mentioned above. On his page about Adam Wainwright and the “inverted W”, O’Leary pinpoints what he thinks Wainwright’s mechanical error was, or at least the source of his problems: he keeps moving his elbow upward for far too long, resulting in his elbow being higher than the ball for too long and even significantly higher than his shoulder for a while. This requires his elbow to twist too much too fast and his throwing hand to whip around and forward too quickly, which puts way too much strain on the elbow over time.

Mark Prior’s mechanics were even worse and make me cringe just looking at them. O’Leary rightly calls Prior’s mechanics a train wreck. O’Leary even created another page comparing Mark Prior’s mechanics to Greg Maddux’s and Nolan Ryan’s, two pitchers who never had serious arm problems (though Nolan Ryan is kind of a freak of nature, so I’m not sure any comparison to him would be fair; I’d love a Roger Clemens comparison because I always considered him to have nearly perfect mechanics). The story is the same for Mark Prior: elbow too high too late, ball dragging behind it, whipping around too fast too late.

Kerry Wood is among the most famous injured pitchers of the last couple decades, and O’Leary’s Kerry Wood page details basically the same problems that hurt his elbow as hurt the other two. Added to that is Kerry Wood’s somewhat across-body throwing style that presumably helped him snap off his curve ball so effectively. Maybe Kerry Wood’s curve ball contributed more to his elbow problems than his “inverted L” arm motion did, but I think I generally agree with O’Leary’s analysis.

I have little doubt Yu Darvish and Craig Kimbrel will suffer similar fates because I see the same faults in their mechanics: their elbows are ahead of their throwing hands, their elbows come up and back too far, and they have to whip the ball around too fast at the last minute, which puts excessive torque strain on the elbow. Their mechanics seem particularly worrisome because you don’t need GIFs or slow-motion video to detect their faults; they are obvious in real time on TV. Here are a couple of photographs of each of them pitching:

[Photos: Yu Darvish pitching for the Rangers; Yu Darvish with the inverted L]

[Photos: Craig Kimbrel pitching]

Kimbrel is far worse than Darvish, and I don’t think the fewer innings he will throw as a reliever will save him. It isn’t as evident in those photographs of Darvish as it is in video, but he seems to whip or twist his arm around much more than most pitchers, possibly similar to Kerry Wood in the 1990s (though I haven’t seen video of pre-injury Kerry Wood in a long time, certainly not enough of it to study closely).

Posted in Sports | 1 Comment

Bryan Garner interview in VICE magazine

I liked several passages of this interview with Bryan A. Garner in VICE magazine:

As we saw the rise of literacy over the next few hundred years, especially in the beginning of the 18th century, the language became relatively more fixed. It’s very interesting that a grammarian like Lindley Murray, who in 1795 wrote his English Grammar, became the best-selling author of the first half of the 19th century. He sold more than 10 million copies of that book. … Nobody else was close, and grammar was something that Americans seemed to care about a lot. … He outsold Stephen King or J.K. Rowling—and to a smaller population. It really is quite extraordinary.
[…]
With the rise of literacy, especially in America, there were strong notions of correctness. And, by the way, Murray was a good grammarian. He’s often derided. But he did not, contrary to maybe not popular belief but popular academic belief, tell people not to end a sentence with a preposition or never to split an infinitive. He didn’t. A lot of the popular notions about grammar are just wildly false.
[…]
My impression is—and there’s quite some literature on this—that the public schools have simply stopped trying to teach it [grammar]. It’s not that they don’t do a good job of it. They don’t even try. It’s not even a subject that is offered in most public schools. Part of why that is, I think, is a misguided egalitarianism that suggests that we should not put teachers in the position of criticizing language as it’s used at home by various students.

It might look as if the teacher is criticizing the parents of the pupils. And there is a view among some inane linguists that says that we shouldn’t be teaching nonstandard speakers the standard dialect—that it’s simply the dialect of the people in power. Instead, we should be teaching everyone to be accepting of linguistic differences.
[…]
I grew up in West Texas. I did actually have a high-school course in grammar that was taught by a coach. Picture this, in Canyon, Texas, a coach who stood at the head of the room dipping snuff while teaching, and filling up a Styrofoam cup with his gross, brown saliva, and he didn’t know a damn thing about English grammar. So I had no educational advantages in terms of picking up standard English.
[…]
I’m all in favor of plain English. The problem with most legal writers is they entered law school as rather inept writers, then they’re exposed to a lot of very bad writing and a lot of unduly complex writing, and this pollutes their own writing habits. So not only are they very weak in grammar and very weak in just putting together good, strong sentences, but they are also trying to ape something that’s much more complex and archaic. They end up being execrable writers.
[…]
When people ask me, “What are your biggest pet peeves?”—well, maybe this is because it’s my profession, but I’m beyond the state of pet peeves. There are 3,000 things that seriously bother me, but I find that having the outlet of the usage book to record them in and disapprove of them—sometimes with some denunciatory fun—is a great way of coping.
[…]
How about giving us a layman’s definition of descriptivism and prescriptivism?
Let me just try this off the top of my head. A descriptivist is someone who tries to study language scientifically with absolutely no value judgments—to simply describe how certain speakers use the language, without any of its sociological significance, but instead to record grammar dispassionately. And that means that whatever the dialect, if it’s West Texas, if it’s black English, if it’s cockney, there is no such thing as good or bad. A prescriptivist looks at the language from the point of view of the standard dialect and judges deviations from the literary standard as being less than ideal. To a prescriptivist, it is possible to make a mistake. To a descriptivist, a mistake is impossible. In the world of affairs, virtually everyone is a prescriptivist. Descriptivism is a very artificial construct, developed by linguists for purposes of recording grammars. Descriptivism has its place, but it has no place in the world of affairs.

It’s more like cultural anthropology than grammar.
That’s right. It’s very much in line with that, to talk about how a society does things and not judge them at all. If you were an anthropologist studying cannibals, there would be no disapproval of their eating habits. That’s something that’s hard for most people to fathom, just as it would be hard for most businesses to hire someone who answers the telephone by saying, “He ain’t got no business today.”
[…]
A really good discourse proceeds by paragraphs, and a paragraph is sort of the building block of a good essay. Too many people see the sentence as the building block, and they don’t pace their ideas well. They have bumps between sentences, not a smoothly flowing development of thought. Advanced writers know that the paragraph is the basic unit of composition.

I also liked the seven-question sample grammar quiz, excerpted from Garner’s full-length quiz, that they printed at the bottom of the article. I am quite proud to say I got six out of seven right; I only missed the one about the conjugations of swim! D’oh!

Posted in Grammar, Language, Writing | Comments Off on Bryan Garner interview in VICE magazine

Quote of the day

It’s not as musical as Spanish, or Italian, or French, or as ornamental as Arabic, or as vibrant as some of your native languages. But I’m hopelessly in love with English because it’s plain and it’s strong.

William Zinsser, Writing English As a Second Language

Posted in Language, Writing | Comments Off on Quote of the day

Superstitions and fetishes make for bad writing and editing

Scientific editors are a hard group to deal with. Most of them are either bad writers or not experts on the topic they’re editing at a given time, or both. Either shortcoming often leads them to make unnecessary, unwise, or even actively incorrect changes or recommendations in a text. Being unfamiliar with the jargon of a field is one thing, and can only be overcome with time and experience, but being a bad writer and grammarian who adheres to baseless grammatical superstitions, forces his fetishes on others, and judges himself a protector of formal prose who provides something of value to others is inexcusable in a scientific editor.

I deal with, and am myself, a scientific editor who corrects the language, grammar, and style of biomedical research papers written by non-native English speakers to improve them to the level of native English speakers’. This might mean my job entails quite different functions from those of an editor who works for a scientific journal, who might do more copyediting than language, grammar, and meaning editing. It also means we often have to go a little above and beyond how typical American scientists would write because they, too, are notoriously bad writers and grammar users! Consequently, I have had opportunities to witness self-appointed grammarians making simply awful, perplexing grammatical changes to perfectly correct, clear sentences all because they think it’s their job to force their 3rd-grade grammar fetishes on their customers—and, unfortunately, on their underlings.

As an independent contractor who, according to the IRS, is self-employed, I don’t exactly have a boss, but I do have managing editors who run the company for which I do contract work, who judge and approve of my work, and who set either de jure or de facto house styles. Therefore, you might conclude that it’s frustrating to be supervised and judged by, and to have my ability to make a living partially determined by, superstitious nincompoops who enforce certain rules only because they heard somewhere that they were rules or who enforce certain styles only because they don’t know any better. You would be right.

I think it’s possible to divide most of their mistakes and misconceptions into two rough categories: turgid scientific formality and ignorant grammatical superstitions. Their fixations on both could rightly be called fetishes. Dictionary.com gives this as its second definition of fetish: “any object, idea, etc., eliciting unquestioning reverence, respect, or devotion”. Turgid is also a perfect word to describe the style of most academic writing and all of the editing that these fetishists perpetrate. It means bloated, overblown, and inflated with self-importance and bombast.

The most prominent and most easily identifiable form that turgid fetishism takes in scientific editing is replacing short words with longer ones and Germanic English words with Latinate ones. Thus we have then being replaced with subsequently, tried with attempted, by with via, rest with remainder, about with approximately, made with generated, before with previously, learn with ascertain, have with contain, seen with observed, show with demonstrate, now with currently, explain with elucidate, got with obtained, done with performed, check with investigate, et cetera ad nauseam. (I will say that the last three changes mentioned are usually improvements, but not every time.) To emphasize, I have seen editors replace each of those shorter words, written by the authors in their original manuscripts, with the longer words. I have seen, in an edit of mine, a managing editor replace two instances of so far with thus far for no reason other than it suited that editor’s fancy (i.e., they were in separate paragraphs and were the only two instances of so far in the entire document, so those changes weren’t made to inject some variety in the midst of a bunch of other so far’s or so’s). I’ve got news for you: as a writer, using thus far instead of so far in any type of document doesn’t make you sound more educated or formal; it makes you sound like a clueless twit who thinks longer is better and who is so lacking in a knack for language and so unconfident in his ability to impress readers with the content of his writing that he feels compelled to write as pompously as possible. As an editor, going to the effort to change somebody else’s so far to thus far looks even worse.

Some scientific editors also seem to have a bias against the conjunction so itself, such that in the middle of sentences they change “, so…” to “; therefore,…”, “; thus,…”, or “, and as a result,…”. Where in the world did that prejudice come from? If the conjunctions and, or, and but are fine in the middle of a sentence to introduce an independent clause, what the hell is wrong with so?

The other category of inanity I’ve encountered, the enforcement of baseless grammatical superstitions, is surely a cousin of the desire for utmost formality and propriety, but separate enough to warrant a separate classification. This category includes mainly old prescriptivist grammatical pseudo-rules that never were valid, aren’t valid now, and shouldn’t have been taught. I think the best-known of these are the pseudo-rules against ending sentences in prepositions and against splitting infinitives. I don’t think anyone really follows these rules anymore, but—I kid you not—in an edit of mine, a managing editor rearranged a perfectly good verb, which I had left alone, to avoid splitting an infinitive. I was stunned that anyone would go to the effort to un-split an infinitive in someone else’s writing in this day and age. I don’t communicate with them about the changes they end up making (can you see why, by now?), and they leave no note about why each change was made, but I can only guess that infinitive was un-split because of a stupid superstition, not because it sounded more natural or flowed better. When I saw that, I thought, “Oh, no, honey. Oh, you poor thing. No, that’s…that’s just stupid. I feel bad. You’re an awful writer.”

The other two baseless grammar fetishes that I’ll mention are the proscription against using Since to mean “Because/Inasmuch as/Seeing that” and the proscription against beginning a sentence with And or But. As a general prescriptivist (a rational prescriptivist or selective prescriptivist, I like to call myself), I take Bryan Garner as the final word on language issues on most occasions when I need a final word. Garner classifies both of those proscriptions as “superstitions” on the level of never ending sentences with a preposition and never splitting infinitives.

The Since/Because issue came up in the discussion forum of my fellow editors recently. One editor said it was understandable to require editors to avoid using Since to mean “Because” when we add it ourselves, but that it is over-editing to change such usage of Since in the original document if its meaning is clear (which it is about 90% of the time). He cited the editing guide of the company we contract for, which says Since can have both a causative and a temporal meaning, but temporary confusion can result from the causative meaning, so they discourage it. They even discourage leaving it alone, even when its meaning is unambiguous. I never noticed the somewhat-almost-maybe permissive wording of this entry in the editors’ guide; I always assumed it was an absolute proscription and so I happily followed it religiously. It wasn’t until this discussion thread that I decided to look up what actual grammarians say about it and respond to him. Actually, I responded to the person who responded to him; that first responder said the rule bothered her at first but now she agrees with it. I responded that the rule seemed great to me at first but now bothers me! Maybe the descriptivist influence of Geoffrey Pullum and Mark Liberman at Language Log is affecting me…

Anyway, I could cite more authorities than just Garner on the Since/Because issue, but that would be pointless because they all agree with him; there is no dissent on its acceptability; it isn’t even an issue anymore. Except in editing circles populated with ignorant, superstitious pedants. Every English dictionary lists “because; inasmuch as” as a definition of since. If it weren’t for overzealous grammar-delusionists who have turned their delusions into fetishes and their fetishes into (pseudo-)rules, there would be no point in discussing this word at all except to occasionally remind people not to use it if there is some chance of confusion between a temporal and a causative meaning.

I decided to put the managing editors’ words to the test recently and see if they really meant what they wrote in their editors’ guide: that causative Since at the beginning of a sentence is discouraged but not wrong. I left it alone at the beginning of a few sentences where there was no chance of confusion to see if they would also leave it. Nope, they consider it wrong and will change it religiously against all evidence, all history, and all sense. If that’s their policy, they should make it a part of an official house style and say, “No dictionaries or grammarians agree with this restriction, but the managing editors have been specifically instructed, on threat of termination, to change Since to Because whenever possible and to dock your editing score if you don’t, so follow this house style because it’s in your job description.”

The sentence-initial And/But issue came up recently in a review article I edited. A review article is secondary literature, i.e., it summarizes and occasionally comments on previously published findings (from the primary literature) and does not report any new findings. Reviews are slightly less formal than primary literature, if only in the sometimes quirky, punny titles that the authors probably spend weeks coming up with. In my opinion, there should be a little more leeway in the formality and stiffness of secondary literature, but some people (surprise) disagree. In the aforementioned review article, the author, who was German and wrote in very good English but not nearly the perfect English we sometimes ascribe to northern Europeans, began a lot of sentences with But. Like, 10 or 12 throughout the 5000-word document. I had deleted or changed all of them to However, except the last or second-to-last one, in the Conclusion of the article. That sentence began with But although, which is fine and correct to any educated English speaker with half a knack for English prose. I knew of their proscription against sentence-initial conjunctions, so I wrote a comment to the managing editor when I uploaded my edit: (paraphrasing) “I had to change a lot of ‘But’ at the beginning of sentences, but at least one near the end of the document was fine. I felt like an overly pedantic, zero-tolerance elementary school teacher deleting them or changing them to ‘However’ every time.”

Nope. She was having none of that. She either doesn’t understand or doesn’t agree that good, formal writing can have sentences that begin with And or But, so she changed it to However. Never mind that her sentence sounds more stilted and awkward than mine; The Rule says sentences can’t begin with But, and that’s that. (Note that this Rule doesn’t actually exist anywhere except in a small minority of English speakers’ minds. It literally isn’t a rule in a single extant grammar guide. And why can an adverb begin a sentence but a conjunction can’t? Conjunctions are only supposed to connect two independent or dependent clauses, not two sentences? Why? Aren’t adverbs supposed to modify verbs and adjectives? Why whole sentences, then? Why am I asking when I know she has no answers to these questions other than “Just because” or “My 5th grade teacher said it; I believe it; that settles it”?)

I couldn’t help but wonder what was going through her mind when she changed a perfectly clear and sonorous But although to a clunky and discordant However, although. (I mean, honestly. Listen to that clunky, stilted sentence starter. However, although? Seriously? Who thinks that sounds good? What could lead anyone to think that looks, sounds, feels, or flows better than But although?) What was she thinking and feeling? Was there some intellectual consideration or emotional turmoil as her mind struggled to figure out whether The Rule she had been given was inviolate? “Hmm, he’s got a point, but my hands are tied because The Rule says so”? “Yes, ‘But although’ does sound better than the clunky ‘However, although’, but the point of academic writing isn’t to express thoughts clearly and smoothly; it’s to show off how pompous you are and how many syllables you can waste”? “The author didn’t hire us to leave boorish informalities alone but rather to mark up his paper with as many changes as possible”? “It’s not up to me to determine whether anything is grammatical or acceptable; that’s why we have rules: so we can implement them uniformly and move on”? “In theory, a sentence could start with ‘But’, but I’ve never seen an appropriate one yet, so I’m changing it”? “Oh, but what will people think of me and my intellect and my upbringing and my employer if I let such an egregious affront to decency pass? We must all be dutiful rule-followers and never question such basic facts of grammar”? “I’ve always changed sentence-initial ‘But’ to ‘However,’ and I’m not about to stop now”? “Uh, in scientific literature? No way, rookie!”? Or a jaded, “Fuck it, I don’t care what’s right; it’s in my job description to change it, so I’m changing it and moving on”?

So, obviously But although sounds, flows, and feels better than However, although, and obviously people talk that way, and obviously non-scientists begin sentences with But and And all the time. But is it acceptable in formal academic literature? Bryan Garner is yet again a sufficient authority to quote on this matter:

Many legal writers believe that but, if used to begin a sentence, is either incorrect or loosely informal. Is it?

No. But the superstition is hard to dispel. Usage critics have been trying to dispel it for some time. In the first quarter of the 20th century, the great H.W. Fowler dispatched an editor who wanted to change a but to however at the beginning of a sentence:

“It is wrong[, said the editor,] to start a sentence with ‘But’. I know Macaulay does it, but it is bad English. The word should either be dropped entirely or the sentence altered to contain the word ‘however’.” That ungrammatical piece of nonsense was written by the editor of a scientific periodical to a contributor who had found his English polished up for him in proof, & protested; both parties being men of determination, the article got no further than proof. It is wrong to start a sentence with “but”! It is wrong to end a sentence with a preposition! It is wrong to split an infinitive! See the article FETISHES for these & other such rules of thumb & for references to articles in which it is shown how misleading their sweet simplicity is.

When Sir Ernest Gowers revised Fowler in 1965, he treated the question with and:

That it is a solecism to begin a sentence with and is a faintly lingering superstition. The OED gives examples ranging from the 10th to the 19th c.; the Bible is full of them.

“Faintly lingering” is a good description of what the superstition is doing nowadays. It isn’t supported in books on rhetoric, grammar, or usage—though several try to eradicate it.
[…]
Webster’s Dictionary of English Usage—generally ultrapermissive, but thorough in marshaling previous discussions on point—found unanimity among language critics:

Part of the folklore of usage is the belief that there is something wrong in beginning a sentence with but:

Many of us were taught that no sentence should begin with “but.” If that’s what you learned, unlearn it—there is no stronger word at the start. It announces total contrast with what has gone before, and the reader is primed for the change.
–Zinsser 1976

Everybody who mentions this question agrees with Zinsser. The only generally expressed warning is not to follow the but with a comma….

Perhaps we can all agree that beginning a sentence with but isn’t wrong, slipshod, loose, or the like. But is it less formal? I don’t think so. In fact, the question doesn’t even reside on the plane of formality. The question I’d pose is, What is the best word to do the job? William Zinsser says, quite rightly, that but is the best word to introduce a contrast. I invariably change however, when positioned at the beginning of a sentence, to but. Professional editors such as John Trimble regularly do the same thing.

Sheridan Baker, in his fine book The Complete Stylist, recommends that writers choose but over however in the initial position.

Garner then goes on to quote Baker as saying however is best left between commas in the middle of a sentence, which I completely disagree with. I actually think that is less clear, more awkward, and more likely to cause confusion than putting it at the start of the sentence. But then Garner cites lots of examples of But starting sentences in very formal writing, to support his (everyone’s) point that it isn’t ungrammatical or informal.

As with just about any matter of style or syntax, too much of anything is…well, too much. So I would suggest that sentence-initial And and But be used sparingly, maybe once every few thousand words in formal writing. This style advice has nothing to do with their being incorrect or informal but rather with the fact that too much repetition of anything is unwise. Eliminating them entirely from the beginning of sentences is groundless from every historical, logical, stylistic, or grammatical perspective that can be conjured. Some people just don’t care, and it’s not only to their detriment but to mine as well, since I work with them. I have little doubt that these delusional formality zealots would edit Bible verses to say, “Additionally, God said, ‘Let there be light’” and “Moreover, it was good.”

Fowler was right: The proscription against sentence-initial And and But is ungrammatical nonsense, and only bad writers and editors follow it.

EDIT: God damn, this post is poorly written. It was only 5 years ago, but I hate my writing from back then, at least some of it. This post just sounds so utterly whiny and condescending. I could just take it down, but I don’t like memory-holing things.

Posted in Career, Language, Science, Writing | Comments Off on Superstitions and fetishes make for bad writing and editing

Quote of the day

I think it’s [“consumed”] an ugly term when applied to information. When you talk about consuming information you are talking about information as a commodity, rather than information as the substance of our thoughts and our communications with other people. To talk about consuming it, I think you lose a deeper sense of information as a carrier of meaning and emotion—the matter of intimate intellectual and social exchange between human beings. It becomes more of a product, a good, a commodity.

—Nicholas Carr, in an interview about his books and five books on technology and information that he recommends

Posted in Books, Technology | Comments Off on Quote of the day

In praise of highly serial television shows

When I was in high school, planning to go to college to major in something scientific (it ended up being Genetics), I remember thinking that college English classes and Literature classes didn’t reward writing that (only) got to the point, expressed it clearly, and backed it up with logic; they primarily rewarded a florid, pretentious writing style. I expressed this opinion to my friends, and they echoed the sentiment, agreeing that it was a problem with English classes in general. We had no basis for this perception except, I guess, what we imagined and what we had heard (from whom? each other?).

I never took any English or literature classes in college (I was exempted by taking College English at my high school in 12th grade), and I took only a few humanities classes. Today I doubt that my perception of style over substance has held true in very many English or humanities classes across the country any time recently. Instead, based on what I read in newspapers, magazines, and web pages, I think another perception of mine is accurate much more often: the broad world of professional and academic writing outside of the sciences judges the quality of a column or essay on how well the author seems to make his point and how strong its self-contained logic is, from the first sentence to the last, not on whether any of its statements is supported by actual data or on whether the author’s opinions are worth a lick more than anyone else’s (completely contradictory) opinions. I should note that this perception of mine is not based on any journals, college essays, or actual academic writing; it’s based on the columns of professional journalists, freelancers, and writers of other stripes that seem to say to the reader, “Take my analysis as fact based only on what I write here; don’t factor in any outside data that might contradict my conclusions, and don’t worry about the existence of other opinions, which would reveal that all I’m telling you is what I think, not what I know.”

I think I can identify the column that instigated this perceptual change in me. Or at least, looking back on it now, it is the earliest column I can remember that (should have) started me thinking this way. Unfortunately, I don’t know who wrote it or when it was published, but I know I read it in the Atlanta Journal-Constitution about 10 or 12 years ago. It was about the misperceived political and social messages of “Sweet Home Alabama” by Lynyrd Skynyrd.

The point of the column was basically that “Sweet Home Alabama” isn’t a racist song or a lame Republican (Wallace, Nixon) cheerleading song. The author wrote something like this: Skynyrd was actually being satirical and criticizing racist/segregationist George Wallace supporters; the line “Now, Watergate does not bother me. Does your conscience bother you?” was about Northerners and Southerners generally, not simply about the Watergate scandal and the Republican president who perpetrated it; and the band meant that all Southerners are no more to blame for Southern racism or governmental policies than Northerners are for Watergate, or something along those lines. (In scouring Google with every search-term variation I could think of in order to find this column, I came across a few others about “Sweet Home Alabama” that say basically this, so I’ll just assume the column I read in the AJC made the same points.)

I remember thinking as I read that column, “Well, maybe that’s true, and maybe it isn’t. How do I know? Why should I take your interpretation of the lyrics as fact?” It could be because he was limited to the space of a newspaper column, but I think that’s a secondary issue. Columns and blag posts that I read around the internet about entertainment, society, and leisure rarely back up their opinions or “facts” with data, studies, quotes, or even supporting opinions. That just isn’t how they do things. In my limited experience, they seem to write for the purpose of presenting their opinions and conclusions based on the way they see things and putting them up for comparison with the readers’ opinions and conclusions, and how well the author’s perspective jibes with the audience’s determines how worthy the writing is judged to be. However, the way it usually comes across to me is that they have analyzed the song, movie, TV show, book, societal trend, event, or other story; they’ve interpreted it and drawn conclusions based on their understanding of society and their own life experiences (and possibly sources they don’t mention); and they present it not as opinion but as learned analysis to be taken as fact or some combination of opinion and fact. I just don’t buy it most of the time.

My background in biomedical science doubtless flavors this perception of mine. In science, every statement in a primary-data paper is either reporting previous findings, reporting what the authors of the current study did (methods), stating what they found (results), or stating an obvious opinion or judgment on their part (“This discrepancy could have been caused by…”, “It is interesting to note…”, “It will be necessary to determine…”, “Future studies should investigate…”). Nothing is presented as true unless current or previous data are cited. I see nothing wrong with holding journalists, humanities writers, and entertainment columnists to the same standard. Either insert a lot more “I think”, “It seems”, “Maybe”, and “In my opinion”, or cite something to back up your claims.

These thoughts were most recently prompted by the column “Did The Sopranos do more harm than good?: HBO and the decline of the episode” by the A.V. Club’s Ryan McGee. If you want the short answer, it’s No. If you want my opinion of his column, continue reading.

McGee’s thesis is that the popularity of HBO’s heavily serialized shows, represented primarily by The Sopranos, has led to such a focus on serialization in the TV industry that the (largely) self-contained single episode has suffered, which is a bad thing that often makes the viewing experience worse, not only for that episode but for the show as a whole. (By heavily serialized shows I mean shows that link each episode with the previous ones and tell a single story over a season(s)-long story arc, as opposed to procedural shows like CSI and Law & Order that tell a completely self-contained story within each episode that does not link to anything else.)

Below, I will first quote McGee’s statements that strike me as assertions of fact that I think could only be accepted as fact if supported by real outside data or some sort of statistical analysis, and then I will contrast his analysis of (over-)serialization with my own perspective.

Here are the statements that seem to be written as facts, but I have no way of knowing whether they are really true because he cites no outside sources and has no data, statistical or otherwise, to back up his claims:

[M]aking HBO the gold standard against which quality programming is judged has hurt television more than it’s helped it. [This is one statement of his thesis, which seems to be backed up subsequently by a bunch of opinion, not facts or data.]
[…]
[E]ach piece of art [novel vs. TV program] has to accomplish different things.
[…]
HBO isn’t solely to blame for this trend. It’s been accelerated not by internal mandate, but by viewer consumption. [That’s probably true, but the column cites no outside opinion or data to show it.]
[…]
Plowing through a single season in two or three sittings may feel thrilling, but it’s also shifted the importance of a single episode in terms of the overall experience.
[…]
How about those who sit at home on the night of initial airing and obsessively analyze that week’s episode in order to discuss it at length online or at the water cooler? Such a viewing model should put an emphasis on the episode as a discrete piece of the overall pie. And yet the critical praise heaped upon HBO has infected the way we look at that discrete piece.
[…]
Rather than take stock of what has just transpired, eyes get cast immediately toward that which is still unseen.
[…]
After all, if we measure quality by the gold standard of HBO, then by definition, the best element of the show has yet to actually air.
[…]
Such an assignation has merit, but has established a benchmark against which other programs simply can’t compare.
[…]
But the perception exists that the only way to be a critical hit is to write the equivalent of Sgt. Pepper’s Lonely Hearts Club Band.
[…]
[T]hese shows [that follow the Burn Notice “mythology” model] feature long-running arcs that usually harm, not help, their sturdy-if-bland lather/rinse/repeat episode structure. Rather than having the two dovetail, they often work against each other, producing uncomfortable friction as both sides seek to establish the same space.
[…]
David Goyer, who created the show Flash Forward, bragged that he and his writing staff had built out the show’s first five seasons before the pilot even aired. [Again, I don’t doubt this is true, but how are we to know? Take Ryan McGee’s word for it? Search Google on our own? It is a statement that we are supposed to take as fact because he read or heard it somewhere, mixed in with all of his opinions that are also presented as fact.]
[…]
The idea of having a fixed point toward which a show inevitably builds is fine in theory, but false in practice. There are too many variables at play when producing a television show that slavishly adheres to a predetermined finish line.
[…]
Assuming that a superior idea won’t arise later is simply arrogant thinking, and counterintuitive to any collaborative process. [I don’t know, maybe it occasionally replaces mediocre collaborative results with brilliant ones; who knows? How can we ever tell? What observations support this claim?]
[…]
Laying in groundwork for a massive payoff down the line is a terrible risk, one that comes with so little control as to be almost laughable.
[…]
With a theoretically unlimited amount of episodes to fill, it’s smart to look at the environments in which shows operate and look under rocks and behind corners to see what might exist. [Always? Necessarily?]
[…]
Show-creator Vince Gilligan planned the overall arc of the show [Breaking Bad] in the broadest strokes possible: the transformation of Mr. Chips into Scarface. But he’s run a writers’ room in which narrative improvisation fueled the actions seen onscreen. Rather than staying constricted to a heavily planned scheme, Gilligan and company have worked through each episode, looked at the results, and then adjusted accordingly down the line. [Probably true, but mixed in with pure opinion, it makes most of the statements sound like facts.]
[…]
A meticulous attention to detail on the part of both those who create television and those who consume it has stymied a desire for the kind of experimentation and exploration working in the microcosm of episodes allows.
[…]
Everyone’s so concerned about getting everything right that they’ve forgotten how much fun mistakes can be.
[…]
Showrunners are too often trying to fool the audience rather than entertain it. Audience members are too busy trying to solve the show and being disappointed when reality doesn’t line up with theory. Amid all of this, the episode has suffered under the weight of crushing expectation over where a story is going to go as opposed to what it currently is. [How verifiable is any of this?]

I must re-emphasize that I’m not saying he has no business publishing his opinions, whichever statements were in fact supposed to be opinions. I’m definitely not saying every statement in a TV column needs to cite either outside sources or primary data. I also can’t really say entertainment writing should be any different. What I am saying is that I simply don’t buy a lot of it because he presents nothing to support his analysis except a bunch of personal experience, and furthermore, that mixing in things that are definitely verifiably true or false (e.g., the David Goyer and Vince Gilligan stories) with assertions that strike me as purely subjective (most of the rest of the column) leads to the perception that all of it is verifiably true—that he has done research to obtain objective facts and is not just spouting his personal opinions. The tone or wording is usually not different between the sentences that seem to be facts and those that are probably actually opinions. Maybe I’m being too much of a scientist and failing to interpret this column the way a normal person would (mostly opinion that we can take or leave), but I just don’t buy a lot of it.

Now my comments on McGee’s assertions and conclusions, with plenty of my own opinions and uncited facts. My thesis is this: McGee’s conclusions are only valuable if one or both of the following conditions hold: (1) his assertions and judgments are supported by outside facts and statistical analysis that would reveal them as actual, verifiable trends and not just subjective perceptions; (2) you already agree with his opinions anyway.

The Sopranos took a patient approach that rewarded sustained viewing. The promise that payoffs down the line would be that much sweeter for the journey didn’t originate with the HBO mob drama, but the series turned into the boilerplate for what passes as critically relevant television.

But is this a good thing? The Sopranos opened up what was possible on television. But it also limited it. It seems silly to state that the addition of ambition to the medium has somehow hindered its growth, but making HBO the gold standard against which quality programming is judged has hurt television more than it’s helped it. The A.V. Club’s TV editor Todd VanDerWerff started pointing out the change in HBO’s approach when, speaking of Game Of Thrones, he noted something that had been in the back of my mind but not fully formulated until I heard him say it: HBO isn’t in the business of producing episodes in the traditional manner. Rather, it airs equal slices of an overall story over a fixed series of weeks. If I may put words into his mouth: HBO doesn’t air episodes of television, it airs installments.

This isn’t merely a semantic difference that paints lipstick on the same pig. It’s a fundamentally different way of viewing the function of an individual building block of a season, or series, of television. Calling The Sopranos a novelistic approach to the medium means praising both its new approach to television and its long-form storytelling. But HBO has shifted its model to produce televised novels, in which chapters unfold as part and parcel of a larger whole rather than serving the individual piece itself. Here’s the problem: A television show is not a novel. That’s not to put one above the other. It’s simply meant to illuminate that each piece of art has to accomplish different things. HBO’s apparent lack of awareness of this difference has filtered into its product, and also filtered into the product of nearly every other network as well.

Why is treating an episode as an installment a problem? An episode functions unto itself as a piece of entertainment, one that has an ebb and flow that can be enjoyed on its own terms. An installment serves the über-story of that season without regard for accomplishing anything substantial during its running time.

I think he and I would agree on one thing here: there is a valuable distinction between connecting an episode to the overall story arc in such a way that it entices further viewing and making an episode simply boring, unimportant, and un-entertaining by itself because little happens in it. To the extent that his argument is, “The propagation of episodes in which little to nothing happens and which can’t be enjoyed on their own (by some people) is due entirely to the obsession with serialization,” he seems to be on solid ground. But when he jumps to conclusions like (paraphrasing) “treating an episode like an installment is a problem”, “TV shows shouldn’t be like novels”, “TV shows can’t be planned weeks, much less months, in advance”, “Focusing on the future often hurts the present”, and “Adding a long-running story arc to a show usually harms, not helps”, he has ventured into pure opinion and conjecture.

HBO isn’t solely to blame for this trend. It’s been accelerated not by internal mandate, but by viewer consumption.
[emphasis added]

Yeah, because people like it, an obvious and well-established fact that McGee entirely ignores throughout his column.

It’s easy to blur the line between “episode” and “installment” if you’re blowing through an entire season of Breaking Bad over a single weekend. When doing this, thinking about how a certain episode works on its own becomes less relevant. Simply getting through the virtual stack of content becomes paramount, with the next episode literally moments away from appearing on your screen. Plowing through a single season in two or three sittings may feel thrilling, but it’s also shifted the importance of a single episode in terms of the overall experience.

He again implies that something is bad when what he means to say is, “I don’t prefer it.” The affordability and ubiquity of DVD/Blu-ray players and of TV series on DVD, Blu-ray, and Netflix have changed the landscape of TV viewing. People neither need nor want to catch every episode of every TV show they like on a weekly basis, even with the life-saver that is DVR, preferring instead to buy the disc set (or torrent entire seasons) and watch the show as the televised novel it was written as. I am one who prefers the latter. My wife is another. I think of heavily serialized TV dramas, with good, exciting, dramatic, suspenseful episodes that not only put you on edge during the episode but make you anticipate what’s to come and entice you to guess how an individual episode’s events will tie in to the overall story arc, as far superior to (mostly) procedural dramas.

How about those who sit at home on the night of initial airing and obsessively analyze that week’s episode in order to discuss it at length online or at the water cooler? Such a viewing model should put an emphasis on the episode as a discrete piece of the overall pie.

How about those who like completely serialized TV shows with no stand-alone episodes? They don’t get any shows that are up their alley? Their tastes are just dismissed as wrong or, actually, ignored altogether? McGee’s opening example of Luck, an HBO show in which virtually nothing happened during the first three and a half episodes except to set up a web of storylines that would all come together during the fourth hour, is obviously popular with many people (Tony Kornheiser called it the best show on television in a recent episode of Pardon the Interruption). Many viewers most likely felt rewarded and even further enticed by the big payoff that materialized mid-way through episode four. There might very well be many people, perhaps including many fans of Luck, who would rather see a completely novelistic TV show with no stand-alone episodes that includes some boring clunkers along the way in exchange for the long-term payoff. McGee seems basically to be pushing the conclusion that this is bad, when a more accurate take would be that he doesn’t like it. Who cares?

I also find suspect his conclusion, “Such a viewing model should put an emphasis on the episode as a discrete piece of the overall pie.” I don’t think such a viewing model should put an emphasis on either one. To the extent that one model (self-contained vs. subservient to story arc) is better for discussing online or at the water cooler, I would lean in favor of the episode that is subservient to the story arc. It is more interesting and popular, at least among people I correspond with, to talk about how an episode fits into the story arc rather than just about what happened in the episode alone. This is just another in a long list of conclusions reached by McGee that might be verifiably wrong or are simply opinions that he tries to dress up as objective analysis.

The single episode has taken a backseat in importance to the season, which itself is subservient to the series.

I say that overall, that’s a good thing. Nothing is perfect (as McGee notes later), and if some go-nowhere episodes are the price to pay for a several-season story arc that actually resembles the paths that many people’s lives could take in the real world (well, a real world far more interesting than the actual real world), then that’s a great tradeoff to me. If you don’t like it, then, (1) who cares?, and (2) watch something else.

Rather than take stock of what has just transpired, eyes get cast immediately toward that which is still unseen.

Not mine. I am actually able to think about what just happened, what happened before that, and what might come all at the same time! If you can’t think about two things at once or don’t want to, then, again, who cares? And why pass off your own experience as universal fact?

In other words, what just aired gets mixed into what we’ve already seen in order to formulate opinions about the unknown future.

In my opinion, that’s a good thing. I like being reminded (or being forced to remind myself) what happened to lead to this point, and I love how a good TV show will make you try to anticipate and speculate about the future.

Then there are shows that adhere to the USA network’s model of modern-day television “mythology.” … The model: Any particular episode will have roughly 90 percent self-contained story. This works well and counters the trends listed above. But for some reason, these shows also feel the need to have a larger, ongoing story that serves as the spine for a season.
[emphasis added]

Hmm, I wonder what reason that could be. Why would TV writers construct stories in a certain way, and why would TV producers hire them to do so? Why, why, why… It’s like he isn’t even capable of taking millions of people’s tastes into consideration. It’s like he has some sort of neural block or electrical device hooked into his brain à la “Harrison Bergeron” that prevents him from acknowledging that shows do this because people like it! I am one of those people, and there are millions more who like or even love it! This concept is honestly, seriously not that hard to understand.

[T]hese shows [with a mysterious “mythology” behind everything] feature long-running arcs that usually harm, not help, their sturdy-if-bland lather/rinse/repeat episode structure. Rather than having the two dovetail, they often work against each other, producing uncomfortable friction as both sides seek to establish the same space.

No, I think the background mythology rather helps the creative and entertainment value of the show, and the only reason they would work against each other is if there isn’t enough of the mythology, which I think is the case so far with Grimm. If a lather/rinse/repeat episode structure is bland, that seems to speak against the procedural TV drama, and his whole column inveighs against overly serialized dramas, so what does he want? A happy medium? So do most people; that’s not really saying much. Hopefully the mythology of Grimm will continue to grow and absorb Detective Burkhardt into itself, so that the show can actually go somewhere instead of giving us formulaic supernatural police procedurals, which perhaps Ryan McGee prefers to serialized “mythology” shows.

David Goyer, who created the show Flash Forward, bragged that he and his writing staff had built out the show’s first five seasons before the pilot even aired. But what Goyer and company forgot to do was build five characters the audience could relate to. The idea of having a fixed point toward which a show inevitably builds is fine in theory, but false in practice. There are too many variables at play when producing a television show that slavishly adheres to a predetermined finish line. All those breadcrumbs have to lead somewhere. But what if that destination changes along the way? How can one account for the clues already left behind? Assuming that a superior idea won’t arise later is simply arrogant thinking, and counterintuitive to any collaborative process. A television show is a living, breathing entity that represents a synergy of creative, cultural, and social forces that simply can’t be predicted five weeks out, never mind five years out.

Actually, contrary to his completely opinionated and unsupported claims, several shows have succeeded by planning out large chunks of their storylines months and even years ahead of time. Take Supernatural, my favorite show that I started watching within the last few years. Eric Kripke originally planned out a rough three-season story arc for Sam and Dean Winchester, but as they got into making the show, it turned into a five-year arc. The above-quoted paragraph therefore does contain a nugget of valuable insight, but only a nugget: A television show is a living, breathing entity that can change along the way with writers’ input and viewers’ opinions. But it very obviously can be planned five weeks out and even several years out. Not every script needs to be written in advance; only the overall story arc.

Another show I watched only recently that followed the season-long story-arc model for its first two seasons was Veronica Mars. According to the commentary from creator Rob Thomas in the special features for season 3, they didn’t have every major plot point of season 3 planned out before they started shooting, but I would predict that they had most of seasons 1 and 2 planned out before each one started shooting. And it worked to perfection: Veronica Mars is a brilliantly plotted mystery show that presents the viewer with good mysteries and interesting happenings during most episodes, with occasional big payoffs, the biggest obviously coming in the season finale.

Yet another perfect example of the merits of planning things months in advance is Buffy the Vampire Slayer. Joss Whedon probably knew how he wanted some plots to unfold and the paths he wanted characters’ lives to take before they started shooting each season. The best adjective that I’ve read to describe Buffy is “groundbreaking”. That was a show that was dominated by the season’s story arc (the “Big Bad”, they always called it), and it had great stand-alones, but my favorite parts of that show were the whole Slayer mythology and the paths the characters’ lives took, which may not always or even often have been planned far in advance but sure seemed like it sometimes.

This very TV season provides us with three perfect examples of phenomenal dramas that seem to have been planned out, either broadly or meticulously, from the beginning to the end of the season: Ringer, Revenge, and Once Upon a Time. I have never watched three TV shows all in the same season that are as good as these, and it might be a long time before three shows this good debut at the same time. The main thing that makes them good is their serialization, the season-long story arcs that must, at least in the first two cases, have been planned out in some level of detail before shooting or possibly even casting began. I would love to know if this speculation is true. Whether it is or not, the effect on me, my wife, and the couple of friends I’ve talked to about these shows is the same: They seem like they were planned out from beginning to end, and that makes us admire the shows even more. The only drawback to Once Upon a Time so far, in my opinion, is the stand-alone episodes that don’t push the season’s story farther along.

Yet another show that was definitely planned out from the beginning to the end of its most recent season was Dexter. A few episodes of season 6 were followed by short clips of interviews with showrunner Scott Buck, who basically revealed that they had planned the plot twists and the major plot lines before shooting ever began. I think that’s how every season of Dexter has been, much to its benefit. So TV writers clearly can plan storylines five months in advance, and doing so clearly can produce excellent results, and frequently does.

One of my favorite TV shows of all time is The X-Files, which many people think mixed the stand-alone episode with Chris Carter’s “mythology” brilliantly (at least, for the first seven seasons). Some people’s favorite episodes were the stand-alones, whereas mine were the alien/government-conspiracy episodes. One problem with the alien mythology that Chris Carter created over nine years was the very fact that it was a living, breathing storyline that wasn’t really planned out from the beginning. What we really got (as far as I remember) was every question being answered with three more questions, and Mulder never really found any closure with the aliens and the government conspiracy (although he did find closure with his sister, which was the most important thing to him). I was always left wanting more about the alien mythology: more answers, more facts, more explanation, more closure. I scarcely got any, because Chris Carter had no answers or end-goal from the beginning, only a great ability to keep people watching by making stuff up as he went along.

Contrast that with Supernatural, a show I originally thought was just a cheap X-Files knockoff but which I now like more than The X-Files for two main reasons: It is more serial, meaning there are fewer stand-alone episodes and more focus is put on the multiple-season story arc and the whole Winchester mythos; and much of the first five seasons was planned out from the beginning, with the result that things actually went somewhere and, in fact, ended somewhere, at least in a way (a very good way). My only two problems with the sixth and seventh seasons are these: Season six was a little unfocused and didn’t handle all four of its major storylines perfectly, probably because the writers didn’t have a plan from the beginning but were making much of it up as they went along, which left them struggling to either eliminate or tie together the plot lines in the last few episodes. And season seven has had way too many stand-alones in a row and absolutely zero action on the story arc since about Thanksgiving.

The balance between standing alone and tying into the mythology or story arc is a topic of some debate among fans of many shows. I have corresponded with Supernatural fans on the internet who miss the good ol’ days of Sam and Dean huntin’ ghosts and burnin’ bones, and I must say that nearly every stand-alone this season has been excellent, especially “The Mentalists”, “Adventures in Babysitting”, “Repo Man”, “Plucky Pennywhistle’s Magic Menagerie”, and “Season 7, Time For a Wedding!” (which was less stand-alone than the others). But there’s just something missing from season seven, and that is a heavily serialized story arc that ties episodes together, builds on the Winchester mythology, and portends things to come.

Clearly for most TV shows, especially those with full 22–24-episode seasons, a good balance between episodic and novelistic interest is ideal, and making the viewer interested in the characters is the most important thing, as with any fiction. But McGee’s disparaging comparison of TV episodes to installments of a novel fails to be useful or meaningful at all, to me. The reason is that not even novels get away with being boring and having very few events until some big payoff towards the middle and then some bigger payoff at the end. A good novel piques people’s interest throughout. Not necessarily in every single chapter, but certainly every 50 or 100 pages. Therefore, in order for a TV show to be novelistic in a good way, it must pique most viewers’ interest in every episode and keep them enticed to follow the overall story arc. For a TV show to be novelistic in a bad way, it won’t pique the interest of many viewers very often, just as a bad novel loses most readers’ interest along the way. And just as a good novel draws three-dimensional characters that readers relate to, root for, root against, or care about on some level, so does a good TV show. Therefore, to say that a show is bad because it is novelistic doesn’t hold water; it might be bad because it’s like a boring novel, but it’s good if it’s like an interesting novel! To say that a TV show that fails to build five characters the audience can relate to, or that sacrifices interesting individual episodes to serve a pre-planned, season(s)-long story arc, is bad because of serialization is like saying a novel is bad because of some inherent problem with novels. No, they are bad because the writers are bad and failed to find one happy medium or another that would appeal to enough viewers to maintain sufficient ad revenue and/or studio support.

The serialization or novelization of this generation of TV shows isn’t bad and hasn’t gone too far, according to some people, such as myself. It might very well be true that some TV writers, even entire teams of TV writers, are driven too far toward the TV-novel end of the spectrum, where they aren’t as effective as if they stuck to 45-minute procedurals. It also might be true that some writers apply the paradigm of serialization poorly to their particular show (McGee cites The Killing and The Walking Dead as examples). This isn’t a problem with serialization, and it isn’t indicative of over-serialization in Hollywood. It seems to me like a problem of some writers trying to do something that works for others but not for them. If I watched more TV or were paid to analyze it, I probably could find some examples of TV shows that were too procedural and not serial enough. For example, the main reason I was so intrigued by The Office in the first three seasons was the Jim and Pam storyline; maybe some sitcoms could benefit from more (any) serialization. That would just go to show that serialization hasn’t necessarily gone too far or spread too wide, that the fixation with serialization isn’t undue, and that the problems are with individual writers and writing teams trying to expand outside of their forte.

When Ryan McGee complains about overly serial TV trying to be something it isn’t, a novel, he strikes me as similar to the dinosaurs in the record and movie industries who don’t understand or don’t accept that technology changes, society changes with it, and they have to change with society. The record industry has sort of adapted by embracing digital downloads, but the movie industry is stuck in the 1990s. They are (perceived as) old, greedy, out-of-touch suits who think they are owed money simply for doing the job they’ve always done while refusing to adapt to change. Well, TV has changed too, maybe starting with The Sopranos or maybe starting with Buffy or earlier, and its evolution into a TV-novel paradigm should be praised as progress rather than denigrated as eroding the single episode’s purpose. Whose purpose? Ryan McGee’s purpose? The purpose of fiction is to entertain while exploring the human condition and creating characters the audience cares about, and I think novels and TV shows are wonderful ways to accomplish all of those goals. TV offers opportunities a movie doesn’t: to explore how characters and their interrelationships evolve, in addition to telling much longer, more epic, more rewarding stories. To suggest that a fixation with serialization is bad and that erring on the side of more self-contained episodes and less story-arc progression/tie-ins is the solution would be to deny TV the great advantage it has over other audio-visual media, and would unjustifiably clip its wings.

If anything, I’d rather TV writers err on the side of too much serialization. As McGee notes, nothing is perfect, and there’s a cost/benefit tradeoff to everything in life, so I’ll take heavy serialization and seasons-long story arcs even with the drawbacks they bring (which are much fewer and less important than McGee would have us think).

In summary, I feel like I can look at this rambling column by Ryan McGee in about three ways:

1) His opinion of good TV differs from mine. So what? Why is that worth a 1938-word essay? Why is that worth a salary as a professional writer? What has he added to the world if his entire column is opinion backed up by more opinion, selective sampling, and blind assumption? (What am I adding to the world by writing this? Probably very little. What am I getting paid for it? Even less. Fortunately for me, I write for myself and not anyone else.)

2) He actually isn’t saying serialized TV shows are bad per se, just that a happy medium between self-contained episodes and a season(s)-long story arc needs to be found. Wow, that’s even more worthless than option #1.

3) He is actually right but has a grand total of zero data to support his hypothesis.

The answer is obviously a combination of #1 and #2, but option #3 brings me back to my original impetus for writing this blag post: opinion peddled as objective analysis. His conclusions aren’t the result of research and data-gathering; it’s all just contemplating and opinionating. Again, maybe my scientific background skews my perception of professional writing in other fields, such that I simply don’t appreciate what everyone else appreciates, viz., that he’s offering primarily opinions that we can either take or leave. But they sure don’t seem to be phrased like mere opinions. Many of my assertions and opinions in this post don’t begin with “I think” or “In my opinion”, and they are presented pretty assertively, like facts, but to me it seemed pretty obvious that most of them came from my thought processes in response to McGee’s statements and my TV experience, not from research or fact-citing; besides, the tone and purpose of a blag post are both much different from those of an entertainment column by a professional writer in an actual publication. Maybe I should just start interpreting the purpose of non-scientific writing as experience-backed subjective analysis that we should compare to our own perspectives, and stop interpreting it as purported objective analysis to be judged as correct or incorrect.

Posted in Entertainment, TV, Writing | Comments Off on In praise of highly serial television shows

The author’s intent does matter

One lesson that most of us, I hope, learned from English class is that the most important aspect of a work of literature is how it affects us: how it makes us think of other people and our relationships with them, how it affects the ways we interact with others, view the world, contemplate the human condition, and approach our lives from that point forward. The best literature is written with these thoughts in mind, not just to tell an interesting story; it is written by real people who want to affect other real people in a certain way.

Therefore, how we interpret a work of literature is central to extracting the fullest effect, the most substance, from it. We also, hopefully, learned that literary analysis and criticism should focus less on the sequence of events that make up the plot and more on how the events affect the characters and, most important of all, on what the entire work says about humans and our world. What is the work’s message(s), its theme(s), its lesson(s)? Clearly, in analyzing such messages, more than just the author’s words on the page come into play; both the author’s message and our interpretation of that message affect the result of our analysis. In other words, to paraphrase Homer Simpson, it takes two to send a message: one to send it and one to listen.

There are people out there who will tell you that the author’s intended message does not, in fact, matter at all. His motivation behind writing the work to begin with, the reasons the characters have the personalities they do and react in the ways they do, and the events that happen and how they affect the characters are, according to these people, absolutely meaningless because all that matters is how we interpret the work of literature. They say that works of literature are open to interpretation and that what matters is what we get out of it. Both statements are true, but many armchair critics go too far: They say that any interpretation of a story is acceptable and that all interpretations are equally right because all that matters is the interpreter.

That crowd would undoubtedly object to my characterization of their position as “the author’s intent is absolutely meaningless”, but how else can their position be interpreted? Either the literary critic’s interpretation should be compared with the author’s intent (if known) to judge how closely they match, or the two shouldn’t be compared. To say that any interpretation, as long as it is backed up with reasonable logic and a fair number of text citations, is equally valid because the critic’s interpretation is all that matters is, in fact, saying that the author’s intended interpretation of his own work is totally, completely irrelevant.

The obvious flaw in this reasoning is that it pushes the author out entirely. What’s the point of even reading a work of literature if it is perfectly fine to ignore the author’s intended message and conjure up an interpretation that is its complete opposite? What’s the point of analyzing a particular series of events that happen to a particular set of characters in a particular setting and analyzing how those characters are affected if we’re going to ignore the intentions of the very creator of all of those things? Why not just make up our own story and type up an interpretation that matches our desires?

Or would the “equally correct interpretation” advocates say, “Well, an interpretation that is completely antithetical to the author’s obviously isn’t right, but an interpretation that adds something new to the author’s or extends the author’s message in an unintended way is perfectly correct and valid”? Why draw the line there? Why is one type of interpretation that the author didn’t intend wrong but another type 100% “correct”? Why draw the line anywhere other than exactly where the author intended (again, if the author makes it known)?

It does matter what the author meant by his writing, his themes, and his symbolism, and if you interpret them in your own way that is completely outside of his intentions, then yes, you have interpreted them wrong. There is a right way(s) and a wrong way(s) to interpret an author’s meanings and his intentions. The right way is what the author meant or what the author concedes is a perfectly fine interpretation of his work, and the wrong interpretation is one that the author didn’t intend and does not condone after he hears about it.

I make these judgments from a purely literary point of view. I can say nothing generalized about how valid a person’s conclusions are from a philosophical, sociological, or political point of view, because those all depend on a philosophical, sociological, or political analysis separate from the work of literature. The conclusions you draw about the world, or the changes in your worldview that you have gained from reading the book, might be great, but you could still be wrong in your interpretation of the author’s message. It is foolish and arrogant when people say that someone’s interpretation of a work of fiction isn’t “wrong” but rather that all that matters is “what you get out of it” or “how you interpret it”.

The most famous example of this is the insistence that Ray Bradbury’s Fahrenheit 451 is about censorship, specifically, warning against censorship and a powerful government that employs it. Bradbury has repeatedly said Fahrenheit 451 is not about censorship but rather about how TV dumbs down people and makes them interested only in superficial, useless little “factoids” presented on TV screens instead of detailed, in-depth reporting and analysis of history and news. This article in L.A. Weekly reports these sentiments of Bradbury’s in addition to mentioning the time he walked out of a UCLA classroom because the students refused to accept his insistence that Fahrenheit 451 was not about censorship or McCarthyism or anything like that.

You can hear Bradbury himself refute that position in a video on his own website.

There is no more clear-cut case of some (most, in fact) critics of a fictional work being completely wrong about the work’s meaning and about how to interpret it. There is a right way and a wrong way to interpret Fahrenheit 451, and reading it as an admonition against censorship is plain wrong. Period.

This doesn’t mean nothing is gained by taking the dystopia of Fahrenheit 451 as a warning against the evils of censorship. Censorship is bad, and if you are inspired to devote part of your life to combatting censorship because of something you read that wasn’t actually about censorship, then the end result is probably good. In that sense, your interpretation can be useful or beneficial, and the pronouncements it leads you to make about humanity and society can be true and good, but your interpretation of that particular literary work still remains wrong.

There are numerous other examples of (real or hypothetical) interpretations of literature that are simply wrong, regardless of how much the critic likes them. Contrary to the suave, heroic, exciting, adventurous image we get from the name James Bond, Ian Fleming intended the exact opposite when he created the character:

When I wrote the first one in 1953, I wanted Bond to be an extremely dull, uninteresting man to whom things happened; I wanted him to be a blunt instrument…when I was casting around for a name for my protagonist I thought by God, (James Bond) is the dullest name I ever heard.

Fleming later clarified even further:

I wanted the simplest, dullest, plainest-sounding name I could find, “James Bond” was much better than something more interesting, like “Peregrine Carruthers”. Exotic things would happen to and around him, but he would be a neutral figure—an anonymous, blunt instrument wielded by a government department.

You might like the movies, their romanticizing of this dull literary figure, and the portrayals of Agent 007 by Sean Connery and others, but the fact is that Ian Fleming originally intended for James Bond to be the exact opposite, and no amount of essayist alchemy can change that fact.

Another author whose intentions and motivations are readily available for all to read is Ayn Rand. She probably wrote more about the meanings behind her novels than any other novelist in history. Many people love her, and many, many more people hate her. Regardless of anyone’s opinion of her and her philosophy, there is a specific correct interpretation of her novels—and many wrong ones. If you read any of her novels and come away with the conclusion that the protagonists are awful, their ideas are wrong, the antagonists are admirable beacons of light, and communism is the greatest idea in mankind’s history, then it doesn’t matter how well you make your argument, how you apply it to the historical events of the real world, or how it makes you feel; it is simply the wrong interpretation of her writing. The author has said so. It is an easy yes-or-no question.

The list goes on and on. Nearly every meaningful work of art that has ever been made and ever will be made has one or very few (obvious or less obvious) correct interpretations and many wrong ones. “Born in the USA” is not a racist, pro-military, flag-waving, interventionist anthem, whereas that stupid Toby Keith song is. The Sistine Chapel painting The Creation of Adam is not about gay sex between an old man and a younger one. The Crucible is not about the evils of dark magic and how we should appreciate the witch-burning zealotry of our ancestors who made this country so great. There are right and wrong interpretations of works of art, and it is stupid, arrogant delusion to pretend otherwise. The right interpretation(s) is the one the author explicitly communicates or implicitly endorses. This is because it is the author’s work, not yours, and the author’s message, not yours.

The author’s intent determines the right and wrong interpretations because the message doesn’t just exist out there in some abstract realm of human thought; the words on the pages didn’t just come into being on their own, and the stories they tell and characters they draw don’t just exist in our imaginations. They were written by a real person with specific intentions. To ignore the origin of the message and its actual content (if known) is like saying you are just as right as a speaker when they say one thing and you hear another.

A wrong interpretation doesn’t mean wrong in the real world or useless to anyone’s life; it just means you interpreted the author’s message incorrectly. This fact stems from the dual nature of a literary message mentioned above: it takes one person to send the message and another to listen. Literary interpretation doesn’t exactly have numerous right answers, but it has two different kinds of answers. That is, we can separate the literary validity of a critical interpretation of a work of art (which is judged against the author’s intent) from the sociological, philosophical, or political value of the critic’s conclusions (which is judged against everyone else’s sensibilities and worldviews). In this context, we don’t necessarily have to choose between comparing a critic’s interpretation to the author’s intent and avoiding this comparison; we can compare the interpretation of a work to the author’s intent in order to judge the literary correctness of the interpretation in question, and we can also ignore the author’s intent when evaluating the critic’s analysis as it applies to human society. We can even ignore the author’s intent and say, “Oh, yeah, I do prefer that interpretation of the author’s message; therefore, I’m going to assume this is what the author meant and live my life accordingly.” That might make an interpretation valuable, but it doesn’t make it accurate.

To the extent that commentators on literature and literary criticism/analysis/interpretation acknowledge this fact, they can judge the critic’s interpretation on its own merits all they want. But to consider only how well a critic makes his arguments and how applicable his conclusions seem to be to the real world is to ignore the author’s intent, which is to say, the message itself, thereby discarding half of the point of analyzing literature in the first place.

Some people will repeat that literature, like any art form, is open to interpretation. That’s fine. Interpret it how you want, draw the conclusions you want, and modify your worldview accordingly. Hopefully your life will be better for it. Maybe your life will be even better than if your interpretation had agreed with the author’s from the beginning. However, being “open to interpretation” does not mean “however I interpret the work of literature is just as correct as the meaning the author intended”. To make such an outlandish, bizarre, bewildering statement would be like saying, “I liked the TV or movie adaptation of the book more than the book itself, so the author must really have preferred that people see his story on the big screen rather than read it in a book.” Such delusions are so ludicrous and incoherent as to be indicative of far worse problems than literary misinterpretation.

Posted in Books, Morans, Writing | 5 Comments

Sad songs: “The Last Resort” by the Eagles

This is the saddest song I know. It made me cry quite a few times when it was new to me, back in high school. It originally appeared as the last track on the Hotel California album, but its best version is the live one from Hell Freezes Over, as with most songs from that concert album.

Don Henley, who is an avid and accomplished environmentalist, said “The Last Resort” is about how the West was lost.

“The Last Resort” lyrics by Don Henley:

She came from Providence, the one in Rhode Island,
where the Old-World shadows hang heavy in the air.
She packed her hopes and dreams like a refugee
just as her father came across the sea.

She heard about a place, people were smilin’.
They spoke about the red man’s ways, and how they loved the land.
They came from everywhere to the great divide,
seeking a place to stand or a place to hide.

Down in the crowded bars, out for a good time.
Can’t wait to tell you all what it’s like up there.
They called it Paradise, I don’t know why.
Somebody laid the mountains low while the town got high.

And then the chilly winds blew down across the desert
through the canyons of the coast to the Malibu,
where the greedy people played, hungry for power,
to light their neon way and giving things to do.

Some rich men came and raped the land, nobody caught ’em.
Put up a bunch of ugly boxes, and Jesus, people bought ’em.
They called it Paradise, the place to be.
They watched the hazy sun sink in the sea.

We can leave it all behind and sail to Lahaina
just like the missionaries did so many years ago.
They even brought a neon sign that said, “Jesus is coming.”
Brought the white man’s burden down and brought the white man’s reign.

Who will provide the grand design? What is yours and what is mine?
’Cause there is no more new frontier—we have got to make it here.
We satisfy our endless needs and justify our bloody deeds
in the name of Destiny and in the name of God.

And you can see them there on Sunday mornin’
stand up and sing about what it’s like up there.
They called it Paradise, I don’t know why.
You call some place Paradise, kiss it good-bye.

Posted in Music | 1 Comment

How to change the WordPress admin area text field/edit post font

After upgrading to WordPress version 3.3.1 recently, I was puzzled and frustrated to find that the text in the text area/entry field where you type the content of each post (the Add New Post page or /wp-admin/post-new.php) looked funky, awkward, and all mono-spacey. I knew this was obviously the fault of some new CSS file I had “upgraded” to with the new WordPress release, but I had no idea how to find the offending bit of code in the mass of files that comes with WordPress. Well, Googling key words like “wordpress”, “font”, “text”, and even “admin area” gave me mostly tutorials and discussion threads about how to change fonts displayed on websites run by WordPress. But eventually I found this post by Martin Brinkmann at ghacks.net, who is a gentleman and a scholar and who explains that the offending string of CSS is in the file wp-admin.css located in the wp-admin/css directory, which specifies that #editorcontainer should have the font-family Consolas, Monaco, monospace.

That apparently is no longer true in WordPress 3.3.1. The wp-admin.css file contains no mention of editorcontainer anywhere, and I changed a few Consolas,Monaco,monospace font stacks to something a human would like to type with, to no effect. (No effect that I have noticed yet.)

So I simply did what I should have done from the beginning: used Firebug to determine which CSS selector applied to this text box/entry field and which CSS file it was defined in. It was wp-includes/css/editor-buttons.css, and the CSS selector was wp-editor-area. Therefore, to correct this stupid lapse on WordPress’s end, you can either download the WordPress .zip file to a temporary place like your Desktop, unzip it there, and find wp-includes/css/editor-buttons.css, or connect to your home directory with an FTP program like Filezilla to download the editor-buttons.css file that WordPress automatically placed on your server. Open editor-buttons.css with a text editor or more advanced programming software, and search for:

wp-editor-area

Immediately or shortly after that text should be:

{font-family:Consolas,Monaco,monospace; …

I recommend just deleting those three fonts and adding normal fonts in their place, like so:

{font-family: Verdana, Tahoma, Helvetica, Arial, sans-serif; …
[or, you know, whatever other good fonts you want]

Don’t delete the semicolon!
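For reference, here is a minimal sketch of what the edited rule might end up looking like. This is only an illustration: I’ve written the selector as a class, .wp-editor-area, but match whatever form it takes in your copy of editor-buttons.css, and leave every declaration other than font-family exactly as WordPress shipped it.

.wp-editor-area {
	font-family: Verdana, Tahoma, Helvetica, Arial, sans-serif;
	/* leave the rule's other declarations untouched */
}

If the change doesn’t show up right away, your browser has probably cached the old stylesheet, so force-refresh the Add New Post page.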

Finally, use an FTP program to upload the new file to wp-includes/css/ under your home directory, overwriting the old file.

It turns out that the Visual editor option for editing posts (and pages, I’m sure), as opposed to the HTML option, uses a normal font like Verdana or something, but I hate the Visual editor. I always feel like I know exactly what I’m getting with the HTML editor and can control everything better with it.

So if you’re annoyed like I am at the puzzling decision to change the font of the text in the edit post/edit page text boxes in the wp-admin interface, you have Martin Brinkmann to thank for finding the offending code and pointing us in the right direction.

Posted in Blagging, Morans | 5 Comments

No, in fact, languages don’t want that

Because they are incapable of wanting anything.

Earlier I remarked that Geoffrey Pullum, Ph.D., professional linguist, ardent descriptivist, and language blagger extraordinaire, surely didn’t really think this but wrote it in Language Log posts for rhetorical effect: that languages want to change, love ambiguity, and show no interest in avoiding polysemy or ambiguity. Perhaps he really does, without any purpose of irony, cuteness, rhetorical effect, or simple convenience/time-saving in informal blag posts, anthropomorphose languages to such a great extent. We’ve all heard about real-life and fictional animal researchers becoming attached to their research subjects the way they would a pet or loved one, artistic creators feeling more love and devotion for their creations than for real people, entrepreneurs exhibiting more devotion to their businesses than to their families, car or boat lovers thinking of their cars or boats almost as people, etc. I guess some linguists can become so immersed in and attached to the subject of their life’s work that they ascribe to it human qualities that it doesn’t deserve.

Geoffrey Pullum has repeatedly, over and over and over again, written that languages love ambiguous meanings and show no interest in avoiding ambiguity or polysemy. No, of course they don’t, because they are abstract things, basically concepts, that are incapable of thought or emotion. People, on the other hand, very obviously are interested in avoiding ambiguity. At least, some of us are. I doubt very many people care much about the polysemy that is so typical of the short, one-syllable words we’ve inherited from ages past, not even those dreaded prescriptivists whom Dr. Pullum basically makes into a caricature of intolerant, unthinking, dogmatic unreasonableness (a caricature that in itself makes me wary of intellectually associating myself too closely with descriptivists of his type). But I find myself sympathetic to prescriptivists, if that’s what it makes them (me), who object to a rarer word with an established meaning acquiring a new one that it doesn’t need and that doesn’t help anyone communicate.

The point of those posts by Dr. Pullum was to show that polysemy does not mean ambiguity and thereby to prove that prescriptivists are wrong in wanting words to maintain constant meanings, or even in believing that they can.

I would say the archetypal ongoing definition shift that divides descriptivists from prescriptivists is the adjective “disinterested”. It means impartial or not having a stake in either side. In contrast, “uninterested” means bored or not engaged. I don’t know if it’s descriptivist, prescriptivist, or neither to note that “impartial” is basically a synonym of “disinterested” (so do we really need the latter at all?) and that it is kind of stupid to have two negation-of-“interested” words mean different things. Then again, the word “interest” means different things, so I have no problem with “disinterested” and “uninterested” meaning different things. The sentences, “I’m not interested in that,” and, “I don’t have an interest in that,” can have distinct meanings from each other: the first would mean “uninterested”, and the second would typically (or at least occasionally) mean “disinterested”. Ah, but “interested” isn’t the same word as “interest”; maybe that influenced the meanings of the sentences more than just the polysemy did. How about, “I have no interest in that,” vs., “I don’t have an interest in that”? Phrased and juxtaposed thusly, more ambiguity certainly seems to creep in. Normally, when we use “interest” to mean a stake in a matter, we use “matter” or other contextual clues to clarify our meaning (“I don’t have an interest in that matter”).

The “interest” examples provide yet more support for my point that in contrast to languages, people are quite obviously interested in avoiding ambiguity and avoiding problems caused by polysemy. For example, if we wanted to write a sentence with a structure like one of those in the previous paragraph but were worried about the ambiguity of the word “interest” (meaning fascination/attraction/intrigue vs. stake/bias/potential benefit or loss), we would use a different word. This is because we are interested in avoiding ambiguity and the problems that polysemy causes. The languages we’ve inherited and were able to evolve on our own have plenty of polysemy, occasional ambiguity, and also plenty of ways to avoid both, so we can further influence our language through usage to enable as much precision, clarity, and anti-ambiguity as possible.

One way to achieve this in the “interest(ed)” examples is to choose a different (coincidentally, also polysemic) word that could, despite its polysemy, have only one meaning in the sentence at hand: “I have no stake in the matter,” or, “I don’t have a stake in the matter.” Or, you know, something completely different but still constructed in the same way: the colloquial, “I don’t have a horse in that race.” Another way to achieve clarity in a short sentence like that is to use a word with a specific meaning for this purpose, “disinterested”. We wouldn’t be compelled by Language to choose the word it wants us to use and has devised for this specific purpose; we ourselves would choose a different word, possibly the polysemic “stake”, which people themselves have come to use for certain meanings because of the benefit of doing so, not caring whether it were polysemic or what it meant in other, irrelevant contexts.

On this point I both agree and disagree with Dr. Pullum: polysemy doesn’t (necessarily) cause ambiguity, certainly not enough to paralyze our use of language, but at the same time, we are not slaves to a disconnected, extra-societal, self-evolving Language that dictates what words mean and how they must be used. We have words with multiple meanings or one meaning because of how our ancestors used them, and we can continue to influence our usage of our language by conscious choice to achieve or at least maintain as much precision and clarity as is reasonable. What is reasonable can be debated extensively but also largely agreed upon, by either the linguistics community specifically or by society at large, and can include the maintenance of a word’s definition when it seems beneficial to do so and, especially, when it is obviously not necessary and not helpful for its meaning to change.

Language shows no interest in having “disinterested” and “uninterested” mean the same thing or different things or doing anything else. People, on the other hand—at least some people—seem to have an interest in keeping both “disinterested” and “uninterested” at one meaning each. Descriptivists are quick to remind us that “disinterested” used to mean what “uninterested” means today. This either occurred because the abstract concept of Language wanted and willed its meaning to change, or it happened because people came up with two different, non-overlapping meanings for these two words, which became standard and popular because it was helpful. Maybe it was the latter, and maybe people can choose, by way of usage, to keep it that way. Maybe it’s worth considering that people who use “disinterested” incorrectly are only doing so out of ignorance of its definition or a simple lapse of prefix distinction, not because Language wants oh-so-badly for it to be polysemic, and that we could actually influence usage in a good way by simply reminding people of its definition while sparing ourselves half of the effort we expend arguing over whether this polysemic shift is a good, bad, or completely neutral thing.

Language Log commenter Jeff Rembetikoff said,

I don’t think the more reasonable breeds of prescriptivists object to each and every word with multiple meanings. Instead, they object to a rarer word losing a distinction between it and a more common word due to perceived sloppiness. Their frequent objections to certain uses of, for example, collide and comprise seemed reasonable enough to have influenced my own usage.

I agree. I don’t want words to have a certain, static meaning per se; rather, I want their meanings and distinctions to be clear and precise, and I want people separated by time and space to mean the same thing when they write, say, hear, and read the same words. Ardent descriptivists, in contrast, seem to consider any change a good and desirable thing per se, because change is what language does and is therefore by definition good. They spend hours and hours and millions of words defending their complete lack of a position, which seems to me to border on pointless, except in cases where people’s communication or understanding is impaired by undue prescriptivism or other strictness. I don’t really care about the polysemic words we already have, despite their meanings being broad and ambiguous, because the phrases and sentences they are used in are clear, unambiguous, and easily distinguishable from their other uses.

Going back to Dr. Pullum’s examples of completely unambiguous, unconfusing polysemy, I was particularly interested in his example “see”. Here are the definitions he lists for “see”, which exclude the original vision-related one:
1. understanding: I see what you’re saying.
2. judging: I see honesty as the fundamental prerequisite.
3. experiencing: Our business saw some hard times last year.
4. finding out: I’ll see whether he’s available.
5. dating: I heard that she’s seeing someone.
6. consulting: You need to see a doctor.
7. visiting: I’d go and see my aunt for a while.
8. ensuring: I’ll see that this is done immediately.
9. escorting: Let me see you to your car.
10. sending away: I’ll come to the airport and see you off.

The thing that struck me about all of these uses is that in my world of scientific writing and editing, and possibly in the primary literature of most academic fields, every single one of these uses of the word “see” would be inappropriate. We would never use “see” to convey any one of these meanings (maaaayyyybe “see a physician”). Using “see” to mean any of those things in a biological or medical paper would be imprecise and inexact because we have better words with more specific, specialized meanings that would more precisely convey the meaning of the sentence. In everyday speech, such pomposity wouldn’t be necessary, but in higher-level writing, it’s often beneficial and even necessary, my objections to bombasticness in general notwithstanding. In the case of “see” as used with any of the meanings listed above, I would consider a replacement necessary, for clarity and precision. The reason “see” would be inappropriate in primary literature is not because Language wants and compels us to write differently for different situations, nor would it be (only) because scientists and scientific editors are bad writers who conflate “more syllables” with “better writing”. We would use more specific words because we want to and we see the benefits of doing so.

I see the benefit of keeping many words’ meanings distinct and constant, the best example of which is “disinterested” vs. “uninterested”. Many more I don’t care about. I think it’s reasonable to agree with me on any or all of them (if I ever compile a list, I’ll let you know), and it doesn’t make me ignorant, intolerant, pompous, unrealistic, or prescriptivist to advocate lexicographical constancy for some words when there are clear reasons to do so and unclear benefits of change.

My desire is not to avoid polysemy or change per se, but rather to promote things that make sense and discourage things that don’t. All of those different uses of “set”, “draft/draught”, “charge”, and “put down” make sense and don’t cause ambiguity, so there is no problem with them. Advocating the use of “disinterested” to mean “uninterested” probably doesn’t cause much lack of clarity, either, but it’s stupid, so I oppose it. It’s stupid because it already means something else and there is already a word that means “uninterested”. It’s not a short, simple, common word with lots of everyday uses, especially uses that are distinguished in prepositional and adverbial clauses. It’s a long, uncommon adjective whose meaning it makes more sense to preserve, so I advocate its preservation and constancy. This position is simple, beneficial, common-sensical, and very nearly unobjectionable, as far as I can tell.

Posted in Language | Comments Off on No, in fact, languages don’t want that

Web fonts

I’ve been a little bit giddy over the last couple days at my discovery that the fonts used on my website are not limited to the fonts I have or others are likely to have on their computers. Rather, a website owner can use CSS to refer everyone’s browser to a publicly available font at a web font service, thereby allowing everyone’s browser to display exactly the font that the designer wants and not just one of the several that are likely to be available on the visitor’s computer.

This is a really simple concept, but I’m not surprised I didn’t think about it before. When an application on your computer (say, Microsoft Word or Mozilla Firefox) needs to display text in a certain font, it looks in a certain folder (or folders) on the hard drive for the font files and then renders each bit of text in the correct font. Instead of limiting the browser to fonts stored on the same hard drive as itself, or at least on the same computer (with multiple drives), why not tell the browser to fetch the font from a hard drive on a different computer, connected to it via the internet? That’s exactly how web fonts work!
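
In CSS terms, that remote fetch is what an @font-face rule does. Here is a minimal sketch, with a made-up URL standing in for wherever the font service actually hosts the file (in practice the service generates this boilerplate for you, usually with several file formats for different browsers):

@font-face {
	font-family: 'Some Web Font';
	src: url('http://fonts.example.com/somewebfont.woff') format('woff');
}

body {
	font-family: 'Some Web Font', Georgia, serif;
}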

I first realized this when I came across Typekit, a web font service that several blags I’ve encountered use. I’m a little too obsessed with typefaces, so naturally I was curious about what those blags’ fonts were called and where they came from and how it was possible that they displayed as normal text in my browser, not as a static image or snapshot or anything. I eventually decided to click on the little Typekit logo in the bottom-right of the screen of one of those blags, which says “Fonts from” and then the logo.

So I signed this domain up for a free account and started trying a few different serif fonts that would serve me better than my all-time favorite font of Times/Times New Roman had. Times (New Roman) will always be my favorite font because it’s just so…classical, functional, straightforward, unassuming, readable without being bland, elegant without being flashy.

The only problem was that it wasn’t as readable as I’d like on my laptop. I loved how the text of my posts looked in Times New Roman on my 19″ desktop monitor (on Ubuntu Linux), but in every single browser I tried on my Windows 7 laptop (a 13″ Toshiba), the text was just too small for the Times New Roman font. It isn’t just a matter of size, because many sans-serif fonts on many web pages are the same size as this text was. Rather, the problem with Times New Roman on web pages is that the font doesn’t scale down well or something; it is just hard to read at 12 points or smaller. Too much serif, or too little space between letters, or too much black space per white space, or something. Part of the problem could be my beige parchment background, which looks very pale on my desktop monitor but quite red-tinged on my laptop. I’ve spent probably upwards of an hour, over several different occasions, trying to get my laptop’s display closer to my desktop’s, but it seems impossible. I think I would need hardware controls for RGB balance and gamma, but this laptop doesn’t seem to have them.

When I work in Microsoft Word for my editing job, 90% of the papers are, thankfully, written in Times New Roman, most of them in 12-point size. When I zoom to 120% in a Word document with Times New Roman 12-point type, the text looks perfect in every way: my ideal font. In contrast, in Microsoft OneNote, which I use to jot things down and take notes while editing, Times New Roman 12-point type has the same problem as above (suggesting the website readability problem was not due mainly to the parchment background). The dilemma was that if I made the text on my site any larger, it would be way too large on my desktop monitor. I don’t know how it would look on visitors’ displays, but if my two computers were any guide, Times New Roman wasn’t achieving a happy medium between readability on a laptop and aesthetics on a desktop monitor. It just isn’t a web font.

The same was unfortunately true of a font I quickly became enamored of from Typekit, Adobe Garamond Pro. When I applied that font to my blag at 16px, it looked wonderful on my desktop: elegantly simple, with a little extra flair or style that Times doesn’t have. However, it was probably even less readable on my laptop than Times. Increasing the size was an option, but not a great one.

The free account at Typekit.com doesn’t give you access to all of their fonts, so I didn’t have many Times- or Garamond Pro–like options, and most available serif fonts again looked too small on my laptop or too big and awkward on my desktop. Liberation Serif initially seemed a good option, but on my laptop it looked too narrow, almost as if the letters were too close together. I want a serif font for my main entry content because it seems a little more professional and academic, more classically stylish and not newfangled-computer-stylish, which matches the brown/red/beige/earthy theme of my website better. Serif fonts also display em dashes and en dashes better than some sans-serif fonts, such as two of my favorites, Lucida Grande (which displays en dashes and hyphens identically) and Trebuchet MS. Looking around at other websites recently, I suppose I could go with Georgia at around 14px, but I have disliked Georgia on other websites before, probably at 16px, so I’ve acquired a little bias against it. It seems to be basically a web font while Times is a word processing font.

To make a long story short, I found the Google Web Fonts service and am now using that with Sebastian Kosch’s Crimson Text as the text you are reading. I have no idea how long I’ll keep it as that.

I have discovered three large advantages of Google Web Fonts over Typekit. 1) There are thousands more fonts available for free, many of which are the same as those available with the free Typekit account (PT Serif, Droid Serif, and Prociono, just to name a few). The free Typekit account has plenty of good ones, hence its popularity, but not all of the open-source ones. 2) Some of the Google web fonts seem to look better, at least on my laptop, than the same font from Typekit does. I am not sure if it’s possible that Google Web Fonts’ version of PT Serif looks better than Typekit’s, but something has convinced me it does. Sebastian Kosch has apparently uploaded two different versions of Crimson: Crimson to Typekit and Crimson Text to Google Web Fonts. I am completely sure that these two look different on my two computers, in favor of the latter. 3) Google Web Fonts’ API is much easier to use and is open-source, so you don’t need an account, username, download, special access, or anything other than minimal CSS knowledge to apply a certain font to a certain type of text on your web page. All you do is add the provided link tag to your page header (or an @import line to the top of a CSS file) with the name of the font in it, reference that font-family in your CSS, and the font will display properly on your web page (see the sketch below). I was successful on my first try. You don’t need any WordPress (or other blagging platform) plugin, you don’t have to download, activate, or sign up for anything, and you don’t technically even have to visit the web fonts gallery; if you know the syntax and the name of the font, you can type it in yourself. You just need access to the right CSS or template file(s).
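
In case a concrete example helps, here is roughly what that looks like for Crimson Text, the font mentioned above (the .entry-content selector is just a stand-in for whatever selector your theme applies to post text):

<link href="http://fonts.googleapis.com/css?family=Crimson+Text" rel="stylesheet" type="text/css">

or, from inside a CSS file,

@import url(http://fonts.googleapis.com/css?family=Crimson+Text);

and then, in your stylesheet,

.entry-content { font-family: 'Crimson Text', Georgia, serif; }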

In case you’re wondering, some of the other fonts I was attracted to and might cycle through again later were Lusitana, Cambo, PT Serif, Gentium Basic, Ovo, and Poly. The main reason I probably won’t use Lusitana, Ovo, and Poly is because they are a little too squat—too short and wide and not tall and narrow enough. I use those terms relatively, of course, because I hate really narrow fonts like Arial Narrow and other compressed/condensed fonts. I like the fonts that are very slightly on the taller and narrower side. They just seem more elegant and stylized to me. Another one I’m interested in if I can figure out the API or CSS is Century Catalogue. It is listed at webfonts.info as being available for @font-face embedding, but I’m missing some knowledge about how to link to or embed this font. Maybe just link to the .sfd source file?… Its author says it is still a work in progress, so maybe it’s not embeddable yet?…

I know my friends would think I’m nuts for even noticing any font unless it’s unreadable, but given the thousands of font designers and typography aficionados out there, I’m far from alone.

Posted in Blagging, Writing | 2 Comments

On correctness, ambiguity, and precision in language

In the linguistics blagosphere last week, prescriptivists got all indignant because some British company I’d never heard of, Waterstone’s, dropped the apostrophe from its name, and descriptivists got all agitated at the prescriptivists for making undue proclamations about correctness, rules, and history.

Well, I don’t really care about a company changing its name, because it’s a trademark and it’s their prerogative to make a change for marketing and advertising purposes. The descriptivists are obviously right that it’s not so much a grammatical issue as it is a marketing one. That said, I do think it’s a little stupid for a company to (1) change its name from what it’s always been; (2) change it to something that, regardless of the history or the future, does not currently express a possessive, which was the whole purpose of the letter s in its name in the first place, so to the extent that people in and out of the Waterstones company want it to express a possessive concept, it decidedly does not; and (3) make a change that does nothing to prevent confusion, because no one is going to type www.waterstone’s.com in their browser’s address bar and be confused when its web page doesn’t load. People know to leave out the apostrophe when typing URLs.

The Waterstones powers that be might not care or intend for the new name to express a possessive. They do assert that the name change was motivated—or perhaps rationalized post facto—by the “altogether truer picture of our business today which, while created by one, is now built on the continued contribution of thousands of individual booksellers.” Either way, I think Waterstones’ dropping of the apostrophe is about as unnecessary, silly, and possibly unwise as Barnes & Noble would be to drop the ampersand from its name because it can’t be part of a URL or file name. (I will say, however, that now that Waterstones’ trade name lacks an apostrophe, it makes it much easier, or at least more natural-looking, to make its name into a possessive!)

But this post isn’t about Waterstones. It’s about descriptivist claims regarding the nature and purpose of language that I think are only marginally relevant to the prescriptivists’ complaints about rule breaking, incorrect punctuation, and ambiguity creation.

I am not a linguist, nor ever will be, and I would not dream of challenging any assertions about meaning, ambiguity, or linguistic history put forth by Geoffrey Pullum and Michael Rosen in recent blag posts about prescriptivists and the apostrophe. I will, however, go so far as to at least question the conclusions or implications—or lack thereof—that a completely indifferent, disinterested, historical descriptivist viewpoint entails.

A fair summary of many grammatical descriptivist vs. prescriptivist battles could be phrased thusly: Descriptivists point out that everything about language changes over time and with usage, for convenience, clarity, technology, and other reasons, especially English; in fact, the exact way we’ve gotten to our current state of usage [e.g., of the apostrophe] is by a series of changes that began in some locale or industry [e.g., typesetting], that might have been considered wrong if there had been prescriptivists around, and whose current “end” result is now considered “right” by pedantic prescriptivists today. Prescriptivists counter that people occasionally had good reasons to change how language and punctuation were used, either consciously or subconsciously, and many things have changed for the better, and they don’t have to change again; we should strive for the ideal of logic, clarity, and uniformity in language to maintain ease of communication across time and space, and if prescriptivists can point out why something is more or less clear or logical than an alternative and they can influence people’s usage, then by god, they’re going to do so.

The key point as I see it regarding the topic I’m writing about today—ambiguity, precision, and declarations of “correctness” and “rules” in language—is that language is not some amorphous, physical, animate or inanimate object that has a will or behavior of its own. People use the language and people change the language.

Dr. Pullum writes:

English could easily have a distinct letter sequence for every different meaning, using letter sequences much shorter than the present ones. It doesn’t because the language in general shows no signs of being the slightest bit interested in that.

I know he doesn’t really think it, but the insinuation that I’m refuting is there nonetheless: that language is just some extra-human, extra-societal thing that changes with a mind of its own and that we are powerless to stop. Language is not “the slightest bit interested” in perfect logical clarity or reducing ambiguity because it is not interested in anything because it cannot be. Humans, on the other hand, very obviously are interested in logic, clarity, and consistency, and the very nature of our brains, at least if Chomsky is right, is to think in words and not just pictures, movies, actions, and feelings. (Even if Chomsky’s hypothesis about the basic nature of our brains as opposed to animals’ isn’t right, it is undoubtedly true that civilized humans who grow up in a social environment do think primarily in language.) Therefore, it makes perfect sense to me that we as thinkers, doers, and language users should both consciously and subconsciously strive to create rules and affect our language to suit our myriad purposes as well as possible. The apostrophe does a decent job of expressing genitive nouns, inconsistently though it may be used in pronouns, and it’d be nice to have different symbols to represent elision and possession, but this is what we’ve inherited and everyone knows how to use it, so let’s guard against its decay and misuse.

Descriptivists would counter that prescriptivists are failing even on their own terms: if we’re supposed to strive for logic, consistency, and clarity, then that means we have to allow language to change in order to improve, and the apostrophe clearly isn’t used very consistently even now, so maintaining the status quo clearly isn’t amenable to the prescriptivists’ own ends, save for the desire for uniformity across time: the desire for our language to be similar to the English of future centuries. Well, I don’t know how many other characters are even available to introduce into orthography, and no one is going to just start using or start teaching the use of a completely different, non-standard symbol, so keeping the apostrophe just where it is is fine by me.

people who say that the ridiculous orthographic mess we have inherited is a finely tuned system for clear communication and avoidance of ambiguity are simply fools.

Well, I don’t know, I think it’s pretty clear almost all of the time. I think Dr. Pullum’s main point here is that language and orthography have not been “tuned”, and certainly not “finely”, by any wise or learned body, and certainly not for any purpose or according to any plan. I can’t argue with a professional linguist about how finely it has been tuned, but the history lessons related by him, Dr. Rosen, and other descriptivists seem to indicate that people have, in fact, tuned the language in many specific ways by tweaking their usage and following different rules over the centuries. Many of these tweaks were made consciously and intentionally for clear, verifiable reasons, with clear, logical goals, and with clear, lasting effects. Typesetters made conscious decisions for specific reasons to start using apostrophes, and English-speaking people adopted some or all of those uses because of the advantages they saw in them—clarity, consistency, or a function the apostrophe fulfilled that seemed desirable, or some combination of these. People are omitting them for a clear reason: convenience, laziness, and no real loss of clarity. (They also add them where they don’t belong for a clear reason: they are ignorant or perhaps stupid.)

The fact is that dogs, dog’s, and dogs’ are all pronounced exactly the same, so the fact that we can understand each other when we talk about dogs is as good a proof as one could expect for the proposition that there is no real danger of irresolvable confusion here.

True, but I bet few literate societies have ever talked just the way they wrote, and our writing (and understanding of written language) shouldn’t be—in fact, isn’t—limited by what our mouths can pronounce, so I think it’s a fine thing that we can distinguish on paper those three words that all sound identical when spoken. We have the ability to distinguish those three concepts clearly with a simple, tiny symbol, which many other languages don’t even have, so its use in making small distinctions like this has clear value in orthography and was seemingly introduced by real, living people who consciously decided to use it in that way. Maybe we’d be better off speaking and writing like Romance languages and saying “of the dog” or “of the dogs”, but I have a feeling those bulky phrases can create as many problems (though surely of different types) as our apostrophe does. Thanks to the apostrophe, dogs, dog’s, and dogs’ do all mean different things, which we all understand perfectly and immediately, and I think that’s worth something. That’s an accomplishment of conscious human effort combined with circumstance (as Dr. Rosen’s dogges -> dog’s explanation shows), not unknowing, detached, extra-societal linguistic evolution. I’m proud of us for reaching even this level of precision and anti-ambiguity, inadequate though our minds may be for grasping the deeper nature of the universe and insufficient though our languages may be for expressing what our distant descendants may discover about it. I’d be proud if future generations continued to improve the logic, consistency, precision, and clarity of our language, and I, for one, especially hope that English speakers of the year 2400 can understand our writing much more easily than we can understand Shakespeare’s. As a layman, I can’t see any need for English vocabulary to change very much at all (except for the invention and borrowing of new words, which don’t change the comprehensibility of current English), and I can’t see how English could be improved by any unforeseen, unprecedented, major grammatical or syntax changes, so how is anyone harmed by advocating—I’ll say it—stagnation? How would anyone be harmed by the occurrence of stagnation (except, again, the introduction of new and additional words)? Why do descriptivists get so indignant when prescriptivists recommend that something be standardized or held constant?

I think a better question than “Why do prescriptivists want this or that?” or “Why does what prescriptivists want matter?” or even “What do strict descriptivists want?” is “Do strict descriptivists want anything?” Do strict descriptivists have any opinion on any grammatical or vocabulary matter whatsoever? Do they ever take a position on anything other than how ignorant and overly restrictive prescriptivists are? What viewpoint or recommendation has any descriptivist ever put forth on an accessible web page that could be considered a specific rule or proscription that it would be wise for English speakers to follow? Is there an example of a descriptivist saying, “Yes, I think this grammatical convention is better, and we’d all be wise to follow it for clarity and uniformity,” or, “Yes, it is true that this word currently has such-and-such meaning, and it’d be a shame if the ignorant misuse of this word led to an accepted alteration in meaning”? What is the point of studying something you have no opinion about other than “anything goes”?

Or is it more accurate to classify descriptivists as linguists and other interested grammarians who agree with the prescriptivists and the conventions of the world on 95% of issues but define themselves as descriptivist (and, complementarily, their detractors as prescriptivist) by their refusal to take a side on the issues that they think we shouldn’t take sides on? Obviously, being agnostic and saying “either/or” is a position and is an opinion…but when they go so far as to say it really doesn’t matter whether the apostrophe lives or dies and it really doesn’t matter if 21st-century English sounds like a foreign language in 500 years, I think they’re going overboard in their desire to remain impartial and avoid influencing usage.

Saying that language just changes on its own and that it’s pointless for us to opine on, care about, or try to influence it seems to me like saying, “All of the continents once existed in a single land mass called Pangaea, and land masses are all constantly moving and will all crash into each other again someday, so it’s meaningless to refer to seven different continents today or to take precautions against earthquakes.” We might be powerless to stop continental drift, but we’re not powerless to influence language and usage. We use it every day, and we influence it and it influences us, so insisting on certain grammatical conventions and striving for vocabulary constancy seems preferable to descriptivist indifference.

If some people are fine with the complete elimination of the apostrophe from the English language, which would be a widespread, large-scale, major overhaul of a grammar, why not instead advocate the development of computer programs and operating systems that can tolerate more characters in URL names? Which is more impossible to achieve, and which would require more major overhauls of how we conduct our daily lives? I honestly don’t know the answer.

Posted in Language | 1 Comment

Amazon vs. Barnes & Noble

It’s not hard to find people, especially book lovers, who lament the downfall of brick-and-mortar bookstores thanks to the rise to dominance of Amazon.com. I could say “online retailers”, but let’s face it: it’s only Amazon. Barnes & Noble has always been just about my favorite store to go into, look around in, and shop in, but my most recent experience made me extol the virtues of Amazon’s vast selection and ease of shopping even more than I usually do. (I am not one of those people who lament the decline of brick-and-mortar stores, local stores, or any other type of business or industry of any kind, really, because the market must change constantly to meet the new realities of the world, every industry drastically changes over time, and many companies must die for better ones to supplant them. Despite some people’s despair at the bankruptcy of Borders bookstores, I was not particularly sad to see it go; if it wasn’t giving people what they wanted at the prices they wanted, then it represented an inefficient use of resources and would serve humanity better by making way for companies that could better meet the demands of the masses.)

I recently had to return a movie that I received two copies of for Christmas, and I chose to return the one that had come from Barnes & Noble because I knew their movies (and CDs) were all overpriced and that I could get a more valuable store credit from there than from wherever the other copy came from. Also, I wasn’t planning on exchanging one overpriced movie for another; rather, I was going to buy two or three books with the store credit.

My frustration with Barnes & Noble (and, when you think about it, all brick-and-mortar bookstores) reached a peak when I couldn’t find a single one of the first 12 books I looked for. Going on memory and the Amazon wish list on my phone’s Amazon app, I walked back and forth and all around the fiction & literature section looking for all of these titles, none of which was carried by this particular Barnes & Noble:

A Certain Slant of Light by Laura Whitcomb
A Handful of Dust by Evelyn Waugh
Riddle-Master by Patricia McKillip
Smilla’s Sense of Snow by Peter Høeg
The History of Danish Dreams by Peter Høeg
Bridge of Birds: A Novel of Ancient China That Never Was by Barry Hughart
Lamb: The Gospel According to Biff, Christ’s Childhood Pal by Christopher Moore
The Invention of Morel by Adolfo Bioy Casares (I looked under both Bi- and Ca-)
Ubik by Philip K. Dick
Alternate Realities by C.J. Cherryh
The Stars My Destination by Alfred Bester
Blindsight by Peter Watts

(I looked only in the English-language fiction & literature section because I only care about translations of the foreign-language novels, which is obviously why I listed their English titles.)

Every single one of those books has over a dozen to hundreds of reviews at Amazon.com, and I only ever heard about them because they were recommended by others over the internet as fascinating, memorable, unique, must-read, or even life-changing books. In other words, these aren’t just run-of-the-mill novels that I might kind of like to read someday.

They also didn’t carry Bryan Garner’s Modern American Usage or the Merriam-Webster Dictionary of English Usage in the reference section, although they did carry other usage guides (Chicago and MLA, for example), so up the total number of absent books in a row to 14.

I did eventually find five that I was looking for. Three I didn’t buy: The Brothers K by David James Duncan for $3 or $4 more than Amazon sells it for, Creatures of Light and Darkness by Roger Zelazny for a hell of a lot more than the $5 Amazon is currently selling it for, and A Fire Upon the Deep by Vernor Vinge for either $3 or $5 more than Amazon is selling it for. I don’t mind Barnes & Noble charging a little more for a book than Amazon, but for a standard mass-market paperback, I think more than a $3 difference is quite high. The two I ended up buying were The Yiddish Policemen’s Union by Michael Chabon and Spin by Robert Charles Wilson.

In all fairness to Barnes & Noble, if I had really gone into the store with a pre-planned list of books to look for in the order that I wanted them, The Yiddish Policemen’s Union would almost certainly have been number 1, perhaps behind only The Invention of Morel, which Octavio Paz has described as “without exaggeration…a perfect novel”, an extolment many others agree with. It seems to me that any respectable bookstore would carry the English version of this novel.

I probably would only have ended up with two novels in the end anyway, so I’m glad I found the two that I did. The Yiddish Policemen’s Union was in the main literature section, although I’ve heard it described as science fiction, so both of these science-fiction novels should be very interesting to read in the near future.

Posted in Books, Life | Comments Off on Amazon vs. Barnes & Noble

“Times” and “folds”

[This post has been updated from its original form to make it better organized, clearer, and more direct and succinct.]

Bill Walsh is absolutely, completely, 100%, unequivocally, and in all other ways right in his position on the meanings of “times” and “fold” in his recent disagreement that is as much mathematical as it is semantic.

If I start with $100 and end up with $250, did that money grow 2 1/2 times?

A reporter and I are having a good-natured disagreement: He says yes, and I say no.

The confusion between growing, say, 1 1/2 times and 2 1/2 times comes from the fact that some people use grow to mean either multiplying or adding, depending on the situation or, I guess, their whim. But grow should not mean to multiply. It should only mean to add a number to a starting value. For example, when you say a child grows by an inch, you don’t mean that their previous height was multiplied by 1 inch. The same logic applies when we say something grows by a percentage. For example, say the height of something grew by 1%. That means that 1% of its previous height was added onto that previous height, not that its height was multiplied by 1%. Importantly, that percentage is not just an abstract number but refers to a specific, concrete, physical measurement; a raw quantity, say in inches or meters.

The same applies when we convert percentages into times. When Bill Walsh and his friend refer to money growing by 1 1/2 or 2 1/2 times, they are really referring to percentages: percentages of the starting value. The only way that multiplication comes into a “growing” calculation is when we use a percentage (or “times” or “factor” or “fold”) to calculate how much gets added. When we say something grows by a certain factor of a starting value, what we mean is that we multiply the starting value by that factor and then add that product onto the starting value.

These concepts can be easily seen with a simple linear transformation. Here is the operation as Bill Walsh and I see it:

Let the function “grows N times” be the linear transformation such that if x grows N times, then

x ↦ x + Nx (x becomes N times larger than x).

With this vocabulary, we see that

“$100 grows 1 1/2 times” means $100 ↦ $100 + (1.5)($100) = $250
“$100 grows 2 1/2 times” means $100 ↦ $100 + (2.5)($100) = $350
“$100 grows .5 times (grows by half)” means $100 ↦ $100 + (.5)($100) = $150
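
Note, by the way, that x + Nx = (1 + N)x, so under this definition “grows 1 1/2 times” makes the final amount 2.5 times the original; the growth factor and the overall multiplier always differ by exactly 1, which is precisely where the two camps talk past each other.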

Alternatively, let’s define a linear transformation according to Bill Walsh’s friend’s use of the English language:

Let the function “grows N times” be the linear transformation such that if x grows N times, then

x ↦ Nx (x becomes N times x).

“$100 grows 2 1/2 times” means $100 ↦ (2.5)($100) = $250
“$100 grows 1 1/2 times” means $100 ↦ (1.5)($100) = $150
“$100 grows .5 times (grows by half)” means $100 ↦ (.5)($100) = $50

In case you aren’t convinced yet, let’s consider the opposite of growing: shrinking. If we use standard English and refer to shrinking as identical to growing except in the opposite direction, then we can easily define an analogous but opposite operation. But first, note that nothing can shrink by more than 100% (1 time); if something shrinks 100%, there is none left.

Here is how I and, presumably, Bill Walsh would define the shrinking transformation:

Let the function “shrinks by a factor of N” be the linear transformation such that if x shrinks by a factor of N, then

x ↦ x – Nx (x becomes N times smaller than x).

Note the replacement of “N times” from the original formula with “a factor of N” in the new formula. This is partly because of how English works and partly because of what I said above about the impossibility of shrinking more than 100%; it can sound awkward to use the plural “times” if the factor is less than 1, and if a value shrinks by 30%, we don’t say it shrank by .3 times but rather that it shrank by a factor of .3, or alternatively by 30%. My “shrinking” transformation thus produces:

“$100 shrinks by a factor of .75 (by 75%)” means $100 ↦ $100 – (.75)($100) = $25
“$100 shrinks by a factor of .25 (by 25%)” means $100 ↦ $100 – (.25)($100) = $75
“$100 shrinks by a factor of .5 (by half)” means $100 ↦ $100 – (.5)($100) = $50

Now let’s define a different “shrinking” transformation analogously to Bill Walsh’s friend’s “growing” transformation:

Let the function “shrinks by a factor of N” be the linear transformation such that if x shrinks by a factor of N, then

x ↦ Nx (x becomes N times x).

“$100 shrinks by a factor of .75 (by 75%)” means $100 ↦ (.75)($100) = $75
“$100 shrinks by a factor of .25 (by 25%)” means $100 ↦ (.25)($100) = $25
“$100 shrinks by a factor of .5 (by half)” means $100 ↦ (.5)($100) = $50

Not only is that usage of English incoherent, but it requires that growing by half be identical to shrinking by half!

“Now, John, you’re setting up a straw man,” you say. “No one uses shrink like that.” Then why do they use grow like that?

We can conclude that grows by and shrinks by are equal but opposite operations and that they each require multiplying the starting value by the growing or shrinking factor and then adding that product to the starting value. (For the “shrinking” operation, we can use the phrase “adding that product” to mean adding a negative number, which is the same as subtracting a positive number.)

Now, the word fold. Its meaning always seemed clear to me, but the way it is actually used is different, and I would typically avoid it if I were writing a scientific paper rather than just editing others’ papers. In my job as a scientific editor, every time I’ve seen the word fold, it has meant “entailing multiplication by a factor of [the number that comes before it]”. In other words, a 2.5-fold increase is always used to mean “multiplied by 2.5”. Therefore, people would say both that $250 is 2.5-fold greater than $100 and that $250 is 2.5-fold (of) $100. That makes no sense to me. Well, it does make some kind of sense, but it is inconsistent sense. It’s consistent mathematically, because regardless of the construction of the sentence, you always just multiply the original number by the fold factor, but it is inconsistent semantically.

Folds are also frustrating when referring to decreases, but I’ve come to accept formerly nonsensical fold decreases and not care anymore. For example, in the real, physical world, nothing can decrease by more than 100%. If some quantity decreases 100%, none of it is left, and there is no such thing as negative matter or energy, so it is nonsensical and meaningless to say something decreased by more than 100%. If something decreases by half, 50% of it is left. If something decreases by two-thirds, one-third of it is left.

Well, if you took fold to mean “percent of” or “fraction of”, then the most anything could ever decrease would be 1-fold. If something decreased 0.5-fold, that would be decreasing 50%. Etc. That, however, is not how any biomedical research scientist I’ve encountered has ever used fold. They say something “decreased 7-fold”, meaning the final value was 1/7th of the original. If something decreased 150-fold, the final value was 1/150th of the original. That’s stupid, but I guess everyone’s consistent, so now fold means “involving a factor or ratio of the original value”.

Bill Walsh has experienced similar frustration with fold:

My friendly adversary pointed me to a dictionary that defines the verb triple as meaning “to increase three times in size or amount.” And there is the -fold model. A twofold increase is doubling, a threefold increase is tripling, and so on. To which I respond: None of the dictionaries on my shelves are that sloppy, and those shelves also hold an otherwise wonderful usage book in which the author is tripped up by -fold, insisting that tripling would be a twofold increase. (It’s a special case, -fold, because “a onefold increase” is not only never used but also impossible. You can fold something in two or three or more, but you can’t fold it in one.)
[I would love to know what wonderful usage book that is. —JTP]

His friendly adversary’s dictionary would, unfortunately, agree with most biomedical scientists on the use of -fold: they use a twofold increase to mean multiplying by a factor of 2 (doubling), even though multiplying by and increasing (growing) by ought to mean different things. I suppose you could argue that increasing (growing) by can mean adding to or multiplying by according to the whims of the author and following no consistent or pre-defined rule, but that is illogical to me and goes against the meanings of the words “increase” and “grow” as I understand them, especially when the word “by” is added after them.
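
(To put a number on it: by the additive logic above, a twofold increase from $100 ought to be $100 + (2)($100) = $300, whereas the dictionary and the biomedical literature would say that $200, a mere doubling, is a twofold increase.)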

This is a perfect example of why I am largely prescriptivist: so that meanings can stay as consistent as possible and people separated by time and space (and mathematical ability) will mean the same thing when they use the same words. I will stop being prescriptivist when I gain the ability to understand how people can not care that the same words mean substantially, crucially different things to different people.

If shrink is not the perfect opposite of grow, less than is not the opposite of more than, and grow (by) can mean two mathematically distinct things, then, well, I don’t know anything and it’s pointless to write or talk about anything.

Posted in Grammar, Language | 5 Comments

Young American female vowel shift

In a recent post about a different topic, Bill Walsh mentions some annoying vowel shifts exhibited by young-ish American females. These mainly involve changing the short e to the short u sound, so that “desk”, “test”, and “better” become “dusk”, “tust”, and “butter”, respectively. “Dad” also becomes “Dodd”. I’ve also noticed these and been annoyed by them. It’s kind of a California valley-girl accent creeping into all parts of the country in teenage girls and 20-something women, and now probably even women in their 30’s.

I tried to post a comment reporting my observation of another pair of vowel shifts that have annoyed me much more than those have, but his blag only allows posts from Google Blagger users, which I am not. (What kind of foolish isolationism is that? How many insightful comments and loyal frequent visitors has that practice precluded over the years at the blags that employ it? I’ve only ever seen it at Blagspot blags, but I know the owner has to choose to enable that restriction, so I can’t place the blame entirely on Google.)

Instead, I’ll post it to my own blag and reap the sweet, sweet internet karma it’ll bring me: The annoying vowel shift I’ve noticed that is unique to young-ish American females is pronouncing “thank you” as “think yo”. I guess the “yo” is actually more of a Minnesota/Midwestern type of “yoh”, but it’s not like it’s ever stressed or enunciated so strongly that it makes you think of Marge Gunderson. Maybe it’s just halfway between “you” and “yo”, and maybe the first word is more of a “thenk” than a “think”, but whatever it is with each individual girl, it’s annoying.

I haven’t noticed any prevalence of vowel shifts among American males, although a little bit of the valley accent certainly can be heard in some college guys’ (especially frat guys’) speech, as in “mahn” for “man”, “cuhl” for “cool”, etc. Sort of a surfer accent by guys who are trying too hard to sound casual and indifferent (or who have tried to sound casual and indifferent for so long that they no longer can tell any difference).

Maybe some nit-picky female linguists are equally as annoyed at males’ dropping of entire consonant sounds, as in “’anks” for “thanks”?

Posted in Language | 4 Comments

Observations from our Christmas road trip

Kathy and I recently drove from Ann Arbor to Atlanta for Christmas, from Atlanta to Tampa to visit her grandmother, and from Tampa back to Michigan with an overnight layover in my parents’ house in Atlanta again. The trip between Ann Arbor and Atlanta (actually, Johns Creek) is 12 to 13 hours, and that between Atlanta and Tampa 8 or 8.5. We drove instead of flew for two reasons: to take a lot of puppy- and gift-related stuff in both directions and to save money.

The first and most important thing to remark on about our trip was the nearly ideal weather. On the way down, it was drizzling and extremely foggy in Michigan and Ohio, but that’s a hell of a lot better than snow or ice and didn’t impede our travel in any way. The drive between Atlanta and Tampa was dry and warm both ways. The final leg of our trip, from Atlanta to Ann Arbor, was completely dry and not the least bit dangerous. We were very lucky in this regard, I think. That was Dec. 31, and on New Year’s Day it rained a little and created ice on the ground (which I felt as I was bringing groceries in from the car), and on Jan. 2 it snowed an inch or two. We missed dangerous or at least slow-driving weather by a day, or two at most.

The most important observation I can make or thing I learned from the trip is that Ohio totally sucks. I already knew that from having lived near it for 6 years and having driven through it on our summer road trip to Atlanta and New Orleans, but, man, what a shitty state. On our way down south, we saw either four or five (let’s call it four to give them the benefit of the doubt) speed traps along I-75, compared to a total of zero in Kentucky and Tennessee. There was one that we saw in Georgia. There was nothing close to a speed trap on I-75 in Florida going north or south, as everybody there drives 85–90 mph and I was being passed going 85 in the middle lane. That was the only positive point that has ever made Florida a slightly less odious state to me, not that I’ve changed my tune from never wanting to live there. On the way back north, I’m not sure if there was any speed trap in Georgia, but there were a total of zero in Tennessee and Kentucky, compared to six—yes, six—in Ohio alone. Combine this with the lower speed limit of 65 throughout the state, and you get an entire state-wide heap of shittiness and pettiness. Luckily, we have not been pulled over on any of our road trips, but I saw two people get pulled over behind us after we passed the cops, one in Georgia and one in Ohio.

The absurdity of Ohio’s 65-mph speed limit and its numerous speed traps also emphasizes the fact that speed limits on interstates generally do nothing to increase safety. If they did, then German autobahns would have the highest death rate in the world, but instead they have a remarkably low death rate. Michigan and Ohio routinely have similar traffic fatality rates on interstates; for instance, in 2006, Michigan’s interstate highway death rate was lower than Ohio’s. When we crossed the border into Michigan that night, I didn’t feel the least bit more endangered when we increased our speed from about 68 to about 78. This is because it wasn’t more dangerous. I’ve never felt safer going the other direction, either. I could understand a city, county, and state wanting to increase its police presence on the roads on New Year’s Eve, but at midnight and after, when people are actually drunk and are actually driving home, not at 9:00 when people are going to the places of imbibement. My god, six speed traps on a 3-hour stretch of interstate? I know Jersey Shore has really increased in prominence throughout American culture in the last few years, but you don’t have to try quite so hard to edge out New Jersey for the Shittiest State in the Union award, guys.

On the way down south in the rain and fog, we noticed one additional advantage to those cool-white LED headlights in addition to their looking cooler: they stand out in precipitation far more than traditional headlights do. A lot of them are annoying and probably even unsafe because of how bright they are in your rear-view mirror, and I hope automakers fix that in the near future, but their superiority in fog and rain and their cool, futuristic, science-fictiony look make me really, really want them in my next car. Obviously there are many different kinds of LED headlights, and I want a kind that is not overly bright and not very blue. I like the cool-white look, which is very pure white with probably a hint of blueness in them, but the ones that look blue are just retarded.

On our way home on I-75 north in Florida, we saw a Georgia fan with four Georgia Bulldogs window flags driving down south, presumably to go to the Outback Bowl.

Also on I-75 north just north of Tampa, we saw a billboard for Bronner’s Christmas Wonderland…which is in Michigan. No, there’s not another branch in Florida; the billboard said “Frankenmuth, Michigan”. No, it wasn’t just an advertisement for their website for all those retirees to order piles of tacky Christmas decorations, although the website was listed on there. Just a normal billboard for a store that is over 1,000 miles away. Weird!

Finally, I recently found this new blag You are a bad driver and I hate you, which I will read regularly and eagerly. Dean, sir, you are a gentleman and a scholar.

Posted in Life | Comments Off on Observations from our Christmas road trip

Stupid NFL

Between these two NFC quarterbacks, the first of whom played in a more difficult division and finished with a 10–6 record, and the second of whom finished 9–7 to win his shitty division, which one would be more deserving of being selected to the Pro Bowl?

                   Quarterback 1    Quarterback 2
TD                 41               29
INT                16               16
Passing yards      5,038            4,933
Completion %       63.5             61.0
Passer rating      97.2             92.9

Quarterback 1 is superior in every single meaningful category, including wins, so obviously I’m writing about this because Quarterback 2 was selected to the NFC Pro Bowl team and Quarterback 1 wasn’t. Who are they? Quarterback 1 is Matthew Stafford, and Quarterback 2 is Eli Manning.

What a bunch of bullshit. The Pro Bowl rosters are currently voted on by players, coaches, and fans, so it’s probably the idiot New York fans who voted Eli Manning in. As shown nearly every year by the baseball All-Star voting at some position or another, fans are idiots and shouldn’t be allowed to vote on anything because they make it a popularity contest instead of an accomplishment contest.

Posted in Morans, Sports | Comments Off on Stupid NFL