Thursday, May 31, 2007

Averting Change by Avoiding the Right Questions

Yesterday I found myself mustering some pretty strong rhetoric about the extent to which we are losing our right to talk about change because that right is being suppressed by the ways in which mass media purport to keep us informed. Any second thoughts I may have had about being too extreme were dismissed this morning by Henry Banta's piece for Nieman Watchdog. For those unfamiliar with the Nieman Watchdog site (one of the sites listed below on my "What I Read" list), the motto posted on their header is "Questions the press should ask." Mr. Banta took this motto seriously in his examination of just how flawed media reporting has been in accounting for the current price of gas (and his analysis did not waste any time over sheer blunders, such as the one just committed by KOTV). Mr. Banta is one of those lawyers who appreciates and honors the value of clear, straightforward, and compelling language. His clarity concentrates on the extent to which media coverage of the price of gas has been obfuscating the actual "story behind the story;" and he poses specific recommendations (with strong argumentative support) of the questions that media reporters should be asking when they interview anyone connected with the energy business. It does not take long to read his piece; and, once you have done so, your respect for just about anything you see in a newspaper or on television about the "price at the pump" will probably be seriously eroded (if any of it is left)!

Those who do not Learn from History Pay for It

It was only half a month ago that we learned how an unfounded rumor could wreak havoc on the price of Apple stock, at least as a short-term effect. Unfortunately, it turned out that much (if not all) of the "we" was limited to readers of CNET News.com, as if this was just another story about the eccentricities of today's technology. Well, if Reuters did not pay very much attention to the price of a share of Apple, they clearly had a different opinion when it came to the price of a barrel of oil! Here is the lead from a story they filed yesterday afternoon:

World oil prices jumped briefly on Wednesday after a television station in Tulsa, Oklahoma -- the No. 62 U.S. media market -- posted an erroneous story about a refinery fire on its Web site.

At 10:14 EDT (1414 GMT), CBS affiliate KOTV reported that a lightning strike had caused a fire at an Oklahoma refinery -- sparking a flurry of excitement among energy traders and boosting U.S. crude prices 40 cents.

The refining company announced the story was "completely wrong" and the station withdrew the story.

"All it takes is a screw-up on a Web site to move the market. It just goes to show how tense this market is," said a Houston-based oil trader.

When I ran my editorial impressions of the Apple story, I linked the post to Ellen Goodman's recent column on "The Benefits of Slow Journalism." In this story KOTV seems to have demonstrated how little regard they have for Ms. Goodman's values. The sad irony is that all of this took place in a local context. KOTV was probably in the best position to take the time to check out the story before posting it but decided, instead, to run at "Internet speed." As Vonnegut used to say, so it goes.

Wednesday, May 30, 2007

At the End of the Road to Serfdom

I am beginning to realize that one of the functions of "rehearsing" ideas involves trying to cast them in the form of propositions that might serve as "laws" (which, for the most part, are descriptive, rather than prescriptive). Four such laws have emerged in my recent writing. The first was the Fundamental Law of Economics (which I had actually been "rehearsing" for over half of my life):

People are willing to share poverty, but they would prefer to keep wealth.

This was quickly followed by the Fundamental Law of Business:

Survival comes before quality.

I then moved from the economy to politics by proposing a variation of a definition introduced by Carl von Clausewitz:

Terrorism is protest by other means.

Most recently I have been focusing my attention on the problem of a general lack of will, leading to the following proposition:

Comfort is the enemy of will.

We are now in the wake of two recent news items that are particularly disconcerting where questions of not only will but also our very sense of national identity are concerned. The first (which actually gave rise to that fourth proposition) was the failure of the Congress to hold to its opposition to the Iraq war through its control of the military budget; and the second was Cindy Sheehan's decision to "retire" from her personal (not to mention sincere) efforts to oppose that same war. There have been a lot of reactions to both of these stories, many of which have involved agonizing over the extent to which we have "lost our democracy." Well, the truth is that we never had one. The Founding Fathers were well aware of the shortcomings of a simplistic majority-rule approach to government. Instead, they drafted a Constitution around principles of fair representation and separation of powers. This was then followed by the first ten amendments to that Constitution, which basically embodied the principle that majority rule always needed to be attenuated by minority rights. The real question that is at stake is whether or not voices of opposition have a right to be heard, not only in theory but in the practice of an electoral system through which those voices can be embodied as candidates who stand for votes. It is not about whether or not we are "losing our democracy" but about whether we are losing our right to talk about change (and then turn our talk into action).

Cast in the framework of talking about change, we quickly encounter a corollary to that proposition about will (which basically addresses that capacity for turning talk into action):

People who are comfortable do not want change, since that may lead to discomfort.

This "law" was first presented to me by a seat companion on a flight from Santa Barbara to Chicago. She told me that psychiatrists had begun to talk about the "Santa Barbara syndrome." This was a particular form of depression that grew out of the premise that "life was perfect in Santa Barbara." The primary symptom was the total lack of ability to take any significant action, since that action might lead to having to leave Santa Barbara and thus abandoning the state of perfection one had attained! My own corollary generalizes on this little lesson. There are lots of places you can be comfortable; but, once you are comfortable, you want to "freeze" it into a "permanent state." As Isaiah Berlin has observed, this is the "tragic flaw" (Aristotle's language, not Berlin's) of utopianism. More recently Russ Feingold has picked up on the corollary in his argument that Congress was more interested in "political comfort" than in the will of their electorate.

However, the corollary gives rise to another law that looks at change from the other side of the coin:

People who are uncomfortable do not have a voice.

This is because, regardless of what happens on any given Election Day, a voice is only heard if the mass media choose to let it be heard. Since the mass media are controlled by the comfortable, it is in their best interests to mute the voices of the uncomfortable, since those voices may induce the changes that they fear and abhor. Nevertheless, it is in the best interests of the mass media, since they are businesses, to do a very good job of deluding the uncomfortable into thinking that they do have a voice. As a matter of fact, they are far better at this than any candidate from any political party! After all, promoting that illusion sells soap (or insurance, not to mention any number of pharmaceuticals about which you should “ask your doctor”). Furthermore, since its primary “virtue” appears to be in marketing, the Internet is no better in this regard than any of the other mass media!

The irony of that last sentence is particularly relevant on the heels of the address given by Eric Schmidt to the Seoul Digital Forum on the theme of the Internet as “a powerful force for democracy.” Most of the irony resides in the extent to which his own company, Google, has both committed and supported non-democratic or anti-democratic activities in the name of its own prosperity (as Sumner Lemon observed in his IDG news report published by InfoWorld). Whatever the virtues of the Internet may be, the “powers that be” appreciate the extent to which it can sustain illusions of democratic and representative practices, because even those illusions reinforce its marketing power. The populi have no more of an effective vox on the Internet than they have in their letters to (now almost defunct) newspapers; they just see themselves “in print” more readily. Hayek warned that both Europe and the United States were headed down “the road to serfdom;” and we seem to have arrived!

Tuesday, May 29, 2007

Cindy Sheehan's Example

Cindy Sheehan submitted a "letter of resignation" through yesterday's entry on her Daily Kos blog:

This is my resignation letter as the "face" of the American anti-war movement. This is not my "Checkers" moment, because I will never give up trying to help people in the world who are harmed by the empire of the good old US of A, but I am finished working in, or outside of this system. This system forcefully resists being helped and eats up the people who try to help it. I am getting out before it totally consumes me or anymore people that I love and the rest of my resources.

Good-bye America ...you are not the country that I love and I finally realized no matter how much I sacrifice, I can’t make you be that country unless you want it.

It’s up to you now.

The above text is the conclusion of that entry. It was preceded by a rather extended text developing the argument that led her to this conclusion. In a time that has so tightly coupled the lack of reflection with the lack of will, the entire text makes for sobering reading. It deserves all that reflection that we no longer seem to be able to muster, whether through lack of will or the sheer laziness of lack of energy. One must read her remarks in light of not only Russ Feingold's criticism of the Congress for preferring "political comfort" to strength of will but also the tactics of the "media ideologues," both right and left, to turn the legislative process into grist for an entertainment mill that, as Ms. Sheehan observed in her blog, is probably best represented by American Idol. Last week I chose to pick on Keith Olbermann; but he is only a symptom of the far greater malady that has warped the democratic ideal of the "people's choice" to a point where it is barely recognizable. It is the country that grew out of that ideal that is the object of Ms. Sheehan's love in the above text; and she now seems as resigned to the death of that ideal as she has had to be to the death of her son Casey.

If Ms. Sheehan is right, then she has refuted my assertion last week that the Democrats care what she says, while Bush can ignore it by dint of his own personal morality. Actually, if I read her argument correctly, it is the Democrats who refuted my assertion, leaving me feeling about the same way I felt in 1968, when it was clear that neither the Democrats nor the Republicans had put up a presidential candidate for whom I could vote. This was all the more ironic, since this was the first presidential election in which I could cast a vote. So it was that, on the first opportunity I had to vote for who would be President, I gave my vote to Dick Gregory, running under the Peace and Freedom Party, because he made it clear that action in the face of an urgent situation was more important to him than "political comfort." However broad the playing field may appear right now, my guess is that the choice we shall ultimately face in November of 2008 will be equally disconcerting, if not more so.

Monday, May 28, 2007

A New Voice

Mariusz Kwiecien was apparently not up to the demands (primarily of unpleasant weather) of singing at the San Francisco Opera production of "Opera in Dolores Park" yesterday afternoon. This was a bit disappointing. Much of the afternoon was devoted to excerpts from Don Giovanni. All the other members of the cast (except for Kristinn Sigmundsson, who will be singing the Commendatore) were there. The good news is that the role of Don Giovanni was filled by a singer I had not previously heard, Wayne Tigges; and he was definitely worth hearing!

During the intermission I commented to a friend, "You know not to trust him from the moment he opens his mouth." She liked that remark enough that I figured I ought to run with it. The problem is that there are several directions to take it!

Probably the most interesting is the challenge of delivering dramatic material without the benefit of staging, which I just addressed in regard to The Seven Deadly Sins. We now seem to be in an age where singers know enough about dramatic technique to "deliver the message" even when the staging is minimal, or even absent. Willard White recently demonstrated this when the San Francisco Symphony performed Berlioz' Damnation of Faust under Charles Dutoit. White had such a firm handle on the character of Mephistopheles that he really did not need staging (which Berlioz had not felt was necessary in the first place). Tigges had the same command of Don Giovanni's character. It was all the more interesting because, by the time the opera gets around to the "Là ci darem" duet (which is what he was performing with Claudia Mahnke), we know what sort of character Giovanni is, thanks not only to the "actions in the present" that we observe from his first appearance on the stage but also to the account of his past that Leporello gives us. Add to that the fact that most of us sitting on the grass out there holding up bravely against that notorious San Francisco version of summer were familiar enough with the opera that we knew about Giovanni's character without seeing all that prior material. The point, however, is that Tigges has established himself in that school of practice that conveys character with a minimum of dramatic resources, basically by letting the performance of the music itself do all the work. This left me less curious as to how he would perform with the guidance of stage direction and more curious as to how he would deal with the dramatic elements of art song. Hopefully my curiosity will be satisfied sooner rather than later.

The fact that so many of us know "all about" Don Giovanni is probably more blessing than curse when it comes to actually staging this opera. We are probably far more disposed to accept it as the "dramma giocoso" it was billed as being than its original audience of 1787 was. I am not sure to what extent Mozart and da Ponte were up on their Aristotle; but, as far as the latter's "Poetics" is concerned, the fact that Giovanni is a noble "Don" makes the drama a tragedy, rather than a comedy. Shakespeare had already set the bar when it came to using low humor to emphasize a tragic situation; and, while da Ponte was no Shakespeare, he was probably well aware of the technique. The result is that, in the years following its premiere, more has probably been written about Giovanni than about any other character in a Mozart opera.

It will thus be interesting to see what Leah Hausman has made of all this in a production that has already been performed at the Théâtre de la Monnaie in Brussels. Musically, this version appears to be on solid ground. The afternoon in Dolores Park concluded with Donald Runnicles conducting the entire cast (with Tigges substituting for Kwiecien) and orchestra in the finale to the first act; and, for all the disadvantages that come with outdoor performance, the growing confusion and urgency were all there. The promise of a hell-raising production is there; by next week we shall know if it gets delivered.

Works that Defy Staging

I just read Rupert Christiansen agonizing in the Telegraph over the problems with staging The Seven Deadly Sins:

It sounds like the recipe for a hit. A half-hour cantata, combining elements of opera and ballet, which reflects on the ironies of vice in a cynical, sexy, witty yet touching parable set to brilliant and memorable music. How could it fail?

But I've now seen eight theatrical versions of Brecht and Weill's Seven Deadly Sins - most recently, the mess choreographed last month by Will Tuckett at the Royal Opera House - and not one has convinced me.

He reminded me that I have only seen one theatrical version, which, ironically, surprised me with how effective it was. The irony had to do with the production being by the Tanztheater Wuppertal Pina Bausch at the Brooklyn Academy of Music in 1985. I was a big "BAM" supporter at the time, which meant that I went to every Bausch production they offered that fall; and, with the exception of the Weill evening, they were all interminably self-indulgent. I think that the constraint of working within the half-hour limitation of Seven Deadly Sins did her some good; and this continued to be the case after the intermission with her choreographic interpretation of Weill's Kleine Dreigroschenmusik, which was definitely the only indication of joy (even if a bit perverse) that her dancers ever exhibited. It was also the only evening with "live" music, Michael Tilson Thomas conducting the Brooklyn Philharmonic; so Thomas may have had something to do with giving the performances more pace than the other productions had.

Several years ago he conducted Seven Deadly Sins with the San Francisco Symphony and Ute Lemper. As I recall, this was an unstaged performance with surtitles. Lemper had preceded the intermission with a pale shadow of a cabaret act that was pretty lifeless and certainly out of place in the vastness of Davies Hall. However, the size of the space was no obstacle when MTT took over the podium. Even with the spare resources of the Weill score, he made the performance "work" (and made Lemper far more interesting in the process).

There are, of course, "great classics" that defy staging. Stravinsky seems to have a monopoly on them. There will probably always be a controversy over whether or not there is a "right" way to stage "Le Sacre du Printemps;" but even "Firebird" has a reputation for being a clunker. Then, of course, there is "L'Histoire du Soldat," a pioneering piece of back-of-the-truck theatre that just never seems to "make it" when staged.

I have no idea why Bausch was able to succeed with Weill where so many others have failed. I have a better idea as to why her Weill effort should have succeeded when all of her other work was so disastrous, which is that she really needed someone to tell her when to stop. Perhaps Christiansen's frustrations can be traced back to productions that never figured out how to start!

Sunday, May 27, 2007

Reading what I don't Understand

I have to confess that I felt very pleasantly flattered by what JP Rangaswami wrote about my own writing over at confused of calcutta:

I agree with Stephen about many things; I disagree with Stephen about many things; and then there’s a large group of things I don’t even claim to understand as yet, and that doesn’t stop me reading what he writes.

I do not know if he recalls that, back in March, I wrote about "The Problem of Dealing with Difficult Texts;" but one of the points I was trying to make there is that we should never give up on those things that we "don't even claim to understand as yet," because the most important part of that phrase is the concluding "as yet!" JP made his comment in reaction to my post on "The Google Paradigm and its Discontents;" but one of the discontents that I did not address in that post is the problem of instant gratification (the unfortunate corollary of "Internet speed") that both Google and Wikipedia have cultivated in what is left of our reading skills and habits. The "as yet" effect does not kick in overnight. In my own life experiences there are times when it has taken years, only happening when triggered by some event that then tweaks my memory; and it is because my memory is not what it used to be that I have become so occupied with taking notes, not just by marking up what I read but also by transcribing those marks onto my "virtual 3 x 5 cards" in PowerPoint. What I have discovered is that it is just as important to mark passages that I do not "as yet" understand (out of some instinctive sense that they should not be neglected) as to make note of passages that pertain to any of my current writing projects. As I remarked in my earlier post, that act of transcription serves as my first step towards "as yet" understanding.

The important thing about JP's observation is that it reminds us that we should embrace difficult texts, rather than flee from them. I find this particularly important when I am on the road. Hegel and Derrida have accompanied me on long flights and jet-lagged nights in hotel rooms; and they are far better companions than the "business press" books, which usually can be read in less time than it takes to sit through a play by Tom Stoppard. Often my only concern is that I am traveling with so much stuff that I lack both the space and the strength to deal with anything that is physically too heavy!

Saturday, May 26, 2007

On the Practice of Editing

There has been some interesting banter over at Andrew Keen's Great Seduction blog around the fantasy of Google Apps developing a "virtual editor" (already christened "Google VE"). Since the time I spent working with Mark Stefik on editing the book reviews published in Artificial Intelligence accounted for some of my most memorable and enjoyable life experiences, I feel a need to elaborate a bit less frivolously from a practitioner's point of view. Ironically, when I started to reflect on the nature of my own practices, I discovered that I was falling back on the same framework that I had previously used in writing about the practices of musical performance, the medieval trivium of logic, grammar, and rhetoric. I cannot speak for how Mark approached his end of the operation (except for the fact that, because I was in Singapore, we could not have collaborated anywhere near as well as we did without really good electronic mail support); but, when I approached the manuscript of a review, particularly one that took the trouble to explore its topic in depth, I found that I had to "live" in all three of those trivium disciplines. Thus, if anyone is seriously fantasizing about a "virtual editor," it is important to account for each of those perspectives in terms of their capacity for accommodating technological support.

Grammar is probably the best place to begin. In the first place Microsoft has already demonstrated that it is at least feasible, although Word's grammar checking is so inferior to the spell checking that I ignore it almost entirely. One of the interesting things about natural language processing is the early discovery (going all the way back to Terry Winograd's doctoral work) that parsing a sentence is not necessary for "understanding" it (whatever the meaning of that word in scare quotes may be). Quite the contrary, because of issues like ambiguity, Winograd demonstrated that a syntactic analysis often has to draw upon hypotheses, if not results, arising from attempts at a semantic analysis. The Word grammar checker seems to be pretty ignorant of semantics, so it is no wonder that it is limited.

More important, however, is that, even if you establish that a sentence is "well formed" within the constraints of grammatical rules, as an editor, you may still want to recommend that the author change it. When I was a student, a favorite joke about language would be, "I can diagram it, but I can't understand it." I can still say that about a lot of the stuff I read (including, often, my own)! In other words not only are the rules of grammar not necessary for understanding the sentence; but also they are not sufficient! Editing within the discipline of grammar involves subtleties of judgment that go beyond the domain of syntactic analysis. Not only do I doubt that any "virtual editor" would be capable of making just judgment calls; but also I suspect that software analysis would not even be able to detect where those judgment calls would have to be made by the human editor who, presumably, would be using the virtual editor as a labor-saving tool.
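For those who like to see that "not sufficient" point made concrete, here is a minimal sketch (entirely my own contrivance, with a toy vocabulary and toy rules; it has nothing to do with how Word or any real grammar checker actually works) in which a handful of grammatical rules happily accept Chomsky's famous "Colorless green ideas sleep furiously," a sentence any human editor would still send back to the author:

# A toy "grammar checker" (purely illustrative; not any real product's method).
# It accepts any sentence matching Adjective* Noun Verb [Adverb], so a
# perfectly "well formed" but senseless sentence sails right through.

ADJECTIVES = {"colorless", "green", "angry", "quick"}
NOUNS = {"ideas", "dogs", "editors"}
VERBS = {"sleep", "bark", "write"}
ADVERBS = {"furiously", "quietly"}

def is_well_formed(sentence):
    """Check the toy pattern: zero or more adjectives, a noun, a verb, an optional adverb."""
    words = sentence.lower().rstrip(".").split()
    i = 0
    while i < len(words) and words[i] in ADJECTIVES:
        i += 1
    if i >= len(words) or words[i] not in NOUNS:
        return False
    i += 1
    if i >= len(words) or words[i] not in VERBS:
        return False
    i += 1
    if i < len(words) and words[i] in ADVERBS:
        i += 1
    return i == len(words)

print(is_well_formed("Colorless green ideas sleep furiously."))  # True
# The rules are satisfied, but an editor would still want the author to change it.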

Now, whatever may be bubbling beneath the surface at Google Labs, on the basis of what we see, Google lives in the world of logic; but, as just about anyone would point out, theirs is a rather impoverished world. I am sure that we all have our stories about why Boolean combinations of words and phrases do not make for particularly good semantic representations; and therein lies the rub. The emerging behavior of "Google rage" is a manifestation of the frustration that arises when what the user means just does not register with the capabilities of the search engine.
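To make that frustration concrete, here is a minimal sketch of Boolean retrieval (the documents and the query are my own invention; this is not a description of how Google actually works) in which the one document the searcher most wants never registers, simply because it uses a different word:

# A toy Boolean AND search (purely illustrative; not how any real engine works).
# Each document is reduced to a bag of words, and a document "matches" only if
# it contains every query term literally.

documents = {
    "doc1": "the rising price of gasoline at the pump",
    "doc2": "crude oil futures jumped after the refinery fire rumor",
    "doc3": "the price of a barrel of petroleum rose sharply",
}

def boolean_and_search(query_terms, docs):
    """Return the names of documents containing every query term."""
    results = []
    for name, text in docs.items():
        words = set(text.lower().split())
        if all(term in words for term in query_terms):
            results.append(name)
    return results

# The user *means* "documents about oil prices," but doc3 says "petroleum"
# rather than "oil," so the Boolean engine returns nothing at all.
print(boolean_and_search(["oil", "price"], documents))  # -> []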

However, even if Google were to argue that throwing a lot of smart researchers at the logic problem would eventually result in more useful software, my guess is that they remain totally oblivious to the world of rhetoric, to the point that it would not even be on the radar in any brainstorming about a virtual editor. The good news is that I shall always have opportunities to make fun of the rhetorical ineptitude that emerges when Google faces the public. The bad news is that no one over there is likely to see any significance in my indulgences in this particular form of ridicule.

Nevertheless, there are issues of rhetoric that cannot be ignored, particularly when one is editing a long manuscript, because, the longer the text gets, the less likely it is to be read in its entirety. This is where all three worlds converge. Taken together, the analyses of grammar, logic, and rhetoric help us prioritize what we read. It all goes back to the same things I had been writing about musical performance, the ways in which a reader can sort out the embellishing from the embellished. If the reader cannot quickly home in on those pieces of text that will be subjectively embellished, (s)he cannot make any sense out of the embellishment (and will probably then just bail on the whole text). It is that lack of the ability to prioritize that also can make search experiences so frustrating, both in how the query is expressed and in how the results are delivered.

Even in an age of Google and Wikipedia, how we read texts is still a good model for how we understand the world. Unfortunately, too many of our technological innovations choose to ignore this premise, perhaps because it is too difficult to be embraced by technology. Nevertheless, it is how we live in the world; and it cannot be ignored.

Friday, May 25, 2007

Historical Chutzpah

This was the week that the San Francisco Chronicle looked back on the Summer of Love, primarily through a series of articles by Joel Selvin that ran on Sunday, Tuesday, and Wednesday. Selvin's theme, expressed in his first headline, was that this was the "stuff that myths are made of." Bearing in mind that this was very much a "local" story, I just have to wonder why the Whitney Museum could not have been as perceptive, at least on the basis of the New York Times report of their show, “Summer of Love: Art of the Psychedelic Era.” That report was enough for me to select the Whitney as the recipient of the Chutzpah of the Week award. However, before I build up my argument I should establish two points of context:

  1. I was not part of the Summer of Love.
  2. I have never been a great fan of the Whitney.

These points are actually related, so let me elaborate a bit on this context. I graduated from MIT in the spring of 1967. I was part of the first graduating class to receive diplomas from a president who had previously been on the Business School Faculty (which I did not recognize at the time for the omen it was). I cannot remember if any previous graduating classes had worn black armbands in protest of our "adventure" (the same word I now use in reference to Iraq) in Vietnam; but quite a few graduates in my class did. (For the record, I did not; for better or worse, I was too involved in other matters, which I shall get to quickly.) The only other thing I remember is my parents telling me that Lorne Greene's son was in my graduating class (I never met him); so his dad was there signing autographs.

More important was that EUTERPE, my first serious venture into computer music, had been my senior thesis; so I was looking forward to spending the summer working at Marvin Minsky's Artificial Intelligence Laboratory, continuing that work before starting in on my doctoral program. This was also the summer I started to build up my writing chops, particularly in writing about the performing arts with a primary focus on dance and a secondary focus on avant-garde music. This "beat" led to many trips to New York, none of which involved the Whitney. My greatest "rebellion" was to take a down-and-dirty approach to my subject matter; and, for all the ferment in the art world, the Whitney always felt far too detached and posh for my personal tastes.

So the summer of 1967 was not particularly memorable for me. I had my "own things to look at," as Douglas Adams put it many years later; and I was of an age when that was all that mattered. My own personal revolution came in the summer of 1968, because that was when I got to know John Cage and most of the other members of the Merce Cunningham Dance Company. Without that encounter my doctoral thesis could not have turned out the way it did. It was literally a life-altering experience (which began with mushroom hunts but never involved anything psychedelic) that kept me in touch with Cage on and off until I left for Singapore in 1991.

Cage died while I was in Singapore. Apparently, I was the only person Singapore Radio could find who had a personal experience with him; so I allowed myself to be interviewed by way of an obituary. As a "punch line," I concluded with the observation that Cage taught me to question everything my teachers told me. Today, with the wisdom of a talk that Jacques Derrida gave at Stanford, I would revise that slightly: I would say that Cage taught me to question all professions, taking the word, as Derrida did, to simultaneously denote an institution and the statement of an assertion. This is the basis for my argument about the Whitney show!

Now all I have to go on is the report that Holland Cotter provided to The New York Times. I have no idea how old Cotter is. My guess is that he is not older than I am, just because anyone of that age seems to have been "ejected" from the newspaper business as a casualty of rampant cost-cutting. However, regardless of whether he had any relevant life experiences in 1967, Cotter follows the same rules I used to follow in my own performing-arts writing: Start by reporting on the experience, and save the opinions for whatever column space may then remain. He may have been a bit heavy on introductory context (but, given what I just wrote, how can I attack him on that?); but the text he writes to retrace his steps through the exhibit demonstrates that he has what it takes to be a good reporter. As I have previously written, I agree with Plato that good descriptive language is critical to knowledge; and Cotter seems to be in my camp on this.

Having dealt with description, Cotter has his grounds for moving on to opinion:

“Summer of Love” is stuck on the style, or rather stuck on the effort to make one style the whole ’60s story. It pushes hard, covering wall after Whitney wall with posters for concerts at rock emporiums like the Fillmore West and East, or British clubs like UFO and the Fifth Dimension. (The show has a substantial British section; it was organized by Christoph Grunenberg, director of the exhibition’s originating museum, the Tate Liverpool.)

Here is where the Whitney chutzpah begins. Lots of things were happening in the Sixties, particularly after 1965; but both the reality and the myth of the Summer of Love were anchored in a very localized San Francisco experience. Since the heart of that experience was Golden Gate Park, even the Fillmore (only someone on the "other coast" would call it "Fillmore West") was a side show for this particular event. Any attempt to link this experience to related experiences in New York or Britain (even with the Liverpool connection to the Beatles) misses the point of the concept; and any major misconception that is reinforced by the "professionalism" (Derrida-style) of the Whitney crosses the line from a blunder to an act of chutzpah. In other words it is not just that the Whitney has missed the point but that they have now institutionalized their misunderstanding!

Now Cotter steers clear of saying anything about what did or did not happen in San Francisco in the summer of 1967 (which means he is also a good student of Jane Austen). He prefers to write about the broader context of political ferment, which was certainly equally strong on both coasts. He takes the Whitney to task for ignoring this aspect of the experience, building up to the following conclusion:

So, we discover in 40-year retrospect, love was never all you needed; in the 1960s, in fact, it was barely there. “Summer of Love” doesn’t feel like a particularly loving show, and the ’60s, as seen through its lens, isn’t a loving time, unless by love you mean sex, which was plentiful, as it tends to be in youth movements.

But altruism, selflessness? Young people are by definition narcissistic, all clammy ego. They want what they want. There is no past that matters; the future isn’t yet real. Some might say — I would say — that American culture in general is like this, though not all of it. And if the kids in “Summer of Love” are stoned on self-adoration, there were also an extraordinary number of young people during the Vietnam era who engaged in sustained acts of social generosity. And they made art.

I mention this in light of the Flower Power revivalism of the past few years, in contemporary art and elsewhere. Psychedelia and collectivity are back (and already on their way out again). But the revival is highly edited; a surface scraping; artificial, like a bottled fragrance. No one these days is thinking, “Turn on, drop out.” Everyone is thinking, “How can I get into the game?”

The Whitney show, maybe without intending to, suggests that this was always true, and makes such an attitude seem inevitable and comprehensible. So, let’s have another ’60s show, an incomprehensible one, messier, stylistically hybrid, filled with different countercultures, and with many kinds of music and art, a show that makes the “Summer of Love” what it really was: a brief interlude in a decade-long winter of creative discontent.

This was a good way for him to wrap up his perspective; but I think he missed one crucial point, which is that the kids "stoned on self-adoration" in 1967 probably constituted a good chunk of the public now coming to this Whitney exhibit! They are all grown up now; and (guess what?) most of them are probably still stoned on self-adoration! In other words I read this whole affair as a massive attempt to pander to an audience that probably has the affluence to keep the Whitney afloat but needs to be titillated with a nostalgia for a past they never really had. My guess is that this whole affair was basically a fund-raising strategy; and it will be interesting to see whether or not it succeeds. Whatever the case may be, there is definitely an element of chutzpah in passing an act of pandering off as a major artistic event; but I hope the Whitney will accept the award with pride!

Too Much Technology?

Given our obsession with carrying around more gadgets (which are supposedly improving our day-to-day life), it was inevitable that one would end up interfering with another in an unpleasant, if not catastrophic, way. Drivers of the 2007 Nissan Altima and Infiniti G35 seem to have encountered one of those ways. These cars have "I-Keys," which, as Reuters describes, are "wireless devices designed to allow drivers to enter and start their cars at the push of a button." Regardless of whether or not this innovation is satisfying any significant need (such as what to do when both your arms are full), the I-Key has a problem that will probably affect just about anyone who buys one of these cars. According to Nissan (North America) spokesman Kyle Bazemore, "We discovered that if the I-Key touches a cellphone, outgoing or incoming calls have the potential to alter the electronic code inside the I-Key." You can figure out the rest. Once the code changes, you cannot open the door or start the car. Apparently, if you have an I-Key (does the I stand for "intelligent?"), your car no longer has the old-fashioned metal-in-slot alternative as a backup. It makes one wonder if we have become so obsessed with innovation that we no longer can be bothered with questions as petty as those concerned with ultimate benefit; or, as Delmore Schwartz might say, we are trying to get away with dreaming without responsibilities!

Thursday, May 24, 2007

It's Still about Will!

No matter how many YouTube clips I see, I still cannot bring myself to watch broadcasts of Countdown. I appreciate the value of Keith Olbermann's bully pulpit; but the product is just too slick (not to mention commercial-laden) for my tastes. Whether or not he is "on my side" matters less to me than the uncomfortable truth that, like most of the other players in mainstream media, he is contributing to the culture of what I keep calling "A World Without Reflection." I found this particularly evident as I watched the full ten minutes of, as Truthdig put it, "disbelief and disgust over the Democrats’ war funding capitulation." (I watched the video on the Truthdig site.)

Fortunately, there was more reflective value in the comments submitted by Truthdig readers/viewers. I was particularly struck by one made by unregistered commenter david e:

The Democrats’ collapse illustrates the bankruptcy inherent in any opposition based on instrumental, as opposed to moral, criteria. “This isn’t working” cannot sustain a position. “This is wrong” can. Until enough Democrats embrace the latter, they will lack the strength and justification to employ the only real tool they have---to refuse any appropriation for continuation of the war.

This is well put; but it has good-news-bad-news implications. The good news is that the comment states in the most explicit language what it is that Bush has that the Democrats lack, the power to reduce everything to right and wrong (or good and evil) from the point of view of a personal moral stance. The bad news is that the Bush tactic still trumps anything the Democrats try to do, even when their efforts involve responding to the demands of their respective constituencies.

Let me try to make this a bit more specific. One way to interpret Bush's position would be to say, "I do not care what Cindy Sheehan is saying. My personal morality tells me that we are waging war against evil and must prevail, whatever the price may be." The Democrats do care what Cindy (and all of those who sympathize with her) says. They should, because their respect for that opinion got them elected. To put it in david e's terms, their morality has more to do with the "goodness" (or "sanctity") of the voice of voting Americans.

So what's the difference? Yesterday I argued that Russ Feingold seems to have assessed the situation most accurately: "There has been a lot of tough talk from members of Congress about wanting to end this war, but it looks like the desire for political comfort won out over real action." The bottom line is that Bush has the will to stand behind his morality; and the Congress prefers "political comfort" to strength of will.

By the way, if there is any remaining doubt about what the strength of will can do, Sundance is currently airing the documentary Sir! No Sir! This is an excellent chronicle of the efforts to end the war in Vietnam that took place within the military. It left me with hope that a parallel effort may yet emerge.

Wednesday, May 23, 2007

"What People Hear"

In his latest piece for The New York Review, "How Democrats Should Talk," Michael Tomasky reviews three fascinating books:

  1. The Greatest Story Ever Sold: The Decline and Fall of Truth from 9/11 to Katrina, by Frank Rich
  2. Words That Work: It's Not What You Say, It's What People Hear, by Frank Luntz
  3. The Political Brain: The Role of Emotion in Deciding the Fate of the Nation, by Drew Westen

I owe my title to Luntz, whom Tomasky describes as "the Republicans' most famous spin doctor of the past fifteen years." Writing as an advocate for the Democrats, Tomasky has a lot of admiration for Luntz. This is not nothing-succeeds-like-success admiration but instead involves a key insight (which is also the basis for the third book under review):

What Luntz does understand that many Democratic consultants do not is that language used by a politician sets off a network of associations in voters' minds. These associations, even for people who follow current events closely, are more likely to be emotional than rational, and voters "reason" their way toward emotionally biased conclusions.

Once again, I felt the icy breath of synchronicity, because I read this review very shortly after seeing Flock of Dodos on Showtime. The description of this film on the Showtime Web site is worth repeating:

Evolutionary biologist Dr. Randy Olson is the star of this tongue-in-cheek documentary that examines both sides of the evolution versus "intelligent design" debate, a controversial subject that has pitted faith against reason and school boards against scientists in an increasingly emphatic war of words and ideas. Which side will survive, and which will go the way of the now-extinct dodo bird?

Olson is at his most effective when he lets his subjects (on both sides) speak for themselves; and the most painful truths emerge when he manages to assemble a "flock" of Stephen Jay Gould's former colleagues for a recreation of their Harvard poker nights. What is revealing is the way these really smart guys end up venting about the intelligent design advocates. In retrospect I really should have counted the number of times the word "stupid" (along with its variants) was invoked in that one scene. These guys live and breathe rationality; but, when it comes to being able to influence decision makers, such as school boards, they are the ones most likely to go the way of the dodo!

The bottom line is that they all need to leave their laboratories long enough to get some coaching from Doctor Luntz. Actually, a good start would be to grasp two simple rules:

  1. You are not going to persuade anyone of your position if the first thing you do is call that person stupid.
  2. Furthermore, you are not going to persuade that person if you call anyone that person clearly admires stupid.

The problem these scientists do not seem to be able to confront is that these are rules of rhetoric, rather than logic. In his play about Galileo, Brecht had Galileo believe that presenting the Inquisition with basic scientific facts and the logical path of his reasoning would be sufficient for the Inquisition to clear him of the charges of heresy. Brecht's point was that Galileo was woefully mistaken and paid a high price for his mistake. Olson's poker table was surrounded by latter-day Galileos who refuse to admit that not all thought processes are rational (even though I am sure they could all come up with examples from their own everyday lives). Brecht's tragedy is not that irrationality exists but that so many rational minds refuse to admit that irrationality cannot be dealt with by rational means. Think about that the next time you pick up a biology text and discover that the chapter on evolution has been replaced by one on intelligent design!

Boredom

It would appear that Andrew Keen wanted to be more than playfully self-referential when he wrote a blog post entitled "Blogs are boring." (I sure hope so. The last thing the blogosphere needs is a surfeit of Douglas-Hofstadter-like self-referentiality!) Now, while I agree with the premise, I am hoping that I can explore it a bit without getting mired in boredom.

Perhaps the problem has to do with reading. One of the things that makes a good author is the ability to deliver a message that does not bore the reader. (This is why Hofstadter-bashing is one of my guilty recreational pleasures!) This is particularly the case when the author happens to be writing about "the history of ideas," because, in the interest of good writing, you end up reading about the ideas and the paths leading to them. All the dead-end paths are excised from the historical account. It's not that different from reading mathematics: You read the proof of the theorem rather than the tortuous and frustrating process that eventually resulted in that proof. To appropriate the idea behind the title of an essay Seymour Papert once wrote, you are reading mathematics rather than reading about being a mathematician because the interest-level of the former vastly exceeds that of the latter.

So, if the Internet is the new "foundry" or "crucible" for new ideas, it stands as a corollary to the preceding paragraph that the whole Internet is boring! The interest comes only from evangelists and historians (if there are any) and the filters they apply. This then leads to another corollary: Guess what? Life is boring (at least most of it).

Now let's take this as an Oblique Strategies move and try to come "part way back." If we take a hermeneutic stance, then blogs, the Internet, and life itself are what you "read" them to be. The interesting is always there to be mined, but it will not shoot up of its own accord like the gusher in Giant. It is the product of our ability to react to what we encounter both cognitively and emotionally (without playing games about sides of the brain). As Daisetz Teitaro Suzuki used to teach at Columbia, with the right kind of examination and over enough time, the boring can become interesting!

Taking Globalization to Task

Leave it to Amnesty International to lay out, in clear no-nonsense prose, specific examples of sickness brought on by drinking too much globalization Kool-Aid. Those interested in a more condensed version can turn to an article by Jimmy Burns for the Financial Times. Here is his lead:

The United Nations must develop international standards that hold big business accountable for its impact on human rights, Amnesty International says in its annual report published on Wednesday.

There is evidence in many parts of the world that “people are being tipped into poverty and trapped there by corrupt governments and greedy businesses”, Irene Khan, Amnesty’s secretary general, says in her foreword to the report.

Burns then goes on to cite several key examples from the report.

There is no doubting that this report needed to be written. The question is whether or not the United Nations is the governing body that can do anything about it. Like it or not, there is just too much cultural differentiation in the construal of the abstract concept of "human rights;" so the founding effort of the United Nations to produce a "declaration" of the concept has turned out, in practice, to be little more than symbolic. Unfortunately, this will probably all come down to the cynical question of who cares and how much; and there is no need for the United Nations to involve itself in that question. Amnesty International has probably already developed a set of standards. While it has no power of enforcement, it could offer them as the basis for a "seal of approval," for which any business could apply. If this were taken seriously enough, then the power to grant and withdraw the seal would have an impact on how the business is perceived, which may then have a subsequent impact on its prosperity. If it were not taken seriously, then the world would have basically declared that human rights is just not the priority issue that it was declared to be when the United Nations was founded. This would not satisfy Amnesty International very much, but at least they would know the level of hypocrisy with which they are dealing.

Demo Skepticism

There is no doubting that Doug Engelbart is one of the most important pioneers of the computer age in which we live today; but I like to live by the premise that there is no accomplishment, however great, that cannot benefit from being viewed through skeptical lenses. Thus, when there is a flood of adulation, I take it as a cue to rev up my skeptical engines. A recent example of such adulation can be found over at confused of calcutta:

You know, I never thought of looking for The Mother Of All Demos on the web.

Then, last week, I had the opportunity of attending a Doug Engelbart seminar at MIT, and the incredible privilege of having a private dinner with him, courtesy Tom Malone and the MIT Center for Collective Intelligence. [My thanks to all concerned.] Naturally, I looked for the video soon after, and there it was.

If you haven’t seen it already, do look at it: link. I’ve also shown it in my VodPod in the sidebar to this blog. It’s an hour and 15 minutes long; do not adjust your set when watching it, there is no sound for the first few minutes. I think everyone who has any interest in computing should watch it and hear from the man who, along with his team, gave us the mouse, hypertext, and the precursor to today’s GUI.

Make of this what you will. Lars Ploughmann made the following comment (which I shall reproduce in its entirety):

Barely a minute into his presentation, just after having described how people might work with information using a computer, Engelbart airs the question: How much value could you derive from that? (His smile suggests that he suspects ‘a lot’ of value could be derived from it.)

The question is a sound guide in research and business. Are too few people asking it too infrequently today?

I can react to this because it reminds me why I still have this irresistible urge to be the "kitchen cynic" every time Engelbart struts his stuff (without denying him credit for his many accomplishments, I should add). JP Rangaswami, who runs confused of calcutta, recently wrote about words "whose sound reminds me of fresh chalk squeaking on a glass-fronted blackboard." The word that inspired this invective was "content;" and I expressed my sympathy in a comment. However, the word that really does it to me is "value!" Given the hash that philosophers have made of it, one would think that we knew enough to leave it alone; but, sometime in the last decade or so, our language became contaminated by the phrase "value proposition." The hash has been crammed down our throats ever since.

I prefer to live by words that Robert Solow wrote last September in The New York Review: "Modern economics dispenses with the notion of 'value' altogether, and deals only with ordinary, observable market price." In my previous blog I chose to paraphrase this as "In other words 'value' is too abstract a concept to mess with the nuts-and-bolts practices of economics; so stick to the concrete phenomena you can observe." There is no denying the need for "a sound guide in research and business." A good start is to follow Solow's path and begin with the question "How much would you pay for it?" The rub comes when "it" doesn't have a concrete antecedent, which is the case when "it" is a vague construct like "how people might work with information using a computer!" Yes, we would all benefit from "augmentation" (or facilitation or what have you) of our workplace skills but at what price? Sound business demands that the price be made explicit and justified!

Comfort is the Enemy of Will

Related to my recurring theme of the importance of consequences is the proposition that making the decision to think about consequences (in the manner recommended by Neustadt and May) and then acting on the results of one's thoughts are both very much matters of will, however critical the situation may be. (The most recent situation I examined in this light is the ugly state of affairs in Congolese Africa.) One does not have to get wrapped up in Schopenhauer or Nietzsche to appreciate the significance of will in the basic matters of getting on in the world. Nevertheless, we have to ask, if will is so important in times of crisis, why is it now in such short supply?

One would not normally look to politics for an answer to this question, but Senator Russ Feingold of Wisconsin may at least point us in the right direction. The context is that of the "compromise" between the Congress and the White House over funding the Iraq War, better described by John Nichols' blog for The Nation as how "House Speaker Nancy Pelosi and Senate Majority Leader Harry Reid flinched in their negotiations with the Bush administration over the continuation of the Iraq occupation." Yesterday's broadcast reports from the BBC described this as Congress finally acknowledging that it is not their business to set foreign and military policy; but, interestingly enough, this language is absent from the report on their Web site. From Feingold's point of view, it is the business of Congress to represent the people who elected them to their respective seats. In his words:

Congress should have stood strong, acknowledged the will of the American people, and insisted on a bill requiring a real change of course in Iraq.

There is the word, not quite in the center of his sentence: will. The context should galvanize us all around the political framework that was originally conceived by the Founding Fathers: "the will of the American people." So why did Congress fail to acknowledge this will; or, to play with the word a bit, why did the Congress lack the will to acknowledge the will of the American people, which had been expressed so explicitly in the last election? Feingold's answer is actually in the sentence that precedes the one just quoted:

There has been a lot of tough talk from members of Congress about wanting to end this war, but it looks like the desire for political comfort won out over real action.

There we have the other word, the one that lies at the heart of the title selected for this post. Yes, will is important in a time of crisis; but it may not be engaged if the crisis, itself, is not acknowledged. The American people feel the crisis. Those who have lost family in Iraq may feel it more than others; but the media have provided the sort of coverage that practically guarantees that the rest of the nation will "feel their pain." Feingold has been bold enough to assert that the political comfort of our elected representatives trumps that pain, which means that those in pain really are not being represented.

Will they remember this the next time they have to make a choice? Most politicians tend to count on the right mix of short memories and a tendency to look at averages rather than instances. According to Nichols, MoveOn.org is already mobilizing to make sure that they do not get away with that tactic, at least where this issue is concerned. This brings us to Nichols' punch line: How many members of Congress (and which ones in particular) have the will to recognize crisis at the sacrifice of their own "political comfort?" Here is how Nichols wrapped up his own assessment:

Feingold is, of course, right. But how many senators will join him in voting "no"? That question is especially significant for the four Senate Democrats who are seeking their party's presidential nomination: New York's Hillary Clinton, Illinois' Barack Obama, Delaware's Joe Biden and Connecticut's Chris Dodd. Dodd says he is "disappointed" by the abandonment of the timeline demand; if he presses the point as he did on another recent war-related vote, he could force the hands of the other candidates. If either Clinton or Obama do go ahead and vote for the legislation, and certainly if both of them do so, they will create a huge opening for former North Carolina John Edwards, who has staked out the clearest anti-war position of the front runners for the nomination. But this is about more than just Democratic presidential politics: A number of Senate Republicans who are up for reelection next year -- including Maine's Susan Collins, Minnesota's Norm Coleman and Oregon's Gordon Smith -- may well be casting the most important votes of their political careers.

Collins, Coleman and Smith have tried to straddle the war debate. If they vote to give George Bush another blank check, however, they will have removed any doubt regarding how serious they are about ending the war -- as will their colleagues on both sides of the aisle.

This is an opportunity for the American people to watch the behavior of their elected representatives very closely, perhaps even without the mediation of mainstream media (or, for that matter, representatives of the "political blogosphere" like Nichols). They seem to recognize the crisis and have made at least one significant effort to exert their will. Whether or not that effort will be sustained remains to be seen.

Tuesday, May 22, 2007

The Casanova Connection

There is an interesting bit of Jungian synchronicity in the fact that the San Francisco Opera is about to open its summer season with Don Giovanni while The New York Review has just run a piece by Michael Dirda on Giacomo Casanova's History of My Life. For those unfamiliar with the latter, the first thing they are likely to notice in Dirda's piece is that Casanova's History runs to 4289 pages in Willard Trask's English translation; and, if we read further into the piece, we discover that Casanova died before completing it. We also read that Trask provided "extensive" endnotes; but, even without the endnotes, this is a massive undertaking.

Much has been made about the connection between Casanova and the protagonist of Mozart's opera. Thomas Allen even explored it in a slightly perverse way in a television program he made that was frequently broadcast on Ovation before they bought the farm. We know from Lorenzo da Ponte's Memoirs that he knew Casanova (and that Casanova owed him "several hundred florins"); but it appears that any direct contact took place after da Ponte had completed his Don Giovanni libretto for Mozart. According to the dates in Dirda's piece, Casanova could well have been working on his History at the time of their meeting; but Dirda does not say anything about Casanova showing the "work in progress" to anyone. Nevertheless, Casanova did have a reputation; and da Ponte was probably well aware of it (since he tried to be well aware of everything). However, there is something curious in Casanova's text that leads me to wonder whether da Ponte was the one doing the influencing, at least indirectly.

I am a great believer in first sentences. I got this from my music composition teacher. He believed that, just as the initial gesture of a musical composition determines how much attention you will devote to what follows, the first sentence of a book really determines whether you want to read any more. Dirda is quite taken with the first sentence of Casanova's History; and, given the length of the entire work, it had better be impressive! Dirda goes so far as to claim that it "could easily be mistaken for a sentence by Gabriel García Márquez" (which may be a sign that Márquez had read and enjoyed his Casanova). Here is the sentence:

In the year 1428 Don Jacobe Casanova, born at Saragossa, the capital of Aragon, natural son of Don Francisco, abducted Donna Anna Palafox from a convent on the day after she had taken her vows.

Had Casanova publicized this fact about his ancestry in a way that da Ponte would have encountered it; or is this a memory that was "embellished" by Casanova's knowledge of the Don Giovanni libretto? Dirda goes to some length to argue that most of Casanova's accounts are fundamentally true. However, Casanova made it a point to camouflage many of the names out of a sense of good taste; and this could well have been a case where no one really knew the name of the abducted maiden. So I have to wonder whether da Ponte played a hand in a mini-narrative which, in his libretto, is the tale of Donna Elvira, whose name is then replaced by that of Donna Anna, the "victim" we encounter at the very beginning of the opera.

Is this making too much of a single sentence? That may be so; but it is a first sentence, which carries a lot of baggage. It may well be that Casanova decided that the best way to introduce his History as a "good read" would be to draw upon the opening scenes of Don Giovanni, which were likely to be familiar to most of his potential readers. Casanova may have charted his own course in life; but, when it came to writing about that life, he may well have known that a connection to Mozart and da Ponte could have been used to his advantage!

Monday, May 21, 2007

Living with the Anarchy of Web 2.0

Caroline McCarthy has finally decided to weigh in on the Engadget story from last week, in which the release of an unfounded (and false) rumor sent the Apple stock price on a ride that wrenched the guts of shareholders, if no one else. Ms. McCarthy wrote a "think piece" for CNET News.com, which seemed appropriate since, like many others, I first encountered the story on the CNET News Blog. My initial reaction (written, I must confess, immediately after reading the News Blog post) was to think less about the Apple shareholders and more about what this story was telling us about the Web 2.0 world, where everything happens at "Internet speed" without very much (if any) regulatory intervention:

However, as it was introduced, this is really a story about consequences, particularly the consequences of a world that now moves at Internet speed. Ellen Goodman wrote about this sort of thing in news reporting last month in her piece entitled, "The Benefits of Slow Journalism;" but I would guess that she does not share very many readers with CNET.

Certainly, if Ms. McCarthy was aware of Ms. Goodman's column, she gave no indication of it, other than applying "slow journalism" practices in preparing her own comments. While Ms. McCarthy used this as an opportunity to warn us about "the era of gullibility 2.0," I think she missed out on some important points (which, to be fair, also were not addressed by Ms. Goodman):

  1. The whole premise of the blogosphere is that anyone can play, no matter how uninformed one may be of petty details like professional standards (let alone the world as it happens to be when you look closely at it).
  2. By lauding this premise, Web 2.0 evangelists make a virtue out of taking action without giving any consideration to consequences.

To some extent the "lesson" from Ms. McCarthy's piece is as inevitable as it is familiar:

Let the reader beware!

This would be good enough were it not that, in the broader context of the Web 2.0 vision, the "reader" is not necessarily an active participant. The most recent example is the owner of that house in Tacoma that got trashed due to a bogus Craigslist item.

It does seem as if there is some element in human nature that tends to ignore consequences, unless it is just a more general aversion to "inconvenient" thoughts. In many ways this has become the windmill that I am most fixated on challenging. On my last blog I used the "consequences" tag twelve times; but on this one, as I write this post (and before I tag it) the count is already up to 85, the most recent being yesterday's reflection on Jimmy Carter. I doubt that I shall ever vanquish the windmill; but, if I can get a few more people to start using the word in their respective working vocabularies, then I may be able to take at least a little satisfaction in my efforts!

Sunday, May 20, 2007

Plains Speaking

Jimmy Carter is publicizing his new audiobook series, Sunday Mornings in Plains, a collection of weekly Bible lessons from his Georgia hometown, the press is following him, and Al Jazeera is following the press. One cannot blame Al Jazeera for their interest. Carter's last book (print medium) dealt with the Middle East in the most direct terms he could muster; and one of those terms happened to involve the concept of apartheid. In this case it should be no surprise that a book based on Bible lessons should turn its attention to a President who takes pride in attributing his decisions to his direct communion with God. Whatever anyone may have thought about Carter as a President, it would be hard to find anyone who doubted the seriousness of his own sense of religion; and that sense has now led him to speak out against Bush in very harsh terms. Here is the Al Jazeera account:

"I think as far as the adverse impact on the nation around the world, this administration has been the worst in history," Carter told the Arkansas Democrat-Gazette in a story that appeared in the newspaper's Saturday editions.

"The overt reversal of America's basic values as expressed by previous administrations, including those of George H.W. Bush and Ronald Reagan and Richard Nixon and others, has been the most disturbing to me."

He also said that Bush has taken a "radical departure from all previous administration policies" with the Iraq war.

"We now have endorsed the concept of pre-emptive war where we go to war with another nation militarily, even though our own security is not directly threatened, if we want to change the regime there or if we fear that some time in the future our security might be endangered."

Al Jazeera further reported that this seems to be the context in which Carter has chosen to assess Tony Blair's career as Prime Minister:

In a separate interview with the BBC, the British state broadcasting service, Carter also criticised Blair, the British prime minister.

Asked how he would judge Blair's support of Bush, the former president said: "Abominable. Loyal. Blind. Apparently subservient."

"And I think the almost undeviating support by Great Britain for the ill-advised policies of President Bush in Iraq have been a major tragedy for the world."

These remarks are valuable not just for those of us who simply like to welcome new voices raised in criticism of the Bush Administration. They also remind us that religion does have a place in just about any culture, as long as we are cognizant of what that place is. While I continue to be content with my own atheist stance, I can appreciate the extent to which religion provides a means to reflect on many of the challenging concepts that life throws at us (such as evil) and on our role in a "real world" that imposes all of those challenges. Unfortunately, too many religious leaders prefer to see themselves as the vessels of simple answers to such challenges. Sadly, not only are those simple answers wrong; but also, more often than not, they fall back on destructive strategies, such as intolerance of other views of (and approaches to) that "real world." Thus, when anyone chooses to draw upon their personal sense of religion, it is important for the rest of us to figure out on which side of the coin that sense lies. Carter is one of those whose sense has always been on the side of reflection rather than simple answers, which is why I continue to value his observations when he chooses to share them.

Fleshing out the Story

It turns out that the BBC story that provided me with grounds for giving Michael Moore a Chutzpah of the Week award was more than a little bit incomplete. The lacunae do not detract from either Moore or the chutzpah, but they are still interesting. Apparently, Associated Press has been doing a better job of covering Moore; and Jocelyn Noveck can now offer a more thorough account:

Lost in all the publicity over Moore's trip is the reason he went to Cuba in the first place.

He says he hadn't intended to go, but then discovered the U.S. government was boasting of the excellent medical care it provides terror suspects detained at Guantanamo. So Moore decided that the 9/11 workers and a few other patients, all of whom had serious trouble paying for care at home, should have the same chance.

"Here the detainees were getting colonoscopies and nutrition counseling," Moore told The Associated Press in an interview, "and these people at home were suffering. I said, 'We gotta go and see if we can get these people the same treatment the government gives al-Qaida.' It seemed the only fair thing to do."

So the group, which included eight patients — three ground zero workers and five others — headed off by boat towards Guantanamo. From a distance, with cameras rolling, Moore called out through a bullhorn that he wanted to bring his friends for treatment at the naval base. He got no response.

"So there I was with a group of sick people," he says. "What was I going to do?"

The answer: head to Havana. There, the film shows the group getting thorough care from kind doctors. They don't have to fill out any long forms; health care is free in the Communist nation, after all.

But did the American film crew get special treatment because they were, well, an American film crew? Moore and his producer, Meghan O'Hara, insist not. "We demanded that we be treated on the same floor as all Cubans, not the special floor for foreigners," Moore told The AP. Still, the doctors obviously knew they were being filmed, so it's hard to know — although Cervantes [one of the patients] said she went back alone with no cameras and was treated similarly.

If anything, this extended account further justifies Moore's award for exercising chutzpah so outrageously in order to make his point. However he chose to frame the narrative, it is hard to believe that Moore had not anticipated the folly of sailing to Guantanamo with no means of communicating with anyone there other than a bullhorn from a boat. Since his film Sicko advocates a socialized medical system, Moore most likely realized that this would be a provocative way to show his viewers socialized medicine "in action." It was just his rhetorical game, and the Cubans were happy to play it.

The real insight from this story comes back to the reaction of our government and Moore's reaction to that reaction. I suppose that ordinary life has always been the battleground on which opposing propagandas duke it out. We are most aware of it in all the negative advertising that fills the airwaves in the months (now years, apparently) leading up to an election; but we tend to forget that all the advertising we encounter amounts to little more than the progression of skirmishes in a series of propaganda wars. Health care just happens to be one of the bigger battlefields. It is this more general state of affairs that we come to recognize when Moore undertakes one of his muckraking projects. The only real irony is the extent to which the muck-generators serve his cause through acts of opposition that end up shining more light on the very things Moore wanted us to see in the first place!

The Concert as Madeleine

Looking over yesterday's post, I realize that this one two-hour student recital seems to have unleashed a flood of memories. Most of them were already "brewing" while I was listening to the performances. All I had to do when I got back to the computer was refine them and turn them from a mental muddle into something (hopefully) readable. Refining was not an easy matter. Once I started writing I realized that I had totally forgotten the name of DARMS, and I have to confess that Google was very helpful when it came to tying up such loose ends. Rendering was more interesting, since I realized that all those memories found themselves organized in the structure of a "life story" (with a nod to Charlotte Linde). This, in turn, reminded me of one of Jerome Bruner's most interesting remarks in his Acts of Meaning book, when he credits Jean Mandler with the observation "that what does not get structured narratively suffers loss in memory." In other words I was experiencing the preservation of memories triggered by this one concert experience by structuring them as autobiographical narrative. Consequently, I finally "got," in a genuinely internal way (albeit on a far more modest scale), Proust's experience through which the taste of a madeleine in tea could trigger the memory of an entire life (which, in his case, however ordinary, was rendered in epic proportions). That, in itself, made yesterday's "trip" entirely worth the effort!

Saturday, May 19, 2007

Learning from Getting it Wrong

I spent two hours this morning listening to the advanced chamber music students from the Preparatory Division (as in too young to matriculate) of the San Francisco Conservatory of Music. I could start out with some hyperbolic statement about some of these kids being smaller than their instruments. That would be a bit too extreme, although clarinetist Gabe Bankman-Fried certainly came close; and, while his name may have been a bit overextended for his stature, his sound certainly made up the difference. It's rough making the first impression in a showcase; but, while many other impressive performers followed, his performance of the clarinet version of the Beethoven Opus 11 piano trio was still with me as I was leaving the Conservatory!

My greatest fear in listening to young talent is that, no matter how young they may be, there is usually someone who is already preparing them to face the competition circuit. Following up on a previous post, I would accuse that "someone" of robbing a kid of what may be the last time to really enjoy the music s/he has decided to play. So, as each little ensemble came out to strut its stuff, I found myself sorting them out. There was probably only one that just did not "get it:" The opening gesture of the Haydn quartet whose first movement they had selected (Opus 71, Number 2) was just an unfortunate blooper from which they never recovered. They were too poker-faced to give any indication of whether or not they realized this but certainly did not show any satisfaction when taking a bow. The remainder could be divided into those who did want to enjoy the music and those who simply wanted to demonstrate that they did "get it;" and it seemed as if the best way to tell the former from the latter was that the performers in the former class had locked into that wonderful sensation of being able to hear and enjoy your own performance while it is taking place.

Music education does not prepare one for this kind of talent; and I think much of the problem stems from the fact that what we have chosen to call "music theory" has little to do with what theorist Thomas Clifton once called "music as heard." When the camel of the computer first stuck its nose under the tent of music theory, we encountered a series of misconceptions about how to think about music. Some of them are embarrassments of the past. Others are still with us. All provide us with lessons to learn.

When I was first getting into this game, it seemed as if any theoretical research had to begin with a representation system; and, since music was grounded in a relatively sophisticated notation, the best approach would be to develop a representation system for the current state of music notation. Back in those days the laser printer was barely a glint in anyone's eye; but IBM had a project called the "photon printer," which was intended to be programmable for rendering any kind of image. Stefan Bauer-Mengelberg was at least part-time at IBM in those days and launched a project for a programming language for this printer that would render music notation. This was originally known as the "Ford-Columbia Input Language" and would later be called "DARMS" (Digital Alternate Representation of Musical Scores), sort of an homage to the experimental music work going on at Darmstadt. DARMS was nothing if not thorough. I remember Ray Erickson showing me a sample of how it had stood up to the extreme demands of a score by Elliott Carter. The problem was that there was this school of thought that formed around DARMS with the idea that it would be the perfect input language for experiments in computer analysis of musical compositions. Needless to say, it was a decidedly inappropriate representation system, since it was concerned only with where marks should be placed on a sheet of paper. It would be a bit like trying to use PostScript as an input language for computer-based literary analysis. Sure, it would capture the sort of visual detail one encountered in Mallarmé, cummings, or the "concrete" poets; but it would be thoroughly unwieldy for the more ordinary forms of text usage. As a notation DARMS was not particularly kind to either the "vertical" dimension of music (the progressions of harmonies) or its "horizontal" dimension (the interplay of the voices of counterpoint); and, since it was totally locked into the notation itself, it had no way to deal with the ways an actual performance would interpret what had been notated.

If IBM was focusing on music notation, Bell Laboratories, under the leadership of Max Mathews, was at the other extreme, developing a representation language for audio synthesis in its full generality. These languages were called the "Music" languages, the most developed being "Music V." This was a highly modular language, impeded primarily by the batch-processing system that supported it. I have no idea if Robert Moog was aware of Mathews' activities; but there is a lot of "family resemblance" between the modular elements of Music V and the component modules of the first publicly available Moog synthesizers. The problem here, however, was that all of the representation effort went into describing the computational synthesis for "instruments." The representation of what those "instruments" would then "play" was almost left as an afterthought; and, since Music V did not run in a real-time environment, the need to "play" the instruments (rather than "conceive of a score" for them) was not an issue. Indeed, the computer environment was so "user-hostile" that, for most "real" musicians, getting any sort of interesting sounds out of the system for a respectable duration of time was enough of an accomplishment that the results, whatever they may have been, were immediately dubbed a "composition!"
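
Since I no longer have Music V itself at my fingertips, the best I can offer is a minimal present-day sketch, in Python, of the general "unit generator" idea behind that modular approach: an "instrument" assembled from simple signal modules and driven by a note list standing in for the score. Every name and number below is my own invention for illustration, not anything taken from Mathews' language.

    import numpy as np

    SAMPLE_RATE = 8000  # assumed, kept small for brevity

    def oscillator(freq, duration):
        """Sine oscillator module: the basic sound source."""
        t = np.arange(int(duration * SAMPLE_RATE)) / SAMPLE_RATE
        return np.sin(2 * np.pi * freq * t)

    def envelope(duration, attack=0.05, release=0.1):
        """Linear attack/release envelope module shaping the amplitude."""
        n = int(duration * SAMPLE_RATE)
        env = np.ones(n)
        a, r = int(attack * SAMPLE_RATE), int(release * SAMPLE_RATE)
        env[:a] = np.linspace(0.0, 1.0, a)
        env[-r:] = np.linspace(1.0, 0.0, r)
        return env

    def instrument(freq, duration):
        """An 'instrument' is just a patch of modules combined together."""
        return oscillator(freq, duration) * envelope(duration)

    # The "score": (start time in seconds, duration in seconds, frequency in Hz)
    score = [(0.0, 0.5, 440.0), (0.5, 0.5, 660.0), (1.0, 1.0, 550.0)]
    out = np.zeros(int(2.5 * SAMPLE_RATE))
    for start, dur, freq in score:
        i = int(start * SAMPLE_RATE)
        note = instrument(freq, dur)
        out[i:i + len(note)] += note

Even in this toy form, the imbalance I described is visible: almost all the interesting structure lives in the "instrument" patch, while the "score" is little more than a list of numbers tacked on at the end.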

I took a crack at a representation that had more to do with music performance under the guidance of Marvin Minsky. The result was EUTERPE; and, if I had been more enterprising (or greedy), I probably could have tried to promote it as prior art for MIDI. I was primarily interested in the idea that the voices of a contrapuntal structure were like a bunch of computer programs running in parallel. So I asked what would be the right language for those individual voices, bearing in mind that the programs would sometimes have to coordinate with (i.e. cue) each other. Also, because I was getting much of my guidance from Ezra Sims, who was extremely interested in microtonality, I endowed EUTERPE with an octave that was divided into 72 equal parts, thus enabling Sims to experiment with both quarter and third tones. MIDI never considered that as an option; and these days, as we become more interested in alternatives to equal-tempered tuning, that is probably its greatest disadvantage.
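
For what it is worth, the arithmetic behind that 72-part octave is simple enough to sketch. This is only the tuning mathematics, not EUTERPE's actual syntax or its machinery for cueing parallel voices: with 72 equal steps per octave, each step is 1200/72, or about 16.67 cents, so a semitone is six steps, a quarter tone three, and a third tone four. The reference pitch below is an assumption for the example.

    A4 = 440.0  # assumed reference pitch in Hz

    def freq_72edo(steps_from_reference):
        """Frequency of a pitch a given number of 72-equal-division steps from A4."""
        return A4 * 2 ** (steps_from_reference / 72)

    print(freq_72edo(6))   # one equal-tempered semitone above A4 (about 466.2 Hz)
    print(freq_72edo(3))   # a quarter tone above A4 (about 452.9 Hz)
    print(freq_72edo(4))   # a third tone above A4 (about 457.3 Hz)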

Meanwhile Robert Cogan was experimenting with an entirely different approach to representation. In his book New Images of Musical Sound Cogan developed analyses based on images of the actual vibrations responsible for the sounds, thus taking the most direct path possible to address "music as heard." When he was working on this book, his apparatus was highly limited. These days we can do this sort of thing with just about any personal computer; and, when I was in Singapore, I even advised a Master's student on a project that involved using such data to compare performances of the Stravinsky "Serenade" for solo piano, one of the performances being Stravinsky's own. The most interesting thing about Cogan's book is that it is not limited to the usual "classical" compositions addressed by analysis; and, indeed, one of his analyses is of a Billie Holiday recording of "Strange Fruit." However, if the acoustic strategy of Music V was flawed by being based on abstractions that might not be particularly useful, Cogan's approach did not involve any abstractions. At the end of the day, he seemed to be using the visual displays simply to back up what his ear was telling him, which is not a bad idea but still needs to be done in some sort of consistent way.
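
For anyone curious what such a comparison looks like on a personal computer today, here is a minimal sketch along the lines of that Singapore project. It assumes two WAV files of the same piece (the file names are hypothetical); it simply computes a log-magnitude spectrogram for each recording, which is the raw material for the kind of "music as heard" images Cogan was after.

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    def spectral_image(path):
        """Return frequencies, times, and a log-magnitude spectrogram for one recording."""
        rate, data = wavfile.read(path)
        if data.ndim > 1:            # mix stereo down to mono
            data = data.mean(axis=1)
        f, t, sxx = spectrogram(data, fs=rate, nperseg=4096)
        return f, t, 10 * np.log10(sxx + 1e-12)

    f1, t1, img1 = spectral_image("serenade_performance_a.wav")   # hypothetical file
    f2, t2, img2 = spectral_image("serenade_performance_b.wav")   # hypothetical file

    # Even a crude summary, such as average energy per frequency band, already
    # shows where two performances of the same score differ as sound.
    print(img1.mean(axis=1)[:10])
    print(img2.mean(axis=1)[:10])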

This problem of abstraction is basically the problem that faces all of the attempts I have outlined. However, one cannot define criteria for the "right" abstraction without first identifying the sorts of questions one hopes the abstraction will answer. (Minsky made this point in his "Matter, Mind, and Models" essay.) The sad truth is that the questions one asks in music theory are more normative (the sorts of questions folks have asked for the last two hundred years) than epistemological or even ontological. This is why I explored the possibility that performances, rather than notes or, for that matter, audio traces of recordings, could best be examined through the three lenses of logic, grammar, and rhetoric. Each of these is a distinct category that introduces its own family of questions, and each of those families needs to be addressed through a distinct set of theoretical strategies and methods.

This, of course, is my own concept of "rehearsal" at its most extreme. I could probably "wing" a few representative questions for those categories; but it would be better to be honest and admit that I have not thought that far yet! Nevertheless, today's recital reminded me of just how important listening is and how little we actually know about the process as it applies to music. Clifton's approach to "music as heard" was grounded in phenomenology but still suffered from the problem of not beginning with an examination of what questions he actually wanted to answer. Of course, it probably makes sense to address the question of who is doing the asking. The questions a performer needs to ask are unlikely to be the ones that an audience listener would raise, no matter how well informed. The only educated guess I would hazard, though, is that the sorts of questions raised by an academic trying to get a paper published are unlikely to align with either of these other two classes of questions!

Friday, May 18, 2007

Living with Comcast

I realize that I have selected a title with pejorative connotations (like "Living with Cancer"); so let me start off by asserting that this will not be an entirely negative account (nor will it be entirely positive)! Thus far in the cable business, the customer has very little (if any) choice. Indeed, the only alternatives may be to go with a dish or manage with whatever your antenna can pull in "from the air."

When my wife and I moved to Palo Alto, the cable feed was provided by a local cooperative. They provided what, at that time, was a broad variety of options for the many tastes and preferences that make up the Palo Alto demographic. They also provided a feed for FM radios, through which one could hear BBC World, as well as a few stations from across the country. Since the only option for classical music in the Bay Area was KDFC (which, for anyone who actually listened to the stuff, was the moral equivalent of no option at all), I found it a real treat that, after many years of hearing about it, I could now actually listen to WFMT, coming out of Chicago.

As just about anyone could guess, Palo Alto Cable Co-op was not a particularly profitable undertaking; and eventually they had to throw in the towel. They were bought out by AT&T Broadband, and gradually most of the good things started evaporating. It turned out that the AT&T guys knew nothing about the FM feed, so it sort of managed on its own for a while in a closet. However, when the system "went digital," everything changed. I cannot remember if this took place before or after Comcast swallowed up AT&T Broadband; but, in terms of our family viewing habits, they were both turkeys of the same order. Not only had I lost my best radio feeds, but also I now had a set-top box that could no longer accept channel-changing signals from my VCR! These were the new semantics of "progress."

By the time we bought the place in San Francisco, Comcast had ironed out many (but not all) bugs. They had launched On Demand, which was a royal pain to use but was usually preferable to what was actually being broadcast at the times we were viewing. Also, the Homeowners Association for the San Francisco place (originally purchased for weekend use only) offered the Comcast analog feed at no charge. Since we would bring videotapes up from Palo Alto, this was fine. I also thought that the San Francisco place would give me an opportunity to compare Comcast broadband with my SBC service in Palo Alto. As I recall, I gave them about half a year from the time we moved in. At the end of that half-year, they were less certain about when broadband would be available than when we first set up the San Francisco place; so I ditched the opportunity to compare and called SBC.

Once we sold the Palo Alto house, I braced myself for upgrading my San Francisco Comcast service. It turned out that, due to San Francisco having a different "tier system" than Palo Alto, I would pay less for what I wanted, enough less to cover the fee for one of the boxes having a VTR. This was when I realized that Comcast did not want a box that would communicate with a VCR, because their longer game plan was to wean customers away from that VCR. On the whole the VTR service has been pretty good. Also, while KDFC is as bad as it ever was, I now have XM to satisfy my radio needs; so I no longer am mad at Comcast for taking away my FM feed (even if they were not immediately responsible). Besides, KDFC reception is really terrible in our neighborhood. That would be a moot point, except that I have a neighbor who wants to record the Opera broadcasts that KDFC now provides once a month. The good news for her is that she can now get a cleaner signal through her Comcast box. All she needs to do now is direct the audio feed from the box to her recording equipment.

The one problem with the VTR is that we can only watch it where its box is, which happens to be the bedroom; so, once again, we have gotten interested in On Demand for when we want to watch in the living room. Sometimes I want to check to see if something I have already recorded can be seen through On Demand. The On Demand menu system makes this a real pain; and, as more items are available, the greater the pain. This is why I was so glad to discover that Comcast now has a Web site for all of the On Demand content. The site has a disclaimer that what it has is "a sample of programming available from ON DEMAND;" but I think that means that only the nationally-available content is covered. This should not matter much, since I am not that interested in the local content.

This Web site is definitely an improvement over the set-top box interface. It is broken down into separate pages for each letter of the alphabet, but there is also a reasonably flexible search window. You cannot do a phrase search; but, given the limitations of the content, you can find stuff. There is plenty of room for improvement, but the bottom line is that it looks as if Comcast is inching their way towards more manageable ways to take advantage of their services. Furthermore, the providers are starting to take advantage of On Demand. The last season of The Wire released new episodes through On Demand before they were broadcast on HBO. Showtime is currently doing the same with The Tudors and released all of the second season of Sleeper Cell while the episode-by-episode broadcasting was taking place. (My wife loved that one; life came to a screaming halt while we watched the whole thing over two evenings!)

On the whole I am far less negative about "living with Comcast" than I was when they invaded my house in Palo Alto. I have heard mixed reports on how well their broadband service is doing, so I do not particularly care one way or the other about missing the opportunity to compare them with DSL. Meanwhile we have friends who have become Slingbox advocates. This leads me to wonder if, having already bundled a VTR into their box, Comcast now has plans to do the same with a Sling-like service! That will give us one more way to "amuse ourselves to death!"