Don Quixote for the DH, Part II

The humanities have traditionally laid claim to value in virtue of the meanings they enrich. In practice, if not in theory, the activities of humanists have handled meaning by revelation and discovery. Eschewing the sophistry that would lend good arguments even to bad ideas, the humanist, like the scientist, digs and delves — even if, alone with her books and the light of her intuition, she is less scientist than shaman, less explorer or inventor or legislator of a world, than its supplicant or priest. Humanistic activity, traditionally conceived, begs to be judged on its failure or success as meaning — i.e., as a contribution to an intersubjective, creative process. It is as a set of attachments to conversation and dialogue that this activity commands our respect. The elitism of many of those attachments notwithstanding, we who “do” the humanities tend to ascribe value to our work in terms of the survival of ethical thinking and feeling as forms of conversation. Since the value of conversation lies in its openness to the unforeseen, viz., in its contribution to the unfolding of history, it resists measurement by normative units of utility or efficiency.

But of late, it has become clear that this conception, long imagined to make a bulwark against the encroachments of instrumental rationality, no longer holds. How regularly our noblest intentions seem to fold before the demand — institutional, social, or even personal — for results. In the face of this demand, what Raymond Williams called “emergent structures of feeling” — which by definition elude the relative certainty of normative representation — can have no value, and representation itself can secure its place only by the vinculum of the bottom line, by the dollar sign’s lead balloon. If to be humane involves, in part, an ability or willingness to make our feelings legible to one another, what to do about the captivity of the legible itself to the ledgers of capital? One way of understanding the digital humanities, I think, is as a figure for this captivity. Once more, then, into the breach….

If Eyers is right to say that the DH return a positivist spirit to humanistic activity, how is this spirit manifest? The example of Franco Moretti, with his proscription of close reading, provides a limit case: a positivism that has occupied the critic’s methods in an explicit, indeed a polemical, way. Moretti performs a spirit of positivist inquiry, and the frisson of that performance, on ground hallowed by decades of deconstructive critique and postmodern skepticism, probably accounts for much of its appeal. His is an act of sacrilege — an iconoclastic rending of the vestal status of the text. But might not the more powerful varieties of positivism work more insidiously, more seductively?

I want to talk about the seduction of the algorithm. This seduction is a constituent part of modernity, but its most spectacular moment, also the moment of its most spectacular failure, occurred at the turn of the 20th Century. In this period, logicians and philosophers pursued a program of mathematical formalism, a program most famously associated with the figure of David Hilbert. These projects sought to arrive at a set of explicit rules for mathematical reasoning that would, at least in principle, make it possible to derive all the truths of mathematics by purely formal means: in other words, algorithmically. (For a concise and accessible treatment of this history, see chapter 3 of Charles Petzold’s The Annotated Turing.) The aim, in Kantian fashion, was to put reasoning on firm ground (in German, Grund: a term that crops up everywhere during this period in the titles of dense German tomes), and this ground was sought in the one domain in which human reason was felt to have a legitimate claim to reach beyond the empirical: the realm of pure numbers.

To say that this ground proved, in the end, elusive is not to say very much, at least not without a foray into the thickets of first- and second-order predicate logic, mathematical proofs, number theory, etc. But I raise the specter of this failure only in order to note that this most esoteric of esoteric pursuits — the search for a formalized mathematics, in which reasoning would become a matter of finding the right algorithm — proved in the end a goose who laid a golden egg. That golden egg is, of course, the defining technology of the 20th Century: the digital computer.

Or to switch metaphors, the hegemony of the computer is a tower erected in the very shallows where Hilbert’s program of logical analysis foundered. As some philosophers and logicians in the wake of Hilbert’s optimism were to demonstrate — notably, Alan Turing — there are limits to what can be algorithmically derived, even within the realm of numbers. But within those limits, algorithms can accomplish a great deal.

My claim is that they exert a corresponding seduction. To see the influence of this seduction at work in the DH, one need only note the tone set by the NEH in its grant programs: in offering “start-up grants” and “implementation grants,” the Office of Digital Humanities borrows from the discourse of software development (even though the projects invited under these rubrics are not restricted to those deploying new software tools). By extension, we might say that the rhetoric of the DH seeks to establish an affinity between humanistic reasoning — which is traditionally without palpable “result” — and the kind of reasoning whose fruits now occupy our work and leisure alike, feeding our daydreams and idle moments as well as sustaining our most serious work — and all the while filling the coffers of inventors and investors.

Is this affinity merely rhetorical? Stephen Ramsay, for one, proposes a deeper kinship. But this affinity would have to exist in spite of certain fairly obvious differences. Programming, Ramsay argues (drawing on a canonical computer-science text by Abelson and Sussman), differs from argumentative reasoning as the imperative differs from the assertoric. On Ramsay’s model, the one complements the other: in the synergy between criticism and computation, “the hermeneutics of ‘what is’ becomes mingled with the hermeneutics of ‘how to’” (63).

Ramsay’s argument appeals to me because it draws the lines of this affinity between two modes of writing: the writing of the critic and of the software developer. And yet, I don’t find the lines as neatly complementary as Ramsay proposes. Let us pursue this distinction between assertoric and imperative, or “what is” and “how to.” The question of “how to” refers us to the programmer’s knowledge, which is a knack for making things happen through the affordance of the digital computer. That knowledge is practical; when writing code, we expect to see results, meaning that we expect to see certain behaviors, a certain output. But surely, the developer does not have a monopoly on practical knowledge. The critic, too, must know how to make things happen: that is what it means, what it has always meant, to write persuasively.

So in what does the writing of developer and critic differ? When one writes code, one issues commands. These commands occasion actions by the processor (e.g., count the number of occurrences of a word in a text; assign that number to the variable X; print X to the screen). On the other hand, when making an argument, one asserts that X; one does not command it. (One might, actually, adopt an imperative style in a more formal mode of argument: “Let X = 1.” This mode is exactly what led Hilbert and others to pursue the algorithmic model of mathematical logic.) But if the developer’s practice is imperative in this sense, what sense is that, exactly? Is it akin to asking someone to close a door? Like the command “print X,” a request to shut the door usually has a determinate outcome: the door gets closed, or else it doesn’t. But how far does the analogy hold? More important, what does the analogy tell us? To be informative, the analogy requires us to think about these two kinds of result in the same fashion: as occurring by way of definite actions in the computer’s CPU and in the auditor’s head. That’s not to say that we can’t, or shouldn’t, entertain this analogy. But beyond the simplest case, it starts to get confusing. What about commands like “Love thy neighbor” or “Love me tender” or “Don’t let me down”? What about the demands made on us by a literary text?
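By way of a concrete rendering of the command sequence mentioned above (count a word’s occurrences, bind the count to X, print X), here is a minimal sketch in Python; the sample text and the word counted are placeholders of my own:

```python
# A minimal, hypothetical rendering of the imperative sequence described
# above: count the occurrences of a word, assign the count to X, print X.
import re

text = "the knight charged the windmill and the windmill stood still"
tokens = re.findall(r"\w+", text.lower())  # split the text into words

x = tokens.count("windmill")  # count occurrences of the word
print(x)                      # prints: 2
```

Each line is a command, and each command has a determinate effect in the machine.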

According to Ramsay, the computationally dependent DH confront us with “the difference between a text that describes a relationship and one that can perform the relationships it describes” (66). Turning from the imperative to the performative punts the question from the grammatical to that zone characterized by J. L. Austin as the “speech-act.” For Austin, the work of language is only partly captured by our talk of meaning (or what analytic philosophers call “sense” and “reference”). For Austin, any use of language invariably brings to bear, on the situation of its use, a certain kind of “force.” It is in virtue of this force that speech-acts are species of acts, i.e., that to say something is also to do something, to attempt or accomplish some variety of practical, invariably social, work. Thus, to deliver an imperative is to issue a command, to utter a promise is to create a kind of contract, etc. Austin analyzes these performances in terms of what he dubs their “illocutionary” conditions: these amount to the social conventions that must be fulfilled in order for the performance to succeed (where success is decided in light of the speech-act’s conventionally ascribed intention).

Austin’s model eschews the path taken by some philosophers of language, in which all utterances are boiled down to their presumably assertoric skeletons, skeletons serving as so many clothes-horses for the garments of truth. Austin eschews, in other words, the awkwardness of talking about a promise or a command as “true” or “false.” A felicitous promise (Austin’s term) obtains only when a number of illocutionary conditions are met, beginning with the promisor’s intention to make good on his word, along with his capacity to do so. Austin’s model has naturalism on its side, insofar as his way of describing and categorizing language manages to keep its sinews and innards and plumage intact; his model is true (or faithful) to what speakers and writers generally find most urgent: is my utterance appropriate? Will it succeed? But compared to the truth-centric picture of language, Austin’s model also multiplies the sites of error. My promise can fail because I never intended to keep it, but it can also fail because I am not in a position to make good on it, which is a fact about which I might be easily mistaken.

Ramsay invokes the performative model in order to steer us away — and rightly so — from the notion that statements arrived at by algorithm are somehow inherently “truer,” as if possessed of a better kind of truth, than those of traditional critical discourse. As he points out, an algorithm for, say, text-mining the novels of Henry James represents the literary text in a “potentialized form” (67); the output of the algorithm presents the critic with a set of potential meanings, not a definite truth:

If something is known from a word-frequency list or a data visualization, it is undoubtedly a function of our desire to make sense of what has been presented. We fill in gaps, make connections backward and forward, explain inconsistencies, resolve contradictions, and, above all, generate additional narratives in the form of declarative realizations.

Here algorithmic reasoning — the reasoning behind the compilation of the word-frequency list or the data visualization — is generative rather than purely descriptive. And yet, for Austin, to describe is to do something, too. My assertion that X is no less performative than my command (or promise) to X. In other words, the real virtue of Austin’s focus on the illocutionary lies in its refusal of innocence to the assertoric: there is no such thing as “pure” description or “pure” statement (outside the dreams of philosophers), for every statement comes to fruition or else withers on the vine of a real situation. If writing code is performative, it is no more so than criticism, or than literature itself.
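For concreteness, the word-frequency list Ramsay describes can be compiled in a handful of lines. The sketch below is in Python, with a hypothetical file name standing in for any plain-text edition of a James novel; the list, by itself, decides nothing:

```python
# A sketch: compile a word-frequency list from a plain-text novel.
# The file name is hypothetical; any plain-text edition would serve.
import re
from collections import Counter

with open("the_ambassadors.txt", encoding="utf-8") as f:
    tokens = re.findall(r"[a-z']+", f.read().lower())

frequencies = Counter(tokens)
for word, count in frequencies.most_common(10):
    print(f"{word}\t{count}")  # the ten most frequent words and their counts
```

Whatever it comes to mean arrives afterward, in the gap-filling and narrative-making Ramsay describes.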

But algorithms seduce us. They save money and time. They make possible kinds of labor that it would be infeasible, or impossible, to perform by other means. The heart of their seductiveness, however, seems to rest in how they reduce the complexity of the speech-act. If we can speak of the illocutionary conditions of statements in a programming language — and it would seem that we can, since the form and structure of these statements are governed by convention, too — then we might say that the felicity of statements in a programming language, by comparison with those of traditional discourse, can be rigidly defined. It is not that writing code is not error-prone: far from it. This proneness to error is what lends the writing of code the character of an art, a practical know-how whose achievement comes through a long course of trial and error. But the rigid felicity of code appears in the fact that, for the writer at least, success or its opposite appears easy to detect. Deceptively easy, in fact.
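A trivial example (mine, not Ramsay’s) makes that rigid felicity visible: the routine below either does what was asked of it or it does not, and a single assertion tells us which.

```python
# A toy illustration of code's "rigid felicity": the assertion below passes
# or fails, with no third outcome to deliberate over.
import re

def count_word(text: str, word: str) -> int:
    """Return how many times `word` occurs in `text`."""
    return re.findall(r"\w+", text.lower()).count(word.lower())

assert count_word("Sancho, Sancho!", "sancho") == 2  # success, visibly
```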

The deception lies in taking the program’s output at the user’s interface for the whole result of the performance. Does this code throw an error? Does it produce unexpected output under any of the test scenarios that I can devise? If not, then I, the developer, might consider it a success. But good developers know that such success is provisional. The elusive felicity of traditional discourse refers us to the indeterminate nature of the language-situation — a feature of communication that might, but need not, lead us to ideas of human agency and free will. For it suffices merely to appeal to the complexity of communication as an event in social space, transpiring between organisms whose intelligence is as much somatic and emotional as narrowly “rational,” to see how the felicity of the human tongue hangs on, or involves, too many variables to count, too many shades of difference to name. Consideration of these shades leads us away from the illocutionary — which even for Austin suggests something that can be laid out in rules, however delicate and nuanced — and into another terrain. This terrain Austin calls, without elaborating on it much, the “perlocutionary,” a term that gathers all of language’s singular effects, its powers of persuasion or coercion that, for every conceivable speech-act, constitute both the occasion and the irrevocable aim.

To read the algorithmic rhetorically, as Ramsay enjoins us, requires us to think of programming as a perlocutionary act. But it becomes so only when the program and its output enter the space of social transactions in which speakers and auditors, readers and writers, are susceptible of an emotional response. Programming, as such, is only a detour — a detour, like all language, by which thought travels through the explicit to arrive somewhere else.

Alan Turing might be said to have known this as well as anyone — Turing, who, before helping to crack Nazi Germany’s Enigma code, cracked the nut of Hilbert’s Entscheidungsproblem, or rather showed that it could not be cracked. The so-called Entscheidungsproblem (“decision problem”), as posed by David Hilbert, asks whether a general “decision procedure” exists for determining whether a given statement in first-order predicate logic is provable or not. Such a decision procedure would be an algorithm. In a nutshell, Turing demonstrates, using the resources of first-order logic in tandem with his own conceptual invention, that such a procedure does not exist. What I have called Turing’s “conceptual invention,” commonly known as the “Turing machine,” has a strong claim to be considered the prototype of the digital, programmable computer.

The Turing machine is as simple as it is hypothetical: a “head” reads a tape, on which a sequence of figures appears. The head can write, erase, and remember these figures, according to its programmed instructions. Some of the figures constitute a number in binary form, and others represent “rough notes” used, and erased, in the course of computing that number (p. 71). For argument’s sake, the tape is infinite (as is the time the machine has at its disposal). But the machine’s simplicity speaks to its origin: it is modeled on the behavior, supremely simplified, of a “human computer,” performing calculations with the aid of a book of explicit rules.
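A few lines of Python can stand in for such a machine. The sketch below is a toy rendering, loosely after the first example in Petzold’s presentation (a machine that prints 0 and 1 on alternating squares), with state names and an encoding of my own; the dictionary plays the part of the book of explicit rules:

```python
# A toy Turing machine: for each (state, scanned symbol), the rules give a
# list of actions (write a figure, or move Right) and the next state.
rules = {
    ("b", None): (["0", "R"], "c"),  # write 0, move right
    ("c", None): (["R"], "e"),       # skip a square
    ("e", None): (["1", "R"], "f"),  # write 1, move right
    ("f", None): (["R"], "b"),       # skip a square, begin again
}

tape, head, state = {}, 0, "b"       # the real tape (and time) is unbounded
for _ in range(20):
    actions, state = rules[(state, tape.get(head))]
    for op in actions:
        if op == "R":
            head += 1                # move the head one square to the right
        else:
            tape[head] = op          # write a figure on the current square

print("".join(tape.get(i, " ") for i in range(max(tape) + 1)))  # 0 1 0 1 ...
```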

The beauty of Turing’s invention is that nothing in its operation is implicit. It has no secrets; it has no other life. There is no ghost in this machine. Given its program, its current place in the program, the position of its head, and the sequence of marks or symbols on the tape, we know the machine’s “complete configuration” (p. 75). This transparency leads Turing (still en route to the goal of his proof) to envision a “universal” version of this machine, capable of reading the program of any other machine and performing the corresponding computations. After demonstrating the plausibility of such a machine — in a series of steps so densely cryptic that, without Petzold’s careful annotations, the mathematically rather ignorant reader (i.e., moi) can make neither heads nor tails of them — after demonstrating its plausibility, Turing goes on to show what this universal computing machine cannot do, universally. While it can perform, with a precision by now banal and ubiquitous, whatever any similarly constructed machine can perform, it cannot decide on the performance of all other machines.

The formal logic of the proof is complex, but it corresponds to a relatively straightforward intuition, which (bear with me) I will try to sketch. For a suitably designed machine — a digital computer, say, or a human brain — programs are, in principle, interchangeable. Putting aside questions of memory and processing power, Apple’s iPhone can calculate the square root of two as well as a pocket calculator, as well again as IBM’s Watson or Deep Blue. But no conceivable machine, no matter how powerful, can predict the output of every conceivable program. For example: “Does this program, with this input, print zero infinitely often?” (See Petzold, pp. 163-188.) Turing’s programs simply compute numbers, and there exist programs (infinitely many, in fact) for which that question cannot be decided by finite, algorithmic means. This limitation has been dubbed “the halting problem.” Turing’s proof relies on a reductio ad absurdum: assuming that there could be such a machine, reliably equipped to “determine the ultimate fate of other computer programs” (p. 183), this machine would have to predict the output of its own program, which would involve it in an infinite loop.
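The now-standard rendering of that reductio can be sketched in a few lines of Python-flavored pseudocode. This is not Turing’s own formulation, which concerns machines that print certain figures infinitely often, but the version usually taught as the halting problem; the function “decides” is the hypothetical oracle under refutation:

```python
# Assume, for contradiction, that decides(program, data) could always report,
# truthfully and in finite time, whether running `program` on `data` halts.

def paradox(program):
    if decides(program, program):  # "it will halt," says the oracle...
        while True:                # ...then loop forever;
            pass
    return                         # ...otherwise, halt at once.

# Now ask the oracle about paradox run on itself:
#   decides(paradox, paradox) == True   implies paradox(paradox) never halts;
#   decides(paradox, paradox) == False  implies paradox(paradox) halts.
# Either answer refutes itself, so no such general oracle can exist.
```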

Turing extrapolates from his discovery to demonstrate the impossibility of an effective general decision-procedure, i.e., of an algorithmic means of determining which mathematical or logical problems have solutions and which do not. In hindsight, the result seems almost trivial if one but calls attention to the conceptual narrowness of the algorithmic. Surely there exist problems, one is tempted to object, for which an algorithmic approach is a priori inappropriate. Surely there are domains of human practice and experience for the comprehension of which we lack explicit rules and explicable procedures. Surely there is more between heaven and earth, Horatio, than can be computed.

But what if the human brain is a species of Turing’s universal machine? That would mean that there exist problems, the logic of which we cannot comprehend without traversing it. Such might be moral problems, for instance — problems about our place, as fugitive little specks of light, in this vast night, the universe. And the traversal of their logic might, when especially thorough, merit the name “art” or “literature.” For a literary version of Turing’s argument we can turn to that great master of the morally speculative, Jorge Luis Borges. His story “Pierre Menard, Author of the Quixote” offers us an image of the author as Turing machine. In the form of an obituary, the story recounts the efforts of its eponymous hero — whom the narrator declares to have produced “perhaps the most significant writing of our time” — to re-create Cervantes’ novel. But as the narrator insists, this undertaking was in the vein neither of a parodic updating — e.g., Don Quixote and Zombies — nor a “mechanical transcription of the original,” both facile exercises. Rather, Menard’s “interminably heroic” aim was “to produce a number of pages which coincided — word for word and line for line — with those of Miguel de Cervantes” (p. 91).

In order to adopt Cervantes’ program, Menard tries first an emulation of the author: “Initially, Menard’s method was to be relatively simple: Learn Spanish, return to Catholicism, fight against the Moor or Turk, forget the history of Europe from 1602 to 1918 — be Miguel de Cervantes” (p. 91). As absurd as that course sounds, he abandons it in favor of one yet stranger:

To be a popular novelist of the seventeenth century in the twentieth century seemed to Menard to be a diminution. Being, somehow, Cervantes, and arriving thereby at the Quixote — that looked to Menard less challenging (and therefore less interesting) than continuing to be Pierre Menard and coming to the Quixote through the experiences of Pierre Menard. (author’s italics, p. 91)

We might call this method an identification with the text. What “program” would Menard, being Menard, need to run in order to produce Don Quixote? Call the original program Cervantes-Q. The problem posed by Menard is to find Menard-Q: a different program with the same output. There is nothing a priori impossible about two programs’ having the same output; in fact, such duplication is guaranteed by Turing’s logic, at least for some kinds of output. The problem lies in deciding which program is a match, since for sufficiently complex programs, the output is not predictable without actually running the program all the way through. In a remark reminiscent of Turing’s proof, Menard quips, “If I could just be immortal, I could do it.”
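That two different programs can share an output is trivially easy to show in code. The toy sketch below assumes nothing from Borges; the function names merely echo the labels above, and a single sentence stands in for the novel:

```python
# Two distinct "programs" with the very same output.
def cervantes_q():
    return "En un lugar de la Mancha, de cuyo nombre no quiero acordarme..."

def menard_q():
    # an entirely different route to the identical text
    words = ["En un lugar de la Mancha,",
             "de cuyo nombre no quiero acordarme..."]
    return " ".join(words)

assert cervantes_q() == menard_q()  # same output, different programs
```

What the example trivializes is precisely what Menard’s case withholds: for programs of any real complexity, the only way to confirm the match is to run them through to the end.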

Borges’ story is often read as a parable about the necessary intertextuality of any text, or about the relativity of all textual meaning to the context of its reception. These interpretations derive from the gloss Borges’ narrator gives Menard’s work: “I have reflected that it is legitimate to see the ‘final’ Quixote as a kind of palimpsest, in which the traces — faint but not undecipherable — of our friend’s ‘previous’ text must shine through” (p. 95). On these interpretations, Borges’ text is a parable about the quixotic nature of the creative process.

But what if we read the parable as one about the interpretative process? What if Menard’s task models not the general conundrum of the artist, but that of the reader and critic? The critic, let us say, wants to comprehend the constitution of the artistic work or literary text (along vectors that might be formal, psychological, historical, etc.). More to the point, the critic wants to comprehend the impact of the work on its milieu and/or on his own. Call that impact the output of the work. It is in relation to the latter that the critic exercises his powers of decision. And it is in this respect that critical argument attempts to decide the status of the text (even though each decision remains provisional). But if the text, like one of Turing’s problematic programs, remains undecidable, then the limit the critic approaches in his labors is only that of Menard’s task, “futile from the outset”: it is to comprehend the text by reproducing it, by running its program. If anything, our task is of greater complexity than the task Turing assigns his machines; we cannot “know” the program behind the text. Unless, of course, the text is the program. In that case, we Turing machines generate its output (and every Turing machine is both a reader and a writer) without being able to predict it. This unpredictable output is what Austin called the perlocutionary effect of the speech-act — that which must be endured in order to be understood. And to traverse that output constitutes, for a humanities, analog or digital, the interminable, quixotic task.
