I may have missed this in today’s discussion, but I don’t feel we ever settled on a definition of what the spatial turn is today. For whatever criticism Guldi deserves for attempting to (allegedly) “bifurcate” the field, or for some of her weaker points, I felt it was her definition of the spatial turn that came through most plainly.
The spatial turn is, from what I gather, not just a current interpretive phenomenon in which we cast events, actions, and consequences in a physical space, but also a reflective one, in which we look back on the things we have already studied and analyze them through this spatial facet. And in many of these studies the spatial facet is not something arbitrary, as in “any suburb will do,” but a necessary companion to the events that unfold. To borrow a bit of language from the Jofish article (and I think one other), it is our problem space.
And looking back on my previous paragraph I understand now why I don’t entirely buy the geographers’ criticisms. I don’t like my use of “physical space” because I find it too constraining. Geography deals with a very limited and narrow view of space, one that is ultimately only terrestrial. When I think of the spatial turn I think of so much more than streets and roads, or accidental land features that we call terrain, or the arbitrary lines that we call borders. I also think about virtual space (where would this exist in Lefebvre’s triad?), and the space that exists where complex chemical reactions take place, and a child’s perception of space that may have no bearing on the actual physical characteristics of that space.
What the spatial turn means in the context of the digital humanities, then, is the design and creation of digital systems that distill these spaces in ways that make us see the space as a necessary counterpart to the event taking place, and not just a single word written after the typeset word “SETTING.”
Posted by Matthew L Belskie at 4:38 PM.
Some of our discussion about memory and forgetting reminded me of this post from a while back in Cabinet Magazine.
Here are a few quotes:
“Are all acts of forgetting similar enough that we can think of them, always and necessarily, as a failure? Can forgetting in fact even be a virtue? And how do we understand the relationship between what needs to be forgotten in order for other things to be remembered?”
“Paul Connerton, a scholar in the Department of Social Anthropology at the University of Cambridge, has addressed these issues in a number of books, including How Societies Remember (Cambridge University Press, 1989) and How Modernity Forgets (Cambridge University Press, 2009). In his 2008 essay “Seven Types of Forgetting,” Connerton offers a preliminary taxonomy of forgetting, and of its various functions, values, and agents: repressive erasure; prescriptive forgetting; forgetting that is constitutive in the formation of a new identity; structural amnesia; forgetting as annulment; forgetting as planned obsolescence; and forgetting as humiliated silence.”
And from a question posed to Connerton, which mentions “the rise of printing”:
“And the rise of printing is not the only technological phenomenon that you implicate in the process of forgetting—you also discuss the notion of planned obsolescence, shifts in industrial culture, and the relationship of these to modes of forgetting and discarding.”
by Doug Diesenhaus
Updated November 2, 2011 at 12:25 AM.
The op-ed in today’s NY Times reminded me a lot of the Powers piece we read at the beginning of the semester.
One of the conclusions the author comes to is similar to one we came to in class: that by not being discerning about what enters the digital record, we dilute and devalue everything that makes it up.
His point about what it would take to properly assess all of that information is closer to my own conclusion, which was really just a comment on which ancillary systems will need to co-evolve with the present and oncoming glut of information if we want to keep our heads above water in a digital ocean.
Posted by Matthew L Belskie at 4:34 PM.
Though I haven’t read the whole thing in detail, I’m finding Christopher Prendergast’s “Evolution and Literary History” (which is here, if that link works) a useful gloss on Moretti’s very interesting but problematic models for literary history. One compelling moment:
Suppose at this juncture we were to state the blindingly obvious: that, whatever their other properties, literary texts do not possess genes. In all likelihood, such a reminder would raise a hearty guffaw. Of course the application of evolutionary concepts to literary history is not meant literally; literature is not a biological organism. Yet the naivety of the supposition carries an equally obvious lesson: if not meant literally, if you strip from the evolutionary paradigm its at once defining and delimiting genetic processes, then all you are left with is the husk of an analogy.
Prendergast’s categorical statement that “literature is not a biological organism” ignores the whole sub-discipline of literary studies called Darwinian Criticism, which uses “concepts from evolutionary biology and the evolutionary human sciences to formulate principles of literary theory and interpret literary texts” (Wikipedia). Some of this work is compellingly interdisciplinary, but what I have read of it loses what is literary about literary criticism, to echo the objection Northrop Frye might have made.
One might say, to bring it all back to Darwin, it misses the trees for the forest.
by Matt Poland
Updated October 23, 2011 at 10:37 PM.
While the site doesn’t look quite the way I remember it from the last time I visited (specifically, there used to be a lot more information front and center about the Carnegie library redesign), I was still able to find some information on it, as well as their discussion of personas.
Highlights (particular to personas) from the slideshow:
Slide 40: their execution of a persona using a body-shaped cut-out with Post-it notes affixed.
Slides 42 and 43: a synthesis of personas and swimlanes used to emphasize “breakpoints.”
Slide 81: real people testing the system. Note the similarities to the types of personas they chose to reflect.
Posted by Matthew L Belskie at 3:39 PM.
After talking about “digital criticism” broadly in class, I mentioned to Ryan this article by the novelist Zadie Smith from The New York Review of Books last November. Technically a dual review of the film The Social Network and Jaron Lanier’s book You Are Not a Gadget: A Manifesto, Smith’s essay becomes more an exercise in criticism of Facebook itself and its potential to structure the interactions (and, indeed, minds) of its users. In some ways it’s an incisive bit of criticism, in others fairly predictable, given that it’s a novelist talking about a social network. Here’s a forceful bit from near the end:
When a human being becomes a set of data on a website like Facebook, he or she is reduced. Everything shrinks. Individual character. Friendships. Language. Sensibility. In a way it’s a transcendent experience: we lose our bodies, our messy feelings, our desires, our fears. It reminds me that those of us who turn in disgust from what we consider an overinflated liberal-bourgeois sense of self should be careful what we wish for: our denuded networked selves don’t look more free, they just look more owned. […] Software may reduce humans, but there are degrees. Fiction reduces humans, too, but bad fiction does it more than good fiction, and we have the option to read good fiction. Jaron Lanier’s point is that Web 2.0 “lock-in” happens soon; is happening; has to some degree already happened. And what has been “locked in”? It feels important to remind ourselves, at this point, that Facebook, our new beloved interface with reality, was designed by a Harvard sophomore with a Harvard sophomore’s preoccupations. What is your relationship status? (Choose one. There can be only one answer. People need to know.) Do you have a “life”? (Prove it. Post pictures.) Do you like the right sort of things? (Make a list. Things to like will include: movies, music, books and television, but not architecture, ideas, or plants.)
Agree or disagree, it’s certainly thought-provoking. Significantly less conservative and more productive is Douglas Rushkoff’s book Program or Be Programmed (here’s the tl;dr version, his SXSW talk from last year). I’m only about halfway through, but his basic argument is that users must understand the assumptions on which the technologies they use are built in order to maintain agency. Discussing how, for example, software’s biases towards atemporality and decentralization affect its users, Rushkoff argues that
Only by understanding the biases of the media through which we engage with the world can we differentiate between what we intend, and what the machines we’re using intend for us - whether they or their programmers even know it. (21)
He ends (I skipped ahead) by discussing software’s bias toward those who are able to code it, calling on users to learn how to do some programming in order to avoid being at the mercy of “those who do the programming [and] the people paying them” (128). This is what I mean by Rushkoff’s argument being more productive, or more enabling for motivated readers; the “further reading” section of his bibliography links to resources including Zed Shaw’s Learn Python the Hard Way, which is, as Rushkoff writes, “a very accessible approach to a very useful programming language” (147).
Though Rushkoff’s book is criticism of large technological and cultural forces rather than of specific interfaces (unlike the call for “interaction criticism” in the Bardzell essay), it and Zadie Smith’s essay are interesting examples of current forms of “digital criticism.”
by Matt Poland
Updated September 28, 2011 at 12:09 PM.
Since we were just talking about Amazon-style recommendation engines and their possible applications to (humanities) research, I thought it was interesting when this crossed my inbox right after class.
There’s a blurb in Jakob Nielsen’s Alertbox newsletter this week about a study “examining the impact of recommenders by understanding how recommendation tools integrate the classical economic schemes and how they modify product search patterns.” The link is here. I haven’t had a chance to read it, but Nielsen summarizes it as follows:
…In the second part of the study, users bought 50% more items found through the recommender than items found through the site’s multi-criteria filtering tool. This finding is obviously relative to the respective quality of the filtering tool and the recommendations and will not necessarily generalize to other sites.
Of more interest is the finding that it was important to provide DIVERSITY in the recommendations. You might think that the only goal is to find items that are as close as possible to the user’s interest. But that results in a list of recommendations that are all very much alike and thus don’t help users enough.
Obviously this study is commercially focused, but getting a better understanding of how recommendation engines are used could point to how such systems might be used in a research context. I’m happy to forward the full text of Nielsen’s summary if you like.
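To make the diversity point concrete, here’s a minimal sketch of how a recommender might trade relevance against redundancy, a greedy re-ranking in the spirit of “maximal marginal relevance.” This is my own toy illustration, not anything from the study; the item names, feature vectors, and weighting are all made up.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user_profile, items, k=3, diversity=0.3):
    """Greedily pick k items from `items` (a dict of name -> vector),
    balancing relevance to the user against similarity to items already
    picked. diversity=0.0 reproduces a pure most-similar list."""
    chosen = []
    remaining = dict(items)
    while remaining and len(chosen) < k:
        def score(name):
            relevance = cosine(user_profile, remaining[name])
            # Penalize resemblance to anything we've already recommended.
            redundancy = max(
                (cosine(remaining[name], items[c]) for c in chosen),
                default=0.0,
            )
            return (1 - diversity) * relevance - diversity * redundancy
        best = max(remaining, key=score)
        chosen.append(best)
        del remaining[best]
    return chosen

# Hypothetical "taste" vectors over three arbitrary features.
catalog = {
    "Book A": [1.0, 0.9, 0.0],
    "Book B": [0.9, 1.0, 0.1],  # nearly a duplicate of Book A
    "Book C": [0.2, 0.3, 1.0],  # different, but still somewhat relevant
}
print(recommend([1.0, 1.0, 0.2], catalog, k=2, diversity=0.5))
```

With diversity at 0.0 the second pick would be Book A, a near-duplicate of Book B, which is exactly the too-much-alike list Nielsen warns about; at 0.5 the re-ranker reaches for Book C instead.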
Posted by Matt Poland at 11:22 AM.
So far I find that I spend an unhealthy amount of time trying to come to my own conclusion about the question, “Is Digital Humanities a legitimate field, and if it is, then what is it?”
Looking back on what we’ve read so far I feel competent to say what it is I agree with and don’t agree with. I certainly don’t feel that I am the final authority on answering my question, although it would be easier if the DH community imbued me with that privilege so that I could sort everything out once and for all. Even so, I want to share what I think are the high points of the argument for DH, and what I think is left to be answered.
DH is related to the humanities. To embrace the non-reductionist spirit of the humanities, as posed by Crane, we should possibly avoid reductionist definitions of DH.
… Unless we consider this mathematically. For example, the area under the curve of 1/x from 1 to infinity is infinite, yet rotating that same curve about its axis (the solid of revolution known as Gabriel’s Horn) encloses a finite volume of π. This suggests to me that, depending on how we look at DH, there could be a reducible set of practices (at least at a thematic level) that deal with a non-reducible set of information to which those methodologies are applied.
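For the record, the two integrals behind that picture:

$$\int_1^\infty \frac{dx}{x} = \lim_{b\to\infty} \ln b = \infty \quad \text{(infinite area under the curve)}$$

$$V = \pi \int_1^\infty \frac{dx}{x^2} = \pi\left[-\frac{1}{x}\right]_1^\infty = \pi \quad \text{(finite volume of the solid of revolution)}$$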
Definitions of the form, “the intersection of humanities and computers” seem too ambiguous to be of much value to me. Which half is digital and which half is humanities? Or does it matter? (hint: I think that it does) Would a study of cultures using GIS methods, and a study of historical computing practices using analog methods both be considered DH?
I think that the role of the digital device needs to be accounted for. Merely making the digital the object of study, or only doing word processing, doesn’t seem substantial enough to be considered DH. The place to look is the function computing performs in augmenting or enriching the humanistic discipline.
Does DH need to be its own academic discipline? From a funding and jobs perspective it is probably more attractive for it to be one, but it seems to cohere better as a practice/methodology/philosophy than as a wholly independent field. This is reinforced by the very interdisciplinary nature of DH, and it also seems more amenable to the many different pathways that involve, or wind up in, the DH realm. The attraction of this way of looking at it is that it might also allow us to avoid subsequent conversations about “How do we develop a top-to-bottom curriculum for DH?”, which to me seems impossible because you’ll have as many opinions about it as you have about the humanities themselves.
At the same time, a DH curriculum isn’t impossible so long as it builds off of the twin foundations of humanities and digital scholarship, rather than trying to establish either of those foundations itself.
I’m going to continue with the readings and then contribute more to this. Right now I’m trying to map out what it is that is important to DHists, and what should be important to them. I think the biggest challenge right now arises from the fact that so many of the people involved in DH have unique experiences that have led them to DH, and so tend to think of DH within the context they know.
by Matthew L Belskie
Updated September 18, 2011 at 9:36 PM.
There is some nice current debate at the Humanist that looks relevant to the discussions we had in class this morning.
I was particularly tickled by these few paragraphs:
The computer is just a tool, but so is a paintbrush. No one dismisses a painter’s art just because it was done with a paintbrush, even though you can produce some good art with just fingers and paint. At the same time, I can’t pick up a paintbrush and produce art. I haven’t had enough practice. Coloring by number doesn’t make art.
The current crop of tools act like the color by number painting. There are simple buttons to push and slots for information. There’s a lot of handholding because the users aren’t expected to be proficient in computation, any more than I’m expected to be proficient in painting. The results are useful, but they don’t capture anything of the researcher using the tool. Instead, they capture arguments made by the tool builder on how humanities should be computed. Does the tool user understand and agree with these arguments?
Not every paintbrush needs the training of an artist. I don’t need to have years of experience in order to paint the side of a house. Nor do I need to have years of experience to use the computer to write an email or use a word processor. Humanists aren’t interested in the broad strokes that paint a house, but in the details that create art.
The computer is just a tool, but it’s different than most tools. It’s malleable. It’s a medium like clay that takes on the shape of the artist. We should mold the computer to our will to answer our research questions. We shouldn’t mold ourselves to the computer and change our research questions so that the computer can help. Right now, I fear that we are using the computer like a hammer. We know it can do something well, so we turn everything else into a nail.
I feel I could copy and paste almost the entire post, honestly, so I would just encourage everyone to read it. I think it’s the second or third email; it can be found by searching for “jgsmith” on the page.
Posted by Matthew L Belskie at 9:28 PM.
I was struck by a passage in Susan Hockey’s “The History of Humanities Computing” in the section about humanities computing in the 1980s:
A debate about whether or not students should learn computer programming was ongoing. Some felt it replaced Latin as a “mental discipline” (Hockey 1986). Others thought that it was too difficult and took too much time away from the core work in the humanities.
This idea of humanities students being required to learn some programming put me in mind of when, in what seems like a prior life, I was looking at the requirements for a PhD program in English Literature. I can’t remember now which program it was; it may have been at NYU (if so, the requirement has changed). Regardless: in addition to demonstrating proficiency in two languages pertinent to their research, PhD students would be required to demonstrate proficiency in one computer language.
As it did then, and increasingly does as I gain perspective on my humanistic education, this seems like an eminently good idea. Having done coursework in Latin, I find that Hockey’s description of studying the language as a “mental discipline” aligns neatly with my understanding of its value. While Latin certainly exposes students to beautifully constructed rhetoric and gives useful perspective on one’s native language, learning it is primarily a calisthenic exercise in rigorous, categorical thinking and memory.
I would argue that learning a computer language is a similarly rigorous, disciplined pursuit, with the added benefit that you can make things with your well-constructed syntax (though it should be said: a well-made sentence is a beautiful thing). At least one computer language, Perl, was explicitly made with the structure of a human language in mind. Its creator, Larry Wall (a trained linguist), often refers to its variables and functions as “nouns” and “verbs.” While it doesn’t do to get too caught up in comparing the traits of computer and human languages (here, anyway), perhaps Hockey was on to something in highlighting the “mental discipline” the learning of computer languages might present humanities students.
Certainly, it would provide a concretely marketable skill to humanities graduates, in addition to the somewhat more nebulous abilities to “write” and “think.” Though I would not go so far as to say students should take a course in HTML/CSS instead of Latin 101, the changing methodological nature of the humanities would, as we have seen, make those skills, and the mental discipline they require, quite useful.
Posted by Matt Poland at 4:33 PM.