Humanists for Science
on The Secret Agent, AI, my father, and academic freedom
I make very few New Year’s resolutions, and I share fewer, but one of mine for this year is to watch more movies. Last year I watched maybe two: for some reason two hours, more or less the duration of my child’s nap, has seemed like too long a chunk of time to consecrate to doing only one thing, and I hate missing bedtime, and I hate watching films on my computer. But I miss film so much. I miss seeing with other people’s eyes, hearing what they hear; I miss the way film slows down my looking and directs it and shows me colors and light that are not part of my everyday looking. Anyway this year I re-watched Béla Tarr’s Werckmeister Harmonies, and saw Jafar Panahi’s It Was Just an Accident at home and then Kleber Mendonça Filho’s The Secret Agent at the brilliant gem that is the Cleveland Cinematheque. The house was packed, fuller than I’ve ever seen the theater, and a few people applauded at the end, which never happens. (It’s different down the street at the Cleveland Orchestra—if you play Severance Hall and you don’t get a standing ovation, you might’ve done something wrong.)
The Secret Agent bowled me over with its energy and style, not just in the flashy gorgeous Carnival scenes reviewers seem to like to point out. There’s Wagner Moura’s yellow VW Beetle on dirt roads through green fields; shots of daytime Recife, river and movie theater with its neon lights; a calm sunny apartment; a drunk hitman taking a beach-side shower; a projectionist’s booth. I don’t know enough about film really to explain what I mean when I say the film has “presence,” but it does. The images come forward. The music in the theater was loud, which was perfect. Moura’s performance is impeccable; I fell in love with him over and over. Everything feels up-close and sharp, out of the shadows, there for the seeing of it.
But The Secret Agent is exactly not all about surfaces. Spoiler alert: the rest of this post is going to give away some key information about the movie’s plot, so maybe if you want to see the film stop reading here, go see it, and come back—I do think the slowness with which information is revealed, and the contrast between the film’s sharpness and the murkiness of the events it chronicles, are part of its genius. Over the course of the film, we learn Moura’s character’s backstory: it is 1977 in Brazil, and the country is under military dictatorship, and the character, Armando, is a former university research scientist and department chair who has ended up on the wrong side of a private industry magnate whose interest in developing certain technologies for profit was threatened by Armando and his colleagues’ work. Armando finds the idea that he is under a death threat for standing up to this magnate ludicrous—he is an intellectual, a professor—and yet there he is.
It’s this story that makes the movie so terribly timely. Actually, all three of the movies I’ve seen are timely—they’re all about brutality and repression—and moving. But for some time now, I’ve wanted to write about the university, about science, and about academic freedom in the US. My father, who passed away last October, was a research scientist who left industry after many years for a second career in the academy; like Armando, in a way, he was keenly aware of the pressures of industry on the work of scientific research. Once I got to hear him give a talk in the chemistry department at the university where I was teaching. I expected not to understand anything, and it’s true that I didn’t understand most of the talk, which was about his investigation of the properties of a certain polymer that turned out to be useful in drug delivery. But the most important part of it would’ve been accessible to anyone. In industry, my father said, he was not permitted to research this polymer because his interest in it stemmed purely from curiosity. He did not, he said, have an idea for how this polymer might be used; he could not have sold it in advance. He had a hunch, an intuition, or he was just curious. In the end, he said, the polymer was useful, even groundbreaking, but he could not have known this beforehand. This, he said, was what universities were for. They were places where you could do research for its own sake, not for the sake of the market.
I have been thinking about this talk he gave often lately, and in the context of AI. The thing that surprised me most about Karen Hao’s Empire of AI, a recent history of the OpenAI company (a little about it here), is the relationship—antagonistic at best—between the Silicon Valley entrepreneurship research model and the US academy’s norms and standards. Hao chronicles how, in the case of OpenAI, the uneasy collaboration of the type that had existed elsewhere between university research scientists and corporate ones quickly soured—how OpenAI and companies like it not only silenced their research scientists, breaking decades of precedent for these scientists’ freedom and autonomy, but effectively gutted entire computer science departments, offering salaries that were impossible for universities to match or individuals to pass up in exchange for, well, the soul of scientific research. I’m under no illusions that similar things have not been happening for some time—I’m thinking of the oil industry in particular—but the conjunction of the AI industry and the US government, both of which seem intensely anti-academic at the moment, has me terrified. The idea that any of us would, say, be under death threats for refusing to join or fold—ludicrous? I don’t know. The forces ranged against the university right now—I write from Ohio, where Senate Bill 1 has destroyed academic freedom at public institutions—are strong. I don’t entirely understand the reasons for them, which in any case are overdetermined, but I do know they are material as well as moral.
Anyway, my father is the reason I cried and cried at the end of The Secret Agent—my father, who, given Armando’s choice, would also have chosen neither to join nor to fold. My father is also part of the reason I am so vehemently and vigorously anti-AI. This spring, I’ll teach an undergraduate seminar on lyric poetry, the same survey of lyric poetry across periods I’ve taught before. This spring’s version, though, has a research-driven argumentative component, and I have this idea that I’ll have my students mount some sort of humanistic argument against AI. (I’m calling the course “This Living Hand,” after Keats, if that gives you some idea.) This argument will go something like: writing is what makes us human, and having a human behind writing matters in some fundamental way. I believe this with everything I am. I have come to love my students’ mistakes, the little marks of the person behind the writing, the signs of their struggle with the medium. I think differently about those things, the signs of friction and work, in my own writing as well.
The necessity of writing to being human is not, however, the only way to argue against AI—it might not even be the best one. I don’t think humanists should own, or think they own, the anti-AI position, nor do I think that scientists are or should be champions of AI. As it exists, lots of generative AI is bad science, inelegant and wasteful, driven by the forces that corrupt and curb research and curiosity, spurred on and in turn used by a power-mad and increasingly ruthless US regime. It’s no accident that it’s hard to find data on generative AI’s environmental footprint, or that it’s hard or impossible to know exactly what any given AI model was trained on: if these things were more visible, AI would be less blandly palatable, less easy to shove down the throats of the general body of digital consumers. I am not so sure that the biggest problem with AI is that, say, students might plagiarize. I am more concerned that it will destroy both the planet and the norms of research and inquiry, that it will continue to enable and embolden the surveillance state, that it will power weapons easily used against the powerless, that it will dramatically worsen the material conditions of the world. I want to hear more arguments that point this out—humanists, too, could probably make them. We should.