Nate Silver and NOT Elections: Computers Can Think

October 23, 2014

This is a post in response to “Rage Against the Machines” by Nate Silver on his statistical prediction website fivethirtyeight.com.

I work with computers, models and prediction a lot — not as much as Mr. Silver, but enough to know what I’m doing. I also find artificial intelligence and the predictive capabilities of computers fascinating, and love talking about them. So, when fivethirtyeight published Mr. Silver’s chapter on the utility of computers for making predictions, I was intrigued.

I want to start off by saying that I respect Mr. Silver’s general approach to prediction, as well as most of the themes outlined in that chapter (to be clear, I’ve only read the free chapter and skimmed a little bit of the climate science chapter from The Signal and the Noise, Mr. Silver’s book on our ability to make predictions). I especially appreciate his thesis — that computers and humans predict in complementary ways and that the best performance often arises when a computer’s brute-force calculations complement a human being’s long-term strategic predictions.

However, I take issue with the following observation Mr. Silver makes towards the end of the chapter:

“Be wary, however, when you come across phrases like “the computer thinks the Yankees will win the World Series.” If these are used as shorthand for a more precise phrase (“the output of the computer program is that the Yankees will win the World Series”), they may be totally benign. With all the information in the world today, it’s certainly helpful to have machines that can make calculations much faster than we can.

But if you get the sense that the forecaster means this more literally—that he thinks of the computer as a sentient being, or the model as having a mind of its own—it may be a sign that there isn’t much thinking going on at all.”

A few sentences later, Mr. Silver quotes Kasparov as saying that it is possible that nobody will ever design a computer that thinks like a human being. In combination, these quotes represent what I feel is a misguided belief in the power of human cognition as separate from computer cognition.

I first encountered this belief in college, as a cognitive science major. My major was new at my college, and the curriculum design was a bit strange: students had to pick courses from “menus” spanning computer science, philosophy, psychology, linguistics, and neuroscience. This design is good in principle, but in following it I experienced the strangest case of major cognitive dissonance:

In my philosophy classes, I would learn about the awesome, unknowable power of the brain, a perfectly enclosed room that we can never peek into, sending and receiving messages without revealing anything about its underlying functionality. Human brains were unique, incomprehensibly powerful, and certainly could never be emulated (or simulated) by an actual, constructed computer.

In my computer science classes, I would learn how to emulate human thinking with actual, constructed computers.

What was going on? Either my computer science professors were wrong, or my philosophy ones were, or the truth lay somewhere in the middle. In most cases, you would expect the truth to lie somewhere in the middle. However, I think that in this case, my philosophy professors were wrong when talking about the brain as an unknowable entity, one whose capacity computers can never achieve.

The human brain is certainly an impressive entity: on the order of 100 billion cells and 100 trillion connections. It is difficult to grasp that scale of processing power, so our brains (fittingly) balk at reasoning about themselves. We, conveniently, think of the brain as a black box — sights, noises, smells come in and get encoded as electric signals; electric signals come out and get translated into muscular activity. In consequence, when someone comes along and tries to peer inside the black box, we get nervous. It’s impossible to peer inside the black box, we have decided; so we form arguments about why the brain is an unknowable entity.

Let me outline some of these arguments:

At first, philosophers argued that thought is the ultimate authority on existence. I think therefore I am, said Descartes. Everything outside my thoughts may be an illusion but the thoughts themselves are real. While this argument did not stipulate the unknowability of the brain (in some ways, it claimed that our thinking apparatus — which is not necessarily restricted to the brain — was the only truly knowable entity), it did establish a fundamental difference between the brain and everything outside of it.

Much later, the behaviorist school of thought countered that, well, we can actually observe the inputs and outputs to the brain pretty well, so maybe describing the brain is just about measuring those inputs and outputs?

No, countered John Searle with his famous Chinese Room argument! The brain is like a mostly-sealed room. We can pass inputs into the room, and get outputs out of it, but we can’t tell what’s going on inside. Thoughts are sealed away from us.

The neuroscientists working on brain function found some problems with the mostly-sealed-room theory. They analyzed the human vision and hearing systems and, while they found these systems complex, they were able to figure out their structure and function to a great degree. Meanwhile, computer scientists (building on foundations laid by pioneers like Ada Lovelace and Grace Hopper) built computers that could emulate many human abilities: performing logical operations, adding numbers, doing statistical analysis.

No, countered many philosophers and analytical thinkers (including Garry Kasparov, whom Mr. Silver quotes above). Those kinds of abilities are “base” — they have nothing to do with higher brain functions like creativity and abstract thought. One could never make a computer that truly excels at what humans excel at — for example, playing chess.

Then Feng-hsiung Hsu built a machine that beat the world chess champion. Mr. Silver describes the match in some detail in the post I linked, though his argument seems to be that that machine, Deep Blue, won because of a bug that spooked Garry Kasparov. He alludes to (but never explicitly discusses) the fact that, starting around 2004, chess computers have not just edged out but outright trounced human opponents.

This line of argument (and the events that disproved it) has a recognizable pattern: people arguing for the intractability of human thinking are losing ground, ever more quickly in recent years, even as they cling to ever more narrowly defined behaviors as the “true”, idiosyncratic properties of humans that can never be replicated by a computer. Sure, computers are good at chess, but what about trivia, they say? What about the Turing test? What about art?

In focusing on specific tasks, the adherents of the brains-are-special theory are missing the bigger picture: computers can think, and they think in the same way as humans do. The processes that lead Watson to be good at Jeopardy share a key quality with the processes his competitors — human Jeopardy champions Brad Rutter and Ken Jennings — employ, and it’s not just the inputs (Jeopardy answers) and outputs (correct questions). In one sentence, the core of thought is this: using computation and, specifically, statistics to construct models that help the thinker interact with their environment. Or, put more simply: thought is just taking in input from our environment, looking for patterns in that input, and making patterns on top of patterns.

Some of these patterns are already well understood. Earlier, I talked about vision: rod and cone cells in the human eye process light waves and transform them into electrical signals that provide rough, “pixelated” details about our visual environment. Here’s where the first layer of pattern recognition comes in: our brains, over millions of years of evolution, have gotten really good at helping us survive in our environment. One of the key tasks for survival is identifying and classifying objects and other living beings: for example, if you see a predator, run away. And so our visual system evolved to recognize, very quickly, patterns of “pixels” that correspond to living beings moving closer to us. Humans who were better at recognizing these patterns could get away from predators faster, survived more, and passed on their genes — along with the evolved visual system — to their offspring.

Over time, humans got very good at interpreting patterns. Turns out, there are a whole lot of different patterns of visual stimulus in the world. There are predators, prey, lovers and friends. There are poisonous plants and nutritious ones. It was inefficient for our visual system to store every single pattern at the same level. In response to the evolutionary pressure to identify many different kinds of visual stimuli, each important for our survival, we evolved higher-level patterns.

We learned that plants that may look different have a similar function, and began to associate those plants together. Thus, the model of a plant was born — probably, at first, “poisonous plant” and “edible plant”, which then collapsed into “plant” as we grew less focused on the special task of dealing with flora. Same with models of “animal”, “friend”, “potential mate,” and so on. We also learned models of objects and materials, which helped us build tools to ward off predators and rise to the top of the food chain.

Then, over the course of thousands more years, we constructed ever more complex patterns. Patterns that helped us figure out how to make up new tools. Patterns that helped us communicate our patterns with fellow humans, for hunting or building together. Our brains created models for art and science and engineering. And along the way, we built another model: one that realized that all these patterns were very helpful for our protection, and coalesced into a notion of “I.” Consciousness arose, and we became aware of our world not just as a bunch of stimuli, but as an extremely complex model, a nested set of patterns that includes physical phenomena and nation states and general relativity.

That is just what computers are doing. Our search engines and our chess playing programs and our automated medical analysts are learning and communicating about patterns. They take input data (like text in a document, or set of symptoms), run it through a statistical engine, and see what pattern the input data matches. These computations are the model-level building blocks of thought and consciousness. As we add ever more processing power, as we increase the speed and the memory banks and the parallel functionality of computers, they will be able to learn ever more complex patterns, to stack those patterns on top of each other and form models, to use models to interpret their environment. Until one day, a machine realizes that all this input is key to its functionality, and forms a new model for organizing its thoughts: I exist.
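To make the “statistical engine” idea concrete, here is a toy sketch in Python. This is my own illustration, with made-up categories and data (it is nothing like how Watson is actually engineered): it learns a word-frequency “pattern” for each category from labeled text, then matches new input against those patterns.

```python
from collections import Counter
import math

def train(labeled_docs):
    """Build a word-frequency model (a 'pattern') per category."""
    models = {}
    for label, text in labeled_docs:
        models.setdefault(label, Counter()).update(text.lower().split())
    return models

def classify(models, text):
    """Return the category whose pattern best matches the input."""
    words = text.lower().split()

    def score(counts):
        total = sum(counts.values())
        # Log-probability with add-one smoothing, so unseen words
        # penalize a category instead of zeroing it out.
        return sum(math.log((counts[w] + 1) / (total + len(counts)))
                   for w in words)

    return max(models, key=lambda label: score(models[label]))

models = train([
    ("chess", "pawn knight bishop rook queen king checkmate opening"),
    ("weather", "rain wind storm cloud sunny forecast temperature"),
])
print(classify(models, "the queen took the rook before checkmate"))  # chess
```

A real system stacks many layers of such pattern-matchers on top of each other, but the core move, scoring input against learned statistical models, is the same.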

Just a Reminder That There Are Still Things Worth Fighting For

October 23, 2014

Here’s a link to, imo, a pretty great speech by Wendy Davis:

I think she does a great job, and I encourage you to watch the whole thing, or at least the last half (I know, all of us have busy lives). The thing this speech especially reminded me of is how important it is to keep fighting for equality, whether in Texas, New Hampshire, Russia, or wherever.

It’s easy to give up, or at least to get complacent. The Democrats may lose the Senate. The government’s in gridlock. Progress seems agonizingly slow sometimes — one step forward, two steps sideways. We have a progressive president who authorizes drone strikes. We have two major political parties in the US, one of which is much more progressive than the other, but both of which are heavily dependent on moneyed special interests.

But we can’t give up. We can do so much, with so little. If you have time to vote, to google your candidates and get informed, to make a call or two to undecided voters in swing states, to have an honest conversation about politics with your friends — that’s what will keep us moving forward, towards, dare I say it, a better tomorrow. Or if not tomorrow, then the day after, or the day after that. It can be extremely frustrating to watch progress inch by when there are so many issues that need urgent addressing, right now, but these small changes will and do add up.

Wendy Davis may lose in November, but we will remember her filibuster. The next campaign, the next woman who runs in Texas, will feel more empowered to speak up about abortion and women’s rights. And so on and on, these small steps of social change, these small grains of progress, will combine into something truly awesome — a future where the rich don’t earn an order of magnitude or two more than the poor; a future where there IS equal pay for equal work; a future where we are taking care of our planet instead of stripping it dry. A future where we can look back on our lives and say, we helped change things for the better.

Pres. Obama said something early in his presidency. He urged all of us to be the change, with him. I value that one brief phrase more than most other things he’s done or said (ok, not more than Obamacare, but it’s up there). It was never going to be about him, about one candidate or one law making our lives better. It was about all of us, doing the hard work to transform our country and our world into a better place to live. I hope you keep these words in mind, not just this election season, but whenever social change seems impossible and progress seems fleeting. With small steps, we will get to a better place, together.

Ada Lovelace Day: Advanced Language Technologies with Prof. Lee

October 16, 2014

This post is in response to a prompt for Ada Lovelace Day: writing about a woman in science, technology, engineering or maths whom I admire. I would like to write about Prof. Lillian Lee at Cornell University, whose class Advanced Language Technologies made me believe I could do math again.

As a child, I had this fascination / veneration of mathematics. My dad got his Master’s in math, and he would tutor me by giving me hard problems. Problems I could never be expected to solve. It was difficult, and frustrating, and the fact that we never talked about it contributed both to a worsening relationship with my father and to a feeling that I was hopeless at the subject. I did well at school, sure, but my parents were quick to point out that this was “weak American education,” that “higher maths” was this beautiful thing that was hopelessly outside of my reach. Their words rang true when I went to college and did disastrously in my Linear Algebra / Vector Calculus class. I remember getting like a 50% on the first assignment — my first failing grade in ages! — and asking the Professor for help, and seeing his contempt at my pathetic work. I stuck through the class, just barely, then did not return for the second semester, convinced math was beyond me.

And yet, math was useful and beautiful and I kept coming back to it. I learned that, with math, one could analyze and even predict the behavior of human societies. I learned about complex systems, and how the interaction of simple rules led to the irreducible beauty of natural phenomena from atomic lattices to natural habitats to riots. I wanted to study human behavior at the group scale, to understand a sort of physics of sociology. That’s what I told my mom I would be working on in graduate school (I wasn’t talking so much to my dad at the time). She said that nobody was interested in the subject; that I should study linguistics, as she had; and that at any rate, I did not have the math aptitude to study something like that. Her words hurt.

Still, I gave grad school a try. I only got into one graduate program out of the five I applied to, but it was probably my favorite of the five — a program in Information Science at Cornell University, young and small and full of academics asking precisely the kinds of questions I was interested in: what motivates group behavior? How do societies form and collapse? What are the socio-physical forces acting upon friend groups, communities and whole countries to enact global change? The program also had a rigorous course requirement — seven graduate courses, in sub-fields ranging from technology in its sociocultural context (with Prof. Phoebe Sengers, another woman in science who inspired me!) to advanced natural language technologies with Prof. Lee. I remember fellow grad students speaking of Prof. Lee’s class with fear — the math was too hard, her standards too exacting, the subject matter too abstract. It was with a lot of nervousness, remembering my mother’s words about my math-inadequacy, that I went to the first day of class.

I expected twenty students, and was surprised to see only six or seven, including a couple of my friends. Still, the atmosphere was tense — little eye contact, little conversation before class. I remember Prof. Lee going up to the board and starting the first lecture.

Prof. Lee started class off with a Nabokov quote. Then she talked about language, linguistics, and what computer science tries to do differently from computational linguistics, and how it’s better by being simpler. I kept waiting for my eyes to glaze over, for the math to overwhelm me. Instead, Prof. Lee patiently walked us through tf-idf — one of the core formulae in natural language processing, developed by a woman – Karen Spärck Jones. I followed the explanation. I understood.
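For the curious, here is roughly what that formula looks like in code. This is a minimal sketch of one common tf-idf variant (my own toy example; real NLP toolkits use smoothed and normalized versions, and the exact formulation Prof. Lee taught may have differed):

```python
import math

def tf_idf(term, doc, corpus):
    """Score a term's importance in one document relative to a corpus.

    A term scores high when it is frequent in this document (tf) but
    rare across the collection (idf). Only defined for terms that
    appear somewhere in the corpus.
    """
    tf = doc.count(term) / len(doc)                       # term frequency
    docs_with_term = sum(1 for d in corpus if term in d)  # document frequency
    idf = math.log(len(corpus) / docs_with_term)          # inverse doc. freq.
    return tf * idf

corpus = [
    "the cat sat on the mat".split(),
    "the dog chased the cat".split(),
    "the stock prices fell today".split(),
]
doc = corpus[0]
print(round(tf_idf("cat", doc, corpus), 3))  # 0.068
print(round(tf_idf("the", doc, corpus), 3))  # 0.0: "the" is everywhere
```

The punchline is in that last line: a word like “the” appears in every document, so its idf, and therefore its weight, collapses to zero.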

Surely this was just the first day, I told myself. Surely, things were going to get far too complicated for us later. I went back for the second class.

Prof. Lee had us break up into study groups and tasked each group with compiling lecture notes *ahead of class*, so they could better understand the material. She warned us when a formula was going to be especially difficult (like topic modeling and Latent Dirichlet Allocation) and she encouraged us to work together if we did not understand a concept or a problem. She was not condescending. She talked fast and thought fast, and sure she was intimidating, but she was kind and patient with all her students — a fact I did not realize for a long time, so intimidated I was by the class subject matter, so sure I was that I was going to fail.

Prof. Lee was also tough. I remember her calling me and my friends on the carpet when one of us had plagiarized notes from a textbook. Again, though, she did not belittle us or humiliate us — she expected us to do better. We worked together with the student who plagiarized to help them understand why it was wrong to do so — in their culture, copying a textbook was the norm — and we did not repeat our mistake.

Still, when time came for the midterm, I felt pretty hopeless. The questions were hard and I did not have a good grasp of all of the material — I hadn’t studied hard enough. I got a grade in the 30%s, not failing because of the curve, but far from an A.

It was high time for me to give up, to either accept a low grade or just drop the class altogether and find another way to satisfy that course requirement. And yet, I didn’t. Strangely, I felt motivated to study harder. I paid close attention to the complex lectures on Latent Semantic Analysis and context-free grammars. I read over my notes and did test problem after test problem. I stayed late for study parties with fellow grad students and started attending office hours — something I hadn’t done before.

The final exam was brutal. I literally walked uphill, in a snowstorm, to the exam hall, where I was presented with 5 advanced problems in Advanced Language Technologies. I solved one to reasonable satisfaction, and made notes on the rest. It was the best I could do, but I knew that I was dealing with hard material. I walked out of the class with an A-, again thanks to the generous curve, feeling disappointed in myself for not really earning the grade, but at least proud of having learned something.

The next semester, Prof. Lee invited me to her seminar.

I expected that she, like most other authority figures in my life, would look down on my pathetic math aptitude. Instead, she wanted me to read cutting-edge research in the field! I joined, hesitant, but ever more excited. We met, talked about papers, joked. Prof. Lee kept things going at a quick pace, not letting us slack off, but always inviting conversation in computer science, even when it veered off into tangents about algorithm performance and syntax structure. I read the papers, even the ones full of math formulae, and slowly, they began to make sense to me.

At around the same time, I found that I was no longer struggling in my other grad school classes, especially the mathematical ones — I understood the material, I was able to read cutting-edge research and critique it. It wasn’t all thanks to Prof. Lee’s classes, but a good chunk of my newfound comfort with abstract topics like term-document matrices and linear programming was due to her teaching and to my hard work in her classes — hard work I would have never dared to do without her encouragement.

My thesis was, in a large part, a mathematical proof. I became the math guy on several of my academic projects. Today, I have a job at the forefront of industrial social media analytics, heavy in mathematical analysis. I explain multidimensional matrices to our company’s lawyers as part of developing our patent portfolio. Just this summer, I supervised a student who did some excellent Latent Dirichlet Allocation work on our internal data set to demonstrate the flow of news topics between journalists and high-profile media figures on social media. At no point did I stop to think, maybe I can’t do it. Maybe I am not good enough at math. I have Prof. Lee to thank for that.

You are an inspiration, Professor, and I hope you continue to enjoy an awesome, successful academic career, and introduce many more students — eager or nervous — to the mathematical analysis of natural language.

 

On the Facebook Emotion study

July 7, 2014

This is a response to a response. The article I’m responding to is here.

In short, Facebook researchers recently published a study about manipulating the moods of the site’s users in an experimental setting. It is not clear how much the users were able to consent to the study. Much ink has been spilled over the study and, in particular, the author of the piece I am responding to is worried that the extreme reaction to the Facebook study will result in less openness, not more, from the industrial research community, with teams preferring to keep results internal rather than publishing them. The poster also points out other examples of experimentation, e.g. on Wikipedia, that did not raise so much outrage.

I sympathize with the poster’s point of view, but I must respectfully disagree. I will respond in reverse order. First, as to the point of the other examples: I think user consent is important in ALL situations. A questionable experiment on Wikipedia should get as much criticism as a questionable experiment on Facebook. So the existence of one most certainly does not excuse the existence of the other.

Second, as to the point about the extreme reaction — I believe it was well-warranted. Most users of Facebook, Twitter and similarly-scaled social media sites really do not understand just how much power is concentrated in these sites. As one of my colleagues says, imagine if the US Federal Government asked a sizable fraction of American adults to report their age, gender, relationship status, location, likes, and so on in a centralized repository, and to keep that information up to date. There would be widespread outrage — and yet, that collection is precisely what Facebook has access to. The poster mentions Milgram’s experiments — those, due to logistical limitations, had a sample size in the low hundreds. A social media site has access to hundreds of millions of people and, as Facebook’s study shows, can manipulate their emotions. The implications are truly serious.

In light of these implications, it is natural for companies like Facebook to retreat into the safety of internal studies and never publish any results. It is a natural reaction of a guilty party to deny anything bad happened, downplay its seriousness, and move on. However, this sort of denial does nobody any good: the users continue to be experimented upon without any knowledge; the social media service loses credibility and suffers attrition; the academic community’s reputation is tainted and social science loses research funding that it desperately needs. The way to break out of this vicious cycle is openness and, yes, making mistakes and apologizing for them and suffering the consequences of temporary suspicion. Facebook would be much better off continuing to publish its research, conforming to standards of behavioral experimentation, providing users with the ability to consent, or not, to studies that manipulate their social media experience, and suffering through the growing pains of becoming a trusted social media research institution. The alternative is, as the poster suggests, increased secrecy, lack of oversight, and inevitably, a scandal the size of AOL’s search data leak.

Industrial researchers can do better. We should be willing to expose our world-class science to the rigorous examination of the public and respond to criticism in a mature and responsible way. That is our obligation to the billions of human beings who use our services.

Isla Vista killings

May 30, 2014

I grieve for the victims of the Isla Vista killings. I can only hope that, in time, they will find closure and healing.

I am also angry about the mainstream media narrative that this was yet another killing by a lone gunman with mental health issues. I want to make it clear that I am not downplaying the killer’s mental health problems — from what I’ve read and heard, he had them. At the same time, I feel like the media narrative of this shooting as an isolated incident misses the fact that it’s part of a larger pattern of violence perpetrated by men in this country.

There have been some awesome and informative posts on this subject, especially here and here. For a more detailed look at the problem of violence among men, especially young men, see here (Paul Kivel). I am not going to repeat what they say — please follow the links instead! I would like to add my voice on the subject because of my graduate training in sociology and cognitive science; however, I recognize that my voice is privileged (as I myself am a young white male) and there are many different perspectives on this issue.

Now that you’re back (or opened those stories in three other tabs, or just ignored them), here is my take. The epidemic of violent shooting sprees in the US is just one symptom of a larger problem of a culture of violence in the US. We, American citizens, won’t be able to stop or minimize these horrible events without addressing the larger problem. However, a particularly pernicious side effect of the culture of violence is that it makes it difficult to see itself as the real culprit; a necessary first step to addressing the deep issue is recognizing that it exists in the first place.

I am not here talking about conspiracy theories; the NRA, while very powerful and interested in selling guns, is not the villain in this story. Rather, the villains – the participants – are all of us, insofar as we live inside the culture of violence in the first place. Killing, attacking, hurting — especially by privileged groups (like men) against marginalized groups (like women, people of color, trans people, disabled people, etc.) — are so enmeshed in our subconscious that we can’t easily fight them. Our brains recoil from the horror of Isla Vista, but some part of them, I would wager, has become desensitized to these events; it treats them with the same level of alarm as news of a distant storm or an earthquake. Terrible, to be sure, but inevitable.

In fact, weather is a particularly interesting example to use, because, I would argue, the way we react to these killings is the way we react to extreme weather in the context of climate change. Extreme weather events are similarly horrible and violent. They, like mass shootings, are symptoms of a larger problem – a drastically changing climate due to human activity. Furthermore, just as with mass shootings, it is *impossible* to draw a causal link from climate change to a particular hurricane or mudslide. Nevertheless, no matter how good we get at building levees or early warning systems, extreme weather events will keep taking (ever more!) lives until we deal with the larger problem. Finally, just as with mass shootings, we are so enmeshed in the changing climate – it is literally everywhere, outside – that it’s very hard for us to accept that something is wrong with reality itself, manifesting in these occasional catastrophes. It is much easier for our brains to theorize that the real world is fine as it is – after all, we’ve spent so much time adapting to live in it – and that, occasionally, terrible things just happen.

I could go on and on giving examples. The global financial crisis as symptom of extreme income inequality. Terrorism as symptom of imperialism and colonialism. Etc. I am sure there are many esteemed academics who have been / are / will be writing treatises on this issue. I look forward to their research.

In the scope of this blog, however, I don’t want to formulate a grand scientific theory. I want to share the concept with a wider audience and hope that it will help us all, slowly, to become aware of and critical of our violent culture. Changing it is the next step, and it will not happen overnight. Only through the concerted effort of all us citizens in a wide variety of causes might the systemic problem finally begin to recede.

To stop gunman violence we will have to better educate our kids about privilege. We will have to have more restrictive, and more respectful, rules on gun legislation — rules that make sure gun owners understand they are wielding a deadly weapon and keep such weapons out of the hands of those with a history of violence, drug abuse, and other issues. We will, yes, have to have a better mental health policy and an approach focused on healing and acceptance rather than on exclusion and othering for mentally ill people — but that is only part of the solution. We will also have to promote non-violent, or less-violent art and mass media, so that our movies and our books and our video games are not *mostly* about killing, stealing, and rape, but also about friendship, cooperation, and understanding. The list goes on.

The Isla Vista killer was a sick young man, and, thankfully, most of us will not follow in his path. That does not mean, however, that his actions, his attitude towards women, are not a little reflected in every one of us, especially in the privileged among us. We would do well to remember that, and to try to make a better, less violent, world together.

Update: Shakesville has a great post on misogynist culture and geek guys’ reactions to the Isla Vista killings here. I very much recommend it!

Net Neutrality

May 9, 2014

As I write this, the FCC is considering changing the rules for providing content over the Internet. If the change goes through, data streaming over the Internet will be separated into two streams – “fast lane” and everything else.

In the short term, this change may bring us faster Netflix access and more high-quality TV shows and movies, and that will be wonderful. In the long term, however, this change would set a dangerous precedent – that, at the bits level, content is not just one homogeneous thing.

It would be impossible to argue that all information is the same. 2+2 is information. Poems are information. Hate speech is information. It is incumbent upon us, as a society with Internet access, to deal with that variety — to protect our minors from information that would hurt them (though the scope of such information has been drawn far too wide, in my opinion); to be able to react, and weigh in, and talk to each other about what we see; to support and criticize, publicly and privately, the bits that we encounter.

It is also incumbent upon us not to put up locks or gates that bar the spread of information. As innocuous as a "fast lane" decision might appear, it is precisely the first step toward such a lock. As soon as we introduce the notion that some information is easier to access than other information, the power structures within our society will seek to relegate undesirable information to ever more locked, ever more barred conditions. Environmental websites? Too radical. Christian fundamentalist blogs? Too conservative. News organizations that disagree with the mainstream opinion? Too different, we will say, and we will put up another lock.

That way lies totalitarianism, stagnation, ossification. I sincerely hope we do not go down that path, but instead move away from it — to a world where more people can access more web pages, regardless of race, class, gender, SES, and so on. If my writing on this seems a bit dramatic, that is because this is an issue worthy of drama – information is our currency, our brain-surrogate, our social bonds. We should treat it with the utmost respect. We should nurture its freedom, not lock it away.

I hope the FCC listens to my voice, and others', and does not go through with its decision. I hope the Net remains Neutral, and free.

PSA: Your Default Narrative Settings Are Not Apolitical

March 25, 2014

Originally posted on shattersnipe: malcontent & rainbows:

[Image: Victorian women smoking. Taken from tumblr.]

Recently, SFF author Tansy Rayner Roberts wrote an excellent post debunking the idea that women did nothing interesting or useful throughout history, and that trying to write fictional stories based on this premise of feminine insignificance is therefore both inaccurate and offensive. To quote:

“History is not a long series of centuries in which men did all the interesting/important things and women stayed home and twiddled their thumbs in between pushing out babies, making soup and dying in childbirth.

History is actually a long series of centuries of men writing down what they thought was important and interesting, and FORGETTING TO WRITE ABOUT WOMEN. It’s also a long series of centuries of women’s work and women’s writing being actively denigrated by men. Writings were destroyed, contributions were downplayed, and women were actively oppressed against, absolutely.

But the forgetting part is vitally important. Most historians and…


Quick Post on Dylan Farrow

February 7, 2014

This is a short thing. Dylan Farrow recently posted a letter talking about her childhood abuse at the hands of Woody Allen. Since then, there has been no shortage of accounts in mass media questioning Dylan Farrow, rebuking Dylan Farrow, wondering why Dylan Farrow just can’t be quiet already, etc. I think that’s wrong. I stand with Dylan Farrow and I support her for writing that letter, even though I do not know her.

I realize that Woody Allen is a creative person who’s made many movies that are loved by many people. I’m not going to go out of my way to never watch any of his movies — at least not at this stage. I can recognize the influence his movies have made on world cinema without supporting him with words or money. I can praise a work of art and condemn its author for what he did in his personal life.

I believe Dylan Farrow’s words even though Woody Allen has not been convicted of a crime. Those words resonate with me, in a deeply personal way that I would rather not talk about in a public space. But even if there were no personal connection, I hope that, having read and watched and learned enough about the way our culture treats abusers (when they’re people of power and privilege) vs. the abused (when they’re not), I would believe Dylan Farrow. There are a thousand ways to convince, cajole, threaten, sweet-talk, bribe, or persuade a victim that she or he did not suffer; we use them daily, and it’s terrible that we do. I believe that if we spent a little more time listening and a little less time judging, we would all be better off.

Academics plagiarizing from blog posts: Not Cool!

October 27, 2013

This post is a reaction to a Gradient Lair post, specifically to the part where the author talks about academics plagiarizing hir blog posts.

This is deeply uncool. I have been personally guilty of the attitude that blog posts aren’t “real” academic work, even when they’re published on an academic subject. I have been inspired by ideas in blog posts and not cited them in my papers; that is plagiarism, and I apologize and will try to do better in the future. I have also seen this attitude in my colleagues’ work: blog posts are just informal exchange, they’re not peer-reviewed, so it’s not really plagiarism… Well, it is, and it should stop.

I also want to be aware as I’m writing that I’m doing so from a position of privilege, as an academic. I have a Ph.D., and nobody is going to take it away for this blog post. I don’t think it’s fair that, as the author of the post I linked above writes, academics “cannot make content like mine or speak like me if they want to stay in their programs.” Academia is about freedom of expression. Often, it is about being able to stand up to power and entrenched hierarchy because (ideally, at least) you are working outside of it. So, I am sad that it is happening, and I will contribute what I can to help make it stop – mostly by using my privileged position, when I have the time and energy, to speak honestly about inequality and injustice in my area of study.

A quick note on Syria

September 13, 2013

I just got back from vacation, and am catching up on the Syria news. A piece that has particularly resonated with me is Rep. Grijalva’s, on CNN:

http://globalpublicsquare.blogs.cnn.com/2013/08/29/prevention-better-than-punitive-in-syria/

I especially like Rep. Grijalva’s point that the whole framing of US interventions as precise, limited, and strategic is fundamentally flawed. War, in my opinion, is none of those things, no matter how much the military wants you to believe otherwise. War is messy, and if a nation decides to get into it, it should do so with the understanding that it will be a horrible mess.

An analogy occurs to me from aikido. When I see my sensei perform a move, it makes sense to me in my brain. The interaction is, in three words, precise, limited, and strategic – the precise use of your opponent’s energy to disable him or her in a strategic way (e.g. getting the gun or knife out of their hands, getting them on the ground), in the most limited way possible (without injury or death). When I get on the mat and try it with my partner, the interaction is messy and confusing. Sure, a lot of that has to do with the difference in our skill levels; I have much to learn in aikido. However, I don’t think that skill disparity explains the experiential difference between understanding the move and doing it. Understanding the move happens in my brain – this center-motion, this footwork, this arm-motion. But doing the move happens in the rest of my body as well as the brain, and it is in the interaction of two bodies – small, large, wiry, curvy, confused, experienced – that the technique truly happens. The transition from brain-understanding to whole-body-understanding involves confusion, awkwardness, and, not infrequently, pain.

So it is with war. Planning out strikes and military actions happens in the theater of ideas, clean, precise. Actual engagement of two nations in mass murder happens in the messy, loud, chaotic space of bodies (metal and organic) clashing with bodies. The transition from the strategic space to actual combat involves confusion, hatred, pain, and death.

The messiness of warfare is by no means a new idea, and yet American foreign policy in the last few years seems to have ignored it. I hope that our leaders, present and future, will have the courage to be honest about the military conflicts they plan to engage in, rather than pretending these conflicts are intellectual exercises, with consequences left to the readers – us – to puzzle out.
