new england January 2018

January 10, 2018

I could curl up inside your snow.
Like a warm blanket.
It’s so familiar.

I could settle in one of your bars.
And never leave.
They’d forget to kick me out at last call.

I could walk through your leaf piles.
Ruffling them endlessly.
Crunchy, over-sized slippers.

I could walk down your mind-bending roads.
I could leave…
but only so that I could feel my heart a-flutter upon coming back.

I’m Sorry

October 26, 2017

I’m reading Hillary Clinton’s What Happened, and I just wanted to say I’m sorry. I’m sorry that she’s not President. I’m sorry I didn’t do more to help her become President.

I’m sorry that the world sees her as an evil, manipulative, lying person for having the temerity to be a powerful woman in politics. I’m sorry that even now, a chorus of voices is rising to implicate her in some nefarious plot, to prove once and for all that she’s guilty. Hillary Clinton is guilty only of standing up and fighting for her causes. She has pursued those causes — equality, kindness, a desire to help people who have less privilege and power than she — to the highest levels of political power, and she has used that power to make the world a better place. And, I think, many people hate her for succeeding as much as she has, and fear the implications of her continued success.

I’m Russian. My country’s politics is, sadly, full of smears and slanders leveled at every person who dares challenge the established power structure. Just recently, the European Court of Human Rights ruled in favor of Alexei Navalny, who has been hounded by sham accusations of corruption for years. The Court ruled not only that the trial against Mr. Navalny was unjust; it went so far as to review the case against him and find that Mr. Navalny had in fact committed no crime at all. The endless campaign against him in the Russian justice system was based on fabricated, insinuated wrongdoings that never materialized.

I don’t know about you, but to me that story sounds pretty similar to the story of another politician – Ms. Clinton. Both Ms. Clinton and Mr. Navalny are champions of justice. And both face an endless effort to bring them down. From Whitewater to emails, endless “scandals” have been launched at Ms. Clinton, both to intimidate her into staying out of the spotlight and to ensure the American public sees her negatively if she is not so easily intimidated. Similarly, from the Kirovles case to made-up violations of political protest regulations, endless attacks by the ruling interests of Russia aim to scare Mr. Navalny into stepping off the Russian political stage or, failing that, to ensure the Russian voter sees him as a fraud and a criminal.

The two politicians share another similarity: they refuse to back down in the face of this onslaught. They keep speaking out, fighting for what they believe in. And inasmuch as I am sorry that Ms. Clinton is not our President, so am I inspired by her courage. Like her, I refuse to back down. I will not be silent about my beliefs. And I will follow her example to fight for a more just and better world, for as long as I can.

P.S. I would like to thank the inestimable Melissa McEwan for her Shakesville blog. Reading it has helped me formulate these ideas, and realize I have a lot less to lose by speaking up than many others. Thank you.

Supporting Open Access in Academic Publishing

March 23, 2016

As an academic, I spend quite a bit of time writing and publishing papers; the “publish or perish” adage is true — if I were in a university setting, research papers would be one of the yardsticks by which my academic performance would be measured. As I work at the intersection of academia and industry, research papers are still one of the key progress markers of my career.

In graduate school, I did not spend too much time thinking about the ethics and politics of research paper publication. I worked as hard as I could, and inwardly rejoiced at every publication, large or small, first author or not. They seemed so hard to get!

As a researcher with a team of collaborators, partial grant funding and a history of publishing papers in my conferences / journals of choice, I feel a bit more confident about my chances at publication for any given manuscript. Looking back, I feel frustrated that I did not spend more time carefully thinking about whom my publications support. Specifically, I never gave much thought to whether my papers went to open-access or closed-off “paywall” venues.

Now, as a more senior researcher, I pay much more attention to this issue. Recently, my collaborators and I wrote our first publication for a new grant, and, to our great joy, it was accepted at a key journal in our field! Our joy quickly turned to disappointment, as the journal offered us two options: pay a hefty fee to make the paper open-access, or give copyright over to the publisher. We did not budget for open-access fees in the grant, so reluctantly we decided to give up copyright.

This debacle has made me think more carefully about where I wish to submit papers. On the one hand, I would rather exclusively support open-access journals and conferences, which make their proceedings freely available. On the other hand, I am still a pretty junior researcher, and it would be difficult for me to pass up an opportunity to publish my research at a prestigious journal that has closed access. Furthermore, the vast majority of my work is with collaborators, and I do not want to force my collaborators to work by my principles.

After some thinking about this issue and discussing it with my friend Dan, I decided to pursue a middle path. Beyond publishing papers, I can affect publication standards by reviewing papers or sitting on program committees for conferences. Reviewing and committee duty are important parts of my scientific career, but they are not the same kind of progress marker as paper publication. I can promote open-access publishing by agreeing to review papers exclusively for open-access journals or for conferences that make their proceedings freely available. Finally, I can communicate the reasoning behind my decision to review / not review to the journal editors (same with program committee duty and conference organizers), thereby further promoting the cause of open-access publishing.

I am making a firm commitment: starting on the date of this post, I will not review any papers for closed-access journals, nor participate in any program committees for conferences that do not make their proceedings freely available. I urge my fellow academics to join with me in this effort — I hope that together we can help make science freely available to the public!

Aikido and the Force

March 26, 2015

(Content note: this post discusses violence at length).

Lately I’ve been away from aikido for a while; my area was absolutely buried under a mountain of snow for most of February, and I went on vacation to get away from some of this snow. Between being out of the country, practices being cancelled due to blizzards, and just not wanting to go outside, ever, I ended up missing 4 weeks of aikido. Afterwards, it was hard to get back on the mat. I finally went back, promptly got injured, and had to miss another week of practice. I’m finally back on a regular basis — I hope.

This is all a long prelude to saying that this sort of long-term absence is actually quite unusual for me. I have been practicing aikido for about 4.5 years, and for a while, I would barely miss a practice, ever. I would schedule work trips around going to practice; when I absolutely had to be away from my dojo for a week or more (to go see family, for example), I felt like I was falling terribly and irreparably behind.

Looking back, I think this sort of stringent attendance came from a feeling of inadequacy coupled with a firm belief in grinding. I am three paragraphs into this essay, and I haven’t even reached my intended topic yet, so I’ll keep the background short. For a period of about eight years between graduating from high school and being most of the way through grad school, I didn’t exercise regularly, at all. I played lots of video games and didn’t take particularly good care of my body. I played lots of World of Warcraft and got very good at doing repetitive tasks.

I abandoned physical exercise partially because I was convinced, by my peers and my coaches in high school, that I wasn’t going to be very good at sports, so, my brain went — why bother exercising at all? Towards the end of my eight-year period of non-exercise, however, new friends and the Internet convinced me that, even if I wasn’t going to be a Champion Sportsperson, taking care of my body was a good idea, and, furthermore, that I already possessed an important skill towards achieving Fitness — grinding! That’s right, I could apply all that patience and dedication I had for leveling up my Shaman to going to the gym, and over time I would level up into a more Fit, Stronger Vlad!

I went to the gym and was kind of bored with it. Then, one fall day in the UK, I went to an aikido club and completely fell in love with it. I applied the full strength of my dedication to Getting Better at aikido; when my time in the UK was up, I came back to grad school and found a dojo there; when I graduated, I moved cities and found a dojo at my new place. I took and passed tests, and my one frustration was that my senseis wouldn’t let me do the tests sooner. I knew how to do the moves, but they would insist on a perfection that I found irrelevant. OK, maybe I’m not moving my arm *exactly* the way you’re saying, but you say it three different ways anyway, and the sensei who teaches on Wednesdays says to do it a fourth way, so what is even the point of getting it right?

Here, six paragraphs in (woo), we finally come to the point of my post. That point being Luke Skywalker’s training with Yoda in The Empire Strikes Back, naturally.

Luke, too, applies himself to get better at the Force. He has a wise teacher, Yoda, who constantly reprimands Luke for being too confident, for rushing ahead. Yoda warns Luke about the Dark Side of the Force and its temptations. At one point, they have the following dialogue:

Luke: “…Is the dark side stronger?”

Yoda: “No, no, no. Quicker, easier, more seductive.”

Luke: “But how am I to know the good side from the bad?”

Yoda: “You will know… when you are calm, at peace, passive. A Jedi uses the Force for knowledge and defense, NEVER for attack.”

I have watched The Empire Strikes Back many times, and grew up fascinated with the idea of the Force and the Dark Side. I even played in a roleplaying campaign where I was a Jedi in training who fell to the Dark Side and then had to, slowly, redeem himself. Still, I always thought of these concepts as cool fictions — exciting and intriguing, but relevant to a Long Long Time Ago in a Galaxy Far Far Away, not to the here and now. In the real world, you don’t move rocks with your mind or shoot lightning out of your fingers.

Then, slowly, I started making the connections between the Force and aikido. My first step was not going to practice for a month. At first, I felt terrible. I thought I would fall behind, grow weak and unfit, return to my old non-exercising ways. That didn’t happen. I stayed active, thanks to the privilege of having Wii Fit at home; I danced in my kitchen; I did yoga when it wasn’t too snowy. When I came back to practice, it took me hours instead of days or weeks to remember the moves, to catch up to where I had been.

Then I started to wonder — why was I still doing aikido? I definitely still enjoyed the martial art; at the same time, I clearly no longer felt a compulsion to do it just to stay fit. In fact, I started to get more picky about the practices I went to, skipping occasionally when my body was saying, you should stay home, or when I was stressed out by work, or when I just wanted to do other things. My aikido didn’t suffer. On the contrary, I found each practice I did attend more fulfilling and enjoyable.

Now that I was no longer going to practice for the sake of fitness, I found myself concentrating more on what my teachers were saying, and my techniques improved. They taught me to throw more precisely, more powerfully, more smoothly. At the same time, they encouraged us to attack more forcefully, more directly. A lot of beginner aikido moves involve no weapons; the martial art, however, is at its core about disarming and defending against (blade-) armed attackers. My senseis hammered the point again and again — attack as if you were going to cut your opponent’s head in half, or pierce them straight through with your sword. I started to practice more with wooden weapons, to emphasize the importance of the attacking moves. As the attacks got faster and stronger, when I found myself on the receiving end, I had to move faster and more smoothly, and by this point I had the training to keep up. My practice started feeling more like actual combat and less like isolated techniques.

All this while, I spent a lot of time thinking about the nature of aikido, why I liked it so much. I kept coming back to the idea of non-violence and trying to figure out how it was non-violent to do everything in one’s power to divert and control an attacker running at you with a heavy wooden stick (or, in the real world, a sharp blade). I learned to throw people at great speed on the ground, how to lock their joints, how to put them in a choke hold. How was any of this non-violent?

Then I finally got it. Aikido, like the Force, gives you magical-seeming powers to control the world around you. It exists in a context of war. People who practice aikido, like the Jedi, are no strangers to combat or violence. Furthermore, their training gives them the ability to control the bodies of other people, to hurt or kill them, and they are constantly in situations where the easiest thing to do is to break the arm, twist the neck (throw a rock or zap your opponent with lightning). The Dark Side is not some cool-sounding fiction — now that I have learned the techniques, I can see the quicker (easier, more seductive) shortcuts towards taking out an opponent. Break their bones. Suffocate them. Throw them into a wall. If I am sloppy, or angry, or straight up violent, I can hurt or kill the people I practice with.

Thanks to my training, and my non-violent nature, I can see the immense amount of hurt and evil those shortcuts would bring. But that doesn’t mean I automatically ignore them. They are part of my training, too. To be a good aikido student, just like to be a good Jedi, is to exercise constant self-awareness, to maintain a sense of inner peace and to practice, without fail, the use of your skills for defense or knowledge, not for attack. The reason I go to aikido practice now is to practice non-violence, and when I am not in physical or mental shape to do that, I shouldn’t go, because of the hurt I could cause. Instead, I can practice aikido right at home, by being peaceful, aware and humble in the rest of my life.

Nate Silver and NOT Elections: Computers Can Think

October 23, 2014

This is a post in response to “Rage Against the Machines” by Nate Silver on his statistical prediction website fivethirtyeight.com.

I work with computers, models and prediction a lot — not as much as Mr. Silver, but enough to know what I’m doing. I also find artificial intelligence and the predictive capabilities of computers fascinating, and love talking about them. So, when fivethirtyeight published Mr. Silver’s chapter on the utility of computers for making predictions, I was intrigued.

I want to start off by saying that I respect Mr. Silver’s general approach to prediction, as well as most of the themes outlined in that chapter (to be clear, I’ve only read the free chapter and skimmed a little bit of the climate science chapter from The Signal and the Noise, Mr. Silver’s book on our ability to make predictions). I especially appreciate his thesis — that computers and humans predict in complementary ways and that the best performance often arises when a computer’s brute-force calculations complement a human being’s long-term strategic predictions.

However, I take issue with the following observation Mr. Silver makes towards the end of the chapter:

“Be wary, however, when you come across phrases like “the computer thinks the Yankees will win the World Series.” If these are used as shorthand for a more precise phrase (“the output of the computer program is that the Yankees will win the World Series”), they may be totally benign. With all the information in the world today, it’s certainly helpful to have machines that can make calculations much faster than we can.

But if you get the sense that the forecaster means this more literally—that he thinks of the computer as a sentient being, or the model as having a mind of its own—it may be a sign that there isn’t much thinking going on at all.”

A few sentences later, Mr. Silver quotes Kasparov as saying that it is possible that nobody will ever design a computer that thinks like a human being. In combination, these quotes represent what I feel is a misguided belief in the power of human cognition as separate from computer cognition.

I first encountered this belief in college, as a cognitive science major. My major was new at my college, and the curriculum design was a bit strange: students had to pick four “menus” from among computer science, philosophy, psychology, linguistics, and neuroscience. This design is good in principle, but in following it I experienced the strangest case of major cognitive dissonance:

In my philosophy classes, I would learn about the awesome, unknowable power of the brain, a perfectly enclosed room that we can never peek into, sending and receiving messages without revealing anything about its underlying functionality. Human brains were unique, incomprehensibly powerful, and certainly could never be emulated (or simulated) by an actual, constructed computer.

In my computer science classes, I would learn how to emulate human thinking with actual, constructed computers.

What was going on? Either my computer science professors were wrong, or my philosophy ones were, or the truth lay somewhere in the middle. In most cases, you would expect the truth to lie somewhere in the middle. However, I think that in this case, my philosophy professors were wrong when talking about the brain as an unknowable entity, one whose capacity computers can never achieve.

The human brain is certainly an impressive entity: nearly 100 billion cells, 100 trillion connections. It is difficult to grasp that scale of processing power, so our brains (fittingly) balk at reasoning about themselves. We, conveniently, think of the brain as a black box — sights, noises, smells come in, get encoded as electric signals; electric signals come out, get translated into muscular activity. In consequence, when someone comes along and tries to peer inside the black box, we get nervous. It’s impossible to peer inside the black box, we have decided; so we form arguments about why the brain is an unknowable entity.

Let me outline some of these arguments:

At first, philosophers argued that thought is the ultimate authority on existence. I think, therefore I am, said Descartes. Everything outside my thoughts may be an illusion, but the thoughts themselves are real. While this argument did not stipulate the unknowability of the brain (in some ways, it claimed that our thinking apparatus — which is not necessarily restricted to the brain — was the only truly knowable entity), it did establish a fundamental difference between the brain and everything outside of it.

Much later, the behaviorist school of thought countered that, well, we can actually observe the inputs and outputs to the brain pretty well, so maybe describing the brain is just about measuring those inputs and outputs?

No, countered John Searle! The brain is like a mostly-sealed room. We can pass inputs into the room, and get outputs out of it, but we can’t tell what’s going on inside. Thoughts are sealed away from us.

The neuroscientists working on brain function found some problems with the mostly-sealed-room theory. They analyzed human vision and hearing systems and, while they found these systems complex, they were able to figure out their structure and function to a great degree. Meanwhile, computer scientists (building on the work of pioneers like Ada Lovelace and Grace Hopper) also built computers that could emulate many human abilities: performing logical operations, adding numbers, doing statistical analysis.

No, countered many philosophers and analytical thinkers (including Garry Kasparov, whom Mr. Silver quotes above). Those kinds of abilities are “base” — they have nothing to do with higher brain functions like creativity and abstract thought. One can never make a computer that truly excels at what humans excel at — for example, playing chess.

Then Feng-hsiung Hsu built a machine that beat the world chess champion. Mr. Silver describes the match in some detail in the post I linked, though his argument seems to be that that machine, Deep Blue, won because of a bug that spooked Garry Kasparov. He alludes to (but never explicitly talks about) the fact that, starting around 2004, computers have not just edged out but outright trounced human opponents.

This line of argument (and the events that disprove it) has a recognizable pattern: people arguing for the intractability of human thinking are losing ground, ever more quickly in recent years, even as they cling to ever more narrowly defined behaviors as the “true”, idiosyncratic properties of humans that can never be replicated by a computer. Sure, computers are good at chess, but what about trivia, they say? What about the Turing test? What about art?

In focusing on specific tasks, the adherents of the brains-are-special theory are missing the bigger picture: computers can think, and they think in the same way as humans do. The processes that lead Watson to be good at Jeopardy share a key quality with the processes its competitors — human Jeopardy champions Brad Rutter and Ken Jennings — employ, and it’s not just the inputs (Jeopardy answers) and outputs (correct questions). In one sentence, the core of thought is this: using computation and, specifically, statistics to construct models that help the thinker interact with their environment. Or, put more simply: thought is just taking in input from our environment, looking for patterns in that input, and making patterns on top of patterns.
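As a minimal sketch of what this kind of pattern matching looks like in code (the labels, word lists and function here are invented for illustration, not taken from Mr. Silver’s chapter):

```python
from collections import Counter

def match_pattern(input_text, patterns):
    """Return the label of the stored pattern that best matches the input.

    Each pattern is a bag of words; the "statistical engine" here is
    simple word-overlap counting — the crudest possible stand-in for
    the model matching described above.
    """
    words = Counter(input_text.lower().split())

    def overlap(pattern):
        return sum(min(words[w], count) for w, count in pattern.items())

    return max(patterns, key=lambda label: overlap(patterns[label]))

# Toy "patterns" (invented for illustration)
patterns = {
    "weather": Counter("rain sun cloud storm wind".split()),
    "medicine": Counter("fever cough symptom rash fatigue".split()),
}

match_pattern("patient reports fever and a dry cough", patterns)  # → "medicine"
```

Stack enough of these matchers on top of one another — patterns over patterns — and you begin to approach the models this essay describes.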

Some of these patterns are already well understood. Earlier, I talked about vision: rod and cone cells in the human retina process light waves and transform them into electrical signals that provide rough, “pixelated” details about our visual environment. Here’s where the first layer of pattern recognition comes in: our brains, over millions of years of evolution, have gotten really good at helping us survive in our environment. One of the key tasks for survival is identifying and classifying objects and other living beings: for example, if you see a predator, run away. And so our visual system evolved to recognize, very quickly, patterns of “pixels” that correspond to living beings moving closer to us. Humans who were better at recognizing these patterns could get away from predators faster, survived more, and passed on their genes — along with the evolved visual system — to their offspring.

Over time, humans got very good at interpreting patterns. Turns out, there are a whole lot of different patterns of visual stimulus in the world. There are predators, prey, lovers and friends. There are poisonous plants and nutritious ones. It was inefficient for our visual system to store every single pattern at the same level. In response to the evolutionary pressure to identify many different kinds of visual stimuli, each important for our survival, we evolved higher-level patterns.

We learned that plants that may look different have a similar function, and began to associate those plants together. Thus, the model of a plant was born — probably, at first, “poisonous plant” and “edible plant”, which then collapsed into “plant” as we grew less focused on the special task of dealing with flora. Same with models of “animal”, “friend”, “potential mate,” and so on. We also learned models of objects and materials, which helped us build tools to ward off predators and rise to the top of the food chain.

Then, over the course of thousands more years, we constructed ever more complex patterns. Patterns that helped us figure out how to make up new tools. Patterns that helped us communicate our patterns with fellow humans, for hunting or building together. Our brains created models for art and science and engineering. And along the way, we built another model: one that realized that all these patterns were very helpful for our protection, and coalesced into a notion of “I.” Consciousness arose, and we became aware of our world not just as a bunch of stimuli, but as an extremely complex model, a nested set of patterns that includes physical phenomena and nation states and general relativity.

That is just what computers are doing. Our search engines and our chess playing programs and our automated medical analysts are learning and communicating about patterns. They take input data (like text in a document, or a set of symptoms), run it through a statistical engine, and see what pattern the input data matches. These computations are the model-level building blocks of thought and consciousness. As we add ever more processing power, as we increase the speed and the memory banks and the parallel functionality of computers, they will be able to learn ever more complex patterns, to stack those patterns on top of each other and form models, to use models to interpret their environment. Until one day, a machine realizes that all this input is key to its functionality, and forms a new model for organizing its thoughts: I exist.

Just a Reminder That There Are Still Things Worth Fighting For

October 23, 2014

Here’s a link to, imo, a pretty great speech by Wendy Davis:

I think she does a great job, and I encourage you to watch the whole thing, or at least the last half (I know, all of us have busy lives). What this speech especially reminded me of is how important it is to keep fighting for equality, whether in Texas, New Hampshire, Russia, or anywhere else.

It’s easy to give up, or at least to get complacent. The Democrats may lose the Senate. The government’s in gridlock. Progress seems agonizingly slow sometimes — one step forward, two steps sideways. We have a progressive president who authorizes drone strikes. We have two major political parties in the US, one of which is much more progressive than the other, but both of which are heavily dependent on moneyed special interests.

But we can’t give up. We can do so much, with so little. If you have time to vote, to google your candidates and get informed, to make a call or two to undecided voters in swing states, to have an honest conversation about politics with your friends — that’s what will keep us moving forward, towards, dare I say it, a better tomorrow. Or if not tomorrow, then the day after, or the day after that. It can be extremely frustrating to watch progress inch by when there are so many issues that need urgent addressing, right now, but these small changes will and do add up.

Wendy Davis may lose in November, but we will remember her filibuster. The next campaign, the next woman who runs in Texas, will feel more empowered to speak up about abortion and women’s rights. And so on and on, these small steps of social change, these small grains of progress, will combine into something truly awesome — a future where the rich don’t earn an order of magnitude or two more than the poor; a future where there IS equal pay for equal work; a future where we are taking care of our planet instead of stripping it dry. A future where we can look back on our lives and say, we helped change things for the better.

President Obama said something early in his presidency. He urged all of us to be the change, with him. I value that one brief phrase more than most other things he’s done or said (ok, not more than Obamacare, but it’s up there). It was never going to be about him, about one candidate or one law making our lives better. It was about all of us, doing the hard work to transform our country and our world into a better place to live. I hope you keep these words in mind, not just this election season, but whenever social change seems impossible and progress seems fleeting. With small steps, we will get to a better place, together.

Ada Lovelace Day: Advanced Language Technologies with Prof. Lee

October 16, 2014

This post is in response to a prompt for Ada Lovelace Day: writing about a woman in science, technology, engineering or maths whom I admire. I would like to write about Prof. Lillian Lee at Cornell University, whose class Advanced Language Technologies made me believe I could do math again.

As a child, I had this fascination / veneration of mathematics. My dad got his Master’s in math, and he would tutor me by giving me hard problems. Problems I could never be expected to solve. It was difficult, and frustrating, and the fact that we never talked about it contributed both to a worsening relationship with my father and to a feeling that I was hopeless at the subject. I did well at school, sure, but my parents were quick to point out that this was “weak American education,” that “higher maths” was this beautiful thing that was hopelessly outside of my reach. Their words rang true when I went to college and did disastrously in my Linear Algebra / Vector Calculus class. I remember getting like a 50% on the first assignment — my first failing grade in ages! — and asking the professor for help, and seeing his contempt at my pathetic work. I stuck through the class, just barely, then did not return for the second semester, convinced math was beyond me.

And yet, math was useful and beautiful and I kept coming back to it. I learned that, with math, one could analyze and even predict the behavior of human societies. I learned about complex systems, and how the interaction of simple rules led to the irreducible beauty of natural phenomena from atomic lattices to natural habitats to riots. I wanted to study human behavior at the group scale, to understand a sort of physics of sociology. That’s what I told my mom I would be working on in graduate school (I wasn’t talking so much to my dad at the time). She said that nobody was interested in the subject; that I should study linguistics, as she had; and that at any rate, I did not have the math aptitude to study something like that. Her words hurt.

Still, I gave grad school a try. I only got into one graduate program out of the five I applied to, but it was probably my favorite of the five — a program in Information Science at Cornell University, young and small and full of academics asking precisely the kinds of questions I was interested in: what motivates group behavior? How do societies form and collapse? What are the socio-physical forces acting upon friend groups, communities and whole countries to enact global change? The program also had a rigorous course requirement — seven graduate courses, in sub-fields ranging from technology in its sociocultural context (with Prof. Phoebe Sengers, another woman in science who inspired me!) to advanced natural language technologies with Prof. Lee. I remember fellow grad students speaking of Prof. Lee’s class with fear — the math was too hard, her standards too exacting, the subject matter too abstract. It was with a lot of nervousness, remembering my mother’s words about my math-inadequacy, that I went to the first day of class.

I expected twenty students, and was surprised to see only six or seven, including a couple of my friends. Still, the atmosphere was tense — little eye contact, little conversation before class. I remember Prof. Lee going up to the board and starting the first lecture.

Prof. Lee started class off with a Nabokov quote. Then she talked about language, linguistics, and what computer science tries to do differently from computational linguistics, and how it’s better by being simpler. I kept waiting for my eyes to glaze over, for the math to overwhelm me. Instead, Prof. Lee patiently walked us through tf-idf — one of the core formulae in natural language processing, developed by a woman — Karen Spärck Jones. I followed the explanation. I understood.
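The formula itself is simple enough to fit in a few lines. Here is a toy Python sketch of the classic tf-idf weighting (my own illustration, not taken from Prof. Lee's lecture, and with the most basic variant of the formula):

```python
import math

def tf_idf(term, doc, corpus):
    """Classic tf-idf: term frequency times inverse document frequency.

    The Spärck Jones insight: a term matters more the more often it
    appears in a document, and less the more documents it appears in
    across the whole corpus.
    """
    tf = doc.count(term) / len(doc)                 # term frequency in this document
    df = sum(1 for d in corpus if term in d)        # document frequency across the corpus
    idf = math.log(len(corpus) / df) if df else 0.0  # inverse document frequency
    return tf * idf

corpus = [
    ["the", "cat", "sat"],
    ["the", "dog", "ran"],
    ["the", "cat", "ran"],
]

# "the" appears in every document, so its idf (and hence its tf-idf) is zero.
print(tf_idf("the", corpus[0], corpus))  # 0.0
# "dog" appears in only one document, so it gets a positive weight there.
print(tf_idf("dog", corpus[1], corpus) > 0)  # True
```

Real systems add smoothing and normalization on top of this, but the core idea is just these two multiplied quantities.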

Surely this was just the first day, I told myself. Surely, things were going to get far too complicated for us later. I went back for the second class.

Prof. Lee had us break up into study groups and tasked each group with compiling lecture notes *ahead of class*, so that we could better understand the material. She warned us when a topic was going to be especially difficult (like topic modeling and Latent Dirichlet Allocation), and she encouraged us to work together if we did not understand a concept or a problem. She was not condescending. She talked fast and thought fast, and sure, she was intimidating, but she was kind and patient with all her students — a fact I did not realize for a long time, so intimidated was I by the class subject matter, so sure was I that I was going to fail.

Prof. Lee was also tough. I remember her calling me and my friends on the carpet when one of us had plagiarized notes from a textbook. Again, though, she did not belittle us or humiliate us — she expected us to do better. We worked together with the student who plagiarized to help them understand why it was wrong to do so — in their culture, copying a textbook was the norm — and we did not repeat our mistake.

Still, when the time came for the midterm, I felt pretty hopeless. The questions were hard and I did not have a good grasp of all of the material — I hadn’t studied hard enough. I got a grade in the 30-percent range, not failing only because of the curve, but far from an A.

It was high time for me to give up, to either accept a low grade or just drop the class altogether and find another way to satisfy that course requirement. And yet, I didn’t. Strangely, I felt motivated to study harder. I paid close attention to the complex lectures on Latent Semantic Analysis and context-free grammar. I read over my notes and did test problem after test problem. I stayed late for study parties with fellow grad students and started attending office hours — something I hadn’t done before.

The final exam was brutal. I literally walked uphill, in a snowstorm, to the exam hall, where I was presented with five problems in Advanced Language Technologies. I solved one to reasonable satisfaction, and made notes on the rest. It was the best I could do, but I knew that I was dealing with hard material. I walked out of the class with an A-, again thanks to the generous curve, feeling disappointed in myself for not really earning the grade, but at least proud of having learned something.

The next semester, Prof. Lee invited me to her seminar.

I expected that she, like most other authority figures in my life, would look down on my pathetic math aptitude. Instead, she wanted me to read cutting-edge research in the field! I joined, hesitant, but ever more excited. We met, talked about papers, joked. Prof. Lee kept things going at a quick pace, not letting us slack off, but always inviting conversation in computer science, even when it veered off into tangents about algorithm performance and syntax structure. I read the papers, even the ones full of math formulae, and slowly, they began to make sense to me.

At around the same time, I found that I was no longer struggling in my other grad school classes, especially the mathematical ones — I understood the material, I was able to read cutting-edge research and critique it. It wasn’t all thanks to Prof. Lee’s classes, but a good chunk of my newfound comfort with abstract topics like term-document matrices and linear programming was due to her teaching and to my hard work in her classes — hard work I would have never dared to do without her encouragement.

My thesis was, in a large part, a mathematical proof. I became the math guy on several of my academic projects. Today, I have a job at the forefront of industrial social media analytics, heavy in mathematical analysis. I explain multidimensional matrices to our company’s lawyers as part of developing our patent portfolio. Just this summer, I supervised a student who did some excellent Latent Dirichlet Allocation work on our internal data set to demonstrate the flow of news topics between journalists and high-profile media figures on social media. At no point did I stop to think, maybe I can’t do it. Maybe I am not good enough at math. I have Prof. Lee to thank for that.

You are an inspiration, Professor, and I hope you continue to enjoy an awesome, successful academic career, and introduce many more students — eager or nervous — to the mathematical analysis of natural language.


On the Facebook Emotion study

July 7, 2014

This is a response to a response. The article I’m responding to is here.

In short, Facebook research recently published a study about manipulating the moods of its users in an experimental setting. It is not clear how much the users were able to consent to the study. Much ink has been spilled over the study and, in particular, the author of the piece I am responding to is worried that the extreme reaction to the Facebook study will result in less openness, not more, from the industrial research community, with teams preferring to keep results internally rather than publishing them. The poster also points out other examples of experimentation, e.g. on Wikipedia, that did not raise so much outrage.

I sympathize with the poster’s point of view, but I must respectfully disagree. I will respond in reverse order. First, as to the other examples: I think user consent is important in ALL situations. A questionable experiment on Wikipedia should get as much criticism as a questionable experiment on Facebook. So the existence of one most certainly does not excuse the existence of the other.

Second, as to the point about the extreme reaction — I believe it was well-warranted. Most users of Facebook, Twitter and similarly-scaled social media sites really do not understand just how much power is concentrated in these sites. As one of my colleagues says, imagine if the US Federal Government asked a sizable fraction of American adults to report their age, gender, relationship status, location, likes, and so on in a centralized repository, and to keep that information up to date. There would be widespread outrage — and yet, that collection is precisely what Facebook has access to. The poster mentions Milgram’s experiments — those, due to logistical limitations, had a sample size in the low hundreds. A social media site has access to hundreds of millions of people and, as Facebook’s study shows, can manipulate their emotions. The implications are truly serious.

In light of these implications, it is natural for companies like Facebook to retreat into the safety of internal studies and never publish any results. It is a natural reaction of a guilty party to deny anything bad happened, downplay its seriousness, and move on. However, this sort of denial does nobody any good: the users continue to be experimented upon without their knowledge; the social media service loses credibility and suffers attrition; the academic community’s reputation is tainted and social science loses research funding that it desperately needs. The way to break out of this vicious cycle is openness and, yes, making mistakes and apologizing for them and suffering the consequences of temporary suspicion. Facebook would be much better off continuing to publish its research, conforming to standards of behavioral experimentation, providing users with the ability to consent, or not, to studies that manipulate their social media experience, and suffering through the growing pains of becoming a trusted social media research institution. The alternative is, as the poster suggests, increased secrecy, lack of oversight, and inevitably, a scandal the size of AOL’s leaked search data.

Industrial researchers can do better. We should be willing to expose our world-class science to the rigorous examination of the public and respond to criticism in a mature and responsible way. That is our obligation to the billions of human beings who use our services.

Isla Vista killings

May 30, 2014

I grieve for the victims of the Isla Vista killings. I can only hope that, in time, they will find closure and healing.

I am also angry about the mainstream media narrative that this was yet another killing by a lone gunman with mental health issues. I want to make it clear that I am not downplaying the killer’s mental health problems — from what I’ve read and heard, he had them. At the same time, I feel like the media narrative of this shooting as an isolated incident misses out on the fact that it’s part of a larger pattern of violence perpetrated by men in this country.

There have been some awesome and informative posts on this subject, especially here and here. For a more detailed look at the problem of violence among men, especially young men, see here (Paul Kivel). I am not going to repeat what they say — please follow the links instead! I would like to add my voice on the subject because of my graduate training in sociology and cognitive science; however, I recognize that my voice is privileged (I am myself a young white male) and there are many different perspectives on this issue.

Now that you’re back (or opened those stories in three other tabs, or just ignored them), here is my take. The epidemic of violent shooting sprees in the US is just one symptom of a larger problem of a culture of violence in the US. We, American citizens, won’t be able to stop or minimize these horrible events without addressing the larger problem. However, a particularly pernicious side effect of the culture of violence is that it makes itself hard to recognize as the real culprit; a necessary first step to addressing the deep issue is recognizing that it exists in the first place.

I am not here talking about conspiracy theories; the NRA, while very powerful and interested in selling guns, is not the villain in this story. Rather, the villains – the participants – are all of us, insofar as we live inside the culture of violence in the first place. Killing, attacking, hurting — especially by privileged groups (like men) over marginalized groups (like women, people of color, trans people, disabled people, etc.) — are so enmeshed in our subconscious that we can’t easily fight them. Our brains recoil from the horror of Isla Vista, but some part of them, I would wager, has become desensitized to these events; it treats them with the same level of alarm as news of a distant storm or an earthquake. Terrible, to be sure, but inevitable.

In fact, weather is a particularly interesting example to use, because, I would argue, the way we react to these killings is the way we react to extreme weather in the context of climate change. Extreme weather events are similarly horrible and violent. They, like mass shootings, are symptoms of a larger problem – a drastically changing climate due to human activity. Furthermore, just as with mass shootings, it is *impossible* to draw a causal link from climate change to a particular hurricane or mudslide. Nevertheless, no matter how good we get at building levees or early warning systems, the extreme weather events will keep taking (ever more!) lives until we deal with the larger problem. Finally, just as with mass shootings, we are so enmeshed in the changing climate – it is literally everywhere, outside – that it’s very hard for us to accept that there’s something wrong with reality most of the time, and it manifests as these occasional catastrophes. It is much easier for our brains to theorize that the real world is fine as it is – after all, we’ve spent so much time adapting to live in it – but occasionally, terrible things happen.

I could go on and on giving examples. The global financial crisis as symptom of extreme income inequality. Terrorism as symptom of imperialism and colonialism. Etc. I am sure there are many esteemed academics who have been / are / will be writing treatises on this issue. I look forward to their research. In the scope of this blog, however, I don’t want to formulate a grand scientific theory. I want to share the concept with a wider audience and hope that it will help us all, slowly, to become aware of and critical of our violent culture. Changing it is the next step, and it will not happen overnight. Only through the concerted effort of all us citizens in a wide variety of causes might the systemic problem finally begin to recede. To stop gunman violence we will have to better educate our kids about privilege. We will have to have more restrictive, and more respectful, rules on gun legislation — rules that make sure gun owners understand they are wielding a deadly weapon and keep such weapons out of the hands of those with a history of violence, drug abuse, and other issues. We will, yes, have to have a better mental health policy and an approach focused on healing and acceptance rather than on exclusion and othering for mentally ill people — but that is only part of the solution. We will also have to promote non-violent, or less-violent art and mass media, so that our movies and our books and our video games are not *mostly* about killing, stealing, and rape, but also about friendship, cooperation, and understanding. The list goes on.

The Isla Vista killer was a sick young man, and, thankfully, most of us will not follow in his path. That does not mean, however, that his actions, his attitude towards women, are not a little reflected in every one of us, especially in the privileged among us. We would do well to remember that, and to try to make a better, less violent, world together.

Update: Shakesville has a great post on misogynist culture and geek guys’ reactions to the Isla Vista killings here. I very much recommend it!

Net Neutrality

May 9, 2014

As I write this, the FCC is considering changing the rules for providing content over the Internet. If the change goes through, data streaming over the Internet will be separated into two streams – “fast lane” and everything else.

In the short term, this change may bring us faster Netflix access and more high-quality TV shows and movies, and that will be wonderful. In the long term, this change would set a dangerous precedent – that, at the level of bits, content is not just one homogeneous thing.

It would be impossible to argue that all information is the same. 2+2 is information. Poems are information. Hate speech is information. It is incumbent upon us, as a society with Internet access, to deal with that variety — to protect our minors from information that would hurt them (though the scope of such information has been drawn far too wide, in my opinion); to be able to react, and weigh in, and talk to each other about what we see; to support and criticize, publicly and privately, the bits that we encounter.

It is also incumbent upon us not to put up locks or gates that bar the spread of information. As innocuous as a “fast lane” decision might appear, it is precisely the first step toward such a lock. As soon as we introduce the notion that some information is easier to access, the power structures within our society will seek to relegate undesirable information to ever more-locked, ever more-barred conditions. Environmental websites? Too radical. Christian fundamentalist blogs? Too conservative. News organizations that disagree with the mainstream opinion? Too different, we will say, and we will put up another lock.

That way lies totalitarianism, stagnation, ossification. I sincerely hope we do not go down that path, but instead move away from it — to a world where more people can access more web pages, regardless of race, class, gender, SES, and so on. If my writing on this seems a bit dramatic, that is because this is an issue worthy of drama – information is our currency, our brain-surrogate, our social bonds. We should treat it with the utmost respect. We should nurture its freedom, not lock it away.

I hope the FCC listens to my voice, and others’, and does not go through with its decision. I hope the Net remains Neutral, and free.