Consider the following hypothetical.
Your government has enlisted a large number of assassins, mercenaries who wield hatchets and knives. They are stationed around the world in strategic locations. Unfortunately, they are a little stupid and clumsy. Because they are stupid, they need to be told exactly where to go and what to do at all times. Because they are clumsy, they sometimes kill the wrong person, or an entire group of people when targeting only one or two.
Thanks to this team of mercenaries, your government has been able to eliminate many enemy targets in difficult locations. Unfortunately, there have also been unintended casualties.
Would you support a government program to enlist and deploy such freelance assassins?
Now replace these assassins with mechanical devices. The devices need to be told exactly where to go and what to do at all times. They are effective, but their methods are coarse: an explosive device meant to target one person could also harm others. Nonetheless, they are state-of-the-art precision instruments that will help your government eliminate many enemy targets, thereby improving national security. Let's call these killing machines "drones."
Would you support a government program to enlist and deploy drones in this way? Is it any different from the case with human assassins?
To the human mind, hatchet-wielding assassins and remotely controlled killing machines are not the same. Moral psychology suggests that even if both methods result in identical outcomes, and even though the mechanical devices are ultimately controlled by people, actions mediated by machines can strike us as less morally problematic.
There's a cautionary tale here for both policy makers and the public as the nation debates targeted killings and the use of drones in the wake of Rand Paul's nearly 13-hour filibuster.
Consider a 2009 study by Harvard psychology professor Joshua Greene and colleagues. The researchers presented participants with versions of "trolley problems," thought experiments with roots in moral philosophy that have taken on new life in psychology experiments.
In the two trolley problems we'll consider, a regular guy (Joe) observes the following situation: an empty, runaway trolley car is speeding down the tracks toward five workmen ahead. If nothing is done, all five face certain death.
There happens to be a sixth workman with a heavy backpack standing next to Joe on a footbridge above the tracks. If the sixth workman were on the track instead of above it, the combined weight of his body plus the backpack would be enough to halt the empty trolley car, saving the five men. Unfortunately, it would lead to the certain death of the man with the backpack.
(Joe, our bystander, doesn't weigh enough to stop the trolley car were he the one on the track. And yes, this is all terribly unrealistic, but participants are told that this is exactly what would happen and that Joe knows it's exactly what would happen.)
Is it morally acceptable for Joe to push the man off the footbridge, killing him while preventing the deaths of the five workmen on the track?
If you feel uneasy saying yes, you're in good company. Most people find the idea of pushing the man off the footbridge less than morally palatable.
What if Joe is far away from the track but can push a button that opens a trap door, dropping the sixth man onto the track in front of the trolley and saving the five workmen?
Do the mediating factors of the button and trap door make it more acceptable for Joe to sacrifice the life of the single workman on the footbridge in order to prevent the deaths of the five workmen on the track?
In this case, participants aren't exactly cheering Joe on, but they find it more acceptable for him to push the button than to push the man when the two are standing next to each other. Notice, though, that in both cases the possible outcomes are the same: Joe's action would prevent five deaths at the cost of one. The only difference is whether a complex mechanical device stands between Joe's personal actions and the death of the sixth man.
Of course, psychological findings like these can't tell us whether or when targeted killings are morally defensible, or when it is or isn't appropriate to use drones. But they do tell us that our moral judgments can be affected by factors that arguably shouldn't be relevant to our actual decisions. Should it really matter whether you push a button or a person if the outcome is the same?
So here's my challenge. Whatever your position on the use of drones, consider a few thought experiments. Would you feel the same way if the drones were replaced by manned airplanes (flown, let's say, by pilots who will not be harmed), bringing human agency one step closer to the killing? Would you feel the same way if the drones were replaced by mercenaries? Would you feel the same way if the people doing the killing were our elected officials? And if your feeling about what's right or wrong changes across these cases, can you identify a defensible reason why?
You can keep up with more of what Tania Lombrozo is thinking on Twitter: @TaniaLombrozo