Modern political theory is closely tied to the concept of representation. We say that our representatives in parliament express our interests, yet what it means to "represent" has remained ambiguous throughout the history of political philosophy. One way to address the question "What does it mean to 'represent'?" is to draw on semiotics and analyse representation in terms of the signifier and the signified. One of the earliest definitions of a sign, due to Charles Sanders Peirce, holds that a sign "stands for" something else, the way a lieutenant (literally, a place-holder) stands in for someone. In this sense, our deputies are our placeholders: they are supposed to represent us, substituting for us in the performance of certain functions.
However, as we understand, "representing" is not all there is. The recent work of Franklin Rudolf Ankersmit, a remarkable historian and philosopher of history, shows that the very idea of representation in parliament (the idea of parliamentarism) emerges in parallel with historicism, the idea of representing a past event in a historical narrative. Representing events in a narrative and representing people in parliament are two sides of the same process. To describe and analyse this process, Ankersmit uses the vocabulary of aesthetics, that is, of contemporary art theory. Suddenly we discover that such a seemingly dull area of philosophy as aesthetic theory turns out to be a common denominator and a powerful theoretical resource for addressing questions in both political theory (the question of representation) and the theory of history (the question of representing events in a narrative).
Nevertheless, this is only one approach. Another is to look at representation as delegation. After all, our deputies do not merely stand for us; representation is also an operation of authorization, a transfer of the right to act. Recall the famous slogan of the Bolotnaya Square protests, "You do not even represent us" (a pun in Russian, where the verb also means "to imagine"). Thanks to Thomas Hobbes, we know that these are two closely related aspects of the same process: representing and delegating. To represent means, on the one hand, to signify the one you represent and, on the other, to be authorized to perform certain actions on their behalf. Here another interesting borderline area is revealed, this time not between aesthetics, history, and political theory but between political theory and the sociology of technology, because delegation is a central concept in the modern sociology of technology. Thanks to the work of Bruno Latour and Michel Callon, we know that our relationships with smartphones, cars, and all the technical devices that surround us are relationships of delegation. We authorize the navigator to calculate the route home during Moscow rush hour. We have authorized the smartphone to store the phone numbers we used to memorize 10–15 years ago. We have delegated to the devices around us many tasks we once performed ourselves.
Thus, if technological evolution is a process of complicating the balance of delegation, of transferring ever more tasks to technical intermediaries, then political theory is likewise precisely about the transfer of the right to act. What, then, is the difference between a deputy and a smartphone? A deputy is not just a lieutenant who stands in for us; a deputy is also a kind of technical device, and a technical device, Latour argues, is a kind of deputy. Your smartphone knows more about you, and represents you far better, than your representative in parliament. But what does it mean to delegate, and what exactly is delegated? We can distinguish three axes of delegation.
The first axis, call it the legislative axis, is the delegation of calculation, control, and forecasting to devices. Following an old habit, we call this framing: we delegate to the navigator the task of framing the trip home. It knows the traffic and the city better than we do, evaluates and calculates the situation better than we do, and offers us a specific course of action.
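This delegated "framing" is, at bottom, an optimization problem. A minimal sketch of what a navigator computes might be an ordinary shortest-path search over traffic-weighted travel times; the toy road network and place names below are invented for illustration, not taken from any real routing system:

```python
import heapq

def best_route(graph, start, goal):
    """Dijkstra's shortest path over traffic-weighted travel times.

    graph: dict mapping node -> list of (neighbour, minutes) pairs.
    Returns (total_minutes, route), or (inf, []) if the goal is unreachable.
    """
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, minutes in graph.get(node, []):
            if neighbour not in seen:
                heapq.heappush(queue, (cost + minutes, neighbour, path + [neighbour]))
    return float("inf"), []

# Hypothetical road network: edge weights are current travel times in minutes.
city = {
    "home": [("ring_road", 25), ("side_street", 10)],
    "side_street": [("centre", 30)],
    "ring_road": [("centre", 12)],
    "centre": [],
}
print(best_route(city, "home", "centre"))  # -> (37, ['home', 'ring_road', 'centre'])
```

The point of the sketch is that the device does not merely display options: it evaluates the whole situation and hands us a single recommended course of action.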
Not long ago, there was a telling case involving Taya Kyle, the widow of the famous American sniper Chris Kyle, and a "smart rifle" developed by the startup TrackingPoint. A smart rifle is, so to speak, an Uber for snipers: it calculates the wind and the distance to the target and guides you on how to act. You pull the trigger, but it guides your hand to align the sight with the target perfectly and decides the exact moment the shot will be fired; still, you have to pull the trigger yourself. In a shooting competition, Taya Kyle, a woman with poor eyesight, defeated one of the best snipers in the US, armed with his favourite rifle, because she had this "Uber" on her side. In this situation, the Uber for shooting, like your navigator, performs a vital function: calculating the situation and choosing the optimal strategy of action.
However, this is only legislative power. There is also a second axis, the axis of execution, of actual action: the axis of framing defines the situation, while the axis of execution acts within it. You must pull the trigger yourself; Taya Kyle pulls the trigger herself.
Similarly, with the navigator: it frames the route, but the car does not drive itself. Yet today we know that a car can drive itself perfectly well. From the history of drone use, we know that a drone controlled from Nevada, if it loses connection with the control centre, is supposed to hover over the place of lost contact, make a few circles, and, if contact is not restored, return to base. In that case, the tactical operation is aborted, and soldiers on the ground may die.
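The lost-link protocol described above can be sketched as a tiny decision rule. Everything here (the mode names, the circle limit, the function itself) is a hypothetical illustration of the logic, not any real flight-control software:

```python
from enum import Enum, auto

class Mode(Enum):
    MISSION = auto()          # link is up, fly the mission
    LOITER = auto()           # link lost, circle over the point of lost contact
    RETURN_TO_BASE = auto()   # contact not restored, abort and head home

def lost_link_step(mode, link_ok, circles_flown, max_circles=3):
    """One decision step of a lost-link fail-safe.

    While the link is up, keep flying the mission. On lost link, loiter
    for up to max_circles circles; if contact is still not restored,
    return to base (the mission is aborted, never continued autonomously).
    """
    if link_ok:
        return Mode.MISSION
    if mode in (Mode.MISSION, Mode.LOITER) and circles_flown < max_circles:
        return Mode.LOITER
    return Mode.RETURN_TO_BASE

# Example: link drops and never comes back.
print(lost_link_step(Mode.MISSION, False, 0))  # -> Mode.LOITER
print(lost_link_step(Mode.LOITER, False, 3))   # -> Mode.RETURN_TO_BASE
```

Note that in this rule the executive power stays with the humans: without the link, the machine's only authorized action is to give up and come home.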
Investigations by Matthew Powers and Hugh Gusterson in 2015 showed that the software provides for continuing the mission in autonomous mode. This means that a drone that loses contact with Nevada carries on with the mission and continues to kill people on the basis of its own framing, its own calculation of the situation. It turns out that the accuracy of strikes is considerably higher: in a sense, the two people sitting in a bunker in Nevada, the shooter and the spotter, get in the drone's way. Hence, on the one hand, a vast number of ethical dilemmas arise around the autonomous killing of people by machines.
On the other hand, we understand that if a machine calculates shooting situations better, at some point it will be delegated not only legislative power but also executive power. Then there will be no difference between a drone that both makes and implements decisions and, say, the AlphaGo algorithm, which first calculates the situation on the board and then makes a move: the same operations of fully delegated situation calculation and action execution.
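The fusion of the two axes can be shown in a few lines: one function scores the options (the legislative moment), another picks and executes the best one (the executive moment), and no human intervenes between them. The state, moves, and scoring heuristic below are invented purely for illustration:

```python
def evaluate(state, move):
    """'Legislative' power: score a candidate move.
    Toy heuristic: prefer moves that bring the state close to 10."""
    return -abs((state + move) - 10)

def decide_and_act(state, moves):
    """Fully delegated loop: the machine both frames the situation
    (scores every option) and executes the chosen action itself."""
    best = max(moves, key=lambda m: evaluate(state, m))
    return state + best  # 'executive' power: the action is carried out

state = 3
for _ in range(3):
    state = decide_and_act(state, [-1, 1, 2, 3])
print(state)  # -> 10
```

Structurally, this calculate-then-act loop is the same whether the "move" is a stone on a Go board or a shot from a drone; that is exactly what makes the delegation of executive power so consequential.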
However, there is a third axis, the axis of judicial power: the axis of qualifying decisions, of ultimately deciding whether a decision was correct. Here, no delegation is observed yet, much as in Viktor Pelevin's story about "Savely Skotenko's targeting tables" (meaning the story "Zenith Codes of Al-Efesbi", editor's note). A drone can be delegated the calculation of the situation; algorithms have long been calculating many situations for us. It can even be delegated the right to implement the decision, the actual execution of the action. We know from electronic trading that in some markets where, a few years ago, stocks were held for an average of four years, today they are held for four seconds, because we have delegated to the algorithm the right to buy or sell specific securities with our money.
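That standing authorization can be sketched as a toy trading rule; the thresholds and the rule itself are hypothetical, a minimal stand-in for the far more elaborate strategies real high-frequency systems execute on their owners' behalf:

```python
def trade_signal(price, moving_avg, band=0.02):
    """A toy rule the algorithm is 'authorized' to execute with our money:
    buy when the price dips below the moving average by more than the band,
    sell when it rises above it by more than the band, otherwise hold."""
    if price < moving_avg * (1 - band):
        return "BUY"
    if price > moving_avg * (1 + band):
        return "SELL"
    return "HOLD"

print(trade_signal(97.0, 100.0))   # -> BUY
print(trade_signal(103.0, 100.0))  # -> SELL
print(trade_signal(100.5, 100.0))  # -> HOLD
```

Once such a rule is deployed, both the calculation and the execution happen in milliseconds; only the later judgment of whether the trades were justified remains with humans.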
Moreover, fortunes are made in those four seconds. What we have not delegated is the right to judge: we have retained the right and the duty to decide whether a particular action was justified and the calculation correct. Even in Pelevin's story, the drones must gather all the necessary information and present a convincing narrative to the court of public opinion, and that public consists of humans, not drones.
However, delegating judicial powers to algorithms that would make impartial, balanced decisions about the impartiality and balance of other decisions is entirely conceivable today. In 2016, we conducted a survey analysing how techno-optimistic citizens of the Russian Federation are and found an interesting pattern: support for introducing a robot judge, one that would make decisions instead of a human judge, is significantly higher in Russia than in many European countries, and the share of supporters is already considerable. The same goes for support for autonomous vehicles.
Today, remarkable techno-optimism coexists with political pessimism, with rather gloomy views of human nature and of the country's political system, and it is these views that spur techno-optimistic sentiment. In other words, judicial decisions, too, will be delegated at some point. We then face a rather curious situation from the perspective of political theory: by delegating calculation (legislation), execution (action), and qualification (judicial decision), we are essentially creating, together with the technical devices around us, a new political body. Such a political body, that is, the transition from delegation to authorization (here I refer to the work of Konstantin Gaaze), will mean that sovereignty is no longer obvious, and neither is the concept of representation.
Finally, it is far from clear what we are to do with our deputies. Within three to four years, political theory will no longer be able to ignore technological progress the way it does now. Technology is no longer a passive intermediary between people, no longer just another element in social and political relations. New political formations are emerging, a new political reality is emerging, and in it the devices around us will occupy a far more visible place.