An Uncertain Future - The Evolving Field of the Ethics of Technology and AI

 

In the world of artificial intelligence ethics, the Australian philosopher Seth Lazar not only provides helpful background on the field as a whole but also prompts us to consider which values matter most to us. On its surface, his paper does not look like one that would justify such pondering. But once he introduces his thought experiment “pre-Emptopolis,” every one of us is implicated, whether we like it or not. The crux of this future, which is less dystopian than it is intimate and inevitable, rests in how we view human nature and our ability, or right, to govern. It is an interesting idea to work through, to say the least, but its appeal ends there. The notion that humans could be confined to a fixed set of parameters remains an awful one. Not only would the consequences of having our lives dictated by algorithms be serious; the very feasibility of such a scheme seems to run against everything we know and hope to strive for as “free” beings. Pre-emptive algorithmic governance like pre-Emptopolis should be avoided at all costs because it infringes upon our human values, further cementing concerns that algorithmic alignment will quickly escape our control.

To meaningfully rebut the proposition of pre-Emptopolis, we must first humor the points of the thought experiment to their fullest extent, both good and bad. Before doing so, however, I want to reiterate that Lazar is in no way defending this idea; he is simply proposing it to get our wheels turning. Although I do not want to speak for everyone, the idea that, at the end of the day, we are free to do what we want is not only comforting but intrinsic to our experience of life thus far. Lazar introduces pre-Emptopolis after a dive into what he calls the “Algorithmic City,” a digital public space whose edges and nodes are dynamic and continuously updated. Viewing algorithms in the context of a physical space like a city helps us conceptualize important aspects of that space and the power structures at play. Among these power structures is the concept of “power over,” the most important pillar for our exploration of pre-Emptopolis. Although we will address the concept more fully later on, for now, think of this power as belonging to the people or institutions that control the systems and manage the algorithms. The last concepts introduced in Lazar’s foundational discussion of the Algorithmic City come from more general ideals of political philosophy. These ideas, or rather virtues, as Lazar calls them, are publicity and resistibility. Publicity, in practice, is the requirement that people know what the law is; it must be public for it to work as intended. An individual cannot regulate their behavior in accordance with the law if they do not know what it is. Resistibility, the second virtue, is largely conditional on publicity: to resist a law, you have to know what it is. Moreover, the point of the law is not to force compliance at every turn. People should have the option not to follow it, leaving non-compliance as a possible path to take.

Having briefly set up the catalyst for Lazar’s thought experiment, let us now work through his contentious prospect. Pre-Emptopolis takes its name from the term “pre-emptive”; if one understands the idea of pre-emptive governance, the thought experiment becomes relatively intuitive. Still, it is essential to look at the manner in which Lazar pitches the idea to understand what he means:

 

 “Physical artifacts and laws are also limited in the control they can exercise: walls can be surmounted or dug under; constitutive laws can determine whether a given kind of social relation is legally recognised but cannot (without enforcement) prevent people from participating in the legally-unrecognized counterpart. By contrast, algorithmic intermediaries operate in an environment that is, in principle, perfectly malleable and dynamically updatable, which can be fine-tuned and personalized at a low cost, and which can exercise in-principle perfect control of behavior.”[1]

 

It is my understanding that pre-Emptopolis is a scenario in which, even if someone wanted to, they could not act against pre-established parameters. Think about X (formerly Twitter), for example. If I wanted to post something that violated the community guidelines, I could freely do so; it is simply likely that the post would be taken down shortly after being published. In pre-Emptopolis, I would not be able to post it in the first place. In its most extreme form, pre-Emptopolis would not allow a person to violate any law at all, matching, in Lazar’s words, “[the] most ambitious projections of what the ‘metaverse’ might be like.”[2] At first glance, this pitch sounds quite appealing. Would you rather live in a world where crime is possible, or one where it is not? I am willing to bet that, except in extenuating circumstances, most people would opt for as little crime as possible, myself included. That being said, let us ponder what a world in which it is impossible to act contrary to norms would look like.
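To make this contrast concrete, here is a minimal sketch in Python of the difference between today’s reactive moderation and pre-Emptopolis-style pre-emptive enforcement. The violates_guidelines check is a hypothetical placeholder of my own, not any real platform’s API:

```python
# Toy illustration: reactive vs. pre-emptive moderation.
# `violates_guidelines` is a hypothetical stand-in for whatever
# classifier or rule set a real platform would actually use.

def violates_guidelines(post: str) -> bool:
    banned_phrases = ["forbidden phrase"]  # placeholder rule set
    return any(phrase in post.lower() for phrase in banned_phrases)

def reactive_platform(post: str, feed: list[str]) -> None:
    """Today's model: the violating act is possible, then undone."""
    feed.append(post)               # the post goes up...
    if violates_guidelines(post):
        feed.remove(post)           # ...and is taken down after the fact

def preemptive_platform(post: str, feed: list[str]) -> None:
    """Pre-Emptopolis: the violating act never becomes possible."""
    if violates_guidelines(post):
        return                      # the option simply does not exist
    feed.append(post)
```

The point is structural rather than practical: under the reactive model the violating act exists, however briefly, while under the pre-emptive model it is never available to choose at all.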

At first, everything is perfect. Peace and tranquility flow through the edges and nodes of our algorithmic society and, subsequently, into our bodies. Crime rates drop dramatically, and, empirically speaking, daily life is “better.” This haven of hope and perfection has been curated for quite some time, and it seems that, at least for the foreseeable future, we have finally worked out the “bad” and the “cruelty” present in each and every one of us. Eventually, although it is not reflected in the constant data collection and crime statistics, cracks appear in the hull of this ship, and our voyage toward pre-emptive governance begins to sink. Strangely, however, the problems that arise have nothing to do with how many hate crimes are committed; instead, citizens start to question what little agency they have. Everyone is content and appears to operate autonomously, yet a certain feeling of monotony looms over them. Soon, a few firestarters begin protesting the governing structures at play. Though they cannot revolt or act in a way that defies the established norms, many activists discover that engaging in symbolic acts of defiance brings them a feeling they have grown all too unfamiliar with: free will.

Although this mock reaction to pre-Emptopolis makes a number of assumptions, the takeaway essential to understanding the failings of this dystopian future remains the same: free will is necessary for humans to live a prosperous life. Eliminating crime altogether has no direct impact on free will. Instead, it is the algorithmic powers in place that make us question whether this new world is a predetermined one. Gardner Williams, a philosopher who worked extensively on the subject, frames free will this way: “Both determinism and free will are true. They are perfectly consistent with each other. Freedom is voluntary exertion which results in the effects desired. It is doing what we wish to do.”[3] With this working definition in hand, it is worth applying it to pre-Emptopolis. Logically speaking, if a set of options is outlawed altogether and therefore unavailable to exert, then the actions we do choose would still count as voluntary. How does this help us, then? Williams, even though he published most of his work in the 1940s, contends that “Nobody can do anything deliberately or freely which he does not prefer to do or want to do or care about doing. Only a superior external force can put a man through the motions of doing such a thing, and then obviously his act is not free. Furthermore, in a sense, he has not really performed it at all.”[4] The superior external force at play here would, obviously, be the algorithmic systems curating our behavior, or lack thereof.

Admittedly, however, a few glaring holes remain in this argument. As previously mentioned, Williams published most of his work almost a century ago, and the prospect of our modern technological world was undoubtedly not apparent to him. Yet he does provide a potential objection that still applies to the argument posed above. If we work only from the definition of freedom as “voluntary exertion,” the discussion concludes too quickly. The following passage shows a plausible rebuttal to the previous assertion:

 

“The man who is caused to prefer A, and does A successfully, and thus is free, could have done B deliberately and freely if he had preferred. He is thus doubly free. However he could not have done B deliberately and freely, since he did not prefer. And the causes did actually compel him to prefer A. He had no choice as to what he should choose. This will make him seem unfree to some who forget the definition of freedom. Freedom is successful voluntary action, and an act is doubly free when one could have done something else deliberately and freely if he had preferred.”[5]

 

If freedom is “successful voluntary action” and, in pre-Emptopolis, there is only one available action, then, technically, individuals have freedom. This kind of freedom, however, should not be what we settle for in exchange for eliminating crime. The severity of crimes and the weight they carry, on both the individual and the collective scale, should not trump the kind of freedom we currently enjoy. In implementing the algorithmic power of pre-Emptopolis, our society would be signing an everlasting and irresistible social contract. Like any other contract, a compromise needs to be reached by both parties, and usually this compromise calls for some level of sacrifice. Sacrificing our ability to have the option to choose is not worth what we get in return.

Even if the prospect of pre-Emptopolis proves a futile endeavor, there are still ways we could, and should, try to align AI systems with human values. The overarching problem with alignment is, as Lazar states, that “this means focusing narrowly on substantive justification–ensuring that these incredibly powerful future systems do only what we want them to do.”[6] Because of the inherently opaque qualities of AI systems, it is often difficult to track the steps taken to produce an output. Most AI and machine learning systems have no way to explain their answers; they simply produce one given the prompt and the data they have access to. In low-stakes settings, like using ChatGPT to summarize a book, we may feel comfortable simply trusting what we are told. When the implications become more consequential, though, many of us yearn for an explanation. If a justification of what a computer did cannot be obtained, the next best place to look for answers is its designers.

“First, the designers of algorithmic intermediaries have so many more decisions to make than those who govern social relations in the analog city. As well as having to decide, for every option, whether to endorse or prohibit it (lacking the intermediate option), they must also determine which kinds of act to afford, which frustrate, which to promote and which demote.”[7] One question we tend to gloss over, and one that Lazar addresses, is who gets to design and monitor the algorithmic powers at play. Before we can even decide whether our values should be aligned with these algorithms, the first crucial step is to ensure that their creation and upkeep are a democratic process. As Lazar notes, many AI and ML projects have come from companies, not governments. While there is nothing inherently wrong with this, and these institutions have every right to act as they have, it is essential that some amount of oversight be put in place as the prominence of AI grows.
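As a rough way to visualize the decision space Lazar describes, consider a minimal sketch in Python; the enum framing is my own illustration of the terms in the quote above, not anything drawn from his paper:

```python
from enum import Enum

class AnalogTreatment(Enum):
    """The analog lawmaker's binary choice for a kind of act."""
    ENDORSE = "legally recognized"
    PROHIBIT = "legally forbidden"

class AlgorithmicTreatment(Enum):
    """The intermediary designer must grade every kind of act."""
    AFFORD = "make the act possible"
    FRUSTRATE = "make the act difficult or impossible"
    PROMOTE = "actively surface and encourage the act"
    DEMOTE = "bury the act without forbidding it"

# The designer must assign one of these four treatments to every act
# the platform could mediate, a far larger task than the
# endorse/prohibit binary of analog law.
```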

Even more, decisions about what we do with this technology should be open to the public, and a democratic process should be followed whenever possible. One major drawback, and a continually difficult problem in how these systems are designed, has to do with transparency. Lazar explains that “The most powerful AI systems are deeply, fundamentally inscrutable. They offer us no guarantees. It is very hard to see how they could satisfy even basic criteria of legitimacy, like a publicity requirement.”[8] Even if we solve that problem, that is, even if we find out what happens “behind the scenes,” there remains a disconnect between the experts who can understand these nuanced operations and a general public that is, understandably, ignorant of the technical steps involved. C. Thi Nguyen, a philosopher at the University of Utah, articulates this same general idea and names it the “epistemic intrusion argument.” His reservations, and the basis of the argument, concern the relationship between experts and the general public: “Transparency can have an even more intrusive effect. It can change what experts do, pressuring experts to only act in ways that readily admit of justification in non-expert terms. The demand for transparency can undermine the practical application of expertise itself.”[9] Although it will be a tedious task, if our world decides to move forward with AI, more than a few guardrails must be established.

It may seem a daunting task: how can we possibly contend with the prospect of this new, powerful technology? Should we sacrifice some of the rights we currently enjoy for the possible benefits that AI can bring? We remain quite far from an answer, and, as algorithms and AI grow more prevalent, the urgency of this decision becomes ever clearer. At the end of the day, the core of the question has very little to do with technical computer-science concerns. The real question, the one this paper tries to get closer to answering, is an ethical one. Since the emergence of more advanced technology, philosophers have been at the forefront of this uphill battle. In the late 20th century, James H. Moor began working through some of these questions. Although the technology he was addressing is utterly different from what we have today, his core queries remain helpful: “Computer ethics requires us to think anew about the nature of computer technology and our values.”[10] It is my belief, as well as that of many others, that our values should come first every single time. Even in a situation where one facet of life could be so drastically improved that choosing AI seems like a “no-brainer,” I contend that we should turn away. Nothing, at the end of the day, can surpass our human values, no matter how great the benefits may seem. And even if, in the future, we find a way to perfectly align our values, I still have reservations: we must, as humans, interact with each other naturally. This may seem like a pessimistic or old-fashioned argument. That is because it is. Technology and AI can help make our lives better, yes. But it should never, ever, come at the cost of our liberties. That is a social contract no one wants to enter.

Bibliography

Lazar, Seth. “Connected by Code: Algorithmic Intermediaries and Political Philosophy.” Unpublished manuscript.

Moor, James H. “What Is Computer Ethics?” Metaphilosophy 16, no. 4 (1985).

Nguyen, C. Thi. “Transparency Is Surveillance.” Philosophy and Phenomenological Research.

Williams, Gardner. “Free-Will and Determinism.” The Journal of Philosophy 38, no. 26 (1941): 701–12. https://doi.org/10.2307/2018089.


[1] Lazar, Seth. “Connected by Code: Algorithmic Intermediaries and Political Philosophy.” Unpublished manuscript.

[2] Ibid.

[3] Williams, Gardner. “Free-Will and Determinism.” The Journal of Philosophy 38, no. 26 (1941): 701–12. https://doi.org/10.2307/2018089.

[4] Ibid.

[5] Ibid.

[6] Lazar, Seth. “Connected by Code: Algorithmic Intermediaries and Political Philosophy.” Unpublished manuscript.

[7] Ibid.

[8] Ibid.

[9] Nguyen, C. Thi. “Transparency Is Surveillance.” Philosophy and Phenomenological Research.

[10] Moor, James H. “What Is Computer Ethics?” Metaphilosophy 16, no. 4 (1985).
