Questioning the “Normal”: Technology, Progress, and Human Life

by Nolen Gertz

With the rise of big data, the internet of things, machine learning, targeted advertising, facial recognition algorithms, virtual assistants, cyberbullying, cyberstalking, and cyberwarfare, we find more and more people and policy makers around the world debating whether technological advances are helping us or hurting us. Such debates often focus on trying to figure out a way to balance the need to preserve human values with the desire to not interfere with technological progress. The central problem that arises, then, is what to do when values and progress come into direct conflict with each other. Should we err on the side of caution and rein in companies like Google, Twitter, and Facebook so they do not interfere with personal privacy and national democracy? Or should we take a more pioneering perspective and view the occasional rights violation as a necessary risk that can be outweighed by the rewards for medicine, manufacturing, and media? Or should we try to find a middle path and have tech companies and policy makers work together to develop guidelines for “responsible research and innovation”?

Answering these questions can help us to respond reactively to conflicts between human values and technological progress as they arise. But these questions do not help us to proactively understand the nature and meaning of these conflicts, because they take for granted that we do not need to question the nature and meaning of the relationship between humans and technologies. To ask where we should draw the line between violation and innovation is to assume that we already know the difference between the world of morality and the world of machines, and that we simply need to figure out where best to set up our border controls. But if we cannot learn about what we consider morality to be without the use of books, chalkboards, computers, PowerPoint slides, Wikipedia pages, and YouTube tutorials, then we cannot meaningfully talk about morality without talking about machines. Likewise, if we cannot learn about what we consider machines to be without the use of language, norms, values, ideologies, dualisms, hierarchies, and power structures, then we cannot meaningfully talk about machines without talking about morality. In other words, we cannot set boundaries between morality and machines without doing violence to reality. Machines are already in our morality. Morality is already in our machines.

By investigating the nature and meaning of the relationship between humans and technologies, we can see that the most essential danger technologies pose to human norms and values is not one of violation, but of redefinition. As we grow used to the ubiquity and utility of technologies, we also grow used to how technologies shape the ways in which we see the world and act in the world. This does not mean that we should fear that conspiracy theories about Google, Twitter, and Facebook censoring some political views and promoting others are true; rather, we should fear that such conspiracy theories distract us from recognizing the role that Google, Twitter, and Facebook play in defining our political views in the first place. To worry about whether Google, Twitter, and Facebook are misinforming us is to have already accepted that using Search, Trends, and News Feeds to engage with the world is what it means to be “informed”. Thus rather than question the role of technologies in our lives, we only question whether technologies are performing their roles properly.

Technologies have not only redefined what we mean by being “informed”; they have also redefined what we mean by “progress”. Though we are well aware of how destructive technologies can be, we continue to use them. When a plane’s safety video announces that we need to turn off our electronic devices or risk crashing into the ocean, we don’t rush to find the off switch (assuming we even know where it is); we just laugh and keep checking Instagram until a flight attendant yells at us. When a pop-up asks us to confirm that we have read an app’s privacy policy before installing it, we roll our eyes and immediately click “I accept”. When a major manufacturer recalls millions of cars because a previously unknown computer error can occasionally cause a car to catch fire, we check to make sure our car wasn’t among the models recalled and then keep driving. Because today we define “progress” only in terms of technology, to avoid a technology out of fear is to risk being seen as afraid of progress, as being a Luddite, which, considering the dangers technologies actually pose to us, we apparently consider a fate worse than death.

As we have accepted that technological progress is human progress, we have likewise accepted that the only way to achieve progress in any endeavor is by using the most advanced technologies available. Even in the field of humanitarianism, more and more organizations are adopting drone technology as a way to carry out aid work. Though drones are of course most associated with life-taking rather than life-saving missions, efforts like the #Drones4Good social media campaign are trying to change the popular perception of drones. As the wording of the hashtag implies, the campaign is trying to get people to think of drones as mere tools, as devices that are not in themselves good or bad but can only be used for good or bad purposes. This campaign can then be seen as a parallel to Mark Zuckerberg’s attempts to get policy makers to think of Facebook as just a tool.

While such a view seems reasonable, it treats the issue of human-technology relations as a simple either/or, as though the only possible situation we face when using technologies is that either they control us or we control them. Such a view leaves out of consideration the fact that technologies can influence us without controlling us, by shaping how we see the world and how we act in it. For example, drones can shape how we make judgments about intervention. Because drones let us replace boots on the ground with bots in the sky, they make us much more likely to view intervention as a simple cost-benefit calculation in which drones are the least costly solution available, both in terms of possible money spent and possible lives lost. The issue here, however, is not merely that drones may lower the threshold for intervention (whether that intervention is for military or humanitarian purposes), but also that drones change the way we see the world.

As we grow more comfortable seeing drones not as weapons but as tools, we also grow more comfortable seeing events not as complex affairs to be considered with care but as problems to be solved with the best tools available. Having drones not only makes us more likely to use drones; it also makes us more likely to view the world through a problem-solving mindset. As I argue in my forthcoming book Nihilism and Technology, the problem-solving mindset is dangerous because as we increasingly come to see events as problems and technologies as solutions, we increasingly come to see life as a problem to be solved by technologies.

Wanting our lives to be easier is of course perfectly normal, but it is for this reason that the philosopher Friedrich Nietzsche warned that precisely what we need to investigate is what we take for granted as “normal”. And today nothing is more normal than watching TV, wearing a Fitbit, taking an Uber, following friends on Facebook, and attacking enemies on Twitter. Or, as I describe these activities in my book, nothing is more normal than avoiding consciousness, avoiding decision-making, avoiding powerlessness, avoiding individuality, and avoiding accountability. In other words, nothing is more normal than using technologies to avoid being human. Technologies are meant to help us avoid experiences we don’t like, and as Nietzsche pointed out, being human has historically been an experience we really don’t like.

Nolen Gertz is Assistant Professor of Applied Philosophy at the University of Twente. His research focuses primarily on the intersection of existential phenomenology and philosophy of technology. He is the author of The Philosophy of War and Exile (Palgrave Macmillan, 2014) and Nihilism and Technology (Rowman & Littlefield International, 2018).
[Photo Credit: Derek Torrellas, Jene Thomas]

1 Comment

  1. I agree wholeheartedly that technology tries to shape the way that we view the world. Certain news channels are geared toward liberal or conservative points of view. Also, the ads that pop up whenever we are surfing through social media are linked to how we should live our lives or how we need to change them for the better. Technology has helped us connect to others across the globe through social media and television, but instead of opening up our eyes it has narrowed our point of view. What we see is what others want us to see and think.
    The one good thing is how it has enabled us to be more innovative and to get household tasks done more quickly.
    If you could check out my own blog on technologies that would be greatly appreciated! https://shylasfascinatingblogs.blogspot.com/2018/09/humans-and-use-of-technology.html.
