by Olivia L. Meikle

Dear Diary,

In the year 2073, I will be turning 72 years old, and when I envision the future, it is informed by the current world and by history thus far. Looking back 50 years from 2023, we can see the early origins of the internet in the 1970s. Our view of the future is also informed by past conceptions of utopian and dystopian societies in fiction (Barreto, 2020). A utopian imagined future would uphold collectivism and equity through technology, while a dystopian future would weaponize technology to threaten and diminish individual rights (Fisher & Wright, 2001). In 2073, a technological dystopia will be our reality because digital technology mirrors society. By analyzing our historical digital evolution, we can identify how the future will likely amplify ethical issues such as wealth and class disparities, exploitation, systemic bias, surveillance and misinformation, as ever-quickening technological advancements reshape the public sphere.

“Datafied” from “Hands: Medium & Massage” series, by Merlyna Lim, 2018

Politics, Power & Tech Convergence

Barreto (2020) states in his lecture that the concept of a dystopian future is reactionary, rooted in the belief that, contrary to a utopian collectivist society, authoritarian uses of technology will lead humans toward the loss of individual identity and that vulnerable populations will be disproportionately impacted (11:00). For this idea to emerge and inspire authors to imagine future dystopic worlds, there had to be foundations for a potential distrust of technology.

Examining the Information Technology (IT) Revolution helps us see the shifts in society brought about by digital technologies. The IT Revolution began in the mid-1940s with the development of micro-electronics, computers and telecommunications, which led to the creation of the internet in 1969. The United States Department of Defense created the ARPANET during the Cold War as a decentralized communication network between the military and scientists (Castells, 2011, p. 46). It was designed to distribute responsibility and power to more parties, which meant it had great democratic potential (Lim, 2023, W2). In the 1980s, the network evolved into the Internet; in the ’90s, the Internet was privatized (Castells, 2011, p. 46).

The early Internet was developed within the most highly funded department in the US, made up largely of white male scientists, mathematicians and military officials in positions of power. When imagining the time and space of the future, we must also imagine the existing systems that would be reflected in our technology (Winner, 1980). Access to early computing carried gender and racial limitations that are reflected in modern search engines, facial recognition and AI. If exclusionary ideals are rooted in early tech, how does this carry through to future iterations of the technology and impact future generations? Based on this understanding, a future dystopic society would further exploit marginalized communities and maintain power hierarchies, especially now that methods of technology production are hidden from end users in the Western world.

Pervasive Computing & Capitalism

Post-privatization, the Internet expanded rapidly due to globalization. Once the internet took off, there was no looking back as it began to set exciting new boundaries for human communication. Before the 2000s, the exchange of information on the web was far less interactive and participatory than it became with the emergence of Web 2.0 and platforms such as Napster, which let users share music files directly between personal computers (O’Reilly, 2005). This period of the internet was informed by “peer-to-peer bottom-up values” rooted in collectivism (Pakman, 2019, 2:37). However, the increase in individual user autonomy threatened control and was questioned heavily by those in positions of power, including the US government, which called for regulation of the platform (Fisher & Wright, 2001). This period also marks the origins of ‘networking logic’ as a way of understanding how socially connected to, and dependent on, technology we were becoming (Castells, 2011, p. 71).

Fisher and Wright (2001) invoked William Ogburn’s theory of cultural lag, that “technology moves forward and the social institution lags in varying degrees,” to examine the critical responses to Napster (para. 9). The cultural lag frame is valuable for imagining the future of technology by analyzing the present. In 2023, technology is changing fast enough that we can now identify potential harms as they occur. Barreto (2020) highlights this idea using the concept of “anti-utopia” as a response to both utopian and dystopian visions of the future (19:22). We are currently offered what is marketed to us as a “perfect” world; however, scholarly research shows significant potential for future discrimination against vulnerable populations.

Furthermore, a transition from a peer-to-peer model to one dominated by “corporate growth-based capitalism” shifted the media landscape to elevate certain actors and platforms with large-scale monopolies over where social interaction occurs online (Pakman, 2019, 3:27). As media consumers, we need to question who benefits from these interactions occurring on specific platforms, what motivations this reveals, who is targeted and how this appears in real life. An example of an early peer-to-peer platform that exemplified democratic ideals was Wikipedia, which used information as a raw resource (Castells, 2011, p. 70). The platform was originally highly participatory, with volunteer collaborators contributing to articles online, which supported broader public access to information (Benkler & Nissenbaum, 2006). Unfortunately, many early peer-to-peer platforms were eroded by capital-driven platforms that track interactions and assess behaviours to generate revenue from user data. This shift from democratic uses of technology to centralized, corporate ones highlights potential fears for the future and further class disparities.

Social Media Monopolies & Misinformation

Looking back from the year 2073, the late 2000s and early 2010s will be seen as a milestone that marked a significant shift in the trajectory of the internet. During this time, the public sphere was becoming digitally sustained by online platforms, including MySpace, Facebook, Tumblr, Instagram, Reddit and more. While this marked a new opportunity for humans to communicate and share information and knowledge, new media became a tool for companies to access and target potential consumers directly. Additionally, political leaders found new ways to leverage social media to communicate with likely voters. During his 2008 presidential campaign, Obama’s team collaborated with “netroots” activists to influence discussions in the online public sphere (Kreiss, 2012, p. 195). While the campaign proved successful, researchers later found that online conversation surrounding it, which seemed organically produced by individual users, had been strategically disseminated by political actors (Kreiss, 2012, p. 196).

The 2008 Obama campaign marked a significant shift for the internet because powerful actors were now leveraging platforms to identify and adapt to user behaviours in order to covertly push a given agenda, which threatens democracy (Hindman, 2008, p. 142). This has been seen more recently with Donald Trump’s 2016 presidential campaign and the Cambridge Analytica scandal, in which Facebook users’ personal information was harvested to help target voters. Social media platforms now know that user data generated through online interactions has commercial value, and they want to extract it (Rosenberg et al., 2018). If this raises concerns about privacy, surveillance and even mis/disinformation camouflaged as ordinary social media interactions in 2023, then the strategic methods companies could use to support profit and political goals will likely become more pervasive and harder to notice in the future.

Furthermore, when examining emerging political discussion in the public sphere, the “missing middle” frame should be applied, because the online public sphere does not include all members of society due to wealth and class disparities (Hindman, 2008, p. 142). While digital media platform owners boast accessibility, their platforms remain exclusionary due to the digital divide. In 2073, the potential for only the most powerful and privileged voices to be uplifted on social media increases, while ordinary citizens will find it harder to make their voices heard (Hindman, 2008, p. 142). With modern AI technologies using facial recognition, racial disparities are directly exemplified, and the flexibility of technology to adapt as an extension of our lives also threatens to dehumanize us in the future.

Conclusion

By following the journey of the Internet, it is evident that its origins set the stage for the dystopic presence of technology in 2073. Examples such as ARPANET, Napster, Obama’s campaign, and current AI capabilities for surveillance and social interaction show how technology is negatively permeating the public sphere. It is not only the output on the internet that impacts our society; equally, it is the actors and platforms producing it, who leverage non-human technology to better target audiences and culturally homogenize society. Digital platforms extract value from data using technology. By engaging with non-human tech, we are, in turn, non-humanizing ourselves: expecting instant answers to everything and letting algorithms place us in echo chambers where we can avoid unpleasant or contentious debates. I believe the future will only amplify social inequities unless individuals actively resist corporate authoritarian dominance and harness early internet collectivism to achieve a greater social good.

References

Barreto, E. (2020). Lecture 3: Utopias and dystopias [Video]. YouTube. https://youtu.be/q4BC95DGEYc?si=1dfKegZOi3O1HfCo

Benkler, Y., & Nissenbaum, H. (2006). Commons-based Peer Production and Virtue. Journal of Political Philosophy, 14(4), 394–419. https://doi.org/10.1111/j.1467-9760.2006.00235.x

Castells, M. (2011). The information technology revolution. In The rise of the network society (Chapter 1).

Pakman, D. (2019). Is our technology future utopian or dystopian? [Video]. YouTube. https://youtu.be/ZGPdkP4gdgs?si=6h1iw2AwfnT1UF_d

Fisher, D. R., & Wright, L. M. (2001). On utopias and dystopias: Toward an understanding of the discourse surrounding the Internet. Journal of Computer-Mediated Communication, 6(2), JCMC624. https://academic.oup.com/jcmc/article/6/2/JCMC624/4584220

Hindman, M. (2008). The myth of digital democracy (Chapter 7). Princeton University Press.

O’Reilly, T. (2005). What is Web 2.0? O’Reilly Media, Inc. http://www.oreilly.com/pub/a//web2/archive/what-is-web-20.html

Rosenberg, M., Confessore, N., & Cadwalladr, C. (2018, March 17). How Trump consultants exploited the Facebook data of millions. The New York Times. https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html

Winner, L. (1980). Do Artifacts Have Politics? Daedalus, 109(1), 121–136. http://www.jstor.org/stable/20024652

Author’s bio:

Olivia Meikle recently graduated from the Communication and Media Studies program at Carleton University with a minor in Canadian Studies. Upon graduating, she was awarded the Senate Medal for Outstanding Academic Achievement for placing in the top 3% of students in her program.

Olivia is passionate about leveraging communications to increase technological transparency in Canada. She also enjoys watching films, reading magazines and listening to podcasts of all sorts.