As I noted in my last post, most of us “regular folk” are computer code illiterate. We have no idea how a string of alphanumeric characters comes together to result in software, websites, and mobile apps. We understand that binary is the idea that everything can be reduced to ones and zeros, but we wouldn’t be able to actually perform such a conversion ourselves. We have no frame of reference by which to evaluate whether shows like Mr. Robot or Halt and Catch Fire resemble any form of reality.
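(To make that concrete: the conversion most of us couldn’t do by hand is nearly a one-liner for anyone with a bit of programming literacy. Here is a minimal Python sketch, using a made-up number and word purely for illustration, of how numbers and text reduce to those ones and zeros.)

```python
# A tiny sketch of the "ones and zeros" most of us never see.
number = 42
print(bin(number))  # prints 0b101010, the binary form of 42

word = "hi"
for char in word:
    # ord() gives the character's numeric code; format() renders it as 8 binary digits
    print(char, format(ord(char), "08b"))  # h -> 01101000, i -> 01101001
```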

Rami Malek as Elliot in Mr. Robot

Yet we are living in the Digital Age. The Information Age. The age in which so much of our lives is dictated by code. Gradually, every nook and cranny of our everyday spaces is becoming connected to the Internet. This is the so-called Internet of Things, which is making inanimate objects “smart”.
And behind every iteration of this digital revolution are lines and lines of code.

Only a small percentage of (usually highly educated) individuals are fluent in this language of code. While on the surface our digital lives may seem easily customizable, the reality is that “systems, protocols, algorithms, and ‘codes’ of the technology usually remain locked” (20). This quote is from a book by Ramesh Srinivasan entitled Whose Global Village? Rethinking How Technology Shapes Our World. In the book, Srinivasan, as both a software engineer and a scholar, brings attention to the invisible forms of ordering in the world that are brought into existence through code, visible only to those, like Srinivasan, who have the necessary literacy to comprehend them.

Srinivasan’s work speaks loudly to the necessity of making these invisible, digital lines of power visible. Code cannot remain the language of the Silicon Valley privileged male. Code must be recognized as a global language as important as English—a language that shapes and frames the ways in which power flows, in which social and economic transactions take place, in which new societal structures and systems are formed. I would argue that in today’s digital age, true democracy is not possible without such digital and programming literacy afforded to the public.

Ian Bogost begins to argue this with his concept of “procedural rhetoric.” He suggests that procedural rhetoric is a form of rhetoric that is embedded in the logic of most digital objects but that it tends to be overlooked in an educational system focused on verbal and visual rhetoric. (For more on procedural rhetoric, see this earlier post.) But I don’t think procedural literacy is enough; I sincerely believe that until the ability to read, write, and rewrite code is made mainstream, we will live in an oligarchical society organized and guided by the software elite.

If this opinion sounds overwrought, one only has to consider the recent U.S. elections as evidence of the power that the digital holds over the public. Evidence has now emerged that explicitly links both Russian use of digital spaces and the involvement of Silicon Valley corporations such as Facebook with the surprising outcome of the election. Congress can interrogate and threaten Silicon Valley as much as it likes, but the power such technological centers hold over information will remain until that power is dispersed among the people. Srinivasan refers to this power over information as the new oil of the digital economy, and he too argues that Silicon Valley is redefining democracy.

So what type of tangible change am I suggesting? 

Well, to begin, I would recommend that our schools begin teaching programming languages at the elementary level. Languages like Alice and Scratch use drag-and-drop visual interfaces to introduce users to the fundamentals of programming. Extracurricular activities, such as the FIRST LEGO Robotics League, provide fun, competitive, and hands-on environments for students to learn about programming and programming logic. If our youth encounter programming at a young age, by the time they reach middle and high school, they will be ready to pick up more complex text-based languages like Python.
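(As a rough illustration of that jump, here is what the same repeat-and-check logic a child might snap together out of Scratch blocks looks like once it is written as Python text. The little guessing game itself is hypothetical, chosen only to show a loop and a conditional.)

```python
# A tiny counting game: the same loop-and-condition logic a Scratch project
# builds with drag-and-drop blocks, expressed as text-based Python.
secret = 7
for guess in range(1, 11):  # try the numbers 1 through 10
    if guess == secret:
        print("Found it:", guess)
        break  # stop looping once the secret number is found
    else:
        print(guess, "is not it, trying the next number...")
```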

By scratch.mit.edu, CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=35453328

The second thing I would like to suggest is that we dissociate the act of learning a programming language from an outcome of attaining a software-oriented career. Currently, much of the rhetoric around learning how to code is about diversifying Silicon Valley or about empowering the next generation to procure well-paying jobs. Both of these objectives are valuable; however, it is imperative that we begin to see code as a language that affects all of us, regardless of our societal roles. It is our right and responsibility to be well-versed in the language(s) that are shaping the world.

Of course, the first step is to begin with ourselves. Both my parents and both my brothers are programmers, but I had only briefly dabbled in a little BASIC (because my mom made me in high school) and of course some HTML (to customize my high school Xanga page). Last semester, however, I became increasingly convinced that if I was going to write about the digital realm, I needed to up my game and become at least proficient in a popular programming language. So, over this past winter break, I began teaching myself Python using my mother’s online course for high schoolers. It’s been quite the adventure, sometimes exhilarating and oftentimes extremely frustrating. You can follow my progress on Twitter:

Although I’m only in module 5 of the course, I am already developing a basic understanding of how code works and the kind of decisions and logic that go into designing any piece of software. I have no illusions of pursuing a career in programming, but as my literacy grows, I am seeing how my perception of the digital realm is also changing.

If you’d like to join me on this journey, you can either check out my mom’s course, or you can look into a free site like Codecademy. I’ll be sharing some of my revelations here and on Twitter. If you decide to start learning to code too, please reach out and tell me about your experience! 

(Please note that some of this post was adapted from a response paper written for my Global Media and Society course from fall 2017.)

Amazon recently held its massive annual Prime Day sale–the Black Friday of summer or, as some have dubbed it, Christmas in July. Among all the books, movies, clothing, gadgets, and electronics you can purchase on its site, perhaps Amazon’s greatest pride and joy is its own invention: the Amazon Echo.

The Echo is a monolithic device reminiscent of Kubrick’s 2001: A Space Odyssey–a black pillar to erect in the center of one’s home that is always listening. If you want to play music, check the weather, or calculate a measurement conversion for a recipe, all you need to do is ask Alexa, the genie in this bottle.

Google, not to be outdone in the quest to control all the technology in our lives, has a similar device on the market: the Google Home.

Both devices offer increasing convenience for the chaos of modern life. As someone who bakes quite a bit, I love the idea of being able to verbally inquire after the next ingredient for my recipe when my hands are covered in flour. For parents with small children in need of extra arms, no doubt these kinds of devices also come in handy. And, in truth, how different are these devices from the Siri and Google Assistant that already live in our smartphones? We have already transitioned into a world where we talk to our devices and expect a proportionate response in word or deed.

But is this really a world we want to live in? Is convenience the framework we wish to structure the future around? It’s sorely tempting, but I would answer no–and urge you to do the same.

In a world of listening devices, everything we say in the comfort and privacy of our homes is picked up by these devices, with the potential of being recorded. There is already evidence to show that what we say to our companions and family members–not directly to the device–is being used to customize the advertising we see as we surf the web. (Listen to this Note to Self episode to learn more.) 

While most of what we say at home might be quite innocuous, suppose one of these devices picks up a casual conversation in which you speak bitterly about an acquaintance who is subsequently found murdered. What if that conversation becomes admissible in a court of law? As we know from experience with texting, digital devices have a hard time conveying emotional context and nuance when converting a verbal statement into a written one–even with the use of emojis and gifs. And this scenario is not simply hypothetical–Amazon has already been subpoenaed to release Echo data in this murder case. (Amazon refused, but the defendant himself later agreed to release the data to the police.) And then, of course, there is the case of the San Bernardino shooters, in which the authorities tried to get Apple to unlock the shooter’s iPhone.

Even as these corporations are currently fighting to maintain our privacy, I find it scary to think that our data is in the hands of massive companies that are shaping the world’s future. They may not be governmental authorities, but Amazon, Google, and Apple are powerful authorities over our lives in other ways. They already have so much access to our private lives through our email inboxes, our devices, and our shopping baskets–why would we want to invite them even more directly into our homes?

For those who reluctantly respond with, “well, we’re in this far, we might as well just accept the state of the world, give up, and enjoy the convenience of such devices,” I disagree. We are not so far gone that we can’t take a stand and shift the needle back to a place where we as individuals own our personal information and data again. Choosing not to own an Echo or a Home is a place to start making that shift. Baby steps. Baby steps.

I’m not the first person to discuss this issue, so here are a few links to some great podcasts and articles on the topic. Whether or not you own one of these devices, I’d love to hear why you chose to buy or not buy one; and if you do have one, what do you think now that it’s in your home? Do you disagree with my argument? If so, why?

The featured image is courtesy of Matthew Henry

I just returned from presenting at my first academic conference, at Arizona State University. Since it was my first, I was extremely nervous, but I got to present on the panel directly after the keynote speaker, so I got it over and done with quickly. One down, two more to go for this academic year!
The conference was entitled “The Post-Human Network Conference,” and it brought together graduate students from as far away as London and from disciplines ranging from physics to art. We spent the entire weekend discussing various elements of this term “the post-human,” a term that has taken on quite a range of different meanings. Add in the fact that each of us has been trained within a different field and is familiar with different theorists, and it made for a conference that required each of us to roll up our sleeves and do the hard work of constantly listening, questioning, and translating.

The term post-human is one I first encountered a few years ago at the beginning of my master’s program. At that time, I was reading a lot of science fiction and studying both fiction and non-fiction about cyborgs and androids. (Quick primer: cyborgs are humans that have been “upgraded” with technology while androids are essentially robots modeled after humans.) Within this context, the post-human generally refers to some form of evolved human species, and it usually implies that we have used technology to take us to that place. To provide a contemporary example, the film Ghost in the Shell, based on a manga series of the same name, dives heavily into an imagined post-human realm.
The term post-human is often juxtaposed with the term trans-human. These too are words that have been inflected with many different meanings depending on their usage. Trans-human can refer to a human who is gradually experimenting with technological augmentation, on their way to becoming a full post-human. In philosophical contexts, however, trans-humanism refers to a profoundly modernist/Enlightenment approach to thinking about technology. What I mean by this is that trans-humanism views technology as providing the steps for humanity’s ever-exponential progress: augmenting a human with technology is symbolic of humanity’s increasing domination and mastery of nature.

This is where the term post-humanism, as opposed to post-human, offers a new set of meanings. Post-humanism resists this kind of attitude towards both humanity and nature; it is fundamentally opposed to an Enlightenment perspective that privileges human reason and justifies our exploitation of nature. Post-humanism argues for a decentering of humanity: we should stop thinking about ourselves as the center of the universe and start thinking about the non-human with greater intentionality, remembering that we are not the only life in our world, even as we may be distinctly different.

Time for another term: “the Anthropocene.” You may have seen this word thrown around in the news occasionally. It is a term that some scientists have proposed as a label for our current geological epoch. I’m going to pull from Smithsonian Magazine here for some more details:

According to the International Union of Geological Sciences (IUGS), the professional organization in charge of defining Earth’s time scale, we are officially in the Holocene (“entirely recent”) epoch, which began 11,700 years ago after the last major ice age.
But that label is outdated, some experts say. They argue for “Anthropocene”—from anthropo, for “man,” and cene, for “new”—because human-kind has caused mass extinctions of plant and animal species, polluted the oceans and altered the atmosphere, among other lasting impacts. (Smithsonian Magazine)

While this is considered an environmentally driven term, it is also used frequently within post-humanist circles, where there is a deep concern for the ways in which humanity has carelessly utilized the Earth for our various civilization-building endeavors.

Personally, I am interested in two particular areas. At this conference, I gave a talk about singularity theories, a collection of scientifically sourced theories suggesting that our world will change irrevocably in the coming century. These theories are taken with great seriousness in parts of the tech industry and by many scientists and mathematicians, but they have often been ignored within the humanities. In my research, I temporarily suspend any disbelief and try to fully engage with the philosophical and ethical implications of these theories. For my presentation at this conference, however, I challenged my fellow scholars to consider how we in society define “being human,” because until we can address this question, we have no ground to stand on for any potential development or even evolution of humanity, and indeed no ground to stand on as we try to treat our planet and our fellow tenants well.
More on this in a future post.

Second, I’ve been taking what could be described as a post-humanist approach to thinking about space exploration. Placing a settlement on Mars is becoming more and more of a serious consideration, but if and when we do so, what attitudes and ethics will we implement in such an endeavor? I’m working on a number of projects in this area currently, and I’ll be posting a blog piece on this topic soon too.

This is an interesting little area of academia with which I am slowly becoming more familiar, so please pardon my limited knowledge and recognize that I have barely scratched the surface in this post. But as always, I want to make a point of sharing a bit of my own research journey, and I will be building on the ideas of this post in many posts to come. In the meantime, please let me know if you have any questions, or if you’d like to add to something I said above. What do you think about this idea of the post-human or post-humanism?
