I just returned from presenting at my first academic conference, held at Arizona State University. Since it was my first, I was extremely nervous, but I got to present on the panel directly after the keynote speaker, so I got it over and done with quickly. One down, two more to go for this academic year!
The conference was entitled “The Post-Human Network Conference,” and it brought together graduate students from as far away as London and from disciplines ranging from physics to art. We spent the entire weekend discussing various elements of this term “the post-human,” a term that has taken on quite a range of different meanings. Add in the fact that each of us has been trained in a different field and is familiar with different theorists, and it made for a conference that required each of us to roll up our sleeves and do the hard work of constantly listening, questioning, and translating.

The term post-human is one I first encountered a few years ago, at the beginning of my master’s program. At that time, I was reading a lot of science fiction and studying both fiction and non-fiction about cyborgs and androids. (Quick primer: cyborgs are humans who have been “upgraded” with technology, while androids are essentially robots modeled after humans.) Within this context, the post-human generally refers to some form of evolved human species, and it usually implies that we have used technology to take us to that place. To give a contemporary example, the film Ghost in the Shell, based on the manga series of the same name, dives deep into an imagined post-human realm.
The term post-human is often juxtaposed with the term trans-human. These, too, are words that have taken on many different meanings depending on their usage. Trans-human can refer to a human who is gradually experimenting with technological augmentation, on the way to becoming a full post-human. In philosophical contexts, however, trans-humanism refers to a profoundly modernist/Enlightenment approach to thinking about technology: it views technology as a series of steps in humanity’s ever-accelerating progress. Augmenting a human with technology becomes symbolic of humanity’s increasing domination and mastery of nature.

This is where the term post-humanism, as opposed to post-human, offers a new set of meanings. Post-humanism resists this kind of attitude towards both humanity and nature; it is fundamentally opposed to an Enlightenment perspective that privileges human reason and justifies our exploitation of nature. Post-humanism argues for a decentering of humanity: we should stop thinking about ourselves as the center of the universe and start thinking about the non-human with greater intentionality, remembering that we are not the only life in our world, even if we are distinctly different.

Time for another term: “the Anthropocene.” You may have seen this word thrown around in the news occasionally. It is a proposed label for our current geological epoch, one defined by humanity’s impact on the planet. I’m going to pull from Smithsonian Magazine here for some more details:

According to the International Union of Geological Sciences (IUGS), the professional organization in charge of defining Earth’s time scale, we are officially in the Holocene (“entirely recent”) epoch, which began 11,700 years ago after the last major ice age.
But that label is outdated, some experts say. They argue for “Anthropocene”—from anthropo, for “man,” and cene, for “new”—because humankind has caused mass extinctions of plant and animal species, polluted the oceans and altered the atmosphere, among other lasting impacts. (Smithsonian Magazine)

While this is usually considered an environmental term, it is also used frequently within post-humanist circles, where there is a deep concern about the ways in which humanity has carelessly used the Earth for our various civilization-building endeavors.

As for me, I am interested in two particular areas. At this conference, I gave a talk on singularity theories, a collection of scientifically sourced theories suggesting that our world will change irrevocably in the coming century. These theories are taken very seriously in parts of the tech industry and by many scientists and mathematicians, but they have often been ignored within the humanities. In my research, I temporarily suspend any disbelief and try to engage fully with the philosophical and ethical implications of these theories. For my presentation at this conference, however, I challenged my fellow scholars to consider how we as a society define “being human.” Until we can address that question, we have no ground to stand on for any potential development, or even evolution, of humanity, and no ground to stand on as we try to treat our planet and our fellow tenants well.
More on this in a future post.

Second, I’ve been taking what could be described as a post-humanist approach to thinking about space exploration. Placing a settlement on Mars is becoming more and more of a serious consideration, but if and when we do so, what attitudes and ethics will we bring to such an endeavor? I’m currently working on a number of projects in this area, and I’ll be posting a blog piece on this topic soon too.

This is an interesting little corner of academia with which I am slowly becoming more familiar. So please pardon my limited knowledge, and recognize that I have barely scratched the surface in this post. But as always, I want to make a point of sharing a bit of my own journey into research, and I will be building on the ideas of this post in many posts to come. In the meantime, please let me know if you have any questions, or if you’d like to add to something I said above. What do you think about this idea of the post-human, or of post-humanism?

Before anything else, I want to briefly note that the video game Everybody’s Gone to the Rapture was released yesterday by developer The Chinese Room. I have been tracking its development for a while, so I am extremely excited to play it. If you enjoy narrative-driven exploratory adventure games, check it out. I’ll have a review of the game in next week’s post.

Now on to the film review. I finally had a chance to see Alex Garland’s Ex Machina last week.

While I am always fascinated by books and movies exploring artificial intelligence, I was a bit nervous about this one because of my past experience with Garland’s Sunshine. (Note: Garland wrote the screenplay, but Danny Boyle directed Sunshine.) Sunshine is one half sci-fi thriller about a voyage to the Sun and one half space horror. I loved the first half, but once I hit the halfway point and the genre shift, the rest of the film was a bit much for my blood pressure. The press surrounding Ex Machina used words like “chilling”, “creepy”, and “disturbing”. While it also affirmed the film’s brilliance, needless to say, I was a bit worried about things jumping out at me. Not my kind of film. But my husband insisted I needed to see it, so I saved it for its release to the small screen. I needn’t have been so worried. Yes, Ex Machina is creepy, but it falls solidly in the thriller genre, not horror, and its creepiness is grounded in what you don’t see and don’t know.

In fact, what you don’t know is precisely why I loved this film. Ex Machina is the story of a young computer programmer, Caleb, who wins a contest to participate in a top-secret experiment with the founder and CEO of his company. What he quickly discovers upon arriving at the experiment’s remote (and jaw-droppingly beautiful) location is that his task is to administer a Turing Test to his boss Nathan’s newly developed AI. The Turing Test, named for Alan Turing, who proposed it (and who was the subject of the recent film The Imitation Game), evaluates how convincingly a computer can simulate human behavior and intelligence in conversation.
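For the more technically inclined, the structure of the test is simple enough to sketch. Here is a toy illustration in Python (the judge, human_reply, and machine_reply functions are placeholders I made up for illustration, not anything from the film or from Turing’s paper): a judge questions two hidden participants and tries to guess which one is human.

```python
import random

def imitation_game(judge, human_reply, machine_reply, questions):
    """Toy sketch of a Turing-style test: a judge questions two hidden
    participants (one human, one machine) and guesses which is human."""
    # Hide the participants behind anonymous labels, in random order.
    labels = ["A", "B"]
    random.shuffle(labels)
    participants = {labels[0]: human_reply, labels[1]: machine_reply}

    # Record every question along with both participants' answers.
    transcript = [
        (question, {label: reply(question) for label, reply in participants.items()})
        for question in questions
    ]

    guess = judge(transcript)  # the label the judge believes is human
    # True means the judge spotted the human, i.e. the machine did not pass.
    return guess == labels[0]
```

A machine “passes” when the judge can no longer reliably tell it apart from the human, which is exactly the uncertainty the film trades on.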


Nathan’s AI is remarkably compelling in this regard, ironically making Caleb’s own behavior seem stiff or robotic. Nathan, in comparison, is erratic and possibly delusional, resulting in a strange spectrum of human behavior ranging from the rational to the irrational. Along this scale, the AI, named Ava, might be the most balanced character in the film. This rearrangement of norms leaves the audience a bit disoriented, unsure of whose perspective to trust. Caleb’s point of view is the most comfortable, since the story is largely told from his angle, but even as his comprehension of the situation slowly expands, that knowledge simply raises new questions about the experiment, its creator, and what the project’s ramifications might be for humanity. This disorientation is intentional, and I think it highlights, on a macro level, how the general public should and often does feel about the rapidly changing technology sector. Technology may be improving in leaps and bounds, but do we truly understand its implications for society, and can we trust its creators, the commercial empires of the 21st century?

Spoiler warning: the next paragraph discusses the ending.
The film ultimately concludes with Ava leaving the experiment facility alone, appropriating Caleb’s helicopter ride back to society. We have no idea if she survives, how she integrates into human society, or how her presence impacts the people she meets. Unlike in a standard Philip K. Dick novel, the world doesn’t suffer mass devastation at the hands of AI. Instead, we simply don’t know the outcomes. At the end of the film, we still don’t know whom to trust, and we probably know only as much as, if not less than, we did at the start.

I found the simplicity of Garland’s narrative to be a poignant commentary on the reality of today’s Information Age. While data and “knowledge” are constantly being exchanged and directed around the globe, there is so much significant information that we individuals, like Caleb, lack and are told we do not need. We, as a society, are encouraged to float through life in a happy naïveté, distracted by consumer impulses and instant entertainment. But what happens next really matters. Asking about the impact of AI and other technologies on society is critical. This critical lens does not imply instant judgement; it simply asks us to pay attention, with intention, to changes before and as they occur. These are precisely the kinds of questions that Caleb tried to answer in Ex Machina, but sadly, he was in no position to find the answers in time. Let us not allow ourselves to fall into the same situation.


Thanks for reading this post! If you liked what you read, please subscribe below and tell your friends about High and Low.
Please also note that this post includes affiliate links. If you purchase an item through an Amazon link on my blog, I will receive a small percentage. This does not adjust the cost of your purchase, and all proceeds go towards supporting this blog. Thank you so much for your help!
The featured image is a screenshot from Ex Machina posted by BagoGames at http://bit.ly/1NIDrXQ. Publishing rights through http://bit.ly/1mhaR6e and fair use.
