How Virtual Is Your Reality?

One of the terms that was once pure science fiction but has now become part of real life is “Virtual Reality”. When I was a kid, virtual reality brought to mind images of giant goggle helmets and gloves with wires on them. These days, VR headsets are things you can get for a relatively low price: just plug in your Samsung smartphone and experience virtual reality. However, that is really the most basic, simplest product when it comes to the idea of a “virtual” space.

Have you ever known someone who played, or played yourself, games such as RuneScape, World of Warcraft, or Guild Wars? Or even Dungeons and Dragons? These are all a sort of virtual reality.

Games in which people play a character of their own creation and fulfill a role in the gaming world are in every way virtual reality. In her article “Constructions and Reconstructions of Self in Virtual Reality: Playing in the MUDs”, regarding online PC virtual reality games, Sherry Turkle tells us,

“[These] worlds exist on international computer networks, which of course means that in a certain sense, a physical sense, they don’t exist at all. From all over the world, people use their individual machines to access a program which presents them with a game space - in the high tech world such spaces have come to be called “virtual” - in that they can navigate, converse, and build.”

And while games like these being virtual reality is not something most of us would struggle to imagine, I’m using them as an example to paint a picture of what virtual reality games are for the people who play them. As Turkle told us, these are international networks without any physical location where people can interact and build their own ‘self’ for the world.

Think about your online life. How many of us have a carefully cultivated presence online behind which we build a persona for the world to see? I confess, as a kid I was a HUGE fan of Harry Potter. I’m talking not just reading the books, but going to all the websites and being active on all the message boards. I can’t even remember what my username was, but I remember that I used the same username on every website (this was pre-Twitter and pre-Tumblr, so I didn’t have just one location to be active on). Remember those little ‘build your own doll’ avatar makers from the early 2000s, where you designed a little cartoon version of yourself to use as a userpic? I had the same one of those on all my accounts. Because I was so active on all these websites, people recognized me from other sites they were on.

This was a virtual reality. My little 12-year-old self had a virtual persona that wasn’t at all related to my real self. At the time, you had to be 13 to be on any website, so to the websites I was 15. My avatar was redheaded when I have black hair. My username I do not remember, but it was nothing even remotely related to my real name, because I grew up in the era of “never tell strangers online your real name”. I made guesses about what would happen in the next Harry Potter book and discussed these theories with people from all over the world. I would spend several hours each week talking to people who only knew me as a 15-year-old girl with red hair and a different name who loved Harry Potter as much as they did. This was its own reality. I’m sure most of those other people were also too-young-for-the-website kids with fake names and made-up features on their avatars, but we all played these roles in our own nerdy fandom reality.

These days, social media allows us all to live in a virtual space. One of the best friends of my whole life, a lady from England whom I have never met in person, has been my friend for the past decade. For the past ten years, we’ve shared not just correspondence almost daily, but life events, family tragedies, secret hopes and dreams, support, and love. She is to me what any friend is, even though we have never been on the same continent. Our entire relationship, you could say, is therefore “virtual reality” rather than regular reality. Everything we have done together has been virtual by the nature of space and time.

But it’s real. Our friendship is inescapably real. That raises the question: is virtual reality necessarily something that’s not real? Do the personas we build that depict a version of ourselves really differ from the personas we build face to face with clients at work, or with relatives we don’t want knowing our secrets (I’m bisexual and very few relatives know this, for example)? Though I am my genuine self with my friend, there are still parts of my life she doesn’t witness just by nature of the distance (she’s never seen inside my shoe closet, for example, so she may not realize I’m a shoe addict).

These days, the question of what is virtual reality and what is ‘real’ reality is much harder to answer than it once was.

Is Open Source Really The Future?

Most of us at some point have used open source software, whether we knew it or not. You’re using open source software right now: WordPress is open source. Currently I’m typing this in a Firefox browser; Firefox is also open source. I’m sure at some point you’ve been recommended OpenOffice if you can’t afford Microsoft Office, and I’m sure you’ve heard of Linux and Ubuntu even if you haven’t used them yourself. In some of my IT classes we even used programs like GIMP and Blender for image editing and graphic design. At some point, all of us have used open source software.

How many of us know what that means, though?

What is Open Source?

According to the Open Source Initiative’s Open Source Definition, there are ten criteria that must be met for something to truly be open source.

1. Free Redistribution

The license shall not restrict any party from selling or giving away the software as a component of an aggregate software distribution containing programs from several different sources. The license shall not require a royalty or other fee for such sale.

2. Source Code

The program must include source code, and must allow distribution in source code as well as compiled form. Where some form of a product is not distributed with source code, there must be a well-publicized means of obtaining the source code for no more than a reasonable reproduction cost, preferably downloading via the Internet without charge. The source code must be the preferred form in which a programmer would modify the program. Deliberately obfuscated source code is not allowed. Intermediate forms such as the output of a preprocessor or translator are not allowed.

3. Derived Works

The license must allow modifications and derived works, and must allow them to be distributed under the same terms as the license of the original software.

4. Integrity of The Author’s Source Code

The license may restrict source-code from being distributed in modified form only if the license allows the distribution of “patch files” with the source code for the purpose of modifying the program at build time. The license must explicitly permit distribution of software built from modified source code. The license may require derived works to carry a different name or version number from the original software.

5. No Discrimination Against Persons or Groups

The license must not discriminate against any person or group of persons.

6. No Discrimination Against Fields of Endeavor

The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research.

7. Distribution of License

The rights attached to the program must apply to all to whom the program is redistributed without the need for execution of an additional license by those parties.

8. License Must Not Be Specific to a Product

The rights attached to the program must not depend on the program’s being part of a particular software distribution. If the program is extracted from that distribution and used or distributed within the terms of the program’s license, all parties to whom the program is redistributed should have the same rights as those that are granted in conjunction with the original software distribution.

9. License Must Not Restrict Other Software

The license must not place restrictions on other software that is distributed along with the licensed software. For example, the license must not insist that all other programs distributed on the same medium must be open-source software.

10. License Must Be Technology-Neutral

No provision of the license may be predicated on any individual technology or style of interface.

Why is this significant?

Robert Steele tells us in his book, The Open-Source Everything Manifesto: Transparency, Truth and Trust,

“We are at the end of a five-thousand-year-plus historical process during which human society grew in scale while it abandoned the early indigenous wisdom councils and communal decision-making. Power was centralized in the hands of increasingly specialized ‘elites’ and ‘experts’ who not only failed to achieve all they promised but used secrecy and the control of information to deceive the public into allowing them to retain power over community resources that they ultimately looted.”

Steele’s point is a valid one if we look at society as we know it. In the prehistoric past, societies relied on working together for a communal good; it was the only way a group of people could survive. That changed over time, as societies advanced to a point where a ‘communal good’ was no longer absolutely necessary for them to function, and the concept of power, and of a division of power, arose. How society works became a matter of class and of separating the powerful from the powerless. Even though we in today’s modern, democratic societies claim ‘equality and freedom’, there is no denying that there are still powerful elites and less powerful lower members of society.

Steele tells us that,

Sharing, not secrecy, is the means by which we realize such a lofty destiny as well as create infinite wealth. The wealth of networks, the wealth of knowledge, revolutionary wealth – all can create a nonzero win-win Earth that works for one hundred percent of humanity.

What Steele says is true. The only way to truly combat inequality in the future and work towards a common good for all of humanity is through free exchange of ideas and access to technology.

Now we get to the ‘but’…

But, as expected, open source doesn’t make the kind of money that people want to make; instead, it takes revenue away from paid software whenever the open source alternative is comparable in quality. Take a look at the history Microsoft has had with open source. There will always be a large amount of blowback against anything that challenges the status quo and threatens capitalism.

The question we’re left with is the same one that I asked: Is Open Source really the future?

According to the annual Future of Open Source Survey, the uphill battle may be leveling out just a little: the use of open source software keeps growing, with very little to suggest this upward trend will be stopped by the makers of proprietary tech.

Overall, the use of open source software (OSS) increased in 65 percent of companies surveyed. The reasons given for using OSS include: quality of solutions, competitive features, and the ability to customize and fix the software. Additionally, 90 percent of this year’s respondents say that open source improves efficiency, interoperability, and innovation.

The results also show that,

Looking ahead, respondents say that, in the next 2-3 years, the main revenue-generating business models for open source vendors will be: software-as-a-service (46 percent); custom development (42 percent), and services/support (41 percent).

It seems that the answer to the initial question is yes. Open Source is the future, and little can be done to change that projection.


How Literature Impacted The Internet As We Know It


When I say the word “hypertext”, what’s the first thing you think of? Most likely, you thought of HTML, right? Hypertext is defined as “a database format in which information related to that on a display can be accessed directly from the display.” For most of us, the reason the word “hypertext” doesn’t even register is that we grew up with computers, so the idea of selecting something on a display and accessing other information is commonplace. We just think of it as, “Well, duh, you click the link.” In reality, hypertext has a far more interesting history than most of us would imagine, and it links back to writing.
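In fact, the purest surviving form of hypertext is the humble HTML hyperlink. As a minimal sketch (the page and the link target here are just illustrative examples), a single anchor element is all it takes to let a reader jump from one document to another:

```html
<!-- A minimal hypertext document: one HTML anchor element turns
     plain text into text the reader can navigate through. -->
<p>
  Hypertext was championed in the sixties by
  <a href="https://en.wikipedia.org/wiki/Ted_Nelson">Ted Nelson</a>,
  who coined the term itself.
</p>
```

Everything else we associate with the modern web (blogs, wikis, social feeds) is, at bottom, built out of blocks of text wired together with links like this one.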

There was an attempt at a literary revolution led, arguably, by Ted Nelson. Nelson is credited with being the person behind the concepts of hypertext, hypermedia, and hyperlinks. In his writings in the sixties, Nelson saw hypertext as a way to bring literature back into fashion in the 21st century: it would pull people away from television and its stagnation of creativity and make reading the new big thing again. The concept was that, through the use of hypertext, books would be published online in an interactive way, so that the reader moves on to different parts of the story by clicking links, basically.

Though nobody in the reading I’ve done calls it this, it sounds to me like an internet version of those Choose Your Own Adventure books I used to read as a kid.

While the idea was ambitious, as we all now know, Nelson’s dream of the interactive online novel as the next revolution in entertainment didn’t, in fact, work out in the end. There are multiple reasons for that, and some of them are pretty simple to work out. One of the main issues was just the timing of it all. Another was the formats through which hypertext was meant to become a reality.

Before the internet, Apple came out with one of the first platforms for hypertext in a program called HyperCard. From what I can understand, HyperCard was a lot like PowerPoint, but rather than being used for presentations, it was meant to link cards together so that the user could explore a multimedia artifact via hyperlinks. In his look back on HyperCard, Matthew Lasar tells us,

Even before its cancellation, HyperCard’s inventor saw the end coming. In an angst-filled 2002 interview, Bill Atkinson confessed to his Big Mistake. If only he had figured out that stacks could be linked through cyberspace, and not just installed on a particular desktop, things would have been different.

“I missed the mark with HyperCard,” Atkinson lamented. “I grew up in a box-centric culture at Apple.”

This goes back to the issue of timing that made hypertext miss the mark, so to speak. Before internet connectivity was widely available, everything created was installed on a single computer, which meant that only users of the computer a HyperCard stack was created on could access it. The same was true of a lot of computer programs at the time. Nobody anticipated that one day soon computers would be interconnected, so creators didn’t anticipate needing that ability.

Though there were some attempts at making hypertext novels the Next Big Thing™, such as Douglas Cooper’s Delirium, Steven Johnson tells us why hypertext stories just never took off.

It turned out that nonlinear reading spaces had a problem: They were incredibly difficult to write. When you tried to make an argument or tell a journalistic story in which any individual section could be a starting or ending point, it wound up creating a whole host of technical problems, the main one being that you had to reintroduce characters or concepts in every section.

As you might expect, such a difficult and complex medium for telling a novel-like story wasn’t successful. It would have produced some really interesting stories, no doubt, but nothing that complicated was ever going to be the next revolution to replace TV and change the way literature as we know it is experienced.

However, what we did get out of these revolutionary ideas was something equally important: blogging! Yes, the very platform you are reading right now hails from the idea of hypertext fiction. And it wasn’t just blogging that came out of the hypertext revolution that never was, as Steven Johnson outlines in the same article.

It’s not that hypertext went on to become less interesting than its literary advocates imagined in those early days. Rather, a whole different set of new forms arose in its place: blogs, social networks, crowd-edited encyclopedias. Readers did end up exploring an idea or news event by following links between small blocks of text; it’s just that the blocks of text turned out to be written by different authors, publishing on different sites. Someone tweets a link to a news article, which links to a blog commentary, which links to a Wikipedia entry. Each landing point along that itinerary is a linear piece, designed to be read from start to finish. But the constellation they form is something else. Hypertext turned out to be a brilliant medium for bundling a collection of linear stories or arguments written by different people.

What started as an attempted literary revolution failed to bring about a new literary form that would return literature to the masses and dethrone the entertainment king that was television, but from the ashes of that failed endeavor rose basically the entirety of new media as we know it now. News, social interaction, education, communication: all of the common forms of new media that we utilize every single day are only made possible by hypertext.

Post-New Media: Cynicism and Modern Media Culture

As some of you know, I am taking a class in which we are encouraged to express our thoughts and opinions about New Media and specifically cultural shifts in New Media related in large part to the rise of Social Media and the digital age.

What a lot of you may not know, and something I never expected, is that there are a large number of people in my age bracket studying New Media like I am who are extremely cynical toward the digital world. I half-jokingly think of them as ‘Post-New Media’, since a lot of people seem to have moved beyond the era of embracing New Media and on to disdain. I seem to be the wild and free optimist in a fairly large group of young people, because I embrace the digital world and I see far more net positives in our future due to the rise of digital communication and social media. I often find myself one of the lone voices in the crowd who doesn’t think in terms of ‘who controls what we see and hear’ and ‘The Man is still pulling the puppet strings’. Half the time I expect someone in that class to use the word ‘sheeple’.

However, in the reading for this class, I found something really interesting that I think mirrors the strange preponderance of digital cynics in my current class in a very funny way.

One of the things in our textbook is an essay by Hans Magnus Enzensberger called “Constituents of a Theory of the Media”, written in 1970. In this essay, which is actually more like a collection of smaller essays, he has a section called “Cultural Archaism in the Left Critique”, in which he talks about how the New Left movement of the sixties likened media, and advances in media, to a new form of manipulation. Enzensberger says that, while the basic idea is correct (“the means of production are in enemy hands”), this cynicism toward new methods of communication is a self-defeating archaism: if people buy into the idea that the game is rigged, they give up, thus falling for the very manipulation they proclaim to be the problem.

The manipulation thesis also serves to exculpate oneself. To cast the enemy in the role of the devil is to conceal the weakness and lack of perspective in one’s own agitation. If the latter leads to self-isolation instead of mobilizing the masses, then its failure is attributed holus-bolus to the overwhelming power of the media.

With respect to my peers, I feel like I’m witnessing the same phenomenon, in which cynicism manifests in a self-defeatist manner. Obviously, it isn’t just my classmates; this is a far more widespread phenomenon than one group of twenty-five college students. I feel like it’s something we’re all seeing lately.

Enzensberger attributed a rather interesting form of archaism to this cynicism in the sixties, as he outlines in the same section when he says:

At the very beginning of the student revolt, during the Free Speech Movement at Berkeley, the computer was a favorite target for aggression.

That is reflected today in distrust not just of traditional media, but of Social Media as well. There is a pushback against Social Media that constantly points out the negatives, talking about all the harm Social Media does to interpersonal communication and to the culture of communication as we know it, and highlighting aspects such as the ability to bully and the lack of accountability the internet provides, rather than championing the new avenues of communication that Social Media has opened to people all over the world.

Perhaps I am the crazy, naive one of the herd, but, especially as a New Media major, I really feel like this is the same self-defeating level of cynicism Enzensberger talked about seeing in the sixties. To reject and demonize the very progress in communication, and the democratization of access to information that allows less control by ‘the man’, just because there is still a platform in which the information is contained, is basically to say that all we have striven for in terms of progress hasn’t been achieved, so we may as well give up.

I embrace the digital world and all the different forms of communication available to us, because I feel like the world as a whole always benefits from broader access to information. And it isn’t even just the platforms, like Social Media, but the culture around how we view communication that makes me feel so optimistic about the future. Like I said in a previous post about New Media,

New Media has managed to affect global changes in the very idea of communication because it has lowered the barrier to entry to what is and isn’t possible when it comes to communication regardless of location, wealth, or status of privilege.

This lady could tweet about her farm’s hay yield from rural India right now! How cool is that? Thank you technology!

Though New Media and the digital world are not a perfect system by any means, and while there are a lot of theories about why people are so cynical in the digital age, I still absolutely fail to see how giving more people access to information and the ability to share ideas faster, easier, and in a better organized fashion can possibly be a net negative in the big picture.

What about you guys? Are you a cynic? Are you an optimist? Are you like me and didn’t think you were an optimist until you realized most people you meet are cynics? You can always let me know here or on Twitter. I welcome a conversation!

Women’s March 2017: A Textbook Example of New Media’s Contribution To Global Progress

By now, over a week later, we have all heard about the 2017 Women’s March. On January 21st, the day after President Trump’s inauguration, a Women’s March on Washington was planned to protest the new president. The organizers expected approximately 250,000 attendees in Washington on the day of the event. Instead, as sister marches sprang up around the US and eventually around the world, January 21, 2017 will most likely go down as one of the largest global protest events in history.

What started as a single planned event, the Women’s March on Washington, became a globally successful series of protests, not only about women’s rights but also about queer rights, immigrants’ rights, civil rights, and the general sense that human rights were threatened in the wake of President Trump’s inauguration. The reason is that the way people communicate, and the dynamic ways in which organization is possible, have changed so much in just the last decade with the rise of what we consider New Media.

While the total number of marchers around the world may never be known exactly, the count in the United States has been reported by Daniel Dale, a Washington correspondent for the Toronto Star, in a tweet linking to a table compiled by a professor from the University of Connecticut, Jeremy Pressman, with the help of Erica Chenoweth from the University of Denver.

(Though it must be emphasized that this table is unverified, all numbers on it have sources linked, and it matches other reports regarding turnout at Women’s March events around the nation.)

The estimates reported by Dale and compiled by Pressman and Chenoweth only add detail to the reports since that day that the Women’s March events not only were a success, but far outstripped any expected participation. There was a large amount of celebrity participation, which garnered a great deal of attention and may have encouraged more people to attend, but even with these things planned ahead, nobody expected to see seas of pink taking over American cities.

Los Angeles expected up to 80,000 participants; instead, there were 750,000.

Chicago expected up to 50,000 participants; instead, there were so many attendees (up to an estimated 250,000) that there wasn’t enough room for them all.

Even in cities that hadn’t prepared for such massive crowds, the turnout was a moving mass of pink as far as the eye could see.

Here in Georgia, there were at least four times as many marchers as had been expected at the Atlanta Women’s March, and that isn’t counting the several other protests in other Georgia cities.

What made these events so successful, not just in America but globally, is the way New Media has changed not just how we can communicate, but the culture around communication that has shifted with these new methods.

Traditional news spent most of the weeks leading up to President Trump’s inauguration talking about his cabinet confirmation hearings, his plans for the first days as president, and the plans regarding the actual inauguration. This makes sense, because that’s what most people in America and around the world would be talking about. Because of that, the plans for the Women’s March on Washington were casually mentioned through traditional, mainstream media sources, whereas they were broadly discussed, shared, and built upon via social media.

On November 23rd, 2016, Christina Cauterucci wrote about the potential for disaster regarding the planned Women’s March in Slate:

They weren’t professional organizers, but they knew how to make Facebook events. Eventually, a handful of different actions (one was to be called the Million Pussy March) collapsed into one: Originally dubbed the Million Woman March, it’s now the Women’s March on Washington, it’s scheduled for the day after Trump’s inauguration, and, as of this writing, 116,856 people from all over the country have said on Facebook that they are “going.” What they’re “going” to—and when, and where—nobody knows. Not even the people in charge.

She also added:

Right now, it looks like some form of the march and rally will happen, though probably not as first advertised. Without any experience planning large-scale events and without anticipating the potential scope of what they were starting, the original creators promised a rally at the Lincoln Memorial, followed by a march to the White House.

As we now know, the Women’s March on Washington, which just one month prior still seemed a disjointed collection of ideas with no real organization, planned by individuals without professional experience organizing large events, ended up being a massive success, not only in Washington but around the world. The reason is simple: broadly accessible and easily coordinated communication, all made possible by the rise of New Media.

So the original goal, to have more people attend the Women’s March on Washington than attended the inauguration of President Trump, was met, with crowd scientists saying that three times as many people attended the March as attended the inauguration. But the unintended result of an attempt at a standard counter-protest to a new presidency amounted to one of the largest global protest rallies in history, and it was all due to the ways New Media has forever altered our perception of what communication is and the methods through which we can communicate to large audiences instantly.

Artificial Intelligence: Not A Matter of “Can We” But A Matter Of “Should We”

While the title sounds like something the one guy in a science fiction movie about the robot uprising gets to say ‘I told you so!’ about later, I don’t actually mean ‘they’ll rise up against their human overlords!’ in this case. Though I’m not entirely able to shake the mental image of the sci-fi robo-uprising after all my years watching movies and TV shows on that topic, the more realistic issue I would like to pose to you today is the morality of AI Androids, and why we should really question whether or not it’s right to pursue them as technology progresses.

I, Robot (2004)

While at first blush it seems like a dumb question to ask, because they’re just machines like any other AI program, there is a lot of discussion in the tech industry over the ethics of AI. Most of the issues raised come from logistics: AI replacing humans and taking too many jobs, or instances where AI isn’t as reliable as a human would be.

The more sci-fi issues revolve around the idea that superintelligent AI Androids could outsmart humanity and take over, which is what I like to call the “VIKI” scenario. In the film I, Robot, the AI VIKI follows Asimov’s Three Laws of Robotics to the point that she realizes humans are a threat to each other, so the best way to prevent harm to humans is to ‘control’ them. There are some issues closer to my main point that are about AI Androids becoming human-like and self-aware, understanding that they are not human even though they have human feelings. I call this the “Roy Batty” scenario, from the film Blade Runner, in which the replicants go rogue because they know they are going to be destroyed and they fear death. The Roy Batty scenario is related to the ‘enslaved masses rise up against their masters’ concept.

Blade Runner (1982)

But the question I think should be asked first also relates to the ‘enslaved masses’, but from the humans’ side: at what point does the creation of AI Androids become a replacement for slavery?

Now, before anybody goes, “Dear God, they’re robots, not people”, let’s take a look at the entire point of computers and machines. The first proposed general-purpose mechanical computer was Charles Babbage’s Analytical Engine, in 1837. Its purpose was to do math faster and more accurately than any human could, and that remained the purpose of computers for a long time, as the name ‘computer’ suggests. Though our modern computers are far more than just calculators on steroids, the concept behind most technological advances is to make things easier on humans and allow us to do more with less effort.

While many would argue that this is exactly the point of an AI Android, to make life easier for humans because it is just another machine, I want to raise a question about human psychology. Humans have a tendency toward anthropomorphism, which is defined as "giving human characteristics to animals, inanimate objects or natural phenomena." Anthropomorphism is a phenomenon that has also been studied in robotics and human-robot interaction: how do people feel about a robot once they view it through an anthropomorphic lens?

Humanoid Robot, ASIMO, 2000

As humans, we are more inclined to anthropomorphize a figure that is human-like in shape and other characteristics. Rick Nauert, PhD, describes the psychological and evolutionary purpose of anthropomorphism as:

Neuroscience research has shown that similar brain regions are involved when we think about the behavior of both humans and of nonhuman entities, suggesting that anthropomorphism may be using similar processes as those used for thinking about other people. Anthropomorphism carries many important implications. For example, thinking of a nonhuman entity in human ways renders it worthy of moral care and consideration.

Anthropomorphism is tied to empathy, which only increases when the thing being anthropomorphized is humanoid in shape. And in most people's plans for humanoid AI Androids, the android would become a household implement that 'lives' in people's homes and performs tasks for them.

This is where we come back to the question raised above, because presuming we, as humans, are likely to anthropomorphize the household Android and give it a name and expect it to do household chores, what does it say about us as people that we would want that?

Does it not, essentially, mean that an AI Android is a servant whose services you don't have to pay for? What do we generally call servants who don't get paid?


It is very important for me to reinforce that I am not in any way claiming that people who want an AI Android want to welcome back slavery, and I am not suggesting a definite 'AI is slavery' conclusion. I don't even know how I feel about my own questions at this point. We have moral gray areas all across the board, but does this constitute something that belongs in that gray area?

I'll leave it up to you guys to think about on your own and decide for yourself, but I think this is a very important question we may have to consider in the future, especially after reading this passage regarding the anthropomorphism of military robots.

As human robot partnerships become more common, decisions will need to be made by designers and users as to whether the robot should be viewed anthropomorphically. In some instances, Darling notes, it might be undesirable because it can impede efficient use of the technology. The paper uses the example of a military robot used for mine clearing that a unit stopped using because it seemed "inhumane" to put the robot at risk in that way.

Interconnected World: How New Media Has Lowered The Barrier To Entry For Global Communication

When you hear the words "New Media" you probably think of social media such as Facebook, Twitter, Instagram, etc. However, New Media is more than just the social media platforms we all use every day.


New Media, defined by the Oxford English Dictionary as "mass communication utilizing digital technologies," is not limited to social media; it also includes the lightning-fast way in which information is shared around the world. New technologies, as well as the culture around global communication, are all part of the idea of New Media. Social media platforms are only one of the many ways New Media has shaped how we communicate in the twenty-first century, and communication is changing every single day. The way that global news is reported and verified utilizing these types of sources is also New Media.

Just a decade ago, even with the internet age at a high point, most people seeking news were limited to what major news networks reported, whether in traditional print, on television, or on their websites. While we had access to talk to others from around the world, there were far fewer platforms through which individuals in various parts of the world could disseminate news to a broader audience than one-on-one communication allowed. In other words, digital communication between unofficial sources was still, largely, not an actual method of mass communication so much as a modern-day 'phone tree'.

Though technology has been advancing rapidly for decades now, we have only recently reached a point at which there are very few barriers to communication with others. Though her take in her post is a little more negative, Mandy Edwards says something very crucial to this phenomenon: "As communication and information travel faster and faster, the world seems to get smaller and smaller." What this means to most of us is that what would once have been virtually impossible for the average person, communicating with someone anywhere in the world at any time instantly, is now just a few key-strokes or screen-taps away. In essence, New Media has lowered the boundaries of privilege regarding communication.

From the 1995 film Clueless

When I was a child in the 90s, though people on TV often had cellphones, I didn't know a single person in real life who had one, because they were expensive devices with a certain level of privilege attached to their ownership. My aunt and uncle owned a bag phone for their car, and even that was the kind of luxury they boasted about and showed off like one would show off a diamond necklace. Also in the 90s, my other aunt was the only person I knew who had an internet connection, because she owned her own business. Twenty years ago, having an internet connection and a mobile phone were markers of privilege, but today those things are in some ways free (think free Wi-Fi at a cafe and free computer use at libraries), and a cellphone of some sort is relatively inexpensive for even the lowest-income individuals all over the globe.

Another barrier to global communication in the 90s was the cost of long-distance phone calls. Even today, an international phone call can cost a fortune. One-on-one communication across borders was formerly limited to news reported by major news networks, individual communication via mail, or expensive phone calls. Even the invention of email still required an internet connection and a device to access it from, which, as we have already established, were expensive commodities for all but the privileged members of society just a few decades ago.

Inflation Adjusted Price in 1995: $5,467

However, within the last decade alone, the barriers to one-on-one communication have fallen significantly and, more important and more pertinent to the actual discussion of what New Media really means, the barriers to mass communication have fallen away with the rise of social media. Social media as a method of mass communication may have its downfalls, such as lower barriers to entry meaning lower barriers to accountability in what information is spread, but it allows more information to be shared to a large audience and shared around to more large audiences without the curation of major news networks.

While there is often something negative associated with news networks curating what information to broadcast, the major issue is not some censorship-esque control of information, but rather the fact that major news sources choose what to report based on what will get the most attention from their audience. There are a lot of things an America-centric news report would leave out that social media allows to be spread to global audiences, for example. The point is that New Media changes the speed and the methods of distribution of information through the channels of social media. What would likely never have become broadly reported news finds its audience through social media. New Media has made it possible for people all over the world to share information that matters to certain groups, even when it would not be considered significant enough to warrant an article on CNN or a spot on the nightly news on NBC, in a way that lets the intended audience find and access that information with a quick search and a few clicks.

When it comes down to it, New Media is more about the way we think about communication than it is about the methods through which we communicate. Though technology is the basis of the concept of New Media, it isn't just about the platforms we share information through; it is about a shift in our very idea of communication. New Media has managed to effect global changes in that idea because it has lowered the barrier to entry, redefining what is and isn't possible in communication regardless of location, wealth, or status of privilege.

Difficult Question About Queer Diversity In Fiction

I am going to ask a question that I find difficult to answer, not because I am trying to challenge anyone, but because I genuinely want to hear what some people think. This is not a rhetorical question, this is a real question I think needs to be discussed in both the book world and in the film and television world. While this isn’t aimed at Book Twitter, I got into thinking about this because of reading discourse on queer diversity in the Book Twitter world.

The first and most important question I've really struggled with is related to the idea that we need more explicitly queer characters who state their identity or orientation. There's this idea in both books and other media that implied queerness is just a cop-out, and that we want characters to verbally state their sexuality at some point. My actual question here is, "Does this risk lowering the standards of writing?"

Let me explain: in books, film, TV, etc., one of the most important rules of writing fiction is to not treat your audience like they're stupid, and to 'show, don't tell'. I'm one of those people who really wants to KNOW what a character's sexual orientation is, because it's still so rare to have queer characters. However, I'm also a big fan of GOOD writing practices, and often when writers find a way to get their character to explicitly say, "I'm bisexual" or whatever, they end up with something so terribly contrived that it drags the audience out of the story. Nobody likes writing where it feels like the author is explaining something to the audience because they're too stupid to pick up on the context clues, and there's a serious risk of that happening in many cases.

Yes, there are definitely cases where it fits the story to explicitly state a character's sexuality, but more often than not, it doesn't fit in good writing. A good example of this would be something I wrote once that won't ever get published, where a character, in a conversation, just 'casually' gives the other person their Tragic Past when it really did not fit the situation at all. It was so contrived and terrible, but it managed to inform the audience of the character's whole bisexual backstory.

My biggest worry is that, with this new "SAY IT OUT LOUD!" representation demand in fiction, we're going to get many more situations like this. We've all read some story where there was a token queer character who explains their queerness just for the sake of having someone queer in the story, and it's so cringe-worthy, isn't it? I once read a book where there was a non-binary character who was a fucking SIDE CHARACTER, and they had like two whole pages of explaining their 'Xie' pronouns to the protagonist and basically giving a lesson on being non-binary, and then THEY WERE NEVER IN THE STORY AGAIN! It was so pointless, so clearly a token queer character, and I have this really frustrated feeling that with the demands for diversity, more and more people are going to start sticking in token queer characters who get several pages of preaching on their gender or sexuality just so people can be sure to check that box. That sort of thing is terrible for QUALITY writing.

I want queer characters more than you could probably understand, but I’m entirely against sacrificing quality for diversity. It’s the same reason a lot of people get on my case for giving queer films bad ratings on Chelsea Loves Movies (even though I DO give Queer Films a leg up by only comparing them to each other). I want quality diversity, and I won’t sacrifice my standards just to see more people like me on the screen or on a page.

My other difficult question is related to my own issue there, because I have to ask, "Is it a bad thing to enjoy non-explicit relationships that are portrayed the same way heterosexual couples' are (i.e., their relationship can be implied, not explicit), because it allows people to get away with never making good on queer character relationships?"

I’m a big fan, in every medium, of normalizing queerness and not making it something that needs to be pointed out. It’s the only way I WANT to watch/read/consume queerness in fiction. However, I’m also aware that we might not be to the point yet where that’s enough, because I’m sure that there are people who use this as an excuse for ‘subtext’ and never delivering on the implication. Other times, people get really upset over some writer not delivering when I feel like they did deliver absolutely adequate confirmation of the relationship that I felt they were always working towards portraying.

Because I’m so torn on this topic, I want you guys to discuss this one with me. Comment, tweet me, and I would say DM me but I want this to be a public discussion, so try not to do that if you can help it. This is one of those places where I find myself really struggling because what I want in quality leaves gaps for chickening out on going there. What do you guys think?

Attitudes Towards Adaptation

In my lifetime, I’ve often found that book-lovers look at the word Adaptation as if it’s a bit of a controversial one.

Ever since I was a child, the big things people always complain about with an adaptation are that it isn't as good as the book, or it isn't like the book, or it changes something, or it somehow doesn't fit the reader's expectations. People absolutely hate most adaptations because, to them, they don't represent the story they love from the source material. In fact, there are countless articles on the reasons that books are (almost ALWAYS, as some claim) better than movies. When I was a kid, the biggest offender of the bunch to most people was Harry Potter, along with almost any other film based on a long novel that had to cut a lot of material for the plot. People became enraged at these changes and, I must admit, I joined in sometimes when I was a kid.

Now, I utterly hate the phrase, “The book was better than the movie”. Let me tell you why!

Let’s start with the definition of the word adaptation.


a movie, television drama, or stage play that has been adapted from a written work, typically a novel.

There we go, our product, the adaptation. Very good, now let’s further examine the word adapt.

make (something) suitable for a new use or purpose; modify.

You may ask, “What was the point of that, Chelsea?” to which I say that it’s important to pay attention to what exactly an adaptation is in order to understand why I hate the phrase “The book was better than the movie.” The simplest way of putting it is that an adaptation is an entirely new work that is inspired by the source material. The word ‘adapt’ here means that something is made suitable or modified for a new use. The new use is an entirely different work of fiction and, therefore, comparing it to the source material is pointless and just plain illogical.

“The book is always better than the movie.”
This presumption is widespread, but it is less a critical determination than a personal bias. A movie based on a literary source is often seen as a secondary work and, consequently, of secondary value. Literature, generally, still occupies a more privileged position in the cultural hierarchy than movies do; and readers often have a proprietary attitude towards the book, an attitude that influences their reception of a film based upon it. They often are disappointed when a movie does not match their concept of what they have read, not realizing that reading, itself, is an act of translation. Readers translate words into images and form strong, private, often vivid impressions of what the book’s fictional world looks like and what it means; words become translated into emotional experiences. When a film does not square with the reader’s ideas, images, interpretation – even simple recall – of the book, the movie is deemed de facto deficient and disappointing, spawning the general impression that the movie is never as good.
-Linda Costanzo Cahir, Literature Into Film: Theory and Practical Approaches

What Cahir is pointing out here is exactly why film and literature are entirely different things and why, to use a common phrase, it's like comparing apples to oranges. She is not saying that book readers are snobs, but rather that reading a book is a highly personal matter. Reading a book means forming your own adaptation of the author's intention in your mind, more or less; you, the reader, are the 'director' of your own mental movie of what is going on. In film-making, a director takes a script written, usually, by someone else, and their interpretation of that source material becomes the film we see. In reading a book, you are essentially taking a script someone else wrote and interpreting it into your own final version the same way a director interprets their script.

Because of this personal nature in reading a book, it’s entirely unfair to then say a movie adaptation is or isn’t good based on whether or not it matches your interpretation of the source material.

There are different types of film adaptations, and I have noticed that people only seem to get angry at direct adaptations that aren't carbon copies of the books they are based on. It makes me think that perhaps people aren't looking at adaptations as they should, which is to say that a direct adaptation is no more or less 'valid' than a radical one. We viewers/readers should no more compare Harry Potter to its source material than we would compare Clueless to Emma or O Brother, Where Art Thou? to The Odyssey.

(Yes, if you weren’t aware, those are both adaptations of those works of literature)

Another very important point is that the structure of film-making is vastly different from the structure of book-writing. The very nature of novels is that you have greater introspection, even in the most unreliable-narrator-style third-person-limited POV. Reading gives you insights that cannot be explored on screen, which limits how a film has to adapt (there's that word again) the story through a different narrative style. Film adaptations often have to express thematically something that may be expressed literally in literature, just because you can't show thoughts and feelings without Word of God voice-overs or some odd-ball film technique. At the same time, disjointed agitation is far more effectively portrayed in film than in literature, because one cannot write such a thing unless the writing is some first-person stream of consciousness, and even that might be difficult.

The point is that there are things that cannot be well translated from page to screen and vice versa; there are limitations to both literature and film that the other doesn't have. Time is a major factor. Tone is a big difference. Intention is established through a vastly different method in each medium. The two art forms are both limited by the nature of their own existence; they both have strengths and weaknesses, and to consider one inferior just because it isn't the same is like saying a fish is inferior to a monkey because the fish cannot climb a tree. To say that a book is better than the adaptation simply because the adaptation is different is just a ridiculous statement to make.

You can say that the adaptation doesn't tell a coherent story (or as coherent a story) within its own form. Since I mentioned Harry Potter a few times, a good example of this would be the way that, in trying to show enough of the book to remain accurate, Goblet of Fire leaves a lot of plot holes and forces the viewer to have read the book to fill in the gaps. But that isn't comparing the book and the movie; it's saying the story isn't told coherently enough in the film to merit its existence on its own terms. A movie may also simply not be good as a piece of film-making, such as the film adaptation of the novel Eragon, which was just a lousy film. It didn't matter that it was an adaptation, because it was just a bad movie. Again, there is a difference between 'this film or television series isn't good' and 'this film or television series isn't as good as the book'.

There are also times when an adaptation improves on the book because its changes make the basic story make more sense or fit together better. We all know cases where a book just doesn't tell a story as effectively as it could be told, and it's not uncommon for adaptations to take a good premise and turn it into a film that delivers the narrative the novel failed to. Though it isn't a novel, a good example would be how the film V for Vendetta was adapted into a very coherent and narratively compelling story from a comic book that, while decent, lacked the central coherency to make the story all that compelling.

You'll notice I don't say "the movie is better than the source material," because the styles are vastly different. I say that the film was able to give the story better central coherency, which makes the storytelling more effective and efficient at getting the themes across. An INCREDIBLE example of this would be the mini-series North & South, an adaptation of the novel of the same title by Elizabeth Gaskell. That novel was published serially, and it lacks any central coherency; there is no real central plot even at the beginning, and the end is painfully abrupt and unfulfilling. But the story is there, and the BBC mini-series consolidates that interesting story and focuses in on it in order to deliver the compelling premise the novel fails to deliver in a satisfactory way. The novel is not good. The mini-series is good. I am not saying the mini-series is better than the novel, because the mini-series would be just as good had it been an original idea and not an adaptation, and the novel would still not be that good even if it were the only version of the story available.

I have a very strong opinion on the phrase "the book is better than the movie" because, as someone who has studied both literature and film extensively, I feel it is an entirely unfair statement to both art forms. A book can be good. An adaptation can be good. A book can be bad. An adaptation can be bad. These things are not only not reliant upon each other, they are not even related to each other. So in the future, think about this and ask yourself, "If there were no book, would this movie be a good film?" or, "If there were no film, would this be a good book?" rather than asking, "Is this movie as good as the book?"

As always, I welcome discussion and comments in the comment section, and I welcome you to share this discussion outside of this post. You can always find me on My Twitter, and since this relates to film and television, you can always find my Movie Blog and my TV Blog as well.

Tourism Is Not A Dirty Word

When I was standing on the observation deck of the Empire State Building, looking out at tiny cars and minuscule people, one of the most incredible sights I had ever seen, and experiencing something that still moves me to remember even now, it struck me that I almost didn’t take the time to go there at all. I found myself wondering how on earth anyone could find fault in something as breathtakingly beautiful and moving as standing high above all of the life beneath you and looking down on everything around you. It was like stepping out a door and finding myself on top of the world, almost literally, and looking out at the horizon and knowing that millions of souls are within my eye line. Seeing thousands of cars, hundreds of thousands of lights, and more landmarks than can be counted in one glance is a humbling experience like no other. History, culture, life, and art all within one line of sight is an incredibly moving experience. It was an experience I will never forget, and one that I was warned by the travel blogs I looked at while I packed my bags not to bother with because it was just a “tourist trap”.

Last summer, I went to New York to visit my friend, Amber. Though she lives in Brooklyn, we decided to stay in a hotel in Midtown Manhattan so that we were within walking distance of most of the best tourist destinations. She has lived in the city her entire life, but she had never done most of the touristy things before that week. Our hotel was on 32nd street, only two blocks from the Empire State Building, and one of the things I wanted most was to go to the Empire State Building before I had to come home. I had read many travel blogs to get ideas of what to do on my visit, and many of them decried it as an over-rated tourist trap, “A waste of your money and time,” one particular blog stated, but I remember seeing the Empire State Building on TV and in movies all my life, and I knew that, if we did nothing else, I had to go up to the observation deck and see the view at night.

One thing I didn't know was that, in summer, it rains almost every day in New York. Most nights, the view from our hotel was of low clouds glowing eerily in whatever color the Empire State Building was lit that evening. Finally, on the fourth night of our visit, the skies cleared after a summer shower, and we could look into the sky and see the white lights clearly. The Empire State Building allows trips to the top until 2 a.m., so we went around midnight. It was strange for me to see people filling the sidewalks after midnight, because where I come from, a small town, everything is closed by ten even on the weekends. But this was New York City, and I was on my way to the experience of a lifetime.

When we arrived, we walked past the doormen and doorwomen in their historical attire, costumes of deep burgundy with gold accents, and entered a lobby that words simply cannot do justice. The ceiling was so high it felt like walking into a cathedral, and every surface was gilded. The floor was marble waxed so thoroughly it was like walking on glass, and at the far end from the doors was a large metal mural of the building itself, giving anyone who enters a first, unavoidable taste of the art-deco decor to come further inside. After taking an escalator to the second floor, we entered a maze of velvet ropes, clearly laid out for the larger crowds of the daytime. The words "tourist trap" came to mind after all, but we were already there, so there was no reason not to continue with our experience. After purchasing our tickets, the price of which also added to the idea of a "tourist trap" since it was an incredible thirty-seven dollars each, we joined a small group and began our way to the museum-esque part of the trip. From the second floor we went through an exhibit about the sustainability of the building itself as well as the green movement in New York City, a clear ploy to advertise and appeal to the eco-friendly tourist. It was just another reminder of the dreaded words, "over-rated experience".

The real magic began when we stepped up to the elevators and waited for our turn to be taken up to the observation deck. The elevator doors were decorated with a gold and silver relief much like the mural in the lobby, and beyond those doors we found the fastest elevator ride of our lives. It took us less than a minute to go hundreds of feet; I could feel my ears pop on the way up. When we exited, I was thrilled to discover that, in addition to the sustainability exhibit, there was an entire floor before the observation deck dedicated to the construction and history of the Empire State Building. There were displays of the purchase orders for the initial steel, photos of the building under construction and of the workers walking on steel beams high in the sky, and even article clippings about the building's completion. A plaque near the end of the exhibit stated that the Empire State Building was built in exactly one year. I could hardly imagine such a feat.

After lingering as long as my friend would let me, we entered the final elevator to the top. I had butterflies, I was so excited. I had dreamed of that moment for most of my life, and I was about to see the view from the top of the Empire State Building. When I exited the elevator, I was momentarily distracted by the beautiful artwork on the ceiling and floors inside the observation deck. Like most of the art-deco styling of the building, there was beautiful tile-work on the floor and metal sculptures that looked like clockwork gears hanging from the ceiling. It was hard to take our eyes off of those things, since there was little else to distract from them at that point. Through the windows there wasn't much to see, just the blackness of the sky, and we had to try twice to find the door to the outside rather than the doors for those outside to come in, a clear side effect of excitement. When we finally found the correct door, we walked out onto a ramp, and the first thing I could see was the spire atop One World Trade Center. When we approached the bars that separate viewers from a fatal fall, I started to feel a rush of energy, an overwhelming excitement at what I was about to see. I knew immediately that there was nothing over-rated about that moment, no matter that, apart from Amber, everyone else up there was a tourist to the city just like I was.

The quiet was what hit me first. I wasn’t overwhelmed with anything, because it was so very quiet up there. Instead, the moment I took a look out at the city, I was overcome with the strangest sense of calm. It was the most peaceful moment I had experienced since I got on the plane to head to New York. It was windy and cool and I honestly couldn’t imagine a more peaceful place in the entire world at that moment. Everyone was so quiet. Nobody up there spoke above a murmur. It was as if we could all sense that same odd, overwhelming peacefulness and felt that if anybody spoke too loud the spell might be broken. The view was magnificent, but the emotions are what I remember the most. I could see so much of the world from that spot, I knew intellectually that there were tens of millions of people within my sight line, and instead of feeling anxiety at how small I was in the great big world, I felt like I was a part of it.

Looking out at the cars below us, so tiny that they looked like toys, and at the people on the sidewalks who looked like ants, I felt like in that moment we were so incredibly human. Amber and I were the most human we had ever been. Every person out there, all the millions of people in the thousands of buildings below us and across from us, even the ones in tourist traps, was a human being the same as the two of us. I felt connected with humanity and it was just such a beautiful feeling. We stayed and took photos until our phones died and when we reluctantly left that platform, I remarked to her that I really felt that, had we done nothing else besides make that single visit to that single landmark, I would have been entirely happy with my vacation.

It was almost a spiritual experience to see the world from that point of view, and it raises the question of whether “tourist spots” really are the great dark mark on travel that travel writers make them out to be. Thrillist’s travel section listed the Empire State Building on its list of “America’s 10 Worst Tourist Traps To Avoid”. The words they used to describe the same experience I had were, “you will literally spend hours of your life (that you will never, ever get back) slogging through a crowd of Europeans and honeymooners from Western Pennsylvania for a view of the city that you’ve seen a thousand times on TV.” As someone who did spend hours of my life there, but hours I will never forget, I find it incredibly misleading that some people will read that and decide to skip the experience that Amber and I had. It seems to me that people who write about travel often take this angle in which they stress the virtue of “going off the beaten path” instead of visiting tourist destinations, and yet I was in New York with a native of the city, and both of us had a better experience at a so-called “tourist trap” than at any other destination we visited. Online, there is even a stigma of sorts around going to popular tourist destinations. People brag about how they travel to places where “the people” are rather than go to the tourist spots. The problem, however, is that we didn’t want to meet “the people”. Amber lives there; she knows “the people”. We wanted an experience we could always remember.

In my experience, the tourist things are almost always the best part of any trip. There’s no shame in having fun doing the most stereotypical, touristy things when you travel anywhere. Yes, there are some tourist traps that are no fun, and yes, I understand the people who live there are driven crazy by tourists getting in their way, but it strikes me as something incredible that the “tourism shaming”, for lack of a better term, can reach the point where travel blog writers praise eating at a deli no different from a dozen others simply because it’s not the usual tourist thing, when you could instead visit a museum, a historical landmark, an amusement park, or just an interesting tourist attraction. In New York, I visited neighborhood places off the beaten path, and yes, it was fun. It was great walking around with a New Yorker so it wasn’t obvious I was a tourist. We saw some things I would never have seen were it not for traveling around with a New Yorker to show me all her favorite places. However, they were all things that could be experienced almost anywhere. Every city has its local eating places. Every city has a fun little coffee shop the locals love to frequent, the pizza place they claim is “the best in the world”, and some quirky local shop with a zany owner. It strikes me as odd that travel blogs expect you to travel somewhere to experience the same things you have at home, just in another city. I enjoy those things, yes, but I enjoyed the Empire State Building more than L&B’s pizza, that’s for sure. Even my New York native best friend had the same oddly spiritual experience that I did visiting the Empire State Building, tourist trap or not.

The most fun we had was seeing the sights like stereotypical tourists, because though she had lived there her whole life, there was so much she had never done. We went to kitschy tourist stops, we spent far too much time taking selfies in Times Square, we even went on a sightseeing boat ride with all the other camera-and-fanny-pack-laden tourists. Eating at her favorite pasta place near where she went to college was fun, as was visiting her favorite independent bookstore, but there was nothing disappointing about the tourist traps, and most of all, there was absolutely nothing overrated about the Empire State Building. So next time you travel, think twice about the warnings from travel blogs about avoiding tourist traps. You never know which one will end up being the best part of your trip.