The topic of algorithms in relation to our highly personalized and tailored online experiences instantly left me taken aback and thoughtful. How could anybody (or anything) possibly have the right to reduce my interests, thoughts, and online behaviors into a dumbed-down, assumptive version of me? How can my ponderings and curiosities be answered through the narrow funnel of whomever an algorithm calculates me to be?
“Literacy: are today’s youth digital natives?” by Boyd was the initial jumping-off point for my curiosity and fascination surrounding “the politics of algorithms.” My searches in and around the depths of the internet were never something I put much thought into, in the same way that a thought may simply pop into your mind and that’s that. Take the classic example of how most casual conversations happen in this day and age: you may be out for dinner and drinks with friends, discussing locally-sourced ingredients and cocktails and breweries and distilleries, and so on. One person has a random curiosity about the origin of their aged rum, so someone automatically whips out their smartphone for a quick Google search, and voila, we have an answer. Now, let’s say this particular group of friends catches lots of happy hours together. They become more and more centered on the details of their dining experiences, and it just so happens that the same friend is always the one pulling their phone out to fill in the blanks.
Unbeknownst to them, Google is forming a preconceived notion about this person who always seems to be looking up alcohol-related references in the evenings. Suddenly the ads they see center more and more on the big sale at BevMo this weekend, or the release of a new line of high-end barware and merchandise at the local shopping mall. A simple keyword search for “cherries” may then list ten results for brandy-soaked Luxardo cherries before a simple definition of the word and some clip-art images of cartoon cherries.
Search results are based on algorithms created by engineers with conscious (or unconscious) biases, so the results will always be swayed in some way or another. What Google shows us can be shaped by what we have searched for in the past rather than by our actual preferences.
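To make this concrete, here is a minimal, hypothetical sketch of how a history-weighted ranker might re-order the very same results for two different users. Every name, tag, and weight here is invented for illustration; real search engines combine vastly more signals than simple term overlap.

```python
from collections import Counter

def personalized_rank(results, search_history):
    """Re-order results by overlap with the user's past search terms.

    A toy stand-in for the history-based personalization described
    above, not a description of how Google actually ranks results.
    """
    # Tally every word the user has ever searched for.
    history_terms = Counter(
        term for query in search_history for term in query.lower().split()
    )

    def score(result):
        # Count how often this result's topic tags appeared in past queries.
        return sum(history_terms[tag] for tag in result["tags"])

    return sorted(results, key=score, reverse=True)

# Two users issue the same query, "cherries", but have different histories.
results = [
    {"title": "Cherry (fruit) - definition", "tags": ["fruit", "definition"]},
    {"title": "Luxardo brandied cherries, 10 best", "tags": ["cocktail", "rum", "brandy"]},
]

happy_hour_history = ["aged rum origin", "local brandy distillery", "cocktail bars"]
neutral_history = ["cherry pie recipe", "fruit tree care"]

# The cocktail-tagged result rises to the top for the happy-hour searcher,
# while the neutral searcher sees the plain definition first.
print(personalized_rank(results, happy_hour_history)[0]["title"])
print(personalized_rank(results, neutral_history)[0]["title"])
```

Even this crude sketch shows the essay's point: identical queries, divergent results, driven entirely by accumulated history rather than the searcher's present intent.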
We’ll take my example of the happy hour lover, but let’s add some details and reference points to make her human. She is a full-time student in pursuit of a culinary degree, with aspirations of opening her own rum distillery and locally-sourced organic vegan restaurant. While yes, perhaps she does enjoy relaxing with friends over a nice cocktail and appetizer at the end of a long and stressful week, this doesn’t give anybody (or, again, anything) the right to categorize her as some kind of uninvolved and highly vacant lush with nothing else occupying her brain. Yet this is exactly what algorithms are doing to us. A formula put into place by someone working at Google does not have the capacity to understand the individual pieces that add up to make a functioning, complicated person. An algorithm doesn’t understand that our culinary happy hour lover is not only enjoying her time out with friends; she is absorbing critiques and curiosities from those friends, adding those informational bits to her mental database, occasionally pulling out her phone to answer pertinent questions, and, all the while, fully immersing herself in the space and culture in which she hopes to spend the whole of her career.
Personalization is nice, but the inherent danger here is that we’re being placed into informational bubbles based on what Google thinks we’re interested in, or what may be profitable to Google and its sponsors, but not necessarily the informational bubbles we want to see ourselves in, or be seen in. We do not need to succumb to this practice that we have involuntarily fallen into by relying so heavily upon Google, other search engines, and the Internet as a whole.
Algorithmic categorization is a topic of the utmost importance, first because it limits our horizons as free thinkers and explorers: we aren’t being allowed to come into contact with everything we could encounter organically, without the algorithms. It is also extremely shocking and threatening if we compare the situation to the one imagined in the novel 1984, where our entire concepts of reality, and of freedom of speech and thought, could be compromised, unless of course we keep mind enough to protect them.
Beyond the idea that we are being categorized by our interests and our social networks, we must also be aware that we’re being spoon-fed only the information that big business and our governments wish for us to be exposed to, with algorithms and firewalls and the like all skipping hand-in-hand into the black holes of the Internet with our intellectual property in their back pockets. When our Internet-usage rights as Americans are compared to those of Internet users in other countries, we see information being blocked and intellectual expression being stifled at a rate more alarming than we may even be able to understand, given our highly selfish and blinkered American views and ways.
We cannot allow ourselves to be categorized and belittled by these algorithms and the companies and corporations that write them. We cannot allow ourselves to be pigeonholed into neat little categories wherein the establishment may come to jab its products and ideals into our brains… through our eyes, through our ears, through our screens, just because their calculated formulas made them believe we would be the most susceptible to their messages and relentless attempts to get us to hop on board with whatever they need to sell.
Piggy-backing onto other topics we have explored, I find myself thinking about the concept of “digital natives” and how some people grow up feeling automatically inferior to others, just because they haven’t been brought up with the same sets of skills and knowledge that make digital and Internet navigation easier and more stress-free. In relation to Google’s algorithms, and what certain people may or may not be exposed to based upon their own personal formulas, perhaps the less “digitally literate” are the ones who will suffer most from pre-calculated combinations of search results, because they may have never been taught to look past the first few results that have “Ad” posted blatantly below the website’s title in their green little boxes, enticing us to click-buy-click.
In an international sense, without further personal investigation, the idea of an internet advertisement for someone in another country is difficult to fully grasp, but the same principles of advertising hold true across nearly all borders. With the use of algorithms, advertisements will pop up where we may least expect them, because the platforms know we are easily distracted by products and services; they have folded our one single search for brand-name hosiery into our personal algorithmic profile, and they will never forget. For someone in a strictly Communist economy, perhaps an advertisement for certain governmental institutions and officials may be more common than an advertisement for a product, or one may piggy-back off the other. On the opposite end of the spectrum, in a place like Amsterdam, algorithms may skew toward more sexual content due to the legality of prostitution and the abundant popularity of the Red Light District. An Internet search that is completely tame, normal, and legal in one country could mean death or prosecution in another. These examples further demonstrate how algorithms shape our online experiences, and how their usage may vary internationally.
Perhaps a lower level of digital literacy translates into a reality where people are inclined to believe almost anything they read on the Internet, if only because it’s plastered right before their very eyes. While even some of the most digitally literate people may not be entirely aware of what Google is doing with its algorithms, I believe they would retain a high enough level of curiosity and disbelief that certain search results would seem too meshed together and calculated, and they might wonder whether they should clear their cookies, or try a search in different words or phrasing, in an attempt to dive deeper into the realm of possibilities and results. Algorithms may take hold and commence calculating you, forming notions and conceptions of who you are, who you’d like to be, or who you are voluntarily or involuntarily becoming, all while you may not even be aware of their existence.
In order to dive deeper into the realm of algorithms and algorithmic research, I took a look at the Data & Society research institute and its “Algorithms and Publics” project, where Danah Boyd’s work on algorithms is highlighted and broken down in “Who Controls the Public Sphere in an Era of Algorithms?” with its list of questions and assumptions. Many concerns are brought to life here, revolving around the idea that algorithms are robbing us of full access to the immense web of information on the Internet; even though the technology behind algorithms supposedly seeks to pinpoint our wants and needs, we are continuously being pigeonholed and obstructed from exactly that. In our ever-expanding era of personalized communication, we must continue to ask ourselves exactly what role we want and need algorithms to take in it.
The role many of us as Americans have subscribed to is one in which we are very centrally focused on our own country, with our own “issues,” viewpoints, and “first-world problems.” One can only imagine what Americans look like to an outsider looking in, but it’s not difficult to guess. America holds only a small percentage of the world’s population, and yet, by many estimates, we consume around 30 percent of the world’s resources. As a country, we tend to be highly egocentric, focused almost entirely upon trivialities like the friendship between Trump and Kanye rather than opening our eyes to the immense wretchedness unfolding in other countries throughout the world as we speak.
When this sobering situation is superimposed onto our online crises and the idea of one “public sphere” of information, we must take a step back and realize that Americans and Westerners are simply not the only people on the Internet. With more than 3 billion people online throughout the world, it is important to remember that people from every imaginable walk of life are accessing the Internet in some way, and many of us would agree that it would be difficult to find commonalities between the average modern American and someone in a third-world country on the other side of the globe. So how can it be that the Internet has revolved around the notion that we are all somehow still one “public sphere”?
Danah Boyd reminds us that throughout history, even before the Internet, we have believed in the idea of a public sphere, and yet the public sphere has never been able to include everyone, with different people excluded in different eras and places. Today, in a world where diversity abounds, we must refocus our attention on the idea that there is actually an unknown number of varying public spheres. If the distribution of information is unified to reach the maximum number of people in one public sphere, all focusing upon one common set of concerns, then many people will always be left out, as history teaches us.
Take Eli Pariser’s side-by-side examples of Google results for “Egypt” produced for ‘Scott’ versus ‘Daniel’ – both Caucasian males living in New York, but with very different search results. Why? Are their results true to what each of them actually cares about and finds most relevant? As Eli Pariser states about the impact of increased personalization online, “Democracy requires a reliance on shared facts; instead we’re being offered parallel but separate universes.” To make the contrast easier to grasp from the start, we can compare the internet searches of a twenty-something Caucasian male college student with zero religious affiliation and a bottle of whiskey to those of an elderly, retired, devout Catholic widow with ten cats in her Kentucky home on the prairie. Obviously or not, these two people would most likely have completely different Internet histories and personas. Now, take that same college student and compare him to a middle-aged devout Muslim living in Saudi Arabia. These two men would be polar opposites in their Internet realms. This being said, to be naïve enough to believe that the Internet can be reduced to one single public sphere of information would equate to living in a cave and knowing absolutely nothing about the workings of this complex world we live in.
Stepping further outside of the bubble, we pull at the cobwebs of our brains and remember our own introductions to the Internet, and the seemingly endless possibilities it presented us with. Boyd reminds us that “for many of the visionaries, the Internet would ideally be a borderless, government-less sphere available to anyone… it was imagined to be a space that could support the emergence of multiple publics and individual voices, outside of the dominant, hierarchical traditional media ecosystem.” Outside of our overwhelmingly “American” viewpoints and tendencies, people suppressed in various ways in other countries around the world were overwhelmed with hope in their introductions to the Internet, and at first realized that there was only so much monitoring that could be done by their governments regarding their internet usage. However, with the expansion of algorithms and their immense reach, these people were once again stymied, but in a much more modernized sense, which is arguably and comparatively worse.
“The early days of the Internet were filled with the promise of breaking down impediments to a diverse and representative media environment… in theory, anyone with access to the Internet could start a blog, and spread information on social media… the traditional material means of creating and disseminating news – access to airwaves, camera equipment – were no longer a prerequisite to entering the public sphere as a source of information… the rise of the Internet put into sharp relief the degree to which news media had long served as a gatekeeper, making visible the limitations of who had access to communicate by and through contemporary news media.” The immense horizon of possibility lying within this revolutionary Internet was something that we knew immediately would change the world as we knew it, and it wasn’t going anywhere fast. With just a screen and a keyboard, it seemed that two people from opposite edges of the world could now communicate in sheer nanoseconds, and in a brand-new way that challenged even the telephone and the television in revolutionary prowess. The world would quite literally be at the lips of anybody who wanted a taste, and the intoxicating wealth of information would surely be narcotic in itself.
As if a true “freedom of information” movement could have been ushered in through the introduction of the Internet, a bright ray of hope shone for only a minute before the newest forms of gatekeeping were invented, introduced, and thoroughly enforced, switching the generation’s leaders and influencers from a power trip of limited information dissemination to one of extreme information filtering. There was a light at the end of the tunnel and a vision of freedom flowing abundantly through the air, but then new gatekeepers of sorts emerged.
Taking each country as an individual case, the freedoms we experience as Americans stand in stark contrast to the Internet experiences of a person in North Korea, for example, where all websites are under government control; or Burma, where authorities filter e-mails and block access to the sites of groups that expose human rights violations or disagree with the government; or even Cuba, where the Internet is available only at government-controlled access points. When the Internet was a pretty little infant, we couldn’t even fathom the way firewalls and new gatekeepers and algorithms would block and limit the online abilities of any person who could get their paws on a computer.
Those of us in the American generation who grew up with huge multi-colored Apple computers in our elementary schools may have experienced the beginnings of the Internet without firewalls or educational filters or the like. I distinctly recall one of my little friends showing me the filthy and violent websites that her raunchy older brothers had shown her, and somehow, in our small elementary school computer lab, nobody knew we were accessing these things. I’m not even sure whether firewalls existed yet.
Now, in stark enough contrast would be the internet experiences of those aforementioned older brothers. They had been promoted from nudie magazines to a full realm of the earliest internet pornography sites (which would later prove to introduce an entire range of cognitive sexual development issues in themselves) and even if they were just young teenagers, surely their parents either had no idea such sites existed at all, or they simply had no knowledge of how to prevent their young boys from seeing them.
Finally, to compare those earliest American online freedoms to online freedoms of an international flavor: surely some people with the earliest internet access in countries like Bangladesh or Pakistan would be astonished and content enough just to be able to access some banned books, or certain educational materials at all. Our cultures cannot be viewed in a vacuum, but rather on a scale of mental importance and freedom.
The light at the end of the tunnel swelled and flared like the sun, only to be smothered and put out as soon as it had appeared. Boyd confirms that “although the potential for a decentralized Internet is still galvanizing for Internet activists, most people’s experience with the Internet is far from what advocates idealize.” With the expansion and increased complexity of algorithms and their roles in the varying internet usage of people around the world, our respective governments and government institutions will dig their slimy fingers into our freedoms, teaming up with and devouring our most beloved online news sources, and meddling to the point where the most powerful governmental agendas will be the only thing available to us, sitting upon our intellectual prowess and property while bastardizing any views that oppose their own.
Not only are the algorithms choosing content for you based loosely upon the ways you have used the platforms, but also based on what the other people in your network are doing, or what they’re interested in. This leaves huge potential for error, especially as your network grows. Boyd explains that “many platforms, particularly those that now serve as significant sources of news and information (Facebook, Google, etc.), develop systems that value content based on whether an individual is more likely to be interested in that content, based on that user’s prior interaction with the platform as well as the actions of other users in the network.” This only further underscores how limited people in other countries may be in the quality and quantity of information they’re seeing and receiving online.
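The ranking idea Boyd describes can be sketched in a few lines. This is a deliberately simplified, hypothetical model (all users, items, and weights are invented): an item's predicted interest combines the user's own past interactions with those of the people in their network, which is exactly how a homogeneous network can narrow what any one member is shown.

```python
def predicted_interest(user, item, interactions, network, network_weight=0.5):
    """Score an item for a user from their own history plus their network's.

    A toy illustration of network-influenced ranking, not any real
    platform's formula. `interactions` maps user -> {item: count}.
    """
    own = interactions.get(user, {}).get(item, 0)
    friends = network.get(user, [])
    # Interactions by the user's friends also count toward the score.
    from_network = sum(interactions.get(f, {}).get(item, 0) for f in friends)
    return own + network_weight * from_network

# Invented data: our culinary student and her happy-hour friends.
interactions = {
    "student":  {"culinary_school": 5, "cocktail_bars": 1},
    "friend_a": {"cocktail_bars": 4},
    "friend_b": {"cocktail_bars": 3},
}
network = {"student": ["friend_a", "friend_b"]}

# Her own profile barely mentions cocktail bars, yet her network's activity
# pushes that score (4.5) nearly as high as her dominant real interest (5.0).
print(predicted_interest("student", "cocktail_bars", interactions, network))   # 4.5
print(predicted_interest("student", "culinary_school", interactions, network)) # 5.0
```

The point of the sketch is that the network term can nearly swamp the user's own signal: the larger and more like-minded the network, the more the feed reflects the group rather than the individual.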
Consider a country, somewhere in the Muslim Middle East, for example, where the vast majority of the people you know hold similar or even identical views, religions, connections, and networks. With the strength and influence of algorithms in mind, how would it be possible to access information outside of your personal norm? If you are so deeply tuned into your country, and as a result your network and your personal online experience, your personal algorithm may be so acutely formatted as to make escape virtually impossible. Starting from scratch, if that is even possible, may be the only solution.
The conceptual terror of Big Brother is especially alarming when it is something pre-calculated in the form of a mathematical configuration rather than applied on a specific, individualized basis by a governmental entity or organization in the “editorial” role. Boyd mentions that “concerns over the role of corporations, questions over what values are driving the decisions, and issues with the mechanisms of accountability” have been at the forefront of arguments revolving around algorithmic equations and internet usage. Aside from the fact that we are being blockaded from the true and honest full realm of the Internet, when it comes to the ethical view, how can we determine whether we are actually experiencing a kind of civil rights violation? Shouldn’t we be the only ones able to decide our own fate in the way of the knowledge we receive? Boyd reminds us that “most algorithmic-driven news sources – from search engines to social media – attempt to identify what an individual is interested in and guarantee that this is what they receive… how much this differs from what they’d voluntarily consume is a topic of significant debate.”
When it comes right down to it, algorithms and the people behind them have already begun to squander their own potential. As soon as mutual trust and understanding are broken beyond repair, the idea or institution itself may as well be doomed. Once individuals and the general publics of countries around the world (at least those countries with the legal right to be heard and to noticeably care at all) comprehend and witness firsthand the immense impact that algorithms are placing upon our so-called “individualized” and “personalized” online experiences, perhaps usage of search engines and the Internet as we know it will begin to morph into something new, something less penetrable by governments, corporations, and their algorithms. The next, and hopefully final, step in this informational spiral will be to regain control of the degree to which individuals have equal access to the means of producing, disseminating, and accessing information online, and to redefine exactly how much we are willing to allow algorithms to control our entire online existences.
Boyd, Danah (2014). “Literacy: are today’s youth digital natives?” In It’s Complicated: The Social Lives of Networked Teens. New Haven: Yale University Press.
The well-known author Danah Boyd explores the ideas and taboos surrounding the concept of innate digital literacy: in the technologically advanced world we currently live in, children grow up being expected to simply know how to use their technological resources, and yet nobody formally helps them learn more efficient or proper forms or etiquette. Pros and cons of these situations are explored, as are solutions to the negatives.
Boyd, Danah, & Reed, Laura (2016). “Who Controls the Public Sphere in an Era of Algorithms?” Data & Society Research Institute.
The authors delve into a list of questions revolving around the pros and cons of algorithmic calculations online: the current ways we are being stifled by algorithms, how we should be benefiting from them, and what we should expect to see happening in the near future regarding their use. Culture, government, security, freedom, and other topics are explored.
Pariser, Eli (2011). “The Filter Bubble: What the Internet Is Hiding from You.” LSE Public Lecture.
This PowerPoint presentation is a simplified outline of Pariser’s thoughts on how algorithms are continually shaping our online experiences. I personally drew upon a side-by-side comparison of how search results may vary greatly from person to person.