By: Hannah Graham-Brown
Picture credits: karengoncalvese/Shutterstock.com
Since ChatGPT’s release in November last year, the world has hailed the artificial intelligence (AI) language model as the defining technology of our decade. The powerful AI chatbot, developed by the company OpenAI, can reproduce knowledge on virtually any topic, from reciting Shakespeare to solving algebraic equations. Hitting record heights, the website surpassed one billion visits a few months ago. More recently, ChatGPT has hit the headlines as part of a more popular AI narrative, one defined by sentient machines and humanity’s doomsday clock. But this idea of superintelligent machines not only distorts public understanding of the emerging technology’s capabilities – it diverts attention from the precarious human labour behind the development of AI systems.
While ChatGPT’s predecessor, GPT-3, demonstrated an impressive ability to scrape vast amounts of knowledge from the internet, the model was prone to producing racist, sexist, and hateful language. To build a safety mechanism into the bot, OpenAI looked to the blueprint of companies like Meta: feed an AI with labelled examples of violence and hate speech, and the tool could learn to identify and catch those forms of toxicity before they reach consumers. And like other social media companies, OpenAI sent hundreds of thousands of text samples to an outsourcing firm in the Global South, this time in Kenya. Headquartered in San Francisco, the ‘ethical AI’ and data services business Sama employs workers across Kenya, Uganda, Costa Rica, and beyond to curate and filter batches of labelled data. Spending hours scanning descriptions of some of the internet’s darkest material, including paedophilia, torture, and incest, these invisible workers are chained to the peripheries of a billion-dollar industry – all for a meagre wage of $1.32 to $2 per hour.
OpenAI is not alone in this practice. Far from it: reports have previously exposed identical data curation techniques at tech conglomerates like YouTube, Facebook, and Twitter. Other companies have applied the same playbook to software innovations such as AI voice assistants and self-driving systems. The technology hardware industry is no exception, relying on cheap forms of labour, with armies of outsourced factory workers hired to build smartphones, laptops, and other electronic gadgets.
It is increasingly apparent that the market of digitalisation, one that has powered this generation of AI chatbots, has a labour problem. This issue has only worsened as the pressure to incorporate AI into company products mounts, with many startups even recruiting people to impersonate AI systems. With time, we are shifting further away from an internet built around shared community interests to one entrapped by the commercial prerogatives of the few. The story of a small number of profitable Silicon Valley companies, and the human labour which underpins their gains and celebration, is telling of a troubling future for the AI race.
The Digital Gig Economy
To the relish of tech executives, and the dismay of labour unions, the rise of the gig economy is fast underway. A term for the short-term services or asset-sharing provided by freelancers via digital platforms, this form of employment defies the geographical bounds once intrinsic to the means of production. As a result, the new digital regimes of work have allowed an unprecedented number of clients, managers, and workers alike to clock in from anywhere in the world, at any time of day.
This surge in digital labour is best explained by the combination of two developments. First, unemployment remains a concern for much of the world; the International Labour Organization estimates that global unemployment will edge up in 2023 by around 3 million, to reach 208 million. Second, the technology driving human connectivity is spreading and evolving at extraordinary rates. Today, nearly 4.9 billion people are active Internet users, accounting for around 62 percent of the world’s population. Recent discussion around the ‘metaverse’ is one indication of the digital-physical fusion rapidly redefining the means of communication and connection.
Together, these trends of unemployment and connectivity have led millions to take up outsourced, digitally mediated work to escape the constraints of local markets. And as more people in low-income countries connect to the Internet, exemplified by the likes of India and the Philippines, services such as translation, marketing, and transcription are now entering a competitive global market.
The Wretched of the Internet
These new digital relations raise serious questions about levels of consumer and worker protection and, more generally, labour-market policies. Some researchers, such as Mark Graham, Isis Hjorth, and Vili Lehdonvirta of the Oxford Internet Institute, are seeking to understand how the Internet is irrevocably changing work patterns, labour regulations, and worker livelihoods in Southeast Asia and Sub-Saharan Africa. Their ethnographic studies point to the skewed distribution of supply and demand for digital work, which leaves workers feeling disempowered as they are forced to underbid one another, face racial and geographical discrimination, and are kept in the dark about the purpose of their service. The lack of formal contracts has left many workers financially insecure, working long hours with no sick leave or holiday pay. In other words, according to Mark Graham,
“When we use Facebook or do a Google search, we have no idea what kind of human labour is behind those clicks or those uploads, what kind of people are sitting in content moderation farms in the Philippines.”
Commercial content moderation (CCM) workers and firms are the digital gatekeepers, or ‘janitors’, of the internet; the sheer volume of uploaded content means that CCM reviewers are routinely exposed to visual and textual posts that are psychologically damaging. Rewarded for their invisibility, these individuals have few resources to turn to for the psychological toll of daily footage scrubbing. The near absence of interviews and memoirs, largely due to the non-disclosure agreements many are forced to sign, not only leaves these workers ill-equipped and alone in handling the mental repercussions of CCM, but also means the demands made of content moderators remain unknown, unaddressed, and unaccounted for. The impact of witnessing post after post of violence, and the physical and mental exhaustion that comes with secondary trauma, remains severely understudied. In 2019, a group of contractors who worked for Facebook’s Berlin-based moderation centres reported cases of graphic content addiction, trauma-induced drug use, and even a gradual indoctrination towards the far right. A year later, a San Mateo County court approved a $52 million settlement requiring Facebook to compensate American moderators for the post-traumatic stress disorder they developed on the job. While Silicon Valley has gradually implemented some protocols to address these issues, many workers hired by third-party moderation sites continue to cite a lack of counsellor support. For this reason, the hazardous nature of outsourced digital labour, particularly in the Global South, is unlikely to change noticeably for workers anytime soon.
CCM workers are furthermore forced into a predicament where they must weigh the monetary value of sensationalist content against demands for brand protection and guidelines on disturbing material. As history has shown, racist popular culture is a lucrative business; the viral meme of ‘Bed Intruder’, which features a young Antoine Dodson from Alabama crying “Hide yo’ kids, hide yo’ wife”, is a clear example. Despite his eventual partnership with the YouTube channel Auto-Tune the News, the original site of the meme, the humour that traded on Dodson’s identity markers of Blackness, poverty, and effeminate attributes both fed into harmful stereotypes and reduced the complexities of those identities to a single caricature. The profitability of ‘Bed Intruder’ would be followed by Sweet Brown’s ‘Ain’t Nobody Got Time for That’, which would later give way to TikTok compilations of the class-charged ‘M to the B’ video. Each of these posts, riddled with their respective racist and classist tropes, can still be found on YouTube with views in the millions, many with a heavy smattering of commercial advertisements throughout. One only has to google the net worth of YouTube to understand that content moderation as a practice, and the decision over what stays online, is largely a financially driven one.
This underbelly of the digital gig economy has often been referred to as the ‘sweatshops’ of the online world. The vulnerability of workers is most acute in Southeast Asian countries like the Philippines, home to some of the fastest-growing hubs of the profession – a byproduct of a much older call centre industry. Academics like Jonathan Ong have pointed to the porous boundaries between the country’s digital ‘sunshine’ industry, such as creative work in marketing, and the booming side gig of political trolling. Like content moderation firms, political consultancies and strategic PR firms regularly enlist young workers in the Philippines and Indonesia, often from the precarious middle classes, to work in trolling operations and click farms. Interviews conducted in the country by Ong and Jason Cabanes reveal a startling truth: the ad hoc, project-based nature of disinformation production enables these same firms to claim the title of corporate digital professionals ‘doing their jobs’, thereby displacing any responsibility for the spread of political deception and toxicity. As with CCM work for multinational tech companies, the digital divide between the powerful and the wretched is very much alive here.
Bridging the Digital Gulf
Sam Altman, co-founder of OpenAI, joked in 2015 that, “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” Great companies. Impressive tech. Big money. In just a few short words, Altman encapsulates a timeless motive. Even with massive supply chains of human labour and scraped data, much of it used by tech companies without informed consent, the industry’s infatuation with commercial profit perpetuates a digital gulf worldwide. This gulf pits powerful tech companies competing in an “AI first” market against heavily surveilled and deeply exploited gig workers: the content moderators, warehouse workers, and data labellers confined to precarious labour conditions. The result is the birth of a global underclass across the latest emerging technologies.
Certainly, the semantic danger of the term ‘artificial intelligence’ is that it risks convincing people that the world runs on a singular, one-dimensional logic: that of highly cognitive, unforgiving rationalism. In reality, artificial intelligence is neither ‘artificial’ nor ‘intelligent’ outside of its formal pattern-matching logic. Apps like ChatGPT exist today only because of the training sets and algorithms produced by humans. Humans, that is, with all their irrational and multiplex emotions and spirit. And therein lies the silver lining to the harmful outgrowths of AI: one must remember, and make visible, the human in the work of intelligence and machine learning. For those in the shadows of the digital industry, like Sama’s content moderators in Kenya, it is critical that their stories are heard and acted upon through stricter labour regulations and social insurance. Researchers, journalists, and civic institutions must centre the voices of workers in their investigations, whether through co-developed research agendas or activist platforms like Turkopticon. Efforts by Organisation for Economic Co-operation and Development (OECD) member countries to introduce a minimum wage and regulate working time are already underway on various platforms facilitating on-demand labour. In several countries, especially within the EU, independent unions of platform workers are negotiating working conditions for their self-employed members.
Moreover, in the pursuit of ethical AI, transnational worker organising cannot be neglected; whether it is the unionisation of content moderators in Kenya or the resistance of Amazon Mechanical Turk workers in the US, the global workforce must continually reach beyond its immediate political constituency to unite with broad segments of society, now clustered into rich and increasingly transnational social movements. Yet, given the irregularities of global capitalism, this solidarity cannot confine itself to a purely North-South direction, one which reproduces a ‘top-down, patron-client’ dynamic between Euro-American trade unions and labour activists in the Global South. The context-specific struggles of workers seeking empowerment are critical here. Ultimately, solidarity across geographic idiosyncrasies, and across hierarchically structured groups such as Amazon Employees for Climate Justice (AECJ), is cause for fear in any power-grabbing tech executive. But genuine reforms are still few and far between. The reality, and the historical lesson, is that those with money and interest will always lay waste to the most vulnerable unless the workers’ fight is reborn. The AI race, despite all its allure, is a timely reminder that we stand to lose a great deal when this truth is forgotten.