By: Niels Brink and Chira Tudoran
While most people have probably interacted with Artificial Intelligence (AI) before, they may not realise how much it affects them. They might think of AI as the mechanism that remembers which Netflix show they were watching, but the field is rapidly approaching the ability to do human jobs better than we can. This may not be a cause for concern, however. After all, Iron Man lived together quite well with J.A.R.V.I.S., his AI assistant, which could interact with him as if it were an actual person. In part two of the future technologies series, we will explore the state of AI and its current and future roles.
AI, machine learning, and deep learning
Before turning to the current state of AI, we should define what AI is and how it works. For that, we need to explore three different, but closely connected concepts: AI, machine learning, and deep learning. AI combines math and logic to simulate the human thought process in order to make decisions and learn. Machine learning is a subset of AI that focuses on learning new information, generally trends, from a dataset. An example would be a programme that analyses a set of temperatures from the past 100 years to find trends in the data. This learning process allows the computer to keep improving as it sees more data. Finally, deep learning is a specialised form of machine learning that attempts to loosely simulate the human brain. Its key difference is that its data does not need to be structured or labelled before it can be processed, which means humans do not have to be as involved. For example, if you gave it a folder of animal images, a deep learning system could detect which animal it is presented with. The more images it has at its disposal, the more accurate those predictions will be.
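The temperature example above can be sketched in a few lines of code: fitting a straight line to a century of yearly data is one of the simplest forms of "learning a trend" from a dataset. The numbers below are synthetic and purely illustrative.

```python
import numpy as np

# Synthetic yearly mean temperatures (in degrees C) for illustration only:
# a 0.01 degree-per-year warming trend plus random measurement noise.
rng = np.random.default_rng(seed=42)
years = np.arange(1924, 2024)
temps = 14.0 + 0.01 * (years - 1924) + rng.normal(0.0, 0.2, size=years.size)

# "Learn" the trend: fit a degree-1 polynomial (a straight line) to the data.
slope, intercept = np.polyfit(years, temps, deg=1)
print(f"Estimated trend: {slope:.4f} degrees per year")
```

The fitted slope recovers (approximately) the 0.01-degree trend hidden in the noisy data, which is exactly the kind of pattern extraction machine learning automates at far larger scale.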
In sum, deep learning is a specialised form of machine learning, which is itself a subset of AI. Together, these techniques allow a computer to learn from data and simulate human decision processes.
The state of AI
Since the creation of the AI field in 1956, the use of AI has become widespread in both civil and military settings. The rise of AI was supported by massive investments in the field, greater processing power, the availability of large datasets, and the breakthrough of deep learning in 2012. The amount of investment, in particular, should not be overlooked. Globally, private companies invested $77.5 billion in 2021, with the United States receiving $52 billion, or roughly two-thirds of the total. The increased use of digital technologies, such as Zoom and other online meeting platforms, during the Covid-19 pandemic is largely responsible for this tremendous surge, although AI investment had already been rising steadily in the preceding years. While the United States is the clear leader in this field, other countries consistently outperform it in specific areas. For example, some of the largest investments went into autonomous driving, a sector in which China controls more than half of the market, while the healthcare sector is dominated by the UK. The US's strength lies in its ability to attract enormous amounts of funding, mainly towards the numerous technology companies in the San Francisco Bay Area.
But in the realm of AI, research is just as important as investment. China has established itself as the leader here, as indicated by its dominance in the IEEE (the Institute of Electrical and Electronics Engineers), the organisation that establishes technical standards for AI and publishes research on it. Altogether, China is a significant competitor to the US in the AI field, as it aims to become the global leader in AI development by 2030.
While AI can currently predict trends and identify images and natural language, it cannot make interpretations based on this data. For example, it would be able to recognise a falling coffee cup, but it would be unable to infer that the cup is about to break. Such general reasoning ability, known as Artificial General Intelligence, is still far from reality, but research and investment are readily available to reach this goal. Additionally, there is no way to fully verify the products of an AI, because the algorithms it produces are generally not interpretable by humans. For example, we cannot fully test the AI behind a self-driving car because we cannot simulate every possible situation it may encounter. Therefore, we cannot determine whether it will always behave as expected.
The corporate world is one area where AI has achieved widespread use. In 2021, 56% of businesses said they were utilising AI, with emerging economies, including China, the Middle East, and northern Africa, accounting for the majority of the expansion. Interestingly, the adoption rate (the speed at which the public acquires and starts using new technologies) in the US is not the highest globally, despite the enormous investments the country receives. Instead, India and other Asia-Pacific countries lead in this regard.
Currently, AI can serve a host of applications, such as recognising and understanding speech, recommending products, recognising images and emotions, and predicting trends. This allows companies to automate mundane tasks and work more efficiently. AI is used by all of the major technology companies. Amazon can predict which products its customers want and when they want them, and can ship them to a nearby warehouse before customers even buy them. AI allows Siri, available on all Apple devices, to understand natural language. Waymo, one of Google's sister companies, offers fully autonomous taxis in multiple American cities. Additionally, Facebook can identify its users in images with incredible reliability.
Perhaps the most notable example of corporate AI is found across the Pacific Ocean, in China: TikTok, which makes AI a fundamental component of its business. In 2020, the Chinese social media app became the most downloaded app in the world, overtaking Facebook. What sets TikTok apart is its degree of reliance on AI, both for content producers and consumers. There is no need to express your preferences when you first start using the app. Rather, TikTok's AI decides which videos appear on your 'for you page'. Based on various indicators, the AI infers each user's preferences and constructs a hyper-specialised feed that makes the app incredibly addictive. These include expected indicators, such as how long a user watched a video and whether they liked or commented on it, but also more surprising ones, such as how quickly the user swiped a video away. Additionally, TikTok uses its AI to help creators reach as many people as possible, for example by suggesting hashtags that the AI predicts will attract the most viewers. Compared to other video services like YouTube and Netflix, TikTok's short video format means consumers watch far more individual pieces of content in the same amount of time. Coupled with the vast array of indicators, this generates enormous amounts of data for the company. As Connie Chan, a well-known technology investor, put it: “TikTok is the first mainstream consumer app where artificial intelligence IS the product. It's representative of a broader shift.” This shift was further confirmed when ByteDance, TikTok's parent company, announced it would sell access to its AI, allowing other companies to use this well-developed technology to push their own products.
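To make the idea of combining engagement signals concrete, here is a deliberately simplified sketch of a relevance score built from the indicators mentioned above. TikTok's actual ranking system is proprietary and learned from data; the field names and weights below are entirely hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """One user-video interaction (all fields are hypothetical)."""
    watch_fraction: float  # share of the video actually watched, 0 to 1
    liked: bool
    commented: bool
    swipe_speed: float     # how quickly the video was dismissed, 0 (slow) to 1 (instant)

def engagement_score(x: Interaction) -> float:
    """Toy relevance score: a weighted sum of engagement signals.

    The weights here are made up for illustration; a production
    recommender would learn them from billions of interactions.
    """
    score = 0.5 * x.watch_fraction
    score += 0.3 if x.liked else 0.0
    score += 0.2 if x.commented else 0.0
    score -= 0.4 * x.swipe_speed  # a quick swipe-away signals disinterest
    return score

# A fully watched, liked video should outscore one swiped away almost immediately.
engaged = Interaction(watch_fraction=1.0, liked=True, commented=False, swipe_speed=0.0)
bored = Interaction(watch_fraction=0.05, liked=False, commented=False, swipe_speed=0.9)
print(engagement_score(engaged), engagement_score(bored))
```

Even this toy version shows why the swipe-speed signal is valuable: it lets the system distinguish mild disinterest from active rejection without the user ever pressing a button.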
But while it sounds like AI is about to completely revolutionise the way we consume online content, we have to remember that current AI is not on the level of AI assistants like J.A.R.V.I.S. Even if TikTok can accurately predict users' preferences, the same system could also surface nefarious material if it fails to filter out malicious or illegal content. The fundamental limitations of AI thus remain significant in these applications. Furthermore, the way this data is obtained should also be scrutinised, as TikTok has been fined for collecting information on children who used the app. Lastly, the sheer amount of information collected for AI, and the impact its products have on people, requires adequate data protection regulation to ensure that adverse outcomes can be addressed and to bolster trust in AI.
While using AI to predict shopping behaviour is one thing, AI is also a clear example of a dual-use technology: one that can be used for both civil and military purposes. The limitations of AI may result in TikTok showing you a video you do not like, but they can have far graver consequences in military applications, where AI may be used to differentiate between friend and foe. The US government has certainly recognised both the promise and the threat. As the National Security Commission on AI wrote in 2021: “For the first time since World War II, America's technological predominance – the backbone of its economic and military power – is under threat.” The commission recognised the immense promise AI holds, but it also explained how China has both the resources and the will to overtake the US as the global AI leader. Additionally, “AI is deepening the threat posed by cyber attacks and disinformation campaigns that Russia, China, and others are using to infiltrate our society, steal our data, and interfere in our democracy.” The instances where AI has been used in this way so far reflect only a minute part of what it is truly capable of. These times are unparalleled, because “unlike in past technological developments, such as atomic weapons and stealth aircraft, the United States will not have a monopoly, or even a first-mover advantage, in the competition for military AI.”
The conventional, but nevertheless essential, military AI applications focus mainly on recognising threats. This includes analysing the enormous datasets compiled through surveillance for hostile activities, detecting deepfakes, streamlining the shipment of parts for maintenance, and analysing online network traffic to spot malicious activity.
However, while AI integration in the military will be vital, it will also raise significant ethical concerns, because deploying military AI to make warfare judgements would place enormous responsibility on that system. While AI's ability to extract insights from massive volumes of data could significantly improve military decision-making, the degree to which militaries trust such a system should be scrutinised. Remember that we cannot test all of the scenarios an AI might face, and we cannot fully grasp how it works. Not yet, at least.
One significant example of military AI use focuses on improving the capabilities of weapons that can identify and engage a target without human intervention. These Lethal Autonomous Weapons Systems (LAWS) have the potential to radically alter the battlefield, yet they cannot currently overcome AI's limitations. The ramifications of an AI failing to respond as expected on the battlefield, where civilians may be present, would be extremely severe. While these systems are bound by international law, as all weapons are, the legality of LAWS is hotly debated, and this debate already demonstrates the ethical implications of AI usage. In the face of international competition for AI dominance, reliability and safety may be neglected in favour of rapid development. One solution would be to require a human operator in the decision-making process, which would drastically reduce the aforementioned risks. Fortunately, the major actors in the military AI field, such as the US, Russia, and China, all agree on the importance of this requirement, although their specific interpretations of it remain unclear.
As we have demonstrated, AI is already crucial in both civil and military settings. Research and investment will continue to enable enormous progress, especially given the massive increases in funding seen over the past decade. Businesses are increasingly realising the value of AI, as demonstrated by the breakthrough success of TikTok. Likewise, military applications would allow for faster decision-making, but those decisions must be reliable to actually be useful. Although uncertainty prevents us from making precise predictions about AI's future, we can be confident that its enormous development potential will eventually lead to a computer that can at least match human intelligence, if not exceed it.
It remains to be seen how AI will change humanity. Since the dawn of the Stone Age, humans have built tools to make labour more efficient, and after thousands of years of this endless process we are at a crossroads. AI differs from all our previous tools in that it has the potential to surpass human intelligence. Maybe it will bring out the worst in us, maybe the best. However, we must not forget that it is we who feed it the data; it is we who teach the algorithm what we feel, what we desire, and what we think. Before we become afraid of this tool, we should first look at ourselves and our actions.
Curious to read part 1 of this series? Make sure to check out the article here.