Reimagining Defence Episode 5: AI – the faceless commander

This episode was written by Flt Lt James Kuht & edited by Lt Col Henry Willi. The thoughts are the authors' own and do not represent those of the Ministry of Defence. The episode can be found above, or on Spotify & iTunes. You can find out more by following us on Twitter, @ReDefPod.

===============================

If data is the fuel, AI is the engine that turns this fuel into power. AI has been around as a concept since the 1950s (Turing, 1950), but only recently – thanks to exponential improvements in computing, the advent of deep learning, and the convergence of technologies enabled by the cloud – are we starting to see that AI’s effect on the world will be profound. In this episode, we’ll explore how AI already affects our everyday lives through the humble Google search, what happened when an AI ran for mayor in Japan, and finally what an AI-led section attack might look like.

“AI is likely to be either the best or worst thing to happen to humanity”

Stephen Hawking

What is it?

Many people get wrapped around the axle on definitions of AI and ML. For the purposes of this episode, we define AI simply as the concept of a non-human entity, like a computer, performing tasks that typically require a human level of intellect. Perhaps learning how to play chess, then beating a human (Hsu 1995), or something more useful – like learning how to diagnose breast cancer from a mammogram, then outperforming doctors (McKinney, 2020). Machine learning, or ML, refers to the computing methods that actually underlie AI. So, for example, Google’s search possesses a degree of artificial intelligence due to the machine learning algorithms with which it is built (Google, 2020). One other term we want to touch on is “deep learning”, a powerful type of machine learning that has recently grown to prominence. Deep learning algorithms comprise neural networks that loosely mimic how the brain learns – these algorithms win most of the world’s top ML competitions, and indeed most of the use cases described in this episode use them.

How ML actually works is probably best explained by the simple example of your email spam filter. With traditional computer programming, if you wanted to code a spam filter you’d have to hard-code some rules into it – for example, if terms like “sex” or “congratulations you’re our lucky winner” appeared in the subject line, your spam filter might block the email. This may be partially effective, but spammers would quickly work out a way to game your filter simply by choosing different subject lines, and you’d be caught in an endless game of cat and mouse. You’d also likely block important emails by accident – especially if you’re working at a sexual health clinic.
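To make that concrete, a hard-coded filter might look something like this minimal sketch (the blocked phrases are invented, purely for illustration):

```python
# A toy, hand-coded spam filter -- rules written by a human, not learned.
BLOCKED_PHRASES = ["sex", "congratulations you're our lucky winner"]

def is_spam(subject: str) -> bool:
    """Flag an email as spam if its subject line contains any hard-coded phrase."""
    subject = subject.lower()
    return any(phrase in subject for phrase in BLOCKED_PHRASES)

print(is_spam("Congratulations you're our lucky winner!"))    # True
print(is_spam("Sexual health clinic - appointment reminder"))  # True, but a false positive
print(is_spam("Win a free prize, click now"))                  # False -- easily gamed
```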

Machine learning takes an entirely different approach – it doesn’t require you to hard-code rules. Instead, your program consists of a flexible machine learning algorithm which you train with millions of examples of which emails are spam and which aren’t. It learns its own rules for identifying spam, and then uses these to classify new emails as spam or not spam. There are a lot of reasons this approach is more effective. Firstly, it saves the programmer a lot of time, as they don’t have to come up with a long list of rules and continually update them – the machine learning algorithm does it automatically, learning from the examples you gave it. Secondly, the ML algorithm is not bounded by a human’s attention span, nor by a human’s ability to recognise subtle or seemingly illogical patterns – this means it can look at millions or even billions of examples of spam and learn weak patterns, such as very subtle clues in the text, the time it’s sent, or its format, which humans might not notice.
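And here is a minimal sketch of the ML approach, using the open-source scikit-learn library and a handful of toy examples standing in for the millions a real filter would be trained on:

```python
# A toy learned spam filter -- the model infers its own rules from labelled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training data: in practice this would be millions of labelled emails.
emails = [
    "congratulations you're our lucky winner, claim your prize now",
    "cheap pills, limited offer, click here",
    "meeting moved to 1400, agenda attached",
    "clinic results are ready, please call reception",
]
labels = ["spam", "spam", "not spam", "not spam"]

# Turn each email into word counts, then learn which words predict spam.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# Classify a new, unseen email.
print(model.predict(["you are our lucky winner, congratulations"]))  # -> ['spam']
```

The point is that nobody wrote a rule saying “lucky winner means spam” – the model worked that out from the labelled examples it was shown.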

This is a really simple example, but these principles – taking a flexible algorithm such as a neural network, training it on some examples, and then having it make accurate predictions when shown a new example – hold true for many other modalities of data, and can be applied to a remarkable number of problems. Let’s first take the Google search.

Good examples

On the face of it, the Google search is remarkably simple. Yet it’s a large part of the reason why we’re able to act smarter than our parents even if we have the same natural ability – because almost every bit of information we could ever desire is at our fingertips.

Just imagine what it was like for our parents back in high school – if they wanted to know how far away Mars is, or who Fermi was, they’d have to look in the library. We simply type into a Google search and have the answer provided in short form, video form, image form and essay form in milliseconds.

The behavioural economist Yan Chen quantified just how useful this is – in 2014 he gave people a list of questions, giving half of them access to the internet and half to a library (Chen 2014). Those with access to the internet won, unsurprisingly, by an average of 15 minutes per question. Considering that Google performs 3.5 billion searches per day, the ML-powered search engine is saving over a thousand lifetimes of wasted time every day.
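For the sceptics, here’s the back-of-the-envelope arithmetic behind that claim, taking the 15-minutes-per-question figure at face value (almost certainly an over-estimate for routine searches):

```python
# Rough arithmetic only -- the 15-minute saving and the 80-year lifetime
# are illustrative assumptions, not measured figures.
searches_per_day = 3.5e9
minutes_saved_per_search = 15

years_saved_per_day = searches_per_day * minutes_saved_per_search / 60 / 24 / 365
lifetimes_per_day = years_saved_per_day / 80  # assume an 80-year lifetime

print(f"~{years_saved_per_day:,.0f} person-years of searching saved per day")
print(f"~{lifetimes_per_day:,.0f} lifetimes' worth per day")
```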

But what does ML have to do with a search engine, you ask? Let’s lift the bonnet on a Google search.

First up, you type in your query – perhaps “What’s a good chat-up line?”. Google uses a natural language processing algorithm (machine learning that makes sense of text) to work out the meaning of your request, while also automatically correcting your spelling mistakes and detecting the language it should display its search results in.

Next it starts to find relevant webpages, by matching the key words of your search to sites that contain them. 

Now it needs to rank the relevant pages. One of its relevancy machine learning algorithms learns which pages people with the same question have clicked on and stayed on – crucial for working out whether they found the page useful for finding a satisfactory answer. Others assess whether a site is user-friendly and reputable, and whether it works well across a variety of screen sizes and loads quickly – crucial if you’re looking for that killer chat-up line on the tube on the way to your date, with brief moments of free wifi at each stop.
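As a toy illustration of the keyword-matching and ranking steps, here’s a minimal sketch using a TF-IDF bag-of-words model and cosine similarity – a deliberate simplification, since Google’s real ranking combines hundreds of signals, and the “web pages” below are invented:

```python
# Score hypothetical web pages against a query and rank them by similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pages = [
    "Ten chat-up lines that actually work on a first date",
    "How far away is Mars? Distance from Earth explained",
    "Enrico Fermi: biography of the physicist",
]

query = "what's a good chat-up line?"

# Represent pages and query as TF-IDF vectors, then score each page.
vectoriser = TfidfVectorizer()
page_vectors = vectoriser.fit_transform(pages)
query_vector = vectoriser.transform([query])
scores = cosine_similarity(query_vector, page_vectors)[0]

# Print pages from most to least relevant.
for score, page in sorted(zip(scores, pages), reverse=True):
    print(f"{score:.2f}  {page}")
```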

Hopefully that simple example illustrates how ML can make perfect sense of an extraordinarily large and complex dataset, in a fraction of a second, providing a resource which has transformed how half of the planet’s population accesses information every day.

Having covered two perhaps obvious examples of where ML touches everyone’s lives, let’s briefly talk about some data types other than text in emails and webpages, to appreciate just how ubiquitous ML’s effect on our lives will be.

Voice, and the language it conveys, is perhaps one of the fastest-growing domains of machine learning – with a growing number of households now home to Alexa or a competitor’s equivalent. These use machine learning algorithms to continuously improve both at understanding the exact words you say and at grasping the meaning of your message – a.k.a. natural language processing. Future applications of this technology will get a lot more impressive, and invasive. Google is currently working with Stanford on a natural language processing tool that frees doctors from writing notes (Bach 2017) – the consultation is recorded, and rather than simply transcribing it, the tool summarises the key information from the consult in a concise format and records that in the notes. Given that many clinicians spend 60% of their time simply entering patients’ notes, this could lead to an incredible redistribution of their effort towards caring for patients. Imagine such a tool being developed for automatically collating the minutes of your meetings, too, or logging queries to a call centre.
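For a flavour of the summarisation step, here’s a minimal sketch using the open-source Hugging Face transformers library as a generic stand-in – the Stanford/Google tool described above isn’t public, and the transcript below is invented:

```python
# Summarise an already-transcribed consultation with a general-purpose model
# (a purpose-built clinical model would be needed in reality).
from transformers import pipeline

transcript = (
    "Patient reports a persistent dry cough for three weeks, worse at night. "
    "No fever, no weight loss. Currently takes ramipril for blood pressure. "
    "Plan: trial switching antihypertensive and review in two weeks."
)

summariser = pipeline("summarization")
summary = summariser(transcript, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```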

iFlytek, a Chinese AI company, has created a world-leading natural language generation product, which uses machine learning trained on an individual’s voice not to understand it but to mimic it – both in the individual’s native language and even in other languages. They trained their algorithms on a number of recordings of Donald Trump, then used his voice to give the introductory speech at a Chinese AI conference – have a listen (https://www.youtube.com/watch?v=-ISWe9mGNiw). You probably found the second half difficult to understand, as Trump seamlessly transitioned into fluent Mandarin. Applications such as this will not only be useful to us on operations, as our interpreters are converted into a phone application that speaks with our own native voice and intonation, but also threaten severe disruption through information operations – imagine if a future Chief of the Defence Staff received a fake phone call from a convincing voice mock-up of the Prime Minister, giving her instructions that manipulated her actions.

Trump showing off his “fluent” Mandarin

During these videos, you can also see President Trump’s lips moving in time with the speech – showing that machine learning models can be combined to analyse data from multiple modalities and produce coherent output. Computer vision – the application of ML to images and video – is another potentially massive disruptor. The Chinese technology company Baidu has claimed to use computer vision to identify people with fevers passing through a Beijing railway station (Feng 2020), screening at mass scale for coronavirus so these individuals can be prevented from boarding public transport and infecting large numbers of people. Other examples include KFC in China allowing people to pay simply by having their face automatically recognised by cameras in the restaurant, with their account charged automatically (Marr, 2020). What happens when we can simply walk or drive onto base without a security check, because cameras equipped with computer vision automatically ID-check our faces, and infrared cameras automatically check that there are no human-shaped figures hiding in the boot or under the seats?
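The first step of any such system – finding the faces in a camera frame – can be sketched in a few lines with the open-source OpenCV library. This is detection only; recognising *who* the face belongs to would need a separate model and an enrolled database, and the image filename here is hypothetical:

```python
# Detect faces in a still image using OpenCV's bundled Haar cascade.
import cv2

# Load the pre-trained frontal-face detector shipped with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

# 'gate_camera.jpg' is an invented still from a camera at the base entrance.
frame = cv2.imread("gate_camera.jpg")
grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect faces and draw a box around each one.
faces = detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

print(f"{len(faces)} face(s) detected")
cv2.imwrite("gate_camera_annotated.jpg", frame)
```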

ML is starting to find it’s way into touch, but is relatively further behind in this area. Robotic surgery is becoming more and more widely utilised, and though a surgeon currently sits in control of the robot, it is quite obvious that the data collected from the many millions of robotic surgeries currently conducted around the world – the surgical movements made, and how these lead to the outcomes of a successful wound closure – is perfect training material for machine learning to eventually control robots to conduct many forms of surgery. Will that be so strange? We all want the most experienced surgeon to do our operation – how will we feel when robot has undertaken more surgeries than every other living surgeon within your country combined? Already google and Johnson and Johnson have a start-up in this space, verb surgery, and other competitors such as STAR – a robot that can suture soft tissue such as skin – already claim being able to outperform surgeons (Brown 2017). Within the military, this could of course be particularly useful in austere environments, or even in space.

Let’s end this section of current examples with some bizarre ones. What about the ML-written novel that made it to the final round of Japan’s national literary prize, showing AI’s creative side (Olewitz 2016), or Microsoft’s chatbot Xiaoice (pronounced “Shao-ice”), trained using neural networks to be friendly and keep a conversation going (Spencer, 2018)? She has now had over 30 billion conversations with 100 million humans and gives some eerily sage advice… when one user confided in Xiaoice that he thought his girlfriend was mad at him, she replied, “Are you more focussed on what tears things apart than what holds things together?”. Microsoft is even considering putting a curfew on Xiaoice.

Cautionary examples

This example leads us neatly into some other cautionary tales of ML.

First of all, as discussed in the data episode, with machine learning, the predictions made by an algorithm are directly related to the quality of the data. If the data contains biases – guess what, the ML is going to be biased.

→ Take, for example, medical ML applications developed in Europe and Silicon Valley, which might be trained to predict diseases based upon a number of features. Since the data used to train the algorithms is predominantly Caucasian, however, ethnic minorities may be underserved and the advice given to them of a lower quality.

→ Or legal-assistance machine learning algorithms that suggest bail amounts and sentence lengths, which some are concerned reflect the racial biases in the past court cases that form their training data (Paris Innovation Review, 2017) – a toy illustration of the underlying mechanism follows below.
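Both concerns boil down to the same mechanism: the model faithfully learns whatever its training data contains. Here is a minimal, entirely synthetic sketch of how under-representation alone degrades performance for the smaller group – the “groups” and “disease” are invented for illustration only:

```python
# Synthetic demonstration: a model trained mostly on group A performs
# noticeably worse on under-represented group B.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """One feature per 'patient'; the disease threshold differs per group."""
    x = rng.normal(shift, 1.0, size=(n, 1))
    y = (x[:, 0] + rng.normal(0, 0.5, n) > shift).astype(int)
    return x, y

# Group A dominates the training data; group B is barely represented.
xa, ya = make_group(950, shift=0.0)
xb, yb = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equally sized samples of each group.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    xt, yt = make_group(1000, shift)
    print(f"{name}: accuracy {model.score(xt, yt):.2%}")
```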

Secondly, the quantity of data needed for impressive performance is vast – so unless data is collected systematically and at scale, ML’s performance will be undermined. To give but one example, take Google’s work on identifying diabetic eye disease from a picture of the back of the eye (Gulshan 2016). Their neural network is now extremely accurate, outperforming experienced eye-specialist doctors, but took over 100,000 examples to train, each painstakingly labelled by seven eye specialists – an absolutely vast dataset. Appreciating this context, we have to realise the importance of collecting data systematically and storing it somewhere accessible – the cloud.
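To see the general point about data quantity (not the retinal study itself), here’s a minimal learning-curve sketch on a small built-in toy dataset, showing accuracy climbing as the training set grows:

```python
# Plotless learning curve: validation accuracy as a function of training-set size.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = load_digits(return_X_y=True)

sizes, _, val_scores = learning_curve(
    LogisticRegression(max_iter=5000),
    X, y,
    train_sizes=np.linspace(0.1, 1.0, 5),
    cv=5,
)

for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n:4d} training examples -> {score:.2%} validation accuracy")
```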

Thirdly, it’s worth nodding to the old adage that “if you’re holding a hammer, suddenly everything looks like a nail”. Some leaders have got their hands on ML as the next new thing, and suddenly ML seems like a wonder-tool. Solutions should not be built around ML just because it exists; it should be used where it’s required and can add value.

Avoiding these pitfalls will need appropriate leadership from a centralised specialist function that lays down the foundations for ML by building safe and secure infrastructure and defining best practices and guidelines. This will lower the barrier to entry for our personnel and enable them to build higher-quality products – an approach we outlined in the previous automation episode, referred to by some as the “base layer” approach.

5 years’ time… if we get it right

Having now explored some current examples, let’s dive into a few cases where ML might utterly revolutionise the way our military acts, first taking some inspiration from NASA and a British start-up.

Forest fires can be absolutely devastating to individuals’ livelihoods and contribute significantly to global warming too – spotting them early and taking quick action to prevent their spread is critical. Keeping eyes on the ground everywhere is simply not possible, so NASA trained neural networks to identify and map forest fires from satellite imagery – with an accuracy of 98% (MacKinnon 2019). This means orbiting satellites can provide real-time, eyes-on monitoring of the globe for early detection of forest fires, and alert the fire services instantly if one is spotted – the ML never sleeps and rarely misses a beat.
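For a flavour of what such a classifier looks like in code, here’s a minimal sketch of a small convolutional network that labels satellite tiles as fire / no fire – a generic Keras stand-in, not NASA’s actual MODIS pipeline, and the 64×64 tile size is an assumption:

```python
# A tiny CNN that outputs the probability a satellite tile contains fire.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),          # hypothetical 64x64 RGB tiles
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # probability of "fire"
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# In practice: model.fit(tiles, labels, ...) on a large labelled dataset,
# then run inference on fresh imagery as it arrives from orbit.
```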

Even with the emergency services having put out the fire rapidly thanks to the early warning, large areas may still have been devastated by the blaze and subsequently require replanting. British start-up Dendra have built an ML-centred approach to this problem (https://www.dendra.io/services) – they map the area devastated by the fire, then use another ML algorithm to plot optimal routes for autonomous drones, which fly through the deforested areas shooting pods full of seeds and nutrients into the ground at optimal spacing. A single human operator can control 6 drones and replant up to 100,000 trees per day.
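The route-planning step can be sketched with something as simple as a greedy nearest-neighbour tour over the seeding points – Dendra’s real planner is undoubtedly far more sophisticated, and the coordinates below are invented:

```python
# Greedy nearest-neighbour tour over hypothetical seed-pod drop points.
import math

def nearest_neighbour_route(points, start=(0.0, 0.0)):
    """Visit every seeding point, always flying to the closest unvisited one."""
    remaining = list(points)
    route, current = [], start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

# Invented grid of drop locations across a burnt area (coordinates in metres).
drop_points = [(x, y) for x in range(0, 50, 10) for y in range(0, 50, 10)]
print(nearest_neighbour_route(drop_points)[:5])
```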

What could the military train neural networks to identify with real-time satellite imagery and autonomous drones? Use the satellite imagery to track refugee crises in real time and autonomously drop off vital medications and supplies to those who need them most? Or perhaps use the imagery to identify terrorist training camps and automatically deploy covert drones to sit silently above them and gather further intelligence?

Having thought about how ML might be used to automate intelligence gathering, or an action that might otherwise take humans some time, let’s also face up to the idea that ML might one day prove to be a useful commander. This might sound crazy, but an AI has already run for mayor of a Japanese city – gaining around 4,000 votes and finishing in third place in 2018 (RT International, 2018).

Advert for the Japanese AI that ran for mayor

Let’s imagine the command and control of a number of sections carrying out an attack on a number of enemy positions simultaneously. Tactics need to be devised, then adapted quickly to what occurs on the ground – often a highly complex combination of actions. Get the decisions wrong, and lives are lost.

There’s already no doubt that ML can outperform humans at complex tactical board games such as Go, which has roughly 10¹⁷⁰ possible board positions – a 1 followed by 170 zeros (Silver 2017). Which raises the question – how many potential moves does a section attack have?

Making the case even more compelling: an ML commander would never get tired or let emotion come into things; it has the computational power to train itself on many thousands more previous skirmishes than a military commander could ever hope to study; it can think unconstrained by culture and uniform training; it can run simulations of what might happen; and it can provide all but the most complex decisions in microseconds.
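At its crudest, the “simulate a thousand times and pick the best plan” idea looks like the toy Monte Carlo loop below – the plans, probabilities and success model are all invented for illustration, nothing more:

```python
# Toy Monte Carlo evaluation of invented candidate plans.
import random

def simulate(plan: dict) -> bool:
    """A made-up stochastic model of whether a plan succeeds on the ground."""
    return random.random() < plan["base_success"] * (1 - plan["risk"])

candidate_plans = [
    {"name": "left flanking", "base_success": 0.7, "risk": 0.2},
    {"name": "right flanking", "base_success": 0.6, "risk": 0.1},
    {"name": "frontal assault", "base_success": 0.8, "risk": 0.5},
]

# Run each plan a thousand times and keep its simulated success rate.
N = 1000
for plan in candidate_plans:
    plan["win_rate"] = sum(simulate(plan) for _ in range(N)) / N

best = max(candidate_plans, key=lambda p: p["win_rate"])
print(f"best plan: {best['name']} ({best['win_rate']:.0%} simulated success)")
```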

This concept, which some refer to as “algorithmic warfare”, is exciting but challenging – not only from an ethical standpoint, but also from a practical one.

Some may argue that no ML will ever replicate the situational awareness of a military commander at the tactical or strategic level – time will tell. What is important to note is that none of us has a sixth sense – commanders, doctors and investors alike. Making better sense of data is what leads to better decisions, and as ML continues to progress exponentially and meets the exponentially increasing quality and quantity of data provided by IoT sensors, as a soldier I’d want the decisions that reach me to arrive in microseconds, to have been simulated a thousand times whilst I’ve been waiting, and to have taken in many magnitudes more information than my commander ever could. That being said, we also have to appreciate that ML currently excels at bounded tasks but simply can’t match a human’s judgement across domains. Nor can it interact with humans with compassion or empathy. Realising our unique human abilities and limits, then augmenting them appropriately with ML, is the immediate challenge for our leaders over the coming years.

To conclude, I suspect many of you have been shocked by some of the examples in this episode, and by how far ahead of Defence the private sector is racing in some cases. Getting to grips with ML needs to be on every military leader’s radar – because, as Henry pointed out in the first episode, digital transformation is 90% about people and only 10% about technology. We’re intrigued as to what thoughts this episode has sparked in you – should ML form a large part of our ongoing education courses like staff college? Should digital literacy and mastery be part of promotion criteria? Should soldiers be encouraged to take secondments in private companies at the cutting edge of ML so they can bring the skills back to the military?

=================================================

References:

Bach, B., “Stanford-Google Digital-Scribe Pilot Study To Be Launched” in Scope. 2017, Stanford Medicine.

Brown A. “Smooth Operator: Robot could transform soft-tissue surgery,” Alliance of Advanced Biomedical Engineering 2017 (https://aabme.asme.org/posts/smooth-operator-robot-could-transform-soft-tissue-surgery)

Chen, Y., Jeon, G. and Kim, Y., 2014. A day without a search engine: an experimental study of online and offline searches. Experimental Economics, Springer; Economic Science Association, 17(4), pp. 512–536.

Feng, C., 2020. AI Firms Deploy Fever Detection Systems In Beijing To Fight Outbreak. [online] South China Morning Post. Available at: <https://www.scmp.com/tech/policy/article/3049215/ai-firms-deploy-fever-detection-systems-beijing-help-fight-coronavirus> [Accessed 17 June 2020].

Google.com. 2020. How Google Search Works | Search Algorithms. [online] Available at: <https://www.google.com/intl/en_uk/search/howsearchworks/algorithms/> [Accessed 17 June 2020].

Gulshan, V., Peng, L., Coram, M., Stumpe, M.C., Wu, D., Narayanaswamy, A., Venugopalan, S., Widner, K., Madams, T., Cuadros, J. and Kim, R., 2016. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. Jama, 316(22), pp.2402-2410.

Hsu, Feng-hsiung; Campbell, Murray (1995). “Deep Blue System Overview” (PDF). Proceedings of the 9th International Conference on Supercomputing. ACM. pp. 240–244. Archived from the original on 17 October 2018.

MacKinnon, J., 2019. Classification Of Wildfires From MODIS Data Using Neural Networks. [online] Ti.arc.nasa.gov. Available at: <https://ti.arc.nasa.gov/m/groups/machinelearningworkshop2017/MLW2017_slides/presentationsPDF/James-MacKinnon.pdf> [Accessed 17 June 2020].

Marr, B., 2020. The Amazing Ways Chinese Face Recognition Company Megvii (Face++) Uses AI And Machine Vision. [online] Forbes. Available at: <https://www.forbes.com/sites/bernardmarr/2019/05/24/the-amazing-ways-chinese-face-recognition-company-megvii-face-uses-ai-and-machine-vision/#64bd985712c3> [Accessed 17 June 2020].

McKinney, S. M. et al. International evaluation of an AI system for breast cancer screening. Nature 577, 89–94 (2020)

Olewitz, C., 2016. “A Japanese AI program just wrote a short novel, and it almost won a literary prize.” Digital Trends.

Parisinnovationreview.com. 2020. Predictive Justice: When Algorithms Pervade The Law – Paris Innovation Review. [online] Available at: <http://parisinnovationreview.com/articles-en/predictive-justice-when-algorithms-pervade-the-law> [Accessed 17 June 2020].

RT International. 2018. Robot’s Mayoral Race: AI Candidate Gets Thousands Of Votes In Japanese City. [online] Available at: <https://www.rt.com/news/424402-robot-mayor-japan-tama/> [Accessed 17 June 2020].

Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A. and Chen, Y., 2017. Mastering the game of go without human knowledge. Nature, 550(7676), pp.354-359.

Spencer, G., 2018. Much More Than A Chatbot: China’s Xiaoice Mixes AI With Emotions And Wins Over Millions Of Fans. [online] Microsoft Asia News Center. Available at: <https://news.microsoft.com/apac/features/much-more-than-a-chatbot-chinas-xiaoice-mixes-ai-with-emotions-and-wins-over-millions-of-fans/> [Accessed 17 June 2020].

Turing, A.M., 1950. Computing Machinery and Intelligence. Mind, Volume LIX, Issue 236, pp. 433–460. https://doi.org/10.1093/mind/LIX.236.433