What’s The Big Deal? The Controversy on Google’s AI and Pentagon Drones

The biggest story in drone news over the past 24 hours has been the Pentagon's use of a Google AI to train drones. But what is the AI in question, what is the Pentagon doing with it, and why is it so controversial?

What exactly is this Google AI?

First of all, it’s not an AI as most people would understand it – it’s machine learning software known as TensorFlow.

The term artificial intelligence implies a system that can learn to do something without being explicitly told how: for example, showing the system thousands of photos of cats (labelled as such), then showing it a photo of yet another cat. The neural network aggregates evidence from those labelled examples, adjusting its internal weights until its confidence crosses a threshold and it can recognise that this next photo is also a cat. Google's preferred term for this is 'deep learning', and it was the Google Brain team's deep neural network that famously learned to recognise a cat.
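
Google Brain's 2012 system isn't something you can download, but the "confidence crosses a threshold" idea is easy to demonstrate today with an off-the-shelf network that has already been trained on millions of labelled photos. A minimal sketch in Python, using TensorFlow's bundled MobileNetV2 model purely as an illustration (the file name is an assumption, and this is not Google Brain's actual system):

```python
import numpy as np
import tensorflow as tf

# Load a network that has already been trained on millions of labelled photos.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Prepare one new photo the network has never seen (file name is hypothetical).
img = tf.keras.utils.load_img("maybe_a_cat.jpg", target_size=(224, 224))
x = tf.keras.applications.mobilenet_v2.preprocess_input(
    np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))

# The network outputs a confidence for every class it knows; "recognising a cat"
# amounts to one of the cat classes (ImageNet has several specific breeds rather
# than one generic "cat") crossing a confidence threshold.
preds = tf.keras.applications.mobilenet_v2.decode_predictions(model.predict(x), top=3)[0]
for _, label, confidence in preds:
    print(label, round(float(confidence), 3))
```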

Peter Norvig, Research Director at Google, explained in 2012 the process they followed, taking 10 million frames from YouTube and letting the computer find patterns in them on its own. Of course, as he says, “well this is YouTube so there will be cats.”

Machine learning, on the other hand, has been around for over 50 years, so it's nothing new. Following Alan Turing's "Turing Test" in 1950, Arthur Samuel created the first computer learning program, which played checkers, at IBM in 1952. Five years later, Frank Rosenblatt designed the first neural network, known as the 'perceptron', to simulate human thought. Other algorithms followed, but all were based on step-by-step approaches that involved explicitly teaching the computer one task after another. In the 1990s the field shifted from a knowledge-driven approach to a data-driven approach; however, computing chips were still relatively slow.
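
For readers curious what that first neural network actually did, here is a minimal sketch of the perceptron learning rule in modern Python. It is illustrative only, not Rosenblatt's original implementation, and the toy data is made up:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """X: (n_samples, n_features) inputs; y: labels of +1 or -1."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:   # misclassified example
                w += lr * yi * xi                # nudge the boundary towards it
                b += lr * yi
    return w, b

# Toy data: points above the line x0 + x1 = 1 are labelled +1, below are -1.
X = np.array([[0.0, 0.0], [1.0, 1.0], [0.9, 1.0], [0.1, 0.2]])
y = np.array([-1, 1, 1, -1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))   # should reproduce the labels [-1, 1, 1, -1]
```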

However, great leaps forward in computing power mean machines can now crunch data far faster than ever before. Step forward to late 2015, when Google made TensorFlow open source, meaning anyone who wants to use it can. TensorFlow is the technology that allows Google to add features such as speech recognition and object detection to its software. It is well suited to building systems that have to compute over huge amounts of data, such as artificial neural networks, writes MIT alumnus Erik T. Mueller, an expert in artificial intelligence, but it is not so good at independently learning to recognise cats.

So how does TensorFlow work?
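
At its core, TensorFlow is a library for describing numerical computations over multi-dimensional arrays ('tensors') and for working out gradients automatically, which is the machinery that lets a neural network adjust its weights during training. A minimal sketch of that idea, using the modern TensorFlow 2.x eager API (which postdates this article) purely as an illustration:

```python
import tensorflow as tf

# Tensors are just multi-dimensional arrays; TensorFlow runs the maths on CPU,
# GPU or TPU without the code changing.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
w = tf.Variable([[0.5], [-0.5]])

# GradientTape records operations so their gradients can be computed automatically,
# which is what lets a network "learn" by adjusting its weights.
with tf.GradientTape() as tape:
    y = tf.matmul(x, w)            # a simple linear layer: y = x @ w
    loss = tf.reduce_mean(y ** 2)

grad = tape.gradient(loss, w)      # d(loss)/d(w), computed for us
w.assign_sub(0.1 * grad)           # one tiny "learning" step
print(loss.numpy(), grad.numpy())
```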

TensorFlow is at its heart machine learning software, but it can be used in tandem with deep learning. Shortly after TensorFlow went open source, a Japanese cucumber farmer trained the software to sort cucumbers, a story related very nicely on Google Cloud's blog. Makoto Koike, a former embedded systems designer from the automobile industry, started helping out at his parents' cucumber farm and, surprisingly enough, didn't find sorting cucumbers all that exhilarating. It took him three months to photograph cucumbers and teach the computer to recognise the best ones, and he succeeded to a certain extent in automating the task.

“Google had just open sourced TensorFlow, so I started trying it out with images of my cucumbers,” Makoto told Google Cloud. “This was the first time I tried out machine learning or deep learning technology, and right away got much higher accuracy than I expected. That gave me the confidence that it could solve my problem.”

Here’s his cucumber sorter:

However, the system has its limitations. Whereas Makoto used a standard Windows PC and 7,000 images to train the cucumber sorter, most such systems require a great deal more data and computing power to operate effectively. Because of the relatively weak capabilities of Makoto's home computer, images had to be reduced to 80×80 pixels, losing a lot of important detail.

Makoto explained, “When I did a validation with the test images, the recognition accuracy exceeded 95%. But if you apply the system with real use cases, the accuracy drops down to about 70%. I suspect the neural network model has the issue of “overfitting” (the phenomenon in neural network where the model is trained to fit only to the small training dataset) because of the insufficient number of training images.”
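
Makoto's own training code isn't published in this form, but the failure mode he describes is easy to reproduce in outline with TensorFlow's Keras API: train a small network on a few thousand 80×80 images, then compare accuracy on the training data with accuracy on held-out data. A hedged sketch (the directory layout, class structure and model size are all assumptions):

```python
import tensorflow as tf

IMG_SIZE = (80, 80)   # Makoto had to shrink photos this far to fit his PC

# Assumed layout: cucumbers/<grade>/*.jpg, split 80/20 into train and validation.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "cucumbers", validation_split=0.2, subset="training",
    seed=1, image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "cucumbers", validation_split=0.2, subset="validation",
    seed=1, image_size=IMG_SIZE, batch_size=32)

num_classes = len(train_ds.class_names)
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
history = model.fit(train_ds, validation_data=val_ds, epochs=10)

# A large gap between these two numbers is the "overfitting" Makoto describes:
# the model has memorised the small training set rather than learned the task.
print("train accuracy:", history.history["accuracy"][-1])
print("val accuracy:  ", history.history["val_accuracy"][-1])
```

Even then, as Makoto found, a model can score well on a held-out split drawn from the same small dataset and still stumble on real-world photos taken under different conditions.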

But the results are coming faster, and with them a return on investment, which is what funds the hardware needed to get this technology working accurately. As Norvig explained to Toby Walsh, Scientia Professor of Artificial Intelligence at Australia's UNSW, last year: “It's a funding issue…I think we're going to continue to see funding because this time around we're seeing immediate returns.”

What is the Pentagon doing with TensorFlow and drones?

In the story that broke yesterday, Gizmodo outlined the use of TensorFlow in the US Defense Department's Project Maven. In April 2017, then-Deputy Defense Secretary Bob Work announced the establishment of an Algorithmic Warfare Cross-Functional Team to work on something he called Project Maven.

“As numerous studies have made clear, the Department of Defense must integrate artificial intelligence and machine learning more effectively across operations to maintain advantages over increasingly capable adversaries and competitors,” Work wrote.

At the Defense One Tech Summit in July last year, Defense Department officials discussed the use of advanced computer algorithms to recognise objects in imagery, among other topics. Marine Corps Col. Drew Cukor, who presented at the event, said, “People and computers will work symbiotically to increase the ability of weapon systems to detect objects. Eventually we hope that one analyst will be able to do twice as much work, potentially three times as much, as they're doing now. That's our goal.”

Drone strikes by the US government have been well catalogued, with campaigns conducted in many countries, including Syria, since the 9/11 attacks on New York in 2001. The Guardian reported that an estimated 15,000 civilians died as a result of weapons strikes in 2017. The BBC reported that US Central Command, which leads the campaigns against IS in Syria and Iraq, admitted to nearly 500 unintentional civilian deaths from coalition airstrikes in June 2017 alone.

US Drone Strike Graffiti

This is where TensorFlow comes in, tasked first with helping the Pentagon trawl through the deluge of video imagery collected daily by Defense Department drones. There is so much footage to process that human analysts are overwhelmed, and a backlog has built up in analysing the data.
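
Project Maven's actual pipeline has not been published, but the workflow described here, running a recogniser over hours of footage and surfacing only the frames a human analyst should look at, can be sketched in a few lines of Python. Everything below (the file name, the sampling rate, the watch-list of labels, the use of an off-the-shelf ImageNet classifier) is an assumption for illustration only:

```python
import cv2                      # OpenCV, for reading video frames
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")
decode = tf.keras.applications.mobilenet_v2.decode_predictions
INTERESTING = {"jeep", "pickup", "tank", "warplane"}   # assumed watch-list of labels

cap = cv2.VideoCapture("surveillance_clip.mp4")        # hypothetical footage
flagged = []
frame_no = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_no += 1
    if frame_no % 30:           # sample roughly one frame per second of 30 fps video
        continue
    rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
    x = tf.keras.applications.mobilenet_v2.preprocess_input(
        np.expand_dims(rgb.astype("float32"), axis=0))
    for _, label, score in decode(model.predict(x), top=5)[0]:
        if label in INTERESTING and score > 0.5:
            flagged.append((frame_no, label, float(score)))

cap.release()
print(f"{len(flagged)} frames flagged for human review")
```

The point of the design is that the software only prioritises what a person looks at; the judgement stays with the analyst, which is also how Google characterises the project below.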

In a report titled ‘Artificial Intelligence and National Security’, written on behalf of the U.S. Intelligence Advanced Research Projects Activity (IARPA), co-author Greg Allen wrote that “Before Maven, nobody in the department had a clue how to properly buy, field, and implement AI.”

Is it possible that using TensorFlow to improve the processing and object recognition of drone footage could reduce civilian deaths through more accurate aerial strikes? As one Twitter user put it, “Hopefully the systems will be smarter than human operators, who can’t seem to tell the difference between an Afghan wedding and a terrorist encampment.”

Jeff Dean, a long-time engineer at Google, when questioned about the moral implications of Google’s machine learning software, said that this was not “actionable information”.

Why are people so upset?

Through its development of both AI and machine learning, Google has gone from being known as a search engine company to being a worldwide technological phenomenon. Gizmodo reports that some Google staffers are outraged that the company’s technology is being deployed by the military, while others say it raises ethical questions about the development of AI and machine learning software in the first place (hardly a new concern).

However, some reports suggest that Google has in fact always had a relationship with the US intelligence community: the CIA and NSA recognised in the 1990s that much of the work in the race for superior intelligence technology could be done outside the public sector.

“We have long worked with government agencies to provide technology solutions. This specific project is a pilot with the Department of Defence, to provide open source TensorFlow APIs that can assist in object recognition on unclassified data,” a Google spokesperson told Gizmodo. “The technology flags images for human review, and is for non-offensive uses only. Military use of machine learning naturally raises valid concerns. We’re actively discussing this important topic internally and with others as we continue to develop policies and safeguards around the development and use of our machine learning technologies.”

Anyway, back to YouTube, cats and cucumbers:

