“The robots are going to take over the world.” Not quite; the real danger is that robots could become the means by which the wrong entities take over. Throughout history, we have seen the products we develop misused: terrorist bomb attacks use dynamite that was intended for mining raw materials, warfare equipment and artillery are based on vehicles created for civilian use, and airplanes used in warfare were originally developed for quick, convenient transport and travel.
With the global emphasis on autonomous drones, or Unmanned Aerial Vehicles (UAVs), a question naturally arises: “How can drones be exploited?” The question worth thinking through in advance is its counterpart: “How do we prevent drones from being exploited?”
These questions, and many more, are discussed and analyzed in the publication Ethical and Moral Aspects in UAV and Artificial Intelligence. The paper was published at S2D2, which stands for Security, Safety, Defence, Disaster Management and Recovery.
The paper discusses the various issues associated with UAV operations, and the moral and ethical aspects coupled with them.
In 2017, China was monitored by 180 million CCTV cameras; by 2020, the number of connected cameras in China is expected to reach 450 million. They will control traffic and fine violations of the law, identify bank account holders through biometric tools, enable ATM transactions, and identify airplane passengers at airport gates. The Internet of Things will help make the environment “intelligent”, enabling direct interaction between objects, including smartphones and human wearables.
The point is that technology can itself control or regulate how technology is operated and used.
Now, on to drones.
One of the first drones to be developed was the Lockheed D-21, in the 1960s, which operated as a companion to the manned Lockheed SR-71 Blackbird. So it can be said that drones were, from the very beginning, developed for warfare. However, their usage has since expanded into vast fields, extending all the way to architecture: the University of California, Berkeley developed drones capable of scanning buildings and creating their 3D models using an image-processing technique called photogrammetry. (Read our article for more information on the topic: UAV-based 3D Photogrammetry for post-Earthquake Studies on Seismic damaged Cities.)
Nowadays, drones are highly customizable thanks to commonly installed, configurable controllers, and many app developers and programmers, often college students, program drones to carry out different tasks or achieve assigned missions. The field of civil applications is vast: aerial photography and video, aerial crop surveys, real-time intervention in human-caused or natural disasters, search and rescue, coordinating humanitarian aid, counting wildlife, detecting illegal hunting, monitoring biodiversity, forest-fire detection, large-accident investigation and monitoring, delivering medical supplies, inspecting power lines and pipelines, crowd monitoring, and direct intervention in difficult or dangerous situations.
While it is true that, given their public appeal, drones have been used for advertisement and entertainment purposes (restaurants serving food via drones, or ice-cream sellers delivering ice cream on beaches), the truth remains that drones have recently found practical uses: they are employed in agriculture, in paramedic services, and for traffic control.
Ethical and Moral Aspects
The number of sectors taking advantage of UAVs is rapidly increasing, mainly owing to the added value they offer. A related, recently developed and tested family of unmanned vehicles is attracting public attention, and drivers' attention in particular: autonomous cars, lorries, and buses. If the risk that a flying or floating drone can be hacked concerns us, as does the temporary lack of specific legislation, what about the ethical and moral concerns, not to neglect the legal ones, surrounding autonomous road vehicles such as cars, lorries, and buses?
Before we discuss that, let us look at some key definitions and concepts:
AI: The evolution of Artificial Intelligence has generated two main branches, “strong AI” and “weak AI”. On one side we find strong AI, a broad-spectrum artificial intelligence designed to face a wide range of problems; on the side of weak AI, also known as “narrow AI”, we find vertical solutions based on a well-defined domain of knowledge, as happens, for instance, in expert systems or automatic car-driving systems.
ML: Machine learning (ML) is a subset of AI that provides solutions to complex problems; a typical field of application is one that cannot be approached with explicit algorithms and hand-written rules, so the system instead learns the rules from examples.
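To make the idea of "learning from examples rather than explicit programming" concrete, here is a minimal, illustrative sketch (not from the paper): a tiny perceptron that is never told the rule for logical OR, but recovers it purely from labelled samples.

```python
# Minimal illustration of machine learning: a perceptron learns the
# logical OR function from labelled examples alone -- no hand-coded rule.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Adjust weights from (inputs, label) pairs using the perceptron rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # 0 when correct; +/-1 when wrong
            w[0] += lr * err * x1       # nudge weights toward the label
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Training data: the OR truth table, given purely as examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 1, 1, 1]
```

The same principle, scaled up to millions of parameters and sensor readings instead of two binary inputs, is what lets an autonomous vehicle acquire driving behaviour through experimentation and simulation rather than exhaustive hand-written rules.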
The reason for covering the above definitions is that these concepts are the foundation of modern autonomous vehicles, which learn through experimentation and simulation. The major concern at hand is how we make sure that a decision made by an AI is the right one; an autonomous car, for example, is supposed to prioritize human safety over damage to itself.
Experiment based on AI and ML: A recent test of an autonomous car again raised some of the questions already expressed at the ICCC 2017 conference held in New Delhi. The driver set the car to auto-pilot mode and drove along the streets of the city, and the car caused an accident that killed a woman crossing with a bicycle. Experimentation with automatic driving is allowed in that area, so the focus moves to possible bugs in the system, or to the usual combination of minor problems that together cause a disaster. As a consequence, before any mass diffusion of such vehicles, we must be aware of some aspects:
- The risk of cyber-attacks that may turn everyday commodities like cars into “weapons”
- The “programmed” behaviour of cars in “risky” scenarios: the car should be programmed to make better decisions than people would.
- Security standards and harmonised “behaviours” together with an appropriate legal framework
Citation: Ethical and Moral aspects in UAV and artificial intelligence, Alfredo M., S2D2, Politecnico di Milano | https://re.public.polimi.it/