Ethics, or rules as a template for glitch-free human-machine interaction, whether in artificial intelligence or automated vehicles, has long been a subject of debate and discussion. An editorial piece by Jamie Smith of Johns Hopkins University details how scholars at the university's Berman Institute of Bioethics have begun exploring contemporary ethical issues that cut across academic disciplines and arise in a wide range of real-world circumstances. To support these efforts, Johns Hopkins created the Exploration of Practical Ethics program, which provides grants for faculty to undertake research in interdisciplinary fields of ethics.
The program awarded nine grants in 2016 to projects examining issues relating to criminal justice, higher education, economics, and environmentalism, among others. The faculty selected for the 2017-18 rounds of Practical Ethics grants will present their work at a symposium on Wednesday, Nov. 14, at 2 p.m. in the Glass Pavilion on the university’s Homewood campus.
Robots will soon pervade our daily lives as surrogates, assistants, and companions. Hence a framework of laws or ethics governing their functioning needs to be implemented as an actionable value system that can be analyzed, judged, and modified by humans. As robots are granted greater autonomy, it is imperative that they be endowed with ethical reasoning commensurate with their ability to both benefit and harm humanity. As far back as 1942, Isaac Asimov stipulated his Three Laws of Robotics to govern robot behaviour.
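To make the idea of an "actionable value system" concrete, here is a minimal, purely illustrative sketch of how priority-ordered rules loosely inspired by Asimov's Three Laws could veto or permit a robot's proposed actions while keeping a human-readable audit log. The `Action` fields, the `EthicsFramework` class, and the rule ordering are all hypothetical simplifications, not the framework the Johns Hopkins team is building.

```python
# Illustrative sketch only: a toy rule-based action filter, loosely modelled
# on Asimov's Three Laws. All names and fields here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    harms_human: bool = False   # would this action injure a human?
    obeys_order: bool = True    # does it comply with a human order?
    risks_robot: bool = False   # does it needlessly endanger the robot?

@dataclass
class EthicsFramework:
    """Rules are checked in priority order; the first violation vetoes the action.
    Every decision is logged so humans can analyze, judge, and modify the rules."""
    log: list = field(default_factory=list)

    def evaluate(self, action: Action) -> bool:
        # Law 1: a robot may not injure a human being.
        if action.harms_human:
            self.log.append((action.name, "vetoed: would harm a human"))
            return False
        # Law 2: a robot must obey human orders, unless that conflicts with Law 1.
        if not action.obeys_order:
            self.log.append((action.name, "vetoed: disobeys a human order"))
            return False
        # Law 3: a robot must protect itself, unless that conflicts with Laws 1-2.
        if action.risks_robot:
            self.log.append((action.name, "vetoed: needless self-endangerment"))
            return False
        self.log.append((action.name, "permitted"))
        return True

framework = EthicsFramework()
print(framework.evaluate(Action("hand object to patient")))       # True
print(framework.evaluate(Action("swing arm", harms_human=True)))  # False
```

The point of the sketch is the audit trail: because each veto is recorded with its reason, the value system stays open to the human analysis, judgment, and modification the project calls for.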
This current project, led by ethics and robotics experts from the Berman Institute and the Johns Hopkins Applied Physics Lab, aims to develop an ethical framework for robots, implement the framework by extending existing robot capabilities, and assess the framework’s impact on robot behaviour.
The team will use APL’s Robo Sally, a hyper-dexterous robot with Modular Prosthetic Limbs and human-like manipulation capabilities, to derive design guidelines and best practices to implement practical ethics in next-generation robotic systems.
A pertinent current example in the autonomous segment is the development of autonomous vehicles (AVs), which promise a future of effortless mobility. But what if there are unanticipated, negative consequences? In the realm of AVs especially, some consequences can be irreversible, so a wait-and-watch approach would be irresponsible.
A team of investigators from the Berman Institute, the Bloomberg School of Public Health's Centre for Injury Research and Policy, and the Whiting School of Engineering's Department of Civil Engineering will examine pathways of AV testing and deployment that could widen disparities and lower the quality of life for certain segments of society. The team began with a systematic exploration of possible negative outcomes, engaging multiple stakeholders, including those who may be most affected by them. It then developed recommendations for the sponsors and implementers of AV trials and testing programs, intended to enable stakeholders to voice their concerns and influence the design of these trials.