Thinking Through the Ethics of New Tech…Before There’s a Problem

Beena Ammanath

Historically, it’s been a matter of trial and error. There’s a better way.

There’s a familiar pattern when a new technology is introduced: It grows rapidly, comes to permeate our lives, and only then does society begin to see and address the problems it creates. But is it possible to head off these problems before they arise? While companies can’t predict the future, they can adopt a sound framework that will help them prepare for and respond to unexpected impacts. First, when rolling out new tech, it’s vital to pause and brainstorm potential risks, consider negative outcomes, and imagine unintended consequences. Second, it can also be clarifying to ask, early on, who would be accountable if an organization has to answer for the unintended or negative consequences of its new technology, whether that’s testifying before Congress, appearing in court, or answering questions from the media. Third, appoint a chief technology ethics officer.

We all want the technology in our lives to fulfill its promise — to delight us more than it scares us, to help much more than it harms. We also know that every new technology needs to earn our trust. Too often the pattern goes like this: A technology is introduced, grows rapidly, comes to permeate our lives, and only then does society begin to see and address any problems it might create.

  • Beena Ammanath is the Executive Director of the global Deloitte AI Institute, author of the book “Trustworthy AI,” founder of the non-profit Humans For AI, and leads Trustworthy and Ethical Tech for Deloitte. She is an award-winning senior executive with extensive global experience in AI and digital transformation, spanning e-commerce, finance, marketing, telecom, retail, software products, services, and industrial domains, with companies such as HPE, GE, Thomson Reuters, British Telecom, Bank of America, and E*TRADE.

Trailblazing initiative marries ethics, tech

Christina Pazzanese

Harvard Staff Writer

Computer science, philosophy faculty ask students to consider how systems affect society

First in a four-part series that taps the expertise of the Harvard community to examine the promise and potential pitfalls of the rising age of artificial intelligence and machine learning, and how to humanize it.

For two decades, the flowering of the Digital Era has been greeted with blue-skies optimism, defined by an unflagging belief that each new technological advance, whether more powerful personal computers, faster internet, smarter cellphones, or more personalized social media, would only enhance our lives.

Also in the series:

  • Great promise but potential for peril
  • AI revolution in medicine
  • Imagine a world in which AI is in your home, at work, everywhere

But public sentiment has curdled in recent years with revelations about Silicon Valley firms and online retailers collecting and sharing people’s data, social media gamed by bad actors spreading false information or sowing discord, and corporate algorithms using opaque metrics that favor some groups over others. These concerns multiply as artificial intelligence (AI) and machine-learning technologies, which made possible many of these advances, quietly begin to nudge aside humans, assuming greater roles in running our economy, transportation, defense, medical care, and personal lives.

“Individuality … is increasingly under siege in an era of big data and machine learning,” says Mathias Risse, Littauer Professor of Philosophy and Public Administration and director of the Carr Center for Human Rights Policy at Harvard Kennedy School. The center invites scholars and leaders from the private and nonprofit sectors who work on ethics and AI to engage with students as part of its growing focus on the ways technology is reshaping the future of human rights.

Building more thoughtful systems

Even before the technology field belatedly began to respond to market and government pressures with promises to do better, it had become clear to Barbara Grosz, Higgins Research Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), that the surest way to get the industry to act more responsibly is to prepare the next generation of tech leaders and workers to think more ethically about the work they’ll be doing. The result is Embedded EthiCS, a groundbreaking program that marries the disciplines of computer science and philosophy in an attempt to create change from within.

The timing seems on target, since the revolutionary technologies of AI and machine learning have begun making inroads in an ever-broadening range of domains and professions. In medicine, for instance, systems are expected soon to work effectively with physicians to provide better healthcare. In business, tech giants like Google, Facebook, and Amazon have been using smart technologies for years, but use of AI is rapidly spreading, with global corporate spending on AI software and platforms expected to reach $110 billion by 2024.

So where are we now on these issues, and what does that mean? To answer those questions, this Gazette series will examine emerging technologies in medicine and business, with the help of various experts in the Harvard community. We’ll also take a look at how the humanities can help inform the future coordination of human values and AI efficiencies through University efforts such as the AI+Art project at metaLAB(at)Harvard and Embedded EthiCS.

In spring 2017, Grosz recruited Alison Simmons, the Samuel H. Wolcott Professor of Philosophy, and together they founded Embedded EthiCS. The idea is to weave philosophical concepts and ways of thinking into existing computer science courses so that students learn to ask not simply “Can I build it?” but rather “Should I build it, and if so, how?”

Through Embedded EthiCS, students learn to identify and think through ethical issues, explain their reasoning for taking, or not taking, a specific action, and ideally design more thoughtful systems that reflect basic human values. The program is the first of its kind nationally and is seen as a model for a number of other colleges and universities that plan to adapt it, including Massachusetts Institute of Technology and Stanford University.

In recent years, computer science has become the second most popular concentration at Harvard College, after economics. About 2,750 students have enrolled in Embedded EthiCS courses since it began. More than 30 courses, including all classes in the computer science department, participated in the program in spring 2019.

“We don’t need all courses; what we need is for enough students to learn to use ethical thinking during design to make a difference in the world and to start changing the way computing technology company leaders, systems designers, and programmers think about what they’re doing,” said Grosz.

It became clear just a few years ago that Harvard’s computer science students wanted and needed something more, when Grosz taught “Intelligent Systems: Design and Ethical Challenges,” one of only two CS courses at the time that had integrated ethics into the syllabus.

During a class discussion about Facebook’s infamous 2014 experiment covertly engineering news feeds to gauge how users’ emotions were affected, students were outraged by what they viewed as the company’s improper psychological manipulation. But just two days later, in a class activity in which students were designing a recommender system for a fictional clothing manufacturer, Grosz asked what information they thought they’d need to collect from hypothetical customers.

“It was astonishing,” she said. “How many of the groups talked about the ethical implications of the information they were collecting? None.”

When she taught the course again, only one student said she thought about the ethical implications, but felt that “it didn’t seem relevant,” Grosz recalled.

“You need to think about what information you’re collecting when you’re designing what you’re going to collect, not collect everything and then say ‘Oh, I shouldn’t have this information,’” she explained.

Making it stick

Seeing how quickly even students concerned about ethics forgot to consider ethical questions when absorbed in a technical project prompted Grosz to focus on how to help students keep ethics up front. Some empirical work shows that standalone courses aren’t very sticky with engineers, and she was also concerned that a single ethics course would not satisfy growing student interest. Grosz and Simmons designed the program to intertwine the ethical with the technical, thus helping students better understand the relevance of ethics to their everyday work.

In a broad range of Harvard CS courses now, philosophy Ph.D. students and postdocs lead modules on ethical matters tailored to the technical concepts being taught in the class.

“We want the ethical issues to arise organically out of the technical problems that they’re working on in class,” said Simmons. “We want our students to recognize that technical and ethical challenges need to be addressed hand in hand. So a one-off course on ethics for computer scientists would not work. We needed a new pedagogical model.”

Examples of ethical problems courses are tackling

  • Are software developers morally obligated to design for inclusion?
  • Should social media companies suppress the spread of fake news on their platforms?
  • Should search engines be transparent about how they rank results?
  • Should we think about electronic privacy as a right?

Getting comfortable with a humanities-driven approach to learning, using the ideas and tools of moral and political philosophy, has been an adjustment for the computer-science instructors as well as students, said David Grant, who taught as an Embedded EthiCS postdoc in 2019 and is now assistant professor of philosophy at the University of Texas at San Antonio.

“The skill of ethical reasoning is best learned and practiced through open and inclusive discussion with others,” Grant wrote in an email. “But extensive in-class discussion is rare in computer science courses, which makes encouraging active participation in our modules unusually challenging.”

Students are used to being presented with problems for which there are solutions, program organizers say. But in philosophy, issues or dilemmas become clearer over time, as different perspectives are brought to bear. And while sometimes there can be right or wrong answers, solutions are typically thornier and require some difficult choices.

“This is extremely hard for people who are used to finding solutions that can be proved to be right,” said Grosz. “It’s fundamentally a different way of thinking about the world.”

“They have to learn to think with normative concepts like moral responsibility and legal responsibility and rights. They need to develop skills for engaging in counterfactual reasoning with those concepts while doing algorithm and systems design,” said Simmons. “We in the humanities problem-solve too, but we often do it in a normative domain.”

The importance of teaching students to consider societal implications of computing systems was not evident in the field’s early days, when there were only a very small number of computer scientists, systems were used largely in closed scientific or industry settings, and there were few “adversarial attacks” by people aiming to exploit system weaknesses, said Grosz, a pioneer in the field. Fears about misuse were minimal because so few had access.

But as the technologies have become ubiquitous in the past 10 to 15 years, with more and more people worldwide connecting via smartphones, the internet, and social networking, as well as the rapid application of machine learning and big data computing since 2012, the need for ethical training is urgent. “It’s the penetration of computing technologies throughout life and its use by almost everyone now that has enabled so much that’s caused harm lately,” said Grosz.

That apathy has contributed to the perceived disconnect between science and the public. “We now have a gap between those of us who make technology and those of us who use it,” she said.

Simmons and Grosz said that while computer science concentrators leaving Harvard and other universities for jobs in the tech sector may have the desire to change the industry, until now they haven’t been furnished with the tools to do so effectively. The program hopes to arm them with an understanding of how to identify and work through potential ethical concerns that may arise from new technology and its applications.

“What’s important is giving them the knowledge that they have the skills to make an effective, rational argument with people about what’s going on,” said Grosz, “to give them the confidence … to [say], ‘This isn’t right — and here’s why.’”

A winner of the Responsible CS Challenge in 2019, the program received a $150,000 grant for its work in technology education; the award helps fund two computer science postdoc positions whose holders collaborate with the philosophy student-teachers in developing the different course modules.

Though still young, the program has also had some nice side effects, with faculty and graduate students in the two typically distant cohorts learning in unusual ways from each other. And for the philosophy students there’s been an unexpected boon: working on ethical questions at technology’s cutting edge. It has changed the course of their research and opened up new career options in the growing field of engaged ethics.

“It is exciting. It’s an opportunity to make use of our skills in a way that might have a visible effect in the near- or midterm,” said philosophy lecturer Jeffrey Behrends, one of the program’s co-directors.

Will this ethical training reshape the way students approach technology once they leave Harvard and join the workforce? That’s the critical question to which the program’s directors are now turning their attention. There isn’t enough data to know yet, and the key components for such an analysis, like tracking down students after they’ve graduated to measure the program’s impact on their work, present a “very difficult evaluation problem” for researchers, said Behrends, who is investigating how best to measure long-term effectiveness.

Ultimately, whether stocking the field with ethically trained designers, technicians, executives, investors, and policymakers will bring about a more responsible and ethical era of technology remains to be seen. But leaving the industry to self-police or waiting for market forces to guide reforms clearly hasn’t worked so far.

“Somebody has to figure out a different incentive mechanism. That’s where really the danger still lies,” said Grosz of the industry’s intense profit focus. “We can try to educate students to do differently, but in the end, if there isn’t a different incentive mechanism, it’s quite hard to change Silicon Valley practice.”

Next: Ethical concerns rise as AI takes an ever larger decision-making role in many industries.

Ethics of Artificial Intelligence and Robotics

Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these.

After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used by humans. This includes issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7). Then AI systems as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9). Finally, the problem of a possible future AI superintelligence leading to a “singularity” (§2.10). We close with a remark on the vision of AI (§3).

For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, then analyse how these play out with current technologies and, finally, what policy consequences may be drawn.

1. Introduction

1.1 Background of the Field

The ethics of AI and robotics is often focused on “concerns” of various sorts, which is a typical response to new technologies. Many such concerns turn out to be rather quaint (trains are too fast for souls); some are predictably wrong when they suggest that the technology will fundamentally change humans (telephones will destroy personal communication, writing will destroy memory, video cassettes will make going out redundant); some are broadly correct but moderately relevant (digital technology will destroy industries that make photographic film, cassette tapes, or vinyl records); but some are broadly correct and deeply relevant (cars will kill children and fundamentally change the landscape). The task of an article such as this is to analyse the issues and to deflate the non-issues.

Some technologies, like nuclear power, cars, or plastics, have caused ethical and political discussion and significant policy efforts to control the trajectory of these technologies, usually only once some damage is done. In addition to such “ethical concerns”, new technologies challenge current norms and conceptual systems, which is of particular interest to philosophy. Finally, once we have understood a technology in its context, we need to shape our societal response, including regulation and law. All these features also exist in the case of new AI and Robotics technologies—plus the more fundamental fear that they may end the era of human control on Earth.

The ethics of AI and robotics has seen significant press coverage in recent years, which supports related research, but also may end up undermining it: the press often talks as if the issues under discussion were just predictions of what future technology will bring, and as though we already know what would be most ethical and how to achieve that. Press coverage thus focuses on risk, security (Brundage et al. 2018, in the Other Internet Resources section below, hereafter [OIR]), and prediction of impact (e.g., on the job market). The result is a discussion of essentially technical problems that focus on how to achieve a desired outcome. Current discussions in policy and industry are also motivated by image and public relations, where the label “ethical” is really not much more than the new “green”, perhaps used for “ethics washing”. For a problem to qualify as a problem for AI ethics would require that we do not readily know what the right thing to do is. In this sense, job loss, theft, or killing with AI is not a problem in ethics, but whether these are permissible under certain circumstances is a problem. This article focuses on the genuine problems of ethics where we do not readily know what the answers are.

A last caveat: The ethics of AI and robotics is a very young field within applied ethics, with significant dynamics, but few well-established issues and no authoritative overviews—though there is a promising outline (European Group on Ethics in Science and New Technologies 2018) and there are beginnings on societal impact (Floridi et al. 2018; Taddeo and Floridi 2018; S. Taylor et al. 2018; Walsh 2018; Bryson 2019; Gibert 2019; Whittlestone et al. 2019), and policy recommendations (AI HLEG 2019 [OIR]; IEEE 2019). So this article cannot merely reproduce what the community has achieved thus far, but must propose an ordering where little order exists.

1.2 AI & Robotics

The notion of “artificial intelligence” (AI) is understood broadly as any kind of artificial computational system that shows intelligent behaviour, i.e., complex behaviour that is conducive to reaching goals. In particular, we do not wish to restrict “intelligence” to what would require intelligence if done by humans, as Minsky had suggested (1985). This means we incorporate a range of machines, including those in “technical AI”, that show only limited abilities in learning or reasoning but excel at the automation of particular tasks, as well as machines in “general AI” that aim to create a generally intelligent agent.

AI somehow gets closer to our skin than other technologies—thus the field of “philosophy of AI”. Perhaps this is because the project of AI is to create machines that have a feature central to how we humans see ourselves, namely as feeling, thinking, intelligent beings. The main purposes of an artificially intelligent agent probably involve sensing, modelling, planning and action, but current AI applications also include perception, text analysis, natural language processing (NLP), logical reasoning, game-playing, decision support systems, data analytics, predictive analytics, as well as autonomous vehicles and other forms of robotics (P. Stone et al. 2016). AI may involve any number of computational techniques to achieve these aims, be that classical symbol-manipulating AI, inspired by natural cognition, or machine learning via neural networks (Goodfellow, Bengio, and Courville 2016; Silver et al. 2018).

Historically, it is worth noting that the term “AI” was used as above ca. 1950–1975, then came into disrepute during the “AI winter”, ca. 1975–1995, and narrowed. As a result, areas such as “machine learning”, “natural language processing” and “data science” were often not labelled as “AI”. Since ca. 2010, the use has broadened again, and at times almost all of computer science and even high-tech is lumped under “AI”. Now it is a name to be proud of, a booming industry with massive capital investment (Shoham et al. 2018), and on the edge of hype again. As Erik Brynjolfsson noted, it may allow us to

virtually eliminate global poverty, massively reduce disease and provide better education to almost everyone on the planet. (quoted in Anderson, Rainie, and Luchsinger 2018)

While AI can be entirely software, robots are physical machines that move. Robots are subject to physical impact, typically through “sensors”, and they exert physical force onto the world, typically through “actuators”, like a gripper or a turning wheel. Accordingly, autonomous cars or planes are robots, and only a minuscule portion of robots is “humanoid” (human-shaped), like in the movies. Some robots use AI, and some do not: Typical industrial robots blindly follow completely defined scripts with minimal sensory input and no learning or reasoning (around 500,000 such new industrial robots are installed each year (IFR 2019 [OIR])). It is probably fair to say that while robotics systems cause more concerns in the general public, AI systems are more likely to have a greater impact on humanity. Also, AI or robotics systems for a narrow set of tasks are less likely to cause new issues than systems that are more flexible and autonomous.

Robotics and AI can thus be seen as covering two overlapping sets of systems: systems that are only AI, systems that are only robotics, and systems that are both. We are interested in all three; the scope of this article is thus not only the intersection, but the union, of both sets.

1.3 A Note on Policy

Policy is only one of the concerns of this article. There is significant public discussion about AI ethics, and there are frequent pronouncements from politicians that the matter requires new policy, which is easier said than done: Actual technology policy is difficult to plan and enforce. It can take many forms, from incentives and funding, infrastructure, taxation, or good-will statements, to regulation by various actors, and the law. Policy for AI will possibly come into conflict with other aims of technology policy or general policy. Governments, parliaments, associations, and industry circles in industrialised countries have produced reports and white papers in recent years, and some have generated good-will slogans (“trusted/responsible/humane/human-centred/good/beneficial AI”), but is that what is needed? For a survey, see Jobin, Ienca, and Vayena (2019) and V. Müller’s list of PT-AI Policy Documents and Institutions.

For people who work in ethics and policy, there might be a tendency to overestimate the impact and threats from a new technology, and to underestimate how far current regulation can reach (e.g., for product liability). On the other hand, there is a tendency for businesses, the military, and some public administrations to “just talk” and do some “ethics washing” in order to preserve a good public image and continue as before. Actually implementing legally binding regulation would challenge existing business models and practices. Actual policy is not just an implementation of ethical theory, but subject to societal power structures—and the agents that do have the power will push against anything that restricts them. There is thus a significant risk that regulation will remain toothless in the face of economic and political power.

Though very little actual policy has been produced, there are some notable beginnings: The latest EU policy document suggests “trustworthy AI” should be lawful, ethical, and technically robust, and then spells this out as seven requirements: human oversight, technical robustness, privacy and data governance, transparency, fairness, well-being, and accountability (AI HLEG 2019 [OIR]). Much European research now runs under the slogan of “responsible research and innovation” (RRI), and “technology assessment” has been a standard field since the advent of nuclear power. Professional ethics is also a standard field in information technology, and this includes issues that are relevant in this article. Perhaps a “code of ethics” for AI engineers, analogous to the codes of ethics for medical doctors, is an option here (Véliz 2019). What data science itself should do is addressed in (L. Taylor and Purtova 2019). We also expect that much policy will eventually cover specific uses or technologies of AI and robotics, rather than the field as a whole. A useful summary of an ethical framework for AI is given in (European Group on Ethics in Science and New Technologies 2018: 13ff). On general AI policy, see Calo (2018) as well as Crawford and Calo (2016); Stahl, Timmermans, and Mittelstadt (2016); Johnson and Verdicchio (2017); and Giubilini and Savulescu (2018). A more political angle of technology is often discussed in the field of “Science and Technology Studies” (STS). As books like The Ethics of Invention (Jasanoff 2016) show, concerns in STS are often quite similar to those in ethics (Jacobs et al. 2019 [OIR]). In this article, we discuss the policy for each type of issue separately rather than for AI or robotics in general.

2. Main Debates

In this section we outline the ethical issues of human use of AI and robotics systems that can be more or less autonomous—which means we look at issues that arise with certain uses of the technologies which would not arise with others. It must be kept in mind, however, that technologies will always cause some uses to be easier, and thus more frequent, and hinder other uses. The design of technical artefacts thus has ethical relevance for their use (Houkes and Vermaas 2010; Verbeek 2011), so beyond “responsible use”, we also need “responsible design” in this field. The focus on use does not presuppose which ethical approaches are best suited for tackling these issues; they might well be virtue ethics (Vallor 2017) rather than consequentialist or value-based (Floridi et al. 2018). This section is also neutral with respect to the question whether AI systems truly have “intelligence” or other mental properties: It would apply equally well if AI and robotics are merely seen as the current face of automation (cf. Müller forthcoming-b).

2.1 Privacy & Surveillance

There is a general discussion about privacy and surveillance in information technology (e.g., Macnish 2017; Roessler 2017), which mainly concerns the access to private data and data that is personally identifiable. Privacy has several well-recognised aspects, e.g., “the right to be let alone”, information privacy, privacy as an aspect of personhood, control over information about oneself, and the right to secrecy (Bennett and Raab 2006). Privacy studies have historically focused on state surveillance by secret services but now include surveillance by other state agents, businesses, and even individuals. The technology has changed significantly in the last decades while regulation has been slow to respond (though there is the Regulation (EU) 2016/679)—the result is a certain anarchy that is exploited by the most powerful players, sometimes in plain sight, sometimes in hiding.

The digital sphere has widened greatly: All data collection and storage is now digital, our lives are increasingly digital, most digital data is connected to a single Internet, and there is more and more sensor technology in use that generates data about non-digital aspects of our lives. AI increases both the possibilities of intelligent data collection and the possibilities for data analysis. This applies to blanket surveillance of whole populations as well as to classic targeted surveillance. In addition, much of the data is traded between agents, usually for a fee.

At the same time, controlling who collects which data, and who has access, is much harder in the digital world than it was in the analogue world of paper and telephone calls. Many new AI technologies amplify the known issues. For example, face recognition in photos and videos allows identification and thus profiling and searching for individuals (Whittaker et al. 2018: 15ff). This continues using other techniques for identification, e.g., “device fingerprinting”, which are commonplace on the Internet (sometimes revealed in the “privacy policy”). The result is that “In this vast ocean of data, there is a frighteningly complete picture of us” (Smolan 2016: 1:01). The result is arguably a scandal that still has not received due public attention.

The data trail we leave behind is how our “free” services are paid for—but we are not told about that data collection and the value of this new raw material, and we are manipulated into leaving ever more such data. For the “big 5” companies (Amazon, Google/Alphabet, Microsoft, Apple, Facebook), the main data-collection part of their business appears to be based on deception, exploiting human weaknesses, furthering procrastination, generating addiction, and manipulation (Harris 2016 [OIR]). The primary focus of social media, gaming, and most of the Internet in this “surveillance economy” is to gain, maintain, and direct attention—and thus data supply. “Surveillance is the business model of the Internet” (Schneier 2015). This surveillance and attention economy is sometimes called “surveillance capitalism” (Zuboff 2019). It has caused many attempts to escape from the grasp of these corporations, e.g., in exercises of “minimalism” (Newport 2019), sometimes through the open source movement, but it appears that present-day citizens have lost the degree of autonomy needed to escape while fully continuing with their life and work. We have lost ownership of our data, if “ownership” is the right relation here. Arguably, we have lost control of our data.

These systems will often reveal facts about us that we ourselves wish to suppress or are not aware of: they know more about us than we know ourselves. Even just observing online behaviour allows insights into our mental states (Burr and Cristianini 2019) and manipulation (see below section 2.2). This has led to calls for the protection of “derived data” (Wachter and Mittelstadt 2019). With the last sentence of his bestselling book, Homo Deus, Harari asks about the long-term consequences of AI:

What will happen to society, politics and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves? (2016: 462)

Robotic devices have not yet played a major role in this area, except for security patrolling, but this will change once they are more common outside of industry environments. Together with the “Internet of things”, the so-called “smart” systems (phone, TV, oven, lamp, virtual assistant, home,…), “smart city” (Sennett 2018), and “smart governance”, they are set to become part of the data-gathering machinery that offers more detailed data, of different types, in real time, with ever more information.

Privacy-preserving techniques that can largely conceal the identity of persons or groups are now a standard staple in data science; they include (relative) anonymisation, access control (plus encryption), and other models where computation is carried out with fully or partially encrypted input data (Stahl and Wright 2018); in the case of “differential privacy”, this is done by adding calibrated noise to the output of queries (Dwork et al. 2006; Abowd 2017). While requiring more effort and cost, such techniques can avoid many of the privacy issues. Some companies have also seen better privacy as a competitive advantage that can be leveraged and sold at a price.
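
As a concrete illustration of the "calibrated noise" idea, the sketch below applies the Laplace mechanism to a simple counting query. It is a minimal, hypothetical Python example (not drawn from the works cited above); the function name and the toy dataset are invented for illustration, and the noise scale is set by the query's sensitivity divided by the privacy parameter epsilon.

```python
# Minimal sketch of the Laplace mechanism behind differential privacy:
# the true answer to a counting query is perturbed with noise calibrated to
# the query's sensitivity and the privacy parameter epsilon, so that any one
# person's presence in the data changes the output distribution only slightly.
# Illustrative only; real deployments also track a privacy budget over queries.
import numpy as np

def laplace_count(records, predicate, epsilon=0.5, sensitivity=1.0):
    """Return a differentially private count of records satisfying `predicate`."""
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical query: how many users in a small dataset are over 40?
users = [{"age": 34}, {"age": 57}, {"age": 41}, {"age": 29}]
print(laplace_count(users, lambda u: u["age"] > 40, epsilon=0.5))
```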

One of the major practical difficulties is to actually enforce regulation, both on the level of the state and on the level of the individual who has a claim. They must identify the responsible legal entity, prove the action, perhaps prove intent, find a court that declares itself competent … and eventually get the court to actually enforce its decision. Well-established legal protection of rights such as consumer rights, product liability, and other civil liability or protection of intellectual property rights is often missing in digital products, or hard to enforce. This means that companies with a “digital” background are used to testing their products on the consumers without fear of liability while heavily defending their intellectual property rights. This “Internet Libertarianism” is sometimes taken to assume that technical solutions will take care of societal problems by themselves (Morozov 2013).

2.2 Manipulation of Behaviour

The ethical issues of AI in surveillance go beyond the mere accumulation of data and direction of attention: They include the use of information to manipulate behaviour, online and offline, in a way that undermines autonomous rational choice. Of course, efforts to manipulate behaviour are ancient, but they may gain a new quality when they use AI systems. Given users’ intense interaction with data systems and the deep knowledge about individuals this provides, they are vulnerable to “nudges”, manipulation, and deception. With sufficient prior data, algorithms can be used to target individuals or small groups with just the kind of input that is likely to influence these particular individuals. A “nudge” changes the environment such that it influences behaviour in a predictable way that is positive for the individual, but easy and cheap to avoid (Thaler and Sunstein 2008). There is a slippery slope from here to paternalism and manipulation.

Many advertisers, marketers, and online sellers will use any legal means at their disposal to maximise profit, including exploitation of behavioural biases, deception, and addiction generation (Costa and Halpern 2019 [OIR]). Such manipulation is the business model in much of the gambling and gaming industries, but it is spreading, e.g., to low-cost airlines. In interface design on web pages or in games, this manipulation uses what is called “dark patterns” (Mathur et al. 2019). At this moment, gambling and the sale of addictive substances are highly regulated, but online manipulation and addiction are not—even though manipulation of online behaviour is becoming a core business model of the Internet.

Furthermore, social media is now the prime location for political propaganda. This influence can be used to steer voting behaviour, as in the Facebook-Cambridge Analytica “scandal” (Woolley and Howard 2017; Bradshaw, Neudert, and Howard 2019) and—if successful—it may harm the autonomy of individuals (Susser, Roessler, and Nissenbaum 2019).

Improved AI “faking” technologies make what once was reliable evidence into unreliable evidence—this has already happened to digital photos, sound recordings, and video. It will soon be quite easy to create (rather than alter) “deep fake” text, photos, and video material with any desired content. Soon, sophisticated real-time interaction with persons over text, phone, or video will be faked, too. So we cannot trust digital interactions while we are at the same time increasingly dependent on such interactions.

One more specific issue is that machine learning techniques in AI rely on training with vast amounts of data. This means there will often be a trade-off between privacy and rights to data on the one hand and the technical quality of the product on the other. This influences the consequentialist evaluation of privacy-violating practices.

The policy in this field has its ups and downs: Civil liberties and the protection of individual rights are under intense pressure from businesses’ lobbying, secret services, and other state agencies that depend on surveillance. Privacy protection has diminished massively compared to the pre-digital age when communication was based on letters, analogue telephone communications, and personal conversation and when surveillance operated under significant legal constraints.

While the EU General Data Protection Regulation (Regulation (EU) 2016/679) has strengthened privacy protection, the US and China prefer growth with less regulation (Thompson and Bremmer 2018), likely in the hope that this provides a competitive advantage. It is clear that state and business actors have increased their ability to invade privacy and manipulate people with the help of AI technology and will continue to do so to further their particular interests—unless reined in by policy in the interest of general society.

2.3 Opacity of AI Systems

Opacity and bias are central issues in what is now sometimes called “data ethics” or “big data ethics” (Floridi and Taddeo 2016; Mittelstadt and Floridi 2016). AI systems for automated decision support and “predictive analytics” raise “significant concerns about lack of due process, accountability, community engagement, and auditing” (Whittaker et al. 2018: 18ff). They are part of a power structure in which “we are creating decision-making processes that constrain and limit opportunities for human participation” (Danaher 2016b: 245). At the same time, it will often be impossible for the affected person to know how the system came to this output, i.e., the system is “opaque” to that person. If the system involves machine learning, it will typically be opaque even to the expert, who will not know how a particular pattern was identified, or even what the pattern is. Bias in decision systems and data sets is exacerbated by this opacity. So, at least in cases where there is a desire to remove bias, the analysis of opacity and bias go hand in hand, and political response has to tackle both issues together.

Many AI systems rely on machine learning techniques in (simulated) neural networks that will extract patterns from a given dataset, with or without “correct” solutions provided; i.e., supervised, semi-supervised or unsupervised. With these techniques, the “learning” captures patterns in the data and these are labelled in a way that appears useful to the decision the system makes, while the programmer does not really know which patterns in the data the system has used. In fact, the programs are evolving, so when new data comes in, or new feedback is given (“this was correct”, “this was incorrect”), the patterns used by the learning system change. What this means is that the outcome is not transparent to the user or programmers: it is opaque. Furthermore, the quality of the program depends heavily on the quality of the data provided, following the old slogan “garbage in, garbage out”. So, if the data already involved a bias (e.g., police data about the skin colour of suspects), then the program will reproduce that bias. There are proposals for a standard description of datasets in a “datasheet” that would make the identification of such bias more feasible (Gebru et al. 2018 [OIR]). There is also significant recent literature about the limitations of machine learning systems that are essentially sophisticated data filters (Marcus 2018 [OIR]). Some have argued that the ethical problems of today are the result of technical “shortcuts” AI has taken (Cristianini forthcoming).
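
To make the "garbage in, garbage out" point concrete, the following minimal sketch (a synthetic, hypothetical example, not taken from the cited literature) trains an off-the-shelf classifier on historically biased labels; even though the protected attribute is withheld from the learner, a correlated proxy feature lets the model reproduce the bias.

```python
# Synthetic illustration of "garbage in, garbage out": a model trained on
# historically biased labels reproduces the bias, even though the protected
# attribute itself is never given to the learner, because a correlated proxy
# feature carries the same information. All data here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                  # protected attribute (0 or 1), hypothetical
skill = rng.normal(0, 1, n)                    # genuinely job-relevant feature
neighbourhood = group + rng.normal(0, 0.5, n)  # proxy feature correlated with group
# Historical hiring decisions favoured group 0 regardless of skill:
hired = (skill + 1.0 * (group == 0) + rng.normal(0, 1, n)) > 0.5

X = np.column_stack([skill, neighbourhood])    # the group label itself is withheld
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted hiring rate = {pred[group == g].mean():.2f}")
# The learned model recommends group 0 at a higher rate, codifying the
# historical bias present in the training labels.
```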

There are several technical activities that aim at “explainable AI”, starting with (Van Lent, Fisher, and Mancuso 1999; Lomas et al. 2012) and, more recently, a DARPA programme (Gunning 2017 [OIR]). More broadly, the demand for

a mechanism for elucidating and articulating the power structures, biases, and influences that computational artefacts exercise in society (Diakopoulos 2015: 398)

is sometimes called “algorithmic accountability reporting”. This does not mean that we expect an AI to “explain its reasoning”—doing so would require far more serious moral autonomy than we currently attribute to AI systems (see below §2.10).

The politician Henry Kissinger pointed out that there is a fundamental problem for democratic decision-making if we rely on a system that is supposedly superior to humans, but cannot explain its decisions. He says we may have “generated a potentially dominating technology in search of a guiding philosophy” (Kissinger 2018). Danaher (2016b) calls this problem “the threat of algocracy” (adopting the previous use of ‘algocracy’ from Aneesh 2002 [OIR], 2006). In a similar vein, Cave (2019) stresses that we need a broader societal move towards more “democratic” decision-making to avoid AI being a force that leads to a Kafka-style impenetrable suppression system in public administration and elsewhere. The political angle of this discussion has been stressed by O’Neil in her influential book Weapons of Math Destruction (2016), and by Yeung and Lodge (2019).

In the EU, some of these issues have been taken into account with the (Regulation (EU) 2016/679), which foresees that consumers, when faced with a decision based on data processing, will have a legal “right to explanation”—how far this goes and to what extent it can be enforced is disputed (Goodman and Flaxman 2017; Wachter, Mittelstadt, and Floridi 2016; Wachter, Mittelstadt, and Russell 2017). Zerilli et al. (2019) argue that there may be a double standard here, where we demand a high level of explanation for machine-based decisions despite humans sometimes not reaching that standard themselves.

2.4 Bias in Decision Systems

Automated AI decision support systems and “predictive analytics” operate on data and produce a decision as “output”. This output may range from the relatively trivial to the highly significant: “this restaurant matches your preferences”, “the patient in this X-ray has completed bone growth”, “application to credit card declined”, “donor organ will be given to another patient”, “bail is denied”, or “target identified and engaged”. Data analysis is often used in “predictive analytics” in business, healthcare, and other fields, to foresee future developments—since prediction is easier, it will also become a cheaper commodity. One use of prediction is in “predictive policing” (NIJ 2014 [OIR]), which many fear might lead to an erosion of public liberties (Ferguson 2017) because it can take away power from the people whose behaviour is predicted. It appears, however, that many of the worries about policing depend on futuristic scenarios where law enforcement foresees and punishes planned actions, rather than waiting until a crime has been committed (like in the 2002 film “Minority Report”). One concern is that these systems might perpetuate bias that was already in the data used to set up the system, e.g., by increasing police patrols in an area and discovering more crime in that area. Actual “predictive policing” or “intelligence led policing” techniques mainly concern the question of where and when police forces will be needed most. Also, police officers can be provided with more data, offering them more control and facilitating better decisions, in workflow support software (e.g., “ArcGIS”). Whether this is problematic depends on the appropriate level of trust in the technical quality of these systems, and on the evaluation of aims of the police work itself. Perhaps a recent paper title points in the right direction here: “AI ethics in predictive policing: From models of threat to an ethics of care” (Asaro 2019).

Bias typically surfaces when unfair judgments are made because the individual making the judgment is influenced by a characteristic that is actually irrelevant to the matter at hand, typically a discriminatory preconception about members of a group. So, one form of bias is a learned cognitive feature of a person, often not made explicit. The person concerned may not be aware of having that bias—they may even be honestly and explicitly opposed to a bias they are found to have (e.g., through priming, cf. Graham and Lowery 2004). On fairness vs. bias in machine learning, see Binns (2018).

Apart from the social phenomenon of learned bias, the human cognitive system is generally prone to have various kinds of “cognitive biases”, e.g., the “confirmation bias”: humans tend to interpret information as confirming what they already believe. This second form of bias is often said to impede performance in rational judgment (Kahneman 2011)—though at least some cognitive biases generate an evolutionary advantage, e.g., economical use of resources for intuitive judgment. There is a question whether AI systems could or should have such cognitive bias.

A third form of bias is present in data when it exhibits systematic error, e.g., “statistical bias”. Strictly, any given dataset will only be unbiased for a single kind of issue, so the mere creation of a dataset involves the danger that it may be used for a different kind of issue, and then turn out to be biased for that kind. Machine learning on the basis of such data would then not only fail to recognise the bias, but codify and automate the “historical bias”. Such historical bias was discovered in an automated recruitment screening system at Amazon (discontinued early 2017) that discriminated against women—presumably because the company had a history of discriminating against women in the hiring process. The “Correctional Offender Management Profiling for Alternative Sanctions” (COMPAS), a system to predict whether a defendant would re-offend, was found to be as successful (65.2% accuracy) as a group of random humans (Dressel and Farid 2018) and to produce more false positives and fewer false negatives for black defendants. The problem with such systems is thus bias plus humans placing excessive trust in the systems. The political dimensions of such automated systems in the USA are investigated in Eubanks (2018).

There are significant technical efforts to detect and remove bias from AI systems, but it is fair to say that these are in early stages: see UK Institute for Ethical AI & Machine Learning (Brownsword, Scotford, and Yeung 2017; Yeung and Lodge 2019). It appears that technological fixes have their limits in that they need a mathematical notion of fairness, which is hard to come by (Whittaker et al. 2018: 24ff; Selbst et al. 2019), as is a formal notion of “race” (see Benthall and Haynes 2019). An institutional proposal is in (Veale and Binns 2017).
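
As a hypothetical illustration of the group-wise error analysis behind findings like the COMPAS result above, the sketch below uses made-up predictions (not the COMPAS data) to compute false positive and false negative rates per group. Disparities of this kind are exactly what a single mathematical notion of "fairness" struggles to capture, since equalising one error rate across groups typically unbalances another when base rates differ.

```python
# Hypothetical sketch: per-group false positive and false negative rates for a
# binary "will re-offend" predictor. The numbers are invented for illustration;
# they are not the COMPAS data.
import numpy as np

def error_rates(y_true, y_pred):
    """Return (false positive rate, false negative rate) for binary labels."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    fpr = (y_pred & ~y_true).sum() / max((~y_true).sum(), 1)
    fnr = (~y_pred & y_true).sum() / max(y_true.sum(), 1)
    return fpr, fnr

# Invented outcomes (1 = re-offended) and predictions for two groups:
y_true_a = [0, 0, 0, 1, 1, 0, 0, 1]
y_pred_a = [1, 0, 1, 1, 0, 1, 0, 1]   # group A: many false positives
y_true_b = [0, 0, 0, 1, 1, 0, 0, 1]
y_pred_b = [0, 0, 0, 0, 1, 0, 0, 0]   # group B: more false negatives

for name, yt, yp in [("A", y_true_a, y_pred_a), ("B", y_true_b, y_pred_b)]:
    fpr, fnr = error_rates(yt, yp)
    print(f"group {name}: false positive rate={fpr:.2f}, false negative rate={fnr:.2f}")
```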

2.5 Human-Robot Interaction

Human-robot interaction (HRI) is an academic field in its own right, which now pays significant attention to ethical matters, the dynamics of perception from both sides, and both the different interests present in and the intricacy of the social context, including co-working (e.g., Arnold and Scheutz 2017). Useful surveys for the ethics of robotics include Calo, Froomkin, and Kerr (2016); Royakkers and van Est (2016); Tzafestas (2016); a standard collection of papers is Lin, Abney, and Jenkins (2017).

While AI can be used to manipulate humans into believing and doing things (see section 2.2 ), it can also be used to drive robots that are problematic if their processes or appearance involve deception, threaten human dignity, or violate the Kantian requirement of “respect for humanity”. Humans very easily attribute mental properties to objects, and empathise with them, especially when the outer appearance of these objects is similar to that of living beings. This can be used to deceive humans (or animals) into attributing more intellectual or even emotional significance to robots or AI systems than they deserve. Some parts of humanoid robotics are problematic in this regard (e.g., Hiroshi Ishiguro’s remote-controlled Geminoids), and there are cases that have been clearly deceptive for public-relations purposes (e.g. on the abilities of Hanson Robotics’ “Sophia”). Of course, some fairly basic constraints of business ethics and law apply to robots, too: product safety and liability, or non-deception in advertisement. It appears that these existing constraints take care of many concerns that are raised. There are cases, however, where human-human interaction has aspects that appear specifically human in ways that can perhaps not be replaced by robots: care, love, and sex.

2.5.1 Example (a) Care Robots

The use of robots in health care for humans is currently at the level of concept studies in real environments, but it may become a usable technology in a few years, and has raised a number of concerns for a dystopian future of de-humanised care (A. Sharkey and N. Sharkey 2011; Robert Sparrow 2016). Current systems include robots that support human carers/caregivers (e.g., in lifting patients, or transporting material), robots that enable patients to do certain things by themselves (e.g., eat with a robotic arm), but also robots that are given to patients as company and comfort (e.g., the “Paro” robot seal). For an overview, see van Wynsberghe (2016); Nørskov (2017); Fosch-Villaronga and Albo-Canals (2019); for a survey of users, see Draper et al. (2014).

One reason why the issue of care has come to the fore is that people have argued that we will need robots in ageing societies. This argument makes problematic assumptions, namely that with longer lifespan people will need more care, and that it will not be possible to attract more humans to caring professions. It may also show a bias about age (Jecker forthcoming). Most importantly, it ignores the nature of automation, which is not simply about replacing humans, but about allowing humans to work more efficiently. It is not very clear that there really is an issue here since the discussion mostly focuses on the fear of robots de-humanising care, but the actual and foreseeable robots in care are assistive robots for classic automation of technical tasks. They are thus “care robots” only in a behavioural sense of performing tasks in care environments, not in the sense that a human “cares” for the patients. It appears that the success of “being cared for” relies on this intentional sense of “care”, which foreseeable robots cannot provide. If anything, the risk of robots in care is the absence of such intentional care—because fewer human carers may be needed. Interestingly, caring for something, even a virtual agent, can be good for the carer themselves (Lee et al. 2019). A system that pretends to care would be deceptive and thus problematic—unless the deception is countered by sufficiently large utility gain (Coeckelbergh 2016). Some robots that pretend to “care” on a basic level are available (Paro seal) and others are in the making. Perhaps feeling cared for by a machine, to some extent, is progress for some patients.

2.5.2 Example (b) Sex Robots

It has been argued by several tech optimists that humans will likely be interested in sex and companionship with robots and be comfortable with the idea (Levy 2007). Given the variation of human sexual preferences, including sex toys and sex dolls, this seems very likely: The question is whether such devices should be manufactured and promoted, and whether there should be limits in this touchy area. It seems to have moved into the mainstream of “robot philosophy” in recent times (Sullins 2012; Danaher and McArthur 2017; N. Sharkey et al. 2017 [OIR]; Bendel 2018; Devlin 2018).

Humans have long had deep emotional attachments to objects, so perhaps companionship or even love with a predictable android is attractive, especially to people who struggle with actual humans, and already prefer dogs, cats, birds, a computer, or a Tamagotchi. Danaher (2019b) argues against (Nyholm and Frank 2017) that these can be true friendships, and that such friendship is thus a valuable goal. It certainly looks like such friendship might increase overall utility, even if lacking in depth. In these discussions there is an issue of deception, since a robot cannot (at present) mean what it says, or have feelings for a human. It is well known that humans are prone to attribute feelings and thoughts to entities that behave as if they had sentience, even to clearly inanimate objects that show no behaviour at all. Also, paying for deception seems to be an elementary part of the traditional sex industry.

Finally, there are concerns that have often accompanied matters of sex, namely consent (Frank and Nyholm 2017), aesthetic concerns, and the worry that humans may be “corrupted” by certain experiences. Old-fashioned though this may seem, human behaviour is influenced by experience, and it is likely that pornography or sex robots support the perception of other humans as mere objects of desire, or even recipients of abuse, and thus ruin a deeper sexual and erotic experience. In this vein, the “Campaign Against Sex Robots” argues that these devices are a continuation of slavery and prostitution (Richardson 2016).

It seems clear that AI and robotics will lead to significant gains in productivity and thus overall wealth. The attempt to increase productivity has often been a feature of the economy, though the emphasis on “growth” is a modern phenomenon (Harari 2016: 240). However, productivity gains through automation typically mean that fewer humans are required for the same output. This does not necessarily imply a loss of overall employment, however, because available wealth increases and that can increase demand sufficiently to counteract the productivity gain. In the long run, higher productivity in industrial societies has led to more wealth overall. Major labour market disruptions have occurred in the past, e.g., farming employed over 60% of the workforce in Europe and North America in 1800, while by 2010 it employed ca. 5% in the EU, and even less in the wealthiest countries (European Commission 2013). In the 20 years between 1950 and 1970 the number of hired agricultural workers in the UK was reduced by 50% (Zayed and Loft 2019). Some of these disruptions led to more labour-intensive industries moving to places with lower labour cost. This is an ongoing process.

Classic automation replaced human muscle, whereas digital automation replaces human thought or information-processing—and unlike physical machines, digital automation is very cheap to duplicate (Bostrom and Yudkowsky 2014). It may thus mean a more radical change on the labour market. So, the main question is: will the effects be different this time? Will the creation of new jobs and wealth keep up with the destruction of jobs? And even if it is not different, what are the transition costs, and who bears them? Do we need to make societal adjustments for a fair distribution of costs and benefits of digital automation?

Responses to the issue of unemployment from AI have ranged from the alarmed (Frey and Osborne 2013; Westlake 2014) to the neutral (Metcalf, Keller, and Boyd 2016 [OIR]; Calo 2018; Frey 2019) to the optimistic (Brynjolfsson and McAfee 2016; Harari 2016; Danaher 2019a). In principle, the labour market effect of automation seems to be fairly well understood as involving two channels:

(i) the nature of interactions between differently skilled workers and new technologies affecting labour demand and (ii) the equilibrium effects of technological progress through consequent changes in labour supply and product markets. (Goos 2018: 362)

What currently seems to happen in the labour market as a result of AI and robotics automation is “job polarisation” or the “dumbbell” shape (Goos, Manning, and Salomons 2009): The highly skilled technical jobs are in demand and highly paid, the low skilled service jobs are in demand and badly paid, but the mid-qualification jobs in factories and offices, i.e., the majority of jobs, are under pressure and reduced because they are relatively predictable, and most likely to be automated (Baldwin 2019).

Perhaps enormous productivity gains will allow the “age of leisure” to be realised, something (Keynes 1930) had predicted to occur around 2030, assuming a growth rate of 1% per annum. Actually, we have already reached the level he anticipated for 2030, but we are still working—consuming more and inventing ever more levels of organisation. Harari explains how this economic development allowed humanity to overcome hunger, disease, and war—and now we aim for immortality and eternal bliss through AI, thus his title Homo Deus (Harari 2016: 75).

In general terms, the issue of unemployment is an issue of how goods in a society should be justly distributed. A standard view is that distributive justice should be rationally decided from behind a “veil of ignorance” (Rawls 1971), i.e., as if one does not know what position in a society one would actually be taking (labourer or industrialist, etc.). Rawls thought the chosen principles would then support basic liberties and a distribution that is of greatest benefit to the least-advantaged members of society. It would appear that the AI economy has three features that make such justice unlikely: First, it operates in a largely unregulated environment where responsibility is often hard to allocate. Second, it operates in markets that have a “winner takes all” feature where monopolies develop quickly. Third, the “new economy” of the digital service industries is based on intangible assets, also called “capitalism without capital” (Haskel and Westlake 2017). This means that it is difficult to control multinational digital corporations that do not rely on a physical plant in a particular location. These three features seem to suggest that if we leave the distribution of wealth to free market forces, the result would be a heavily unjust distribution, and this is indeed a development that we can already see.

One interesting question that has not received much attention is whether the development of AI is environmentally sustainable: Like all computing systems, AI systems produce waste that is very hard to recycle and they consume vast amounts of energy, especially for the training of machine learning systems (and even for the “mining” of cryptocurrency). Again, it appears that some actors in this space offload such costs to the general society.

There are several notions of autonomy in the discussion of autonomous systems. A stronger notion is involved in philosophical debates where autonomy is the basis for responsibility and personhood (Christman 2003 [2018]). In this context, responsibility implies autonomy, but not vice versa, so there can be systems that have degrees of technical autonomy without raising issues of responsibility. The weaker, more technical, notion of autonomy in robotics is relative and gradual: A system is said to be autonomous with respect to human control to a certain degree (Müller 2012). There is a parallel here to the issues of bias and opacity in AI since autonomy also concerns a power-relation: who is in control, and who is responsible?

Generally speaking, one question is whether autonomous robots raise issues to which our present conceptual schemes must adapt, or whether they merely require technical adjustments. In most jurisdictions, there is a sophisticated system of civil and criminal liability to resolve such issues. Technical standards, e.g., for the safe use of machinery in medical environments, will likely need to be adjusted. There is already a field of “verifiable AI” for such safety-critical systems and for “security applications”. Bodies like the IEEE (The Institute of Electrical and Electronics Engineers) and the BSI (British Standards Institution) have produced “standards”, particularly on more technical sub-problems, such as data security and transparency. Among the many autonomous systems on land, on water, under water, in air or space, we discuss two samples: autonomous vehicles and autonomous weapons.

2.7.1 Example (a) Autonomous Vehicles

Autonomous vehicles hold the promise of reducing the very significant damage that human driving currently causes—approximately 1 million humans being killed per year, many more injured, the environment polluted, earth sealed with concrete and tarmac, cities full of parked cars, etc. However, there seem to be questions on how autonomous vehicles should behave, and how responsibility and risk should be distributed in the complicated system in which the vehicles operate. (There is also significant disagreement over how long the development of fully autonomous, or “level 5” cars (SAE International 2018) will actually take.)

There is some discussion of “trolley problems” in this context. In the classic “trolley problems” (Thomson 1976; Woollard and Howard-Snyder 2016: section 2) various dilemmas are presented. The simplest version is that of a trolley train on a track that is heading towards five people and will kill them, unless the train is diverted onto a side track, but on that track there is one person, who will be killed if the train takes that side track. The example goes back to a remark in (Foot 1967: 6), who discusses a number of dilemma cases where tolerated and intended consequences of an action differ. “Trolley problems” are not supposed to describe actual ethical problems or to be solved with a “right” choice. Rather, they are thought-experiments where choice is artificially constrained to a small finite number of distinct one-off options and where the agent has perfect knowledge. These problems are used as a theoretical tool to investigate ethical intuitions and theories—especially the difference between actively doing vs. allowing something to happen, intended vs. tolerated consequences, and consequentialist vs. other normative approaches (Kamm 2016). This type of problem has reminded many of the problems encountered in actual driving and in autonomous driving (Lin 2016). It is doubtful, however, that an actual driver or autonomous car will ever have to solve trolley problems (but see Keeling 2020). While autonomous car trolley problems have received a lot of media attention (Awad et al. 2018), they do not seem to offer anything new to either ethical theory or to the programming of autonomous vehicles.

The more common ethical problems in driving, such as speeding, risky overtaking, not keeping a safe distance, etc., are classic problems of pursuing personal interest vs. the common good. The vast majority of these are covered by legal regulations on driving. Programming the car to drive “by the rules” rather than “by the interest of the passengers” or “to achieve maximum utility” is thus deflated to a standard problem of programming ethical machines (see section 2.9). There are probably additional discretionary rules of politeness and interesting questions on when to break the rules (Lin 2016), but again this seems to be more a case of applying standard considerations (rules vs. utility) to the case of autonomous vehicles.

Notable policy efforts in this field include the report (German Federal Ministry of Transport and Digital Infrastructure 2017), which stresses that safety is the primary objective. Rule 10 states

In the case of automated and connected driving systems, the accountability that was previously the sole preserve of the individual shifts from the motorist to the manufacturers and operators of the technological systems and to the bodies responsible for taking infrastructure, policy and legal decisions.

(See section 2.10.1 below). The resulting German and EU laws on licensing automated driving are much more restrictive than their US counterparts where “testing on consumers” is a strategy used by some companies—without informed consent of the consumers or their possible victims.

2.7.2 Example (b) Autonomous Weapons

The notion of automated weapons is fairly old:

For example, instead of fielding simple guided missiles or remotely piloted vehicles, we might launch completely autonomous land, sea, and air vehicles capable of complex, far-ranging reconnaissance and attack missions. (DARPA 1983: 1)

This proposal was ridiculed as “fantasy” at the time (Dreyfus, Dreyfus, and Athanasiou 1986: ix), but it is now a reality, at least for more easily identifiable targets (missiles, planes, ships, tanks, etc.), though not for human combatants. The main arguments against (lethal) autonomous weapon systems (AWS or LAWS) are that they support extrajudicial killings, take responsibility away from humans, and make wars or killings more likely—for a detailed list of issues see Lin, Bekey, and Abney (2008: 73–86).

It appears that lowering the hurdle to use such systems (autonomous vehicles, “fire-and-forget” missiles, or drones loaded with explosives) and reducing the probability of being held accountable would increase the probability of their use. The crucial asymmetry where one side can kill with impunity, and thus has few reasons not to do so, already exists in conventional drone wars with remote controlled weapons (e.g., US in Pakistan). It is easy to imagine a small drone that searches, identifies, and kills an individual human—or perhaps a type of human. These are the kinds of cases brought forward by the Campaign to Stop Killer Robots and other activist groups. Some of these arguments seem to amount to saying that autonomous weapons are indeed weapons …, and weapons kill, but we still make them in gigantic numbers. On the matter of accountability, autonomous weapons might make identification and prosecution of the responsible agents more difficult—but this is not clear, given the digital records that one can keep, at least in a conventional war. The difficulty of allocating punishment is sometimes called the “retribution gap” (Danaher 2016a).

Another question is whether using autonomous weapons in war would make wars worse, or make wars less bad. If robots reduce war crimes and crimes in war, the answer may well be positive; this has been used as an argument in favour of these weapons (Arkin 2009; Müller 2016a), but also as an argument against them (Amoroso and Tamburrini 2018). Arguably the main threat is not the use of such weapons in conventional warfare, but in asymmetric conflicts or by non-state agents, including criminals.

It has also been said that autonomous weapons cannot conform to International Humanitarian Law, which requires observance of the principles of distinction (between combatants and civilians), proportionality (of force), and military necessity (of force) in military conflict (A. Sharkey 2019). It is true that the distinction between combatants and non-combatants is hard, but the distinction between civilian and military ships is easy—so all this says is that we should not construct and use such weapons if they do violate Humanitarian Law. Additional concerns have been raised that being killed by an autonomous weapon threatens human dignity, but even the defenders of a ban on these weapons seem to say that these are not good arguments:

There are other weapons, and other technologies, that also compromise human dignity. Given this, and the ambiguities inherent in the concept, it is wiser to draw on several types of objections in arguments against AWS, and not to rely exclusively on human dignity. (A. Sharkey 2019)

A lot has been made of keeping humans “in the loop” or “on the loop” in the military guidance on weapons—these ways of spelling out “meaningful control” are discussed in (Santoni de Sio and van den Hoven 2018). There have been discussions about the difficulties of allocating responsibility for the killings of an autonomous weapon, and a “responsibility gap” has been suggested (esp. Rob Sparrow 2007), meaning that neither the human nor the machine may be responsible. On the other hand, we do not assume that for every event there is someone responsible for that event, and the real issue may well be the distribution of risk (Simpson and Müller 2016). Risk analysis (Hansson 2013) indicates it is crucial to identify who is exposed to risk, who is a potential beneficiary, and who makes the decisions (Hansson 2018: 1822–1824).

Machine ethics is ethics for machines, for “ethical machines”, for machines as subjects, rather than for the human use of machines as objects. It is often not very clear whether this is supposed to cover all of AI ethics or to be a part of it (Floridi and Sanders 2004; Moor 2006; Anderson and Anderson 2011; Wallach and Asaro 2017). Sometimes it looks as though there is the (dubious) inference at play here that if machines act in ethically relevant ways, then we need a machine ethics. Accordingly, some use a broader notion:

machine ethics is concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable. (Anderson and Anderson 2007: 15)

This might include mere matters of product safety, for example. Other authors sound rather ambitious but use a narrower notion:

AI reasoning should be able to take into account societal values, moral and ethical considerations; weigh the respective priorities of values held by different stakeholders in various multicultural contexts; explain its reasoning; and guarantee transparency. (Dignum 2018: 1, 2)

Some of the discussion in machine ethics makes the very substantial assumption that machines can, in some sense, be ethical agents responsible for their actions, or “autonomous moral agents” (see van Wynsberghe and Robbins 2019). The basic idea of machine ethics is now finding its way into actual robotics where the assumption that these machines are artificial moral agents in any substantial sense is usually not made (Winfield et al. 2019). It is sometimes observed that a robot that is programmed to follow ethical rules can very easily be modified to follow unethical rules (Vanderelst and Winfield 2018).

The idea that machine ethics might take the form of “laws” has famously been investigated by Isaac Asimov, who proposed “three laws of robotics” (Asimov 1942):

First Law—A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law—A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law—A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov then showed in a number of stories how conflicts between these three laws will make it problematic to use them despite their hierarchical organisation.
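
To make the idea of a hierarchical organisation of rules concrete, here is a purely illustrative sketch (not drawn from Asimov or from the literature cited here): candidate actions are ranked by how badly they violate each law, highest-priority law first, and the least bad option is chosen. The action names and numeric violation judgments are invented for illustration; producing such judgments reliably is precisely the hard part the sketch assumes away, and Asimov's stories turn on situations where every available option violates some law.

```python
def rank(action):
    """Violation profile: (First Law, Second Law, Third Law); 0 means no violation."""
    return (action["harms_human"], action["disobeys_order"], action["endangers_self"])

candidates = [
    {"name": "obey the order to harm", "harms_human": 1, "disobeys_order": 0, "endangers_self": 0},
    {"name": "refuse the order",       "harms_human": 0, "disobeys_order": 1, "endangers_self": 0},
]

# Tuples compare lexicographically, so a First Law violation always outweighs the others.
best = min(candidates, key=rank)
print(best["name"])  # "refuse the order": the First Law outranks the Second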

It is not clear that there is a consistent notion of “machine ethics” since weaker versions are in danger of reducing “having an ethics” to notions that would not normally be considered sufficient (e.g., without “reflection” or even without “action”); stronger notions that move towards artificial moral agents may describe a—currently—empty set.

If one takes machine ethics to concern moral agents, in some substantial sense, then these agents can be called “artificial moral agents”, having rights and responsibilities. However, the discussion about artificial entities challenges a number of common notions in ethics and it can be very useful to understand these in abstraction from the human case (cf. Misselhorn 2020; Powers and Ganascia forthcoming).

Several authors use “artificial moral agent” in a less demanding sense, borrowing from the use of “agent” in software engineering in which case matters of responsibility and rights will not arise (Allen, Varner, and Zinser 2000). James Moor (2006) distinguishes four types of machine agents: ethical impact agents (e.g., robot jockeys), implicit ethical agents (e.g., safe autopilot), explicit ethical agents (e.g., using formal methods to estimate utility), and full ethical agents (who “can make explicit ethical judgments and generally is competent to reasonably justify them. An average adult human is a full ethical agent”.) Several ways to achieve “explicit” or “full” ethical agents have been proposed, via programming it in (operational morality), via “developing” the ethics itself (functional morality), and finally full-blown morality with full intelligence and sentience (Allen, Smit, and Wallach 2005; Moor 2006). Programmed agents are sometimes not considered “full” agents because they are “competent without comprehension”, just like the neurons in a brain (Dennett 2017; Hakli and Mäkelä 2019).

In some discussions, the notion of “moral patient” plays a role: Ethical agents have responsibilities while ethical patients have rights because harm to them matters. It seems clear that some entities are patients without being agents, e.g., simple animals that can feel pain but cannot make justified choices. On the other hand, it is normally understood that all agents will also be patients (e.g., in a Kantian framework). Usually, being a person is supposed to be what makes an entity a responsible agent, someone who can have duties and be the object of ethical concerns. Such personhood is typically a deep notion associated with phenomenal consciousness, intention and free will (Frankfurt 1971; Strawson 1998). Torrance (2011) suggests “artificial (or machine) ethics could be defined as designing machines that do things that, when done by humans, are indicative of the possession of ‘ethical status’ in those humans” (2011: 116)—which he takes to be “ethical productivity and ethical receptivity” (2011: 117)—his expressions for moral agents and patients.

2.9.1 Responsibility for Robots

There is broad consensus that accountability, liability, and the rule of law are basic requirements that must be upheld in the face of new technologies (European Group on Ethics in Science and New Technologies 2018, 18), but the issue in the case of robots is how this can be done and how responsibility can be allocated. If the robots act, will they themselves be responsible, liable, or accountable for their actions? Or should the distribution of risk perhaps take precedence over discussions of responsibility?

Traditional distribution of responsibility already occurs: A car maker is responsible for the technical safety of the car, a driver is responsible for driving, a mechanic is responsible for proper maintenance, the public authorities are responsible for the technical conditions of the roads, etc. In general

The effects of decisions or actions based on AI are often the result of countless interactions among many actors, including designers, developers, users, software, and hardware.… With distributed agency comes distributed responsibility. (Taddeo and Floridi 2018: 751).

How this distribution might occur is not a problem that is specific to AI, but it gains particular urgency in this context (Nyholm 2018a, 2018b). In classical control engineering, distributed control is often achieved through a control hierarchy plus control loops across these hierarchies.

2.9.2 Rights for Robots

Some authors have indicated that it should be seriously considered whether current robots must be allocated rights (Gunkel 2018a, 2018b; Danaher forthcoming; Turner 2019). This position seems to rely largely on criticism of its opponents and on the empirical observation that robots and other non-persons are sometimes treated as having rights. In this vein, a “relational turn” has been proposed: If we relate to robots as though they had rights, then we might be well-advised not to ask whether they “really” do have such rights (Coeckelbergh 2010, 2012, 2018). This raises the question how far such anti-realism or quasi-realism can go, and what it means then to say that “robots have rights” in a human-centred approach (Gerdes 2016). On the other side of the debate, Bryson has insisted that robots should not enjoy rights (Bryson 2010), though she considers it a possibility (Gunkel and Bryson 2014).

There is a wholly separate issue of whether robots (or other AI systems) should be given the status of “legal entities” or “legal persons”, in the sense in which natural persons, but also states, businesses, or organisations, are legal “entities”: they can have legal rights and duties. The European Parliament has considered allocating such status to robots in order to deal with civil liability (EU Parliament 2016; Bertolini and Aiello 2018), but not criminal liability—which is reserved for natural persons. It would also be possible to assign only a certain subset of rights and duties to robots. It has been said that “such legislative action would be morally unnecessary and legally troublesome” because it would not serve the interest of humans (Bryson, Diamantis, and Grant 2017: 273). In environmental ethics there is a long-standing discussion about the legal rights for natural objects like trees (C. D. Stone 1972).

It has also been said that the reasons for developing robots with rights, or artificial moral patients, in the future are ethically doubtful (van Wynsberghe and Robbins 2019). In the community of “artificial consciousness” researchers there is a significant concern whether it would be ethical to create such consciousness since creating it would presumably imply ethical obligations to a sentient being, e.g., not to harm it and not to end its existence by switching it off—some authors have called for a “moratorium on synthetic phenomenology” (Bentley et al. 2018: 28f).

2.10.1 Singularity and Superintelligence

In some quarters, the aim of current AI is thought to be an “artificial general intelligence” (AGI), contrasted to a technical or “narrow” AI. AGI is usually distinguished from traditional notions of AI as a general purpose system, and from Searle’s notion of “strong AI”:

computers given the right programs can be literally said to understand and have other cognitive states. (Searle 1980: 417)

The idea of singularity is that if the trajectory of artificial intelligence reaches up to systems that have a human level of intelligence, then these systems would themselves have the ability to develop AI systems that surpass the human level of intelligence, i.e., they are “superintelligent” (see below). Such superintelligent AI systems would quickly self-improve or develop even more intelligent systems. This sharp turn of events after reaching superintelligent AI is the “singularity” from which the development of AI is out of human control and hard to predict (Kurzweil 2005: 487).

The fear that “the robots we created will take over the world” had captured human imagination even before there were computers (e.g., Butler 1863) and is the central theme in Čapek’s famous play that introduced the word “robot” (Čapek 1920). This fear was first formulated as a possible trajectory of existing AI into an “intelligence explosion” by Irving John Good:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. (Good 1965: 33)

The optimistic argument from acceleration to singularity is spelled out by Kurzweil (1999, 2005, 2012) who essentially points out that computing power has been increasing exponentially, i.e., doubling ca. every 2 years since 1970 in accordance with “Moore’s Law” on the number of transistors, and will continue to do so for some time in the future. He predicted in (Kurzweil 1999) that by 2010 supercomputers would reach human computation capacity, that by 2030 “mind uploading” would be possible, and that by 2045 the “singularity” would occur. Kurzweil talks about an increase in computing power that can be purchased at a given cost—but of course in recent years the funds available to AI companies have also increased enormously: Amodei and Hernandez (2018 [OIR]) thus estimate that in the years 2012–2018 the actual computing power available to train a particular AI system doubled every 3.4 months, resulting in a 300,000x increase—not the 7x increase that doubling every two years would have created.
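
As a minimal illustration of how sensitive such estimates are to the assumed doubling time, the following sketch (not taken from Kurzweil or from Amodei and Hernandez) compares the growth factors implied by a 24-month and a 3.4-month doubling time over the same period; the 62-month window is an assumption chosen only to roughly match the 2012–2018 period discussed above.

```python
def growth_factor(months, doubling_time_months):
    """Factor by which a quantity grows if it doubles every `doubling_time_months` months."""
    return 2 ** (months / doubling_time_months)

window = 62  # months; an assumed stand-in for the 2012-2018 period discussed above
print(f"24-month doubling ('Moore's Law'): ~{growth_factor(window, 24):.0f}x")
print(f"3.4-month doubling (Amodei/Hernandez estimate): ~{growth_factor(window, 3.4):,.0f}x")
# Prints roughly 6x versus roughly 300,000x: the same period, but very different doubling times.
```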

A common version of this argument (Chalmers 2010) talks about an increase in “intelligence” of the AI system (rather than raw computing power), but the crucial point of “singularity” remains the one where further development of AI is taken over by AI systems and accelerates beyond human level. Bostrom (2014) explains in some detail what would happen at that point and what the risks for humanity are. The discussion is summarised in Eden et al. (2012); Armstrong (2014); Shanahan (2015). There are possible paths to superintelligence other than computing power increase, e.g., the complete emulation of the human brain on a computer (Kurzweil 2012; Sandberg 2013), biological paths, or networks and organisations (Bostrom 2014: 22–51).

Despite obvious weaknesses in the identification of “intelligence” with processing power, Kurzweil seems right that humans tend to underestimate the power of exponential growth. Mini-test: If you walked in steps in such a way that each step is double the previous, starting with a step of one metre, how far would you get with 30 steps? (answer: almost 3 times further than the Earth’s only permanent natural satellite.) Indeed, most progress in AI is readily attributable to the availability of processors that are faster by orders of magnitude, larger storage, and higher investment (Müller 2018). The actual acceleration and its speeds are discussed in (Müller and Bostrom 2016; Bostrom, Dafoe, and Flynn forthcoming); Sandberg (2019) argues that progress will continue for some time.
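
The mini-test can be checked with a few lines of straightforward arithmetic (the lunar distance is an approximate average value):

```python
# Step lengths are 1 m, 2 m, 4 m, ..., doubling each time, for 30 steps.
steps = [2 ** k for k in range(30)]
total_km = sum(steps) / 1000              # (2**30 - 1) metres, i.e. about 1.07 million km
moon_km = 384_400                         # approximate average Earth-Moon distance in km
print(round(total_km), round(total_km / moon_km, 1))  # ~1073742 km, ~2.8 lunar distances
```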

The participants in this debate are united by being technophiles in the sense that they expect technology to develop rapidly and bring broadly welcome changes—but beyond that, they divide into those who focus on benefits (e.g., Kurzweil) and those who focus on risks (e.g., Bostrom). Both camps sympathise with “transhuman” views of survival for humankind in a different physical form, e.g., uploaded on a computer (Moravec 1990, 1998; Bostrom 2003a, 2003c). They also consider the prospects of “human enhancement” in various respects, including intelligence—often called “IA” (intelligence augmentation). It may be that future AI will be used for human enhancement, or will contribute further to the dissolution of the neatly defined single human person. Robin Hanson provides detailed speculation about what will happen economically in case human “brain emulation” enables truly intelligent robots or “ems” (Hanson 2016).

The argument from superintelligence to risk requires the assumption that superintelligence does not imply benevolence—contrary to Kantian traditions in ethics that have argued higher levels of rationality or intelligence would go along with a better understanding of what is moral and better ability to act morally (Gewirth 1978; Chalmers 2010: 36f). Arguments for risk from superintelligence say that rationality and morality are entirely independent dimensions—this is sometimes explicitly argued for as an “orthogonality thesis” (Bostrom 2012; Armstrong 2013; Bostrom 2014: 105–109).

Criticism of the singularity narrative has been raised from various angles. Kurzweil and Bostrom seem to assume that intelligence is a one-dimensional property and that the set of intelligent agents is totally-ordered in the mathematical sense—but neither discusses intelligence at any length in their books. Generally, it is fair to say that despite some efforts, the assumptions made in the powerful narrative of superintelligence and singularity have not been investigated in detail. One question is whether such a singularity will ever occur—it may be conceptually impossible, practically impossible or may just not happen because of contingent events, including people actively preventing it. Philosophically, the interesting question is whether singularity is just a “myth” (Floridi 2016; Ganascia 2017), and not on the trajectory of actual AI research. This is something that practitioners often assume (e.g., Brooks 2017 [OIR]). They may do so because they fear the public relations backlash, because they overestimate the practical problems, or because they have good reasons to think that superintelligence is an unlikely outcome of current AI research (Müller forthcoming-a). This discussion raises the question whether the concern about “singularity” is just a narrative about fictional AI based on human fears. But even if one does find negative reasons compelling and the singularity not likely to occur, there is still a significant possibility that one may turn out to be wrong. Philosophy is not on the “secure path of a science” (Kant 1791: B15), and maybe AI and robotics aren’t either (Müller 2020). So, it appears that discussing the very high-impact risk of singularity has justification even if one thinks the probability of such singularity ever occurring is very low.

2.10.2 Existential Risk from Superintelligence

Thinking about superintelligence in the long term raises the question whether superintelligence may lead to the extinction of the human species, which is called an “existential risk” (or XRisk): The superintelligent systems may well have preferences that conflict with the existence of humans on Earth, and may thus decide to end that existence—and given their superior intelligence, they will have the power to do so (or they may happen to end it because they do not really care).

Thinking in the long term is the crucial feature of this literature. Whether the singularity (or another catastrophic event) occurs in 30 or 300 or 3000 years does not really matter (Baum et al. 2019). Perhaps there is even an astronomical pattern such that an intelligent species is bound to discover AI at some point, and thus bring about its own demise. Such a “great filter” would contribute to the explanation of the “Fermi paradox” why there is no sign of life in the known universe despite the high probability of it emerging. It would be bad news if we found out that the “great filter” is ahead of us, rather than an obstacle that Earth has already passed. These issues are sometimes taken more narrowly to be about human extinction (Bostrom 2013), or more broadly as concerning any large risk for the species (Rees 2018)—of which AI is only one (Häggström 2016; Ord 2020). Bostrom also uses the category of “global catastrophic risk” for risks that are sufficiently high up the two dimensions of “scope” and “severity” (Bostrom and Ćirković 2011; Bostrom 2013).

These discussions of risk are usually not connected to the general problem of ethics under risk (e.g., Hansson 2013, 2018). The long-term view has its own methodological challenges but has produced a wide discussion: (Tegmark 2017) focuses on AI and human life “3.0” after singularity while Russell, Dewey, and Tegmark (2015) and Bostrom, Dafoe, and Flynn (forthcoming) survey longer-term policy issues in ethical AI. Several collections of papers have investigated the risks of artificial general intelligence (AGI) and the factors that might make this development more or less risk-laden (Müller 2016b; Callaghan et al. 2017; Yampolskiy 2018), including the development of non-agent AI (Drexler 2019).

2.10.3 Controlling Superintelligence?

In a narrow sense, the “control problem” is how we humans can remain in control of an AI system once it is superintelligent (Bostrom 2014: 127ff). In a wider sense, it is the problem of how we can make sure an AI system will turn out to be positive according to human perception (Russell 2019); this is sometimes called “value alignment”. How easy or hard it is to control a superintelligence depends significantly on the speed of “take-off” to a superintelligent system. This has led to particular attention to systems with self-improvement, such as AlphaZero (Silver et al. 2018).

One aspect of this problem is that we might decide a certain feature is desirable, but then find out that it has unforeseen consequences that are so negative that we would not desire that feature after all. This is the ancient problem of King Midas who wished that all he touched would turn into gold. This problem has been discussed using various examples, such as the “paperclip maximiser” (Bostrom 2003b), or the program to optimise chess performance (Omohundro 2014).
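
The underlying structure of such examples can be shown in a small sketch (the candidate plans and numbers are invented for illustration and are not taken from Bostrom or Omohundro): an optimiser that maximises the stated objective literally will pick outcomes the designer never intended.

```python
candidates = [
    {"plan": "make some paperclips, leave everything else intact", "paperclips": 10_000},
    {"plan": "convert all available resources into paperclips",    "paperclips": 10**12},
]

stated_objective = lambda c: c["paperclips"]    # what was asked for, taken literally
chosen = max(candidates, key=stated_objective)  # what a literal optimiser delivers
print(chosen["plan"])  # "convert all available resources into paperclips"
```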

Discussions about superintelligence include speculation about omniscient beings, the radical changes on a “latter day”, and the promise of immortality through transcendence of our current bodily form—so sometimes they have clear religious undertones (Capurro 1993; Geraci 2008, 2010; O’Connell 2017: 160ff). These issues also pose a well-known problem of epistemology: Can we know the ways of the omniscient (Danaher 2015)? The usual opponents have already shown up: A characteristic response of an atheist is

People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world (Domingos 2015)

The new nihilists explain that a “techno-hypnosis” through information technologies has now become our main method of distraction from the loss of meaning (Gertz 2018). Both opponents would thus say we need an ethics for the “small” problems that occur with actual AI and robotics (sections 2.1 through 2.9 above), and that there is less need for the “big ethics” of existential risk from AI (section 2.10).

The singularity thus raises the problem of the concept of AI again. It is remarkable how imagination or “vision” has played a central role since the very beginning of the discipline at the “Dartmouth Summer Research Project” (McCarthy et al. 1955 [OIR]; Simon and Newell 1958). And the evaluation of this vision is subject to dramatic change: In a few decades, we went from the slogans “AI is impossible” (Dreyfus 1972) and “AI is just automation” (Lighthill 1973) to “AI will solve all problems” (Kurzweil 1999) and “AI may kill us all” (Bostrom 2014). This created media attention and public relations efforts, but it also raises the problem of how much of this “philosophy and ethics of AI” is really about AI rather than about an imagined technology. As we said at the outset, AI and robotics have raised fundamental questions about what we should do with these systems, what the systems themselves should do, and what risks they have in the long term. They also challenge the human view of humanity as the intelligent and dominant species on Earth. We have seen issues that have been raised and will have to watch technological and social developments closely to catch the new issues early on, develop a philosophical analysis, and learn from traditional problems of philosophy.

NOTE: Citations in the main text annotated “[OIR]” may be found in the Other Internet Resources section below, not in the Bibliography.

  • Abowd, John M, 2017, “How Will Statistical Agencies Operate When All Data Are Private?”, Journal of Privacy and Confidentiality , 7(3): 1–15. doi:10.29012/jpc.v7i3.404
  • AI4EU, 2019, “Outcomes from the Strategic Orientation Workshop (Deliverable 7.1)”, (June 28, 2019). https://www.ai4eu.eu/ai4eu-project-deliverables
  • Allen, Colin, Iva Smit, and Wendell Wallach, 2005, “Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches”, Ethics and Information Technology , 7(3): 149–155. doi:10.1007/s10676-006-0004-4
  • Allen, Colin, Gary Varner, and Jason Zinser, 2000, “Prolegomena to Any Future Artificial Moral Agent”, Journal of Experimental & Theoretical Artificial Intelligence , 12(3): 251–261. doi:10.1080/09528130050111428
  • Amoroso, Daniele and Guglielmo Tamburrini, 2018, “The Ethical and Legal Case Against Autonomy in Weapons Systems”, Global Jurist , 18(1): art. 20170012. doi:10.1515/gj-2017-0012
  • Anderson, Janna, Lee Rainie, and Alex Luchsinger, 2018, Artificial Intelligence and the Future of Humans , Washington, DC: Pew Research Center.
  • Anderson, Michael and Susan Leigh Anderson, 2007, “Machine Ethics: Creating an Ethical Intelligent Agent”, AI Magazine , 28(4): 15–26.
  • ––– (eds.), 2011, Machine Ethics , Cambridge: Cambridge University Press. doi:10.1017/CBO9780511978036
  • Aneesh, A., 2006, Virtual Migration: The Programming of Globalization , Durham, NC and London: Duke University Press.
  • Arkin, Ronald C., 2009, Governing Lethal Behavior in Autonomous Robots , Boca Raton, FL: CRC Press.
  • Armstrong, Stuart, 2013, “General Purpose Intelligence: Arguing the Orthogonality Thesis”, Analysis and Metaphysics , 12: 68–84.
  • –––, 2014, Smarter Than Us , Berkeley, CA: MIRI.
  • Arnold, Thomas and Matthias Scheutz, 2017, “Beyond Moral Dilemmas: Exploring the Ethical Landscape in HRI”, in Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction—HRI ’17 , Vienna, Austria: ACM Press, 445–452. doi:10.1145/2909824.3020255
  • Asaro, Peter M., 2019, “AI Ethics in Predictive Policing: From Models of Threat to an Ethics of Care”, IEEE Technology and Society Magazine , 38(2): 40–53. doi:10.1109/MTS.2019.2915154
  • Asimov, Isaac, 1942, “Runaround: A Short Story”, Astounding Science Fiction , March 1942. Reprinted in “I, Robot”, New York: Gnome Press 1950, 1940ff.
  • Awad, Edmond, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan, 2018, “The Moral Machine Experiment”, Nature , 563(7729): 59–64. doi:10.1038/s41586-018-0637-6
  • Baldwin, Richard, 2019, The Globotics Upheaval: Globalisation, Robotics and the Future of Work , New York: Oxford University Press.
  • Baum, Seth D., Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin, and Roman V. Yampolskiy, 2019, “Long-Term Trajectories of Human Civilization”, Foresight , 21(1): 53–83. doi:10.1108/FS-04-2018-0037
  • Bendel, Oliver, 2018, “Sexroboter aus Sicht der Maschinenethik”, in Handbuch Filmtheorie , Bernhard Groß and Thomas Morsch (eds.), (Springer Reference Geisteswissenschaften), Wiesbaden: Springer Fachmedien Wiesbaden, 1–19. doi:10.1007/978-3-658-17484-2_22-1
  • Bennett, Colin J. and Charles Raab, 2006, The Governance of Privacy: Policy Instruments in Global Perspective , second edition, Cambridge, MA: MIT Press.
  • Benthall, Sebastian and Bruce D. Haynes, 2019, “Racial Categories in Machine Learning”, in Proceedings of the Conference on Fairness, Accountability, and Transparency - FAT* ’19 , Atlanta, GA, USA: ACM Press, 289–298. doi:10.1145/3287560.3287575
  • Bentley, Peter J., Miles Brundage, Olle Häggström, and Thomas Metzinger, 2018, “Should We Fear Artificial Intelligence? In-Depth Analysis”, European Parliamentary Research Service, Scientific Foresight Unit (STOA), March 2018, PE 614.547, 1–40. [ Bentley et al. 2018 available online ]
  • Bertolini, Andrea and Giuseppe Aiello, 2018, “Robot Companions: A Legal and Ethical Analysis”, The Information Society , 34(3): 130–140. doi:10.1080/01972243.2018.1444249
  • Binns, Reuben, 2018, “Fairness in Machine Learning: Lessons from Political Philosophy”, Proceedings of the 1st Conference on Fairness, Accountability and Transparency , in Proceedings of Machine Learning Research , 81: 149–159.
  • Bostrom, Nick, 2003a, “Are We Living in a Computer Simulation?”, The Philosophical Quarterly , 53(211): 243–255. doi:10.1111/1467-9213.00309
  • –––, 2003b, “Ethical Issues in Advanced Artificial Intelligence”, in Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Volume 2 , Iva Smit, Wendell Wallach, and G.E. Lasker (eds), (IIAS-147-2003), Tecumseh, ON: International Institute of Advanced Studies in Systems Research and Cybernetics, 12–17. [ Bostrom 2003b revised available online ]
  • –––, 2003c, “Transhumanist Values”, in Ethical Issues for the Twenty-First Century , Frederick Adams (ed.), Bowling Green, OH: Philosophical Documentation Center Press.
  • –––, 2012, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents”, Minds and Machines , 22(2): 71–85. doi:10.1007/s11023-012-9281-3
  • –––, 2013, “Existential Risk Prevention as Global Priority”, Global Policy , 4(1): 15–31. doi:10.1111/1758-5899.12002
  • –––, 2014, Superintelligence: Paths, Dangers, Strategies , Oxford: Oxford University Press.
  • Bostrom, Nick and Milan M. Ćirković (eds.), 2011, Global Catastrophic Risks , New York: Oxford University Press.
  • Bostrom, Nick, Allan Dafoe, and Carrick Flynn, forthcoming, “Policy Desiderata for Superintelligent AI: A Vector Field Approach (V. 4.3)”, in Ethics of Artificial Intelligence , S Matthew Liao (ed.), New York: Oxford University Press. [ Bostrom, Dafoe, and Flynn forthcoming – preprint available online ]
  • Bostrom, Nick and Eliezer Yudkowsky, 2014, “The Ethics of Artificial Intelligence”, in The Cambridge Handbook of Artificial Intelligence , Keith Frankish and William M. Ramsey (eds.), Cambridge: Cambridge University Press, 316–334. doi:10.1017/CBO9781139046855.020 [ Bostrom and Yudkowsky 2014 available online ]
  • Bradshaw, Samantha, Lisa-Maria Neudert, and Phil Howard, 2019, “Government Responses to Malicious Use of Social Media”, Working Paper 2019.2, Oxford: Project on Computational Propaganda. [ Bradshaw, Neudert, and Howard 2019 available online/ ]
  • Brownsword, Roger, Eloise Scotford, and Karen Yeung (eds.), 2017, The Oxford Handbook of Law, Regulation and Technology , Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199680832.001.0001
  • Brynjolfsson, Erik and Andrew McAfee, 2016, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies , New York: W. W. Norton.
  • Bryson, Joanna J., 2010, “Robots Should Be Slaves”, in Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues , Yorick Wilks (ed.), (Natural Language Processing 8), Amsterdam: John Benjamins Publishing Company, 63–74. doi:10.1075/nlp.8.11bry
  • –––, 2019, “The Past Decade and Future of AI’s Impact on Society”, in Towards a New Enlightenment: A Transcendent Decade , Madrid: Turner - BBVA. [ Bryson 2019 available online ]
  • Bryson, Joanna J., Mihailis E. Diamantis, and Thomas D. Grant, 2017, “Of, for, and by the People: The Legal Lacuna of Synthetic Persons”, Artificial Intelligence and Law , 25(3): 273–291. doi:10.1007/s10506-017-9214-9
  • Burr, Christopher and Nello Cristianini, 2019, “Can Machines Read Our Minds?”, Minds and Machines , 29(3): 461–494. doi:10.1007/s11023-019-09497-4
  • Butler, Samuel, 1863, “Darwin among the Machines: Letter to the Editor”, Letter in The Press (Christchurch) , 13 June 1863. [ Butler 1863 available online ]
  • Callaghan, Victor, James Miller, Roman Yampolskiy, and Stuart Armstrong (eds.), 2017, The Technological Singularity: Managing the Journey , (The Frontiers Collection), Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-662-54033-6
  • Calo, Ryan, 2018, “Artificial Intelligence Policy: A Primer and Roadmap”, University of Bologna Law Review , 3(2): 180-218. doi:10.6092/ISSN.2531-6133/8670
  • Calo, Ryan, A. Michael Froomkin, and Ian Kerr (eds.), 2016, Robot Law , Cheltenham: Edward Elgar.
  • Čapek, Karel, 1920, R.U.R. , Prague: Aventium. Translated by Peter Majer and Cathy Porter, London: Methuen, 1999.
  • Capurro, Raphael, 1993, “Ein Grinsen Ohne Katze: Von der Vergleichbarkeit Zwischen ‘Künstlicher Intelligenz’ und ‘Getrennten Intelligenzen’”, Zeitschrift für philosophische Forschung , 47: 93–102.
  • Cave, Stephen, 2019, “To Save Us from a Kafkaesque Future, We Must Democratise AI”, The Guardian , 04 January 2019. [ Cave 2019 available online ]
  • Chalmers, David J., 2010, “The Singularity: A Philosophical Analysis”, Journal of Consciousness Studies , 17(9–10): 7–65. [ Chalmers 2010 available online ]
  • Christman, John, 2003 [2018], “Autonomy in Moral and Political Philosophy”, The Stanford Encyclopedia of Philosophy (Spring 2018 Edition), Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/spr2018/entries/autonomy-moral/ >
  • Coeckelbergh, Mark, 2010, “Robot Rights? Towards a Social-Relational Justification of Moral Consideration”, Ethics and Information Technology , 12(3): 209–221. doi:10.1007/s10676-010-9235-5
  • –––, 2012, Growing Moral Relations: Critique of Moral Status Ascription , London: Palgrave. doi:10.1057/9781137025968
  • –––, 2016, “Care Robots and the Future of ICT-Mediated Elderly Care: A Response to Doom Scenarios”, AI & Society , 31(4): 455–462. doi:10.1007/s00146-015-0626-3
  • –––, 2018, “What Do We Mean by a Relational Ethics? Growing a Relational Approach to the Moral Standing of Plants, Robots and Other Non-Humans”, in Plant Ethics: Concepts and Applications , Angela Kallhoff, Marcello Di Paola, and Maria Schörgenhumer (eds.), London: Routledge, 110–121.
  • Crawford, Kate and Ryan Calo, 2016, “There Is a Blind Spot in AI Research”, Nature , 538(7625): 311–313. doi:10.1038/538311a
  • Cristianini, Nello, forthcoming, “Shortcuts to Artificial Intelligence”, in Machines We Trust , Marcello Pelillo and Teresa Scantamburlo (eds.), Cambridge, MA: MIT Press. [ Cristianini forthcoming – preprint available online ]
  • Danaher, John, 2015, “Why AI Doomsayers Are Like Sceptical Theists and Why It Matters”, Minds and Machines , 25(3): 231–246. doi:10.1007/s11023-015-9365-y
  • –––, 2016a, “Robots, Law and the Retribution Gap”, Ethics and Information Technology , 18(4): 299–309. doi:10.1007/s10676-016-9403-3
  • –––, 2016b, “The Threat of Algocracy: Reality, Resistance and Accommodation”, Philosophy & Technology , 29(3): 245–268. doi:10.1007/s13347-015-0211-1
  • –––, 2019a, Automation and Utopia: Human Flourishing in a World without Work , Cambridge, MA: Harvard University Press.
  • –––, 2019b, “The Philosophical Case for Robot Friendship”, Journal of Posthuman Studies , 3(1): 5–24. doi:10.5325/jpoststud.3.1.0005
  • –––, forthcoming, “Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism”, Science and Engineering Ethics , first online: 20 June 2019. doi:10.1007/s11948-019-00119-x
  • Danaher, John and Neil McArthur (eds.), 2017, Robot Sex: Social and Ethical Implications , Boston, MA: MIT Press.
  • DARPA, 1983, “Strategic Computing. New-Generation Computing Technology: A Strategic Plan for Its Development and Application to Critical Problems in Defense”, ADA141982, 28 October 1983. [ DARPA 1983 available online ]
  • Dennett, Daniel C, 2017, From Bacteria to Bach and Back: The Evolution of Minds , New York: W.W. Norton.
  • Devlin, Kate, 2018, Turned On: Science, Sex and Robots , London: Bloomsbury.
  • Diakopoulos, Nicholas, 2015, “Algorithmic Accountability: Journalistic Investigation of Computational Power Structures”, Digital Journalism , 3(3): 398–415. doi:10.1080/21670811.2014.976411
  • Dignum, Virginia, 2018, “Ethics in Artificial Intelligence: Introduction to the Special Issue”, Ethics and Information Technology , 20(1): 1–3. doi:10.1007/s10676-018-9450-z
  • Domingos, Pedro, 2015, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World , London: Allen Lane.
  • Draper, Heather, Tom Sorell, Sandra Bedaf, Dag Sverre Syrdal, Carolina Gutierrez-Ruiz, Alexandre Duclos, and Farshid Amirabdollahian, 2014, “Ethical Dimensions of Human-Robot Interactions in the Care of Older People: Insights from 21 Focus Groups Convened in the UK, France and the Netherlands”, in International Conference on Social Robotics 2014 , Michael Beetz, Benjamin Johnston, and Mary-Anne Williams (eds.), (Lecture Notes in Artificial Intelligence 8755), Cham: Springer International Publishing, 135–145. doi:10.1007/978-3-319-11973-1_14
  • Dressel, Julia and Hany Farid, 2018, “The Accuracy, Fairness, and Limits of Predicting Recidivism”, Science Advances , 4(1): eaao5580. doi:10.1126/sciadv.aao5580
  • Drexler, K. Eric, 2019, “Reframing Superintelligence: Comprehensive AI Services as General Intelligence”, FHI Technical Report, 2019-1, 1-210. [ Drexler 2019 available online ]
  • Dreyfus, Hubert L., 1972, What Computers Still Can’t Do: A Critique of Artificial Reason , second edition, Cambridge, MA: MIT Press 1992.
  • Dreyfus, Hubert L., Stuart E. Dreyfus, and Tom Athanasiou, 1986, Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer , New York: Free Press.
  • Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith, 2006, “Calibrating Noise to Sensitivity in Private Data Analysis”, in Theory of Cryptography (TCC 2006) , Berlin, Heidelberg: Springer, 265–284.
  • Eden, Amnon H., James H. Moor, Johnny H. Søraker, and Eric Steinhart (eds.), 2012, Singularity Hypotheses: A Scientific and Philosophical Assessment , (The Frontiers Collection), Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-32560-1
  • Eubanks, Virginia, 2018, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor , London: St. Martin’s Press.
  • European Commission, 2013, “How Many People Work in Agriculture in the European Union? An Answer Based on Eurostat Data Sources”, EU Agricultural Economics Briefs , 8 (July 2013). [ Anonymous 2013 available online ]
  • European Group on Ethics in Science and New Technologies, 2018, “Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems”, 9 March 2018, European Commission, Directorate-General for Research and Innovation, Unit RTD.01. [ European Group 2018 available online ]
  • Ferguson, Andrew Guthrie, 2017, The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement , New York: NYU Press.
  • Floridi, Luciano, 2016, “Should We Be Afraid of AI? Machines Seem to Be Getting Smarter and Smarter and Much Better at Human Jobs, yet True AI Is Utterly Implausible. Why?”, Aeon , 9 May 2016. URL = < Floridi 2016 available online >
  • Floridi, Luciano, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke, and Effy Vayena, 2018, “AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations”, Minds and Machines , 28(4): 689–707. doi:10.1007/s11023-018-9482-5
  • Floridi, Luciano and Jeff W. Sanders, 2004, “On the Morality of Artificial Agents”, Minds and Machines , 14(3): 349–379. doi:10.1023/B:MIND.0000035461.63578.9d
  • Floridi, Luciano and Mariarosaria Taddeo, 2016, “What Is Data Ethics?”, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences , 374(2083): 20160360. doi:10.1098/rsta.2016.0360
  • Foot, Philippa, 1967, “The Problem of Abortion and the Doctrine of the Double Effect”, Oxford Review , 5: 5–15.
  • Fosch-Villaronga, Eduard and Jordi Albo-Canals, 2019, “‘I’ll Take Care of You,’ Said the Robot”, Paladyn, Journal of Behavioral Robotics , 10(1): 77–93. doi:10.1515/pjbr-2019-0006
  • Frank, Lily and Sven Nyholm, 2017, “Robot Sex and Consent: Is Consent to Sex between a Robot and a Human Conceivable, Possible, and Desirable?”, Artificial Intelligence and Law , 25(3): 305–323. doi:10.1007/s10506-017-9212-y
  • Frankfurt, Harry G., 1971, “Freedom of the Will and the Concept of a Person”, The Journal of Philosophy , 68(1): 5–20.
  • Frey, Carl Benedict, 2019, The Technology Trap: Capital, Labour, and Power in the Age of Automation , Princeton, NJ: Princeton University Press.
  • Frey, Carl Benedikt and Michael A. Osborne, 2013, “The Future of Employment: How Susceptible Are Jobs to Computerisation?”, Oxford Martin School Working Papers, 17 September 2013. [ Frey and Osborne 2013 available online ]
  • Ganascia, Jean-Gabriel, 2017, Le Mythe De La Singularité , Paris: Éditions du Seuil.
  • EU Parliament, 2016, “Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(Inl))”, Committee on Legal Affairs , 10.11.2016. https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html
  • EU Regulation, 2016/679, “General Data Protection Regulation: Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/Ec”, Official Journal of the European Union , 119 (4 May 2016), 1–88. [ Regulation (EU) 2016/679 available online ]
  • Geraci, Robert M., 2008, “Apocalyptic AI: Religion and the Promise of Artificial Intelligence”, Journal of the American Academy of Religion , 76(1): 138–166. doi:10.1093/jaarel/lfm101
  • –––, 2010, Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780195393026.001.0001
  • Gerdes, Anne, 2016, “The Issue of Moral Consideration in Robot Ethics”, ACM SIGCAS Computers and Society , 45(3): 274–279. doi:10.1145/2874239.2874278
  • German Federal Ministry of Transport and Digital Infrastructure, 2017, “Report of the Ethics Commission: Automated and Connected Driving”, June 2017, 1–36. [ GFMTDI 2017 available online ]
  • Gertz, Nolen, 2018, Nihilism and Technology , London: Rowman & Littlefield.
  • Gewirth, Alan, 1978, “The Golden Rule Rationalized”, Midwest Studies in Philosophy , 3(1): 133–147. doi:10.1111/j.1475-4975.1978.tb00353.x
  • Gibert, Martin, 2019, “Éthique Artificielle (Version Grand Public)”, in L’Encyclopédie Philosophique , Maxime Kristanek (ed.), accessed: 16 April 2020, URL = < Gibert 2019 available online >
  • Giubilini, Alberto and Julian Savulescu, 2018, “The Artificial Moral Advisor. The ‘Ideal Observer’ Meets Artificial Intelligence”, Philosophy & Technology , 31(2): 169–188. doi:10.1007/s13347-017-0285-z
  • Good, Irving John, 1965, “Speculations Concerning the First Ultraintelligent Machine”, in Advances in Computers 6 , Franz L. Alt and Morris Rubinoff (eds.), New York & London: Academic Press, 31–88. doi:10.1016/S0065-2458(08)60418-0
  • Goodfellow, Ian, Yoshua Bengio, and Aaron Courville, 2016, Deep Learning , Cambridge, MA: MIT Press.
  • Goodman, Bryce and Seth Flaxman, 2017, “European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation’”, AI Magazine , 38(3): 50–57. doi:10.1609/aimag.v38i3.2741
  • Goos, Maarten, 2018, “The Impact of Technological Progress on Labour Markets: Policy Challenges”, Oxford Review of Economic Policy , 34(3): 362–375. doi:10.1093/oxrep/gry002
  • Goos, Maarten, Alan Manning, and Anna Salomons, 2009, “Job Polarization in Europe”, American Economic Review , 99(2): 58–63. doi:10.1257/aer.99.2.58
  • Graham, Sandra and Brian S. Lowery, 2004, “Priming Unconscious Racial Stereotypes about Adolescent Offenders”, Law and Human Behavior , 28(5): 483–504. doi:10.1023/B:LAHU.0000046430.65485.1f
  • Gunkel, David J., 2018a, “The Other Question: Can and Should Robots Have Rights?”, Ethics and Information Technology , 20(2): 87–99. doi:10.1007/s10676-017-9442-4
  • –––, 2018b, Robot Rights , Boston, MA: MIT Press.
  • Gunkel, David J. and Joanna J. Bryson (eds.), 2014, Machine Morality: The Machine as Moral Agent and Patient special issue of Philosophy & Technology , 27(1): 1–142.
  • Häggström, Olle, 2016, Here Be Dragons: Science, Technology and the Future of Humanity , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780198723547.001.0001
  • Hakli, Raul and Pekka Mäkelä, 2019, “Moral Responsibility of Robots and Hybrid Agents”, The Monist , 102(2): 259–275. doi:10.1093/monist/onz009
  • Hanson, Robin, 2016, The Age of Em: Work, Love and Life When Robots Rule the Earth , Oxford: Oxford University Press.
  • Hansson, Sven Ove, 2013, The Ethics of Risk: Ethical Analysis in an Uncertain World , New York: Palgrave Macmillan.
  • –––, 2018, “How to Perform an Ethical Risk Analysis (eRA)”, Risk Analysis , 38(9): 1820–1829. doi:10.1111/risa.12978
  • Harari, Yuval Noah, 2016, Homo Deus: A Brief History of Tomorrow , New York: Harper.
  • Haskel, Jonathan and Stian Westlake, 2017, Capitalism without Capital: The Rise of the Intangible Economy , Princeton, NJ: Princeton University Press.
  • Houkes, Wybo and Pieter E. Vermaas, 2010, Technical Functions: On the Use and Design of Artefacts , (Philosophy of Engineering and Technology 1), Dordrecht: Springer Netherlands. doi:10.1007/978-90-481-3900-2
  • IEEE, 2019, Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems (First Version), < IEEE 2019 available online >.
  • Jasanoff, Sheila, 2016, The Ethics of Invention: Technology and the Human Future , New York: Norton.
  • Jecker, Nancy S., forthcoming, Ending Midlife Bias: New Values for Old Age , New York: Oxford University Press.
  • Jobin, Anna, Marcello Ienca, and Effy Vayena, 2019, “The Global Landscape of AI Ethics Guidelines”, Nature Machine Intelligence , 1(9): 389–399. doi:10.1038/s42256-019-0088-2
  • Johnson, Deborah G. and Mario Verdicchio, 2017, “Reframing AI Discourse”, Minds and Machines , 27(4): 575–590. doi:10.1007/s11023-017-9417-6
  • Kahneman, Daniel, 2011, Thinking, Fast and Slow , London: Macmillan.
  • Kamm, Frances Myrna, 2016, The Trolley Problem Mysteries , Eric Rakowski (ed.), Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780190247157.001.0001
  • Kant, Immanuel, 1781/1787, Kritik der reinen Vernunft . Translated as Critique of Pure Reason , Norman Kemp Smith (trans.), London: Palgrave Macmillan, 1929.
  • Keeling, Geoff, 2020, “Why Trolley Problems Matter for the Ethics of Automated Vehicles”, Science and Engineering Ethics , 26(1): 293–307. doi:10.1007/s11948-019-00096-1
  • Keynes, John Maynard, 1930, “Economic Possibilities for Our Grandchildren”. Reprinted in his Essays in Persuasion , New York: Harcourt Brace, 1932, 358–373.
  • Kissinger, Henry A., 2018, “How the Enlightenment Ends: Philosophically, Intellectually—in Every Way—Human Society Is Unprepared for the Rise of Artificial Intelligence”, The Atlantic , June 2018. [ Kissinger 2018 available online ]
  • Kurzweil, Ray, 1999, The Age of Spiritual Machines: When Computers Exceed Human Intelligence , London: Penguin.
  • –––, 2005, The Singularity Is Near: When Humans Transcend Biology , London: Viking.
  • –––, 2012, How to Create a Mind: The Secret of Human Thought Revealed , New York: Viking.
  • Lee, Minha, Sander Ackermans, Nena van As, Hanwen Chang, Enzo Lucas, and Wijnand IJsselsteijn, 2019, “Caring for Vincent: A Chatbot for Self-Compassion”, in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems—CHI ’19 , Glasgow, Scotland: ACM Press, 1–13. doi:10.1145/3290605.3300932
  • Levy, David, 2007, Love and Sex with Robots: The Evolution of Human-Robot Relationships , New York: Harper & Co.
  • Lighthill, James, 1973, “Artificial Intelligence: A General Survey”, Artificial intelligence: A Paper Symposion , London: Science Research Council. [ Lighthill 1973 available online ]
  • Lin, Patrick, 2016, “Why Ethics Matters for Autonomous Cars”, in Autonomous Driving , Markus Maurer, J. Christian Gerdes, Barbara Lenz, and Hermann Winner (eds.), Berlin, Heidelberg: Springer Berlin Heidelberg, 69–85. doi:10.1007/978-3-662-48847-8_4
  • Lin, Patrick, Keith Abney, and Ryan Jenkins (eds.), 2017, Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence , New York: Oxford University Press. doi:10.1093/oso/9780190652951.001.0001
  • Lin, Patrick, George Bekey, and Keith Abney, 2008, “Autonomous Military Robotics: Risk, Ethics, and Design”, ONR report, California Polytechnic State University, San Luis Obispo, 20 December 2008), 112 pp. [ Lin, Bekey, and Abney 2008 available online ]
  • Lomas, Meghann, Robert Chevalier, Ernest Vincent Cross, Robert Christopher Garrett, John Hoare, and Michael Kopack, 2012, “Explaining Robot Actions”, in Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction—HRI ’12 , Boston, MA: ACM Press, 187–188. doi:10.1145/2157689.2157748
  • Macnish, Kevin, 2017, The Ethics of Surveillance: An Introduction , London: Routledge.
  • Mathur, Arunesh, Gunes Acar, Michael J. Friedman, Elena Lucherini, Jonathan Mayer, Marshini Chetty, and Arvind Narayanan, 2019, “Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites”, Proceedings of the ACM on Human-Computer Interaction , 3(CSCW): art. 81. doi:10.1145/3359183
  • Minsky, Marvin, 1985, The Society of Mind , New York: Simon & Schuster.
  • Misselhorn, Catrin, 2020, “Artificial Systems with Moral Capacities? A Research Design and Its Implementation in a Geriatric Care System”, Artificial Intelligence , 278: art. 103179. doi:10.1016/j.artint.2019.103179
  • Mittelstadt, Brent Daniel and Luciano Floridi, 2016, “The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts”, Science and Engineering Ethics , 22(2): 303–341. doi:10.1007/s11948-015-9652-2
  • Moor, James H., 2006, “The Nature, Importance, and Difficulty of Machine Ethics”, IEEE Intelligent Systems , 21(4): 18–21. doi:10.1109/MIS.2006.80
  • Moravec, Hans, 1990, Mind Children , Cambridge, MA: Harvard University Press.
  • –––, 1998, Robot: Mere Machine to Transcendent Mind , New York: Oxford University Press.
  • Morozov, Evgeny, 2013, To Save Everything, Click Here: The Folly of Technological Solutionism , New York: Public Affairs.
  • Müller, Vincent C., 2012, “Autonomous Cognitive Systems in Real-World Environments: Less Control, More Flexibility and Better Interaction”, Cognitive Computation , 4(3): 212–215. doi:10.1007/s12559-012-9129-4
  • –––, 2016a, “Autonomous Killer Robots Are Probably Good News”, In Drones and Responsibility: Legal, Philosophical and Socio-Technical Perspectives on the Use of Remotely Controlled Weapons , Ezio Di Nucci and Filippo Santoni de Sio (eds.), London: Ashgate, 67–81.
  • ––– (ed.), 2016b, Risks of Artificial Intelligence , London: Chapman & Hall - CRC Press. doi:10.1201/b19187
  • –––, 2018, “In 30 Schritten zum Mond? Zukünftiger Fortschritt in der KI”, Medienkorrespondenz , 20: 5–15. [ Müller 2018 available online ]
  • –––, 2020, “Measuring Progress in Robotics: Benchmarking and the ‘Measure-Target Confusion’”, in Metrics of Sensory Motor Coordination and Integration in Robots and Animals , Fabio Bonsignorio, Elena Messina, Angel P. del Pobil, and John Hallam (eds.), (Cognitive Systems Monographs 36), Cham: Springer International Publishing, 169–179. doi:10.1007/978-3-030-14126-4_9
  • –––, forthcoming-a, Can Machines Think? Fundamental Problems of Artificial Intelligence , New York: Oxford University Press.
  • ––– (ed.), forthcoming-b, Oxford Handbook of the Philosophy of Artificial Intelligence , New York: Oxford University Press.
  • Müller, Vincent C. and Nick Bostrom, 2016, “Future Progress in Artificial Intelligence: A Survey of Expert Opinion”, in Fundamental Issues of Artificial Intelligence , Vincent C. Müller (ed.), Cham: Springer International Publishing, 555–572. doi:10.1007/978-3-319-26485-1_33
  • Newport, Cal, 2019, Digital Minimalism: On Living Better with Less Technology , London: Penguin.
  • Nørskov, Marco (ed.), 2017, Social Robots , London: Routledge.
  • Nyholm, Sven, 2018a, “Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci”, Science and Engineering Ethics , 24(4): 1201–1219. doi:10.1007/s11948-017-9943-x
  • –––, 2018b, “The Ethics of Crashes with Self-Driving Cars: A Roadmap, II”, Philosophy Compass , 13(7): e12506. doi:10.1111/phc3.12506
  • Nyholm, Sven, and Lily Frank, 2017, “From Sex Robots to Love Robots: Is Mutual Love with a Robot Possible?”, in Danaher and McArthur 2017: 219–243.
  • O’Connell, Mark, 2017, To Be a Machine: Adventures among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death , London: Granta.
  • O’Neil, Cathy, 2016, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy , New York: Crown.
  • Omohundro, Steve, 2014, “Autonomous Technology and the Greater Human Good”, Journal of Experimental & Theoretical Artificial Intelligence , 26(3): 303–315. doi:10.1080/0952813X.2014.895111
  • Ord, Toby, 2020, The Precipice: Existential Risk and the Future of Humanity , London: Bloomsbury.
  • Powers, Thomas M. and Jean-Gabriel Ganascia, forthcoming, “The Ethics of the Ethics of AI”, in Oxford Handbook of Ethics of Artificial Intelligence , Markus D. Dubber, Frank Pasquale, and Sunnit Das (eds.), New York: Oxford.
  • Rawls, John, 1971, A Theory of Justice , Cambridge, MA: Belknap Press.
  • Rees, Martin, 2018, On the Future: Prospects for Humanity , Princeton: Princeton University Press.
  • Richardson, Kathleen, 2016, “Sex Robot Matters: Slavery, the Prostituted, and the Rights of Machines”, IEEE Technology and Society Magazine , 35(2): 46–53. doi:10.1109/MTS.2016.2554421
  • Roessler, Beate, 2017, “Privacy as a Human Right”, Proceedings of the Aristotelian Society , 117(2): 187–206. doi:10.1093/arisoc/aox008
  • Royakkers, Lambèr and Rinie van Est, 2016, Just Ordinary Robots: Automation from Love to War , Boca Raton, FL: CRC Press, Taylor & Francis. doi:10.1201/b18899
  • Russell, Stuart, 2019, Human Compatible: Artificial Intelligence and the Problem of Control , New York: Viking.
  • Russell, Stuart, Daniel Dewey, and Max Tegmark, 2015, “Research Priorities for Robust and Beneficial Artificial Intelligence”, AI Magazine , 36(4): 105–114. doi:10.1609/aimag.v36i4.2577
  • SAE International, 2018, “Taxonomy and Definitions for Terms Related to Driving Automation Systems for on-Road Motor Vehicles”, J3016_201806, 15 June 2018. [ SAE International 2015 available online ]
  • Sandberg, Anders, 2013, “Feasibility of Whole Brain Emulation”, in Philosophy and Theory of Artificial Intelligence , Vincent C. Müller (ed.), (Studies in Applied Philosophy, Epistemology and Rational Ethics, 5), Berlin, Heidelberg: Springer Berlin Heidelberg, 251–264. doi:10.1007/978-3-642-31674-6_19
  • –––, 2019, “There Is Plenty of Time at the Bottom: The Economics, Risk and Ethics of Time Compression”, Foresight , 21(1): 84–99. doi:10.1108/FS-04-2018-0044
  • Santoni de Sio, Filippo and Jeroen van den Hoven, 2018, “Meaningful Human Control over Autonomous Systems: A Philosophical Account”, Frontiers in Robotics and AI , 5(February): 15. doi:10.3389/frobt.2018.00015
  • Schneier, Bruce, 2015, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World , New York: W. W. Norton.
  • Searle, John R., 1980, “Minds, Brains, and Programs”, Behavioral and Brain Sciences , 3(3): 417–424. doi:10.1017/S0140525X00005756
  • Selbst, Andrew D., Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi, 2019, “Fairness and Abstraction in Sociotechnical Systems”, in Proceedings of the Conference on Fairness, Accountability, and Transparency—FAT* ’19 , Atlanta, GA: ACM Press, 59–68. doi:10.1145/3287560.3287598
  • Sennett, Richard, 2018, Building and Dwelling: Ethics for the City , London: Allen Lane.
  • Shanahan, Murray, 2015, The Technological Singularity , Cambridge, MA: MIT Press.
  • Sharkey, Amanda, 2019, “Autonomous Weapons Systems, Killer Robots and Human Dignity”, Ethics and Information Technology , 21(2): 75–87. doi:10.1007/s10676-018-9494-0
  • Sharkey, Amanda and Noel Sharkey, 2011, “The Rights and Wrongs of Robot Care”, in Robot Ethics: The Ethical and Social Implications of Robotics , Patrick Lin, Keith Abney and George Bekey (eds.), Cambridge, MA: MIT Press, 267–282.
  • Shoham, Yoav, Raymond Perrault, Erik Brynjolfsson, Jack Clark, James Manyika, Juan Carlos Niebles, … Zoe Bauer, 2018, “The AI Index 2018 Annual Report”, 17 December 2018, Stanford, CA: AI Index Steering Committee, Human-Centered AI Initiative, Stanford University. [ Shoham et al. 2018 available online ]
  • SIENNA, 2019, “Deliverable Report D4.4: Ethical Issues in Artificial Intelligence and Robotics”, June 2019, published by the SIENNA project (Stakeholder-informed ethics for new technologies with high socio-economic and human rights impact), University of Twente, pp. 1–103. [ SIENNA 2019 available online ]
  • Silver, David, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis, 2018, “A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go through Self-Play”, Science , 362(6419): 1140–1144. doi:10.1126/science.aar6404
  • Simon, Herbert A. and Allen Newell, 1958, “Heuristic Problem Solving: The Next Advance in Operations Research”, Operations Research , 6(1): 1–10. doi:10.1287/opre.6.1.1
  • Simpson, Thomas W. and Vincent C. Müller, 2016, “Just War and Robots’ Killings”, The Philosophical Quarterly , 66(263): 302–322. doi:10.1093/pq/pqv075
  • Smolan, Sandy (director), 2016, “The Human Face of Big Data”, PBS Documentary, 24 February 2016, 56 mins.
  • Sparrow, Robert, 2007, “Killer Robots”, Journal of Applied Philosophy , 24(1): 62–77. doi:10.1111/j.1468-5930.2007.00346.x
  • –––, 2016, “Robots in Aged Care: A Dystopian Future?”, AI & Society , 31(4): 445–454. doi:10.1007/s00146-015-0625-4
  • Stahl, Bernd Carsten, Job Timmermans, and Brent Daniel Mittelstadt, 2016, “The Ethics of Computing: A Survey of the Computing-Oriented Literature”, ACM Computing Surveys , 48(4): art. 55. doi:10.1145/2871196
  • Stahl, Bernd Carsten and David Wright, 2018, “Ethics and Privacy in AI and Big Data: Implementing Responsible Research and Innovation”, IEEE Security Privacy , 16(3): 26–33.
  • Stone, Christopher D., 1972, “Should Trees Have Standing - toward Legal Rights for Natural Objects”, Southern California Law Review , 45: 450–501.
  • Stone, Peter, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, Ece Kamar, Sarit Kraus, Kevin Leyton-Brown, David Parkes, William Press, AnnaLee Saxenian, Julie Shah, Milind Tambe, and Astro Teller, 2016, “Artificial Intelligence and Life in 2030”, One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel, Stanford University, Stanford, CA, September 2016. [ Stone et al. 2016 available online ]
  • Strawson, Galen, 1998, “Free Will”, in Routledge Encyclopedia of Philosophy , Taylor & Francis. doi:10.4324/9780415249126-V014-1
  • Sullins, John P., 2012, “Robots, Love, and Sex: The Ethics of Building a Love Machine”, IEEE Transactions on Affective Computing , 3(4): 398–409. doi:10.1109/T-AFFC.2012.31
  • Susser, Daniel, Beate Roessler, and Helen Nissenbaum, 2019, “Technology, Autonomy, and Manipulation”, Internet Policy Review , 8(2): 30 June 2019. [ Susser, Roessler, and Nissenbaum 2019 available online ]
  • Taddeo, Mariarosaria and Luciano Floridi, 2018, “How AI Can Be a Force for Good”, Science , 361(6404): 751–752. doi:10.1126/science.aat5991
  • Taylor, Linnet and Nadezhda Purtova, 2019, “What Is Responsible and Sustainable Data Science?”, Big Data & Society, 6(2): art. 205395171985811. doi:10.1177/2053951719858114
  • Taylor, Steve, et al., 2018, “Responsible AI – Key Themes, Concerns & Recommendations for European Research and Innovation: Summary of Consultation with Multidisciplinary Experts”, June. doi:10.5281/zenodo.1303252 [ Taylor, et al. 2018 available online ]
  • Tegmark, Max, 2017, Life 3.0: Being Human in the Age of Artificial Intelligence , New York: Knopf.
  • Thaler, Richard H. and Cass Sunstein, 2008, Nudge: Improving Decisions about Health, Wealth, and Happiness , New York: Penguin.
  • Thompson, Nicholas and Ian Bremmer, 2018, “The AI Cold War That Threatens Us All”, Wired , 23 November 2018. [ Thompson and Bremmer 2018 available online ]
  • Thomson, Judith Jarvis, 1976, “Killing, Letting Die, and the Trolley Problem”, Monist , 59(2): 204–217. doi:10.5840/monist197659224
  • Torrance, Steve, 2011, “Machine Ethics and the Idea of a More-Than-Human Moral World”, in Anderson and Anderson 2011: 115–137. doi:10.1017/CBO9780511978036.011
  • Trump, Donald J, 2019, “Executive Order on Maintaining American Leadership in Artificial Intelligence”, 11 February 2019. [ Trump 2019 available online ]
  • Turner, Jacob, 2019, Robot Rules: Regulating Artificial Intelligence , Berlin: Springer. doi:10.1007/978-3-319-96235-1
  • Tzafestas, Spyros G., 2016, Roboethics: A Navigating Overview , (Intelligent Systems, Control and Automation: Science and Engineering 79), Cham: Springer International Publishing. doi:10.1007/978-3-319-21714-7
  • Vallor, Shannon, 2017, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780190498511.001.0001
  • Van Lent, Michael, William Fisher, and Michael Mancuso, 2004, “An Explainable Artificial Intelligence System for Small-Unit Tactical Behavior”, in Proceedings of the 16th Conference on Innovative Applications of Artifical Intelligence, (IAAI’04) , San Jose, CA: AAAI Press, 900–907.
  • van Wynsberghe, Aimee, 2016, Healthcare Robots: Ethics, Design and Implementation , London: Routledge. doi:10.4324/9781315586397
  • van Wynsberghe, Aimee and Scott Robbins, 2019, “Critiquing the Reasons for Making Artificial Moral Agents”, Science and Engineering Ethics , 25(3): 719–735. doi:10.1007/s11948-018-0030-8
  • Vanderelst, Dieter and Alan Winfield, 2018, “The Dark Side of Ethical Robots”, in Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society , New Orleans, LA: ACM, 317–322. doi:10.1145/3278721.3278726
  • Veale, Michael and Reuben Binns, 2017, “Fairer Machine Learning in the Real World: Mitigating Discrimination without Collecting Sensitive Data”, Big Data & Society , 4(2): art. 205395171774353. doi:10.1177/2053951717743530
  • Véliz, Carissa, 2019, “Three Things Digital Ethics Can Learn from Medical Ethics”, Nature Electronics , 2(8): 316–318. doi:10.1038/s41928-019-0294-2
  • Verbeek, Peter-Paul, 2011, Moralizing Technology: Understanding and Designing the Morality of Things , Chicago: University of Chicago Press.
  • Wachter, Sandra and Brent Daniel Mittelstadt, 2019, “A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI”, Columbia Business Law Review , 2019(2): 494–620.
  • Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi, 2017, “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation”, International Data Privacy Law , 7(2): 76–99. doi:10.1093/idpl/ipx005
  • Wachter, Sandra, Brent Mittelstadt, and Chris Russell, 2018, “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR”, Harvard Journal of Law & Technology , 31(2): 842–887. doi:10.2139/ssrn.3063289
  • Wallach, Wendell and Peter M. Asaro (eds.), 2017, Machine Ethics and Robot Ethics , London: Routledge.
  • Walsh, Toby, 2018, Machines That Think: The Future of Artificial Intelligence , Amherst, MA: Prometheus Books.
  • Westlake, Stian (ed.), 2014, Our Work Here Is Done: Visions of a Robot Economy , London: Nesta. [ Westlake 2014 available online ]
  • Whittaker, Meredith, Kate Crawford, Roel Dobbe, Genevieve Fried, Elizabeth Kaziunas, Varoon Mathur, … Jason Schultz, 2018, “AI Now Report 2018”, New York: AI Now Institute, New York University. [ Whittaker et al. 2018 available online ]
  • Whittlestone, Jess, Rune Nyrup, Anna Alexandrova, Kanta Dihal, and Stephen Cave, 2019, “Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: A Roadmap for Research”, Cambridge: Nuffield Foundation, University of Cambridge. [ Whittlestone 2019 available online ]
  • Winfield, Alan, Katina Michael, Jeremy Pitt, and Vanessa Evers (eds.), 2019, Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems , special issue of Proceedings of the IEEE , 107(3): 501–632.
  • Woollard, Fiona and Frances Howard-Snyder, 2016, “Doing vs. Allowing Harm”, Stanford Encyclopedia of Philosophy (Winter 2016 edition), Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/win2016/entries/doing-allowing/ >
  • Woolley, Samuel C. and Philip N. Howard (eds.), 2017, Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media , Oxford: Oxford University Press. doi:10.1093/oso/9780190931407.001.0001
  • Yampolskiy, Roman V. (ed.), 2018, Artificial Intelligence Safety and Security , Boca Raton, FL: Chapman and Hall/CRC. doi:10.1201/9781351251389
  • Yeung, Karen and Martin Lodge (eds.), 2019, Algorithmic Regulation , Oxford: Oxford University Press. doi:10.1093/oso/9780198838494.001.0001
  • Zayed, Yago and Philip Loft, 2019, “Agriculture: Historical Statistics”, House of Commons Briefing Paper , 3339(25 June 2019): 1-19. [ Zayed and Loft 2019 available online ]
  • Zerilli, John, Alistair Knott, James Maclaurin, and Colin Gavaghan, 2019, “Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?”, Philosophy & Technology , 32(4): 661–683. doi:10.1007/s13347-018-0330-6
  • Zuboff, Shoshana, 2019, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power , New York: Public Affairs.

Other Internet Resources

  • AI HLEG, 2019, “ High-Level Expert Group on Artificial Intelligence: Ethics Guidelines for Trustworthy AI ”, European Commission , accessed: 9 April 2019.
  • Amodei, Dario and Danny Hernandez, 2018, “ AI and Compute ”, OpenAI Blog , 16 July 2018.
  • Aneesh, A., 2002, Technological Modes of Governance: Beyond Private and Public Realms , paper in the Proceedings of the 4th International Summer Academy on Technology Studies, available at archive.org.
  • Brooks, Rodney, 2017, “ The Seven Deadly Sins of Predicting the Future of AI ”, on Rodney Brooks: Robots, AI, and Other Stuff , 7 September 2017.
  • Brundage, Miles, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh, Simon Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, et al., 2018, “ The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation ”, unpublished manuscript, ArXiv:1802.07228 [Cs].
  • Costa, Elisabeth and David Halpern, 2019, “ The Behavioural Science of Online Harm and Manipulation, and What to Do About It: An Exploratory Paper to Spark Ideas and Debate ”, The Behavioural Insights Team Report, 1-82.
  • Gebru, Timnit, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumeé III, and Kate Crawford, 2018, “ Datasheets for Datasets ”, unpublished manuscript, arxiv:1803.09010, 23 March 2018.
  • Gunning, David, 2017, “ Explainable Artificial Intelligence (XAI) ”, Defense Advanced Research Projects Agency (DARPA) Program.
  • Harris, Tristan, 2016, “ How Technology Is Hijacking Your Mind—from a Magician and Google Design Ethicist ”, Thrive Global , 18 May 2016.
  • International Federation of Robotics (IFR), 2019, World Robotics 2019 Edition .
  • Jacobs, An, Lynn Tytgat, Michel Maus, Romain Meeusen, and Bram Vanderborght (eds.), Homo Roboticus: 30 Questions and Answers on Man, Technology, Science & Art, 2019, Brussels: ASP .
  • Marcus, Gary, 2018, “ Deep Learning: A Critical Appraisal ”, unpublished manuscript, 2 January 2018, arxiv:1801.00631.
  • McCarthy, John, Marvin Minsky, Nathaniel Rochester, and Claude E. Shannon, 1955, “ A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence ”, 31 August 1955.
  • Metcalf, Jacob, Emily F. Keller, and Danah Boyd, 2016, “ Perspectives on Big Data, Ethics, and Society ”, 23 May 2016, Council for Big Data, Ethics, and Society.
  • National Institute of Justice (NIJ), 2014, “ Overview of Predictive Policing ”, 9 June 2014.
  • Searle, John R., 2015, “ Consciousness in Artificial Intelligence ”, Google’s Singularity Network, Talks at Google (YouTube video).
  • Sharkey, Noel, Aimee van Wynsberghe, Scott Robbins, and Eleanor Hancock, 2017, “ Report: Our Sexual Future with Robots ”, Responsible Robotics , 1–44.
  • Turing Institute (UK): Data Ethics Group
  • Leverhulme Centre for the Future of Intelligence
  • Future of Humanity Institute
  • Future of Life Institute
  • Stanford Center for Internet and Society
  • Berkman Klein Center
  • Digital Ethics Lab
  • Open Roboethics Institute
  • Philosophy & Theory of AI
  • Ethics and AI 2017
  • We Robot 2018
  • Robophilosophy
  • EUrobotics TG ‘robot ethics’ collection of policy documents
  • PhilPapers section on Ethics of Artificial Intelligence
  • PhilPapers section on Robot Ethics

Related Entries

computing: and moral responsibility | ethics: internet research | ethics: search engines and | information technology: and moral values | information technology: and privacy | manipulation, ethics of | social networking and ethics

Acknowledgments

Early drafts of this article were discussed with colleagues at the IDEA Centre of the University of Leeds, some friends, and my PhD students Michael Cannon, Zach Gudmunsen, Gabriela Arriagada-Bruneau and Charlotte Stix. Later drafts were made publicly available on the Internet and publicised via Twitter and e-mail to all (then) cited authors that I could locate. These later drafts were presented to audiences at the INBOTS Project Meeting (Reykjavik 2019), the Computer Science Department Colloquium (Leeds 2019), the European Robotics Forum (Bucharest 2019), the AI Lunch and the Philosophy & Ethics group (Eindhoven 2019)—many thanks for their comments.

I am grateful for detailed written comments by John Danaher, Martin Gibert, Elizabeth O’Neill, Sven Nyholm, Etienne B. Roesch, Emma Ruttkamp-Bloem, Tom Powers, Steve Taylor, and Alan Winfield. I am grateful for further useful comments by Colin Allen, Susan Anderson, Christof Wolf-Brenner, Rafael Capurro, Mark Coeckelbergh, Yazmin Morlet Corti, Erez Firt, Vasilis Galanos, Anne Gerdes, Olle Häggström, Geoff Keeling, Karabo Maiyane, Brent Mittelstadt, Britt Östlund, Steve Petersen, Brian Pickering, Zoë Porter, Amanda Sharkey, Melissa Terras, Stuart Russell, Jan F Veneman, Jeffrey White, and Xinyi Wu.

Parts of the work on this article have been supported by the European Commission under the INBOTS project (H2020 grant no. 780073).

Copyright © 2020 by Vincent C. Müller < vincent . c . mueller @ fau . de >



Course Info

  • Dr. Kevin Mills

Departments

  • Linguistics and Philosophy

Topics

  • Human-Computer Interfaces
  • Legal Studies
  • Science and Technology Policy

Ethics of Technology

Course Meeting Times

Lectures: 2 sessions / week, 1.5 hours / session

Prerequisites

There are no prerequisites for this course.

Course Overview

Technological advances, and the uses we make of them, continue to radically reshape the world we live in. Data is being collected and analyzed on an unprecedented scale; algorithms play an ever-increasing role in the decisions we make; social media is cited as radically transforming both personal and political discourse; and the list goes on. These developments raise a host of pressing ethical questions that we need to confront, both as a society, and as individuals. The aim of this class is to introduce students to some of these questions and equip them with the conceptual tools needed to engage with them productively and responsibly. Issues studied include:

  • Consent to data practices
  • Algorithmic justice
  • Online manipulation and “dark patterns”
  • Echo chambers
  • Censorship and freedom of speech

In addition to looking at technology ethics, this class is intended as an introduction to moral philosophy and does not presuppose any philosophical background.

Course Objectives

Upon successful completion of the class, students will have:

  • Been introduced to a variety of important issues in technology ethics.
  • Developed a richer understanding of what ethics is and how to reason about it.
  • Improved their skills in argument construction, analysis, evaluation, and articulation.
  • Learned to recognize, and have productive discussions in the face of, reasonable moral disagreement.
  • Learned to engage critically with the many ethical questions raised by technology.

All readings can be found in the Readings section.

Course Requirements

Class Attendance, Preparedness, and Discussion

You should come to class ready to contribute to class discussions in a way that demonstrates you have done the assigned readings. Attendance at recitations is mandatory.

Reading Quizzes

Every time a new reading is assigned (which is nearly every class), there will be a quiz. These aren’t anything to worry about; they are designed to be easy if you’ve made a sincere attempt to do the reading but hard otherwise.

Note: Quizzes are not included on the OpenCourseWare site for this course.

Writing Assignments

You are required to submit three short writing assignments (approximately 500–1000 words each) over the course of the semester. These are worth 15% of your final grade each. You will also be required to submit a single longer essay (approximately 1500 words) due on the final day of class. This is worth 30% of your final grade. 

For details on the writing assignments and the final essay, see the Assignments section.

Grading Policy

  • Participation in Recitations: 10%
  • Reading Quizzes: 15%
  • Short Writing Assignment 1: 15%
  • Short Writing Assignment 2: 15%
  • Short Writing Assignment 3: 15%
  • Final Essay: 30%


Moral and Ethical Issues in Technology

This essay examines the ethical complexities entwined with technological progress in today’s society. It explores pressing issues such as digital privacy infringement, job displacement due to AI, the ethical implications of digital warfare, and the widening digital divide. The discourse emphasizes the importance of addressing these ethical dilemmas through transparency, accountability, and proactive measures to ensure that technological advancements align with our moral values and societal well-being. By navigating these challenges with foresight and ethical reflection, we can strive towards a more equitable and sustainable future in the digital age.


In today’s technologically driven world, we find ourselves at the crossroads of innovation and ethics, grappling with profound moral questions that accompany our technological advancements. As we marvel at the wonders of modern technology, we must also confront the ethical implications that arise from its pervasive influence in our lives. This discourse delves into the intricate web of moral dilemmas woven into the fabric of our digital landscape, urging us to navigate these complexities with wisdom and foresight.

At the heart of our ethical deliberations lies the issue of privacy in an age of ubiquitous connectivity. With the proliferation of social media platforms and smart devices, our personal data has become a prized commodity, subject to exploitation by corporations and governments alike. The recent revelations of data breaches and surveillance scandals serve as stark reminders of the precarious nature of our digital privacy, prompting calls for greater transparency and accountability in the handling of sensitive information.

Moreover, the advent of artificial intelligence has thrust us into uncharted ethical territory, where machines are endowed with increasingly sophisticated capabilities that rival human intelligence. While AI holds immense promise in revolutionizing various industries, from healthcare to transportation, it also raises profound concerns about job displacement and algorithmic bias. As we entrust machines with greater autonomy and decision-making power, we must grapple with the ethical implications of relinquishing control to non-human entities, ensuring that our technological innovations align with our moral values and societal aspirations.

In addition to the ethical quandaries posed by AI, we are also confronted with the specter of digital warfare and cyber threats that transcend traditional notions of conflict. The weaponization of technology, from autonomous drones to state-sponsored cyber attacks, has blurred the boundaries between virtual and physical battlegrounds, posing unprecedented challenges to global security and stability. As we confront these emerging threats, we must not only develop robust defensive measures but also adhere to ethical principles that uphold the sanctity of human life and dignity in the face of technological warfare.

Furthermore, the digital divide continues to widen the gap between the digital haves and have-nots, exacerbating existing disparities in access to information, education, and economic opportunities. While affluent societies reap the benefits of high-speed internet and cutting-edge technologies, marginalized communities are left behind, further entrenching cycles of poverty and inequality. Bridging this digital divide requires concerted efforts to expand access to affordable broadband infrastructure and promote digital literacy initiatives, ensuring that all individuals have the opportunity to participate fully in the digital economy.

In conclusion, the ethical conundrums that accompany our technological advancements demand careful consideration and collective action to safeguard the values and principles that define our humanity. By embracing a holistic approach to technology ethics, we can harness the transformative power of innovation while mitigating its unintended consequences. Only through mindful engagement and ethical reflection can we navigate the complex terrain of the digital age and chart a course towards a more equitable and sustainable future.


Moral And Ethical Issues In Technology. (2024, Apr 29). Retrieved from https://papersowl.com/examples/moral-and-ethical-issues-in-technology/



Ethics in the digital world: Where we are now and what’s next

Kate Gromova, Yaroslav Eferin


Will widespread adoption of emerging digital technologies such as the Internet of Things and Artificial Intelligence improve people’s lives? The answer appears to be an easy “yes.” The positive potential of data seems self-evident. Yet, this issue is being actively discussed across international summits and events. Thus, the agenda of the Global Technology Government Summit 2021 is dedicated to questions around whether and how “data can work for all”, with an emphasis on trust and especially the ethics of data use. Not without reason: at least 50 countries are independently grappling with how to define ethical data use without violating people’s private space, personal data, and many other sensitive aspects.

Ethics goes online

What is ethics per se? Aristotle proposed that ethics is the study of human relations in their most perfect form. He called it the science of proper behavior. Aristotle claimed that ethics is the basis for creating an optimal model of fair human relations; ethics lie at the foundation of a society’s moral consciousness. They are the shared principles necessary for mutual understanding and harmonious relations.

Ethical principles have evolved many times over since the days of the ancient Greek philosophers and have been repeatedly rethought (e.g., hedonism, utilitarianism, relativism, etc.). Today we live in a digital world, and most of our relationships have moved online to chats, messengers, social media, and many other ways of online communication.  We do not see each other, but we do share our data; we do not talk to each other, but we give our opinions liberally. So how should these principles evolve for such an online, globalized world? And what might the process look like for identifying those principles?  

Digital chaos without ethics

2020 and its lockdowns clearly demonstrated that we have plunged into the digital world irrevocably. As digital technologies become ever more deeply embedded in our lives, the need for a new, shared data ethos grows more urgent. Without shared principles, we risk exacerbating existing biases that are part of our current datasets. Just a few examples:

  • The common exclusion of women as test subjects in much medical research results in a lack of relevant data on women’s health. Heart disease, for example, has traditionally been thought of as a predominantly male disease. This has led to widespread misdiagnosis and underdiagnosis of heart disease in women.
  • A study of AI tools that authorities use to determine the likelihood that a criminal will reoffend found that the algorithms produced different results for black and white people under the same conditions. This discriminatory effect has resulted in sharp criticism and distrust of predictive policing.
  • Amazon abandoned its AI hiring program because of its bias against women. The algorithm was trained on the resumes submitted for job postings over the previous ten years. Because most of the applicants were men, it developed a preference for men and penalized features associated with women (a minimal synthetic sketch of this mechanism follows this list).
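
To make the mechanism behind the third example concrete, here is a minimal Python sketch. It is purely illustrative and rests on stated assumptions: the data are synthetic, the single gender-correlated resume token and the historical hiring rates are invented, and the “model” is nothing more than an empirical rate estimate, not any real recruiting system.

```python
# Hypothetical illustration of bias inherited from historical data.
# All numbers and the token itself are invented for this sketch.
import random

random.seed(0)

def make_historical_record():
    """One synthetic past decision: did the resume contain the token, and was the person hired?"""
    has_token = random.random() < 0.5                        # e.g. "women's chess club" on the resume
    hired = random.random() < (0.2 if has_token else 0.5)    # biased historical hiring rates
    return has_token, hired

history = [make_historical_record() for _ in range(10_000)]

def hire_rate(rows):
    return sum(hired for _, hired in rows) / len(rows)

# "Training" here is just estimating P(hired | token) from the biased history.
score_with_token = hire_rate([r for r in history if r[0]])
score_without_token = hire_rate([r for r in history if not r[0]])

print(f"learned score when the token is present: {score_with_token:.2f}")
print(f"learned score when it is absent:         {score_without_token:.2f}")
print(f"disparate impact ratio:                  {score_with_token / score_without_token:.2f}")
```

Any screening threshold placed between the two learned scores would systematically filter out candidates whose resumes contain the token. The model needs no explicit gender field to reproduce the bias baked into its training data, which is the pattern the Amazon case illustrates.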

These examples all contribute to distrust or rejection of potentially beneficial new technological solutions. What ethical principles can we use to address the flaws in technologies that increase biases, profiling, and inequality? This question has led to significant growth in interest in data ethics over the last decade (Figures 1 and 2). And this is why many countries are now developing or adopting ethical principles, standards, or guidelines.

Figure 1. Data ethics concept, 2010-2021

Figure 2. AI ethics concept, 2010-2021

Guiding data ethics

Countries are taking wildly differing approaches to address data ethics. Even the definition of data ethics varies. Look, for example, at three countries—Germany, Canada, and South Korea—with differing geography, history, institutional and political arrangements, and traditions and culture.

Germany established a Data Ethics Commission in 2018 to provide recommendations for the Federal Government’s Strategy on Artificial Intelligence. The Commission declared that its  operating principles were based on the Constitution, European values, and its “cultural and intellectual history.” Ethics, according to the Commission, should not begin with establishing boundaries. Rather, when ethical issues are discussed early in the creation process, they may make a significant contribution to design, promoting appropriate and beneficial applications of AI systems.

In Canada, the advancement of AI technologies and their use in public services has spurred a discussion about data ethics. The Government of Canada’s recommendations focus on public service officials and processes. They provide guiding principles to ensure ethical use of AI, and the government has developed a comprehensive Algorithmic Impact Assessment online tool to help officials explore AI in a way that is “governed by clear values, ethics, and laws.”

The Korean Ministry of Science and ICT, in collaboration with the National Information Society Agency, released Ethics Guidelines for the Intelligent Information Society in 2018. These guidelines build on the Robots Ethics Charter, which calls for developing AI and robots that do not have “antisocial” characteristics. Broadly, Korean ethical policies have focused mainly on the adoption of robots into society, while emphasizing the need to balance protecting “human dignity” and “the common good.”

Do data ethics need a common approach?

The differences among these initiatives seem to be related to traditions, institutional arrangements, and many other cultural and historical factors. Germany places emphasis on developing autonomous vehicles and presents a rather comprehensive view of ethics; Canada concentrates on guiding government officials; Korea approaches the questions through the prism of robots. Still, none of them clearly defines what data ethics is, and none of them is meant to have a legal effect. Rather, they stipulate the principles of the information society. In our upcoming study, we intend to explore the reasons and rationale for the different approaches that countries take.

Discussion and debate on data and technology ethics will undoubtedly continue for many years to come as digital technologies develop further and penetrate all aspects of human life. But the sooner we reach a consensus on key definitions, principles, and approaches, the easier it will be to turn debate into real action. Data ethics are equally important for governments, businesses, and individuals, and should be discussed openly. The process of such discussion will itself serve as an awareness and knowledge-sharing mechanism.

Recall the Golden Rule of Morality: Do unto others as you would have them do unto you. We suggest keeping this in mind when we all go online.



Kate Gromova

Digital Development Consultant, Co-founder of Women in Digital Transformation

Yaroslav Eferin

Digital Development Consultant


To Each Technology Its Own Ethics: The Problem of Ethical Proliferation

  • Research Article
  • Open access
  • Published: 18 October 2022
  • Volume 35, article number 93 (2022)


  • Henrik Skaug Sætra, ORCID: orcid.org/0000-0002-7558-6451
  • John Danaher, ORCID: orcid.org/0000-0001-5879-3160

Abstract

Ethics plays a key role in the normative analysis of the impacts of technology. We know that computers in general and the processing of data, the use of artificial intelligence, and the combination of computers and/or artificial intelligence with robotics are all associated with ethically relevant implications for individuals, groups, and society. In this article, we argue that while all technologies are ethically relevant, there is no need to create a separate ‘ethics of X’ or ‘X ethics’ for each and every subtype of technology or technological property—e.g. computer ethics, AI ethics, data ethics, information ethics, robot ethics, and machine ethics. Specific technologies might have specific impacts, but we argue that they are often sufficiently covered and understood through already established higher-level domains of ethics. Furthermore, the proliferation of tech ethics is problematic because (a) the conceptual boundaries between the subfields are not well-defined, (b) it leads to a duplication of effort and constant reinvention of the wheel, and (c) there is a danger that participants overlook or ignore more fundamental ethical insights and truths. The key to avoiding such outcomes lies in taking the discipline of ethics seriously, and we consequently begin with a brief description of what ethics is, before presenting the main forms of technology-related ethics. Through this process, we develop a hierarchy of technology ethics, which can be used by developers and engineers, researchers, or regulators who seek an understanding of the ethical implications of technology. We close by deducing two principles for positioning ethical analysis which will, in combination with the hierarchy, promote the leveraging of existing knowledge and help us to avoid an exaggerated proliferation of tech ethics.


1 Introduction

In a world of rapid development and dissemination of technology, ethics plays a key role in analysing how these technologies affect individuals, businesses, groups, society, and the environment (Sætra and Fosch-Villaronga, 2021), and in determining how to avoid ethically undesirable outcomes and promote ethical behaviour. While ethics is a staple in many academic fields, it is also gaining significant mainstream traction in the tech industry and policy circles. In the 2021 version of Gartner’s ‘hype cycle’ for AI, for example, digital ethics was placed at the peak of inflated expectations (Gartner, 2021), and terms such as human-centred AI and responsible AI are approaching the same stage.

Focusing on computer-based technologies, we know that computers in general and the processing of data, the use of artificial intelligence, and the combination of computers and/or artificial intelligence with robotics in particular are all associated with ethically relevant implications for individuals, groups, and society. This has given rise to a wide range of ‘ethics of X’ or ‘X ethics’ fields of inquiry and debate. Examples include computer ethics (Moor, 1985), data ethics (Hand, 2018), big data ethics (Zwitter, 2014), information ethics (Floridi, 1999), machine ethics (M. Anderson and Anderson, 2011), robot ethics (Lin et al., 2011), and others that we describe in detail below.

In this article, we argue that while all technologies are ethically relevant, and studying the ethical implications of their development and use is crucial, we should not create a separate subdomain of ethical inquiry for each and every one of them. A frivolous proliferation of technology ethics is problematic for three reasons. First, the conceptual boundaries between the subfields are not well-defined. This creates problems for practitioners and regulators alike, as it becomes increasingly difficult to find historically established and valuable insight into the implications of technology. Second, it leads to a duplication of effort and constant reinvention of the wheel. Third, there is a danger that participants overlook or ignore more fundamental ethical insights and truths. In general, efforts to build new domains of ethical inquiry risk burying and undermining historical insights, leading to a situation in which we increase the number of ethical domains and publications without increasing the actual ethicality of our decisions and practices.

We argue that the key to avoiding such outcomes lies in taking the discipline of ethics as moral philosophy seriously—acknowledging and pursuing it as a philosophical endeavour and not merely as a source of checklists and guidelines. We consequently begin with a brief description of what ethics is. We then proceed to present and review the main forms of technology-related ethics. Through this process, we develop a hierarchy of technology ethics—with a description of how certain forms of non-technology ethics are also relevant to this hierarchy. The hierarchy can be used by developers and engineers, researchers, or regulators who seek an understanding of the ethical implications of technology. It also shows how poorly defined some of the subdomains of technology ethics are. This process allows us to deduce two basic principles which will, in combination with the hierarchy, ensure that existing knowledge will be leveraged and that we avoid the proliferation of subdomains of ethics and a muddying of the waters of tech ethics.
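
As a side note before the review, the nested structure such a hierarchy implies can be pictured as a simple tree. The following Python sketch is only an illustration under stated assumptions: the particular nesting shown (which domain sits under which) is one plausible arrangement chosen for the example, not the hierarchy the article itself develops in the sections that follow.

```python
# Illustrative only: a tree of nested ethics domains. The specific nesting is an
# assumption for this sketch, not the article's final hierarchy.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Domain:
    name: str
    subdomains: List["Domain"] = field(default_factory=list)

hierarchy = Domain("ethics (moral philosophy)", [
    Domain("applied ethics", [
        Domain("science ethics"),
        Domain("technology ethics", [
            Domain("engineering ethics"),
            Domain("computer ethics", [
                Domain("information ethics"),
                Domain("data ethics", [Domain("big data ethics")]),
                Domain("AI ethics", [Domain("machine ethics")]),
                Domain("robot ethics"),
            ]),
        ]),
    ]),
])

def show(domain: Domain, depth: int = 0) -> None:
    """Print the tree with indentation reflecting how deeply a domain is nested."""
    print("  " * depth + domain.name)
    for sub in domain.subdomains:
        show(sub, depth + 1)

show(hierarchy)
```

The point of such a representation is not the code itself but the structural claim it makes visible: questions raised at a lower node are, in many cases, already answered at a node above it.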

While we perceive the proliferation of tech ethics as unfortunate, there are several seemingly plausible justifications for it. We end the article by discussing four such justifications and offering our replies.

2 The Core of Ethics

Ethics as a general object of study is originally positioned in the discipline of philosophy and more specifically in moral philosophy (Copp, 2005). ‘Ethics’ and ‘morality’ are often used interchangeably, and we follow Singer (2011) in conflating the terms, as ethics is fundamentally about making moral judgements. Ethics is a discipline as old as philosophy itself, but the structured approach to ethics as we use the concept today tends to be traced back to Aristotle (2014) and his Nicomachean Ethics. Ethics as a concept consists of a wide range of branches and theories, and we must be able to distinguish between these before we proceed to review the main types of technology ethics. The primary distinctions we focus on are the forms of ethics and the different ethical theories.

There are four primary forms of ethics, as shown in Fig. 1. Meta ethics is the most abstract form of ethics and deals with the origins and study of ethics, and whether or not there are moral truths at all (Copp, 2005). Descriptive ethics is about describing what a particular set of people believe to be right or wrong, without necessarily connecting this to any underlying theory or comprehensive conception of morality. Normative ethics, on the other hand, is about how people should act (Copp, 2005), and this is the domain of ethics where the three main ethical theories (utilitarianism, deontology, and virtue ethics) are debated and refined. Finally, there is applied ethics, which is the application of normative ethical theories to particular circumstances. Applied ethics includes the different forms of technology ethics we will here pursue and all other kinds of practical ethics (Singer, 2011). There are many kinds of applied ethics in addition to the ones we present below, such as medical ethics, business ethics, research ethics, care ethics, and migration ethics.

Figure 1. The basic concepts in ethics

3 The Ethics of Science and Technology

While much of ethics is abstract, applied ethics focuses on concrete moral issues (Copp, 2005). Our focus in this article is on the need for different types of applied ethics theories or domains of inquiry, which we call ‘domain-specific ethics’. Applied ethics tends to take the form of norms, guidelines, and frameworks. The Mertonian norms of science (Merton, 1973), for example, exemplify such an approach. Ethics, in applied form, often entails the codification and systematisation of what someone has discovered through philosophical analysis, which can at times be inaccessible to non-philosophers. There is often a division of labour between those who do meta ethics and normative ethics and those who do applied ethics, with the latter drawing upon the former to generate practical and often actionable insight for practitioners. There is also an important division of labour between the ethicists who develop norms and guidelines—who codify ethical considerations for a particular domain—and those who must apply and adhere to them (Sætra and Fosch-Villaronga, 2021). Developers need not necessarily deal with abstract ethics but can adhere to guidelines and checklists. Political institutions may or may not make the ethicist’s proposed codified ethics law or support it in other ways (Sætra and Fosch-Villaronga, 2021). The key applied ethical questions arising from the use of technology tend to take the following form: will the use of an algorithm in a particular setting result in any harm or benefit to humans? What is the responsibility of a software developer to those affected by their technology? Is it OK to make machines that deceive humans? Can and should machines be designed to behave morally? And so on.
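
To illustrate what the codification of applied ethics into developer-facing checklists can look like in practice, here is a minimal sketch. The questions, fields, and workflow are invented for the example and are not drawn from any particular published framework.

```python
# Hypothetical sketch of an applied-ethics checklist a development team could track.
# The questions below are illustrative placeholders, not an established standard.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    question: str
    addressed: bool = False
    notes: str = ""

checklist = [
    ChecklistItem("Could the system's decisions harm or benefit the people it affects?"),
    ChecklistItem("Who is responsible for outcomes the system produces?"),
    ChecklistItem("Does the system deceive users about what it is or what it does?"),
    ChecklistItem("Which existing guidelines or regulations apply, and have they been reviewed?"),
]

def open_items(items):
    """Return the questions the team has not yet addressed."""
    return [item.question for item in items if not item.addressed]

# Example workflow: record one answered item, then list what remains open.
checklist[0].addressed = True
checklist[0].notes = "Harm/benefit analysis reviewed with affected user representatives."

for question in open_items(checklist):
    print("Open item:", question)
```

A checklist like this does not replace the underlying ethical analysis; it is the downstream artefact of it, which is precisely the division of labour described above.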

We limit our analysis to the overarching question of how to understand the ethical implications of our use of technology, and this includes both efforts to analyse these implications and to make sense of how people using technology—and the machines themselves—might act ethically. This sounds simple enough, but as we will show, many claims about the need for separate ethics for different types of technology have emerged. To make sense of this jungle of domain-specific ethics, we briefly review and analyse several of the most popular types of tech ethics. We describe the main features of each type and summarise their nestedness and relations to the other types. Our purpose, in doing so, is not merely descriptive. Through our review of these types of tech ethics we attempt to (a) highlight the conceptual confusion arising from the proliferation of subdomains and to (b) tentatively demarcate meaningful limits that could reduce the amount of overlap between the different subfields. We ultimately conclude that there are limits to this exercise in conceptual hygiene.

By beginning with engineering ethics, we omit the ethics of science. We acknowledge that this is one of the foundational types of applied ethics. Science is value-laden and political (Merton, 1972 ; Rollin, 2006 ; Sætra, 2018 ), and while not all technology-related activity involves science, much of it does and will partly be covered by science ethics, and the related research ethics. Nevertheless, our review of domain-specific ethics is already quite extensive and the fact that the subfields discussed below could, on some occasions, be nested within science ethics merely serves to reinforce our larger argument. We also acknowledge that our presentations of the different domain ethics are necessarily simplified, and we are unable to describe in detail the nuances of and philosophical differences between the researchers and positions described as belonging to each domain. Such differences will certainly be important when approaching the analysis of the implications of a particular technology, but they are of less import with regard to understanding the potential problems relating to the proliferation of domain ethics and a fragmentation of the field of technology ethics.

3.1 Engineering Ethics

The first domain-specific ethic we consider is engineering ethics. This is a professionally oriented form of ethics, aimed at engineers with the purpose of promoting ethical practice. The aim is not primarily to promote a theoretical understanding of the ethical implications of engineering work. Harris et al. (2013), for example, present what they call their profession’s perception of the primacy of the public good while discussing prohibited actions and the prevention of harm. They also discuss what they refer to as ‘aspirational ethics’ and the promotion of well-being through, for example, design. ‘Ethics-by-design’, for example in the form of value-sensitive design (Cummings, 2006), is an important element of engineering ethics, which highlights its tight relation to design ethics (Costanza-Chock, 2020).

Engineering ethics is a vital part of the education of engineers, and ‘teaching engineering ethics’ is, according to Harris et al. (1996), seen as teaching engineering. They refer to professional ethics and engineering ethics as ethics for a particular group and state that ‘engineering ethics applies to engineers (and no one else)’ (Harris et al. 1996). This highlights how some domain-specific ethics are narrower and more specific than other forms of ethics. Science ethics, for example, is to a larger degree discussed by non-scientists, and AI ethics tends to be just as much a framework used by non-practising researchers and regulators to understand the implications of AI as it is a codification of professional ethics for software developers.

That said, engineering ethics is a domain that, in theory, covers everything covered by the other domain-specific ethics discussed below. For example, the introduction to engineering ethics by Fleddermann (2004) begins by discussing how an incident involving a Ford Pinto car led to harm to humans, and how Ford was charged in a criminal court because it was held responsible for the design choices that determined the level and likelihood of harm. The Pinto was not autonomous, but the fundamental questions are the same as those asked in many fields of tech ethics: given that technology can cause harms and/or provide benefits (broadly defined), who is responsible for what in the design, production, and use of technology (Sætra, 2021a)?

3.2 Technology Ethics

Technology ethics is the highest-level technology-exclusive form of applied ethics. It is also informed by and heavily overlapping with work often referred to as philosophy of technology (Ellul, 1964; Mumford, 1934; Winner, 1977). While seemingly similar to engineering ethics, it is less oriented towards professionals and much broader. Jonas (1982), in line with the argument proposed in this article, asked whether technology is sufficiently ‘novel and special’ to warrant its own brand of ethics, and answered in the affirmative. He gave four reasons for this. First, he argued that technology defies neutrality, as it is not only malevolent use of technology that is problematic, but also the long-run effects of what we would consider beneficial use. Second, he argued that technology has a tendency, after careful beginnings, to become ‘an incessant need of life’ (Jonas, 1982). Third, he highlighted technology’s unique magnitude and ability to amplify human action, something that breaks with historical anthropocentric ethics as the biosphere is increasingly affected by humans through technology. Fourth, he noted that technology raises new and fundamental existential ethical questions, such as ‘whether and why there ought to be a mankind?’ However sound or unsound we find Jonas’s argument, he exemplifies the question that should always be asked when one ponders whether a new ethic is necessary: what makes a particular technology distinct from something already covered by an existing ethic?

Technology ethics is nowadays often portrayed as the discipline that contains lower-level applied tech ethics, such as machine, robot, and computer ethics (Gordon and Nyholm, 2021). A clearly defined ethics of technology is, however, relatively hard to come by, despite the fact that many combine the terms ethics and technology. Tavani (2016), for example, authored the book Ethics and Technology, but opts from the outset to use the term cyberethics instead of technology ethics. While cybertechnology, understood as computing and communication devices, is certainly central to modern technology, this takes us closer to a specific form of computer ethics and away from technology ethics more generally.

3.3 Computer Ethics

Computers are technology, and consequently encompassed by technology ethics. Some describe computer ethics as concerned with ‘commercial behaviour involving computers and information’, including issues of data security and privacy (Gordon and Nyholm, 2021). Footnote 1 If information ethics is defined as relating to issues clearly linked to computer-mediated information, seeing it as a subdiscipline makes sense. However, issues of surveillance and privacy, as mentioned above, are clearly not restricted to digital information, and this creates some overlap and uncertainty regarding which branch of tech ethics privacy- and surveillance-related questions belong to.

A more fruitful approach is found in Tavani (2016), where computer ethics is seen as the ethics of computing machines, unrelated to the issues of how such machines communicate. But since communication is now fundamental to much computing technology, it is difficult to maintain this distinction. Furthermore, computers are seen as the basic technological foundation of anything digital, and computer ethics will consequently be considered a high-level ethics related to how digital technologies are used.

In the broader landscape of ethics mapped in this article, a pertinent question is which questions belong to computer ethics that are not already covered by its subdisciplines, such as AI ethics and information ethics. We argue that, for the concept to be useful alongside other forms of ethics, computer ethics should in fact mainly relate to the materiality of computing: how machines are designed and built, their energy use, distributional effects, and accessibility. This would indicate that much of what is discussed in relation to the environmental sustainability of AI (Brevini, 2021; Sætra, 2022; van Wynsberghe, 2021), for example, is in reality more properly a question of the sustainability of computing.

Computer ethics can be taken to be the basic domain in which we ask how computers change what human beings can do and how we do things (Moor, 1985). Furthermore, if we follow the approach of Johnson (2004), and use the term to describe all examinations of the ethical issues related to an ‘information society’, it would then subsume professional ethics for computer scientists and engineers, issues of privacy, cybercrime, VR, and so on (Johnson, 2004). If such a definition stands, most of the forms of ethics described below would be superfluous and in reality a part of computer ethics. Many would count, primarily, as case studies in computer ethics.

3.4 AI Ethics

Computers are a precondition for building artificially intelligent systems. Indeed, the origins of computing technology and artificial intelligence are inextricably linked thanks to the pioneering work of Turing on computation and thought (Turing, 2009). As a result, AI ethics could be seen as a lower-level form of tech ethics (below computer ethics) with relatively high specificity. But since AI is currently a concept in vogue, the term ‘AI ethics’ garners a lot of attention, and this, in turn, tempts researchers to describe various challenges that more properly relate to other types of ethics as AI ethics. While in vogue, AI as a concept has a long history, with the term first used in 1956 (Russell and Norvig, 2014) and often traced back to earlier work of researchers such as Turing (2009). The notion of autonomous technology is also relevant for demarcating AI ethics, as autonomy is tightly linked to various conceptualisations of intelligence (Winner, 1977).

Today, AI ethics is argued to encompass a wide array of issues, and there is a need to distinguish which questions belong to AI ethics proper and which belong to other ethics domains. AI ethics entails, according to Gordon and Nyholm (2021), issues including, but not limited to, the design and use of autonomous systems in general (both weapons and other systems), machine bias, privacy and surveillance, governance, the status of intelligent machines, automation and unemployment, and even space colonisation. According to Coeckelbergh (2020, p. 7), ‘AI ethics is about technological change and its impact on individual lives, but also about transformations in society and in the economy’. In his book, he includes challenges related to superintelligence, the difference between humans and machines, the potential for moral machines, issues related to data, privacy, bias, machine responsibility, policy, and even the meaning of life. Müller (2020) similarly discusses AI, but in combination with robot ethics, and includes specific discussions of bias, opacity, privacy and surveillance, machine ethics and machine morality, and the singularity. To top this off, some even argue that the scientific communication of advances in AI, and the selection of imagery and stock photos of AI or AI-related themes, are part of AI ethics (Romele, 2022).

If we take one step back, and consider AI to be software capable of either thinking or acting humanly or rationally (Russell and Norvig, 2014), it seems pertinent to drastically reduce the number of topics seen as properly relating to AI ethics. While certain AI systems are based on machine learning approaches that entail analysing data, issues of privacy and surveillance seem more properly conceived as belonging to data ethics, or even to a form of ethics not restricted to privacy and surveillance as digital phenomena at all. Furthermore, robot ethics, machine ethics, and information ethics deal directly with subsets of the issues that are argued to belong to AI ethics.

The core topics remaining are those related to how intelligent systems allow us to do new things and to do things differently. This could relate to automation and employment, as mentioned by Müller ( 2020 ). However, it would be restricted to automation based not on replacing human force with animal or machine force, for example, but on systems performing tasks with a cognitive element that previously required humans. Other issues of automation belong more properly to technology ethics in general. Long-term existential or x-risk issues such as superintelligence and the singularity can be properly said to be part of AI ethics, although they equally branch out into discussions of other technologies (biotech, nuclear weapons, and so on). AI will in turn clearly be relevant for understanding issues of robotics, but AI ethics should only be concerned with issues relating to systems independent of embodiment, as embodied systems will create novel challenges best analysed through specific forms of robot ethics.

3.5 Robot Ethics

Robots—machines that sense and purposefully act with a certain degree of autonomy in a particular environment (Winfield, 2012 )—are necessarily driven by some form of artificial intelligence. They are, however, always embodied. Does embodiment by itself make a difference such that we ought to distinguish robot ethics from AI ethics? Lin et al. ( 2011 ) argue that it does, as ‘advanced robotics brings with it new ethical and policy challenges’ divided into three main categories: safety and errors, law and ethics, and social impact. But are these really novel? Aren’t they true of all forms of technology?

It is worth noting that the ethics of robots was being contemplated long before the contemporary fad for AI ethics and robot ethics (Winfield, 2012). While it does not amount to a codified ethics, the science fiction literature is replete with analyses of the potential ethical implications of both robots and AI. Isaac Asimov’s three laws of robotics from the 1942 short story Runaround (Asimov, 2013) are perhaps the most famous example of this:

“We have: One, a robot may not injure a human being, or, through inaction, allow a human being to come to harm.” “Right!” “Two,” continued Powell, “a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.” “Right” “And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”

The question nonetheless remains: what distinguishes robots from ‘regular’ AI? In theory, Asimov’s laws could easily be said to apply to AI as well, even if embodiment makes issues of harm and protection even more pertinent.

Lin et al. ( 2011 ) argue that a robot’s ability to ‘directly exert influence on the world’ generates sufficient novelty, and their analyses encompass a wide array of robot applications, such as labour, service, military, medical, education, care and companionship, and transportation. This means that social robots, military robots and autonomous weapons systems (AWS), and autonomous vehicles are likely candidates for analyses under the robot ethics umbrella. Some have proposed separate ethics for specific types of robotic system, e.g. a specific ethics for AWSs (Horowitz, 2016 ) or ‘autonomous driving’ (Geisslinger et al. 2021 ), but we consider those to be encompassed in robot ethics.

The first category of ethical and social issues pertaining to robots relates to their safety and potential for error. But what Lin et al. (2011) list in this category reads quite similarly to what is often discussed under computer ethics, namely, considerations of what happens when computer scientists make errors or when unforeseen consequences emerge as technology is applied in new settings and at large scales. They argue, however, that the magnitude of the damage potentially done is larger when robots physically operate in our environment than when software errors lead to the loss of data, for example. It could, however, easily be shown that a wide range of software errors can also have fatal outcomes, so direct physical harm seems insufficient grounds for a new ethic.

The second category relates to law and ethics and includes issues such as responsibility when robots, for example, cause harm. However, this applies just as much to computers in general, and in particular to AI, as it does to robots. There has been much debate over the presence of so-called responsibility gaps resulting from the unpredictable nature of modern AI (Matthias, 2004 ; Sætra, 2021a ). Some connect this directly to robots (Gunkel, 2017 ), but if the issue stems from the nature of advanced AI (namely, its capacity for autonomous decision-making), then this topic belongs to AI ethics, and not robot ethics.

Third, there is the social impact of robots. This is an area in which it seems more likely that robots constitute a special ethical case, as their physical presence in human social environments can have various implications not directly comparable to the presence of computers or other non-autonomous devices. Some have focused specifically on human likeness, anthropomorphism, and how social robots can change human beings and society (Danaher, 2020a ; Sætra, 2021c ), while others focus on the social implications of autonomous vehicles and weapon systems (Fleetwood, 2017 ; Horowitz, 2016 ). While social robots are often portrayed as particularly problematic, there is also an argument to be made that social AI is capable of generating many of the same challenges as social robots in interactions with human beings (Sætra, 2020 ). Having a relationship with an app on one’s smartphone is perhaps similar to having one with a robot.

In short, robot ethics can be described as the ethics of how human beings ‘design, construct, and use robots’ (Gordon and Nyholm, 2021), where robots are understood as embodied AI systems. However, many ethical questions seemingly caused by robots can and should be treated as issues of AI, computer, or technology ethics.

3.6 Machine Ethics

Closely linked to, but perhaps separable from, AI and robot ethics is the field of machine ethics. The motivating question behind this field of inquiry is: To what extent can machines be ethical, or deal with ethical challenges? Answering that question necessitates a new field of inquiry, according to M. Anderson and Anderson ( 2011 ). The goal of this field, Anderson ( 2011 , p. 22) states, is to make ‘a machine that follows an ideal ethical principle or set of principles in guiding its behaviour’. Why is that important? In one of the originating articles in the field, Allen et al. ( 2006 ) use the trolley problem to say something along the following lines: since an autonomous machine might run into ethical dilemmas akin to the trolley problem, we must explore how we can make machines ethical agents capable of ethical decision-making.
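
To make concrete what it can mean to have a machine ‘follow an ideal ethical principle or set of principles in guiding its behaviour’, the following is a minimal, purely illustrative Python sketch. It is our own construction, not a method proposed by the authors cited here: candidate actions are filtered through explicitly codified constraints (loosely modelled on Asimov’s laws, quoted in the previous subsection) before the machine optimises for its task goal. Names such as harms_human and task_value are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool
    disobeys_order: bool
    endangers_self: bool
    task_value: float  # how well the action serves the machine's task

# Hard constraints the machine must satisfy; a fuller proposal would also
# need a priority ordering for cases in which the constraints conflict.
CONSTRAINTS = [
    lambda a: not a.harms_human,     # analogue of the First Law
    lambda a: not a.disobeys_order,  # analogue of the Second Law
    lambda a: not a.endangers_self,  # analogue of the Third Law
]

def choose(actions):
    """Return the permissible action with the highest task value, if any."""
    permissible = [a for a in actions if all(c(a) for c in CONSTRAINTS)]
    return max(permissible, key=lambda a: a.task_value, default=None)

options = [
    Action("push bystander aside", True, False, False, 0.9),
    Action("emergency stop", False, False, True, 0.6),
    Action("slow down and swerve", False, False, False, 0.5),
]
print(choose(options).name)  # prints "slow down and swerve"

Even this toy example makes visible the philosophical work such systems presuppose: someone must decide what counts as harm, how to weigh the constraints, and what the machine should do when no permissible action exists.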

This form of applied tech ethics is to be contrasted with other types of tech ethics. The target of most applied tech ethics is humans and human institutions: how can they be improved to address ethical challenges? In machine ethics, the target is the machines themselves: how can we codify ethics into autonomous machines or train these machines to act ethically? That said, this framing of machine ethics is potentially problematic. As argued in Sætra (2021a), presenting machines as autonomous entities partly beyond the control, and perhaps even beyond the responsibility, of the humans who make and deploy them is both controversial and potentially misleading.

While its proponents argue that machine ethics is about ‘adding an ethics dimension’ to autonomous machines (M. Anderson and Anderson, 2011), all autonomous machines arguably already have an ethical dimension, as all tasks performed by machines in a sociotechnical system have consequences of ethical value. Moor (2006) explores to what extent machine ethics even exists. He argues that it is reasonable to see computers as ‘technological agents’. He perceives all computing technology to be normative by its nature: computers are designed to do certain things and, consequently, follow a certain ethical code, even if this is only implicit. Still, Moor accepts that it is important to distinguish an inquiry into the ethical impact of machines (and technology more generally) from the agenda of putting ‘ethics into a machine’. The former is computer ethics, while the latter is machine ethics.

With such an interpretation, machine ethics can be portrayed as a field in which the moral status of machines as potential moral agents is examined, with a particular emphasis on modelling and codifying existing human moral systems or new moral systems into autonomous machines, and so may have a distinctive identity as a subdomain of technology ethics. Machine ethics also has branches of its own, with fields such as machine medical ethics emerging to focus on ethical machine behaviour in different domains (Kochetkova, 2015 ).

3.7 Information Ethics

According to Floridi (1999), ‘standard ethical theories’ cannot deal satisfactorily with computer ethics problems. Floridi argues that computer ethics thus needs a new foundational ethics on which to build. This new foundation is what he terms ‘information ethics’, which he sees as a “particular case of ‘environmental’ ethics or ethics of the infosphere” (Floridi, 1999). Any information entity could, in this ethics, be considered worthy of moral recognition and status. It is thus clearly a theory aimed at expanding our moral circles, which in turn explains why it is portrayed as a particular form of environmental ethics. What is much less clear, however, is the value of this form of environmental ethics. Is it really a necessary foundation for computer ethics and all its branches, or might ‘standard’ ethical theories and environmental ethics be capable of more than Floridi suggests?

One example of why information ethics could be relevant is the evaluation of the moral status of an artificial agent (Capurro, 2006). Say that we are building a social simulation, using agent-based modelling to explore issues related to the emergence of social effects. In this process, we construct a number of artificial agents with rules to determine their actions, including ‘goals’ they might be coded to optimise. Do such figments of our imagination have any value? Can we do with them as we please, including setting them free in worlds we create, in which they might be attacked by other agents, and even ‘killed’ through commands such as ‘If (energy = 0) [die]’? Would things change if we say that the agents in question are far more sophisticated AI agents living in some metaverse in a not-too-distant future? Even if such agents have no biological life, and cannot necessarily suffer or experience joy in a specific human or animal sense, they are indeed information entities, and thus potentially recipients of moral consideration.
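
For readers less familiar with agent-based modelling, the following minimal Python sketch (our own illustration, with arbitrary parameter choices) shows the kind of artificial agent just described: simple rules, an ‘energy’ resource, and removal from the simulation, the analogue of ‘If (energy = 0) [die]’, once that resource is exhausted.

import random

class Agent:
    def __init__(self, energy=10):
        self.energy = energy

    def step(self):
        # Each tick the agent forages with mixed luck and pays a living cost.
        self.energy += random.choice([-2, -1, 0, 1])

agents = [Agent() for _ in range(100)]
for tick in range(50):
    for a in agents:
        a.step()
    # Agents whose energy has run out are removed ('killed') from the world.
    agents = [a for a in agents if a.energy > 0]

print(f"survivors after 50 ticks: {len(agents)}")

Nothing in the code itself marks the point at which such entities would warrant moral consideration, which is precisely the question information ethics raises.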

In this form, information ethics connects quite directly with robot and AI ethics, which encompass questions relating to the moral value and status of robots and AI (Gunkel, 2014, 2018). While embodiment could be said to matter, the basic cognitive capabilities of robots and of the artificial agents just mentioned are exactly the same, which makes it pertinent to ask whether questions related to the moral status of artificial agents most properly belong to information ethics, robot ethics, or AI ethics. However, we could just as well ask whether information ethics is really necessary for asking these questions at all, or whether, for example, environmental ethics has already provided us with the required framework for analysing moral status and various forms of inclusion in moral communities (Nolt, 2014).

3.8 Data Ethics

Most attempts to define a domain-specific ethics involve highlighting how special the domain is. So it is with data ethics. Hand (2018) states that ethical issues related to data are ‘more challenging’ than issues related to other technologies. This is, he states, because data and data science are ubiquitous, and because the issues involved are so complex. Right or wrong, data is central to modern society, and Hand (2018) attempts to capture a wide range of issues under the data ethics umbrella, including what data is, who owns it, consent, confidentiality and transparency, trustworthiness, and privacy.

There are ample principles, checklists, and guidelines for data ethics. Drew (2016, p. 4) presents a set of principles, such as ‘use data and tools that have the minimum intrusion necessary’ and ‘keep data secure’. Such general principles are hopefully universally accepted, and many have also been codified in law, e.g. in the EU’s GDPR. Hand (2018, p. 189) refers to the checklists of other unnamed domains of ethics and makes his own, with entries such as ‘store data securely’ and ‘be clear about the benefits of the analysis, and who derives the benefits.’ One attempt at summarising the principles and demarcating data ethics is the data ethics canvas by the Open Data Institute, aimed at providing practitioners with the tools and questions required to avoid ‘adverse impacts on people and society’ (Open Data Institute, 2021). From an outsider’s perspective, there appears to be a constant scramble to present and be the originator of the best framework, and Franzke et al. (2021), for example, argue for the benefits of their Data Ethics Decision Aid (DEDA) over the data ethics canvas.

There are consequently many varieties of data ethics, and some, such as the ‘data ethics of power’ (Hasselbalch, 2019), are less prescription- and checklist-oriented and more focused on elucidating how data relates to power and changing power relations. All in all, however, data ethics might most fruitfully be understood as a practically oriented guide for practitioners and users of computing technology, AI, and robotics, as all the broader questions crammed under this umbrella are also analysed by other domains.

It is also worth noting that while data ethics purports to be the domain of privacy and surveillance, issues related to these phenomena are much older than modern data science (Westin, 1967), and the questions involved in understanding them should perhaps not be limited to ‘data ethics’. The ethics of privacy (Moor, 1999; Siegel, 1979) and surveillance (Macnish, 2018; Marx, 1998) are established domains that can provide data scientists with both historical and new insight into these phenomena without their having to be connected specifically to data ethics. Furthermore, attempts to brand even more specific data ethics have been made, such as big data ethics (Richards & King, 2014; Zwitter, 2014) and the even more specific ‘ethics of biomedical big data analytics’ (Mittelstadt, 2019a). However, these endeavours are in reality either specific instances of data ethics or attempts to understand how data is currently used in combination with new forms of analysis, or AI, and we consequently believe the latter is encompassed by data ethics and/or AI ethics.

3.9 Digital Ethics

Digital ethics is a term not used quite as often as many of the others, but it does capture a segment of ethical questions not directly belonging to the other domains detailed above. In particular, if we use the term in line with Capurro (2009), digital ethics, or digital media ethics, relates to the use of information and communication technology, and captures questions related to the use of, for example, mobile phones and navigation services. Digital ethics can consequently be seen less as a technical, checklist-oriented ethics and more as one oriented towards the challenges ‘raised by digital culture’ and issues related to participation, equitable access, and the implications of our use of digital media (Luke, 2018). As seen in the Oxford Handbook of Digital Ethics (Véliz, forthcoming), the term has been understood to subsume all forms of ethics described in this article, including Internet ethics, AI ethics, and robot ethics. This broad use of the term is seemingly also found in Aggarwal (2020), who describes how advances in AI change the ethical implications of digital technologies. Muddying the waters further, Aggarwal proceeds to state that intercultural digital ethics is a subfield of both digital ethics and information ethics.

Another usage of the term is found in Whiting and Pritchard (2018), where digital ethics is defined as ‘the moral principles or rules of behaviour that govern and guide qualitative Internet research from its inception to publication and the curation of data’. Such a definition would, however, position digital ethics as a branch of science ethics, and more specifically of research ethics.

3.10 Internet Ethics

The final domain-specific ethic we include is Internet ethics, which is seen as a low-level technology ethics nested in digital ethics. In Langford (2000), Internet ethics is presented through explorations of how the Internet relates to privacy and security, law, the potential for fostering moral wrongdoing, information integrity, democratic implications, and professional responsibilities. This suggests that the term is used both to describe the agenda of analysing the implications of the Internet and to denote a professional ethics for engineers. The term is not as established as many of the other domain ethics discussed here, and Tavani (2016) suggests that ‘cyberethics’ is a preferable term that covers more than just the Internet, including other interconnected communication technologies.

While Internet ethics is already low level, this does not stop others from developing even lower-level domain ethics. Social networking ethics (Lannin and Scott, 2013 ; Vallor, 2012 ), for example, and even search engine ethics (Tavani, 2012 ), have been proposed.

3.11 The Great Chain of Technology Ethics and Neighbouring Ethics

The preceding considerations lead to the overview of technology-related domain ethics summarised in Table 1 . This categorisation shows how one might think about the hierarchical relationships between the different domains of technology ethics. It represents our attempt to make sense of the proliferation of subdomains. However, as noted in the preceding text, the way in which the subdomains are understood or applied in the philosophical, legal, and regulatory literature is not as conceptually pure or logical as we might like.

The relationships between the various types are also shown in Fig.  2 , which presents the tentative distinction that has emerged between ethics aimed at directing human action and those directed at other entities, specifically the technologies themselves. We have shown how, for example, engineering ethics is aimed at guiding the conduct of engineers, while machine ethics is about the ethical behaviour of machines. This hierarchy coupled with a proper understanding of how the domains relate to each other and their goals can help identify which forms of domain-specific ethics are novel enough to warrant unique research agendas and which are already sufficiently captured by higher-level ethics.

Fig. 2: The great chain of technology ethics

While we argue that even some of these forms of ethics should play a more marginal role than the modern discourse on technology ethics might suggest, the real challenge is further exacerbated by the fact that we have already excluded a number of proposed domain-specific ethics, such as social network ethics, search engine ethics, cyberethics, and programming ethics.

We have chosen not to go into detail on the various types of adjacent or supporting ethics. Some of these relate directly to those detailed above. Business ethics, for example, can relate very closely to computer ethics since computing technology is widely deployed by businesses. Care ethics is closely related to robot ethics since one major potential application of robots is in care settings. Environmental ethics is arguably complementary to (and possibly foundational to) both robot and information ethics. We have already noted that privacy and surveillance ethics already covers many of the bases purportedly covered by data ethics, and social and distributive ethics arguably provides the foundational analyses so often foregrounded in various forms of data ethics and AI ethics.

While we are admittedly sceptical of the importance of many of these domain-specific ethics, we do not argue that they are all superfluous and that we only need one, or very few, types of general ethics. The complexities of new technologies and the business operations of those who use them will sometimes require analyses based on intimate knowledge of the technology in question and a certain degree of specialisation. Furthermore, case studies involving the application of higher-level ethical principles or theories to particular technologies will always be needed. The question, then, is how to evaluate the need for particular types of domain-specific ethical inquiries or theories, as opposed to allowing for more specialisation within the more foundational domains or case studies arising from them.

4 An Ethical Division of Labour

As the preceding section has shown, there is significant potential overlap between the various domains of ethics related to modern technologies such as social robots, AI, and big data. While many branches of ethics have emerged for a reason, we have also shown that there is much confusion and a lack of consistency in how the various terms are used, as seen, for example, in the Oxford handbooks on digital ethics (Véliz, forthcoming) and AI (Dubber et al. 2020). In addition, certain technologies are associated with significant hype, and this could easily lead ethicists who are unfamiliar with the higher-level traditional technology-related ethics, or who seek attention and impact within and outside academia, to align their work with whatever hype terms are in vogue at any point in time. Big data has been an obvious example for some years, now superseded by AI, which might in turn give way to various forms of virtual/extended reality and crypto, as the metaverse and web3 seem poised for prominence. Academic specialisation is not necessarily a bad thing, but we argue that the proliferation of technology ethics domains can be, and we should avoid excessive proliferation.

4.1 The Problems of Proliferation

Our position is that the negative consequences of proliferation of domains often outweigh the positive ones, and there are three main arguments in favour of such a position.

Firstly, the conceptual boundaries between the subfields are neither well defined nor respected. This leads to general confusion and a lack of consistency, as people from purportedly different domain-specific ethics proceed to work on the same issues. Privacy ethics is a good example of how researchers and practitioners in different domains work on the same topic. People working in Internet ethics, for example, discuss privacy-related issues arising from the tracking of online information and the monetisation of this information by social media platforms. People working in AI ethics discuss the very same issues as they pertain to, for example, facial recognition technology and predictive analytics services. The discussions are similar, perhaps even equivalent. Part of the reason for this is that AI technology has become seamlessly blended into many online services. But problems arise as soon as people unduly characterise the challenges generated by AI as novel or specific and neglect to connect their discussions to foundational insight into the nature of privacy. If AI ethicists proceed to generate their own conceptions of privacy and surveillance, and the same occurs in, for example, data ethics and digital ethics, the risk of inconsistency emerges. Furthermore, AI ethics is, as we have shown, presented as a domain encompassing many topics arguably belonging to higher-level ethics, such as technology or computer ethics, and this creates confusion as to what belongs where. While case-based analyses of problems such as privacy in a particular service, autonomous vehicles, and robots in public spaces are clearly necessary, we take issue with attempts to compartmentalise such questions in specific domains.

Secondly, proliferation leads to a duplication of effort and a constant reinventing of the wheel. This is related to the first point, as insufficient demarcation leads both new and old practitioners to create new foundations and approaches within lower-level forms of ethics that ignore what came before. This way of doing compartmentalised and siloed ethics is inefficient and wasteful, as similar and overlapping knowledge is produced without sufficient interaction. This is a problem even within domain-specific ethics, as shown by the various analyses of the proliferation of guidelines and principles for responsible, ethical, and trustworthy AI (Floridi and Cowls, 2019; Jobin et al. 2019), which tend to repeat many of the same points (Dotan, 2021). We extend this argument: not just within domain-specific ethics, but also between them, similar and overlapping topics become the subject of guidelines and principles rooted in ethics at too low a level to facilitate knowledge sharing and interdisciplinary debate about the consequences of technology.

Not only is it unnecessary to invent the wheel over and over; doing so will arguably also lead to the constant invention of poor wheels, rather than improvement of the basic concepts. One example comes from the domain of robot ethics, in which various authors, in a large number of different outlets, debate the potential for robots to be friends, lovers, and romantic partners. To a large extent, the debates about those different robots work along the same basic lines: some people argue that robots currently lack the mental properties associated with human friends, sexual partners, and lovers; others argue that they do not, or that they may acquire those properties in the future (Danaher, 2019, 2020b; Gunkel, 2018; Sætra, 2021b). Contributors to the debate simply list the same basic mental properties over and over again and debate their actual or possible instantiation in a robot. There is very little progress and much duplication of effort. Why does this happen? One possibility is that new researchers from different fields continuously stumble upon topics that seem novel from their perspective (love, friendship, sex, workplace relations, or even general sociology, philosophy, etc.), but that are already being dealt with in other disciplines, or, in the best of circumstances, in specialised interdisciplinary arenas. Journal editors and reviewers are unaware of these pre-existing literatures and thus give the green light to a new take on an old, well-debated issue. Approaching a problem from different angles, or multiple times, is not necessarily a bad thing, but to provide scientific value, it should be purposeful and based on extant knowledge. Cross-validation from different disciplines and philosophical perspectives, and replication in general, is immensely valuable for separating the valuable from the discardable in the extant literature. However, while a fragmented field of technology ethics might incidentally have such positive effects, the benefits will be limited if such quasi-replication is performed without knowledge of that which is supposedly replicated.

The community of editors and researchers involved in applied technology ethics consequently has a shared responsibility to search for existing work and to use reviewers who know the field of study. This will allow newcomers to better make use of existing knowledge, while also connecting the various disciplines that need to be involved in the ethics of technology, which will often, by necessity, be interdisciplinary.

Thirdly, there is a danger that participants overlook or ignore more fundamental ethical insights and truths in their zeal for carving out a new domain-specific ethics. Rather than constantly reinventing the wheel, ethicists could choose to rediscover and apply foundational insight from higher-level ethics. By doing so, they would be adhering to the scientific ideal of accumulation, and by standing on the shoulders of giants, they would arguably be able to get much farther into what is truly unique about their lower-level case studies (Merton, 1942). AI ethics is once again an interesting example. Much of what is now labelled AI ethics has been expertly detailed by writers in the philosophy and ethics of technology, such as Mumford (1934), Ellul (1964), and Winner (1977). While their accounts may be wordy, and contain very few checklists, the key questions they address reveal how non-novel the challenges purportedly attributed to cutting-edge AI really are. Winner (1977, pp. 326–327), for example, discusses principles related to the need to design autonomous technology in a way that makes it intelligible and accessible to those it affects, and argues that flexibility and mutability are crucial for avoiding ‘circumstances in which technological systems impose a permanent, rigid, and irreversible imprint on the lives of the populace’.

Or consider the work on the problem of bias in AI systems. This has become a major topic of debate and concern in recent years, much of it stemming from a landmark report by the public interest journalism platform ProPublica on bias in recidivism algorithms used by the US criminal justice system (Fazelpour and Danks, 2021). While there is value to the recent work on bias, in particular the so-called impossibility results derived by mathematicians and computer scientists (Kleinberg et al. 2018), a lot of the conceptual terrain on bias and fairness was mapped long before the modern AI hype cycle. For instance, Friedman and Nissenbaum (1996), in an article entitled ‘Bias in Computer Systems’, addressed many of the basic forms of bias in computing systems, all of which overlap directly with concerns about bias in modern AI systems. The economist and philosopher John Roemer (1998) mapped in detail the incompatibility between different standards of fairness and non-discrimination in his work on equality. And many classic contributions to our understanding of sexism, gender bias, racial injustice, and other forms of discrimination harbour insights that are clearly relevant to the understanding of bias in AI (Benjamin, 2019; D’Ignazio and Klein, 2020; Noble, 2018). There is a danger that these insights are ignored and, again, reinvented because they are found in work on political philosophy and economics, or are associated with technology (computers) that is overlooked by participants in the modern AI ethics debate. People working within a novel ethics silo are too busy debating with their peers to harness the more foundational insights from previous generations.
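
The flavour of the impossibility results mentioned above can be conveyed with a toy calculation (our own construction; the numbers are arbitrary and the sketch illustrates only one well-known trade-off, not Kleinberg et al.’s proofs). When two groups have different base rates, a classifier with identical error rates for both groups will generally not be equally well calibrated for both, so two intuitive fairness criteria pull apart.

def ppv(base_rate, tpr, fpr):
    """Positive predictive value: P(truly positive | predicted positive)."""
    true_pos = base_rate * tpr
    false_pos = (1 - base_rate) * fpr
    return true_pos / (true_pos + false_pos)

tpr, fpr = 0.8, 0.2  # identical true/false positive rates for both groups
for name, base_rate in [("Group A", 0.5), ("Group B", 0.2)]:
    print(name, round(ppv(base_rate, tpr, fpr), 2))
# Prints roughly 0.8 for Group A and 0.5 for Group B: equal error rates,
# unequal calibration, even though the classifier treats both groups 'the same'.

The conceptual point, however, is the one made in the text: the terrain of conflicting fairness standards was mapped well before the current AI ethics boom.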

4.2 Two Criteria for Choosing Technology Ethics

The way out of the predicament generated by the proliferation of ethics consists of two simple criteria. Seeing how applied ethics is rife with checklists and principles, we have made our own very simple principles for labelling and positioning ethical research:

If the questions you address are sufficiently addressed by higher-level ethics, do not align your work with lower-level technologies.

If you are in fact pursuing novel questions, consider whether they are general and apply to other, more basic technologies and questions, and do not always rush to create a new domain-specific ethics attached to lower-level terms and technologies.

The first principle suggests that ethicists should always locate their work in the highest-level domain ethics that covers the questions they address. For example, if they research questions related to general problems facing computer and software engineers, these questions are most likely already partly answered in computer ethics or engineering ethics, and it is beneficial to continue the debate there, rather than to pretend that they belong to AI ethics because AI is some ground-breaking and magical technology and not just new software made by software engineers.

The second principle opens up the possibility that genuinely new domains, which answer new questions and potentially require new approaches, are discovered. While real novelty might be rarer than one imagines, it is clearly also conceivable that something new is found by new generations of ethicists working with new technologies in new societal contexts. However, when this occurs, the work should not automatically be placed in a low-level ethics such as AI ethics just because that is where the hype is currently strongest. Very often, the questions, despite being new, relate to fundamental issues of science, engineering, and technology, and if so, that is where the work belongs.

A shorthand for choosing one’s domain might be to apply Occam’s razor: the highest-level ethics capable of explaining and describing a phenomenon is the simplest, in the sense that it includes no superfluous specification of particular technologies. This principle nicely captures our main point, which is that science and theoretical models should not be needlessly duplicated, and that precedence should be given to simplicity (Duignan, 2021). When choosing between aligning with, for example, AI or with more basic theories involving less complicated technological foundations capable of dealing with an issue, the simpler option should be chosen whenever it works as well as or better than the alternatives. The simplicity of ethical theories is important for fostering understanding and for pedagogical purposes. It is also important for furthering scientific progress, as testing and falsifying a theory is easier the simpler the theory is (Popper, 2005).

4.3 Opposing Views

Before concluding, we must consider some objections to the argument we have made.

First, someone might argue that we need specialised, domain-specific rules to move ethics out of the armchair and into the real world. As we have mentioned, the complexities of technologies and business practices will potentially preclude effective analyses of ethical challenges at too high and abstract a level. For example, getting to grips with the intricacies of algorithmic audits will be a tall order for the ethicist who insists on solving this with a general understanding of technology and without an intimate understanding of how algorithms work and are applied. Abstract normative ethics is fun for philosophers, but it does not always connect with the real-world problems faced by designers and users of technological systems. For instance, one could argue that the specific models of ‘fairness’ in machine learning developed in the recent past are valuable (perhaps), and that we would not have them if a specific subfield of AI ethics had not been created.

In response, it is important to bear in mind that we are not arguing against the entire enterprise of applied ethics or the attempt to use case studies involving specific technologies to develop and understand ethical theories. We need to apply ethical principles to new technologies and new scenarios. This is an essential and beneficial practice. It is, rather, the creation of new tech-defined domains of inquiry to which we object. These, we argue, come with the risks of creating relatively sealed-off specialities that reinvent the wheel and ignore foundational insights. We submit that you can get the benefits of applied ethical insight without always seeking to carve out a new domain of ethical inquiry. Academic specialisation is both natural and beneficial, but mainly when it is based on a scientific approach of cumulative knowledge building and the realisation that there will often be some giant’s shoulders to stand on (Merton, 1965 ).

Second, and related to the previous objection, one might argue that subfields are needed to attract interdisciplinary expertise. You will not get engineers interested in meta ethics or normative ethics, but if you carve out a subfield of applied ethics that relates to their area of expertise, then you might get them interested, and that is what we need if ethics is to have real-world impact. In other words, the labels matter when it comes to motivating people to care. Creating a specific subfield of ‘AI ethics’ is just good marketing to the key stakeholders in that technology.

In response, there is probably some merit to this objection. Interdisciplinary work is hard. Different fields do not share the same conceptual foundations and assumptions. Building bridges to mutually beneficial collaboration requires a lot of work. That said, it is not obvious that repeatedly carving out new subfields of tech ethics is beneficial for interdisciplinary collaboration. Indeed, it may be counterproductive. If, as we have argued above, many digital and smart technologies work on the same basic technological foundation (computing machinery), then introducing new subfields simply risks perpetuating unnecessary disciplinary silos: AI engineers should be talking to data scientists and roboticists and vice versa (to give but one example). Stipulating that AI ethics is distinct from data and robot ethics may preclude the necessary interdisciplinary collaboration.

Third, one might argue that our position is too strong in that it applies to all subfields of ethics. Take medical ethics as an example. In the aftermath of WWII, this developed into a conceptually rich and rigorous subfield of ethics, generating its own checklists of principles, journals, conferences, specialists, research centres, and university degrees. Most people would accept that medical ethics is a valid and useful subfield of applied ethics. Could it not be argued that AI ethics (or whatever subfield of tech ethics we happen to be concerned with) is in a similar position? Indeed, some prominent AI ethicists have argued that the field should develop along the lines provided by the medical ethics model, albeit while noting significant differences between the two fields (Mittelstadt, 2019b ; Véliz, 2019 ). A related approach could be to argue that AI ethics should look to and take inspiration from business ethics (Schultz and Seele, 2022 ).

In response to this, it is worth bearing in mind that our objection is not to the existence of subfields of applied ethics per se. Many subfields are valid and worth developing. Our objection relates, more particularly, to the proliferation of subfields of ethics with poorly defined conceptual boundaries and excessive reinventing of the wheel. We are arguing that we should create and accept subfields of ethics with caution and not zeal. In this respect, it is noteworthy that the field of medical ethics, for instance, has not undergone the same degree of ethical proliferation as technology ethics. While there are closely related fields of applied ethics (e.g. bioethics and neuroethics), medical ethicists have avoided the temptation to create distinctive subfields of applied medical ethics, such as keyhole surgery ethics, nephrology ethics, or oncology ethics. This may be because each branch of medicine shares the same basic ethical goal—to improve the health of patients—and so the ethical focus remains the same across all subspecialities of medicine. Whatever the reason, this is very different from the situation we find in technology ethics. As we have clearly shown with our survey, the literature reveals an excessive number of poorly defined subfields. Some of these may be worth retaining, but only after an appropriate cull.

Fourth, and finally, one might argue that people (particularly academics and researchers) must chase the money and follow the hype cycle. We need grants, we need readers, and we need to attract students. By attaching our expertise to new technologies and generating ethical insights about their application—however generic or unoriginal they may be—we make ourselves relevant and can attract the requisite attention and financial input. This may be cynical and self-serving, but it is a practical necessity and cannot be overlooked.

We certainly sympathise with this objection. We feel the pull of these incentives too. But this is a poor reason for creating subfields of ethics if, as we have argued, this is counterproductive to ethical decision-making and insight. Indeed, if this is the motivation for the proliferation of tech ethics, then it seems we have even more reason to reject this practice.

5 Conclusion

The ethics of technology is garnering attention for a reason. Just about everything in modern society is the result of, and often even infused with, some kind of technology. The ethical implications are plentiful, but how should the study of applied tech ethics be organised? We have reviewed a number of specific tech ethics and argued that there is much overlap, and much confusion, relating to the demarcation of the different domain ethics. For example, many issues covered by AI ethics are arguably already covered by computer ethics, and many issues argued to belong to data ethics, particularly issues related to privacy and surveillance, have been studied by other tech ethicists and non-tech ethicists for a long time.

We have proposed two simple principles that should help guide more ethical research to the higher levels of tech ethics, while still allowing for the existence of lower-level domain-specific ethics. If this is achieved, we avoid confusion and a lack of navigability in tech ethics, ethicists avoid reinventing the wheel, and we will be better able to make use of existing insight from higher-level ethics. At the same time, the work done in lower-level ethics will be both valid and highly important, because it will be focused on issues exclusive to that domain. For example, robot ethics will be about those questions that only arise when AI is embodied in a particular sense, and not about all issues related to the moral status of machines or social AI in general.

While our argument might initially be taken as a call to arms against more than one fundamental applied ethics, we hope to have allayed such fears. There are valid arguments for the existence of different types of applied ethics, and we merely argue that an exaggerated proliferation of tech ethics is occurring, and that it has negative consequences. Furthermore, we must emphasise that there is nothing preventing anyone from making specific guidelines for, for example, AI professionals, based on insight from computer ethics. The domains of ethics and the needs of practitioners are not the same, and our argument is consequently that ethical research should be more concentrated than professional practice.

To change the undesirable situation we have described, action on several levels and in different sectors is required. The most obvious target group of our efforts is researchers, who produce much of the research and generate the foundational ethical analyses used to guide practitioners and shape policy and regulation. Researchers can use the proposed hierarchy and the two criteria for positioning their work to ensure that they look for ways to build on extant research in the higher-level ethics. Academia can also play a crucial role in changing this situation, and a key action relates to restructuring the way we teach the ethics of science and technology to students. One way to ameliorate the current situation would be to begin ethics education with introductory joint classes in science and technology ethics before splitting the students into domain-specific groups. In such groups, one should focus on (a) what can be learned from other domains and (b) what is novel for the technology of the specific domain. Industry and practitioners are a major cause of the proliferation, and hence another potential target for reform, but it seems unlikely that the industry itself will see the need to solve the problem (Sætra et al. 2021).

The tech industry and practitioners have incentives to hype new technologies, both to sell them more effectively and to avoid or shape regulation to suit their interests. Nevertheless, if incoming students are educated as just discussed, and researchers increasingly resist excessive proliferation, industry will find the path towards increased proliferation harder to tread. Finally, government and regulators are the consumers of tech ethics research and the producers of tech regulation, and increased awareness of and knowledge about the challenges discussed here will help them resist the pursuit of potentially unnecessary laws covering ground that could be addressed through more foundational and general, and thus potentially future-proof, regulation.

Data Availability

Not applicable.

Footnote 1: However, information ethics is often also described as a separate discipline, making it necessary to determine whether issues of information are distinct from issues of computing, or whether information ethics is more properly described as a subdiscipline of computer ethics (Capurro, 2006).

Aggarwal, N. (2020). Introduction to the special issue on intercultural digital ethics. Philosophy Technology, 33 (4), 547–550. https://doi.org/10.1007/s13347-020-00428-1


Allen, C., Wallach, W., & Smit, I. (2006). Why machine ethics? IEEE Intelligent System, 21 (4), 12–17.

Anderson, S. L. (2011). Machine metaethics. In M. Anderson & S. L. Anderson (Eds.), Machine ethics (pp. 21–27). Cambridge University Press.


Anderson, M., & Anderson, S. L. (2011). Machine ethics . Cambridge University Press.

Aristotle. (2014). Nicomachean Ethics (C. D. D. Reeve, Trans.). Indianapolis: Hackett Publishing.

Asimov, I. (2013). I, robot . Harpercollins Publishers.


Benjamin, R. (2019). Race after technology . Polity Press.

Capurro, R. (2006). Towards an ontological foundation of information ethics. Ethics and Information Technology, 8 (4), 175–186.

Capurro, R. (2009). Digital ethics. Paper presented at the Global Forum on Civilization and Peace. http://www.capurro.de/korea.html . Accessed 2 July 2021.

Coeckelbergh, M. (2020). AI ethics. MIT Press.

Copp, D. (2005). The Oxford handbook of ethical theory . Oxford University Press.


Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need . The MIT Press.

Cummings, M. L. (2006). Integrating ethics in design through the value-sensitive design approach. Science and Engineering Ethics, 12 (4), 701–715. https://doi.org/10.1007/s11948-006-0065-0

D’Ignazio, C., & Klein, L. (2020). Data feminism . MIT Press.

Danaher, J. (2019). The philosophical case for robot friendship. Journal of Posthuman Studies, 3 (1), 5–24.

Danaher, J. (2020b). Welcoming robots into the moral circle: A defence of ethical behaviourism. Science and Engineering Ethics, 26 (4), 2023–2049. https://doi.org/10.1007/s11948-019-00119-x

Danaher, J. (2020a). Robot betrayal: A guide to the ethics of robotic deception. Ethics and Information Technology , 22 , 1–12. https://doi.org/10.1007/s10676-019-09520-3

Dotan, R. (2021). The proliferation of AI ethics principles: What’s next? Retrieved from https://montrealethics.ai/the-proliferation-of-ai-ethics-principles-whats-next/ . Accessed 10 Dec 2021.

Drew, C. (2016). Data science ethics in government. Philosophical Transactions of the Royal Society a: Mathematical, Physical and Engineering Sciences, 374 (2083), 20160119. https://doi.org/10.1098/rsta.2016.0119

Dubber, M. D., Pasquale, F., & Das, S. (2020). The Oxford Handbook of Ethics of AI . Oxford University Press.

Duignan, B. (2022, August 24). Occam’s razor . Encyclopedia Britannica. https://www.britannica.com/topic/Occams-razor

Ellul, J. (1964). The technological society . Vintage Books.

Fazelpour, S., & Danks, D. (2021). Algorithmic bias: Senses, sources, solutions. Philosophy Compass, 16 (8), e12760.

Fleddermann, C. B. (2004). Engineering ethics . Prentice Hall.

Fleetwood, J. (2017). Public health, ethics, and autonomous vehicles. American Journal of Public Health, 107 (4), 532–537.

Floridi, L. (1999). Information ethics: On the philosophical foundation of computer ethics. Ethics and Information Technology, 1 (1), 33–52.

Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. HDSR, 1( 1). https://doi.org/10.1162/99608f92.8cd550d1

Franzke, A. S., Muis, I., & Schäfer, M. T. (2021). Data ethics decision aid (DEDA): A dialogical framework for ethical inquiry of AI and data projects in the Netherlands. Ethics and Information Technology, 23 , 551–567. https://doi.org/10.1007/s10676-020-09577-5

Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems (TOIS), 14 (3), 330–347.

Gartner. (2021, September 7th). Gartner identifies four trends driving near-term artificial intelligence innovation. Retrieved from https://www.gartner.com/en/newsroom/press-releases/2021-09-07-gartner-identifies-four-trends-driving-near-term-artificial-intelligence-innovation . Accessed 10 Dec 2021.

Geisslinger, M., Poszler, F., Betz, J., Lütge, C., & Lienkamp, M. (2021). Autonomous driving ethics: From trolley problem to ethics of risk. Philosophy & Technology, 34 (4), 1033–1055. https://doi.org/10.1007/s13347-021-00449-4

Gordon, J.-S., & Nyholm, S. (2021). Ethics of artificial intelligence. In Internet Encyclopedia of Philosophy . https://iep.utm.edu/ethics-of-artificial-intelligence/

Gunkel, D. J. (2014). A vindication of the rights of machines. Philosophy & Technology , 27 , 113-132. https://doi.org/10.1007/s13347-013-0121-z

Gunkel, D. J. (2017). Mind the gap: Responsible robotics and the problem of responsibility. Ethics and Information Technology , 1–14. https://doi.org/10.1007/s10676-017-9428-2

Gunkel, D. J. (2018). Robot rights . MIT Press.

Hand, D. J. (2018). Aspects of data ethics in a changing world: Where are we now? Big Data, 6 (3), 176–190.

Harris, C. E., Jr., Davis, M., Pritchard, M. S., & Rabins, M. J. (1996). Engineering ethics: What? why? how? and when? Journal of Engineering Education, 85 (2), 93–96.

Harris Jr, C. E., Pritchard, M. S., Rabins, M. J., James, R., & Englehardt, E. (2013). Engineering ethics: Concepts and cases.  Cengage Learning.

Hasselbalch, G. (2019). Making sense of data ethics: The powers behind the data ethics debate in European policymaking. Internet Policy Review, 8 (2), 1–19.

Horowitz, M. C. (2016). The ethics & morality of robotic warfare: Assessing the debate over autonomous weapons. Daedalus, 145 (4), 25–36.

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1 (9), 389–399.

Johnson, D. G. (2004). Computer ethics. In L. Floridi (Ed.), The Blackwell Guide to the Philosophy of Computing and Information (pp. 65–75). Blackwell Publishing.

Jonas, H. (1982). Technology as a Subject for Ethics. Social Research, 49 (4), 891–898.

Kleinberg, J., Ludwig, J., Mullainathan, S., & Sunstein, C. R. (2018). Discrimination in the age of algorithms. Journal of Legal Analysis, 10 , 113–174.

Kochetkova, T. (2015). An overview of machine medical ethics. In S. P. v. Rysewyk & M. Pontier (Eds.), Machine Medical Ethics (pp. 3–15). Springer.

Langford, D. (2000). Internet ethics . Macmillan London.

Lannin, D. G., & Scott, N. A. (2013). Social networking ethics: Developing best practices for the new small world. Professional Psychology: Research and Practice, 44 (3), 135.

Lin, P., Abney, K., & Bekey, G. (2011). Robot ethics: Mapping the issues for a mechanized world. Artificial Intelligence, 175 (5–6), 942–949.

Luke, A. (2018). Digital ethics now. Language and Literacy, 20 (3), 185–198.

Macnish, K. (2018). The ethics of surveillance: An introduction . Routledge.

Marx, G. T. (1998). Ethics for the new surveillance. The Information Society, 14 (3), 171–185.

Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6 (3), 175–183.

Merton, R. K. (1942). Science and technology in a democratic order. Journal of Legal and Political Sociology, 1 (1), 115–126.

Merton, R. K. (1965). On the shoulders of giants: A Shandean postscript . The Free Press.

Merton, R. K. (1972). The sociology of science: Theoretical and empirical investigations . University of Chicago press.

Merton, R. K. (1973). The normative structure of science. In N. W. Storer (Ed.), The sociology of science: Theoretical and empirical investigations (pp. 267–268). University of Chicago Press.

Mittelstadt, B. (2019a). The ethics of biomedical ‘big data’ analytics. Philosophy & Technology, 32 (1), 17–21. https://doi.org/10.1007/s13347-019-00344-z

Mittelstadt, B. (2019b). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1 (11), 501–507.

Moor, J. H. (1985). What is computer ethics? Metaphilosophy, 16 (4), 266–275.

Moor, J. (1999). Ethics of Privacy Protection. Library Trends, 39 (1 & 2), 69–82.

Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21 (4), 18–21.

Müller, V. C. (2020). Ethics of artificial intelligence and robotics. In E. N. Zalta (Ed.), Stanford Encyclopedia of Philosophy (Summer 2020 Edition) . https://plato.stanford.edu/archives/sum2020/entries/ethics-ai/

Mumford, L. (1934). Technics and civilization . Routledge & Kegan Paul LTD.

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism . New York University Press.

Nolt, J. (2014). Environmental ethics for the long term: An introduction.  Routledge.

Open Data Institute. (2021). Data Ethics Canvas. Retrieved from https://theodi.org/article/data-ethics-canvas/#1562602644259-1d65b099-ea7b . Accessed 10 Dec 2021.

Popper, K. (2005). The logic of scientific discovery . Routledge.

Richards, N. M., & King, J. H. (2014). Big data ethics. Wake Forest L. Rev., 49 , 393.

Roemer, J. E. (1998). Equality of opportunity . Harvard University Press.

Rollin, B. E. (2006). Science and ethics. Cambridge University Press.

Romele, A. (2022). Images of Artificial Intelligence: A Blind Spot in AI Ethics. Philosophy & Technology, 35 (1), 1–19. https://doi.org/10.1007/s13347-022-00498-3

Russell, S., & Norvig, P. (2014). Artificial intelligence: A modern approach (3rd ed.). Pearson.

Sætra, H. S. (2018). Science as a vocation in the era of big data: The philosophy of science behind big data and humanity’s continued part in science. Integrative Psychological and Behavioral Science, 52 (4), 508–522.

Sætra, H. S. (2020). The parasitic nature of social AI: Sharing minds with the mindless. Integrative Psychological and Behavioral Science, 54 , 308–326. https://doi.org/10.1007/s12124-020-09523-6

Sætra, H. S. (2022). AI for the sustainable development goals . CRC Press.

Sætra, H. S., & Fosch-Villaronga, E. (2021). Research in AI has Implications for society: How do we respond? Morals & Machines, 1 (1), 60–73.

Sætra, H. S. (2021a). Confounding complexity of machine action: A Hobbesian account of machine responsibility. International Journal of Technoethics, 12 (1). https://doi.org/10.4018/IJT.20210101.oa1

Sætra, H. S. (2021b). Loving robots changing love: Towards a practical deficiency-love. Journal of future robot life, 3 (2), 109–127. https://doi.org/10.3233/FRL-200023

Sætra, H. S. (2021c). Social robot deception and the culture of trust. Paladyn, Journal of Behavioral Robotics, 12 (1). https://doi.org/10.1515/pjbr-2021-0021

Sætra, H. S., Coeckelbergh, M., & Danaher, J. (2021). The AI ethicist’s dilemma: Fighting big tech by supporting big tech. AI and Ethics, 2, 15–27. https://doi.org/10.1007/s43681-021-00123-7

Schultz, M. D., & Seele, P. (2022). Towards AI ethics’ institutionalization: Knowledge bridges from business ethics to advance organizational AI ethics. AI and Ethics , 1-13 https://doi.org/10.1007/s43681-022-00150-y

Siegel, M. (1979). Privacy, ethics, and confidentiality. Professional Psychology, 10 (2), 249.

Singer, P. (2011). Practical ethics. Cambridge university press.

Tavani, H. T. (2012). Search engines and ethics. In E. N. Zalta (Ed.), The stanford encyclopedia of philosophy .

Tavani, H. T. (2016). Ethics and technology. Wiley.

Turing, A. M. (2009). Computing machinery and intelligence. In Robert Epstein, Gary Roberts, & G. Beber (Eds.), Parsing the Turing Test (23–65). Springer.

Vallor, S. (2012). Social networking and ethics. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy .

Véliz, C. (forthcoming). Oxford Handbook of Digital Ethics . Oxford: Oxford University Press.

Véliz, C. (2019). Three things digital ethics can learn from medical ethics. Nature Electronics, 2 (8), 316–318. https://doi.org/10.1038/s41928-019-0294-2

Westin, A. F. (1967). Privacy and freedom . IG Publishing.

Whiting, R., & Pritchard, K. (2018). Digital ethics. In C. Cassell, A. L. Cunliffe, & G. Grandy (Eds.), The SAGE Handbook of Qualitative Business and Management Research Methods (pp. 562–579). Sage Reference.

Winfield, A. (2012). Robotics: A very short introduction. OUP Oxford.

Winner, L. (1977). Autonomous technology: Technics-out-of-control as a theme in political thought . MIT Press.

van Wynsberghe, A. (2021). Sustainable AI: AI for sustainability and the sustainability of AI. AI and Ethics , 1-6 https://doi.org/10.1007/s43681-021-00043-6

Zwitter, A. (2014). Big data ethics. Big Data & Society, 1 (2), 2053951714559253.


Open Access funding provided by Ostfold University College

Author information

Authors and affiliations.

Faculty of Computer Science, Engineering and Economics, Østfold University College, N-1757 Remmen, Norway

Henrik Skaug Sætra

School of Law, NUI Galway, Galway, Ireland

John Danaher


Contributions

H.S.S. and J.D. contributed equally to this research. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Henrik Skaug Sætra .

Ethics declarations

Ethics approval, consent for publication, and competing interests

The authors declare no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Sætra, H.S., Danaher, J. To Each Technology Its Own Ethics: The Problem of Ethical Proliferation. Philos. Technol. 35 , 93 (2022). https://doi.org/10.1007/s13347-022-00591-7


Received : 15 July 2022

Accepted : 07 October 2022

Published : 18 October 2022

DOI : https://doi.org/10.1007/s13347-022-00591-7



Ethics in the IT Practice Essay


This essay covers the following topics:

  • Ethical dilemma
  • Regulations of IT
  • Regulations and business
  • Value of IT

Bibliography

For several years, most debates about ethics and information technology (IT) have focused on issues of professional ethics and on issues of privacy and security. The professional aspect of ethics in IT concerns preserving and developing a good reputation through the completion of tasks; in this context, the worker is expected to behave well consistently, which is a matter of character and action. Ethics now seems to be an inclusive term covering morality, value and justice, in addition to character and action.

However, recent developments in IT and business have created new ethical concerns. Critics now wonder whether moving a job from one country to another in order to exploit lower-paid workers, and possibly to avoid heavy taxes, is an ethical, business or economic issue (Schultz, 2006).

Offshoring and outsourcing of jobs exploit the advantage of a different social context, namely lower wages. These have become new ethical problems created by the flexibility of IT and its promise of “availability at any location”.

These developments have pushed IT beyond organizational borders, so that the broad picture now covers end users, supply chains, partners, countries and continents. These are among the issues that have prompted governments across the globe to collaborate and develop comprehensive and detailed laws regarding the use of information technology.

As new and unpredictable uses of information technology arise alongside the regulations, so do new and unpredictable ethical problems. IT is part of the human world of relations, actions and institutions. We should therefore expect our interactions with IT to bring neither predictable ethical problems nor predictable solutions.

For instance, the creation of sites for downloading copyrighted materials, whether to promote sales, as a form of charity, or to spread mediocre content, demonstrates how users can manipulate IT systems where ethical principles are not clear. People can make digital copies at will, and these copies are available to anyone on the network.

The issue is whether this is a mere act of friendship, sharing information, which is ethical, or an illegal act violating copyright laws, which is unethical. Such instances are likely to create ethical dilemmas for the comprehensive laws regulating the IT industry.

Conversely, if the comprehensive laws do cover such unpredictable uses and outcomes of IT, then the users are acting ethically and legally. Some ethical problems associated with the development of social media, such as dating services, readily available sex partners and spamming, raise serious questions about ethics and the role of IT in perpetuating them (Robert, 1995).

Information and communication technology has created diverse cultural and ethical perspectives around the globe, along with possible ways of resolving them. These are genuine issues of global information ethics, which need to be regulated as the industry continues to experience exponential growth.

Therefore, the world’s governments are taking the right step by coming up with comprehensive laws to regulate the global IT industry, protecting individuals, organizations, partners, territories and nations. This has led to the development of a framework for global ethics, provided in the form of the Earth Charter, intended to realize a shared vision of a sustainable global society.

Information technology ethics uses this framework to determine how information and technology can be used responsibly to develop a future that is economically and ecologically sustainable, and culturally and socially just (Hongladarom and Ess, 2007).

Regulation formalizes the expression of values and embodies cultural norms, which may sometimes clash with the values embedded in information technology. Both IT and its regulation are continuous with the normative ethics of the originating culture. Therefore, both law and technology can constrain behaviour according to the value system they express. These are significant implications of the spread and adoption of IT around the globe.

Technological developments presuppose the availability of informational content to be stored, communicated and processed. Adopting and using such technology is, therefore, essentially dependent upon the legal and normative rules that determine the accessibility and regulation of information to be stored, manipulated and disseminated through technology.

Regulations that specify the legal and technical conditions for the use of information are necessary in order to control the adoption and usage of the IT systems that process such information. As the availability of information increases, the provision of informational content has become a matter of interest both to the nations that generate information and to those that are net consumers of it, for different reasons, of course.

The nations that develop and export information technologies enforce their particular ethical and legal standards through regulations imposed on end users. Thus, they effectively spread their value systems and regulatory requirements alongside their technologies to other nations. Issues of ethics and the control of information tend to revolve around the development of content and the gathering of personal information.

Regulatory controls spell out the requirements, and in some instances the authors of informational content protect it under intellectual property or data privacy rules. However, the economic and political dominance of the US and the European Union tends to shape the regulation of information technology.

Each of these powers has aggressively promoted its own approach to information technology control. In doing so, they have displaced and overwhelmed other countries’ efforts to develop their own systems of informational control.

Government regulations can be useful or harmful to businesses. The nature and degree of the comprehensive laws developed by governments to regulate the IT sector can have both positive and negative effects across the globe. Nations that have substantial control of, and influence over, the regulations will benefit tremendously. At the same time, some degree of regulation may help develop a moral, social and just society.

For IT to grow globally, the comprehensive laws developed should address the essentials: intellectual property, sensible approaches to contract law, a willingness to enforce contracts, and efficient systems for handling bankruptcy in case rapidly changing technology overtakes a business. At the same time, the law should provide strong protection for personal freedoms and freedom of expression (Gentzoglanis and Henten, 2010).

The IT business is marked by unpredictable changes and developments. The structure of the laws should accommodate the future course of the information revolution in different nations. For instance, the law should ensure the availability of funding for new IT ventures and clear processes for obtaining it, because such conditions affect the growth and development of new IT ventures in all nations. It is therefore an issue of global concern.

This is a critical feature of new developments in IT and business, which usually upset and challenge old business models and monopolies and introduce new ways of performing tasks. The idea is that players in the IT sector should also operate within the law when raising capital for their new ventures. Law that ensures access to funding is an enabling factor for the growth, adoption and proliferation of IT.

In developing regulations for the ethical use of IT systems, we should always keep in mind the positive feature of information technology: its potential to contribute to the increase of human consciousness by making more knowledge widely available. Conversely, we must also beware of its questionable applications, which can drive the destruction of people or the environment.

Regulators must realize that the functional characteristics that make IT so valuable do not operate well in a restricted or highly regulated environment.

Valuable characteristics such as the speed of information processing, information storage capacity, availability of information at any location, and straightforward reproduction of information can work well provided the users operate within global laws. Such characteristics work best at the scope of an organization as a whole, and potentially at a global scope where information sharing is part of the business.

In realizing the value of IT to people and the environment, governments should focus on preventing harmful practices. Preventing harm in the use of information technology thus becomes an essential element of fixing the serious problems that cause the greatest harm to people and the environment. Ethical use and regulation should also focus on protecting vulnerable groups within society, so that IT-related harm does not reach end users (Richard, Robert, Tora and Neu, 2003).

Users and regulators must take into account that IT is not neutral. Its use provokes various changes that can have consequences beyond human comprehension. Technological innovations often strive to build a new and incompatible order on top of what already exists. We should use technology as a means and an enabler for other ends.

Gentzoglanis, A. and Henten, A., 2010. Regulation and the Evolution of the Global Telecommunications Industry. Massachusetts: Edward Elgar Publishing Limited.

Hongladarom, S. and Ess, C., 2007. Information Technology Ethics: Cultural Perspectives. London: Idea Group.

Richard, O., Robert, H., Tora, K. and Neu, R., 2003. The Global Course of the Information Revolution: Recurring Themes and Regional Variations. Santa Monica, CA: RAND.

Robert, A., 1995. Universal Access to E-Mail: Feasibility and Societal Implications. California: RAND.

Schultz, R. A., 2006. Contemporary issues in ethics and information technology. London: Idea Group.



Peter Joosten MSc.

Technology Ethics: Definition, Issues & Examples

Technology ethics. What is that? What kind of ethical dilemmas does technology bring about? How does technology influence our morals? What are some examples of this, what impact does it have on society, and what are the risks?

By the way, I wrote a separate article about human enhancement ethics. And, also related: What is the impact of technology on society?

Ethics of technology

What is the ethics of technology? A lot of people talk about technology ethics, but what does it mean? What are the main technology ethics issues? And what ideas and theories help us think about these topics?

I made this video as a sort of summary on my YouTube channel:

This is a summary of this article:

  • With the progress of science and technology, ethics becomes more important.
  • Technology and humans are intertwined. Humans shape technology, and technology shapes humans.
  • The design of a technological product or service implies certain moral choices.
  • It is not always possible to predict the effect of a technology or scientific discovery.
  • A solution is our mindset: a balance between being progressive and conservative.

This is an overview of the article:

  • Definition of technology ethics;
  • Consequences of technology;
  • Morality of technology (especially design);
  • Technology Ethical Issues;
  • Issues in biotechnology (like artificial brains);
  • My conclusion

Lastly, you can hire me (for a webinar or consultancy) and dive into the reading list with extra resources, like books, articles, and links.

Definition of technology ethics


What is ethics, exactly? According to the Center for Ethics and Health, the definition is as follows: ‘Ethics is consciously thinking about taking the right actions.’ One of the primary roles played by ethics in relation to technology is to make sure that technology doesn’t enter our lives in undesirable ways.

It’s becoming more and more important to reflect on the impact of technology. Futurologist Gerd Leonhard: ‘One of the dangers is that technological progress could overpower human values. Technology does not have ethics. And a society without ethics is doomed.’ You can watch my interview with Gerd Leonhard down below.


According to Professor Peter-Paul Verbeek, this view is a bit too simplistic. Verbeek (Philosophy, University of Twente) specializes in the connection between humans and technology, and in how ethics ties into this. In 2017, I started to develop an interest in the ethical aspects and consequences of technology, and I have regularly used Professor Verbeek’s books and ideas, both for presentations and for this article.

What are the consequences of technology?


One of the most important things to realize is that technology has always had an influence on humans and has always played a role in our lives. Technology often functions as an intermediary between the user and their surroundings.

Just some examples: using binoculars to see better from far away, using a smartphone to communicate with others, or using earplugs to protect your ears from excessive noise.


The French sociologist and philosopher Bruno Latour argues that ‘technology is a mediator that actively co-shapes reality.’ Take a technology like the internet. If you look for information on Google, then Google’s search results also shape the reality that you experience. So, humans shape technology and vice versa.

Technology does not have ethics

Technology in and of itself doesn’t have ethics. Perhaps it sounds a bit cynical: technology, such as artificial intelligence, only uses ethics, norms and values in order to learn more about humans.

I believe technology will continue to play an important role regarding our norms and values, as it’s becoming ever more all-encompassing and invasive. Take virtual reality for instance; that’s a great example of how technology is playing a bigger and bigger role in co-shaping our reality.

In his books, Verbeek argues that there’s no use in simply thinking of ethics as ‘protecting’ the boundaries between humans and technology. That’s because humans and technology have always been intertwined. There’s no way to prohibit technology or stop it from playing a part in our lives. It’s inevitable, like language, oxygen and gravity.

What is the morality of technology?


In a way, those who design a technology also shape its morality. Anyone can act as the designer of a technology, be it a company, a government or an individual innovator. It’s important to find a delicate balance between being completely free of any morals and being condescending towards the users of these designs, because every technological design implicitly or explicitly carries ideas about what a good and just life looks like.

Take Google Glass, an experimental pair of glasses by Google that shows the user extra information. When I’m around others, can they see that I’m using these glasses? That’s one example where Google could choose to build morality into its design, for instance by adding an LED light on the front that allows others to see whether the glasses are on.
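To make that concrete, here is a minimal sketch of how such a moral choice can be hard-wired into software: recording is simply refused unless the indicator light is on. The class and method names below are hypothetical illustrations I made up, not Google’s actual design.

```python
# Minimal sketch of "morality by design": the device refuses to record unless a
# visible indicator LED is lit. All names here are hypothetical, not Google's API.

class IndicatorLed:
    def __init__(self) -> None:
        self.is_on = False

    def turn_on(self) -> None:
        self.is_on = True  # visible cue for bystanders

    def turn_off(self) -> None:
        self.is_on = False


class GlassesCamera:
    def __init__(self, led: IndicatorLed) -> None:
        self.led = led
        self.recording = False

    def start_recording(self) -> None:
        # The ethical constraint lives in code: no visible LED, no recording.
        if not self.led.is_on:
            raise PermissionError("indicator LED must be on before recording")
        self.recording = True

    def stop_recording(self) -> None:
        self.recording = False
        self.led.turn_off()


led = IndicatorLed()
camera = GlassesCamera(led)
led.turn_on()
camera.start_recording()  # allowed only because bystanders can see the LED
```

The point is not the code itself but the design choice it encodes: the bystander’s interest is enforced by the product, not left to the goodwill of the wearer.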

Design and morality

One of the problems is that the designers of technologies aren’t always able to accurately predict how their technology eventually will be used, and what kind of consequences this could have. To name a few examples:

#1 The first prototype of the telephone was developed as a hearing aid for the hard of hearing. Another example: SMS (texting) was intended to help technicians communicate with each other.

#2 The typewriter was invented as a machine that could help visually impaired people write.

#3 This one really took me by surprise. The first cars were used for sports and medical purposes (!?). Patients would sit in the back of the car and be driven around. The idea was that they could inhale thin air, which would be good for their lungs.

Adversarial Design

A fun example of how technology and ethics can blend together is so-called ‘adversarial design’. This refers to creating technology designs that provoke and inspire users to be more conscious of the technologies they use, instead of looking the other way.

A few examples provided by Carl DiSalvo: a browser extension that converts Amazon prices into the amount of oil they would buy, an umbrella with electric lights that keeps surveillance cameras from recognizing you, and the Natural Fuse. The latter is a system that lets you monitor your energy consumption by reading it off a plant. If you consume too much energy, and thus cause CO2 emissions, your plant will die.
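As a rough illustration of the first example, the conversion such a browser extension performs is just a change of units: divide the purchase price by an assumed oil price per litre. The oil price below is a placeholder I chose for illustration, not a real market feed or DiSalvo’s actual figure.

```python
# Rough sketch of the price-to-oil conversion an adversarial browser extension
# might perform. The oil price is an assumed placeholder, not live market data.

ASSUMED_OIL_PRICE_PER_LITRE = 0.55  # illustrative value in USD per litre

def price_as_litres_of_oil(price_usd: float) -> float:
    """Express a purchase price as the litres of crude oil it could buy."""
    return price_usd / ASSUMED_OIL_PRICE_PER_LITRE

if __name__ == "__main__":
    for price in (9.99, 49.99, 199.00):
        print(f"${price:.2f} is roughly {price_as_litres_of_oil(price):.0f} litres of oil")
```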

This part is about ethical issues caused by the introduction of certain technologies.

Technology Ethical Issues (5x)

Apart from the fact that technology can be used for purposes other than those intended by its designers, it can also have an (unexpected) impact on other, seemingly unrelated aspects of life. A few examples:

  • Contraceptive pill
  • Low-energy lightbulbs
  • Deep Brain Stimulation (DBS)

#1 The introduction of the contraceptive pill also led to a breakthrough in the social acceptance of homosexuality. Because of the pill, society started to think of sex and reproduction as two separate things.

#2 When low-energy lightbulbs were introduced, energy consumption was expected to go down. However, people started using these low-energy bulbs in far more places, which led to an overall increase in energy consumption (a small arithmetic sketch of this rebound effect follows after example #3).

#3 Deep Brain Stimulation (DBS) is a technology that can help cure certain symptoms of Parkinson patients, through an electrode in their brain. One medical journal mentioned a case where a patient who underwent this treatment didn’t just get better, but also got a completely different personality. He started purchasing expensive things, cheated on his partner, and his friends hardly recognized him anymore.
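Here is the small arithmetic sketch of the rebound effect in example #2, with invented numbers: each efficient bulb uses far less power, but installing many more of them and leaving them on longer can still raise total consumption.

```python
# Illustrative rebound-effect arithmetic; all figures are invented for the example.

# Before: a few incandescent bulbs, used sparingly.
old_bulbs, old_watts, old_hours = 5, 60, 3
old_daily_wh = old_bulbs * old_watts * old_hours      # 900 Wh per day

# After: efficient bulbs everywhere (garden, hallway, decoration), on for longer.
new_bulbs, new_watts, new_hours = 20, 11, 6
new_daily_wh = new_bulbs * new_watts * new_hours      # 1320 Wh per day

print(f"before: {old_daily_wh} Wh/day, after: {new_daily_wh} Wh/day")
# Each bulb draws roughly 80% less power, yet total consumption rises by almost half.
```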

Impact on society

The examples I mentioned before, show the kind of impact technology can have – both on individuals and on society. Sometimes it takes a while before society adjusts to new technologies; this was also the case with driving (#4) and smartphones (#5).

#4 In the first few years that cars were on the road, the number of traffic deaths rose rapidly. This continued until car manufacturers started developing technologies that could make driving safer. Think of seat belts, airbags and ABS. In addition, governments introduced laws and regulations (about drunk driving, for instance) and changed the infrastructure (by introducing roundabouts, for example).

An interesting lesson is that car manufacturers and governments started with these safeguards and regulations after Ralph Nader published his book Unsafe at Any Speed.

#5 In the first few years after the cellphone was introduced, everyone left their sound on. After a while, this was no longer socially accepted, and leaving the phone on silent became the new normal.

And yet…

These examples are all super interesting, but with the technology tsunami that is about to hit us now – genetic modification with CRISPR/Cas9, artificial intelligence and neurotechnology – we’re changing our lives and our society even more rapidly, directly and invasively.

This part zooms in on biotechnology-related examples: ectogenesis, organoids and human enhancement.

Ectogenesis

One interesting example that has caused a fair bit of controversy is ectogenesis. This word refers to the development of a technology that allows a fetus to develop into a baby outside a woman’s body. According to futurist Gerd Leonhard, this might become possible within the next 20 years.

This method would be less invasive for the mother, more efficient, and most likely cheaper as well. But should we decide to do it, just based on those rational arguments? How would this affect the emotional maturity of the child, or the bond between a mother and her child?

Growing brains outside of the body

Another example that raises a lot of ethical questions is the creation of mini organs, also known as organoids. This is a technique in which small organs are created in a laboratory. That way, researchers can test whether or not certain medicines are effective for a patient; it is also a potential substitute for animal testing.

Several scientists published an article in Nature to inform the general public about a particular type of mini organ: the brain. Mini brains are currently being used by scientists to study disorders such as autism and schizophrenia.

Consciousness of mini brain?

Jeantine Lunshof (MIT Media Lab in Boston) was interviewed about this for a Dutch magazine.  She stated: ‘People need to know about this before we start applying these techniques in all types of ways. Cultivating brain-like structures raises a good deal of ethical questions. Would those brains also be able to think? Would they experience consciousness? Could this constitute a way to create back-up brains?’


Because science just keeps progressing. One example is a brain-organoid of a girl with a genetic defect. Scientists were able to study this genetic defect thanks to these new techniques, but of course the organoid wasn’t able to ‘think’ in the way that the girl is able to. Another example is that a group of scientists decapitated a group of pigs and kept their brains alive for 36 hours.

Jeantine Lunshof aptly stated: ‘After a while, these types of technologies might change our definitions of life and death, and consciousness.’

Video bio-ethics

And here is my video about bio-ethics:

  • Human Enhancement Ethics

Human Enhancement concerns the use of science and technology to improve, increase and change human functions. For example: a boost in intelligence, strength, power or compassion.

As you might imagine, there are a host of issues and dilemmas in this domain. Read this article if you want to know more:

I read the book The Ethics of Human Enhancement: Understanding the Debate. In this video I tell you what I learned from this book. The video is published on my YouTube channel:

This part zooms in on potential solutions.

In his books, Verbeek refers to ancient Greece to point us in the right direction for our ethical dilemmas. According to him, there’s a reason why the word ‘hybris’ (overconfidence) and the word ‘hybrid’ (humans + technology) are so similar. The ancient Greeks realized that the use of technology doesn’t come without risks. One of those risks is that people can become reckless and power-hungry.

One of the solutions that I personally like goes back to the title of one of his books: On Icarus’ Wings. In the Greek myth that this refers to, Daedalus created wings of feathers and wax for his son, to help him escape the island of Crete. He warned his son: ‘Don’t fly too low, because your wings will get wet in the ocean. Don’t fly too close to the sun, because the wax will melt and you will lose your feathers.’

That’s how we could look at the role of technology as well. Don’t become overconfident and recklessly try out everything you can, but don’t be too conservative either, because then you’d halt progress that could be used in beautiful ways. Such as curing diseases (with neurotechnology), solving food scarcity (with genetic modification) or further advancing humanity (with artificial intelligence).

Coding ethics

Kevin Kelly, author and technologist, believes that technology, and particularly artificial intelligence, will force humans to reflect more on ethics. That’s because we’re forced to code these kinds of ethical questions into deep learning and machine learning systems.

Entrepreneur and futurist Nell Watson is working on this as well. I met her at the Brave New World Conference 2017 in Leiden. Watson is currently working on a database of ethical dilemmas to help train artificial intelligence systems. According to her, the way we teach computers to distinguish pictures of cats from pictures of other animals can also be used to teach artificial intelligence what is and isn’t ethical.
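A heavily simplified sketch of that idea: labelled dilemma descriptions can train an ordinary text classifier, just as labelled photos train an image classifier. The toy examples and labels below are invented, and a real system would need a large, carefully curated dilemma database, which is exactly what such a project aims to provide.

```python
# Toy sketch: supervised text classification applied to ethical judgements.
# The training examples and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_texts = [
    "share a user's location data with advertisers without asking",
    "delete personal data when the user requests it",
    "secretly record conversations to improve the assistant",
    "ask for explicit consent before enabling the microphone",
    "sell medical records to the highest bidder",
    "anonymize data before publishing research results",
]
training_labels = ["unacceptable", "acceptable", "unacceptable",
                   "acceptable", "unacceptable", "acceptable"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(training_texts, training_labels)

# The model outputs a judgement for an unseen description; with six examples this
# only demonstrates the mechanism, it is nowhere near a usable ethical oracle.
print(model.predict(["record users without telling them"]))
```

The sketch also makes the obvious limitation visible: such a system can only reflect whatever labels humans put into the database.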

What is my conclusion?

That’s why it’s important that we keep experimenting and, subsequently, discussing the consequences. Not just scientists, but psychologists, anthropologists, sociologists and experts from within other fields as well. Professor Gary Marcus (New York University) shared a similar message at the World AI Summit 2017 in Amsterdam.

Our discussion focused specifically on artificial intelligence, but I think this goes for all technologies out there. We shouldn’t just rely on experts to give their opinion, but on everyone who wants to contribute to the discussion.

Because in the end, that’s what technology ethics is about. How do we increase our quality of life through the use of technology? Just to reiterate it: technology is a part of us, whether we like it or not.

No quick fix

In a world full of neurotechnology, genetic modification, artificial intelligence, biohacking, human enhancement, and human augmentation, we’re only going to become more intertwined with technology: we’re going to modify ourselves and upgrade our bodies with hardware and wetware. We’ll certainly make mistakes while we’re trying to find our way in this, but I’m convinced that ultimately, we’re going to be better off for it as humans.

But we do have to be smart about it. Or even better: wise. And no, there is no technical quick fix for that.

You can hire me for more knowledge and insights about this topic.

Please contact me if you want to invite me to give a  lecture , presentation, or  webinar  at your company, at your congress, symposium, or meeting.

Here you can find additional resources, like videos, websites and books.

At TBX 2022 I was in a panel with Jason Hart (Rapid 7) and Stefan Buijsman (TU Delft). The panel was moderated by Monique van Dusseldorp.

Reading list

This is a specific type of ethics:

  • What are the ethics of human enhancement?

These are relevant books:

  • Book Technology versus Humanity by Leonhard
  • Book What things do by Verbeek

What are your thoughts on technology ethics? Leave a comment!






Technology and Ethics, Essay Example


Understanding the relationship between technology and ethics requires an understanding of each of these concepts separately. Most people learn some form of ethical values while growing up as children. Consequently, most people have some preconceived notion about what is ethical and what is not, and this differs from person to person because each person may have a different definition of ethics in their own mind. Oxford Dictionaries defines ethics as “Moral principles that govern a person’s or group’s behavior” (Ethics, n.d.). Therefore, what ethics means in the mind of an individual sometimes depends on what that individual sees as moral. Technology, on the other hand, is more likely to be seen the same way by most people because it does not have an emotional element to it the way ethics does. Technology, as defined by Oxford Dictionaries, is “The application of scientific knowledge for practical purposes” (Technology, n.d.). This definition, however, seems somewhat vague, considering that it does not specifically include advancements in information and communication technology, so that aspect is worth noting here.

Privacy in the Workplace

Workplace privacy has become one of the most pressing issues in society today. However, according to Hartman (2001), workplace “privacy cannot be adequately addressed without considering a basic foundation of ‘ethics’”. This has much to do with individual concerns and organizational motivations. As stated, ethics is a matter of perception, which affects how people behave when using technology at work. The notions of technology and ethics often merge when it comes to their use in a business context, such as the workplace. This leads to issues that arise from differences in people’s perceptions of ethics and from their level of access to technology at work. Issues also arise when employers choose to use technology ethically or unethically for or against their employees. All of this highlights the issue of privacy in the workplace.

Companies acquire certain personal information from employees, shareholders, customers and vendors. Companies should, and most do, have privacy and ethics policies in place to protect personal information. This information includes names, addresses, phone numbers, social security numbers, federal identification numbers, and so on. Companies also have knowledge of employees’ income, investments and some health information. In addition, companies have access to, and often keep track of, employees’ computer activities by monitoring their Internet surfing at work, their emails or their phone conversations. According to Akcay (2008), these are things that most people would consider private. However, they are not necessarily always kept private in the workplace. Sometimes unscrupulous companies actually sell personal information to telemarketing companies. This scenario is certainly unethical and a clear violation of employee privacy by the employer.

Privacy issues relating to technology and ethics in the workplace can also take another form: the use of social media to post negative things about co-workers, subordinates or managers, whether at work or from home. In addition, privacy and ethical breaches can occur when employees inadvertently post private or image-damaging information about shareholders, customers, vendors or the company itself. For example, two employees were fired from Domino’s Pizza in North Carolina after one of them shot a video of the other stuffing cheese up his nose at work while he was making a customer’s sandwich. They posted the video on YouTube, where it went viral, damaging the image and reputation of Domino’s Pizza and sparking allegations that the food chain had improper food preparation standards (Chen, 2010). This is a clear example of how technology, ethics and privacy come together and, in this case, produced a negative outcome for both the employees and the company.

Witnessing Unethical Activity

There have been instances where I have witnessed unethical, technology-related activities in the workplace, but one in particular sticks out in my mind. That was the time I witnessed a co-worker pirating software from the company to take home and install on his own computer. He even bragged about it to me and another co-worker. This activity was definitely unethical because the software was not free; it had been paid for and licensed by the company, so taking it without permission was stealing. What we did not know was that the IT department ran a weekly report on all computer activity, and they saw what he had done. He was subsequently called into the director’s office, shown the proof of what he had done, and fired. Some would say that the company spied on him and that this was unethical, but the company’s privacy and ethics policies state that employees should not assume privacy while using workplace systems.
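As a rough sketch of how such a weekly report could work in principle (not the actual system that company used, which I do not know), a script can scan activity logs for company-licensed installers being copied to removable media and flag them for review. All file names, users and log entries below are invented.

```python
# Hypothetical weekly audit: flag licensed installers copied to removable drives.
# Every name and log entry here is invented for illustration.

licensed_installers = {"officesuite_pro_setup.exe", "designstudio_installer.msi"}

weekly_log = [
    {"user": "jdoe", "action": "copy", "file": "q3_report.xlsx", "target": "network_share"},
    {"user": "asmith", "action": "copy", "file": "designstudio_installer.msi", "target": "usb_drive"},
]

def flag_suspect_copies(entries):
    """Return entries where a licensed installer was copied to removable media."""
    return [e for e in entries
            if e["action"] == "copy"
            and e["file"] in licensed_installers
            and e["target"] == "usb_drive"]

for entry in flag_suspect_copies(weekly_log):
    print(f"review: {entry['user']} copied {entry['file']} to {entry['target']}")
```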

Privacy in the workplace, in my opinion, should be minimal when it comes to a company protecting its investments and making sure employees are working diligently and being good stewards of company resources. After all, no one would want someone misusing their own personal assets.

Akcay, B. (2008). The Relationship Between Technology and Ethics; From Society to Schools. Turkish Online Journal of Distance Education, 9 (4), 120-127. Retrieved from http://files.eric.ed.gov/fulltext/EJ816485.pdf

Chen, S. (2010, May 12). Workplace rants on social media are headache for companies . Retrieved from CNN: http://www.cnn.com/2010/LIVING/05/12/social.media.work.rants/

Ethics . (n.d.). Retrieved from Oxford Dictionaries: http://www.oxforddictionaries.com/us/definition/american_english/ethics

Hartman, L. P. (2001, Spring). Technology and Ethics: Privacy in the Workplace. Business & Society Review, 106 (1), 1, 27.

Technology . (n.d.). Retrieved from Oxford Dictionaries: http://www.oxforddictionaries.com/us/definition/american_english/technology





The ITEC Handbook, by José Roger Flahaux, Brian Patrick Green, and Ann Skeet

"Ethics in the Age of Disruptive Technologies: An Operational Roadmap, ” or, more briefly, the “ ITEC Handbook, ” offers organizations a strategic plan to enhance ethical management practices, empowering them to navigate the complex landscape of disruptive technologies such as AI, machine learning, encryption, tracking, and others while upholding strong ethical standards.

The ITEC Handbook is available below as a free PDF download, or you can purchase an eBook or print-on-demand edition from Amazon.

Download the ITEC Handbook


A condensed excerpt from “Ethics in the Age of Disruptive Technologies: An Operational Roadmap."

Overview

A high-level overview of the five stages and the four appendices of the ITEC Operationalization Roadmap.

ITEC Principles and How to Use Them: Anchoring, Guiding, Specifying, and Action

Anchoring, guiding and specifying principles and examples of actions, customizable for varying organizational needs.

"This handbook gives the reader and their organization the clarity of vision necessary to deal with the new problems that are appearing and will continue to appear as emerging technologies begin to affect society."

About the Authors

José Roger Flahaux is a retired hands-on high-tech operations executive with an extensive background in global supply chain and operations management with companies such as Burroughs, Unisys, Raychem, SanDisk, and Corsair Components. He is also an adjunct professor in the department of Industrial and Systems Engineering at San José State University, where he teaches Engineering Management Systems in a Global Society. He holds an Electrical Engineering undergraduate degree from ITPLG in Belgium, and an MSE in Engineering Management from SJSU.

Brian Patrick Green is the Director of Technology Ethics at the Markkula Center for Applied Ethics at Santa Clara University. He has worked extensively with technology corporations as well as organizations such as the World Economic Forum, the Partnership on AI, and the Vatican’s Dicastery for Culture and Education. He teaches AI Ethics in Santa Clara University’s Graduate School of Engineering. He holds doctoral and MA degrees in ethics and social theory from the Graduate Theological Union in Berkeley, a BS in genetics from the University of California at Davis, and is author or editor of several volumes on ethics and technology.

Ann Gregg Skeet is the Senior Director of Leadership Ethics at the Markkula Center for Applied Ethics. She works with corporations and organizations such as the World Economic Forum, the Partnership on AI, and the Vatican’s Dicastery for Culture and Education. She teaches in corporate board readiness programs at Santa Clara University’s Silicon Valley Executive Center. She served as CEO of American Leadership Forum Silicon Valley for 8 years, worked for a decade as a Knight Ridder executive, and served as president of Notre Dame High School in San Jose. She is a graduate of Bucknell University and has a master of business administration degree from Harvard Business School.


  14. Best Ethical Practices in Technology

    However, those specific codes of practice can be shaped by reflecting on these 16 broad norms and guidelines for ethical practice. 1. Keep Ethics in the Spotlight—and Out of the Compliance Box: Ethics is a pervasive aspect of technological practice. Because of the immense social power of technology, ethical issues are virtually always in play ...

  15. Ethics in the digital world: Where we are now and what's next

    This question has led to significant growth in interest in data ethics over the last decade (Figures 1 and 2). And this is why many countries are now developing or adopting ethical principles, standards, or guidelines. Figure 1. Data ethics concept, 2010-2021 Figure 2. AI ethics concept, 2010-2021. Source: Google Trends

  16. To Each Technology Its Own Ethics: The Problem of Ethical ...

    In a world of rapid development and dissemination of technology, ethics plays a key role in analysing how these technologies affect individuals, businesses, groups, society, and the environment (Sætra and Fosch-Villaronga, 2021), and in determining how to avoid ethically undesirable outcomes and promote ethical behaviour.While ethics is a staple in many academic fields, it is also gaining ...

  17. Ethics in the IT Practice

    Ethics in the IT Practice Essay. For several years, most debates about ethics and information technology (IT) have focused on issues of professional ethics and issues of privacy and security. The professional aspects of ethics in IT concern preserving and developing a successful reputation through tasks completions.

  18. Technology Ethics: Definition, Issues & Examples

    Human Enhancement Ethics. Human Enhancement concerns the use of science and technology to improve, increase and change human functions. For example: a boost in intelligence, strength, power or compassion. As you might imagine, there are a host of issues and dilemma's in this domain.

  19. Ethics in Technology Essay

    Ethics in Technology Essay. In the early years of computers and computerized technology, computer engineers had to believe that their contribution to the development of computer technology would produce positive impacts on the people that would use it. During the infancy of computer technology, ethical issues concerning computer technology were ...

  20. Tech ethics 101: The importance of responsible technology use

    Conclusion. In conclusion, tech ethics 101 involves being mindful of the ethical implications of our technology use and approaching it with a sense of responsibility. This includes considering ...

  21. Technology and Ethics, Essay Example

    The notions of technology and ethics often merge when it comes to their use in the business context, such as in the workplace. This often leads to issues that arise as a result of the differences in people's perceptions of ethics and their level of access to technology in the workplace. In addition, issues also arise when employers choose to ...

  22. Ethics in the Age of Disruptive Technologies: An Operational Roadmap

    The ITEC Handbook By José Roger Flahaux, Brian Patrick Green, and Ann Skeet. "Ethics in the Age of Disruptive Technologies: An Operational Roadmap," or, more briefly, the "ITEC Handbook," offers organizations a strategic plan to enhance ethical management practices, empowering them to navigate the complex landscape of disruptive ...

  23. Ethical dilemmas in technology

    Ethical dilemmas in technology | Deloitte Insights