The IEEE has recently published a set of ethical guidelines for the future of AI development. The guidelines aim to ensure that algorithms are designed to benefit humanity as a whole, especially in sensitive roles such as driverless cars and medical care. Here are a few of the points covered, along with a brief discussion of the ethical issues surrounding the development of advanced AI algorithms.
AI algorithms are a burgeoning new technology, and like any new technological development, they require regulation to ensure that they remain beneficial to humanity as a whole. Enter the Institute of Electrical and Electronics Engineers (IEEE). The organization recently released the first draft of a document called Ethically Aligned Design, which collects the thoughts of more than 100 industry leaders to create guidelines for the future of AI ethics.
The guidelines run to 136 pages and can be read here. In them, you’ll find discussions of issues ranging from autonomous weapons systems to a methodological guide for ethical research. Here’s a brief rundown of three essential sections of the AI ethics document:
1. AI Ethics Need to Respect Human Rights
Multiple humanitarian issues surround the rise of advanced AI. For example, as automation becomes more sophisticated, more people risk losing their jobs to an AI.
The speed of technological change will place heavy stress on the world’s employment structures. Regulatory bodies will need to take this shift into account when making legislation and policy, respecting humans’ right to employment without slowing the march of innovation.
2. Advanced AI Will Need to Operate Transparently
When an AI is placed in a sensitive position where it could decide the fate of a human life, it is important to be able to track its every action and decision.
The IEEE notes that since there is no organization dedicated to the independent review of algorithmic operations, developers should ensure that AI programs are created by “a multidisciplinary and diverse group of individuals.” The hope is that such a group would be better able to address the AI ethics issues that arise from its research.
3. Somebody Needs to be Accountable for Automated Decisions
The final section of the document details the need to improve accountability and verifiability in intelligent systems. Machine learning implies that the actions of an AI aren’t pre-programmed; rather, they are decided by the algorithm based on the information it has at hand. In other words, some AI programs are going to be making judgment calls when they act, and this gives rise to a host of legal and ethical questions that lawmakers will need to consider as the technology flourishes.
Trudging on Into the Future
Morality depends on respecting a set of absolutes. Yet the variability of complex situations makes a truly absolute set of rules impossible; the only constant in our world is change.
For example, ‘Do not kill’ is a common law in many societies. But would you obey it if someone were holding a gun to your mother’s head? We can make rules, but they aren’t truly absolute. How do we build AI that understands the rules as well as the exceptions?
We may one day be able to use quantum computing to let an algorithm enumerate every possible action and response to a given situation, but a more human-like discretion is hard to imprint.
The IEEE AI ethics guidelines are a step in the right direction toward solving these problems, and with any luck, the developers of AI will pay attention. Just as humans need support structures to learn morals and ethics, so does AI.