
All-Seeing Algorithms

Building ethics into artificial intelligence systems
By Cindy Spence


Artificial intelligence and computer science researchers say getting machines to do things right has turned out to be relatively easy.

We program Roombas to vacuum our homes, but don’t expect them to brew our coffee. We program robotic arms to sort parts in factories, but not to decide which colors to paint cars. We program doorbells to tell us who is at the door, but not to let them in. Most of our machines do one thing and do it well, usually in error-free fashion. They get the task right.

But getting machines to do the right thing — the ethical thing — now that’s a different problem.

And, for now at least, it has a lot more to do with getting people to do the right thing.

Duncan Purves, an associate professor of philosophy, specializes in the emerging ethical issues raised by novel technologies, particularly artificial intelligence and big data applications. Machines run on algorithms and do what algorithms tell them to do. But algorithms are mostly designed by people, and it’s challenging, Purves says, to create an algorithm that aligns with our ethical values.


“One way to think about ethics is as a set of principles or rules that determine how we ought to behave, so that ethics are about action, behavior,” Purves says. “The ability to think ethically is what distinguishes humans from animals.”

And from machines.

If ethics are the guidelines that determine human actions, algorithms are the guidelines that determine the actions of machines. Algorithms already permeate our lives: who shows up on our dating apps, which job applicants make it into a hiring pool, who gets a mortgage or car loan, which route we travel from point A to point B, which advertisements we see on social media, which books Amazon recommends, who gets into college, and where and when we deploy police.

Ethical considerations often don’t have legal consequences, but they have consequences that matter, nevertheless. Purves uses the example of keeping a secret.

“You can tell me a secret, and I promise not to tell the secret, but then I do. It’s not a crime. It might damage my reputation with others, or I could lose your friendship, but these are not the only reasons I would give for keeping a secret. I would simply say, because I promised I would keep it, keeping it is the right thing to do,” Purves says. “A commitment to doing the right thing is what motivates me.”

Who Controls Data?

Just as there is no law prohibiting you from telling a friend’s secrets, there are few laws today about collecting and using data that feed algorithms. Many of the issues with data collection and algorithms only become apparent after an application is in use, Purves says.

The data collected about us — from our cellphones, GPS trackers, shopping and browsing histories, our social media posts — add up to a bonanza for marketers and researchers. Both commercially and scientifically, the data have value.

But the people generating the data often don’t control how the data are used. And the data can be used to develop algorithms that manipulate those very people. We give away our information in byzantine terms of service agreements that we often don’t read. Social media platforms and dating apps often use us for A/B testing, a fairly routine and benign use, but also for their own research.


Purves points to a Facebook example from 2012, when the platform manipulated the news feeds of selected users, showing some users positive articles and showing others negative articles. The results demonstrated emotional contagion: Those who saw positive articles created more positive posts themselves. Those who saw negative articles posted more negative posts themselves. No one asked the users if they wanted to take part.

In another example, which came to light in 2014, the dating app OkCupid experimented with its matching algorithm, removing photographs in one experiment and telling users in another that they were good matches with people they otherwise would not have matched with. When the manipulation was revealed, the CEO said the experiments were fair game since the users were on the internet.

These cases were not illegal, but were they ethical? Purves says it might be worth exploring a review process for data science experiments along the lines of the review process for health-related experiments.

“Do data scientists have the same obligations to data subjects as medical researchers have with their subjects?” Purves asks.

Another issue is the privacy of our data. Privacy, Purves says, is a basic human need.

“We need a protected sphere of control over information about ourselves and our lives,” Purves says. “Controlling information about ourselves helps us to shape our relationships with other people. Part of what makes my relationship to my wife a special one is that I choose to share information about myself with her that I would not share with my colleagues. In a world where everyone had perfect information about me, this selective sharing would not be possible.”

Privacy also protects us from those who would use our information to harm us, for example, by exposing a private history of addiction or disease.

“When we lose control over access to our personal data, we lose some degree of privacy,” Purves says.

Still, all these data are too tantalizing to lock away. A method known as differential privacy aims to provide as much useful statistical information as possible while introducing enough mathematical “noise” that no individual in the dataset can be identified. Differential privacy, for example, ensures someone can contribute her genetic information to a database for research without being identified and having the information used against her.

With a security guarantee for users, researchers can use the data to make new discoveries.
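
For readers curious about how that works, here is a minimal sketch in Python of one common approach, the Laplace mechanism. The scores, threshold and privacy parameter below are invented for illustration; they are not drawn from any real study.

```python
import numpy as np

def private_count(values, threshold, epsilon=0.5, rng=None):
    """Return a noisy count of entries above a threshold.

    Adding or removing one person changes a count by at most 1 (the query's
    "sensitivity"), so Laplace noise scaled to 1/epsilon masks any single
    individual's contribution while keeping the aggregate statistic useful.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(v > threshold for v in values)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical query: how many participants have a genetic-risk score above 0.8?
scores = [0.91, 0.42, 0.87, 0.13, 0.95, 0.66]
print(round(private_count(scores, threshold=0.8), 2))
```

Smaller values of epsilon inject more noise and give stronger privacy; larger values give more accurate statistics but weaker protection.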

Predictive Policing

Algorithms designed to classify information into categories do their job well, Purves says. But optimizing algorithms to meet our social values is tricky.

Purves and a colleague at California Polytechnic State University are exploring the intricacies of algorithms used in predictive policing under a $509,946 grant from the National Science Foundation. On the surface, using algorithms as a crime-fighting tool makes sense. Many large departments, from Los Angeles to New York, use predictive policing to stretch resources: algorithms can assess crime hotspots — predicting where and when crimes will occur — and replace the people who once did that work.

“Police departments don’t adopt these technologies for no reason,” Purves says.

Machines also may be more accurate and potentially less biased than human officers — with a caveat.

Some algorithms are based on historical arrest data, and if those data are a function of pre-existing discriminatory practices by police officers, the algorithms will reflect that bias and reinforce it.

Algorithms trained on biased arrest data will recommend greater police presence in communities of color, Purves says. The presence of more officers will yield more arrests. The increase in arrests will be used as a proxy for higher crime rates, and the cycle becomes a feedback loop.
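
The loop is easy to see in a toy simulation. Everything below is hypothetical: two neighborhoods with identical true crime rates, one of which starts with a larger arrest record because it was historically over-policed. If patrols are allocated in proportion to recorded arrests, and more patrols produce more recorded arrests, the data keep confirming the original skew.

```python
# Toy illustration of the feedback loop (all numbers hypothetical).
# Both neighborhoods have the same true crime rate; only the historical
# arrest record differs.
true_incidents = [10, 10]        # actual incidents per period, identical
recorded_arrests = [30.0, 10.0]  # biased historical record for neighborhood A
total_patrols = 20

for period in range(5):
    # Patrols are allocated in proportion to recorded arrests ...
    share_a = recorded_arrests[0] / sum(recorded_arrests)
    patrols = [total_patrols * share_a, total_patrols * (1 - share_a)]
    # ... and more patrols mean more of the (identical) crime gets recorded
    # as arrests, so the skewed record reproduces itself.
    for i in range(2):
        recorded_arrests[i] += true_incidents[i] * patrols[i] / total_patrols
    print(f"period {period}: neighborhood A gets {share_a:.0%} of patrols")
```

Even though the two neighborhoods never differ in actual crime, the first keeps drawing three-quarters of the patrols, and its recorded-arrest lead widens every period.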

Purves and his colleague are trying to determine whether other kinds of less biased data can be used to train algorithms.

Another avenue of investigation, he says, is the gap between identifying where crime happens — assuming it can be done accurately — and deciding what to do with that information. PredPol, a predictive policing software package, can flag a 500-by-500-foot area as susceptible to, for example, vehicular theft at a particular time and day. The department can respond with beefed-up patrols.
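
At its simplest, that kind of prediction amounts to binning past incidents by place and time and flagging the busiest bins. The sketch below is not PredPol’s actual model; the grid cells, incident list and three-hour windows are invented purely to show the shape of the computation.

```python
from collections import Counter

# Hypothetical past vehicle thefts as (grid_x, grid_y, hour_of_day).
# Each grid cell stands in for a small area like the 500-by-500-foot
# boxes mentioned above.
incidents = [
    (3, 7, 22), (3, 7, 23), (3, 7, 22), (3, 7, 21),
    (1, 2, 14), (1, 2, 15), (5, 5, 3), (5, 5, 2),
]

def hotspots(incidents, top_n=2):
    """Count incidents per (cell, 3-hour window) and return the busiest."""
    counts = Counter((x, y, hour // 3) for x, y, hour in incidents)
    return counts.most_common(top_n)

for (x, y, window), n in hotspots(incidents):
    print(f"cell ({x},{y}), hours {window * 3:02d}-{window * 3 + 2:02d}: {n} thefts")
```

A real system would layer far more onto this, but the output is the same kind of object: a ranked list of small areas and time windows that a department must then decide how to act on.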

“An important question is whether increased patrols is the best police response to crime predictions,” Purves says. “There are others available.”

A better response to such predictions might be to alter the physical environment, rather than put more officers in the neighborhood. It may be possible, Purves says, to disincentivize crime by demolishing abandoned buildings and installing better street lighting, which would avoid unintentional violent confrontations between officers and citizens.

“That’s a feature of these technologies that’s been underexplored,” says Purves, who serves on several dissertation committees for computer science graduate students interested in ethics. “We’ve got the technology, it can anticipate crime and even do so effectively, but how should we respond to those predictions?”

By the end of the three-year grant next year, Purves and his colleague hope to produce a report of best practices that police departments can use in deploying AI.

War Machines

Purves’ interest in the intersection of AI and ethical issues started with an email from a friend asking about ethical concerns with using autonomous weapons in warfare.

“Autonomous weapons are interesting from the perspective of ethicists because they’re essentially machines that decide to kill people,” Purves says.

Some concerns include how a machine identifies a gun on a battlefield or distinguishes between an innocent civilian and a combatant. But even in a world where the technical capabilities are perfect, there are other issues with machines that decide to take life.


“No matter how sophisticated you make a machine, it’s very unlikely that it’s going to ever develop the moral intuition or moral perception we make use of in issuing our moral judgments,” Purves says. “Without these distinctly human capacities, you could never rely on it to make sound moral judgments in warfare.”

But suppose machines could develop these human capacities. That presents still another set of problems.

“If you have reason to believe that you can rely on the moral judgment of an autonomous weapon system, then you also have reason to believe that you should care about what you do to that system,” Purves says. “Can you send it off to war to kill people without having asked its permission? You’re caught in a kind of dilemma.”

Finding commonality with the awareness or consciousness of a machine is difficult for humans. With other humans and with non-human animals, we have a kind of solution to the problem of other minds, Purves says. We share an evolutionary past and a physiological composition.

“We share enough that I can say, ‘If I have these features, and I’m having these experiences, then I have reason to believe that if you share those features you also have those experiences,’” Purves says.

“We don’t have any of that shared history or shared physiology with machines.”

With weapons of war, there may be an argument that if machines make fewer mistakes than humans on the battlefield, then we should deploy them, Purves says. Give them very circumscribed tasks and limit their moral mistakes.

The dilemma of what we want our algorithms to produce — accuracy AND fairness — may require a deeper look into our society.

“Structural power imbalances in society can be the source of some of the greatest ethical challenges for AI and big data,” Purves says. “To create algorithms that align with our ethical values, sometimes we must think deeply about what those ethical values really are.

“In some sense, the dilemma is not for the data scientists,” Purves says, “but rather a dilemma for our own concepts of fairness.”


Source:

Duncan Purves
Associate Professor of Philosophy
dpurves@ufl.edu