Having studied math in Iran and completed her PhD in machine learning (ML) and computer science at Georgia Tech in the US, Samira Samadi considers herself a “completely technical person.” However, in recent years she has begun pursuing more interdisciplinary research, with a particular interest in exploring the impact of machine learning systems on people.
In September 2020, Samadi joined the Max Planck Institute for Intelligent Systems (MPI-IS) and the Tübingen AI Center as leader of the “Human Aspects of ML” research group. The group focuses primarily on two topics. The first relates to ethics and fairness in machine learning, with a particular emphasis on exploring ways to prevent bias and discrimination in decision-making. The second explores ways of building teams of humans and machine learning systems that make better joint decisions.
One of the central goals of research in the field of artificial intelligence is to design machines that can make decisions in much the same way humans do. However, machine learning algorithms can only make decisions based on the data they’ve been fed – they then reproduce biases in that data in their decision-making. This poses a challenge particularly in areas where the use of machine learning and artificial intelligence has an impact on people’s lives. For instance, many banks use algorithms to determine their customers’ credit scores, and thus to make decisions about loan applications. However, given that these algorithms are trained with historical data that can be both biased and incomplete, the recommendations they make can be unfair or discriminatory. In examining approaches to machine learning that can help ensure fair and unbiased decisions, Samadi collaborates with researchers from other disciplines, including philosophy, economics, and the social sciences.
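To make the loan example concrete, here is a minimal illustrative sketch, not Samadi’s own method, of one common way researchers quantify such disparities: demographic parity, which asks whether a model approves applicants from different groups at roughly equal rates. All the data and names below are invented for illustration.

```python
# Illustrative sketch: measuring the demographic parity gap, one common
# fairness criterion, on a toy set of loan decisions (1 = approve).
# All data below is made up for this example.

def approval_rate(decisions):
    """Fraction of applicants approved (decision == 1)."""
    return sum(decisions) / len(decisions)

# Hypothetical model decisions for two demographic groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # approval rate 0.75
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # approval rate 0.375

# Demographic parity asks that approval rates be (nearly) equal
# across groups; a large gap signals possible discrimination.
gap = abs(approval_rate(group_a) - approval_rate(group_b))
print(f"demographic parity gap: {gap:.3f}")  # 0.375 here: a large disparity
```

A gap this size would prompt closer inspection of the training data and the model; in practice, researchers weigh several such criteria, since they can conflict with one another.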
“In the past, machine learning applications were generally developed in isolation. People were looking at how they could get the best performance, but they weren’t sufficiently considering how the systems they developed actually affected people,” Samadi says. “In order to develop machine learning algorithms that are ethical and fair, you need to get the perspectives of experts in a broad range of disciplines.” This is one of the reasons why she chose to come to Tübingen: the city’s multidisciplinary research community has enabled her to initiate projects with scientists who are just a phone call and a quick walk away.
In addition to developing machine learning systems that make decisions based on principles of fairness, Samadi also aims to explore methods that will enable a collaborative synergy between humans and machines when they make joint decisions. For instance, in medical diagnosis, the physician might get help from a machine that analyzes all the patient’s historical data and makes a prediction about their medical needs. To build more effective hybrid human-ML models, Samadi builds meta-algorithms that shape the decision-making dynamic between humans and ML.
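One simple way such a decision-making dynamic can be shaped is a deferral rule: the model decides when it is confident and hands uncertain cases to the human expert. The sketch below illustrates that general idea only; it is not Samadi’s actual meta-algorithm, and the function names and threshold are assumptions made for this example.

```python
# Illustrative sketch of a human-ML team via a confidence-based
# deferral rule (an assumption for this example, not Samadi's algorithm).

def team_decision(ml_prob, human_label, threshold=0.8):
    """Return (decision, who_decided).

    ml_prob: model's predicted probability of the positive class.
    human_label: the human expert's decision (0 or 1), used on deferral.
    threshold: minimum model confidence required to decide without the human.
    """
    # Confidence is how far the predicted probability is from 50/50.
    confidence = max(ml_prob, 1 - ml_prob)
    if confidence >= threshold:
        return int(ml_prob >= 0.5), "model"
    return human_label, "human"

# Confident model prediction: the model decides.
print(team_decision(0.95, human_label=0))  # (1, 'model')
# Uncertain prediction: the case is deferred to the human.
print(team_decision(0.55, human_label=1))  # (1, 'human')
```

The interesting research questions sit exactly in choices like this: when the machine should defer, how humans respond to machine advice, and whether the team outperforms either party alone.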
Samadi’s research is also motivated by a broader movement of scientists who believe that simple models can yield more interpretability and fairness without necessarily compromising performance. This movement challenges a long-held view in the machine learning community that the more complicated the model, the better it is. “People don’t only care about outcomes, but also about their dignity and whether they were treated the right way. They want to feel that the procedure is fair. But unless you understand what’s going on in an ML system, you can’t make sure it’s doing things right.”