
Ethics and software: what works, what doesn’t?

Philosopher Julian Nida-Rümelin and computer scientist Alexander Pretschner are jointly researching how ethical considerations can be integrated into software development from the outset.


At bidt, you are jointly working on the project “Ethics in agile software development” – how are ethical considerations currently taken into account in this area?

Alexander Pretschner: Software development is not something a single engineer does alone; it is a social process with many participants. A system is developed in a context: the company the developers work for and the society in which the product will be used.

If someone is programming a calculator, there may not be much to consider ethically. But there are areas in which ethical considerations already play a major role today, and by that I don’t just mean, say, engine control software created with fraudulent intent. Even without fraud, ethical considerations play a role, for example in companies that provide data-integration software – that is, that sell systems which may end up being used in contexts that are at least questionable.

Questions arise, for example, as to whether one should integrate facial recognition knowing that one is accepting certain recognition error rates that cannot be reduced arbitrarily by technical means. These companies are thinking hard about where to draw the line in software development, about what should and should not be done, and they are establishing processes for this. But there are also companies that use ethics merely as a fig leaf to present what they do as ethically correct.


Julian Nida-Rümelin: I would like to place our project in a larger context. The ethical dimension is always present in all major technical and scientific developments.

Remember the heated debate about nuclear energy: at the time, it was about setting the course for the future of humanity’s energy production. Almost the entire technical, political and scientific establishment was initially of the opinion that this was the safest and most sustainable form of energy generation for the planet. Then public resistance arose, at first with weak arguments and seemingly irrational fears, gradually supported by individual scientists, until the debate spread and, in the end, society had a basis for judging the energy scenarios more rationally than had been possible before.

The same pattern repeated itself with human genetics. There, too, there were fears – some completely exaggerated – that human-animal hybrids would soon be bred and that humans would clone themselves in order to live forever. At the same time, this gave rise to a kind of critical engagement with the potential of human genetics.

The question is always how to deal with the ethical dimension. One possibility is strict separation: on the one hand there is basic research and technology; on the other, society, churches or legislators have to judge what is ethically appropriate and what is not. I don’t think it works that way. That is why I have been involved in these areas, including digital transformation, for decades.

Society is not in a position to control developments that have already taken on a momentum of their own in science and technology. Different disciplines – economics, philosophy, law, computer science, human genetics and so on – must be brought together to enable sensible steering.

What does that mean, then, for the individual – for example in software development?

Julian Nida-Rümelin: That is the other extreme that always comes up in such debates: every single scientist and every single technician would have to take personal responsibility for ethically sensitive issues. But that completely overburdens these people and moralises their work, which is equally unacceptable.

So the question is how to steer this so that individuals neither break under the responsibility nor cynically do only what they are told, while at the same time developments are not set in motion without ethics, only to be reined in by legislation after the fact. This is the context in which I see our joint project. It is about integrating the ethical dimension into software development itself, and into its management methods, without overburdening the individual actors, for example the software developers.

What does this look like in practice?

Alexander Pretschner: We look at agile software development. Among other things, it is characterised by completing partial products in short cycles. One of the core features of digitalisation is that contexts, users and needs change, as do technical and organisational opportunities. In agile software development one therefore works in so-called sprints, in order to be able to react to changing requirements.

The idea of ethical development is that you consider ethical aspects in every such cycle: could this or that be abused? This approach is already being applied relatively successfully to data protection and security issues.
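To make this concrete: a minimal sketch, under purely illustrative assumptions, of how a team might encode such recurring questions into a sprint’s definition of done. The questions, names and workflow below are hypothetical, not a method prescribed by the bidt project.

```python
# Hypothetical sketch: recurring ethics questions as part of a sprint's
# "definition of done". Questions and names are illustrative only.

ETHICS_CHECKLIST = [
    "Could this feature be abused, and by whom?",
    "Does it collect or expose more personal data than it needs?",
    "Could its errors affect some groups of users more than others?",
]

def open_ethics_items(answers: dict[str, str]) -> list[str]:
    """Return the checklist questions left unanswered this sprint."""
    return [q for q in ETHICS_CHECKLIST if not answers.get(q)]

# Usage: the team records its answers during the sprint review.
answers = {ETHICS_CHECKLIST[0]: "Abuse cases reviewed with the product owner."}
for question in open_ethics_items(answers):
    print("Unresolved before release:", question)
```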

Julian Nida-Rümelin: There is a simple way of integrating technical, scientific and empirical questions on the one hand with ethical questions on the other, which in technical terminology is called consequentialism: considering what the consequences of a given practice are. Part of my scientific work has been to show that this alone does not work.

It is absolutely important to consider the consequences of one’s actions, but that is not all; other things come into play as well. In the Corona crisis, for example, it is individual rights that raise the question: can a society simply set in motion a practice that minimises harm but massively violates individual rights? And here I must disappoint the expectation that ethics will simply supply a criterion. In my opinion, it cannot. It can only encourage clear thinking and ask: how do you want to weigh this? Let us weigh it. This weighing is not simple; it is rather more complex, and there are also dilemmas, perhaps insoluble ones.

Philosophy must then be modest and say: what we contribute is conceptual clarity. But in the end, society as a whole is called upon to weigh up these ethical questions.

Alexander Pretschner: So far in this discussion we have sounded as if the decision were always yes or no. It is not. It is possible to build in mechanisms that may not be able to prevent terrible things from happening but can deter them – for example, logging mechanisms that record who accessed certain data and when. That is a deterrent, even if it does not prevent systems from being abused. I think this is a promising approach.
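A minimal Python sketch of the kind of audit logging described here, recording who accessed which record and when. The function names and data are hypothetical assumptions; a production system would additionally need tamper-evident storage and access control for the log itself.

```python
# Illustrative audit-logging sketch: every data access is recorded
# with user, record and UTC timestamp. Names and data are hypothetical.

import functools
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

def audited(func):
    """Wrap a data-access function so each call is logged."""
    @functools.wraps(func)
    def wrapper(user_id, record_id, *args, **kwargs):
        audit_log.info(
            "user=%s accessed record=%s at %s",
            user_id, record_id, datetime.now(timezone.utc).isoformat(),
        )
        return func(user_id, record_id, *args, **kwargs)
    return wrapper

@audited
def read_patient_record(user_id: str, record_id: str) -> dict:
    return {"record": record_id}  # stands in for the real data lookup

read_patient_record("alice", "patient-4711")
```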

Can you give examples where the ethical dimension was not sufficiently taken into account in the development of software?

Julian Nida-Rümelin: There are examples of software systems that reproduce or reinforce social prejudices – racist bias in facial recognition, for example, or systematic discrimination against women in hiring.

This is because a good part of data-driven software development relies on correlations. Of course, we have to be very careful that this big-data development does not acquire a dynamic that leads to a completely skewed steering of society.

Alexander Pretschner: One of my favourite examples is from Austria, where artificial intelligence was used to decide whether or not unemployed people should receive funding for training measures; the system automatically refused anyone over 50. Another example is when the postcode of one’s place of residence determines whether one gets a loan. There may well be reasons for this, for instance because default rates are higher in certain districts, but the question is how to deal with that knowledge.

Julian Nida-Rümelin: Correlations are only interesting as indications of causal relationships. If it can be shown that there is a correlation but no causal connection, the correlation becomes irrelevant. So in software development you basically have to build in a kind of filter that is theory-laden and decides what is causally relevant. That is a difficult question and not easy to answer. But you have to make this effort, otherwise grotesque results keep appearing.
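One simple technical reading of such a filter, sketched under illustrative assumptions: protected attributes and documented proxy features (such as a postcode standing in for neighbourhood demographics) are excluded from a decision model on explicit, recorded grounds, rather than retained merely because they correlate with the outcome.

```python
# Hypothetical sketch of a theory-laden feature filter for a loan model:
# features are dropped for documented causal reasons, not by correlation alone.

PROTECTED = {"age", "gender"}
KNOWN_PROXIES = {
    "postcode": "proxies neighbourhood demographics, not repayment behaviour",
}

def filter_features(features: dict) -> dict:
    """Drop protected attributes and documented proxy features."""
    return {
        name: value
        for name, value in features.items()
        if name not in PROTECTED and name not in KNOWN_PROXIES
    }

applicant = {"age": 52, "postcode": "81675", "income": 3200, "existing_debt": 150}
print(filter_features(applicant))  # {'income': 3200, 'existing_debt': 150}
```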


Can you always foresee whether a product might have negative effects in the future?

Alexander Pretschner: Of course, it is often unclear. You can argue today that the flat-rental platform Airbnb is terrible because of the consequences we now see, for example that flats in city centres are rented out only to short-stay tourists. Taken too far, the ethical argument would say: this should have been known from the start, and it should not have been done. But such an approach would prevent innovation. If the end product, at some point – perhaps in ten years’ time – has effects that are socially undesirable, is that something the individual engineer has to take into account in advance? Of course not.

I don’t think it would always make sense to decide in advance not to do something. And yet there are things one should perhaps pay attention to from the start. These include, for example, the question: should drones be able to fly over religious sites and film people? Developers can and must think about this. Ultimately, though, they will not be able to make the decision alone, and they should not be saddled with all the responsibility.

But they can talk about it and, if companies have appropriate mechanisms in place, they can make sure they are heard when they say: we find what we are doing here really scary – do we really want this?

Julian Nida-Rümelin: Perhaps I can add one more aspect: the idea that politics can answer these questions from its own resources is unfounded. This also has to do with the distribution of competences. The ministries employ highly qualified lawyers, but they usually have no domain expertise, whether in human genetics or in computer science.

In the highly complex system we live in today, in which science and technology are so central, impulses from these fields must actively be carried into the public sphere and into politics – in the sense of: there are developments in which we see risks; we cannot regulate this for society as a whole, but we can advise on how to deal with it.

There is a famous example of this: the Asilomar conference, at which geneticists themselves called for a kind of moratorium in order to establish safety standards, so that control over the process would not slip away. In this sense, I also see our project at bidt as a kind of early-warning system for software development.