Professor Eric Hilgendorf is a member of the bidt Board of Directors and co-project leader of two research projects at the Institute. In the interview, the legal scholar talks about the legal challenges of digitalisation, for example in technologies such as artificial intelligence, and about how legal assessments sometimes need to change over time.
You are leading the project “On the relationship between legal policy and ethics in digitalisation” with the philosopher Professor Nida-Rümelin. Could you briefly explain the legal perspective on this?
The digital transformation raises a multitude of societal challenges. Society will have to create legal framework conditions to make the transformation compatible with human needs. If you look back 250 years, to the Industrial Revolution, you can see that at that time legal regulations were not created in time. This led to catastrophic conditions: to the impoverishment of broad sections of the population, to social upheavals and even revolutions, and to extreme environmental pollution. It then took more than 100 years to remedy these problems. The task now is to do better and to introduce legal regulations in time.
Our project is based on the premise that lawyers and legal politicians cannot simply develop such regulations, but that ethical considerations are necessary.
The aim is to examine recent legal policy efforts in connection with the containment of digitalisation to see which legal-ethical considerations are invoked, and whether the invocation of such arguments meets philosophical standards: Are they sufficiently clearly formulated and consistent? Is the appeal to human dignity or the common good, for example, unreflective or theoretically well-founded? Is the legislator content with slogans, or are its arguments based on a viable theory?
We hope that legal policy will gain in rationality and thus, of course, in persuasiveness and acceptance among the population. Our project is therefore not about a superficial evaluation of legal policy projects in the context of digitalisation, but about the systematic recording, analysis and evaluation of the legal-ethical arguments put forward in this context. Law and ethics work hand in hand here.
The dynamics of digital transformation can only be captured legally by first focusing on framework conditions.
What challenge does the dynamic of the digital transformation pose for the law?
It will not be possible to regulate every detail immediately. It is often even dangerous to create detailed regulations in response to problem scenarios that are, or merely seem, acute: such rules may not fit future developments and can become obsolete again shortly afterwards.
Basically, the first step is always to consider scenarios or to analyse existing scenarios to see whether the current law is applicable to them and, if so, what new interpretations might be necessary to ensure applicability – for example, in the case of autonomous driving or the formation of monopolies on the internet.
So new regulations are not necessarily needed for new technologies?
That’s right. You will often find that the existing rules are sufficient. Here, science is first called upon to think through situations and put solutions up for discussion. Snap decisions in legal policy often do more harm than good.
You are one of the project leaders in a new project at bidt on “Human-AI Partnerships for Explanations in Complex Sociotechnical Systems”. What do you think are the interesting questions here?
It seems particularly interesting to me that a new type of actor is possibly emerging here that has not existed in this form until now – at least that’s what many people think. Unlike a classic car or an industrial machine driving an assembly line, with artificial intelligence we are dealing with something that at least appears to act on its own. This is the case, for example, when an artificial intelligence concludes contracts automatically, or when a machine does not merely react to human input but acts completely independently and learns on its own. If this leads to damage, new types of liability issues arise.
Artificial intelligence poses very exciting new ethical and legal challenges.
Some AI applications, such as humanoid robots, are built in such a way that they also look like actors. What does that mean legally?
With robots that look like humans and express themselves in this way, perhaps even feigning emotions, it will be very difficult for a citizen to draw the line between human and object.
If, for example, the robotic seal Paro makes purring noises, feels warm and furry and has a friendly, welcoming face, this leads many people to personalise it and think they are dealing with a sentient being.
In my opinion, this is a dangerous mistake. No new subjects are being created here. But the appearance is there. And that means that one has to examine to what extent the law has to be adapted here or whether perhaps certain legal policy advances have to be criticised as errors.
You said earlier that the existing law is often sufficient. Are there also gaps that are already obvious in your view?
Yes, there are also gaps. German law is very broad when it comes to liability, especially civil liability. It’s quite different with data protection. Data protection in Europe is regulated by the GDPR, which primarily covers personal data. This can be explained historically and is fundamentally correct. There is an understandable need that information about personal preferences or personal medical data, for example, cannot be disseminated at will. For Europeans, especially Germans who have experienced two totalitarian dictatorships, the “transparent human being” is a frightening idea. It makes sense to erect legal barriers against this.
But there is a new field: technical data. This is, for example, data that is generated in an industrial robot by means of sensors about wearing parts, which can be transmitted for remote maintenance. This data has an extraordinary economic value for the manufacturers of such machines. For example, it can mean advantages over the competition if one has information about the wear of a machine part relative to the service life or the outside temperature, because in this way the machines can be precisely improved.
But this data is not protected at the moment. As things stand today, there is no legal way to hold someone responsible who hacks the system and extracts the data, at least not if the data was not specially secured. Strictly speaking, one cannot even speak of an “illegal” extraction of data: because data are not property, they do not belong to anyone.
Property law regulates things, and data are not things. They are something new and there is no legal regulation for them. At a time when data is being called the new “oil of the 21st century”, this is a surprising finding.
You have been dealing with digitalisation from an ethical and legal perspective since the mid-1990s. Did you already suspect back then how much it would determine our lives and our everyday life?
I don’t think anyone foresaw the impact of the whole development, although many people, including myself, warned relatively early on that the tech giants might become too big and at some point they would no longer be able to be controlled. Even large corporations must not disregard the common good; at some point, the protection of property reaches its limits. Or to put it another way: “Property obliges”.
One topic that has strongly concerned me from the beginning is the liability of providers. This concerns, for example, the question of whether criminal liability attaches not only to the person who spreads radical right-wing propaganda or hate speech via the internet, but also to providers who allow such hate speech to run via their servers.
There was already an intensive discussion about this in the nineties. At that time, the legislators, especially the European legislators, rightly decided to strongly privilege the providers, because they did not want to overregulate this then new business field, the internet.
Privileges that made sense 20 or 25 years ago are no longer appropriate today, also because digitalisation has taken over so many areas.
If a provider, knowing that one of its users is engaged in criminal activities on the net, does not block that user, it should, in my opinion, be punished for aiding and abetting. That would give providers a very strong motive to take such persons off the net and to secure the net better. This applies, for example, to attacks on hospitals that are hacked via the internet, as well as to hate speech and right-wing propaganda.
So far, the legislator has not dared to abolish this old privilege. After all, a few years ago it passed the Network Enforcement Act, which gives those affected by hate speech the right to demand that entries be deleted. It provides for heavy fines if this deletion does not take place. This is a step in the right direction. The Network Enforcement Act has only recently been tightened up, rightly so in my opinion. I suspect that the way forward will be to also hold providers criminally responsible under certain conditions as accessories or accomplices. I think this is right.
So far, we have mainly talked about German and European law. Is it even possible to regulate a global phenomenon like digitalisation at national level?
In the whole area of digitalisation, Europe creates the legal standards. In the Anglo-Saxon world, this is called the “Brussels Effect”, half angrily, half enviously.
In America, new products are not regulated in advance; instead, product liability is very strongly developed, and there can be high claims for damages if something happens.
The European way is different. Europe regulates before new technologies come to market and sets framework conditions. Large manufacturers like Microsoft, which are often based in the USA but also want to sell in Europe, now often adapt to the European regulations nolens volens. Naturally, however, attempts are made to prevent overly strict European rules through targeted lobbying.
In the case of automated driving, for example, Germany was the first country in the world to enact a regulation for a certain category of vehicles in the summer of 2017. This was immediately discussed in Japan and China.
The world is looking very much to Europe, and Germany in particular, for the legal side of things. In the meantime, a new regulation for largely automated Level 4 vehicles is being prepared in the Federal Ministry of Transport. One can be sure that this advance will also receive worldwide attention.
Europe regulates before new technologies enter the market and sets framework conditions.
About the person
Professor Eric Hilgendorf has been following digitalisation from a legal ethics and legal perspective since the mid-1990s. The holder of the Chair of Criminal Law, Criminal Procedure Law, Legal Theory, Information Law and Legal Informatics at the University of Würzburg, where he heads the RobotLaw Research Unit, is a member of the bidt Board of Directors. Together with Professor Julian Nida-Rümelin, he heads the project “On the relationship between legal policy and ethics in digitalisation”.