LEAD: What can AI do today?
Helmut Scherer: When people hear AI, many think of computer systems that behave like human intelligence. But in most cases we are not there yet. What is used today is machine learning: a toolbox that learns rules from data to implement so-called AI systems. The machine can answer relatively narrow questions, in industry for example in the field of machine maintenance.
Here you can determine which parts are likely to need replacement next. Such technology is also used in automation: a self-driving car approaching a turn, for example, can predict in which direction the steering wheel must be moved. There is often a gross misconception that answering such narrow questions on the basis of large amounts of data could replace human intelligence.
LEAD: So the machines only support humans?
Helmut Scherer: Absolutely. People often make decisions for emotional reasons. Precisely here it makes sense to combine one's own skills with AI and to connect knowledge across different areas. It is by no means the case that human intelligence is being replaced by AI.
LEAD: In which fields is AI mainly used today?
Helmut Scherer: For me, there are four areas in which AI is used: prediction, personalization, object recognition, and capturing structures. An example of prediction is the maintenance example from industry mentioned above. With regard to personalization, for example, we have developed a tool for a large media group that matches article recommendations to personal interests.
When it comes to recognizing people, we have successfully implemented a faster check-in process for business travelers at Finnair using facial recognition. The fourth area is capturing structures. This is used particularly in knowledge-intensive environments, for example in the customer service of machine manufacturers. There, the AI identifies patterns in information from emails and databases and helps identify the most common customer questions, so that employees can find solutions faster in customer support.
LEAD: How far has AI made its way into everyday life, and how do you think that will evolve over the next few years?
Scherer: Through various devices and digital assistants such as Alexa or Siri, AI is already an integral part of everyday life for many people. They are simply unaware of it. I think AI will be used much more in the future than it already is. It is important that AI is not accessible to only a few users. A prerequisite for handling the technology properly is, of course, the right awareness of it.
LEAD: What should that look like?
Scherer: It is similar to the spread of media. Twenty years ago, for example, producing a video was extremely costly and feasible for only a very few, not to mention distributing it. Today almost anyone can create and distribute a video. I think you have to see AI the same way. You also need to help people who are not technology experts understand the opportunities and risks associated with AI.
LEAD: What does that mean in concrete terms?
Scherer: Teams that develop machine learning or AI solutions in companies should be as diverse as possible in order to achieve a high level of transparency and a diversity of perspectives. Users, decision makers, technologists, and design experts should all be brought together to create the best possible shared understanding. For example, you should communicate in a language that everyone understands.
With AI you always have a set of data, an algorithm that produces a certain output, and a defined action derived from that output. There you have to work through questions like: What do we want to achieve? What does the algorithm actually optimize? Is the data good enough? Are we asking the questions in the right way, or is there a bias? What do we use the output for? What are the consequences, and can we take responsibility for them?
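The data, algorithm, output, and action chain Scherer describes can be sketched schematically. This is a minimal illustration only; all names and the threshold logic are hypothetical and not taken from the interview:

```python
# Illustrative sketch of the chain described above:
# data -> algorithm -> output -> defined action.
# The names and the 0.5 threshold are invented for this example.

def algorithm(data):
    """Stand-in for a trained model: flags records scoring above a threshold."""
    return [record for record in data if record["score"] > 0.5]

def derive_action(output):
    """Maps the algorithm's output to a defined action."""
    return [f"review case {record['id']}" for record in output]

# The interview's checklist questions map onto stages of this chain:
# - Is the data good enough?            -> inspect `data` before use
# - What does the algorithm optimize?   -> inspect `algorithm` and its threshold
# - What do we use the output for, and
#   what are the consequences?          -> inspect `derive_action` before acting
data = [{"id": 1, "score": 0.8}, {"id": 2, "score": 0.3}]
actions = derive_action(algorithm(data))
print(actions)
```

The point of the sketch is that each stage is a separate, inspectable step, so each of the questions above has a concrete place to be asked.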
“One can answer narrow questions with machine-learning technology, but the actual decision is still made by humans.”
LEAD: And that’s where ethical principles and questions come into play?
Scherer: At Futurice, we have developed four guidelines for the ethical handling of AI, which companies are increasingly adopting. First, you should be aware of what purpose you are pursuing with AI, and how it affects others. Second, transparency must always be prioritized; it is important that users trust the technology. Third, we need to be inclusive and understand who is influenced by the system. And fourth, personal data must be stored securely to ensure a high level of privacy.
LEAD: What are the risks of using AI?
Scherer: There are risks, of course, as with any technology. Examples would be bias and also fairness. Machine-learning algorithms are already used in recruitment processes, and there have been cases in which, for example, certain groups of people were favored. That should of course be avoided. If you imagine a police operation, the technology used should not identify the wrong people or draw false conclusions. An AI system should be secure and reliable, and should not accumulate an unlimited amount of personal data.
LEAD: How realistic do you consider the fear, voiced by people like Elon Musk, that AI could someday take control?
Scherer: AI has been an absolute hype topic for some years now, and it still is. In the here and now, however, the purposes and scope of AI are still relatively limited. The mathematical methods that exist today are not much better than those of a few years ago. However, there are now better ways to evaluate and use data. What we currently see is: you can answer narrowly defined questions with machine-learning technology, but the actual decision is still made by humans.
LEAD: But couldn't that be different in ten years?
Scherer: If you mean the Skynet scenario, I think that is unlikely at this time. In the next ten years, however, we will see many developments, especially in automation technology and predictive analytics. All trends indicate that the greatest potential lies here.
LEAD: And beyond?
Scherer: Now I would need a crystal ball ... But you can see in many areas that people are well aware of their ethical responsibility in dealing with technology. With the General Data Protection Regulation, we are currently seeing that people handle their data more responsibly and want to know what happens to it. This will spread to other areas. Political institutions and companies also see themselves as bearing ethical responsibility. There are several international initiatives dealing with the ethics of AI, and one can only welcome that.
LEAD: Nonetheless, the example of China shows that governments can use AI to monitor citizens completely and derive sanctions from it. So you can imagine how far that can go.
Scherer: Absolutely. That’s why it’s even more important for companies to think carefully about which ethical principles to follow.
LEAD: Still, we are pretty far behind on the legal foundations for dealing with AI, aren't we?
Scherer: I think politics is on the right track. The EU Commission is already taking the first steps and is currently developing recommendations for ethical guidelines for dealing with AI in Europe. Concrete legislative proposals should follow from 2020.