TÜV test for AI applications – com! professional


TÜV Association (VdTÜV)

The TÜV companies have founded a laboratory for artificial intelligence (AI) to support the development of standards for testing safety-critical AI applications.

Dr. Dirk Stenkamp, President of the TÜV Association (VdTÜV), said: “The TÜV AI Lab will make an important contribution to making artificial intelligence safer and enabling its use in safety-critical areas.” These include, for example, automated vehicles, medical assistance systems and mobile robots. AI systems should also meet certain requirements when fundamental rights such as privacy or equal treatment are at stake. “By developing suitable test procedures, we support the planned regulation of artificial intelligence and provide practical examples,” said Stenkamp. The EU Commission has set out proposals for a European legal framework for AI systems in its “White Paper on Artificial Intelligence”, under which safety criteria for “high-risk AI applications” are to be defined. Support for risk-based regulation is widespread among businesses: according to a representative TÜV study, 87 percent of German companies with 50 or more employees believe that AI applications should be regulated according to their risk.

In the TÜV AI Lab, AI experts from the TÜV organizations will work on practical test scenarios. For example, the AI inspectors could determine how reliably automated vehicles with AI systems recognize people, traffic signs or certain obstacles and react to them. Performance plays an important role here, as do cybersecurity and the robustness of the AI. A corresponding test is intended to become a prerequisite for the approval of new vehicles. “Artificial intelligence is complex and can behave dynamically,” said Dr. Dirk Schlesinger, head of the TÜV AI Lab. “So we cannot simply transfer existing test approaches; we have to develop new test methods.” The data used to train the algorithms is also taken into account. For example, the training data of an AI system for personnel selection can disadvantage certain groups of people if the data is not balanced. Schlesinger: “The TÜV AI Lab will develop criteria for assessing the suitability of training data for specific AI applications.”

Another goal of the TÜV AI Lab is to develop approaches for classifying AI applications into risk classes. “Not all AI systems have to meet the same requirements,” said Schlesinger. AI-supported systems should therefore be regulated according to their risk potential. The requirements could range from no regulation at all, through transparency obligations and pre-market approval procedures, to a ban on particularly dangerous or ethically questionable AI applications. The EU Commission has proposed two risk classes (low and high risk), while the German Federal Government’s Data Ethics Commission advocates five levels. Schlesinger: “The question is how many risk classes actually make sense and by which criteria AI systems can be assigned to the individual classes in practice.” The TÜV AI Lab, which starts work in the first quarter of 2020, will also address this question.

Further information such as the position paper “Security of AI-Based Applications” and the study “Artificial Intelligence in Companies: Using Opportunities – Countering Risks” are available at www.vdtuev.de/digitalisierung/kuenstliche-intelligenz.

Source: com! professional by www.com-magazin.de.
