Name of the participant: Lisa Jöckel
Description of the IT research project: Artificial intelligence (AI)-based components are increasingly used in many fields because they offer possibilities beyond those of traditional software. The focus here is on machine learning (ML), a subfield of AI. With ML methods, a relationship between the inputs and outputs of a component is derived from data: the component learns from data examples of the desired behaviour, and the learned behaviour is then checked using test data. AI is used for perception particularly in autonomous systems, which operate in open contexts and therefore need to perceive their environment in order to react appropriately to the situation. AI can also add value by supporting humans in highly complex tasks. In the industrial sector, for example, AI can support production planning, quality assurance, transport planning and monitoring, or the operation of machines. However, AI misbehaviour can lead to high financial costs or even endanger people. This is an obstacle to the use of AI and to the innovation opportunities associated with it.
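The following is a minimal sketch of the supervised ML principle described above: a model learns an input–output relationship from labelled examples, and its behaviour is afterwards checked on held-out test data. The dataset and model are illustrative placeholders, not the project's actual setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for data examples of the desired behaviour.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Split into data used for the learning process and data used for checking.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The check only covers the fragment of reality represented by the test data.
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```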
To create the acceptance needed for the use of AI, procedures for assurance and certification are being developed. An important aspect of this is testing the reliability of the AI-based components. The data-driven learning process of ML is a challenge here, since the intended functionality is known only for a fragment of reality. The quality of the test data is therefore crucial for a meaningful reliability check.
Within the project, a framework for the qualitative evaluation and improvement of test data will be developed. The results contribute to demonstrating the reliability of AI-based components and to their possible certification, which should enable broader use of AI, especially in safety-critical areas.
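As an illustration of what such an evaluation might look at (this is an assumed example, not the project's framework), the sketch below checks how well a test set covers predefined regions of the input space; the binning scheme and the minimum-samples threshold are hypothetical.

```python
from collections import Counter

import numpy as np


def coverage_report(test_inputs: np.ndarray, n_bins: int = 5, min_samples: int = 10):
    """Bin each input dimension and report regions with too few test samples."""
    report = {}
    for dim in range(test_inputs.shape[1]):
        values = test_inputs[:, dim]
        edges = np.linspace(values.min(), values.max(), n_bins + 1)
        bins = np.clip(np.digitize(values, edges) - 1, 0, n_bins - 1)
        counts = Counter(bins.tolist())
        under = [b for b in range(n_bins) if counts.get(b, 0) < min_samples]
        report[dim] = {"counts": dict(counts), "underrepresented_bins": under}
    return report


# Example: 200 synthetic test inputs with 3 features (placeholder data).
rng = np.random.default_rng(0)
test_inputs = rng.normal(size=(200, 3))
for dim, info in coverage_report(test_inputs).items():
    print(f"feature {dim}: under-covered bins {info['underrepresented_bins']}")
```

Under-covered regions would indicate where additional test data is needed before the reliability check can be considered meaningful for those parts of the input space.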
Software Campus partners: Fraunhofer IESE, TRUMPF
Implementation period: 01.03.2021 – 31.08.2022