Not interested in AI?
Lünendonk study: concerns about shadow AI and regulations are delaying the introduction of GenAI tools.
Only 3 percent of companies in the DACH region describe themselves as advanced in the introduction of generative AI, according to the latest Lünendonk study. Despite the high potential and numerous fields of application, skepticism towards the new technology is high among users and decision-makers alike. Uncertainty and the fear of shadow AI are holding back adoption. At the same time, every second company hopes that GenAI will help it with its digital transformation. For retail companies, applications in e-commerce and marketing are particularly relevant. The IT consultancy KPS sees active change management and structured implementation as the key to leveraging the potential of GenAI.
Use cases often still a dream of the future
Almost every second CIO considers generative AI to be relevant. Respondents see the greatest potential in conceptual work (85%), data analysis and forecasting (80%), digital services (71%) and chatbots (68%). However, the use cases are often still dreams of the future. Half of the companies surveyed are still in the early stages of identifying use cases. “Companies that actively work with AI will leave companies that do not use AI behind. At the same time, there are many challenges associated with implementation, from shadow AI to compliance issues. This tension between risks and regulations needs to be reduced,” says Paul Anderie from KPS.
The risk of shadow AI: unstructured introduction and unlimited access
The introduction of AI tools is rarely controlled centrally, whether by management (28%) or by the CIO and IT (19%); 15% see the individual departments as responsible. In only 7% of companies does responsibility lie with a dedicated Chief Data or Chief Digital Officer. In a third of the companies surveyed, the task falls to individual employees.
“In the DACH region, generative AI is less strategically conceived and more ‘simply done’. This empowers the specialist departments, but also raises complex questions about access rights,” says Paul Anderie. The study also reveals clear differences in internal rules: 35% of companies grant access to GenAI tools only to selected areas and functions, while in a third of companies every employee has unrestricted access.
AI skepticism prevails: lack of trust as a barrier
In 59 percent of companies, employees have little or very little trust in the results generated by GenAI tools, while only 7 percent trust them completely. At the same time, more than half of those surveyed currently observe only minor productivity gains from the technology; its benefits are often not yet clear to users.
Above all, managers have legal concerns about the consequences of incorrect AI-generated results (71%) and fear liability risks around decisions made with AI (70%). Inadequate data governance (54%) and internal compliance requirements (43%) are further worries. Overall, 13 percent of all companies do not permit the use of generative AI at all; the main reason is compliance and regulation (58 percent).
EU AI Act often unknown
Paul Anderie: “Many of our customers report a lack of knowledge about which regulatory provisions apply or how data protection issues should be handled. Legislation such as the EU AI Act offers the opportunity to establish clear rules for the use of AI technologies in the EU and to create security and trust for companies. Guidelines are also important within companies in order to strengthen employee confidence in the quality of results and to avoid problems such as data governance violations. However, in some sectors we are still at the very beginning: a full 40 percent of the retail companies surveyed are not aware of the EU AI Act, for example, and only 8 percent are working on implementing it.”
About the study
For the study, 150 IT and business managers from companies and public authorities in German-speaking countries were surveyed.