Nearly Half of Enterprises Exclude Their Cybersecurity Teams from AI Implementation, says ISACA
According to an ISACA study, 45% of companies do not include their cybersecurity teams in the development and adoption of AI solutions, a gap that poses significant security risks.
A recent report by ISACA, a global association that promotes trust in technology, reveals that nearly half (45%) of companies do not involve their cybersecurity teams in the development, implementation and adoption of Artificial Intelligence (AI) solutions. This finding is part of ISACA’s ‘State of Cybersecurity 2024 Report’, which compiles feedback from more than 1,800 cybersecurity professionals and highlights growing concerns around the use of AI in security operations without proper oversight from security experts.
The ISACA study highlights that only 35% of cybersecurity professionals are involved in developing policies for AI implementation in their organisations. This lack of involvement in such a critical area poses significant risks, especially as AI applications in cybersecurity become more widespread. Among the most common uses of AI in security operations, the report cites automation of threat detection and response (28%), endpoint device security (27%) and automation of routine security tasks (24%).
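As a purely illustrative sketch, not something drawn from the ISACA report, the kind of routine security task respondents describe automating might look like the following: a short Python script that scans an authentication log for repeated failed logins and flags source IPs that exceed a threshold. The log path, line format and threshold are assumptions made for the example.

```python
# Illustrative sketch only: automating a routine security task by flagging
# IP addresses with repeated failed logins. The log path, line format and
# threshold are assumptions for this example, not ISACA guidance.
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"        # assumed location of the auth log
FAILED_LOGIN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 5                          # assumed number of failures before flagging

def flag_suspicious_ips(log_path: str = LOG_PATH, threshold: int = THRESHOLD) -> dict[str, int]:
    """Return source IPs whose failed-login count meets or exceeds the threshold."""
    failures: Counter[str] = Counter()
    with open(log_path, encoding="utf-8", errors="ignore") as log:
        for line in log:
            match = FAILED_LOGIN.search(line)
            if match:
                failures[match.group(1)] += 1
    return {ip: count for ip, count in failures.items() if count >= threshold}

if __name__ == "__main__":
    for ip, count in flag_suspicious_ips().items():
        print(f"Review needed: {ip} had {count} failed logins")
```

In practice, scripts like this feed alerts into a ticketing or SIEM workflow; the point of the survey finding is that such automation is increasingly delegated to AI tooling, with or without security-team oversight.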
AI and the challenge of cyber resilience
Amid growing demand for advanced cybersecurity solutions, ISACA warns that organisations must integrate security teams into all stages of the AI lifecycle, from development to deployment. ‘AI has great potential to ease the workload of cybersecurity professionals, especially in areas such as task automation and threat detection. However, security leaders cannot focus solely on the role of AI in the security operation; it is critical that the security function is involved at every stage of the process,’ said Jon Brandt, director of professional practices and innovation at ISACA.
In addition, ISACA is developing specialised resources and courses to help professionals adapt to the challenges posed by this technology. These include topics such as adaptive authentication in the age of deepfakes, integrating AI with quantum computing to improve security, and the importance of maintaining ethical AI policies. Among its latest courses, ISACA offers programmes on neural networks, large language models and deep learning, available on its online learning platform.
Recommendations for ethical and responsible AI
Another key point of the report is the growing need for ethical and responsible AI policies, especially in the context of the European Union. ISACA has published a report titled ‘Understanding the EU AI Act: Requirements and Next Steps’, which details requirements that will apply from August 2026. These will require auditing and traceability of AI systems, as well as designating an AI leader in each enterprise to oversee the AI tools in use and ensure compliance with privacy and cybersecurity regulations.
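To make the auditing and traceability idea concrete, below is a minimal sketch of how an organisation might record each AI system in an internal inventory. The field names, risk categories and structure are assumptions chosen for illustration; they are not taken from the EU AI Act or from the ISACA report.

```python
# Minimal sketch of an internal AI-system inventory record supporting audit
# and traceability. Field names and risk categories are illustrative
# assumptions, not the legal text of the EU AI Act.
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class AISystemRecord:
    name: str                      # internal name of the AI tool
    vendor: str                    # supplier, or "in-house"
    purpose: str                   # what the system is used for
    risk_category: str             # e.g. "minimal", "limited", "high"
    owner: str                     # designated AI leader / accountable person
    data_sources: list[str] = field(default_factory=list)
    last_review: date = field(default_factory=date.today)

    def to_json(self) -> str:
        """Serialise the record for an audit trail or compliance report."""
        record = asdict(self)
        record["last_review"] = self.last_review.isoformat()
        return json.dumps(record, indent=2)

if __name__ == "__main__":
    entry = AISystemRecord(
        name="Threat-triage assistant",
        vendor="in-house",
        purpose="Prioritise alerts in the security operations centre",
        risk_category="limited",
        owner="Head of Security Operations",
        data_sources=["SIEM alerts", "ticket history"],
    )
    print(entry.to_json())
```

Keeping such records versioned and reviewed is one simple way an appointed AI leader could demonstrate oversight of the AI tools in use.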
For ISACA, creating an ethical AI policy requires answering key questions such as ‘Who is impacted by the scope of this policy?’, ‘What are the acceptable terms of use?’ and ‘How will legal and compliance requirements be ensured?’. These considerations are critical for businesses to leverage AI without compromising security or digital trust.
AI education and training for cybersecurity
To accompany the AI-driven transformation in cybersecurity, ISACA is launching a new certification programme for Cybersecurity Operations Analysts. This certification, which will be available from January 2025, focuses on critical technical skills, such as assessing threats, identifying vulnerabilities and recommending countermeasures to prevent security incidents in an increasingly digitised environment.
In conclusion, the ISACA report underlines that the exclusion of cybersecurity teams from the implementation of AI represents a considerable risk for businesses. The association calls on organisations to actively integrate these teams and train their professionals to ensure the secure and ethical use of artificial intelligence. For more information, the full report is available on the ISACA website, along with additional resources on cybersecurity and technology.