Devo, a cloud-native data analytics and security company, has presented its latest study on the presence of artificial intelligence in companies' internal security systems. According to the study, 96% of those surveyed expressed discontent, which leads them to resort to unauthorized tools in 80% of the cases analyzed.
When companies hire a team of cybersecurity experts, they seek to establish a clearly defined way of working and to free themselves of security concerns. However, if they find that members of the Security Operations Center (SOC) are making decisions on their own and acquiring unauthorized AI tools, they often respond with restrictive measures.
The survey, conducted in collaboration with Wakefield Research, concludes that 96% say they know at least one colleague who is dissatisfied with their cybersecurity systems. Nevertheless, more than three out of four (78% of the total) estimate that their company would put an end to such unauthorized AI tools, and that their use could even lead to dismissal.
Regarding the consequences of using unauthorized AI, 41% say their organization would ask them to stop using it immediately but would evaluate the tool in the future, while 19% believe no action would be taken at all.
Why are the experts dissatisfied?
Fundamentally, and as previously mentioned, it comes down to dissatisfaction with the levels of security automation applied by the company. Digging deeper, there are technological causes, such as the poor scalability and flexibility of the available solutions (cited by 42%), and economic ones, namely high implementation and maintenance costs (39%). Added to this is an internal problem: 34% point to a lack of in-house knowledge and resources among the staff themselves.
The critical internal situation in companies' SOCs could be resolved by listening to the needs and improvements proposed by the experts: 33% are dissatisfied with the level of security automation adopted, while 28% consider their companies inflexible when it comes to granting them the autonomy to select the best tools for the job.
Unauthorized AI applications
The availability of rogue AI tools leads enterprise security experts to use them to improve the services they already provide: 47% of those surveyed say these tools offer a better interface, 46% cite more advanced or specialized capabilities, and 44% point to more efficient work.
With increased automation, the vast majority believe these tools would help fill staff shortages in tasks such as incident analysis, analysis of the application landscape and data sources, and threat detection and response. Respondents also highlight the importance of AI in SOC automation for protecting against cyber threats and easing staff training.
Without a doubt, AI complements other automated security technologies already in use in the security operations center, such as SOAR (used in 53% of cases), cloud SIEM solutions (52%) and AIOps (51%). These have also been complemented with machine learning analytics (48%) and automated threat detection and response (45%).
Economic impact
Another factor that drives enterprise security experts to adopt rogue AI solutions in the SOC is their positive impact on the company. In fact, two out of three respondents (65%) say these tools will bring financial gains, materializing as increased revenue (cited by 39%) and reduced hiring or training costs (37%).
It is an unstoppable trend: if enterprise security experts are given the freedom to work, rogue AI will end up playing a priority role in the SOC, improving profitability and prospects in the short to medium term.