Survey: Almost 80% of IT Leaders Saw Negative Company Outcomes Due to AI


A significant majority — nearly 80% — of IT leaders say their organizations have suffered negative consequences from employees using generative AI tools, according to a new report from data management firm Komprise.

The study, conducted in April by a third-party firm, polled 200 IT directors and executives from US-based enterprises with more than 1,000 employees. The findings underscore the urgency for IT departments to monitor shadow AI: the unsanctioned use of AI tools within the enterprise.

“Using GenAI is ridiculously easy,” said Krishna Subramanian, co-founder of Komprise, in an email to TechRepublic. “That means it is also ridiculously easy to put the company and its customers and employees at risk.”

What are the adverse outcomes of employee use of generative AI?

According to the survey:

  • 46% of IT leaders reported false or inaccurate AI-generated outputs.
  • 44% cited leaks of sensitive data into AI models.
  • Of those who experienced issues, 13% reported that the consequences directly affected their finances, customer trust, or brand reputation.

SEE: Threat actors can use easily accessible generative AI chatbots to exploit users.

In addition, 79% of IT leaders reported that their organizations have experienced negative outcomes, including inaccurate results and leaks of personally identifiable information (PII), after sending corporate data to AI tools.

As a result, IT leaders are concerned about what Komprise refers to as “unsanctioned, unmanaged AI.” Privacy and security top the list of concerns: 90% of respondents said they worry about shadow AI on these grounds, and 46% of that group described themselves as “extremely worried.”

To mitigate risks, 75% of IT leaders plan to adopt data management platforms, while 74% are investing in AI discovery and monitoring tools to track the use of generative AI across their networks.

How to prepare unstructured data for AI safely

A key component of using generative AI safely is making sure you know which data is exposed to the model.

When preparing large amounts of company data to be fed into AI, 73% of IT teams approach it by first classifying sensitive data, then using workflow automation to restrict its use by AI. Unstructured data management solutions can apply tags and keywords to files and then sort the data based on those labels.
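
The survey doesn't name a specific implementation, but the classify-then-restrict workflow can be illustrated with a minimal sketch in Python. The PII patterns, the corporate_docs directory, and the ai_allowed flag below are illustrative assumptions, not features of any particular product.

```python
import json
import re
from pathlib import Path

# Illustrative PII patterns only; a production classifier would use a far
# richer rule set (and likely a commercial scanning tool).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(path: Path) -> set[str]:
    """Return the set of PII tags found in a text file."""
    text = path.read_text(errors="ignore")
    return {tag for tag, pattern in PII_PATTERNS.items() if pattern.search(text)}

def build_manifest(root: str) -> dict:
    """Tag every .txt file under `root`. Downstream AI ingestion jobs can
    read this manifest and skip anything tagged as sensitive."""
    manifest = {}
    for path in Path(root).rglob("*.txt"):
        tags = classify(path)
        manifest[str(path)] = {
            "tags": sorted(tags),
            # Workflow automation hook: restrict AI use when any PII tag is present.
            "ai_allowed": not tags,
        }
    return manifest

if __name__ == "__main__":
    # "corporate_docs" is a hypothetical directory for the sketch.
    print(json.dumps(build_manifest("corporate_docs"), indent=2))
```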

Other common tactics for preparing unstructured data are:

  • Automated scanning and classification tools.
  • Storing data in vector databases for semantic search and retrieval-augmented generation (RAG), as sketched after this list.
  • Using other technology for automated AI data workflows and auditing.
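
The vector-database tactic can be sketched in miniature. In the toy example below, the hashed bag-of-words embed function stands in for a real embedding model, and ToyVectorIndex stands in for a real vector database; both names are assumptions for illustration. Only documents cleared for AI use (the ai_allowed flag from the manifest sketch above) are indexed.

```python
import numpy as np

DIM = 256  # dimensionality of the toy embedding space

def embed(text: str) -> np.ndarray:
    """Toy hashed bag-of-words 'embedding'; real pipelines call an
    embedding model here instead."""
    vec = np.zeros(DIM)
    for token in text.lower().split():
        vec[hash(token) % DIM] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

class ToyVectorIndex:
    """In-memory stand-in for a vector database."""

    def __init__(self) -> None:
        self.docs: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, doc: str, ai_allowed: bool = True) -> None:
        # Index only documents already cleared for AI use (see the
        # classification manifest sketched earlier).
        if ai_allowed:
            self.docs.append(doc)
            self.vectors.append(embed(doc))

    def search(self, query: str, k: int = 2) -> list[str]:
        """Return the k most similar documents by cosine similarity,
        e.g. to retrieve context for a RAG prompt."""
        scores = np.array(self.vectors) @ embed(query)
        return [self.docs[i] for i in np.argsort(scores)[::-1][:k]]

index = ToyVectorIndex()
index.add("Quarterly revenue grew 12% on strong cloud storage demand.")
index.add("Payroll records with employee SSNs.", ai_allowed=False)  # never indexed
index.add("The data management platform tiers cold files to object storage.")
print(index.search("how is revenue growing?"))
```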

What else can IT teams do to reduce the risk of shadow AI?

Some IT leaders may want to restrict the use of AI tools within the organization outright, while others may prefer to limit which data sets can be used for AI inference or training.

Roughly three-quarters of organizations (74-75%) turn to data management and AI discovery or monitoring tools to gain insight into which AI tools are in use across the company. Another 55-56% pair access management and data loss prevention (DLP) tools with employee training. Data management tools are a popular choice for auditing and governing AI workflows, which helps reduce data leakage.
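
The survey doesn't describe a specific product workflow, but a data loss prevention check can be thought of as a gate that inspects outbound prompts before they reach a public AI application. The outbound_gate function and the rules in BLOCKLIST below are hypothetical stand-ins for a real DLP policy engine.

```python
import re

# Illustrative DLP rules; real data loss prevention products ship much more
# sophisticated detectors and policy engines.
BLOCKLIST = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "Social Security number"),
    (re.compile(r"\b(?:\d[ -]*?){13,16}\b"), "payment card number"),
    (re.compile(r"(?i)\bconfidential\b"), "confidentiality marker"),
]

def outbound_gate(prompt: str) -> str:
    """Hypothetical pre-send check: refuse to forward a prompt to a public
    AI application if it matches any DLP rule."""
    for pattern, label in BLOCKLIST:
        if pattern.search(prompt):
            raise ValueError(f"Blocked: prompt appears to contain a {label}.")
    return prompt

try:
    outbound_gate("Summarize this memo: CONFIDENTIAL merger terms attached.")
except ValueError as err:
    print(err)  # Blocked: prompt appears to contain a confidentiality marker.
```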

“IT really needs to lead the charge on education, training and policies,” Subramanian said. “They must go hand-in-hand. Employees need to understand the risks so that they can use AI safely and not expose sensitive and proprietary corporate data to public AI applications.”

About 24% of respondents stated they have a team evaluating AI solutions but haven’t yet implemented any guidelines or controls. Only 1% admitted to taking no action to address shadow AI risks.

