Shadow IT: The Risks of Generative AI in the Workplace

The linked article from Cyberhaven covers the rapid adoption of generative AI in the workplace since the launch of ChatGPT in November 2022. This adoption mirrors past technological shifts, such as cloud computing, bringing significant productivity benefits alongside new risks.

I have provided the highlights below; for more details, click here.

Key Points:

  1. Shadow AI: Workers are using AI tools without formal IT approval, a practice termed “shadow AI.” This unsanctioned use poses risks to data confidentiality and integrity, since AI tools can ingest and potentially leak sensitive information.
  2. Growth in AI Usage: The use of AI tools in workplaces has surged, with a 485% increase in corporate data being fed into AI tools from March 2023 to March 2024. ChatGPT remains the most widely used AI tool, with OpenAI, Google, and Microsoft dominating the market.
  3. Industry-Specific Adoption: The technology sector leads in AI adoption, with significant use in media, entertainment, finance, and pharmaceuticals. Conversely, retail and manufacturing sectors have lower adoption rates.
  4. Sensitive Data Exposure: A growing amount of sensitive data, such as customer support information, source code, R&D material, and HR records, is being entered into AI tools. This trend raises concerns about data security and the potential for information leakage.
  5. AI-Generated Content: AI-generated content is being integrated into various organizational functions, including professional networking posts, R&D materials, source code, and graphic design. This raises concerns about accuracy, security vulnerabilities, and legal issues such as copyright infringement.
  6. Shadow AI Risks: A significant portion of sensitive data is going to AI tools through non-corporate accounts, which lack proper security controls. For instance, 82.8% of legal documents and over 50% of source code and R&D materials are being handled through these risky channels.

The article concludes by highlighting the need for organizations to manage and secure AI usage to mitigate risks while capitalizing on its benefits.