
Governance considerations and pitfalls in implementing generative artificial intelligence

The adoption of generative artificial intelligence within enterprises has created a new realm of governance and compliance risk and highlighted the need to review and strengthen traditional information management frameworks. Many large organizations are still establishing robust information management frameworks for their current environment; now they must also answer questions about their preparedness to manage the impact of Copilot1 and similar generative AI tools. These questions include whether organizations can ensure appropriate access, use and management within their IT infrastructure, and whether new artifacts are being created that could pose unforeseen regulatory risk, including new forms of information that may need to be disclosed to regulatory authorities under existing obligations.

Asking the necessary questions can immediately reveal vulnerabilities, but it is essential to keep asking and to test often. Governance-related questions should be built into the foundation of generative AI test plans, proof-of-concept evaluations and pilot initiatives. The answers provide important insights: identifying business use cases and where the technology applies, understanding risks so they can be appropriately mitigated and managed, and developing resilient documentation while adapting information management and AI management processes. By leveraging these insights, organizations can integrate generative AI tools into their data governance processes safely and effectively.

Supervision

The use of AI tools should be monitored and managed across the organization to detect misuse or violations of legal obligations. These initiatives are more successful with cooperation from IT, compliance, legal and organizational users. These stakeholders are tasked with ensuring compliance and preventing employees from using inappropriate prompts or accessing sensitive or restricted company information when working with AI tools.

Policies and workflows monitor for inappropriate or non-compliant activity, establish permissions governing who can access certain categories of information, and support an organization's data retention and deletion policies. These controls are critical to managing a range of legal, regulatory and organizational risks, yet even when they are in place, they do not always apply cleanly to new AI deployments, including Copilot within Microsoft 365 environments. In some cases, policies will need to be created or reconfigured to cover generative AI interactions and activities.
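As an illustration, the Python sketch below polls the Microsoft 365 unified audit log for Copilot-related activity through the Office 365 Management Activity API. The endpoint paths are from the published API, but the tenant ID, token acquisition and the exact operation names recorded for Copilot events are assumptions to verify against current Microsoft documentation; the sketch also assumes an Audit.General subscription has already been started for the tenant.

```python
"""Illustrative sketch: poll the Microsoft 365 unified audit log for
Copilot-related events via the Office 365 Management Activity API.
Assumes an app registration with ActivityFeed.Read permission and an
existing Audit.General subscription; the "Copilot" operation-name
filter is an assumption to verify against the current audit schema."""
import requests

TENANT_ID = "<your-tenant-id>"          # placeholder
ACCESS_TOKEN = "<oauth2-bearer-token>"  # e.g., via client-credentials flow
BASE = f"https://manage.office.com/api/v1.0/{TENANT_ID}/activity/feed"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}


def list_content_blobs(start: str, end: str) -> list:
    """Return Audit.General content descriptors for a UTC time window."""
    resp = requests.get(
        f"{BASE}/subscriptions/content",
        params={"contentType": "Audit.General",
                "startTime": start, "endTime": end},
        headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()


def copilot_events(start: str, end: str) -> list:
    """Download each content blob and keep records whose Operation
    field mentions Copilot (assumed naming; verify per tenant)."""
    events = []
    for blob in list_content_blobs(start, end):
        records = requests.get(blob["contentUri"],
                               headers=HEADERS, timeout=30).json()
        events.extend(r for r in records
                      if "Copilot" in r.get("Operation", ""))
    return events


if __name__ == "__main__":
    for e in copilot_events("2024-05-01T00:00:00Z", "2024-05-01T23:59:59Z"):
        print(e.get("CreationTime"), e.get("UserId"), e.get("Operation"))
```

A compliance team might run such a job on a schedule and route flagged events into an existing case management or alerting workflow, so that AI activity is reviewed alongside other monitored conduct.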

Access controls

Access control within Microsoft 365 is not a new concern; information management professionals have been advocating for well-managed access rights within SharePoint and other parts of the Microsoft 365 environment for years.2 It is especially relevant in Copilot implementations, however, and if left unchecked it can pose significant risks on many fronts.

With Copilot, anything a user has access to can surface as part of an answer to a question or prompt. Without Copilot, an over-privileged user who has access to documents they should not would discover those documents only by actively searching for them. Excessive permissions and failure to restrict access to certain materials can therefore expose information to many more employees than intended. To manage this, organizations must be rigorous in defining controls and must thoroughly understand the range of materials Copilot users can access at different permission levels.
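As a hedged sketch of such a permissions review, the code below enumerates items in a SharePoint or OneDrive drive through Microsoft Graph and flags any document shared through an organization-wide or anonymous link, since those are exactly the items Copilot could surface to unintended audiences. The drive ID and token are placeholders, and pagination and throttling handling are omitted for brevity.

```python
"""Illustrative sketch: flag documents in a SharePoint/OneDrive drive
whose sharing links would let Copilot surface them organization-wide.
Microsoft Graph v1.0 endpoints; DRIVE_ID and the token are placeholders,
and pagination/throttling handling is omitted for brevity."""
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
DRIVE_ID = "<drive-id>"                       # placeholder
HEADERS = {"Authorization": "Bearer <access-token>"}
BROAD_SCOPES = {"organization", "anonymous"}  # sharing-link scopes to flag


def iter_items():
    """Yield items in the drive root (first page only, for brevity)."""
    url = f"{GRAPH}/drives/{DRIVE_ID}/root/children"
    resp = requests.get(url, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    yield from resp.json().get("value", [])


def overshared(item: dict) -> bool:
    """True if any permission on the item is a broad sharing link."""
    url = f"{GRAPH}/drives/{DRIVE_ID}/items/{item['id']}/permissions"
    perms = requests.get(url, headers=HEADERS,
                         timeout=30).json().get("value", [])
    return any(p.get("link", {}).get("scope") in BROAD_SCOPES for p in perms)


if __name__ == "__main__":
    for item in iter_items():
        if "file" in item and overshared(item):
            print(f"Review sharing on: {item['name']} ({item.get('webUrl')})")
```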

Specifically, when Copilot is enabled for a user, every application within Microsoft 365 that has a Copilot element activates it. Administrators and users cannot select which applications may use Copilot and which may not; a user cannot disable Copilot for a specific product. There are, however, options to limit certain functionality and features through administrative settings, such as a Teams administrator updating meeting transcription settings so that Copilot cannot be used during Teams meetings.

Therefore, every application in the tenant must be monitored for access controls and evaluated for different types of information risks. For example, in Copilot for Microsoft 365 chat, Copilot works across applications to respond to user questions about upcoming meetings, related emails, and items it thinks require follow-up. Users can point Copilot to Word documents or PowerPoint files to answer questions or generate content, prompting the system to scan accessible files in SharePoint, OneDrive, and Outlook.
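Because Copilot activates wherever it is licensed rather than per application, a practical first step in scoping this monitoring is simply knowing who has it. The sketch below queries Microsoft Graph license details for each user; the substring match on the service plan name is an assumption to verify against the tenant's actual plan names, and pagination is again omitted.

```python
"""Illustrative sketch: inventory which users hold a Copilot service plan,
as a starting point for scoping access-control reviews. Graph v1.0
endpoints; the "COPILOT" name match is an assumption to verify against
the tenant's actual service plan names, and pagination is omitted."""
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <access-token>"}  # placeholder


def copilot_users() -> list:
    """Return UPNs of users whose license details include a Copilot plan."""
    users = requests.get(f"{GRAPH}/users?$select=id,userPrincipalName",
                         headers=HEADERS, timeout=30).json().get("value", [])
    flagged = []
    for u in users:
        details = requests.get(f"{GRAPH}/users/{u['id']}/licenseDetails",
                               headers=HEADERS, timeout=30).json().get("value", [])
        plans = (p.get("servicePlanName", "")
                 for d in details for p in d.get("servicePlans", []))
        if any("COPILOT" in name.upper() for name in plans):
            flagged.append(u["userPrincipalName"])
    return flagged


if __name__ == "__main__":
    for upn in copilot_users():
        print("Copilot-enabled:", upn)
```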

Important considerations

Given the constant cycle of change within the Microsoft 365 environment, frequent auditing of these applications and controls is essential to maintaining compliance with governance rules over time. In addition to regularly monitoring permissions and consents, organizations should take, and document, the following steps to strengthen governance when implementing new AI:

Proof-of-concept evaluation—Before a generative AI tool goes live, legal and IT teams should work closely together to conduct a limited pilot with a small group of test users. This will help reveal risk and governance gaps that may be unique to the organization before mass rollout.

AI governance readiness assessment—This step involves reviewing existing access control management for all systems within the environment that Copilot (or another generative AI tool) may reach. Encouragingly, audit testing to date indicates that Copilot respects established access controls and accurately accesses only documents and data that an individual has permission to view. A rigorous review of permissions can therefore pay off in mitigating the risk of access control errors for Copilot users.

Establish an AI committee—A team of active stakeholders is essential to establish policies, advance them in an informed manner, and keep them current as features and functionality within the Microsoft 365 environment change. AI committees cannot be vanity committees. They must consist of people who understand the legal, regulatory, technical and organizational needs, and how these can be affected by the use of AI.

Labeling policy—Defining a labeling system for documents and information categories that require varying levels of confidentiality or protection is an effective way to support governance in a Copilot environment. Labels help ensure that sensitive materials are excluded from AI-generated output and are not accidentally shared outside the groups permitted to view them.

Continuous evaluation—Cloud systems and AI technology are developing rapidly. Functionality and controls are constantly changing, so organizations need an AI governance program built for adaptability. Part of maintaining flexibility is understanding that even after the initial assessment of strengths and weaknesses in access controls and other aspects of governance, and even after proof of concept is complete, the program cannot go on autopilot. System owners must remain vigilant and continually retest to confirm that controls hold up over time and that nothing in the system introduces new or unexpected risks (see the sketch following this list).

Ongoing training—Developing and implementing an engaging training plan is the linchpin of successful implementation and governance. All employees must understand their organization's AI management policies, practices and procedures, and everyone needs to recognize appropriate use of new AI tools. Training will also help convey unique organizational and departmental use cases discovered through pilots and ongoing evaluation, ensuring employees responsibly maximize the value of Copilot.
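As one concrete form of the continuous evaluation described above, access control expectations can be framed as a repeatable regression test: given a test account and a list of documents it must not be able to reach, probe each one and report violations. In the sketch below, can_access is a hypothetical stand-in for whatever probe fits the environment, such as a Microsoft Graph request under the test user's delegated token or a scripted Copilot prompt in a pilot tenant.

```python
"""Illustrative sketch: a repeatable access-control regression test.
EXPECTATIONS pairs a test account with documents it must or must not
reach; can_access() is a hypothetical probe to implement per environment
(e.g., a Graph request under the test user's delegated token)."""

# (test user, document URL, should the user have access?)
EXPECTATIONS = [
    ("test.analyst@contoso.example",
     "https://contoso.example/hr/salaries.xlsx", False),
    ("test.analyst@contoso.example",
     "https://contoso.example/team/plan.docx", True),
]


def can_access(user: str, doc_url: str) -> bool:
    """Hypothetical probe: attempt to read doc_url as `user` and report
    success. Implement with a delegated-permission API call in practice."""
    raise NotImplementedError


def run_regression() -> list:
    """Return human-readable violations; an empty list means controls held."""
    violations = []
    for user, doc, expected in EXPECTATIONS:
        try:
            actual = can_access(user, doc)
        except NotImplementedError:
            continue  # probe not wired up yet
        if actual != expected:
            verb = "can reach restricted" if actual else "lost access to permitted"
            violations.append(f"{user} {verb} document: {doc}")
    return violations


if __name__ == "__main__":
    for v in run_regression():
        print("VIOLATION:", v)
```

Wiring such a test into a scheduled job turns the "no autopilot" principle into something auditable: each run produces evidence that controls were re-verified on a known date.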

Conclusion

When Copilot first became available, many organizations felt the excitement and pressure of being early adopters. That push to adopt quickly will continue as other tools and new AI features come to market. Innovation is important, but it should not come at the expense of effective risk management. Organizations, especially those in highly regulated industries, must take the time to test their use cases and give IT the opportunity to coordinate with other stakeholders. This will only grow in importance as regulators begin to scrutinize how organizations use AI and as AI spreads across enterprise systems containing sensitive and confidential information. Organizations should strive for a middle-ground approach: embracing AI while establishing controls and verifying that the tools work properly.

Endnotes

1 Microsoft Copilot
2 One Identity, "Identities and security in 2022"

Tori Anderson

Is a director at FTI Technology with almost 10 years of experience in e-discovery, information management and governance. Anderson holds a law degree from the University of Miami (Florida, USA) and is licensed to practice law in Florida and Washington, DC, USA.

Tracy Bordignon

Is a senior director at FTI Technology with more than a decade of experience in information management and privacy, and in helping organizations manage legal risk. Bordignon holds a law degree from Southwestern Law School (California, USA) and is licensed to practice law in Florida, USA.