AI Accountability Is Already PM Work:
Why Project Managers Should Lead the Guardrails Conversation Now
ADVISORY ARTICLE
By Ashley Essick, MBA, PMP
United States
Abstract
AI adoption on project teams is outpacing formal governance, and the gap between what is approved and what is actually being used is widening. Shadow AI use, uninformed reliance on unvalidated outputs, and unclear ownership of AI-generated work are quiet risks already present on most teams. Project managers are uniquely positioned to address this as the function that has always owned work visibility, decision accountability, and output ownership. This article makes the case that AI governance is already PM work and offers a practical four-part framework: visibility, guardrails, checks, and accountability. Drawing from experience in highly regulated global clinical operations, the author examines how poor AI use patterns surface, what they cost teams when left unaddressed, and why PMs who begin these conversations now will be the ones leading enterprise AI governance as it matures.
Introduction
AI is changing how teams work, but ownership of the work is the one thing that should never change.
On most project teams, AI is already drafting emails, summarizing meetings, cleaning up trackers, building slide decks, and supporting critical-thinking tasks that drive work to the next step. These uses typically surface first as low-risk administrative tasks because they are the easiest to test. As team members grow more comfortable, however, AI use quickly moves past low-risk administrative functions and into communication, analysis, planning, and decision support.
The quiet challenge is not the use of AI itself, but teams adopting AI without clear boundaries, clear review, or any real discussion of where it helps versus where it creates risk. Project managers are already positioned to address this, not because we should police tools, but because this kind of oversight is already PM work. As PMs, we make work visible, clarify ownership, and ensure accountability for the final output.
What Shadow AI Actually Means
The term shadow AI can seem ambiguous and amorphous, and therefore unmanageable, to many PMs. Put simply, shadow AI is AI used outside of approved workflows. Approved workflows are normally visible and carry established governance; unvetted use outside them can expose the workflow and the team to unforeseen risk.
Shadow AI can surface as someone using an outside tool the team or company has not yet approved, or as someone using an approved tool without clarity on how often it is used, for what purpose, and what level of output review occurs. In both cases, the issue is not the tool itself, but the lack of a clear line of sight into how and where the tool is being applied.
How to cite this article: Essick, A. (2026). AI Accountability Is Already PM Work: Why Project Managers Should Lead the Guardrails Conversation Now, PM World Journal, Vol. XV, Issue V, May. Available online at https://pmworldjournal.com/wp-content/uploads/2026/05/pmwj164-May2026-Essick-AI-Accountability-Is-Already-PM-Work.pdf
About the Author

Ashley Essick
USA
Ashley Essick, MBA, PMP is a Global Project Manager at ICON PLC, a leading global contract research organization, where she leads complex oncology and plasma-derived therapy programs spanning 23 countries across all major global regions. She holds an MBA and the Project Management Professional (PMP) certification, and is currently completing MIT Professional Education’s Applied Agentic AI for Organizational Transformation program alongside active pursuit of AI governance certification through the International Association of Privacy Professionals (IAPP).
With 13 years of progressive healthcare operations experience and 7 years leading global Phase I through III clinical trials, Ashley has contributed operationally to three FDA-approved therapies: Inlexzo (TAR-200, first-in-class bladder cancer), Zepbound (tirzepatide, obesity), and the Wegovy cardiovascular indication (semaglutide). Her programs have engaged cross-functional teams spanning clinical operations, regulatory affairs, data management, finance, and global site networks across 23 countries.
Ashley architects AI-forward solutions at the intersection of clinical operations, enterprise governance, and regulated drug development. As Product Owner for ICON’s enterprise Operational Resourcing and Forecasting Platform, she led the initiative through full C-suite endorsement and ILT approval, with conservatively projected annual operational savings of $16M at full scale. She has designed a broader portfolio of 18 AI solutions mapped across a phased enterprise rollout, which has received full organizational approval, positioning her as a strategic architect of AI adoption at scale. Her work is grounded in direct experience managing complex global programs where AI governance is not a theoretical exercise but an operational necessity. She writes and speaks on practical frameworks that give project teams accountability over AI output, not just access to it. The views expressed are the author’s own.




