
Managing AI: Risks and Opportunities

 

SECOND EDITION

By Donald R. Polaski and Marissa J. Brienza

Booz Allen Hamilton

McLean, VA, USA


ABSTRACT

Artificial intelligence (AI) is fundamentally changing the way humans interact with machines by automating tasks that previously only humans could perform. While AI can seem like magic, using these innovative techniques comes with considerable risks. AI models can be fraught with bias, as was the case when Amazon developed an internal recruiting tool that used AI to vet job resumes. In designing the tool, researchers identified that the model was ranking women’s resumes significantly lower than men’s. The model penalized resumes containing the term “women,” as in “women’s chess club captain,” and downgraded applicants who had attended all-women’s colleges. Amazon scrapped the program in 2015, but the episode is an important lesson that even organizations with the best intentions may run into unexpected risks when managing AI-based projects. In this paper we review multiple case studies in which risks on AI projects were realized and highlight additional risks associated with managing AI projects. Topics addressed include bias in AI models, privacy concerns with data used to train AI systems, legal issues that may arise from using AI-powered tools, lack of model transparency and explainability, and model drift, the tendency of AI models to lose accuracy over time. We also cover strategies for dealing with these risks to help program managers maximize the impact that AI has on their projects.

INTRODUCTION

Artificial intelligence (AI) is fundamentally changing the way humans interact with machines by automating tasks that previously only humans could perform. While AI can seem like magic, using these innovative techniques comes with considerable risks. AI models can be fraught with bias, as was the case when Amazon developed an internal recruiting tool that used AI to vet job resumes. In designing the tool, researchers identified that the model was ranking women’s resumes significantly lower than men’s. The model penalized resumes containing the term “women,” as in “women’s chess club captain,” and downgraded applicants who had attended all-women’s colleges. Amazon scrapped the program in 2015, but the episode is an important lesson that even organizations with the best intentions may run into unexpected risks when managing AI-based projects. In this paper we review multiple case studies in which risks on AI projects were realized and highlight additional risks associated with managing AI projects. Topics addressed include bias in AI models, privacy concerns with data used to train AI systems, legal issues that may arise from using AI-powered tools, lack of model transparency and explainability, and model drift, the tendency of AI models to lose accuracy over time. We also cover strategies for dealing with these risks to help program managers maximize the impact that AI has on their projects.
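Model drift, in particular, lends itself to a simple illustration. One common approach (shown below as a minimal Python sketch that is not taken from this paper; the function name, threshold, and data are purely illustrative) is to compare the model's accuracy on a recent window of labeled data against the accuracy measured at deployment and raise a flag when the drop exceeds a chosen tolerance:

def drift_alert(baseline_accuracy, recent_predictions, recent_labels, tolerance=0.05):
    """Return True if accuracy on recent labeled data has fallen more than
    `tolerance` below the accuracy observed when the model was deployed."""
    correct = sum(p == y for p, y in zip(recent_predictions, recent_labels))
    recent_accuracy = correct / len(recent_labels)
    return (baseline_accuracy - recent_accuracy) > tolerance

# Illustrative example: a model that scored 0.92 at deployment but only
# 0.70 on last month's labeled data would trigger the alert.
print(drift_alert(0.92,
                  [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
                  [1, 0, 0, 1, 0, 1, 1, 1, 0, 0]))  # True

Checks like this only work when fresh ground-truth labels keep arriving; how often they arrive is itself a project planning question.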

AI BIAS

AI bias occurs when an AI system produces outputs that lead to discrimination against specific groups or individuals (Belenguer, 2022). PMs should be concerned about bias in the AI tools they use or develop because biased AI can lead to unfair and discriminatory outcomes. If the AI tools are used in critical areas such as hiring, lending, or criminal justice, biased outcomes can have dire consequences for individuals and society.
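As a concrete illustration of how biased outputs can be surfaced (a minimal Python sketch, not taken from this paper; the group labels, data, and the 80 percent threshold, a rule of thumb often called the four-fifths rule, are assumptions for illustration only), a team can compare the model's selection rate for each group:

from collections import Counter

def selection_rates(groups, selected):
    """Fraction of each group that the model recommends for the next stage."""
    totals, hits = Counter(groups), Counter()
    for g, s in zip(groups, selected):
        if s:
            hits[g] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical screening results for eight applicants
groups   = ["male", "male", "female", "male", "female", "male", "female", "male"]
selected = [True,   True,   False,    True,   False,    False,  True,     True]

rates = selection_rates(groups, selected)
print(rates)  # {'male': 0.8, 'female': 0.333...}

# Four-fifths rule of thumb: flag the model if any group's selection rate
# falls below 80% of the highest group's rate.
print(min(rates.values()) / max(rates.values()) >= 0.8)  # False -> investigate

A failing check does not prove discrimination on its own, but it tells a program manager exactly where to look next.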

Take, for example, an AI project developed and ultimately shelved by Amazon to assist recruiters and managers in hiring decisions. As early as 2014, Amazon had been building AI systems to review job applicants’ resumes and assign a one- to five-star rating to help separate strong candidates from weak ones (Goodman, 2018). On its surface, using AI to filter out candidates could save substantial time for an enterprise as large as Amazon. In execution, the company quickly identified that its AI system had built-in bias against female candidates. Resumes that included the term “women,” as in “women’s chess club captain,” were downgraded. The AI system also downgraded graduates of two women’s colleges (Dustin, 2018). Part of the reason this bias was present in the recruiting tool was an imbalance in the data used to train it. The tech industry is overwhelmingly male (Hupfer, 2021), and as such the resumes used to train the underlying algorithms came primarily from male candidates. Because Amazon had hired mostly men in the past, male candidates scored higher in the new recruiting tool. Fortunately, Amazon caught the bias in its system quickly and terminated the program. Unfortunately, this kind of bias can creep into any AI system whose training data reflects historical bias.
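One upstream check (again a minimal Python sketch with hypothetical data, not a procedure described by the authors) is simply to measure how each group is represented in the training data before any model is built:

from collections import Counter

def group_balance(groups):
    """Share of training examples contributed by each group."""
    counts = Counter(groups)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

# Hypothetical training set drawn mostly from past hires in a male-dominated field
training_groups = ["male"] * 850 + ["female"] * 150
print(group_balance(training_groups))  # {'male': 0.85, 'female': 0.15}

A skew this large is a signal to rebalance or reweight the data, or to collect more representative examples, before trusting the model's rankings.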

To reduce the risk of AI bias, program managers can ask the following questions during project execution:

More…

To read entire paper, click here

Editor’s note: Second Editions are previously published papers that have continued relevance in today’s project management world, or which were originally published in conference proceedings or in a language other than English.  Original publication acknowledged; authors retain copyright.  This paper was originally presented at the 10th Annual University of Maryland PM Symposium in April 2023.  It is republished here with the permission of the authors and conference organizers.

How to cite this paper: Polaski, D.R., Brienza, M.J. (2023). Managing AI: Risks and Opportunities; originally presented at the 10th Annual University of Maryland Project Management Symposium, College Park, Maryland, USA in April 2023; PM World Journal, Vol. XII, Issue VII, July. Available online at https://pmworldlibrary.net/wp-content/uploads/2023/07/pmwj131-Jul2023-Polaski-Brienza-managing-AI-risks-opportunities-2.pdf


About the Authors


Donald R. Polaski

Virginia, USA

 

Donald Polaski is a Director of AI and ML at Booz Allen Hamilton, where he leads the development of Artificial Intelligence (AI) solutions. He has led the development of enterprise cloud-based data science platforms and is currently driving new AI-based analytic solutions across the Federal Government.

Donald holds Bachelor of Science degrees from the University of Rochester in Physics and Mathematics and is a Certified NVIDIA Deep Learning Institute instructor. He has applied his Data Science and AI skills to a wide range of fields including space situational awareness, materials science, particle physics, and childhood cancer research.

He can be contacted at polaski_donald@bah.com

 


Marissa J. Brienza

Virginia, USA

 

Marissa Brienza is a chief data scientist at Booz Allen Hamilton with nine years of experience delivering innovative, data-driven solutions to clients across the DoD and Intelligence Community. She helps organizations create and operationalize automation, machine learning, and big data capabilities to protect national security interests and accelerate the delivery of mission-critical information to the warfighter.

Marissa holds a Bachelor of Science degree in Statistics from the University of South Carolina and a Master of Science degree in Operations Research from George Mason University. She is affiliated with Women in Data Science (WiDS) and Girls Who Code to encourage and support young women pursuing a career in technology.

She can be contacted at brienza_marissa@bah.com