Verification & Validation of Project Management AI

 

FEATURED PAPER

By Bob Prieto

Chairman & CEO
Strategic Program Management LLC

Florida, USA

 


 

Previously I have written about the potential use of AI in the management of large complex projects[1], the issues that arise in such usage, and the need for effective verification and validation (V&V). In this paper some areas of special concern are highlighted and thoughts offered on how to approach aspects of the V&V process. The paper draws on the work of others in the broader verification and validation community as well as my own thinking on V&V in AI-enabled project management. Its intent is to foster discussion of this important area as various project management AI efforts move apace.

Defining V&V

Verification and validation first need to be clearly and consistently defined, since a range of interpretations exists across the literature. This definitional challenge is addressed by Gonzalez and Barr (2000)[2] and can be seen in the differences between the definitions used by the Institute of Electrical and Electronics Engineers (IEEE) and the US Department of Defense (DoD).

Verification deals with satisfying specifications. It concerns the structural correctness of the knowledge base, that is to say, whether it is internally consistent and complete. Among the challenges is establishing a taxonomy that addresses the key milestones and activities in the project execution process. Such a taxonomy must be capable of transcending project types, and the tendency to redefine it for each project or data subset must be avoided.
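To illustrate, the sketch below shows one way such a project-neutral taxonomy might be encoded so that the same phase and milestone labels are reused across project types rather than redefined per project. It is a minimal illustration only; the specific labels, and the choice of Python enumerations, are assumptions rather than a proposed standard.

from enum import Enum

class Phase(Enum):
    INITIATION = "initiation"
    PLANNING = "planning"
    EXECUTION = "execution"
    CLOSEOUT = "closeout"

class Milestone(Enum):
    # Each milestone carries the phase it belongs to; labels are illustrative.
    SCOPE_BASELINED = ("scope_baselined", Phase.PLANNING)
    CONTRACT_AWARDED = ("contract_awarded", Phase.EXECUTION)
    SUBSTANTIAL_COMPLETION = ("substantial_completion", Phase.CLOSEOUT)

    def __init__(self, label: str, phase: Phase):
        self.label = label
        self.phase = phase

# Projects and data subsets map their events onto these shared labels, so the
# same taxonomy is exercised consistently across training and test data.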

Verification of AI-enabled project management systems must move beyond static rule testing, given the non-deterministic nature of AI programs. We must check for inconsistencies and incompleteness, as well as discrepancies, ambiguities and redundancy. Verification must ensure that all portions of the rule base are exercised in testing. Testing against known results may leave us susceptible to errors because of weak or incomplete coverage in the test set. Verification of AI-enabled project management systems therefore requires us to think about underrepresented data subsets and about late-stage or other temporal failure regimes. For example, how does our AI model behave when confronted with a test data set consisting entirely of project successes and asked to look for failure?
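As a concrete illustration of that last question, the sketch below probes a trained project-outcome classifier with a deliberately skewed test slice containing only successful projects and reports how often it still predicts failure. The names model, test_df and the outcome column are assumptions introduced for illustration, and a scikit-learn-style predict interface is assumed.

import pandas as pd

def probe_skewed_slice(model, test_df: pd.DataFrame,
                       outcome_col: str = "outcome",
                       success_label: str = "success") -> dict:
    """Report how often the model predicts failure on the all-success slice."""
    successes = test_df[test_df[outcome_col] == success_label]
    if successes.empty:
        return {"n": 0, "predicted_failure_rate": None}

    features = successes.drop(columns=[outcome_col])
    preds = model.predict(features)  # assumed scikit-learn-style API
    failure_rate = (preds != success_label).mean()
    return {"n": len(successes), "predicted_failure_rate": float(failure_rate)}

# Example usage, assuming `model` and `test_df` already exist:
# report = probe_skewed_slice(model, test_df)
# print(f"Failure predictions on all-success slice: {report['predicted_failure_rate']:.1%}")

Either extreme result on such a slice is a prompt to revisit test-set coverage rather than a verdict on the model itself.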

Accuracy depends on the training data set and on the data inputs. The correctness, completeness and appropriateness of source data can each be a failure point, so automated data quality checks are needed.
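A minimal sketch of such automated checks is shown below, assuming the source data arrives as a pandas DataFrame; the required columns and the missing-value threshold are illustrative assumptions, not prescribed values.

import pandas as pd

def data_quality_report(df: pd.DataFrame,
                        required_cols: list[str],
                        max_missing_frac: float = 0.05) -> dict:
    """Check column completeness, duplicate rows and missing-value rates."""
    report = {
        "missing_required_columns": [c for c in required_cols if c not in df.columns],
        "duplicate_rows": int(df.duplicated().sum()),
        "columns_over_missing_threshold": {
            col: float(frac)
            for col, frac in df.isna().mean().items()
            if frac > max_missing_frac
        },
    }
    report["passed"] = (not report["missing_required_columns"]
                        and report["duplicate_rows"] == 0
                        and not report["columns_over_missing_threshold"])
    return report

# Example usage:
# report = data_quality_report(raw_df, required_cols=["project_id", "budget", "outcome"])

In practice such checks would run automatically whenever source data is refreshed, before it reaches either the training pipeline or the deployed model.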

Project management AI systems will require coverage measures to assess the quality of the domain populations (training and test) and meta-knowledge to provide guidance on:

  • fitness for a specific use case
  • the likelihood that the cases of interest are represented in the training population
  • how representative the test set is of the intended population

There is a need to develop standardized measures and ontologies to allow the proposed project management AI systems to be evaluated.
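By way of example only, the sketch below computes one simple candidate coverage measure: the Population Stability Index (PSI) between the distribution of a categorical attribute (here a hypothetical project_type column) in the training set and in the intended target population. It illustrates the kind of measure a standardized framework would need to define; it is not itself a standard.

import numpy as np
import pandas as pd

def population_stability_index(train: pd.Series, target: pd.Series,
                               eps: float = 1e-6) -> float:
    """PSI over the categories seen in either series; higher means less representative."""
    categories = sorted(set(train.unique()) | set(target.unique()))
    p = train.value_counts(normalize=True).reindex(categories, fill_value=0) + eps
    q = target.value_counts(normalize=True).reindex(categories, fill_value=0) + eps
    return float(np.sum((p - q) * np.log(p / q)))

# Example usage:
# psi = population_stability_index(train_df["project_type"], target_df["project_type"])
# A PSI above roughly 0.25 is conventionally read as a material shift between populations.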


 

How to cite this paper: Prieto, R. (2019). Verification & Validation of Project Management AI. PM World Journal, Vol. VIII, Issue X, November. Available online at https://pmworldlibrary.net/wp-content/uploads/2019/11/pmwj87-Nov2019-Prieto-Verification-and-Validation-of-project-management-AI.pdf

 


 

About the Author


Bob Prieto
Chairman & CEO
Strategic Program Management LLC
Jupiter, Florida, USA

 

 

 Bob Prieto is a senior executive effective in shaping and executing business strategy and a recognized leader within the infrastructure, engineering and construction industries. Currently Bob heads his own management consulting practice, Strategic Program Management LLC.  He previously served as a senior vice president of Fluor, one of the largest engineering and construction companies in the world. He focuses on the development and delivery of large, complex projects worldwide and consults with owners across all market sectors in the development of programmatic delivery strategies. He is author of nine books including “Strategic Program Management”, “The Giga Factor: Program Management in the Engineering and Construction Industry”, “Application of Life Cycle Analysis in the Capital Assets Industry”, “Capital Efficiency: Pull All the Levers” and, most recently, “Theory of Management of Large Complex Projects” published by the Construction Management Association of America (CMAA) as well as over 600 other papers and presentations.

Bob is an Independent Member of the Shareholder Committee of Mott MacDonald. He is a member of the ASCE Industry Leaders Council and the National Academy of Construction, a Fellow of the Construction Management Association of America, and a member of several university departmental and campus advisory boards. Bob served until 2006 as a U.S. presidential appointee to the Asia Pacific Economic Cooperation (APEC) Business Advisory Council (ABAC), working with U.S. and Asia-Pacific business leaders to shape the framework for trade and economic growth. He previously served both as Chairman of the Engineering and Construction Governors of the World Economic Forum and as co-chair of the infrastructure task force formed after September 11th by the New York City Chamber of Commerce. Earlier, he served as Chairman of Parsons Brinckerhoff (PB) and as a non-executive director of Cardno (ASX).

Bob can be contacted at rpstrategic@comcast.net.

 

[1] Prieto, B. (2019). Impacts of Artificial Intelligence on Management of Large Complex Projects. PM World Journal, Vol. VIII, Issue V, June.

[2] Gonzalez, A. & Barr, V. (2000). Validation and verification of intelligent systems – what are they and how are they different? Journal of Experimental and Theoretical Artificial Intelligence, Vol. 12.