Assessing Program & Project Performance

 

with the ‘THAI-SCALE’ Technique & Template

 

ADVISORY ARTICLE

By Dr. Kenneth Smith, PMP 

Honolulu, Hawaii
& Manila, The Philippines


Background

The Thai Scale technique for targeting and assessing program & project performance was originally conceived and developed in 1996, together with an accompanying template to facilitate various computations for comparative analysis and progress reporting of State Enterprise – aka ‘Quango’ — programs.  Applied and field-tested on several disparate programs by my ‘Dream Team’ of Thai auditors, it earned after-action accolades from the program sponsors.

Subsequently, I introduced the Thai Scale technique to regional participants at the Mekong Institute in Khon Kaen, Thailand, where I conducted intermittent project management & evaluation seminars for several years under Asian Development Bank (ADB) auspices — although it never made the Project Management Institute’s ‘PMBOK’ or other ‘Prime’ P/PM publications. Since then, through limited consultancies and PM&E courses — as the need arose — I have continued to advocate and teach the Thai Scale, along with other project management techniques.

However, Ashley Majika’s article last month[1] alerted me that it would be timely to share this tried-and-true technique – now marking its 25th anniversary – with contemporary practitioners who might benefit from applying it to their own programs & projects.

Here then — to bridge the generational gap — is an exposition of the Thai Scale.

The prime purpose of programs and projects is to improve situations in some manner for selected areas &/or their inhabitants.  Consequently, in addition to implementation status – i.e. whether the work is on-schedule &/or on-budget — the three most pertinent issues for on-going program and post-project performance assessment are:

  1. How much better is the situation than before, or likely to be in the near future?
  2. Are the changes – now or anticipated – up to expectation?
  3. How best can the foregoing results be assessed, summarized and reported to program/project sponsors and other stakeholders?

Given appropriate organizational management objectives; project management quality delivery & service indicators with baseline situation data; quantitative targeting; and follow-up, those questions should be easy to answer.[2]  All too often during my career, however, I encountered programs and projects where baseline data had not been compiled; meaningful indicators were not established; quantitative targets were either not set or were unrealistically high; and implementation performance was not systematically tracked.

In such instances, instead of assessing project performance with on-hand data, evaluation teams spent most of their brief on-site visits hurriedly gathering superficial data through ad hoc rapid-reconnaissance studies.

As a consequence, inadequate anecdotal reports for stakeholders were the norm for these programs and projects — replete with scattered unsubstantiated statistics, and unverifiable assumptions & opinions submitted as ‘facts.’  Performance assessments were also produced in a wide variety of rating scales: from subjective nominal to rank-ordered ordinal (some low to high, others high to low), sometimes interval; but rarely ratio data.  In short, mostly an unsatisfactory situation!

In 1996 I had an extensive ADB consultancy to assist the Auditor General’s (AG) Office of the Government of Thailand in improving the ‘management audit’[3] performance assessment practices of State Enterprise management, service delivery and customer satisfaction.[4]  In addition to ‘Vulnas,’[5] another technique I devised with Tanom & Karanee (my principal AG Thai counterparts) for my Thai auditor ‘Dream Team’ to employ was systematic application of the standard deviation[6] — essentially an ‘old wine in a new bottle’ approach for organizations to target, evaluate and report project performance.

To make the standard deviation concept palatable to recipient stakeholders, we equated the normal probability distribution scale to the five-banded Thai national flag (with its double-wide central band), and dubbed it the ‘Thai Scale’ – as indicated in Figure 1:
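As a rough illustration of the underlying arithmetic, the short Python sketch below converts a measured result into its standard-deviation distance from target and maps that distance onto one of five bands. The function name, the band labels, and the cut-offs at ±1 and ±2 standard deviations are assumptions made for this sketch; the actual Thai Scale boundaries and colour coding are those defined in Figure 1.

# Minimal sketch of a five-band, standard-deviation-based rating.
# Assumed cut-offs: a double-wide central band from -1 to +1 standard
# deviations, with outer bands breaking at +/-2 (illustrative only).

def thai_scale_band(actual: float, target: float, std_dev: float) -> str:
    """Rate an actual result against its target in standard-deviation units."""
    if std_dev <= 0:
        raise ValueError("standard deviation must be positive")
    z = (actual - target) / std_dev   # distance from target in SD units
    if z < -2:
        return "well below expectation"
    if z < -1:
        return "below expectation"
    if z <= 1:
        return "up to expectation"    # the double-wide central band
    if z <= 2:
        return "above expectation"
    return "well above expectation"

# Example: a service-delivery indicator targeted at 80 units, with an
# estimated standard deviation of 5, against an actual delivery of 73:
print(thai_scale_band(actual=73, target=80, std_dev=5))   # -> below expectation

One attraction of such a scale is that every indicator, whatever its raw units, ends up reported in the same five bands, which is what makes comparative analysis and progress reporting across disparate programs straightforward.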

More…

To read entire article, click here

How to cite this article: Smith, K. F. (2021). Assessing Program & Project Performance with the ‘THAI-SCALE’ Technique & Template, PM World Journal, Vol. X, Issue XII, December. Available online at https://pmworldlibrary.net/wp-content/uploads/2021/12/pmwj112-Dec2021-Smith-assessing-program-and-project-performance-with-thai-scale-technique.pdf


About the Author


Dr. Kenneth Smith

Honolulu, Hawaii
& Manila, The Philippines

 

Initially a US Civil Service Management Intern, then a management analyst & systems specialist with the US Defense Department, Ken subsequently had a career as a senior foreign service officer — management & evaluation specialist, project manager, and in-house facilitator/trainer — with the US Agency for International Development (USAID).  Ken assisted host-country governments in many countries to plan, monitor and evaluate projects in various technical sectors, working ‘hands-on’ with their officers as well as other USAID personnel, contractors and NGOs.  Intermittently, he also served as a team leader &/or team member on project, program & country-level portfolio analyses and evaluations.

Concurrently, Ken had an active dual career as an Air Force ready-reservist in Asia (Japan, Korea, Vietnam, Thailand, Indonesia, Philippines) as well as the Washington D.C. area; was Chairman of a Congressional Service Academy Advisory Board (SAAB); and had additional duties as an Air Force Academy Liaison Officer.  He retired as a ‘bird’ colonel.

After retirement from USAID, Ken was a project management consultant for ADB, the World Bank, UNDP and USAID.

He earned his DPA (Doctor of Public Administration) from George Mason University (GMU) in Virginia, his MS from the Massachusetts Institute of Technology (MIT Systems Analysis Fellow, Center for Advanced Engineering Study), and BA & MA degrees in Government & International Relations from the University of Connecticut (UCONN).  A long-time member of the Project Management Institute (PMI) and IPMA-USA, Ken is a Certified Project Management Professional (PMP®) and a member of the PMI®-Honolulu and Philippines Chapters.

Ken’s book — Project Management PRAXIS (available from Amazon) — includes many innovative project management tools & techniques; and describes a “Toolkit” of related templates available directly from him at kenfsmith@aol.com on proof of purchase of PRAXIS.

To view other works by Ken Smith, visit his author showcase in the PM World Library at https://pmworldlibrary.net/authors/dr-kenneth-smith/

[1] Majika, A. (2021). Customer service and project performance at state-owned enterprises: Towards a sustainable mechanism, PM World Journal, Vol. X, Issue XI, November
[2] Other issues — such as sustainability, collateral benefits & unintended consequences, as well as whether further financing and follow-up activity are required – need much more in-depth probing and analysis.
[3] i.e. extending auditing practices beyond traditional financial aspects, to include other dimensions of organizational program & project management.
[4] Unfortunately, the failure of most state-owned enterprises to meet project deadlines consistently, and the lack of customer satisfaction with them — as discussed by Ashley Majika regarding South Africa in last month’s PMWJ article — are generic problems, not confined to one country.
[5] Smith, K. F. (2021). Internal Controls for Project Managers: How to Audit Yourself Before the Auditor Audits, PM World Journal, Vol. X, Issue VII, July
[6] The standard deviation of the normal distribution curve is the basis for statistical probability assessment.  Some instructors use the range and distribution of their students’ test results to grade them ‘on the curve.’  The Program Evaluation & Review Technique (PERT) used a ‘quick & easy’ estimated standard deviation (ESD) in conjunction with the Critical Path Method (CPM), where one ESD = (Pessimistic – Optimistic)/6, to estimate activity duration probabilities as well as to subsequently assess performance.  We also used the ESD to estimate organizational management, program & project performance, as described herein.
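As a quick worked example of that shortcut, the Python lines below use optimistic and pessimistic figures of my own choosing (not taken from the article) to compute one ESD and express an actual result as a distance from expectation in ESD units; the 16-day expected value is a simple midpoint used for illustration, whereas classic PERT uses (Optimistic + 4 × Most Likely + Pessimistic)/6.

# PERT 'quick & easy' estimated standard deviation (ESD), per footnote 6.
# The optimistic/pessimistic figures are illustrative only.
optimistic, pessimistic = 10.0, 22.0    # e.g. activity duration in days
esd = (pessimistic - optimistic) / 6    # one ESD = (22 - 10) / 6 = 2.0 days

# Assumed expected duration of 16 days (midpoint, for simplicity) and an
# observed duration of 19 days:
expected, actual = 16.0, 19.0
print((actual - expected) / esd)        # 1.5 ESDs over expectation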