Sampling for More Effective & Efficient


Program/Project Progress Reporting



By Dr. Kenneth Smith, PMP

Honolulu, Hawaii and
Manila, The Philippines

Last month, I reviewed the basic precepts of random sampling, as well as possible pitfalls to avoid or mitigate when collecting data.  With those criteria in mind, and undertaken with prudent precautions, sampling is an economical, efficient and effective means of acquiring useful information about a target population, as well as about a program/project’s interventions to improve a situation.  Consequently, throughout my international economic, infrastructure and social development career in program and project management across diverse sectors, whenever I encountered a dysfunctional census-type management information system (MIS), I sought to supplement, if not supplant, it with some systematic sampling capability.
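To illustrate that economy concretely, here is a minimal sketch using Cochran's well-known sample-size formula with the finite-population correction. The function names and the figures are illustrative only, not part of any agency procedure discussed here:

```python
import math

def cochran_sample_size(margin=0.05, confidence_z=1.96, p=0.5):
    """Sample size needed to estimate a proportion to within
    +/- margin at ~95% confidence (z = 1.96); p = 0.5 is the
    conservative worst case when the true proportion is unknown."""
    return math.ceil(confidence_z**2 * p * (1 - p) / margin**2)

def with_finite_population(n0, population):
    """Finite-population correction: smaller samples suffice
    when the population itself is small."""
    return math.ceil(n0 / (1 + (n0 - 1) / population))

n0 = cochran_sample_size()             # 385 respondents, however large the population
n = with_finite_population(n0, 1000)   # fewer still for a population of 1,000
```

Roughly 385 sampled records yield a 5% margin of error no matter how many thousands of records the census-style system demands, which is the heart of the case for sampling.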

This month I continue on that tack by highlighting the weaknesses inherent in census-style recurrent reporting and, even without replacing the extant MIS in its entirety, offering a ‘quicker & easier’ sampling approach to validate program/project performance data.

Unquestionably, a lot of data are needed to measure implementation performance and support program/project management decision-making.  Unfortunately, all too often the structures institutionalized to acquire and analyze those data are bureaucratically burdensome, cumbersome, costly, excessive, untimely and, regrettably, ultimately ineffective.

At the request of one national government agency executive, for example, I interviewed staff, then reviewed, categorized and compiled the following ‘fourteen faults of recurrent reports’ within his agency for his follow-up administrative clean-up action.

Fourteen faults of recurrent reports

  1. Many different reports, formats and structures exist for managing the same program.
  2. Extensive and excessively detailed data are required.
  3. There are numerous redundancies in data requirements, some obvious gaps in coverage, and ambiguity in both requests for data and the responses; in some instances, discrepancies exist in the answers provided.
  4. The authenticity (validity and accuracy) of the data in many instances is unknown.
  5. National targets are set by a “top-down” process and assigned to Provinces without their participation, or any request for concurrence as to their realism or feasibility.
  6. Reporting requirements are too many, and too frequent for the field staff to fulfill meaningfully.
  7. Most reporting requirements consume excessive amounts of the field technician’s time to complete.
  8. Most reports from the field are submitted too late to be of use for action-oriented decision-making at intermediate and higher levels.
  9. Intermediate levels as well as the Central Office spend inordinate amounts of time and effort summarizing data and preparing reports for their level of management.
  10. Some data are summarized manually (primarily in spreadsheet form) and laboriously retyped or re-entered, with many errors. [Late or missing data are simply ignored, and missing or delinquent reports are seldom followed up.]
  11. Many data reported are ad hoc and ad hominem. Reports are largely descriptive narrative with some statistics scattered ‘randomly’ throughout the text.
  12. Data are seldom analyzed by the staff or management at any level. For the most part, there is no standardized data base or on-going measurement against targets for performance accomplishment, or trend analysis.
  13. Almost every request by managers for information generates a frantic new search for — or re-creation of — historical data, and often results in the creation of new reporting requirements.
  14. Lack of feedback.  Data reporting is a “one-way street.”  Data eventually come in to the Central Office, but only directives go out.  No indication is given of the utility of the data, nor are summaries of information provided to the field for their possible use.  Neither are reported problems dealt with, or feedback provided, in any systematic manner.

While computerization can today readily resolve some of these syndromes, many of these less-than-desirable practices still linger on in programs and projects, especially reporting bias by individuals with vested interests in the outcome.  Surveys, however, can bypass, if not obviate, the bureaucratic bungling endemic to regularly reported, census-style service statistics; they can also clarify obfuscated issues or, worse, reveal deliberate distortion by program/project operational personnel.
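As one concrete way such a validation survey might proceed (a sketch under my own assumptions, not a prescribed procedure; the record format, tolerance and figures are hypothetical), draw a simple random sample of reported records, field-verify each one, and estimate the overall discrepancy rate with a confidence interval:

```python
import math
import random

def estimate_discrepancy_rate(records, sample_size, tolerance=0.0, seed=1):
    """Draw a simple random sample of (reported, verified) value pairs
    and estimate the share of records whose reported figure differs
    from the verified figure by more than `tolerance`, with a 95%
    confidence interval (normal approximation)."""
    rng = random.Random(seed)  # fixed seed so the audit is reproducible
    sample = rng.sample(records, sample_size)
    discrepant = sum(1 for reported, verified in sample
                     if abs(reported - verified) > tolerance)
    p = discrepant / sample_size
    half_width = 1.96 * math.sqrt(p * (1 - p) / sample_size)
    return p, (max(0.0, p - half_width), min(1.0, p + half_width))

# Hypothetical data: 1,000 records, 10% of them over-reported from the field
records = [(100, 100)] * 900 + [(120, 100)] * 100
rate, (low, high) = estimate_discrepancy_rate(records, sample_size=80)
```

Comparing the resulting interval against the census-style totals flags whether the recurrent reports merit the kind of administrative follow-up described above, without re-verifying every record.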


To read entire article, click here

How to cite this article: Smith, K. F. (2022).  Sampling for More Effective & Efficient Program/Project Progress Reporting, PM World Journal, Vol. XI, Issue II, February. Available online at https://pmworldlibrary.net/wp-content/uploads/2022/02/pmwj114-Feb2022-Smith-sampling-for-more-effective-progress-reporting.pdf

About the Author

Dr. Kenneth Smith

Honolulu, Hawaii
& Manila, The Philippines



Initially a US Civil Service Management Intern, then a management analyst & systems specialist with the US Defense Department, Ken subsequently had a career as a senior foreign service officer — management & evaluation specialist, project manager, and in-house facilitator/trainer — with the US Agency for International Development (USAID).  Ken assisted host country governments in many countries to plan, monitor and evaluate projects in various technical sectors; working ‘hands-on’ with their officers as well as other USAID personnel, contractors and NGOs.  Intermittently, he was also a team leader &/or team member conducting project, program and country-level portfolio analyses and evaluations.

Concurrently, Ken had an active dual career as Air Force ready-reservist in Asia (Japan, Korea, Vietnam, Thailand, Indonesia, Philippines) as well as the Washington D.C. area; was Chairman of a Congressional Services Academy Advisory Board (SAAB); and had additional duties as an Air Force Academy Liaison Officer.  He retired as a ‘bird’ colonel.

After retirement from USAID, Ken was a project management consultant for ADB, the World Bank, UNDP and USAID.

He earned his DPA (Doctor of Public Administration) from George Mason University (GMU) in Virginia, his MS from the Massachusetts Institute of Technology (MIT Systems Analysis Fellow, Center for Advanced Engineering Study), and BA & MA degrees in Government & International Relations from the University of Connecticut (UCONN).  A long-time member of the Project Management Institute (PMI) and IPMA-USA, Ken is a Certified Project Management Professional (PMP®) and a member of the PMI®-Honolulu and Philippines Chapters.

Ken’s book — Project Management PRAXIS (available from Amazon) — includes many innovative project management tools & techniques; and describes a “Toolkit” of related templates available directly from him at kenfsmith@aol.com on proof of purchase of PRAXIS.

To view other works by Ken Smith, visit his author showcase in the PM World Library at https://pmworldlibrary.net/authors/dr-kenneth-smith/