Texas Performance Review
A Manual For Conducting Performance Reviews


CHAPTER 3
Research and Analysis


Thorough research is probably the most important ingredient of a successful performance review. This chapter is specifically designed for the performance review analyst who will be responsible for the review's detail. The following information is fairly basic since some users may not have conducted research of this type before.

This information also should be useful to managers, who may gain an appreciation of the demands that will be placed on their analysts. Managers should endeavor to create a supportive working environment that allows analysts to complete their review activities with as few interruptions as possible.

Performance review analysts may be assigned to study unfamiliar topics. Often, they will work under tight deadlines. To be effective in these situations, analysts must collect as much information as possible, as quickly as possible.
 


Background research
Background research provides a general overview of the organization or program in question. It may also reveal previous research in the area of interest. Ultimately, preliminary research should help identify fruitful areas for later, concentrated study. This phase is completed primarily by team analysts with the involvement of team leaders or senior managers.

Background research is essentially a matter of context. To learn the historical context of an organization or program, study its origin and evolution. Identifying the reasons for a program's creation and the forces that have shaped it will make it easier to evaluate options for its future.

It's also important to understand the organizational context of an agency or program--how it operates within the broader environment of government. Consult organizational charts or other documents that provide insight into who controls the organization or program. In reviewing a specific program, for instance, consider how it relates to other programs within its organization. Is the program a significant activity of the organization or a minor one that is largely unrelated to the organization's main mission?

The topic should also be considered within a budgetary context. Learning that a program has experienced significant declines or increases in funding, for example, certainly should color any assessment of its efficiency and effectiveness. Budgetary information is particularly important when considering recommendations that would require more funding. In its review of the Texas Air Control Board, TPR learned that recent changes in federal law authorized the agency to charge industries a fee based on their emissions of certain pollutants. TPR analysts recommended using this new funding source to finance a series of improvements. They might have suggested a different approach had the proposed changes required a tax increase.

Finally, the performance review analyst should have a thorough understanding of the topic's political context--the political forces influencing the organization or program under review. Realistically, the goals and objectives of the legislature, agency administrators and all other interested parties can affect the review's outcome. The analyst should be able to defend the weight given to political factors in developing recommendations.

Becoming familiar with these and other basic implications of the topic area will lay a solid foundation for the review. The organization under review, the persons requesting the review and other affected parties will want to be sure that the analyst has considered all pertinent information and all reasonable approaches to a problem and that recommendations are as informed and reliable as possible. Failure to complete this "homework" will weaken the entire review's results and leave it open to criticism or outright dismissal by defenders of the status quo.
 


Background sources
As early as possible in the review, become familiar with any relevant reports or studies on the topic, many of which should be available from the organization under review. Request specific information, including budgetary and staffing data and descriptive information about programs of interest. Ask contacts what sources they suggest. Ask also whether an agency library is available.

Other potential sources of information include research libraries, legislative libraries, on-line databases and numerous statewide and national organizations. Examples of these organizations include the Council of State Governments, the Texas Association of Counties and various trade and professional bodies. In general, background information sources may include the following:

  • Legislative information--
    - Current statutes and related laws
    - Legislation proposed, but not passed
    - Attorney General Opinions
    - Court decisions
    - Legislative committee reports
    - Appropriations acts
  • Agency information--
    - Agency board meeting minutes
    - Agency annual report
    - Agency budget documents
    - Agency organizational charts
    - Agency rules
  • Reports prepared by other agencies within state or local government (state auditors, internal auditors, etc.)
  • Reports issued by private consultants, foundations, "think tanks," professional associations or organizations associated with the review topic
  • News clippings on the agency or program in question
  • Monitoring reports issued by agencies that oversee the agency or program
  • Briefing materials that may have been prepared for other purposes
  • Overview interviews with management
  • Tours of facilities
 


In-depth research
Once specific issues for investigation are identified, the analyst should gather detailed information. Such research can lead in virtually any direction. At this point, the analyst must decide if a problem or opportunity merits thorough investigation. Otherwise, it is all too easy to become buried in research efforts that may never be used in any meaningful way.

Some data sources that may contribute to in-depth research include:

  • Individual records or files
  • Interviews with front-line workers
  • Interviews with outside experts
  • Direct observation of functions
  • Specific policies and procedures
  • Studies and reports on similar activities in other state or local governments
  • Relevant reports by private consultants, foundations, "think tanks" and trade or professional associations
  • Agency rules
  • Relevant academic journals
  • Testimony from meetings or hearings
 


Interviewing
Interviewing is a fast and simple way to become immersed in the substance of a review topic. Properly conducted interviews can yield insights into the causes of major problems and generate ideas for their improvement or solution.

The analyst should interview employees at all levels of the organization, as well as people outside the organization who are affected by the program or function under review. By interviewing on many levels inside and outside the organization, the analyst can avoid communication barriers that often prevent organizations from seeing themselves objectively.

Assuming the review involves a single agency or organization, begin the interview series with the senior manager and continue systematically with various program directors. Starting at the top will give the broadest policy perspective and illustrate how these perspectives are put into practice within the organization. (It's helpful to ask the senior manager to designate a review liaison who can schedule interviews and supply other requested information.) Of course, the analyst should interview non-managerial employees as well, since they often know more about inefficient or unnecessary activities.

In addition, talk with the staff of agencies that interact with the organization under review, such as budget office personnel, legislative committee members and staff, agencies that serve or are served by the organization and client advocacy groups. Academic experts, private consultants and others with expertise on the topic also can help identify critical issues and may later help the analyst weigh the merits of possible recommendations.

Interview approaches
Interview approaches range from informal talks to structured interviews. Tailor the approach according to who is being interviewed, their position, the scope of the review and the information needed.

In an informal interview, make a conscious effort to minimize intimidating methods such as note-taking during the discussion. An informal interview should occur in a relaxed setting, such as a break room. The interview should help the analyst learn about organizational difficulties or administrative problems. Information from such interviews often is given on the basis of trust and confidentiality, so take special care in how it is used.

Structured interviews require the analyst to organize the interview in advance. For a structured interview, the analyst should assess his or her information needs carefully and design a series of specific questions to garner the specific data sought.

Note that most interviews connected with a performance review will fall between these two extremes. Choose the topics and issues to be discussed, but leave room for the interviewee to raise other issues. Use specific questions or simply improvise. If time is limited and the analyst has a large number of interviews to conduct, a structured approach to interviewing is more desirable. On the other hand, an improvisational style can uncover information that might be missed in a more structured interview.

While the personal, on-site interview is highly effective, don't underestimate the value of telephone interviews. They usually are less costly and time-consuming than personal interviews and can be used to contact a number of interviewees quickly.

Generally, interviews should begin with simple, open-ended questions, gradually moving to more complex issues. Similarly, the interview should move from neutral issues to those more sensitive or controversial. This approach can help the interviewer establish rapport and trust, and judge whether the interviewee is shy, defensive or confident. Based on these early clues, adjust the approach for the rest of the interview.

The interview should touch on opinions and feelings; such questions can unearth important information that is not documented. In closing, ask for suggested solutions or improvements.

Good "focusing" questions to use in the interview could include:

"What is the worst thing about . . . ?"

"What is the best thing about . . . ?"

"What's the biggest problem with . . . ?"

"What changes would you recommend if you were me?"

Managers may want to select the times when analysts can meet with employees in their areas. However, reviewers should try to maintain some control over timing and, more importantly, the interview environment. For each interview, request a reasonable time period, usually no more than one hour. Whenever possible, the interview should be conducted in a private area away from interruptions.

Sooner or later an interviewee will bring up a subject that touches on a key issue or problem. When this happens, ask the journalist's traditional questions--what happened, how and why it happened and who was involved. If the interviewee omits critical points, interrupt and prevent the conversation from moving on until the interviewee has provided the necessary information.

Understandably, interviewees may hesitate to discuss matters that could be confidential, self-incriminating or damaging to their managers' or agencies' image. However, this information may be necessary to discover the true causes of problems. To uncover such sensitive information, the analyst may need to offer assurances of confidentiality. Don't make promises that can't be kept. But the analyst can and should promise that the sources for comments heard in interviews will not be revealed.

Recording the interview
Rather than entrusting important issues to memory, the analyst should carefully record the details of each interview. The safest approach is to take notes while the interview is in progress. However, note-taking should be as unobtrusive as possible to avoid disrupting the conversation. Prepare a formal write-up from interview notes as soon as possible. The sooner the notes are recorded, the more accurate they will be. Don't guess at what was said; contact the interviewee for clarification if necessary. These notes should be retained as part of the review's work papers (the documents kept on file as a "paper trail" of your recommendations).

Tape-recorded interviews have obvious advantages, but in general they should be avoided. Interviewees tend to become nervous when confronted with a microphone and may not respond spontaneously and honestly. In addition, the mechanics of taping--changing batteries, flipping or replacing cassettes--tend to interrupt the interview. Lastly, the taping process is inefficient; the analyst must listen to the same interview more than once and still often must produce written notes.
 


Focus groups
TPR also uses focus groups to gather diverse opinions about an organization's operations. Focus groups bring together individuals representing different groups interested in and involved with the organization under review, such as employee groups, users of the organization's services and bodies regulated by the organization. Typically, a focus group of 10 to 12 individuals will be invited to participate in an in-depth discussion lasting several hours and designed to identify areas to target in the review.

For example, in 1994, TPR reviewed the University Interscholastic League (UIL), a statewide organization that administers high school academic, music and athletic competitions. The review team organized focus groups in four locations around the state by inviting various groups interested in UIL's operations to discuss their perceptions of the organization. Teachers, administrators, parents, students, coaches, band leaders and drama instructors attended these groups and provided the TPR review team with a variety of issues to pursue.
 


Observation
Observation is another useful method for studying an organization's activities. Observations can range from a prearranged tour of an organization's facilities to an unannounced visit as a client or customer. Such participation allows the analyst to experience personally such problems as poor customer service.

All observations should be documented. Record the date, time, place and activity observed. Exceptions to normal procedures and events should be noted; sometimes observation reveals that what an organization calls "exceptions" are actually the norm. As with interviews, carefully note opinions and emotions. Document whether individuals seem frustrated or bored by the functions observed.

In general, an inconspicuous analyst is more likely to see normal routine. If possible, spend some time working in the environment being reviewed. An analyst sitting long enough at a desk in a section will soon be accepted by most workers who will then resume their normal routines. This is a good way to observe morale, interpersonal relations and relative workloads. The analyst is also more likely to see unusual problems, which often illuminate larger problems.

Certain types of reviews are particularly well-suited to this technique. For example, observation is a good way to assess an organization's customer service program. Watching the way in which an organization's employees deal with expected (and unexpected) questions from customers can quickly reveal the strengths or weaknesses of a customer service program.
 


Interest-group meetings
Another research method is the interest-group meeting. Review deadlines often make it impossible to conduct detailed interviews with every group that has an interest in the organization or program under review. In such cases, the analyst may wish to hold one or more meetings with representatives of industry groups, professional associations or client advocacy groups, giving each an opportunity to present suggestions. Interest-group meetings can be held with or without the representatives of the organization being reviewed.

The review team may choose to hold a public meeting with formal notices in newspapers or other appropriate publications. At such meetings, testimony probably should be limited to ten minutes per speaker to enable everyone to be heard. Analysts also may choose to allow the submission of written testimony. Of course, remember that public meetings sometimes tend to be dominated by people who are highly motivated due to personal grievances or single-issue concerns. By the same token, other persons with valuable input simply may not choose to speak in a public forum.
 


Surveys
The survey is one of the most efficient and useful research methods. Surveys can be used to contact many persons in a cost-effective manner, and often provide the reviewer with franker responses than face-to-face interviews. Surveys shouldn't be limited to the body under review, but extended to client groups or other parties affected by the organization or program.

The survey is best used after analysts have established a basic foundation of knowledge about the review topic. Their initial research should help decide which areas need the additional information a survey can provide, and which people (employees, customers, etc.) should be targeted by the survey.

The choice of how to administer the survey--in person, by telephone or fax or via mail--will be guided by the type of survey designed and the time, staffing and financial limitations of the review team. Each method has advantages and disadvantages. For instance, an in-person survey generally costs more than a mail or phone survey, but may yield a greater amount of data per respondent than the other two methods. Mail and phone surveys can be used to contact much larger populations than would be possible via direct interviews. However, contacting each member of a large group, such as the client population of a major agency, may be impossible even for an ambitious review. In such cases, the analyst can survey a selected sample population (see the section on sampling below).

Developing survey questions
Regardless of the type of survey planned, think carefully about the survey's questions. First, the questions should relate to the information sought; useful answers are the only goal. The questions also should be clear. A confused or bored subject may give useless responses or simply refuse to respond at all.

As with interviews, avoid leading or "loaded" questions that may affect subjects' responses; ask "Have you noticed any problems with employees' use of sick leave?"--not "How much abuse of sick leave policies have you noticed?" Also, try to avoid questions that ask too much of your respondents' memories. Ask "About what percentage of your time is spent traveling on business?"--not "How many days did you spend traveling last year?"

There are two basic types of survey questions. One is the unstructured or open question, which allows respondents to give their own answer, instead of choosing among options. This is the sort of question asked in a personal interview, and indeed there is little difference between a personally administered survey featuring open questions and a conventional interview. Open questions often are useful in uncovering hidden problems and fresh ideas and solutions. But remember that in a telephone or personally administered survey, open questioning requires the interviewer to accurately record specific and often lengthy responses. Moreover, open questions in a mail survey may require more time and effort than respondents are willing to expend. For this reason, mail surveys should feature only a few open questions to keep the document from seeming too formidable.

Structured or closed questions are more common in surveys. Closed questions ask for specific answers, often provided by selecting yes or no, using a number-ranking scale or choosing among multiple-choice answers. Such uniform responses are easier to quantify, compare and contrast, and are particularly useful in surveying a large group. However, these questions also call for careful preparation; for instance, don't overlook the possibility of more than one correct answer to a multiple-choice question.

Consider the order in which questions appear. It's best to use the so-called inverted-funnel sequence, which puts specific, closed questions first and general, open questions last. This format gives respondents the opportunity to think carefully about specific issues before making sweeping value judgments. Also, closed questions are usually less intimidating for the respondent, and positioning them at the beginning of a survey encourages respondents to begin.

No matter how carefully the analyst prepares them, the set of survey questions should be tested. An initial test will identify problems before time and money are invested in a full-scale survey effort. An analyst may want to test a proposed survey on fellow staff members or to conduct a pilot test with a small number of respondents. After administering the survey to a pilot group, ask them to evaluate the clarity of the questions and the adequacy of the answer options, and revise the survey accordingly.

Processing surveys
The final step of surveying is data processing--the compilation and analysis of responses. Planning ahead can make this task easier.

Structured or closed questions are easily prepared for data processing. One option is "precoding," or assigning numbers to each response choice, which allows all the responses to be compared and analyzed in numeric terms. Open questions are more difficult and time-consuming to analyze. Give each response a careful reading to identify new issues and ideas. In addition, categorize similar responses to get a general picture of overall trends in the survey group.
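
As an illustration only, the short sketch below (written in Python; the question, response codes and answers are invented) shows how numerically precoded responses to one closed question might be compiled:

    from collections import Counter

    # Hypothetical precoding for one closed question:
    # 1 = "very satisfied", 2 = "satisfied", 3 = "dissatisfied", 4 = "no opinion"
    labels = {1: "very satisfied", 2: "satisfied", 3: "dissatisfied", 4: "no opinion"}

    # Responses as entered from completed surveys (illustrative data only)
    responses = [1, 2, 2, 3, 1, 4, 2, 3, 3, 2, 1, 2]

    tally = Counter(responses)
    total = len(responses)

    # Print each answer choice with its count and share of all responses
    for code in sorted(labels):
        count = tally.get(code, 0)
        print(f"{labels[code]:<15} {count:>3}  ({count / total:.0%})")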

Several types of computer software are available for processing surveys. For example, a CATI (computer-assisted telephone interviewing) program allows the interviewer to enter answers directly into a computer while a phone interview is in progress. If the responses are numerically precoded, survey results can be produced quickly, with no extra work. CAPI (computer-assisted personal interviewing) programs allow analysts using laptop computers to enter answers directly at the interview site. CADE (computer-assisted data entry) programs compile and quantify survey responses; a newer variant is the self-administered survey program, which allows respondents to fill out the survey on a computer, entering their own responses.

This is only a general guide to the art of surveying; for more specifics, consult the books referenced in the bibliography. Surveys are not easy, and they're never perfect. But surveying is a useful tool, and careful preparation can maximize its usefulness.
 


Sampling
Sampling is a time-saving research technique. The analyst uses a sample to make a judgment about a targeted population based on information gleaned from a small portion of that population. In a performance review, sampling can be used to select individuals to interview or survey or to sift evidence to support or refute possible conclusions and recommendations. Sampling is used when the analyst needs information that is not already available or when the reliability of already-compiled data is in question.

This chapter does not attempt to discuss sampling techniques in detail. However, it does provide enough information to allow analysts to determine whether they should refer to technical sampling manuals or other resources. The three key decisions to be made in sampling--what sampling method to use, the total population to be sampled and the sample size needed--are discussed below.

Determining the sampling method
Which sampling method is used can greatly affect the accuracy of the results. When doing rigorous statistical studies, an analyst must be able to stipulate with reasonable confidence that findings based on a sample will not vary from the total number or "universe" of items under study by more than a certain predetermined amount.

The time period associated with a sample also can be important. If a review covers an agency's performance over a given year, it makes sense to spread the sample over that entire time period, not just a few months. Otherwise, one risks selecting a sample that represents atypical conditions--a "peak" or "valley." Unless the time period selected is typical of the entire time period under review, the findings cannot safely be related to the universe.

Random sampling is one method used to ensure that a sample is representative. A sample unit is selected randomly when it has the same chance of being selected as any other unit of the universe. If all units of a given universe were placed in a box and selected by drawing, the sampling would be considered random.

Another way to achieve randomness is interval sampling. This is done by listing all units of a universe and selecting items at intervals. If the universe is an alphabetical list of a high school's 1992 graduates, for example, the analyst could simply select every third person on the list for the sample. Sampling is often done by assigning each sampling unit a number and selecting units using a manual or computerized random-number table.
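
For analysts with access to a computer, both techniques can be sketched in a few lines. The following illustration (written in Python; the list of graduates is invented) draws a simple random sample and an interval sample from the same universe, using a computerized random-number generator in place of a manual table:

    import random

    # Hypothetical universe: an alphabetical list of 300 graduates
    universe = [f"Graduate {i:03d}" for i in range(1, 301)]

    # Simple random sample: every unit has the same chance of being selected
    random_sample = random.sample(universe, k=30)

    # Interval sample: pick a random starting point, then take every third unit
    interval = 3
    start = random.randrange(interval)
    interval_sample = universe[start::interval]

    print(len(random_sample), "units in the random sample")
    print(len(interval_sample), "units in the interval sample")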

If the universe is a varied group with many significant differences, the analyst may resort to stratified sampling. This involves separating the universe into two or more logical layers or strata. For example, in studying the effect of a particular medication on patients ranging in age from birth to 90, an analyst might stratify the samples by age groups.
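
A minimal sketch of stratified sampling, continuing the illustrations above (written in Python; the patient records and age strata are invented), might separate the universe into layers and sample each layer separately:

    import random

    # Hypothetical patient records: (patient id, age)
    patients = [(f"P{i:04d}", random.randint(0, 90)) for i in range(1, 1001)]

    # Separate the universe into logical strata (age groups)
    strata = {"0-17": [], "18-44": [], "45-64": [], "65+": []}
    for pid, age in patients:
        if age < 18:
            strata["0-17"].append(pid)
        elif age < 45:
            strata["18-44"].append(pid)
        elif age < 65:
            strata["45-64"].append(pid)
        else:
            strata["65+"].append(pid)

    # Draw roughly 10 percent from each stratum (illustrative rate only)
    sample = {group: random.sample(members, k=max(1, len(members) // 10))
              for group, members in strata.items() if members}

    for group, members in sample.items():
        print(group, len(members), "sampled")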

A key rule of thumb: the more varied the universe, the larger the sample size needed to depict that universe accurately.

Much of the sampling done in standard performance reviews is relatively simple because such reviews are not meant to be rigorously scientific. Often, achieving a truly scientific sample would be costly, time-consuming and unnecessary to prove a point.

A number of software packages and experts are available to assist with sampling. Consider these options for a complex job requiring scientifically reliable results.

Determining the population to be sampled (the "universe")
The universe of a review is the total population from which the sample will be drawn. Sometimes it is relatively simple to identify this universe. To return to our earlier example, the universe would be the graduating class of a particular year at a particular high school--every person in that class at that school in that year. Of course, the analyst would need to define "graduating class." For example, the definition might not include persons who were in the class but failed to graduate.

Determining the sample size needed
Determining the sample size can be a very simple or a complicated endeavor, depending on how accurate the results must be. The sample size must be large enough to represent the entire population being sampled if the research is to be scientifically valid. Usually, a sample of between 5 and 20 percent of the population can be considered representative. If the total population is relatively small, use a larger percentage. Another rule of thumb: the more similar the units in the total population are, the smaller the sample size needed; the more varied the population, the larger the sample size needed.
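
These rules of thumb can be reduced to a simple planning calculation. The sketch below (written in Python; the population figures are illustrative and the percentages follow the guidance above) is a rough aid only, not a substitute for a statistically derived sample size:

    def rough_sample_size(population, varied=False, small_cutoff=500):
        """Rule-of-thumb sample size: between 5 and 20 percent of the population.

        Use the high end of the range for small or highly varied populations and
        the low end for large, uniform ones. A planning aid only.
        """
        if population <= small_cutoff or varied:
            rate = 0.20   # small or varied population: sample more heavily
        else:
            rate = 0.05   # large, uniform population: a smaller share will do
        return max(1, round(population * rate))

    print(rough_sample_size(300))                  # small population
    print(rough_sample_size(20000))                # large, uniform population
    print(rough_sample_size(20000, varied=True))   # large but varied population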
 


Conducting the analysis
Analysis involves compiling and interpreting facts, statements, observations, events and impressions gleaned during the research stage of a performance review. The key to effective analysis is the ability to see an organization from a fresh perspective. The analyst must find new ways to look at issues and cast new light on problems and opportunities. Again, you should maintain careful records of all analytical activities.

Analysis occurs throughout the review, but most analysis will occur in Phase II, Focusing and Issue Identification, and in Phase III, Issue Development.
 


Focusing and issue identification
Among the first analytical activities in a review is an assessment of the information collected and the selection of issue areas for further development. This phase is carried out by team analysts in consultation with the team leader.

Organize the information. Before beginning analysis of any information, the analyst should organize raw information by topics or functions. At this phase, review the information's accuracy and relevance. Accuracy involves cross-checking information between reports, verifying numbers with knowledgeable people and assessing the plausibility, detail, documentation, consistency and overall "ring of truth" of interview and survey results.

Weigh data in light of possible biases. Interviews, for example, often yield conflicting views, and the analyst should avoid jumping to conclusions before reviewing all available information.

Identify a list of issues. To determine which issues deserve more study, the analyst may develop standards or criteria for activities under review. By comparing the actual situation to an ideal--"what should be"--the analyst can identify issues that merit closer analysis.

The criteria set may be pre-existing ones developed by experts or simply some common-sense principles. Criteria may be quantitative or qualitative, precise or general. For example, a frequently reviewed area is an agency's policies for handling client complaints. The criteria for such a complaint process might include such factors as the timeliness with which the agency initially responds, how quickly the complaint is resolved and the client's level of satisfaction with the outcome.

Extensive analysis is not required in the issue identification phase; the goal is simply to identify a fruitful series of issues for review that offer opportunities for improvement.

In identifying issues, look for opportunities to increase revenue or reduce costs. One current trend is to charge fees to persons who benefit from a government service, rather than spreading the cost of the service to the entire public through general taxation.

Furthermore, keep the intended purpose of the program or organization being reviewed foremost in mind. For each activity under review, the analyst should ask if the activity is important to the intended purpose and if that purpose could be better served in another way. For example, if a program is intended to provide information to the public, but the public actually has to go through numerous steps to get the information, the analyst has probably found an area warranting close attention.

When selecting issues for review, pay attention to "quality improvement." The "Total Quality Management" movement has caused many organizations to rethink their activities in terms of how well they serve their clients.

Issue identification can be accomplished by involving the review staff in a brainstorming session. In this session, the project leader should allow a free exchange of opinions and information. Since the goal is to develop issues for possible review, reviewers should be encouraged to identify any activities with potential problems or room for improvement. Each reviewer should articulate why an issue deserves more study.

Prioritize the issues. Once an analyst has compiled a list of issues, they should be placed in order of priority. Those that clearly fall outside the review scope should be eliminated. If an issue seems likely to require more time or expertise than is available, either scale it down or drop it. Make sure the final issues list represents a reasonable workload for the review team. Once the list is completed, determine which team members will develop each issue.

As a record of his or her work and a tool for discussion with management, the analyst should prepare a briefing sheet on each issue. This sheet should include background information on the topic, a statement of the suspected problem and possible solutions or recommendations. Make this briefing sheet a working document that can be updated during the course of the analysis. A tabbed notebook or computer file for storing each briefing sheet and related materials is recommended. These materials eventually will serve as a permanent record of the review work.
 


Issue development
Developing issues is the core of the analytical work. For each issue, identify suspected "gaps" between the existing situation and the ideal by studying the existing situation more closely; clarify problems, if any; design alternative approaches; conduct additional interviews or surveys of the staff in charge of the area, and complete any other analysis needed to substantiate the recommendations.

The steps in issue development are:

  • Planning the analysis
  • Gathering and summarizing detailed information
  • Analyzing the information
  • Testing the findings
  • Developing recommendations

This phase is performed primarily by team analysts and may be subject to the approval of the team leader.

Plan the analysis. The analysis plan, which should be incorporated into the master project plan, should describe specifically how each issue will be developed. It should include detailed information about what will be evaluated, what analytical tools will be used and what information will be gathered.

What the analyst chooses to evaluate will influence the choice of analytical tools and information to be gathered. For example, the analyst might evaluate a program based either on "performance" or "customer satisfaction." The plan should document which criterion fits the objective and how it can be measured. Hard-to-measure concepts such as satisfaction may be estimated using both direct and indirect information; for instance, indirect information such as the number of products returned and written complaints received may prove more revealing than direct interviews.

Gather and summarize detailed information. Gather detailed information based on the plan of analysis. When determining which information-gathering techniques to use, such as interviews and surveys, the analyst should consider what type of analysis is desired. The information-gathering technique used should be compatible with content and format needs. For efficiency, determine all the types of information needed for all issues to be reviewed; a survey, for instance, may be used to collect information for several issues.

Analyze the information. Examine the data, identifying possible problems and exploring their causes to determine the best possible solutions.

To determine whether a program or activity is proceeding in the most efficient manner possible, compare pertinent aspects of the program to some standard, such as expected performance or goals, performance in a similar program, previous performance or performance standards required by legislation. (Such comparisons--"benchmarking"--are discussed in more detail under "Analytical tools" later in this chapter.)

TPR analysts assigned to review a state agency examine the activities of similar agencies in other states. This is especially useful in considering a new program or approach because other states may provide appropriate models for comparison. By analyzing the experience of other states, the analyst can avoid pitfalls and help ensure that a recommendation's implications have been addressed.

When analyzing a program or policy for potential problems, always consider what the "customer" thinks. One way to do this is to review complaints received by the organization. Complaints can be a valuable source of information. Consider conducting customer surveys or meeting with clients to learn what sort of service they expect. Customers tend to be most concerned about issues such as timeliness, accuracy and responsiveness. By determining exactly what a program's customers consider to be timely, for example, the analyst will learn how the process needs to change.

Another analytical step is interpreting your findings and determining their implications for the agency or program under review. To do so, you must fully understand the "whys" behind the current system. You need to understand the legislative intent behind the program's statutory authority, as well as the evolution of any legislative changes to that authority. Each program or aspect of a program may have a vested constituent group and represent hard-fought battles of the past. Interpreting involves looking for patterns, strengths, weaknesses, best practices, barriers, vulnerabilities, inefficiencies and emerging problems. After reviewing the information, make some value judgments. Collaboration with experts and impartial third parties will help to ensure accuracy and objectivity.

At times, the findings will lead the analyst to drop issues, if the analysis does not sufficiently substantiate that a problem exists or that there are opportunities for improvement.

Test the findings. Once recommendations are developed, make sure the evidence supports the conclusions. To test the adequacy of the evidence, ask the following questions:

Is the evidence sufficient? Is there enough convincing evidence to lead a prudent person to the same conclusion as the analyst?

Is the evidence competent? Evidence should be reliable and substantive; an unsupported oral statement that certain conditions exist is less substantive as evidence than copies of memoranda or other documents that clearly demonstrate that the conditions exist.

Is the evidence relevant? Evidence should have a logical and sensible relationship to the issue. Would a prudent person clearly recognize this relationship?

Answering these questions should prepare analysts to respond to inquiries from agency or program heads who review their work. If possible, ask personnel from the organization under review to examine a draft of the findings. The organization should be able to verify that the operation has been explained accurately. Putting the work to this test gives analysts a chance to correct errors or refine the fact presentation before final publication.

However, remember that cooperation and information exchanges may be difficult to maintain if the agency under review anticipates unfavorable results or undesired changes, so use discretion in sharing findings. Personnel in the organization under review will be particularly sensitive to any opinions or unsubstantiated conclusions in the draft.

Develop recommendations. An analyst should develop recommendations when his or her analysis identifies changes that could yield measurable benefits. These benefits can be changes that raise additional revenue, save money, increase or improve the quality of services, increase efficiency or effectiveness, improve treatment of customers, the public or employees, increase accuracy or provide additional information. Analysts must be able to substantiate the benefits of their recommendations.

In developing a recommendation, an analyst should assess all of its implications, including fiscal implications--the resulting costs and savings or benefits over time. TPR's three major reports for the Legislature calculated savings and costs and estimated increases or decreases in staffing over a five-year period.

In estimating fiscal implications, consider direct fiscal changes such as employee wages, fringe benefits and even materials and supplies, if substantial. (Fringe benefits are a major governmental expense and should always be considered in any estimate; in Texas, every dollar of state salary saved produces an additional 30 cents in savings on fringe benefits.) If the recommendation doesn't target specific positions for reduction, use average salary figures. For example, if a recommendation would reduce an agency's staff of auditors by five, base the estimate on the agency's average auditor salary.
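
As a worked illustration, the sketch below (written in Python; the average salary is invented, while the five positions and the 30-cent fringe rate come from the discussion above) estimates the annual savings from eliminating five auditor positions:

    # Illustrative figures -- substitute the agency's actual averages
    positions_eliminated = 5
    average_auditor_salary = 42_000   # hypothetical average annual salary
    fringe_rate = 0.30                # 30 cents of fringe savings per salary dollar (Texas)

    salary_savings = positions_eliminated * average_auditor_salary
    fringe_savings = salary_savings * fringe_rate
    total_annual_savings = salary_savings + fringe_savings

    print(f"Salary savings:         ${salary_savings:,.0f}")
    print(f"Fringe benefit savings: ${fringe_savings:,.0f}")
    print(f"Total annual savings:   ${total_annual_savings:,.0f}")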

Indirect costs, such as rent and utilities, also should be calculated if they would be affected by a recommendation. Document the sources of cost information, any assumptions and all calculations.

Conduct a cost/benefit analysis. A cost/benefit analysis compares the costs of implementing a recommendation against its immediate and future benefits. Obviously, a recommendation's additional revenue or savings should exceed its costs. The trick is in quantifying a benefit's value; some benefits cannot be measured, but they should be listed in the report. An admitted weakness of the cost/benefit analysis is the subjectivity associated with quantifying certain benefits. For instance, the increased feeling of safety a town might enjoy by purchasing a new fire truck is a real benefit, but not one likely to sway a hard-pressed city budget officer.
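
Where the costs and benefits can be quantified, the comparison itself is straightforward arithmetic. A minimal sketch (written in Python; all dollar figures are invented) compares one-time and recurring costs against estimated savings over the five-year period TPR uses in its reports:

    # Illustrative five-year cost/benefit comparison for one recommendation
    one_time_cost = 150_000          # e.g., new equipment or system changes
    annual_recurring_cost = 20_000   # e.g., ongoing maintenance
    annual_savings = 95_000          # estimated yearly savings or new revenue
    years = 5                        # TPR reports estimate over a five-year period

    total_cost = one_time_cost + annual_recurring_cost * years
    total_benefit = annual_savings * years
    net_benefit = total_benefit - total_cost

    print(f"Five-year cost:    ${total_cost:,.0f}")
    print(f"Five-year benefit: ${total_benefit:,.0f}")
    print(f"Net benefit:       ${net_benefit:,.0f}")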
 


Analytical tools
Performance analysts may use any of a number of analytical "tools." Some of the most useful analytical tools are:

Benchmarking/best practices
One of the most productive analytical activities for issue development is the study of ways in which others have designed and performed similar activities. The experiences of others provide insight into what works and can yield data that will help to substantiate the recommendations.

The key is to compare the process with the best of its type. For example, one area of the Texas criminal justice system addressed by TPR's review was the state's prison industries; TPR compared the Texas Department of Criminal Justice's program with a highly successful program in Washington state. The Xerox Corporation calls this approach "Competitive Benchmarking"; General Electric uses a similar "Best Practices" program; IBM's version of this process is called "Best of Breed," while a 1984 article in the Harvard Business Review calls it "Service Blueprinting." Ultimately, all such approaches involve understanding the process under review and comparing it to a successful counterpart. Moreover, the benchmarking approach is most successful when applied to an organization's most important processes.

Benchmarking tends to be labor-intensive, requiring a thorough mapping of each step in the process under study, a subsequent mapping of the "best" process and a final identification of variations between the two. Because it is a process, and not a product, being compared, the analyst need not select an identical company or agency to review. When Xerox wanted to improve its system for processing customer orders, it chose to use mail-order sales giant L.L. Bean as a benchmark. L.L. Bean was processing orders three times faster, and Xerox wanted to dissect L.L. Bean's best practices and find opportunities to improve its own operations. By identifying certain "fail points" where deficiencies exist, the analyst can concentrate attention on areas offering maximum opportunities for improvement.
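
The mapping-and-comparison step lends itself to a simple tabulation. The sketch below (written in Python; the process steps and processing times are invented) lines up each step of a process under review against its counterpart in a benchmark process and flags the largest gaps as candidate "fail points":

    # Hypothetical step-by-step comparison: days required at each step
    own_process = {"receive order": 2, "verify customer data": 5,
                   "check inventory": 4, "ship order": 3}
    benchmark = {"receive order": 1, "verify customer data": 1,
                 "check inventory": 2, "ship order": 2}

    gaps = {step: own_process[step] - benchmark[step] for step in own_process}

    # Largest gaps first -- these are the candidate "fail points"
    for step, gap in sorted(gaps.items(), key=lambda item: item[1], reverse=True):
        print(f"{step:<22} gap of {gap} day(s)")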

Benchmarking: questions to ask

1. Planning
* Benchmark what?
  - Does the topic reflect an important business need?
  - Is the topic an area in which additional information could influence plans and actions?
  - Is the topic significant in terms of costs or important non-financial indicators?
* Who to look to for best practices?
  - Do exact models for comparison exist?
  - Are there leaders in unrelated areas?
  - What sources do you have access to?
2. Data Collection
* What information is already available on the subject?
* What characteristics are to be measured?
* How do you measure performance?
* How do you perform the activity?
* How does the benchmark organization perform it?
3. Analysis
* Is there a competitive gap?
* Does the benchmark entity perform better? Why?
* What might improve performance?
* What should be avoided?
* How can you apply what you have learned?

Brainstorming "Brainstorming" is a technique for generating useful ideas through open, freewheeling discussion among team members. Brainstorming is intended to expand available alternatives, look beyond obvious solutions, encourage innovation, shift points of view, challenge tradition, reduce inhibitions and tap the team's creative resources.

The three basic brainstorming methods include the unstructured approach, in which everyone contributes ideas spontaneously, with a designated scribe or "facilitator" recording them; a structured format, in which each team member takes a turn at presenting ideas; and a written, or "pen-and-paper" method, in which participants record their ideas on slips of paper and submit them to a facilitator or team leader.

After choosing an appropriate brainstorming method, the team leader should state a problem or discussion topic. This topic or problem should be clear and concise. Place the statement on a flip chart so everyone can refer to it and then solicit ideas from the group members.

Some tips for successful brainstorming include: Never criticize or evaluate an idea when it is first presented, and record all ideas; appoint a good facilitator to ensure that everyone participates and that questions that need to be asked are actually addressed; keep the setting informal; encourage offbeat and unconventional ideas; combine and build on ideas, and move quickly from one member to the next. The brainstorming session is complete when all the participants' ideas are recorded.

After all ideas are recorded, select the most fruitful alternatives, either by having participants vote for the best ideas or by reaching consensus through discussion. The top choices should be discussed in detail. Try listing each idea's advantages and disadvantages.

Flow charts
Flow charts are analytical tools commonly used to identify problems. They illustrate the flow of an activity, a process or a set of interrelated decisions or communications from beginning to end.

Flow charts can be applied to anything from the processing of a tax return to the flow of materials in a manufacturing process. The major benefit of flow charting is that the process forces analysts to understand all the steps of a process and to ask questions about the sequence of events in a process.

Flow charts are prepared from information gathered through interviews or observations. If an organization under review has already prepared a flow chart of an activity, verify the steps involved. Activities should be shown in sequence and significant time lapses during and between processes should be noted. It is best to use common, agreed-upon flow chart symbols so that the work will be readily recognizable to team members.

The layout of a flow chart can be either vertical or horizontal. After the chart has been drafted, its contents should be reviewed by those who provided the information involved. This review will often produce modifications to the flow chart. Once the chart's accuracy is verified, the analyst is ready to analyze the process it portrays.

In this analysis, look for duplicated activities, activities that should be performed but aren't, unnecessary activities, misuses of time and any unusual occurrences. For example, look for any obvious bottlenecks in the process--anything that interrupts the orderly and efficient use of personnel and resources to produce the desired end.

Cause-and-effect ("fishbone") diagrams Cause and effect diagrams show the relationship between a problem and possible factors creating or influencing it. They are used to isolate, identify and verify the actual causes of a problem in a process or activity. Such diagrams often are called "fishbone" diagrams because of their appearance.

The steps in constructing a cause-and-effect diagram are:

1. Determine a problem statement and categorize four or five possible causes of the problem. Major categories of causes include policies, procedures, people, equipment, work environment, measurement, management or money. Use any category that fits the situation and helps people think creatively.

2. Construct a cause-and-effect diagram. Place the problem in a box on the right side of a flip-chart page and draw a horizontal line (the fish's "spine") leftward from the box. List two or three major cause categories above the horizontal line and a similar number below, connecting them with lines (the fish's "bones") to the "spine."

3. Conduct a brainstorming session to determine the specific factors the team believes to be causes of the problem in question; as these factors are identified, list them under their appropriate major category.

4. After all ideas are presented and understood, the group identifies the most likely causes (either by voting or group discussion). Causes that are quantifiable should be measured. This will provide a basis for prioritizing the causes.

Check sheet
A check sheet is used to compile, summarize and track observations, interview results or other data. It can help translate opinions into facts by showing how often an event occurs or the amount of time an activity requires.

Visually, a check sheet is simply a series of rows and columns denoting activities and categories. Creating one involves the following steps:

1. Determine the activity you wish to track.

2. Design a form that is clear and easy to use, making sure that all columns are clearly labeled, with enough space to enter the data.

3. Record the data on the form in a consistent manner.

4. Analyze the data.
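
Following the steps above, a check sheet can also be kept electronically. The sketch below (written in Python; the activities and observation days are invented) tallies how often each activity was observed on each day:

    from collections import defaultdict

    # Each observation is (day, activity observed) -- illustrative data only
    observations = [
        ("Mon", "phone inquiry"), ("Mon", "walk-in client"), ("Mon", "phone inquiry"),
        ("Tue", "phone inquiry"), ("Tue", "data entry"),
        ("Wed", "walk-in client"), ("Wed", "phone inquiry"), ("Wed", "phone inquiry"),
    ]

    # Rows are activities, columns are days
    sheet = defaultdict(lambda: defaultdict(int))
    for day, activity in observations:
        sheet[activity][day] += 1

    days = ["Mon", "Tue", "Wed"]
    print(f"{'Activity':<16}" + "".join(f"{d:>5}" for d in days) + "  Total")
    for activity, counts in sheet.items():
        row = f"{activity:<16}" + "".join(f"{counts.get(d, 0):>5}" for d in days)
        print(row + f"{sum(counts.values()):>7}")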

Scatter diagram
The scatter diagram is another tool for determining cause-and-effect relationships. A scatter diagram charts two variables on vertical and horizontal axes to determine whether there is a relationship between them--typically, whether one variable is a cause of the other.

An example of a use for the scatter diagram could be an analysis of the relationship of overtime to processing errors among workers. To create a scatter diagram:

1. Collect the data and construct a data table. For the example cited above, the analyst would assemble overtime hours worked and errors made over a given time period for a selected group of employees.

2. Draw the horizontal and vertical axes of your diagram, with values rising as the reader moves up and to the right. Place the possible "cause" variable on the horizontal axis (in this case, overtime worked) and the "effect" variable on the vertical axis (the number of errors made).

3. Plot the data on the diagram.

4. Interpret the diagram. A cause-effect relationship is indicated if the plotted points form a clustered pattern. The direction and tightness of this cluster determines the relationship between the two variables. The more the cluster resembles a straight line, the stronger the relationship between the variables. If the cluster rises diagonally to the right, the suspected factor appears to be a cause of the problem. If the cluster falls diagonally, the suspected cause actually appears to discourage or suppress the problem. If the data points are scattered over the whole diagram, no correlation between variables is indicated.
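
For analysts working from the data table rather than graph paper, the direction and tightness of the cluster described in step 4 can also be summarized numerically as a correlation coefficient. The sketch below (written in Python; the overtime and error figures are invented) computes that coefficient for the example above:

    from statistics import mean, pstdev

    # Illustrative data table: overtime hours worked and errors made per employee
    overtime_hours = [0, 2, 4, 5, 8, 10, 12, 15]
    errors_made    = [1, 1, 2, 3, 3, 5, 6, 8]

    # Pearson correlation coefficient: covariance divided by the product of
    # the two standard deviations
    mx, my = mean(overtime_hours), mean(errors_made)
    cov = sum((x - mx) * (y - my)
              for x, y in zip(overtime_hours, errors_made)) / len(overtime_hours)
    r = cov / (pstdev(overtime_hours) * pstdev(errors_made))

    print(f"Correlation coefficient: {r:.2f}")
    # Near +1: the cluster rises to the right (a possible cause).
    # Near -1: the suspected cause appears to suppress the problem.
    # Near 0: no correlation is indicated.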

Pareto charts
A Pareto chart (named for the 19th-century economist who devised this type of analysis) is a vertical bar graph used to determine the most serious of a group of problems, so that priorities may be set. This analysis is based on the assumption that problems have different levels of importance, and that organizations always face more problems than their time and resources can address. Pareto analysis is responsible for the famous "80/20" rule of thumb, which holds that about 80 percent of the problems in any organization stem from about 20 percent of the possible causes. The review can focus on the most vital problems by using a Pareto chart.

Suppose an analyst suspects that an organization takes too much time to issue permits. He or she will want to identify problems causing this delay and correct the most significant ones. Studying the problem may indicate that the highest number of delays occur because of incorrectly completed applications. Now the review can focus on improving the accuracy of applications to resolve the most significant reason for delays. This is the sort of judgment facilitated by Pareto charting.

To construct a Pareto chart:

1. Select the issues or causes to be ranked.

2. Select a measure for comparison, typically frequency (number of occurrences) or cost. If you do not have a direct measure for a cause or problem, try using a percentage.

3. List the issues or causes from left to right on the horizontal axis in order of decreasing frequency or cost.

4. Analyze the chart and choose the most significant issues for review.
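
Using the permit-delay example above, a rough Pareto ordering can be produced with a few lines of code. The sketch below (written in Python; the causes and counts are invented) lists causes from most to least frequent with cumulative percentages:

    # Illustrative counts of permit delays by cause over a review period
    delay_causes = {
        "incorrectly completed application": 120,
        "missing supporting documents": 45,
        "payment problems": 20,
        "staff data-entry backlog": 10,
        "other": 5,
    }

    total = sum(delay_causes.values())
    cumulative = 0
    # List causes in decreasing order of frequency, with cumulative percentages
    for cause, count in sorted(delay_causes.items(), key=lambda item: item[1], reverse=True):
        cumulative += count
        print(f"{cause:<36} {count:>4}  {count / total:>5.0%}  cumulative {cumulative / total:.0%}")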

Histograms
A histogram is a bar chart mainly used to show the frequency of certain activities. In a histogram, the horizontal axis signifies some quality being measured, while the vertical axis measures frequency. For example, an analyst could use a histogram to chart employee use of sick leave. To construct a histogram for this purpose:

1. Gather data.

2. Divide the data into manageable categories. The number of categories (the bars in the graph) will determine how much of a pattern will be visible. For our example, appropriate categories might be zero to four days' leave used per year, five to nine days' leave per year, 10 to 14 days' leave per year and 15 or more sick days used per year.

3. Construct the histogram based on your data, with the vertical axis representing frequency--in this case, the number of employees. The horizontal axis would represent the categories of leave use as established above.

4. Analyze the histogram to determine whether employee sick-leave patterns seem unusual or problematic.
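
The sick-leave example can be tabulated the same way. The sketch below (written in Python; the leave data are invented) counts employees in each category defined in step 2 and prints a simple text version of the histogram:

    # Illustrative data: sick-leave days used per employee in one year
    days_used = [0, 1, 2, 3, 3, 5, 6, 6, 7, 8, 10, 11, 12, 14, 15, 18, 2, 4, 9, 13]

    categories = {"0-4 days": 0, "5-9 days": 0, "10-14 days": 0, "15+ days": 0}
    for days in days_used:
        if days <= 4:
            categories["0-4 days"] += 1
        elif days <= 9:
            categories["5-9 days"] += 1
        elif days <= 14:
            categories["10-14 days"] += 1
        else:
            categories["15+ days"] += 1

    # Frequency (the vertical axis) rendered here as a row of marks per category
    for label, count in categories.items():
        print(f"{label:<12} {'#' * count}  ({count} employees)")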

