Definitions in this glossary apply solely to the context of the evaluation of research entities, and relate to the reference documents of KEIS. They are not intended to be exhaustive; rather, they aim to provide a guide for readers of KEIS documents. Asterisks indicate terms that have separate entries in this glossary.
Academic The adjective academic, particularly applied to the *appeal and *reputation of *research entities, describes a context for scientific activity which is structured around universities and research organisations. By contrast, a context that does not involve this form of structuring is termed non-academic. Accordingly, partnerships between a research entity and a private company or a regional authority, for example, can be qualified as non-academic.
Applied (research) Applied research is research focusing on scientific and technological questions associated with socioeconomic issues pertaining to specific sectors (such as energy, the environment, information, health or agriculture). Its aim is not only to increase knowledge but also to produce findings and applicable innovations that are likely to have an impact on society (its meaning is therefore broad, closer to the French term “finalisé”, of which “recherche appliquée” is just one part).
Appraisal We call appraisal the *results and, in general, all of the activities and *scientific outputs of a research entity during the past period of evaluation (currently the last 5 years). The appraisal is based on the objectives and strategy that the research entity had developed in its previous scientific plan.
Attractiveness Attractiveness, or appeal (in effect, the ability to attract), can be defined as a *research entity’s ability to promote its activities before an *academic or a non-academic community. It therefore reflects the entity’s ability to make itself attractive in its field.
Bibliometrics Quantitative analysis and statistics on the scientific publications of a *research entity (media, authors, citations, institutional affiliations, etc.).
Characterisation The characterisation elements of a *research entity’s activities and operation are provided by *observable facts, which enable the evaluation to be based on data.

Clinical investigation centre (CIC) Clinical investigation centres are infrastructures built for the purpose of developing *clinical research, such as the development of new treatments or investigations intended to gain knowledge of a disease. CICs are supervised jointly by the French Ministry in charge of Health and INSERM.
Clinical (research) Clinical research (from the Latin clinice, meaning medicine practised at the sickbed) is research that is directed at investigating new treatments or new techniques.
Component We refer to components when we talk about the way in which *research units are structured. A *team, a *theme, a department and a focus are different types of components.
Context The term context identifies the various aspects of the situation (both past and present) and environment of a research entity being evaluated. In this regard, the context must be viewed as a parameter influencing qualitative evaluation. The history, identity and missions of the *research entity, together with its scientific and educational environment, its regional situation and its social, economic and cultural environment, make up this context.
Descriptor The term descriptor is sometimes used to refer to scientific results and activities that allow the evaluation to be based on evidence, in other words on data. In the context of scientific evaluation, descriptor therefore designates the function performed by an *observable fact.
Disciplinary group Group of *disciplines used for structuring *scientific domains.

Discipline A field of scientific knowledge. In the evaluation of *research entities conducted by KEIS, disciplines are grouped into *disciplinary groups (or disciplinary fields) within each *scientific domain.
Domain (scientific, disciplinary) KEIS lists three scientific domains, each organised into disciplinary fields.
Sciences and technologies (ST): Mathematics; Physics; Earth and space sciences; Chemistry; Engineering sciences; Information and communication sciences and technologies.
Life and environmental sciences (SVE): Biology/Health (sub-fields: molecular biology, structural biology, biochemistry; genetics, genomics, bio-informatics, systems biology; cell biology, animal development biology; physiology, physiopathology, endocrinology; neurosciences; immunology, infectious diseases; clinical research, public health); Ecology/Environment (sub-fields: cell biology, plant development biology; evolution, ecology, environmental biology; life sciences and technologies, biotechnology).
Human and social sciences (SHS): Markets and organisations (sub-fields: economics, finance/management); Norms, institutions and social behaviour (sub-fields: law; political science; anthropology and ethnology; sociology, demography; information and communication sciences); Space, environment and societies (sub-fields: geography; town planning and land development, architecture); Human mind, language, education (sub-fields: linguistics; psychology; educational sciences; sport and exercise sciences and techniques); Languages, texts, arts and cultures (sub-fields: languages/ancient and French literature, comparative literature; foreign languages and literature, regional languages, cultures and civilisations; arts; philosophy; religious sciences, theology); Ancient and modern worlds (sub-fields: history, history of art, archaeology).
Environment (social, economic, cultural) The social, economic and cultural environment constitutes a fundamental piece of data for evaluating *research entities, as it enables the interactions of a collective research entity with society, taken in its non-*academic dimension, to be assessed. These interactions depend on the nature and purpose of the research entity’s activities. The main types of facts related to these interactions are, for example: outputs for non-academic institutions such as regional authorities or enterprises (e.g. study reports, patents, licences, publications in professional journals, etc.); involvement in partnerships with cultural institutions, industrial groups, international organisations, etc.; the impact of the entity’s activities on the economy and society, etc.

Evaluation [see Evaluation criterion]
Evaluation criterion A criterion is what is considered relevant in evaluating *research entities. KEIS reviewing work is based on six evaluation criteria: 1. *Scientific outputs and quality; 2. *Academic reputation and attractiveness; 3. Interactions with the social, economic and cultural *environment; 4. Organisation and life of the entity; 5. Involvement in *training through research; 6. *Strategy and research perspectives for the next evaluation period.
Evaluation field The evaluation field (field of evaluation) is the scope of a *criterion, i.e. the diverse parameters that the evaluator has to assess. The evaluation field of the *scientific outputs and quality criterion, for example, includes breakthroughs, findings, problems, experimental factors leading to scientific achievements, and the originality, quality and reach of the research.
Evaluative intention Term denoting the application points of the *evaluation criteria implemented. Evaluative intention is defined by the specification of the *evaluation field covered by each criterion, and by that of the *observable facts and *quality indicators relating thereto.

Executive summary Brief description of the activities and objectives of a research entity, with a concise definition of its field and profile.
Expert The term expert refers to a *peer (a researcher with a recognised level of scientific competence in a disciplinary field) in charge of evaluating a research entity. Experts work in *committees. They are chosen for competences deemed appropriate to the entity being reviewed: its disciplinary scope, research purposes, possible interdisciplinary dimension, and so on.
Expert committee In order to evaluate *research entities, *experts work in committees made up of *peers chosen for their scientific competences. The expert committee collectively evaluates the entity’s scientific production and projects in their context, and produces an evaluation report (*appraisal and research perspectives).
Exploitation This term has two different meanings, which can sometimes lead to confusion when discussing evaluation. The first is a common, broad meaning, in the sense of “showing to advantage”, which applies to an undefined range of items. The second is more specialised, referring to a series of activities and initiatives that are likely to increase the *reputation and *appeal of the research and its impact on the social, economic and cultural environment.
Factual data [see Observable fact]

Federated units *Research labs grouped together around shared scientific topics or equipment. Federated labs may belong to different research institutions and may be multidisciplinary. They help to identify dominant scientific centres and/or to pool facilities and personnel. At CNRS, federated organisations are “federated research institutes” (IFRC), which bring together specific CNRS labs located in a single place, or research federations (FR), which group together labs working on joint research subjects. Federated units remain independent.
Focus [see Component]

Governance Originally from the French word which emerged around the 13th century, meaning “government”, “jurisdiction” or “power”, particularly to refer to the reach of a territory placed under the jurisdiction of a bailiff, i.e. a governor tasked with running this territory, this term then entered the English language, initially to denote the way in which feudal power was organised. At the turn of the 21st century, with the development of the notion of globalisation, the word came to refer to a process of organising and administering human societies that is supposedly respectful of diversity and rooted in sharing and a community of interests. In the economic and political spheres, the term governance identifies a flexible system for managing collective structures (states, companies, international organisations, etc.). Having swiftly entered everyday vocabulary, the word has undergone a significant semantic extension. It has been used in the field of scientific evaluation to identify a method for directing and managing a research lab. Largely incongruous with this field of activities, where its meaning remains ambiguous, it has been replaced by the term *management in KEIS’ standards.
Impact The term impact is frequently encountered in the vocabulary of evaluation. Whatever the scope attributed to it (scientific, socio-economic or cultural impact, for example), it should be understood as an effect (positive or negative) of a *research lab’s activities on a given aspect of its *context.
Indicator An indicator is based on facts obtained during a comparative evaluation. In the field of research evaluation, indicators are most often described as sets of *observable facts serving as *descriptors applied to scientific *results or activities. In this regard, they are generally used to obtain a *metric of a research lab’s performance and are part of the *quantitative model of scientific evaluation.
Innovation Broadly speaking, innovation is a creative process of scientific or technological transformation that either partially changes what has been known to date or makes a clear breakthrough. This transformation leads to new concept(s) that may concern a theoretical framework, methodology, process, technique, product, and so on. Innovation often brings about a change in people’s behaviour and is associated with values linked to performance, improvement or simplification of an activity or set of activities. In the industrial field, the term innovation more specifically refers to the evolution or creation of a process, technique or product. In this sense, innovation is often associated with the notion of efficiency (e.g. a competitive advantage arising from this transformation process).

Interdisciplinarity The term interdisciplinarity identifies the interactions and cooperation of several disciplines around common projects and subjects. For each discipline involved, work carried out within an interdisciplinary context opens up research prospects that are not limited to its respective field of study. Such work makes use of data, methods, tools, theories and concepts from different disciplines in a synthesis in which the role of the disciplinary components goes beyond simple juxtaposition. Indicators of this integration include, in particular: combinations of models or representations that unify disparate approaches; partnerships or collaboration, rather than a mere exchange of services, with coordinated investment of resources and a cooperative style of organisation; and the creation of a common language leading to the revision of initial hypotheses, a broader understanding of a problem, the opening of new avenues and the development of new knowledge.
Management This term primarily applies to the management and running of a research entity by its manager(s). The lab’s method of management is evaluated under the criterion “Organisation and life of the entity”. KEIS substituted management for *governance.
Metrics The term metrics is used in the context of the quantitative evaluation of the performances of a research entity. The metrics-based evaluation model aims to go beyond a purely subjective approach by producing numerical *indicators whose robustness and generality are supposed to guarantee reliability. The relevance of metrics for evaluation nevertheless depends on a precise definition of the scope of the indicators and on their appropriateness for the evaluated entity.
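As a purely illustrative sketch, not part of the KEIS reference documents, the short Python example below computes one widely used bibliometric indicator, the h-index, from a hypothetical list of citation counts. It shows how a numerical indicator can be derived from *observable facts such as publications and citations; its relevance still depends, as noted above, on a precise definition of its scope and on its appropriateness for the evaluated entity.

    def h_index(citation_counts):
        # Largest h such that the entity has at least h publications
        # cited at least h times each (one possible numerical indicator).
        ranked = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Hypothetical example: five publications with these citation counts.
    print(h_index([25, 8, 5, 3, 0]))  # prints 3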
Multidisciplinarity Multidisciplinarity usually refers to a juxtaposition of disciplinary perspectives that broadens the field of knowledge by increasing the amount of data, tools and methods available. In the multidisciplinary perspective, the disciplines maintain their boundaries and identity: accordingly, a particular discipline, which generally steers the others, uses the methodology and tools of one or more other disciplines to address a question or make progress in a research project that is specific to its disciplinary field.
Observable fact An observable fact is a factual piece of data (e.g. an activity or a *result) that allows the evaluator to base his or her judgement on evidence. Observable facts therefore act as *descriptors in the evaluation process. For example, the main types of observable facts relating to the criterion “*Scientific outputs and quality” are: publications; lectures and other oral forms of communication without publication; other scientific outputs specific to the field; tools, resources, methodologies, etc.
Peer review [see Peers]

Peers In the field of scientific evaluation, the term peers refers to researchers in a field with a recognised level of scientific expertise. Peer review denotes a qualitative assessment applied to individual research (e.g. an article submitted to an editorial committee) or to collective research (e.g. the scientific outputs of a research entity).
Performance This term refers to the level of scientific activities of a research entity, assessed on the basis of the six *evaluation criteria defined by KEIS. Performance may be subject to both *quantitative and *qualitative evaluation.
Proximity The notion of proximity is used as a *characterisation element of the interactions between disciplines. Proximity is estimated from the closeness of ways of thinking, paradigms and concepts, types of data, and observation and measurement tools. Proximity also assesses the degree of interaction between disciplines in a corpus of scientific texts (such as guidance texts, project proposals or publications), by considering their content, their media or the authors’ experience in the discipline.
Qualitative This adjective is applied to an evaluation model based on the consideration of quality *indicators. In contrast to quantitative evaluation, which relies on *metrics, qualitative evaluation goes beyond metrics alone, and particularly takes into account the context of the evaluated entity.
Quality indicator A quality indicator helps the evaluator in the qualitative assessment. For example, the main quality indicators relating to the criterion “*Scientific outputs and quality” are: the originality and scope of research, and progress in the field; breakthrough theories and methodologies, paradigm shifts, and the emergence of new problems or research avenues; academic impact (citations, references, etc.); multidisciplinarity; international engagement; reputation and selectivity of the journals used for publication, etc. In *peer evaluation, quality indicators are founded on elements that are widely accepted by scientific communities. As such, they establish a standard, or at least a set of references, on which a discussion can be based within expert committees and/or between evaluated groups and their evaluators.
Quantitative This adjective applies to an evaluation model that gives precedence to the *metrics of the performance of a research entity. The quantitative model is based on a normative concept of evaluation that overvalues raw numbers to the detriment of a proper analysis of their significance and value in the context of the evaluated entity.
Reputation Reputation is one of the criteria for evaluating *research entities, closely correlated with the *appeal criterion. The two notions describe the quality of being recognised by *academic and/or non-academic communities. Reputation and appeal have a very positive *impact on the community, the former being outgoing and the latter ingoing.
Research entities Research entities include *research units, unit *components such as *teams or *themes, *Federated units, *Clinical investigation centres, etc.
Research unit A research entity accredited by a research institution or a university (for example an “UMR” or an “EA”) organised around a scientific programme that is the subject of a contract with the research institution. The personnel of research units are researchers, professors, engineers, technicians and administrative staff. A research unit can be divided into *teams, *themes, departments or “focuses”, or be made up of a single *component, depending on the nature of its research programme and workforce.

Result Type of *observable fact under the *scientific production criterion, brought about by the *strategy defined by a *research entity. A result can be a discovery or any other significant breakthrough in the field of basic or *applied research. Results constitute the essential basis of the *appraisal of a research entity.
Risk-taking A risk in a scientific project can be a negative point when it constitutes a danger or a threat (e.g. the uncertain feasibility of a research programme, which may indicate a mismatch between an entity’s actual resources and its short- and medium-term strategy). But risk-taking may be a positive point when it has an important potential outcome (e.g. a programme leading to scientific *innovations, likely to boost the institution’s *appeal and *reputation and enabling partnerships).
Scientific outputs *Evaluation criterion of a *research entity, closely correlated with *scientific quality. The main *observable facts relating to scientific outputs are publications, lectures and other forms of communication, outputs specific to *disciplinary fields (excavation reports, corpuses, software, prototypes, etc.), tools, resources, methodologies, etc.
Scientific quality *Evaluation criterion of a *research entity, closely correlated with *scientific outputs. The scientific quality of a *research entity is determined using *quality indicators: for example, the originality and outreach of its research, paradigm shifts and the emergence of new questions, the scientific impact of the entity’s academic activities, the reputation and selectivity of the outlets in which it publishes, etc.
Scientific officer KEIS scientific officers (DS) are researchers and professors who are in charge of organising the evaluation of several entities within their field of competence. They select the experts on behalf of KEIS. They attend the site visit and review the final report. They ensure that KEIS procedures and rules are followed at all times.
Self-evaluation An approach to evaluation in which the *research entity itself conducts an analysis of its past, present and future activities, in a way that is likely to improve its operation and to develop or build its *reputation. Self-evaluation is the first stage in the KEIS process for the evaluation of *research entities. The entity collectively presents its *findings and research perspectives in an objective manner, taking into account both its strengths and weaknesses. On the basis of this self-evaluation, an independent, collective and transparent external evaluation is performed by experts belonging to the same scientific community. This leads to a written report to which the entity’s responses are appended.
Standards Document specifying KEIS methodological principles and defining the evaluation criteria.

Science, scientific Although the term ‘science’ has a narrower meaning in English than in French, this document uses the term in its broader sense. Science is understood to embrace all academic disciplines and all fields of academic research-based knowledge, including the social sciences, arts and humanities.
Strategy The term strategy is used to identify the means that a *research entity has implemented to meet its objectives, and which it intends to implement when defining its research perspectives for the next evaluation period. The strategy is a decisive part of a research entity’s scientific policy.
SWOT Acronym for Strengths, Weaknesses, Opportunities and Threats. The SWOT tool refers to the analysis of a situation, process, project, policy or strategy. This tool is also used by economic decision-makers insofar as it is meant to help them make the best decisions.
Team *Component of a *research unit. Team structures foster cohesive scientific work on both research subjects and methodologies. Teams are scientifically independent within their research units.
Technological (research) Technological research is research directly linked to society, particularly the economic community and industry, with the aim not only of increasing knowledge but also of creating new conceptual approaches, methods, processes, software, instruments, tools and objects of all kinds.
Theme *Component of a *research unit. Themes are beneficial to scientific work carried out on common research subjects but with diverse methodologies. This form of organisation is often used to foster a transverse approach across the projects of several teams.
Training through research Training in research, which refers to the training of students for research careers, needs to be distinguished from training through research, the theoretical, methodological and experimental training of students irrespective of their professional specialisation. Training in and through research corresponds to the involvement of a research entity’s members in putting together courses and teaching content, in attracting, supporting and supervising students, and so on.
Transdisciplinarity Transdisciplinarity is a scientific practice that goes beyond disciplinary points of view by offering a very wide range of approaches to a question. It shows an additional degree of integration compared with interdisciplinarity, achieved when this repeated practice leads to the definition of new paradigms and the creation of a community, thus allowing the gradual emergence of a new discipline. We use the term transsectorality to refer to a new means of producing knowledge based on collaboration with organisations outside the research community, which integrates both scientific knowledge and the knowledge of non-scientist partners (professionals, decision-makers, etc.).
Translational (research) In the medical field, translational research transfers scientific innovations from basic research to the *clinical setting and creates new clinical practices from basic hypotheses, in order to improve patient treatment.
Mission, Vision, and Values The Checklists Project’s mission is to advance excellence in evaluation by providing high-quality checklists to guide practice. Our vision is for all evaluators to have the information they need to provide exceptional evaluation service and advance the public good. These values guide the project’s work:
• Diversity: We are dedicated to supporting the work of evaluators of all skill levels and backgrounds, working in an array of contexts, serving a wide variety of communities.
• Excellence: We strive to meet standards of the highest quality to help evaluators provide exceptional service to their clients and stakeholders.
• Professional community: We actively seek out and use input from across the evaluation community to improve our work.
• Practicality: We are committed to developing and disseminating resources that evaluators can use right away to enhance their practice.
Definition of Evaluation Checklist An evaluation checklist distils and clarifies relevant elements of practitioner experience, theory, principles, and research to support evaluators in their work.
Criteria for Evaluation Checklists Checklists accepted for inclusion in the Evaluation Checklists Project collection should meet the following criteria:*
Appropriateness of Evaluation Content
• The checklist addresses one or more specific evaluation tasks (e.g., a discrete task or an activity that cuts across multiple tasks).
• The checklist clarifies or simplifies complex content to guide performance of evaluation tasks.
• Content is based on credible sources, including the author’s experience.
Clarity of Purpose
• A succinct title clearly identifies what the checklist is about.
• A brief introduction orients the user to the checklist’s purpose, including the following:
o The circumstances in which it should be used
o How it should be used (including caveats about how it should not be used, if needed)
o Intended users
Completeness and Relevance
• All essential aspects of the evaluation task(s) are addressed.
• All content is pertinent to what users need to do to complete the task(s).
Organization
• Content is presented in a logical order, whether conceptually or sequentially.
• Content is organized in sections labelled with concise, descriptive headings.
• Complex steps or components are broken down into multiple smaller parts.
Clarity of Writing
• Content is focused on what users should do, rather than questions for them to ponder.
• Everyday language is used, rather than jargon or highly technical terms.
• Verbs are direct and action-oriented.
• Terms are precise.
• Terms are used consistently.
• Definitions are provided where terms are used but might not be obviously known.
• Sentences are concise.
References and Sources
• Sources used to develop the checklist’s content are cited.
• Additional resources are listed for users who wish to learn more about the topic.
• A preferred citation for the checklist is included (at the end or beginning of the checklist).
• The author’s contact information is included.
Procedures for Evaluation Checklist Authors and Editors
In the steps described below,
• Author refers to the individual(s) who creates an original checklist.
• Editor refers to the member of the Evaluation Checklists Project staff assigned as the point of contact for the checklist’s author (tasks associated with Editor may be performed by more than one member of the Evaluation Checklists Project staff).
Authors should review the steps described below before they begin the checklist development process. In addition to these steps, authors should be aware of two important points:
• It is typical for a checklist to undergo several revisions before it meets all criteria listed on pages 1-2 of this charter. Authors are encouraged to recognize this is a normal part of the checklist development process, keeping in mind that their checklist may be the only one in the world on its given topic. The Evaluation Checklists Project is committed to working with authors to produce checklists that are of exceptionally high quality and utility.
• Either Author or Editor may discontinue the checklist development process at any time if there are irreconcilable differences of opinion about the checklist’s content or quality.
Steps
1. Author submits an idea for an evaluation checklist (either by completing the online form or emailing a member of the Evaluation Checklists Project staff).
2. Editor responds to Author to confirm the proposed checklist’s topic is appropriate (or ask for more information) and to explain the process (as outlined here).
3. Author submits first draft of checklist to Editor.
4. Editor, with input from other Evaluation Checklists Project staff, provides initial feedback and suggestions for improving the checklist.
5. Author revises checklist based on feedback and sends revised draft to Editor (this step may need to be repeated at the discretion of either Editor or Author).
6. Editor sends checklist to at least three expert reviewers for a double-blind review.
7. Reviewers send feedback to Editor.
8. Editor summarizes reviewers’ feedback and offers guidance to Author about how to address the reviewers’ input.
9. Author revises checklist based on reviewer feedback and sends revised draft to Editor (this step may need to be repeated at the discretion of either Editor or Author). Editor may perform minor editing and formatting to the document prior to field-testing.
10. When both Author and Editor agree the checklist is ready for field-testing, Editor posts the checklist in the field-testing section of the Checklists Project’s website, creates an online form to collect field-test feedback, and announces the checklist’s availability for field-testing via appropriate channels. Author may disseminate through their networks as well. (Note: At this stage, the checklist’s authorship will be known to field testers; it is the Evaluation Checklists Project’s experience that more people are likely to engage in field-testing when they know whom they are helping with their time.)
11. The checklist will remain in field-testing for a designated period, typically between two and four weeks (it may depend on how long it takes to get a sufficient number of responses). When the field-test period ends, Editor will compile the results and send them to Author with guidance on how to revise based on field testers’ input.
12. Author revises checklist based on field-test results and sends revised draft to Editor (this step may need to be repeated at the discretion of either Editor or Author).
13. When both Author and Editor agree the checklist has been sufficiently revised, Editor sends the checklist to a professional editor for copy editing (paid for by the Evaluation Checklists Project).
14. Editor sends Author the copyedited version of the checklist.
15. Author reviews the edits and accepts, declines, or modifies them as appropriate and sends the checklist back to Editor for finalization.
16. Editor formats the checklist and posts it on the Checklists Project website.
17. Editor announces the availability of the finalized checklist via appropriate channels. Author may disseminate through their networks as well.
Evaluation Task Areas
This list of evaluation task areas is intended to guide the Checklists Project in curating the collection, with the aim of building a collection of checklists that provides coverage of evaluation tasks and cross-cutting activities that is as comprehensive as possible. However, not all important evaluation tasks and activities are appropriate checklist topics. The list of common evaluation tasks below is divided into nine domains of evaluation activity. This list is not intended to be exhaustive for all evaluation contexts, and some tasks may not be relevant for a given evaluation. However, they collectively represent a core set of tasks typical in many evaluation contexts. Although presented as discrete tasks in linear order, many will intersect and inform each other and will occur concurrently or iteratively. This list is not intended to be a checklist for conducting an evaluation.
1. Managing the Evaluation
Plan and manage use of resources involved in conducting an evaluation, including people, time, and money.
a. Assemble a competent evaluation team and determine each member’s role.
b. Prepare an evaluation plan that includes details about the evaluation design, as well as timelines, tasks, and deliverables.
2. Engaging Stakeholders
Identify stakeholders who should be informed about and involved in the evaluation, and engage them accordingly.
a. Identify stakeholders who should be involved in planning, conducting, or using the evaluation.
b. Determine the appropriate level of and means for stakeholder involvement throughout the evaluation process and related to specific tasks, with recognition that not all stakeholders must be involved equally at all times.
c. Determine if key stakeholders value certain types of evidence or evaluation approaches over others so their preferences can be reflected in the evaluation design.
d. Determine appropriate mode and frequency of communication about the evaluation with various stakeholders.
3. Situating the Evaluation in Context
Identify the key characteristics of the program being evaluated and tailor the evaluation activities to the conditions in which the program operates.
a. Identify the purpose and intended uses of the evaluation.
b. Identify the specific information needs of the evaluation’s intended users.
c. Identify key program factors, including activities, expected effects, resources, and participant needs.
d. Identify the program’s theory of change.
e. Identify potential unintended positive or negative consequences of the program.
f. Identify key contextual factors that are likely to influence the program, its outcomes, or the evaluation, such as sociopolitical and economic conditions.
4. Applying Specific Evaluation Approaches
Draw on established evaluation approaches, theories, and models to guide the evaluation process.
a. With understanding of the underlying values and distinct features of major evaluation approaches, determine which one(s) are appropriate for the context.
b. Apply established principles and guidelines associated with the selected approach(es) in designing and conducting the evaluation, as appropriate for the context.
5. Designing the Evaluation
Determine what aspects of the program the evaluation will focus on and make decisions about how to structure the inquiry to serve intended purposes.
a. Determine the specific evaluation questions, objectives, and/or criteria.
b. Identify potential negative consequences of the evaluation and establish appropriate safeguards for human welfare.
c. Identify what will be measured to address the evaluation questions, objectives, and/or criteria.
d. Determine what methods and data sources will be used and ensure they are appropriate for the evaluation’s context.
e. Determine if comparison or control groups are appropriate and feasible.
f. Determine what, if any, sampling techniques should be used to obtain data of sufficient quantity and quality. If appropriate, identify the sampling frame and develop a sampling protocol.
g. Determine how conclusions and judgments about the program will be derived, including procedures and sources of values that will inform interpretation.
6. Collecting and Analyzing Data
Obtain and describe data to generate credible findings.
a. Establish and follow protocols for ensuring security of collected data.
b. Develop and test data collection instruments and protocols (or identify and obtain existing instruments appropriate for context).
c. Collect data in a contextually responsive and technically sound manner.
d. Assess the trustworthiness or validity of the collected data.
e. Prepare data for analysis.
f. Analyze data in a contextually responsive and technically sound manner.
g. Establish a process of checks and balances to ensure analysis is trustworthy, such as member checking, triangulation, etc.
7. Interpreting Evidence
Combine findings from data sources and use agreed-upon procedures and values to reach conclusions and judgments about the program.
a. Identify appropriate points of comparison or values for interpreting evidence, such as historical data, program goals, organizational priorities, and stakeholder expectations.
b. Integrate and interpret results in a systematic manner that supports conclusions in relation to evaluation questions, objectives, and/or criteria.
c. Seek out and explain possible alternative explanations for observed results.
d. Identify actions to recommend, based on evidence, if appropriate.
8. Reporting Results and Promoting Use
Describe and communicate the evaluation’s processes and results in a way that encourages understanding and use of results by stakeholders.
a. Determine the appropriate means for communicating the evaluation results, such as meetings, memos, presentations, infographics, technical reports, or journal articles.
b. Determine what content to include in each reporting medium, based on the intended audience.
c. Prepare evaluation report(s) with attention to visual elements and formatting to support understanding of evaluation results.
d. Disseminate reports and other media into the appropriate hands.
e. Follow up with stakeholders to support understanding and use of results.
9. Evaluating the Evaluation (Metaevaluation)
Assess the quality of the evaluation.
a. Reflect on the evaluation process and deliverables to identify opportunities for improvement.
b. Formally evaluate the evaluation.