Evaluation Glossary
Definitions in this glossary apply solely to the context of the evaluation of research entities and relate to the reference documents of KEIS. They are not intended to be exhaustive in any way but rather aim to provide a guide for readers of KEIS documents. Asterisks indicate terms that have separate entries in this glossary.
Academic The adjective academic, particularly applied to the *appeal and *reputation of *research entities, describes a context for scientific activity which is structured around universities and research organisations. By contrast, a context that does not involve this form of structuring is termed non-academic. Accordingly, partnerships between a research entity and a private company or a regional authority, for example, can be qualified as non-academic.
Applied (research) Applied research is research focusing on scientific and technological questions associated with socioeconomic issues pertaining to specific sectors (such as energy, the environment, information, health or agriculture). Its aim is not only to increase knowledge but also to produce findings and applicable innovations that are likely to have an impact on society (it is therefore broad in its meaning, captured by the French term “finalisé”, of which “recherche appliquée” is just one part).
Appraisal We call appraisal the *results and, in general, all of the activities and *scientific outputs of a research entity during the past period of evaluation (currently the last 5 years). The appraisal is based on the objectives and strategy that the research entity had developed in its previous scientific plan.
Attractiveness Attractiveness, or appeal (in effect, the ability to attract), can be defined as a *research lab’s ability to promote its activities before an *academic or a non-academic community. It therefore depends on the lab’s ability to make itself attractive in its field.
Bibliometrics Quantitative analysis and statistics on the scientific publications of a *research entity (media, authors, citations, institutional affiliations, etc.).
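By way of illustration only – this is not a KEIS instrument, and the record fields shown are hypothetical – the following minimal Python sketch shows the kind of simple counts from which such quantitative analysis of a research entity’s publications may start:

```python
from collections import Counter

# Hypothetical publication records for a research entity; the field names
# ("year", "journal", "citations") are illustrative, not a KEIS format.
publications = [
    {"year": 2021, "journal": "Journal A", "citations": 12},
    {"year": 2022, "journal": "Journal B", "citations": 3},
    {"year": 2022, "journal": "Journal A", "citations": 7},
]

outputs_per_year = Counter(p["year"] for p in publications)        # volume over time
outputs_per_journal = Counter(p["journal"] for p in publications)  # distribution by medium
total_citations = sum(p["citations"] for p in publications)        # crude citation count

print(outputs_per_year)     # Counter({2022: 2, 2021: 1})
print(outputs_per_journal)  # Counter({'Journal A': 2, 'Journal B': 1})
print(total_citations)      # 22
```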
Characterisation The characterisation elements of a *research entity’s activities and operation are provided by *observable facts, which enable the evaluation to be based on data.
Clinical investigation centre (CIC) Clinical investigation centres are infrastructures built for the purpose of developing *clinical research, such as the development of new treatments or investigations intended to gain knowledge on a disease. CICs are supervised by both the French Ministry in charge of Health and INSERM.
Clinical (research) Clinical research (from the Latin clinice meaning medicine that is practiced at the sickbed) is research that is directed at investigating new treatments or new techniques.
Component We refer to components when we talk about the way in which *research units are structured. A *team, a *theme, a department and a focus are different types of components.
Context The term context identifies the various aspects of the situation (both past and present) and environment of a research entity being evaluated. In this regard, the context must be viewed as a parameter influencing qualitative evaluation. The history, identity and missions of the *research entity, its scientific and educational environment, its regional situation, and its social, economic and cultural environment together constitute its context.
Descriptor The term descriptor is sometimes used to refer to scientific results and activities allowing the evaluation to be based on evidence – in other words, on data. In the evaluation of a scientific activity, descriptor therefore denotes the function performed by an *observable fact.
Disciplinary group Group of *disciplines used for structuring *scientific domains.
Discipline Field of scientific knowledge. In the evaluation of *research entities conducted by KEIS, disciplines are grouped into *disciplinary groups (or disciplinary fields) within each scientific *domain.
Domain (scientific, disciplinary) KEIS lists three scientific domains, organised into disciplinary fields:
– Scientific domain Sciences and technologies (ST). Disciplinary fields: Mathematics; Physics; Earth and space sciences; Chemistry; Engineering sciences; Information and communication sciences and technologies.
– Scientific domain Life and environmental sciences (SVE). Disciplinary field Biology/Health (sub-fields: molecular biology, structural biology, biochemistry; genetics, genomics, bio-informatics, systems biology; cell biology, animal development biology; physiology, physiopathology, endocrinology; neurosciences; immunology, infectious diseases; clinical research, public health); disciplinary field Ecology/Environment (sub-fields: cell biology, plant development biology; evolution, ecology, environmental biology; life sciences and technologies, biotechnology).
– Scientific domain Human and social sciences (SHS). Disciplinary field Markets and organisations (sub-fields: economics, finance/management); disciplinary field Norms, institutions and social behaviour (sub-fields: law; political science; anthropology and ethnology; sociology, demography; information and communication sciences); disciplinary field Space, environment and societies (sub-fields: geography; town planning and land development, architecture); disciplinary field Human mind, language, education (sub-fields: linguistics; psychology; educational sciences; sport and exercise sciences and techniques); disciplinary field Languages, texts, arts and cultures (sub-fields: languages/ancient and French literature, comparative literature; foreign languages and literature, regional languages, cultures and civilisations, arts, philosophy, religious sciences, theology); disciplinary field Ancient and modern worlds (sub-fields: history, history of art, archaeology).
Environment (social, economic, cultural) The social, economic and cultural environment constitutes a fundamental piece of data for evaluating *research entities, as it enables the interactions of a collective research entity with society – taken in its non-*academic dimension – to be assessed. These interactions depend on the nature and purpose of the research entity’s activities. The main types of facts related to these interactions are, for example: outputs for non-academic institutions such as regional authorities or enterprises (e.g. study reports, patents, licences, publications in professional journals, etc.); involvement in partnerships with cultural institutions, industrial groups, international organisations, etc.; and the impact of the entity’s activities on the economy and society.
Evaluation [see Evaluation criterion]
Evaluation criterion A criterion is what is considered relevant in evaluating *research entities. KEIS reviewing work is based on six evaluation criteria: 1. *Scientific production and quality; 2. *Academic reputation and attractiveness; 3. Interactions with the social, economic and cultural *environment; 4. Organisation and life of the entity; 5. Involvement in *training through research; 6. *Strategy and research perspectives for the next evaluation period.
Evaluation field The evaluation field (field of evaluation) is the scope of a *criterion, i.e. the diverse parameters that the evaluator has to assess. The evaluation field of the *scientific outputs and quality criterion, for example, includes breakthroughs, findings, problems, experimental factors leading to scientific achievements, and the originality, quality and reach of the research.
Evaluative intention Term denoting the application points of the *evaluation criteria implemented. Evaluative intention is defined by the specification of the *evaluation field covered by each criterion, and by that of the *observable facts and *quality indicators relating thereto.
Executive summary Brief description of the activities and objectives of a research entity, with a concise definition of its field and profile.
Expert The term expert refers to a *peer (a researcher with a recognised level of scientific competence in a disciplinary field) in charge of evaluating a research entity. Experts work in *committees. They are chosen for their competences, deemed appropriate for the subject being reviewed, with regard to the entity’s disciplinary scope, research purposes, possible interdisciplinary dimension, etc.
Expert committee In order to evaluate *research entities, *experts work in committees made up of *peers chosen for their scientific competences. The expert committee collectively evaluates the lab’s scientific production and projects, in its context and produces an evaluation report (*appraisal and research perspectives).
Exploitation This term has two different meanings, which can sometimes lead to confusion when discussing evaluation. The first is a common, broad meaning in the sense of “showing to advantage”, which applies to an undefined series of items. The second is more specialised, referring to a series of activities and initiatives that are likely to increase the *reputation and *appeal of the research and its impact on the social, economic and cultural environment.
Factual data [see Observable fact]
Federated units *Research labs grouped together around shared scientific topics or equipment. Federated labs may belong to different research institutions and may be multidisciplinary. They help to identify dominant scientific centres and/or to pool facilities and personnel. At CNRS, federated organisations are “federated research institutes” (IFRC), which bring together specific CNRS labs located in a single place, or research federations (FR), which group together labs working on joint research subjects. Federated units remain independent.
Focus [see Component]
Governance Originally a French word that emerged around the 13th century, meaning “government”, “jurisdiction” or “power” – particularly to refer to the reach of a territory placed under the jurisdiction of a bailiff, i.e. a governor tasked with running this territory – this term then entered the English language, initially to denote the way in which feudal power was organised. At the turn of the 21st century, with the development of the notion of globalisation, the word came to refer to a process of organising and administering human societies that is supposedly respectful of diversities and rooted in sharing and the community of interests. In the economic and political spheres, the term governance identifies a flexible system for managing collective structures (states, companies, international organisations, etc.). Swiftly entering our everyday vocabulary, the word has undergone a significant semantic extension. It has been used in the field of scientific evaluation to identify a method for directing and managing a research lab. Largely incongruous with this field of activities – where its meaning is still ambiguous – it has been replaced by the term *management in KEIS’ standards.
Impact The term impact is frequently encountered in the vocabulary of evaluation. Whatever the scope attributed to it (scientific, socio-economic or cultural impact, for example), it should be understood as an effect (positive or negative) of a *research lab’s activities on a given aspect of its *context.
Indicator An indicator is based on facts obtained during a comparative evaluation. In the field of research evaluation, indicators are most often described as sets of *observable facts serving as *descriptors applied to scientific *results or activities. In this regard, they are generally used to obtain a research lab’s performance *metric and are part of the *quantitative model of scientific evaluation.
Innovation Broadly speaking, innovation is a creative process of scientific or technological transformation that either partially changes what has been known to date or makes a clear breakthrough. This transformation leads to new concept(s) that may concern a theoretical framework, methodology, process, technique, product, and so on. Innovation often brings about a change in people’s behaviour and is associated with values linked to performance, improvement or simplification of an activity or set of activities. In the industrial field, the term innovation more specifically refers to the evolution or creation of a process, technique or product. In this sense, innovation is often associated with the notion of efficiency (e.g. a competitive advantage arising from this transformation process).
Interdisciplinarity The term interdisciplinarity identifies the interactions and cooperation of several disciplines around common projects and subjects. For each discipline involved, the work carried out within an interdisciplinary context opens up research prospects that are not limited to its own field of study. Such work makes use of data, methods, tools, theories and concepts from different disciplines in a synthesis in which the role of the disciplinary components goes beyond simple juxtaposition. Indicators of this integration include, in particular: combinations of models or representations that unify disparate approaches; genuine partnerships or collaboration, rather than a mere exchange of services, with coordinated investment of resources and a cooperative style of organisation; and the creation of a common language leading to the revision of initial hypotheses, a broader understanding of the problem, the opening of new avenues and the development of new knowledge.
Management This term primarily applies to the management and running of a research entity by its manager(s). The lab’s method of management is evaluated under the criterion “Organisation and life of the entity”. KEIS substituted management for *governance.
Metrics The term metrics is used in the context of the quantitative evaluation of the performances of a research entity. The metrics-based evaluation model aims at going beyond a merely subjective approach and, to this end, at producing numerical *indicators whose robustness and generality are supposed to guarantee reliability. The pertinence of metrics for evaluation nevertheless depends on the precise definition of the scope of the indicators and on their appropriateness for the evaluated entity.
Multidisciplinarity Multidisciplinarity usually refers to a juxtaposition of disciplinary perspectives that broadens the field of knowledge by increasing the amount of data, tools and methods available. In the multidisciplinary perspective, the disciplines maintain their boundaries and identity: accordingly, a particular discipline, which generally steers the others, uses a methodology and the tools of one or more other disciplines to address a question or make progress in a research project that is specific to its disciplinary field.
Observable fact An observable fact is a factual piece of data (e.g. an activity or a *result) that allows the evaluator to base his or her judgement on evidence. Observable facts therefore act as *descriptors in the evaluation process. For example, the main types of observable facts relating to the criterion “*Scientific outputs and quality” are: publications, lectures and other oral forms of communication without publication, other scientific outputs specific to the field, tools, resources, methodologies, etc.
Peer review [see Peers]
Peers In the field of scientific evaluation, the term peers refers to researchers in a field with a recognised level of scientific expertise. Peer review denotes a qualitative assessment applied to individual research work (e.g. an article submitted to an editorial committee) or to collective research (e.g. the scientific outputs of a research entity).
Performance This term refers to the level of scientific activities of a research entity, assessed on the basis of the six *evaluation criteria defined by KEIS. The performance may be subjected to *quantitative and *qualitative evaluation.
Proximity The notion of proximity is used as a *characterisation element of interactions between disciplines. Proximity is estimated from the proximity of ways of thinking, paradigms and concepts, types of data, and observation and measurement tools. Proximity also assesses the degree of interaction between disciplines in a corpus of scientific texts (such as guidance texts, project proposals or publications), by considering their content, media or the authors’ experience in the discipline.
Qualitative This adjective is applied to an evaluation model based on the consideration of quality *indicators. In contrast to quantitative evaluation, which relies on *metrics, qualitative evaluation goes beyond metrics alone, and particularly takes into account the context of the evaluated entity.
Quality indicator A quality indicator helps the evaluator in the qualitative assessment. For example, the main quality indicators relating to the criterion “*Scientific outputs and quality” are: the originality and scope of research, progress in the field; breakthrough theories and methodologies, paradigm shifts, emergence of new problems or research avenues; academic impact (citations, references, etc.); multidisciplinarity; international engagement; reputation and selectivity of journals used for publication, etc. In *peer evaluation, quality indicators are founded on elements that are widely accepted by scientific communities. As such, they establish a standard or at least a set of references on which a discussion can be based within expert committees and/or between evaluated groups and their evaluators.
Quantitative This adjective applies to an evaluation model that gives precedence to the *metrics of the performance of a research entity. The quantitative model is based on a normative concept of evaluation that overvalues raw numbers to the detriment of a proper analysis of their significance and value in the context of the evaluated entity.
Reputation Reputation is one of the criteria for evaluating *research entities, closely correlated with the *appeal criterion. The two notions describe the quality of being recognised by *academic and/or non-academic communities. Reputation and appeal have a very positive *impact on the community, the former being outgoing and the latter incoming.
Research entities Research entities include *research units, unit *components such as *teams or *themes, *Federated units, *Clinical investigation centres, etc.
Research unit A research entity accredited by a research institution or a university – for example a “UMR” or an “EA” – organised around a scientific programme that is the subject of a contract with the research institution. The personnel of research units are researchers, professors, engineers, technicians and administrative staff. A research unit can be divided into *teams, *themes, departments and “focuses”, or be made up of a single *component, depending on the nature of its research programme and workforce.
Result Type of *observable fact under the *scientific production criterion, brought about by the *strategy defined by a *research entity. A result can be a discovery or any other significant breakthrough in the field of basic or *applied research. Results constitute the essential basis of the *appraisal of a research entity.
Risk-taking A risk in a scientific project can be a negative point when it is a danger or a threat (e.g. the uncertain feasibility of a research programme, which may indicate a mismatch between an entity’s actual resources and its short- and medium-term strategy). But risk-taking may be a positive point when it has an important potential outcome (e.g. a programme leading to scientific *innovations, likely to boost the institution’s *appeal and *reputation, and enabling partnerships).
Scientific outputs *Evaluation criterion of a *research entity, closely correlated with *scientific quality. The main *observable facts relating to scientific outputs are publications, lectures and other forms of communication, outputs specific to *disciplinary fields (excavation reports, corpuses, software, prototypes, etc.), and tools, resources or methodologies, etc.
Scientific quality *Evaluation criterion of a *research entity, closely correlated with *scientific outputs. The scientific quality of a *research entity is determined using *quality indicators: for example, the originality and outreach of research, paradigm shifts and the emergence of new questions, the scientific impact of the entity’s academic activities, the reputation and selectivity of the media used for publication, etc.
Scientific officer KEIS scientific officers (DS) are researchers and professors who are in charge of organising the evaluation of several entities within their field of competence. They select the experts on behalf of KEIS. They attend the site visit and review the final report. They ensure that KEIS procedures and rules are followed at all times.
Self-evaluation An approach to evaluation in which the *research entity itself conducts an analysis of its past, present and future activities, in a way that is likely to improve its operation and to develop or build its *reputation. Self-evaluation is the first stage in the KEIS process for the evaluation of *research entities. The entity collectively presents its *findings and research perspectives in an objective manner, taking into account both its strengths and weaknesses. On the basis of this self-evaluation, an independent, collective and transparent external evaluation is performed by experts belonging to the same scientific community. This leads to a written report to which the entity’s responses are appended.
Standards Document specifying KEIS methodological principles and defining the evaluation criteria.
Science, scientific Although the term “science” has a narrower meaning in English than in French, this document uses the term in its broader sense. Science is understood to embrace all academic disciplines and all fields of academic research-based knowledge, including social sciences, arts and humanities.
Strategy The term strategy is used to identify the means that a *research entity has implemented to meet its objectives and which it intends to implement when defining its research perspectives for the next evaluation period. The strategy is a decisive part of a research entity’s scientific policy.
SWOT Acronym for Strengths, Weaknesses, Opportunities and Threats. The SWOT tool refers to the analysis of a situation, process, project, policy or strategy. This tool is also used by economic decision-makers insofar as it is meant to help them make the best decisions.
Team *Component of a *research unit. Team structures foster cohesive scientific work on both research subjects and methodologies. Teams are scientifically independent within their research units.
Technological (research) Technological research is research directly linked to society – particularly the economic community and industry – with the aim not only of increasing knowledge but also of creating new conceptual approaches, methods, processes, software, instruments, tools and objects of all kinds.
Theme *Component of a *research unit. Themes are beneficial to scientific work carried out on common research subjects but with diverse methodologies. This organisation is often used to foster a transverse approach across the projects of several teams.
Training through research Training in research, which refers to training students for research careers, needs to be distinguished from training through research, the theoretical, methodological and experimental training of students irrespective of their professional specialisation. Training in and through research corresponds to the involvement of a research entity’s members in putting together courses and teaching content, in attracting, supporting and supervising students, and so on.
Transdisciplinarity Transdisciplinarity is a scientific practice that goes beyond disciplinary points of view by offering a unified approach to a question. It shows an additional degree of integration in comparison with interdisciplinarity, which disciplines achieve when repeated practice leads to the definition of new paradigms and the creation of a community, thus allowing the gradual emergence of a new discipline. We will use the term transsectorality to refer to a new means of producing knowledge based on collaboration with organisations outside of the research community, which integrates both scientific knowledge and the knowledge of non-scientist partners (professionals, decision-makers, etc.).
Translational (research) In the medical field, translational research transfers scientific innovations from basic research to the *clinical setting and creates new clinical practices from basic hypotheses, in order to improve patient treatment.
Mission, Vision, and Values
The Checklists Project’s mission is to advance excellence in evaluation by providing high-quality checklists to guide practice. Our vision is for all evaluators to have the information they need to provide exceptional evaluation service and advance the public good. These values guide the project’s work:
• Diversity: We are dedicated to supporting the work of evaluators of all skill levels and backgrounds, working in an array of contexts, serving a wide variety of communities.
• Excellence: We strive to meet standards of the highest quality to help evaluators provide exceptional service to their clients and stakeholders.
• Professional community: We actively seek out and use input from across the evaluation community to improve our work.
• Practicality: We are committed to developing and disseminating resources that evaluators can use right away to enhance their practice.
Definition of Evaluation Checklist An evaluation checklist distils and clarifies relevant elements of practitioner experience, theory, principles, and research to support evaluators in their work.
Criteria for Evaluation Checklists
Checklists accepted for inclusion in the Evaluation Checklists Project collection should meet the following criteria:
Appropriateness of Evaluation Content
• The checklist addresses one or more specific evaluation tasks (e.g., a discrete task or an activity that cuts across multiple tasks).
• The checklist clarifies or simplifies complex content to guide performance of evaluation tasks.
• Content is based on credible sources, including the author’s experience.
Clarity of Purpose
• A succinct title clearly identifies what the checklist is about.
• A brief introduction orients the user to the checklist’s purpose, including the following:
o The circumstances in which it should be used
o How it should be used (including caveats about how it should not be used, if needed)
o Intended users
Completeness and Relevance
• All essential aspects of the evaluation task(s) are addressed.
• All content is pertinent to what users need to do to complete the task(s).
Organization
• Content is presented in a logical order, whether conceptually or sequentially.
• Content is organized in sections labelled with concise, descriptive headings.
• Complex steps or components are broken down into multiple smaller parts.
Clarity of Writing
• Content is focused on what users should do, rather than questions for them to ponder.
• Everyday language is used, rather than jargon or highly technical terms.
• Verbs are direct and action-oriented.
• Terms are precise.
• Terms are used consistently.
• Definitions are provided where terms are used but might not be obviously known.
• Sentences are concise.
References and Sources
• Sources used to develop the checklist’s content are cited.
• Additional resources are listed for users who wish to learn more about the topic.
• A preferred citation for the checklist is included (at the end or beginning of the checklist).
• The author’s contact information is included.
Procedures for Evaluation Checklist Authors and Editors
In the steps described below,
• Author refers to the individual(s) who creates an original checklist.
• Editor refers to the member of the Evaluation Checklists Project staff assigned as the point of contact for the checklist’s author (tasks associated with Editor may be performed by more than one member of the Evaluation Checklists Project staff).
Authors should review the steps described below before they begin the checklist development process. In addition to these steps, authors should be aware of two important points:
• It is typical for a checklist to undergo several revisions before it meets all criteria listed on pages 1-2 of this charter. Authors are encouraged to recognize this is a normal part of the checklist development process, keeping in mind that their checklist may be the only one in the world on its given topic. The Evaluation Checklists Project is committed to working with authors to produce checklists that are of exceptionally high quality and utility.
• Either Author or Editor may discontinue the checklist development process at any time if there are irreconcilable differences in opinion about the checklist’s content or quality.
Steps
1. Author submits an idea for an evaluation checklist (either by completing the online form or emailing a member of the Evaluation Checklists Project staff).
2. Editor responds to Author to confirm the proposed checklist’s topic is appropriate (or to ask for more information) and to explain the process (as outlined here).
3. Author submits first draft of checklist to Editor.
4. Editor, with input from other Evaluation Checklists Project staff, provides initial feedback and suggestions for improving the checklist.
5. Author revises checklist based on feedback and sends revised draft to Editor (this step may need to be repeated at the discretion of either Editor or Author).
6. Editor sends checklist to at least three expert reviewers for a double-blind review.
7. Reviewers send feedback to Editor.
8. Editor summarizes reviewers’ feedback and offers guidance to Author about how to address the reviewers’ input.
9. Author revises checklist based on reviewer feedback and sends revised draft to Editor (this step may need to be repeated at the discretion of either Editor or Author). Editor may perform minor editing and formatting to the document prior to field-testing.
10. When both Author and Editor agree the checklist is ready for field testing, Editor posts the checklist in the field-testing section of the Checklists Project’s website, creates an online form to collect field-test feedback, and announces the checklist’s availability for field testing via appropriate channels. Author may disseminate through their networks as well. (Note: At this stage, the checklist’s authorship will be known to field testers; it is the Evaluation Checklists Project’s experience that more people are likely to engage in field testing when they know whom they are helping with their time.)
11. The checklist will remain in field testing for a designated period, typically between two and four weeks (it may depend on how long it takes to get a sufficient number of responses). When the field-test period ends, Editor will compile the results and send them to Author with guidance on how to revise based on field testers’ input.
12. Author revises checklist based on field-test results and sends revised draft to Editor (this step may need to be repeated at the discretion of either Editor or Author).
13. When both Author and Editor agree the checklist has been sufficiently revised, Editor sends the checklist to a professional editor for copy editing (paid for by the Evaluation Checklist Project).
14. Editor sends Author the copyedited version of the checklist.
15. Author reviews the edits and accepts, declines, or modifies them as appropriate and sends the checklist back to Editor for finalization.
16. Editor formats the checklist and posts it on the Checklists Project website.
17. Editor announces the availability of the finalized checklist via appropriate channels. Author may disseminate through their networks as well.
Evaluation Task Areas
This list of evaluation task areas is intended to guide the Checklists Project in curating the collection, with the aim of building a collection of checklists that provides coverage of evaluation tasks and cross-cutting activities that is as comprehensive as possible. However, not all important evaluation tasks and activities are appropriate checklist topics. The list of common evaluation tasks below is divided into nine domains of evaluation activity. This list is not intended to be exhaustive for all evaluation contexts, and some tasks may not be relevant for a given evaluation. However, they collectively represent a core set of tasks typical in many evaluation contexts. Although presented as discrete tasks in linear order, many will intersect and inform each other and will occur concurrently or iteratively. This list is not intended to be a checklist for conducting an evaluation.
1. Managing the Evaluation
Plan and manage the use of resources involved in conducting an evaluation, including people, time, and money.
a. Assemble a competent evaluation team and determine each member’s role.
b. Prepare an evaluation plan that includes details about the evaluation design, as well as timelines, tasks, and deliverables.
2. Engaging Stakeholders
Identify stakeholders who should be informed about and involved in the evaluation and engage them accordingly in the evaluation.
a. Identify stakeholders who should be involved in planning, conducting, or using the evaluation.
b. Determine the appropriate level of and means for stakeholder involvement throughout the evaluation process and related to specific tasks, with recognition that not all stakeholders must be involved equally at all times.
c. Determine if key stakeholders value certain types of evidence or evaluation approaches over others so their preferences can be reflected in the evaluation design.
d. Determine appropriate mode and frequency of communication about the evaluation with various stakeholders.
3. Situating the Evaluation in Context
Identify the key characteristics of the program being evaluated and tailor the evaluation activities to the conditions in which the program operates.
a. Identify the purpose and intended uses of the evaluation.
b. Identify the specific information needs of the evaluation’s intended users.
c. Identify key program factors, including activities, expected effects, resources, and participant needs.
d. Identify the program’s theory of change.
e. Identify potential unintended positive or negative consequences of the program.
f. Identify key contextual factors that are likely to influence the program, its outcomes, or the evaluation, such as sociopolitical and economic conditions.
4. Applying Specific Evaluation Approaches
Draw on established evaluation approaches, theories, and models to guide the evaluation process.
a. With understanding of the underlying values and distinct features of major evaluation approaches, determine which one(s) are appropriate for the context.
b. Apply established principles and guidelines associated with the selected approach(es) in designing and conducting the evaluation, as appropriate for the context.
5. Designing the Evaluation
Determine what aspects of the program the evaluation will focus on and make decisions about how to structure the inquiry to serve intended purposes.
a. Determine the specific evaluation questions, objectives, and/or criteria.
b. Identify potential negative consequences of the evaluation and establish appropriate safeguards for human welfare.
c. Identify what will be measured to address the evaluation questions, objectives, and/or criteria.
d. Determine what methods and data sources will be used and ensure they are appropriate for the evaluation’s context.
e. Determine if comparison or control groups are appropriate and feasible.
f. Determine what, if any, sampling techniques should be used to obtain data of sufficient quantity and quality. If appropriate, identify the sampling frame and develop a sampling protocol.
g. Determine how conclusions and judgments about the program will be derived, including procedures and sources of values that will inform interpretation.
6. Collecting and Analyzing Data
Obtain and describe data to generate credible findings.
a. Establish and follow protocols for ensuring security of collected data.
b. Develop and test data collection instruments and protocols (or identify and obtain existing instruments appropriate for the context).
c. Collect data in a contextually responsive and technically sound manner.
d. Assess the trustworthiness or validity of the collected data.
e. Prepare data for analysis.
f. Analyze data in a contextually responsive and technically sound manner.
g. Establish a process of checks and balances to ensure analysis is trustworthy, such as member checking, triangulation, etc.
7. Interpreting Evidence
Combine findings from data sources and use agreed-upon procedures and values to reach conclusions and judgments about the program.
a. Identify appropriate points of comparison or values for interpreting evidence, such as historical data, program goals, organizational priorities, and stakeholder expectations.
b. Integrate and interpret results in a systematic manner that supports conclusions in relation to evaluation questions, objectives, and/or criteria.
c. Seek out and explain possible alternative explanations for observed results.
d. Identify actions to recommend, based on evidence, if appropriate.
8. Reporting Results and Promoting Use
Describe and communicate the evaluation’s processes and results in a way that encourages understanding and use of results by stakeholders.
a. Determine the appropriate means for communicating the evaluation results, such as meetings, memos, presentations, infographics, technical reports, or journal articles.
b. Determine what content to include in each reporting medium, based on the intended audience.
c. Prepare evaluation report(s) with attention to visual elements and formatting to support understanding of evaluation results.
d. Disseminate reports and other media into the appropriate hands.
e. Follow up with stakeholders to support understanding and use of results.
9. Evaluating the Evaluation (Metaevaluation)
Assess the quality of the evaluation.
a. Reflect on the evaluation process and deliverables to identify opportunities for improvement.
b. Formally evaluate the evaluation.
Human and Social Sciences
Human and social sciences encompass disciplines with significantly different practices that call for evaluation methods adapted to their differences. Some of these disciplines, for example, place books at the top of the publications list, while others favour articles published in peer-reviewed journals, or studies presented in international congresses. Thus, an abstract, or a simple text intended for the layman, that has little value in some disciplines will be considered a top-ranking publication in some areas of law. In some cases, English is the language of scientific research and, to quite a significant extent, the language of evaluation; in others, other languages are recognised as such. The greatly contrasting use, from one discipline to another, of bibliometrics, of variable journal rankings and even of simple bibliographic overviews gives an idea of these variations. KEIS has constantly endeavoured to tackle these differences in conscientiously carrying out its evaluations, without seeking to remove them completely.
Although the methodology chosen by KEIS pays careful attention to these specific features, it does not create as many special cases as there are disciplinary singularities or disciplinary groups with a specific identity, such as the humanities or cultural domains. Nor does it define a field that would stand completely apart, with no measure in common with the others, as this would give human and social sciences an exceptional status in the field of evaluation. Indeed, such singularities are far from being limited to that field alone. Research in mathematics also takes distinctive forms and responds to distinctive uses when compared to research conducted in engineering. The differences and complementarities between applied and basic research are as relevant to molecular and clinical research as to economics and management. The problems posed by disciplinary specificity therefore go well beyond the major disciplinary fields: the longer the list of differences, the longer the list of similarities, which once more raises the question of the commensurability of disciplines. Many traits that appear to be specific to the practices of some are also present in others when it comes to evaluation.
That is why KEIS has decided to draw up fairly flexible and adaptable multidisciplinary standards that are both common and specific, since they combine broad generality with characteristics that make sense in each discipline. Accordingly, KEIS standards take into account the specific character of human and social sciences in the field of evaluation.
This attention to their specific features is expressed in two complementary ways. On the one hand, in keeping with the principles of qualitative evaluation, determination of the disciplinary characteristics is entrusted to expert committees, the “peers” who, by definition, belong to the same scientific communities as the assessed entities. On the other hand, specifications tailored to human and social sciences have been introduced in the evaluation criteria standards on the basis of discussions between the KEIS scientific officers and external experts, held during a weekly seminar from September 2020 to January 2021. The practical consequence of this approach is that the result is not a separate standard but a joint standard, incorporating the perspectives of human and social sciences on the same footing as the others, and able to adapt when necessary.
We will not, therefore, define new versions of the six evaluation criteria intended for human and social sciences alone: there would be no point in doing so, as it would go against the purpose for which the KEIS evaluation criteria standards were designed. Admittedly, certain difficulties cannot simply be ironed out: the interactions of research with the non-academic environment, covered by criterion 3, are, for example, a subject of variable interest in human and social sciences. In fact, the work of all disciplines in the field, on close examination, is of interest to social groups and economic or cultural stakeholders. Very often, without distorting the nature and focus of the research specific to these disciplines, the difficulty merely involves revealing the reality – often overlooked or downplayed – of their impact on the economy, on society and on cultural life. That is why the standards for criterion 3 (cf. p. 8) contain specifications bringing the observable facts and quality indicators into line with the uses of human and social sciences.
It is important to remember this key point: research institutions, owing to their diversity, will not completely and uniformly satisfy all the items selected; these should be tailored according to the identity of the entities, their missions and the subject of their research. This is precisely what gives peer evaluation its full meaning: experts, who themselves belong to the disciplinary field(s) of the research entities they evaluate, know how to adapt this common language and give it the emphasis required for their field, in order to be recognised and understood by their community. Another subject that is acknowledged to be difficult with regard to human and social sciences – even if in reality its extension is much broader – is the relative weight of the different types of publication and other scientific outputs according to discipline, hence the difficulty of making a uniform assessment of these subjects under the scientific production and quality criterion (criterion 1).
The most commonly cited example to back up this observation is the insufficiency of scientometric tools for a significant proportion of disciplines in the field. In order to integrate the variety of publication forms and other scientific outputs in human and social sciences as well as the relative diversity of languages used for research in this field, KEIS has therefore considered it worthwhile to offer certain clarifications with respect to the observable facts and quality indicators relating to this criterion. These further specifications are presented in the following pages.
Scientific output and quality in human and social sciences: observable facts
In many disciplinary sectors of human and social sciences, particularly the humanities, scientific output gives overwhelming precedence to books. These disciplinary sectors are also hampered by the low presence of the journals in which they publish in the relevant bibliometric databases. That is why the evaluation of scientific outputs and quality in human and social sciences requires special attention to be paid to the preliminary characterisation of scientific books and journals. KEIS proposals are listed below.
– The characterisation of journals
The characterisation of journals, which supports the elements of the standards provided for the first criterion (see above, p. 6), is intended to facilitate evaluation and self-evaluation in the perspective of collective qualitative evaluation by expert committees, who remain the most competent to assess the scientific production and quality of research entities.
It is therefore necessary to characterise journals without claiming to pass judgement on the quality of the articles using that mode of dissemination. Not all of the characterisation elements listed below are necessarily relevant to the same degree for all the disciplines of human and social sciences; they must therefore be assessed in light of the features that are specific to each of these disciplines.
Characterisation elements of journals in human and social sciences
To characterise a journal, the following data can be collected (an illustrative sketch of how these elements might be recorded follows the list):
Identification:
– Title
– ISSN
– e-ISSN
– Website address
– Disciplinary field(s)
– Name of the director of the publication
– Institutional support (university, organisation, scientific society, public authority, etc.)
Dissemination:
– Dissemination start date (age of journal)
– Publisher
– Distributor
– Print run per issue (average over 5 years)
– Number of copies sold per issue (average over 5 years)
– Publication language(s) (French/other language, monolingual/multilingual)
– Publication at regular intervals (yes/no)
– Number of issues per year
– Type of publication (paper and/or online)
– Access to online publications (open access, pay access, embargo period)
– Abstract (none, in French, in English, in another language, multilingual)
– Key word indexing (none, in French, in English, in another language, multilingual)
Selection of articles:
– Display of selection criteria (yes/no)
– Open calls for papers (for thematic issues)
– Peer evaluation (none, single blind, double blind, single non-anonymous, double non-anonymous)
– Selection by the issue editor (yes/no)
– Articles refused (yes/no)
– Average length of articles published (in number of characters)
Scientific quality:
– Scientific advisory board (yes/no)
– Editorial board (yes/no)
– Peer-review committee (yes/no)
– Scientific reference system: notes, bibliography, etc. (yes/no)
– Type of articles selected (thematic reviews, meta-analyses, articles reporting original research, theoretical or critical discussions, viewpoints, debates or controversy, empirical research, etc.)
Editorial policy:
– Identifiable editorial line (yes/no)
– Diversity of published authors (outside laboratory or unit, etc.)
– Multidisciplinarity (yes/no)
– Cultural areas (yes/no)
– Foreign language authors translated in the journal
Reputation:
– International (yes/no)
– Indexing in international lists of journals (yes/no)
– Award-winning articles (yes/no)
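The elements above amount to a descriptive record of a journal rather than a ranking. Purely as an illustration (this is not part of the KEIS standards), the following Python sketch shows one way such a record might be captured for self-evaluation purposes; the class and field names are hypothetical, and only a subset of the elements is shown:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative only: a structured record for a subset of the journal
# characterisation elements listed above. All field names are hypothetical.
@dataclass
class JournalCharacterisation:
    # Identification
    title: str
    issn: Optional[str] = None
    disciplinary_fields: List[str] = field(default_factory=list)
    institutional_support: Optional[str] = None
    # Dissemination
    publisher: Optional[str] = None
    issues_per_year: Optional[int] = None
    publication_type: Optional[str] = None   # "paper", "online" or both
    access: Optional[str] = None             # "open access", "pay access", "embargo period"
    languages: List[str] = field(default_factory=list)
    # Selection of articles
    selection_criteria_displayed: bool = False
    peer_review: str = "none"                # "single blind", "double blind", etc.
    articles_refused: bool = False
    # Scientific quality and editorial policy
    scientific_advisory_board: bool = False
    editorial_board: bool = False
    identifiable_editorial_line: bool = False
    multidisciplinary: bool = False
    # Reputation
    international: bool = False
    indexed_in_international_lists: bool = False

# Fictitious example record, for illustration only.
example = JournalCharacterisation(
    title="Revue d'exemple",
    issn="0000-0000",
    disciplinary_fields=["History"],
    issues_per_year=4,
    peer_review="double blind",
    indexed_in_international_lists=True,
)
```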
– The characterisation of scientific publications
On the basis of other observable facts, it is possible to distinguish diverse categories of scientific publications in human and social sciences, without claiming to be exhaustive and taking into account the specific uses of each discipline:
Elements for the characterisation of scientific publications and books in human and social sciences.
Three main elements can be distinguished.
The first is the type of authorship:
– Publications containing a single, homogeneous text, by a single author;
– Publications containing a single, homogeneous text, by several authors;
– Collective publications comprising essays, studies and chapters written by different authors, under the responsibility of one or more academic editor(s);
– Collective publications comprising essays, studies and chapters written by different authors with no identifiable academic editor.
The second element concerns the type of approach with regard to its subject. This makes a distinction between:
– Publications presenting original research findings on a question or topic for a restricted, specialised readership;
– Publications based on philological research: editions of texts (and, notably, critical editions) as well as translations;
– Publications synthesising other scientific work to present current knowledge on a research topic or question. Such syntheses, often designed to inform a broader readership rather than the scientific community, differ from publications for a general readership, which merely exploit previous research findings (one’s own or those of other researchers), in that they offer both added scientific value and an element of original research.
The third element concerns the presence, in such publications, of a clear critical apparatus (notes and bibliographic references) and consultation tools (index of names, works, thematic index and glossary).
Scientific output and quality in human and social sciences: quality indicators
KEIS provides the expert committees with two types of instruments to assess scientific production and quality in human and social sciences: lists of journals, and a definition of the conditions under which conference proceedings and collective publications qualify as research publications.
– List of journals
The increase in periodicals at international level illustrates not only the growth in the world’s community of researchers, but also a profound change in the way in which research findings are published – such as the development of multidisciplinary approaches, which leads numerous researchers in human and social sciences to publish their findings in journals devoted to disciplines other than their own.
– Conference proceedings and collective works
With regard to conference proceedings and, more generally, collective works in the field of human and social sciences, KEIS distinguishes what constitutes a genuine work of scientific publication – which should be taken into account in the evaluation of research works – from the mere juxtaposition of communications.
The scientific publication of conference proceedings and collective works
Publications comprising texts from presentations or lectures delivered at symposia, congresses or seminars will be considered as research if they have undergone a process of scientific editing characterised by:
– A clear, rationalised critical apparatus (notes and bibliographic references) for the entire work; consultation tools (index of names, works, thematic index and glossary);
– An in-depth disciplinary or interdisciplinary development, identifiable in the general presentation; the appropriateness of the publication’s structure in this regard; the selection of contributions according to their relevance to the subject; the work carried out on each of them to ensure scientific quality.
That scientific editing work also constitutes the minimum condition for considering the other works comprising texts by different authors as research works.
Evaluating interactions between disciplines
I – Methodology
The methodology chosen by KEIS to evaluate research entities is based on a few basic principles: a collective qualitative peer evaluation; an evaluation which, based on specific criteria, takes into account the variety of the entity’s missions; and an evaluation which, for each criterion, bases its qualitative assessment on observable facts and results.
– The awareness of the objectives and the point of view of non-academic partners;
– The effective articulation between basic and applied research;
– The openness of academic and non-academic partnerships;
– The ability to adapt and change orientation in response to changes in the environment; the ability to adapt human resources to the strategic objectives;
– The quality of self-evaluation (e.g. SWOT analysis);
III – Evaluation of multi-, inter- and transdisciplinarity
Interdisciplinarity is a challenge for scientific evaluation, and assessing interdisciplinary entities requires specific procedures:
1. Evaluating interactions between disciplines
KEIS distinguishes multidisciplinarity, interdisciplinarity and transdisciplinarity:
– Multidisciplinarity refers to the juxtaposition of disciplines that broadens the field of knowledge by increasing the amount of data, tools and methods available. The disciplinary components, in this case, keep their identities: a particular discipline, which generally steers the others, uses a methodology and tools of one or more other disciplines to address a question or make advances in a research project that is specific to its own disciplinary field.
– Interdisciplinarity refers to the cooperation between several disciplines in common projects. These projects open up research avenues for each discipline. The collaboration brings together data, methods, tools, theories or concepts from different disciplines, and the role of the disciplinary components goes beyond their mere juxtaposition. Indicators of this integration include:
– The combinations of models or representations that unify otherwise disparate approaches;
– Genuine collaboration rather than a mere exchange of services, with coordinated and cooperative organisation;
– The creation of a common language, leading to the revision of initial hypotheses, a broader understanding of the initial scientific issue, the opening of new research avenues and the development of new knowledge.
– Transdisciplinarity refers to a scientific approach that goes beyond disciplinary points of view by offering a single approach to a scientific question. It shows an additional degree of integration in comparison with interdisciplinarity, as it leads to the gradual emergence of a new discipline. Examples of transdisciplinarity are systems biology, synthetic biology, artificial intelligence and human ecology.
2. Criteria, observable facts and quality indicators
The evaluation criteria for multi-, inter- or transdisciplinary entities are not different from those used in the evaluation of monodisciplinary labs. However, specific observable facts are used to assess the multi-, inter- or transdisciplinary dimension of research. The level of multi-, inter- or transdisciplinary interaction varies between labs or groups and, within a lab, between various activities. Four types of interaction have been identified:
– scientists of a leading discipline apply methods or use tools obtained from another discipline;
– scientists belonging to (at least) two different disciplines have a common research object; each group addresses its own questions and shares information and data with researchers of the other group. This type of cooperation is often driven by a common project;
– scientists belonging to (at least) two different disciplines have come up with a common question, and research findings depend on progress made in each of the disciplines;
– scientists have demonstrable experience in the aforementioned types of interdisciplinary projects; they are involved in one or more interdisciplinary networks and contribute to the coordination of a new research community.
In addition to this distinction between types of interaction, the proximity between disciplines should be indicated. The proximity will take into account epistemological factors: proximity of conceptual frames, paradigms and concepts, types of data, and observation and measurement instruments used by the different disciplines. It will also assess the degree of interaction between disciplines in publications.
KEIS distinguishes the following cases:
– partner disciplines are linked to the same disciplinary group (e.g. SHS 5: “Literature, language, art, philosophy, history of ideas”);
– partner disciplines fall within two different disciplinary groups (e.g. ST 2: “Physics” and ST 4: “Chemistry”), but within the same field (e.g. ST: “Science and technology”, which is distinct from the SVE: “Life and Environmental Sciences” and SHS: “Human and Social Sciences” fields);
– partner disciplines fall within two different fields (SHS and SVE, etc.).
Criterion 1: Scientific production and quality
Observable facts
The facts to be taken into account in this criterion include:
– the publication of articles, book chapters, etc., whose multi-, inter- or transdisciplinary character is confirmed by the co-authors publishing in disciplines distinct from their discipline of origin, or in multi-, inter- or transdisciplinary journals;
– oral presentations at multi-, inter- or transdisciplinary conferences;
– other outputs with a demonstrated multi-, inter- or transdisciplinary character;
Quality indicators
Quality indicators include:
– The proportion of multi-, inter- or transdisciplinary outputs in the lab’s overall outputs;
– The type of interaction and proximity between disciplines in these multi-, inter- or transdisciplinary outputs;
– The novelty for the entity of these multi-, inter- or transdisciplinary outputs, and their originality within the scientific community;
– The impact of these outputs on disciplinary outputs (e.g. the use of a new methodology taken from another discipline);
– The coherence between disciplinary and multi-, inter- or transdisciplinary outputs;
Criterion 2: Academic influence and appeal
Observable facts
The facts to be taken into account in this criterion include:
– The success rate when responding to multi-, inter- or transdisciplinary calls for proposals;
– The involvement in multi-, inter- or transdisciplinary networks;
– The participation of lab members in multi-, inter- or transdisciplinary editorial committees;
– The visibility, in distinct disciplinary communities, of the conferences to which lab members are invited;
– Visiting senior researchers or postdoctoral students involved in multi-, inter- or transdisciplinary projects of the lab;
Quality indicators
The following quality indicators may be assessed:
– The driving role of multi-, inter- or transdisciplinarity in the lab’s projects and networks;
– The international recognition of networks;
– The reputation and level of scientists, visiting or recruited, who are part of the multi-, inter- or transdisciplinary projects;
– The quality of multi-, inter- or transdisciplinary partnerships (are they productive? Are they reinforced, upgraded over time?);
Criterion 3: Interactions with the social, economic and cultural environment
Observable facts
The facts to be taken into account in this criterion include:
– The dissemination or communication of multi-, inter- or transdisciplinary knowledge (exhibitions, stands at cultural events, etc.);
– The reviewing activities actually carried out in multi-, inter- or transdisciplinary fields;
– The creation of multi-, inter- or transdisciplinary small businesses and start-ups;
– Elements of local, regional or national public policies based on the lab’s multi-, inter- or transdisciplinary research;
Quality indicators
The following quality indicators may be assessed:
– The leading role of multi-, inter- or transdisciplinary research in setting up an economic, social or cultural policy or in creating new businesses and employment, for example;
– The expert role of lab members in multi-, inter- or transdisciplinary business networks or “innovation cluster(s)”;
– The national or international reviewing, by lab members, of multi-, inter- or transdisciplinary applications, journal articles, etc.;
Criterion 4: Organisation and life of the entity
Observable facts
The facts to be taken into account in this criterion include:
– The existence and implementation of a multi-, inter- or transdisciplinary strategic plan, monitoring tools and procedures to reduce gaps between objectives and achievement;
– The scientific coordination within the lab facilitating multi-, inter- or transdisciplinary projects;
– The time and space dedicated to multi-, inter- or transdisciplinary interactions;
– The allocation of resources to multi-, inter- or transdisciplinary projects;
– The existence of multi-, inter- or transdisciplinary jobs offered by the lab;
Quality indicators
The following quality indicators may be assessed:
– The ability to obtain support for the unit’s multi-, inter- or transdisciplinary strategy;
– The way the unit exploits a context favourable to multi-, inter- or transdisciplinarity or adapts to an unfavourable one;
– The adaptation of project management to collaborations between different scientific cultures;
– The dissemination of multi-, inter- or transdisciplinary approaches to the lab’s young researchers;
– The risk-taking and leadership of researchers in the construction of multi-, inter- or transdisciplinary projects;
Criterion 5: Involvement in training through research
Observable facts
The facts to be taken into account in this criterion include:
– Multi-, inter- or transdisciplinary theses (co-)supervised by lab members; theses associating two doctoral students from different disciplines on the same project;
– Multi-, inter- or transdisciplinary seminars and summer schools;
– Involvement of the entity in multi-, inter- or transdisciplinary training or courses;
Quality indicators
The following quality indicators may be assessed:
– The type of interaction and proximity between disciplines involved in multi-, inter- or transdisciplinary theses;
– The coherence of common thesis supervision (the existence, for instance, of work sessions and presentations where two distinct disciplinary components are involved);
– The recognition of theses by two disciplines;
– The interaction and proximity between disciplines in training, seminars and doctoral schools in which the entity is involved;
– The evolution of training and courses from multi- to interdisciplinarity, or even further to transdisciplinarity;
– The role of multi-, inter- or transdisciplinary training in the career of young doctors and in their job prospects.
Criterion 6: Strategy and research perspectives for the next five years
Observable facts
The facts to be taken into account in this criterion include:
– The existence of a multi-, inter- or transdisciplinary scientific strategy aiming to meet objectives such as, for example:
– Expanding the frontiers of a scientific discipline by opening it up to the approaches and methods of another discipline;
– Foreseeing possible inputs from a discipline into another (methods for observation or acquisition of data, method for representation of knowledge and modelling, formulation of new hypotheses, transfer of paradigms, etc.);
– Assessing the appropriateness of calling on several disciplines to address complex questions of social, economic or cultural importance;
– Creating multi-, inter- or transdisciplinary training courses;
– The existence of a strategy to achieve these objectives.
Quality indicators
The following quality indicators may be assessed:
– As far as scientific strategy is concerned:
– The relevance of means used to obtain necessary support from external sources;
– The depth of interactions between disciplines and the potential to make multidisciplinarity advance towards interdisciplinarity or even further towards the emergence of a new discipline;
– The ability to obtain support from disciplinary components for multi-, trans- or interdisciplinary research perspectives;
– As far as management is concerned:
– The ability to share resources (be they human, financial or material) for multi-, inter- or transdisciplinary research;
– The ability to define expected outputs (the gathering of existing knowledge, the production of new applications, the production of new knowledge, etc.) and their mode of dissemination;
– The ability to call on high-level competencies in each partner discipline of multi-, inter- or transdisciplinary research;
– The ability to gather relevant external competencies to implement multi-, inter- or transdisciplinary research;
Methodology
The mission of KEIS is to evaluate the activities conducted by universities and research institutions. The evaluation method chosen by KEIS is based, on the one hand, on information provided by the evaluated entity, which presents its results and projects, and, on the other hand, on an on-site visit. It corresponds to an external, independent, collective and transparent review by experts of the same scientific communities as the evaluated entity. The output is a written report that includes summarized qualitative assessments.
The evaluation is under the sole responsibility of the evaluator. By 2020, KEIS had completed two rounds of evaluation of more than three thousand research institutions and research entities (the research units are laboratories or groups of laboratories), which provides a reliable overview of research in the US. KEIS scientific representatives conducted an audit of the evaluation processes, based on feedback from chairs of expert committees, directors of research units and their supervising institutions. Consequently, KEIS has modified its methodologies, and this document presents the principles and methods for the evaluation period now starting.
First and foremost, it should be emphasized that evaluation has to be conducted in a constructive manner. It has three main objectives. The first one is to help research units identify potential areas of improvement.
The second aim is to provide information to the supervising institutions of research entities to help them make management or funding decisions based on KEIS evaluation.
The last objective is to provide information to PhD students, prospective assistant professors or researchers, guest scientists, etc., as well as the lay public. For these persons, a short version of the report (as signalled in the report model), presented as simply and clearly as possible, is posted on the KEIS website. The following sections present the methodological principles defined by KEIS and the KEIS evaluation criteria. A glossary is appended to the end of this document: it specifies the meaning that KEIS gives to a set of terms frequently used in evaluating research entities.
I – Methodology
The methodology chosen by KEIS to evaluate research entities is based on a few basic principles: a collective qualitative peer evaluation; an evaluation which, based on specific criteria, takes into account the variety of the entity’s missions; and an evaluation which, for each criterion, is based on observable facts and results in a qualitative assessment.
1. Collective peer evaluation
The literature identifies two models for research evaluation, used by different countries, which may also switch from one to the other. The first one, “peer review”, uses qualitative evaluation and involves researchers of the same field who work either individually, by reviewing documents provided by the evaluated entity, or collectively, by sitting on evaluation committees. In the latter case, these committees (whether ad hoc for a specific review or evaluating a whole set of entities of the same disciplinary group) take a collegial approach, taking into account the environment and nature of the evaluated entity. Based on the confrontation of possibly contradictory points of view, their evaluation strives to find a consensus. The second, quantitative model focuses on the measurement of performance (metrics). To this end, it produces reliable and general indicators that allow comparisons between different entities. In contrast with qualitative evaluation, this other form of evaluation has the disadvantage of giving less weight to local contexts and disciplinary characteristics. Moreover, it requires statistical significance and cannot be used for small research entities. KEIS has thus chosen the widely used peer evaluation model, involving independent and transparent evaluation. KEIS convenes an ad hoc committee for each of the assessed entities. These committees are constituted according to the scientific areas, fields of application and specific missions of the research entities. Experts are chosen by KEIS “Scientific officers” for their specific competences. Their function requires the ability to judge, i.e. to analyse data and produce an opinion, while complying with the ethical rules of KEIS.
Recently, in order to provide a reliable evaluation to a variety of different entities, KEIS has switched from four to six criteria.
The six criteria chosen are as follows:
- The scientific production and quality,
- The academic reputation and appeal,
- The interactions with the social, economic and cultural environment,
- The organisation and life of the unit,
- The involvement in training through research,
- The strategy and research perspectives for the next contract.
Note that not all of the criteria are to be used for all of the research units, but, rather, criteria have to be selected by the committee according to the specificities of the unit.
3. Criteria, data and quality indicators
For each evaluation criterion, assessments and quality indicators are to be based on data. It is thus necessary to specify the data – outputs, results and activities – on which the evaluation is based. These data will be referred to as observable facts. Although it is not very realistic to seek unanimity with respect to quality indicators, as part of a peer evaluation these indicators can be based on assessment elements on which a large proportion of members of a disciplinary group can agree. As such, they establish a standard, or at least a set of references, on which a discussion can take place within expert committees and/or between evaluated groups and their evaluators. Although quantitative indicators do exist for some types of activities, outputs and results, they can only act as an aid in the peer review process. The quality of activities, outputs and results cannot be reduced to quantitative elements. Value or quality should be based on observable facts, including quantitative data, through analysis, discussion and interpretation taking into account the entity’s context. In this respect, it is important to pay attention to the history, identity and missions of research units as well as to their resources and funding, their scientific and educational environment, etc.
4. Qualitative evaluation
KEIS, which previously used a grading system (from A+ to C), has recently replaced it with evaluative wordings (such as “outstanding”, “excellent”, etc.). These wordings are applied to the whole unit as well as to each of its teams or “themes”.
II – Evaluation criteria standards
KEIS standards should not be considered as a rigid evaluation grid, and even less so as a norm that needs to be followed term by term, without exception. To avoid any misunderstanding, it is important to note, on the contrary, that the observable facts and quality indicators listed here: 1- are illustrative, without claiming to be exhaustive; 2- do not all need to be satisfied by every entity; 3- are intended for a wide variety of disciplines and need to be adapted to take into account the specific features of each discipline. This is precisely part of what gives its full meaning to peer evaluation: experts, who themselves belong to the disciplinary field of the entities they evaluate, know how to adapt this standard language to their specific field. These standards are also designed to assist research labs in writing their documents. “Observable facts” are those that have been most frequently identified by KEIS and its partners.
1. Criterion 1: Scientific production and quality
Field covered by the criterion
This criterion, which covers the production of knowledge, assesses discoveries, results, outputs and experimental facts leading to scientific achievements, with respect to the standards of the discipline and the research field. It also assesses the originality, quality and scope of the research.
Observable facts
The main observable facts for this criterion are:
– Publications: articles in peer-reviewed journals, books, chapters, publication of texts (especially critical editions), translations, published papers in conference proceedings, etc.;
– Lectures and other unpublished oral communications: oral presentations at conferences without published proceedings, conference posters, invited lectures, sets of slides, etc.;
– Other scientific outputs specific to the field: scientific or technical reports (such as excavation reports, for example), exhibition catalogues, atlases, corpora, psychometric tests, demonstrations, software, prototypes, scientific audio-visual productions, research-based creative outputs, etc.;
– Instruments, resources, methodology: glossaries, databases, collections, cohorts, observatories, technological platforms, etc.
Quality indicators
The following quality indicators may be assessed:
– The originality and scope of the research, the importance of discoveries to the relevant field;
– Theoretical and methodological breakthroughs, paradigm shifts, the emergence of new problems or new avenues of investigation;
– The scientific impact within academia (citations, references, etc.);
– International or national recognition;
– The reputation and selectivity of the journals.
2. Criterion 2: Academic reputation and appeal
1- Field covered by the criterion
This criterion takes into account the lab’s ability to gain recognition from research communities and to acquire reputation and visibility. It also assesses the lab’s involvement in structuring scientific networks at the regional, national or international level, and its capacity to be at the forefront of its field.
2- Observable facts
The facts to be taken into account in this criterion include:
– The participation in national and international collaborative research projects;
– National and international collaborations with other laboratories;
– The participation in national and international networks, EU networks (JPI-Joint Programming Initiative, COST-European Cooperation in Science and Technology, etc.), federated organisations (e.g. Maisons des sciences de l’homme), scientific societies, scientific programming communities, infrastructure organisations, etc.;
– The participation in the “Investissements d’avenir” programme: « Idex », « Labex », « Equipex »;
– The organisation of national and international symposia;
– The attractiveness for researchers, doctoral students and post-docs;
– Prizes and distinctions awarded to members of the entity, invitations to scientific events;
– The management of collections; participation in editorial committees, in the scientific committees of symposia or conventions, scientific review bodies;
Quality indicators
The following quality indicators may be assessed:
– The coordination of – or participation in – international and national collaborative projects;
– A leading role in partnerships and networks, networks of excellence (e.g. REX), communities, project-promoting associations, infrastructures or centres of scientific or technical interest, at the international, national or regional level;
– The recruitment of high-level foreign researchers and postdoctoral students;
– Responsibilities in international academic bodies;
– The reputation of the prizes and distinctions awarded to members of the unit;
– The scientific quality of the peer review in journals and collections which members of the entity contribute to as editors;
– The selectivity and importance of scientific issues discussed at international events which members of the unit participate in or which they organise;
– The level and reputation of the journals which members of the entity contribute to;
3. Criterion 3: Interactions with the social, economic and cultural environment
– Field covered by the criterion
This criterion is used to assess the different activities and achievements whereby research contributes to the innovation process and has an impact on the economy, society or culture.
Observable facts
The facts to be taken into consideration in this criterion correspond to outreach activities outside of the research community. There are three types of facts:
– Outputs directed toward non-academic actors, such as:
– Articles in professional or technical journals, reviews designed for non-scientific professionals;
– Study and review reports targeting public or private decision-makers; contributions to standards and guidelines (such as clinical protocols or public consultations on the restoration and enhancement of the archaeological heritage, for example);
– Software, conceptual tools and models for decision-making;
– Patents and licences, as appropriate to the field, pilots or prototypes, processes, methods and know-how, clinical studies, registered trademarks;
– Documents in different formats and events (science fairs, for example) contributing to the dissemination of scientific culture, continuing education and public debate;
– Commitment to partnerships and all other elements highlighting the interest and commitment of non-academic partners in the socio-economic or cultural field, such as:
– Involvement in technology transfer structures (Carnot institutes, clusters, technology units and networks, innovation clusters, citizens’ associations, etc.);
– Collaboration with cultural institutions (museums, libraries, academies, theatres and opera houses, etc.); participation in cultural events and heritage programmes;
– Management and openness of documentary collections to the public (specialized libraries, archives, digital resources);
– Contracts with non-academic partners (research, publishing contracts, consulting, jointly-funded theses, etc.) and joint responses to calls for proposals;
– Participation in scientific committees or steering committees of non-academic partners; visiting non-academic professionals hosted in the entity;
– Organisation of conferences, debates, fairs, exhibitions, seminars or training cycles for non-academic professionals or for social groups (patients, consumers, environment-protection associations, etc.);
– Appointment of lab members to national or international review panels (health agencies, international organisations, etc.);
– Impact of research and partnerships:
– Creation of, or contribution to, small companies and, more generally, participation in maintaining or developing employment in an economic sector;
– Innovations (new products, techniques and processes, etc.);
– Impact on public health, environment, territorial development, legislation, public debate, etc.;
– Creation of structures or new professional organisations;
– National, European or international regulations based on results or contributions from the research entity; reviewing of the impact of technological innovations;
Quality indicators
The following quality indicators may be assessed:
– The originality of methods, products and technologies transferred (e.g. contribution to disruptive innovations);
– The relationship to the most recent scientific knowledge;
– The quality and success of dissemination (choice of medium, outcome for methods and products, impact on the intended target audience, connection with professional training, etc.);
– The existence of joint outputs with non-academic partners (jointly-authored articles, co-invented patents, etc.);
– The usefulness of transferred knowledge and technologies;
– The leadership of non-academic partners, innovative value-creating start-ups, etc.;
– The quality and duration of the partnerships;
– The impact on the economic, social or cultural position of partners; impact on public policies;
– The impact on the emergence of innovation for the lab or for the scientific community;
– The accreditation or certification of procedures (ISO standards);
4. Criterion 4: Organisation and life of the unit
– Field of application of the evaluation criterion
This criterion should be used to assess the operation, management and life of the entity. Among other things, it covers the organisation and material conditions of the scientific staff, the management of financial resources, the decision-making process, the existence of a scientific strategy, the use of tools for monitoring progress and, generally speaking, everything that contributes to the smooth operation of the entity and to its scientific production.
– Observable facts
Facts to be taken into account in this criterion include:
– The objectives or scientific strategy for the past period;
– The organisation of the research entity into teams or themes;
– The existence of shared platforms or resources;
– The scientific coordination and interactions between teams, themes and disciplines;
– The reinforcement of scientific integrity;
– The decision-making process; the existence of a laboratory council, of an organisation chart and lab rules;
– The role of engineers, technicians, administrative staff and temporary personnel;
– Internal and external communication;
– The recruitment policy;
– The approach to environmental and health and safety issues in research and training;
Quality indicators
The following quality indicators may be assessed:
– The achievement of past strategic objectives and the implementation of the scientific strategy;
– The extent to which the structure of the lab is based on a coherent scientific rationale;
– The accessibility of shared resources;
– The scientific coordination and animation, the incentive for the emergence of teams, themes or innovative programmes;
– The existence of lab notebooks and the surveillance of misconduct in data management; the organisation of raw data storage (big data and others) and archiving;
– The criteria used for designation of authors in publications, communications and patents; the banning of “complacent” signatures;
– The surveillance of plagiarism in publications and theses;
– The representation of personnel in lab steering committees, collegiality of decisions, frequency of meetings;
– The relevance of budget distribution with respect to the lab scientific policy;
– The common facilities and equipment;
– The strategy for staff training and mobility;
– The clarity and communication of the scientific policy and of research programmes (regular updating of the website, newsletter, etc.);
– The appropriateness of premises for the lab scientific activities and personnel;
5. Criterion 5: Involvement in training through research
– Field covered by the criterion
This criterion should be used to assess the lab’s involvement in training through research, both at the Master’s and doctorate levels. This includes the lab’s impact on educational content, the lab’s support for Master’s and doctoral students, as well as the lab’s attractiveness for students.
Observable facts
The facts to be taken into account in this criterion include:
– The recruitment of Master’s degree trainees (M1 and M2) and doctoral students;
– The number of theses defended;
– The policy to support trainees and doctoral students (number of students per supervisor, funded doctorates, technical and financial support, scientific monitoring of students, thesis committees, etc.);
– The publications, summary documents, educational digital tools and products of trainees;
– The participation of the entity in the design and coordination of training modules and courses, and its contribution to the evolution of educational contents;
– The design and coordination of seminars for doctoral schools or summer schools; doctoral student seminars;
– The contribution to international training networks (ITN, Erasmus, etc.), co-supervision of theses with foreign universities or co-management with universities from other countries;
– The involvement of lab members in steering committees for Master’s and Doctorate training;
Quality indicators
The following quality indicators may be assessed:
– The effective support given to students and the quality of their supervision (duration of theses, drop-out rates, etc.);
– The quality of scientific outputs (articles, books, etc.) from completed theses;
– The monitoring of doctoral students (in coordination with doctoral schools) and the attention given to career opportunities for doctoral students;
– The existence of an internal process to ensure that the most recent scientific advances are integrated in teaching;
– The national or international certification of training (e.g. Erasmus Mundus);
– The relevance of dissemination media and vectors as well as the reputation (regional, national, international) of educational outputs;
– The presence of researchers at doctoral seminars;
– The participation of doctoral students in the life of the entity;
– The involvement and responsibility of lab members in international training networks;
– The researchers’ involvement in setting up Master’s training courses, in particular those coordinated or promoted by professors in the entity;
6. Criterion 6: Strategy and research perspectives for the next five years
– Criterion scope
This criterion should be used to assess the scientific quality of the projects and strategy of the entity and their relevance to the lab’s mission, the proposed modifications and the planned strategy to achieve the objectives.
– Observable facts
Two types of facts may be referred to: