The mission of Key Of Education International (KEIS) is to evaluate the activities conducted by universities and research institutions. The evaluation method chosen by KEIS is based, on the one hand, on information provided by the evaluated entity, which presents its results and projects, and, on the other hand, on an on-site visit. It corresponds to an external, independent, collective and transparent review by experts from the same scientific communities as the evaluated entity. The output is a written report that includes summarized qualitative assessments.
The evaluation is under the sole responsibility of the evaluator. In 2020, KEIS completed two rounds of evaluation covering more than three thousand research institutions and research entities (research units being laboratories or groups of laboratories), which provides a reliable overview of research in the US. KEIS scientific representatives conducted an audit of the evaluation processes, based on feedback from the chairs of expert committees, the directors of research units and their supervising institutions. Consequently, KEIS has modified its methodologies, and this document presents the principles and methods for the new evaluation period.
First and foremost, it should be emphasized that evaluation has to be conducted in a constructive manner. It has three main objectives. The first one is to help research units identify potential areas for improvement.
The second aim is to provide information to the supervising institutions of research entities, to help them make management or funding decisions based on the KEIS evaluation.
The last objective is to provide information to PhD students, applicants for assistant professor or researcher positions, guest scientists, etc., as well as the lay public. For these readers, a short version of the report (as signalled in the report model), presented as simply and clearly as possible, is posted on the KEIS website. The following sections present the methodological principles defined by KEIS and the KEIS evaluation criteria. A glossary is appended to the end of this document; it specifies the meaning that KEIS gives to a set of terms frequently used in evaluating research entities.
I – Methodology
The methodology chosen by KEIS to evaluate research entities is based on a few basic principles: a collective qualitative peer evaluation; an evaluation which, based on specific criteria, takes into account the variety of the entity’s missions; and an evaluation which, for each criterion, relies on observable facts and results in a qualitative assessment.
1. Collective peer evaluation
The literature identifies two models for research evaluation, used in different countries, which may also switch from one to the other. The first one, “peer review”, uses qualitative evaluation and involves researchers of the same field who work either individually, by reviewing documents provided by the evaluated entity, or collectively, by sitting on evaluation committees. In the latter case, these committees (whether ad hoc for a specific review or evaluating a whole set of entities of the same disciplinary group) take a collegial approach, taking into account the environment and nature of the evaluated entity. Based on the confrontation of possibly contradictory points of view, their evaluation strives to reach a consensus. The second, quantitative model focuses on the measurement of performance (metrics). To this end, it produces reliable and general indicators that allow comparisons between different entities. In contrast with qualitative evaluation, this form of evaluation has the disadvantage of giving less weight to local contexts and disciplinary characteristics. Moreover, it requires statistical significance and cannot be used for small research entities. KEIS has thus chosen the widely used peer evaluation model, involving independent and transparent evaluation. KEIS convenes an ad hoc committee for each of the assessed entities. These committees are constituted according to the scientific areas, fields of application and specific missions of the research entities. Experts are chosen by KEIS “Scientific officers” for their specific competences. Their function requires the ability to judge, i.e. to analyse data and produce an opinion, while complying with the ethical rules of KEIS.
2. Evaluation criteria
Recently, in order to provide a reliable evaluation across a variety of different entities, KEIS has moved from four to six criteria. The six criteria chosen are as follows:
- The scientific production and quality,
- The academic reputation and appeal,
- The interactions with the social, economic and cultural environment,
- The organisation and life of the unit,
- The involvement in training through research,
- The strategy and research perspectives for the next contract.
Note that not all of the criteria are to be used for every research unit; rather, the committee selects the relevant criteria according to the specificities of the unit.
3. Criteria, data and quality indicators
For each evaluation criterion, assessments and quality indicators are to be based on data. It is thus necessary to specify the data – outputs, results and activities – on which the evaluation is based. These data will be referred to as observable facts. Although it is not very realistic to seek unanimity with respect to quality indicators, as part of a peer evaluation these indicators can be based on assessment elements on which a large proportion of the members of a disciplinary group can agree. As such, they establish a standard, or at least a set of references, on which a discussion can take place within expert committees and/or between evaluated groups and their evaluators. Although quantitative indicators do exist for some types of activities, outputs and results, they can only act as an aid in the peer review process. The quality of activities, outputs and results cannot be reduced to quantitative elements. Value or quality should be established from observable facts, including quantitative data, through analysis, discussion and interpretation, taking into account the entity’s context. In this respect, it is important to pay attention to the history, identity and missions of research units as well as to their resources and funding, their scientific and educational environment, etc.
4. Qualitative evaluation
KEIS, which previously used a grading system (from A+ to C), has recently replaced it with evaluative wordings (such as “outstanding”, “excellent”, etc.). These apply to the unit as a whole as well as to each of its teams or “themes”.
II – Evaluation criteria standards
KEIS standards should not be considered as a rigid evaluation grid, and even less so as a norm that needs to be followed term by term, without exception. To avoid any misunderstanding, it is important to note, on the contrary, that the observable facts and quality indicators listed here: (1) are illustrative, without claiming to be exhaustive; (2) do not all need to be satisfied by an entity; (3) are intended for a wide variety of disciplines and need to be adapted to take into account the specific features of each discipline. This is precisely part of what gives its full meaning to peer evaluation: experts, who themselves belong to the disciplinary field of the entities they evaluate, know how to adapt this standard language to their specific field. These standards are also designed to assist research labs in writing their documents. “Observable facts” are those that have been most frequently identified by KEIS and its partners.
1. Criterion 1: Scientific production and quality
Field covered by the criterion
This criterion, which covers the production of knowledge, assesses discoveries, results, outputs and experimental facts leading to scientific achievements, with respect to the standards of the discipline and the research field. It also assesses the originality, quality and scope of research.
Observable facts
The main observable facts for this criterion are:
– Publications: articles in peer-reviewed journals, books, chapters, publication of texts (and especially critical editions), translations, published papers in conference proceedings, etc.;
– Lectures and other unpublished oral communications: oral presentations at conferences without published proceedings, conference posters, invited lectures, sets of slides, etc.;
– Other scientific outputs specific to the field: scientific or technical reports (such as excavation reports), exhibition catalogues, atlases, corpora, psychometric tests, demonstrations, software, prototypes, scientific audio-visual productions, research-based creative outputs, etc.;
– Instruments, resources and methodology: glossaries, databases, collections, cohorts, observatories, technological platforms, etc.
Quality indicators
The following quality indicators may be assessed:
– The originality and scope of research, and the importance of discoveries to the relevant field;
– Theoretical and methodological breakthroughs, paradigm shifts, the emergence of new problems or new avenues of investigation;
– The scientific impact within academia (citations, references, etc.);
– International or national recognition;
– The reputation and selectivity of the journals.
2. Criterion 2: Academic reputation and appeal
Field covered by the criterion
This criterion takes into account the lab’s ability to gain recognition from research communities and to acquire reputation and visibility. It also assesses the lab’s involvement in structuring scientific networks at the regional, national or international levels, and its capacity to be at the forefront of its field.
Observable facts
The facts to be taken into account in this criterion include:
– The participation in national and international collaborative research projects;
– National and international collaborations with other laboratories;
– The participation in national and international networks: EU networks such as JPI (Joint Programming Initiative) and COST (European Cooperation in Science and Technology), federated organisations (e.g. Maisons des sciences de l’homme), scientific societies, scientific programming communities, infrastructure organisations, etc.;
– The participation in the “Investissements d’avenir” programme: « Idex », « Labex », « Equipex »;
– The organisation of national and international symposia;
– The attractiveness for researchers, doctoral students and post-docs;
– Prizes and distinctions awarded to members of the entity, invitations to scientific events;
– The management of collections; the participation in editorial committees, in the scientific committees of symposia or conventions, and in scientific review bodies;
Quality indicators
The following quality indicators may be assessed:
– The coordination of – or participation in – international and national collaborative projects;
– Leading partnerships in networks, networks of excellence (e.g. REX), communities, project-promoting associations, infrastructures or centres of scientific or technical interest, at the international, national or regional level;
– The recruitment of high-level foreign researchers and postdoctoral students;
– Responsibilities in international academic bodies;
– The reputation of the prizes and distinctions awarded to members of the unit;
– The scientific quality of the peer review in journals and collections to which members of the entity contribute as editors;
– The selectivity and importance of scientific issues discussed at international events in which members of the unit participate or which they organise;
– The level and reputation of the journals to which members of the entity contribute.
3. Criterion 3: Interactions with the social, economic and cultural environment
Field covered by the criterion
This criterion is used to assess the different activities and achievements whereby research contributes to the innovation process and has an impact on the economy, society or culture.
Observable facts
The facts to be taken into consideration in this criterion correspond to outreach activities outside the research community. There are three types of facts.
– Outputs directed toward non-academic actors, such as:
– Articles in professional or technical journals, and reviews designed for non-scientific professionals;
– Study and review reports targeting public or private decision-makers; contributions to standards and guidelines (such as clinical protocols, or public consultations on the restoration and enhancement of archaeological heritage);
– Software, conceptual tools and models for decision-making;
– Patents and licences, as appropriate to the field, pilots or prototypes, processes, methods and know-how, clinical studies, registered trademarks;
– Documents in different formats and events (e.g. science fairs) contributing to the dissemination of scientific culture, continuous education and public debate;
– Commitment to partnerships and all other elements highlighting the interest and commitment of non-academic partners in the socio-economic or cultural field, such as:
– Involvement in technology transfer structures (Carnot institutes, clusters, technology units and networks, innovation clusters, citizens’ associations, etc.);
– Collaboration with cultural institutions (museums, libraries, academies, theatres and opera houses, etc.); participation in cultural events and heritage programmes;
– Management and openness to the public of documentary collections (specialized libraries, archives, digital resources);
– Contracts with non-academic partners (research, publishing contracts, consulting, jointly-funded theses, etc.) and joint responses to calls for proposals;
– Participation in scientific committees or steering committees of non-academic partners; the hosting of visiting non-academic professionals in the entity;
– Organisation of conferences, debates, fairs, exhibitions, seminars or training cycles for non-academic professionals or for social groups (patients, consumers, environment-protection associations, etc.);
– Appointment of lab members to national or international review panels (health agencies, international organisations, etc.);
– Impact of research and partnerships:
– Creation of, or contribution to, small companies and, more generally, participation in maintaining or developing employment in an economic sector;
– Innovations (new products, techniques and processes, etc.);
– Impact on public health, environment, territorial development, legislation, public debate, etc.;
– Creation of structures or new professional organisations;
– National, European or international regulations based on results or contributions from the research entity; the reviewing of the impact of technological innovations;
Quality indicators
The following quality indicators may be assessed:
– The originality of methods, products and technologies transferred (e.g. contribution to disruptive innovations);
– The relationship to the most recent scientific knowledge;
– The quality and success of dissemination (choice of medium, outcome for methods and products, impact on the intended target audience, connection with professional training, etc.);
– The existence of joint outputs with non-academic partners (jointly-authored articles, co-invented patents, etc.);
– The usefulness of transferred knowledge and technologies;
– The leadership of non-academic partners, innovative value-creating start-ups, etc.;
– The quality and duration of the partnerships;
– The impact on the economic, social or cultural position of partners; the impact on public policies;
– The impact on the emergence of innovation for the lab or for the scientific community;
– The accreditation or certification of procedures (ISO standards);
4. Criterion 4: Organisation and life of the unit
Field covered by the criterion
This criterion should be used to assess the operation, management and life of the entity. Among other things, it covers the organisation and material conditions of the scientific staff, the management of financial resources, the decision-making process, the existence of a scientific strategy, the use of tools for monitoring progress and, generally speaking, everything that contributes to the smooth operation of the entity and to its scientific production.
Observable facts
The facts to be taken into account in this criterion include:
– The objectives or scientific strategy for the past period;
– The organisation of the research entity into teams or themes;
– The existence of shared platforms or resources;
– The scientific coordination and interactions between teams, themes and disciplines;
– The reinforcement of scientific integrity;
– The decision-making process; the existence of a laboratory council, of an organisation chart and of lab rules;
– The role of engineers, technicians, administrative staff and temporary personnel;
– Internal and external communication;
– The recruitment policy;
– The approach to environmental, health and safety issues in research and training;
Quality indicators
The following quality indicators may be assessed:
– The achievement of past strategic objectives and the implementation of the scientific strategy;
– The extent to which the structure of the lab is based on a coherent scientific rationale;
– The accessibility of shared resources;
– The scientific coordination and animation, the incentive for the emergence of teams, themes or innovative programmes;
– The existence of lab notebooks, the surveillance of misconduct in data management, and the organisation of raw data storage (big data and others) and archiving;
– The criteria used for the designation of authors in publications, communications and patents; the banning of “complacent” signatures;
– The surveillance of plagiarism in publications and theses;
– The representation of personnel in lab steering committees, collegiality of decisions, frequency of meetings;
– The relevance of the budget distribution with respect to the lab’s scientific policy;
– The common facilities and equipment;
– The strategy for staff training and mobility;
– The clarity and communication of the scientific policy and of research programmes (regular updating of the website, newsletter, etc.);
– The appropriateness of the premises for the lab’s scientific activities and personnel;
5. Criterion 5: Involvement in training through research
Field covered by the criterion
This criterion should be used to assess the lab’s involvement in training through research, at both the Master’s and doctorate levels. This includes the lab’s impact on educational content, its support for Master’s and doctoral students, and its attractiveness for students.
Observable facts
The facts to be taken into account in this criterion include:
– The recruitment of Master’s degree trainees (M1 and M2) and doctoral students;
– The number of theses defended;
– The policy to support trainees and doctoral students (number of students per supervisor, funded doctorates, technical and financial support, scientific monitoring of students, thesis committees, etc.);
– The publications, summary documents, educational digital tools and products of trainees;
– The participation of the entity in the design and coordination of training modules and courses, and its contribution to the evolution of educational contents;
– The design and coordination of seminars for doctoral schools or summer schools; doctoral student seminars;
– The contribution to international training networks (ITN, Erasmus, etc.), co-supervision of theses with foreign universities or co-management with universities from other countries;
– The involvement of lab members in steering committees for Master’s and Doctorate training;
Quality indicators
The following quality indicators may be assessed:
– The effective support given to students and the quality of their supervision (duration of theses, drop-out rates, etc.);
– The quality of scientific outputs (articles, books, etc.) from completed theses;
– The monitoring of doctoral students (in coordination with doctoral schools) and the attention given to career opportunities for doctoral students;
– The existence of an internal process to ensure that the most recent scientific advances are integrated into teaching;
– The national or international certification of training (e.g. Erasmus Mundus);
– The relevance of dissemination media and vectors as well as the reputation (regional, national, international) of educational outputs;
– The presence of researchers at doctoral seminars;
– The participation of doctoral students in the life of the entity;
– The involvement and responsibility of lab members in international training networks;
– The researchers’ involvement in setting up Master’s training courses, in particular those coordinated or promoted by professors in the entity;
6. Criterion 6: Strategy and research perspectives for the next five years
Field covered by the criterion
This criterion should be used to assess the scientific quality of the projects and strategy of the entity and their relevance to the lab’s mission, the proposed modifications, and the strategy planned to achieve the objectives.
Observable facts
Two types of facts may be referred to: