The methodology

General guidelines and design principles

· The classification is based on empirical data

· The classification is based on a multi-actor and multidimensional perspective

· The classification is non-hierarchical

· The classification is relevant for all higher education institutions in Europe

· The classification is descriptive, not prescriptive

· The classification is based on reliable and verifiable data

· The classification is parsimonious regarding extra data collection

What are the relevant characteristics?

The first step is to identify which entities are to be classified. The European higher education classification addresses individual European higher education institutions as the entities to be classified. This implies that only those institutions are accepted that offer at least one program that is accredited or recognized by a nationally recognized accreditation authority or by a government body.

The next step is to identify the relevant and adequate grouping criteria (dimensions and indicators). “The secret to successful classification is the ability to ascertain the key characteristics on which the classification is to be based” (Bailey 1994, p. 2). The choice of dimensions and underlying indicators should allow users of the classification to group the entities the way they want. The more dimensions are selected, the more detailed the description and grouping of the entities can be. This has a downside, however: a larger number of dimensions means less reduction of complexity, resulting in an instrument that is less manageable. There is no “objective” standard for the optimal number of dimensions, but “no more than seven dimensions” is an often-used rule of thumb.

The selection of dimensions and indicators has been an iterative process in which the main stakeholders and the project team exchanged their views. Feasibility, validity, and most of all legitimacy have been key criteria in that process. In the second stage of the project, the dimensions and indicators selected were tested in a pilot survey among 85 higher education institutions.

For an overview of the dimensions and indicators see list of dimensions and indicators.

For a detailed overview of the indicators and underlying data-elements see detailed list of indicators.

How to get information on those characteristics?

So far, no overall Europe-wide data system exists in European higher education. As a consequence, in order to use the classification, the data have to be provided by the higher education institutions themselves. A questionnaire was developed through which an institution can provide the information required for classification.

The design principle of parsimony underlines that the extra burden this creates should be kept to a minimum. Therefore the “pre-filling” of questionnaires for higher education institutions was piloted. In pre-filling, data that are already available in national databases are entered into the questionnaire before it is sent to the higher education institution. The institution checks the pre-filled data and completes the questionnaire. Such a pre-filling approach would substantially limit the data-provision burden for institutions involved in the use of the classification.
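To make the idea concrete, a minimal sketch of pre-filling is shown below. The field names, record layout and helper function are hypothetical illustrations, not the actual U-Map questionnaire format or data model.

```python
# Hypothetical sketch of pre-filling a questionnaire from a national database.
# Field names and the record layout are invented for illustration.

national_db = {
    "HEI A": {"total_enrolment": 21500, "degree_levels_offered": ["BA", "MA", "PhD"]},
}

questionnaire_fields = ["total_enrolment", "degree_levels_offered", "income_from_region"]

def prefill(institution, db, fields):
    """Fill in whatever is already known nationally; leave the rest for the institution."""
    known = db.get(institution, {})
    return {field: known.get(field) for field in fields}

form = prefill("HEI A", national_db, questionnaire_fields)
# -> {'total_enrolment': 21500, 'degree_levels_offered': [...], 'income_from_region': None}
# The institution then checks the pre-filled values and completes the missing ones.
```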

How to communicate the results?

Classification classes

The questionnaire feeds into a database in which the data on the data-elements are stored. Checks on consistency will be performed and follow-up questions and clean-ups will be done when needed.

The information on the data-elements will be used to calculate the scores on the indicators. This results in scores that constitute a continuous scale for each indicator.

The next step is crucial for the classification: the identification of the classification categories for each indicator. The continuous range of values for each indicator needs to be reduced to a small number of categories. The categories are defined by empirically determined cut-off points. The standard number of classes used in the U-Map classification is four. The cut-off points between these four classes are set by the quartile scores of all higher education institutions in the database. If an institution’s score is in the first quartile, it is put into class one, and so on. This means that an institution’s position is a relative one: it says something about what that institution has done compared to the other participating higher education institutions. It also implies that the relative position may change as the number of participating institutions grows.
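A minimal sketch of this quartile-based assignment is given below, assuming indicator scores are available as plain numbers; the function names and example scores are illustrative and not part of the U-Map toolset.

```python
# Minimal sketch of quartile-based class assignment (illustrative only;
# names and data are hypothetical, not part of the U-Map toolset).

def quartile_cutoffs(scores):
    """Return the three cut-off points (Q1, Q2, Q3) for a list of indicator scores."""
    ordered = sorted(scores)
    n = len(ordered)

    def quantile(q):
        # simple linear interpolation between ranked scores
        pos = q * (n - 1)
        lower = int(pos)
        upper = min(lower + 1, n - 1)
        frac = pos - lower
        return ordered[lower] + frac * (ordered[upper] - ordered[lower])

    return quantile(0.25), quantile(0.50), quantile(0.75)

def assign_class(score, cutoffs):
    """Map a continuous indicator score onto one of four classes."""
    q1, q2, q3 = cutoffs
    if score <= q1:
        return 1
    elif score <= q2:
        return 2
    elif score <= q3:
        return 3
    return 4

# Example: scores of participating institutions on one indicator
scores = [0.12, 0.34, 0.45, 0.51, 0.63, 0.70, 0.88, 0.95]
cutoffs = quartile_cutoffs(scores)
classes = {s: assign_class(s, cutoffs) for s in scores}
```

Because the cut-off points are recomputed from whatever institutions are in the database, an institution's class can shift when new institutions join, which is exactly the relativity discussed above.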

In the start-up phase of the classification the number of participating higher education institutions will be small, which may lead to significant shifts in quartile scores when new institutions enter the database. To avoid possible negative consequences of such instability of the classification, two decisions were made. First, the results of the classification will be presented in an anonymised way as long as the number of participating higher education institutions is below a minimum threshold. This threshold is set at 150 institutions. Second, during the start-up phase the cut-off points are set by the project team. The project team uses the empirical information from the survey organized in the second project phase, as well as other empirical information and its expertise, to reach those decisions.

Once the U-Map classification has passed the start-up phase, the cut-off points will be determined empirically and be revised every three years.

ProfileFinder

The ProfileFinder is an instrument to identify subsets of higher education institutions within the whole set of institutions included in the classification. The basic idea is that the diversity within the subset is smaller than within the full population of higher education institutions. To achieve this reduction in diversity, the user of the ProfileFinder selects a number of selection criteria. Only those higher education institutions that match these user-defined criteria end up in the subset. The names of the classes are descriptive, to avoid implicit (vertical) sorting.

In principle, the user may use any number and combination of selection criteria. In practice, however, using more than four or five criteria significantly increases the probability of ending up with an empty subset.
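As an illustration, the sketch below shows how such a selection amounts to a simple conjunctive filter: every criterion must match, so each added criterion shrinks the subset. The criteria names, class labels and data structures are invented for this example and are not the U-Map data model.

```python
# Hypothetical sketch of a ProfileFinder-style conjunctive filter.
# Institution records and criteria names are invented for illustration.

institutions = [
    {"name": "HEI A", "doctorate_intensity": "major", "international_orientation": "substantial"},
    {"name": "HEI B", "doctorate_intensity": "none", "international_orientation": "major"},
    {"name": "HEI C", "doctorate_intensity": "major", "international_orientation": "major"},
]

def profile_finder(institutions, criteria):
    """Return only the institutions matching every user-defined criterion."""
    return [
        hei for hei in institutions
        if all(hei.get(indicator) == wanted_class for indicator, wanted_class in criteria.items())
    ]

subset = profile_finder(institutions, {"doctorate_intensity": "major"})
# -> HEI A and HEI C; adding more criteria shrinks (and may empty) the subset
```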

ProfileViewer

The ProfileViewer is a way to visualize the results of the classification. Visualisation may help to convey the information to a broader audience and to characterize different higher education institutions at a glance.

Although no strict list of criteria was drafted, a few general guidelines were used to provide some direction. First of all, a visualization should be ‘intuitively readable’: it should not require too much explanation. The rationale for a visualization is that it conveys complex information in a ‘user-friendly’ way; extensive ‘user manuals’ therefore need to be avoided. In addition, a visualization should allow for both an unambiguous presentation of one profile and an easy comparison of two or more profiles. It is expected that users will use ‘their own’ institution as a benchmark. The visualization then has to show what the institution looks like. It has to be an image that is easily understood and communicated, which also means that it must be suited to different media: it should be able to convey its message in print as well as in an interactive, web-based way. For positioning a higher education institution it is also crucial that the profiles of two or more institutions can be compared in an easy and intuitive way. And last but not least: it should be appealing and intriguing, inviting the user to dig deeper and learn more about the institutions.
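One possible way to compare two profiles is sketched below, purely as an illustration of the comparison idea rather than the actual ProfileViewer design; the dimension names and class scores are invented.

```python
# Hypothetical profile comparison chart (not the actual U-Map ProfileViewer);
# dimension names and class scores are invented for illustration.
import matplotlib.pyplot as plt
import numpy as np

dimensions = ["Teaching", "Research", "Knowledge exchange",
              "International orientation", "Regional engagement"]
hei_a = [4, 2, 3, 4, 1]   # class (1-4) per dimension for institution A
hei_b = [2, 4, 2, 3, 3]   # class (1-4) per dimension for institution B

y = np.arange(len(dimensions))
height = 0.35

fig, ax = plt.subplots()
ax.barh(y - height / 2, hei_a, height, label="HEI A")
ax.barh(y + height / 2, hei_b, height, label="HEI B")
ax.set_yticks(y)
ax.set_yticklabels(dimensions)
ax.set_xlabel("Class (1-4)")
ax.set_xlim(0, 4)
ax.legend()
plt.tight_layout()
plt.show()
```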
