Brief overview of methodology

The survey involves six countries: Hungary, Poland, Greece, Italy, France and Germany.

Its questionnaire has two main goals: first, to better understand attitudes to open and closed societies, and second, to examine how people respond when asked to weigh the merits of open societies against those of closed societies.

What makes a good society

In the first part of the survey respondents were presented with 14 statements, half associated with open society views and half with closed society ones. They were asked to rate each statement. For example:

  1. Should everyone be allowed to practise their religion freely?
  2. Should government ensure that media reporting always reflects a positive image of a country?


Trade-offs within a good society

For each of the seven open society statements in the first part, the second part showed two alternative statements and asked respondents to evaluate their relative importance for a good society. They were asked to choose one, the other or both.

For example, for the statement that everyone should be able to express their opinion freely, the alternatives were:

  1. That Christian values not be offended
  2. That ethnic and national minorities not be offended.

The survey also asked for views on immigration, civil society and political affiliation, with a few country-specific questions.

The development team

The research institute d|part developed the survey in close cooperation with OSEPI and the six country partners. The final survey was written and edited in English before being translated into the six relevant languages.

Fieldwork was done by Lightspeed Germany, and the survey was administered via an online panel representing the six countries. Each country had quotas for age, gender, geography, education and income levels. A soft-launch pilot was carried out with 50 respondents, and the full launch included more than 1,000 respondents in each country. The survey took place between February 12 and March 5, 2018.

The extended explanation

This note briefly summarises the background of the survey that was carried out to collect data on open society attitudes across the six countries in the project (Hungary, Poland, Greece, Italy, France and Germany). The questionnaire was developed to address two main areas of concern: first, to evaluate the composition of open and closed society attitudes conceptually, and second, to examine how individuals respond when asked to decide whether a particular open society attribute was more important than, less important than, or equally important as an attribute associated with closed societies.

Open society constructions

In the first part of the survey respondents were presented with 14 items (in random order) and asked to indicate how important they thought the respective item was for a good society. Half of the items (seven) were attributes defined as open society characteristics and half (seven) were attributes more closely associated with closed societies.

The data collected in this first section allows us to apply dimension reduction techniques to examine how the items relate to each other. The items were:

Based on the answer options we computed two scores, one for the rating of open society attributes and one for closed society attributes. To calculate each score, the order of the answer scale was reversed (so higher values indicated that an item was rated as more essential). The seven respective item scores were then added up, resulting in a score between 7 and 28, which we standardised to between 0 and 1. Each score thus measures how essential respondents rated the open or closed society attribute items respectively. A score of 0 means a respondent answered “not at all essential” to all seven items; a score of 1 means they answered “absolutely essential” to all seven items.
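As a minimal sketch of this scoring step: the 7-to-28 range implies each item contributes 1 to 4 points after reversal, so the code below assumes a four-point answer scale coded 1 (“absolutely essential”) to 4 (“not at all essential”); the exact coding is an assumption, not stated above.

```python
def society_score(raw_answers):
    """Standardised 0-1 score over seven attribute items.

    raw_answers: seven responses, assumed to be coded
    1 ("absolutely essential") to 4 ("not at all essential").
    """
    assert len(raw_answers) == 7
    reversed_answers = [5 - a for a in raw_answers]  # higher = more essential
    total = sum(reversed_answers)                    # ranges from 7 to 28
    return (total - 7) / 21                          # standardised to 0..1

# A respondent rating every item "absolutely essential" scores 1.0;
# one rating every item "not at all essential" scores 0.0.
open_score = society_score([1] * 7)
closed_score = society_score([4] * 7)
```

The same function would be applied once to the seven open society items and once to the seven closed society items, yielding the two scores described above.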


Trade-off experiments

In the second part of the survey respondents were presented with 14 direct comparisons between two items and asked to evaluate their relative importance. For each of the seven open society attributes from the first part of the survey, respondents were presented with two alternative items and asked to make a decision in each of those comparisons. The order in which the comparisons were presented was randomised. The full set of comparisons was as follows:

Table 2

Other questions

In addition to the instruments presented above, we also included a number of socio-economic questions and correlate questions about attitudinal domains, such as attitudes towards immigration, civil society and political affiliation, across all countries. Additionally, a few country-specific questions were added in each country to support the analyses within each country's specific context.


Survey development

The survey was developed by d|part's core team for the Voices on Values project, in close cooperation with all six country partners. After initial scoping of issues and instruments, a draft questionnaire was developed and then discussed in a workshop with representatives of all country partners, d|part and OSEPI. Based on that workshop, a second draft questionnaire was developed and went through further iterations with feedback from all partners. The final survey (in an English master version) was then translated by professional translators into the six country languages. Country partners checked the translations, in addition to checks carried out by the core team, and feedback was given to the translators to further improve the translations. Country partners also formulated draft questions specific to their own contexts, which were reviewed and edited by the core team before going through a final round of checks together with the master questionnaire.

Fieldwork and data

The fieldwork was carried out by Lightspeed Germany in close cooperation with the core team. The survey was administered through an online panel in all countries. The programming of the survey was pre-tested by several people in each country to check the user experience, correct routing and the implementation of the translations. Quotas for age, gender, geography, education and income, as well as several cross-quotas, were applied to achieve good representation in the samples. Quotas were only relaxed at later stages of the fieldwork, and only where they could not otherwise be filled adequately. Before commencing the fieldwork, a soft-launch pilot was carried out with 50 respondents in each country to test the survey instruments and check initial distributions and participation. Subsequently, the full launch took place with over 1,000 respondents recruited in each country. The survey was carried out between 12 February and 5 March 2018. Where achieved sample distributions deviated from actual population distributions, weights were calculated and applied to account for those deviations.
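The weighting step can be illustrated with a simple post-stratification sketch: each group's weight is its population share divided by its achieved sample share. The actual weighting cells and procedure used in the project are not specified above, and the group names and figures below are purely hypothetical.

```python
def post_stratification_weights(sample_counts, population_shares):
    """Per-group weight = population share / achieved sample share."""
    n = sum(sample_counts.values())
    return {group: population_shares[group] / (sample_counts[group] / n)
            for group in sample_counts}

# Hypothetical example: women slightly under-represented in the sample.
weights = post_stratification_weights(
    {"women": 450, "men": 550},    # achieved sample counts
    {"women": 0.51, "men": 0.49},  # population shares
)
# Applying these weights makes the weighted sample match the
# population distribution while keeping the total sample size.
```

In practice, weighting would be done over the full set of quota variables (and typically their cross-combinations), not a single variable as in this toy example.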