Demarcation of the Field of E-Government Assessment

Author: Mimoza Bogdanoska Jovanovska
  1. Introduction

    E-government is a process of introducing information and communication technologies (ICT) in the public sector for the purpose of creating a flawless, responsive, and citizen-focused government by transforming the process of delivering on-line public services and by introducing a fundamental re-thinking of the way government departments and agencies work. This is a long-term process that requires large amounts of resources in each country, which implies that governments all over the world need 'to monitor the efficiency and effectiveness of public spending' (Codagnone and Undheim, 2008, p. 2) as well as determine the achieved level of e-government development. In turn, governments use those results to 'help politicians and other stakeholders compare their initiatives with similar ones in other countries, to make sure that their efforts are moving the government in the right direction' (Jansen, 2005, p. 1). Thus 'since the beginning of this century, e-government has become a particularly favored subject for benchmarking' (Bannister, 2007, p. 172).

    The need for measuring e-government maturity caught the attention of the research community, the consulting industry, policy makers, and public administration, and prompted them to start working in this field (Andersen and Henriksen, 2006, p. 236). The results of their efforts have contributed greatly to the creation of an extensive corpus of literature (Helbig et al., 2009) referred to as 'e-government benchmarking', 'e-government evaluation' or 'e-government assessment'. 'In much of the literature addressing e-Government the two terms 'evaluation' and 'measurement' are often used ambiguously and inconsistently, with the impression that the two may be synonyms' (Savoldelli, Misuraca and Codagnone, 2013, p. 373). The corpus became so large and important that many meta-analyses comparing different assessment efforts (studies) have been performed. This, in turn, sometimes resulted in criticism of this emerging field, primarily aimed at positively affecting its further development (Janssen, Rotthier and Snijkers, 2004; Kunstelj and Vintar, 2004; Heeks, 2006; Bannister, 2007; Ojo, Janowski and Estevez, 2007; Salem, 2008; Codagnone and Undheim, 2008; Berntzen and Olsen, 2009).

    The meta-analysis articles compare the assessment efforts according to different attributes: scope, type of measurement criteria and approaches used, concepts, taxonomy, conceptual issues and the symbiotic nature of benchmarking, evolution of indicators, and some of them even examine the forms of reporting (Bogdanoska Jovanovska and Todorovski, 2011). They also share common features: suggesting recommendations for further development of the field (on the positive side), and comparing a very limited number of studies while using the terms measurement, evaluation, and benchmarking ambiguously and interchangeably (on the negative side). The starting point of this paper is this observation about the limitations of the existing meta-analysis articles.

    In this paper I perform a demarcation of the field of 'e-government assessment' by providing a comprehensive overview, both in terms of including a large number of assessment efforts and of their profound critical analysis. The first step toward achieving this goal, presented in Section 2, is a general overview of the e-government assessment field that includes thirty-seven studies and articles and classifies them according to four new attributes. Section 3 then summarizes and discusses the results of the classification; finally, Section 4 concludes the article with a brief summary and a few recommendations for further research.

  2. Overview of the studies

    The overview of the literature of the so-called 'e-government benchmarking field' in this research encompasses thirty-seven studies and papers, presented in Table 1.

    As the diverse literature opens the possibility of creating various taxonomies, we present the trajectory of this field by taking into consideration the following attributes: authorship and commissioning (who makes it?); geographical scope and diversity (where does it take place?); frequency of publishing (how often is the measurement repeated?); and focus or subject of the study (what is being measured?). Only one of these four attributes (focus of the study) has already been used for making comparisons in the field by some of the authors of the meta-studies. Short comments on each attribute are given in the separate sub-sections that follow. The reason for this categorization derives from the idea that these attributes can be crucial for clarifying the problem that this paper tackles.

    2.1. Authorship and commissioning

    A careful review of the available literature through the prism of authorship, commissioning and financing of e-government maturity measurement shows that a wide range of different stakeholders take an enormous interest in this issue. These stakeholders can be classified into six clusters as follows: large international organizations that deal with e-government measurement globally, such as the UN or OECD, i.e., 'global projects conducted by international organizations' (Rorissa, Demissie and Prado, 2011, p. 3); global independent organizations such as WEF and EIU; multinational consulting companies such as Accenture, CGE&Y and TNS; academic institutions and their non-profit research centers, such as BU in the USA and WASEDA in Japan; national institutions or national associations for ICT in the public sector of a single country, such as OE, NAO and AO in the UK, the European Commission in Europe, AGIMO, ANAO and NOIE in Australia, Momentum in the USA, and Erin R. Inc. in Canada; and individual research groups such as those working with Bannister (2007), Heeks (2006), Salem (2008), Berntzen and Olsen (2007), Vintar, Kunstelj and Leben (2004), Janssen, Rotthier and Snijkers (2004), Ojo, Janowski and Estevez (2007), etc.

    These different types of stakeholders, who play the role of authors of the studies on measuring e-government maturity, have different financial sources and commissioners. Some possible arrangements are the following: partnership, where the authors prepare the study together (e.g., the UN works with ASPA, UNDPEPA, UNDESA); working on behalf of somebody else, where multinational for-profit consulting companies work independently or in collaboration with others (e.g., Accenture, or Capgemini working on behalf of the European Commission) in research and development projects (TietoEnator Trigon ABA, BISER, SIBIS, EuroBarometer, KEeLAN); and self-funded projects or the independent work of research groups.

    2.2. Scope and geographical diversity

    The studies that measure e-government maturity have different geographical diversity and scope. Most of them are international (truly global and partly global), but also there are a number of regional and national ones.

    The studies with an international scope that belong to the category of truly global include: the BU study (West, 2007), which covers 198 different nations; the UN study (United Nations, 2012) with 190 countries; the WEF study (Dutta and Mia, 2011) with 138 countries; the EIU study (Economist Intelligence Unit, 2010), which covers 70 countries; and TNS (Dexter and Parr, 2003), which covers 32 countries from 3 different regions (North America, Europe, Asia Pacific). The category of partly global studies comprises studies with a limited scope of countries included in the research: (a) countries from all over the world (e.g., Accenture (2007) covers 22 of the world's national governments); (b) countries from Europe (e.g., PLS RE A/S (2003), SIBIS (2003), CGE&Y (2004) and KEeLAN (2002)); and (c) states of the USA (e.g., Momentum (2000) is an example of research which covers only states of the USA).

    There is a very limited number of studies with a regional scope: BISER (2004), which covers 28 European regions, and the NCM (2002) study, which covers 5 countries in the region: Denmark, Finland, Iceland, Norway and Sweden.

    The national studies are numerous, created or commissioned by the national agencies of the respective countries, as is the case with Birch (2003), OE (2003), SOCITM (2004), NAO (2007) and Ipsos MORI (2010) in the UK; Erin R. Inc. (1998) in Canada; AO (Burgess and Houghton, 2002), NOIE (2001), NOIE and DRM (2003) and Gershon (2008) in Australia; and Momentum (2000) and Hart-Teeter (2003) in the USA.

    2.3. The frequency of publishing

    An examination of the publishing history of the studies yields an interesting insight: some of the studies have continuity, whereas others do not. The studies that measure e-government maturity continuously, comparing previous with subsequent measurement results and commenting on them, are presented in Figure 1. They were launched in the last decade of the past century, and all of them have had a rather long life span to date.

    In Figure 1, the x-axis lists the authors/organizations that publish the studies, and the y-axis shows the year of each study's first publication. A horizontal line indicates that the study has been published continuously from its first edition to date.

    According to the available literature, there are fourteen benchmarking studies created by different authors with different starting points (e.g., EuroBarometer, the first such study (since 1995), and WASEDA (since 2005), both of which produce benchmarking studies of e-government development). Most of the studies are produced on a yearly basis and are listed as annuals (Accenture, CGE&Y, WEF, EIU, WASEDA, and BU). However, one is biennial (Erin Research Inc.) and two of them are published twice a year (SOCITM and NOIE). The scope of most...
