The Electronic Journal of Information Systems Evaluation provides critical perspectives on topics relevant to Information Systems Evaluation, with an emphasis on the organisational and management implications.

Journal Article

An Interactive and Iterative Evaluation Approach for Creating Collaborative Learning Environments  pp83-92

Anita Mirijamdotter, Mary M. Somerville, Marita Holst

© Nov 2006 Volume 9 Issue 2, Editor: Dan Remenyi, pp45 - 104


Abstract

Inspired by a three‑year Creative University 'arena' initiative at Luleå University of Technology in Sweden, an international team of faculty researchers conducted an exploratory study in 2005 that investigated the efficacy of an interactive design and evaluation process for technology‑enabled collaborative learning environments. The applied research approach was designed as a collaborative evaluation process for the co‑creation of technology‑enabled, learning‑focused physical and virtual 'learning commons.' Faculty researchers from Sweden and the United States used Soft Systems Methodology tools, including the Process for Organisational Meanings (POM) model, to guide sixty‑two students' participatory co‑design and evaluation activities. In this paper, the POM evaluation model is explained and related to the Japanese concept of Ba. Application of the models is illustrated within the context of student learning through boundary‑crossing information exchange and knowledge creation. As evidenced in their iterative and interactive evaluative recommendations, the students' learning outcomes included improved capabilities for identifying the socio‑technical elements of distributed learning environments, suggesting that student beneficiaries can reflect productively on their experiences and provide valuable evaluation insights. In addition, when the evaluation is iterative, students' insights into project management, software needs and services design can improve their technology‑enabled learning experiences. Concluding comments explore the efficacy of the POM model implementation for guiding other learning‑focused, user‑centric initiatives that aim to promote interdisciplinary, or boundary‑crossing, exchanges while advancing team‑based knowledge creation proficiencies among project participants.

 

Keywords: interactive formative evaluation, learning commons, soft systems methodology, process for organisational meanings (POM) model, Ba, higher education pedagogy

 


Journal Article

Measuring the Performance of the IT Function in the UK Health Service Using a Balanced Scorecard Approach  pp1-10

Maurice Atkinson

© Jan 2004 Volume 7 Issue 1, Editor: Dan Remenyi, pp1 - 66


Abstract

This paper explores how the Balanced Scorecard approach might be applied to measuring the performance of an IT department. Sample measures are developed for each dimension of the scorecard for two key IT functions, and a performance measurement record sheet is developed to show how these measures would work in practice. The paper also outlines approaches to implementing, monitoring and reviewing the measures, and identifies the benefits of such a performance management system and process.
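
The abstract does not reproduce the sample measures or the record sheet itself, but such a sheet is essentially a small table of measures with targets, actuals and review frequencies. A minimal sketch follows: the dimension names are Kaplan and Norton's four standard scorecard perspectives, while the help‑desk scenario, measures and targets are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Measure:
    """One row of a performance measurement record sheet (illustrative)."""
    dimension: str          # scorecard perspective the measure sits under
    name: str               # what is being measured
    unit: str
    target: float           # agreed performance target
    actual: float           # latest observed value
    frequency: str          # how often the measure is reviewed
    higher_is_better: bool = True

    def on_target(self) -> bool:
        # Direction matters: for cost-type measures, lower is better.
        if self.higher_is_better:
            return self.actual >= self.target
        return self.actual <= self.target

# Invented sample measures for an IT help-desk function.
record_sheet = [
    Measure("Customer", "Calls resolved at first contact", "%", 80.0, 84.5, "monthly"),
    Measure("Internal process", "Incidents closed within SLA", "%", 95.0, 91.2, "monthly"),
    Measure("Learning and growth", "Staff with current certification", "%", 70.0, 73.0, "quarterly"),
    Measure("Financial", "Cost per resolved incident", "GBP", 25.0, 22.4, "quarterly",
            higher_is_better=False),
]

for m in record_sheet:
    status = "on target" if m.on_target() else "off target"
    print(f"[{m.dimension}] {m.name}: {m.actual} {m.unit} vs {m.target} {m.unit} ({status})")
```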

 

Keywords: Information Technology, Balanced Scorecard, Performance Measurement

 


Journal Article

Empirical Study on Knowledge Based Systems  pp11-20

Gabriela Avram

© Jan 2005 Volume 8 Issue 1, Editor: Dan Remenyi, pp1 - 80


Abstract

Knowledge‑based systems (KBSs) implement heuristic human reasoning through specific techniques, procedures and mechanisms in order to solve problems that have no traditional algorithmic solution. Research on this topic is being done in numerous organisations all over the world, from higher education laboratories to research institutes and software development organisations. A first research project, aimed at gathering information about the State‑of‑the‑Practice in building knowledge‑based systems with practical applications, needed a preliminary study to ascertain whether KBSs still exist today as a research topic or whether interest in them has actually faded. The study was also required for finding organisations currently building KBSs for different domains. The project's aim was to catalogue the software and/or knowledge engineering methods employed by the listed organisations, in order to draw a comprehensive picture (State‑of‑the‑Practice) of the field. The current paper contains the results of this preliminary study only. A second research project re‑used the results of the preliminary study, focusing on successful KBS implementations as a basis for building a method that would allow practitioners to choose the most appropriate knowledge management tools for each organisation's specific problems and situations. A trigger for this second project was the interest in studying the causes of KBS rejection by end‑users. An attempt to map the identified applications of KBSs to different phases of the knowledge management lifecycle is also presented.

 

Keywords: knowledge-based systems, taxonomy, success, failure, knowledge management tools

 


Journal Article

Internet Banking in Brazil: Evaluation of Functionality, Reliability and Usability  pp41-50

Eduardo Diniz, Roseli Morena Porto, Tomi Adachi

© Jan 2005 Volume 8 Issue 1, Editor: Dan Remenyi, pp1 - 80


Abstract

Evaluating the performance of business Web sites has been a constant concern of researchers in different fields. This article presents an approach that contributes to the development of a methodology to assist researchers, developers and managers in establishing criteria to evaluate and build digital business environments. Based on a multiple case study of three large banks in Brazil, the article proposes and tests a three‑dimensional model for evaluating virtual business environments from the user's point of view: functionality, which evaluates the profile of services offered; reliability, which investigates the security of the transactional site; and usability, which evaluates the quality of the user's interaction with the site.
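
The three dimensions suggest a straightforward scoring scheme. The sketch below is not the authors' instrument: the criteria, the 1 to 5 ratings and the equal weighting are all assumptions, shown only to illustrate how per‑dimension user ratings might roll up into an overall site evaluation.

```python
# Illustrative roll-up of user ratings along the paper's three dimensions.
# The criteria, ratings, and equal weighting are invented for this sketch.
ratings = {
    "functionality": {"bill payment": 4, "transfers": 5, "statements": 3},
    "reliability":   {"authentication": 4, "session security": 5, "availability": 4},
    "usability":     {"navigation": 3, "error messages": 2, "help content": 4},
}

def dimension_score(criteria: dict[str, int]) -> float:
    """Average the 1-5 ratings given to a dimension's criteria."""
    return sum(criteria.values()) / len(criteria)

scores = {dim: dimension_score(c) for dim, c in ratings.items()}
overall = sum(scores.values()) / len(scores)  # equal weights: an assumption

for dim, s in scores.items():
    print(f"{dim}: {s:.2f} / 5")
print(f"overall: {overall:.2f} / 5")
```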

 

Keywords: internet banking, banking technology, usability, security, Internet

 


Journal Article

Using the Balanced Scorecard to Evaluate ICT Investments in Non profit Organisations  pp107-114

Renata Paola Dameri

© Sep 2005 Volume 8 Issue 2, Editor: Dan Remenyi, pp81 - 142


Abstract

For nonprofit organizations (NPOs), ICT is crucial to fulfilling their social objectives. However, ICT investments rarely have monetary returns, and ICT also has an indirect impact on the social activity of NPOs. It is therefore very difficult for them both to decide about ICT investments and to evaluate the contribution of ICT to performance. NPOs should therefore define an appropriate evaluation framework to understand whether, where, in what and how much to invest in ICT so as to better achieve their mission. The evaluation framework described in this paper is based on the peculiar characteristics of nonprofit organizations, on multidimensional evaluation criteria and on the balanced scorecard, adapted to the specific nature of nonprofit activities.
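
To make the idea of multidimensional investment appraisal concrete, here is a minimal sketch of ranking candidate ICT investments by a weighted score. The criteria, weights and candidate systems are invented assumptions; the paper's framework adapts the balanced scorecard to nonprofit activities rather than prescribing these particular numbers.

```python
# Invented criteria and weights for an NPO's ICT investment decision.
criteria_weights = {
    "mission impact": 0.40,            # contribution to social objectives
    "beneficiary reach": 0.25,
    "volunteer/staff usability": 0.20,
    "financial sustainability": 0.15,
}

# Invented candidate investments, scored 1-5 against each criterion.
candidates = {
    "donor management system": {"mission impact": 3, "beneficiary reach": 2,
                                "volunteer/staff usability": 4, "financial sustainability": 5},
    "case management system":  {"mission impact": 5, "beneficiary reach": 4,
                                "volunteer/staff usability": 3, "financial sustainability": 2},
}

def weighted_score(scores: dict[str, int]) -> float:
    """Combine per-criterion scores into one weighted figure."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

# Rank candidates from best to worst weighted score.
for name, scores in sorted(candidates.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```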

 

Keywords: nonprofit, investment decisions, balanced scorecard, multidimensional evaluation

 


Journal Article

Common Gaps in Information Systems  pp123-132

Juha Kontio

© Sep 2005 Volume 8 Issue 2, Editor: Dan Remenyi, pp81 - 142


Abstract

This multiple case study evaluates information systems and databases in six Finnish organizations. The main idea of the research was to describe the principal gaps in the case organizations' information systems, and in each case the gaps are presented with authentic descriptions. The research identified seven categories of gaps in all. These are first abstracted into four common categories: 1) data, 2) infrastructure, 3) turning data into information and 4) people working with the information systems. Finally, the four categories are further abstracted into two common categories: 1) information and 2) infrastructure.
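
The two‑step abstraction is easy to picture as a pair of mappings. Note the hedges: the seven specific categories are not enumerated in the abstract, and the abstract also does not state which of the four common categories folds into which of the final two, so the assignment below is purely an assumption for illustration.

```python
# The four common gap categories named in the abstract, folded into the
# two top-level categories. Which category maps where is NOT stated in
# the abstract; this particular assignment is an assumption.
four_to_two = {
    "data": "information",
    "turning data into information": "information",
    "people working with the information systems": "information",
    "infrastructure": "infrastructure",
}

def abstract_gap(category: str) -> str:
    """Lift a common gap category to its top-level category."""
    return four_to_two[category]

# An invented observed gap, tagged with one of the four common categories.
gap = {"description": "Sales figures re-keyed by hand between systems",
       "category": "turning data into information"}
print(abstract_gap(gap["category"]))  # -> "information"
```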

 

Keywords: Information Systems, IS-Gaps, Databases, Case Study

 


Journal Article

An Evaluation Framework for the Acceptance of Web‑Based Aptitude Tests  pp151-158

Michael Amberg, Sonja Fischer, Manuela Schröder

© Jan 2006 Volume 8 Issue 3, ECITE 2005 Special, Editor: Dan Remenyi, pp143 - 230


Abstract

Aptitude tests analyse a person's aptitude for studying at a specific school or university, or for working within a specific company. Owing to recent technological advances, web‑based solutions are increasingly used to implement aptitude tests. Web‑based aptitude tests suit fairly standardised test methods and can test a large number of users. Because web‑based aptitude tests are becoming more and more common, high user acceptance is important, especially since test results tend to be taken more seriously. Furthermore, the design of the test should be helpful and support its use. In this context, the target of our research is to provide a framework for evaluating the user acceptance of web‑based aptitude tests. The research method is based on an exemplary web‑based aptitude test and includes the following steps. Firstly, we used the Dynamic Acceptance Model for the Re‑evaluation of Technologies (DART) as a basis for the adoption of web‑based aptitude tests; DART is an instrument designed for analysing and evaluating the user acceptance of innovative technologies, products or services. Based on a literature review and expert interviews, we identified the most important acceptance indicators. In a next step, we evaluated the defined acceptance indicators in a survey with test persons who carried out one selected web‑based aptitude test. Afterwards, we analysed the reliability and validity of the developed evaluation framework. The results show that a detailed analysis of the influencing factors is generally possible with DART, and that this approach helps to define a balanced set of measurable acceptance indicators for evaluating user acceptance. Finally, we describe lessons learned and the ongoing work on measuring the acceptance of web‑based aptitude tests.
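
The last two steps, scoring the indicators from survey responses and then checking the reliability of the scale, can be sketched in a few lines. This is not the DART instrument itself: the indicator names and the response matrix below are invented, and Cronbach's alpha is used here simply as the conventional internal‑consistency statistic.

```python
from statistics import mean, variance

# Invented 1-5 ratings: rows are respondents, columns are acceptance
# indicators (placeholder names, not DART's own indicators).
responses = [
    [4, 5, 4, 3],
    [3, 4, 4, 4],
    [5, 5, 4, 4],
    [2, 3, 3, 2],
    [4, 4, 5, 4],
]
indicators = ["ease of use", "perceived fairness", "clarity", "trust in results"]

# Mean rating per indicator.
for name, col in zip(indicators, zip(*responses)):
    print(f"{name}: {mean(col):.2f}")

def cronbach_alpha(rows: list[list[int]]) -> float:
    """Internal-consistency reliability of the indicator set."""
    k = len(rows[0])
    item_vars = [variance(col) for col in zip(*rows)]   # per-indicator variance
    total_var = variance([sum(r) for r in rows])        # variance of total scores
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```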

 

Keywords: evaluation framework, web-based aptitude test, user acceptance, DART approach

 


Journal Article

Designing a Process‑Oriented Framework for IT Performance Management Systems  pp211-220

Sertac Son, Tim Weitzel, Francois Laurent

© Jan 2006 Volume 8 Issue 3, ECITE 2005 Special, Editor: Dan Remenyi, pp143 - 230


Abstract

This paper surveys the concepts and frameworks that currently exist for measuring the performance of the IT department and the IS services it delivers. We discuss how a performance management system might be designed and implemented with the purpose of monitoring and improving the IT function. A performance metrics catalogue has been elaborated to document the individual metrics and to enable a common understanding of them. Finally, the paper provides lessons learned and some recommendations for further research in the area of IT performance management.
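
A metrics catalogue of the kind described is essentially a set of uniformly documented metric definitions. A minimal sketch follows, with field names and a sample metric that are assumptions rather than the paper's actual catalogue layout.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricDefinition:
    """One catalogue entry documenting a single IT performance metric.

    The fields are illustrative; the paper's actual catalogue layout
    is not reproduced in the abstract.
    """
    name: str
    definition: str      # shared wording, so everyone measures the same thing
    unit: str
    data_source: str     # where the raw numbers come from
    owner: str           # who is accountable for the metric
    target: str          # agreed service level

# An invented sample entry.
catalogue = [
    MetricDefinition(
        name="Service availability",
        definition="Share of agreed service hours during which the service was usable",
        unit="%",
        data_source="monitoring system logs",
        owner="IT operations manager",
        target=">= 99.5% per month",
    ),
]

for entry in catalogue:
    print(f"{entry.name} ({entry.unit}), target {entry.target}")
```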

 

Keywords: Performance metrics, balanced scorecard, causality, performance manager, accounting

 
