A longitudinal explanatory case study of coordination in a very large development programme: the impact of transitioning from a first- to a second-generation large-scale agile development method

  • Open access
  • Published: 08 November 2022
  • Volume 28, article number 1 (2023)


  • Torgeir Dingsøyr   ORCID: orcid.org/0000-0003-0725-345X 1 , 2 ,
  • Finn Olav Bjørnson   ORCID: orcid.org/0000-0002-9111-7241 1 ,
  • Julian Schrof   ORCID: orcid.org/0000-0002-7974-1586 3 &
  • Tor Sporsem   ORCID: orcid.org/0000-0002-5230-7480 4  


Large-scale agile development has gained widespread interest in the software industry, but it is a topic with few empirical studies of practice. Development projects at scale introduce a range of new challenges in managing a large number of people and teams, often with high uncertainty about product requirements and technical solutions. The coordination of teams has been identified as one of the main challenges. This study presents a rich longitudinal explanatory case study of a very large software development programme with 10 development teams. We focus on inter-team coordination in two phases: one that applies a first-generation agile development method and another that uses a second-generation one. We identified 27 coordination mechanisms in the first phase, and 14 coordination mechanisms in the second. Based on an analysis of coordination strategies and mechanisms, we develop five propositions on how the transition from a first- to a second-generation method impacts coordination. These propositions have implications for theory and practice.


1 Introduction

Coordination is a fundamental challenge in software engineering. Kraut and Streeter ( 1995 , p. 69) stated that ‘ While there is no single cause of the software crisis, a major contribution is the problem of coordinating activities while developing large software systems’. In software development, a multitude of dependencies must be managed in a context with high uncertainty about products and technology. Previous studies have focused on coordination in traditional software projects, in global software development and, recently, in agile development.

In the mid-2000s, software engineering research focused on global software engineering, in which coordination amongst distributed teams was a key challenge. The congruence between dependencies and coordination actions is critical both in well-known contexts and in contexts with high uncertainty (Cataldo and Herbsleb 2012 ). However, an open question concerns what practices are effective. In the paper entitled Global Software Engineering: The Future of Socio-technical Coordination , Herbsleb ( 2007 , p. 9) stated that while ‘ we currently have a number of individual solutions, such as tools, practices, and methods, … we understand as yet very little about the tradeoffs among them, and the conditions of their applicability ’.

In recent years, software engineering research has concentrated mainly on agile software development methods (Dingsøyr et al. 2012 ; Hoda et al. 2018 ), in which development is organized as teamwork. Pries-Heje and Pries-Heje ( 2011 ) attributed the success of the agile method Scrum to its flexible and efficient coordination structures, its shared list of work tasks in a product backlog and sprint backlog, daily meetings within the team and the use of a visual board to show the status of work. Strode et al. ( 2012 ) proposed a coordination model for co-located agile teams, with a focus on synchronization within an agile team, proximity that allows for face-to-face communication and activities targeted at external stakeholders, which they referred to as boundary spanning.

Large IT projects with 10 or more development teams are increasingly using agile methods. Empirical studies show challenges such as coordination breakdowns (Bick et al. 2018 ), lack of awareness and a mismatch between the advice in methods and coordination needs over time (Dingsøyr et al. 2018c ). Dependencies undermine autonomy, which is essential for agile development teams (Biesialska et al. 2021 ).

Existing theory is not sufficient to explain coordination in this context, as large-scale agile development has characteristics that differ from those of traditional organizations and distributed development in terms of relying on oral communication, working in teams and frequent changes in coordination mechanisms over time (Dingsøyr et al. 2018a ). A systematic literature review on large-scale agile methods reports coordination challenges in large-scale agile development, including synchronizing teams, dealing with communication overload and reducing external distractions (Edison et al. 2021 ).

Large-scale agile development projects are critical for organizations, representing significant costs and risks. Coordination is critical for project success and on-time delivery (Kula et al. 2021 ). The scientific community must provide insight into and advice on coordination in this particular context. Strategies for coordination are described in development methods, and improving our understanding of the effectiveness of these approaches and in which contexts they are effective is essential.

Today, many organizations are changing their approach to large-scale development from what we will define as first-generation large-scale agile methods (Section 2.2.1 ), which combine practices from project management and agile methods, to more tailored second-generation large-scale agile development methods (Section 2.2.2 ), which replace practices from project management with practices tailored for managing software development. This change leads to a different approach to coordination, replacing previous solutions, practices and tools. Understanding how the new generation of methods impacts project success is critical. This article focuses on coordination as a significant factor influencing overall project success. More precisely, we examine coordination between teams, which is described in the literature on large-scale agile development as inter-team coordination (Edison et al. 2021 ).

In the following, we present a study of a very large development programme at the Norwegian Labour and Welfare Administration (NAV) with a total cost of about EUR 75 million. The programme, which developed a new solution to automatically process applications for parental benefits, lasted from 2016 until 2019 and had 10 teams working in parallel on development for a long period. We will describe two phases of development, in which a first-generation large-scale agile method was used in the first phase and a second-generation large-scale agile method was applied in the second. We answer the following research question:

How is the inter-team coordination strategy impacted by a change from the first- to second-generation large-scale agile development methods?

This study makes the following three contributions to the literature on coordination in large-scale agile development:

Provide a rich empirical description of coordination in a large-scale agile development programme

Provide a conceptualisation of methods for large-scale agile development from the first to the second generations

Develop a novel theory on the impact of the transition from the first- to second-generation methods on coordination

For the first contribution, a rich description enables readers to make up their own minds about what is relevant to their own situation, gives readers more background for understanding the context of the findings and broadens the possible uses of the study, for example in teaching, where students need to build an understanding of industry practice. The second contribution will primarily help the scientific community distinguish between the different types of large-scale development methods studied. For the third contribution, in software engineering, ‘ we have very few explicit theories [that can] explain why or predict that one method … would be preferable to another under given conditions ’ (Johnson et al. 2012 ). In particular, there are few theories with an empirical basis (Sjøberg et al. 2008 ); indeed, ‘ most studies in software engineering pay little or no attention to theory development, and very few studies are based on existing theory ’ (Stol et al. 2016 ). By developing novel propositions, we contribute towards a theory for understanding the impact of large-scale agile methods on coordination.

Section 2 presents the background on large-scale agile development, the definitions of first- and second-generation large-scale agile development methods and an up-to-date review of previous relevant work on coordination, organized according to a model of coordination effectiveness. Section 3 describes the design of the longitudinal explanatory case study, while Section 4 provides a rich description of the programme organization and the findings on coordination for each phase. Section 5 presents coordination in the two phases and develops five propositions to answer the research question (shown in Table 10 ). We also discuss the main limitations. In Section 6 , we conclude, show implications for theory and practice and suggest further work.

2 Large-Scale Agile Development and Coordination

Large-scale agile development has drawn significant interest from practitioners (Dingsøyr et al. 2019b ) and researchers (Edison et al. 2021 ; Uludağ et al. 2021 ), and several new methods, such as the Scaled Agile Framework (SAFe), Large-Scale Scrum (LeSS) and the Spotify model, have been proposed.

We first describe large-scale agile development and first- and second-generation methods. Section 2.2 focuses on coordination: its definition, mechanisms for coordination and a coordination model. We also introduce coordination effectiveness and strategy (the choice of coordination mechanisms). Then, we present prior studies on small- and large-scale coordination. Section 2.3 provides research findings on the coordination mechanisms used in large-scale agile development. The presentation is organized according to three coordination modes (groups of mechanisms), which are described in Section 2.2.1 .

2.1 Large-Scale First- and Second-Generation Agile Methods

Large-scale agile development projects or programmes typically involve many developers, many interdependencies and large products, which take a significant time to complete at a substantial cost (Rolland et al. 2016 ). Dikert et al. ( 2016 , p. 88) defined large-scale agile development as involving ‘software development organisations with 50 or more people or at least six teams’. We use the term ‘very large-scale agile development’ to describe ‘agile development efforts with ten or more teams’ (Dingsøyr, Fægri, and Itkonen 2014 ). If each team has seven members, the project will involve 70 team members and will have the characteristics described above. In these projects, most of the challenges associated with scale become evident. We use the term ‘programme’ to refer to a collection of related projects.

There is a growing academic literature on large-scale agile development after it appeared as a new topic in the discourse on agile development in the mid-2000s (Hoda et al. 2018 ). A literature review identified 191 studies, which were mostly experience reports (Edison et al. 2021 ). The review shed light on the underlying reasons for the interest in large-scale agile development: the need for alignment and cohesion across many teams, interdependencies between software development and other organizational functions, and the trend towards product delivery at scale.

In the special issue on large-scale agile development in IEEE Software , Dingsøyr et al. ( 2019b ) described two waves of development methods. We think that referring to these waves as the first and second generations of large-scale agile development methods is conceptually clearer, because a generation represents a more fundamental change that persists when the next generation arrives, while waves are short-lived.

Early advice on agile methods indicated that they were best suited for co-located teams developing software that was not safety critical (Williams and Cockburn 2003 ). For larger development efforts, Boehm and Turner ( 2003 ) recommended balancing traditional and agile development methods.

2.1.1 First-Generation Methods

First-generation large-scale agile development methods combine agile methods at the team level with traditional project management frameworks, such as the Project Management Body of Knowledge (Duncan 2017 ) or Prince2 (Bentley 2010 ). Many refer to these methods as hybrid approaches (Bick et al. 2018 ). The project management framework wraps the development process in traditional engineering approaches, which can serve as an interface to a more traditionally minded organization or customer. These frameworks are process centric, rely on formal communication and individual roles, divide work into phases as in the waterfall model and are oriented towards a bureaucratic organization (Nerur et al. 2005 ).

An example is the first published case study on large-scale agile development, which showed a combination of the Project Management Body of Knowledge with the agile method Scrum (Batra et al. 2010 ). This project for an American cruise company had a final cost of USD 15 million and involved 60% changes in requirements during execution, but it was still able to deliver in terms of time, cost and quality. The study pointed out the need for structure in the project management framework because the project was large, strategically important, time critical and distributed, while the combination with agile methods was necessary to handle unforeseen events and changes in requirements.

Another example showing how a model inspired by Prince2 was combined with Scrum is a Norwegian State Pension Fund programme with a total cost of around EUR 140 million. The programme was organized into four main projects: an architecture project, a business project, a development project and a test project. At most, 12 development teams worked in parallel, with the releases organized into the phases of needs analysis, solution description, construction and approval (Dingsøyr et al. 2018b ). A team would often work on three releases in parallel, one under approval, another under construction and a third being planned. Scrum practices were followed at the team level, such as sprint planning, daily meetings, sprint backlog and team retrospectives. Demonstrations were held every three weeks in one meeting for all teams. The programme developed around 2500 user stories, organized into about 300 epics and with 12 releases.

2.1.2 Second-Generation Methods

In recent years, we have seen what we call a second generation of large-scale agile development methods, in which much of the advice from project management frameworks is replaced by lessons learned from digital product development. These methods include SAFe, Scrum-at-scale, Disciplined Agile Delivery, LeSS, and the Spotify model (Dingsøyr et al. 2019a ; Edison et al. 2021 ). In contrast to first-generation methods, these approaches embrace ideas from the agile community and bring in new insights from lean product development. They focus more on the product than the process, making greater use of informal communication, an evolutionary delivery model and an organic organization to encourage cooperative social action (Nerur et al. 2005 ). The management style is more oriented towards collaboration. The methods define principles built on ideas in the agile community (Baham and Hirschheim 2021 ) and prescribe the organization of large projects by relying mainly on teams; release planning and architecture through roadmaps and guidelines; collaboration with customers by involving them or end users at different levels; and typical practices for inter-team coordination, such as scrum of scrums meetings, and for knowledge sharing, such as communities of practice.

As an example, a multicase study of the introduction of SAFe in the global telecommunications company Comptel (Paasivaara 2017 ) describes practices at the team, programme and portfolio levels. Work was organized by planning with epics at the portfolio level, and tasks were given to programmes, called "agile release trains". Development was done in "product increments". In the cases studied, each increment started with a two-day planning session, followed by 10 weeks of development. New roles were introduced at this level, such as product manager, system architect and release train engineer. The release train engineer prepared and led product increment planning and scrum of scrums meetings and "took care of the improvement items and metrics" (Paasivaara 2017 , p. 4). Teams adopted an agile method such as Scrum or Kanban and, in the cases studied, worked in two-week iterations. One case had 14 teams and the other 12, and two platform teams served both cases. Teams were cross-functional with 5–10 members, and there were regular community meetings between product owners.

2.2 Coordination

Why is there a need to coordinate? A widely used literature review on coordination studies describes coordination as the organizational arrangements that allow individuals to ‘realise a collective performance’ (Okhuysen and Bechky 2009 ). Collaboration and communication are considered indispensable to coordination but are separate concepts. We subscribe to this understanding of coordination in the following, but we will use Malone and Crowston’s ( 1994 , p. 90) definition of coordination as the ‘management of dependencies’.

An analysis of dependencies in agile development teams resulted in a taxonomy with three main groups: knowledge, process and resource dependencies (Strode 2016 ).

Knowledge dependencies are defined as the pieces of ‘information required for a project to progress’ and include knowledge about requirements, expertise (technical or task knowledge), historical knowledge about past decisions and knowledge about task allocation (who is doing what).

Process dependencies are defined as ‘task[s that] must be completed before another task can proceed’, including activities and business processes.

Resource dependencies occur when ‘an object is required for a project to progress’. Examples are the availability of a resource (person, place or thing) and technical dependencies, such as interactions with another technical component in the software system.
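
To make the taxonomy concrete for readers who work with dependency registers, the sketch below shows one way inter-team dependencies could be tagged with Strode's three categories. It is a minimal illustration under our own assumptions: the class names, field names and example entries are invented for this sketch and do not come from the study.

```python
from dataclasses import dataclass
from enum import Enum, auto


class DependencyType(Enum):
    """Strode's (2016) three main groups of dependencies."""
    KNOWLEDGE = auto()   # e.g. requirements, expertise, historical decisions, task allocation
    PROCESS = auto()     # e.g. a task that must finish before another task can proceed
    RESOURCE = auto()    # e.g. a person, an environment or a technical component


@dataclass
class Dependency:
    """A single inter-team dependency, as it might appear in a dependency register."""
    description: str
    dependency_type: DependencyType
    from_team: str
    on_team: str


# Hypothetical example entries, for illustration only
register = [
    Dependency("Needs the payment API contract before coding the client",
               DependencyType.KNOWLEDGE, from_team="Self-service", on_team="Payments"),
    Dependency("Case-handling rules must be deployed before end-to-end testing",
               DependencyType.PROCESS, from_team="Automation", on_team="Rules"),
    Dependency("Shared test environment is available only in week 42",
               DependencyType.RESOURCE, from_team="All teams", on_team="Platform"),
]

for d in register:
    print(f"{d.from_team} -> {d.on_team}: [{d.dependency_type.name}] {d.description}")
```

Tagging dependencies in this way makes it possible to see which categories dominate in a given phase.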

Dependencies are managed through coordination mechanisms. Mintzberg ( 1989 ) identified direct supervision and standardization of work, outputs, skills and norms as central coordination mechanisms.

Coordination research in organization studies initially focused on static mechanisms in predictable environments. The dynamic aspects of coordination were described as mutual adjustment mechanisms, that is, coordination based on feedback. Several scholars have criticized an overly static view of coordination and proposed a dynamic understanding of it (Okhuysen and Bechky 2009 ). Jarzabkowski et al. ( 2012 ) suggested a model in which the absence of coordination leads to the creation of new patterns of coordination, which are then stabilized. Given the focus on flexibility in work processes and frequent changes in requirements and technology, software development is a field in which coordination is likely to be very dynamic (Dingsøyr et al. 2018c ).

We elaborate on how we coordinate through a coordination model, describe traditional and agile approaches to coordination and then further describe agile approaches for small- and large-scale projects. Next, we present findings to date on three modes of coordination in large-scale agile development.

2.2.1 How do we Coordinate?

Strode ( 2012 ) presented a coordination model for small-scale agile software development projects based on previous work by Espinosa et al. ( 2007 ) (see Fig.  1 ). We adopt this model for large-scale coordination, with a focus on inter-team coordination instead of coordination within teams.

Fig. 1 Coordination strategy, coordination effectiveness and the influence of project complexity and uncertainty (model from Strode ( 2012 ))

Coordination effectiveness is one of many factors contributing to overall project success. Effectiveness is defined as the ‘state of coordination achieved in a project given the execution of a particular coordination strategy’ (Strode et al. 2012 , p. 1233) and encompasses implicit and explicit components. The implicit component is based on the literature on teamwork and coordination. It requires that project members understand the overall project goal and how tasks contribute to its realization, have an overall idea of the project’s status, and know which tasks to work on, which tasks others are working on and where expertise is located in the project organization. The explicit component is that persons and artefacts are in the correct place at the correct time ‘and in a state of readiness for use from the perspective of each individual involved in a project’ (Strode et al. 2012 , p. 1233).

How can we tell if coordination is not working? The late discovery of dependencies can lead to rework, for example, when integrating components from several teams and realizing that a new feature in one module is causing unexpected errors in another. Other problems could be due to several teams working simultaneously in the same part of the code base, which causes many merge conflicts and could have been avoided if one team had delayed working in this part. There could also be challenges with alignment, such as individuals or teams working on low-priority tasks. If coordination is working well, it should be evident in constant progress on work tasks, unless there are other obstacles to progress. However, if a project invests too much in coordination, coordination mechanisms could be perceived as requiring too much time. If team members complain that specific meetings are not useful, this could signify too heavy an investment in coordination. Nevertheless, it could also be that meetings are not managed well and do not work effectively as coordination mechanisms.

The coordination strategy involves selecting a group of coordination mechanisms that manage dependencies in a situation (Strode et al. 2012 ). We use the term ‘coordination strategy’ more strictly than Berntzen et al. ( 2021 ), who described autonomous teams and technical architecture as strategies. We define coordination mechanisms in line with Van de Ven et al. ( 1976 ), who identified three broad modes of coordination mechanisms:

Group mode – mutual adjustment based on new information through feedback in meetings that can be either scheduled or unscheduled

Personal mode – mutual adjustment through feedback but between two people at the same organizational level (personal, horizontal) or at different levels, such as a developer and a subproject manager (personal, vertical)

Impersonal mode – use of ‘codified blueprints of action’, such as those in ‘pre-established plans, schedules, forecasts, formalised rules, policies and procedures, and standardised information and communication systems’ (Van de Ven et al. 1976 , p. 323)

Choosing a coordination strategy involves finding a good set of coordination mechanisms that correspond to a project’s complexity and uncertainty in a given situation. When describing a situation, we use the characteristics that determine coordination mechanisms, as Van de Ven et al. ( 1976 ) argued:

Task uncertainty – This is the ‘difficulty and variability of work undertaken by an organisational unit. Higher degrees of complexity, thinking time to solve problems, or time required before an outcome is known all indicate higher task uncertainty’ (Dingsøyr et al. 2018c , p. 66).

Task interdependence – This is defined as ‘the extent to which people in an organisational unit depend on others to perform their work. A high degree of task-related collaboration means high interdependence’ (Dingsøyr et al. 2018c , p. 66).

Size of the work unit – This refers to ‘the number of people in a work unit. Increases in participants in a project or program mean an increase in the size of the work unit’ (Dingsøyr et al. 2018c , p. 67).
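
As a small illustration of how the three modes can serve as an analytical lens, the sketch below tags a handful of coordination mechanisms mentioned in this section with a mode. The mapping is our own illustrative reading, not the coding scheme or mechanism list used in the study.

```python
from enum import Enum


class CoordinationMode(Enum):
    """Van de Ven et al.'s (1976) three modes of coordination."""
    GROUP = "group"            # mutual adjustment in scheduled or unscheduled meetings
    PERSONAL = "personal"      # mutual adjustment between two people (horizontal or vertical)
    IMPERSONAL = "impersonal"  # codified blueprints of action: plans, rules, standards, tools


# Illustrative mapping of mechanisms discussed in this section to a mode
mechanisms = {
    "scrum of scrums meeting": CoordinationMode.GROUP,
    "product increment planning": CoordinationMode.GROUP,
    "ad hoc conversation between two developers": CoordinationMode.PERSONAL,
    "escalation from developer to subproject manager": CoordinationMode.PERSONAL,
    "product backlog": CoordinationMode.IMPERSONAL,
    "architectural guidelines on a wiki": CoordinationMode.IMPERSONAL,
}

for mechanism, mode in mechanisms.items():
    print(f"{mode.value:10s} {mechanism}")
```

A coordination strategy can then be read as the set of mechanisms a project relies on and how they are spread across the three modes.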

2.2.2 The Traditional and Agile Approaches to Coordination

Agile software development methods are designed to cope with change and uncertainty in small teams. They ‘de-emphasise traditional coordination mechanisms such as forward planning, extensive documentation, specific coordination roles, contracts, and strict adherence to a pre-defined specified process’ (Strode et al. 2012 , p. 1222). Instead, they rely on synchronization through activities and artefacts, structure through proximity and substitutability of team members, and boundary spanning across teams (Strode et al. 2012 ). Table 1 shows the key differences between the traditional and agile approaches to coordination.

Pries-Heje and Pries-Heje ( 2011 ) attributed the success of the agile method Scrum to its flexible and efficient coordination structures. Agile methods also seek to move decision authority to the team level and rely on rough long-term plans and detailed short-term plans to increase adaptability to change (Xu 2009 ). This impacts who handles the components of the coordination strategy. In a position paper, Dingsøyr et al. ( 2018a , p. 82) stated that ‘the complexity of large-scale agile development calls for rethinking coordination, emphasising characteristics such as oral communication, work in teams, a high level of interdependencies, uncertainty in tasks, many people involved, many relations between individuals and that coordination needs change over time’.

2.2.3 Coordination in Agile Development from the Small to Large Scales

An agile development team typically consists of five to nine members who work full time and are co-located. Boehm ( 2002 ) described the “home ground” for agile methods as smaller teams and products, seeking to provide rapid value in a context where refactoring is inexpensive and requirements may change rapidly. There are few communication channels in this context, and much of the management of dependencies can be done through feedback. Such feedback can go through the personal mode either directly between two team members (personal, horizontal) or in the whole team through scheduled meetings, as defined in Scrum (Schwaber and Beedle 2001 ). These meetings include daily iteration planning, iteration review and iteration retrospective meetings. Alternatively, the feedback can be given through unscheduled meetings, such as an extension of the daily meetings if these meetings identify an obstacle to project progress. Sharp and Robinson ( 2007 , 2010 ) explained coordination in agile development as making collaboration easy because team members are very aware of others’ work, the overall project progress, and the state of the code base. They identified two key artefacts for coordination and collaboration: story cards with a description of work tasks (typically in the form of a user story) and a physical board that shows the work status in the current iteration. A study of artefacts used in the coordination of agile teams shows additional artefacts not described in agile development textbooks, such as a textual description of the business case, the contract and a wireframe mockup in the early development stages (Zaitsev et al. 2020 ) (see Strode ( 2012 ) for an in-depth discussion of coordination at the team level).

As projects grow in size, there will likely be more dependencies to manage. A study on teams’ coordination needs in large-scale software development projects found that project-, team- and task-related characteristics impact teams’ coordination needs. The satisfaction of these needs seems to influence teams’ performance (Sablis et al. 2021 ). In another study, the coordination practices within and between Scrum teams were described as positively impacting delivery predictability in large projects (Vlietland et al. 2016 ). In globally distributed software development teams, Stray and Moe ( 2020 ) found significantly larger team sizes than those of co-located teams, and people working in distributed teams spent somewhat more time in meetings per day. A quantitative analysis of 71 SAFe projects from the company Rally found that dependencies were explicitly declared for about 10% of user stories (Biesialska et al. 2021 ). These dependencies were indicated in a lifecycle management tool by product owners, scrum masters or developers. The study emphasized that ‘the volume of unidentified dependencies is not known’ in the analysis (Biesialska et al. 2021 , p. 27). Another study on several large-scale projects found that team members, on average, spent 1.1 hours a day in scheduled meetings and 1.6 hours per day on ad hoc communication and unscheduled meetings (Stray 2018 ). A finding from the exploratory study of the Perform programme with 12 development teams (Dingsøyr et al. 2018b ) was that several unforeseen dependencies had to be managed, although the technical architecture and work organization were considered to minimize dependencies between teams.

Studies of multi-team systems in which many teams work together to solve larger tasks indicate that intra-team coordination (within teams) is vital for coordination between teams (inter-team coordination) (Firth et al. 2015 ). However, for teams’ overall performance, inter-team coordination is most important (Marks et al. 2005 ).

2.3 Inter-Team Coordination

Inter-team coordination is a topic that has been given much attention in the literature on large-scale agile development (Bass 2019 ; Bjørnson et al. 2018 ; Dingsøyr and Moe 2013 ). A survey on coordination in large-scale software teams found that respondents hoped for more effective and efficient communication (Begel et al. 2009 ). The challenges identified across existing large-scale agile development methods are described in a systematic literature review on large-scale agile methods (Edison et al. 2021 ). These challenges include synchronizing across dynamic and fast-moving teams, addressing meeting overload (communication overload, external distractions), decreasing the many handovers between teams as a result of end-to-end development and maintaining transparency across a high number of teams.

Coordination challenges were shown in a case study of a large-scale hybrid development programme with 13 teams in a large enterprise software house (Bick et al. 2018 ). The programme had participants from three countries, but distance and sociocultural differences were not found to cause challenges. An example of a challenge was that development teams’ progress was blocked by unforeseen events, ‘most frequently caused by an unidentified dependency with another team’ (Bick et al. 2018 , p. 939). Teams were often unaware of other teams’ activities, and team representatives were not part of discussions on inter-team dependencies, as these happened in a central team that mainly consisted of people with business competencies. The study explained that the lack of dependency awareness between the inter-team and team levels is rooted in misaligned planning activities during work specification and the later prioritization, estimation and allocation of work to a team. Based on a rich data collection process, the study developed two propositions: i) dependency awareness is necessary but not sufficient for effective coordination, and ii) planning alignment across all phases is necessary but not sufficient for dependency awareness. A recommendation for practice is to ensure regular inter-team meetings by establishing inter-team counterparts of the standard team-level coordination arenas in agile methods, such as joint planning, review and retrospective meetings.

A review by Edison et al. ( 2021 ) identified a set of practices across different large-scale agile development methods. Table 2 shows the practices relevant to inter-team coordination, which we grouped by coordination mode. In the following, we summarize knowledge to date on the group, personal and impersonal modes of coordination. Note that a recent case study on inter-team coordination mechanisms offers an alternative taxonomy, categorizing mechanisms according to four characteristics: technical, organizational, physical and social (Berntzen et al. 2022 ). We have chosen to use the modes proposed by Van de Ven et al. ( 1976 ) to relate more easily to previously published theory on inter-team coordination.

2.3.1 Group Mode Coordination

The previous section’s recommendation on regular inter-team meetings in the large-scale hybrid development programme builds on earlier studies of the scrum of scrums as an inter-team group mode coordination practice. A multicase study of large programmes with more than 20 development teams indicated that this practice did not work very well, as the topics discussed were not sufficiently relevant to the participants (Paasivaara et al. 2012 ). A recommendation was to downscale this forum to ensure the relevance of topics. This form of scheduled meeting was also examined in the context of SAFe, with varied meeting outcomes. Two cases focused on status reporting rather than on addressing risks, as recommended in SAFe. In one of these cases, Gustavsson ( 2019 , p. 9) reported that ‘none regarded the meeting as a place to solve dependency issues’, while in a third case, the meeting was used to help other teams based on the dependencies between teams. A misalignment between the corporate culture and coordination routines was suggested to explain the mismatch between the intention in SAFe and practice.

Other studies have also shown that additional meetings are used to coordinate large-scale projects. A survey and case study described large agile projects as having multiple ‘committees of specialists, including the meeting of scrum masters in the scrum of scrums’ (Hobbs and Petit 2017 , p. 14). The study of the Perform programme found 13 coordination meetings, mainly scheduled meetings, including a joint demonstration and scrum of scrums meetings (Dingsøyr et al. 2018c ). Retrospectives were held at the team level, but minutes from these meetings were read and acted on by programme management.

A particularly interesting meeting in SAFe is the product increment planning meeting, described as a face-to-face event intended to create a shared mission and vision. Typically, the planning horizon is eight to twelve weeks, commonly divided into four iterations. Gustavsson ( 2019 , p. 3) described this meeting as not only focusing on planning and highlighting dependencies but also ‘inform[ing] and clarify[ing] the current context in terms of the business, product, and architecture’. The standard agenda in SAFe gives the most room to presentations, but a finding across the three cases studied was that more and more time was devoted to team breakout sessions.

Another line of studies mainly involving scheduled meetings deals with aligning work by setting up groups for knowledge exchange across teams; these are called communities of practice . We find reports of how this practice is used in organizations, such as Ericsson (Paasivaara and Lassenius 2014 ) and Spotify (Smite et al. 2019 ), with insight into topics which are usually covered, such as agile methods, infrastructure and back-end and front-end development. Some communities focus primarily on learning or organizational development, while others have a more direct focus on coordination through standardization practices, for example, in defining coding standards or giving toolset recommendations (Smite et al. 2019 ). At Ericsson, these communities are described as having a critical role in the transition to agile development methods (Paasivaara and Lassenius 2014 ).

The group mode of coordination in large-scale agile development was further analysed in a study of two empirical cases, with a focus on changes in coordination modes over time (Moe et al. 2018 ). These changes included transitions from scheduled to unscheduled meetings and from unscheduled to scheduled meetings. The study concluded that programme management needs to be sensitive to changes in coordination needs over time. Edison et al. ( 2021 ) also identified unscheduled meetings as a practice, described as ad hoc meetings and physical proximity of teams in Table 2 .

2.3.2 Personal Mode

The personal mode is used extensively in agile methods at the team level through the practice of pair programming. However, what do existing studies of large-scale agile development tell us about the personal mode of coordination between individuals? Bick et al. ( 2018 ) described coordination at the inter-team level as mainly traditional, relying on roles and hierarchy. Although not reported, the personal mode was probably used for intra-team coordination through practices such as pair programming, and possibly across vertical layers in the programme organization through direct communication between central team members and team roles, such as product owners. Issues were escalated from the team to the inter-team level, which could be an example of the vertical personal mode of coordination. In the Perform programme (Dingsøyr et al. 2018b ; Dingsøyr et al. 2018c ), horizontal coordination was facilitated by several factors, such as being located in the same physical open work area, which allowed for easy direct communication (ad hoc communication in Table 2 ), rotating team members between projects and forming new teams by splitting an existing team; several arenas for informal communication, such as lunches and coffee breaks, also existed. Pair programming was used extensively but mainly within development teams. Customer representatives were available in the open work area for consultation. The study reported that team members asked for advice across the teams and the organizations that staffed subprojects, and many emphasized the important facilitating role of the open work area. Edison et al. ( 2021 ) also identified proxy collaboration , which we interpret as a role between teams that fits into the personal mode.

2.3.3 Impersonal Mode

Impersonal coordination in large-scale programmes was reported by Bick et al. ( 2018 ) as involving top-down planning, resulting in themes in a product backlog, epics in a release backlog and user stories and tasks in sprint backlogs. A similar master plan was used in the Perform programme (Dingsøyr et al. 2018c ), with deliverables that consisted of epics, which were again broken down into user stories and tasks at the iteration level. Table 2 lists the ‘common goal for the sprints’ and the ‘strategic roadmap’ (Edison et al. 2021 ).

We also find several descriptions of routines in the Perform programme, such as architectural guidelines, team routines and cross-team routines (e.g. scrum of scrums meetings). Furthermore, planning was done more in writing than is common in agile development, with a written description of the needs analysis and a solution description available on a programme wiki. This could be seen as central team directives (Table 2 ), but the guidelines were regularly updated based on feedback from retrospectives or work in the architecture and business projects. A post-project review evaluated the use of guidelines and found that some were defined too late and some were not followed, as teams perceived that they resulted in less flexibility; obtaining an overview was also challenging because of the number of guidelines in the wiki. Furthermore, an instant messaging tool was used for asynchronous communication amongst all programme participants.

A particularly interesting finding from Perform is that the plan was available both in an issue tracker and on a physical board next to the team tables in the open work area. An informant stated, ‘It takes two seconds to get an overview of the status [in a team], and from my location [in the open work area], I could see almost all the boards, and then I would know what had happened at the end of yesterday [in each team]’ (Dingsøyr et al. 2018c ). The study of inter-team coordination in SAFe cases showed variants of the board at the programme level. The programme board included information such as features, dependencies between features and relevant milestones for the next product increment (Gustavsson 2019 ). The study demonstrated the use of physical and electronic boards and the different frequencies of updates across the examined cases. Edison et al. ( 2021 ) listed several other studies that found visualization to be a common practice.

3 Research Method

To investigate our research question— how is the inter-team coordination strategy impacted by a change from the first- to second-generation large-scale agile development methods— we have designed a longitudinal embedded explanatory case study (Runeson and Höst 2009 ; Yin 2018 ). The systematic literature review of large-scale agile methods shows that ‘purposefully designed longitudinal studies on the adoption and application of large-scale agile methods are rarely seen in the existing literature’ (Edison et al. 2021 ). We draw on previously established theories on coordination, mainly from management science, and from prior studies of inter-team coordination in large-scale agile development. We position the study as a positivist case study seeking to explain the impacts of a change by drawing on prior theory to define a set of novel propositions. In the following, we describe the research design, the procedures for data collection and the data analysis. The main limitations are discussed in Section 5.5 .

3.1 Case Study Design

The objective of the present study was to increase the understanding of coordination in large-scale agile development, particularly to empirically examine strategies for inter-team coordination. This means that we have not focused on coordination at the team level. Prior studies have identified changes in coordination mechanisms over time, but as Edison et al. ( 2021 ) found, few longitudinal studies have been conducted.

The case is a very large-scale agile development programme. A programme is a temporary organization, which differs from a permanent software development organization in that many participants work there only for a limited period. The case was selected as one of several large-scale software development projects followed in a research project. The criterion for selecting the case was that it should be an extreme case for coordination in that it had a high number of development teams (what we describe as a very large-scale agile development programme) (Dingsøyr et al. 2014 ). The programme had 200 participants at most, with about 130 working in the 10 development teams and in the programme organization. The programme was co-located, which meant that we did not need to focus on topics related to sociocultural distance (Ågerfalk and Fitzgerald 2006 ) or distributed agile development (Šmite et al. 2010 ). We describe the case as extreme for two reasons. The first is its size. Second, the programme is also extreme in its initial choice of a first-generation large-scale agile development method that was more oriented towards plan-based development than, for example, the Perform programme (Dingsøyr et al. 2018b ): it had two projects, business and development , with formal handovers between them. When reorganizing, the programme chose to work with continuous deployment and autonomous teams, which we argue are more in line with agile principles (Baham and Hirschheim 2021 ) than some of the second-generation large-scale agile development methods that, for example, prescribe a number of roles.

The unit of analysis is inter-team coordination strategies between business and development projects in the programme. The original plan was to focus on how the programme adjusted its coordination strategies over time. The programme was planned with three releases, and the plan for data collection focused on documenting practices and perceptions of practices amongst different groups for each release. However, the programme did reorganize, which gave us a unique opportunity to study changes in coordination after reorganization. As a consequence, we revised the data collection procedures, as described below. We focused on two phases of the programme in which 10 development teams worked in parallel: one phase using a first-generation large-scale agile development method and another phase using a second-generation large-scale agile development method.

We asked to follow the programme from early 2017 and were granted access to interview its participants, read relevant documents and observe meetings. We were also given a series of briefings about the organization and the progress of the programme.

The study was part of a more extensive research effort for which we had already obtained approval from the Norwegian Centre for Research Data (reference 848,084). We secured informed consent from the interview participants, ensured that the data used in the reports are not traceable to individuals and regularly gave feedback about the findings to the case participants.

3.2 Data Collection

We had to carefully consider our strategy for data collection. The programme was located in Oslo, but most of our research team members were located 500 km away in Trondheim. We therefore chose to organize regular visits to the case, in which three to four researchers would participate in the data collection and subsequent discussion. A PhD candidate partly contributed to the data collection and gave us much insight into the context by studying changes in the central IT department of the case organization (Vestues 2021 ). The discussions after data collection were crucial in developing a collective understanding of the programme organization and coordination challenges amongst the research team.

Data collection was conducted through individual interviews, group interviews, observations and collection of documents. We also held meetings with programme management to obtain an understanding of the organization of the programme. Field notes were written after the meetings and observations.

We interviewed individuals in a variety of roles to understand coordination challenges and practices, as shown in Table 3 . Our primary focus was on software development practices, and most of our informants had roles related to development; however, we also interviewed several individuals in other roles to understand the programme organization. The interview guides were revised from a previous study (Dingsøyr et al. 2018b ; Dingsøyr et al. 2018c ) (see Appendix 1). These guides focused on coordination challenges and practices, as well as on contrasting work across releases. The questions were mainly open and phrased in a language familiar to the respondents, such as ‘What dependencies do you have on other teams? Examples?’ and ‘How do you manage dependencies?’ We made minor changes in the last round of interviews to focus on the effects of the work reorganization, which we call a transition from a first- to a second-generation large-scale agile development method.

We visited the case three times, each visit lasting two days. Three to four researchers conducted semi-structured interviews in parallel, followed by a feedback session in which we presented our interpretation of what was said. During the visits, the first interviews were conducted by a pair of researchers to ensure consistency in the use of the interview guide; later interviews were conducted by a single researcher. The interviews lasted from 24 to 120 minutes, typically around 30 minutes, and were recorded and transcribed for analysis. In total, we interviewed 39 informants: 13 in December 2017, 12 in January 2019, 13 in November 2019 and one more in January 2020 (see the participants’ roles in Table 3 ). As described in the limitations section, we could not interview participants from all teams during all visits, but we always interviewed people involved in development or test, requirements engineering, architecture and project or programme management. In total, the interview material contained 456 pages of text.

We also invited key people from the programme to a workshop in October 2020, in which we established a timeline and brainstormed about what had worked well and what could have been improved. This workshop led to a separate article on key learning from the transformation process, written with practitioners from the case (Dingsøyr et al. 2022 ). We further conducted group interviews to discuss coordination and the requirements engineering process. The group interview on coordination included a project manager and a product owner from NAV as well as a project manager, an assisting project manager and the person responsible for construction in the development project from Sopra Steria. This two-hour interview was recorded and transcribed into a 42-page document.

When negotiating access to the case, we avoided data collection in periods close to a release. Consequently, the first round of interviews was conducted during a relatively calm period and was characterized by a neutral mood amongst the subjects. The second round was done after the initial shock of the reorganization had settled and was characterized by a mix of frustration and optimism. The third round was completed after the programme had ended. One of the researchers wrote, ‘ I’ve never interviewed people who are uniformly so happy with their situation!’ (Field notes, interview round 3).

We observed arenas for inter-team coordination, such as daily meetings and planning meetings, when visiting the case. To obtain further insight, we also facilitated a retrospective on team coordination in November 2017 and one on the delivery model in January 2018. Apart from facilitating these two retrospectives, we did not intervene in how the programme organized inter-team coordination.

The documents included an initial overall plan (39 pages), the proposal to reorganize the programme (23 slides) and a document describing the new release pipeline (209 pages). We also obtained access to minutes from team retrospectives, which provided insight into what the teams perceived to work well and what was perceived as challenges.

3.3 Data Analysis

The data material was imported into a tool for qualitative analysis (Nvivo 12). All data material was anonymised, and files were given attributes that described the programme phase, role (where relevant) and which interview round the file belonged to (if relevant). The dominant data source used in the analysis was the qualitative interviews.

We used interview guides that gave us much context on the case. We first conducted descriptive and holistic coding of the material related to coordination. Three researchers coded the interviews independently and then compared the coding in a series of workshop meetings, with the goal of aligning our understanding of the codes. The three researchers who participated in the coding all took part in the data collection and discussions of the case over time, and all had prior experience in coding similar material.

We further coded the material independently and in more detail by using codes for coordination mechanisms, such as scrum of scrums meetings, issue trackers and artefacts such as dependency maps. Twenty-two codes were taken from previous studies on coordination in large-scale agile development (Dingsøyr et al. 2018b ; Dingsøyr et al. 2018c ). Coordination mechanisms were coded in broad groups using the coordination modes proposed by Van de Ven et al. ( 1976 ): the group, personal and impersonal modes. A sample text coded as ‘scrum of scrums’ and related to the first phase of the programme was ‘… we had scrum-of-scrums in which team leaders on each team met, and then we could raise issues with the other teams; we often identified if a team was waiting for another team, or if there were other causes for delay’. We found 30 coordination mechanisms, as described in the results section.

We added coding about context, such as the descriptions of phases and product releases. The context information also included the codes used to describe ‘programme complexity and uncertainty’, ‘perceived project success’ and ‘coordination effectiveness’ (Fig. 1 ).

After coding, we engaged in several activities for within-case analysis (Eisenhardt 1989 ). We first generated reports for the coordination mechanisms for each phase, which were tabulated. Langley ( 1999 ) described this as a temporal bracketing strategy to theorize from process data in which we see fairly stable processes within each phase. We can then examine how the context affects each phase and determine the consequences of the processes in the form of coordination efficiency. We had several discussions within the research team regarding the findings, and we compared our initial results with those of another study (Carroll et al. 2020 ). Furthermore, the initial findings were presented, first, to the informants in the case and, second, in an online open meeting at the IT department. We also wrote a report in Norwegian, in which we presented the context and organization of the programme to obtain feedback on our understanding, and we developed a description for a narrative strategy (Langley 1999 ). Finally, in parallel with the analysis of the material for this article, the first author wrote a magazine article with the key participants from the case; the article summarized key learning from the transition (Dingsøyr et al. 2022 ). Overall, these activities helped us increase our understanding of the organization and the challenges in the case.
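
To give a flavour of the tabulation step, the sketch below counts coded coordination mechanisms per phase and per mode, in the spirit of the temporal bracketing described above. The records are invented placeholders; the mechanisms actually identified and their counts are reported in the results and discussion sections.

```python
from collections import Counter, defaultdict

# Invented placeholder records of the form (phase, mode, mechanism);
# in the study, the codings were produced in NVivo from the interview material.
codings = [
    ("phase 1", "group", "scrum of scrums"),
    ("phase 1", "impersonal", "release plan"),
    ("phase 1", "personal", "ad hoc conversation between team leads"),
    ("phase 2", "group", "joint demonstration"),
    ("phase 2", "impersonal", "issue tracker"),
]

# Distinct mechanisms per phase (temporal bracketing: compare the two phases)
mechanisms_per_phase = defaultdict(set)
for phase, mode, mechanism in codings:
    mechanisms_per_phase[phase].add(mechanism)

# Distribution of mechanisms over coordination modes within each phase
mode_distribution = Counter((phase, mode) for phase, mode, _ in codings)

for phase in sorted(mechanisms_per_phase):
    print(f"{phase}: {len(mechanisms_per_phase[phase])} mechanisms")
for (phase, mode), count in sorted(mode_distribution.items()):
    print(f"{phase} / {mode}: {count}")
```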

Through this iterative process (Eisenhardt 1989 ), we built an explanation of coordination in the case. Following the steps described by Sjøberg et al. ( 2008 ), first, we drew on existing constructs from coordination theory from Van de Ven et al. ( 1976 ) and Strode et al. ( 2012 ), together with constructs from software engineering and agile software development. We also used our novel definitions of first- and second-generation large-scale agile methods. Second, by contrasting the two phases in the case study, we developed five novel propositions on coordination in large-scale agile development, which we suggest describe the impact on coordination of the transition from first- to second-generation agile development methods. Third, the discussion shows our logical justification for the propositions, building on both our interpretation of the case study and our synthesis of related work presented in the background section. Fourth, we discuss the scope of the suggested propositions in Section 5.4 . Finally, we discuss how the propositions might be tested in Section 5.5 .

4 Results

We first describe the parental benefit programme, its background and its main objectives. The presentation of the programme is built on analysed documents, external media coverage and descriptions from informants. Section 4.2 describes the first phase, with the organization of the programme into projects (Fig.  4 ) and roles (Table 4 ); it presents findings on the effectiveness of coordination in this phase, followed by findings on coordination mechanisms. Similarly, Section 4.3 describes the second phase, with the programme organized around autonomous teams (Fig.  7 ) and competence needs (Table 7 ), followed by findings on coordination effectiveness and coordination mechanisms.

4.1 The Parental Benefit Programme

As part of the welfare system in Norway, parents with newborn babies can apply for benefits as compensation for lost salary during their parental leave. Every year, NAV processes about 100,000 applications for parental benefits or changes to these benefits and distributes EUR 2 billion to parents.

Prior to the parental benefit programme, parents filed applications for parental benefits on a modern web interface. Then, NAV manually entered information from applications on paper into another interface to process the applications. These were then handled using IT solutions running on mainframe computers from the 1970s. NAV received 282,000 telephone inquiries from users on these benefits per year. The system was described in national media as ‘complicated’, ‘time-consuming’ and ‘incomprehensible’.

Overall, NAV runs more than 300 IT systems and had operated with a model in which large programmes to modernize IT solutions were given to subcontractors. In 2012, NAV initiated a modernisation programme with a total budget of EUR 330 million to replace systems from the 1970s with a new platform offering new services. Shortly after its initiation, 17 development teams recruited from five subcontractors worked in parallel. After nine months, the modernisation programme was stopped because of a lack of progress; the cost was about EUR 70 million. This led to a parliamentary hearing and the resignation of the IT director and the director of NAV. ‘The trust from the ministry was totally broken’, one of our informants in the programme management stated (round 3).

The further modernisation of the IT infrastructure was then replanned as a series of smaller programmes seeking to reduce risk by building on known technology and development processes. The parental benefit programme was the second of three programmes, and its aim was to digitize the application process for new parents’ parental benefits. Because of a new law, the old system had to be replaced by 1 January 2019.

The new solution aimed to reduce the number of inquiries by 25%, achieve a self-service rate of 80% and decrease incorrect payments by 10%. NAV described the goals to be achieved as follows: ‘(1) automatic application processing, (2) users can manage their application through the self-service solution and (3) electronic collection of information from caseworkers will provide better quality and more efficiency in application processing’ (document describing the programme).

The parental benefit programme lasted from October 2016 to June 2019. We studied the main part of the programme, which, at its peak, employed 130 people Footnote 4 in 10 teams, of which 100 were external consultants from “Alpha” and Sopra Steria. The programme manager was employed by NAV. The programme depended on functionality in about 20 other systems at NAV.

The programme started by using an internally tailored first-generation large-scale agile development method similar to that used in the Perform programme at the State Pension Fund (Dingsøyr et al. 2018b), with certain changes. There were three planned releases (the baseline, the settler and the digital), each comprising 50,000 to 75,000 hours of estimated work. Nevertheless, for reasons that will be described in the following, the development model was changed to a second-generation method in October 2018. As shown in the timeline in Fig. 2, the programme started with one development team and gradually increased the number of participants to 10 teams, which we describe as a very large programme. We reported the lessons learned from the transformation process in a separate article (Dingsøyr et al. 2022). The whole programme was physically collocated in the same work area on two floors, as shown at one point in time in Fig. 3. Some participants in the programme had also worked in the Perform programme and had a background in this development method. The programme used a target price contract model (PS2000 SOL) for the first two releases, but this was changed to a time-and-materials model in the second phase.

figure 2

Programme timeline

figure 3

Physical work area where the programme was located in both phases

4.2 First Phase

The first phase included two releases. The baseline release was a digital application processing system that automatically processed applications for one-time benefits. The settler release expanded the application processing system to include all types of parental benefits and integration with employers’ pay systems. This phase aimed to develop a complete decision-making system adapted to the calculation requirements set out in the law.

In this phase, the work was organized into four projects: business, development, test and change management (Fig. 4). The business project was responsible for the analysis of needs phase, which was conducted in collaboration with the development project and resulted in a solution description, before the work was assigned to a development team in the construction phase; after development followed the approval phase, organized by the test project. This model was similar to the one used in the Perform programme (Dingsøyr et al. 2018b). The programme could then, at a particular time, be in the production phase of one release while being in the construction phase of a second release and conducting the needs analysis for a third (Fig. 5). The change management project introduced new solutions to the main user groups: end users seeking parental benefits and caseworkers at NAV.

figure 4

Organization of the programme with four main projects

figure 5

Development phases

The development teams worked in three-week iterations with the four team-level roles described in Table 4. The business project and the development teams were located in different parts of the work area; the functional architects were located with the business project, but they prepared solution descriptions of user stories for the development teams. These descriptions were written in the programme wiki. There were also 16 roles at the programme level, which are described in Table 4.

When starting on the second delivery (settler, Fig. 2), the programme ran a pilot to examine second-generation large-scale agile development methods in one cross-functional autonomous team. A committee was formed to assess whether the entire programme should change its delivery model.

In the focus group interviews, the informants described this phase as being characterized by not only time pressure but also a meeting culture in the programme. This made decision making time consuming:

‘ It was a constant pressure to deliver. We had six to seven development teams that should continuously be fed tasks for their sprints. And that is quite a number of people and quite a lot of power in consuming user stories ’ (manager, development project, group interview).
‘… people were in meetings the whole time, and you’d never find anyone by their desk; because you didn’t find a person there, you had to invite them to a meeting … And when first inviting, you’d also invite more people to make sure’ (business analyst, business project, group interview).

An informant stated that, as people tended to have full schedules, calling a meeting would often delay decision making by more than a week.

4.2.1 Coordination

The coordination in the first phase of the programme was characterized by the value chain, with formal handovers between the phases (Fig. 5 ).

NAV used the consultancy company “Alpha” to assist in creating solution descriptions. NAV and consultants from “Alpha” coordinated internally to prioritize and harmonize the requirements across the value chain (CI1 in Fig. 6 ). The solution descriptions were then handed to a group of consultants from the development project, who processed these into user stories; these had to be approved by NAV before they could be handed to the development teams (CE2). The development teams had to coordinate internally (CI2) in order to develop the necessary code in the construction phase before handing the results back to NAV for testing and approval. If the solution descriptions involved external systems, NAV or consultants from “Alpha” would initiate contact with external partners to clarify how the process could be done (CE1).

figure 6

Overview of coordination when using first-generation large-scale agile development methods. CI is the internal coordination in the programme, whereas CE refers to the various types of external coordination. Adapted from the whiteboard used during the group interview on coordination. The dashed line indicates that there were more teams than the three shown

When the user stories were passed to the team level, the team would have to initiate new contact with external partners in order to coordinate and book the necessary resources for developing the external system (CE3).

Interviews with key persons in the programme indicated that internal coordination was perceived to be working well:

‘ Coordination internally in the business project and internally in the development project worked well’ (manager, development project, group interview).

However, all parties expressed frustration with the coordination between the business project and the development project in the first phase (CE2):

‘The coordination between projects was more demanding ’ (manager, development project, group interview).
‘ In the business project, it was impossible to get insight into and obtain an understanding of what was happening and how they were working in development. You described needs, and it was like delivering to a black box ’ (business analyst, business project, group interview).

A retrospective in January 2018 focusing on the delivery model identified the ‘transitions between [the] phases [of] analysis of needs, solution description and construction’ (Fig. 5) as a main challenge. In the following, we examine more closely the internal coordination in the development project, as well as the coordination between the business project and the development project. In total, we identified 27 coordination mechanisms for CI2 and CE2.

4.2.2 Inter-Team Coordination in the Development Project

Internal coordination between the development teams in the development project was highly structured. We identified 18 coordination mechanisms, as shown in Table 5, of which nine are group mode mechanisms, five are personal and four are impersonal. An iteration would start with a planning meeting in which the programme gathered all teams and presented tasks and dependencies for the upcoming iteration. The teams would then break out for individual team planning. Dependencies with other teams were mostly handled through the scrum master, who would contact the scrum master of the team with which there was a dependency. After contact was initiated, the developers involved would talk directly, use instant messaging or mail, or hold ad hoc meetings to resolve dependencies. Teams working closely together in an iteration could also be moved physically next to one another to ease informal coordination.

‘We did it periodically—moved people around. Teams 2 and 4, for example, often worked closely together, at least we used to in the last iteration, so then we moved together for a time’ (application architect, development project, round 1).

The scrum masters conducted a daily standup for their team. The standups were staggered, so it was possible to attend another team’s standup if a team had dependencies that needed to be discussed. The scrum masters would also meet two or three times a week for a scrum of scrums meeting.

Each team had a technical architect who attended a technical architecture forum. The development project held what they called a technical review to transfer knowledge about new technology, and all developers could attend. This meeting was described as one of the most important ones for inter-team coordination. One participant stated, ‘ The technical review is very good for aligning technical development across the teams ’ (minutes from the retrospective focusing on inter-team coordination in November 2017).

During the first phase, the development project scaled up by adding more people; once the teams grew too large, they were split, and new people were added. This led to what they called ‘stirring the pot’, and most developers were rotated between several teams, thus bringing domain knowledge with them. The development project also had some roles on top of its team structure; these were considered important coordinating roles. The construction responsible was often mentioned as a role that was engaged in frequent discussions with the teams to ensure that the right people were coordinating across the teams:

‘The construction responsible worked almost full time with tasks which were in between teams’ (manager, development project, group interview).

At the end of the iteration, each team conducted a retrospective and documented the results in a wiki.

They also arranged a common demonstration in which each team showed internal and external stakeholders what it had produced in the iteration and sought to align demonstrations from the teams:

‘We tried to achieve a flow there … we tried to talk about where in the solution we were working and achieve a natural flow, and then we got a smooth transition to the next team’ (scrum master, development project, round 1).

Table 5 shows all the coordination mechanisms identified.

4.2.3 Inter-Team Coordination Between Business and Development Projects

Table 6 provides an overview of the coordination mechanisms between the business and development projects. In our material, we identified a total of nine mechanisms: four in the group mode, one in the personal mode and four in the impersonal mode. For coordination between the two projects, the development project had a dedicated team of what they called functional architects, who handled contact with the business project. The idea was that these team members would divide their time equally between writing user stories and being available to clarify issues for the development teams that would implement the user stories. In practice, they spent most of their time in meetings with NAV. User stories were specified in formal and informal meetings. There were formal working meetings to initiate work on a user story, and there could be several user story meetings between the functional architects and the business project to clarify issues. Finally, there was a formal approval meeting with NAV before the user story was transferred to the business project’s issue tracker and scheduled for a future iteration.

‘Regarding the solution descriptions, there were several meetings … both internal to us and with the customer to work on those’ (project manager, development project, group interview).

Many informants stated that a major challenge with coordination in this phase was that the teams working on solution descriptions and user stories and the teams developing the solution were not working on the same user stories simultaneously.

There was a perception of time pressure in the programme. A functional architect (development project, round 1) stated that ‘ The deadlines are short … we need to deliver to the approval meeting on Thursday afternoon, have the approval meeting on Friday afternoon … That is not how I’d like to do it’. Construction would then start the next week.

It could take months from the approval of a user story until a team began implementation, and if there were issues that needed clarification, the people who had written the description had by then moved on to new tasks and had to try to recall what they had meant. This also led to a long feedback loop and limited learning across organizational lines. The functional architects had their own forums in which they discussed dependencies and tried to identify as many as possible before development began. After a while, they introduced a dependency map that was presented to the developers at the beginning of every iteration to increase awareness. Initially, the functional architects were placed together with the business project, but they were eventually moved into the development project with the teams they supported.

As stated, the retrospective in January 2018 focusing on the delivery model identified the ‘transitions between the phases of analysis of needs, solution description and construction’ as a main challenge, which included a ‘too high focus on details early’ and ‘too late prioritisation of requirements’. An informant stated that the ‘documentation of needs and solution descriptions was very extensive’ and that ‘requirements were very detailed’ (business analyst, business project). At this stage, other challenges identified in the retrospective were ‘information flow across the programme’ and ‘too many and too long meetings’.

4.3 Second Phase

The aim of the last release, the digital, was to create a self-service function integrated with an extended application processing system and to support integration with health actors. The goals for the release included creating a complete integration between a planning calendar and a dialogue about benefit applications with users and conducting a digital dialogue between the user and the application caseworker. The previous phase had created a minimum viable product of core functionality that was to be developed further. A main difference in this release was that the programme now had to develop a solution that was in use and add new functionality in a domain that was less well explored.

The programme manager set up an internal committee to suggest an organization and delivery model for the last phase. The committee was mandated to propose changes in working methods that could enable the programme to work better but which would not increase the risk for the previous phase or for the period when the new solution was to be released (document, proposal to reorganise the programme). Both NAV and the suppliers were represented in the working group. At the end of the first phase, the pilot team had been allowed to work independently as an autonomous team that could continuously deploy new functionality, and this team had good experiences.

Furthermore, the central IT function in NAV defined a new way of working that was different from the first-generation large-scale method of the first phase. A new IT director had a vision that all IT development should be done with agile methods (see Mohagheghi and Lassenius (2021), Bernhardt (2022) and Vestues and Rolland (2021) for descriptions of changes in the IT department). A new technical platform had been introduced in other parts of NAV, in which many non-functional requirements were handled by the platform; this allowed development teams to focus better on functionality towards users. This platform used container technology and microservices and enabled an event-driven architecture.
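The article does not describe the platform’s implementation in technical detail. Purely as an illustration of the kind of team-level service such a platform can support, the sketch below shows a minimal event-driven microservice that reacts to a domain event published on a message broker; Apache Kafka is assumed here, and the broker address, consumer group, topic and event payload are hypothetical rather than taken from the case.

```kotlin
import java.time.Duration
import java.util.Properties
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.serialization.StringDeserializer

// Minimal sketch of an event-driven microservice: a team-owned service subscribes to a
// domain event topic and reacts to events instead of calling other teams' systems directly,
// so cross-team coordination is mediated by the event contract rather than by meetings.
fun main() {
    val props = Properties().apply {
        put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")     // hypothetical broker address
        put(ConsumerConfig.GROUP_ID_CONFIG, "benefit-application-service") // hypothetical consumer group
        put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer::class.java.name)
        put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer::class.java.name)
    }

    KafkaConsumer<String, String>(props).use { consumer ->
        consumer.subscribe(listOf("application-submitted"))               // hypothetical topic name
        while (true) {
            for (record in consumer.poll(Duration.ofMillis(1000))) {
                // In a real service, the team's own processing logic would run here.
                println("Received event ${record.key()}: ${record.value()}")
            }
        }
    }
}
```

In such a setup, the topic acts as an impersonal coordination mechanism: teams agree on the event contract once and can otherwise deploy and evolve their services independently.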

Programme management did not think they had to change the delivery model: ‘Given the size and complexity of the programme, it was well run – we delivered on time and we delivered on budget’. They also found that ‘It was very calculated; yes, we had sufficient control so that we can work smarter. It was not like if we don’t change now, we’ll not deliver’ (manager, NAV, round 3). However, other informants expressed that delivering a more complex solution on a running system would have been challenging if the model had not been changed. A software architect (round 3) stated, ‘We would not have had a chance’ to deliver a consistent solution without changing the model.

The internal committee proposed reorganizing into cross-functional autonomous teams, with a gradual transition to continuous deployment (Fig. 7). The programme manager accepted the proposal, which led to significant changes in the last phase.

figure 7

The new organization with teams and supporting functions

Some were worried about the transition to autonomous teams: ‘ I remember that at the beginning, people were worried about how we can keep oversight, how we should coordinate this and ensure that parts were coherent and that the teams align’ (business analyst, group interview).

The change was perceived as a fundamental transition:

‘We were willing to adjust how we defined needs and solution descriptions. We’ve transitioned from one extreme point to the other, from massive models with areas, epics and user stories where everything is connected to the situation today, where things—in the best case—are documented in a Slack thread’ (team manager, group interview).

Informants stated that there ‘was a lot less documentation … which I think everyone appreciated’ (business analyst, group interview), and ‘a lot of roles disappeared’ (architect, group interview). The work tasks were more focused. One informant stated the following:

‘The number of tasks you worked on simultaneously was reduced. But the quality of what was done was greater. Tasks used to take a long time previously, which led you to have many tasks in process at all times. Now, the feeling was “this need will be delivered by the end of the week”’ (business analyst, group interview).

There was an initial period characterized by a lack of coordination between teams:

‘I don’t know much about what the other teams are doing now’ (test responsible, round 2).

However, the general perception was that although it took time to adjust to the new delivery model, eventually ‘we had a more streamlined use of tools, collaboration and coordination’ (manager, group interview). An informant stated, ‘I found that we were providing a lot more value in production the last half year of the programme’ (business analyst, group interview). As we will describe, many of the old coordination mechanisms were re-introduced.

New regulations regarding the product were implemented in the winter of 2019. The product went into maintenance and further development in June 2019. The programme won the Norwegian prize for digitalisation the same year. Key objectives were met, such as the degree of self-service on applications, which exceeded the target of 80% (99.8% in the spring of 2019). The time used to process applications was reduced from weeks or months to a matter of seconds. Footnote 5

4.3.1 Programme Organization

The programme was now organized with 10 cross-functional autonomous teams for all product areas, as shown in Fig. 7 . These teams were co-located and responsible for the product as a whole, including quality. The degree of autonomy was adapted to the degree of coupling between teams and dependencies. Still, most teams were eventually allowed to continuously put deliveries into production.

Development was now organized according to a flow-based model (Fitzgerald and Stol 2017 ), which resulted in the disappearance of roles in the programme, and new cross-functional autonomous teams were established with people from NAV and the two suppliers. Much thought was given to organizing teams according to the product domain in a way that would minimize the need for coordination.

Continuous deployment started in early 2019. Many meeting arenas disappeared. New support functions were established, as described in Table 7, and the teams received support from two agile coaches to further develop their work processes. They also received initial support from solution architects to ensure a holistic architecture. The contract model was changed so that the suppliers delivered resources to NAV.

The autonomous teams were described as cross-functional autonomous product teams and had approximately 12 members, without formal management. Each team had a product owner. The teams were sometimes moved around in the office landscape to sit close to the teams with which they collaborated.

Each team had a product owner from NAV; the consultants from “Alpha” and the functional architects from Sopra Steria were designated functionals and tasked with helping the product owner, as shown in Fig. 8. Otherwise, the teams mainly consisted of developers from the development teams in the first phase. Some developers from NAV were also integrated into the teams. These developers met across the teams and would eventually become the team that would take over the solution once the development programme had ended.

figure 8

The reorganization into autonomous teams led to more intra-team coordination and less inter-team coordination. Teams were cross-functional, with a product owner (PO) on each team; the other team members had formerly had roles as developers (D) and functional architects (F). Participants from the two consulting companies are shown in blue and green, and participants from NAV in red. Teams would typically have 12 members

4.3.2 Inter-Team Coordination Between Autonomous Teams

According to our informants from both consulting companies and NAV, the coordination between NAV and the developers improved with the new structure. ‘It strengthens the developers’ understanding of the domain and the product owner’s understanding of the technology. You save a lot of time and get more work completed’ (functional architect, group interview).

At the same time, most of the arenas across the teams were removed in the reorganization to allow the teams to be autonomous and decide freely on their involvement in meetings. Many teams saw the arenas as time consuming and not crucial when operating as an autonomous team. The first two to three months after the reorganization were challenging, mostly because the developers from Sopra Steria were still under the old contract to deliver the last big delivery. Once that had been delivered and the teams moved to daily deployment, the team members from NAV, “Alpha” and Sopra Steria developed a more similar focus, an identity as a team and aligned working processes.

All tools and processes were dropped in the reorganization, and the teams adopted different approaches to how they wanted to work. Some lifted the old process into the new team structure; others swore never to work with the wiki tool again. Eventually, some standardization and new meeting arenas emerged in the new team structure. However, the teams started to take responsibility, and the need for competence at the programme level was quickly reduced (Table 7).

As shown in Table 8 , we found 14 coordination mechanisms in this phase—seven in the group mode, three in the personal mode and four in the impersonal mode. Some mechanisms that reappeared did so in a different form, such as the demonstrations, which used to be scheduled meetings but were now unscheduled. One informant missed the scheduled demonstrations, which gave insight into what other teams were doing:

‘ … I miss that, but I see that it could be difficult with the teams being autonomous and they deciding what to show. So now, we have internal demos in our team; we try to have them weekly’ (test responsible, round 2).

A common repository was used to host all code, and the issue tracker was reintroduced as the standard way of documenting user stories, now including possible dependencies on other teams. The scrum of scrums meeting was reintroduced to handle dependencies between teams. The product owners reintroduced a product owner meeting to obtain a better overview of the total solution. Furthermore, the functionals reintroduced a forum across the teams to discuss dependencies between user stories, and the tech leads of the teams started meeting weekly to discuss technical dependencies. A daily go/no-go meeting was introduced to decide whether to push the code from the previous day to production; this was described as an important meeting that provided the participants with an overview of the programme’s status. At the same time, the use of informal person-to-person and ad hoc meetings increased, along with instant messaging.

The work tasks for developers were less specified, which meant that they had to discuss more with businesspeople. Some teams introduced a start-up conversation when initiating work on a task, which was guided by the following questions: What is this task? Why should it be this way? What are we looking for? The task description might be just a sentence or two.

To coordinate, the teams used task boards, an issue tracker with product queues for backlog refinement and a common roadmap outlining the next four months.

5 Discussion

How is the inter-team coordination strategy impacted by a change from the first- to second-generation large-scale agile development methods? The coordination strategy involves a choice of coordination mechanisms to achieve coordination effectiveness in a certain situation. Coordination effectiveness is an essential contributor to overall programme success. We start by discussing the differences in the programme’s situation in the first and second phases. This is followed by the perceived coordination effectiveness in the first and second phases. Finally, we discuss the differences in the corresponding coordination strategies and suggest five propositions related to our research question before discussing the main limitations.

5.1 Changes in the Programme Situation

To describe the situation, we first focus on what was similar in the first and second phases, and then we present the factors relevant to choosing a coordination strategy (Van de Ven et al. 1976 ).

The programme organization consisted mainly of the same people at the end of the first phase and the start of the second phase. There were no major changes in the overall goals and aims of the programme, and the programme worked in the same physical office area with physical proximity across the whole programme.

For unit size, the total size of the programme was moderately larger in the second phase. In both phases, we describe the programme as a very large development programme with 10 development teams and a maximum of 130 participants in the part of the programme studied. However, there was a large increase in unit size at the team level, as the teams were now composed of both people from the business side and people from development. The programme had larger team sizes than recommended in agile practices during both the first and second phases. We describe the unit size as large .

As for task interdependencies, Van de Ven et al. ( 1976 ) defined interdependence as the extent to which unit personnel depended on one another to perform their jobs. They further identified four types of interdependence, from ‘independent’ work to a ‘team’. The transition from a first-generation to a second-generation large-scale agile development method meant that people from the business and development sides who needed to coordinate work on a user story (the requirements in Strode’s taxonomy (2016)) were initially in different teams (working in a sequential or reciprocal mode) but were later placed in the same team. Other types of knowledge dependencies in which the business side needed technical knowledge were also now managed at the team level. Additionally, what Strode ( 2016 ) defined as historical knowledge was now broader when including both business and developers in the same team. Process dependencies were also managed at the team level, and resource dependencies were largely handled at the team level. The restructuring of the programme meant that teams were focusing on a product domain that sought to reduce the number of dependencies on other teams. In practice, however, there were still many dependencies to manage, but the significant difference was that dependencies were, to a much larger degree, handled at the team level. We describe the number of task interdependencies as high .

One could argue that task uncertainty was lower in the second phase of the programme, as i) many technical uncertainties were now handled by the platform, ii) the teams were responsible for work within a product domain and iii) programme members had learned about both the domain and the technical architecture, as many had worked in the programme for over a year. On the other hand, i) the programme was taking on tasks in an area that was less explored, ii) all new changes had to be implemented on a running system iii) that had grown in size, and iv) there was more feedback from user groups. Overall, we describe the situation as having a context with high task uncertainty in both phases.

5.2 Coordination Effectiveness

Having described the situations in the phases, we now move our attention to coordination effectiveness. As with developer productivity (Forsgren et al. 2021 ), we acknowledge that coordination effectiveness is difficult to measure. The tasks in the two phases were very different. One could further expect that there would be a gain in general work productivity as programme participants learned about the domain and the technical system.

Although it is too early to conclude that the programme was a success, many of the benefits described in the business case have started to appear, as described in Section 4.3. Some studies describe project success as a project’s capability to deliver on time and within budget with the expected quality (Ika 2009). The parental benefit programme was completed on time and within the expected budget, and it delivered a solution for which the programme was awarded the annual prize for digitalisation in Norway in 2019.

However, programme success does not necessarily mean that the programme has experienced coordination effectiveness. From our qualitative interviews, we get an impression of perceived challenges and successes in managing dependencies. Edison et al. ( 2021 ) listed some challenges identified in prior studies on inter-team coordination, including synchronizing across dynamic and fast-moving teams, addressing meeting overload, decreasing the many handovers between teams as a result of end-to-end development and maintaining transparency across a high number of teams.

In the first phase, we identified 27 coordination mechanisms between the teams in the development project (CI2) and between the business and development projects (CE2). The informants perceived coordination to work well within the teams and projects, but there were major challenges with coordination between the business and development projects. The development model, in which phases were handled by different projects, led to handovers between these two projects. These handovers of solution descriptions of user stories resulted in knowledge dependencies on requirements; the challenge was that there was often a long time from when the business project completed a solution description of a user story until the actual development. Clarifying needs and requirements was frequently time consuming, as people on the business side were fully booked in meetings. As Edison et al. (2021) reported, the number of meetings can threaten coordination effectiveness. Teams experienced synchronization between teams within the projects as working well, but there were indications of a lack of transparency across projects, as the participants in the business project saw the development project as a black box. Some statements suggest that the analysis of needs often resulted in overly detailed descriptions, sometimes leading to less autonomy for the developers and sometimes describing work that was not technically feasible.

In the second phase, there was an initial period in which inter-team coordination suffered, as most mechanisms were abandoned, and it was up to the autonomous teams to take the initiative to establish new ones. After this initial period, however, we identified 14 coordination mechanisms in use. Most informants stated that coordination worked well. They could focus on fewer user stories (a reduction of cognitive load) and directly ask people about domain or technical knowledge (managing knowledge dependencies at a lower level); many technical issues were addressed by separate platform teams (also a reduction of cognitive load for team members). One informant appreciated the ‘much tighter dialogue’. The change was described as increasing the developers’ understanding of the domain and the product owner’s understanding of the technology, leading to more completed work.

5.3 Coordination Strategies

Given this background on the situation in the first and second phases and the perceived coordination effectiveness, we now discuss the coordination strategies used, which, to a large extent, were derived from the choice of a first- or second-generation agile development method.

The systematic literature review by Edison et al. (2021) identified creating ‘dependency awareness’ and having ‘different arenas for coordination over time’ as two success factors that particularly relate to coordination. We first describe the coordination strategies in the first phase, followed by the second phase; we then compare the phases and the first- and second-generation large-scale agile development methods. In Section 5.4, we develop five propositions on the impact of transitioning from first- to second-generation methods.

5.3.1 First Phase

As we show in the results, the first phase relied on a first-generation large-scale agile method, which combined phases, roles and an overall programme organization with central ideas from agile development, such as scrum at the team level, a flexible overall product backlog, a team-based organization, proximity through co-location of the whole programme and a strong presence of the business side through a dedicated project. The coordination mechanisms in the first phase were mainly organized around the phases of development and programme- and team-level roles, and inter-team coordination mainly took place through scheduled meetings. Table 9 shows the characteristics of the two phases.

Most of the coordination mechanisms were stable during the first phase, apart from attempts to remedy the coordination challenges identified between the business and development projects. The new mechanisms introduced included dependency maps (impersonal), and the functional architects were moved physically from the area where the business project was located to their development teams to ease informal coordination (unscheduled group meetings and personal horizontal coordination). These measures were not seen as sufficient when starting the last phase of the programme, in which new functionality was to be built in an area that had been less explored, and the programme had to integrate new development with the existing running solution.

Compared with the existing cases in the literature, we note that there were considerably more coordination mechanisms than in the study of the enterprise software project reported by Bick et al. (2018). That case illustrated challenges with many unforeseen dependencies, while the first phase in our case experienced challenges with over-specification and the time needed for clarification. Comparing this phase with the Perform programme (Dingsøyr et al. 2018b), we note that both programmes used several coordination mechanisms and organized work in phases and projects. However, the overall organization in the Perform programme created a closer link between the development teams and the projects on architecture, business and test, as most people in these projects worked 50% on a development team. This led to knowledge flow between the four main projects; for example, in the business project, people knew the background of the developers for whom they were writing solution descriptions. In the parental benefit programme, the business project did not have this knowledge, and some felt that they wrote solution descriptions that were delivered to a black box. Comparing coordination in the first phase with what was reported from case studies of SAFe by Gustavsson (2019), we note that the parental benefit programme invested much in upfront planning, although it mainly relied on written documentation and not so much on presentations, as in product increment planning meetings. Dependency maps were introduced, and there was an overall plan of work until the next release, which corresponded to the board described by Gustavsson (2019).

5.3.2 Second Phase

As described, the change in the second phase to a second-generation large-scale agile development method led to changes in coordination needs. The focus moved from coordination around phases to coordination around the product when transitioning to continuous deployment and autonomous teams. The need for inter-team coordination was reduced, as a number of dependencies were now managed at the team level. From our data material, it seems that the programme successfully reduced the challenge of knowledge dependencies between the business and development projects by managing these at the team level. The problem with process dependencies, in which solution descriptions were finished months before the actual development, was also reduced for new user stories, as the whole team was working on the same set of tasks. There were no phases that a user story had to go through, but there was a setup with automatic and manual testing before the daily meeting in which decisions were taken on the deployment of new functionality (the go/no-go meeting). Some of the coordination that previously happened in meetings was thus coded into the test process. Moving the management of resource dependencies to the team level and making teams responsible for a product area also led to fewer technical dependencies on other teams and fewer challenges with managing resources. Overall, we can say that the second-generation development model led to a transition of coordination work from the inter-team level to the team level. However, although the intention was to reduce dependencies between teams as much as possible, inter-team coordination was still needed. The decision to give teams autonomy led to an initial loss of arenas for this purpose. As described in the results, it took several months before a number of coordination mechanisms were re-introduced. With the exception of the go/no-go meeting, the tech lead forum and the change of the demo meeting from scheduled to unscheduled (Table 8), the coordination mechanisms in the second phase were similar to the ones in the first phase.

Although we argue that the main changes in coordination strategy involved moving the focus from phases and roles to the continuous deployment of product and autonomous teams, we also note interesting changes in patterns in the inter-team coordination work. The first phase was characterized by many scheduled meetings (11 in total: three arenas for the business project and eight arenas for the development project), as well as the use of unscheduled meetings, the personal mode through one-to-one discussions across teams and the impersonal mode through tools, such as user stories in a wiki and dependency maps. However, in the second phase, we found fewer scheduled meetings (five, including the scrum of scrums meetings). The demo meeting was scheduled in the first phase, but it was changed to an unscheduled meeting in the second phase. We still find personal and impersonal modes for inter-team coordination. In sum, however, we describe the main change as a reduction in scheduled meetings and an increase in informal modes of coordination through unscheduled meetings and face-to-face discussions between individuals (personal, horizontal).

Among the determinants of coordination identified by Van de Ven et al. (1976), an increase in unit size led to a greater use of impersonal coordination mechanisms (policies, rules and procedures to coordinate activities) and to a decrease in the use of scheduled and unscheduled meetings. Our empirical findings show that autonomous teams can manage inter-team dependencies using all mechanisms, but the increase in unit size did not lead to an increased use of the impersonal mode; instead, it led to more mutual adjustment, mainly through unscheduled meetings and personal horizontal coordination mechanisms. A possible explanation could be that high task uncertainty and high task interdependence have a greater impact on the coordination strategy. It could also be that the increase in unit size in our case was not sufficiently large to have an impact.

Two propositions on coordination were developed in the study of a large enterprise software programme reported by Bick et al. (2018). The first is that dependency awareness is necessary but not sufficient for effective coordination. If we accept that the coordination strategy was successful, particularly in the second phase of the parental benefit programme, we note that there was a period in which the programme experienced a lack of coordination after the abandonment of most inter-team coordination mechanisms. With the reintroduction of coordination mechanisms, such as the functional architecture forum, the awareness of dependencies on other teams increased; this, along with other mechanisms, enabled planning alignment, which Bick et al. (2018) proposed in their second proposition as necessary for effective coordination. We note that although there were fewer scheduled meetings than in the first phase, there were still arenas for joint planning and review (optional participation in demo meetings), but retrospectives remained at the team level.

Comparing coordination in the second phase to other studies of second-generation large-scale agile development methods, such as Gustavsson’s (2019) study of SAFe, we see that the coordination strategy was less dependent on planning meetings, such as the product increment planning in SAFe. Planning was now a continuous process in inter-team coordination meetings between roles, such as the functional architects. Scrum of scrums meetings were reintroduced, but unlike Gustavsson’s findings, our informants reported that this arena was working well. The initiation of fora across teams was decided by the teams themselves, much as communities of practice have been initiated and supported by teams at Spotify (Smite et al. 2019) and Ericsson (Paasivaara and Lassenius 2014).

In addition to the major changes in coordination strategy shown in Table 9, we would like to emphasize two points. First, the move to continuous deployment also led to a higher frequency of coordination. The first phase had iterations that lasted three weeks, while the second phase enabled much shorter feedback cycles throughout the programme. Continuous deployment was enabled by reorganizing the programme into autonomous teams and by the new technical platform, which moved many concerns to platform teams. Our informants did not report an excessive workload from coordination activities, which might have resulted from the autonomy given to the teams, as it was up to them to decide in which arenas to participate. Second, most of our informants saw the second phase as more in line with the principles of agile software development. In his article on sociotechnical coordination, Herbsleb (2007) asked whether carefully designed architectures could isolate work at different sites in global software development. In our case, we found that the architectural changes were also important in enabling the autonomy of the teams, which made room for local process differences.

5.3.3 Coordination Mechanisms Over Time

We have shown evidence that, after an initial period, coordination effectiveness increased in the second phase, which indicates better congruence between the situation and the coordination strategy. Prior studies (Edison et al. 2021) have indicated that continuous improvement is critical in large-scale agile development, typically organized through team- or programme-level retrospectives. The coordination challenges in the first phase were identified in retrospectives, and actions were taken to reduce their impact. Why did the programme wait until the last phase to reorganize drastically? As we described in the case background, there was strong pressure to deliver after an earlier programme failure. The programme started with a known process and technology to reduce risks. If the programme had changed to a second-generation method earlier, or even from the start, establishing awareness of dependencies might have taken more time than it did when relying on many scheduled meetings at the start. The reliance on scheduled meetings can also be seen in other large-scale agile development programmes (Dingsøyr et al. 2018b; Hobbs and Petit 2017).

A critique of large-scale agile development methods is that they provide static advice on coordination (Gustavsson 2019), prescribing a minimum setup with scheduled meetings, an organization relying on teams, regular interactions with stakeholders and, in some cases, specific roles, such as the release train engineer in SAFe. From prior studies, we have also seen that coordination mechanisms are dynamic structures that change over time (Dingsøyr et al. 2018c). However, there is little advice in second-generation methods on how coordination mechanisms can be tailored to the situation at hand. Our study shows the impact of a change in coordination strategy, although it is difficult to determine which improvements in coordination effectiveness stem from shortening feedback loops by going from iterations to continuous deployment, from shifting the focus from roles and projects to team autonomy, or from changing the mode of coordination from mainly scheduled meetings to mainly unscheduled meetings and personal horizontal coordination. Our rich description of the change in coordination strategy shows what Jarzabkowski et al. (2012) described as an absence of coordination, mainly of knowledge dependencies between the business and development projects. Furthermore, efforts to fill this absence with minor changes to the first-generation method did not achieve coordination effectiveness. The coordination challenges were first solved (new coordination mechanisms emerged) when transitioning to the second-generation method in the second phase. However, this change introduced new challenges for inter-team coordination, as old arenas were abandoned. It took time before the new mechanisms stabilized into a situation described by the informants as having high coordination effectiveness.

5.4 Coordination Strategies in the First- and Second-Generation Methods: Five Propositions

Summarizing the changes using Van de Ven et al.’s (1976) framework, we see the following:

A change in the use of the impersonal mode – fewer written handovers between the business and development functions, but more impersonal coordination through the technical infrastructure

More use of the horizontal personal mode – more direct coordination within the teams

Fewer scheduled meetings – a dramatic reduction in scheduled meetings; it was up to the teams to decide what arenas to use.

More unscheduled meetings – smaller meetings within and between teams; participants from other teams were no longer fully booked in meetings and were therefore available.

After describing the phases and coordination over time, we discuss the central characteristics of the first- and second-generation large-scale agile methods. We develop five propositions (see Table 10 ) based on our findings and the discussion of prior studies. As described in the background, first- and second-generation methods differ with respect to their main principles and practices. However, as we have seen in the results section, coordination requires significant effort and many arenas, both when using a first- and a second-generation method. This leads to the following:

Proposition 1: Large-scale agile inter-team coordination requires a combination of group, personal and impersonal modes for the effective management of knowledge, process and resource dependencies.

In line with the findings from other studies of large-scale agile development (Dingsøyr et al. 2018c ), we found that scheduled meetings were fundamental coordination mechanisms in the early stages when using a first-generation large-scale agile method. This leads to the following:

Proposition 2: Scheduled meetings are important in the early phase of large-scale agile development programmes to build domain knowledge and technical expertise, establish inter-team processes and manage resource dependencies.

Furthermore, when a second-generation method was adopted in the second phase, a new technical platform enabled continuous delivery, which increased the feedback speed. This was mainly an impersonal coordination mode, but it also involved the new (but short) go/no-go meeting to decide on deployment. This enabled team autonomy, as many dependencies were moved from the inter-team level to the team level. Placing both business and development people in cross-functional teams led to fewer handovers, and requirement dependencies, in particular, were managed at a low level.

Proposition 3: Organizing work around the product instead of projects and phases reduces inter-team coordination needs and thus contributes to the more efficient management of requirement dependencies.

Thus, the second-generation method enables work that is more in line with the key principles of agile development.

However, we observed a lack of coordination after the transition to the second-generation method. Other studies have shown a significant risk of coordination breakdowns if dependencies are not managed at the correct levels. We speculate that if the programme had adopted a second-generation method early, it could have risked even more breakdowns.

Proposition 4: A transition from a first-generation to a second-generation large-scale agile method requires significant domain and technical knowledge amongst programme participants.

Finally, as some old mechanisms were re-established, the programme was perceived to achieve high coordination effectiveness. Many of the roles at the programme level were removed, and supporting functions established in the last phase were also reduced or removed. Overall, the programme was able to move resources from coordination to development.

Proposition 5: Second-generation large-scale agile development methods, compared with first-generation methods, achieve coordination through the more efficient use of resources.

Summarizing our discussion, we see that large-scale agile development methods impact the coordination strategy. The determinants suggested by Van de Ven et al. (1976) might need to be supplemented by other factors, such as domain and technical knowledge and experience with agile approaches, when choosing between first- and second-generation methods. Which factors are important for that choice is beyond the scope of our paper. We conclude that choosing a first- or second-generation agile development method will have significant implications for the coordination strategy, in that specific mechanisms are given priority and other mechanisms are restricted.

Could there be other explanations for the improved coordination effectiveness? As we have mentioned, one could expect the whole development organization to become more productive over time as it learns about the technical product and the domain. However, we have shown that there was a significant change in the use of coordination mechanisms, and if learning alone explained the improvement, we would have expected to see it earlier and not only after the transition. The informants reported new coordination challenges after the transition, which is a further argument that the transition impacted the coordination mechanisms.

Is it correct to describe the changes as a change in the whole development method rather than improvements caused by autonomy and continuous deployment? We see autonomy and continuous deployment as key characteristics that show a more agile approach than in the first-generation methods. These changes also impacted the number of roles, decision-making authority and the speed of decision making and learning. We think the change was so fundamental that it is correct to describe it as a transition from one generation of methods to the next.

5.5 Limitations and Evaluation

There are several limitations to the chosen approach. We discuss construct, internal and external validity, as well as reliability (Runeson and Höst 2009 ):

Construct validity

To ensure construct validity , we have built on established constructs, such as coordination mechanisms, but in the interview guides, we used wording such as ‘dependencies’ and ‘arenas to manage dependencies’. We acknowledge limitations in how we measure constructs in Fig. 1 , such as project success and coordination effectiveness. We are formulating theory in a field in which there is no unified agreement on how to measure what is better. As with developer productivity (Forsgren et al. 2021 ), different groups can perceive coordination effectiveness differently.

Internal Validity

We have discussed possible alternative explanations for the changed perceptions of coordination effectiveness and have sought to document the coordination challenges in each phase through multiple sources of evidence (interviews, group interviews, observations, documents). As described in the methods section, we cover a number of roles but not all roles in the programme, and we have not interviewed participants from all teams. We have presented our preliminary findings to the case participants and fellow researchers.

External Validity

A typical weakness in building theory from case studies is that a theory can be overly complex or be a ‘narrow and idiosyncratic theory’ (Eisenhardt 1989, p. 547). Do our propositions have the right scope (Sjøberg et al. 2008)? We have sought to overcome these weaknesses by building on established theory and constructs and by arguing that the propositions are likely to hold for instances of first- and second-generation agile development methods other than those in our empirical case. One might argue that large-scale agile development is not a common phenomenon and that the propositions are too narrow, but we believe it is an important area, showing that agile methods, with certain adjustments, work in an area few had thought possible when they were initially formulated.

Has this study generated new insights, is the new theory supported by evidence and have we ruled out rival explanations (Eisenhardt 1989 )? We argue that the novel propositions represent a major step forward in our understanding of coordination in large-scale agile development and that we have established new concepts in the form of first- and second-generation large-scale agile development methods that will clarify the differences between approaches. The propositions are supported by evidence from multiple sources, and we have provided a rich description of the context. We have discussed what we see as the main rival explanation.

Reliability

A large-scale agile development programme is a complex unit of analysis. We have attempted to cope with this complexity by engaging a large research team (Ribes 2014). A large team meant that we needed to stay aligned internally, which we did by jointly developing semi-structured interview guides and by using a tool for qualitative analysis and a shared file repository as our case database. The method section describes the analysis process and the steps in theory development, while the results section shows traceability to data through informant quotes and the narrative.

6 Conclusion

Coordination has been a key concern in large-scale agile software development (Dingsøyr et al. 2019b ; Edison et al. 2021 ). This development is characterized by high uncertainty about how tasks should be solved, a large number of interdependencies between tasks and a high number of people involved—what van de Ven et al. ( 1976 ) described as a high unit size.

Coordination has long been a key topic in global software engineering. Herbsleb ( 2007 , p. 9) concluded that for coordination problems, we lack an understanding of the tradeoffs between tools, practices and methods and of when the solutions are applicable.

We have reported on two phases of a very large-scale development programme, provided background on the programme’s situation in each phase and discussed coordination effectiveness and coordination strategies. We contribute to the discussion on the conditions of applicability of coordination mechanisms in large-scale agile development and the tradeoffs between coordination mechanisms.

We describe the first phase as first-generation large-scale agile development, combining advice from agile methods with advice from project management. The second phase replaced the advice from project management with current ideas in software development, in what we describe as second-generation large-scale agile development: 10 teams were given significant autonomy, were reorganized by product domain and delivered on a new platform. The change led to a massive increase in deployment frequency, from twice a year to daily. We have investigated this shift in the focus of coordination: from coordinating around the phases of identifying needs, describing a solution, implementing and testing, to coordinating around the product. The change was generally perceived as successful, with the programme receiving a prize for digitalisation and our informants appreciating the much tighter dialogue; the latter was characterized as giving developers a better understanding of the domain and product owners a better understanding of the technology.

We have explained the change from the first- to second-generation large-scale agile development methods as having a major impact on coordination. The coordination mechanisms were decided on by the teams themselves when using the second-generation method; there were fewer intermediaries, and the reduction of dependencies between teams led to a decrease in inter-team coordination and an increase in intra-team coordination.

Our findings have implications for theory in that we have established the concepts of first- and second-generation large-scale agile development methods, which can make future studies conceptually clearer. Compared with the initial findings on inter-team coordination (Edison et al. 2021), we develop propositions that we hope can form the basis for a new theory of coordination for the particular context of large-scale agile development.

For practitioners, we believe the main implication of our findings is that they show the consequences of a change in coordination strategy. Many organizations are considering large-scale agile methods (Dingsøyr et al. 2019b; Edison et al. 2021), but our study suggests that the choice of coordination strategy might be a more important question than the selection of a method. We also provide a rich description of the changes in coordination mechanisms when transitioning from first- to second-generation large-scale agile development frameworks, which will be helpful to the many organizations currently undergoing this process.

With the extended use of second-generation methods, we hope that future studies could first test the propositions in contexts other than that of our study, with other instances of first- and second-generation large-scale agile methods and other configurations of project complexity, uncertainty and project success. Second, we suggest exploring changes in coordination practices over time in arrangements in which much of the coordination is at the team level—in environments with a high degree of autonomy. Third, we hope that studies on coordination could further our understanding of inter-team coordination by examining coordination between different types of teams, for example, the types suggested in the practitioner literature, such as feature teams and platform teams, as well as other supporting teams in organizations (Skelton and Pais 2019 ).

The term ‘wave’ is also used in white papers by practitioners such as Charlie Rudd (accessed April 2022: https://www.solutionsiq.com/resource/white-paper/the-third-wave-of-agile-2/ ) and Steve Denning (accessed April 2022: https://www.forbes.com/sites/stevedenning/2017/02/10/beyond-agile-operations-how-to-achieve-the-holy-grail-of-strategic-agility/?sh=5167e5982b6a ), but these focus more broadly on agility rather than on large-scale agile development methods.

Some, such as the Agile Alliance and Project Management Institute’s ‘Agile Practice Guide’, use the word ‘predictive’ rather than ‘traditional’.

https://www.vg.no/nyheter/innenriks/i/82rdG/nav-det-er-for-vanskelig-aa-soeke-om-foreldrepenger

The whole programme had about 200 people at its peak.

https://www.nrk.no/norge/na-kan-du-fa-svar-pa-fodselspengesoknad-pa-ett-minutt-1.13915937

Ågerfalk PJ, Fitzgerald B (2006) Flexible and distributed software processes: old petunias in new bowls? Commun ACM 49(10):26–34. https://doi.org/10.1145/1164394.1164416


Baham C, Hirschheim R (2021) Issues, challenges, and a proposed theoretical core of agile software development research. Inf Syst J , n/a (n/a). https://doi.org/10.1111/isj.12336

Bass JM (2019) Future trends in agile at scale: a summary of the 7th international workshop on large-scale agile development . Paper presented at the XP2019, Montreal, Canada

Batra D, Xia W, VanderMeer D, Dutta K (2010) Balancing agile and structured development approaches to successfully manage large distributed software projects: a case study from the cruise line industry. Commun Assoc Inf Syst 27(1):379–394


Begel A, Nagappan N, Poile C, Layman L (2009) Coordination in large-scale software teams. Paper presented at the proceedings of the 2009 ICSE workshop on cooperative and human aspects on software engineering

Bentley C (2010) PRINCE2 revealed (2nd ed). Elsevier, Oxford, p 296

Bernhardt HB (2022) Digital transformation in NAV IT 2016–2020: key factors for the journey of change. In: Digital transformation in Norwegian enterprises. Springer, pp 115–134

Berntzen M, Hoda R, Moe NB, Stray V (2022) A taxonomy of inter-team coordination mechanisms in large-scale agile. IEEE Trans Softw Eng:1–1. https://doi.org/10.1109/TSE.2022.3160873

Berntzen M, Stray V, Moe NB (2021) Coordination strategies: managing inter-team coordination challenges in large-scale agile. Springer International Publishing, pp 140–156

Bick S, Spohrer K, Hoda R, Scheerer A, Heinzl A (2018) Coordination challenges in large-scale software development: a case study of planning misalignment in hybrid settings. IEEE Trans Softw Eng 44(10):932–950. https://doi.org/10.1109/TSE.2017.2730870

Biesialska K, Franch X, Muntés-Mulero V (2021) Mining dependencies in large-scale agile software development projects: a quantitative industry study. Paper presented at Evaluation and Assessment in Software Engineering, Trondheim, Norway. https://doi.org/10.1145/3463274.3463323

Bjørnson FO, Wijnmaalen J, Stettina CJ, Dingsøyr T (2018) Inter-team coordination in large-scale agile development: a case study of three enabling mechanisms . Paper presented at the XP2018, Porto, Portugal

Boehm B (2002) Get ready for agile methods, with care. IEEE Computer 35(1):64–69

Boehm B, Turner R (2003) Balancing agility and discipline: a guide for the perplexed. Addison-Wesley, Boston, p 305

Carroll N, Bjørnson FO, Dingsøyr T, Rolland K-H, Conboy K (2020) Operationalizing agile methods: examining coherence in large-scale agile transformations . Paper presented at the international conference on agile software development, Eighth International Workshop on Large-Scale Agile Development

Cataldo M, Herbsleb JD (2012) Coordination breakdowns and their impact on development productivity and software failures. IEEE Trans Softw Eng 39(3):343–360

Dikert K, Paasivaara M, Lassenius C (2016) Challenges and success factors for large-scale agile transformations: a systematic literature review. J Syst Softw 119:87–108. https://doi.org/10.1016/j.jss.2016.06.013

Dingsøyr T, Bjørnson FO, Moe NB, Rolland K, Seim EA (2018a, May 27) Rethinking coordination in large-scale software development, Gothenburg, Sweden

Dingsøyr T, Dybå T, Gjertsen M, Jacobsen AO, Mathisen T-E, Nordfjord JO et al (2019a) Key lessons from tailoring agile methods for large-scale software development. IEEE IT Prof 21(1):34–41. https://doi.org/10.1109/MITP.2018.2876984

Dingsøyr T, Fægri T, Itkonen J (2014) What is large in large-scale? A taxonomy of scale for agile software development. In: Jedlitschka A, Kuvaja P, Kuhrmann M, Männistö T, Münch J, Raatikainen M (eds) Product-Focused Software Process Improvement, Lecture Notes in Computer Science, vol 8892. Springer International Publishing, pp 273–276

Dingsøyr T, Falessi D, Power K (2019b) Agile development at scale: the next frontier. IEEE Softw 36(2):30–38. https://doi.org/10.1109/MS.2018.2884884

Dingsøyr T, Jørgensen M, Carlsen F, Carlström L, Engelsrud J, Hansvold K, . . . Sørensen KO (2022) Enabling autonomous teams and continuous deployment at scale: key lessons from a transition to a more agile delivery model during project execution. IEEE IT Professional

Dingsøyr T, Moe NB (2013) Research challenges in large-scale agile software development. ACM Software Eng Notes 38(5):38–39. https://doi.org/10.1145/2507288.2507322

Dingsøyr T, Moe NB, Fægri TE, Seim EA (2018b) Exploring software development at the very large-scale: a revelatory case study and research agenda for agile method adaptation. Empir Softw Eng 23(1):490–520. https://doi.org/10.1007/s10664-017-9524-2

Dingsøyr T, Moe NB, Seim EA (2018c) Coordinating knowledge work in multi-team programs: findings from a large-scale agile development program. Proj Manag J 49 (6):64–77. https://doi.org/10.1177/8756972818798980

Dingsøyr T, Nerur S, Balijepally V, Moe NB (2012) A decade of agile methodologies: towards explaining agile software development. J Syst Softw 85(6):1213–1221. https://doi.org/10.1016/j.jss.2012.02.033

Duncan WR (2017) A guide to the project management body of knowledge, 6th edn. Project Management Institute, Newtown Square

Edison H, Wang X, Conboy K (2021) Comparing methods for large-scale agile software development: a systematic literature review. IEEE Trans Softw Eng:1–1. https://doi.org/10.1109/TSE.2021.3069039

Eisenhardt KM (1989) Building theories from case study research. Acad Manag Rev 14(4):532–550. https://doi.org/10.2307/258557

Espinosa JA, Slaughter SA, Kraut RE, Herbsleb JD (2007) Team knowledge and coordination in geographically distributed software development. J Manag Inf Syst 24(1):135–169

Firth BM, Hollenbeck JR, Miles JE, Ilgen DR, Barnes CM (2015) Same page, different books: extending representational gaps theory to enhance performance in multiteam systems. Acad Manag J 58(3):813–835

Fitzgerald B, Stol K-J (2017) Continuous software engineering: a roadmap and agenda. J Syst Softw 123 :176–189. https://doi.org/10.1016/j.jss.2015.06.063

Forsgren N, Storey M-A, Maddila C, Zimmermann T, Houck B, Butler J (2021) The SPACE of developer productivity: There's more to it than you think. Queue 19(1):20–48

Gustavsson T (2019) Dynamics of inter-team coordination routines in large-scale agile software development. Paper presented at the 27th European Conference on Information Systems (ECIS), Stockholm and Uppsala, Sweden, June 8–14

Herbsleb JD (2007) Global software engineering: the future of socio-technical coordination. Paper presented at the future of software engineering (FOSE'07)

Hobbs B, Petit Y (2017) Agile methods on large projects in large organizations. Proj Manag J 48(3):3–19

Hoda R, Salleh N, Grundy J (2018) The rise and evolution of agile software development. IEEE Softw 35(5):58–63

Ika LA (2009) Project success as a topic in project management journals. Proj Manag J 40(4):6–19

Jarzabkowski PA, Le JK, Feldman MS (2012) Toward a theory of coordinating: creating coordinating mechanisms in practice. Organ Sci 23(4):907–927. https://doi.org/10.1287/orsc.1110.0693

Johnson P, Ekstedt M, Jacobson I (2012) Where's the theory for software engineering? IEEE Softw 29(5):96–96

Kraut RE, Streeter LA (1995) Coordination in software development. Commun ACM 38(3):69–81

Kula E, Greuter E, van Deursen A, Gousios G (2021) Factors affecting on-time delivery in large-scale agile software development. IEEE Trans Softw Eng

Langley A (1999) Strategies for theorizing from process data. Acad Manag Rev 24(4):691–710

Malone TW, Crowston K (1994) The interdisciplinary study of coordination. ACM Comput Surv (CSUR) 26(1):87–119

Marks MA, DeChurch LA, Mathieu JE, Panzer FJ, Alonso A (2005) Teamwork in multiteam systems. J Appl Psychol 90(5):964

Mintzberg H (1989) Mintzberg on management: inside our strange world of organizations. Free Press, New York, p 418

Moe NB, Dingsøyr T, Rolland K (2018) To schedule or not to schedule? An investigation of meetings as an inter-team coordination mechanism in large-scale agile software development. Int J Inf Syst Proj Manag 6 (3):45–59. https://doi.org/10.12821/ijispm060303

Mohagheghi P, Lassenius C (2021) Organizational implications of agile adoption: a case study from the public sector. Paper presented at the proceedings of the 29th ACM joint meeting on European software engineering conference and symposium on the foundations of software engineering

Nerur S, Mahapatra R, Mangalaraj G (2005) Challenges of migrating to agile methodologies. Commun ACM 48(5):72–78

Okhuysen GA, Bechky BA (2009) Coordination in organizations: an integrative perspective. Acad Manage Ann 3:463–502

Paasivaara M (2017) Adopting SAFe to scale agile in a globally distributed organization. Paper presented at the 2017 IEEE 12th international conference on global software engineering (ICGSE)

Paasivaara M, Lassenius C (2014) Communities of practice in a large distributed agile software development organization - case Ericsson. Inf Softw Technol 56(12):1556–1577. https://doi.org/10.1016/j.infsof.2014.06.008

Paasivaara M, Lassenius C, Heikkila VT (2012) Inter-team coordination in large-scale globally distributed scrum: do scrum-of-scrums really work? In: Proceedings of the ACM-IEEE international symposium on empirical software engineering and measurement. IEEE, New York, pp 235–238

Pries-Heje L, Pries-Heje J (2011) Why scrum works: a case study from an agile distributed project in Denmark and India. In: AGILE Conference (AGILE 2011). IEEE, pp 20–28

Ribes D (2014) Ethnography of scaling, or, how to a fit a national research infrastructure in the room. Paper presented at the proceedings of the 17th ACM conference on computer supported cooperative work & social computing

Rolland KH, Fitzgerald B, Dingsøyr T, Stol K-J (2016) Problematizing agile in the large: alternative assumptions for large-scale agile development. In: International Conference on Information Systems, Dublin

Runeson P, Höst M (2009) Guidelines for conducting and reporting case study research in software engineering. Empir Softw Eng 14:131–164

Sablis A, Smite D, Moe N (2021) Team-external coordination in large-scale software development projects. J Softw: Evol Process 33(3):e2297. https://doi.org/10.1002/smr.2297

Schwaber K, Beedle M (2001) Agile software development with scrum. Prentice Hall, Upper Saddle River

Sharp H, Robinson H (2007) Collaboration and co-ordination in mature eXtreme programming teams. Int J Hum Comput Stud 66:506–518

Sharp H, Robinson H (2010) Three 'C's of agile practice: collaboration, co-ordination and communication. In: Dingsøyr T, Dybå T, Moe NB (eds) Agile software development: current research and future directions. Springer, Berlin Heidelberg

Sjøberg DI, Dybå T, Anda BC, Hannay JE (2008) Building theories in software engineering. In: Guide to advanced empirical software engineering. Springer, pp 312–336

Skelton M, Pais M (2019) Team topologies: organizing business and technology teams for fast flow. It Revolution, Portland, p 216

Šmite D, Moe NB, Ågerfalk P (2010) Agility across time and space: implementing agile methods in global software projects. Springer Verlag, Berlin Heidelberg, p 341

Smite D, Moe NB, Levinta G, Floryan M (2019) Spotify guilds: how to succeed with knowledge sharing in large-scale agile organizations. IEEE Softw 36(2):51–57. https://doi.org/10.1109/MS.2018.2886178

Stol K-J, Goedicke M, Jacobson I (2016) Introduction to the special section—general theories of software engineering: new advances and implications for research. Inf Softw Technol 70:176–180

Stray V (2018) Planned and unplanned meetings in large-scale projects. Paper presented at the proceedings of the 19th international conference on agile software development: companion

Stray V, Moe NB (2020) Understanding coordination in global software engineering: a mixed-methods study on the use of meetings and slack. J Syst Softw 170:110717. https://doi.org/10.1016/j.jss.2020.110717

Strode D (2016) A dependency taxonomy for agile software development projects. Inf Syst Front 18(1):23–46

Strode DE, Huff SL, Hope BG, Link S (2012) Coordination in co-located agile software development projects. J Syst Softw 85(6):1222–1238

Uludağ Ö, Putta A, Paasivaara M, Matthes F (2021) Evolution of the agile scaling frameworks . Paper presented at the International Conference on Agile Software Development

Van de Ven AH, Delbecq AL, Koenig R Jr (1976) Determinants of coordination modes within organizations. Am Sociol Rev 41(2):322–338

Vestues K (2021) Using digital platforms to promote value co-creation: a case study of a public sector organization. (PhD), Norwegian University of Science and Technology

Vestues K, Rolland K (2021) Platformizing the organization through decoupling and recoupling: a longitudinal case study of a government agency. Scand J Inf Syst 33(1):103–129

Vlietland J, van Solingen R, van Vliet H (2016) Aligning codependent scrum teams to enable fast business value delivery: a governance framework and set of intervention actions. J Syst Softw 113 (supplement C):418–429. https://doi.org/10.1016/j.jss.2015.11.010

Williams L, Cockburn A (2003) Agile software development: it’s about feedback and change. IEEE Comput 36:39–43

Xu P (2009) Coordination in large agile projects. Rev Bus Inf Syst 13(4):29

Yin RK (2018) Case study research and applications: design and methods, 6th edn. Sage, Thousand Oaks

Zaitsev A, Gal U, Tan B (2020) Coordination artifacts in agile software development. Inf Organ 30(2):100288


Acknowledgements

We are very grateful to all interview participants and contact persons at the Norwegian Labour and Welfare Administration, “Alpha” and Sopra Steria. We thank Kathrine Vestues, who participated in some of the data collection for this article. We also thank Diane Strode from Whitireia Polytechnic (New Zealand), Parastoo Mohaghegi at the Norwegian Labour and Welfare Administration and three anonymous reviewers for their comments, which have helped improve the quality of our manuscript significantly. We are grateful to master’s student Camilla Tøftum Ranner from the Norwegian University of Science and Technology (NTNU), who conducted three interviews. Likewise, we are indebted to Marius Mikalsen, Nils Brede Moe, Eva Amdahl Seim and Anniken Solem, researchers in the Agile 2.0 project, who were involved in the discussions on parts of the material in the article. We thank Bjørnar Tessem at the University of Bergen for the discussions on large-scale agile development at project seminars and the wider community participating in the international workshop on a large-scale agile development series.

Finally, we acknowledge the assistance of the late Knut Rolland from the University of Oslo. He participated in all three rounds of interviews and initial analysis work. He is deeply missed.

The data collection and first analysis were done in the competence-building project Agile 2.0, supported by the Research Council of Norway through grant 236759 and by the companies DNV GL, Equinor, Kantega, Kongsberg Defence & Aerospace, Sopra Steria and Sticos. NTNU Concept funded further analysis through the project on organizing digital projects. SimulaMet also supported the analysis and writing of the article through the first author’s adjunct position.

Authors' contributions

First and second authors: data collection. All authors: analysis, with the second author doing the initial work and the first author leading the final analysis round. The first author obtained access to the case and handled reporting to the case participants. The third and fourth authors conducted the literature reviews and wrote the initial background section, which the first author expanded and revised.

Funding

Open access funding provided by NTNU Norwegian University of Science and Technology (incl St. Olavs Hospital - Trondheim University Hospital). See Acknowledgements.

Availability of data and material

Not applicable.

Code availability

Author information

Authors and affiliations

Department of Computer Science, Norwegian University of Science and Technology, NO-7491, Trondheim, Norway

Torgeir Dingsøyr & Finn Olav Bjørnson

Department of IT Management, SimulaMet, P.O. Box 134, 1325, Lysaker, Norway

Torgeir Dingsøyr

Universität der Bundeswehr München, Werner-Heisenberg-Weg 39, 85577, Neubiberg, Germany

Julian Schrof

SINTEF Digital, P.O. Box 4760, Torgarden, NO-7465, Trondheim, Norway

Tor Sporsem


Corresponding author

Correspondence to Torgeir Dingsøyr .

Ethics declarations

Ethics approval

Study reported to the Norwegian Centre for Research Data (reference 848,084).


Additional information

Communicated by: Tayana Conte

Appendix 1: Interview guides

Interteam coordination, third round

Could you describe the programme?

Have there been changes in the organization? How/why?

Could you describe your role in the programme?

Have you had similar roles previously? What is different in this programme?

What do you perceive as the main challenges in your role?

Who do you relate to in your role?

Have there been any changes in how you work?

In which circumstances do you need to coordinate with other teams?

Which dependencies do you have on other teams? Examples?

How do you manage dependencies?

Which arenas have you used to manage dependencies?

How would you describe coordination effectiveness?

Have there been changes to how dependencies are managed?

Have there been changes to how the programme is organized? If so, how did you experience changes?

How do you coordinate with external stakeholders?

How do you coordinate with external programmes/products/processes in the customer organization? Examples?

Work method, third round

What work practices do you use

  • internally in the team,
  • across teams,
  • with respect to architecture and
  • in relation to external stakeholders?

Could you describe a typical iteration at the start of the programme?

Could you describe the last or current iteration?

Have your ways of working changed over time?

What has changed?

Who initiated the change?

At what level did the change occur?

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Dingsøyr, T., Bjørnson, F.O., Schrof, J. et al. A longitudinal explanatory case study of coordination in a very large development programme: the impact of transitioning from a first- to a second-generation large-scale agile development method. Empir Software Eng 28 , 1 (2023). https://doi.org/10.1007/s10664-022-10230-6

Download citation

Accepted : 16 August 2022

Published : 08 November 2022

DOI : https://doi.org/10.1007/s10664-022-10230-6


Keywords

  • Large-scale agile development
  • Coordination mechanisms
  • Inter-team coordination
  • Multiteam systems
  • Software development process
  • Software engineering

Case Study Research in Software Engineering: Guidelines and Examples by Per Runeson, Martin Höst, Austen Rainer, Björn Regnell


TWO LONGITUDINAL CASE STUDIES OF SOFTWARE PROJECT MANAGEMENT

11.1 Introduction

This chapter reports the experiences of the third author as he undertook two longitudinal case studies of software projects at IBM Hursley Park, as part of his PhD research. The two projects are referred to respectively as Project B and Project C to retain consistent identifiers with previous publications [144–147, 152, 154, 155] on these case studies.

11.2 BACKGROUND TO THE RESEARCH PROJECT

In the mid-1990s, IBM Hursley Park and Bournemouth University, both in the United Kingdom, had already agreed to undertake a research project. A PhD student would be the primary researcher for the project. The aim of the research project was very broadly defined: essentially the PhD student would identify one or more problematic situations at the company and would work toward understanding these situations and seeking solutions or resolutions to these situations. The project was initiated prior to the recruitment and selection of the student. Also, the objectives imply an opportunity for action research.

IBM Hursley Park and the University advertised the student post and jointly participated in the selection and recruitment process. It had been agreed that the student would relocate close to the company and would, at least for the first few months, work full time at the company, traveling to the University when it was appropriate to do so. This arrangement would ensure that the researcher had direct and ...



The longitudinal, chronological case study research strategy: A definition, and an example from IBM Hursley Park



Recommendations

A longitudinal case study on the effects of an evidence-based software engineering training.

Context: Evidence-based software engineering (EBSE) can be an effective resource to bridge the gap between academia and industry by balancing research of practical relevance and academic rigor. To achieve this, it seems necessary to investigate EBSE ...

Representing the behaviour of software projects using multi-dimensional timelines

Context: There are few empirical studies in the empirical software engineering research community that describe software projects, at the level of the project, as they progress over time. Objective: To investigate how to coherently represent a large ...

A longitudinal case study of an emerging software ecosystem: Implications for practice and theory

Software ecosystems is an emerging trend within the software industry, implying a shift from closed organizations and processes towards open structures, where actors external to the software development organization are becoming increasingly involved in ...


Published in

Butterworth-Heinemann

United States


Author tags

  • Deadline effect
  • Longitudinal case study
  • Qualitative data
  • Software project
  • Theory development




Longitudinal Study | Definition, Approaches & Examples

Published on May 8, 2020 by Lauren Thomas. Revised on June 22, 2023.

In a longitudinal study, researchers repeatedly examine the same individuals to detect any changes that might occur over a period of time.

Longitudinal studies are a type of correlational research in which researchers observe and collect data on a number of variables without trying to influence those variables.

While they are most commonly used in medicine, economics, and epidemiology, longitudinal studies can also be found in the other social or medical sciences.

Table of contents

  • How long is a longitudinal study?
  • Longitudinal vs cross-sectional studies
  • How to perform a longitudinal study
  • Advantages and disadvantages of longitudinal studies
  • Other interesting articles
  • Frequently asked questions about longitudinal studies

No set amount of time is required for a longitudinal study, so long as the participants are repeatedly observed. They can range from as short as a few weeks to as long as several decades. However, they usually last at least a year, oftentimes several.

One of the longest longitudinal studies, the Harvard Study of Adult Development , has been collecting data on the physical and mental health of a group of Boston men for over 80 years!


The opposite of a longitudinal study is a cross-sectional study. While longitudinal studies repeatedly observe the same participants over a period of time, cross-sectional studies examine different samples (or a “cross-section”) of the population at one point in time. They can be used to provide a snapshot of a group or society at a specific moment.

Cross-sectional vs longitudinal studies

Both types of study can prove useful in research. Because cross-sectional studies are shorter and therefore cheaper to carry out, they can be used to discover correlations that can then be investigated in a longitudinal study.
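To make the structural difference concrete, here is a minimal, hypothetical sketch (our addition, not part of the original article) of how the two designs differ as datasets. It assumes the Python pandas library and uses invented participant IDs, waves and scores.

```python
# Illustrative only: invented data, assuming pandas is installed.
import pandas as pd

# Longitudinal (panel) data: the SAME participants observed at several waves.
longitudinal = pd.DataFrame({
    "participant_id": [1, 1, 1, 2, 2, 2],
    "wave":           [2020, 2021, 2022, 2020, 2021, 2022],
    "outcome_score":  [4.1, 4.5, 4.8, 3.2, 3.9, 4.0],
})

# Cross-sectional data: DIFFERENT participants, each observed once,
# all at a single point in time.
cross_sectional = pd.DataFrame({
    "participant_id": [101, 102, 103, 104],
    "year":           [2022, 2022, 2022, 2022],
    "outcome_score":  [4.3, 3.7, 4.9, 4.0],
})

# Within-person change is only observable in the panel data:
# last observation minus first observation, per participant.
change = (longitudinal
          .sort_values(["participant_id", "wave"])
          .groupby("participant_id")["outcome_score"]
          .agg(lambda s: s.iloc[-1] - s.iloc[0]))
print(change)
```

Because each participant appears once per wave in the panel data, you can follow individual change over time; the cross-sectional table only supports comparisons between different people at one moment.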

If you want to implement a longitudinal study, you have two choices: collecting your own data or using data already gathered by somebody else.

Using data from other sources

Many governments or research centers carry out longitudinal studies and make the data freely available to the general public. For example, anyone can access data from the 1970 British Cohort Study, which has followed the lives of 17,000 Brits since their births in a single week in 1970, through the UK Data Service website .

These statistics are generally very trustworthy and allow you to investigate changes over a long period of time. However, they are more restrictive than data you collect yourself. To preserve the anonymity of the participants, the data collected is often aggregated so that it can only be analyzed on a regional level. You will also be restricted to whichever variables the original researchers decided to investigate.

If you choose to go this route, you should carefully examine the source of the dataset as well as what data is available to you.

Collecting your own data

If you choose to collect your own data, the way you go about it will be determined by the type of longitudinal study you choose to perform. You can choose to conduct a retrospective or a prospective study.

  • In a retrospective study , you collect data on events that have already happened.
  • In a prospective study , you choose a group of subjects and follow them over time, collecting data in real time.

Retrospective studies are generally less expensive and take less time than prospective studies, but are more prone to measurement error.

Like any other research design , longitudinal studies have their tradeoffs: they provide a unique set of benefits, but also come with some downsides.

Longitudinal studies allow researchers to follow their subjects in real time. This means you can better establish the real sequence of events, allowing you insight into cause-and-effect relationships.

Longitudinal studies also allow repeated observations of the same individual over time. This means any changes in the outcome variable cannot be attributed to differences between individuals.

Prospective longitudinal studies eliminate the risk of recall bias , or the inability to correctly recall past events.

Disadvantages

Longitudinal studies are time-consuming and often more expensive than other types of studies, so they require significant commitment and resources to be effective.

Since longitudinal studies repeatedly observe subjects over a period of time, any potential insights from the study can take a while to be discovered.

Attrition, which occurs when participants drop out of a study, is common in longitudinal studies and may result in invalid conclusions.
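As a rough illustration of the attrition problem (our addition, using invented numbers rather than data from any real study), the sketch below counts how many participants in a three-wave panel remain for a complete-case analysis after dropout. It again assumes the Python pandas library.

```python
# Illustrative only: invented data, assuming pandas is installed.
import pandas as pd

# Three-wave panel in which participants 2 and 3 drop out early.
panel = pd.DataFrame({
    "participant_id": [1, 1, 1, 2, 2, 3],
    "wave":           [1, 2, 3, 1, 2, 1],
    "outcome_score":  [4.1, 4.4, 4.6, 3.0, 3.2, 5.0],
})

n_waves = panel["wave"].nunique()  # 3 waves in total
waves_per_person = panel.groupby("participant_id")["wave"].nunique()
complete_cases = waves_per_person[waves_per_person == n_waves].index

print(f"{panel['participant_id'].nunique()} participants started; "
      f"{len(complete_cases)} completed all {n_waves} waves.")
```

If dropout is related to the outcome itself (for example, if low scorers are the ones who leave), the participants who remain are no longer representative, which is one way attrition leads to invalid conclusions.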

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Ecological validity

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Longitudinal studies and cross-sectional studies are two different types of research design . In a cross-sectional study you collect data from a population at a specific point in time; in a longitudinal study you repeatedly collect data from the same sample over an extended period of time.

  • Longitudinal study: repeated observations of the same sample; the same participants are observed multiple times, so changes can be followed in individuals over time.
  • Cross-sectional study: observations at a single point in time; different participants (a “cross-section” of the population) are observed once, providing a snapshot of society at that point.

Longitudinal studies can last anywhere from weeks to decades, although they tend to be at least a year long.

Longitudinal studies are better to establish the correct sequence of events, identify changes over time, and provide insight into cause-and-effect relationships, but they also tend to be more expensive and time-consuming than other types of studies.

The 1970 British Cohort Study , which has collected data on the lives of 17,000 Brits since their births in 1970, is one well-known example of a longitudinal study .

Cite this Scribbr article


Thomas, L. (2023, June 22). Longitudinal Study | Definition, Approaches & Examples. Scribbr. Retrieved August 16, 2024, from https://www.scribbr.com/methodology/longitudinal-study/


A longitudinal study on logistics strategy: the case of a building contractor

The International Journal of Logistics Management

ISSN : 0957-4093

Article publication date: 29 December 2022

Issue publication date: 18 December 2023

Purpose

Contingency studies within logistics and supply chain management have shown a need for longitudinal studies on fit. The purpose of this paper is to investigate the logistics strategy from a process of establishing fit perspective.

Design/methodology/approach

A large Swedish building contractor's logistics strategy process was analysed using a longitudinal single-case study for a period of 11 years (2008–2019).

Findings

The case study reveals three main constraints to logistics strategy implementation: a dominant purchasing organisation, a lack of incentives and diverging top-management priorities. This suggests that logistics strategy fit is not a conscious choice determined by contextual factors.

Research limitations/implications

Establishing fit is a continuous cycle of regaining fit between the logistics context and logistics strategy components. Fit can be achieved by a change to the logistics context or to logistics strategy components.

Practical implications

Logistics managers may need to opt for satisfactory fit in view of the costs incurred by changing strategy versus the benefits to be gained from a higher degree of fit.

Originality/value

This paper adopts a longitudinal case design to study the fit between the logistics context and strategy, adding to the body of knowledge on organisational design and strategy in logistics and supply chain management.

Keywords

  • Construction logistics
  • Strategy process
  • Strategic fit
  • Organisational structure
  • Project-based organisations

Haglund, P. and Rudberg, M. (2023), "A longitudinal study on logistics strategy: the case of a building contractor", The International Journal of Logistics Management , Vol. 34 No. 7, pp. 1-23. https://doi.org/10.1108/IJLM-02-2022-0060

Emerald Publishing Limited

Copyright © 2022, Petter Haglund and Martin Rudberg

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode

Introduction

This paper addresses the logistics strategy process in building contractor organisations. Building contractors are project-based organisations and are typically decentralised where projects are managed locally with little connection to the permanent organisation ( Dubois and Gadde, 2002 ). Consequently, activities at the operational level seldom follow strategies formulated at the corporate level ( Miterev et al. , 2017 ) and there is typically little connection between logistics plans at these levels ( Elfving, 2021 ), which in turn causes material flow-related problems at the operational level ( Thunberg and Fredriksson, 2018 ). However, a corporate/company-level logistics plan (i.e. a logistics strategy) can be a means of improving efficiency at the project level by reorganising logistics activities, leading to better resource utilisation and labour productivity ( Dubois et al. , 2019 ). Addressing the issue of formulating and implementing a logistics strategy in a building contractor organisation can thus yield insights into how to establish the necessary prerequisites for managing logistics in building projects.

In comparison to production systems and supply chains in manufacturing, construction has more complex interdependencies between production and supply activities ( Bankvall et al. , 2010 ). There is also a lack of adequate planning and control of materials and information flows; this leads to poor coordination between contractors and sub-contractors, which in turn gives rise to material flow issues ( Thunberg et al. , 2017 ). Previous studies indicate that better planned material flows in construction projects can lead to reductions in total project costs by increasing efficiency in transportation, material handling and production tasks on site (e.g. Janné and Rudberg, 2022 ). However, logistics is rarely addressed holistically in projects and instead the main contractor and sub-contractors manage their own material flows ( Dubois et al. , 2019 ). One effect of this is that planning methods are misaligned with material flow characteristics, leading to congestion on the site and poor resource utilisation ( Sezer and Fredriksson, 2021 ). There is thus a need to consider contextual aspects that influence how logistics is organised, that is a contingency approach to logistics ( Marchesini and Alcântara, 2016 ). The main contractor is typically highlighted to be in the position to address these planning-related issues, but it requires that logistics is addressed at a strategic level ( Thunberg and Fredriksson, 2018 ).

Despite the existing research on logistics and supply chain strategy and structure (e.g. Sabri, 2019 ; Feizabadi et al. , 2021 ), the process of establishing the logistics strategy and structure is seldom addressed. A central concept within logistics and supply chain strategy is “fit”, which refers to aligning strategy and structure elements with internal and external contingencies, such as market and operations characteristics, respectively ( Chow et al. , 1995 ). The concept of fit in logistics and supply chain research is typically considered from a content perspective (e.g. Nakano, 2015 ; Sabri, 2019 ; Feizabadi et al. , 2021 ), but this disregards how fit is established. Mintzberg (1979) argues that it is insufficient to describe fit based solely on strategic and structural elements because they do not represent the strategy as it is pursued. To understand how fit is established, one must look beyond strategic and structural elements to capture the process behind the realisation of the strategy.

Dynamic approaches to fit challenge the content of fit perspective ( Venkatraman and Camillus, 1984 ) and suggest that fit is the outcome of an unpredictable process characterised by internal and external pressures that are involved in reshaping the organisation and its strategy ( Child, 1972 ; Donaldson, 1987 ). For instance, in the case of construction, logistics practices are characterised by low maturity and the absence of a strategic approach to logistics ( Janné and Rudberg, 2022 ), despite the emergence of new methods, tools and organisational forms for managing logistics in construction projects ( Dubois et al. , 2019 ). This indicates that the development and deployment of logistics practices are not necessarily a conscious choice determined solely by their fit with the logistical context, which is postulated by the content of fit perspective. The literature on fit within logistics and supply chain management therefore needs to be expanded to encompass a more dynamic approach. The purpose of this paper is to investigate logistics strategy from a process of establishing fit perspective.

What factors influence the adjustment of a logistics strategy with the aim to regain fit in a building contractor organisation?

What are the implications for a building contractor pursuing a satisfactory fit or a misfit in their logistics strategy?

The study is based on a longitudinal case study of a large contractor's logistics strategy process, which is examined through the lens of contingency theory. The case is, to the authors' knowledge, one of few deliberate logistics strategy processes in construction, where a wide range of strategy contents are addressed. In contrast, most logistics initiatives in construction are limited to one or a few logistics strategy components with an emphasis on the operational level. The longitudinal case design used in this study thereby provides unique insights into the process of establishing fit in a large building contractor organisation.

The paper contributes to research within organisational design and strategy in logistics and supply chain management. In particular, the study illustrates how fit is established in a large construction company. Project-based production is rarely considered in studies of functional strategies, such as logistics strategies. The paper also highlights managerial factors, and their potential influence on the strategy process, which must be considered in order to create necessary prerequisites for managing logistics in construction projects.

The paper is structured as follows: first a theoretical background to contingency theory in logistics and supply chain management is presented. Next, the research design and method are described. This is followed by a case description and analysis of the case. The paper ends with a discussion and conclusions, including the limitations of the study and suggestions for further research.

Contingency theory in logistics and supply chain management

The strategy–structure–performance paradigm.

The leading stream within contingency theory has been the strategy–structure–performance paradigm ( Chandler, 1962 ; Galunic and Eisenhardt, 1994 ). Early adoptions of the strategy–structure–performance paradigm in logistics research focused on intraorganisational issues, that is the fit between the firm's strategy, the organisation of logistics and the effects of fit on performance ( Chow et al. , 1995 ). Later research has adopted the contingency theory lens to study fit at an interorganisational supply chain level of analysis ( Nakano, 2015 ; Sabri, 2019 ; Feizabadi et al. , 2021 ).

These advancements have been valuable for logistics and supply chain management research in explaining which logistics organisation and supply chain structures are feasible under certain circumstances. Similarly, in the operations management domain, contingency theory has been successful in providing an understanding of which operations management practices are effective under certain conditions ( Sousa and Voss, 2008 ). However, despite the valuable insights gained from using contingency theory as a theoretical lens in logistics and supply chain management research, there has been debate regarding the definition of fit within the logistics domain ( Hallavo, 2015 ). Much of this debate stems from problems with contingency theory itself, that is, the tendency to apply reductionistic theoretical models that have provided inconclusive empirical results ( Galunic and Eisenhardt, 1994 ; Van De Ven et al. , 2013 ; Turkulainen, 2022 ). To respond to this critique, major advancements in contingency theory have been made through the configurational view (CV) and the information processing view (IPV).

The configurational and information processing views on fit

The CV and IPV are complementary developments of contingency theory. The CV addresses the traditional reductionist approach and advocates a more holistic perspective with the use of multivariate studies of several contingency variables and organisation design elements ( Meyer et al. , 1993 ; Van De Ven et al. , 2013 ). On the other hand, the IPV addresses the vague definition of fit by explicating fit as the match between information processing (IP) requirements and IP capacity ( Galbraith, 1974 ). Both advancements in contingency theory have shown potential for logistics and supply chain management. The configurational view offers a more holistic picture of supply chains, which has been studied using multivariate analysis of contingency variables and structural variables ( Feizabadi et al. , 2021 ). IPV has been useful for analysing fit at both an intraorganisational and an interorganisational (supply chain) level ( Busse et al. , 2017 ; Aben et al. , 2021 ). Combined, the CV and IPV provide a solid lens for logistics and supply chain management researchers to determine under what conditions the different organisational configurations are feasible. However, to use these views in logistics and supply chain management research, the contingency variables need to be adapted to the specific empirical context ( Koskela and Ballard, 2012 ; Turkulainen, 2022 ).

Dynamic approaches to fit

The strategy–structure–performance paradigm does not account for how strategy and structure change ( Galunic and Eisenhardt, 1994 ). Although the CV and the IPV are considered advancements on the reductionist approach in contingency theory, they also assume a static view of strategy and structure ( Donaldson, 1987 ). The static approaches within contingency theory therefore place less emphasis on what is happening within the structure and on how strategies unfold and are realised ( Mintzberg, 1979 ). This cross-sectional approach has been the main subject of criticism against contingency theory, mainly related to its lack of relevance for dynamic environments where strategy and structure are prone to frequent change ( Galunic and Eisenhardt, 1994 ). In response to this criticism, dynamic approaches to fit focus on the sequences of events that reinforce an existing configuration, create a new configuration, sustain an existing configuration or remove old core elements of a configuration that have become obsolete ( Siggelkow, 2002 ).

Two advancements in contingency theory addresses the issue of only considering fit at one point in time: strategic choice ( Child, 1972 ) and the SARFIT (structural adaption to regain fit) model ( Donaldson, 1987 ). There is considerable overlap between the two views, but they differ in that strategic choice places more emphasis on a dominant coalition (e.g. senior management) with a certain degree of discretion in strategic decisions. This implies that fit can be achieved by either responding to contingencies through organisational adaptation or by changing the contingencies per se , depending on the preferences of the dominant coalition or their degree of discretion ( Montanari, 1978 ). SARFIT, on the other hand, emphasises performance (or a lack thereof) as the main trigger for organisational adaptation rather than the discretion and preferences of the dominant coalition ( Donaldson, 1987 ).

Another stream that falls under the dynamic approaches is that of dialectics and paradoxes, which emphasises the importance of internal tensions and contradictions as triggers for strategic renewal. Within this stream, internal misfits of an organisation are a means of strategic change rather than temporary dysfunctional states of a configuration ( Farjoun and Fiss, 2022 ). Misfits are thus a normal part of any organisation and should be viewed as an opportunity to shift towards a different strategy configuration or to reinforce an existing one.

The majority of contingency research within logistics and supply chain management does however use cross-sectional research designs ( Doering et al. , 2019 ; Danese et al. , 2020 ). Several researchers within logistics and supply chain management highlight the need for longitudinal studies ( Sabri, 2019 ; Feizabadi et al. , 2021 ). Although they are rare, dynamic approaches to fit in logistics and supply chain management have been used, for example through the lens of strategic choice or SARFIT. For instance, Howard et al. (2007) draw on strategic choice combined with institutional theory to explain a failed implementation of supply practices at an engine plant. Another example is Silvestre et al. (2020) who use the SARFIT model to analyse the implementation of supply chain sustainability practices. Furthermore, dialectics and paradoxes are emphasized by Sandberg (2017) who suggests that these advancements in organizational research can benefit the logistics domain. Table 1 provides a synthesis of streams within contingency research.

A contingency approach to logistics strategy in building construction

While contingency theory is useful in the logistics and supply chain management domain, it is too generic in its original form to provide unique insights for researchers and practitioners ( Koskela and Ballard, 2012 ). As such, the sources of IP requirements need to be adapted to the construction setting and viewed from a logistics perspective. In logistics research, uncertainty stems from the characteristics of material and information flows, which are determined by demand characteristics, product characteristics, the design of the production system, the supply chain structure and formalisation (cf. Christopher, 1986 ; Chow et al. , 1995 ; Klaas and Delfmann, 2005 ). These are the determinants of IP requirements. IP capacity is determined by the organisational structure and needs to match the level of IP requirements to achieve fit ( Galbraith, 1974 ). The following paragraphs define the sources of IP requirements and capacity, starting with the contextual factors (demand characteristics, the degree of pre-engineering and the production system), followed by the logistics strategy content (structure and process components).

Demand characteristics relate to the heterogeneity among clients, determining what types of buildings to produce. The requirements of the target market(s) are typically described using competitive priorities (cost, quality, flexibility and delivery) (Maylor et al., 2015).

The degree of pre-engineering refers to the amount of engineering work performed prior to customer order and can be categorised as follows:

Engineer-to-stock (ETS): The product is designed prior to customer order.

Adapt-to-order (ATO): An existing product design is modified according to customer order.

Engineer-to-order (ETO): The product is engineered from scratch, offering broad customisability.

The production system is characterised by the degree of off-site assembly, which affects production and supply variability, and can be categorised as follows:

Component Manufacture and Sub-Assembly (CM&SA): Production activities are carried out on-site with a flexible sequence of operations and reciprocally interdependent activities, leading to a high level of process time and flow variability.

Prefabrication and Sub-Assembly (PF&SA): Prefabricated panel elements that are assembled on site along with other sub-assemblies. Contains a flexible sequence of operations and reciprocally interdependent activities, leading to a high to medium level of process time and flow variability.

Prefabrication and Pre-Assembly (PF&PA): Sub-assemblies are pre-assembled to prefabricated panel elements, leading to fewer materials to be delivered to the site and fewer operations. Contains a flexible sequence of operations and reciprocally interdependent activities, leading to a medium level of process time and flow variability.

Modular building (MB): Volumetric modules are prefabricated in an off-site factory which has a production line or batch flow layout. Remaining assemblies on-site are reduced but still have a flexible sequence of operations and reciprocally interdependent activities.

Structural components include the logistics organisation structure and the supply chain structure. The logistics organisation structure determines the level of IP capacity, where centralisation is the degree to which logistics decision-making authority is concentrated in a single unit (Pfohl and Zöllner, 1997). Supply chain structure refers to the geographical dispersion of and relationships with suppliers (Voordijk et al., 2006). The supply chain structure has implications for the complexity of production and logistics tasks. In particular, the number and type of relationships with suppliers influence the degree of uncertainty in delivery reliability and quality (Flynn and Flynn, 1999). Construction logistics centres can be used to reduce the number of deliveries to the construction site or as short-term storage for just-in-time deliveries (Janné and Fredriksson, 2022). Moreover, the contractor can engage in long-term relationships with suppliers that enable better alignment between logistics solutions and on-site production (Bildsten, 2014).

Process components refer to the administrative and operational logistics processes (Klaas and Delfmann, 2005). Administrative logistics processes are associated with information processing, coordination, reporting and control (e.g. order processing), whereas operational logistics processes are associated with the execution of logistics tasks (e.g. transportation and material handling). IP requirements are reduced by formalising administrative and operational processes, that is, when processes and procedures for performing logistics activities are explicitly formulated (Chow et al., 1995).
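
To make the fit logic above concrete, the following short Python sketch encodes the determinants of IP requirements and an assumed IP capacity score as ordinal values and classifies the result as fit, underfit or overfit. All scores, category labels, the capacity value and the tolerance threshold are illustrative assumptions for exposition only; they are not measurements from the case study or part of any established instrument.

```python
# Illustrative sketch: comparing information-processing (IP) requirements with
# IP capacity to classify fit. All scores, category labels and thresholds are
# assumptions made for exposition; they are not measurements from the case.

# Uncertainty contributed by the contextual factors and by the (lack of)
# formalisation of logistics processes; higher score = higher IP requirements.
PRE_ENGINEERING = {"ETS": 1, "ATO": 2, "ETO": 3}
PRODUCTION_SYSTEM = {"MB": 1, "PF&PA": 2, "PF&SA": 2, "CM&SA": 3}
SUPPLY_CHAIN_STRUCTURE = {"few long-term suppliers": 1, "many arms-length suppliers": 3}
PROCESS_FORMALISATION = {"formalised": 1, "partly formalised": 2, "ad hoc": 3}


def ip_requirements(pre_eng: str, prod_sys: str, supply: str, processes: str) -> int:
    """Sum the assumed uncertainty scores of the IP-requirement determinants."""
    return (PRE_ENGINEERING[pre_eng] + PRODUCTION_SYSTEM[prod_sys]
            + SUPPLY_CHAIN_STRUCTURE[supply] + PROCESS_FORMALISATION[processes])


def classify_fit(requirements: int, capacity: int, tolerance: int = 1) -> str:
    """Underfit: requirements exceed capacity; overfit: the reverse; otherwise fit."""
    if requirements > capacity + tolerance:
        return "underfit"
    if capacity > requirements + tolerance:
        return "overfit"
    return "fit"


# Example profile: ETO products, an on-site CM&SA production system, a dispersed
# supply chain and ad hoc logistics processes. The IP capacity generated by the
# logistics organisation structure is given here as an assumed input score.
requirements = ip_requirements("ETO", "CM&SA", "many arms-length suppliers", "ad hoc")
capacity = 9  # hypothetical capacity score for the organisation structure at hand
print(requirements, capacity, classify_fit(requirements, capacity))  # 12 9 underfit
```

The specific numbers are immaterial; the sketch only encodes the directional relationships stated above, namely that less pre-engineering, more on-site assembly, a more dispersed supply chain and less formalisation raise IP requirements, which must be matched by the IP capacity of the logistics organisation structure to achieve fit.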

Research design and method

Research design

To study logistics strategy from the perspective of the process of arriving at fit, the overall research approach needed to accommodate temporal sequences between events and how they lead to strategy process outcomes. The research was based on a literature review and a longitudinal single-case study. The literature review focused on four areas: (1) cross-sectional contingency theory literature, (2) longitudinal/dynamic perspectives on fit, (3) contingency theory applications to logistics and supply chain management and (4) construction logistics literature. These areas were chosen in line with the recommendations by Voss et al. (2002) to establish a focus early in the research process, whereby the researchers can identify constructs and their presumed relationships. The empirical part of the study was a single-case study of a large Swedish construction company's logistics strategy process. The single-case design was selected to examine the company's logistics strategy process over a period of 11 years, making it possible to study the case longitudinally (Yin, 2018). In 2008, the company initiated a project to develop a logistics strategy and tested the strategy through a total of eight pilot projects split into three phases: phase 1 involved one project, phase 2 involved six projects and phase 3 involved one project. The project spanned seven years and was discontinued in the middle of 2016, but the research study also includes the years 2016–2019 to cover potential outcomes of the project after its termination.

Case selection

The case selection was motivated by access to the company and by the acquisition of information on an unusual case (Flyvbjerg, 2006). The authors had access to extensive documentation and to key agents in the logistics strategy process, which provided rich information covering a long period and enabled the longitudinal case design. Furthermore, while the building contractor was regarded as a typical large general contractor in Sweden, deliberate efforts to address logistics holistically at the corporate level are uncommon among these types of contractors. It is thus the logistics strategy process that makes the case unusual, not the contractor's general characteristics. The case was, however, also selected for theoretical reasons (Eisenhardt, 1989) based on the contractor's general characteristics in terms of size (large), target market (broad/local), production system (CM&SA) and degree of pre-engineering (ETO). Therefore, in line with the recommendations by Ketokivi and Choi (2014) regarding using cases for theory elaboration, the case's characteristics and empirical data provided a basis for analytical generalisation. Finally, the phenomenon of the strategy process and the process of arriving at fit is best studied by analysing process data (Van De Ven, 1992; Langley, 1999). The third reason behind the case selection was therefore the opportunity to access process data describing the decisions, activities and events that exemplify the unpredictable process of establishing fit.

Data collection

The data included both primary and secondary data (see Table 2). The primary data were of two types: participatory observation and semi-structured interviews. For the participatory observation, one of the researchers participated in three pilot project kick-offs and conducted three planned site visits at the pilot projects. The interviews were held with key persons involved in the strategy process and were conducted retrospectively, after the strategy process had ended. A pilot interview was first conducted with the current logistics developer at the company, providing insights into the company's experience from the project. The insights from the pilot interview were used as input to the interview guide that was later used to interview the former logistics manager and the project manager, who were the key persons behind the company's logistics strategy and the pilot projects. The interviews were used to verify the researchers' analysis of the archival data, and a total of six interviews were held before the researchers' interpretation of the archival data had been verified. The secondary data comprised internal documentation containing summaries of the pilot projects, descriptions of the logistics strategy, records and presentations from strategy meetings, implementation plans and formal directives developed for central purchasing and logistics. This documentation was provided to one of the researchers, who observed the strategy process from start to finish but did not take an active part in formulating and implementing the strategy. The documentation covered the project from its initiation in 2008 to a final report issued in 2014. Besides internal documentation, publicly available information such as reports, trade magazines, annual reports and student theses was used as background information to establish when and in what sequence certain activities in the strategy process took place. In total, the interviews, documentation and publicly available information covered decisions, activities and events from 2008 to 2019.

Data analysis

This study adopted a two-step approach to the analysis. The first step concerned the creation of the visual map (Figure 1), where activities, events and decisions that formed part of the logistics strategy process were structured as an illustrative time plan representing the sequence and timing of events in the strategy process. In this first step, a tentative visual map was created based on a document analysis of the secondary data. The document analysis covered a total of 31 documents provided by the case company (see Table 2) and followed an iterative process of skimming, detailed examination and interpretation (Bowen, 2009). The result was a visual map of critical events that occurred between 2008 and 2019 (Figure 1). Langley (1999) recommends this approach for the “sense-making” part of process studies to overcome the extensiveness that characterises process data. The visual mapping approach is suitable as an intermediary analysis technique and enables researchers to retain strategy process data as a sequence of events. These events then provide grounds for explaining underlying causes of strategy process outcomes (Van De Ven, 1992). For instance, a particular decision by top management was related to the implementation phase, while the managers' predispositions were related to the strategy formulation. The visual map was thus used to describe the strategy process as it unfolded, including the decisions, activities and events that influenced strategic choice during strategy implementation.

The second step in the analysis concerned validating the tentative visual map and connecting decisions, activities and events to strategy process outcomes, which explained what influenced the logistics strategy implementation. This second step was based on the procedures for thematic analysis: open coding, axial coding and selective coding ( Flick, 2018 ). The researchers used NVivo to generate codes and themes based on the interview transcripts and documentation. First, a total of 82 open codes were formed based on the interview transcripts and documentation. Second, the 82 open codes were reduced to 15 axial codes that represented identified constraints to strategy implementation (see right side of Table 3 ) that were linked to a specific logistics strategy component (see left side of Table 3 ). Third, three selective codes were identified based on the 15 axial codes: (1) a dominant purchasing organisation, (2) a lack of incentives and (3) diverging top management priorities. These three themes constituted the main constraints to implementation of the logistics strategy. Finally, the building contractor's initial state, expected outcomes and actual outcomes were compared, which enabled the researchers to infer the implications for fit of the realised outcomes (see Table 4 ).
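
To illustrate the structure of this coding procedure, the sketch below shows how coded excerpts can be rolled up from open codes via axial codes to selective themes. The open and axial code labels in the sketch are invented stand-ins; only the three selective themes correspond to those reported above, and the sketch does not reproduce the authors' actual NVivo coding.

```python
# Illustrative sketch of the open -> axial -> selective coding hierarchy used in
# the second analysis step. The open and axial code labels are invented
# stand-ins; only the three selective themes are taken from the study.
from collections import defaultdict

# Hypothetical open codes mapped to hypothetical axial codes
# (in the study: 82 open codes were reduced to 15 axial codes).
OPEN_TO_AXIAL = {
    "site managers avoided terminal costs": "lack of incentives in projects",
    "no internal model for allocating investment costs": "lack of incentives in projects",
    "logistics reported to purchasing": "logistics subordinated to purchasing",
    "ERP investment rejected": "diverging investment priorities",
}

# Hypothetical axial codes mapped to the three selective themes named above.
AXIAL_TO_SELECTIVE = {
    "lack of incentives in projects": "a lack of incentives",
    "logistics subordinated to purchasing": "a dominant purchasing organisation",
    "diverging investment priorities": "diverging top management priorities",
}


def roll_up(open_codes):
    """Group open codes by the selective theme they ultimately belong to."""
    themes = defaultdict(list)
    for code in open_codes:
        axial = OPEN_TO_AXIAL[code]
        themes[AXIAL_TO_SELECTIVE[axial]].append(code)
    return dict(themes)


print(roll_up(list(OPEN_TO_AXIAL)))
```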

Case study description

The company is a large contractor operating in the Nordic countries with a focus on the Swedish construction industry. The logistics strategy process is illustrated in Figure 1 , and includes important decisions, activities, events and reports. The following paragraphs summarise the logistics strategy process in chronological order.

In response to low productivity levels and growth in the construction industry, the company's logistics manager sent out a survey to site managers at the beginning of 2008 to map how much time was spent on purchasing- and logistics-related tasks in projects. The survey indicated that the company had substantial potential to reduce waste in these areas, which convinced the logistics manager to develop a logistics strategy for the company. The logistics manager contacted a consultancy firm the same year, which produced a first draft of the logistics strategy. In 2009, the logistics manager planned the first pilot project to further explore the potential benefits of a corporate-level logistics strategy. Towards the end of 2009, they initiated pilot 1, which had a narrow focus on transportation and material handling of make-to-order materials. Pilot 1 was completed at the end of 2010.

A project manager was hired in the autumn of 2010 and became responsible for planning and executing pilot 2. The pilot, which comprised seven projects, started in 2011 and was finished in 2013. The purpose of pilot 2 was more in line with the first draft of the logistics strategy developed by the consultancy firm, addressing how to supply multiple projects using the same logistics operations platform, how to organise logistics to achieve economies of scale and the potential benefits of increased standardisation and centralisation of logistics tasks. However, at this time the company experienced declining profitability in its housebuilding business unit. Consequently, top management decided to reduce overhead costs by downsizing the central organisation. Thus, even though pilot 2 progressed as expected and finished with promising results, the project manager, who had only been employed for two years, was at risk of being dismissed, which led him to resign voluntarily at the end of 2013.

Pilot 3 began in the autumn of 2013, with the former project manager now working as a consultant. Until this point in time, the strategy process seemed to be progressing well. However, the Chief Purchasing Officer (CPO) had been sceptical towards some of the investments proposed by the logistics manager and the now former project manager. For instance, the CPO and the logistics manager could not agree upon which ERP system to purchase, with the result that no ERP system was purchased at all. Instead, the former project manager had to manually prepare material requirements plans and delivery plans and produce packing, labelling and unloading instructions for suppliers and haulage contractors. Therefore, the learnings from the pilot could not be used in future projects. Furthermore, while pilot 3 was underway, the CPO resigned in the first half of 2015. The CPO had been an important spokesperson for the logistics strategy in the top management team, and his resignation, together with that of the project manager, meant that the strategy work was losing ground in the company. A new CPO, who was positive towards the logistics strategy, was hired at the end of 2015. However, the new CPO had not been involved in the earlier strategy work, and the logistics manager was now approaching retirement. The logistics strategy had already lost support throughout the organisation, and the process came to an end when the logistics manager retired in 2016.

In 2017, although the logistics manager and the project manager were no longer working at the company, the new CPO established a central logistics unit, which belonged to the central purchasing department. Despite there being no plan for developing a logistics strategy on the same scale as intended by the logistics manager, the new CPO hired several people to continue developing methods, tools and processes at a central level, one of them being the logistics developer. The logistics developer was hired at the beginning of 2018 and began gathering information on what had previously been done in terms of logistics development. At the beginning of 2019, the logistics developer produced a report summarising the logistics strategy process from 2008 onwards. Apart from a summary, the report included recommendations on which areas of logistics to focus on in the short and long term. However, the central logistics unit was closed in 2019 when the CPO resigned, and the logistics developer was relocated to a support function focusing on technical support to projects.

Case study findings

Constraints to logistics strategy implementation

This section addresses RQ1: “What factors influence the adjustment of a logistics strategy with the aim to regain fit in a building contractor organisation?”. The interviews and the internal project documentation reveal factors that constrained the implementation of the logistics strategy; these constraints are detailed in Table 3. They can be summarised as: (1) the lack of a formal logistics organisation and thus of formal authority for the logistics manager, (2) a lack of incentives to change among internal stakeholders and (3) diverging top management priorities.

Regarding the first issue, the logistics manager stated that “the biggest problem was that we (logistics) belonged to purchasing”. The central purchasing organisation lacked fundamental logistics expertise, for example regarding the total cost concept, lot sizing and transport planning. Consequently, site managers were reluctant to use framework agreements from central purchasing since they caused problems for transports and on-site logistics. The logistics manager added that purchasers were not aware of what was happening in projects, even though company policy required purchasing to evaluate supplier performance after project completion.

Besides purchasing, the interviewees indicated that site managers were not reluctant towards the strategy per se, but that they lacked incentives to use centrally developed logistics solutions. For instance, the site managers' bonuses were based on project performance (i.e. time, budget and quality), which meant that they did not want to bear additional costs for material handling and for marking and labelling of goods. There were thus no incentives for site managers to pay for distribution terminals and the ERP system, because these were perceived as an additional risk to the project's budget. In addition, the project manager believed that the company lacked an internal business model for how to allocate investment costs between the central organisation and projects. The project manager suggested that the central organisation should have carried the investment costs and that projects would pay a licence fee, for example for using the ERP system.

Diverging top management priorities manifested themselves in several ways, but were most prominent between 2013 and 2016. Top management had in fact been positive towards the strategy in the first couple of years, but changes in the team's composition led to a more sceptical attitude. For instance, the CPO's resignation meant that the logistics manager had to find new ways to gain top management support. After pilot 2 was completed in 2013, the CPO did little to gain support from the rest of the top management team, which the logistics manager and project manager perceived as originating from a lack of logistics expertise. For instance, the project manager stated: “We always needed to go via purchasing … and when you have a CPO in the top management team that does not understand this (logistics), there will not be any change”. The project manager also raised the need for a supply chain manager, or a supply chain department, with knowledge about what logistics means for operations and the ability to explain this to top executives.

Fit, satisfactory fit and misfit

This section addresses RQ2: “What are the implications for a building contractor pursuing a satisfactory fit or a misfit in their logistics strategy?”. The implications of strategic logistics decisions identified in the literature were compared with the case study findings to investigate what could explain the building contractor's lack of fit despite its ambitious logistics strategy (Table 4). This comparison revealed that the logistics manager and project manager had not attempted to make significant changes that would lead to a change in the contractor's overall business strategy. There were attempts to increase the degree of pre-engineering and to move towards a PF&SA production system, but these remained unchanged. The predominant use of the CM&SA production system in projects thus entailed high IP requirements, which subsequently had to be matched with IP capacity to establish fit.

The analysis of the structural components reveals that the organisational structure generated high levels of IP capacity, since the central logistics department and regional planning units were never realised. The contractor's logistics was thus managed in a decentralised organisational structure with a low division of labour, generating a high level of IP capacity. This corresponds to the high degree of production and supply variability generated by the degree of pre-engineering, the production system and the supply chain. The high IP capacity generated from the organisational structure therefore matches the high IP requirements, which indicates a fit between the contextual factors and the structural components.

However, the analysis of the process components indicates that the company had an underfit logistics strategy (i.e. IP requirements exceeded IP capacity). None of the logistics strategy process components were realised (Table 3); instead, logistics was handled through ad hoc problem solving by site management and construction workers, without formalised administrative and operational logistics processes. The low degree of formalisation in the administrative and operational logistics processes thus generated high IP requirements in addition to those generated by the degree of pre-engineering, the production system and the supply chain structure. In other words, the lack of formalised routines in the five process components (Table 3) generated uncertainty and complexity on top of the low degree of pre-engineering, the CM&SA production system and the geographically dispersed supply chain structure. The low degree of formalisation is apparent in pilot 3, where the former project manager worked as a consultant to manually solve administrative logistics tasks.

The case study findings reveal that fit is not necessarily determined by contextual factors, as postulated by previous contingency studies within logistics and supply chain management (Sabri, 2019; Feizabadi et al., 2021). Lacking performance and strategic choice both influence the pursued strategy, and thus they mediate the fit between context and strategy. Howard et al. (2007) present similar findings in a case study of the implementation of supply practices at an engine plant, where the implementation plans received inadequate attention from top management and where unfortunate timing halted the process. Likewise, the case study findings here reveal that the downsizing decision at the building contractor unfortunately coincided with the intended implementation period starting in 2012. In a study of a similar building contractor, Elfving (2021) highlights timing as a critical determinant in the implementation of standardised logistics solutions. In that case, the financial crisis triggered a downsizing decision at the building contractor, which meant that only one logistics solution remained. Furthermore, Elfving (2021) discusses other aspects related to timing, such as the importance of the maturity of a company and of ensuring that top management priorities align with the intended strategy process outcomes to enable implementation of the strategy.

In our case study, top management were initially supportive of the logistics strategy, but it lost ground when the CPO resigned. Although there is no concrete evidence in the case study findings regarding what triggered the downsizing decision, the reluctance to invest in an ERP system and to make changes to the organisational structure coincides timewise with the decision to cut overhead costs. However, this situation could have been avoided had the logistics manager, the project manager and the CPO been able to agree upon a satisfactory ERP system. Research on strategic consensus highlights this issue and indicates that shared reasoning and consistency in decision making over time are important parts of the strategy process ( Mirzaei et al. , 2016 ). In the case study, the logistics manager had to negotiate with stakeholders at a variety of hierarchical levels, including top management, regional managers and site managers. Reaching strategic consensus between all these levels requires time, timing and consistency in decision-making (c.f. Ruffini et al. , 2000 ; Mirzaei et al. , 2016 ; Elfving, 2021 ), and may result in settling for a satisfactory fit.

The case study findings support two of the dynamic approaches to fit identified in the literature: strategic choice (Child, 1972) and SARFIT (Donaldson, 1987). Regarding strategic choice, our findings reveal that managerial discretion was constrained by several factors, such as support among top management, incentives in the line organisation, the educational and professional background of internal stakeholders and company politics. This contrasts with cross-sectional studies of logistics strategy and supply chain fit, which focus on outcomes rather than the process of establishing fit. The case study findings are more in line with the suggestion of Ruffini et al. (2000) that the building contractor's logistics strategy is codetermined by contextual factors and the level of discretion decision makers have in establishing fit. The main thesis of this paper is that contextual factors do not directly determine the logistics strategy. The authors propose that strategic choice influences both contextual factors and logistics strategy content, where the antecedents to strategic choice are managerial discretion and the predispositions of managers. Since contextual factors (i.e. the degree of pre-engineering and the production system) are not static over time, there will be a process of regaining fit, where the outcome (fit/misfit) depends on strategic choice. This line of reasoning falls under the notion of dynamic fit put forward by Zajac et al. (2000), who treat fit as an ongoing process of adjustment, either by modifying contextual factors, the strategy or both. In other words, the logistics strategy process can be driven by a change in demand and production characteristics requiring an increase/reduction in the degree of pre-engineering and a change of production system (a reduction/increase in IP requirements), and/or it can be logistics driven, through reconfiguring the logistics strategy components (a reduction/increase in IP capacity). The former is driven by the logistics strategy, where logistics is a source of competitive advantage: the logistics strategy triggers a change to product and/or process characteristics, which resembles the inside-out approach. In the latter, the logistics strategy is a means of pursuing the corporate/business strategy, which resembles the outside-in approach.

However, the competing SARFIT model was also supported by the case study findings. The main reason why the logistics strategy process was initiated at all was poor logistics performance stemming from a misfit between the logistics strategy and the contextual factors. The logistics manager attempted to change the logistics strategy to accommodate the existing context and did not target the contextual factors alone. This highlights an important nuance between strategic choice and SARFIT. Strategic choice assumes that managers can manipulate the context, the strategy or both. SARFIT, on the other hand, questions whether organisations will change their context without adjusting their strategy (Donaldson, 1987). Therefore, while strategic choice may involve adjusting contextual factors, this will not happen without some changes also being made to the organisation's strategy. It should be noted, however, that neither of these two theoretical models alone can explain how fit is established. Applying each of them as a lens to analyse the logistics strategy process yielded support from the case study findings, but the two contradict each other. The two models could therefore potentially be combined, although this is beyond the scope of this paper.

Beyond the question of why strategic change occurs in the first place, studies focusing on the content of fit within logistics and supply chain management fail to explain why a misfit can endure over a longer period of time. Luo and Yu (2016) address this issue and contend that it is not simply a matter of differentiating between fit and misfit. For instance, they argue that a misfit caused by underfit (i.e. when IP requirements exceed IP capacity) has more detrimental performance implications than an overfit (i.e. when IP capacity exceeds IP requirements). It is thus preferable to pursue an overfit strategy if, for some reason, fit is impossible to achieve. In essence, the decision to retain a misfit or to adjust the strategy to regain fit comes down to the cost of incurring change vis-à-vis living with the misfit (Gligor, 2017). Although it is difficult to determine the costs incurred by the building contractor's logistics strategy process, it is obvious that it ultimately did not pay off. In retrospect, a rational conclusion through the lens of contingency theory would have been not to pursue the intended logistics strategy at all and to live with the misfit, provided that the pre-existing misfit was not too detrimental to performance.
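
The trade-off between incurring the cost of change and living with a misfit can be expressed as a simple decision rule, sketched below in Python. The penalty rate, the planning horizon and the asymmetry factor that penalises underfit more heavily than overfit are assumed values for illustration only; they are not estimates from the case study, Gligor (2017) or Luo and Yu (2016).

```python
# Stylised decision rule: adjust the strategy to regain fit only if the cost of
# change is lower than the expected cost of living with the misfit. All figures
# below are assumptions for illustration, not estimates from the case study.

def misfit_cost(gap: float, years: int, penalty_per_unit: float,
                underfit: bool, asymmetry: float = 2.0) -> float:
    """Cumulative cost of retaining a misfit of size `gap` over `years`.
    Underfit is penalised more heavily than overfit (asymmetry > 1)."""
    rate = penalty_per_unit * (asymmetry if underfit else 1.0)
    return gap * rate * years


def should_change(cost_of_change: float, gap: float, years: int,
                  penalty_per_unit: float, underfit: bool) -> bool:
    """Regain fit only when change is cheaper than enduring the misfit."""
    return cost_of_change < misfit_cost(gap, years, penalty_per_unit, underfit)


# Hypothetical example: a misfit of 3 'IP units', a 5-year horizon and a penalty
# of 100 per unit and year; underfit doubles the penalty rate.
print(should_change(cost_of_change=2000, gap=3, years=5,
                    penalty_per_unit=100, underfit=True))   # True: change pays off
print(should_change(cost_of_change=2000, gap=3, years=5,
                    penalty_per_unit=100, underfit=False))  # False: live with the overfit
```

Under these assumed parameters, it is the asymmetry between underfit and overfit that tips the decision, which mirrors the argument that an underfit is the costlier misfit to live with.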

From the perspective of the building contractor, the logistics strategy process cannot be viewed only as a means of changing the organisational structure to cope with uncertainty (a lack of IP capacity) or of establishing formalised processes (to reduce IP requirements). It also needs to encompass the contextual factors, including demand characteristics (e.g. by changing the project selection strategy), the degree of pre-engineering (i.e. moving the customer order decoupling point) and the choice of production system. This is in line with previous research on logistics strategy, which highlights the need to establish fit between product and process characteristics and the logistics strategy and structure. For instance, Christopher (1986) argues that different positions in the product/process matrix require different ways of organising logistics activities, and thus the product/process characteristics determine the feasibility of a particular logistics strategy. A configuration of logistics strategy structure and process components can therefore be integrated with Jonsson and Rudberg's (2015) version of the product/process matrix, which is adapted to the project-based production of housebuilding. Different positions in the matrix represent variations in product and process characteristics, and each position has an ideal configuration of logistics strategy content. It is important to note, however, that such ideal configurations are static, whereas building contractors need to continuously adapt their logistics strategy to its contextual factors, and vice versa. This is in line with the dialectical and paradox-based views on fit, which suggest that strategic change is not about achieving an optimal configuration but about a continuous act of balancing tensions in the organisation (Sandberg, 2017).

Application of the strategic choice and SARFIT models, respectively, comes with different implications for building contractors. Strategic choice implies that there are three different routes towards establishing fit: (1) the logistics strategy can be adjusted to suit the contextual factors (demand characteristics, the degree of pre-engineering and the production system); (2) demand characteristics, the degree of pre-engineering and the production system can be adjusted to the logistics strategy and (3) a combination of (1) and (2). SARFIT, on the other hand, suggests that the logistics manager's discretion in adjusting any of the contextual factors (demand characteristics, the degree of pre-engineering and the production system) is limited, at least to the extent that changing the degree of pre-engineering and/or the production system will have any effect on strategic fit. Thus, SARFIT rules out the second option described previously in favour of options one and three.

Conclusions

The purpose of this paper was to investigate logistics strategy from the perspective of the process of establishing fit. The paper contributes to the body of knowledge on organisational design and strategy in logistics and supply chain management. The first research question is answered by identifying factors that constrain logistics strategy implementation (Table 3). In addition, the implications for fit are addressed by answering the second research question (Table 4). The study thus builds upon cross-sectional studies within this research area by elaborating on the process of establishing fit. The following sub-sections discuss the research implications, the limitations of the study and suggestions for further research.

Research implications

Previous research emphasises that fit creates superior performance, where fit is defined as the match between IP requirements and IP capacity. However, this assumes that a building contractor's contextual factors, logistics strategy and performance levels remain stable over time with limited need for strategic change, which is seldom the case even in industries with low clock speed, such as construction. Added to this, strategic decision makers do not always possess sufficient decision-making authority to pursue an ideal configuration, as in the case of the building contractor's logistics manager. Contextual factors are thus important to consider, but they do not determine the logistics strategy; the contingency determinism argument should therefore be rejected. This is not to de-emphasise the importance of fit, however; different combinations of product and process characteristics have different theoretically ideal configurations of logistics strategy components.

Managerial implications

The findings indicate that managers may need to strive for a satisfactory fit rather than attempting to establish an ideal form of fit. The factors constraining managerial discretion in this study (Table 3) can potentially be found in similar companies (project-based ETO companies). They can be used to map stakeholder demands and stakeholders' willingness to compromise, in order to determine which structure and process components are possible to implement. Furthermore, the study distinguishes between logistics strategy structure components and process components (Table 4). This distinction can be used to identify relevant logistics strategy components, but the components identified in the case study (Table 3) may look different for other building contractors and for companies in other ETO industries. Logistics and supply chain managers in other companies thus need to identify the structure and process components relevant to them.

Limitations and further research

The contextual factors and logistics strategy components examined here are specific to construction and cannot be directly generalised to other industries. The peculiarities of construction, such as fixed-position production, temporary production systems and temporary project organising, imply that principles from other industries cannot be adopted without consideration of these peculiarities, because the sources of uncertainty differ from those in manufacturing. However, future studies on logistics strategy implementation in other project-driven industries (e.g. ETO manufacturing) would be of interest for comparison with the results of this study. Large-scale surveys could preferably be employed to test which of the two models, strategic choice or SARFIT, best explains the variance in firm performance. Furthermore, the authors suggest further conceptual studies to explore how the two models can be integrated into a single holistic framework.

The single-case design poses some limitations to generalisability. The logistics strategy components ( Table 3 ) are specific to the building contractor in the case study. Further studies on other types of building contractors (e.g. industrialised housebuilders) and ETO contexts are needed to define generic logistics strategy components for ETO companies. In addition, the case study findings indicate that the middle management levels of building contractors may be overlooked in the construction logistics research domain. Regional and area managers have a high level of authority and oversee multiple projects simultaneously. The findings indicate that they were a constraining factor to logistics strategy implementation, but this needs to be investigated further.

Figure 1. Visual map of the logistics strategy process between 2008 and 2019

Table 1. Streams within contingency theory and their applications within logistics and supply chain management

Stream | Rationale | Conceptualization of fit | Representative paper(s) | Examples from logistics and supply chain management
Strategy–Structure–Performance | Rejects the “one size fits all” argument in favour of “contingency determinism”, i.e. that strategy determines structure | Static: Strategy drives the development of suitable organizational structure and processes | |
Information Processing View | Addresses deficiencies in the conceptualization of fit. Explicates fit by portraying organizations as information processing systems | Static: Fit indicates that a firm's information processing requirements (determined by contingency variables) are matched by its information processing capacity (determined by organizational structure and processes) | Galbraith (1974) | Busse et al. (2017); (2021)
Configurational View | Addresses criticism of contingency theory for being reductionist and limited to bivariate studies | Static: Fit indicates a constellation of several commonly occurring variables of contextual factors and organizational structure | Meyer et al. (1993) | Feizabadi et al. (2021)
Strategic Choice | Rejects “contingency determinism”, i.e. that contextual factors determine organizational structure. Strategic choices by a dominant coalition influence fit | Dynamic: A dominant coalition (e.g. senior management) can make changes to contextual factors and/or organizational structure to establish fit based on personal preferences, performance, institutional factors etc. | Child (1972) | Howard et al. (2007)
SARFIT | Rejects “contingency determinism” and partially strategic choice in favour of performance as the main driver for a change of organizational structure to regain fit | Dynamic: Misfits lead to poorly functioning organizations, which in turn leads to poor performance. This puts pressure on reorganizing to regain fit and improve performance | Donaldson (1987) | Silvestre et al. (2020)
Dialectics and paradoxes | Rejects the assumption that misfits are always dysfunctional and criticises previous dynamic approaches for their lack of attention to how strategic change occurs. Misfits (or “contradictions” and “tensions” as they are called) are regarded as important drivers of strategic change | Dynamic: Organizations are ever-changing and thus fit cannot be viewed as a state of equilibrium. Internal tensions always exist to some extent and need to be deliberately managed and balanced | Farjoun and Fiss (2022) | Sandberg (2017)

Table 2. Data collection methods

Data | Data collection method | Time period covered | Comments
6 interviews | Semi-structured interviews | 2008–2019 |
2 project time plans | Archival data from pilot project time plans | 2009–2012 | Details regarding pilot projects and implementation plans
9 project reports | Archival data with reports issued during the project | 2008–2019 | Reports continuously issued over 2008–2019
3 annual reports | Publicly available annual reports from 2010 to 2012 | 2010–2012 | Financial measures and comments from top management
10 planning and follow-up meetings | Archival data with presentations, agendas and decision protocols | 2008–2013 | Details regarding logistics strategy content, pilot projects and implementation plans
7 instructional documents | Archival data with instructions for site managers, purchasing, delivery planners etc. | 2010–2013 | Descriptions of logistics processes aimed at different organizational members
2 pilot project kick-off/start-up meetings | Researcher observation and notetaking | 2011–2014 | Observational participation during full-day meetings with representation from all main participants for pilot projects 2 and 3, respectively
3 site visits | Researcher observation, unstructured interviews and notetaking | 2010–2016 | Planned site visits at pilot projects 1, 2 and 3, respectively. Unstructured interviews with site managers, foremen, project participants and site personnel. Walk-around at site and full-day observation of site activities
3 student theses | Master thesis projects/reports covering pilot projects 1, 2 and 3, respectively | 2010–2015 | Containing information on pilot projects 1, 2 and 3, with thesis 3 covering the full-scale implementation of the final logistics strategy outlined in the main project report in 2013
3 student thesesMaster thesis projects/reports covering pilot project 1, 2 and 3, respectively2010–2015Containing information on pilot project 1, 2 and 3, with thesis 3 covering the full-scale implementation of the final logistics strategy outlined in the main project report in 2013

Table 3. Influencing factors on the logistics strategy process outcomes

Identified logistics strategy components | Expected outcomes | Realized outcome | Identified constraints towards strategy implementation (data source within parentheses: D = documentation, LM = logistics manager, PM = project manager, LD = logistics developer)
Centralized logistics | Centralized development of logistics operations platform | Existed between 2016 and 2019 | New purchasing manager left (started in 2016) (LM); Top management did not understand the strategy (PM); Logistics was part of the purchasing organization (D, LM, PM, LD)
Regional planning units | Aggregation of materials and distribution planning (MTS materials) | Not realized | Top management did not understand the strategy (PM); Regional managers were not committed to change current way of working (LM)
ERP system | Connecting central/regional and project planning levels | Not realized | Central organization was reluctant to carry initial investment costs (LM, PM); Top management did not understand the strategy (PM)
Distribution terminals | Inventory buffers of MTS materials in each region to increase flexibility, minimize number of deliveries, achieve economies of scale | Not realized | Site managers only experienced the incurred cost of distribution terminals (PM); Central organization was reluctant to carry initial investment costs (LM, PM)
Design and engineering | Routines to improve planning, supplier selection and accuracy of information | Not realized | Top management did not understand the strategy (PM); Low degree of standardization in design and engineering solutions (D, LM)
Site logistics | Site disposition plan, roles and responsibilities, delivery planning, goods reception | Not realized | Material handling on site was not considered logistics (PM); Purchasers were not aware of material flow problems on site (LM, PM)
Marking and labelling of goods | Ensure correct and informative packaging labels | Not realized | Site managers only experienced the purchasing cost but not the savings of labelling goods (PM); Lack of scale perceived by suppliers (PM)
Delivery planning and transports | Increased control of delivery times and reduce disturbances on production activities | Not realized | Logistics was part of the purchasing organization (D, LM, PM); Transport costs were not visible to project purchasers (included in purchasing costs) (D, LM)
Supplier development policies | Continuous improvements to supply logistics | Not realized | Insufficient logistics capabilities within purchasing organization (D, LM, PM); Long-term supply agreements were not used by project purchasers (PM); Purchasing organization's incentives drove focus on purchasing costs over total costs (D, LM, PM); Logistics was part of the purchasing organization (D, LM, PM)

Table 4. Implications for logistics strategy fit

Contextual factors and logistics strategy components | Description | Literature findings: Implications for fit | Literature findings: Key references | Case study findings: Realized outcome | Case study findings: Implications for fit
Demand characteristics | Number, size, knowledge, behaviour and heterogeneity among clients | Determines suitable degree of product standardization and pre-engineering through competitive priorities | (2022); (2015) | Remained unchanged. Projects were of local character with a high heterogeneity among clients | Products and production system were adaptable to each client's requirements and the company mainly competes with smaller local actors
Degree of pre-engineering | No. of product variants, BOM structure complexity (depth and breadth) and amount of engineering work performed prior to customer order, impacting production and supply variability | IP requirements are generated from late design changes | (2019); (2022) | Low use of standardized products and pre-engineered components. BOM structure changes from project to project | High level of IP requirements due to low amount of information possessed prior to task execution (DTO: low degree of pre-engineering)
Production system | Degree of off-site assembly (CM&SA, PF&SA, PF&PA or MB), impacting production and supply variability | IP requirements are generated from production variability (process time and flow variability) | (2022); (2019) | Mainly CM&SA production systems with high levels of production variability in projects | High level of IP requirements due to low amount of information possessed prior to task execution (high level of production variability)
Organizational structure | | Determines level of IP capacity of logistics organization during task performance | (1995) | High level of IP capacity generated from decentralized organizational structure | IP requirements reduced due to reduced division of labour
Supply chain structure | Number of suppliers and supplier relationships, impacting delivery reliability and quality | IP requirements are generated from supply variability | | Mainly arms-length relationships with local suppliers of building materials. Direct deliveries to construction sites from materials suppliers | High level of IP requirements generated from short-term, market-based supplier relationships. Direct deliveries from many suppliers to construction sites
Administrative processes | Formalized procedures for information processing, coordination and control activities, e.g. demand management, inventory and order management, order processing, distribution and transportation planning | Determines level of IP requirements generated from level of formalization | | Formalized logistics processes were never implemented, and logistics tasks were handled in a problem-solving manner. Administrative processes were seldom considered by site management | High level of IP requirements due to low amount of information possessed prior to task execution (lack of administrative routines and information system)
Operational processes | Formalized procedures for physical activities, e.g. on-site material handling, transportation, warehouse operations | Determines level of IP requirements generated from level of formalization | | Formalized logistics processes were never implemented, and logistics tasks were handled in a problem-solving manner. Construction workers and supervisors typically carried out goods reception and material handling | High level of IP requirements due to low amount of information possessed prior to task execution (absence of established material handling and goods reception procedures)

References

Aben, T.A., Van Der Valk, W., Roehrich, J.K. and Selviaridis, K. (2021), “Managing information asymmetry in public–private relationships undergoing a digital transformation: the role of contractual and relational governance”, International Journal of Operations and Production Management, Vol. 41 No. 7, pp. 1145-1191.

Bankvall , L. , Bygballe , L.E. , Dubois , A. and Jahre , M. ( 2010 ), “ Interdependence in supply chains and projects in construction ”, Supply Chain Management , Vol.  15 , pp.  385 - 393 .

Bildsten , L. ( 2014 ), “ Buyer-supplier relationships in industrialized building ”, Construction Management and Economics , Vol.  32 , pp.  146 - 159 .

Bowen , G.A. ( 2009 ), “ Document analysis as a qualitative research method ”, Qualitative Research Journal , Vol.  9 No.  2 , pp. 27 - 40 .

Busse , C. , Meinlschmidt , J. and Foerstl , K. ( 2017 ), “ Managing information processing needs in global supply chains: a prerequisite to sustainable supply chain management ”, Journal of Supply Chain Management , Vol.  53 , pp.  87 - 113 .

Cannas , V.G. , Gosling , J. , Pero , M. and Rossi , T. ( 2019 ), “ Engineering and production decoupling configurations: an empirical study in the machinery industry ”, International Journal of Production Economics , Vol.  216 , pp.  173 - 189 .

Chandler , J.A.D. ( 1962 ), Strategy and Structure: Chapters in the History of the American Industrial Enterprise , MIT Press , Cambridge, MA .

Child , J. ( 1972 ), “ Organizational structure, environment and performance: the role of strategic choice ”, Sociology , Vol.  6 , pp.  1 - 22 .

Chow , G. , Heaver , T.D. and Henriksson , L.E. ( 1995 ), “ Strategy, structure and performance: a framework for logistics research ”, Logistics and Transportation Review , Vol.  31 , p. 285 .

Christopher , M. ( 1986 ), “ Implementing logistics strategy ”, International Journal of Physical Distribution and Materials Management , Vol.  16 , pp.  52 - 62 .

Danese , P. , Molinaro , M. and Romano , P. ( 2020 ), “ Investigating fit in supply chain integration: a systematic literature review on context, practices, performance links ”, Journal of Purchasing and Supply Management , Vol.  26 , 100634 .

Doering , T. , Suresh , N.C. and Krumwiede , D. ( 2019 ), “ Measuring the effects of time: repeated cross-sectional research in operations and supply chain management ”, Supply Chain Management: An International Journal , Vol.  25 No.  1 , pp. 122 - 138 .

Donaldson , L. ( 1987 ), “ Strategy and structural adjustment to regain fit and performance: in defence of contingency theory ”, Journal of Management Studies , Vol.  24 , pp.  1 - 24 .

Dubois , A. and Gadde , L.-E. ( 2002 ), “ The construction industry as a loosely coupled system: implications for productivity and innovation ”, Construction Management and Economics , Vol.  20 , pp.  621 - 631 .

Dubois , A. , Hulthén , K. and Sundquist , V. ( 2019 ), “ Organising logistics and transport activities in construction ”, The International Journal of Logistics Management , Vol.  30 , pp.  320 - 340 .

Eisenhardt , K.M. ( 1989 ), “ Building theories from case study research ”, Academy of Management Review , Vol.  14 , pp.  532 - 550 .

Elfving , J.A. ( 2021 ), “ A decade of lessons learned: deployment of lean at a large general contractor ”, Construction Management and Economics , Vol.  40 , pp.  548 - 561 .

Farjoun , M. and Fiss , P.C. ( 2022 ), “ Thriving on contradiction: toward a dialectical alternative to fit-based models in strategy (and beyond) ”, Strategic Management Journal , Vol.  43 , pp.  340 - 369 .

Feizabadi , J. , Gligor , D. and Alibakhshi , S. ( 2021 ), “ Strategic supply chains: a configurational perspective ”, The International Journal of Logistics Management , Vol.  32 , pp.  1093 - 1123 .

Flick , U. ( 2018 ), An Introduction to Qualitative Research , Sage , London .

Flynn , B.B. and Flynn , E.J. ( 1999 ), “ Information‐processing alternatives for coping with manufacturing environment complexity ”, Decision Sciences , Vol.  30 , pp.  1021 - 1052 .

Flyvbjerg , B. ( 2006 ), “ Five misunderstandings about case-study research ”, Qualitative Inquiry , Vol.  12 , pp.  219 - 245 .

Galbraith , J.R. ( 1974 ), “ Organization design: an information processing view ”, Interfaces , Vol.  4 , pp.  28 - 36 .

Galunic , D.C. and Eisenhardt , K.M. ( 1994 ), “ Renewing the strategy-structure-performance paradigm ”, Research in Organizational Behavior , Vol.  16 , p. 215 .

Gligor , D. ( 2017 ), “ Re‐examining supply chain fit: an assessment of moderating factors ”, Journal of Business Logistics , Vol.  38 , pp.  253 - 265 .

Hallavo , V. ( 2015 ), “ Superior performance through supply chain fit: a synthesis ”, Supply Chain Management: An International Journal , Vol.  20 No.  1 , pp. 71 - 82 .

Howard , M. , Lewis , M. , Miemczyk , J. and Brandon‐Jones , A. ( 2007 ), “ Implementing supply practice at Bridgend engine plant: the influence of institutional and strategic choice perspectives ”, International Journal of Operations and Production Management , Vol.  27 , pp.  754 - 776 .

Janné , M. and Fredriksson , A. ( 2022 ), “ Construction logistics in urban development projects–learning from, or repeating, past mistakes of city logistics? ”, The International Journal of Logistics Management , Vol.  33 , pp.  49 - 68 .

Janné , M. and Rudberg , M. ( 2022 ), “ Effects of employing third-party logistics arrangements in construction projects ”, Production Planning and Control , Vol.  33 , pp.  71 - 83 .

Jonsson , H. and Rudberg , M. ( 2015 ), “ Production system classification matrix: matching product standardization and production-system design ”, Journal of Construction Engineering and Management , Vol.  141 , 05015004 .

Ketokivi , M. and Choi , T. ( 2014 ), “ Renaissance of case research as a scientific method ”, Journal of Operations Management , Vol.  32 , pp.  232 - 240 .

Klaas , T. and Delfmann , W. ( 2005 ), “ Notes on the study of configurations in logistics research and supply chain design ”, Supply Chain Management: European Perspectives , Vol.  11 , pp. 12 - 36 .

Koskela , L. and Ballard , G. ( 2012 ), “ Is production outside management? ”, Building Research and Information , Vol.  40 , pp.  724 - 737 .

Langley , A. ( 1999 ), “ Strategies for theorizing from process data ”, Academy of Management Review , Vol.  24 , pp.  691 - 710 .

Luo , B.N. and Donaldson , L. ( 2013 ), “ Misfits in organization design: information processing as a compensatory mechanism ”, Journal of Organization Design , Vol.  2 , pp.  2 - 10 .

Luo , B.N. and Yu , K. ( 2016 ), “ Fits and misfits of supply chain flexibility to environmental uncertainty: two types of asymmetric effects on performance ”, The International Journal of Logistics Management , Vol.  27 No.  3 , pp. 862 - 885 .

Marchesini , M.M.P. and Alcântara , R.L.C. ( 2016 ), “ Logistics activities in supply chain business process: a conceptual framework to guide their implementation ”, The International Journal of Logistics Management , Vol.  27 , pp.  6 - 30 .

Maylor, H., Turner, N. and Murray-Webster, R. (2015), “‘It worked for manufacturing…!’: operations strategy in project-based operations”, International Journal of Project Management, Vol. 33, pp. 103-115.

Meyer , A.D. , Tsui , A.S. and Hinings , C.R. ( 1993 ), “ Configurational approaches to organizational analysis ”, Academy of Management Journal , Vol.  36 , pp.  1175 - 1195 .

Miles , R.E. , Snow , C.C. , Meyer , A.D. and Coleman , H.J. Jr. ( 1978 ), “ Organizational strategy, structure, and process ”, Academy of Management Review , Vol.  3 , pp.  546 - 562 .

Mintzberg, H. (1979), The Structuring of Organizations: A Synthesis of the Research, Prentice-Hall, Englewood Cliffs, New Jersey.

Mirzaei , N.E. , Fredriksson , A. and Winroth , M. ( 2016 ), “ Strategic consensus on manufacturing strategy content: including the operators' perceptions ”, International Journal of Operations and Production Management , Vol.  36 , pp.  429 - 466 .

Miterev , M. , Mancini , M. and Turner , R. ( 2017 ), “ Towards a design for the project-based organization ”, International Journal of Project Management , Vol.  35 , pp.  479 - 491 .

Montanari , J.R. ( 1978 ), “ Managerial discretion: an expanded model of organization choice ”, Academy of Management Review , Vol.  3 , pp.  231 - 241 .

Nakano , M. ( 2015 ), “ Exploratory analysis on the relationship between strategy and structure/processes in supply chains: using the strategy-structure-processes-performance paradigm ”, The International Journal of Logistics Management , Vol.  26 , pp.  381 - 400 .

Pfohl , H.C. and Zöllner , W. ( 1997 ), “ Organization for logistics: the contingency approach ”, International Journal of Physical Distribution and Logistics Management , Vol.  27 , pp.  306 - 320 .

Ruffini , F.A. , Boer , H. and Van Riemsdijk , M.J. ( 2000 ), “ Organisation design in operations management ”, International Journal of Operations and Production Management , Vol.  20 , pp.  860 - 879 .

Sabri , Y. ( 2019 ), “ In pursuit of supply chain fit ”, The International Journal of Logistics Management , Vol.  30 , pp.  821 - 844 .

Sandberg , E. ( 2017 ), “ Introducing the paradox theory in logistics and SCM research–examples from a global sourcing context ”, International Journal of Logistics Research and Applications , Vol.  20 , pp.  459 - 474 .

Sezer , A.A. and Fredriksson , A. ( 2021 ), “ Paving the path towards efficient construction logistics by revealing the current practice and issues ”, Logistics , Vol.  5 , p. 53 .

Shurrab , H. , Jonsson , P. and Johansson , M.I. ( 2022 ), “ A tactical demand-supply planning framework to manage complexity in engineer-to-order environments: insights from an in-depth case study ”, Production Planning and Control , Vol.  33 No.  5 , pp. 462 - 479 .

Siggelkow , N. ( 2002 ), “ Evolution toward fit ”, Administrative Science Quarterly , Vol.  47 , pp.  125 - 159 .

Silvestre , B.S. , Silva , M.E. , Cormack , A. and Thome , A.M.T. ( 2020 ), “ Supply chain sustainability trajectories: learning through sustainability initiatives ”, International Journal of Operations and Production Management , Vol.  40 No.  9 , pp. 1301 - 1337 .

Sousa , R. and Voss , C.A. ( 2008 ), “ Contingency research in operations management practices ”, Journal of Operations Management , Vol.  26 , pp.  697 - 713 .

Thunberg , M. and Fredriksson , A. ( 2018 ), “ Bringing planning back into the picture–How can supply chain planning aid in dealing with supply chain-related problems in construction? ”, Construction Management and Economics , Vol.  36 , pp.  425 - 442 .

Thunberg , M. , Rudberg , M. and Karrbom Gustavsson , T. ( 2017 ), “ Categorising on-site problems: a supply chain management perspective on construction projects ”, Construction Innovation , Vol.  17 , pp.  90 - 111 .

Turkulainen , V. ( 2022 ), “ Contingency theory and the information processing view ”, in Handbook of Theories for Purchasing, Supply Chain and Management Research , Edward Elgar Publishing , pp.  248 - 266 .

Van De Ven , A.H. ( 1992 ), “ Suggestions for studying strategy process: a research note ”, Strategic Management Journal , Vol.  13 , pp.  169 - 188 .

Van De Ven , A.H. , Ganco , M. and Hinings , C.R. ( 2013 ), “ Returning to the Frontier of contingency theory of organizational and institutional designs ”, Academy of Management Annals , Vol.  7 , pp.  393 - 440 .

Venkatraman , N. and Camillus , J.C. ( 1984 ), “ Exploring the concept of ‘fit’ in strategic management ”, Academy of Management Review , Vol.  9 , pp.  513 - 525 .

Voordijk , H. , Meijboom , B. and De Haan , J. ( 2006 ), “ Modularity in supply chains: a multiple case study in the construction industry ”, International Journal of Operations and Production Management , Vol.  26 , pp.  600 - 618 .

Voss , C. , Tsikriktsis , N. and Frohlich , M. ( 2002 ), “ Case research in operations management ”, International Journal of Operations and Production Management , Vol.  22 , pp.  195 - 219 .

Wikner , J. and Rudberg , M. ( 2005 ), “ Integrating production and engineering perspectives on the customer order decoupling point ”, International Journal of Operations and Production Management , Vol.  25 , pp.  623 - 641 .

Yin , R.K. ( 2018 ), Case Study Research: Design and Methods , 6 ed. , SAGE , Thousand Oaks, California .

Zajac , E.J. , Kraatz , M.S. and Bresser , R.K. ( 2000 ), “ Modeling the dynamics of strategic fit: a normative approach to strategic change ”, Strategic Management Journal , Vol.  21 , pp.  429 - 453 .


Developing longitudinal qualitative designs: lessons learned and recommendations for health services research

Lynn Calman 1, Lisa Brunton and Alex Molassiotis 2

1 University of Manchester, Jean McFarlane Building, Oxford Road, Manchester, M13 9PL, UK

2 Hong Kong Polytechnic University, Hung Hom, Hong Kong

Longitudinal qualitative methods are becoming increasingly used in health services research, but the method and the challenges particular to health care settings are not well described in the literature. We reflect on the strategies used in a longitudinal qualitative study to explore the experience of symptoms in cancer patients and their carers, following participants from diagnosis for twelve months; we highlight ethical, practical, theoretical and methodological issues that need to be considered and addressed from the outset of a longitudinal qualitative study.

Key considerations in undertaking longitudinal qualitative projects in health research include the use of theory, utilizing multiple methods of analysis and giving consideration to practical and ethical issues at an early stage. These include issues of time and timing; data collection processes; changing the topic guide over time; recruitment considerations; retention of staff; issues around confidentiality; effects of the project on staff and patients; and analyzing data within and across time.

Conclusions

As longitudinal qualitative methods are becoming increasingly used in health services research, the methodological and practical challenges particular to health care settings need more robust approaches and conceptual improvement. We provide recommendations for the use of such designs. We have a particular focus on cancer patients, so this paper will have particular relevance for researchers interested in chronic and life limiting conditions.

Longitudinal qualitative research (LQR) has been an emerging methodology over the last decade, with methodological discussion and debate taking place within social research [ 1 ]. Longitudinal qualitative research is distinguished from other qualitative approaches by the way in which time is designed into the research process, making change a key focus for analysis [ 1 ]. LQR answers qualitative questions about the lived experience of change, or sometimes stability, over time. Findings can establish the processes by which this experience is created and illuminate the causes and consequences of change. Qualitative research is about why and how health care is experienced, and LQR focuses on how and why these experiences change over time. In contrast to longitudinal quantitative methodologies, LQR focuses on individual narratives and trajectories and can capture critical moments and processes involved in change. LQR is also particularly helpful in capturing “transitions” in care; for example, while researchers are beginning to more clearly map the cancer journey or pathway [ 2 ], we less clearly understand the processes involved in the experience of transition along this pathway, whether that be to long-term survivor or living with active or advanced disease. Saldana [ 3 ] identifies the principles that underpin LQR as duration, time and change and emphasizes that time and change are contextual and may transform during the course of a study.

Holland [ 4 ] identifies four methodological models of LQR.

• Mixed methods approaches. LQR may be embedded within case studies, ethnographies and quantitative longitudinal studies such as cohort studies and randomized controlled trials. Mixed methods studies are the context of most LQR studies in healthcare [ 5 ].

• Planned prospective longitudinal studies, where the unit of analysis can be the individual, the family or an organization.

• Follow-up studies, where participants from an original study are followed up after a period of time.

• Evaluation studies, for policy evaluation.

LQR methodologies can be particularly useful in assessing interventions. LQR studies embedded within randomized controlled trials or evaluation studies, of often complex interventions, are used as part of process evaluation. This can help us to understand not just whether an intervention may work but the mechanisms through which it works and if it is feasible and acceptable to the population under study [ 6 ].

LQR is becoming more frequently used in health research. LQR has been used, for example, to explore the prospect of dying [ 7 ], journeys to the diagnosis of cancer [ 8 ] and living with haemodialysis [ 9 ]. Published papers report mainly interview based studies, sometimes called serial interviews [ 10 , 11 ] to explore change over time, although other data collection methods are used. Different approaches have been taken to collection and analysis of data, for example, the use of longitudinal data to fully develop theoretical saturation of a category in a grounded theory study [ 12 , 13 ]. Data is not presented as a longitudinal narrative but as contributing to the properties of a category.

There are limitations in the published literature. Analysis is complex and multidimensional and can be tackled both cross-sectionally at each time point, allowing comparison between individuals at the same time, and longitudinally, capturing each individual’s narrative. Thematic analysis is widely used [ 13 - 15 ] but can lead to cross-sectional descriptive accounts (what is happening at this time point) rather than focusing on causes and consequences of change. Research founded on explicit theoretical perspectives can move beyond descriptive analysis to further explore the complexities of experience over time [ 16 ]. LQR generates a rich source of data which has been used successfully for secondary analysis [ 11 , 17 ].

How analysis of this multidimensional data can be integrated is a particular challenge and is not well described or reported in the literature [ 4 ]. Papers tend to focus on either the cross-sectional or the longitudinal (narrative) data. This means that the longitudinal aspects of the study, time and change, are often poorly captured. In particular, reporting cross-sectional data alone can lead to descriptions of each time point rather than a focus on the changes between time points. Studies may have the explicit aim of focusing on one or the other aspect of analysis, and this will result in different analyses and reporting. The addition of a theoretical framework can help to guide researchers during analysis to move beyond description.

The purpose of this paper is to reflect on the strategies used in an LQR programme and highlight ethical, practical, theoretical and methodological issues that need to be considered and addressed from the outset of a study, giving researchers in the field some direction and raising the debate and discussion among researchers on ways to develop and carry out LQR projects.

Over the past six years we have carried out a large LQR programme of research on the experiences of symptoms in cancer patients [ 18 - 25 ]. This included interviews with patients from eight cancer diagnostic groups (and their caregivers) from diagnosis to three, six and 12 months later. As researchers working for the first time with longitudinal qualitative data, we developed our research design and analysis strategy iteratively throughout the project. We have a particular focus on cancer patients, so this paper will have particular relevance for researchers interested in chronic and life limiting conditions.

As we were completing the analysis and dissemination of this large programme of research, we wished to reflect on our experience of a health services research LQR project. As members of the core research team we felt that we had developed a great deal of experience in the development and management of such a project, and that by pooling our knowledge we could suggest some important lessons learned from our experience. The authors met at regular intervals to identify the key aspects of the researchers’ experience of conducting this LQR project that we considered were not well addressed within the current literature. Issues were identified through brainstorming sessions among the investigators and consideration of past formal discussions (recorded or not) during the project. A final complete list was presented and discussed in an open meeting with a group of qualitative researchers from a supportive care research team, and further discussions took place. Common issues that are relevant to any qualitative research, and for which there is significant literature, were left out; only issues that were closely linked with LQR remained in the list for further discussion. Alongside our experience and consultation with experienced qualitative researchers, we also searched the literature to find out whether there is any clear information on each issue or topic. Recommendations, thus, were both experience-based and literature-based, although due to the lack of, or limited, literature around some of the issues discussed, experience-based recommendations were more common. This paper was developed to give examples of how specific ethical and practical issues in the project were tackled, so that they might stimulate debate and discussion amongst LQR researchers.

We present the results of our discussions and suggested solutions below and these are summarized in Table 1.

Table 1. Summary of themes and suggested solutions

Ethical issues: participant
• Issue: Recruitment shortly after a significant diagnosis. Suggested solutions: the treating doctor assessed the participant prior to approach by the researcher; participants were approached sensitively in order to build trust and develop relationships over the long term.
• Issue: Blurring of boundaries as relationships develop. Suggested solution: agreed plans to manage participant-initiated contact about, for example, their treatment or health status (researchers did not give advice but referred the participant to the relevant health professional).
• Issue: Potential for patients to become unwell or die during the study. Suggested solutions: a written distress policy for participants and the research team in place; ongoing consent recorded over the life of the project.

Ethical issues: researcher
• Issue: Developing relationships over time, and closure of those relationships. Suggested solutions: prepared researchers to manage difficult topics and emotions during the interview, and how management might change as relationships deepen; developed a supportive network for researchers (e.g. debriefing sessions post interview).
• Issue: Confidentiality, and sharing data across large research teams. Suggested solutions: written procedures for managing ad hoc or informal contacts with participants; clear data transfer and management plans.
• Issue: Management of participant fatigue in interviews. Suggested solutions: ensure that, as the interview schedule changes due to newly emerging topics, it does not become overburdensome; find new ways to ask questions to avoid repetition (do not merely add more questions); involve service users in study design.

Recruitment and retention of participants
• Issue: Some groups of patients had high levels of attrition due to the natural history of their disease. Suggested solutions: checked the health status of participants before contacting them prior to the next interview, to ensure this was done sensitively; give careful thought to the heterogeneity of the sample, as the time points at which data are collected may have to be managed differently for sub-groups.

Time
• Issue: At what time points should data be collected? Suggested solutions: we made a pragmatic decision and used the same time points for all participants; it may be more relevant to identify time points by key transitions in the patient’s journey, by consideration of previous literature, or informed by theory.
• Issue: Time should be explicitly included in the interview, including changing illness perceptions. Suggested solutions: looking forwards and backwards in interviews moves away from linear notions of time; encourage reflexivity in the participant as well as the researcher; ask participants to reflect on their experience from the previous interview.

Data collection and management of resources
• Issue: Management of time and resources when working with a large data set. Suggested solution: ensure adequate time is included in project plans for project management and communication with participants.
• Issue: Funding for LQR. Suggested solution: work with funding bodies to consider LQR.
• Issue: The research focus and topic guide evolve over time. Suggested solutions: flexibility, openness and responsiveness to the data and emerging analysis and interpretation is a key skill for the LQR researcher; ask an ethics committee for advice about how to manage this.

Analyzing data
• Issue: LQR data sets are large and complex and can be analyzed in multiple ways from different perspectives. Suggested solutions: ensure adequate time to analyze data between interviews, even if the analysis is preliminary; consider analysis of data within each case and as comparison between cases; consider if and how subgroups should be analysed (is there a strong theoretical or practical reason why some groups should be analysed separately?); consider the contribution of a number of different analysis strategies and their strengths and weaknesses; consider analysing data in a number of different ways to add alternative understandings of longitudinal data.

Ethical issues: participant related

Patients with cancer may be vulnerable, with a high symptom burden and poor prognosis, but patients still value being able to contribute their views [ 10 , 26 ]. Longitudinal research with this patient group is important but some ethical issues are amplified by collecting in-depth data from the same participants over time. Particular issues have been identified as intrusion (into people’s lives), distortion (of experience due to repeated contact, personal involvement and closure of relationships) and dependency [ 4 ].

We wished to interview patients shortly after diagnosis, which is a critical point in the patient pathway. Sensitive recruitment of participants soon after a life-changing diagnosis, such as cancer, is important in building relationships and establishing a long-term commitment to a study. Although building relationships and developing trust is essential, this adds complexity to the role of the researcher involved in longitudinal research. Both the researcher and the researched can be affected by their involvement over time [ 27 ]. We found that on occasion patients did contact the research team for advice or information relating to their diagnosis. It is important that a research team has plans in place to manage this sort of situation without detriment to the relationship with the participant. There was a clear written distress policy for interviews, and participants were given information about local support in case they wanted this after the interview.

There was a significant risk in our research that patients would become too unwell to participate or would die between interviews. We sought consent from participants to access their medical records and were able to check their health status before contacting them to arrange the next interview, to ensure this was done sensitively. Consent was an ongoing process: it was given in writing prior to the first interview, checked verbally prior to each subsequent interview, and checked again during an interview if a participant became upset or was talking about a particularly sensitive issue. The participant would be reminded that the tape recorder could be switched off at any time and that the interview could be terminated at any time. If upset, the participant would be given time to recover before the researcher asked if it was acceptable to continue with the interview. These procedures were built into the study protocol and the application for ethical approval.

Ethical issues: researcher related

Researchers too can be affected by their role [ 27 ]. Despite good training and support protocols for researchers, qualitative research can be emotionally challenging [ 27 ]. Building a relationship over time, hearing about distressing situations and the impact that a diagnosis can have on everyday life and relationships is hard. Information may be disclosed to the researcher that has not been discussed with anyone else; this builds a bond between those involved. Researchers may see participants deteriorate and die. The research team needs to build a supportive network and procedures to ensure that researchers are well supported in their role. In our study we used debriefing for very stressful events, and researchers had regular supervision with the study team. Peer support within the research team also proved important on a day-to-day basis. It has been suggested that professional counseling be made available for researchers for whom debriefing is not sufficient support [ 27 ].

Staff retention may be an issue over time. There is a tension between the need to build relationships with participants in difficult circumstances and researcher burnout. It is ideal that one researcher builds a relationship with a participant over time, but due to staff turnover or sickness this may not always be possible. Changes in staffing on LQR projects need to be well managed: the participant should be made aware that a different researcher will interview them, and the new researcher should read through previous transcripts so that participants feel there is some continuity and do not have to repeat their story.

“Escaping the field” [ 4 ], or the closure of relationships that have been built over time, requires thought. Participants in our studies were prepared for the longitudinal element and the closure of the relationships. Study information was clear, so participants knew that they were going to be interviewed four times over the year, and researchers prepared participants for the last interview: when ringing to arrange the last interview, participants were reminded that it would be the final visit. At the end of the last interview we asked participants how they had found the process of being involved in research and had an informal “debriefing” session with them. If patients died whilst on the study, a card was sent on behalf of the research team to offer condolences.

It is important to ensure that confidentiality is maintained throughout the project, as personal details, such as addresses, may be kept for longer than in studies with a single data collection point. Any ad hoc correspondence, phone messages or emails, for example from participants updating researchers on their condition, should be handled in line with ethical approval requirements. As data is collected over time and experiences may be bound up in particular circumstances and contexts, ensuring that participants are not identifiable becomes more pertinent. The “blurred boundaries”, for example taking your “emotional work” home with you [ 27 ], may also need special attention in LQR. Wray et al. [ 27 ] report, in their study, taking telephone calls from participants at home and ensuring women got evidence-based care. These are complex, grey areas in LQR, and it may become harder to separate, or manage ethically, empathy as a human being and a wish to help people who are suffering from the role of a researcher when relationships deepen over time. These issues may have implications for the confidentiality of participants’ identities and data.

Data may have to be shared across large teams; this may mean that the core research team loses some control of the data set, and it is important to ensure that all team members are working to the ethical principles agreed with the relevant ethics committee. Large volumes of data may be generated by LQR, and consideration should be given to how these data are archived and stored for the length of time stipulated by the university, hospital or other regulatory body. LQR data is a valuable resource for archiving, data sharing and secondary data analysis, and archiving may be a requirement of some funding bodies. To date this has been more common for large qualitative population data sets and is a specialist service offered by some universities. The correct ethical approval, and participant consent to this, should be sought at the outset.

It is important to consider how researchers will deal with participant fatigue; within quantitative studies much thought is given to the burden of lengthy repeated questionnaires, and the same consideration should be given in LQR, particularly as new topics of interest may emerge during the course of the study and it is tempting simply to add a few more questions to the interview. Focusing on the purpose of the research and finding different ways to ask questions can avoid repetition and prevent participants anticipating questions and giving the “right” response [ 28 ]. It is also wise to involve patients or service users in the design of the research and its ongoing management, to get the participants’ perspective on burden and to balance research interest with participants’ well-being.

Recruitment and retention of participants

We were successful in the recruitment of participants to the study. Patients were identified by the clinical team at the research site and then approached by a member of the research team, who gave them information about the study. Once participants were recruited to the study, retention was satisfactory. Recruitment and retention are important in all longitudinal studies. In qualitative studies, sufficient participants are required at the last time point to ensure data saturation, particularly if any new themes become evident at this point. We also wished to interview carers, and this created a significant number of interviews at follow-up. We eventually made the decision not to interview some carers at follow-up as the data were saturated. This created some difficulty with carer participants who valued this ongoing opportunity to ventilate feelings. The oversampling at the beginning (in order to have an adequate number of subjects at the last interview) was not a successful technique and overstretched the researchers and the data collection process unnecessarily.

There were two groups of patients where attrition was particularly high: lung cancer patients (where 18 were recruited and four finished the study) and brain cancer patients (where 11 started and only one patient completed the fourth interview). For both of these groups there was a significant drop-off after the third time point at six months. These attrition rates were not unexpected, and almost all of these participants withdrew because they were too unwell or had died; this type of attrition may be unavoidable in some patient groups. All breast and gynecology patients completed all four interviews. Hence, a more selective approach to over-recruitment at the beginning of an LQR project is advocated, basing such a decision on the likely outlook of participants over the timeline of the project. In some LQR studies it might be appropriate to develop newsletters or a website with news of the study to sustain participants’ interest. Good researcher communication skills are required to develop trust and convey the importance of the project to participants in the initial stages of the project. We have field notes that suggest that participants found participation in the study beneficial, and this may also have contributed to our successful retention rates in populations with better health and survival.
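The retention figures above lend themselves to a simple worked calculation. The sketch below, in Python, is only an illustration and not part of the original study: the lung and brain cancer counts are taken from the text, while the breast cancer group size is a hypothetical placeholder (the text reports only that all breast and gynecology patients completed all four interviews).

```python
# Minimal sketch: retention per diagnostic group at the final (12-month) interview.
# Lung and brain cancer counts are taken from the text above; the breast cancer
# counts are hypothetical placeholders used only for illustration.

recruitment = {
    "lung":   {"recruited": 18, "completed_final_interview": 4},
    "brain":  {"recruited": 11, "completed_final_interview": 1},
    "breast": {"recruited": 10, "completed_final_interview": 10},  # hypothetical counts
}

for group, counts in recruitment.items():
    retained = counts["completed_final_interview"] / counts["recruited"]
    print(f"{group}: {retained:.0%} retained at the final interview")
```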

The attrition in the sample highlights the complexity of having a heterogeneous sample in longitudinal research. We were well aware at the outset of the different disease trajectories of the tumor groups, but for the purposes of analysis we designed the data collection points to be the same for all patients. In retrospect this was not entirely appropriate, as there were different disease and treatment trajectories within each diagnostic group. In future research we would think differently about the timing of interviews and link it to, for example, critical incidents rather than set time points. Careful thought should be given to the heterogeneity of the sample; by sampling across a number of cancer diagnostic groups we complicated our analysis, making it difficult to draw together the experiences of patients with different disease trajectories. It may have been a better strategy to sample for heterogeneity within, for example, patients with advanced cancer. While heterogeneity in qualitative research is a desirable sampling feature, in LQR it is the “change” in events that is of more importance, and depicting change in very heterogeneous populations may not be so meaningful. Hence, defining clearly what an appropriate sample is for a given LQR study and understanding the trajectory of this sample over time are highly important considerations.

Time

Issues of time and timing are of importance. Longitudinal research often focuses on change: how does coping or experience change, or how do participants manage change over time? [ 1 ]. Quantitative longitudinal research, such as cohort studies, assumes linearity of experiences and that people may experience time in the same way. However, the notion of time in a disease trajectory is complex. The difference between clock time and embodied time (or the experience of time) of the cancer patient has recently been illustrated in lung cancer, and this research highlights the lack of relationship between these two conceptualizations of time [ 29 ]. The differences between research time and biographical time have been explored elsewhere too [ 1 ]. Thus, consideration needs to be given to how time is defined in the study, both by the participants and by the research team.

One of the central issues we faced in this study was the nature of time. As discussed above, we identified set time points for data collection at the outset. However, we discovered that it is important to balance the pragmatics of a research design with flexible notions of time. We had significant attrition after the data collection point at six months, and in retrospect we had not factored in the short disease trajectories of some patients or that some patients may have different notions of time. It may have been more useful to identify potential turning points or defining moments, from initial interviews, previously published research or clinical understanding of the disease, and focus on those rather than identifying set time points. For example, we know that the end of treatment, be that palliative or curative, is a significant time for patients [ 30 , 31 ], but treatment duration may not fall neatly into the first three months after diagnosis. That said, the focus of interviews should be “not with concrete events, practices, relationships and transitions which can be measured in precise ways, but with the agency of individuals in crafting these processes” [ 32 ], p. 192. Nevertheless, defining moments do often lead to change in experience, coping or relationships and are useful points at which to tap into participants’ experiences. On a practical level, however, it would have been very difficult with our large data set to keep track of these critical incidents for every participant and to organize researcher appointments to conduct interviews.

Issues of time need to be explicitly placed within the interview, an aspect we could have strengthened in our study. Looking both forwards and backwards in time moves away from linear notions of time, as discussed above, for example by asking participants to reflect on the content of their previous interviews. One way of doing this may be to encourage participants to approach the interview with reflexivity [ 33 ], a concept we are familiar with as researchers but which in longitudinal research may be just as important for the participant. For example, an issue that seems important to participants in the short term may not prove to be as important in the long term, with the benefit of hindsight or increased understanding of the context [ 34 ]. This tentative or provisional, often contradictory, understanding makes analysis complex. As researchers we must endeavour to understand these complexities and make sense of them.

McLeod [ 33 ] suggests that reflexivity within the interview did not work for all of her research participants (in a study of school children), and this is a point worth pursuing as we further develop our understanding of this methodology with patients. Reflexivity about a health state is complex for patients, and it has been suggested that interviewing the ill may pose particular difficulties for the researcher [ 35 , 36 ]: “[a]s sick people, participants are unfamiliar with their everyday worlds, and they are often incapable of describing their condition and perceptions, so that researchers have difficulty in obtaining data to comprehend, interpret and generally conduct their research. … When researching participants who are sick, these methodological problems result in decisions about the timing of data collection, challenges to validity and reliability, and debates about who should be conducting the research” [ 35 ], p. 538.

Longitudinal qualitative research may go some way towards solving these issues, as researchers have the chance to incorporate changing illness perceptions into data collection and analysis. Patients whose illness has a long-term impact will develop a vocabulary and a way of expressing their illness experience in a way that patients with an acute episode will not. These changing perceptions, often moving from a lay perspective to one of the patient managing and controlling their illness [ 37 ], need to be factored into the analysis.

Data collection and management of resources

One of the main difficulties with LQR is the time and resources that are required to undertake a study. Dealing with a large data set can bring logistical challenges and there is a significant amount of time spent on project management, keeping up to date with participants, sending reminders and checking on a patient’s status. Analysis between interviews, across the participants and longitudinally within the individual narrative, can be a significant challenge in LQR.

There are no guidelines about how long a longitudinal study should be (although at least two time points are necessary to examine change [ 3 ]) or how often data need to be collected; this should be determined by the processes and population under investigation and the research question. Many health or patient-related studies are short in duration, one to two years, in comparison to LQR in the social sciences, where issues such as transitions in identity from child to adult are investigated over decades. This may of course be because of differences in the issues and processes under investigation, but may also reflect research funding in health care, which is often limited to a fixed duration. This poses problems for a research team who wish to follow a population for a number of years and requires ongoing generation of funds to complete the research.

The topic guide and the focus of the interview may change over time; this may prove challenging when seeking ethical approval for a study. Ethics committees usually ask for all documentation, including topic guides, prior to giving an opinion. Our interview schedule had broad questions, both to comply with ethical approval procedures and to allow participants to talk about what was important for them at the time of each interview. Example opening questions include “How have you been feeling physically this past month?” or “How have you been feeling emotionally this past month?”. Developing a relationship with an ethics committee and seeking guidance about how to approach this with the committee is advisable.

LQR is a prospective approach and therefore can give a different perspective on processes. Issues that seem very important at one time point may change with the perspective of time, and processes may change the way experiences are viewed. One-off qualitative interviews rely on recall, for example, asking about the symptom experience at diagnosis when a patient is several months away from that point. There will always be some element of retrospective discussion in an LQR interview, but with a focus on change over time this can be aided by summarizing or reflecting on the previous interview. As data are collected prospectively, causation, the temporality of cause and effect, and the processes or conditions by which this happens can also be explored in the data [ 4 ].

As we describe below, the richness of the interview content and the overwhelming amount of data made it difficult to analyze each interview in depth before the next one, an issue also reported in other studies [ 27 ]. When this is the case we would propose that a preliminary analysis and summary of the interview is made, so that the next interview can commence with a recap of what was previously discussed. Subsequent interviews could start with the interviewer providing a short summary of themes they have identified from the last interview and asking the participant to reflect on this summary of experiences, before moving on to ask how the participant is feeling now and what has changed for them since the last interview. This more selective approach in subsequent interviews may also decrease the amount of data collected, easing the analysis and making the data more focused and less overwhelming for the researcher. Indeed, we noticed that subsequent interviews often tended to be shorter than the initial one. This helps the researcher and participant to keep the focus on longitudinal elements: what has changed since last time, and why has this happened? Preliminary analysis will also highlight emerging themes to be pursued further in later interviews.
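One way to operationalise such a recap, offered purely as an illustrative sketch rather than a description of the study’s actual procedure, is to keep a running log of preliminary theme summaries per participant and time point; the participant ID, time points and theme labels below are hypothetical.

```python
# Minimal sketch: a running log of preliminary themes per participant and time point,
# used to open the next interview with a recap of what was previously discussed.
# All identifiers and theme labels here are hypothetical.

from collections import defaultdict

preliminary_themes = defaultdict(dict)  # participant_id -> {time_point: [themes]}

def record_preliminary_analysis(participant_id, time_point, themes):
    """Store the interviewer's preliminary theme summary for one interview."""
    preliminary_themes[participant_id][time_point] = list(themes)

def recap_for_next_interview(participant_id):
    """Return the most recent summary, to be reflected back to the participant."""
    history = preliminary_themes[participant_id]
    if not history:
        return "No previous interview on record."
    latest = max(history)  # assumes sortable time points, e.g. months since diagnosis
    return f"Last time (month {latest}) we talked about: {', '.join(history[latest])}"

record_preliminary_analysis("P01", 3, ["fatigue", "uncertainty about prognosis"])
print(recap_for_next_interview("P01"))
```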

Using LQR, researchers can respond to a change in focus, and interviews can be adapted to the individual narratives. This is particularly useful as, at the outset, it is often not clear what the important processes over time are. Thus much of the data collected in the initial stages may not be relevant to the processes that emerge over time, and data collection will necessarily become more focused at later time points. Flexibility and responsiveness to the data and to the emerging analysis and interpretation is a key skill for the LQR researcher.

Analyzing data

Longitudinal qualitative data analysis is complex and time consuming. A longitudinal analysis occurs within each case and as comparison between cases. The focus is not on snapshots across time (a cross-sectional design will achieve this) but “to ground the interviews in an exploration of processes and changes which look both backwards and forwards in time” [ 32 ], p. 194.
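A simple way to hold these two readings side by side is a case-by-time matrix, read row-wise for the within-case (longitudinal) narrative and column-wise for the between-case (cross-sectional) comparison. The sketch below is a minimal illustration of that idea, not the study’s actual analysis procedure; the participant IDs, time points and codes are hypothetical.

```python
# Minimal sketch of a case-by-time matrix for LQR analysis: rows are participants
# (within-case, longitudinal reading), columns are time points (cross-sectional reading).
# Participant IDs, time points and codes are hypothetical.

matrix = {
    "P01": {"T1": ["shock", "symptom burden"], "T2": ["adjusting"], "T3": ["normality"]},
    "P02": {"T1": ["shock"], "T2": ["treatment side effects"], "T3": ["ongoing fatigue"]},
}
time_points = ["T1", "T2", "T3"]

# Longitudinal (within-case): trace one participant's codes across time.
for participant, by_time in matrix.items():
    trajectory = " -> ".join(", ".join(by_time.get(t, ["no data"])) for t in time_points)
    print(f"{participant}: {trajectory}")

# Cross-sectional (between-case): compare all participants at one time point.
for t in time_points:
    codes_at_t = {p: by_time.get(t, []) for p, by_time in matrix.items()}
    print(f"{t}: {codes_at_t}")
```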

Holland [ 4 ] synthesizes two approaches to analyzing data and suggests some questions to guide analysis. Firstly, framing questions focus on the contexts and conditions that influence changes over time; she gives the example, “what contextual and intervening conditions appear to influence and affect participant changes over time?” [ 4 ]. Descriptive questions generate descriptive information about what kinds of changes occur, for example, “what increases or emerges through time?” [ 4 ]. These two types of questions move the researcher forward to develop deeper levels of analysis and interpretation.

Data collection and analysis should be informed by the research question, data collection methods and theoretical perspective, if one is being used from the outset. It may be possible to anticipate whether cross-sectional or longitudinal analysis would be the most helpful method of answering the research question. Considering these issues at the outset may allow the researcher to be alert to themes in the data during analysis whilst keeping an open mind to emerging issues.

As described above, we planned to analyze each interview before moving on to the next interview with each participant, to allow reflexivity in the researcher and participant and to focus on “processes and changes” rather than snapshots. Due to the volume of data this was not always possible; this is certainly a limitation of our work and may explain the predominance of cross-sectional data in our reporting of the studies.

We decided to analyze each tumor group separately rather than across the whole sample, as it was clear that there were significant differences in these populations due to different disease trajectories and symptom experiences. A different analysis and theoretical perspective was taken in each analysis, reflecting the data from each tumor group. McLeod [ 33 ] suggests that the nature of longitudinal data means that multiple theoretical frameworks may be useful for analysis and interpretation, and the use of different paradigms may lead to new insights and interpretations.

Interpretative Phenomenological Analysis was used for the lung cancer analysis [ 21 ]; Interpretative Description was used with the lymphoma data [ 20 ]; content or thematic analysis using Leventhal’s self-regulation theory, the theoretical framework for the study, was used for the gynecological, brain, and head and neck cancer data [ 18 , 22 , 23 ]; and thematic narrative analysis was used for breast cancer patients. This approach took into consideration the data analysis experience of the researchers involved and the type of information collected through the interviews. For example, the analysis of breast cancer patients’ accounts [ 25 ] lent itself to narrative analysis because the women expressed their feelings much more than other groups, and we analysed the data through patient stories about their cancer journey; this fitted well with the approach to data generation, and Frank’s [ 38 ] concept of the cancer journey was used as the theoretical lens through which data were analyzed. In data from other diagnostic groups the unit of analysis was often the whole interview, as in the case of patients with head and neck cancer, where coding units in the first interview were assessed for presence and information in subsequent interviews. This captured well some experiences over time, such as the continuous nature of fatigue and tiredness, or the attempts at maintaining normality, which were evident only after T2, increasing in complexity at T3 and T4 [ 22 ]. Detailed practical examples are presented in the respective papers [ 18 - 25 ], and a summary of the themes alongside other qualitative research related to the symptom experience of cancer patients is presented in a meta-synthesis of these data [ 39 ].

Our analyses have highlighted new insights into the symptom experiences of patients with cancer. Utilizing multiple analysis strategies and theoretical perspectives has its strengths and allows comparison and gives direction for reanalysis and further interpretation of this important research resource.

Recommendations

Through reflecting on and describing our experiences we have identified broad recommendations for undertaking LQR projects in health research which we hope will stimulate debate amongst qualitative researchers.

• We would recommend incorporating a theoretical perspective (if appropriate to the methodology), that encompasses concepts such as time or the experience of change. This may help researchers keep the analysis “alive” to longitudinal aspects of analysis and move beyond descriptions of experience at each time point to explore change between time points.

• Qualitative researchers are familiar with complex ethical issues involved in being in the field. However, there are some ethical issues that are amplified whilst undertaking LQR, and require careful consideration and planning, such as how relationships are built and sustained over time whilst adhering to ethical practices, how relationships are ended, maintaining confidentiality over time and managing distress in participant and researchers.

• Good project management is essential when working with large data sets. Ensure adequate time is included in project plans for project management and communication with participants.

• Developing good team working is important; there are advantages to working with large teams which may be an unfamiliar way of working for qualitative researchers. Different perspectives can be brought to bear on the analysis making it richer and generating new insights. Communication is particularly important when analysis is undertaken by researchers who have not been involved in collecting data.

• We would encourage researchers to consider multiple methods of analysis and secondary analysis within the same data set to explore the rich data that is generated.

• We have clearly identified that longitudinal research with patients with a poor prognosis and experiencing long term challenges is worthwhile. However, thought needs to be given to the timing of data collection and the heterogeneity of the sample. Support for participants and researchers, and any additional ethical considerations, should be built into protocols as there is an increased burden for all involved in LQR.

• We recommend that from the outset the research team should consider how the volume of data can be managed and consider practical issues such as timing of interviews so data can be transcribed and analyzed in time for the next round of interviews. This early analysis may help keep the focus on change and transitions rather than description of events.

• Funders of research may be unfamiliar with funding longitudinal qualitative research; we recommend that a strong case for the added value of this method be made.

This paper has explored our experience of LQR and highlighted areas where we have learned a great deal about the methodology. During this longitudinal project we developed expertise in managing practical and ethical issues and tried different analysis strategies to look for alternative ways of examining data and understanding the experience of participants. There have been successes in the strategies we have used, and areas where, in retrospect, we could have worked differently. For example, ensuring sensitivity during initial recruitment and subsequent contacts, and putting procedures in place from the outset of the study to manage issues such as patient distress during interviews and patient-initiated contact regarding health issues during data collection, all helped the researchers to build trusting relationships with participants. These factors, together with researcher continuity, were important in helping to maintain good retention rates for participants with better health and survival throughout the study.

It is important to note that the findings were generated from one particular study, and the issues highlighted here reflect the conduct of that study. There are other methodological issues that may be illustrated better through other examples of LQR research, and we would encourage researchers to publish methodological issues highlighted by their studies to strengthen debate in this area. Although we consider that there are general lessons to be learned from our experience, which can be usefully considered by other researchers, we acknowledge that there may be aspects of the study, particularly the health status of the participants, that will not necessarily be broadly relevant. For this reason we consider that this paper will have particular relevance for researchers interested in chronic and life limiting conditions.

We found that when seeking guidance for the project the published literature was limited, focusing on the reporting of findings rather than developing debate about this emerging methodology. Much of the methodological literature cited in this paper comes from the social sciences, where there is a long-standing tradition of LQR and where debates about LQR with schoolchildren or other healthy populations are well rehearsed. There is little literature that examines the methodology in the context of health services research, or that considers whether there are particular issues in following participants through the trajectory of their illness to recovery, living with impairments, or death. This paper has started to highlight some of the areas where further methodological exploration would be valuable.

One of the ongoing debates in qualitative methodology is how quality and credibility are evaluated [ 40 , 41 ]. There is little debate about whether LQR poses additional questions about quality. We have highlighted areas where, for example, there may be heightened concerns about ethical conduct and about using multiple methods of analysis. Longitudinal analysis is complex and is often reported atheoretically and descriptively [ 13 - 15 ], and this also has implications for the quality and credibility of LQR. It may be that established guidance for the evaluation of qualitative research can be utilised with LQR, but little exploration of this can be found in the published literature. Summaries of the researcher’s interpretation of data collected in a previous interview, when discussed with participants at a subsequent interview, can enhance the credibility of the data. We have highlighted some ways in which these aspects of LQR can be enhanced, and by providing a record of our experiences we hope to help start standardising a process by which LQR can be conducted, which can enhance the credibility of research and the quality of data collected.

LQR is an increasingly utilised methodology in health services research, for example in the development and evaluation of complex health interventions or to study transitions in recovery or long term illness. The findings presented in this paper are important as they begin to identify areas of LQR where there is potential for debate and multiple perspectives on these would be valuable.

Additional research and inquiry is also essential to further develop the methodology. There is little published work about rigour in LQR, and it would be worth investigating whether additional elements should be added to accepted conceptualizations of the quality of qualitative research so judgments can be made about the rigour of research. Research to explore participants’ perspectives of being in a longitudinal study would be valuable as there may be additional burden to the participant, emotional and practical, of being involved in LQR. Eliciting participants’ insights into their experiences of participation may give us greater insight into the method itself.

This paper has highlighted specific methodological, practical and ethical issues identified in an LQR programme of research about the experiences of symptoms in cancer patients in the first year after diagnosis. The study itself has provided useful insights into these experiences and allowed examination of the data from multiple perspectives, but importantly it has also been a valuable learning opportunity for the research team. Next steps may include agreement among the qualitative research community about standardization of the process, identification of LQR research questions that are distinct from what can be achieved with cross-sectional work, and making the case to funders for the value and uniqueness of this methodological approach.

Competing interests

The authors declared no conflicts of interest with respect to the authorship and/or publication of this article.

Authors' contributions

Conception of paper: AM, LC. Acquisition of original data: AM, LB. Interpretation of data: All authors. Drafting paper: LC. Critical revisions: AM, LB. Final approval: all authors.

Pre-publication history

The pre-publication history for this paper can be accessed here:

http://www.biomedcentral.com/1471-2288/13/14/prepub

References

1. Thomson R, Plumridge L, Holland J. Editorial. Int J Soc Res Methodol. 2003;6(3):185–187. doi: 10.1080/1364557032000091789.
2. Maher J, McConnell H. New pathways of care for cancer survivors: adding the numbers. Br J Cancer. 2011;105(S1):S5–S10.
3. Saldana J. Longitudinal Qualitative Research: Analyzing Change Through Time. Walnut Creek, CA: AltaMira Press; 2003.
4. Holland J. Qualitative Longitudinal Research: Exploring ways of researching lives through time. Real Life Methods Node of the ESRC National Centre for Research Methods Workshop held at London South Bank University 2007. Retrieved from http://www.reallifemethods.ac.uk/training/workshops/qual-long/documents/ql-workshop-holland.pdf .
5. Holland J, Thomson R, Henderson S. Qualitative longitudinal research: A discussion paper, Working Paper No. 21, Families & Social Capital ESRC Research Group. London South Bank University; 2006. http://www.lsbu.ac.uk/ahs/downloads/families/familieswp21.pdf .
6. Oakley A, Strange V, Bonell C, Allen E, Stephenson J, and the RIPPLE Study Team. Process Evaluation in Randomised Controlled Trials of Complex Interventions. Br Med J. 2006;332:413–416. doi: 10.1136/bmj.332.7538.413.
7. Yedidia MJ, MacGregor B. Confronting the Prospect of Dying: Reports of Terminally Ill Patients. J Pain Symptom Manage. 2001;22(4):807–819. doi: 10.1016/S0885-3924(01)00325-6.
8. Kendall M, Murray SA. Tales of the Unexpected: Patients Poetic Accounts of the Journey to a Diagnosis of Lung Cancer: A Prospective Serial Qualitative Interview Study. Qualit Inq. 2005;11(5):733–751. doi: 10.1177/1077800405276819.
9. Axelsson L, Randers I, Jacobson SH, Klang B. Living with haemodialysis when nearing end of life. Scand J Caring Sci. 2012;26:45–52. doi: 10.1111/j.1471-6712.2011.00902.x.
10. Murray SA, Kendal M, Carduff E, Worth A, Harris F, Lloyd A, Cavers D, Grant L, Sheikh A. Use of serial qualitative interviews to understand patients evolving experiences and needs. Br Med J. 2009;339:b3702. doi: 10.1136/bmj.b3702.
11. Murray SA, Marilyn K, Boyd K, Grant L, Gill H, Sheikh A. Archetypal trajectories of social, psychological, and spiritual wellbeing and distress in family care givers of patients with lung cancer: secondary analysis of serial qualitative interviews. Br Med J. 2010;340:c2581. doi: 10.1136/bmj.c2581.
12. Krishnasamy M, Wells M, Wilkie E. Patients and carer experiences of care provision after a diagnosis of lung cancer in Scotland. Support Care Cancer. 2007;15(3):327–332. doi: 10.1007/s00520-006-0129-3.
13. Taylor C, Richardson A, Cowley S. Surviving cancer treatment: An investigation of the experience of fear about, and monitoring for, recurrence in patients following treatment for colorectal cancer. Eur J Oncol Nurs. 2011;15(3):243–249.
14. Kennedy F, Harcourt D, Rumsey N. The shifting nature of women’s experiences and perceptions of ductal carcinoma in situ. J Adv Nurs. 2012;68:856–867. doi: 10.1111/j.1365-2648.2011.05788.x.
15. McCaughan E, Prue G, Parahoo K, McIlfatrick S, McKenna H. Exploring and comparing the experience and coping behaviour of men and women with colorectal cancer after chemotherapy treatment: a qualitative longitudinal study. Psycho-Oncol. 2012;21:64–71. doi: 10.1002/pon.1871.
16. McCann L, Illingworth N, Wengstrom Y, Hubbard G, Kearney N. Transitional experiences of women with breast cancer within the first year following diagnosis. J Clin Nurs. 2010;19(13–14):1969–1976.
17. Murray SA, Kendall M, Grant E, Boyd K, Barclay S, Sheikh A. Patterns of Social, Psychological, and Spiritual Decline Toward the End of Life in Lung Cancer and Heart Failure. J Pain Symptom Manage. 2007;34(4):393–402. doi: 10.1016/j.jpainsymman.2006.12.009.
18. Lopez V, Copp G, Brunton L, Molassiotis A. Symptom Experience in Patients with Gynecological Cancers: The Development of Symptom Clusters through Patient Narratives. J Support Oncol. 2011;9:64–71. doi: 10.1016/j.suponc.2011.01.005.
19. Lopez V, Copp G, Molassiotis A. Male Caregivers of Patients with Breast and Gynecologic Cancer: Experiences From Caring for Their Spouses and Partners. Cancer Nurs. 2012;35:402–410. doi: 10.1097/NCC.0b013e318231daf0.
20. Johansson E, Wilson B, Brunton L, Tishelman C, Molassiotis A. Symptoms Before, During, and 14 Months After the Beginning of Treatment as Perceived by Patients With Lymphoma. Oncol Nurs Forum. 2010;37:E105–E113. doi: 10.1188/10.ONF.E105-E113.
21. Lowe M, Molassiotis A. A longitudinal qualitative analysis of the factors that influence patient distress within the lung cancer population. Lung Cancer. 2011;74:344–348. doi: 10.1016/j.lungcan.2011.03.011.
22. Molassiotis A, Rogers M. Symptom experience and regaining normality in the first year following a diagnosis of head and neck cancer: a qualitative longitudinal study. Palliat Support Care. 2012;10:197–204. doi: 10.1017/S147895151200020X.
23. Molassiotis A, Wilson B, Brunton L, Chaudhary H, Gattamaneni R, McBain C. Symptom experience in patients with primary brain tumours: A longitudinal exploratory study. Eur J Oncol Nurs. 2010;14:410–416. doi: 10.1016/j.ejon.2010.03.001.
24. Stamataki Z, Burden S, Molassiotis A. Weight Changes in Oncology Patients During the First Year After Diagnosis: A Qualitative Investigation of the Patients’ Experiences. Cancer Nurs. 2011;34:401–409. doi: 10.1097/NCC.0b013e318208f2ca.
25. Tighe M, Molassiotis A, Morris J, Richardson J. Coping, meaning and symptom experience: A narrative approach to the overwhelming impacts of breast cancer in the first year following diagnosis. Eur J Oncol Nurs. 2011;15:226–232. doi: 10.1016/j.ejon.2011.03.004.
26. Terry W, Olson L, Ravenscroft P, Wilss L, Boulton-Lewis G. Hospice patients views on research in palliative care. Int Med J. 2006;36(7):406–413. doi: 10.1111/j.1445-5994.2006.01078.x.
27. Wray N, Markovic M, Manderson L. Researcher Saturation: The Impact of Data Triangulation and Intensive-Research Practices on the Researcher and Qualitative Research Process. Qual Health Res. 2007;17(10):1392–1402. doi: 10.1177/1049732307308308.
28. Farrall S. What is qualitative longitudinal research? Papers in Social Research Methods Qualitative Series, Paper 11. LSE Methodology Institute. 2006. http://www2.lse.ac.uk/methodologyInstitute/pdf/QualPapers/Stephen-Farrall-Qual%20Longitudinal%20Res.pdf .
29. Lövgren M, Hamberg K, Tishelman C. Clock time and embodied time experienced by patients with inoperable lung cancer. Cancer Nurs. 2010;33(1):55–63. doi: 10.1097/NCC.0b013e3181b382ae.
30. Armes J, Crowe M, Colbourne L, Morgan H, Murrells T, Oakley C, Palmer N, Young A, Richardson A. Patients’ Supportive Care Needs Beyond the End of Cancer Treatment: A Prospective, Longitudinal Survey. J Clin Oncol. 2009;27:6172–6179. doi: 10.1200/JCO.2009.22.5151.
31. Ganz PA, Kwan L, Stanton AL, Krupnick JL, Rowland JH, Meyerowitz BE, Bower JE, Belin TR. Quality of Life at the End of Primary Treatment of Breast Cancer: First Results From the Moving Beyond Cancer Randomized Trial. J Nat Cancer Inst. 2004;96(5):376–387. doi: 10.1093/jnci/djh060.
32. Neale B, Flowerdew J. Time, texture and childhood: The contours of longitudinal qualitative research. Int J Soc Res Methodol. 2003;6(3):189–199. doi: 10.1080/1364557032000091798.
33. Mcleod J. Why we interview now–reflexivity and perspective in a longitudinal study. Int J Soc Res Methodol. 2003;6(3):201–211. doi: 10.1080/1364557032000091806.
34. Plumridge L, Thomson R. Longitudinal qualitative studies and the reflexive self. Int J Soc Res Methodol. 2003;6(3):213–222. doi: 10.1080/1364557032000091815.
  • Morse J. Researching Illness and Injury: Methodological Considerations. Qual Health Res. 2000; 10 (4):538–546. doi: 10.1177/104973200129118624. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Morse J. In: Handbook of Qualitative Interviewing. Gubrium J, Holstein J, editor. Thousand Oaks: Sage; 2002. Interviewing the Ill; pp. 317–328. [ Google Scholar ]
  • Calman L. Patients’ views of nurses’ competence. Nurse Educ Today. 2006; 26 :719–725. doi: 10.1016/j.nedt.2006.07.016. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Frank A. The Wounded Storyteller: Body, Illness and Ethics. Chicago: University of Chicago Press; 1995. [ Google Scholar ]
  • Bennion AE, Molassiotis A. Qualitative research into the symptom experiences of adult cancer patients after treatments: a systematic review and meta-synthesis. Support Care Cancer. 2013; 21 :9–25. doi: 10.1007/s00520-012-1573-x. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Cohen D, Crabtree B. Evaluative Criteria for Qualitative Research in Health Care: Controversies and Recommendations. Ann Fam Med. 2008; 6 (4):331–339. doi: 10.1370/afm.818. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Devers KJ. How will we know “good” qualitative research when we see it? Beginning the dialogue in health services research. Health Serv Res. 1999; 34 (5 Pt 2):1153–88. [ PMC free article ] [ PubMed ] [ Google Scholar ]
