“Sometimes it’s Like Putting the Track in Front of the Rushing Train”: Having to Be ‘On Call’ for Work Limits the Temporal Flexibility of Crowdworkers

This is a pre-print HTML author version of the paper. A PDF version is available. It is also available in the ACM Digital Library. Please cite the work as:

Laura Lascău, Duncan P. Brumby, Sandy J.J. Gould, and Anna L. Cox. 2023. “Sometimes it’s Like Putting the Track in Front of the Rushing Train”: Having to Be ‘On Call’ for Work Limits the Temporal Flexibility of Crowdworkers. ACM Trans. Comput.-Hum. Interact. Just Accepted (December 2023). https://doi.org/10.1145/3635145

Abstract

Research suggests that the temporal flexibility advertised to crowdworkers by crowdsourcing platforms is limited by both client-imposed constraints (e.g., strict completion times) and crowdworkers’ tooling practices (e.g., multitasking). In this paper, we explore an additional contributor to workers’ limited temporal flexibility: the design of crowdsourcing platforms, namely requiring crowdworkers to be ‘on call’ for work. We conducted two studies to investigate the impact of having to be ‘on call’ on workers’ schedule control and job control. We find that being ‘on call’ impacted: (1) participants’ ability to schedule their time and stick to planned work hours, and (2) the pace at which participants worked and took breaks. The results of the two studies suggest that the ‘on-demand’ nature of crowdsourcing platforms can limit workers’ temporal flexibility by reducing schedule control and job control. We conclude the paper by discussing the implications of the results for: (a) crowdworkers, (b) crowdsourcing platforms, and (c) the wider platform economy.

Introduction

Research suggests that the temporal flexibility of people working on crowdsourcing platforms is limited by both client-imposed constraints (e.g., strict completion times) and workers’ tooling practices (e.g., multitasking). As part of the “just-in-time” platform economy (De Stefano 2015), on-demand crowdsourcing platforms are advertised to crowdworkers as offering them temporal flexibility: flexibility in terms of when and for how long they choose to work (Horton 2010; Sundararajan 2016). However, research suggests that crowdworkers do not benefit from the temporal flexibility advertised by crowdsourcing platforms (Lehdonvirta 2018; Lascau et al. 2022). For example, each client (i.e., a private company or individual consumer) (Howcroft and Bergvall-Kåreborn 2019) sets their own completion times for jobs, which workers must adhere to (Lascau et al. 2019). In addition, since clients post jobs on-demand, workers have to wait for work and make themselves available to ‘catch’ new jobs whenever clients make them available. Furthermore, research suggests that the external tools workers use to ‘catch’ new jobs promote the temporal fragmentation of workers’ (a) work-life boundaries (i.e., of workers’ schedules), and (b) work practices (i.e., by increasing workers’ need to spend their time multitasking) (Williams et al. 2019). Thus, the tools workers use to ‘catch’ jobs also limit workers’ temporal flexibility.

In this paper, we explore an additional contributor to crowdworkers’ limited temporal flexibility: the design of crowdsourcing platforms, namely requiring crowdworkers to be ‘on call’. Being ‘on call’ is a recognised (Gupta et al. 2014; Lehdonvirta 2018; Toxtli, Suri, and Savage 2021) but so far undefined, and increasingly popular, working time arrangement within the platform economy, central to the operation of geographically-tethered platform work (e.g., ride-hailing services) (Woodcock and Graham 2019) and online crowdsourcing platforms alike. In the context of crowdsourcing platforms, we define being ‘on call’ as a working time arrangement that requires crowdworkers to wait and search for jobs for an undetermined amount of time, often without getting paid, because of the platforms’ lack of predictable work availability and lack of work assignment. At least seven platforms are known to have adopted this working time arrangement (i.e., Amazon Mechanical Turk, Clickworker, Hive Micro, Microworkers, Neevo, PicoWorkers, and UHRS) (Lascau et al. 2022), and more can be expected over time given the growing demand for ‘flexible’ working arrangements (Mas and Pallais 2020). However, the lack of predictable work availability and lack of work assignment result in competition between crowdworkers (Lehdonvirta 2018; Lascau et al. 2019), who have to accept jobs before other workers on a first-come, first-served basis, in an auction-like system (Dubal 2020). We argue in this paper that being ‘on call’ can further limit workers’ temporal flexibility and exacerbate precarious working conditions. Thus, revealing the platform architectures that make the exploitation of workers possible is pivotal to changing the flexibility discourse of the platform economy (e.g., individual freedom and flexibility (Anwar and Graham 2021a)), and to demanding decent work standards (Graham et al. 2020) for the workers (e.g., realised temporal flexibility (Berg et al. 2018) and fair pay (Whiting, Hugh, and Bernstein 2019)).

We conducted two studies to investigate the impact of having to be ‘on call’ on workers’ schedule control (Kelly and Moen 2007) and job control (Wheatley 2017). In the two studies, we asked the following research questions (RQ):

  • RQ Study 1: How does having to be ‘on call’ for work limit crowdworkers’ control over scheduling their time?

  • RQ Study 2: How does having to be ‘on call’ for work limit crowdworkers’ control over their work pace?

To address these questions, Study 1 is a time-use-diary study that investigates how having to be ‘on call’ can limit U.S.-based workers’ control over how they schedule their time. Study 2 is a video analysis study of twelve 90-minute working sessions that investigates how having to be ‘on call’ can limit workers’ control over the pace at which they conduct their work. Workers’ time scheduling and work pace are of particular interest in this paper as they are predictors of schedule control (Kelly and Moen 2007) and job control (Ganster 1989), which are both, in turn, predictors of wellbeing (Kelly and Moen 2007; Ganster 1989).

We find that having to be ‘on call’ impacted: (1) participants’ ability to schedule their time and stick to planned hours of work, and (2) the pace at which participants worked and took breaks. The results of Study 1 show that participants’ ability to schedule their time and stick to planned hours of work was particularly impacted by the lack of predictable work availability on the crowdsourcing platform. While participants started and finished work roughly when they intended to, participants worked on average two hours less than planned and spent on average 22% of their daily working time on unpaid ‘on-call’ activities such as waiting and searching for new jobs. In addition, the data suggest that participants’ workdays were significantly more fragmented (i.e., broken into more work sessions) than workers planned, with work distributed across twice as many periods of work as desired. This study suggests that a lack of predictable work availability can reduce workers’ schedule control by impacting their ability to schedule their time and stick to planned work hours.

The results of Study 2 also show that the pace at which participants worked and took breaks was particularly impacted by the lack of work assignment on the crowdsourcing platform. We observed that during the twelve 90-minute working sessions recorded, participants spent on average 17% of their working time on unpaid ‘on-call’ activities such as waiting and searching for new jobs. In addition, working on the platform was characterised by three distinct periods of work intensity: periods of low, moderate, and high work intensity. For example, during periods of low work intensity, participants worked at a slower pace, filling their unpaid time with activities resembling break-taking due to a lack of available work (i.e., browsing the internet or watching TV shows), whilst monitoring the platform for new jobs. In contrast, participants worked at a higher pace during periods of high work intensity, engaging in task switching to quickly ‘catch’ new work, but not taking any breaks. This study suggests that a lack of work assignment can reduce workers’ job control by impacting the pace at which they work and the frequency with which they take breaks.

Taken together, the results of the two studies suggest that the ‘on-demand’ nature of crowdsourcing platforms, which requires workers to be ‘on call’ for work and accept jobs before other workers, can limit U.S.-based workers’ temporal flexibility, reducing their schedule control and job control. Reduced schedule control and job control are known to have negative impacts on health and wellbeing (Kossek, Lautsch, and Eaton 2006). We conclude the paper by discussing the implications of the results for: (a) the people working on crowdsourcing platforms, (b) the design of crowdsourcing platforms, and (c) the wider platform economy. Overall, this paper makes three main contributions that extend the existing HCI and CSCW research examining the working conditions of crowdworkers (Harmon and Silberman 2019; Gray and Suri 2019; Fredman et al. 2020):

  1. A definition of what it means to be ‘on call’ for work on some existing crowdsourcing platforms.

  2. A measure to quantify the amount of unpaid time that crowdworkers have to spend being ‘on call’ for work.

  3. Empirical evidence that being ‘on call’ for work impacts workers’ control over their daily schedule planning (Study 1) and work pace (Study 2).

Background

The Promise of Temporal Flexibility

The platform economy is seen (though not without dispute) as a flexible alternative to traditional job opportunities. For instance, a report published in 2016 by the World Bank describes working within the platform economy as a flexible earning opportunity, where people can work on online platforms from home and set their own schedules (World Bank 2016). However, the narrative of ‘flexibility’ promoted by institutions such as the World Bank is a source of contention among scholars, who argue that platform work can perpetuate precarious working conditions for the workers on the pretext of flexibility (Anwar and Graham 2021a).

Nevertheless, more people than ever have to work within the platform economy as a result of the COVID-19 pandemic (Howson et al. 2022). Working within the platform economy offers a way to earn additional income, which has been important during the COVID-19 pandemic lockdowns and in helping people cope with the cost of living crisis brought on by high inflation in the early 2020s (statista 2023). Besides the financial incentives, one of the main selling points promoted by platform economy advocates is workers’ flexibility. More precisely, platform work is advertised as offering workers temporal flexibility. For example, Uber drivers reportedly value the ride-hailing service’s flexible work schedules and the ability to adapt, hour by hour, to the demands on their time (M. K. Chen et al. 2019). Another example of temporally-flexible platform work can be found on crowdsourcing platforms. Crowdsourcing platforms are touted as offering the people working on these platforms temporal flexibility because of the short duration of the work (Wood et al. 2019), known as ‘crowdwork’, ‘microwork’ or ‘cloudwork’. Examples of work available on these platforms include data entry jobs such as receipt transcriptions, image or video tagging, data set cleaning, or survey completion (D. E. Difallah et al. 2015).

Academic and industry researchers alike widely use crowdsourcing platforms in their work. For example, within CSCW and HCI, researchers have used crowdsourcing platforms to study privacy in online social networks (Mendel and Toch 2017) or to understand online news behaviours (Bentley et al. 2019). Within industry, Artificial Intelligence (AI) researchers extensively use crowdsourcing platforms to train machine learning (ML) algorithms. Large tech companies such as Amazon, Facebook or Google, as well as AI start-ups, temporarily hire people from crowdsourcing platforms to build and label large training data sets for ML applications, such as product recommendations, image or speech recognition, or traffic prediction (Murgia 2019). Amid an expansion in the use of AI, the use of crowdsourcing platforms is expected to grow globally at an annual rate of 26% (Savage and Jarrahi 2020; Kässi and Lehdonvirta 2018). However, crowdsourcing platforms are emblematic of precarious work, being characterised by short-term work opportunities that do not have a fixed place of work (Webster 2016).

Far from being a pastime that people engage in within their spare time, crowdsourcing platforms have become a primary source of work for many who require, or in some cases desire, the flexibility promoted by on-demand crowdsourcing platforms (De Stefano 2015). Crowdsourcing platforms grew in popularity in the mid-2000s among people who needed an income while working at home or on the move (Ipeirotis 2010). Years later, crowdsourcing platforms have been criticised for their low wages (Hara et al. 2018), lack of transparency between workers and clients (Irani and Silberman 2013), and enablement of a ‘work-anywhere’ attitude that has the potential to fragment workers’ work-life boundaries (Williams et al. 2019). The ‘work-anywhere’ element of crowdsourcing platforms, also observed in the case of digital nomads (Cook 2020), is considered to offer people a great amount of flexibility (Kuek et al. 2015).

Those who work flexibly on crowdsourcing platforms do so for personal and financial reasons (Yin, Suri, and Gray 2018). In some cases, workers’ circumstances mean that working outside of the traditional workforce becomes one of the only options for employment. For example, some workers lack the physical mobility to search for work outside of their homes (e.g., are not able to travel to a workplace with a physical location) (Zyskowski et al. 2015); other workers are not available to work within rigid industrial hours (e.g., are not able to work the hours set by traditional jobs) (Flores-Saviaga et al. 2020); finally, other workers are housed in prisons (Hao 2019) or refugee camps (Giles 2009). For such people, who are unable to find work in formal labour markets, crowdsourcing platforms can become one of the few available options for paid work.

Previous work suggests that although people who work on crowdsourcing platforms value the temporal flexibility advertised by the platforms, they do not benefit from it. Prior research shows that crowdworkers value having autonomy over scheduling their own time (Deng, Joshi, and Galliers 2016). However, since workers have no protection from periods of low demand for work (Felstiner 2011), they are not able to complete jobs when they prefer. Therefore, crowdworkers can end up having limited temporal flexibility (Lehdonvirta 2018).

In this paper, we argue that having to be ‘on call’ for work on crowdsourcing platforms is a contributor to workers’ limited temporal flexibility. Research examining the temporal flexibility of workers has shown that it is limited by both client-imposed constraints (Lascau et al. 2019) and workers’ tooling practices (Williams et al. 2019). On the one hand, even though some workers might be attracted to the temporal flexibility that crowdsourcing platforms can offer (Yin, Suri, and Gray 2018), there is a lack of worker-controlled temporal flexibility on these platforms (Lehdonvirta 2018). Instead, it is clients who have flexibility: they control the ‘when’ and ‘for how long’ aspects of jobs. For example, as soon as a client posts a set of surveys to be completed by crowdworkers, workers have to be ready to compete against other workers to ‘catch’ it (Lascau et al. 2019). Workers can ‘catch’ new jobs manually, or use tools to ‘catch’ jobs on their behalf and notify them of new work (Kaplan et al. 2018). Therefore, the temporal flexibility advertised by crowdsourcing platforms is limited by client-imposed constraints.

On the other hand, the tools workers use to ‘catch’ work promote temporal fragmentation of workers’ work-life boundaries and work practices (Williams et al. 2019). Firstly, the tools promote temporal fragmentation of workers’ work-life boundaries by increasing workers’ on-demand availability during non-work time. For example, job-catching tools notify workers of new work at unpredictable times of the day, sometimes outside their ‘working hours’ (Williams et al. 2019). Secondly, the tools promote temporal fragmentation of workers’ work practices by enabling task switching and multitasking behaviour. For example, workers need to frequently monitor the tools for new work even when trying to complete jobs (Williams et al. 2019). Therefore, the temporal flexibility advertised by crowdsourcing platforms is limited by workers’ tooling practices. We next explore what we know about being ‘on call’ for work.

Having to Be ‘On Call’ for Work

Within the platform economy, workers have to be ‘on call’ for work (Dokko, Mumford, and Schanzenbach 2015). Formally, on-call work is characterised by the International Labour Organization (ILO) as a working time arrangement that involves variable and unpredictable hours of work (from zero hours to full-time work) (Organization 2016). On-call arrangements emerged in the past decade in industrialised economies as a way of scaling staffing in a short amount of time, in response to changing business needs. Such working time arrangements are commonly found within the platform economy (Organization 2016). We build on the ILO’s description of on-call work to define being ‘on call’ in the context of crowdsourcing platforms.

In the context of crowdsourcing platforms, we define being ‘on call’ as a working time arrangement that requires crowdworkers to wait and search for jobs for an undetermined amount of time, often without getting paid, because of the platforms’ lack of predictable work availability and lack of work assignment. We next describe the lack of predictable work availability and lack of work assignment found on some existing crowdsourcing platforms.

In this paper, we argue that the problem of being ‘on call’ for work on crowdsourcing platforms is twofold: (1) there is a lack of predictable work availability, and (2) there is a lack of work assignment. Firstly, the lack of predictable work availability is mainly due to an oversupply of labour on crowdsourcing platforms, which makes jobs scarce (Graham, Hjorth, and Lehdonvirta 2017). The oversupply of labour generates a labour force that competes for the better-paid jobs (Anwar and Graham 2021a). Within the wider platform economy, labour platforms also have an oversupply of workers (Graham, Hjorth, and Lehdonvirta 2017), which makes the workers a ‘disposable labour force’ that platforms can quickly replace (Moore 2017). Further, the oversupply of labour means that there are more crowdworkers completing jobs on these platforms than available jobs, which results in competition between the workers. The lack of predictable work availability results in workers not knowing when clients are going to post jobs on the platform. Further, workers do not know for how long work will be available on the platform (Lehdonvirta 2018). We describe the impact that not knowing when and for how long clients will post jobs has on the workers in the forthcoming sections (Section 2.2.1 and Section 2.2.2).

Secondly, the lack of work assignment is due to the jobs being made available to most of the workers online, rather than workers being matched by the platform with suitable jobs. While some on-call work within parts of the platform economy is assigned to workers algorithmically (e.g., Uber) (Lee et al. 2015), workers on crowdsourcing platforms such as Amazon Mechanical Turk, Clickworker, Hive Micro, Microworkers, Neevo, PicoWorkers, or Microsoft’s UHRS (Universal Human Relevance System) are not assigned work (Lascau et al. 2022). Instead, crowdworkers have to accept work from the pool of jobs available (Kittur et al. 2013). In the case of crowdsourcing platforms, clients recruit workers on an as-needed basis to work on jobs. Recruiting participants quickly, also known as lowering crowd recruitment latency (i.e., the time until a worker accepts a job that was just posted (Haas et al. 2015)), has been of interest to researchers wanting to optimise on-demand real-time crowdsourcing (Gao and Parameswaran 2014). Workers can be recruited either using on-demand recruiting (i.e., workers are recruited by clients when needed) or using retainers (i.e., workers are added to a waiting pool and are assigned jobs when needed) (Huang and Bigham 2017). For example, on-demand recruiting of crowdworkers has been used to pre-recruit workers with a latency of two minutes (Bigham et al. 2010). In contrast, the retainer model for recruitment has been used to pre-recruit workers within two seconds (Bernstein et al. 2011). Therefore, because crowdworkers are not assigned work, they have to be ‘on call’ to ‘catch’ jobs. The lack of work assignment means that workers have to accept available jobs before other workers. We describe the impact that not having work assigned has on workers in the forthcoming sections (Section 2.2.1 and Section 2.2.2).

While ‘on call’, crowdworkers are in a state of hypervigilance (Gray and Suri 2019), having to be ready to work on jobs whenever they become available (Toxtli, Suri, and Savage 2021). During this time, workers carry out unpaid work such as searching and waiting for work to become available on the platform (Berg 2015). Toxtli et al. (Toxtli, Suri, and Savage 2021) identified that crowdworkers spend a median of 11 minutes daily on activities related to hypervigilance: (1) watching over clients’ profiles, (2) searching for jobs, (3) managing queued jobs, (4) searching for filtered jobs, and (5) checking workers’ qualifications. Out of these five activities, Toxtli et al. (Toxtli, Suri, and Savage 2021) describe two as being ‘on call’: (1) watching over clients’ profiles, and (2) managing queued jobs. The remaining three activities are described as “identifying good work” (Toxtli, Suri, and Savage 2021, 6). On a closer look at Toxtli et al.’s (Toxtli, Suri, and Savage 2021) data, participants spent on average 28 minutes (SD = 56.8 min) on activities related to hypervigilance. Calculating the mean (rather than the median) reveals the outliers in their data (i.e., the participants who spent the most time on each activity), whom the authors manually inspected to understand these participants in more detail. Out of the five activities belonging to hypervigilance (enumerated above), participants spent the most time watching over clients’ profiles, for an average of 15 minutes. This activity was described as being ‘on call’. Given that, on average, participants in Toxtli et al.’s (Toxtli, Suri, and Savage 2021) study spent most of their time on an activity described as being ‘on call’, we believe there is value in further quantifying (and describing) the amount of time crowdworkers spend ‘on call’ for work. Therefore, in this paper, we build on Toxtli et al.’s (Toxtli, Suri, and Savage 2021) description of being ‘on call’ on a crowdsourcing platform by defining, quantifying, and describing being ‘on call’ for work on a large crowdsourcing platform. We next review what is known about schedule control and job control to begin to understand the potential impact of being ‘on call’ for crowdworkers.

The Impact of Being ‘On Call’ on Schedule Control

Schedule control, or temporal flexibility in work schedules, involves the extent to which workers can determine the hours and duration of work (Kelly and Moen 2007). Schedule control is believed to minimise the disruptiveness of role blurring, and in turn, enhance work-home integration (Glavin and Schieman 2012; Schieman and Glavin 2008). In addition, flexibility enactment theory (Kossek, Lautsch, and Eaton 2005) and work-family border theory (Clark 2000) state that when workers have control over how to schedule their time, they can better attend to the demands of the work and life domains. Therefore, a high level of control over time use can minimise the negative effects of long hours on work-family relations (Hughes and Parkes 2007). Furthermore, increased schedule control, such as the ability to limit excessive working hours, can result in less fatigue and fewer sleep problems, as workers can match their work hours to their circadian rhythms (Baltes et al. 1999).

Research examining the temporal flexibility of crowdworkers shows that crowdworkers have difficulties scheduling their time because of client-imposed constraints. In particular, not knowing when clients will make work available means that some workers cannot detach from the platform. In Lehdonvirta’s (Lehdonvirta 2018) interview study with 10 crowdworkers, participants report finding it hard to mentally detach from work, thinking about the jobs and money they might be missing by not being online to catch ‘tasks’ when clients make them available. Workers’ inability to properly detach from work likely negatively impacts their ability to recover and replenish energy resources (Sonnentag, Kuttler, and Fritz 2010). Consequently, Lehdonvirta (Lehdonvirta 2018) suggests that having to be ‘on call’ might limit crowdworkers’ control over scheduling their time, rather than delivering on the promises of temporal flexibility.

Research also shows that workers have difficulties scheduling their time because of their tooling practices. In particular, notifications from the tools workers use to ‘catch’ jobs can intrude into workers’ non-work time. Williams et al. (Williams et al. 2019) show that the tools workers use to ‘catch’ work promote temporal fragmentation of workers’ work-life boundaries. In their study, they interviewed 21 full-time crowdworkers and showed that although workers try to follow a ‘routine work schedule’, the tools they use notify them of new work at unpredictable times, sometimes outside their ‘working hours’ (Williams et al. 2019). For example, participants report interrupting their non-work activities (e.g., time with families or spent resting) to work on the platform when tools notified them of new jobs. Furthermore, workers in Williams et al.’s (Williams et al. 2019) study report feeling motivated to remain ‘on call’ on the platform partially because they enjoy the sense of serendipity that comes from their tools uncovering unexpected jobs, especially during non-work hours (Williams et al. 2019).

In summary, prior work shows that crowdworkers have difficulties scheduling their time because of client-imposed constraints and workers’ tooling practices. However, the results of prior work do not tell us how being ‘on call’ impacts the control workers have over when, and for how long, to work, and their ability to stick to their work schedules. In this paper, we aim to explore the extent to which the design of the large crowdsourcing platform, namely requiring crowdworkers to be ‘on call’ for work, is an additional contributor to workers’ limited control over their work scheduling. Evidence suggests that a lack of work-time control can result in stressful work environments and poorer health outcomes (Geurts and Sonnentag 2006). Furthermore, having some degree of flexibility in working hours is considered to be an important element of overall job satisfaction (Baltes et al. 1999). Thus, if we want to change things to avoid these kinds of negative outcomes, we need to understand the extent and impact of the difficulties workers encounter when scheduling their time. Therefore, in our first study, we ask the following research question (RQ):

RQ Study 1: How does having to be ‘on call’ for work limit crowdworkers’ control over scheduling their time?

The Impact of Being ‘On Call’ on Job Control

Job control is defined as the perceived ability to exert some influence over one’s work environment in order to make it more rewarding and less threatening (Ganster 1989). Perceived job control, including control over pace of work, has positive impacts on job satisfaction and job performance (Wheatley 2017; Humphrey, Nahrgang, and Morgeson 2007). However, low levels of perceived job control are linked to negative outcomes, such as job dissatisfaction, work-related stress, and mental and physical ill-health (Bond and Bunce 2001; Bosma, Stansfeld, and Marmot 1998; Theorell, Karasek, and Eneroth 1990). In the case of on-call workers, high job control is positively associated with job satisfaction, having the potential to mitigate on-call stress symptoms (Lindfors et al. 2009; Batt and Appelbaum 1995). Research suggests that not all forms of flexibility are beneficial for work-life balance; however, job control is the most crucial resource for decreasing work intensity (Gallie and Zhou 2013).

‘Work pace’ is a working time dimension that reflects the intensity of work and forms a component of job control. Work pace, as opposed to other working time dimensions that relate to how much control people have over when and for how long to work, describes how quickly work is completed (Fagan 2001). Task switching and multitasking, and not being able to take breaks from work, can intensify the pace of work (Jett and George 2003; Franke 2015). High work pace can lead to fatigue (Eriksen 2006), exhaustion (Naruse et al. 2012), and work-life conflict (Cho et al. 2014).

Research examining the temporal flexibility of crowdworkers shows that crowdworkers have difficulties choosing at what pace they would like to work because of client-imposed constraints. In particular, having to multitask to catch ‘tasks’ when they are posted by clients can increase workers’ pace of work. The structural constraints of crowdsourcing platforms have led to the creation of a competitive marketplace in which only the fastest and most alert workers get to work on the better-paid jobs (Lehdonvirta 2018; Lascau et al. 2019). In contrast, slower workers see jobs disappear before they can accept them. Therefore, the competitiveness of the platform has led to workers adopting job-finding tools to ‘catch’ work. Lehdonvirta (Lehdonvirta 2018) reports that the job-finding tools used by the participants in his interview study notified them when a new job became available. As a result, some participants reported stopping and putting everything aside to complete the job. However, the use of job-finding tools can lead to task switching and multitasking as workers need to frequently monitor the tools even when trying to complete jobs (Williams et al. 2019). Despite the common belief that multitasking allows people to use their time more flexibly, evidence suggests that multitasking lowers wellbeing and self-rated performance (Kirchberg, Roe, and Van Eerde 2015). Furthermore, task switching induced by interruptions, such as the notifications from job-finding tools, can increase feelings of frustration and stress (Mark, Voida, and Cardello 2012; Mark, Gudith, and Klocke 2008; Brumby et al. 2014), and intensify the pace of work (Franke 2015).

Research also shows that workers have difficulties choosing at what pace they would like to work because of their tooling practices. In particular, continuously receiving notifications from tools can make it difficult for workers to take breaks. Williams et al. (Williams et al. 2019) show that the tools workers use to ‘catch’ work promote fragmentation of workers’ work practices by enabling task switching and multitasking behaviour. Furthermore, as the availability of work on the platform is variable, workers are under time pressure to ‘catch’ work before other workers. In turn, the variable availability of work can increase workers’ task switching and multitasking activities and decrease their ability to take breaks and disconnect from work. In Williams et al.’s (Williams et al. 2019) study, some of the participants reported difficulties detaching from crowdwork and the process of searching for jobs. In particular, one participant reported difficulties detaching from work to take breaks because of fear they might miss out on potential earnings when work suddenly became available. In addition, a prior investigation of break-taking among crowdworkers also reveals that workers had concerns that taking breaks would decrease their earnings (Lasecki et al. 2015). Evidence suggests that having control over taking breaks at the right time from high-paced work can provide workers with relief (Rzeszotarski et al. 2013; Dai et al. 2015). However, a small number of breaks, or a lack of breaks, can result in sedentary behaviour and musculoskeletal symptoms (Griffiths, Mackey, and Adamson 2011), and depletion of cognitive resources, which leads to fatigue and burnout (de Jonge et al. 2012).

In summary, prior work shows that crowdworkers have difficulties choosing their pace of work because of client-imposed constraints and workers’ tooling practices. However, the results of prior work do not tell us how much control workers have over their work pace. In this paper, we aim to explore the extent to which the design of the crowdsourcing platform, namely requiring crowdworkers to be ‘on call’ for work, is an additional contributor to workers’ limited control over their work pace. Limited control over the pace of work can increase workers’ task switching and multitasking activities, and decrease workers’ ability to take breaks and disconnect from work, negatively impacting their health and wellbeing. In this paper, we thus aim to characterise in detail the ‘on-call’ problem of crowdsourcing platforms, focusing not only on how being ‘on call’ impacts workers’ control over their work schedules (Study 1), but also on how it impacts their control over their work pace (Study 2). Therefore, in our second study, we ask the following RQ:

RQ Study 2: How does having to be ‘on call’ for work limit crowdworkers’ control over their work pace?

Research Approach

In this paper, we investigate how the design of a large crowdsourcing platform, namely requiring crowdworkers to be ‘on call’ because of the platform’s lack of predictable work availability and work assignment, is a contributor to workers’ limited temporal flexibility. To answer our RQs, we look at the impact that having to be ‘on call’ has on workers’ temporal flexibility at two levels. In the first part of the paper, we focus on the bigger picture of how having to be ‘on call’ can limit workers’ control over scheduling their time at the level of a working day. In the second part of the paper, we ‘zoom in’ to the moment-to-moment experience of workers to understand how having to be ‘on call’ can limit workers’ control over their work pace at the level of a working session. Therefore, in Study 1, we present a time-use-diary study conducted to investigate the impact of having to be ‘on call’ on workers’ ability to plan and stick to their work schedules. Next, in Study 2, we present a video analysis study of 18 hours of screen recordings conducted to investigate the impact of being ‘on call’ on workers’ control over their work pace. We begin by presenting Study 1.

Study 1: Time-use-diary Study

In the next sections, we present the time-use-diary study we conducted to investigate how being ‘on call’ for work can limit crowdworkers’ control over scheduling their time. We begin by describing how we recruited participants for the study and the data collection procedure.

Method

Design

We conducted a time-use-diary study to investigate how having to be ‘on call’ can limit crowdworkers’ control over their work schedules at the level of a working day. Time-use diaries are a multidisciplinary research method used within the social sciences (e.g., (J. E. Brown et al. 2010)), psychology (e.g., (Orben and Przybylski 2019)), and sociology (e.g., (Craig and Mullan 2011)) to capture how people use their time and document the time spent on daily activities. Time-use diaries have been used in the past to explore daily patterns of work and flexibility of work (Anttila et al. 2015).

In this study, we used two time-use diaries. Firstly, we used the ‘7-day Work Schedule’ diary to get an overview of when participants planned to work during the upcoming week. As shown in Figure 1, participants could mark with an ‘X’ each 15-minute interval in which they planned to work on the crowdsourcing platform over the next seven days. Participants could also tick the ‘No work’ box if they did not plan to work on the crowdsourcing platform on a particular day.

Secondly, we used the ‘24-hour Everyday Activities’ diary to get a detailed record of participants’ activities over a single 24-hour period. As can be seen in Figure 2, participants were asked to record a detailed description of their activities every 10 minutes throughout the day. In addition, we used the ‘24-hour Everyday Activities’ diary to understand how well crowdworkers could stick to the schedule they had previously made at the start of the week using the ‘7-day Work Schedule’ diary. We based these two time-use diaries on those used in the United Kingdom 2014-2015 Time Use Survey (Gershuny and Sullivan 2017).

Figure 1: Example of a filled-in ‘7-day Work Schedule’ time-use diary from Diary Participant 1. Participants were asked to mark with an ‘X’ each time interval when they had planned to work on the crowdsourcing platform over the next seven days. Participants could tick the ‘No work’ box if they did not plan to work on the crowdsourcing platform on a particular day.

Figure 2: Example of a filled-in ‘24-hour Everyday Activities’ time-use diary from Diary Participant 1. The diary included columns for participants to enter main and secondary activities, as well as locations in their own words. Participants were asked to write down their activities throughout the day.

Procedure

We hosted the two diaries online using Microsoft Excel Online. We created separate files for each diary participant (DP), and we assigned each participant a unique link to their files. The links could only be accessed by the participants and the researchers. Participants were able to access the files containing the two diaries in their browsers (e.g., on a computer or phone) using the online spreadsheet software Microsoft Excel Online or on their desktop using the Microsoft Excel desktop application.

Once recruited, we asked participants to first complete the ‘7-day Work Schedule’ diary (Figure 1). We asked participants to mark with an ‘X’ each time interval in which they planned to work on the crowdsourcing platform in the next seven days. We used this diary to gather data on when participants planned to work during the week. On the following day, we asked participants to complete the ‘24-hour Everyday Activities’ diary (Figure 2). We asked participants to write down a description of their activities throughout the day. Participants were sent three reminders during the day to complete the ‘24-hour Everyday Activities’ diary. We used this diary to gather data on when participants actually worked on a given day. In this way, it was possible to see how well participants were able to stick to the plans that they made the previous day.

Participants

Table 1 reports the demographics of the participants who took part in this study. We recruited 19 participants to take part in our study. We compensated the participants with $20 USD for their time. Participants were required to earn a significant amount of their income (over 50%) from working on crowdsourcing platforms to participate in the study. We recruited participants who made a large portion of their income through crowdsourcing platforms because workers who depend on online work for their living report spending a sizable portion of their workday being ‘on call’ for work to appear, compared to workers who have other sources of income (Lehdonvirta 2018).

All participants were based in the U.S. We recruited only participants based in the U.S. to mitigate some of the cultural differences that may affect people’s relationship with time, and in turn, how they plan their time. For example, cross-cultural research studies conducted with crowdworkers based in the U.S. and India, the two largest crowdworker groups (D. Difallah, Filatova, and Ipeirotis 2018), have found that workers display intertemporal differences across (a) the time of day and (b) the serial order in which they participated (i.e., earlier or later in data collection) (Casey et al. 2017); in other words, crowdworkers from the U.S. and India vary demographically across the time of day at which data is collected and across serial position, suggesting the existence of intertemporal variations among countries. Other studies have illustrated that crowdworkers across different countries work on the platforms at different rates and times (Ross et al. 2010) and display different patterns of both motivation and social desirability effects (Antin and Shaw 2012). Finally, research suggests that data collected on crowdsourcing platforms as part of cross-cultural comparison studies can be systematically different, highlighting cultural differences including social orientation (individualism vs. collectivism), social desirability, and thinking styles (holistic vs. analytic) (Wang et al. 2015).

In terms of gender, of the 19 participants, 11 (58%) identified as women, 6 (32%) as men, and 1 (5%) as non-binary; our sample had more women than the average population of crowdworkers reported in previous studies, which had a more mixed workforce (D. Difallah, Filatova, and Ipeirotis 2018). In terms of age, 16 (84%) participants were in the age range of 24 to 54 years; our sample was therefore consistent with the average population of crowdworkers, whose ages typically range between 30 and 39 (Chandler et al. 2019). In terms of education level, 9 participants (47%) reported having some college/technical training, and 8 participants (42%) reported holding a University undergraduate degree (e.g., Bachelor’s); this is in line with previous studies reporting that crowdworkers are likely to have a college degree (Chandler et al. 2019). A further 2 participants (11%) reported holding University post-graduate degrees (e.g., Master’s). Finally, in terms of income, participants earned on average 90% (SD = 15) of their income from working on crowdsourcing platforms.

P# | Gender | Age | Highest education level | % income from online work
P1 | M | 24 - 34 | Some college/technical training | 100%
P2 | W | 24 - 34 | Some college/technical training | 100%
P3 | W | 55 - 64 | University undergraduate programme (e.g., Bachelor’s) | 98%
P4 | W | 35 - 44 | University undergraduate programme (e.g., Bachelor’s) | 100%
P5 | W | 45 - 54 | University undergraduate programme (e.g., Bachelor’s) | 95%
P6 | W | 45 - 54 | Some college/technical training | 60%
P7 | W | 35 - 44 | University undergraduate programme (e.g., Bachelor’s) | 90%
P8 | M | 24 - 34 | University undergraduate programme (e.g., Bachelor’s) | 90%
P9 | M | 24 - 34 | University undergraduate programme | 90%
P10 | W | 35 - 44 | Some college/technical training | 100%
P11 | M | 65 years or over | University undergraduate programme (e.g., Bachelor’s) | 55%
P12 | M | 35 - 44 | Some college/technical training | 100%
P13 | M | 45 - 54 | Some college/technical training | 100%
P14 | W | 45 - 54 | Some college/technical training | 70%
P15 | W | 45 - 54 | University post-graduate programme (e.g., Master’s) | 70%
P16 | W | 35 - 44 | Some college/technical training | 100%
P17 | W | 45 - 54 | University undergraduate programme (e.g., Bachelor’s) | 100%
P18 | W | 45 - 54 | Some college/technical training | 100%
P19 | Non-binary | 24 - 34 | University post-graduate programme (e.g., Master’s) | 95%

Table 1: Study 1 Participant Demographics

Ethical Considerations

Before taking part in the study, we briefed participants about the study’s purpose and data confidentiality practices. The participation of the workers was voluntary, and informed consent was obtained from all participants. The study had institutional research ethics approval: UCLIC/1718/013/Staff Cox/Lascau/Brumby.

Throughout the ‘24-hour Everyday Activities’ diary, we regularly prompted participants to avoid disclosing sensitive information. For example, we noted in the diaries that if there was something they felt was too private to record, participants should write ‘personal’ in the text fields. Furthermore, we pseudonymised the entries and separated workers’ usernames from the diaries to preserve confidentiality.

Analysis

The analysis is based on diary data from 18 participants. Out of the 19 participants recruited, we excluded the data from one participant (i.e., DP2) from the analysis. This is because DP2’s ‘24-hour Everyday Activities’ diary revealed that they did not engage in any crowdsourcing-related activities (e.g., completing jobs or waiting for work), except for filling in the diary itself. To code the remaining participants’ diary entries, we used the coding schema provided in the NatCen report (Morris et al. 2016) of the United Kingdom 2014-2015 Time Use Survey (Gershuny and Sullivan 2017). For example, sleeping activities were marked with ‘110’, and actively working on jobs on the crowdsourcing platform was marked with ‘1110’. From this last measure (time spent actively working on jobs), we excluded the amount of time during which participants filled in the diary. Furthermore, we argue that actively working on jobs is also part of participants’ experiences of being ‘on call’, because workers have to monitor job-catching tools even when they are actively completing jobs (Williams et al. 2019). However, for the purpose of our study, we did not include this paid activity in our measurement of being ‘on call’ for work. Instead, in addition to the codes provided in the NatCen report, we constructed two new code categories for the unpaid time participants had to spend ‘on call’ for work: (1) ‘1392’ for activities related to waiting for work (example activities include “Reading stuff online and waiting for work” and “Waiting for work to do on [name of platform]”) and (2) ‘1393’ for activities related to searching for work on the crowdsourcing platform (example activities include “checked [name of platform] for jobs I’ve missed overnight” and “scanned [name of platform] for batches”). We next present the results of the study.
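To make the coding and the ‘on-call’ time measure concrete, the following sketch (in Python) shows one way coded diary entries could be tallied into paid working time and unpaid ‘on-call’ time. It is a minimal illustration only: the entries are hypothetical, the assumption that each coded row covers a fixed 10-minute slot follows the diary design described above, and the study’s own coding was carried out manually against the NatCen schema rather than with a script.

```python
# Minimal sketch: tallying unpaid 'on-call' time from coded diary entries.
# Assumes each entry is a (start_time, activity_code) pair covering a
# 10-minute slot. The codes follow the schema described above
# ('1110' = working on jobs, '1392' = waiting for work, '1393' = searching
# for work); the entries themselves are hypothetical.
from collections import Counter

INTERVAL_MIN = 10  # each diary row covers a 10-minute slot

coded_entries = [
    ("08:00", "1393"),  # "scanned [name of platform] for batches"
    ("08:10", "1110"),  # working on an accepted job
    ("08:20", "1110"),
    ("08:30", "1392"),  # "reading stuff online and waiting for work"
    ("08:40", "1392"),
    ("08:50", "1110"),
]

minutes_per_code = Counter()
for _, code in coded_entries:
    minutes_per_code[code] += INTERVAL_MIN

on_call_minutes = minutes_per_code["1392"] + minutes_per_code["1393"]
working_minutes = on_call_minutes + minutes_per_code["1110"]

print(f"Unpaid 'on-call' time: {on_call_minutes} min "
      f"({on_call_minutes / working_minutes:.0%} of working time)")
```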

Results

Table 2 provides an overview of the observed differences between how participants planned to work for a given day on the crowdsourcing platform, as captured by the ‘7-day Work Schedule’ diary, and how they actually worked, as captured by the ‘24-hour Everyday Activities’ diary. For all statistical analyses, we used paired-sample t-tests, since the data were normally distributed as assessed by the Shapiro-Wilk test ($\alpha$ = .05). We judged effects significant if they reached a $0.05$ significance level. We explain each of the measures in more detail in the following sections.

Measure | Difference of the Means (Actual-Planned) | 95% CI | t-value
Start time of workday | 15 min | -8 min, 38 min | 1.37
End time of workday | 37 min | -36 min, 1 hr 50 min | 1.07
Number of periods of work | 3.11 | 1.30, 4.92 | 3.62*
Duration of periods of work | -1 hr 58 min | -2 hr 54 min, -1 hr 1 min | 5.32*
Duration of total hours worked | -2 hr 3 min | -3 hr 30 min, -36 min | 2.99*

Table 2: Difference between actual and planned crowdsourcing platform work from the Study 1 diaries. A period of work is defined as a block of contiguous work that is separated from another period of work by a break of at least 10 minutes. Note: df = 17, * p < .01, ** p < .001.
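As a rough illustration of the comparisons reported in Table 2, the following Python sketch runs a Shapiro-Wilk normality check on the paired differences and a paired-sample t-test over 18 planned/actual pairs. The numbers are invented for illustration; this is not the analysis pipeline used in the study, only a sketch of the kind of test described above.

```python
# Sketch of a paired comparison between planned and actual minutes per
# participant (18 pairs, df = 17). The values below are fabricated for
# illustration and are not the study data.
import numpy as np
from scipy import stats

planned = np.array([480, 510, 420, 540, 465, 495, 450, 500, 470,
                    520, 430, 485, 505, 460, 490, 475, 455, 515])
actual = np.array([360, 410, 300, 400, 380, 350, 330, 420, 340,
                   390, 310, 365, 400, 320, 370, 345, 335, 405])

differences = actual - planned

# Normality of the paired differences (alpha = .05), as a precondition for
# using a paired-sample t-test rather than a non-parametric alternative.
shapiro_stat, shapiro_p = stats.shapiro(differences)

# Paired-sample t-test on the planned/actual pairs.
t_stat, p_value = stats.ttest_rel(actual, planned)

print(f"Shapiro-Wilk p = {shapiro_p:.3f}, t(17) = {t_stat:.2f}, p = {p_value:.3f}")
```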

Start and End Time of Workday

We first consider the time of day that participants planned to start and end their workday, and whether our participants then actually kept to these plans on a given day. We found that participants planned to start their workday in the morning at around 8 am (M = 08:06, SD = 1 hr 35 min). In reality, participants actually started their workday a little later than planned (M = 08:21, SD = 2 hr). However, a paired sample t-test found there to be no significant difference between planned and actual start time of the workday, t(17) = 1.37, p = .19, 95% CI = [-8 min, 38 min]. Participants planned to end their workday in the evening at around 6:30 pm (M = 18:23, SD = 3 hr 14 min). In reality, participants actually stopped a little later than planned (M = 19:00, SD = 3 hr). A paired sample t-test again found there to be no significant difference between the planned and actual end time of the workday, t(17) = 1.07, p = .30, 95% CI = [-36 min, 1 hr 50 min]. These results show that participants planned to work what can be considered traditional working hours, starting work in the morning and finishing work in the early evening. There was, therefore, no evidence that participants did not keep to their planned start and finish times.

Number of Hours Worked

We next consider whether participants were able to actually work the number of hours planned within their working day. We found that while participants planned to work on average for 8 hr 17 min (SD = 2 hr 45 min), they were only able to actually work 6 hr 13 min (SD = 2 hr 33 min). A paired sample t-test found that participants actually worked significantly fewer hours (-2 hr 03 min) than planned, t(17) = 2.99, p = .008, 95% CI = [-3 hr 30 min, -36 min]. Therefore, results of the diary study show that our participants worked on average two hours less than planned. Furthermore, out of the total time worked, participants spent on average 1 hr 23 min (SD = 3 hr 30 min) waiting and searching for work. In other words, participants spent on average 22% of their daily working time on unpaid ‘on-call’ activities such as waiting and searching for new jobs.
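For reference, the 22% figure is consistent with the two reported means: $\frac{1\,\text{hr}\,23\,\text{min}}{6\,\text{hr}\,13\,\text{min}} = \frac{83\,\text{min}}{373\,\text{min}} \approx 0.22$, i.e., roughly 22% of the average daily time worked.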

Fragmentation of Workday

To better understand why participants were not able to work as many hours as planned despite starting and ending their workday as planned, we next drill down to consider periods of work. We define a period of work as a block of contiguous work (either planned or actually reported) that is separated from another period of work by a break of at least 10 minutes. For example, participant DP17 planned to work across two distinct periods of work: (1) 08:30-13:00 and (2) 14:00-17:30. However, participant DP17’s actual work pattern was far more fragmented than planned, with work being done in many more, shorter periods of work than planned: (1) 08:00-09:10, (2) 09:50-10:50, (3) 11:00-11:10, (4) 11:40-12:50, (5) 14:20-17:00. From this example, we can see that the reason participant DP17 worked 1 hr 50 min less than planned is that work was done over five periods of work instead of two, and that each of these work periods was far shorter in duration than planned. In other words, participant DP17’s workday was far more fragmented than they had planned.
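As an illustration of how such periods of work can be derived, the short Python sketch below groups reported work intervals using the definition above (contiguous work separated from other work by a break of at least 10 minutes). The interval boundaries reproduce the DP17 example; the function and data representation are ours, for illustration only, and are not tooling used in the study.

```python
# Sketch: grouping reported work intervals into 'periods of work', where a
# gap of at least 10 minutes counts as a break that separates two periods.
# Interval boundaries reproduce the DP17 example above; names are illustrative.
from datetime import datetime

def to_minutes(hhmm: str) -> int:
    t = datetime.strptime(hhmm, "%H:%M")
    return t.hour * 60 + t.minute

def group_into_periods(intervals, min_break=10):
    """Merge (start, end) intervals whose gap is shorter than min_break minutes."""
    intervals = sorted((to_minutes(s), to_minutes(e)) for s, e in intervals)
    periods = [list(intervals[0])]
    for start, end in intervals[1:]:
        if start - periods[-1][1] < min_break:  # gap too short to count as a break
            periods[-1][1] = max(periods[-1][1], end)
        else:
            periods.append([start, end])
    return periods

# DP17's reported work intervals from the example above.
reported = [("08:00", "09:10"), ("09:50", "10:50"), ("11:00", "11:10"),
            ("11:40", "12:50"), ("14:20", "17:00")]

periods = group_into_periods(reported)
total = sum(end - start for start, end in periods)
print(f"{len(periods)} periods of work, {total // 60} hr {total % 60} min in total")
# -> 5 periods of work, 6 hr 10 min in total
```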

To examine whether participants’ workdays were more fragmented than planned, we consider both the frequency and the duration of the periods of work throughout the workday. While participants planned to work across only three distinct periods in the day (M = 3.06, SD = 1.63), they actually worked across six periods (M = 6.17, SD = 3.37). A paired sample t-test found this difference to be significant, t(17) = 3.62, p = .002, 95% CI = [1.30, 4.92], suggesting that participants’ work schedules were far more fragmented than planned. Therefore, the results of the diary study show that our participants’ workdays were more fragmented than planned, with work distributed across twice as many work periods as planned. In terms of the duration of these work periods, we found that participants planned for each period of work to be on average 3 hr 22 min (SD = 1 hr 57 min). In reality, each period of work was actually only 1 hr 24 min (SD = 1 hr 11 min), which was significantly shorter in duration than planned, t(17) = -4.42, p < .001, 95% CI = [-2 hr 54 min, -1 hr 01 min]. Therefore, the results of the diary study show that the periods of work our participants completed were far shorter than planned.

Discussion

Taken together, the results of the diary study show that our participants’ workdays were far more fragmented than they had planned: there were on average twice as many work periods as planned, and each of these was far shorter than planned. Moreover, this increased fragmentation of the workday meant that participants were on average working for two hours less than planned, despite starting and ending their workday roughly when they had planned. These findings are interesting because most of our participants wanted to work a ‘shift’ on the crowdsourcing platform, setting out clear start and end times for their work on the platform and generally sticking to them. Workers, then, are responding to the imperatives of the platform within this constraint; they are not just ‘jumping to attention’ whenever jobs arrive.

Our results also suggest that participants worked on average for two hours less than planned. In contrast, participants in Lehdonvirta’s (Lehdonvirta 2018) interview study report having to frequently work longer hours than planned. However, it is unclear if participants in the aforementioned study were speaking about the amount of time they had to work (i.e., for how long) or the times of the day at which they were working (i.e., when). In our study, we aimed to differentiate between the two concepts and provide objective measures. It could be that participants in Lehdonvirta’s (Lehdonvirta 2018) study were speaking about both scenarios. Nevertheless, the fact that in our study participants worked two hours fewer than they had planned illustrates the issue with being ‘on call’ on crowdsourcing platforms: with participants having set aside time to work a ‘shift’, the data suggest they could not ‘fill’ this time and maximise their earnings. This situation indicates a paucity of work on the platform that meets workers’ requirements, meaning that workers end up spending the time they had already set aside inefficiently, trying to find work. A lack of suitable work at a given time can also be seen in the case of platform workers on freelancing platforms, such as Upwork, where freelancers report spending a high number of unpaid hours on finding jobs (Carlos Alvarez de la Vega, E. Cecchinato, and Rooksby 2021).

This study also yields new insights into the relationship between the flexibility and fragmentation of work on crowdsourcing platforms. Our results paint a picture of workers seeking to develop some kind of routine, but where the platform’s architecture prevents them from doing so. At a workday level, this leads to high temporal fragmentation of workers’ schedules. The temporal fragmentation makes it harder for workers to schedule their time across the day in a way that balances demands on their time. Irregular and unpredictable schedules of work can disrupt daily or weekly routines (Bell and La Valle 2003) and personal relationships (Arlinghaus et al. 2019), and thus reduce control over work-life boundaries (Cousins and Varshney 2009). Furthermore, non-standard work schedules are associated with anxiety and irritability (Costa 2003), decreased sleep quality (Vogel et al. 2012), and can have adverse effects on mental health (Rajaratnam and Arendt 2001). However, it is not just unpredictability in working time that can worsen the working conditions of crowdworkers, but also economic instability, as evidence suggests that unpredictable and unstable work schedules can lead to economic insecurity (Ben-Ishai 2015; A. Brown et al. 2014).

In summary, the schedule-level data tells us something about the temporal fragmentation of the working day within the ‘shifts’ that workers plan to fit in around their other commitments. Even when workers set aside time to work (rather than simply being continuously available), scheduling in the face of a ‘flexible’ platform with variable availability of work is almost impossible, and workers end up fitting life around the times when work is available. The fact that workers miss out on almost two hours of work in a given ‘shift’, but do not attempt to ‘catch up’ on this time at the end of the shift, might imply that workers are aware that after a certain hour they will not find work on the platform, or that they are simply not keen to make use of the ‘flexibility’ that the platform offers.

What we did not see from the data was how having to be ‘on call’ for work influences the pace at which workers complete individual jobs and find and manage work on the platform. Work pace is another important part of job control, so in order to understand this aspect, in Study 2, we ‘zoomed in’ to the moment-to-moment experience of crowdworkers.

Limitations

We recruited participants who made a large portion of their income through crowdsourcing platforms, as such workers are known to spend a significant proportion of their time waiting for work (Lehdonvirta 2018). However, we did not set any other pre-screening criteria, such as minimum job completion rates or minimum earnings. Therefore, we cannot know whether any of the participants in our study were highly experienced workers. Observing more experienced workers could have biased the interpretation of our results, as such workers might have developed specific strategies for scheduling their time. Nevertheless, for this work, a general understanding of planning and fragmentation is sufficient. Still, a more focused or stratified sample based on experience rather than time might yield insights into whether planning effectiveness develops with experience, and whether there are ceiling effects (i.e., a point at which experience cannot overcome the fundamental architecture of the platform). Furthermore, we asked 19 workers to keep the two time-use diaries, whereas time-use diaries are usually administered to larger samples (e.g., (Gershuny and Sullivan 2017)). The results of the study might therefore not generalise to the wider population of crowdworkers. However, they provide an initial overview of the limitations workers encounter when scheduling their time. Future work should consider administering the diaries to a larger sample of participants to examine the scale and significance of the ‘on-call’ issues observed in our study.

We asked participants in the time-use diary study to record their work activities throughout the day, in line with prior studies that have asked participants to keep diaries of their work activities (e.g., (Newman 2004; Czerwinski, Horvitz, and Wilhite 2004; Ahmetoglu, Brumby, and Cox 2021)). Asking participants to record their activities every ten minutes throughout the day could have been disruptive to their work; such disruptiveness is a common disadvantage of diary studies (Czerwinski, Horvitz, and Wilhite 2004). However, the in situ and in-the-moment nature of these kinds of diaries is an important advantage (Iida et al. 2012; Carter and Mankoff 2005). We have reason to think that in this working context the disruptive effects of the diaries may have been less apparent. Working on crowdsourcing platforms is a fragmented activity: because larger jobs are decomposed into short jobs, and because filling in the diaries could be done quickly, recovering from each interruption should have been relatively easy (Monk, Trafton, and Boehm-Davis 2008). Moreover, the interruptions were relevant to the participants’ main task (i.e., working on the crowdsourcing platform), which should also have made it easier for participants to recover from them (Adamczyk and Bailey 2004; Gould, Brumby, and Cox 2013). Despite the additional workload, participants engaged with the diaries throughout the diary day and completed all the required fields. Subsequent studies could employ alternative methods (e.g., the Experience Sampling Method (Larson and Csikszentmihalyi 2014; van Berkel, Ferreira, and Kostakos 2017)) and alternative time frames (e.g., hourly reports) to examine the extent to which workers encounter difficulties in planning and sticking to their schedules; a lower fidelity of data might still serve the purpose.

Study 2: Video Analysis Study

In the following sections, we present a video analysis study of more than 18 hours of screen recordings, conducted to investigate how having to be ‘on call’ for work can limit workers’ control over the pace at which they work and thus reduce job control. We begin by describing how we recruited participants for the study, followed by the data collection procedure and our analytic strategy.

Method

Design

We conducted a video analysis study to investigate how having to be ‘on call’ can limit workers’ control over their work pace at the level of a working session. Inspired by previous research that studied the workflows of crowdworkers (Gupta 2017), we gathered 18 hours of screen recordings of participants working on the crowdsourcing platform. The screen recordings provided a rich dataset of participants’ behaviours on the platform. Compared to diary studies, screen recordings require little effort on the participants’ part (Carter and Mankoff 2005). Further, compared to direct observation or shadowing, screen recordings do not require researchers to be physically present in the same room as the participants. Therefore, instead of relying on participants’ notes during a diary study or on researchers’ notes during direct observation or shadowing, the screen recordings enabled us to capture a comprehensive picture of participants’ behaviours. We annotated each screen recording to describe the activities we saw participants doing (e.g., working on a job, searching for new jobs to work on, switching between windows, taking breaks). Finally, to better understand the events in these videos, we sent participants follow-up questions asking them to elaborate on the actions recorded.

Procedure

A job was advertised on the crowdsourcing platform with a maximum completion time of 24 hours, meaning that participants had 24 hours to record their screens, upload the screen recordings, and submit the job. Before participants agreed to complete the job, we presented them with an information page that contained details about the study. On the information page, we mentioned that:

  1. We ask participants in the study to record their screens for 90 minutes using a remote usability testing platform.

  2. We will hold everything that appears on participants’ screens under strict confidentiality, and we will delete the recordings after data analysis.

  3. We expect participants to take breaks during the 90-minute recording, and ask them to inform us if they are about to take any breaks by leaving a message on their screens.

After participants accepted the job, they were taken to the usability testing platform to record their computer screen for 90 minutes. Where the usability testing platform asked participants to enter their full names and email addresses to begin recording, we instructed them to instead enter the usernames they used on the crowdsourcing platform and a random email address (e.g., address@email.com). All participants complied with these instructions. We paid participants within 24 hours of their submitting the job.

After watching each video, we identified brief video clips about which we wanted to learn more. The selection of these clips focused on moments when we could see participants readjusting their work activities once a new job became available on the platform. We identified 98 clips in total and, after identifying these, sent follow-up messages asking each participant to provide detailed comments on three of their video clips. We contacted participants via an API that supported messaging workers using only their usernames, as we had not collected participants’ email addresses. In each message, we included our questions and links to the video fragments, which we hosted on Microsoft OneDrive. For security purposes, we scheduled the links to the videos to expire two weeks after we had sent them to the participants. Participants received an additional $5 USD for answering these follow-up questions. We received detailed annotations on 18 video clips from six participants.
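To illustrate the shape of this follow-up workflow, the sketch below drafts and ‘sends’ clip questions using worker usernames only. It is a hypothetical illustration rather than the script we used: send_platform_message stands in for the platform’s messaging API (which we do not reproduce here), and the username, question, and link are placeholders.

    import datetime

    # Hypothetical sketch of the follow-up messaging step described above.
    # 'send_platform_message' is a placeholder for a platform messaging API
    # that accepts a worker username (no email address needed).
    CLIP_QUESTIONS = {
        # username -> list of (question, secure clip link) pairs; the links are
        # assumed to be pre-generated, expiring OneDrive URLs.
        "example_worker": [
            ("How come you chose to check the platform while completing a job?",
             "https://onedrive.example/clip-1"),
        ],
    }

    EXPIRY = datetime.date.today() + datetime.timedelta(weeks=2)

    def send_platform_message(username: str, body: str) -> None:
        """Placeholder: deliver 'body' to 'username' via the platform's messaging API."""
        print(f"To {username} (links expire {EXPIRY}):\n{body}\n")

    for username, items in CLIP_QUESTIONS.items():
        body = "\n\n".join(f"{question}\nSecure link: {link}" for question, link in items)
        send_platform_message(username, body)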

Each of the three clips that we sent to a participant covered a different type of interaction: with the crowdsourcing platform itself, with productivity tools, or with other tools such as forums. The following is an example of the kind of question we sent to a participant: “Looking back at how you went about completing your work, I was wondering how come you chose to check the platform while you were completing a job? Here is the secure link to a fragment of the video recording where you can see the job I’m referring to: […]”.

P# | Gender | Age | Highest education level | # Jobs
P1 | M | 35 - 44 | University post-graduate programme (e.g., Master’s) | Under 1,000
P2 | M | 35 - 44 | Some college/technical training | 10,000 - 25,000
P3 | W | 24 - 34 | Some college/technical training | 25,000 - 50,000
P4 | W | 24 - 34 | High school diploma | 100,000 - 150,000
P5 | M | 24 - 34 | High school diploma | Under 1,000
P6 | M | 45 - 54 | Some college/technical training | 25,000 - 50,000
P7 | W | 35 - 44 | University undergraduate programme (e.g., Bachelor’s) | 10,000 - 25,000
P8 | M | 35 - 44 | Some college/technical training | 10,000 - 25,000
P9 | W | 45 - 54 | University undergraduate programme (e.g., Bachelor’s) | 25,000 - 50,000
P10 | M | 24 - 34 | University undergraduate programme (e.g., Bachelor’s) | 25,000 - 50,000
P11 | M | 35 - 44 | Some college/technical training | 75,000 - 100,000
P12 | W | 35 - 44 | Some college/technical training | 1,000 - 5,000

Table 3: Study 2 Participant Demographics

Participants

Table 3 reports the demographics of the 12 participants who took part in this study. We compensated participants with $10 USD for their time; participants could then receive an additional $5 USD for answering our follow-up questions, bringing the total pay for the study to $15 USD. All of the participants were based in the U.S., and the total number of jobs participants had completed ranged from 525 to 125,778 (M = 34,465, SD = 37,917). The study was open to both experienced workers (n = 9), i.e., workers with over 10,000 jobs completed (Savage et al. 2020), and novice workers (n = 3), i.e., workers with under 1,000 jobs completed (Savage et al. 2020), because we wanted to capture workers’ activities regardless of their experience. Of the 12 participants, seven (58%) identified as men and five (42%) as women, in line with previous studies reporting a mixed workforce (D. Difallah, Filatova, and Ipeirotis 2018). Participants’ ages ranged from 27 to 51 years (M = 37.3 years, SD = 7.0 years), in line with previous studies reporting that the majority of the crowdworker population is between the ages of 30 and 39 (Chandler et al. 2019). In terms of education level, five participants (42%) had some college/technical training, four (33%) held a university undergraduate degree (e.g., Bachelor’s), two (17%) held high school diplomas, and one held a university post-graduate degree (e.g., Master’s), in line with previous studies reporting that workers are likely to have a college degree (Chandler et al. 2019).

Ethical Considerations

There are, naturally, several privacy concerns over collecting the kind of video data necessary for this study. We took two main measures to mitigate potential risks to the participants as far as possible. First, we held extensive conversations with the departmental and faculty ethics committees of our University. We obtained the appropriate ethical review committee approval prior to conducting the study and complied with all aspects of the approval (institutional research ethics approval: UCLIC/1718/013/Staff Cox/Lascau/Brumby).

Before participants took part in the study, we briefed them about the study’s purpose and data confidentiality practices. Participation was voluntary, and we obtained informed consent from all participants. Further, we committed to giving participants control over the recordings: participants could start, pause, or restart the screen recordings at any point. The recordings were stored on participants’ computers until they decided to upload and share them with us.

Second, we obtained the appropriate data protection registration approval prior to conducting the study and adhered to the institutional data management requirements. We maintained the privacy of participants and clients by keeping the screen recordings secure and removing any confidential information. Further, we were aware of the risk that participants might accidentally disclose confidential information in the screen recordings, and therefore prompted participants not to reveal any sensitive information. Where personal information such as clients’ usernames appeared in the videos (e.g., where a worker was completing a job posted by a client), we anonymised the information within the job by blurring it out, unlinking it from the videos, and discarding it alongside the original videos. We also removed any screen recordings not relevant to our study (e.g., screen recordings of non-crowdwork).

Analysis

We next give an overview of how we analysed the video data collected. We begin by describing how we measured the activities recorded in the video data, followed by describing how we contextualised these activities.

A. Preparing the data for analysis. As this study is only concerned with participants’ behaviour whilst working on the crowdsourcing platform, the first author initially reviewed the recordings to discard any sections where the participants were not engaging in crowdwork activities. Furthermore, as the study is concerned with participants’ behaviours rather than with the contents of the jobs themselves, she additionally discarded any sections in which the contents of jobs were showing.

Once the first author had discarded these sections, she and the second author familiarised themselves with the screen recordings by independently viewing them to gain a good understanding of the entire dataset; this process was comparable to the familiarisation stage of a thematic analysis (Braun and Clarke 2006, 2019). Specifically, since this study aimed to investigate how having to be ‘on call’ can limit workers’ control over the pace at which they work, the two authors initially annotated two unpaid work activities that participants had to engage in whilst ‘on call’ on the crowdsourcing platform: (1) Waiting for new jobs (Zukalova 2020); and (2) Searching for jobs (Berg 2015).

Annotating the two unpaid work activities enabled us to define each activity’s start and end times, and its frequency of occurrence across the dataset. We describe the process of analysing the two activities in subsection ‘E. Analysing the activities recorded’.

We describe in the next subsection how we measured the two activities.

B. Measuring the amount of unpaid time participants spent whilst ‘on call’ for work. Consistent with the measure we used in Study 1, we measured the amount of time participants spent whilst ‘on call’ for work by aggregating the amount of unpaid time participants spent: (1) Waiting for new jobs; and (2) Searching for jobs. Both of these activities have been described previously in the context of the unpaid work that crowdworkers have to engage in when working on crowdsourcing platforms (Zukalova 2020; Berg 2015). As in Study 1, we annotated the two activities to measure the amount of unpaid time participants spent being ‘on call’ for work. We next give an overview of how we measured these two activities:

  1. Waiting for new jobs. We measured the amount of time participants spent waiting for new jobs to become available on the platform. Waiting for new jobs is part of being ‘on call’ because of the crowdsourcing platform’s lack of predictable work availability (i.e., workers do not know when clients are going to make new jobs available (Berg 2015)). We identified when participants were waiting for new jobs by annotating the instances in which participants were not interacting directly with the crowdsourcing platform or doing any crowdwork-related work (e.g., unpaid work such as contacting clients, tracking earnings, reading forums or reviews, or checking qualifications). Whilst waiting for work, participants filled their time with activities resembling break-taking (e.g., playing video games, browsing the internet, or watching Netflix). We differentiated between ‘activities resembling break-taking’ and actual breaks by asking participants to inform us during the screen recordings if they were about to take any breaks (we report the number of breaks participants took in the Results section).

  2. Searching for jobs. We measured the amount of time participants were searching for jobs to complete. Searching for jobs is part of being ‘on call’ because it requires workers to engage in unpaid work such as filtering through potential work (Toxtli, Suri, and Savage 2021). We observed when participants searched for jobs by annotating the instances in which participants visited the crowdsourcing platform’s main page (i.e., the page where jobs appear as they are posted), and when participants visited their external tools to adjust observable parameters in the tools (Gupta 2017).

C. Describing how participants spent their unpaid time whilst ‘on call’. In addition to reviewing and annotating the two activities that we used to measure the amount of unpaid time participants spent whilst ‘on call’ for work, we annotated three further activities to describe how participants spent their unpaid ‘on-call’ time: (1) ‘Catching’ new jobs; (2) Managing the queue of jobs; and (3) Doing other unpaid work. First, ‘catching’ new jobs has been described previously as one of the main activities that crowdworkers have to engage in (Williams et al. 2019; Lascau et al. 2019). Second, managing the queue of jobs is one of the two activities previously used to briefly describe being ‘on call’ on a crowdsourcing platform (Toxtli, Suri, and Savage 2021). Finally, doing other unpaid work has been described previously in work quantifying the invisible labour of crowdworkers (Toxtli, Suri, and Savage 2021). We chose to focus on these three activities because we were interested in how having to be ‘on call’ can influence participants’ work activities; specifically, we wanted to quantify these three activities to describe how participants spent their ‘on-call’ time. We therefore further annotated the video data to define when each of these three work activities occurred, which enabled us to define their frequency of occurrence across the dataset. We next give an overview of how we quantified these three activities:

  1. ‘Catching’ new jobs. We observed whether participants were ‘catching’ new jobs as they became available on the crowdsourcing platform. We chose this activity to describe being ‘on call’ for work because crowdworkers have to ‘catch’ new jobs as they become available (Williams et al. 2019; Lascau et al. 2019); we argue that this is because of the crowdsourcing platform’s lack of work assignment (i.e., the platform does not assign workers jobs to complete). We measured the number of jobs participants caught by recording (a) the frequency with which participants manually accepted new jobs and (b) the frequency with which participants’ external tools ‘caught’ jobs on their behalf. We excluded from the analysis jobs that did not meet participants’ criteria and that they therefore did not attempt to ‘catch’; it was possible to record participants’ criteria when they adjusted observable parameters in their external tools.

  2. Managing the queue of jobs. We observed whether participants were managing the jobs they had ‘queued’ up to work on. Managing the queue of jobs is one of the two activities previously used to briefly describe being ‘on call’ on a crowdsourcing platform (Toxtli, Suri, and Savage 2021). We chose this activity to describe being ‘on call’ because workers have to line up jobs to complete in the queue and then filter out any unsuitable jobs that they will not complete (e.g., fraudulent or low-paying jobs) (Toxtli, Suri, and Savage 2021). We observed the number of jobs participants had in their queue by examining the queue right before they started working on a job. The queue was displayed either on the crowdsourcing platform itself or as part of the external tools that participants were using to catch jobs and maximise their earnings: for ten of the participants, the queue was displayed on the crowdsourcing platform, whereas for the remaining two participants it was displayed in their external tools. For participants who had the queue displayed on the platform and had the “auto-accept next job” feature on, we subtracted the jobs they were completing (or returning) one after the other as they worked their way through the queue.

  3. Doing other unpaid work. Finally, we observed whether participants were engaging in any other unpaid work. We chose this activity to describe being ‘on call’ because crowdworkers have to engage in a variety of unpaid work just to secure paid work (Toxtli, Suri, and Savage 2021) (in addition to activities such as waiting, searching, and catching jobs, or managing the queue of jobs, which we excluded from this measure). We defined other unpaid work to include a wide variety of unpaid activities, such as: (a) contacting clients, (b) tracking earnings, (c) reading forums or reviews, and (d) checking qualifications. This list of unpaid activities is consistent with Gupta’s (Gupta 2017) description of crowdworkers’ workflows and with the invisible labour activities examined by Toxtli et al. (Toxtli, Suri, and Savage 2021). We observed when participants contacted clients by annotating the instances in which they sent clients messages using the contact form provided by the platform; when participants tracked their earnings by annotating the instances in which they visited the platform’s earnings section; when participants read forums or reviews by annotating the instances in which they visited worker forums or read client or job reviews left by other workers; and when participants checked their job qualifications by annotating the instances in which they visited the platform’s Qualifications page.

D. Measuring and describing the amount of paid time participants spent working whilst ‘on call’. Finally, we reviewed and annotated the video data showing participants actively working on jobs. Similarly to the measure in Study 1, we argue that this paid activity is also part of participants’ experiences of being ‘on call’ for work, because workers have to monitor job-catching tools even when they are actively completing jobs (Williams et al. 2019). However, for the purpose of our study, we did not include this paid activity in our definition of being ‘on call’ for work, nor did we use it to measure the amount of unpaid time participants spent whilst ‘on call’. Instead, we used it to measure and describe the amount of paid time participants spent working on the platform; hence, we present it separately. We next give an overview of how we measured this activity:

  1. Actively working on jobs. We measured the amount of time participants spent actively working on jobs. We observed the number of jobs participants completed by annotating the video data to define: (a) when participants started working on a job (i.e., the moment participants clicked the ‘Work’ button to begin working on a job) and (b) when participants stopped working on a job (i.e., the moment participants clicked the ‘Submit’ or ‘Return’ buttons to stop working on a job). As part of measuring the amount of time participants were actively working on jobs, we also included (a) any time spent on jobs that were started but then returned and (b) any time spent on jobs that eventually timed out (Toxtli, Suri, and Savage 2021); in comparison, Toxtli et al. (Toxtli, Suri, and Savage 2021) categorised these two measures as ‘unpaid work’. In our study, however, we included these two activities in the measure of the amount of paid time participants spent working whilst still being ‘on call’ for work.

In summary, to measure the amount of unpaid time participants spent whilst ‘on call’ for work, we annotated two key work activities: (1) Waiting for new jobs; and (2) Searching for jobs. Next, to describe how participants spent their ‘on-call’ time, we additionally annotated three other unpaid activities: (3) ‘Catching’ new jobs; (4) Managing the queue of jobs; and (5) Doing other unpaid work. Finally, to measure and describe the amount of paid time that participants spent working on the platform, whilst still being ‘on call’ for work, we annotated one other activity: (6) Actively working on jobs. We next describe how we analysed the six activities.
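To make this measurement step concrete, the sketch below aggregates labelled interval annotations into unpaid ‘on-call’ time (waiting plus searching) and paid working time for a single session. The interval representation, activity labels, and example durations are our own illustration under stated assumptions, not the study data or analysis code.

    from dataclasses import dataclass

    # Minimal sketch: each annotation is assumed to be a labelled interval,
    # in seconds from the start of the recording. The activity labels mirror
    # the six categories defined in this section; the example intervals below
    # are illustrative only.
    @dataclass
    class Annotation:
        activity: str   # "waiting", "searching", "catching", "queue", "other_unpaid", "working"
        start_s: int
        end_s: int

        @property
        def duration_s(self) -> int:
            return self.end_s - self.start_s

    ON_CALL_UNPAID = {"waiting", "searching"}   # measured as unpaid 'on-call' time
    PAID = {"working"}                          # actively working on jobs

    def summarise(annotations: list[Annotation], session_s: int) -> dict:
        unpaid = sum(a.duration_s for a in annotations if a.activity in ON_CALL_UNPAID)
        paid = sum(a.duration_s for a in annotations if a.activity in PAID)
        return {
            "unpaid_on_call_s": unpaid,
            "paid_s": paid,
            "unpaid_share_of_session": unpaid / session_s,
        }

    # Illustrative 90-minute session: 10 min waiting, 15 min searching, 40 min working.
    example = [
        Annotation("waiting", 0, 600),
        Annotation("searching", 600, 1500),
        Annotation("working", 1500, 3900),
    ]
    print(summarise(example, session_s=90 * 60))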

E. Analysing the activities recorded. The first and second authors independently coded all instances related to the six activities observed within the 18 hours of video recordings. They coded the video data deductively (i.e., top-down): they generated initial codes based on the research question, the initial viewing of the recordings, and the existing literature (e.g., ‘catching’ new jobs (Williams et al. 2019) or managing the queue of jobs (Toxtli, Suri, and Savage 2021)). Based on these codes, they developed a preliminary codebook to help guide the analysis, which they refined throughout the analysis of the video data.

This analysis resulted in 14 codes grouped into six main categories: (1) Waiting for new jobs (e.g., the code ‘playing video game’); (2) Searching for jobs (e.g., ‘searching for job on main page’); (3) Doing other unpaid work (e.g., ‘contacting clients’); (4) ‘Catching’ new jobs (e.g., ‘catching job manually’); (5) Managing the queue of jobs (e.g., ‘checking the work queue on the platform’); and (6) Actively working on jobs (e.g., ‘starting to complete job’). The two authors annotated the video data iteratively until no further notable instances related to the six activities were identified. Through this process of annotating the data, they generated a robust list of all of the instances of the activities we were interested in measuring in this study.
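For concreteness, the snippet below shows one way such a codebook could be represented. It includes only the six categories and the example codes quoted above (not the full set of 14 codes), and the representation is ours rather than an artefact of the analysis.

    # Sketch of the codebook structure described above; only the example codes
    # quoted in the text are included, so this is not the full 14-code codebook.
    CODEBOOK = {
        "Waiting for new jobs": ["playing video game"],
        "Searching for jobs": ["searching for job on main page"],
        "Doing other unpaid work": ["contacting clients"],
        "'Catching' new jobs": ["catching job manually"],
        "Managing the queue of jobs": ["checking the work queue on the platform"],
        "Actively working on jobs": ["starting to complete job"],
    }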

Throughout the coding process, the first and second authors collaboratively examined the six activities recorded in detail, shared their own understandings of these activities, and discussed any disagreements. Additionally, after coding the video data, the first author contacted the participants with follow-up questions in which participants were asked to describe some of the behaviours recorded in the videos; following up with the participants allowed us to check and refine our codes against their descriptions. Finally, all authors collaboratively discussed the activities recorded and asked probing questions to build a shared understanding of the video data. Throughout these group discussions, the authors explored the meaning of the data and reflected on how their biases and subjectivity might be affecting their reading of the data (Braun and Clarke 2019).

F. Contextualising the activities recorded. Additionally, the first author inductively translated the codes into observable patterns (i.e., comparable to the themes of a thematic analysis (Braun and Clarke 2006, 2019)). During this process, she looked for patterns of activities across the whole dataset rather than only within each video recording. Furthermore, she sought patterns across the activities by combining the annotations made and observing relationships between them, instead of contrasting the instances of the six activities in a direct manner (Dourish 2014). This approach enabled her to examine the relationships between the instances and to measure the amount of time participants engaged in them. Throughout this process, we identified three distinct periods of work intensity in the video data.

We observed variability in work intensity over the 90-minute recordings. This variability suggests that, for the participants in our study, working on the crowdsourcing platform was characterised by three distinct periods of work intensity, which were influenced by the number of jobs participants had in their work queue:

  1. Periods of low work intensity, in which participants had zero or only one job lined up to complete in the queue;

  2. Periods of moderate work intensity, in which participants had between two and five jobs lined up; and

  3. Periods of high work intensity, in which participants had six or more jobs lined up. We used six jobs as the cut-off point for periods of high work intensity, as this is approximately two standard deviations above the overall mean queue length.

We observed the number of jobs participants had in their queue by examining the queue right before they started working on a job. The queue of jobs was displayed either on the crowdsourcing platform itself or as part of participants’ external tools. We chose to focus on the number of jobs participants had in their queues because it can be an indicator of work intensity on the crowdsourcing platform. Since each queued job has a time limit, crowdworkers report having to monitor the time limits of queued jobs to ensure they can complete all of them without the jobs expiring or timing out while they work on them (Lascau et al. 2019). Moreover, when workers are not near their workspaces, monitoring the queue of jobs is one of the most common mobile tasks for workers (Williams et al. 2019). Therefore, we argue that the more jobs workers have in their queue, the more their work intensifies (i.e., the workers’ pace increases).
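As a rough check, using the queue statistics reported later in the Results (M = 1.75 jobs, SD = 1.81), the mean plus two standard deviations is 1.75 + 2 × 1.81 ≈ 5.4, which is consistent with a cut-off of six queued jobs. The sketch below restates the classification rule, assuming the queue length observed immediately before a participant starts a job is available as an integer; it is our illustration of the rule, not analysis code from the study.

    def work_intensity(jobs_in_queue: int) -> str:
        """Classify a period by the number of queued jobs, per the cut-offs above."""
        if jobs_in_queue <= 1:    # zero or one job queued
            return "low"
        if jobs_in_queue <= 5:    # two to five jobs queued
            return "moderate"
        return "high"             # six or more jobs queued

    assert work_intensity(0) == "low"
    assert work_intensity(3) == "moderate"
    assert work_intensity(6) == "high"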

On average, we identified four (M = 4.22, SD = 1.96, range: 0–8) distinct periods of work intensity of the three types described above (i.e., low, moderate, and high). We next present the results of the study, in which we used the three distinct periods of work intensity to describe how having to be ‘on call’ for work limited participants’ control over the pace at which they worked.

Results

Compliance with instructions

We asked the twelve participants to share 90 minutes of video capturing their screen while working on the crowdsourcing platform. Participants generally complied with this instruction, although there was some variability in the duration of footage that was submitted (M = 93 min 34 sec, SD = 7 min 28 sec, range: 86 min 37 sec–113 min 28 sec).

Activities undertaken

Table 4 presents an overview of the activities participants engaged in during the working sessions recorded. We next describe the six work activities across the whole dataset.

Activities | Mean | SD
Waiting for Jobs | 12 min 49 sec | 9 min 52 sec
Searching for Jobs | 16 min 14 sec | 2 min 30 sec
Catching Jobs (Freq.) | 8 min 14 sec | 7 min 43 sec
Other Unpaid Work | 6 min 56 sec | 2 min 45 sec
Working on Jobs | 17 min 15 sec | 12 min 19 sec

Table 4: Overview of the activities participants engaged in during the working sessions recorded, across the whole dataset. Note that ‘catching jobs’ describes the frequency with which participants were claiming new jobs.

First, participants spent on average 13 minutes (M = 12 min 49 sec, SD = 9 min 52 sec) waiting for new jobs. Second, participants spent on average 16 minutes (M = 16 min 14 sec, SD = 2 min 50 sec) searching for jobs. Taken together, participants spent on average 17% of their working time on unpaid ‘on-call’ activities, that is, waiting and searching for new jobs; we calculated this value by summing all of the unpaid time participants spent waiting and searching for jobs during the twelve 90-minute working sessions recorded.

Third, participants managed to catch new jobs approximately every eight minutes on average (M = 8 min 14 sec, SD = 7 min 43 sec). Participants ‘caught’ a total of 110 jobs during the 18 hours of recordings (M = 4.58, SD = 3.19). Fourth, participants had on average two jobs in their queues whilst working (M = 1.75, SD = 1.81). Fifth, participants spent on average seven minutes doing unpaid work (M = 6 min 56 sec, SD = 2 min 45 sec).

Finally, participants submitted 64 jobs in total (M = 5.33, SD = 2.46); we use the term ‘submitted’ to differentiate between jobs that participants worked on and submitted for review, and jobs that participants worked on but had to return or that expired. In total, participants returned 46 jobs (M = 3.83, SD = 3.71); that is, participants returned on average 42% of the 110 jobs caught. Furthermore, participants spent on average 17 minutes (M = 17 min 15 sec, SD = 12 min 19 sec) working on the 64 jobs.

In the next section, we describe how these activities occurred during the three periods of work intensity observed in the data. Table 5 presents an overview of the amount of time participants engaged in the activities during periods of low, moderate, and high work intensity. In addition, Figure 3 presents a stacked bar chart of the activities across the three intensity periods and the whole dataset, for easier comparison. We first report descriptive statistics from the data, which help us build a detailed picture of how participants spent their time being ‘on call’ for work during the three periods of work intensity. Additionally, to give a more nuanced understanding, we describe examples from specific video clips, along with the detailed comments participants provided to explain what was happening.

Activities | Low Intensity (Mean / SD) | Moderate Intensity (Mean / SD) | High Intensity (Mean / SD)
Waiting for Jobs | 25 min 14 sec / 16 sec | 9 min 4 sec / 1 min 36 sec | 2 min 36 sec / 18 sec
Searching for Jobs | 15 min 6 sec / 1 min 54 sec | 16 min 12 sec / 2 min 51 sec | 17 min 25 sec / 1 min 49 sec
Catching Jobs (Freq.) | 17 min 5 sec / 10 min 55 sec | 4 min 50 sec / 3 min 39 sec | 2 min 46 sec / 3 min 43 sec
Other Unpaid Work | 10 min 5 sec / 47 sec | 4 min 39 sec / 49 sec | 6 min 39 sec / 47 sec
Working on Jobs | 19 min 57 sec / 1 min 36 sec | 19 min 41 sec / 11 min 12 sec | 14 min 58 sec / 11 min 46 sec

Table 5: Overview of the amount of time participants engaged in the activities during periods of low, moderate, and high work intensity. Note that ‘catching jobs’ describes the frequency with which participants were claiming new jobs.

Figure 3: Overview of the activities participants engaged in during periods of low, moderate, and high work intensity, and across the whole dataset, shown in a stacked bar chart.

Periods of Low Work Intensity

We observed that participants spent 39% of their time in periods of low work intensity, for an average of 33 minutes (M = 33 min 22 sec, SD = 16 min 9 sec). First, during periods of low work intensity, participants spent on average 25 minutes (M = 25 min 14 sec, SD = 16 sec) waiting for new jobs. Second, participants spent on average 15 minutes (M = 15 min 6 sec, SD = 1 min 54 sec) searching for jobs. Third, participants managed to catch new jobs approximately every 17 minutes on average (M = 17 min 5 sec, SD = 10 min 55 sec). Fourth, participants had zero or only one job lined up in the queue to complete. Fifth, participants spent on average ten minutes doing other unpaid work (M = 10 min 5 sec, SD = 47 sec). Sixth, participants spent on average 20 minutes working on jobs (M = 19 min 57 sec, SD = 1 min 36 sec). Finally, during these periods of low work intensity, we observed that participants worked at a slow pace, working on one job or on no jobs at all. Participants were left waiting for new jobs that met their selection criteria to become available on the platform. During this time, participants filled their unpaid time with activities resembling break-taking (e.g., browsing the internet or watching TV shows). We next describe these activities.

Since only a small amount of work was available on the platform during these periods, participants had to spend a large amount of time searching for new jobs. To automatically search the platform for new jobs, participants used open-source external tools. We observed each participant using on average three different external tools (M = 3.28, SD = 1.64). We observed participants frequently switching between different external tools as they waited for work (N = 15). For example, as presented in Figure 4, we observed participant P9 using a dual-monitor setup and switching between three different tools over 30 seconds in one clip. When asked about this clip, the participant said that they used this setup to manage their work:

“I use two monitors because it is advantageous for [name of platform] (and productivity in general while doing any number of computer based jobs). [name of platform]’s own interface is limited, and many people, myself included, find it necessary to use various [tools] and extensions that help find, sort, filter, organize, and accept jobs. Thus, at any given time I might have a number of different windows running for these purposes, so having two monitors is extremely helpful to manage my work flow.” — P9

Participants programmed their external tools to help them monitor the platform for new jobs that met their selection criteria (e.g., setting a minimum payment amount for a job). We observed participants actively changing the parameters of their external tools based on the availability of jobs on the platform over time. For example, participant P8 initially set their external tool to alert them only to new jobs that paid more than $2.99. After not ‘catching’ any jobs for 10 minutes, the participant changed the parameters in their tool to catch jobs paying more than $1.99. Within 12 minutes of making this change, the tool had auto-accepted five jobs paying more than $1.99, and the participant worked on these jobs. When we asked P8 to elaborate on this video clip, they said: “while waiting for work. [Tool] will automatically grab anything that is $1.99 and above …I still count this as <<working>>”.
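To illustrate the kind of behaviour these external tools automate, the sketch below shows a simplified ‘catcher’ loop with a minimum-pay threshold. It is a hypothetical illustration of what we observed, not the code of any actual tool: fetch_available_jobs and accept_job are placeholder functions, and the polling interval is arbitrary.

    import time

    # Hypothetical sketch of the 'catcher' behaviour observed in the recordings:
    # poll the platform for new jobs and auto-accept anything at or above a
    # minimum pay threshold. The two functions below are placeholders, not a
    # real platform API.
    MIN_PAY_USD = 1.99   # P8 lowered this threshold from 2.99 after 10 minutes without a catch

    def fetch_available_jobs() -> list[dict]:
        """Placeholder: return currently visible jobs as dicts with 'id' and 'pay_usd'."""
        return []

    def accept_job(job_id: str) -> None:
        """Placeholder: add the job to the worker's queue."""
        print(f"accepted {job_id}")

    def catch_loop(polls: int = 3, poll_seconds: float = 10.0) -> None:
        for _ in range(polls):
            for job in fetch_available_jobs():
                if job["pay_usd"] >= MIN_PAY_USD:
                    accept_job(job["id"])
            time.sleep(poll_seconds)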

Figure 4: Participant P8 working with three different external tools to find new jobs. In the first two screenshots, the participant switched between two tabs to check the list of jobs available on the crowdsourcing platform using two different tools. In the last screenshot, the participant configured the settings on another tool to search only for jobs paying more than $1.99. All screenshots are presented as sketches in this paper to maintain workers’ and clients’ privacy.

Periods of Moderate Work Intensity

We observed that participants spent 35% of their time in periods of moderate work intensity, for an average of 32 minutes (M = 32 min 9 sec, SD = 15 min 47 sec). First, during periods of moderate work intensity, participants spent on average nine minutes (M = 9 min 4 sec, SD = 1 min 36 sec) waiting for new jobs. Second, participants spent on average 16 minutes (M = 16 min 12 sec, SD = 2 min 51 sec) searching for jobs. Third, participants managed to catch new jobs approximately every five minutes on average (M = 4 min 50 sec, SD = 3 min 39 sec). Fourth, participants had between two and five jobs (M = 2.96, SD = 1.78) lined up in the queue of jobs to complete. Fifth, participants spent on average five minutes doing unpaid work (M = 4 min 39 sec, SD = 49 sec). Sixth, participants spent on average 20 minutes working on jobs (M = 19 min 41 sec, SD = 11 min 12 sec). Finally, during these periods of moderate work intensity, we observed that participants were working at a moderate pace, actively working on jobs, as well as doing activities that are necessary for work (e.g., contacting clients, tracking earnings, reading forums or reviews, or checking qualifications). Unlike during periods of lower work intensity, participants were able to take a few breaks. We next describe these activities.

Since a moderate amount of work was available on the platform during these periods, participants had to increase their work pace to find and evaluate new work. We observed participants frequently (N = 16) switching between different external tools to catch jobs. Once participants caught a new job, they had to decide quickly whether to keep it or return it. We observed participants using community-based reviews to find out more information to help them evaluate the jobs in their queue. Furthermore, we frequently observed participants returning jobs (M = 3.82, SD = 3.73) from their queue; we measured the number of jobs participants returned by counting the instances in which participants clicked a job’s ‘Return’ button on the crowdsourcing platform. There are many reasons why a participant might want to return a job, which we learnt about from the responses we received from participants. For example, participants returned jobs because they did not think they could finish them before the completion time expired, because they were not sure that the work they did would be accepted, or because of a problem with the job. We also observed cases (N = 6) in which participants stopped working on a job because a new, better-paid job had become available to work on instead. To give an example, as presented in Figure 5, P8 returned a job that paid $2.00. P8 had eight jobs in their queue (including ours); the most recent job they had caught (which also happened to have the shortest time remaining) paid $2.00. The job did not have an estimated completion time, so P8 turned to a tool to check the hourly rate of the job for other workers who had completed the same job. Other workers had reviewed the job as “underpaid” (at a pay rate of $5.25/hour), and as a result, P8 returned the job. When asked about this clip, the participant said that they usually swap lower-paying jobs for higher-paying jobs:

“One other factor that I should mention is money. By that, I mean it can matter sometimes what the new job is paying and what the current job you’re working on is paying. Usually this is a decider in jobs that you haven’t started yet. If I was working on a low paying job like $.50 and the new job that came in was higher paying like $4 you might think that I would return the $.50 job and forget about it and just start on the $4 job since it pays more and I won’t have to worry about [not] having enough time. …if my queue starts filling up and the time limits are conflicting then I will return lower-paying jobs for the higher paying jobs but that is because I haven’t started any of the jobs yet.” — P8

Figure 5: Participant P8 returning a job that was underpaid. In the first screenshot, the participant checked their queue of jobs and noticed they caught a job that paid $2.00. To check the hourly rate of the job for other workers who completed the same job, P8 used an external tool, exemplified in the second screenshot. Other workers reviewed the job as “underpaid” (at a pay rate of $5.25/hour), and as a result, P8 returned the job in the last screenshot.

As well as working on the crowdsourcing platform and doing activities that are necessary for crowdsourcing, participants took a few breaks (M = 1.57, SD = 0.78, range: 1–3), for an average of 5 minutes each (M = 4 min 46 sec, SD = 3 min 52 sec). The most common reason reported by participants for taking a break was to get a drink. Other break activities included checking social media (e.g., Twitter, Facebook, Reddit), spending time with children or pets, or doing other non-crowdwork.

Periods of High Work Intensity

We observed that participants spent 26% of their time in periods of high work intensity, for an average of 23 minutes (M = 23 min 33 sec, SD = 16 min 19 sec). First, during periods of high work intensity, participants spent on average three minutes (M = 2 min 36 sec, SD = 18 sec) waiting for new jobs. Second, participants spent on average 17 minutes (M = 17 min 25 sec, SD = 1 min 49 sec) searching for jobs. Third, participants managed to catch new jobs approximately every three minutes on average (M = 2 min 46 sec, SD = 3 min 43 sec). Fourth, participants had at least six jobs (M = 6.23, SD = 1.26) lined up in the queue of jobs to complete. Fifth, participants spent on average seven minutes doing unpaid work (M = 6 min 39 sec, SD = 47 sec). Sixth, participants spent on average 15 minutes working on jobs (M = 14 min 58 sec, SD = 11 min 46 sec). Finally, during these periods of high work intensity, we observed that participants were working at a faster pace, engaging in task switching to ‘catch’ new work. During these periods, participants in our study did not take any breaks. We next describe these activities.

Since a high amount of work was available on the platform during these periods, notifications for new jobs often arrived when participants were already working on a job. On 34 occasions in the video data, we observed participants switching away from the job they were currently working on to catch a new job that had just become available. To avoid missing out on jobs, participants configured their external tools to notify them when new jobs that met their personal selection criteria became available. For example, some of the external tools use pop-up visual notifications that give the name of the job, the name of the client, the payment amount, and the time expected to complete the job. While these notifications helped participants grab new jobs quickly, they could also distract them from the job they were actively working on. To give an example of how these external tools could distract participants, in one video clip, exemplified in Figure 6, participant P6 was halfway through working on a job when one of their external tools alerted them to a new job. When asked about this clip, the participant said they switched away from the job because the tool notified them via a sound alert that a new job had become available. After switching, checking, and accepting this new job, the participant returned to the job they had previously been completing:

“…you mentioned [the tool] and I am glad you did! The reason I keep checking that so aggressively is there is an Alarm that sounds whenever a job is available that meets my preset criteria. So in the video it may seem like I am beyond obsessively checking it but in reality it was a really busy day and that’s why you see me click it sometimes in a millisecond. I just glance at what made it beep and resume my work …” — P6

Figure 6: Participant P6 switching away from the job they were completing to accept a new job that had just become available on the platform. In the first screenshot, the participant was halfway through a data collection job when suddenly they switched to another tab because a tool notified them via a sound alert that a new job had just become available. In the second screenshot, the participant reviewed the job and accepted it, adding it to the jobs queue.

As new jobs became available at a faster pace, we observed participants using community-based reviews to find out more information to help them evaluate the jobs in their queue (N = 12). For example, participants used scripts to check a client’s history and the estimated hourly pay of a job. When asked to elaborate on one of these clips, participant P11 said that they looked at the number of reviews a client had, and at the time and hourly wage information for the job:

“[I] usually look at the number of reviews and whether there are any rejections or blocks. I then look at the hourly average …I use time and hourly wage information to decide if I should do a job or will I miss something more lucrative.” — P11

Unlike during periods of lower work intensity, participants did not take breaks during periods of high work intensity. One reason for this is that participants had at least six jobs lined up in their queue during these high-availability periods, and these jobs often had short completion deadlines. For example, as seen in Figure 7, in one video clip, participant P7 started working on a job that had only 20 minutes left until its completion deadline. Despite the job being advertised as needing 40 minutes to complete, the participant checked one of their external tools and discovered that other crowdworkers had reported completing this job in around 18 minutes. After checking this information, participant P7 decided to work on the job. After 10 minutes of working on the job, a message was displayed: ‘Feel free to take a break before the next round’. At this point, the participant switched back to their jobs queue to check how much time they had left on the job, where they learnt that they had only 9 minutes and 43 seconds remaining. P7 exclaimed: “I don’t really have time for a break …oh god …I’m tired …”, then immediately returned to the job and resumed working on it. Eventually, the job expired: P7 did not manage to submit it on time and did not get paid for working on it. When asked about this clip, participant P7 explained that because of the time limits on the job, they were unable to take a break while working:

“…Sometimes I have to rush to complete and submit [the jobs] within like 10 minutes, than [sic] oh my god, I have to go do that, and sometimes it’s like putting the track in front of the rushing train sort of situation where the moment I am done with one job, the amount of time that I have left to complete the job in the queue is just barely enough to get that done, and then the amount to get the third one in the queue done is barely enough. So it keeps like this, one thing after another, after another, after another, until several hours have passed and I barely have time to pee or get something to drink. It’s not the best situation to be in, but this is the situation I am in right now …” — P7

Figure 7: Participant P7 unable to take breaks whilst working because of a job that had a short completion deadline. In the first screenshot, P7 was two-thirds into completing an experimental psychology job, which asked them to take a break before continuing working. In the second screenshot, the participant switched to the jobs queue to check how much time they had left on the job, where they learnt that they only had 9 minutes and 43 seconds remaining; P7 then immediately returned to the job and resumed working on it. Eventually, as seen in the last screenshot, the job expired, and P7 did not manage to submit it on time and did not get paid for their work.

Discussion

Taken together, the results of the video analysis study show that the variable availability of work on the crowdsourcing platform observed in Study 1 influenced the pace at which workers worked. The work pace of the workers in our study easily increased during busy moments. During these times, workers had multiple jobs queued up, and to manage these fragments of work, they had to switch between jobs and external tools to keep on top of everything. However, the ‘quiet’ periods when job queues are empty also have knock-on effects, fragmenting the rest of the day. Having planned to set aside time to work on the crowdsourcing platform, workers were keen to make the most of this time. However, this time is not necessarily fungible; if there are few jobs, workers do not necessarily have the option to simply stop work and ‘make it up’ later. Instead, they remained ‘on call’, not actively working on jobs but unable to switch off from searching and waiting for new ones, hoping to earn as much as possible in the time available. Conversely, if there were plenty of available jobs, workers were reluctant to take breaks as planned and continued working without breaks. This finding fits with prior work reporting that crowdworkers are not able to take breaks because of the time demands they are under (Lasecki et al. 2015). Our study suggests that the time demands workers reported in prior work were likely due to workers having to work under the time pressure of ‘catching’ work before other workers, and to not taking breaks when work is available.

In terms of unpaid time, the results of the study suggest that workers in our study spent on average 17% of their working time on unpaid ‘on-call’ activities, such as waiting and searching for new jobs, during the twelve 90-minute working sessions recorded. Additionally, we observed that workers had to return 42% of the total number of jobs caught and had to continue waiting and searching for jobs to secure paid work. Moreover, the results of Study 1 further suggest that workers spent on average 22% of their daily working time on unpaid ‘on-call’ activities such as waiting and searching for new jobs. Waiting and searching for work is a form of work itself, unpaid and largely invisible (Star and Strauss 1999). There are no wage guarantees on crowdsourcing platforms (Felstiner 2011), and workers are also not paid for the time they spend on essential admin and ‘meta-work’. Moreover, as there are generally more low-paying jobs than high-paying jobs on crowdsourcing platforms (Hara et al. 2018), crowdworkers have to compete against each other for the higher-paying jobs whilst ‘on call’ for work. Furthermore, for every 90 minutes of video recording, workers in our study spent an average of twelve minutes on unpaid work activities (i.e., contacting clients, tracking earnings, reading forums or reviews, and checking qualifications) and on waiting and searching for jobs. In Berg’s (Berg 2015) survey study, for every 60 minutes spent on the crowdsourcing platform, workers spent 18 minutes searching for jobs and performing unpaid preparatory work (i.e., unpaid work). In comparison, in our study, for every 60 minutes of ‘at work’ time, participants spent on average eight minutes on unpaid work (twelve minutes per 90 minutes is equivalent to eight minutes per 60 minutes).

Additionally, Toxtli et al.’s (Toxtli, Suri, and Savage 2021) study identified that crowdworkers spend 33% (i.e., a median of 33 minutes) of their time daily on unpaid work. We note that the aforementioned study categorised as ‘unpaid work’ two activities that we considered to be part of actively working on jobs (see Section 4.1.5), rather than part of unpaid work: (1) “Starting jobs but then returning them” and (2) “Doing jobs that eventually timeout” (Toxtli, Suri, and Savage 2021, 9). The latter activity (i.e., “Doing jobs that eventually timeout”) was the most time-consuming activity observed in Toxtli et al.’s (Toxtli, Suri, and Savage 2021) study (a median of 4.5 minutes, for 37% of workers), whereas the former (i.e., “Starting jobs but then returning them”) was the second most time-consuming activity (a median of 4.2 minutes, for 92% of workers). In our study, however, we did not categorise these two activities as unpaid work but as ‘Actively working on jobs’, since the aim of our study was not to measure the amount of time workers spend on unpaid work activities (other than unpaid ‘on-call’ activities such as waiting and searching for new jobs) but to observe the impact of being ‘on call’ on workers’ schedules and work pace. Thus, since we did not categorise these two highly time-consuming activities as unpaid work, it is difficult to compare the amount of unpaid time reported in our study with the amount reported in (Toxtli, Suri, and Savage 2021). Nevertheless, we hope that by reporting the amount of unpaid time workers spent in our study (an average of twelve minutes per 90 minutes of work), we can help further quantify the amount of time crowdworkers spend on unpaid work. Unpaid work ultimately impacts workers’ wages: workers’ already low hourly wages (Hara et al. 2018) drop from $3.76 to $2.83 when unpaid work is accounted for (Toxtli, Suri, and Savage 2021). Thus, paying workers at a rate that gives them ‘slack’ for unpaid work (i.e., the non-task aspects of taking on jobs) would provide recognition of this invisible work, and could also give the platform and clients an incentive to reduce the time and effort involved in unpaid meta-work (e.g., with better tooling and robust, reliable jobs).

Finally, the results of our study suggest that workers had to multitask whenever new jobs became available. Toxtli et al. pose the question “How exactly does multi-tasking and context switching relate to invisible labor?” (Toxtli, Suri, and Savage 2021, 20). We provide an initial answer by showing that the time pressure of being ‘on call’ to secure work before other workers shaped the task-switching behaviours of the workers in our study: workers had to switch their attention away from the job at hand when new work became available. Switching between monitoring and performing work has also been observed among knowledge workers; in Renaud et al.’s (Renaud, Ramsay, and Hair 2006) study, 84% of survey respondents kept their email running in the background while working. However, constant task switching is considered more taxing than discrete interruptions (e.g., being notified of a new email or job) because of the frequent switching of attention from one activity to another (González and Mark 2004) and the added time pressure of responding to interruptions (Mark, Gudith, and Klocke 2008). Therefore, we argue that the time-urgent, ‘on-call’ nature of the crowdsourcing platform makes it a multitasking environment in which workers have to react quickly and switch away from their current job when new work becomes available. Jobs, especially ones using time-based measures of performance, need to be designed with this behaviour in mind (Gould, Cox, and Brumby 2018): in some cases multitasking might be a participant-level random effect that comes out in the wash, but for certain kinds of paradigm (e.g., memory-focused research), the constant meta-work being done during participation could have a systematic impact on results.

Limitations

We conducted a detailed video analysis. A strength of video data is that it shows what is actually happening, so nuances and details can be captured. This richness meant that we could identify interesting clips of activity and share them with participants to get more information about what was going on. There are, however, also limitations to the video study. Video is intensive to gather and analyse. As a result, this was a relatively small study, with 12 participants each recording only 90 minutes of activity. While small in size (N = 12), the study allowed us to generate a rich set of data with which to investigate the work practices of workers. However, because of the small sample, the results might not generalise to the wider population of crowdworkers or across different crowdsourcing platforms. To increase the generalisability of the results, the findings could be validated with a large-scale survey experiment (e.g., 1,000 participants working on different crowdsourcing platforms). Additionally, the results could be validated with a system-level simulation or model, addressing questions such as “what are the costs of this problem to each type of stakeholder (e.g., workers, clients, platforms)?” or “what is the optimal behaviour for each type of stakeholder?”.
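To illustrate what such a system-level simulation could look like, the minimal sketch below simulates jobs that arrive at random and are claimed first-come, first-served by whichever ‘on-call’ worker is free, and reports the average share of a shift that ends up as paid, on-task time; the remainder is unpaid waiting. All parameters (number of workers, job arrival rate, job length, shift length) are illustrative assumptions rather than estimates from our data.

```python
# Minimal sketch of a system-level simulation under purely illustrative
# assumptions: jobs arrive at random and are claimed first-come, first-served
# by whichever 'on-call' worker happens to be free at that moment.
import random

def simulate(n_workers=50, jobs_per_min=2.0, job_minutes=5.0,
             shift_minutes=480.0, seed=0):
    rng = random.Random(seed)
    busy_until = [0.0] * n_workers   # when each worker finishes their current job
    paid = [0.0] * n_workers         # paid (on-task) minutes per worker
    t = 0.0
    while True:
        t += rng.expovariate(jobs_per_min)   # next job is posted
        if t >= shift_minutes:
            break
        free = [i for i in range(n_workers) if busy_until[i] <= t]
        if free:                              # one free worker 'catches' the job
            i = rng.choice(free)
            busy_until[i] = t + job_minutes
            paid[i] += min(job_minutes, shift_minutes - t)
    return sum(paid) / (n_workers * shift_minutes)  # mean paid share of the shift

print(f"mean paid share of the shift: {simulate():.0%}")  # the rest is unpaid waiting
```

A fuller model would also need to represent job heterogeneity, worker strategies (e.g., job-catching tools), and the costs to clients and platforms, but even this toy version makes it possible to ask how unpaid waiting time shifts as the worker-to-job ratio changes.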

Further, the study was open to both experienced workers (i.e., over 10,000 jobs completed) and novice workers (i.e., under 500 jobs completed), who self-selected to participate in the study. As a result, our sample consisted of more experienced participants (n = 9) than novice participants (n = 3). Observing mostly experienced workers could have introduced biases into the interpretation of our results, as these workers might have developed specific strategies to manage the pace of their work and take breaks. Future work should explicitly compare how novice and more experienced workers experience being ‘on call’; it could yield valuable insights into coping strategies and reveal ceiling effects, where even very experienced workers cannot deal with having to be ‘on call’ for work.

In terms of methods, one limitation of the study is that we could not record all of the activities of the four participants who used a second screen in their work. This is because the screen recording tool we asked participants to use only allowed them to record one screen at a time. Thus, while we could tell when a worker switched from one screen to the other (i.e., the windows on the first screen became inactive), we could not record the activities displayed on these four participants’ second screens. In the data analysis, we only accounted for the activities that took place on participants’ main screens. Where no activity was shown on a participant’s recorded screen (because they were using a second screen), we skipped these moments in the recordings, as no activities were in focus, and only analysed the moments when activity was visible on the recorded screen. Alternative screen recording software that allows recording multiple screens would have improved our ability to capture participants’ working contexts fully.

Another methodological challenge is that, throughout the sessions, participants were working both on our job (recording their screens) and on other jobs. This situation made setting the rate of pay difficult; we wanted to pay fairly, but too high a rate of pay might have affected participants’ behaviour in terms of work rate and job-finding behaviour. Calibrating pay so that it is fair but does not unduly influence participants’ behaviour remains a difficult question for future work. Furthermore, we observed that most activity during sessions took place in an internet browser. Future work could therefore corroborate these findings by automatically recording browser activity. Indeed, such activity-log studies have been useful in other areas for learning about large-scale patterns of users over extended periods of time (e.g., (Adar, Teevan, and Dumais 2008; Whittaker et al. 2011; Teevan et al. 2007)). Logging is not quite this simple, though; we saw that not all work on a job occurred within the job’s browser tab. For example, participants sometimes switched from the job tab to search for and find information in a different tab, doing a Google search or reading a webpage. This activity may appear to a logger to be unrelated to the job, but in the video we can see that these searches were clearly being done as part of the job.
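To illustrate both the promise and the pitfall of such logging, the sketch below aggregates a hypothetical tab-focus log into time per domain; all URLs and timestamps are invented for illustration and do not come from our data. A domain-level roll-up of this kind would count the job-related Google search as non-job activity, which is precisely the misclassification we could identify and correct in the video data.

```python
# Sketch of how automatically logged browser activity could be aggregated; the
# tab-focus events below are invented for illustration. Time is attributed to
# the domain of the focused tab until the next focus change.
from urllib.parse import urlparse

session_end = 1200  # seconds
events = [          # (seconds since session start, URL of the newly focused tab)
    (0,   "https://platform.example.com/job/123"),
    (240, "https://www.google.com/search?q=term+needed+for+the+job"),
    (300, "https://platform.example.com/job/123"),
    (900, "https://forum.example.com/thread/new-jobs"),
]

time_per_domain: dict[str, float] = {}
for (start, url), (next_start, _) in zip(events, events[1:] + [(session_end, "")]):
    domain = urlparse(url).netloc
    time_per_domain[domain] = time_per_domain.get(domain, 0.0) + (next_start - start)

for domain, seconds in time_per_domain.items():
    print(f"{domain}: {seconds / 60:.1f} min")

# Caveat: the Google search above was done as part of the job, but a
# domain-level roll-up like this one counts it as non-job activity.
```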

Additionally, in the data analysis, we counted the time spent on work that was rejected as part of ‘working time’ whilst being ‘on call’. However, given the data collection method, it is difficult to identify which work was rejected and which was not, since it usually takes time to know whether a client will pay. Thus, future work could employ mechanisms that detect rejected work to better measure the amount of unpaid time participants spent whilst ‘on call’.

A final methodological limitation is that we asked participants to record their screens at random points throughout the day for 90 minutes. While the results of Study 1 suggest that crowdworkers tend to work in blocks of around 90 minutes, survey studies indicate that full-time crowdworkers report typical work sessions of approximately 300 minutes (Lasecki et al. 2015). Thus, future work could ask workers to record their screens for more extended periods and, at the same time, observe any potential variations in work intensity while being ‘on call’ for work. The challenge, of course, is to develop ways of measuring behaviour that retain some of the fidelity of video coding (without the labour involved in coding it) without ending up with telemetry-based measures that might misclassify activity (e.g., work vs. non-work). A promising method is described in Toxtli et al.’s (Toxtli, Suri, and Savage 2021) work. They present a computational mechanism, built into a webpage plugin, for quantifying the invisible labour of crowdworkers: the plugin detects when a crowdworker is doing paid work and when they are doing invisible work, and then measures how much time the crowdworker puts into each of these two activities. In the future, Toxtli et al.’s (Toxtli, Suri, and Savage 2021) computational mechanism could build on our definition of being ‘on call’ on the crowdsourcing platform to quantify the amount of time crowdworkers spend whilst ‘on call’. Furthermore, researchers could use their plugin to investigate the extent to which crowdworkers have to be ‘on call’ for work on other crowdsourcing platforms and digital labour platforms.
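For illustration, the sketch below shows how such a mechanism could roll coded activity segments up into the measure we report, namely the share of working time spent on unpaid ‘on-call’ activities. The activity labels loosely follow the categories described in this paper, but the durations are invented and this is not Toxtli et al.’s implementation.

```python
# Sketch of rolling coded activity segments up into the share of working time
# spent on unpaid 'on-call' activities. Labels loosely follow the activity
# categories described in the paper; the durations are invented for
# illustration and are not participant data.
ON_CALL_ACTIVITIES = {"waiting for jobs", "searching for jobs"}

segments = [                 # (activity label, minutes)
    ("actively working on jobs", 62.0),
    ("searching for jobs", 9.0),
    ("waiting for jobs", 6.0),
    ("taking a break", 13.0),
]

total_minutes = sum(minutes for _, minutes in segments)
on_call_minutes = sum(minutes for label, minutes in segments
                      if label in ON_CALL_ACTIVITIES)
print(f"unpaid 'on-call' share of the session: {on_call_minutes / total_minutes:.0%}")
```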

General Discussion

We situated the findings of the two studies in relation to prior work in the preceding Discussion sections of the paper. Next, we summarise the contributions and results of the two studies. We conclude the paper by discussing the implications of the results for: (a) the people working on crowdsourcing platforms, (b) the design of crowdsourcing platforms, and (c) the wider platform economy.

Summary of Contributions and Results

In this paper, we make three main contributions that extend the existing HCI and CSCW research examining the working conditions of crowdworkers (Harmon and Silberman 2019; Gray and Suri 2019; Fredman et al. 2020). In summary, our three main contributions are: (1) defining, (2) quantifying, and (3) describing being ‘on call’ on a large crowdsourcing platform. We summarise the three contributions next.

The first contribution of this paper is a definition of what it means to be ‘on call’ for work on some existing crowdsourcing platforms. Being ‘on call’ is a somewhat known (Gupta et al. 2014; Lehdonvirta 2018; Toxtli, Suri, and Savage 2021), but undefined and increasingly popular working time arrangement of some existing crowdsourcing platforms (Lascau et al. 2022). Therefore, in this paper, we defined being ‘on call’ as a working time arrangement that requires crowdworkers to wait and search for jobs for an undetermined amount of time, often without getting paid, because of the platforms’ lack of predictable work availability and lack of work assignment. By defining ‘on-call’ time, we were able to quantify and describe the ways in which this type of working time arrangement employed by existing crowdsourcing platforms contributes to workers’ limited temporal flexibility.

The second contribution of this paper is a measure to quantify the amount of unpaid time that crowdworkers have to spend being ‘on call’ for work. We quantified the amount of unpaid time crowdworkers spend waiting and searching for new jobs whilst being ‘on call’ for work. Whilst previous work quantified the amount of time crowdworkers spend searching for jobs (Toxtli, Suri, and Savage 2021; Berg 2015), it did not quantify the amount of time crowdworkers spend waiting for jobs to become available. Overall, the results of Study 1 suggest that crowdworkers spent on average 22% of their daily working time on unpaid ‘on-call’ activities such as waiting and searching for new jobs. Additionally, the results of Study 2 further suggest that workers spent on average 17% of their working time on these two unpaid ‘on-call’ activities during the twelve 90-minute working sessions recorded. By quantifying unpaid ‘on-call’ time, we extend the existing research examining the invisible labour of crowdworkers (Gray and Suri 2019; Toxtli, Suri, and Savage 2021).

Finally, the third contribution of this paper is empirical evidence that being ‘on call’ for work impacts workers’ control over their daily schedule planning and work pace. Previous research suggests that the temporal flexibility of people working on crowdsourcing platforms is limited by both client-imposed constraints (e.g., strict job completion times) (Lascau et al. 2019) and crowdworkers’ tooling practices (e.g., increased multitasking) (Williams et al. 2019). In this paper, we explored an additional contributor to workers’ limited temporal flexibility: the design of crowdsourcing platforms, namely requiring crowdworkers to be ‘on call’ for work. We find in Study 1 that having to be ‘on call’ impacted participants’ ability to schedule their time and stick to planned hours of work. Furthermore, we find in Study 2 that having to be ‘on call’ impacted the pace at which participants were able to work. Overall, the two studies we have presented in this paper show that having to be ‘on call’ limits workers’ temporal flexibility. Thus, by describing ‘on-call’ time, we extend the existing research examining the working conditions of crowdworkers by showing that being ‘on call’ for work impacts workers’ control over their daily schedule planning (Study 1) and work pace (Study 2).

Implications for People Working on Crowdsourcing Platforms

Prior work suggests that the temporal flexibility of people working on crowdsourcing platforms is limited by both client-imposed constraints (Lascau et al. 2019) and workers’ tooling practices (Williams et al. 2019). The results of the studies we have presented show that the design of the crowdsourcing platform, namely requiring crowdworkers to be ‘on call’ for work because of the platform’s lack of predictable work availability and lack of work assignment, is a contributor to crowdworkers’ limited temporal flexibility. We next discuss the implications of the results.

Lack of Predictable Work Availability

Study 1 investigated the relationship between crowdworkers’ planned and actual working times. The results of the study suggest that the platform’s lack of predictable work availability can lead to workers having limited control over the planning of their work schedules. The study showed that participants could roughly keep to planned start and finish times for ‘shifts’ on the crowdsourcing platform. However, participants worked on average two hours less than they had planned. Furthermore, the workday between these start and finish times was more fragmented than planned, with work distributed across twice as many work periods as desired. Thus, at the level of the ‘workday’, schedules were highly fragmented. The results therefore suggest that having to be ‘on call’ for work impacted participants’ ability to schedule their time and stick to planned work hours.

The results of this study are important because they suggest that having to be ‘on call’ for work can limit crowdworkers’ control over their work scheduling. Work time accounts for a significant amount of daily life and so can influence health and wellbeing (Schneider and Harknett 2019). In addition, essential health and wellbeing activities, such as nutrition, exercise, and sleep, need some degree of time management and planning (Schneider and Harknett 2019; Fenwick and Tausig 2004; Allen and Armstrong 2006). Therefore, a lack of control over working time can limit crowdworkers’ ability to manage and plan their time, affecting in turn their health and wellbeing. Furthermore, being ‘on call’ for work can make it more difficult for crowdworkers to establish work routines. The uncertainty in work routine that stems from being ‘on call’ can therefore further affect health and wellbeing, given its association with psychological distress, poor sleep quality, and overall unhappiness (Schneider and Harknett 2019).

Lack of Work Assignment

Study 2 contextualised the findings of Study 1 by ‘zooming in’ on how crowdworkers manage and complete individual jobs. The results of the study suggest that the platform’s lack of work assignment can lead to workers having limited control over their work pace. The results also show that working on the crowdsourcing platform is characterised by three distinct periods of work intensity: low, moderate, and high. During periods of high work intensity, we observed that participants worked at a higher pace, engaging in task switching to quickly ‘catch’ new work, but not taking any breaks. The results therefore suggest that having to be ‘on call’ for work impacted participants’ ability to control the pace of their work.

The results of this study are important because they suggest that having to be ‘on call’ for work can limit crowdworkers’ control over their work pace. A high work pace can increase the time pressure under which workers have to work. We know that when under time pressure, people tend to gather less information and to act quickly when making decisions (Christensen-Szalanski 1980). Time pressure affects human judgement and decision-making (Alter et al. 2007), calling into question the validity of data provided by crowdworkers and used in both industry and academic publications. Future work will be required to assess the impact of time pressure on crowdworkers’ judgement and decision-making.

Furthermore, a high work pace can increase fatigue (Eriksen 2006) and exhaustion (Naruse et al. 2012) for crowdworkers. Taking regular breaks can alleviate feelings of fatigue and exhaustion (Rzeszotarski et al. 2013; Dai et al. 2015) and replenish workers’ energy resources (Sonnentag, Kuttler, and Fritz 2010). However, participants in our study were not able to take breaks during periods of high work intensity. This lack of break-taking is of interest to the ongoing conversation about the working conditions of crowdworkers (Gray and Suri 2019). For example, crowdworkers based in the U.S. do not benefit from state-law-mandated paid rest breaks at work (“a paid 10-minute rest period for each 4-hour work period” (Labor 2023)), since crowdsourcing platforms are largely unregulated. In comparison, drivers on Uber’s on-demand ride-hailing service can work up to a maximum of ten hours before having to take a six-hour break from completing trips (Uber UK 2018). Thus, our results extend prior work examining the invisible work of crowdworkers by showing that participants were not able to take breaks during periods of high work intensity.

Finally, work scheduling and work pace are two components of temporal precarity, defined as the unpredictability, uncertainty, and insecurity workers experience with respect to work scheduling and work pace (Kalleberg 2011). Prior research exploring the sustainability of platform work has criticised the growing ‘Uberization’ of the workforce (Hill 2015) and the exacerbation of work precarity that platform workers experience (Fleming 2017; Wilkins et al. 2022); it has also called for an investigation of the work precarity of platform work (Anwar and Graham 2021b). Thus, the results of our two studies extend the current understanding of how platform workers experience temporal precarity (Lascau et al. 2022). The results show that the current design of the crowdsourcing platform, namely requiring crowdworkers to be ‘on call’ because of the platform’s lack of predictable work availability and, importantly, lack of work assignment, limits workers’ control over their work schedules and work pace.

Implications for the Design of Crowdsourcing Platforms

Can anything be done about crowdworkers having to be ‘on call’ for work? It is challenging to make practicable suggestions for improvements because, as other authors have noted, these issues are effectively ‘features’ and not ‘bugs’ of these platforms (Lascau et al. 2019, 2022; Williams et al. 2019; Lehdonvirta 2018). There may, however, be room to mitigate some of the worst effects of limited temporal flexibility, and we focus on these here.

Lack of Predictable Work Availability

Thinking specifically about increasing the temporal flexibility of crowdworkers, Williams et al. suggest that temporal flexibility could be increased by, for example, giving crowdworkers the “ability to limit their work hours to an 8-hour window during the day” (Williams et al. 2019, 22). However, the results of our first study suggest that this approach would not help with some of the temporal fragmentation issues that our participants experienced. Our participants seemed to favour planning ‘shifts’ of work: clearly defined periods when they intended to be working. For the most part, our participants were able to stick to the start and end times of these shifts, but within these shifts, work was highly fragmented. The tool-related temporal fragmentation that Williams et al. noticed is another side effect of the nature of the platform and of workers’ desire to maximise the work they can do in the time they have set aside. An hour is not a fungible unit for workers: they might be able to work right now, but not in two hours. To earn what they need in the time when they can work, they need to use tools to help find more work. Having limited ‘blocks’ of work in this way may help with the ‘leakage’ of crowdwork into non-work time. However, it would not fix the high work intensity of workers who try to earn as much as possible in the time available, which is a corollary of the limited availability of work on the platform.

Instead, crowdworkers might benefit from crowdsourcing platforms enforcing maximum working hours. Since crowdworkers are neither employed directly by clients nor employed by crowdsourcing platforms, they do not benefit from the labour and social rights that come with formal employment, such as maximum working hours regulations and access to a union. Instead, the ‘workers’ are classified by crowdsourcing platforms as independent contractors (Gray and Suri 2019) and are free to work for an unlimited number of hours. Enforcing maximum working hours on the platform might help reduce the ever-increasing competition on crowdsourcing platforms. However, the issue of the small number of high-quality jobs on crowdsourcing platforms would remain unaddressed.

Lack of Work Assignment

Crowdworkers have to invest their time in building job-catching tools and browsing community forums to find work (Kaplan et al. 2018). The results of Study 2 suggest that the lack of work assignment impacts the temporal flexibility of crowdworkers, as workers compete against each other to ‘catch’ the higher-paying jobs. Therefore, one might recommend assigning workers jobs algorithmically (as in the case of Uber), instead of workers having to accept work from the pool of available jobs before other workers. Arguably, assigning workers to jobs could reduce the amount of time workers spend searching for jobs (Toxtli, Suri, and Savage 2021). Several methods for assigning jobs to crowdworkers, rather than having workers ‘catch’ them, have been suggested (e.g., (Ho and Vaughan 2012; D. E. Difallah et al. 2015)). For example, workers could be assigned jobs based on skills, expertise, past experience, job preferences, or personal interests (see the illustrative sketch below). Hettiachchi et al. (Hettiachchi et al. 2020) argue that if crowdsourcing platforms assigned compatible jobs to crowdworkers, workers would spend less time and effort finding jobs. Regardless of how workers would be assigned jobs, Jones argues that job assignment should privilege the free time and autonomy of the workers (Jones 2021). However, assigning workers jobs algorithmically is not a quick fix. We discuss in the following sections how Uber drivers, despite being assigned trips algorithmically, still have to spend 40% of their working time waiting for a fare (M. K. Chen et al. 2019) due to the high competition on the platform.
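To make the notion of assignment concrete, the sketch below shows one naive, greedy way a platform could match jobs to free workers by skill overlap instead of making workers race to ‘catch’ jobs. It is purely illustrative: it is not the mechanism proposed by Ho and Vaughan (2012), Difallah et al. (2015), or Hettiachchi et al. (2020), and the job and worker data are invented.

```python
# Illustrative sketch only: a naive skill-overlap assignment of jobs to free
# workers, as an alternative to first-come, first-served job catching.

def assign(jobs: dict[str, set[str]], workers: dict[str, set[str]]) -> dict[str, str]:
    """Greedily assign each job to the free worker whose skills overlap most."""
    free = set(workers)
    assignment: dict[str, str] = {}
    for job, required in jobs.items():
        if not free:
            break
        best = max(free, key=lambda w: len(workers[w] & required))
        assignment[job] = best
        free.remove(best)          # each worker handles one job at a time
    return assignment

jobs = {"image tagging": {"vision", "english"}, "survey": {"english"}}
workers = {"A": {"vision", "english"}, "B": {"english", "audio"}}
print(assign(jobs, workers))       # e.g. {'image tagging': 'A', 'survey': 'B'}
```

Even a simple scheme like this removes the first-come, first-served race, but, as the Uber example above suggests, assignment alone does not guarantee that there is enough work to go around.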

A few readers might be frustrated by the lack of tangible design recommendations to increase crowdworkers’ temporal flexibility. Designers of crowdsourcing platforms can be intentional and impactful by choosing not to build new features on top of an existing technology, or not to build a new technology at all, if, ultimately, these design ‘issues’ cannot be ‘solved’ (Friedman and Hendry 2019) without significant changes to the platform architecture and the business model of crowdsourcing. In this sense, we acknowledge that designers of crowdsourcing platforms do not operate in a vacuum, but have to work within the business model of these platforms, as well as with stakeholders’ objectives. Further, even when researchers have tried to ‘layer’ tools on top of these platforms, the platforms have often terminated their access (Salehi et al. 2015; Irani and Silberman 2016). Thus, the ‘fix’ for these issues is a big and ongoing problem.

However, policymakers need to continue to design policy that improves the working conditions of crowdworkers (Toxtli, Suri, and Savage 2021). In particular, we argue that policymakers need to be made aware of the potentially problematic working time arrangements perpetuated by some crowdsourcing platforms, i.e., requiring crowdworkers to be ‘on call’ for work because of the platforms’ lack of predictable work availability and lack of work assignment. We believe that revealing the platform architectures that make the exploitation of crowdworkers possible is pivotal to policy innovation and, ultimately, to changing the flexibility discourse of the platform economy (e.g., individual freedom and flexibility (Anwar and Graham 2021a)) and demanding decent work standards (Graham et al. 2020) for the workers (e.g., realised temporal flexibility (Berg et al. 2018) and fair pay (Whiting, Hugh, and Bernstein 2019)). Thus, this paper contributes to the larger conversation about overlooked unpaid labour by defining, quantifying, and describing what it means to be ‘on call’ on a large crowdsourcing platform. Although the responsibility for improving crowdworkers’ working conditions needs to shift away from the workers to crowdsourcing platforms and policymakers, crowdworkers themselves require financial and, consequently, temporal support to cooperate and take collective action (Salehi et al. 2015). Thus, technologies that provide insights into crowdworkers’ working conditions could support the collective voice and action of the workers (Irani and Silberman 2016). For example, tools that reveal the platform architectures that make the temporal exploitation of crowdworkers possible (e.g., (Toxtli, Suri, and Savage 2021; Irani and Silberman 2013)) could empower the workers, as well as partners and advocates (e.g., scholars, unions, the public, or designers), to further critique and protest poor working conditions (Irani and Silberman 2016).

Going forward, AI practitioners are showing a growing interest in improving sociotechnical AI systems by incorporating more ethical practices into AI and ML models (Schwartz 2019). For example, technology companies have been focusing on developing responsible AI frameworks that embody values such as fairness, accountability, and transparency (Nokia Bell Labs 2023). Within the goal of creating responsible AI, there is an opportunity to consider how responsible AI supply chains (or “fairtrade AI”) can be created, with a focus on human labour. We argue that AI systems built with human labour performed under good working conditions could be more responsible AI systems. The resulting AI systems could be less biased, have better data quality, and have more market value. Furthermore, given that online platform work may become more regulated, technology companies might benefit from leading in the space of responsible AI supply chains rather than having to adapt their practices to new regulations. Defining a set of standards for responsible AI supply chains could be achieved through interdisciplinary collaborations between crowdworkers, clients, policymakers, civil society, scholars, and industry practitioners with expertise in HCI, AI, ML, psychology, and social science.

Finally, we leave the reader with a few open-ended questions. What if workers were assigned work algorithmically, instead of having to accept work from the pool of available jobs before other workers? Would this change increase workers’ temporal flexibility? Would workers spend less time monitoring the platform for work? Would they have more control over how they do their work? Would they multitask less? Would they be able to take more breaks? Would clients miss out on flexibility? These are all questions for future work.

Implications for the Wider Platform Economy

With the rise of the “just-in-time” workforce within the platform economy, there has been a shift from ‘scheduled’ to ‘on-demand’ expectations of both services (e.g., Netflix) and people (e.g., Uber drivers) (Lascau et al. 2022). As a result of this shift, the speed at which platform workers must accomplish jobs has increased, with consumers becoming the principal beneficiaries of the quick services within the platform economy (J. Y. Chen and Sun 2020; Gould 2022). In contrast, online platform workers such as crowdworkers do not benefit from the same flexibility as consumers (Lascau et al. 2022). The results of the two studies presented in this paper suggest that the temporal flexibility of crowdworkers is limited by having to be ‘on call’ to respond to the temporal demands of customers (i.e., clients). We next discuss the implications of our results for the wider platform economy.

Lack of Predictable Work Availability

In this paper, we argue that the lack of predictable work availability on crowdsourcing platforms is due to an oversupply of labour. Within the wider platform economy, labour platforms have an oversupply of workers (Graham, Hjorth, and Lehdonvirta 2017), which makes the workers a ‘disposable labour force’ that can be quickly replaced (Moore 2017). As a result of the COVID-19 pandemic, platform economy services are facing an increase in labour supply (Dunn et al. 2020), which has resulted in platform workers spending more time ‘on call’ for work. For example, in the case of Deliveroo, one of the largest on-demand food delivery services, riders began spending more time waiting for work due to the increase in labour supply (Cant 2020; Bates et al. 2021). Whilst the number of riders working during the evenings increased, the number of orders stagnated. Consequently, riders spent more time waiting for work, and earnings dropped.

The results of Study 1 suggest that workers spent on average 22% of their daily working time on unpaid ‘on-call’ activities such as waiting and searching for new jobs; the results of Study 2 further suggest that workers spent on average 17% of their working time on these activities. In the context of the wider platform economy, Uber drivers report spending 40% of their time waiting for a fare (M. K. Chen et al. 2019). In February 2021, the Supreme Court of the United Kingdom ruled that the time Uber drivers spend working is not restricted to the time drivers spend driving customers to their destination, but also covers any time a driver is logged into the Uber app waiting to accept trips (Supreme Court of the United Kingdom 2021). In other words, Uber drivers ought to be paid for the time they spend waiting for a fare. The proposed Directive is a step in the right direction for improving the working conditions of people working in the platform economy. However, at the time of submission of this manuscript, Uber had yet to comply with the Supreme Court’s ruling. Further, it remains to be seen how crowdsourcing platforms will transpose the Directive into practice across the regions in which they operate.

Finally, the effects of the oversupply of labour within the platform economy can also be seen on freelancing platforms such as Upwork, where freelancers report spending a high number of unpaid hours waiting and searching for jobs because of a lack of available work (Carlos Alvarez de la Vega, E. Cecchinato, and Rooksby 2021). As a result, freelancers have to adapt their tool and software usage to support the temporal rhythms of their work, which, although it grants freelancers high levels of temporal flexibility to find work at different times throughout the day, also blurs the lines between work and non-work (Jarrahi et al. 2021). Therefore, we see more and more examples of people working within the platform economy who are impacted by the narrative of flexibility and individual freedom (Anwar and Graham 2021a).

Lack of Work Assignment

In this paper, we argue that the lack of work assignment on crowdsourcing platforms is due to jobs being made available to most of the workers online, rather than workers being matched by the platform with suitable jobs. By contrast, in the case of Lyft and Uber’s ride-hailing services, drivers are assigned work algorithmically. This form of algorithmic management is the most common type of automation found across the platform economy. Nevertheless, although Lyft and Uber drivers are assigned trips algorithmically, drivers still have to spend 40% of their working time waiting for a fare (M. K. Chen et al. 2019). Once a fare becomes available, drivers have a mere 15 seconds to assess the offer based on the information provided, reach the screen, and accept the trip (Uber 2023; Lyft 2019). If drivers do not want to accept a ride or are too slow to do so, they are penalised: their ‘acceptance rate’ drops and they risk having their accounts deactivated (Rosenblat 2018). Therefore, drivers have to work under time pressures similar to those experienced by the crowdworkers in our second study, having to respond quickly to new jobs, but they cannot realistically opt out of accepting rides, as they risk losing their jobs temporarily or permanently.

Whilst one might recommend assigning crowdworkers jobs algorithmically (as in the case of Uber, though the assignment decisions should be made transparent to the workers (Lee et al. 2015)), assigning jobs algorithmically is not a quick fix. We know from the experiences of on-demand ride-hailing drivers that the forms of algorithmic management employed by apps such as Uber can result in drivers spending longer hours than initially planned logged into the app just to be assigned jobs (Rosenblat 2018), while risking account suspension if they decline too many jobs. Furthermore, in the case of Upwork, freelancers and clients are matched through algorithmic assignment based on a set of attributes that enables them to search for one another and get matched (Jarrahi et al. 2020). Upwork also notifies freelancers about potential jobs that might be a good match for their skill sets. However, algorithmic matching alone is not enough to connect freelancers and clients on Upwork: freelancers and clients also use the platform’s communication channels and evaluation metrics to supplement the match-making process (Jarrahi et al. 2020). Therefore, whilst working alongside algorithms has become the most common form of automation within the platform economy, this arrangement seems far from the paradise of individual freedom and flexibility (Anwar and Graham 2021a) envisioned for the future of work.

“Can we foresee a future crowd workplace in which we would want our children to participate?”, Kittur et al. (Kittur et al. 2013, 1) asked the HCI community in 2013, calling for a longer-term perspective on the future of crowdsourcing platforms. In this sense, the community has been aiming towards a future of work in which tomorrow’s generation would proudly participate. Alas, ‘task assignment’ remains one of the greatest roadblocks to achieving this aim (Kittur et al. 2013), not only for crowdsourcing platforms but also for the wider platform economy. Furthermore, the lack of decent work standards, such as realised temporal flexibility, remains another major roadblock to achieving this aim.

Conclusion

We presented two studies that show how the design of a large crowdsourcing platform, namely requiring crowdworkers to be ‘on call’ for work, contributes to workers’ limited temporal flexibility. We argue that crowdworkers have to be ‘on call’ for work because of the platform’s lack of predictable work availability and lack of work assignment.

Study 1 was a time-use diary study in which 18 participants completed a one-week work-schedule diary (planned) at the start of the week and a one-day activity diary (actual). The results suggest that while participants started and finished work roughly when they intended to, they worked on average two hours less than planned and spent on average 22% of their daily working time on unpaid ‘on-call’ activities such as waiting and searching for new jobs. In addition, the data suggest that participants’ workdays were significantly more fragmented than planned, with work distributed across twice as many periods of work as desired. Therefore, the results of this study suggest that being ‘on call’ can limit workers’ control over scheduling their time and sticking to planned work hours and, thus, reduce schedule control.

However, the data from Study 1 did not show how having to be ‘on call’ for work influences the pace at which workers complete individual jobs, and find and manage work on the platform. Therefore, in Study 2, we presented a video analysis study of more than 18 hours of screen recordings, conducted to investigate how having to be ‘on call’ for work can limit workers’ control over the pace at which they work and, thus, reduce job control. We observed in the video data that participants spent on average 17% of their working time on unpaid ‘on-call’ activities, such as waiting and searching for new jobs. Overall, working on the platform was characterised by three distinct periods of work intensity: periods of low, moderate, and high work intensity. We observed that participants adjusted their work pace, task-switching, and break-taking behaviours in relation to the intensity of the work.

The two studies showed that having to be ‘on call’ for work can limit crowdworkers’ temporal flexibility, resulting in reduced schedule control and job control for the workers. The ‘fix’ for these issues is a big and ongoing problem: these are negative externalities caused by the architecture of crowdsourcing platforms. We have proposed adjustments that could ameliorate some of the effects. Ultimately, however, it may not be possible to ‘solve’ these issues for workers on these platforms, since the platform architecture and the business model of crowdsourcing platforms are inherently unfixable and, overall, a move in the wrong direction.

We are grateful for the comments of the anonymous reviewers across several versions of this manuscript. We could not have conducted this research without our participants, and we acknowledge their critical role in the research. This work was supported by the UK Engineering and Physical Sciences Research Council grant EP/L504889/1.

Adamczyk, Piotr D., and Brian P. Bailey. 2004. “If Not Now, When? The Effects of Interruption at Different Moments Within Task Execution.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 271–78. New York, NY, USA: Association for Computing Machinery.

Adar, Eytan, Jaime Teevan, and Susan T. Dumais. 2008. “Large Scale Analysis of Web Revisitation Patterns.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1197–1206. CHI ’08. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/1357054.1357241.

Ahmetoglu, Yoana, Duncan P. Brumby, and Anna L. Cox. 2021. “To Plan or Not to Plan? A Mixed-Methods Diary Study Examining When, How and Why Knowledge Work Planning Is Inaccurate.” Proc. ACM Hum.-Comput. Interact. 4 (CSCW3). https://doi.org/10.1145/3432921.

Allen, Tammy D, and Jeremy Armstrong. 2006. “Further Examination of the Link Between Work-Family Conflict and Physical Health: The Role of Health-Related Behaviors.” American Behavioral Scientist 49 (9): 1204–21.

Alter, Adam L, Daniel M Oppenheimer, Nicholas Epley, and Rebecca N Eyre. 2007. “Overcoming Intuition: Metacognitive Difficulty Activates Analytic Reasoning.” Journal of Experimental Psychology: General 136 (4): 569.

Antin, Judd, and Aaron Shaw. 2012. “Social Desirability Bias and Self-Reports of Motivation: A Study of Amazon Mechanical Turk in the US and India.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2925–34. CHI ’12. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2207676.2208699.

Anttila, Timo, Tomi Oinas, Mia Tammelin, and Jouko Nätti. 2015. “Working-Time Regimes and Work-Life Balance in Europe.” European Sociological Review 31 (6): 713–24.

Anwar, Mohammad Amir, and Mark Graham. 2021b. “Between a Rock and a Hard Place: Freedom, Flexibility, Precarity and Vulnerability in the Gig Economy in Africa.” Competition & Change 25 (2): 237–58.

Anwar, Mohammad Amir, and Mark Graham. 2021a. “Between a Rock and a Hard Place: Freedom, Flexibility, Precarity and Vulnerability in the Gig Economy in Africa.” Competition & Change 25 (2): 237–58. https://doi.org/10.1177/1024529420914473.

Arlinghaus, Anna, Philip Bohle, Irena Iskra-Golec, Nicole Jansen, Sarah Jay, and Lucia Rotenberg. 2019. “Working Time Society Consensus Statements: Evidence-based Effects of Shift Work and Non-Standard Working Hours on Workers, Family and Community.” Industrial Health 57 (2): 184–200.

Baltes, Boris B, Thomas E Briggs, Joseph W Huff, Julie A Wright, and George A Neuman. 1999. “Flexible and Compressed Workweek Schedules: A Meta-Analysis of Their Effects on Work-Related Criteria.” Journal of Applied Psychology 84 (4): 496.

Bates, O., C. Lord, H. Alter, A. Friday, and B. Kirman. 2021. “Lessons from One Future of Work: Opportunities to Flip the Gig Economy.” IEEE Pervasive Computing 20 (04): 26–34. https://doi.org/10.1109/MPRV.2021.3113825.

Batt, Rosemary, and Eileen Appelbaum. 1995. “Worker Participation in Diverse Settings: Does the Form Affect the Outcome, and If so, Who Benefits?” British Journal of Industrial Relations 33 (3): 353–78. https://doi.org/10.1111/j.1467-8543.1995.tb00444.x.

Bell, Alice, and Ivana La Valle. 2003. Combining Self-Employment and Family Life. Bristol, UK: Policy Press.

Ben-Ishai, Liz. 2015. “Volatile Job Schedules and Access to Public Benefits.” The Center for Law and Social Policy.

Bentley, Frank, Katie Quehl, Jordan Wirfs-Brock, and Melissa Bica. 2019. “Understanding Online News Behaviors.” In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–11. New York, NY, USA: Association for Computing Machinery.

Berg, Janine. 2015. “Income Security in the on-Demand Economy: Findings and Policy Lessons from a Survey of Crowdworkers.” Comp. Lab. L. & Pol’y J. 37: 543.

Berg, Janine, Marianne Furrer, Ellie Harmon, Uma Rani, and M. Six Silberman. 2018. Digital Labour Platforms and the Future of Work: Towards Decent Work in the Online World. Geneva: International Labour Organization.

Bernstein, Michael S., Joel Brandt, Robert C. Miller, and David R. Karger. 2011. “Crowds in Two Seconds: Enabling Realtime Crowd-Powered Interfaces.” In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, 33–42. UIST ’11. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2047196.2047201.

Bigham, Jeffrey P., Chandrika Jayant, Hanjie Ji, Greg Little, Andrew Miller, Robert C. Miller, Robin Miller, et al. 2010. “VizWiz: Nearly Real-Time Answers to Visual Questions.” In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology, 333–42. UIST ’10. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/1866029.1866080.

Bond, Frank W, and David Bunce. 2001. “Job Control Mediates Change in a Work Reorganization Intervention for Stress Reduction.” Journal of Occupational Health Psychology 6 (4): 290.

Bosma, Hans, Stephen A Stansfeld, and Michael G Marmot. 1998. “Job Control, Personal Characteristics, and Heart Disease.” Journal of Occupational Health Psychology 3 (4): 402.

Braun, Virginia, and Victoria Clarke. 2006. “Using Thematic Analysis in Psychology.” Qualitative Research in Psychology 3 (2): 77–101.

Braun, Virginia, and Victoria Clarke. 2019. “Reflecting on Reflexive Thematic Analysis.” Qualitative Research in Sport, Exercise and Health 11 (4): 589–97.

Brown, Alexandra, David Buchholz, Matthew B Gross, Jeff Larrimore, Ellen A Merry, Barbara J Robles, Maximilian D Schmeiser, Logan Thomas, et al. 2014. “Report on the Economic Well-Being of U.S. Households in 2013.” 89200. Board of Governors of the Federal Reserve System (U.S.).

Brown, Judith E., Dorothy H. Broom, Jan M. Nicholson, and Michael Bittman. 2010. “Do Working Mothers Raise Couch Potato Kids? Maternal Employment and Children’s Lifestyle Behaviours and Weight in Early Childhood.” Social Science & Medicine 70 (11): 1816–24. https://doi.org/10.1016/j.socscimed.2010.01.040.

Brumby, Duncan P., Helena Du Toit, Harry J. Griffin, Ana Tajadura-Jiménez, and Anna L. Cox. 2014. “Working with the Television on: An Investigation into Media Multitasking.” In CHI ’14 Extended Abstracts on Human Factors in Computing Systems, 1807–12. CHI EA ’14. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2559206.2581210.

Cant, Callum. 2020. Riding for Deliveroo: Resistance in the New Economy. Cambridge, UK: Polity Press.

Carlos Alvarez de la Vega, Juan, Marta E. Cecchinato, and John Rooksby. 2021. “‘Why Lose Control?’ A Study of Freelancers’ Experiences with Gig Economy Platforms.” In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. CHI ’21. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3411764.3445305.

Carter, Scott, and Jennifer Mankoff. 2005. “When Participants Do the Capturing: The Role of Media in Diary Studies.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 899–908. CHI ’05. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/1054972.1055098.

Casey, Logan S., Jesse Chandler, Adam Seth Levine, Andrew Proctor, and Dara Z. Strolovitch. 2017. “Intertemporal Differences Among MTurk Workers: Time-based Sample Variations and Implications for Online Data Collection.” SAGE Open 7 (2): 2158244017712774. https://doi.org/10.1177/2158244017712774.

Chandler, Jesse, Cheskie Rosenzweig, Aaron J Moss, Jonathan Robinson, and Leib Litman. 2019. “Online Panels in Social Science Research: Expanding Sampling Methods Beyond Mechanical Turk.” Behavior Research Methods 51 (5): 2022–38.

Chen, Julie Yujie, and Ping Sun. 2020. “Temporal Arbitrage, Fragmented Rush, and Opportunistic Behaviors: The Labor Politics of Time in the Platform Economy.” New Media & Society 22 (9): 1561–79. https://doi.org/10.1177/1461444820913567.

Chen, M Keith, Peter E Rossi, Judith A Chevalier, and Emily Oehlsen. 2019. “The Value of Flexible Work: Evidence from Uber Drivers.” Journal of Political Economy 127 (6): 2735–94.

Cho, Sung-Hyun, Mihyun Park, Sang Hee Jeon, Hyoung Eun Chang, and Hyun-Ja Hong. 2014. “Average Hospital Length of Stay, Nurses’ Work Demands, and Their Health and Job Outcomes.” Journal of Nursing Scholarship 46 (3): 199–206.

Christensen-Szalanski, Jay JJ. 1980. “A Further Examination of the Selection of Problem-Solving Strategies: The Effects of Deadlines and Analytic Aptitudes.” Organizational Behavior and Human Performance 25 (1): 107–22.

Clark, Sue Campbell. 2000. “Work/Family Border Theory: A New Theory of Work/Family Balance.” Human Relations 53 (6): 747–70.

Cook, Dave. 2020. “The Freedom Trap: Digital Nomads and the Use of Disciplining Practices to Manage Work/Leisure Boundaries.” Information Technology & Tourism 22 (3): 355–90. https://doi.org/10.1007/s40558-020-00172-4.

Costa, Giovanni. 2003. “Shift Work and Occupational Medicine: An Overview.” Occupational Medicine 53 (2): 83–88.

Cousins, Karlene C, and Upkar Varshney. 2009. “Designing Ubiquitous Computing Environments to Support Work Life Balance.” Communications of the ACM 52 (5): 117–23.

Craig, Lyn, and Killian Mullan. 2011. “How Mothers and Fathers Share Childcare: A Cross-National Time-Use Comparison.” American Sociological Review 76 (6): 834–61. https://doi.org/10.1177/0003122411427673.

Czerwinski, Mary, Eric Horvitz, and Susan Wilhite. 2004. “A Diary Study of Task Switching and Interruptions.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 175–82. CHI ’04. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/985692.985715.

Dai, Peng, Jeffrey M. Rzeszotarski, Praveen Paritosh, and Ed H. Chi. 2015. “And Now for Something Completely Different: Improving Crowdsourcing Workflows with Micro-Diversions.” In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, 628–38. CSCW ’15. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2675133.2675260.

de Jonge, Jan, Ellen Spoor, Sabine Sonnentag, Christian Dormann, and Marieke van den Tooren. 2012. “‘Take a Break?!’ Off-job Recovery, Job Demands, and Job Resources as Predictors of Health, Active Learning, and Creativity.” European Journal of Work and Organizational Psychology 21 (3): 321–48.

De Stefano, Valerio. 2015. “The Rise of the Just-in-Time Workforce: On-demand Work, Crowdwork, and Labor Protection in the Gig-Economy.” Comp. Lab. L. & Pol’y J. 37: 471.

Deng, Xuefei Nancy, K. D. Joshi, and Robert D. Galliers. 2016. “The Duality of Empowerment and Marginalization in Microtask Crowdsourcing: Giving Voice to the Less Powerful Through Value Sensitive Design.” MIS Q. 40 (2): 279–302.

Difallah, Djellel Eddine, Michele Catasta, Gianluca Demartini, Panagiotis G. Ipeirotis, and Philippe Cudré-Mauroux. 2015. “The Dynamics of Micro-Task Crowdsourcing: The Case of Amazon MTurk.” In Proceedings of the 24th International Conference on World Wide Web, 238–47. WWW ’15. Republic and Canton of Geneva, CHE: International World Wide Web Conferences Steering Committee. https://doi.org/10.1145/2736277.2741685.

Difallah, Djellel, Elena Filatova, and Panos Ipeirotis. 2018. “Demographics and Dynamics of Mechanical Turk Workers.” In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, 135–43. WSDM ’18. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3159652.3159661.

Dokko, Jane, Megan Mumford, and Diane Whitmore Schanzenbach. 2015. “Workers and the Online Gig Economy.” Brookings.

Dourish, Paul. 2014. “Reading and Interpreting Ethnography.” In Ways of Knowing in HCI, edited by Judith S. Olson and Wendy A. Kellogg, 1–23. New York, NY: Springer New York. https://doi.org/10.1007/978-1-4939-0378-8_1.

Dubal, Veena B. 2020. “The Time Politics of Home-Based Digital Piecework.” In Center for Ethics Journal: Perspectives on Ethics, Symposium Issue “The Future of Work in the Age of Automation and AI.”, 2020:50. San Francisco, CA, USA: C4E Journal.

Dunn, Michael, Fabian Stephany, Steven Sawyer, Isabel Munoz, Raghav Raheja, Gabrielle Vaccaro, and Vili Lehdonvirta. 2020. “When Motivation Becomes Desperation: Online Freelancing During the COVID-19 Pandemic.” SocArXiv. https://doi.org/10.31235/osf.io/67ptf.

Eriksen, W. 2006. “Work Factors as Predictors of Persistent Fatigue: A Prospective Study of Nurses’ Aides.” Occupational and Environmental Medicine 63 (6): 428–34. https://doi.org/10.1136/oem.2005.019729.

Fagan, Colette. 2001. “The Temporal Reorganization of Employment and the Household Rhythm of Work Schedules: The Implications for Gender and Class Relations.” American Behavioral Scientist 44 (7): 1199–1212.

Felstiner, Alek. 2011. “Working the Crowd: Employment and Labor Law in the Crowdsourcing Industry.” Berkeley J. Emp. & Lab. L. 32: 143.

Fenwick, Rudy, and Mark Tausig. 2004. “The Health and Family-Social Consequences of Shift Work and Schedule Control: 1977 and 1997.” In Fighting For Time; Shifting Boundaries of Work and Social Life, edited by Cynthia Fuchs Epstein and Arne L. Kalleberg, 77–110. New York, NY, USA: Russell Sage Foundation.

Fleming, Peter. 2017. “The Human Capital Hoax: Work, Debt and Insecurity in the Era of Uberization.” Organization Studies 38 (5): 691–709.

Flores-Saviaga, Claudia, Yuwen Li, Benjamin Hanrahan, Jeffrey Bigham, and Saiph Savage. 2020. “The Challenges of Crowd Workers in Rural and Urban America.” Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 8 (1): 159–62.

Franke, Franziska. 2015. “Is Work Intensification Extra Stress?” Journal of Personnel Psychology 14 (1): 17–27. https://doi.org/10.1027/1866-5888/a000120.

Fredman, Sandra, Darcy du Toit, Mark Graham, Kelle Howson, Richard Heeks, Jean-Paul van Belle, Paul Mungai, and Abigail Osiki. 2020. “Thinking Out of the Box: Fair Work for Platform Workers.” King’s Law Journal 31 (2): 236–49. https://doi.org/10.1080/09615768.2020.1794196.

Friedman, Batya, and David G Hendry. 2019. Value Sensitive Design: Shaping Technology with Moral Imagination. Cambridge, MA, USA: The MIT Press.

Gallie, Duncan, and Ying Zhou. 2013. “Job Control, Work Intensity, and Work Stress.” In Economic Crisis, Quality of Work and Social Integration: The European Experience, edited by Duncan Gallie, 115–41. Oxford, UK: Oxford University Press.

Ganster, Daniel C. 1989. “Worker Control and Well-Being: A Review of Research in the Workplace.” Job Control and Worker Health 3 (23): 213–29.

Gao, Yihan, and Aditya Parameswaran. 2014. “Finish Them! Pricing Algorithms for Human Computation.” Proc. VLDB Endow. 7 (14): 1965–76. https://doi.org/10.14778/2733085.2733101.

Gershuny, Jonathan I., and Oriel Sullivan. 2017. “United Kingdom Time Use Survey, 2014-2015.” UK Data Service. https://doi.org/10.5255/UKDA-SN-8128-1.

Geurts, Sabine Ae, and Sabine Sonnentag. 2006. “Recovery as an Explanatory Mechanism in the Relation Between Acute Stress Reactions and Chronic Health Impairment.” Scandinavian Journal of Work, Environment & Health 32 (6): 482–92. https://doi.org/10.5271/sjweh.1053.

Giles, Jim. 2009. “Refugees Set to Tap Demand for Virtual Workforce.” Elsevier.

Glavin, Paul, and Scott Schieman. 2012. “Workfamily Role Blurring and Workfamily Conflict: The Moderating Influence of Job Resources and Job Demands.” Work and Occupations 39 (1): 71–98.

González, Victor M., and Gloria Mark. 2004. “‘Constant, Constant, Multi-Tasking Craziness’: Managing Multiple Working Spheres.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 113–20. CHI ’04. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/985692.985707.

Gould, Sandy J. J. 2022. “Consumption Experiences in the Research Process.” In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1–17. CHI ’22. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3491102.3502001.

Gould, Sandy J. J., Duncan P. Brumby, and Anna L. Cox. 2013. “What Does It Mean for an Interruption to Be Relevant? An Investigation of Relevance as a Memory Effect.” Proceedings of the Human Factors and Ergonomics Society Annual Meeting 57 (1): 149–53. https://doi.org/10.1177/1541931213571034.

Gould, Sandy J. J., Anna L. Cox, and Duncan P. Brumby. 2018. “Influencing and Measuring Behaviour in Crowdsourced Activities.” In New Directions in Third Wave Human-Computer Interaction: Volume 2 - Methodologies, edited by Michael Filimowicz and Veronika Tzankova, 103–30. Human. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-73374-6_7.

Graham, Mark, Isis Hjorth, and Vili Lehdonvirta. 2017. “Digital Labour and Development: Impacts of Global Digital Labour Platforms and the Gig Economy on Worker Livelihoods.” Transfer: European Review of Labour and Research 23 (2): 135–62. https://doi.org/10.1177/1024258916687250.

Graham, Mark, Jamie Woodcock, Richard Heeks, Paul Mungai, Jean-Paul Van Belle, Darcy du Toit, Sandra Fredman, Abigail Osiki, Anri van der Spuy, and Six M. Silberman. 2020. “The Fairwork Foundation: Strategies for Improving Platform Work in a Global Context.” Geoforum; Journal of Physical, Human, and Regional Geosciences 112: 100–103. https://doi.org/10.1016/j.geoforum.2020.01.023.

Gray, Mary L, and Siddharth Suri. 2019. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. New York, NY, US: Houghton Mifflin Harcourt.

Griffiths, Karin Lindgren, Martin G Mackey, and Barbara J Adamson. 2011. “Behavioral and Psychophysiological Responses to Job Demands and Association with Musculoskeletal Symptoms in Computer Work.” Journal of Occupational Rehabilitation 21 (4): 482–92.

Gupta, Neha. 2017. “An Ethnographic Study of Crowdwork via Amazon Mechanical Turk in India.”

Gupta, Neha, David Martin, Benjamin V. Hanrahan, and Jacki O’Neill. 2014. “Turk-Life in India.” In Proceedings of the 18th International Conference on Supporting Group Work, 1–11. GROUP ’14. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2660398.2660403.

Haas, Daniel, Jiannan Wang, Eugene Wu, and Michael J. Franklin. 2015. “CLAMShell: Speeding up Crowds for Low-Latency Data Labeling.” Proc. VLDB Endow. 9 (4): 372–83. https://doi.org/10.14778/2856318.2856331.

Hao, Karen. 2019. “An AI Startup Has Found a New Source of Cheap Labor for Training Algorithms: Prisoners.” MIT Technology Review. https://web.archive.org/web/20230402082700/https://www.technologyreview.com/2019/03/29/136262/an-ai-startup-has-found-a-new-source-of-cheap-labor-for-training-algorithms/.

Hara, Kotaro, Abigail Adams, Kristy Milland, Saiph Savage, Chris Callison-Burch, and Jeffrey P. Bigham. 2018. “A Data-Driven Analysis of Workers’ Earnings on Amazon Mechanical Turk.” In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1–14. New York, NY, USA: Association for Computing Machinery.

Harmon, Ellie, and M Six Silberman. 2019. “Rating Working Conditions on Digital Labor Platforms.” Computer Supported Cooperative Work (CSCW) 28 (5): 911–60.

Hettiachchi, Danula, Niels van Berkel, Vassilis Kostakos, and Jorge Goncalves. 2020. “CrowdCog: A Cognitive Skill Based System for Heterogeneous Task Assignment and Recommendation in Crowdsourcing.” Proc. ACM Hum.-Comput. Interact. 4 (CSCW2). https://doi.org/10.1145/3415181.

Hill, Steven. 2015. Raw Deal: How the “Uber Economy” and Runaway Capitalism Are Screwing American Workers. New York, NY, USA: St. Martin’s Press.

Ho, Chien-Ju, and Jennifer Wortman Vaughan. 2012. “Online Task Assignment in Crowdsourcing Markets.” In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, 45–51. AAAI’12. Toronto, Ontario, Canada: AAAI Press.

Horton, John J. 2010. “Online Labor Markets.” In Internet and Network Economics, edited by Amin Saberi, 515–22. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer. https://doi.org/10.1007/978-3-642-17572-5_45.

Howcroft, Debra, and Birgitta Bergvall-Kåreborn. 2019. “A Typology of Crowdwork Platforms.” Work, Employment and Society 33 (1): 21–38. https://doi.org/10.1177/0950017018760136.

Howson, Kelle, Funda Ustek-Spilda, Alessio Bertolini, Richard Heeks, Fabian Ferrari, Srujana Katta, Matthew Cole, et al. 2022. “Stripping Back the Mask: Working Conditions on Digital Labour Platforms During the COVID-19 Pandemic.” International Labour Review 161 (3): 413–40. https://doi.org/10.1111/ilr.12222.

Huang, Ting-Hao, and Jeffrey Bigham. 2017. “A 10-Month-Long Deployment Study of On-Demand Recruiting for Low-Latency Crowdsourcing.” Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 5 (1): 61–70.

Hughes, Emily L., and Katharine R. Parkes. 2007. “Work Hours and Well-Being: The Roles of Work-Time Control and Work-Family Interference.” Work & Stress 21 (3): 264–78.

Humphrey, Stephen E., Jennifer D. Nahrgang, and Frederick P. Morgeson. 2007. “Integrating Motivational, Social, and Contextual Work Design Features: A Meta-Analytic Summary and Theoretical Extension of the Work Design Literature.” Journal of Applied Psychology 92 (5): 1332–56. https://doi.org/10.1037/0021-9010.92.5.1332.

Iida, Masumi, Patrick E. Shrout, Jean-Philippe Laurenceau, and Niall Bolger. 2012. “Using Diary Methods in Psychological Research.” In APA Handbook of Research Methods in Psychology, Vol 1: Foundations, Planning, Measures, and Psychometrics, edited by Harris Cooper, Paul M. Camic, Debra L. Long, A. T. Panter, David Rindskopf, and Kenneth J. Sher, 277–305. Washington: American Psychological Association. https://doi.org/10.1037/13619-016.

Ipeirotis, Panagiotis G. 2010. “Analyzing the Amazon Mechanical Turk Marketplace.” XRDS: Crossroads, The ACM Magazine for Students 17 (2): 16–21.

Irani, Lilly C., and M. Six Silberman. 2013. “Turkopticon: Interrupting Worker Invisibility in Amazon Mechanical Turk.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 611–20. New York, NY, USA: Association for Computing Machinery.

Irani, Lilly C., and M. Six Silberman. 2016. “Stories We Tell about Labor: Turkopticon and the Trouble with ‘Design’.” In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 4573–86. CHI ’16. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2858036.2858592.

Jarrahi, Mohammad Hossein, Gemma Newlands, Brian Butler, Saiph Savage, Christoph Lutz, Michael Dunn, and Steve Sawyer. 2021. “Flexible Work and Personal Digital Infrastructures.” Communications of the ACM 64 (7): 72–79.

Jarrahi, Mohammad Hossein, Will Sutherland, Sarah Beth Nelson, and Steve Sawyer. 2020. “Platformic Management, Boundary Resources for Gig Work, and Worker Autonomy.” Computer Supported Cooperative Work (CSCW) 29 (1): 153–89.

Jett, Quintus R, and Jennifer M George. 2003. “Work Interrupted: A Closer Look at the Role of Interruptions in Organizational Life.” Academy of Management Review 28 (3): 494–507.

Jones, Phil. 2021. Work Without the Worker: Labour in the Age of Platform Capitalism. London, UK: Verso Books.

Kalleberg, Arne L. 2011. Good Jobs, Bad Jobs: The Rise of Polarized and Precarious Employment Systems in the United States, 1970s-2000s. The American Sociological Association’s Rose Series in Sociology. New York, NY, US: Russell Sage Foundation.

Kaplan, Toni, Susumu Saito, Kotaro Hara, and Jeffrey Bigham. 2018. “Striving to Earn More: A Survey of Work Strategies and Tool Use Among Crowd Workers.” Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 6 (June): 70–78. https://doi.org/10.1609/hcomp.v6i1.13327.

Kässi, Otto, and Vili Lehdonvirta. 2018. “Online Labour Index: Measuring the Online Gig Economy for Policy and Research.” Technological Forecasting and Social Change 137: 241–48.

Kelly, Erin L, and Phyllis Moen. 2007. “Rethinking the Clockwork of Work: Why Schedule Control May Pay Off at Work and at Home.” Advances in Developing Human Resources 9 (4): 487–506.

Kirchberg, Daniela M, Robert A Roe, and Wendelien Van Eerde. 2015. “Polychronicity and Multitasking: A Diary Study at Work.” Human Performance 28 (2): 112–36.

Kittur, Aniket, Jeffrey V. Nickerson, Michael Bernstein, Elizabeth Gerber, Aaron Shaw, John Zimmerman, Matt Lease, and John Horton. 2013. “The Future of Crowd Work.” In Proceedings of the 2013 Conference on Computer Supported Cooperative Work, 1301–18. CSCW ’13. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2441776.2441923.

Kossek, Ellen Ernst, Brenda A. Lautsch, and Susan C. Eaton. 2005. “Flexibility Enactment Theory: Implications of Flexibility Type, Control, and Boundary Management for Work-Family Effectiveness.” In Work and Life Integration: Organizational, Cultural, and Individual Perspectives, 243–61. LEA’s Organization and Management Series. Mahwah, NJ, US: Lawrence Erlbaum Associates Publishers.

Kossek, Ellen Ernst, Brenda A. Lautsch, and Susan C. Eaton. 2006. “Telecommuting, Control, and Boundary Management: Correlates of Policy Use and Practice, Job Control, and Work-Family Effectiveness.” Journal of Vocational Behavior 68 (2): 347–67. https://doi.org/10.1016/j.jvb.2005.07.002.

Kuek, Siou Chew, Cecilia Paradi-Guilford, Toks Fayomi, Saori Imaizumi, Panos Ipeirotis, Patricia Pina, and Manpreet Singh. 2015. “The Global Opportunity in Online Outsourcing.” World Bank.

U.S. Department of Labor. 2023. “Minimum Paid Rest Period Requirements Under State Law for Adult Employees in Private Sector.”

Larson, Reed, and Mihaly Csikszentmihalyi. 2014. “The Experience Sampling Method.” In Flow and the Foundations of Positive Psychology: The Collected Works of Mihaly Csikszentmihalyi, 21–34. Dordrecht: Springer Netherlands. https://doi.org/10.1007/978-94-017-9088-8_2.

Lascau, Laura, Sandy J. J. Gould, Duncan P. Brumby, and Anna L. Cox. 2022. “Crowdworkers’ Temporal Flexibility Is Being Traded for the Convenience of Requesters Through 19 ‘Invisible Mechanisms’ Employed by Crowdworking Platforms: A Comparative Analysis Study of Nine Platforms.” In CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI ’22 Extended Abstracts), 1–8. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3491101.3519629.

Lascau, Laura, Sandy J. J. Gould, Anna L. Cox, Elizaveta Karmannaya, and Duncan P. Brumby. 2019. “Monotasking or Multitasking: Designing for Crowdworkers’ Preferences.” In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–14. CHI ’19. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3290605.3300649.

Lasecki, Walter S., Jeffrey M. Rzeszotarski, Adam Marcus, and Jeffrey P. Bigham. 2015. “The Effects of Sequence and Delay on Crowd Work.” In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 1375–78. CHI ’15. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2702123.2702594.

Lee, Min Kyung, Daniel Kusbit, Evan Metsky, and Laura Dabbish. 2015. “Working with Machines: The Impact of Algorithmic and Data-Driven Management on Human Workers.” In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 1603–12. CHI ’15. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2702123.2702548.

Lehdonvirta, Vili. 2018. “Flexibility in the Gig Economy: Managing Time on Three Online Piecework Platforms.” New Technology, Work and Employment 33 (1): 13–29.

Lindfors, P. M., Tarja Heponiemi, O. A. Meretoja, T. J. Leino, and M. J. Elovainio. 2009. “Mitigating On-Call Symptoms Through Organizational Justice and Job Control: A Cross-Sectional Study Among Finnish Anesthesiologists.” Acta Anaesthesiologica Scandinavica 53 (9): 1138–44.

Lyft. 2019. “How to Give a Lyft Ride.” Lyft. https://web.archive.org/web/20230506210913/https://www.lyft.com/hub/posts/how-to-give-a-ride.

Mark, Gloria, Daniela Gudith, and Ulrich Klocke. 2008. “The Cost of Interrupted Work: More Speed and Stress.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 107–10. CHI ’08. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/1357054.1357072.

Mark, Gloria, Stephen Voida, and Armand Cardello. 2012. “‘A Pace Not Dictated by Electrons’: An Empirical Study of Work Without Email.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 555–64. CHI ’12. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2207676.2207754.

Mas, Alexandre, and Amanda Pallais. 2020. “Alternative Work Arrangements.” Annual Review of Economics 12 (1): 631–58. https://doi.org/10.1146/annurev-economics-022020-032512.

Mendel, Tamir, and Eran Toch. 2017. “Susceptibility to Social Influence of Privacy Behaviors: Peer Versus Authoritative Sources.” In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, 581–93. CSCW ’17. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2998181.2998323.

Monk, Christopher A, J Gregory Trafton, and Deborah A Boehm-Davis. 2008. “The Effect of Interruption Duration and Demand on Resuming Suspended Goals.” Journal of Experimental Psychology: Applied 14 (4): 299.

Moore, Thomas. 2017. The Disposable Work Force: Worker Displacement and Employment Instability in America. New York: Routledge. https://doi.org/10.4324/9781351328364.

Morris, Sarah, Alun Humphrey, Pablo Cabrera Alvarez, and Olivia D’Lima. 2016. “The UK Time Diary Study 2014–2015 Technical Report.” NatCen Social Research.

Murgia, Madhumita. 2019. “AI’s New Workforce: The Data-Labelling Industry Spreads Globally.” Financial Times, July.

Naruse, Takashi, Atsuko Taguchi, Yuki Kuwahara, Satoko Nagata, Izumi Watai, and Sachiyo Murashima. 2012. “Relationship Between Perceived Time Pressure During Visits and Burnout Among Home Visiting Nurses in Japan.” Japan Journal of Nursing Science 9 (2): 185–94.

Newman, William M. 2004. “Busy Days: Exposing Temporal Metrics, Problems and Elasticities Through Diary Studies.” In CHI 2004 Workshop on Temporal Issues in Work.

Nokia Bell Labs. 2023. “Introducing Nokia’s 6 Pillars of Responsible AI.” https://www.bell-labs.com/research-innovation/ai-software-systems/responsible-ai/.

Orben, Amy, and Andrew K. Przybylski. 2019. “Screens, Teens, and Psychological Well-Being: Evidence from Three Time-Use-Diary Studies.” Psychological Science 30 (5): 682–96. https://doi.org/10.1177/0956797619830329.

International Labour Organization. 2016. “Non-Standard Employment Around the World: Understanding Challenges, Shaping Prospects.” Geneva: ILO.

Rajaratnam, Shantha M. W., and Josephine Arendt. 2001. “Health in a 24-h Society.” The Lancet 358 (9286): 999–1005.

Renaud, Karen, Judith Ramsay, and Mario Hair. 2006. “‘You’ve Got e-Mail!’ … Shall I Deal with It Now? Electronic Mail from the Recipient’s Perspective.” International Journal of Human-Computer Interaction 21 (3): 313–32.

Rosenblat, Alex. 2018. Uberland: How Algorithms Are Rewriting the Rules of Work. Oakland, CA, USA: University of California Press.

Ross, Joel, Lilly Irani, M. Six Silberman, Andrew Zaldivar, and Bill Tomlinson. 2010. “Who Are the Crowdworkers? Shifting Demographics in Mechanical Turk.” In CHI ’10 Extended Abstracts on Human Factors in Computing Systems, 2863–72. CHI EA ’10. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/1753846.1753873.

Rzeszotarski, Jeffrey, Ed Chi, Praveen Paritosh, and Peng Dai. 2013. “Inserting Micro-Breaks into Crowdsourcing Workflows.” Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 1 (November): 62–63. https://doi.org/10.1609/hcomp.v1i1.13127.

Salehi, Niloufar, Lilly C. Irani, Michael S. Bernstein, Ali Alkhatib, Eva Ogbe, Kristy Milland, and Clickhappier. 2015. “We Are Dynamo: Overcoming Stalling and Friction in Collective Action for Crowd Workers.” In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 1621–30. CHI ’15. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2702123.2702508.

Savage, Saiph, Chun Wei Chiang, Susumu Saito, Carlos Toxtli, and Jeffrey Bigham. 2020. “Becoming the Super Turker: Increasing Wages via a Strategy from High Earning Workers.” In Proceedings of the Web Conference 2020, 1241–52. New York, NY, USA: Association for Computing Machinery.

Savage, Saiph, and Mohammad Jarrahi. 2020. “Solidarity and A.I. for Transitioning to Crowd Work During COVID-19.”

Schieman, Scott, and Paul Glavin. 2008. “Trouble at the Border?: Gender, Flexibility at Work, and the Work-Home Interface.” Social Problems 55 (4): 590–611.

Schneider, Daniel, and Kristen Harknett. 2019. “Consequences of Routine Work-Schedule Instability for Worker Health and Well-Being.” American Sociological Review 84 (1): 82–114. https://doi.org/10.1177/0003122418823184.

Schwartz, Oscar. 2019. “Untold History of AI: How Amazon’s Mechanical Turkers Got Squeezed Inside the Machine.” IEEE Spectrum. https://web.archive.org/web/20230621221140/https://spectrum.ieee.org/untold-history-of-ai-mechanical-turk-revisited-tktkt.

Sonnentag, Sabine, Iris Kuttler, and Charlotte Fritz. 2010. “Job Stressors, Emotional Exhaustion, and Need for Recovery: A Multi-Source Study on the Benefits of Psychological Detachment.” Journal of Vocational Behavior 76 (3): 355–65. https://doi.org/10.1016/j.jvb.2009.06.005.

Star, Susan Leigh, and Anselm Strauss. 1999. “Layers of Silence, Arenas of Voice: The Ecology of Visible and Invisible Work.” Computer Supported Cooperative Work (CSCW) 8 (1): 9–30. https://doi.org/10.1023/A:1008651105359.

Statista. 2023. “Global Inflation Rate from 2000 to 2021, with Forecasts Until 2027.” https://web.archive.org/web/20230509231919/https://www.statista.com/statistics/256598/global-inflation-rate-compared-to-previous-year/.

Sundararajan, Arun. 2016. The Sharing Economy: The End of Employment and the Rise of Crowd-Based Capitalism. Cambridge, MA, USA: The MIT Press. https://www.jstor.org/stable/j.ctt1c2cqh3.

Supreme Court of the United Kingdom. 2021. “Uber BV and Others (Appellants) v Aslam and Others (Respondents).”

Teevan, Jaime, Eytan Adar, Rosie Jones, and Michael A. S. Potts. 2007. “Information Re-Retrieval: Repeat Queries in Yahoo’s Logs.” In Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 151–58. SIGIR ’07. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/1277741.1277770.

Theorell, Töres, R. A. Karasek, and P. Eneroth. 1990. “Job Strain Variations in Relation to Plasma Testosterone Fluctuations in Working Men: A Longitudinal Study.” Journal of Internal Medicine 227 (1): 31–36.

Toxtli, Carlos, Siddharth Suri, and Saiph Savage. 2021. “Quantifying the Invisible Labor in Crowd Work.” Proc. ACM Hum.-Comput. Interact. 5 (CSCW2). https://doi.org/10.1145/3476060.

Uber. 2023. “Getting a Trip Request | Driving & Delivering - Uber Help.” Uber. https://web.archive.org/web/20220218234056/https://help.uber.com/driving-and-delivering/article/getting-a-trip-request?nodeId=e7228ac8-7c7f-4ad6-b120-086d39f2c94c.

Uber UK. 2018. “Introducing Our New Driver Hours Policy.” https://web.archive.org/web/20230515171137/https://www.uber.com/en-GB/newsroom/introducing-new-driver-hours-policy/.

van Berkel, Niels, Denzil Ferreira, and Vassilis Kostakos. 2017. “The Experience Sampling Method on Mobile Devices.” ACM Computing Surveys 50 (6). https://doi.org/10.1145/3123988.

Vogel, Matthias, Tanja Braungardt, Wolfgang Meyer, and Wolfgang Schneider. 2012. “The Effects of Shift Work on Physical and Mental Health.” Journal of Neural Transmission 119 (10): 1121–32.

Wang, Hao-Chuan, Tau-Heng Yeo, Syavash Nobarany, and Gary Hsieh. 2015. “Problem with Cross-Cultural Comparison of User-Generated Ratings on Mechanical Turk.” In Proceedings of the Third International Symposium of Chinese CHI, 9–12. Chinese CHI ’15. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2739999.2740001.

Webster, Juliet. 2016. “Microworkers of the Gig Economy: Separate and Precarious.” New Labor Forum 25 (3): 56–64. https://doi.org/10.1177/1095796016661511.

Wheatley, Daniel. 2017. “Autonomy in Paid Work and Employee Subjective Well-Being.” Work and Occupations 44 (3): 296–328. https://doi.org/10.1177/0730888417697232.

Whiting, Mark E., Grant Hugh, and Michael S. Bernstein. 2019. “Fair Work: Crowd Work Minimum Wage with One Line of Code.” In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 7:197–206. Washington, DC, US: AAAI. https://doi.org/10.1609/hcomp.v7i1.5283.

Whittaker, Steve, Tara Matthews, Julian Cerruti, Hernan Badenes, and John Tang. 2011. “Am I Wasting My Time Organizing Email? A Study of Email Refinding.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 3449–58. CHI ’11. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/1978942.1979457.

Wilkins, Denise J., Srihari Hulikal Muralidhar, Max Meijer, Laura Lascau, and Siân Lindley. 2022. “Gigified Knowledge Work: Understanding Knowledge Gaps When Knowledge Work and on-Demand Work Intersect.” Proc. ACM Hum.-Comput. Interact. 6 (CSCW1). https://doi.org/10.1145/3512940.

Williams, Alex C., Gloria Mark, Kristy Milland, Edward Lank, and Edith Law. 2019. “The Perpetual Work Life of Crowdworkers: How Tooling Practices Increase Fragmentation in Crowdwork.” Proc. ACM Hum.-Comput. Interact. 3 (CSCW). https://doi.org/10.1145/3359126.

Wood, Alex J., Mark Graham, Vili Lehdonvirta, and Isis Hjorth. 2019. “Good Gig, Bad Gig: Autonomy and Algorithmic Control in the Global Gig Economy.” Work, Employment and Society 33 (1): 56–75. https://doi.org/10.1177/0950017018785616.

Woodcock, Jamie, and Mark Graham. 2019. The Gig Economy: A Critical Introduction. Cambridge, UK: Polity Press.

World Bank. 2016. World Development Report 2016: Digital Dividends. Washington, DC, USA: World Bank. https://doi.org/10.1596/978-1-4648-0671-1.

Yin, Ming, Siddharth Suri, and Mary L. Gray. 2018. “Running Out of Time: The Impact and Value of Flexibility in On-Demand Crowdwork.” In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1–11. CHI ’18. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3173574.3174004.

Zukalova, Zaneta. 2020. “Shepherd’s Office. The Politics of Digital Labor and Its Impact on the Amazon Mechanical Turk Workers.” Media-N 16 (1): 99–115.

Zyskowski, Kathryn, Meredith Ringel Morris, Jeffrey P. Bigham, Mary L. Gray, and Shaun K. Kane. 2015. “Accessible Crowdwork? Understanding the Value in and Challenge of Microtask Employment for People with Disabilities.” In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, 1682–93. CSCW ’15. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2675133.2675158.


  1. We would like to thank one of the anonymous reviewers for these excellent examples of how we could increase the generalisability of the results.