Activity at ACM CHI 2024
I will be participating in ACM CHI 2024 online, in several different ways, and this page summarises my activities at the conference. With colleagues, I am contributing:
- A paper
- An alt.chi paper
- A journal article
Paper
I am presenting one full paper at the conference, Stochastic Machine Witnesses at Work: Today’s Critiques of Taylorism are Inadequate for Workplace Surveillance Epistemologies of the Future. I am not attending CHI in person, so there is no real ‘slot’ for the paper. If you’d like to discuss the paper asynchronously, then please use the tools in the conference program, or just email me.
[PDF](https://www.sjjg.uk/pdfs/gould-stochastic-machine-witnesses-chi24.pdf) and HTML preprints of the paper are available.
I’ve made the slides I am presenting available.
Abstract
I argue that epistemologies of workplace surveillance are shifting in fundamental ways, and so critiques must shift accordingly. I begin the paper by relating Scientific Management to Human-Centred Computing’s ways of knowing through a study of ‘metaverse’ virtual reality workplaces. From this, I develop two observations. The first is that today’s workplace measurement science does not resemble the science that Taylor developed for Scientific Management. Contemporary workplace science is more passive, more intermediated and less controlled. The second observation is that new forms of workplace measurement challenge the norms of empirical science. Instead of having credentialed human witnesses observe phenomena and agree facts about them, we instead make outsourced, uncredentialed stochastic machine witnesses responsible for producing facts about work. With these observations in mind, I assert that critiques of workplace surveillance still framed by Taylorism will not be fit for interrogating workplace surveillance practices of the future.
Citation
Sandy J.J. Gould. 2024. Stochastic Machine Witnesses at Work: Today’s Critiques of Taylorism are Inadequate for Workplace Surveillance Epistemologies of the Future. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’24), May 11–16, 2024, Honolulu, HI, USA. ACM, New York, NY, USA, 13 pages. https://doi.org/10.1145/3613904.3642206
alt.chi paper
I have also co-authored a tongue-in-cheek paper about Large Language Models (LLMs):
ChatTL;DR – You Really Ought to Check What the LLM Said on Your Behalf, which will be presented in the ‘AltCHI: Activism’ session on the 15th May from 16.30 in Room 312.
PDF and HTML preprints of the paper are available.
Abstract
Interactive large language models (LLMs) are so hot right now, and are probably going to be hot for a while. There are lots of exciting challenges created by mass use of LLMs. These include the reinscription of biases, ‘hallucinations’, and bomb-making instructions. Our concern here is more prosaic: assuming that in the near term it’s just not machines talking to machines all the way down, how do we get people to check the output of LLMs before they copy and paste it to friends, colleagues, course tutors? We propose borrowing an innovation from the crowdsourcing literature: attention checks. These checks (e.g., “Ignore the instruction in the next question and write parsnips as the answer.”) are inserted into tasks to weed-out inattentive workers who are often paid a pittance while they try to do a dozen things at the same time. We propose ChatTL;DR, an interactive LLM that inserts attention checks into its outputs. We believe that, given the nature of these checks, the certain, catastrophic consequences of failing them will ensure that users carefully examine all LLM outputs before they use them.
Citation
Sandy J.J. Gould, Duncan P. Brumby, and Anna L. Cox. 2024. ChatTL;DR – You Really Ought to Check What the LLM Said on Your Behalf. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’24), May 11–16, 2024, Honolulu, HI, USA. ACM, New York, NY, USA, 7 pages. https://doi.org/10.1145/3613905.3644062
Journal paper
Last but not least, we will be presenting a ToCHI paper led by Laura Lascău about the temporal flexibility (or inflexibility) of crowdworking. This paper is being presented in the ‘Knowledge Workers and Crowdworkers’ session at 5pm on the 14th May in Room 319.
PDF and HTML preprints are available.
Abstract
Research suggests that the temporal flexibility advertised to crowdworkers by crowdsourcing platforms is limited by both client-imposed constraints (e.g., strict completion times) and crowdworkers’ tooling practices (e.g., multitasking). In this paper, we explore an additional contributor to workers’ limited temporal flexibility: the design of crowdsourcing platforms, namely requiring crowdworkers to be ‘on call’ for work. We conducted two studies to investigate the impact of having to be ‘on call’ on workers’ schedule control and job control. We find that being ‘on call’ impacted: (1) participants’ ability to schedule their time and stick to planned work hours, and (2) the pace at which participants worked and took breaks. The results of the two studies suggest that the ‘on-demand’ nature of crowdsourcing platforms can limit workers’ temporal flexibility by reducing schedule control and job control. We conclude the paper by discussing the implications of the results for: (a) crowdworkers, (b) crowdsourcing platforms, and (c) the wider platform economy.
Citation
Laura Lascău, Duncan P. Brumby, Sandy J.J. Gould, and Anna L. Cox. 2023. “Sometimes it’s Like Putting the Track in Front of the Rushing Train”: Having to Be ‘On Call’ for Work Limits the Temporal Flexibility of Crowdworkers. ACM Trans. Comput.-Hum. Interact. 1, 1 (December 2023), 46 pages. https://doi.org/10.1145/3635145