

A pertinent confession: I graduated from high school with the help of a machine. It was called a slide rule, a logarithmic device for solving daunting computations. Before digital calculators, it was to engineers what a stethoscope is to doctors. It was such a definitive mark of their calling that the autobiography of the aeronautical engineer and popular fiction writer Nevil Shute was titled Slide Rule. In school, my arithmetic was execrable. Calculators were made for people like me, but were banned from classrooms at the time. However, slide rules were allowed, and one borrowed from my father got me through the exam.
Why was using a digital calculator regarded as cheating, but using a slide rule was not, though they perform exactly the same function? Was it moral unease about the legitimate fruits of labour? Did slide rules feel more legitimate because some manual labour is involved? You have to slide them, as the name suggests, and match scales to read off the results. Calculators are child’s play in comparison, and thank god they have entered the classroom. Attitudes to work have changed―or have they?
Artificial intelligence is now inspiring the kind of moral panic the calculator once did. Hachette has cancelled the US and UK release of the horror novel Shy Girl by new writer Mia Ballard following online accusations that AI was used in some passages. The publisher backed off after the popular YouTube book channel Frankie’s Shelf dismissed the book as “AI slop”. Ballard clarified that a collaborator had used AI tools on the text of the self-published version, but it still looked like cheating.
Ironically, mass-market publishers have always encouraged cheating at scale by humans, with the consequence that the autobiography sections of bookstores are populated largely by the work of ghostwriters. Did you seriously imagine that sportspeople and entrepreneurs can write engaging books? Their talents lie elsewhere.
Besides, until publishing houses burdened their editors with commercial concerns, editors had the space to help authors refine their work, and they even collaborated in the creative process, which is rather like AI assistance. The most famous beneficiary of such handholding was Raymond Carver, whose minimalist style was chiselled into shape by his editor, Gordon Lish, who cut stories down to the bone, removing descriptions, dialogue and detail to give the author his unique voice. Carver protested for fear of losing the voice he was born with, but gave in when he saw that it worked.
Many famous authors have profited from editorial nudging. Scribner’s editor Maxwell Perkins, who actively sought out young writers he could influence, contributed materially to the success of F Scott Fitzgerald (he sharpened The Great Gatsby), Ernest Hemingway (he preserved the linguistic simplicity in The Sun Also Rises), Thomas Wolfe (machete-scale cuts) and Harper Lee (threaded stories into a novel with a point).
Now, machines have stepped into the arena once dominated by passionate literary editors. Earlier this year, romance author Coral Hart got herself some extra mileage with a New York Times story which focused on her AI-assisted workflow―in the course of which she produced a new novel in 45 minutes. She now runs online workshops which teach new romance writers AI hacks with which anyone can create ‘slop’ in hours, while the competition takes months to write books that are probably just as sloppy. Isn’t that clever? Genre readers value predictability over originality. They like a classical plot replayed well, whether it’s the Ramlila or the hugely successful Commando war comics, which reused cels and storylines for decades, and no one complained because they wanted repetition.
The Hart story echoes the findings of the new American AI Jobs Risk Index from the Fletcher School at Tufts University, which takes a nuanced approach to assessing risk, setting it apart from the lay debate on AI, polarised as it is between hallelujahs to machine efficiency and apocalyptic anxieties. Its scale of AI exposure follows received wisdom―web and digital interface designers are the most exposed to AI, and manual workers like miners the least. But it also recognises that risk and opportunity are interrelated―the roles most vulnerable to job losses also stand to benefit most from AI augmentation and complementarity, which need not be the same thing.
The study offers a sliding scale, in which risk is weighted by clusters of factors. Bhaskar Chakravorti, Dean of Global Business at Tufts, who heads the university’s Digital Planet programme, has emphasised that the ideal workers of the future will not be manual practitioners like miners and plumbers, but people who combine domain expertise and critical thinking with AI proficiency.
Significantly, the index brings policy clarity by mapping the risk of job losses to physical geography―plotting the hubs of innovation and knowledge industries in the US where machine functions may swiftly replace highly specialised humans. In academic hubs, even minor job losses may represent a large proportion of local employment.
India, which hopes to become an AI hub, should map the threat perception as if it were facing a military challenge. In all crises, the first step to relief delivery is an accurate ordnance map. As AI alters the landscape of work in unanticipated ways, mapping will become as essential for social planners as the slide rule once was to engineers.
Pratik Kanjilal | SPEAKEASY | Senior Fellow, Henry J Leir Institute of Migration and Human Security, Fletcher School, Tufts University
(Views are personal)
(Tweets @pratik_k)