Making management great again

Like Hemingway’s bankruptcy, the collapse of the conventional management model has come in two ways – ‘gradually, then suddenly.’

In obvious decay since the financial crash of 2008 – although the rot had set in long before that – management as we know it has finally been finished off by covid.

It’s as if a switch has been turned. As Gary Hamel reflected at this year’s Global Peter Drucker Forum at the end of October, everything we thought we knew about management has been derived from observation of organisations that came into being in the first Industrial Revolution. It was perfected against the bureaucratic template of the early 20th century and locked into place since the 1970s by the toxic doctrine of shareholder value.

The pseudo-scientific pretensions of this technocratic, numbers-driven and inhuman model have been stripped bare by a pandemic that has systematically inverted the values it embodied. Human cooperation has been of more use than competition – and should have been pursued much more at international level. Centralisation and scale have been no match for a nimble disease which strikes one person at a time (witness the failure and waste in our huge outsourced testing and test-and-trace centres); and above all, it has reasserted in the most basic of terms the centrality of people.

One of the flaws of the exclusive focus on shareholders is that it disables companies’ immune systems, blinding them to their own long-term interest. Covid reminds companies that they need people to be employed and paid not just to solve problems and make stuff, but also to buy it. Absent people with jobs, governments have to invent surrogates in enormous stimulus and recovery programmes, as now. Welcome back, employment policy.

At the same time, the pandemic underlines how dangerously out of kilter we have allowed our value system to become. As Mark Carney is exploring in his current Reith lectures, the market as currently constituted overvalues the present at the expense of the future, and undervalues essential work like care, transport and other basic service to the benefit of a host of inessential ones. Hence our bullshit economies, built on work that is often not worth doing. This too is due for a reset.

But it’s at the company level that the divergence between old and new is most spectacular. The recent online Drucker Forum got off to an electrifying start (I mean that) by showcasing a number of companies that, unlike struggling competitors, are sailing unscathed through covid not merely despite ignoring conventional management practices, but because they ignore them.

Among the five presenting firms – Nucor (US), Buurtzorg (Netherlands), Michelin (France), Handelsbanken (Sweden) and GE Appliances (Sino-US) – only Buurtzorg, the Dutch nurse-run healthcare operator, is a start-up, the rest being solid corporate citizens of many years’ standing. They prove that it is perfectly possible to ‘transform’ – to use a catastrophically traduced word – if, but only if, managers throw off the blindfold of the old and devote as much attention to management innovation as they do to product and technology development, attempting to make their companies as inventive and creative as their employees are.

Currently, that’s a big ask. At a time when we need to harness every scrap of human ingenuity – and when three new vaccines stand as shining testimony to what can be achieved when that happens – it should be a global emergency that 80 per cent of workers think their opinions are disregarded at work; 70 per cent of jobs require little or no ingenuity; and just 18 per cent of workers are engaged at work, the rest present physically but absent (at best) mentally. Baldly, companies in their present form squander much more human capacity than they use, or than we can afford.

As amply shown by the Forum five, it doesn’t have to be like that. Buurtzorg and Handelsbanken, the Swedish bank, are already rightly well known. Buurtzorg, now 15,000 nurses strong, continues to attract a further 100 recruits a month, and as it expands is starting to transform the Dutch healthcare model from the inside. Handelsbanken’s decentralised, relationship-based banking model ensures that it can respond instantly to its customers’ changing circumstances – one reason it has outperformed its Swedish rivals for the 49th year in succession.

As for the others, disparate as they are in culture, history and nationality, they are united in an unshakable belief that success is driven by people. This is absolutely nothing to do with being ‘nice’. It’s the conviction that ‘27,000 minds are more powerful than any single one,’ in the words of former Nucor CEO John Ferriola. Ferriola talks of a ‘chain of trust’ in which top management’s job is to build teams rather than products, and then provide the environment in which they can focus single-mindedly on the effectiveness that makes Nucor ‘the safest, highest quality, lowest cost, most productive and most profitable steel and steel products company in the world’.

‘Never underestimate the casual genius in every human being’, says Florent Menegaux, CEO of 130-year-old Michelin, the French tyre-maker – while admitting that most of the time corporate bullshit prevents people from using it. Starting from small experiments, Michelin is now riding an upsurge of frontline improvement welling up from below. Menegaux sees his mission as taking the stress out of operational pressures – including on middle managers – and feeding energy back. The manager takes care of the team; the team takes care of everything else, as one slogan neatly puts it.

At GE Appliances the divergence from management’s mainstream is even more dramatic. When the traditionally run white-goods maker was sold to China’s Haier in 2016 the culture shock was colossal. It didn’t realise it, but the company ‘was slowly dying’, in the words of CEO Kevin Nolan, strangled by its 100-year past. Now broken up into ever smaller micro-enterprises, a re-energised GEA is thriving like never before. ‘We need more CEOs!’ says Nolan. ‘It sounds counterintuitive, but you have to get more CEOs within your company. You have to let people control their future and their decision-making to unlock their creativity.’

The reason why these companies have done so well during the pandemic is blindingly clear. Simply put, their decentralised structure and carefully fostered culture of trust mean that their people don’t have to wait for orders from above – they know what to do and do it. At Handelsbanken, local knowledge and branch responsibility for all lending translates into a fraction of the bad loans of rivals during the crisis. GEA’s ambition of ‘zero-distance’ formalises its recognition that closing the gap between the enterprise and its true boss, the customer, is a key metric of success. Nolan notes that without central direction GEA’s micro-enterprises were solving issues daily ‘at the speed of the market’; under covid they see the future as brighter than at any time in the company’s history.

To achieve zero distance with the customer, omission – eliminating what gets in the way – becomes as important as commission. What gets in the way is management. Buurtzorg has a slide entitled ‘what we don’t do’ that lists ‘management meetings, policy notes, strategic documents, HR strategies, year plans, and other useless things’. The latter include budgets and intermediate goals like targets, two things that Handelsbanken also eschews. Threats and opportunities don’t come in 12-month packages, so why should decisions?

As the technology of human accomplishment, ‘management sets the outer limits on what we can do as a species. It is humankind’s most important technology’, Hamel noted at the Drucker Forum, channelling Drucker himself. After a long pause, companies like those described (among many others) are beginning to test those limits, as they do so redefining management’s fundamental laws along human rather than economic metrics.

Unlike sheer physical size, trust and decentralisation appear to scale without diminishing returns. Effective relationships trump efficient transactions. Companies succeed by working with the grain of the ecosystems they operate in, not against them. Zero (response time, distance from the customer, management itself) is often the best score. IT in the background, not the foreground. Having spent the last 40 years trying to eliminate all traces of the human, companies are belatedly beginning to realise that it’s when they betray the human that things start to go wrong. With that established, perhaps management at last has a chance to live up to the gurus’ claims for it.

Wasting a good crisis?

Never waste a good crisis. That glib slogan is heard less this time round. Not surprising, perhaps, given what happened after the financial crash a decade ago – which, after the dust had settled, consisted of a return to business as usual, only with added austerity. That didn’t turn out so well for anyone who wasn’t part of the global 1 per cent, and the delayed reaction brought us Brexit, Trump and the election of Boris Johnson.

So will today’s pandemic crisis be more productive? Nine months on from the first coronavirus fatality, the signs aren’t good. After an initial burst of good behaviour (research collaboration among pharma groups, repurposing of manufacturing plants to turn out medical supplies, a few bosses forgoing raises), firms are in danger of reverting to bad old habits instead of taking the opportunity to institute better new ones.

Take working from home. You might think that this was a rare win-win. Employers and workers both get to cut costs. Workers like it. In a recent survey of 10,000 European and Middle Eastern workers, 87 per cent said they wanted a choice over their place of work. Corporates, meanwhile, have discovered to their relief and surprise that under WFH not only does office workers’ productivity not suffer – in many cases it goes up. Unilever and Google found that at home their office workers were putting in more time than before, not less.

The unspoken corollary of that, of course, is that the office environment in general, and management in particular, add no value to employees’ work; rather the reverse. This isn’t new. The late Peter Drucker used to complain that too much management consisted of preventing people from doing their work, and advised every company to subject all their work processes to a zero-budgeting exercise every few years to strip out the friction-generating clutter and grit. 

Alas, rather than take the lesson to heart, managers have swiftly reverted to their default setting of control. Witness soaring demand for, and burgeoning start-ups in the field of, what are euphemistically termed ‘collaboration tools’: software which as well as collaboration also facilitates remote monitoring of computer keystrokes, websites visited, pauses taken and even infrared hotspots pinpointing staff providing the ‘pivotal point that people go to for information and answers’ (and by the same token presumably those who don’t). Bizarrely, apps are also springing up that mimic the background noise of a busy office, or even the ‘gentle chatter’ of a Danish coffee house. 

As Rana Foroohar points out in her latest book, anything that can be used for surveillance will be, sooner rather than later. That’s because behind the drive to control lies another obsession: reducing cost. Understandable as that is in today’s hard times, it is leading to behaviour that spectacularly misses the point. À la Drucker, the crisis would be the perfect moment to go back to ground zero and redesign the work to meet current and projected demand in the light of the new conditions, including WFH, social distancing and other consequences of covid.

But no. Spurred on by the big consultants, companies instead are splurging on ‘digital transformation’. That has led to a dramatic decline in customer service as the punters are peremptorily herded online whether they like it or not, often with no recourse to human contact. Pleading the crisis, companies resort to rationing – ‘due to covid, we are experiencing exceptional call volumes: expect wait times of more than one hour’ – directing callers to FAQs online, or simply deleting any other means of contact. In one prominent NHS operation, sad to relate, the phone is permanently off the hook, a broken email link has never been repaired, and there seems to be no means of changing an urgent appointment. The cost in terms of frustration, anxiety and wasted time for citizens and customers is off the scale, while the build-up of failure demand is invisible to managers who are probably congratulating themselves on having cut their (comparatively irrelevant) transaction costs. If anyone was wondering where productivity goes in these ‘transformations’, look no further: it lies in a grave marked ‘digital services’.

The other favourite corporate cost-cutting initiative is to chop full-time staff in favour of agency or ‘contingent’ workers. Around 5m people in the UK were in mostly low-paid, precarious employment even before the pandemic hit, and that total will have surged over the last few months. As the FT’s Sarah O’Connor recently noted, the accepted risk-reward ratio in finance – the higher the risk the higher the reward – is reversed in today’s labour market: a truth rubbed in by the news that the boards of a number of US companies have begun quietly to adjust bonus formulae to compensate CEOs for ‘covid-related’ loss of earnings. Good luck finding revisions in the opposite direction to adjust for undeserved strokes of good fortune.

All of these things involve choices. Not all companies are choosing to recalibrate CEO pay. Companies that signed the US Business Roundtable’s historic 2019 retreat from shareholder primacy seem to be behaving better towards their employees in the pandemic than others. In the UK, Aviva and Standard Life Aberdeen have signed up to a ‘living hours’ agreement that guarantees shift patterns (and payment) for workers four weeks ahead. Companies like these, and others that have chosen to maintain or improve levels of customer support (food retailers, including small ones, John Lewis, Waterstones), may gain in the long term when things have returned to something nearer the previous normal.

But these are exceptions. And the real test of an organisation’s purpose is not being nice to stakeholders. It is bending all its energy and ingenuity to challenge the seemingly inevitable and find new ways of fulfilling what it exists to do. Consider this. When all the world’s theatres and cultural festivals were shutting down – the New York Met won’t reopen until at least autumn 2021 – after fierce debates, the Salzburg Music Festival, the largest of its kind, resolved to defy the odds and go ahead with its 100-year anniversary event in June. This involved starting from scratch: in double-quick time developing a new programme, preparing a distancing and safety strategy that has become a model for others, reimbursing 180,000 previously sold tickets and selling 76,000 new ones, quite apart from the normal artistic work. ‘We were deeply conscious of our dual responsibility as both a source of meaning and employer,’ says Salzburg president Helga Rabl-Stadler. The result of Salzburg’s courage: ‘a sold-out festival, a giant step forward in terms of digitization, and a thousand good ideas on how to offer our greatest asset, regular customers from 80 countries around the world, faster and even better service.’

Well: just encore.

D for dunce: the great exam failure

The current educational algo-debacle is an exquisitely English cock-up*: a slow-motion train wreck that is the product of 30 years of educational initiatives, reorganisations and adjustments to alleviate the problems generated by previous changes, all piled up on each other without consistent architecture or, needless to say, political consensus. Finally this year Covid nudged it over the cliff of its own contradictions.

Our education system is a perfectly designed generator of grade inflation. Like executive pay under shareholder capitalism, it’s an escalator engineered to move in one direction only: up.

This year’s events are the culmination of a story that began in 1992 when 38 polytechnics were elevated to university status, nearly doubling the overall estate. Growth has continued ever since: there are now no fewer than 132 UK universities, with a student body that has expanded to match. Nineteen-seventy’s total of 200,000 students had mushroomed to almost 2m by 2019.

At a stroke, higher education morphed from an elite to a mass education system. Unfortunately, having willed the end – naturally with no diminution of quality – the government neglected to provide the means to bridge the gulf in standards, judged on traditional measures, between the old and the new. Accurately reflecting the gulf in respective resources, it was, and in some cases remains, large.

Real levelling up would have required a massive injection of resources into the new-borns. Instead, as ever, the government opted for a sleight of hand whose costs would only surface later. Traditionally, to maintain standards new universities underwent an adjustment period during which they administered degrees set by longer-established institutions. By contrast, the post-1992 cohort were granted degree-awarding powers from the start. There was no way a first from an under-resourced new university could be worth the same as a first from a top established one, but at a stroke the difference was made invisible – except to external examiners, who are often still pressured to verify marks that they know are too high or, less often, too low, depending on where they come from.

Tuition fees did provide universities with extra resources. But they were a two-edged sword. Particularly after 2010, when they jumped to £9,000, they set in motion a programme of marketisation that, as the government intended, turned students from learners into consumers, a process encouraged by the creation of (albeit unofficial) league tables and increasingly important student satisfaction surveys. By the same token, universities became fierce competitors for their custom. Much of the extra resource was diverted into marketing, facilities and highly paid administration, while students began to argue that shelling out £9,000 a year entitled them to a good degree and the teaching that ensured they got it. Lecturers and their employers had strong incentives to oblige. The casualty: a steady inflation of students’ grades.

As part of the supply chain, schools have naturally been sucked into the upward vortex. They were also subject to strong pressures of their own. Exam boards are competing commercial entities, and schools exploit discreet exam arbitrage between them. Moreover, education was an early testing ground for the New Public Management (NPM), the drive to sharpen up the public sector by subjecting it to private-sector methods and techniques. Unsurprisingly, the regime of targets, inspection, league tables and fierce performance management (‘targets and terror’) had the same dismal effects as in other public services such as health. Particularly harmful were the inducements for heads and teachers to play the numbers game by quietly dropping ‘harder’ subjects, excluding poor performers and ‘teaching to the test’ – a classic illustration of the folly of making professionals accountable to ministers and inspectors rather than those they directly serve. While it is widely accepted that many schools, in London for example, have improved, the cost has been high in the shape, again, of grade inflation.

Briefly, consider that the percentage of top ‘A’ passes at A-level went up from 12 per cent in 1990 to 26 per cent in 2017, and ‘A’s plus ‘B’s from 27 to 55 per cent. The upward progression in degrees is even more marked. As a New Statesman article put it last year, ‘British universities… have increased the number of degrees they award fivefold since 1990, while the proportion of firsts they hand out has quadrupled – from 7% in 1994 to 29% in 2019. For every student who got a first in the early 1990s, nearly 20 do now… The proportion of students getting “good honours” – a first or 2:1 – has leapt from 47% to 79%: at 13 universities more than 90% of students were given at least a 2:1 [in 2018].’ In a perfect self-reinforcing cycle, universities justify this progression by pointing at the schools: it’s not surprising we’re giving more good degrees, they say, because we’re getting better students – just look at the A-level results.
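The New Statesman’s ‘nearly 20’ claim is simply the two quoted trends multiplied together; a quick arithmetic check, using only the figures cited above and nothing else, bears it out:

```python
# Sanity-check of the New Statesman arithmetic quoted above:
# degrees awarded are up fivefold since 1990, and the share of firsts
# has risen from 7% (1994) to 29% (2019).
degrees_multiplier = 5.0    # fivefold increase in degrees awarded
firsts_share_1994 = 0.07    # 7% of degrees were firsts
firsts_share_2019 = 0.29    # 29% of degrees were firsts

# The total number of firsts scales with both the volume of degrees
# and the proportion of them that are firsts.
firsts_multiplier = degrees_multiplier * (firsts_share_2019 / firsts_share_1994)
print(round(firsts_multiplier, 1))  # roughly 20 firsts today for every one then
```

A fivefold rise in degrees and a roughly fourfold rise in the share of firsts compound to a twentyfold rise in firsts awarded.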

This is the backstory to this year’s school shenanigans, when the creaking system was brought crashing down by the cancellation of GCSEs and A-levels during the lockdown. Without the restraining influence of real marks for real work, the government invented two unreal ones – centre-assessed grades (or CAGs) and a version moderated by the famous algorithm to damp down what it saw as alarming grade inflation. Both measures are barely comprehensible in their complexity (sample: ‘CAGs are not teacher grades or predicted grades, but a centres profile of the most likely grades distributed to students based on the professional views of teachers’). But the circle was unsquarable. While the algorithm did moderate the grades, it could only do so at the price of such manifestly unfair side effects that the government hastily retreated. CAGs – and by extension grade inflation, however justified on this occasion – rolled on.

So we arrive at a familiar destination. Grade inflation is a symptom of what Ray Ison and Ed Straw, authors of the important new book The Hidden Power of Systems Thinking, call a system-determined problem – one that can’t be resolved by first-order change, only by rethinking the system itself. Tinkering with the existing system to make it work better is our old friend doing the wrong thing righter, which ends up making it wronger. And we end up with the worst of both worlds: private-sector market competition moderated by Soviet-style regulation that achieves neither efficiency nor accountability, and whose figures won’t bear the mildest scrutiny. When we most needed a system based on professional trust and respect, we have the reverse: a regime established to assure academic standards that has overseen their almost complete debasement.

This has the potential to be much more than a little local difficulty. Higher and to a lesser extent secondary education, backed up by league tables that conveniently big up their strengths, have long been talked up as one of this country’s strongest international success stories. Covid’s inconvenient intervention suggests a more accurate characterisation might be a house of cards, built on statistical foundations that don’t even come up to O-level standards.

* As a Scottish reader correctly notes, it is increasingly hard to generalise across the component parts of the union in such matters.

How masks became a weapon in the culture wars

Trust in government is emerging as an important factor in how a country fares on what might be called the coronavirus performance league table. That stands to reason: in the absence of a vaccine, ‘beating the virus’ is a collective social enterprise as much as a medical one – just as ‘saving our NHS’ was at the peak of the infection, although the government appears to have forgotten it. (The cost was perilously high, but that’s another story.) In other words, performance is less a matter of science, more a matter of political competence and leadership.

New support for that idea comes from a recent paper in the Lancet describing the ‘Cummings effect’. When the story of the adviser’s dash for Durham, breaching official lockdown advice, broke in May, the result wasn’t just an immediate and continuing loss of public confidence in the government – it changed people’s behaviour. Their growing unwillingness to follow the guidelines was the other side of the coin of declining trust. Rubbing it in, Durham’s former chief constable noted: ‘People were actually using the word “Cummings” in encounters with the police to justify antisocial behaviour’.

A more insidious seepage of confidence – leading to an almost virus-like spike of consternation, rage and conspiracy theories – has been triggered by the government’s vacillation over the desirability of wearing face masks. Indeed, when the history of the pandemic is written, there will likely be a special section on this mundane piece of cloth and gauze, which has become an unlikely symbol of the contradictions and jagged social and political divides that the coronavirus has generated.

It should have been simple. When everyone wears one, the face mask is an important element – along with maintaining distance, washing hands and limiting gatherings – in reducing transmission of the virus.

But it is not quite as straightforward as it looks. The mask has a systemic dimension, and the benefits are asymmetric. For the individual, wearing a mask is a mild inconvenience for not much return. For the collective, on the other hand, there is no downside, and the benefit is multiplicative because of a kind of network effect: the more widespread the use, the greater the value, including to individuals. If sufficient numbers mask up, then in protecting other people you protect yourself. This also makes it – a point likewise much neglected – a powerful signifier. In this context, wearing a mask is a badge of common endeavour, a recognition of the fact that your health depends partly on the behaviour of others, just as theirs depends on yours.
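The asymmetry can be sketched with a toy calculation – entirely an illustration, with made-up blocking fractions rather than figures from any study. Transmission needs both an emitter and a receiver, so two masked people benefit from both reductions at once, and even the unmasked gain as coverage rises:

```python
# Toy model of the mask 'network effect' (illustration only; the
# blocking fractions below are assumptions, not data).
SOURCE_BLOCK = 0.50   # assumed share of outgoing transmission a mask blocks
INTAKE_BLOCK = 0.30   # assumed share of incoming transmission a mask blocks

def my_risk(i_wear_mask, coverage):
    """Relative transmission risk per contact for one susceptible person,
    when a fraction `coverage` of the people around them wear masks."""
    incoming = (1 - INTAKE_BLOCK) if i_wear_mask else 1.0
    # Average outgoing factor across masked and unmasked contacts:
    outgoing = coverage * (1 - SOURCE_BLOCK) + (1 - coverage)
    return incoming * outgoing

for coverage in (0.0, 0.5, 1.0):
    print(f"coverage {coverage:.0%}: "
          f"unmasked {my_risk(False, coverage):.2f}, "
          f"masked {my_risk(True, coverage):.2f}")
```

Under these hypothetical numbers, full coverage halves the risk even for someone who refuses to wear a mask – which is exactly why masking works as a collective undertaking and looks unrewarding as a purely individual calculation.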

Yet for many in the individualistic US and UK, these scraps of fabric have become objects of scorn (‘face nappies’) and wearing them an affront to liberty – ‘facemask hell’ and ‘a monstrous imposition’, according to one MP. For some Americans they are a symbol of oppression, even totalitarianism, an insult to religious feeling (‘denial of the God-created means of breathing’) or even a threat to wellbeing (one American woman bizarrely shouted to camera, ‘the reason I don’t wear a mask is the same as for not wearing underwear: things gotta breathe!’). According to a trade union poll, 44% of McDonald’s employees had been threatened or abused for insisting that customers don a mask. At least one American has been shot.

In short, instead of being a simple precaution, covering your face has morphed into a weapon in the culture wars – a sign of wokeness or meek compliance with an oppressive state on one hand, an identifier of aggressive right-wing libertarianism on the other.

How has this come about? In microcosm, the depressing story of the face mask mirrors the convulsive progress of the crisis as a whole: a drunken lurch from under- to overreaction, accompanied by mixed messaging and subsequent public cynicism, augmented by the Cummings effect and the utter untrustworthiness of the testing statistics. In the absence of trust, leaders have no levers to pull when they want to get a scared, suspicious and increasingly resentful country back to work. They can only beg and bribe.

In the UK no one has ever explained in simple, clearly understandable terms the cumulative benefits of mask-wearing. And, disastrously, western authorities, including the World Health Organisation (WHO), initially played down the wearing of masks not for medical reasons but because they feared that a rush on masks would aggravate the strains on national health services then struggling with critical shortages of PPE, including face coverings. Not surprisingly, people now instructed to wear one are apt to take a cynical view.

The consequences of the failures to come clean are now coming home to roost. Ironically, even in the US and UK, most people are in principle in favour of wearing masks and even of making them compulsory. Yet in the UK, uniquely, this has not translated into behaviour: in an Ipsos MORI poll of 23 July, four months after the start of lockdown, just 28 per cent said they wore one, compared with double that proportion in France, Italy and the US. This is one reason why the UK now has another dubious Europe-beating qualification to add to its list: alongside the highest number of covid-related deaths and the worst hit economy, we are the slowest and most reluctant to return to work.

But if citizens now are slow to wear masks and resist going back to work, it’s largely not because they are bloody-minded or stupid. Inadequate leadership is squarely to blame.

Slavery, Inc

Like most people, including Alfred Chandler in his magnum opus The Visible Hand, I always accepted that – with a nod to ancient institutions like universities, the army and the Catholic church – the origins of modern management lay in the US railroads and the factories of the Industrial Revolution. 

But although long denied or ignored, it is becoming clear that some of the founding practices were already well developed in the 18th-century slave plantations of the Caribbean and the southern states of America. When F.W. Taylor’s The Principles of Scientific Management appeared in 1911, echoes of the earlier ‘scientific agriculture’ practised on some of the sugar and cotton plantations were not lost on contemporary critics who found some of Taylor’s practices uncomfortably reminiscent of ‘slave-driving’ – nor on supporters who on the contrary praised them for the advance they represented over slaveholding.

This is troubling stuff to write about. But the aim is not to pick at the scabs of the past for the sake of it. It is that, as ever, the present is the child of the past, and coming to terms with the history is the first step to resolving the unfinished business it has left behind.

In Accounting for Slavery: Masters and Management, a remarkable piece of primary research, Caitlin Rosenthal, a young McKinsey consultant turned academic, parses surviving plantation account and record books to paint a chilling picture of the blend of violence and innovative data practices that turned plantations into extreme exemplars of scientific management – ‘machines built out of men, women and children’ where ‘the soft power of quantification supplemented the driving force of the whip.’ 

Slavery, Rosenthal notes, ‘plays almost no role in histories of management’. Whether conscious or not, this is denial, the erasure accomplished by Chandler’s comforting categorisation of plantation management as primitive and pre-modern. Not a bit of it, counters Rosenthal. Sophisticated information and accounting practices thrived precisely because slavery suppressed the key variable that makes management difficult – the human. As she puts it, ‘Slavery became a laboratory for the development of accounting because the control drawn on paper matched the reality of the plantation more closely than that of almost any other American business’.

The combination of labour that was essentially free, unspeakably brutal management and smart accounting meant that slaveholding was exceptionally profitable. Plantation owners were among the one percent of the period; at the time of the Civil War, there were more millionaire slave-owners in the South than factory-owners in the North. In the UK, as we are sharply reminded, many Downtons were built on the trade or forced labour of slaves. Historians mostly don’t include human capital in their calculations, but plantation owners did, using depreciation to assess the changing value of slaves according to age, strength and fertility well before the concept was in use in the North, and routinely using them as collateral for loans and mortgages. By buying and selling judiciously, slave-owners could add steady capital accumulation to the profits from cotton and sugar.

Pace Chandler, plantations were management- as well as capital-intensive: according to one calculation, in 1860, when the railroads were emerging as the acceptable crucible of management, 38,000 plantation overseers, or middle managers, were managing 4m slaves using techniques that included incentives as well as indescribable punishments. Rosenthal recounts that in 1750 a British pamphleteer launched a prospectus for a kind of business school whose target clientele included sons of American planters. Slaveholders, concludes Rosenthal, ‘built an innovative, profit-hungry labor regime that contributed to the emergence of the modern economy… Slavery was central to the emergence of the economic system that goes by [the name of capitalism].’ 

With some estates numbering thousands of slaves, the plantations represented a milestone in managing scale. Even more important, the tools developed there enabled owners to manage their enterprise remotely. The slaveholder no longer had to suffer the physical discomforts of colonial life – or the mental discomfort of seeing at first hand the appalling human cost of his or her mounting wealth. Studying the numbers in the account books – embryonic spreadsheets – in a study in Bristol, London or Liverpool, he (or she) could see at a glance the productivity and profitability of each slave and decide their fate with a tick or a cross. 

This was a genuine management innovation, perfectly aligning the need for distant control with conditions on the ground. It was also crucial in another way. Representing humans as numbers not only put them out of sight and out of mind. It also encoded them as simple instruments of profit, no different in that respect from mules or horses, or the machinery for turning raw cane into sugar. It was to this vision of unfettered capitalism, where the only sanctity was property, that the southern states (and the British ‘West India interest’) clung so tenaciously for so long – and in the former’s case, went to war to protect.

They lost that battle. But even after abolition the ghost of the old regime lived on in the South in the infamous penal labour and convict leasing schemes – and endures today through the for-profit prison-industrial complex that has seen the quadrupling of the (disproportionately black) US prison population since 1970. A whole raft of blue-chip US companies continue to profit from captive prison labour today.

The debate about economic freedoms and ends and means in business that slavery started rumbles on in 2020. When Milton Friedman wrote in 1970 that the social responsibility of business was to increase its profits, he was reasserting the primacy of capital owners’ property rights, and in an extreme version of Adam Smith’s ‘invisible hand’ argument insisting that anything they do to increase those profits contributes to the common good. Now the management wheel is turning again towards a more inclusive view, although with how much conviction it remains to be seen. If there is any hesitation, slavery should remind us with crystal clarity how far people will go in pursuit of profit if allowed to; that management’s urge to reduce everything to numbers can all too easily result in the destruction of its own humanity as well as the lives of those being managed; in short, that management can be a force for evil rather than for good. Making a clean breast of the dark side of its history is the only way to close off those bleakest avenues for ever.

Remind me: what is HR for?

In case you missed it, May 20 was International HR Day. To celebrate it, the CIPD tweeted five reasons ‘to recognise HR right now’: putting people first, enabling remote and flexible working, championing physical and mental wellbeing, encouraging virtual collaboration, and supporting people and organisations to adjust to the new normal.

Nothing much to object to there – it’s motherhood and apple pie. Yes. And that’s the problem.

Like a great deal – most? – of management advice, what is proposed is true but useless; preaching, as Jeff Pfeffer puts it.

One clue is that you can’t imagine many people arguing a case for putting people last or stubbornly upholding the old normal. More deviously, the five reasons for celebrating HR are actually nothing of the sort. They are really abstract desired outcomes (practices that companies ought to have) pretending to be inputs (processes or principles that companies and organisations actually observe).

But they don’t: the banality of the desiderata is in inverse ratio to their occurrence in real life. As such, the list offers reasons to despair of HR, not to celebrate it.

Managing with rather than against the grain of human needs is not a new prescription, nor a controversial one. As big-name researchers from Herzberg (‘to get people to do a good job, give them a good job to do’) in the 1970s and 1980s to Pfeffer (The Human Equation) in 1998 to Julian Birkinshaw (Becoming A Better Boss, 2013) have emphasised in their different ways, effective work arrangements that enlist people’s abilities and motivation are a better and more sustainable route to economic success than downsizing, contracting out and relying on sharp incentives and sanctions. Countless research studies say the same thing.

And it is true today. At the recent launch of a joint RSA-Carnegie Trust report on the question, ‘Can Good Work Solve the [UK’s] Productivity Puzzle?’, top representatives from the Bank of England, the TUC, McKinsey and the RSA all agreed: yes, it can and should. There are simply no downsides.

Except that it doesn’t happen. Despite the lip service, ‘good work’ is almost exclusively honoured in the breach rather than the observance. Standard management practices unambiguously put shareholders first, and people last, literally.

In today’s economy, companies create full-time ‘good work, at a good wage’ (the RSA’s hopeful formulation) only as a last resort. They rely instead on contingent workers who can be turned on and off at will and are increasingly managed by algorithm, thus dispensing with another tranche of the workforce. Pay is wildly unequal, even though studies again show that wide dispersion undermines teamwork, involvement and attachment to the organisation. Tight supervision and micromanagement kill trust and initiative – and even where, pushed by coronavirus, companies have moved to home working and virtual collaboration, the latter are almost comically sabotaged by the increasing use of digital surveillance to monitor and control remote employees. Meet the new work, actually a return to the old work, where all the risk and responsibility is borne by the individual, none by the corporation.

Given the yawning mismatch between the ideal and the grubby reality – most employees think their company doesn’t care about them, and they don’t care about their work – the obvious question is, where on earth is HR in all this? If it and its nominal agenda are so comprehensively disregarded, why does it even exist?

There is much hand-wringing within HR and the academic literature over this. Every few years HR is called on to ‘reinvent itself’ or ‘make itself more relevant to business’ in one of the top management journals. But cynicism continues to grow, along with ineffectual programmes and surveys with no follow-up. ‘I do whatever the CEO wants,’ one HR head shrugged to HBR in 2015.

But the frustrations of HR can be explained if you think of it, at least in its current form, as a figleaf. In Beyond Command and Control, John Seddon describes HRM as a by-product of the industrialisation of service organisations along command-and-control lines. HR departments, he says, ‘grew up to treat the unwelcome symptoms of command-and-control management and have steadily expanded as the symptoms have got worse’. HR is, bluntly, damage limitation – yet another example of management consuming itself in trying to do the wrong thing righter (Ackoff), or doing more efficiently that which shouldn’t be done at all (Drucker).

As with so much of management, the way forward isn’t for HR to invent new things to do, but to give up doing old pointless ones. Managers should quit obsessing over individual performance and instead pay attention to the system that governs it. If they stopped demotivating people, removed conditions that get in the way of doing good work (‘So much of management consists of making it difficult for people to work,’ as Drucker put it), ceased measuring activity rather than achievement of purpose and above all did away with incentives that distort priorities and divert ingenuity into gaming the system – bingo! – the need for most of what passes for HR today (performance monitoring and surveillance, inspection, culture and engagement surveys, appraisals, courses on coping with change and other fake subjects that add no value) would simply evaporate. When the system changes, says Seddon, so does behaviour; as people act their way into a new way of thinking, culture change comes free.

That’s what an organisation that puts people first looks like. But it’s a result, not a cause. And you may have to kill off HR to get there.

Hitting the target and missing the point

Targets. Stretch targets. 100,000 coronavirus tests a day by the end of April. That turned out well, didn’t it?

When on 2 April health secretary Matt Hancock announced his goal of carrying out the famous 100,000 tests a day by the end of April, the result was predictable.

Given that at the time the daily testing rate was around 11,000, attention naturally focused on the number, and whether it would be achieved. And that’s where the debate stuck for the month. Not on why 100,000 or the purpose of the testing – the number.

On 1 May Hancock used the daily coronavirus briefing to declare that testing numbers had hit 122,347: the pledge had been met. Again, the number hogged the attention. Was it true? Had it really been hit? How?

Well, yes and no. It transpired that between the announcement of the target and the declaration of victory, the definition of ‘completed tests’, which previously meant ‘completed tests’, had quietly changed to ‘completed tests plus test kits in the post’. Subtracting the latter category left a ‘real’ figure of 82,000 actually carried out. Cue a new furore – again about the numbers.

What happened is a textbook illustration of the unintended effects of targets and their faithful sidekick, Goodhart’s Law.

To paraphrase W. Edwards Deming: in the case of a stable system there’s no point in setting a target, because you’ll get what it delivers. But with a non-stable system, there’s no point in setting a target either, because you have no idea what it will deliver. A numerical target in such circumstances is a finger stuck up in the air. Unless you know how to improve system capability permanently (I don’t think so), to hit it you have to be either incredibly lucky (in which case you’ll have to be even luckier to do it again tomorrow); or alter the parameters to make the target attainable.
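Deming’s point can be sketched numerically. The sketch below is purely illustrative: it assumes a stable testing process averaging around 11,000 tests a day (roughly the rate when the pledge was made) with some day-to-day noise, and a hypothetical figure for kits in the post. A stable system never reaches an arbitrary target far above its capability; redefining what counts does the trick instantly.

```python
import random

TARGET = 100_000

def stable_process(mean=11_000, sd=1_500, days=30, seed=42):
    """Daily output of a stable system: a fixed mean plus common-cause noise.
    The mean and spread are illustrative assumptions, not real DHSC data."""
    rng = random.Random(seed)
    return [max(0, int(rng.gauss(mean, sd))) for _ in range(days)]

completed = stable_process()

# A stable system delivers what it delivers: the target is never hit.
days_hit = sum(day >= TARGET for day in completed)

# 'Altering the parameters': redefine success as tests completed
# plus kits in the post (a hypothetical 95,000 mailed kits a day).
redefined = [day + 95_000 for day in completed]
days_hit_redefined = sum(day >= TARGET for day in redefined)

print(days_hit, days_hit_redefined)
```

Nothing about the underlying system changes between the two counts; only the definition of success does, which is the whole point.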

Hancock did what everyone does when faced with the imperative to hit an arbitrary target: he managed the thing that he could – in this case, the definition of success.

But this is not a harmless bit of jugglery. Deming again: ‘What do “targets” accomplish? Nothing. Wrong: their accomplishment is negative.’ There is a high cost to his action – which is where Goodhart comes in.

As economic adviser at the Bank of England, Charles Goodhart noted that attempts to manage monetary policy by using any definition of the money supply were constantly subverted by actors finding novel ways to circumvent the definition. Hence his law, usually formulated as, ‘when a measure becomes a target, it ceases to be useful as a measure.’ A metric can be either a target or a measure. It can’t be both.

Take Hancock and his tests. To meet his target, he included in his count for 30 April around 40,000 test kits mailed out to the public and to hospitals. For these kits (pay attention here), the Department of Health and Social Care counts the number of people who test positive, but it doesn’t collect figures for tests actually completed.

What’s worse, since mid-April the government figures include on the same basis (ie people testing positive but not tests completed) 17,500 tests of two different kinds, diagnostic and antibody, thus adding oranges to uneaten, partially eaten and completely eaten apples. As Tim Harford declared incredulously on his latest ‘More or Less’ show: ‘It’s almost as if they don’t care if the number of tests is consistent or indeed accurate, as long as it’s big.’

At any rate, the upshot of this piece of target-setting is exactly as Deming and Goodhart predicted: the system is beyond comprehension and the figures such a dog’s breakfast that no one can tell what they mean. It seems highly unlikely that Hancock’s original target has been met at all since 30 April, but how can anyone know for sure, including the government? The only certainty about the figures is that they are bogus. You might think that when the subject is life or death, this matters, no?

Yet the damage done by targets doesn’t stop there. What most people don’t get (including a ‘science writer’ on a previous edition of ‘More or Less’) is that the problem with targets isn’t that they don’t work. It’s that they do.

A target is typically a one-club solution to a problem with many moving parts. But the first law of systems is that you can’t optimise one part of a multipart system without sub-optimising others. Any benefits are outweighed by unintended consequences elsewhere in the system. Focusing attention (often with added incentives) on the target rather than the purpose ensures that even if the target is hit, the point is missed.
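That systems law can be sketched with a toy model. Everything here is an assumption for illustration – the positive rate, the fixed tracing capacity – but the shape of the result is the point: the purpose is traced positive cases, yet testing volume is the only thing the target rewards.

```python
def traced_and_untraced(tests_per_day, positive_rate=0.05, trace_capacity=3_000):
    """Two-stage system: testing feeds contact tracing.
    positive_rate and trace_capacity are illustrative assumptions."""
    positives = int(tests_per_day * positive_rate)
    traced = min(positives, trace_capacity)  # tracing is the bottleneck
    return traced, positives - traced

# Optimising the testing stage alone: target hit, 2,000 cases never traced.
print(traced_and_untraced(100_000))  # (3000, 2000)

# Volume matched to tracing capacity: target missed, no case left untraced.
print(traced_and_untraced(60_000))   # (3000, 0)
```

Pushing one stage past what the rest of the system can absorb hits the number and misses the point; the whole system, not the measured part, determines the outcome.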

Targets displace purpose. Tests are a means, not an end. But reporting 100,000 of them became the purpose, both for Hancock and his critics. Yet why 100,000 a day, rather than 75,000 or 250,000? What are we testing for in the first place? Deming once more: ‘Focus on outcome is not an effective way to improve a process or an activity…[M]anagement by numerical goal is an attempt to manage without knowledge of what to do’. Another finger in the air. Or, more tersely: ‘Having lost sight of our objectives, we redoubled our efforts.’

Consistent failure to meet the daily target underlines the point: it bears no relation to purpose, or indeed to any other kind of reality. Not to production capacity, as we have seen. More seriously, not to demand either – shortage of which, or shortage of it in the right place, has been put forward as a reason for the target debacle.

To be effective, a system needs to be designed against demand. And demand is determined locally. Testing is the first step in the ‘test, trace, isolate’ strategy that the government first initiated and then discontinued in March, and has now resurrected. By definition, that strategy has to play out locally, where the infection occurs, tracing begins and treatment takes place. But bypassing hospitals and the 400 or so existing small labs dotted around the country, all tightly linked to local primary care, the government, as with the Nightingale hospitals, is relying on giant regional testing factories, set up from scratch and remote from their users in every sense. A lurch backward to early 20th century industrial thinking, these in the view of many observers are the exact opposite of what is needed.

We can all support a goal of ramping up testing capacity to the level necessary to meet the purpose, whatever that number is. In fact it would be a good idea. But the minute you set it as a numerical target, it is subject to Goodhart. Managing backwards from an outcome plucked from thin air is a feature of command-and-control management, the only kind of management that government knows. But it is back-to-front. Targets are a disease. They destroy purpose, distort priorities, and soak up energy in games-playing and bureaucracy. They are the problem, to be avoided like, well, the plague.

Who saved the NHS?

‘Stay at home, protect the NHS, save lives.’ It’s a simple, understandable mantra, and we have internalised it well. So well, in fact, that we haven’t noticed the sleight of hand – desperate? cynical? – that’s going on here.

How didn’t we see it? The formula says it is our duty as citizens and patients to protect our NHS – and, by the way, the government will punish us if we don’t. But that’s the wrong way round. The NHS is supposed to protect us. Protecting the NHS, and us, is the government’s job, and if it fails to do it, we should punish it at the next election. (Whether the NHS is safe with this or that party is a question rightly posed at every vote.)

As it turns out, we have done our job of not using the NHS for the purpose for which it was designed so well that A&E departments are half empty, patients are politely declining to come forward for cancer diagnosis and ministers are reduced to acting like fairground barkers to drum up trade: ‘Roll up, roll up, we’re still open for business!’

As ever, crisis has brought the best out of the NHS, which has performed heroically with the resources at its disposal. In fact, though, the conventional image of doctors, nurses, porters and auxiliaries as saints and martyrs does them a disservice, obscuring a much more interesting reality. Under pressure the NHS is quietly doing things that it, and we, would have thought impossible a couple of months ago: partnering to build and equip hospitals in a week, redeploying and retraining staff, sharing work with the private sector, improvising procedures and equipment to keep, so far successfully, one step ahead of the flow of infections… Bureaucracy, what bureaucracy?

‘It’s impossible to overstate what has happened in the last two months,’ judges one observer. Of course, it has been a desperately close-run thing, and no one would want a re-run any time soon. But the imperative to do whatever it takes (to coin a phrase) to win a real-life game of life and death is being seen as a liberation for some NHS managers, who find themselves for once with control where it should be – in their own hands. ‘We can’t go back after this,’ one is quoted as saying.

Where the news is less good is at the point where help from the citizenry’s self-isolation from health services runs out, or at least falls short of the need – that is, in the supply of the basic equipment that allows our saints and martyrs to perform the heroics we rightly celebrate. That is not for want of effort. Witness the countless stories of ‘small ships’ – small businesses, individuals and groups of volunteers pitching in to sew scrubs and gowns and even craft visors, some with ‘bleeding fingers’ at the end of the day; cooking free meals for NHS staff; providing transport to and from work; and putting up and feeding them at the end of their shifts. And what about the astonishing 750,000 volunteers who came forward to help the NHS at the beginning of the crisis, a phenomenon that has left the rest of the world marvelling?

Unfortunately, the one party missing from this spontaneous outbreak of inventiveness, collaboration and sense of common purpose is the one that matters most – the government. It’s a commonplace that the pandemic has made government and the state vital again, as the only entities with comprehensive national reach. Accordingly, although the fit isn’t perfect, the gruesome league table of covid mortality also looks like a pretty fair reflection of government competence. In general the countries that come out best – those in south-east Asia, Germany and Denmark in Europe, and New Zealand – acted in character and as might have been predicted: calmly, early and firmly.

At the other end of the scale, currently the worst outcomes are likely to be in the US and UK, both notable for fractured politics and a strong belief at the top that the state is just another interest group, and in their own exceptionalism – mercilessly skewered by Fintan O’Toole in the latter case (although he failed to mention that the virus itself had already done the same thing by putting the country’s prime minister, health secretary and chief medical officer, not to mention No 10’s chief special adviser, out of action with covid-19 all at the same time – a unique full house of haplessness). The Johnson team’s slow reaction to the spread of the disease, subsequent policy zigzags and implementation failures faithfully reproduce on fast-forward the distinctive shortcomings of British government over the decades, with an added layer of arrogance thrown in.

In this context, the delegation of responsibility for protecting the NHS to citizens is no surprise – it is the culmination of the series of ‘reversifications’ under which the dehumanising pressures of financialisation and targets have gradually turned processes, functions and whole institutions into their dark opposite – think of the Home Office as a department of alienation and hostility, or welfare as punishment for poverty instead of a helping hand. Tellingly, while ministers learned enough from the financial crisis of 2008 to enact an instant bail-out of the economy, their lack of systemic social vision is leaving a trail of destructive unintended consequences behind the initial ‘protect-our-NHS’ call: the ‘collateral damage’ of extra deaths as ill patients shun A&E and cancer departments, the unfolding tragedy of care homes becoming killing fields (another reversification), and now the finding that people mystified by the government’s switch from herd immunity to strict lockdown will have to be cajoled out of their houses to ‘assert their inalienable right to go to the pub’ when restrictions are lifted.

Covid-19 has ruthlessly exposed the hidden faultlines and contradictions in our society, and the lazy, self-serving economics and management thinking that first engineered and then ignored them for 40 years. When we come out of it, we will indeed remember those failings. But we will also take heart from the unforced collaborations, cooperation and solidarity emerging in the lockdown that reflect a more positive view of human nature – one that will form the basis of new and more realistic replacements for that dried-up, pre-Darwinian thinking. It will have been us who protected the NHS, not governments, and it’s the least that they owe us in return.

What coronavirus teaches about business

When the Global Peter Drucker Forum chose ‘business ecosystems’ as its theme for last November’s conference, it was met by some gentle scoffing. Bit airy-fairy? More fad than substance? Was ecosystem management even a thing?

Well, now we know. The coronavirus is not only a thing; managing its ecology and ecosystem is the biggest test of management, and leadership – which just happens to be the Drucker Forum theme for 2020 – that the world has recently seen: greater than 9/11 and the 2008 financial crash, a matter of literal life and death for many thousands of people, and of financial life or death for thousands, perhaps millions more.

Given this, it’s at first sight odd that political leaders have conspicuously not been beating a path to the doors of leadership and crisis-management ‘experts’ in business schools or large companies for their advice on facing down the virus. After all, business websites and blogs teem with strategies for managing in a world of volatility, uncertainty, complexity and ambiguity, or VUCA, as the military call it. And did I dream that Google and/or Facebook once boasted of being able to use information they had sucked out of their users to predict the spread of disease?

Yet at a second glance, the shunning of conventional management is understandable – the dirty secret is that it has significantly contributed to the weakness of the current global response. All too willingly coopted into the neoliberal economic consensus by the appeal to self-interest in the 1970s, management has been the Trojan Horse that released a strain of free-market dogma into the economic and political mainstream that we’re all suffering from now. One of the side-effects is the wilful self-fragilisation of some of our biggest corporations.

Thus the big US airlines that are now seeking a $58bn bailout have over the last decade spent roughly the same amount to buy back their own shares, to the exclusive benefit of their own executives and stockholders. Boeing, the epitome of the ‘downsize and distribute’ approach to capital allocation, having donated $43bn to shareholders through buybacks between 2013 and 2019, now wants $60bn to prop up aviation-industry supply lines. Even mainstream commentators, such as the FT’s excellent Rana Foroohar, are suggesting that this time round it cannot be simply a case of socialising such firms’ losses – the price of a public bailout should include equity participation and a ban on buybacks and executive bonuses until the debt is repaid, not to mention improvements for customers and employees.

Fragilisation is not confined to companies. The same ideological emphasis on markets and self-interest is a factor in the opioid crisis, and in the tax minimisation strategies of Big Tech, which have seriously diminished the resources of national governments for public spending of any kind and, more insidiously, so undermined the instruments and confidence of the administrative state that its ability to intervene is now desperately compromised. Together with the retreating state, the systematic application of competition and markets to the public sector has given us a decade of austerity reflected in the parlous states of the US and UK safety nets, making the medium-term social and – irony – financial toll of COVID-19 much greater than it should have been. Is it fanciful to suggest that Johnson and Trump’s fanatical faith in laissez faire and small government is behind their naive exceptionalism and hands-off initial response to the coronavirus?

Lenin once remarked that ‘there are decades when nothing happens. And there are weeks when decades happen’. Decades are happening now, and one of the things that is collapsing before our eyes is the edifice of neo-liberal economic theory, whose equations are simply swept away by a pandemic that operates on a plane of rude physicality governed by medical and ecological laws. A similar thing is happening to management. In the context of a wider business ecosystem that is itself nested in the wider systems of the economy, society and the natural environment, the idea of a company as a standalone maximisation engine is an aberration. In an ecology, something that maximises itself is a cancer to be feared. Instead, the purpose of business is to play its part in optimising the whole, which means investing in the jobs and salaries without which the wheels of capitalism seize up – as is graphically illustrated by a coronavirus that keeps people at home and shuts economies down.

It quickly became clear in November’s Drucker Forum sessions that managing in ecosystems is as different from conventional management as quantum physics is from Newtonian. Previous certainties – indeed, the very idea of certainty – suddenly become suspect. Standard leadership strategies or lists of personal qualities are useless when everything is shifting around you. In a world where nothing is certain, leadership no longer makes any sense as an abstract ‘thing’, separate from what is being done. It is situational or nothing.

In those circumstances, what do you use as a guiderail when leading followers on a course that is as likely to be wrong as right? Again, coronavirus gives the clue. Consider luxury purveyor LVMH using perfume lines to turn out hand sanitisers; Dyson designing ventilators rather than vacuum cleaners; ex-footballers turning over their hotel free of charge to NHS staff; Amazon about to deliver testing kits; CEOs such as Danny Meyer of the Union Square Hospitality Group cutting executive salaries and forgoing his own to avoid laying off low-paid employees; engineering professors, PhDs and F1 racing teams around the UK collaborating on basic ventilator designs and breathing aids; and a staggering 750,000 individuals volunteering to help the beleaguered NHS. The list goes on.

Altruism, yes. But another way of describing it is people and companies self-organising to redirect their resources to solving problems that matter. It’s an ecosystem responding creatively and cooperatively to threat. Note the lengths people will go to when offered a good job to do. As Warren Bennis once put it: ‘Problem-solving is the task we evolved for – it gives us as much pleasure as sex’.

Although they have largely forgotten it, solving problems is what companies evolved for, too. For economists Eric Beinhocker and Nick Hanauer, ‘the accumulation of solutions to human problems’ is a better measure of progress and prosperity than GDP. It is access to more of those solutions – air-conditioning, the world’s information on a smartphone, a cure for COVID-19 – that is for its citizens the real difference between living in an advanced economy and a poorer one. In a functioning capitalist ecosystem, business creates solutions that benefit humanity, while also employing and paying people such that they can take advantage of those solutions for themselves.

In turn, that defines what leadership looks like in the VUCA world, wherever the volatility or uncertainty comes from: acting fast to deploy resources where they can contribute most to optimising the system of which they are part. In Peter Drucker’s famous distinction, that is doing the right thing, which is leadership, as opposed to management, which is doing things right. COVID-19 could hardly have pointed the way ahead more clearly.

RIP Jack Welch

Rarely can a passing have marked the end of a business era more aptly than that of Jack Welch. When the legendary CEO of GE, who died this week, retired in 2001, the giant company that he had shaped over 20 years was at the height of its fame and financial success. GE’s fabled management academy at Crotonville, where Welch often lectured, was a big part of it, a conveyor belt as efficient as any of its plants at turning out successful executives modelled on Welch’s hard-nosed, hard-driving style. Had not Welch’s unforgiving approach and uncanny ability to meet Wall Street’s quarterly earnings targets earned him in 1999 the grandiose title of Fortune’s ‘manager of the century’?

Yet it is now evident that 2000 was GE’s high point. Neither its reputation nor its earnings have touched those levels since. Its share price has dropped 80 per cent as Welch’s successors struggled to cope with the legacy of his push into financial services, which nearly sank the group in 2008. Net profits of $15bn as late as 2014 have vanished in three of the last four years, and in 2018 GE suffered the indignity of being dropped from the Dow Jones Industrial Average. It was the last surviving member of the original class of 1896.

Just as brutal has been the collapse of GE’s management aura. Consider the careers of the post-Welch generation of leaders from GE’s proud leadership finishing school. Jim McNerney, one of the unsuccessful suitors in the contest for Welch’s old job in 2000, decamped to another famous company, 3M, which he left on its knees five years later. His next berth was at Boeing, where as its first CEO with a non-aviation background (he was also president and chairman), he took the decision to build the 737 Max-8, currently in the news for all the wrong reasons, rather than design a new airliner from scratch. The second spurned internal candidate, Bob Nardelli, left Home Depot, his new company, similarly weakened, alienating customers and long-serving workers with sweeping cost cuts and provoking shareholder ire with his outsize compensation package. As for GE itself, when insider Jeff Immelt’s ultimately unsatisfactory reign ended in 2017, the board turned to another internal candidate, John Flannery – before abruptly ousting him barely a year later and – oh ignominy – replacing him with GE’s first CEO from outside the company. The rout of GE management is complete.

In retrospect, for the period of his CEO-ship, GE and Welch were perfectly matched. Russ Ackoff once noted that one of capitalism’s dirtiest secrets is that ‘we are committed to a market economy at the national (macro) level and to a nonmarket, centrally planned, hierarchically managed (micro) economy within most corporations’. A centrally planned, shareholder-value-driven company – of which GE was the epitome – can only operate as a top-down hierarchy, because without line of sight to the actions needed to meet indirect corporate targets, employees have to be told what to do.

Only an imperial CEO like Welch could make the model work, up to a point, by relentlessly holding underlings to account for punishing plans and performance targets, and having no compunction about sacking people who fell foul of GE’s infamous forced-ranking assessment system. He was equally ruthless in pursuit of efficiencies through job cuts (hence ‘Neutron Jack’), offshoring and selling off units that couldn’t reach number one or two in their industry.

In truth, GE under Welch may have been about as well run as a large top-down corporate economy could be, as he pushed managers to break down silos and exhorted them to beat back competition from the Far East. But he couldn’t escape the limitations of command and control. He railed, for example, at the bureaucracy of budgeting, which he castigated as ‘the bane of corporate America – it should never have existed’, but couldn’t get rid of; and complained that ‘the talents of our people are greatly underestimated and their skills are underutilised’. In neither case did he understand why. Unchecked, he overreached imperially in financial services (a poisoned legacy that helped to cripple his successors), in his botched attempt to take over Honeywell in 2000, and in his exorbitant material demands – a scandal that tarnished his reputation when it came to light in a subsequent messy divorce.

Looking back, you might say that Welch perfected the dinosaur just in time for evolution to sweep it into the dustbin of history. Shareholder primacy, as even Welch came to admit – although not before he had retired on its imperial proceeds – was ‘the dumbest thing in the world’, shareholder value being a result, not a cause. Immoderate personal gains are no longer viewed with indulgence. Most decisively, business has evolved: today’s emerging business ecosystems, fluid and constantly shifting, simply aren’t amenable to the detailed planning and control that were GE’s, and Welch’s, forte. Management has to evolve too.

Poignantly, one part of the old conglomerate that is conspicuously thriving is GE Appliances, albeit under different parentage. In 2016 it was sold to the Chinese Haier group. Under its inventive chairman, Zhang Ruimin, Haier is in the process of transforming itself from a white goods manufacturer into a ‘major appliances ecosystem’ in which all its related products are linked into, for example, an ‘internet of food’ or an ‘internet of clothes’. Standalone products don’t cut it any more. ‘We all need to transform into ecosystem companies, or we won’t survive,’ Zhang says.

Under Haier’s ownership, GE Appliances announced plans in 2018 to invest $465m in new US manufacturing and distribution facilities.

RIP, then, Jack Welch. RIP too – and not before time – an entire industrial management paradigm.