As artificial intelligence is increasingly introduced into business, an expert panel – hosted by the Guardian – forecast how it will change our working lives

Workplaces should use automation technologies to enhance employees’ jobs rather than to replace humans, according to speakers at an event held by the Guardian on 11 July. However, they saw problems in the introduction of technologies such as artificial intelligence (AI) and robots, the latter including software as well as physical machines.

Will robots replace us?

“Humans should not worry too much about replacement, but need to find new ways to work together with AI,” said Chelsea Chen, co-founder of Emotech, a company which makes a voice-operated device called Olly that aims to recognise users’ emotions as well as the content of speech.

Chen said that human employees are likely to remain better at dealing with people’s emotions than computers. She said Olly can express excitement in response to what a user says, but that does not make it conscious: “Any job which is highly relevant to people will be really hard to replace.”

Automation is ideal for office chores that divert employees from their actual jobs, argued Manu Dell’Aquila, technology transformation manager for software staffing consultancy Red Commerce. “There isn’t a single person in my business who will not say there are not enough hours in the day,” he said, and automating administration would help with this.

But he added that in some cases automation will lead to redundancies, such as drivers being replaced by autonomous vehicles, and some organisations could force staff to train their software replacements. “I would want to work with staff to build a better set of processes for them to do a more fulfilling job,” he said. “Is that approach going to be used by everybody? Probably not.”

Can robots be trusted with the important stuff in business?

Cecilia Harvey, chief operating officer of security-focused technology company Quant Network, said that two decades working in banking technology have made her wary about allowing automated systems to manage important processes. She said technology suppliers often tell “the happy trail story”, which talks about improved efficiency and financial savings while ignoring the increased risk of failures.

“There are going to be errors, whether it’s humans or robots. It’s more about where do you want those errors to occur,” Harvey said. This means it may make more sense to focus on internal processes where mistakes are unlikely to cause significant problems. But when they could affect clients or have a regulatory impact, “that’s probably not where I would want to have AI. I would want to seriously look deep into what the potential losses are associated with that – not only to clients but to the firm.” She added that this could include reputational damage and recruitment difficulties, as well as financial costs.


(Panel L-R) Manu Dell’Aquila, technology transformation manager, RED Solutions; Chelsea Chen, co-founder, Emotech; Cecilia Harvey, chief operating officer, Quant Network; Alastair Jardine, head of product, Trint; Alex Hern, UK technology editor, The Guardian. Pictured at the Guardian offices.

Alastair Jardine, head of product for speech transcription service Trint, said there are specific risks involved with the data needed to build artificial intelligence systems. These typically develop probability-based models of how to behave by processing large amounts of “training data”, such as audio files of speech and the resulting transcriptions. “We have to be so aware of the data we are using to train things,” said Jardine, adding by way of example that if the training data for an automatic transcription service includes swearing, the system may mistakenly include those words in its output.
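Jardine’s swearing example comes down to curating the training corpus before a model ever sees it. As a minimal sketch of the idea – hypothetical names and a placeholder blocklist, not Trint’s actual pipeline – one could screen audio/transcript pairs like this:

```python
# Hypothetical example: filter transcription training pairs whose text
# contains blocklisted words, so the model never learns to emit them.
# The blocklist and function names are illustrative assumptions.

PROFANITY = {"damn", "hell"}  # placeholder blocklist for the example


def clean_training_pairs(pairs):
    """Keep only (audio_path, transcript) pairs with a clean transcript."""
    kept = []
    for audio_path, transcript in pairs:
        # Normalise each word: strip punctuation, lowercase.
        words = {w.strip(".,!?").lower() for w in transcript.split()}
        if words.isdisjoint(PROFANITY):
            kept.append((audio_path, transcript))
    return kept


pairs = [
    ("clip1.wav", "Welcome to the meeting"),
    ("clip2.wav", "Damn, the server is down"),
]
print(clean_training_pairs(pairs))  # only clip1 survives the filter
```

Real systems would use far more sophisticated normalisation and review, but the principle is the same: whatever survives this step is what the model learns to reproduce.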

Jardine added that there are questions of ethics and privacy over how training data is gathered. Technology giants such as Amazon and Google use much of the information they collect as training data, including through their voice-activated speaker services: “What Amazon has done in gathering voice data and not anonymising it in a sufficient way really worries me,” he said. Google does much the same: “You are effectively training their algorithms for them.”

Full disclosure is necessary

Panel members agreed that users should be told whether they are communicating with a human or with software, and when they are being handed from one to the other. “I’m always for full disclosure, and being clear with users that they are speaking to a robot,” said Harvey.

Despite the problems to be overcome, the panellists were optimistic over how workplaces will use artificial intelligence systems in five years’ time. Harvey said they could contribute to staff wellbeing, such as through a chatbot asking employees every day if they are happy with work. Jardine expects it will be possible for employees to set up AI systems themselves, rather than needing experts to do this. Chen said organisations could be using AI to open access to high-quality medical diagnosis and education across the world, given the low cost of adding more users to a working automated system.

Dell’Aquila said that the workplaces most likely to accept such developments are those aiming to expand their businesses, rather than put people out of work. “I would like the AI revolution within a business to come from a business, not just thrown in from above or by technology teams,” he said.

The panel spoke at the Guardian offices on 11 July 2019.

Via The Guardian