Can good policy fend off the downsides of AI? Deel's Nick Catino weighs in

"If you're implementing AI in any way at work, even if you're downstream, you still will have regulatory requirements. And keep in mind existing rules like whistleblower protections, demographic protections, often still apply. AI is just an extension of it."
92 million jobs could disappear over the next five years as AI overtakes the world of work.
That is the worst-case prognosis presented by the World Economic Forum in this year's Future of Jobs report.
But even before that number comes to pass, a second and equally pressing issue is already here: the massive shift in work norms that AI in the workplace represents - from hiring to management to workplace privacy to the potential for bias and exploitation - a shift that very few employers, let alone governments, have been able to fully navigate yet.
To better understand this burgeoning problem, People Matters managed to get an hour of time with Nick Catino, global head of public policy at Deel, during the inaugural edition of The AI Summit in Singapore at the end of May.
His first piece of advice:
"The best thing governments can do for their workforce is make sure that talent has the tools they need to succeed."
A classic example of this, he said, is Singapore's SkillsFuture initiative with its recent increased focus on workers over the age of 40. It is a model that benefits both employers and workers, and companies can follow it, on a smaller scale, by setting aside their own budget for employees to upskill or reskill.
Most importantly, he said, employees themselves have to take the first step of adopting the new technology as well.
Catino, who spent more than a decade of his career in the US public sector before moving to corporate roles, emphasises this because, in his view, there is far too much stigma attached to using AI right now. Studies of workplace AI adoption suggest that, in practice, only a little more than 20% of professionals have actually built the technology into their jobs, and that worries him.
"I do fear that if that leaves 80% of people not using AI, those are a lot of people that will get left behind this job change," he said.
Will greater transparency help?
If companies disclose their use of AI - as many media companies did in the initial months after generative AI took off worldwide - will that help resolve the stigma and perhaps address some of the ethical and security issues that stem from ignorance? Perhaps, Catino says.
He pointed to the European Union's Artificial Intelligence Act, published last July, which attempts to bring more transparency to how AI is used - not just by developers but also by individual companies that deploy it for their business use. The Act identifies high-risk sectors, requires impact assessments and reporting, and clearly prohibits certain uses that are exploitative, harmful to human rights, or harmful to society as a whole.
But he also suspects that while governments can and possibly will enforce some amount of disclosure, the onus will eventually fall upon businesses to do the right thing.
"If you consider where global cooperation is going on ethics and standards and governance, I wonder if we're moving away from governments being prescriptive, and toward an era where governments want to support their industries and talent and infrastructure," he said. "That shifts the burden to businesses to make sure that they are being ethical. There will be a lot more industry agreements and sector standards, requiring businesses to take a little more initiative versus being directed by governments."
As for what the businesses themselves want? They do hope for regulatory guidance, but actually putting the technology to use takes priority right now, according to Deel's research on the matter: 57% of businesses want stronger guardrails and regulatory clarity, but 92% are more interested in government support for AI innovation.
"Businesses often want to know what the rules of the road are so that they can comply," Catino explained. And they should, he pointed out: if a company is implementing AI in any way at work, even if they are downstream, there will still be regulatory requirements.
"Keep in mind that existing rules like whistleblower protections and demographic protections often still apply. AI rules are just an extension of existing rules that cover all these things."
How is AI currently being regulated in the workplace?
Catino sees three areas that policymakers are focusing on, or will focus on, when it comes to AI in the workplace. One is recruiting - ensuring there is no bias in hiring.
"A number of countries or cities have rules and walls in place to make sure that if there's historical bias occurring, you're not then training the models on that same bias, which leads to bias occurring in the AI," he said.
These regulations, he added, don't just apply to basic use cases like resume sorting. AI in hiring has advanced to the point where companies can use AI avatars for the first round of interviews with candidates, especially for bulk or volume hiring.
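To make the concern concrete, here is a minimal sketch of the kind of check regulators and auditors often have in mind for hiring outcomes: the "four-fifths" adverse-impact test. The data, group labels, and threshold below are invented for illustration and are not drawn from Deel or the interview.

```python
# Hypothetical illustration of an adverse-impact ("four-fifths") check on hiring outcomes.
# All records, group names, and the 0.8 threshold are example values only.

from collections import defaultdict

# Each record: (demographic_group, was_selected)
HIRING_OUTCOMES = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(outcomes):
    """Share of applicants selected, per demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in outcomes:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def adverse_impact_ratio(outcomes):
    """Ratio of the lowest to the highest group selection rate.
    A value below 0.8 is a common red flag, not a legal determination."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values()), rates

if __name__ == "__main__":
    ratio, rates = adverse_impact_ratio(HIRING_OUTCOMES)
    print("Selection rates:", rates)
    print(f"Adverse-impact ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("Below the four-fifths threshold: review the model and its training data.")
```

The same check applies whether the selections come from human recruiters or from a model trained on their past decisions, which is precisely why historical bias in the training data matters.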
The second area is performance management, for similar reasons. Given access to annual or quarterly performance review data, AI can help managers with recommendations on performance ratings, compensation adjustments, and even eligibility for promotions.
"There's a very near future in which you're outsourcing some of that decision making, because it's it's removing the qualitative aspects and coming to conclusions that are not related to how much you like the person," Catino predicted. "But what if, similar to recruiting and hiring, there's bias involved in how you have the model set up?"
The third area is data privacy, especially with the amount of data that AI takes in and the ubiquity of the ways in which it gathers that data.
"How do you ensure you're maintaining the privacy of others and that you're not violating their rights?" he questioned. "AI potentially could have eyes and ears on all the conversations you're having. And I would say some existing rules already cover this, including consumer protections around data privacy and whistleblowing."
He predicts that in another couple of years, we will see real use cases for how AI impacts the workplace, and only then will the policy debate start to catch up - because policymakers and policies are reactive.
That reactive nature, along with the archaic state of many labour laws, is why Catino thinks business and industry have to lead the way in ensuring AI is ethical. In the absence of proactive laws, everyone has to rely on the large AI developers to uphold ethics and governance. And if the large developers are not in that headspace? Then the pressure must come from their largest users to put them there.
"You want [the developers] to be thinking about what are the right ethics and governance and policies in place on the front end while they're building," he said. "Not at the rollout, and not like a check-the-box compliance exercise. By then, it's too late."