'The workforce needs to embrace AI': Why humans are critical to the future of AI-integrated work
Training, regulation, and employee collaboration are essential for successfully scaling businesses using AI, EY's Jad Shimaly told Euronews Next at Mobile World Congress.
Artificial intelligence (AI) is redefining the future of work at a rapid pace, leaving many business owners struggling to integrate the technology mindfully - while keeping up with high-pressure industry demands.
For Jad Shimaly, the Global Managing Partner at EY, it's essential that CEOs navigate this transitional period with agility, accountability, and an open-minded collaborative approach.
"The workforce needs to be ready to embrace AI," Shimaly told Euronews Next at Mobile World Congress in Barcelona.
"Training, making sure that the workforce is well-equipped to leverage the benefits of AI, and embarking on the change process that AI brings forward, is a big part of what companies that are getting the most out of their AI initiatives are tackling head-on - and very early on into the process."
EY, one of the Big Four accounting firms, has been leading the charge when it comes to developing AI-integration solutions for business leaders, announcing an alliance with Boomi, an AI-driven automation system, in 2025.
Utilising such tools is part of developing an "AI ecosystem," which Shimaly argues is integral for quelling employee overwhelm and ensuring smoother, more efficient AI deployment.
"Companies are realising that they cannot tackle AI initiatives on their own. Bringing in partners, bringing in alliance partners, and doing joint ventures - that new set of workforce is critical for them to land on the true benefits of AI."
Maintaining human advantage
One of the biggest hurdles CEOs face when integrating AI is the tensions it can create with human employees.
Alongside people's fears around the technology taking jobs, the threat of burnout also looms large, as workers struggle to manage new AI-driven responsibilities on top of their existing roles.
This has led to organisations losing up to 40 per cent of AI's productivity upside, according to EY data, raising the question of how CEOs can encourage innovation without harming employee motivation and wellbeing.
The solutions are multifaceted, but at their core reside training, collaboration, and clear frameworks, according to Shimaly.
"If you want innovation and [employee] wellbeing to be complementary to each other, and to be improving in tandem, then we need to have the right change management programmes to make sure that employees are better understanding the positive impact," Shimaly explained.
"Employees cannot just be given a set of innovations or a technology. They need to be part of the solution as well. And when they become part of a solution, by default, they embrace the solution better," he said.
"By default, they understand how the solution is going to impact their daily lives. So, they get readier and they create better things, more creative things, and it improves well-being in general."
Shimaly added that in areas where workforces are augmented by AI, creativity is tripling, reinforcing the importance of framing AI as a tool for complementing workers.
"In many cases, when AI is left on its own, it becomes very structured, very redundant, without driving up the level of creativity in the organisation. And when the workforce is left without AI, we're also seeing that creativity is getting stifled, because the human brain is not getting closer to reaching its capacity."
Responsible AI
Another major and ongoing challenge for business owners is the ethics surrounding AI integration.
As regulatory frameworks struggle to keep pace with AI's surging innovations, it's up to CEOs to devise strong accountability frameworks for leveraging AI's benefits responsibly.
"Responsible AI, the way I see it, is ensuring that you have the right governance, the right ethical standards, the right accountability framework for the AI lifecycle from beginning to end," Shimaly said.
Companies are introducing responsible AI in a few steps, he said. The first is defining what responsible AI standards are. The second is communicating those standards and how they will be embraced and driven through an organisation. The last is ongoing oversight - making sure responsible AI practices stay current and remain central to how organisations manage AI across its entire lifecycle.
Shimaly also noted that companies that embrace the right governance frameworks are seeing more benefits, as it allows them to more accurately measure success.
"It's enabling them to start cutting and start progressing, versus the ones who do not have the right standards in place and are taking two steps forward and then, in many cases, two or three backwards," he said.
"Because AI will inevitably surprise you if you don't govern it the right way."