The Horse Has Bolted

The move of current forms of artificial intelligence into the mainstream has happened so fast that it is an effort just to keep up – and even more difficult to opt out of it entirely. When virtually every service connected to the Internet – and some that aren’t – is offering some sort of (supposedly) AI-enabled feature, it’s clear that the horse has bolted. The challenge now is to make sure we use these technologies responsibly, without negative impacts on our society and community.

The debate on that continues to rage. For example, many have raised concerns over the heavy operational requirements of AI data centers and the risks they pose to access to clean water. Grok, the AI platform of Elon Musk’s X, is now being scrutinized for its role in generating sexually explicit images without the consent of the people who appear in them. There are, of course, questions about the role generative AI plays in the spread of misinformation and the muddling of public discourse.

Perhaps surprisingly, there’s been little talk – or, if there’s been more than that, it’s definitely hushed – about how the expanding capabilities of AI pose a risk to certain jobs. It’s not just the creatives who have been protesting loudly over how their works are being mined for AI models that would ultimately take their livelihoods away. Of course, there’s the other side of AI: how it speeds up administrative and clerical tasks and frees up the user to engage in deeper forms of processing and working. But I have encountered examples of people who have stopped hiring new employees because they found that current AI platforms allow them to do the work humans would have done, at a much lower cost. And soon, it won’t just be these clerical tasks, but certain decisions that used to be the domain of humans alone. For example, in other countries, there’s scrutiny over rent management software using AI to dynamically adjust the rates tenants pay – carrying over class and race biases to a new degree.

But on a policy level, no one is quite sure how to approach these challenges. South Korea has taken a stab, with the enactment of the AI Basic Act. In its bid to be recognized as one of the world’s AI leaders, it will force companies to (a) more clearly identify works generated by AI; (b) conduct risk assessments and document decision-making processes for “high-impact AI” systems, such as those used for medical purposes; and (c) provide safety reports for AI models that meet a certain threshold.

The second item is of potential interest to us across the supply chain. The emergence of agentic AI – systems designed to perform complex series of tasks with limited to no human supervision – is rightly seen as a game-changer as businesses seek to reduce costs and continue to deliver value to shareholders. The technology is certainly capable, and the speed of development means any assessment could be woefully out of date by the time you read this. This could threaten not just entry-level positions but also higher-ranking ones, as – for example – the demand planning, route planning, or warehouse optimization decisions that humans would usually make could be made faster by a server in a data center somewhere else on the planet. And even then, excessive reliance on these agents could, as things stand right now, result in unforeseen (and possibly compounding) errors that are not caught early because of the reduction or elimination of human oversight.

Businesses have long recognized the crossroads that lies ahead. How far do we go with existing and emerging technologies to reduce costs? Despite stated commitments to our employees, the reality is that sustaining the business will mean some difficult decisions – and that time will come sooner than we think.

Policy makers may not be as ready for what lies ahead. We talk a lot about reskilling and upskilling as a response to how technology continually disrupts the norm. But, again, the pace of change means we may be training people for “future jobs” that won’t exist once they’re ready. And what of today’s students, tomorrow’s members of our labor force? Recent revelations about how woefully unprepared our students are when it comes to basic literacy should very much be a concern.

Not every country may be able to lead the world when it comes to regulating AI, but perhaps the task should be to make sure we can co-exist with it – and a lot of that will come down to policy. How can we preserve the dignity of our citizens when the social contract of working for your keep breaks down? In the UK, there is talk of introducing a universal basic income to protect workers in industries that are being severely disrupted by AI. We may not be best equipped to do something like this – but considering how reliant our economy is on private consumption, well… one horse may have bolted, but we may have others at the ready.

Henrik Batallones is the marketing and communications director of SCMAP, and editor-in-chief of its official publication, Supply Chain Philippines. More information about SCMAP is available at scmap.org.