Are Rules the Tools to Make AI a Force for Good?

Article

by Gary Saw, Laiye APAC General Manager







While innovation has the potential to transform people’s lives, it can have unintended, sometimes harmful, consequences if it is hastily adopted and improperly regulated. 


The global pandemic has accelerated digital transformation across all sectors, sparking more conversations about ethics and regulation in tech. News of lawsuits over data protection and privacy, and the blistering pace at which Big Tech firms are acquiring other companies to consolidate their power, have ensured this remains a hot topic.


In this article, Gary Saw, Laiye APAC General Manager, discusses how technology can be regulated to serve the broader social good, solutions to mitigate the pitfalls of skyrocketing growth, and the challenges ahead. 


When designing and building artificial intelligence (AI) products and services, how can we ensure that they serve, rather than harm, the broader social good?


GARY SAW:

When building responsible and ethical AI, the algorithms themselves are central. As we hand technology responsibility for weighty decisions, such as driving cars, informing criminal justice outcomes, or assessing job candidates, we are letting algorithms become a massive part of our lives.


Algorithms are the sets of rules that computers follow to solve problems and make decisions. While powerful, they can also be biased, often because of the quality of the training data used to teach the AI: it can be poor, erroneous, or have human bias baked in.
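To make that concrete, here is a minimal, hypothetical Python sketch (the data, model, and feature names are invented for illustration, not drawn from any real system) of how a model trained on historically skewed hiring data reproduces that skew in its own recommendations:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(0.0, 1.0, n)   # a genuine ability score
group = rng.integers(0, 2, n)     # a protected attribute: 0 or 1

# Historical labels: past hiring favored group 0 regardless of skill.
hired = skill + 1.5 * (group == 0) + rng.normal(0.0, 1.0, n) > 1.0

# Train on the biased history, protected attribute included as a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# At identical skill, the model now recommends group 0 far more often.
for g in (0, 1):
    X_equal = np.column_stack([np.zeros(1_000), np.full(1_000, g)])
    print(f"group {g}: recommendation rate {model.predict(X_equal).mean():.2f}")
```

The model has done nothing "wrong" statistically; it has faithfully learned a biased history, which is exactly why data quality and diversity matter before a single line of modeling code is written.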


The algorithms behind these decisions are often a black box, making it difficult to hold their creators accountable. That is why we need to drive towards Algorithmic Accountability and Explainable AI.
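As one illustration of what "explainable" can mean in practice, the sketch below (again with invented data and feature names) uses permutation importance, a standard model-agnostic technique, to measure how much each input feature actually drives a model's predictions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(5_000, 3))   # hypothetical columns: skill, tenure, group
y = X[:, 0] + 0.2 * X[:, 1] + rng.normal(size=5_000) > 0

model = LogisticRegression().fit(X, y)

# Shuffle each feature in turn and record how much accuracy degrades.
scores = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["skill", "tenure", "group"], scores.importances_mean):
    print(f"{name}: importance {score:.3f}")   # 'group' should land near zero
```

Audits like this do not open the black box entirely, but they give regulators and internal reviewers a concrete, repeatable artifact to hold algorithm creators to.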


In building algorithms, besides using sufficient, high-quality data that reflects the diversity of the people affected and avoids potential discrimination, there needs to be a sound governance framework and a human element involved. The responsibility falls on tech companies to develop internal governance structures and measures that ensure algorithmic accountability. They also need to establish the level of human involvement required in AI decision-making, as in the sketch below, and to address operations management and the stakeholders involved.
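One common way to build in that human element is a confidence gate: the model decides automatically only when it is sufficiently sure, and everything else is escalated to a person. A hedged sketch, assuming a hypothetical threshold and an sklearn-style model exposing predict_proba():

```python
# One human-in-the-loop pattern: automate only confident predictions
# and escalate the rest. The threshold and queue are design choices.
CONFIDENCE_THRESHOLD = 0.90

def decide(model, features, review_queue):
    """Return an automatic decision, or escalate to a human reviewer."""
    proba = model.predict_proba([features])[0]
    if proba.max() >= CONFIDENCE_THRESHOLD:
        return "approve" if proba[1] >= proba[0] else "reject"
    review_queue.append(features)   # a human makes the final call
    return "escalated_to_human"
```

Where the threshold sits is itself a governance decision: lower it and more rides on the model; raise it and humans stay in the loop for more cases.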


Besides self-regulation, do you think that governmental regulation is necessary too? 


GARY SAW:

Yes, definitely. The Covid-19 pandemic has dramatically entrenched the monopoly positions of many Big Tech companies as we have become more reliant on the technologies those few firms produce. The pandemic has brought behavioral changes: what used to be tools of convenience have become necessities. Smartphones are needed for tracking and contact tracing, and laptops and tablets have become essential for remote working.


With this, the impetus to rein in the power of Big Tech is greater than ever. The growing consolidation among global tech companies calls for governments around the world to step in and protect users from privacy invasions and disinformation. Traditional antitrust policies need to be revisited so they can regulate network-based platforms, and they need to be applied swiftly and nimbly enough to keep up with the lightning pace at which these markets evolve.


What is the biggest challenge of AI regulation? 


GARY SAW:

AI has already become pervasive in our lives. It is used for everything from facial recognition to powering the Intelligent Automation tools that help businesses and people work better and live better.


While it is necessary to regulate AI, the key challenge is working out how to do so effectively without aggressive rules that stifle the growth of new technologies. Only by finding this balance can we help humankind continue to evolve and improve.

