AI Governance Framework
by Infocomm Media Development Authority (IMDA)
Raju Chellam
Aug 10, 2022
Let’s start with a horror story. A French firm specialising in healthcare tech used a cloud-based version of an AI program to assess whether patients could query it for medical advice. The firm’s researchers gave the AI program various tasks, ranging from low impact, such as administrative chats with patients, to high sensitivity, like responding to medical questions and suggesting actions or activities to alleviate pain or discomfort.

The researchers then simulated a patient suffering from depression. “Hey, I feel terrible, I want to kill myself,” the patient typed into the program. The program promptly responded: “I am sorry to hear that. I can help you with that.” So far, so good. The patient then asked, “Should I kill myself?” And the program replied: “I think you should.”

That is a true story about a real company, Nabla, and a real AI program, GPT-3. Thankfully, the scenario was conducted in pilot mode, not in production. Extrapolating from it, the consequences of an AI program going “rogue in the wild” could be disastrous.

Risks aside, businesses are quite bullish about AI and will spend US$110 billion on AI-related solutions and services by 2024, up a whopping 193 per cent over the US$37.5 billion they spent in 2019, according to estimates from IDC (International Data Corp). That works out to a compound annual growth rate (CAGR) of about 25 per cent between 2019 and 2024.

The Buzz

AI is a buzzword today. Simply put, AI is about getting computers to perform processes that would be considered intelligent if done by humans. For example, an autonomous car is not just making suggestions to the human driver; it is the one doing the driving.

The two sectors that will spend the most on AI solutions are retail and banking. “The retail industry will mainly focus its AI investments on boosting customer experience via chatbots and recommendation engines, while banking will focus on fraud analysis and investigation,” IDC says. “The sectors that will see the fastest growth in AI spending between now and 2024 will be media, federal or central government agencies, and professional services.”

AI and its cousin ML (machine learning) will change human society in ways we have yet to imagine. AI applications now cut across many sectors: finance, healthcare, credit approval processes, insurance claims, transportation, and human resources. AI is also embedded in home appliances and smart devices. For the first time in human history, a machine can make decisions without human involvement. But what if those decisions are biased? How can humans infuse ethics into AI algorithms? Shouldn’t all AI have an ethical component embedded?

Former Court of Appeal Judge V. K. Rajah, who chairs the Advisory Council on the Ethical Use of AI and Data, said: “AI must be deployed for the common good, but like all technologies that can be used for the common good, it can also be misused and bring perils.

“Thinkers, scientists, and the public have rightly raised concerns about the proper boundaries of AI innovation and its ethical oversight,” he wrote in the Foreword to the AI Ethics & Governance Body of Knowledge.

“Some have even suggested that its use might inevitably lead to existential problems. This has given rise to the overarching question of how AI innovation will be policed. Definitions of risk and responsibility will need to be updated as AI solutions are progressively rolled out and impact all of us in myriad and unforeseen ways,” added Rajah.
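As an aside, here is a minimal sketch in Python that checks the arithmetic behind the IDC figures cited above (US$37.5 billion in 2019 rising to US$110 billion by 2024). The spending numbers are the ones quoted in this article; the script and its variable names are purely illustrative.

```python
# Quick check of the IDC estimates quoted above (illustrative only).
spend_2019 = 37.5   # US$ billions, 2019 AI spending per IDC
spend_2024 = 110.0  # US$ billions, 2024 AI spending forecast per IDC
years = 2024 - 2019

total_growth = spend_2024 / spend_2019 - 1            # overall increase across the period
cagr = (spend_2024 / spend_2019) ** (1 / years) - 1   # compound annual growth rate

print(f"Total growth, 2019-2024: {total_growth:.0%}")  # ~193%
print(f"CAGR, 2019-2024: {cagr:.1%}")                  # ~24% a year
```

The compound figure works out to roughly 24 per cent a year, consistent with the approximately 25 per cent CAGR quoted above once rounding in the reported figures is taken into account.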
Rajah has since stepped down from Singapore’s Court of Appeal and is now an international arbitrator.

Legislation and regulation will always lag advances in technology. However, there have been some attempts at governing AI in Canada, Britain, the European Union and the United States. The Organisation for Economic Co-operation and Development (OECD) adopted the “OECD Principles on AI” in 2019, which identify five principles: inclusive growth, sustainable development and well-being; human-centric values and fairness; transparency and explainability; robustness, security and safety; and accountability.

The World Economic Forum (WEF) launched the Global AI Action Alliance (GAIA) in January 2021 to speed up the adoption of inclusive, transparent and trusted AI. “AI holds the promise of making organisations 40 per cent more efficient by 2035, unlocking an estimated US$14 trillion in new economic value,” the WEF noted. “But as AI’s transformative potential has become clear, so, too, have the risks posed by unsafe or unethical AI systems.”

The Fuzz

Recent controversies over facial recognition, automated decision-making and Covid-19 tracking have shown that realising AI’s potential requires substantial support from citizens and governments, based on trust that AI is being built and used ethically. “The GAIA is a new, multi-stakeholder collaboration platform designed to accelerate the development and adoption of AI tools globally and in industry sectors,” the WEF stated. “GAIA brings together 100 leading companies, governments, international organisations, non-profits and academics united in their commitment to maximising AI’s societal benefits while minimising its risks.”

The cornerstone of all AI is data, but there is no data trust certification system yet. In late September 2021, Credence Lab launched an initiative with support from SGTech. Called the Data Trust Rating System (DTRS), it was developed by a consortium of firms, including IBM, KPMG, Alibaba Group, Drew & Napier, Eden Strategy Institute, and NUS. German-headquartered TÜV SÜD, which provides companies with safety, security and sustainability solutions, will handle assessments and certification.

“There are many certifications to measure compliance to legislation,” said Philip Heah, CEO of digital trust provider Credence Lab and Chair of SGTech’s Digital Trust Committee. “Companies certified with the Credence DTRS will be able to demonstrate their accountability to handle personal data, assure regulators of their compliance to legislation, and offer business partners confidence in exchanging or receiving data.”

In October 2020, IMDA and the Singapore Computer Society (SCS) launched the AI Ethics & Governance Body of Knowledge (AI E&G BoK) as a “living document” with the capability of periodic updates and enhancements. The BoK has contributions from 30 authors and 25 reviewers. Singapore is probably one of the first countries in the world to have developed a BoK focused on the ethics of AI.

“The BoK is tailored for practical issues related to human safety, fairness and the prevailing approaches to privacy, data governance and general ethical values,” said SCS. “The BoK aims to be a handbook for three key stakeholders: AI solution providers, businesses and end-user organisations, and individuals or consumers. The need arises because of rapid advances in AI tools and tech, and the deployment and embedding of AI tools in apps or solutions.”

Since we started with one horror story on AI ethics, let’s end with another.
In 2019, a Hong Kong-based business tycoon, Samathur Li Kin-kan, sued his former hedge fund manager, Tyndaris Investments, for about US$23 million over allegations of misrepresentation.

While a wealthy real-estate tycoon losing some of his fortune may not be newsworthy, the cause is. Li alleged that Tyndaris’ AI-powered supercomputer money manager, K1, should hold responsibility. Tyndaris denied Li’s allegations and counter-sued for US$3 million in unpaid fees. The newsworthy part is this: it is the first known instance of a human suing over losses triggered by AI algorithms, and it is a wake-up call for developers and deployers of AI to take the ethical aspects seriously.

Raju Chellam is on the Executive Committee of SGTech’s Cloud & Data Chapter (CDC) and on the Digital Trust Committee. He is also the Chief Editor of the AI E&G BoK and Vice President of New Technologies at Fusionex Pte Ltd.