Hallucination is a symptom of illness: seeing things that are not there, or touching or smelling things that do not exist. Generative AI tools, including ChatGPT, hallucinate when they generate confident responses that turn out to be untrue, incorrect or not backed by facts and real-world data. Left untreated, AI hallucinations can lead to the use and spread of misinformation, which can damage personal and corporate reputations or, worse, disrupt law and order. Incorrect information can also have a domino effect, with far-reaching consequences for society and people. Hence the clarion call by policy makers, tech executives and government leaders around the world to introduce guardrails to make AI safer.

The memory of the Cambridge Analytica social media scandal remains fresh. Beginning in 2014, the British consulting firm developed psychological profiles by harvesting the personal data of tens of millions of users of Facebook (whose parent company is now known as Meta). These profiles were used for targeted advertising and other data-related services, with widespread impact in many areas, including the 2016 US presidential campaign and the Brexit referendum of the same year. Social media is a good tool for connecting people, but it can also be used for harm. By the time governments and policy makers acted to curb social media, it was too late. The damage had been done.

Jump to the current environment. Generative AI is the current "hotness" in the tech industry. Reports of incorrect information provided by generative AI tools, including ChatGPT, have begun to appear. Last month, a New York lawyer in the US cited fake cases generated by ChatGPT in a legal brief filed in federal court and may face sanctions as a result. Surely, more incorrect information and misinformation emanating from generative AI will soon emerge. Better to control it at the beginning rather than later.

Making AI safer is complex. It could mean licensing companies which offer generative AI tools like chatbots. Licensing would offer governments oversight into what AI tools are built and how they are deployed. In turn, this would guide the development of AI for the public good. However, the greater challenge is that some companies may ignore the licensing requirements.

Another way is to monitor the data centres where the generative AI foundation models reside. Since the models require huge amounts of compute resources, data centres could be mandated to report when a generative AI provider or user consumes more than a certain level of compute. Apart from hardware improvements which could reduce the computing resources needed, monitoring would require international cooperation to be effective. With generative AI appearing everywhere and in everything, the enforcement of licences and the monitoring of data centres would quickly become onerous and unfeasible.

Why guardrails for responsible AI are critical

The key concern is to find ways to understand what goes on under the hood of generative AI applications, so that any mistakes can be corrected before they are widely rolled out. Evaluating foundation models could be one way. It is a demanding task because of the fast-growing number and variety of foundation models used to train generative AI-based chatbots. ChatGPT, for instance, is trained on a foundation model that has about 175 billion parameters. Some models are trained on a fraction of that number because they are meant for niche applications. Some models may contain only company-specific information, while others may be industry-based.
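To make the idea of evaluation concrete, below is a minimal sketch of what one such test might look like. It assumes a vetted set of reference questions and answers and simply measures how often a model's responses match them, broken down by category so that uneven performance stands out. The names used here (ReferenceItem, evaluate) are illustrative only and are not part of AI Verify or any other real toolkit.

```python
# Hypothetical sketch: scoring a model's answers against a vetted reference set
# and breaking the results down by category to surface uneven performance.
# All names here are illustrative, not part of any real evaluation toolkit.

from collections import defaultdict
from dataclasses import dataclass


@dataclass
class ReferenceItem:
    prompt: str      # question posed to the model
    expected: str    # answer verified by a human reviewer
    category: str    # e.g. "legal", "medical", "general"


def evaluate(model_answers: dict, references: list) -> dict:
    """Return the share of correct answers per category."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for item in references:
        total[item.category] += 1
        answer = model_answers.get(item.prompt, "")
        if answer.strip().lower() == item.expected.strip().lower():
            correct[item.category] += 1
    return {cat: correct[cat] / total[cat] for cat in total}


# Example: a large gap between categories would flag the model for closer review.
refs = [
    ReferenceItem("Capital of France?", "Paris", "general"),
    ReferenceItem("Year of the Brexit referendum?", "2016", "general"),
]
print(evaluate({"Capital of France?": "Paris"}, refs))  # {'general': 0.5}
```

A real governance framework would of course go far beyond exact-match scoring, covering fairness, robustness and security tests, but the principle is the same: measurable checks run before an application is widely rolled out.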
Additionally, different providers of chatbots have their own ways of operationalising the technology. There will be chatbots with different ways of making conversation and creating images, all of which have a lot of potential for bias.

Testing the underlying code for responsible AI

For evaluation to work, there need to be testing methods within an agreed framework that is acceptable to both developers and users. A neutral third party can serve this purpose. It will provide a platform for collaboration and idea sharing. It will be a place where standards, frameworks and best practices can be developed to ensure that the new technology is used responsibly and in a trusted manner.

The setting up of AI Verify and the AI Verify Foundation by the Infocomm Media Development Authority (IMDA) fits this purpose. AI Verify is an AI governance testing framework and software toolkit unveiled last year to help companies demonstrate responsible AI in an objective and verifiable manner. It is a validation system, enabling developers to test their applications against expected impact, reveal potential biases, and check for accuracy, fairness and security. More than 50 local and multinational companies, including UBS, Hitachi, Singapore Airlines and IBM, are interested in working with AI Verify, which has open-sourced its tools to drive adoption.

The AI Verify Foundation, announced in early June, takes this concept one step further. It aims to harness the collective power and contributions of the global open-source community to develop AI testing tools for responsible AI. What is vital is that the AI Verify Foundation is a public-private sector collaboration, acting with the voice of policy makers and industry. Its seven premier members, namely Aicadium, Google, IBM, IMDA, Microsoft, Red Hat and Salesforce, will collectively set the strategic direction and development roadmap of AI Verify. Currently, the Foundation has more than 60 general members.

To create a stronger voice, the Foundation should invite other governments and think-tanks to join it. Then it will truly have a global voice to bring AI hallucinations under control. In the absence of global governance of generative AI, the work of AI Verify and its Foundation is meaningful for industry and the economy. The added benefit is that Singapore will be recognised as a thought leader in AI.

Grace Chng is the executive producer of transformlife.sg

#AI #generativeAI #chatbots