Artificial intelligence (AI) is making a deep impact on countries, societies and people. ChatGPT, the latest iteration of AI tools, was launched at the end of 2022 and shows no signs of slowing down. While such a powerful tool can cause havoc if abused, it can also enhance our lives and support the progress and innovation of countries and societies. In this Q&A interview, privacy and data governance expert Kevin Shepherdson, CEO and founder of Straits Interactive, highlights the two faces of generative AI: it can augment human intelligence, but it can also cause great harm to businesses and society. The interview is in two parts; the second part will be published on June 22, 2023.

Why is ChatGPT such a game-changer?

In ChatGPT, we have a language model so powerful that we can have human-like conversations on any topic, using it to generate all kinds of text, including poetry, and even to summarise and analyse content. We can literally experience and feel the presence of this technology. It has the potential to revolutionise the way people interact with AI, making it an invaluable tool for businesses and individuals alike. The beauty lies in its simplicity: anyone, anywhere, can embrace it, and it can be applied to almost anything. Organisations are starting to use ChatGPT, fine-tuning it for specific applications, which allows for a higher level of personalisation and relevance in its responses.

A few Sundays ago, I decided to use ChatGPT during my church's mass service. Asking it to explain and summarise the main biblical themes from the verses behind the service readings seemed too easy, so I went one step further and asked it to create a customised prayer, based on the Sunday readings, to share with my friends. I was in absolute awe.
It is this ability to seemingly "think" and reason with speed and conciseness that makes it a game-changer.

There were other chatbots before ChatGPT, but were they easy to use?

Before ChatGPT, chatbots were a pain to deal with, especially in customer service situations. Older chatbots typically required customers to go through a series of steps or processes to get to the essence of what they were trying to find out. Fast forward to today: any developer, even without knowledge of AI, can build his or her own chatbot and create a personalised experience for users. This is why ChatGPT represents a significant leap forward in the development of AI language models.

How could it open users to potential risks? What sort of risks are there?

Anyone using ChatGPT needs to be aware of potential biases, and of the fact that the model is only trained on content up to September 2021. Moreover, the responses from ChatGPT are only as good as the quality of the questions asked. This is similar to the way the human mind applies "generalisation", "deletion" and "distortion" to make sense of the vast amount of sensory input it receives daily. These processes help us filter and simplify information, allowing us to function efficiently in a complex world. However, they can also affect communication, sometimes leading to misunderstandings or misinterpretations.

First, generalisation. Since ChatGPT has been trained on a massive amount of data, it may "generalise" if users do not provide a specific context, producing content that is inaccurate or too general to be useful.

Second, deletion. This is the process by which the human mind filters out or omits aspects of sensory input so that we can focus on what we consider most relevant or essential. With ChatGPT, deletion can lead to the loss of important context or details in communication. For example, my son wanted me to turn off the air-conditioning in the car.
He said: "Papa, I am cold", with the deleted or unspoken phrase "please turn off the aircon". Of course, I understood him, but if he were to key the same prompt into ChatGPT without context, it would respond with advice on how to keep warm, like wearing warmer clothes. ChatGPT must therefore be given specific context so that it can understand the intended meaning. Otherwise, both "generalisation" and "deletion" mean that ChatGPT may be misdirected, biased, or unable to provide an accurate source of truth. With this in mind, users should always ask ChatGPT to cite references for validation, or do the necessary fact-checking themselves.

Third, distortion. This presents the biggest risk to users. Distortion is the process by which the human mind alters or modifies sensory input to create new interpretations, perspectives or meanings. Ironically, ChatGPT mimics this uniquely human filtering process to the extreme, leading to what is now commonly known as "hallucination": ChatGPT may make up its own facts and confidently proclaim them as answers. It is important for users to learn how to frame the right prompts to elicit more accurate answers.

Many users are also unaware of the biases that may be embedded in their conversational exchanges with ChatGPT. For instance, when using ChatGPT to research a topic, unless it is asked for pros and cons or alternative perspectives, it will only provide the specific answers requested, ignoring other potentially relevant angles or viewpoints.

Are there privacy and security risks to ChatGPT and other generative AI tools?

OpenAI, the creator of ChatGPT, states on its website that its mission is "to ensure that artificial general intelligence (AGI) benefits all of humanity" and to make it safe over the long term. While this is reassuring, there are regulatory concerns about ChatGPT's privacy practices. For example, Italy banned the use of ChatGPT pending changes to OpenAI's privacy policy.
Italy did this to ensure better alignment with the requirements of the European Union's General Data Protection Regulation (GDPR). The Italian authorities stated that OpenAI must inform users in Italy of "the methods and logic" behind the processing of data necessary for ChatGPT to operate, something that was not revealed in its privacy policy when ChatGPT launched. In addition, the company must provide tools that enable people whose data is involved, including non-users, to request the correction of their information or, if correction is not possible, its deletion. Other requirements include introducing an age verification system capable of excluding children under 13.

Every developer is rushing to build new tools and services that leverage ChatGPT. What are the consequences of this trend?

My bigger concern is clone apps: tools and services built by developers and startups using OpenAI's API to access the functionality of ChatGPT. Examples include Replika. These clone apps' business models often revolve around personalised advertising, which relies heavily on processing personally identifiable information (PII). Developers of clone apps have varying levels of expertise in AI, and varying awareness of the privacy and security expectations of the community and of users. Privacy and security may not even be a priority for them, and they may lack the competencies to implement robust data protection measures. This creates potential risks for organisations sharing corporate data with these clone applications, even when they are not sharing PII.

In our research, we found over 100 mobile apps in the Google Play Store that use the ChatGPT API, all with "ChatGPT" in their names. Besides offering a broad range of innovative chat services, these apps offer features that let users upload corporate data, including business plans, policies and spreadsheets, so that ChatGPT can, for example, analyse or summarise the documents for users.
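The upload-and-summarise flow described above can be sketched in code. This is a hypothetical illustration, assuming the 2023-era `openai` Python package; the model name and prompt wording are assumptions, not the code of any actual app. The point it makes is that the full document text is embedded in the request, so it passes through the clone app on its way to the API.

```python
# Hypothetical sketch of a clone app's "summarise my document" feature:
# the app receives an uploaded document and forwards its full text to the
# ChatGPT API. Model name and prompt wording are illustrative assumptions.

def build_summary_request(document_text):
    """Assemble the ChatCompletion payload a clone app might send.

    The entire uploaded document is embedded verbatim in the request,
    so the app developer (and any logging they do) sees it in full.
    """
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": "Summarise business documents."},
            {"role": "user", "content": document_text},
        ],
    }


def summarise(document_text):
    # Imported lazily; calling this requires the `openai` package
    # and a configured API key.
    import openai

    response = openai.ChatCompletion.create(**build_summary_request(document_text))
    return response["choices"][0]["message"]["content"]
```

Because `build_summary_request` embeds the verbatim document, any retention or logging by the app sits entirely outside the uploading organisation's control.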
One significant concern is the potential for privacy and confidentiality violations when using such clone apps, since they have full access to the corporate data. A clone app can mishandle or misuse the information, and if its developers do not have robust data protection practices, the corporate data could potentially end up being publicly available to future users of the clone app.

In part 2, Kevin Shepherdson will discuss the steps organisations can take to protect against the risks of ChatGPT and generative AI tools.

Additional contributions by Lynette Hooi, a freelance writer. Grace Chng is the executive producer of