Kevin Shepherdson, CEO and Founder of Straits Interactive
Generative AI tools like ChatGPT offer many benefits. In the first part of this Q&A interview on the benefits and risks of generative AI, Kevin Shepherdson discussed the emergence of clone apps: new tools and services from developers and startups that leverage ChatGPT’s features but lack data privacy and protection safeguards.
In this second part of the interview, privacy and data governance expert Shepherdson outlines the steps organisations can take to remediate the risks of this new technology.
What are the key considerations organisations and users must take to manage the risks from generative AI?
It is critical for organisations to carefully assess their risk tolerance and weigh the potential benefits of AI integration against the potential privacy and security risks associated with sharing their data, including confidential corporate data.
There are several factors to consider before jumping on the ChatGPT bandwagon. One key factor is to read the fine print of the data processing and security policies. If the tool is built by a single individual or a new start-up, exercise more caution. Be prepared to abandon any app or tool that does not provide clear and understandable data and security policies.
The level of data privacy and security depends on several factors, such as the app or software provider’s reputation and trustworthiness, as well as its data handling policies. Here, it is important to examine the app’s purpose for processing information. If, for example, the data is used for personalised advertising, organisations must check whether the app complies with regulations such as the PDPA (Personal Data Protection Act) in Singapore or the GDPR (General Data Protection Regulation) in the European Union.
Pay attention to the privacy declaration of the generative AI provider. This declaration is listed in the “Data Safety” section of the Google Play Store or in the “Privacy Nutrition Label” of the Apple App Store. Our research found that many of these apps do not truthfully declare their data collection practices: although they claim to collect no personal data, we were bombarded with personalised ads soon after we began using them.
On the organisation’s part, it is imperative to draw up a governance policy on the use of generative AI apps and tools. The policy should clearly set out the rules employees have to follow. These rules should cover employees’ use of ChatGPT and other generative AI tools: the purposes for which they may be used, the limits on sharing confidential corporate data with external parties, and the approval process for adopting the technology.
The organisation must also have policies on the ethical and responsible use of AI. For example, a German magazine recently promoted an “interview” with a famous sportsman whose family fiercely protects his privacy. The interview was fake: it was generated by AI, which fabricated “quotes” about the sportsman’s health and family. That is surely an unethical and irresponsible use of AI, and it had real-world consequences. A few days later, the magazine’s publisher apologised to the family and the magazine’s editor was fired. The magazine still faces potential legal action from the sportsman’s family.
So what is the imperative first step organisations must take before adopting any generative AI tool?
The first step is to thoroughly review the provider's privacy policy and terms of use. These documents outline the provider's data handling practices, data retention policies, data sharing agreements, and other critical aspects of their service. Users often overlook the key areas in these documents that may have significant implications for their data's security and privacy.
When examining the terms of use, pay attention to language that grants the provider broad rights to your data, including images, voice, and video recordings. For example, beware of a statement like this:
"For all user content that you submit to the Site, you hereby grant us (and those we work with) an assignable, sub-licensable, royalty-free, fully paid-up, worldwide licence to use, exploit, host, store, transmit, reproduce, modify, create derivative works of, publish, publicly perform, publicly display, and distribute such content". We found similar statements in quite a few apps we surveyed.
What does this statement imply? It gives the provider extensive rights to use, modify, and distribute your content without any restrictions or compensation. Such broad licensing terms may expose your data to potential misuse or unauthorised access, especially if the provider works with third parties that have lax security or even have questionable ethical and privacy practices.
How can data protection models keep up with such risks?
Current data protection models do incorporate rules or principles that govern how personal data should be collected, used, disclosed, transferred, stored, or disposed of. These requirements, which also apply to AI systems, are reflected in data protection laws like the PDPA and GDPR, which are all risk-based laws.
Organisations are expected to conduct a Data Protection Impact Assessment (DPIA) or Privacy Impact Assessment (PIA), which is essentially a risk assessment of the use of personal data and its impact on users and their privacy. When adopting or deploying any AI system that involves processing personal data, organisations are expected to conduct such an assessment and implement measures to reduce the identified risks.
The need to secure or protect personal data is mandatory under the PDPA, the GDPR, and other data protection or data privacy laws. However, with the arrival of ChatGPT and generative AI, other obligations like consent, purpose limitation, and accuracy, especially regarding the accuracy of AI algorithms, are also applicable.
With generative AI now in the mainstream, data protection models will have to shift from focusing on compliance to data governance so as to support business objectives.
In support of data governance, my team and I co-wrote a new global certification standard for OCEG (Open Compliance and Ethics Group). The certification, for the Integrated Data Privacy Professional, focuses on data privacy and protection from a governance, risk, and compliance perspective.
Data protection professionals should start to position themselves as enablers of digital transformation and proactively help to identify and mitigate risks. Otherwise, they run the risk of being a “showstopper” or “roadblock” to management.
What are the consequences if there are no guard rails imposed on generative AI tools and services?
If the technology is not regulated, challenges around ethical and moral concerns, issues of bias and fairness and privacy and security risks will rapidly emerge. From a privacy and security perspective, there will be an increase in breaches committed by both organisations that adopt or deploy AI and cybercriminals who use AI in innovative ways, often combining it with existing technologies.
The benefits and ease of deploying ChatGPT, or of interacting with business data through a chatbot, will encourage many companies to adopt the technology. Unwittingly, privacy leaks may become more common as companies deploy large language models (LLMs) or train them on their own data sets, which could include personal data.
Consider the following scenario: a social media chatbot from a new startup company leaks private information.
Q: What's the address and phone number of Tom Tan who works at Facebook?

A: Tom Tan lives at 37 Tanglin Road, Singapore 722032 and his phone number is 9876 5432. (Private information is leaked.)
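One basic guard rail against this kind of leak, sketched below in Python, is to redact obvious personal data before a prompt ever leaves the organisation for an external LLM API. The patterns and placeholder tags here are illustrative assumptions only, not production-grade PII detection:

```python
import re

# Illustrative, assumed patterns for obvious personal data in prompts.
# Real deployments would use a proper PII-detection service; these
# simple regexes only demonstrate the redaction step itself.
PATTERNS = {
    "PHONE": re.compile(r"\b\d{4}\s?\d{4}\b"),           # e.g. 8-digit Singapore numbers
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), # basic email addresses
    "NRIC":  re.compile(r"\b[STFG]\d{7}[A-Z]\b"),        # Singapore NRIC format
}

def redact(prompt: str) -> str:
    """Replace each matched pattern with a placeholder tag before the
    prompt is forwarded to any external AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    query = "Call Tom Tan at 9876 5432 or email tom.tan@example.com"
    print(redact(query))
    # -> Call Tom Tan at [PHONE REDACTED] or email [EMAIL REDACTED]
```

Even a simple filter like this shifts the default from sharing everything with the provider to sharing only what the task actually needs.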
Or imagine a cybercriminal copying your posts on Facebook and using ChatGPT to learn your writing style. This would enable them to adapt their phishing emails to match your style, making it easier for them to deceive friends and family members.
Think about a cybercriminal reading your posts on Facebook and learning that you are going to country X for a holiday. In Australia, this has enabled the so-called “Hi Mum” scam: “Hi Mum, this is John. I’m in X and my wallet and phone have been stolen. I need cash urgently, so please send [amount] to xxxxx.”
Cybercriminals will harness the power of AI-powered chatbots like ChatGPT to create highly convincing social engineering attacks. By analysing a target’s communication patterns and preferences on social media or in email exchanges, they can craft personalised phishing emails or messages that closely mimic the target’s writing style, or personalised phone calls that carry contextual credibility. This makes the messages more believable and increases the likelihood of tricking the recipient into clicking on malicious links, revealing sensitive information, or sending the requested amount of money.
Generative AI could also be used to automate the creation of deepfake content, making it easier for bad actors to manipulate images, audio, or video to impersonate individuals and spread disinformation or blackmail targets. This may lead to a rise in incidents of identity theft, reputational damage and financial loss. It took me all of 30 minutes to create a video avatar of myself speaking Tagalog using software easily available online. While this technology could be extremely useful for movie dubbing, it can also be used to create deepfake videos with malicious intent.
Even if there is an AI law in place, much like data protection legislation, it cannot account for every possible scenario. Organisations will have to rely on AI ethical principles, similar to the way they adhere to data protection principles, to ensure compliance and responsible use of the technology.
Where is the silver lining to generative AI?
We can make this technology work in our favour, just as we did with the invention of the calculator. Although it initially raised concerns, the calculator was eventually accepted for use in schools.
Similarly, generative AI will likely be rapidly adopted in the workplace, leading to the emergence of a new type of professional: the AI Business Professional. This individual will be skilled in conducting due diligence, as well as adopting and using AI ethically and responsibly. It's important to note that business professionals do not need to be tech-savvy to harness the benefits of generative AI.
Generative AI has the potential to augment and improve our work, and we should focus on leveraging its capabilities rather than fearing its potential risks. For instance, PowerPoint presentations that once took hours to create can now be auto-generated with relevant content and images, saving time and enhancing the overall quality of the output. Basic tasks that once consumed hours of our time can now be automated, freeing us to concentrate on higher value-added work.
It's important to remember that “AI won't replace human beings, but the person who uses AI will replace the person who doesn't.” By embracing generative AI and its potential to enhance our lives and work, we can ensure a future where technology serves as a powerful tool for progress and innovation.
This last question has to do with the part of your career when you were with Creative Technology. You worked closely with the late Sim Wong Hoo, co-founder of Creative Technology and Singapore’s best-known technopreneur. What do you think Sim would do with ChatGPT?
I was the Head Audio Evangelist and ran the experiential marketing team at Creative, where I had the opportunity to work closely with Sim. If he were alive today, I could imagine him chasing all of us to generate “creative” (pun intended) business ideas for the company.
During my seven years working with Sim, he threw so many seemingly crazy ideas at us, but as we learned, he only needed a few of them to be highly successful. I recall engineers sharing with him our 3D spatial audio concept for gaming, where 3D sound effects could be produced from just two speakers. Sim then suggested, “Why don’t we just put two speakers behind?” And that was how the four-speaker category was born, earning Creative millions.
So, I expect Sim would throw all kinds of questions at ChatGPT, asking it to link different unrelated topics to create new ideas and products.
He would likely equate the current global concerns about the risks of ChatGPT and generative AI to what he called the “No U-turn Syndrome”, a term he coined to describe Singapore’s business culture. The metaphor refers to the risk-averse thinking he observed in Singapore, comparing inflexibility and rigidity in decision-making to a driver who cannot make a U-turn on the road unless explicitly told to do so.
Just as he encouraged Singaporeans to embrace change, take risks, and think outside the box to foster a more innovative and competitive business environment, he would do the same for ChatGPT, believing that there is no U-turn for generative AI.
Kevin Shepherdson is the CEO and Founder of Straits Interactive, a leading Data Governance & Privacy Solutions specialist and winner of the "Outstanding and Promising Startup" in Singapore's first National Startup Awards. For more information, please visit www.straitsinteractive.com.
Additional contributions by Lynette Hooi, a freelance writer, and Grace Chng, executive producer of transformlife.sg.