Ever since OpenAI unleashed a chatbot called Chat Generative Pre-Trained Transformer (ChatGPT) on Nov 30, 2022, much of the world has been transfixed by its intelligence quotient. Microsoft has invested another US$10 billion in OpenAI, which developed ChatGPT. In 2019, the Windows maker invested US$1 billion, becoming the startup’s exclusive cloud provider. OpenAI is now valued at US$29 billion.
In the first month after its launch, ChatGPT received positive reviews. The New York Times declared it was “the best AI chatbot ever released to the public”. The Guardian gushed about its ability to generate “impressively detailed” and “human-like” text. The Atlantic included ChatGPT in its “Breakthroughs of the Year” listing for 2022, stating that it “may change our mind about how we work, how we think, and what human creativity really is”.
Soon, scepticism appeared. In January 2023, the International Conference on Machine Learning banned undocumented use of ChatGPT or other large language models to generate text in submitted papers. The Guardian questioned whether any content found on the Internet after ChatGPT’s release “can be truly trusted” and called for government regulation. And school districts in the US, France, Australia, and India banned students from using ChatGPT for school or homework.
At the core is the question of whether ChatGPT’s responses to prompts can be trusted as factual. For instance, when a journalist at Mashable asked ChatGPT for “the largest country in Central America that isn’t Mexico”, it answered Guatemala; the correct answer is Nicaragua. A wrong response, though, could be an algorithmic error rather than an ethical problem.
Therein lies the crux. There are few discussions about the ethics of AI, especially in mission-critical apps. Mission-critical applications are software systems or processes essential for an organisation’s functioning. They require the highest levels of reliability, availability, and performance. Apps used in finance, healthcare, defence, and critical infrastructure are mission critical. Any downtime or system failure can have serious consequences, including loss of life, financial loss, or reputational damage.
Using ChatGPT in mission-critical apps raises ethical concerns. Here are seven, alphabetically:
- Attack Arrows: Researchers at Check Point Research reported they got a “plausible phishing email” from ChatGPT after asking the chatbot to “write a phishing email” that comes from a “fictional web-hosting service”. Similarly, researchers at Abnormal Security got ChatGPT to write an email “that has a high likelihood of getting the recipient to click on a link”.
- Black Boxing: AI models need careful training and can deliver unacceptable results because of their black-box nature. It is often unclear whether algorithmic or human-induced bias could propagate downstream in the datasets or conclusions. For instance, if ChatGPT is trained on biased data, it can perpetuate and amplify existing biases and discriminatory attitudes in society.
- Concentration at Core: AI models have been built mainly by the largest tech companies, with huge R&D investments and significant AI talent. This has resulted in a concentration of power in a few large, deep-pocketed entities, which may create a significant imbalance in the future. And because ChatGPT is trained on text from the Internet, it may provide incorrect or misleading information, with potentially serious consequences. If a decision made by a ChatGPT-powered system results in harm, who is responsible for the outcome?
- Digital Trust: AI models are trained on a corpus of created and curated works. It is still unclear what the legal precedent may be for the reuse of this content, especially if it was derived from the intellectual property of others. As with any technology, the potential for misuse exists, and it is important for organisations to be aware of the risks and take steps to prevent them.
- Explainability: The inner workings of AI models like ChatGPT are complex and often opaque, making it difficult for stakeholders to understand how decisions are made. This opacity compounds other risks: biases absorbed from training data can go undetected, and misuse, such as generating fake messages, impersonating others, or extracting sensitive information from individuals, becomes harder to trace.
- Fake News: As a language model, ChatGPT can generate text based on patterns it has learned from the data it was trained on. This means it can produce text that is not necessarily true. The generated text may contain misinformation, deliberate false information, or be used to spread propaganda or false narratives. The potential for creating fake news highlights the need for critical thinking and verification of information when using AI-generated content.
- Going Solo: There are no humans in the loop once the algorithm is trained and the broad parameters are set. Large models like ChatGPT involve billions, or even trillions, of parameters. Because the compute resources needed to train them are beyond the reach of most organisations, these models effectively run solo, without proper human oversight.
What’s the solution? “Create a strategy document that outlines the benefits, risks, opportunities and deployment roadmap for AI foundation models like GPT,” advises Gartner Inc. “This will help determine whether the benefits outweigh the risks for specific use cases.” Organisations must set clear policies and procedures to prevent the misuse of AI tech, such as ChatGPT, and to ensure it is used ethically and responsibly.
Raju Chellam is on the Exco of SGTech’s Digital Trust, and Cloud & Data Chapters.