Technology brings innovation, speeding up and simplifying complex activities for the user, but the ethical implications are often an afterthought. ChatGPT is the latest manifestation of an accelerating digital transformation that can at times feel both uncontrolled and alarming. In this article we demystify some of the hype and identify the ethical concerns that ChatGPT raises for organisations.
ChatGPT is an artificial intelligence (AI) chatbot. It is designed to hold a conversation in a natural way and can complete language-based tasks, such as answering a question or writing an article, poem or song. The technology works by using 'large language models', trained on huge data sets, to generate natural language through a 'Generative Pre-trained Transformer' (GPT) process.
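For readers who want to see what 'completing a language-based task' looks like in practice, the sketch below sends a single question to a GPT model through OpenAI's API. It is illustrative only: it assumes the official openai Python library (v1.x), an API key held in the OPENAI_API_KEY environment variable, and a model name that may change over time.

```python
# A minimal sketch of asking a GPT model a question programmatically.
# Assumes the official `openai` Python library (v1.x) and an API key
# stored in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name; availability varies
    messages=[
        {"role": "user", "content": "Explain market segmentation in two sentences."},
    ],
)

# The generated natural-language answer.
print(response.choices[0].message.content)
```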
GPT technology was created by OpenAI, a US-based research company founded in 2015 by Elon Musk (owner of Tesla and Twitter) and a cohort of other tech leaders with a strong pedigree in the digital space, including Sam Altman and Greg Brockman.
Musk left in 2018 to focus on his other businesses[1] and in the same year OpenAI published a paper on its original GPT. ChatGPT, launched in late 2022, is the most recent and most sophisticated iteration of the technology.
AI has been in our lives for years and we're already using it in many settings: voice assistants, predictive text in email, satnav route planning and AI-assisted medical imaging[2].
In these examples, the machines are learning from your feedback, which is a core part of AI. The machine – your voice assistant, email client, satnav or MRI scanner – gathers and analyses data, and from this it can generate responses, which is why it is described as 'generative'.
Feedback is given to the machines when you rephrase a question for your voice assistant or tell it the answer is wrong, when you ignore or correct the predictive text, when you ignore the satnav's directions or request an alternative route, or when physicians check the scans and note whether the image interpretation is accurate.
You may wonder if ChatGPT is different. The real answer is that it’s not.
All the major tech companies are working on GPT products: systems trained on large data sets, with the chat variants trained on large language models. Google's Bard is competing in the same marketplace as ChatGPT, albeit somewhat less successfully at present, and Microsoft has invested in OpenAI to power its Bing search engine.
All languages have structure: our sentences follow an order (grammar), some words are written in a formal style and some are spoken in a conversational style. These patterns are easy to replicate using existing tools such as dictionaries, thesauruses and grammar guides. The GPT systems are programmed using these resources as data, but they are also constantly learning from the feedback users provide.
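To illustrate how patterns in existing text can be used as data for generating new text, here is a deliberately simplified toy example. It is not how GPT models work internally – they use neural networks trained on billions of words – but it shows the basic principle of learning 'what tends to follow what' from a small corpus (the corpus and output here are purely illustrative).

```python
import random
from collections import defaultdict

# A toy Markov-chain text generator: a vastly simplified analogue of
# generating language from statistical patterns in existing text.
corpus = (
    "market segmentation divides a market into groups "
    "a market is divided into groups of buyers "
    "segmentation helps a business target groups of buyers"
)

# Count which words follow each word in the corpus.
transitions = defaultdict(list)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, length: int = 10) -> str:
    """Generate new text by repeatedly sampling a plausible next word."""
    output = [start]
    for _ in range(length):
        followers = transitions.get(output[-1])
        if not followers:
            break
        output.append(random.choice(followers))
    return " ".join(output)

print(generate("market"))
```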
We've been using large language models for years. For example, Grammarly, the plug-in writing checker, is based on AI and gathers big data from its 30 million users, which means it is constantly learning and updating its suggestions to improve grammar, punctuation and spelling[1].
In the same way, users might provide ChatGPT with a scenario such as 'Write an essay for me on market segmentation', and a response is generated based on the available data. If you don't like the response, you ask the GPT to revise its content, and this feedback provides learning points and contributes to the ever-growing dataset.
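The 'ask, then ask again' loop described above can also be seen when a conversation is built up through the API: each revision request is sent together with the earlier exchange so the model can refine its previous answer. This is again a hedged sketch assuming the openai Python library (v1.x) and an illustrative model name; it shows conversational refinement within one session, not how OpenAI uses feedback to train future models.

```python
# A sketch of the prompt-and-revise loop as a multi-turn conversation.
# Assumes the `openai` Python library (v1.x) with OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"  # illustrative; substitute whichever model you use

# First request: the original scenario.
messages = [{"role": "user", "content": "Write an essay for me on market segmentation."}]
first = client.chat.completions.create(model=MODEL, messages=messages)
draft = first.choices[0].message.content

# Feedback step: keep the draft in the conversation and ask for a revision.
messages.append({"role": "assistant", "content": draft})
messages.append({"role": "user", "content": "Make it shorter and add a B2B example."})
revised = client.chat.completions.create(model=MODEL, messages=messages)

print(revised.choices[0].message.content)
```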
These examples of machine learning are a type of AI known as unsupervised learning: the machines apply their rules (algorithms) and operate without the need for human supervision. This is leading to concerns about originality, accuracy, disinformation, confidentiality, bias and security.
Many users have reported that ChatGPT's responses are 'vanilla': they lack depth and detail, and the context may be missing too. In time the language models will learn, but at present they generate rather than create content, stealing material from other people's contributions (whether from within the GPT system or from other online sources) without providing any new ideas.
Creating content requires different skills: you need to know the audience, their location and education level, and the real purpose of the material. The singer-songwriter Nick Cave understandably responded acerbically to a fan who asked ChatGPT to create a song 'written in the style of Nick Cave'[3]. The system may have generated the song, but it lacked the passion, feeling and meaning embedded in the creative mind. Songwriters often have a thread running through several songs that represents genuine human emotions. The machines can't yet produce this.
In many instances, the material generated is also inaccurate. Worryingly, as Tim Harford observed in the Financial Times[4], the content seems realistic and plausible even when it is wrong.
In an academic setting, going back to our example 'Write an essay for me on market segmentation', ChatGPT may provide a basic overview. But for students it won't support critical thinking: deeply considering a subject, reviewing its meaning, conducting research and formulating a response supported by relevant and reliable evidence. When asked to support its arguments with references, ChatGPT will refine the essay and provide a range of outdated texts (segmentation has moved on a lot in the last 10 years), which raises further questions about whether the platform owner is paying for, or circumventing, access to the materials cited.
A bigger issue is that because the material seems credible, it could become a major channel of disinformation, with users believing the content produced is valid. It's easy to add incorrect material online, which could then be incorporated into a ChatGPT response. Think of the singer-songwriter James Blunt, who edited his own Wikipedia page to say he had played the organ at a royal wedding; it was totally untrue but believed by many[5]. This is a reminder of how online disinformation spreads, and it highlights the fact that a source-checking facility is not yet available in GPTs.
The race to adopt ChatGPT demonstrated the 'bandwagon effect' as organisations rushed to trial the system and explore how it could save time and money, yet failed to consider the strategic implications. If the product is free, you are the product, and organisations freely gave away their thoughts, ideas and plans. Imagine two advertising agencies bidding for the same client pitch: both use ChatGPT, and the best-case scenario is that they present each other's ideas; the worst case is that they present the same material.
There are other big issues to consider, including the notion of bias in the material. GPTs could perpetuate hate speech or generate biased content that impacts different groups. They could also be misused, with significant amounts of inappropriate material being deliberately added. While this could be programmed out, you will know from any spam email you've ever received that there are ways around such filters, with hundreds of variations on words you might think are blocked, such as Viagra (e.g. V1agra, Vi@gra, V1*gra).
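A small, hypothetical sketch shows why simple keyword blocking is so easy to get around: a naive blocklist misses trivially obfuscated spellings, and even a hand-written pattern only catches the variants its author anticipated. The words and patterns below are illustrative, not taken from any real filter.

```python
import re

# A naive blocklist: exact substring matching on lower-cased text.
BLOCKED = {"viagra"}

def naive_filter(text: str) -> bool:
    return any(word in text.lower() for word in BLOCKED)

for message in ["Buy Viagra now", "Buy V1agra now", "Buy Vi@gra now"]:
    print(message, "->", "blocked" if naive_filter(message) else "slips through")

# A substitution-aware pattern catches more variants, but only the ones
# it anticipates; "V1*gra" still slips through.
pattern = re.compile(r"v[i1!|][a@4]gr[a@4]", re.IGNORECASE)
for message in ["Buy V1agra now", "Buy Vi@gra now", "Buy V1*gra now"]:
    print(message, "->", "blocked" if pattern.search(message) else "slips through")
```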
Security is another concern. In the rush to release the latest version of ChatGPT, OpenAI missed some security processes, resulting in a data breach 'due to a bug in an open-source library which allowed some users to see titles from another active user's chat history. It's also possible that the first message of a newly-created conversation was visible in someone else's chat history if both users were active around the same time.'[6] So that's all your confidential thoughts being shared with other people.
Google's GPT version is known as Bard[7], which it describes as:
'Meet Bard: your creative and helpful collaborator, here to supercharge your imagination, boost your productivity, and bring your ideas to life.
'Bard is an experiment and may give inaccurate or inappropriate responses. You can help make Bard better by leaving feedback.'
There have been widely reported ethical concerns about releasing Bard too early, but in the land grab for the GPT space, Google went ahead and launched the system. Releasing software before it is ready almost inevitably means it ships with bugs, and the users then become the beta testers, flagging issues as they arise.
The key questions organisations need to ask themselves are: what confidential information are we feeding into these systems, how accurate and original is the content they produce, and who checks the output for bias and errors before it is used?
As tech heavyweights, including Elon Musk, publicly acknowledge their concerns about AI, and as the systems continue to develop and evolve, we will need stronger ethical codes to ensure we understand the impact of their use and protect against misuse.
Notes
1. Hanlon, A. (2024). Digital Business: Strategic Planning & Integration. SAGE Publications Ltd.
2. https://www.weforum.org/agenda/2022/10/artificial-intelligence-improving-medical-imaging/
3. https://www.theguardian.com/music/2023/jan/17/this-song-sucks-nick-cave-responds-to-chatgpt-song-written-in-style-of-nick-cave
4. https://www.ft.com/content/6c2de6dd-b679-4074-bffa-438d41430c31
5. https://www.joe.ie/music/james-blunt-edited-wikipedia-page-say-hed-performed-royal-wedding-609537
6. https://openai.com/blog/march-20-chatgpt-outage
7. https://bard.google.com/
Author
Dr Annmarie Hanlon – Senior Lecturer in Digital and Social Media Marketing and Director of the Strategic Marketing Forum at Cranfield School of Management. Dr Hanlon is an academic and practitioner in strategic digital marketing and the application of social media for business. Her PhD investigated the benefits and outcomes of social media marketing within organisations, for which she was awarded the Mais Scholarship. Originally a graduate in French and Linguistics from the University of London, Annmarie studied for the Chartered Institute of Marketing Diploma and won the Worshipful Company of Marketors' award for the best results worldwide. She gained a Master's in Business Administration, focusing on marketing planning, and achieved a distinction in the Chartered Institute of Marketing's E-Marketing Award.