<img height="1" width="1" style="display:none" src="https://pool.admedo.com/pixel?id=152384&amp;t=img">
read

Understand the ethical considerations of using ChatGPT in your organisation

By Dr Annmarie Hanlon
ChatGPT has been generating front-page headlines around the globe and creating consternation in many countries, to such an extent that the Italian government was the first to temporarily ban the system. Should this response be signalling caution to the rest of the world, or is ChatGPT just another exciting new tool for organisations to exploit? 


 

Technology brings innovation, speeding up and simplifying complex activities for the user, but the ethical implications are often an afterthought. ChatGPT is the latest manifestation of an accelerating digital transformation that can feel at times both uncontrolled and alarming. In this article we will demystify some of the hype and identify the ethical concerns that ChatGPT should be flagging for organisations.  

  

What is ChatGPT?

ChatGPT is an artificial intelligence (AI) chatbot. It is designed to hold a conversation in a natural way and can complete language-based tasks, such as answering a question or writing an article, poem or song. The technology is built on ‘large language models’: systems trained on huge data sets to generate natural language using a ‘Generative Pre-trained Transformer’ (GPT) architecture.

GPT technology was created by OpenAI, a USA-based research company founded in 2015 by Elon Musk (owner of Tesla and Twitter) and a cohort of other tech leaders with a strong pedigree in the digital space: 

  • Sam Altman (Y Combinator, a venture capital firm),
  • Greg Brockman (Stripe payment processing),
  • Dr Ilya Sutskever (Google Brain),
  • Dr John Schulman,
  • Dr Wojciech Zaremba (NVIDIA graphics firm, Google Brain, Facebook AI). 

Musk left in 2018 to focus on his other businesses[1] and in the same year OpenAI published a paper on its original GPT model. ChatGPT, launched in late 2022, is the most recent and most sophisticated iteration of the technology.

 

AI and machine learning

AI has been in our lives for years and we’re already using it in many settings: 

  • At home: When you ask Siri, Alexa or Bixby about the weather, the name of that celebrity that you can’t remember, or to play music, the response is based on AI.
  • At work: Gmail and Outlook complete sentences for you as you start to create or respond to an email.
  • On the road: With satnav your car learns and adapts to your preferred routes.
  • In healthcare: Thousands of MRI images are assessed every day by AI applications that offer a fast and cost-effective diagnostic solution[2]. 

In these examples, the machines are learning from your feedback, which is a core part of AI. The machine (your voice assistant, email client, satnav or MRI scanner) gathers and analyses data, and from this it can generate responses, which is why this type of AI is described as ‘generative’.

Feedback reaches the machines in many ways: when you rephrase a question for your voice assistant or tell it the answer is wrong, when you ignore or correct the predictive text, when you ignore the satnav directions or request an alternative route, or when physicians check the scans and record whether the image interpretation was accurate.
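As an illustration only, here is a minimal Python sketch of how such feedback signals might be recorded. The schema, field names and example values are hypothetical, invented for this article; they do not describe how any particular vendor captures feedback.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class FeedbackEvent:
        """One feedback signal from a user (hypothetical schema for illustration)."""
        source: str                       # e.g. "voice_assistant", "email_autocomplete", "satnav"
        model_output: str                 # what the machine suggested
        user_action: str                  # "accepted", "corrected" or "ignored"
        correction: Optional[str] = None  # what the user did instead, if anything
        timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    # Rewriting a predictive-text suggestion is a negative signal the system can learn from:
    event = FeedbackEvent(
        source="email_autocomplete",
        model_output="Hope you are well.",
        user_action="corrected",
        correction="Hope your week is going well.",
    )
    print(event)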

 

Why is ChatGPT different?

You may wonder whether ChatGPT is fundamentally different. The short answer is that it’s not.

All the major tech companies are working on GPT-style products: systems trained on large data sets, with the chat variants built on large language models. Google’s Bard is competing in the same marketplace as ChatGPT, albeit somewhat less successfully at present, and Microsoft has invested in OpenAI to power its Bing search engine.

All languages have structure: our sentences follow an order (grammar), some words are written in a formal style, and some are spoken in a conversational style. These patterns are easy to replicate using existing resources such as dictionaries, thesauruses and grammar guides. The GPT systems ingest these resources as training data, but they are also constantly learning from the feedback users provide.
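To make ‘generating language from data’ concrete, here is a toy Python sketch: a bigram model that picks each next word according to word-pair counts in a tiny corpus. This illustrates the underlying principle only; real GPT systems use deep neural networks trained on vastly larger data sets, not word-pair counts.

    from collections import Counter, defaultdict
    import random

    # Count which word follows which in a tiny example corpus.
    corpus = "the cat sat on the mat and the cat slept on the sofa".split()
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def generate(start: str, length: int = 6) -> str:
        """Generate text by repeatedly sampling a likely next word."""
        word, output = start, [start]
        for _ in range(length):
            options = bigrams.get(word)
            if not options:
                break  # no known successor; stop generating
            word = random.choices(list(options), weights=options.values())[0]
            output.append(word)
        return " ".join(output)

    print(generate("the"))  # e.g. "the cat sat on the mat and"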

We’ve been using AI-driven language models for years. For example, Grammarly, the plug-in writing checker, is based on AI and draws data from its 30 million users, which means it is constantly learning and updating its suggestions to improve grammar, punctuation and spelling[1].

In the same way, users might give ChatGPT a prompt such as ‘Write an essay for me on market segmentation’, and a response is generated based on the available data. If you don’t like the response you can ask the GPT to revise its content, and this feedback provides learning points and contributes to the ever-growing dataset.
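For developers, the same prompt-and-revise loop is available programmatically. The sketch below uses the OpenAI Python SDK (version 1 or later); the model name is illustrative only, and the code assumes an OPENAI_API_KEY environment variable is set.

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # First request: the original prompt.
    messages = [{"role": "user", "content": "Write an essay for me on market segmentation"}]
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    essay = reply.choices[0].message.content

    # Feedback: keep the conversation history and ask for a revision.
    messages += [
        {"role": "assistant", "content": essay},
        {"role": "user", "content": "Too generic. Revise it with concrete, recent examples."},
    ]
    revised = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(revised.choices[0].message.content)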

 

Why are we worried about ChatGPT?

These examples of machine learning involve little direct human supervision: the models learn patterns from their training data (a process often described as unsupervised or self-supervised learning) and generate outputs without a human checking each response. This is leading to concerns about:

  • Who is checking the material;
  • The type of material produced;
  • How this may impact content creators or authors;
  • Issues with copyright and plagiarism;
  • Accuracy of the machine-generated content. 

Many users have reported that ChatGPT’s responses are ‘vanilla’: they lack depth and detail, and context may be missing too. In time the language models will learn, but at present they generate content rather than create it, lifting material from other people’s contributions (whether from within the GPT system or from other online sources) without adding any new ideas.

To create content requires different skills. You need to know the audience, their location, education level and the real purpose of the material. The singer-songwriter Nick Cave understandably responded acerbically to a fan who asked ChatGPT to create a song ‘written in the style of Nick Cave’[3]. The system may have generated the song, but it lacks the passion, feeling and meaning embedded in the creative mind. Songwriters often have a thread running through several songs that represents genuine human emotions. The machines can’t yet produce this. 

In many instances, the material generated is also inaccurate. Worryingly, as Tim Harford observed in the Financial Times[4], the content seems realistic and plausible even though it’s not accurate.

In an academic setting, going back to our example ‘Write an essay for me on market segmentation’, ChatGPT may provide a basic overview. But for students it won’t support critical thinking: deeply considering a subject, reviewing its meaning, conducting research and formulating a response supported with relevant and reliable evidence. When asked to support the arguments with references, ChatGPT will refine the essay and provide a range of outdated texts (segmentation has moved on a lot in the last 10 years), which raises further questions about whether the platform owner is paying for, or circumventing, access to the materials cited.

A bigger issue is that, because the material seems credible, it could become a major channel for disinformation, with users believing the content produced is valid. It’s easy to add incorrect material online, which could then be incorporated into a ChatGPT response. Think of the singer-songwriter James Blunt, who edited his own Wikipedia page to say he had played the organ at a royal wedding; it was totally untrue but believed by many[5]. This is a classic example of online disinformation, and it highlights that GPTs do not yet offer a source-checking facility.

 

Why organisations should create policy

The race to adopt ChatGPT demonstrated the ‘bandwagon effect’ as organisations rushed to trial the system and explore how it could save time and money, yet failed to consider the strategic implications. If the product is free, you are the product; organisations were freely giving away their thoughts, ideas and plans. Imagine two advertising agencies pitching for the same client: if both use ChatGPT, the best case is that they present each other’s ideas, and the worst case is that they present the same material.

There are other big issues to consider, including bias in the material. GPTs could perpetuate hate speech or generate biased content that harms particular groups. They could also be misused by deliberately feeding them inappropriate material. While this could be programmed out, you will know from any spam email you’ve ever received that there are ways around such filters, with hundreds of variations on words you might think are blocked, such as Viagra (e.g. V1agra, Vi@gra, V1*gra).
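A small sketch makes the filtering problem concrete. The blocklist below is a naive illustration invented for this article, not any vendor’s actual filter, and simple obfuscations slip straight past it.

    import re

    BLOCKED_WORDS = {"viagra"}

    def naive_filter(text: str) -> bool:
        """Flag the text only if a blocked word appears verbatim."""
        words = re.findall(r"[a-z]+", text.lower())
        return any(word in BLOCKED_WORDS for word in words)

    for message in ["Buy Viagra now", "Buy V1agra now", "Buy Vi@gra now", "Buy V1*gra now"]:
        verdict = "blocked" if naive_filter(message) else "slips through"
        print(f"{message!r}: {verdict}")

    # Only the verbatim spelling is caught; every obfuscated variant
    # evades the filter, which is why simple blocklists are not enough.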

Security is another concern. As OpenAI rushed to release its latest version, some security processes were missed, resulting in a data breach. In OpenAI’s words, the incident occurred ‘due to a bug in an open-source library which allowed some users to see titles from another active user’s chat history. It’s also possible that the first message of a newly-created conversation was visible in someone else’s chat history if both users were active around the same time.’[6] So that’s your confidential thoughts being shared with other people.

Google’s GPT version is known as Bard[7], which it describes as:

Meet Bard: your creative and helpful collaborator, here to supercharge your imagination, boost your productivity, and bring your ideas to life.
Bard is an experiment and may give inaccurate or inappropriate responses. You can help make Bard better by leaving feedback. 

There have been widely reported ethical concerns about releasing Bard too early but, in the land grab for the GPT space, Google went ahead and launched the system. Releasing software before it’s ready almost always means it ships with bugs, and users then become the beta testers, flagging issues as they arise.

 

Ethical considerations for organisations

The key questions organisations need to ask themselves are: 

  • What procedures are in place to avoid biased or inaccurate material being generated?
  • If they protect products or services with intellectual property (IP) legislation, does that protection still hold when using online third-party systems like ChatGPT?
  • How can they avoid plagiarising or stealing other people’s work and falling foul of IP legislation?
  • Are they happy to give confidential information to a machine that can share it with others who ask that question?
  • What governance is in place to ensure staff creating client material don’t opt for the fast ChatGPT approach which could result in embarrassment or lost clients? 

As tech heavyweights including Elon Musk publicly acknowledge their concerns about AI, and as the systems continue to develop and evolve, we will need stronger ethical codes to ensure we understand the impact of their use and protect against misuse.

 

 

Cranfield AI Masterclass Series 2024

Prepare yourself and your organisation for the AI work revolution with our AI Masterclass Series.

 

Notes

[1] Hanlon, A. (2024). Digital Business: Strategic Planning & Integration. SAGE Publications Ltd.

[2] https://www.weforum.org/agenda/2022/10/artificial-intelligence-improving-medical-imaging/

[3] https://www.theguardian.com/music/2023/jan/17/this-song-sucks-nick-cave-responds-to-chatgpt-song-written-in-style-of-nick-cave

[4] https://www.ft.com/content/6c2de6dd-b679-4074-bffa-438d41430c31

[5] https://www.joe.ie/music/james-blunt-edited-wikipedia-page-say-hed-performed-royal-wedding-609537

[6] https://openai.com/blog/march-20-chatgpt-outage

[7] https://bard.google.com/

 

 


 

Author

Dr Annmarie Hanlon - Senior Lecturer in Digital and Social Media Marketing, Director of the Strategic Marketing Forum at Cranfield School of Management. Dr Hanlon is an academic and practitioner in strategic digital marketing and the application of social media for business. Her PhD investigated the benefits and outcomes of social media marketing within organisations, for which she was awarded the Mais Scholarship. Originally a graduate in French and Linguistics from the University of London, Annmarie studied for the Chartered Institute of Marketing Diploma and won the Worshipful Company of Marketors’ award for the best results worldwide. She gained a Master’s in Business Administration, focusing on marketing planning, and achieved a distinction in the Chartered Institute of Marketing’s E-Marketing Award.

 

 
