Posted on 12th October 2023
An ethical framework for AI chatbot adoption by universities

As concern grows over the role and impact of generative AI tools, it’s clear it will soon be impossible to distinguish artificially generated text from the ‘real’ thing.

Agreeing an ethical and reputation-enhancing framework is the only way forward, say Professors Andrew and Nada Kakabadse of Henley Business School.

Your student’s essay looks good, but someone – or something else – wrote it.

The rapid development of AI language models and chatbots has caught many educational institutions off-guard.

There is normally a delay in the adoption of new technology, allowing institutions to consider the implications of how best to incorporate, manage and – in some cases – regulate it. This has not been the case with chatbots.

As a result, the ethics of, and reputational issues associated with, AI in universities have been massively overlooked. To address this gap, here follow ten points that must be considered as part of any educational institution’s plans to incorporate AI into its strategy.

1. The fruitless struggle to ban AI
Many educators are concerned students will use AI to unfairly complete assignments. The New York City Department of Education has banned ChatGPT on its networks and devices, while nine UK Russell Group universities have reportedly told students not to access AI in their work.

Turnitin, a plagiarism detection company, recently announced the development of new software using advanced algorithms to analyse text and identify patterns indicative of AI-generated content. Another proposed option is the addition of ‘digital watermarks’ to all AI-generated material. All of this and more threatens significant changes to teaching, student supervision and assessment.
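
For illustration, here is a minimal sketch of the kind of statistical signal such detectors can draw on. It is a toy heuristic, not Turnitin’s actual method: it scores the ‘burstiness’ of sentence lengths, which some research suggests tends to be lower in machine-generated prose.

```python
import re
from statistics import mean, pvariance

def burstiness_score(text: str) -> float:
    """Variance-to-mean ratio of sentence lengths (in words).
    Human prose tends to vary sentence length more than much
    machine-generated text, so a low score is *weak* evidence of
    AI generation. Illustrative only -- not a reliable detector."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return float("nan")  # too short to measure variation
    return pvariance(lengths) / mean(lengths)

sample = ("The essay opens well. It then develops a long, winding "
          "argument across several clauses before finally closing. "
          "A short end.")
print(f"Burstiness score: {burstiness_score(sample):.2f}")
```

Real detectors combine many such signals with trained models; no single measure is reliable on its own.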

Despite this, experience and current observations suggest it’s only a matter of time before detecting every case of AI-generated content becomes impossible, and universities begin embracing chatbots in line with modified learning practices.

At the same time, the last thing hard-pressed academics need is an additional layer of bureaucracy in the form of a box-ticking exercise they can never win. So how should universities move forward in the face of this seemingly impossible ethical conundrum?

2. Don’t ignore the benefits
In truth, the majority of institutions are likely to benefit from this technology. For example, AI chatbots can help explain complex issues in everyday, accessible language. They can also support students in developing their writing skills, providing personalised feedback and enhancing the overall learning experience.

Embracing chatbots in education could further involve incorporating them into coursework and assignments, supported by the responsible leadership of creative writing centres and other support services.

3. The need for better supervision
The ultimate answer to the AI riddle is that university councils, senates and senior management need to become more stringent in their supervision of staff and students. In so doing, they should draw on AI chatbots to enable learning, while minimising any unacceptable practices, so that the institution’s reputation is protected and enhanced.

A first step is to openly recognise and admit to existing failings and their impact. Evidence-driven disclosure allows time to rethink and address the deeply challenging issues raised by AI.

It’s important to appreciate that academics don’t have to be experts in how AI chatbot systems operate to be able to contribute towards their students submitting quality work. It is the guidance they provide on how quality is realised that makes all the difference.

4. The value of training
Training on the potential harms and value of AI chatbot adoption should become mandatory, with programmes designed for three different levels of seniority:

  • Heads of faculty – covering how to align supervisory practice for AI chatbots across the diversity of departments, each shaped by the particular philosophy of its subject matter expertise. In this way, and through consistency of adoption, the council, senate and management can trust how the organisation approaches AI chatbots. As in all other areas of expertise, the ultimate goal for the university is to become adaptable, resilient and responsive to stakeholder needs
  • Heads of departments and professorial faculty – this is a key level for quality control. Key questions to ask: is the adoption of this new technology necessary to stimulate growth and educational improvement? Is additional funding needed to redesign departments or the educational institution to meet the new AI challenges? In pursuing such considerations, a distinction will need to be drawn between ‘nice to have’ and ‘essential’
  • Teaching and research faculty – because it is in these individuals’ hands that direct responsibility for undergraduate, postgraduate and postdoctoral education and development lies. Training needs to focus on the reality of practice and what must be done to enhance the application of AI chatbots in students’ everyday experience.

5. Wellbeing matters
A key issue to be addressed is how the use of various AI systems in the university impacts the wellbeing of students and staff. Do chatbot systems employed directly by the university make it clear that any interactions are artificial and take place without a human operator?

It’s crucial that AI is never disguised as a real person; it should be framed as a source of further insight and information, and should not cause harm or concern for stakeholders. The council or university senate must take responsibility for the type of systems and make-up of the algorithms being used, for monitoring their performance, and for reporting against a set of carefully considered key performance indicators.
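
As a simple illustration of both points, the hypothetical wrapper below prepends an explicit disclosure to every chatbot reply and keeps an audit log that could feed such performance indicators. The `backend_reply` function and the log format are assumptions standing in for whatever system a university actually deploys.

```python
from datetime import datetime, timezone

DISCLOSURE = ("You are chatting with an automated assistant; "
              "no human operator is involved in this reply.")

def disclosed_reply(user_message, backend_reply, log):
    """Wrap a chatbot backend so every answer carries an explicit
    AI disclosure, and record the exchange for later KPI reporting."""
    answer = backend_reply(user_message)
    log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "question": user_message,
        "answer": answer,
    })
    return f"{DISCLOSURE}\n\n{answer}"

# Hypothetical stand-in for the university's real chatbot system.
echo_bot = lambda q: f"Here is some guidance on: {q}"

interaction_log = []
print(disclosed_reply("library opening hours", echo_bot, interaction_log))
print(f"{len(interaction_log)} interaction(s) logged for KPI review")
```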

6. Transparency and communication
Have all staff and students been made aware of agreed and expected protocols regarding the ethical use and referencing of AI in connection with their work and performance? Have academics been given the opportunity to participate in discussions on any policy changes, and have they been allowed the necessary time and training to appreciate any modifications to assessment protocols? Are students fully aware of their ethical responsibilities around AI and the agreement they are entering into? Effective and ongoing communication is crucial during periods of rapid change.

7. Cybersecurity
Cybersecurity is not just the responsibility of the IT department. Leadership and staff must be educated and informed about relevant risks. In addition, AI systems being used alongside or integrated into the university’s infrastructure should be secure against phishing, ransomware and data breaches.

Who is responsible for the strategy to monitor and regulate AI compliance against intended use? Information should be made available to all that explains the steps being taken and reassures staff and students that the university’s systems are technically secure.

8. Responsibility
Who is in charge of the ongoing monitoring of AI policies and systems being used in relation to the university? Any aspect of AI being used to improve teaching and learning, contact and engagement, as well as policies relating to student use of chatbots and related technology, needs to be considered. These details should be incorporated into staff communications and appropriate training as required.

9. The danger of hidden bias
Universities should take care to ensure that chatbots don’t discriminate on the basis of factors such as gender, ethnicity and socioeconomic status. AI’s use for technical support, service desk queries and student recruitment can be positive, liberating staff from mundane and repetitive tasks.

At the same time, AI’s inclusion within apps and tools for administrators, faculty and students may hide algorithmic bias. This must be addressed through regular review of chatbot programming and data analysis to identify any issues.
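
As one illustration of what such data analysis might look like, the sketch below applies a simple ‘four-fifths’ screening rule, borrowed from employment-selection practice, to hypothetical records of a chatbot’s outcomes across demographic groups. The data, group labels and threshold are assumptions; a real review would demand far more statistical care.

```python
from collections import defaultdict

# Hypothetical audit records: (demographic_group, positive_outcome),
# e.g. whether a recruitment chatbot escalated an enquiry to a human
# adviser. Real categories and data would come from the university.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
for group, positive in records:
    counts[group][0] += positive
    counts[group][1] += 1

rates = {g: pos / total for g, (pos, total) in counts.items()}
best = max(rates.values())

for group, rate in sorted(rates.items()):
    # Flag any group whose success rate falls below 80% of the
    # best-served group's rate -- a screen for review, not proof of bias.
    flag = "  <-- review" if rate < 0.8 * best else ""
    print(f"{group}: {rate:.0%}{flag}")
```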

10. Governance in action
Ultimately, AI chatbots are an advanced operational tool that can enable learning through appropriately channelled and observable guidelines. However, it is the consistency of application that allows AI to be utilised to best effect, and through which the university’s reputation is protected and enhanced.

Alternatively, neglect or inconsistency in the application of AI chatbots poses a significant reputational risk. Rebuilding reputation after facing accusations of ‘wholesale cheating’ takes considerable time, and an institution is unlikely to emerge from such a battle with a satisfactory outcome.

In effect, AI chatbots are operational enablers, but their continued and appropriate adoption falls within the remit of the governance and oversight of the institution. Reputation lies at the heart of the sustainable future of the university and, once AI chatbots are recognised in this light, their use will be of benefit to all.