Artificial intelligence-powered tools like ChatGPT are creating a much-needed opportunity to reimagine the role of education in the 21st century, says Alex Sims

ChatGPT is the focus of much discussion, excitement and fear across tertiary education. While many of us in universities relish the opportunities such artificial intelligence (AI) writing tools enable, others, used to doing things in certain ways, find it difficult to embrace the rapid change associated with such technologies.

ChatGPT has garnered considerable media attention in the past few months for its ability to answer questions, provide advice on almost any topic in fluent, well-written English, write computer code and perform various other tasks. 

The chatbot, launched in November 2022, has been tested using a broad range of exam questions, including law, medical and business school exams. It passed those exams. Some of the answers provided by ChatGPT are nothing short of magic and I have seen experts rendered speechless by them. Yet these uncanny answers were pure luck. ChatGPT does not know whether an answer is correct; it simply predicts plausible text based on patterns in its vast training data. As a result, many answers are not 100 per cent accurate and some are spectacularly wrong. A human is needed to determine the accuracy of its answers.

The reaction of universities to ChatGPT and other similar AI tools has been mixed, falling into three main approaches: preventing its use, banning it and embracing it.

First, to prevent the use of AI tools, some universities are falling back on in-person exams featuring old-fashioned pen and paper. However, tests and exams have never been ideal assessment methods. They don’t indicate whether a person can work well in teams or present and communicate information verbally, and they disadvantage those with debilitating exam anxiety. Indeed, in recognition of these limitations, many courses have reduced the percentage of course marks allocated to tests and exams.

In addition, preventing the use of ChatGPT would work only if all of a course’s assessments were completed in person. Ensuring that no student could use ChatGPT would mean increasing the percentage of marks for old-school tests and exams, which would be a retrograde step.

Second, some tertiary providers have explored banning ChatGPT and other AI tools, supported by AI detection software. These detection tools are not 100 per cent accurate and can be worked around. My concern is that students will spend more time attempting to beat the detectors than learning the content.

Both banning and preventing the use of AI tools for all, or most, assessments is counterproductive. People will not, for the foreseeable future, be in competition with AI. Instead, they will be competing with people who are adept and skilled at using such tools. Indeed, people unable to use AI tools may become unemployable in many professional settings as they will be considered too inefficient and slow. 

The key to successfully integrating AI into education lies in understanding that AI tools are not a replacement for human expertise but rather that they are tools that can augment and enhance it. 

Universities need to teach students how to use these tools effectively, to provide training and guidance on how these tools can enhance students’ learning and prepare them for the workforce. 

We have adapted to new tools in the past. For example, the fears that electronic spreadsheets would put accountants out of work did not materialise as the accounting profession pivoted. Similarly, AI tools are creating a much-needed opportunity to reimagine the role of education in the 21st century.

So where does this leave us with the vexed question of assessment? How do we assess students’ knowledge? For most courses, some element of in-person evaluation, whether written, oral or both, is necessary. The remaining assessments require rethinking and what may work for one discipline or course may not work for others. 

One idea is that instead of the traditional approach of providing a question to which the student writes an answer, both the question and answer could be given. The students could critique the question and answer and explain what they think is correct or incorrect and why. 

Alternatively, a student could be assessed on the nature and quality of the prompts they give an AI tool. This may increase the time required for marking, but it will develop the students’ skills in using the tools and provide a good way of assessing their knowledge of the subject matter at hand.

As with most technology, the challenge is not the technology itself but rather our human emotions, experience and reaction to it.  

Alex Sims is an associate professor in the University of Auckland’s department of commercial law and an associate at the UCL Centre for Blockchain Technologies.

If you found this interesting and want advice and insight from academics and university staff delivered direct to your inbox each week, sign up for the THE Campus newsletter.

Resource: https://www.timeshighereducation.com/campus/chatgpt-and-future-university-assessment

About MariAl Associates

MariAl Associates are experts in education and translation.

For Young Learners: we provide a bespoke service in tutoring, mentoring and advice on general education, higher education and career choices.

For Adults: we help with skills tutoring, work placements and career advice.

For Families: we work closely with our partners to deliver the best relocation packages.

For Everyone: our certified linguists offer translation and interpreting services for individuals and businesses.
