The sorts of skills we seek to develop in students vary across disciplines and fields of study. There are some common ‘core competences’ across CAP degrees, however, that can be identified, and the impact of generative AI upon them discussed.
This guide is a response to interest from staff and students about how to approach generative AI and what the better and worse uses of these tools are. It has become clear that, to answer this question, we need to understand what generative AI is and is not doing, and then set that alongside the key things that define a university education and why you come to the ANU. Things will continue to change rapidly, but the following is our current approach to responding to generative AI, to identifying the better and worse ways it can be used in education, and to explaining the reasons for those decisions.
This guide also provides additional resources you can access to learn more about generative AI. In all cases, if you are ever in doubt, talk to your convenor. A similar guide has been provided to all CAP convenors, so staff and students have the same set of information.
What is generative AI actually doing?
The first thing we need to do is consider what generative AI is producing, and what those products actually represent.
Underpinning generative AI products such as ChatGPT are language models (LMs). Put most simply, these models collect huge amounts of text from across the internet and parse it for coincidences, regularities, and sequences. LMs do not understand the information that is fed into them, and they have no concept of accuracy or inaccuracy. All they do is examine patterns of words: ‘Contrary to how it may seem when we observe its output, an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning’ (Bender et al. 2021, 617).
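The ‘stitching together sequences according to probabilistic information’ described above can be illustrated with a toy bigram model. This is a deliberately simplified sketch (modern LMs are vastly more sophisticated neural networks, and the tiny corpus here is invented for illustration), but the core point holds: the program produces fluent-looking word sequences purely from observed word-order statistics, with no grasp of meaning.

```python
import random
from collections import defaultdict

# A tiny "training corpus" -- a stand-in for the vast text LMs ingest.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Record which words have been observed to follow each word.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, length=8):
    """Stitch together a sequence by repeatedly sampling a word that has
    followed the current word in the corpus. There is no meaning here --
    only observed word-order statistics."""
    word, output = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. a plausible-looking but meaningless sequence
```

The output can look grammatical because it recombines sequences the model has already seen, yet the program never ‘knows’ whether anything it emits is true, which is the essence of the stochastic-parrot critique.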
The result is text that seems fluent and confident, but that has no purpose or meaning behind it. We make a mistake when we read this text as if it means something, or as if it is an intentional effort to communicate: ‘If one side of the communication does not have meaning, then the comprehension of the implicit meaning is an illusion arising from our singular human understanding of language’ (Bender et al. 2021, 616).
The result is that the information, arguments, and themes generative AI produces may look accurate and meaningful to us, but this can never be guaranteed; indeed, we should expect errors and inaccuracies.
The result for students is that generative AI can appear to provide answers and be useful, but you should not trust it to be accurate, reliable, consistent, or meaningful. These systems are tools, not substitutes for your own academic skills; they simulate certainty, but that certainty is often a mirage.
A good resource to read is ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’ by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell, which introduces the nature of these LMs and discusses their features and weaknesses. We do need to remember that this field is rapidly evolving and is characterised by innovation and debate; our response will have to develop with those changes.
Subject knowledge
Independent expertise about the key issues in any particular subject. These may include facts, theories, perspectives, events, key authors/thinkers.
Critical skills
The ability to develop independent opinions and arguments, and engage with the opinions and arguments of others. To assess the validity and value of facts, theories, and interpretations relative to a set question or task.
Self-awareness
A development in the student’s own beliefs, understandings, and attitudes as a result of engaging with new fields of study.
Communication skills
Clear and precise communication of complex issues and ideas, together with an ability to develop, defend, and promote independent perspectives in written and oral forms.
The challenge of generative AI
Generative AI is not banned at the ANU. However, that does not mean that all uses of these products and platforms are equally valid. We can use our understanding of what generative AI is, and our appreciation of what university education is about, to come to some broad conclusions.
- Generative AI has negative consequences where it replaces a student’s development of subject knowledge, critical skills, and self-awareness. Students using it to do so are undercutting their own ability to develop these skills.
- Generative AI is positive where it supports a student’s independent development of subject knowledge, critical skills, and self-awareness.
In turn, this broad division can be related to particular types of activity by students as a guide to what sorts of behaviours are more or less problematic.
Inappropriate uses of generative AI
What unites examples of inappropriate use of AI is that they all substitute the product of an AI system for your own work and understanding.
None of these use cases support the development of your core competences; indeed, they all represent ways to avoid developing and using them. It is your job to learn, recall, organise, and argue independently.
Remember - generative AI is not producing meaning, so all of these activities are going to produce outputs that are inaccurate, incorrect, or problematic when viewed by a subject expert.
Issue | Explanation | Example |
---|---|---|
Using AI-produced text ‘as your own’ | This is a clear academic integrity issue: a student doing this is passing off the work of someone (or something) else as their own. | Asking ChatGPT to write a 1000-word essay on ‘What was the most important reason for the wars in Vietnam?’ |
Using AI to generate structures for assignments, such as essay plans | This may be an academic integrity issue. Structures are not neutral: they tell you what to talk about and what to ignore. Creating them is a vital part of the work students are expected to do, because a structure is in effect an argument about what is important enough to discuss. | Asking ChatGPT to write an essay structure on a given essay topic |
Using AI to rephrase the work of other authors to avoid plagiarism detection | This is a clear academic integrity issue: a student doing this is passing off the work of someone (or something) else as their own. | Asking Quillbot to rewrite a paragraph from a key reading or existing student work so you can put it into your essay |
Using AI to generate synopses of complex issues | This may not be an academic integrity issue by itself, but it is still a misuse of generative AI: students are expected to have the core competences to undertake this sort of activity. Moreover, the nature of generative AI means there is a very high likelihood of error and inaccuracy. | Asking generative AI to ‘explain how Indonesian politics works’ |
The 'grey zone' - where you need to ask the course convenor
What unites examples in this grey zone is that there are clear pros and cons. Here you should talk with your convenor about how to approach these examples, and only use them with guidance from the convenor.
Issue | Explanation | Example |
---|---|---|
Using AI text with citation/referencing | This avoids the academic integrity issue, as the student is not claiming the AI-produced text as their own, but nor is it using core competences in robust ways. Students need core competences to do this. | Generating a short sentence on a topic and then referencing that in an assignment |
Using AI to review your existing structure | This provides students additional feedback on their own creations, but to use it effectively, students need to understand what generative AI is and interpret its responses. Students need core competences to do this. | Feeding in your independently created essay structure and asking for improvements, or suggestions about what is missing/not needed |
Using AI as a ‘tutor’, especially outside of regular contact hours | This can provide quick feedback, but to use it effectively, students need to understand what generative AI is and interpret its responses. Students need core competences to do this. Ask yourself: given what generative AI is, would you ask a human tutor questions and use their answers if you knew they were not a subject expert and could be making things up? | Asking generative AI whether you have a good understanding of something, or how to improve something |
Using AI to provide a synopsis of a single text | The student is not reading the work independently, and reading is crucial to core skills, so this is not a substitute. By specifying a particular source, however, the student gives generative AI specific guidance, which may help cut down egregious errors. | Asking generative AI to produce a summary of a specific named text, or uploading a file of a key reading to summarise. This will tend towards greater accuracy, as there is a bounded and specific source to work from. |
The least problematic uses of generative AI
Good use cases subject the products of generative AI to the student’s own critical abilities, reasoning, and independent knowledge.
They are based on an understanding of what generative AI produces and how to actively respond to that.
If you have additional use cases of generative AI you think are valid, speak to your convenor to see what they think.
Issue | Explanation | Example |
---|---|---|
Using AI text with citation/referencing AND subjecting that text to the critical analysis of the student | This both respects academic integrity and actively applies the student’s critical abilities and learning. It still comes up against the nature of what generative AI actually produces, but the student is actively using their core competences to assess and respond to that. | Generating content from ChatGPT on a specific prompt, referencing that in an assignment, and actively explaining why it is persuasive or inaccurate. All existing best practices around how to engage with sources apply. |
Using AI to identify sources on a topic a student needs to read | This can save time in searching for relevant material, but remember that AI can produce ‘fake references’ that do not actually exist, and the onus is on the student to check. You also need to be able to find sources independently, without AI. | Asking generative AI to suggest five key readings on the topic of market deregulation, then locating those sources (verifying they really exist), reading them, and considering what was missed. |
A note on translation and generative AI
For non-language courses
Translation is not a neutral act; it is creative, as choices of words, expression, grammar, and syntax do not map simply or linearly between languages. As such, students should actively check with a convenor whether using generative AI or another translation system is appropriate in a given course. You should always reference which translation system was used.
One particular issue to be aware of is writing all or part of an assignment in your native language and translating it using such a system. Our guidance is: do not do this. Translation is a creative act, and whilst this may not be plagiarism, it may still represent an infringement of the principles of academic integrity, which emphasise that you are directly the creator of the content being graded, except where you appropriately cite external sources. In our experience, many translation systems create hard-to-read and inaccurate translations that seriously impact the quality of submitted work.
For any language course
There is a very strong presumption against any use of generative AI or other translation program. You should not use it unless explicitly instructed by a course convenor, and even then only in the completion of the specific task/activity that the convenor noted was a legitimate use.
CAP’s approach overall
This guide is part of a broader concern across the college to help students understand the importance and value of developing and using the core competences during your time at ANU.
We recognise that developing and using these core competences is time consuming and often challenging. This is the heart of the learning journey that you are on when you study with us; it is why degrees take years, not days. You should never feel lost or alone in this journey, so always reach out for explanation and assistance, particularly around understanding what is required in assessments. Equally, however, it is never right to ‘short cut’ this journey by misusing generative AI, or through other actions that infringe on academic integrity.
Our plan is to work together to identify the better uses of generative AI and develop ways we can use them in classrooms. Similarly, we want to work together to identify the more problematic ways to use generative AI, the reasons why these might be tempting even given all the problems associated with them, and how to avoid the misuse that undercuts the value of education and the degrees you work so hard to graduate with.
Some advice on crafting questions for generative AI
For use cases of generative AI that have your convenor's support, there are still better or worse ways to query generative AI systems like ChatGPT.
Effective Prompting
What types of questions to generative AI are more likely to yield useful information?
This is still an emerging field of study. What we know so far is that effective questions tend to be more focused and specific, whether that is about a specific author, text, or idea. With greater specificity we are more able to control for the inevitable inaccuracies, mistakes, and misrepresentations that accompany generative AI.
The key thing to know then is that good questions to AI require pre-existing knowledge because to be focused and specific you have to know what to be focused and specific about.
Effective interpreting
All products of generative AI need to be assessed. If you ask a question of these systems you need to know the answer it generates for you is actually appropriate. There is no way to do that without yourself, independently of generative AI, having read, discussed, and engaged with issues and content in your courses.
Without the ability to interpret, you are unable to manage the risks of misusing generative AI.
More ethical, effective, and appropriate use
Effective prompting and effective interpreting enable productive and critical use of generative AI by ensuring students have the skills and knowledge to understand what it produces, why it produces it, and what is actively required of them to turn that into meaning.
The more a student relies on these systems, or misunderstands what they can and cannot do, the more they are unable to come to their own opinions, understandings, and insights.
Remember - misuse of generative AI is always a problem and may be an academic integrity issue
The use cases we have highlighted in the red table are all potential triggers for existing academic integrity review processes, and may result in findings of poor academic practice or academic misconduct, with the associated range of penalties that already accompany those findings.
Remember that misusing generative AI, even when it is not an academic integrity issue, is still a problem. Our experience so far has been that these types of misuse result in essays and assignments that are weak, most often because the AI systems being used have no sense of the quality of what they produce; these weaknesses are quickly picked up by subject experts. Do not fall into the trap of a false certainty, and do not deny yourself the ability to independently show you have the core competences discussed under ‘generative AI and student skill development’.
Resources
Contacts who can help
The most important contact is your course convenor - this can be on use cases for generative AI, but of course is vital to understand assessment tasks and course content.
We also recommend you contact academic skills teams, whether those are located in your school or centrally.
Find out more
- Wolfram S (2023) ‘What is ChatGPT doing … and why does it work?’, Stephen Wolfram: writings.
- Wilson M (16 March 2023) ‘ChatGPT explained: everything you need to know about the AI chatbot’, TechRadar.
- Bender EM, Gebru T, McMillan-Major A and Shmitchell S (2021) ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’, Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21).
- ANU Policy: Student academic integrity
- ANU Library on Artificial Intelligence including generative AI
- Referencing guide