Advice for students regarding Turnitin and AI writing detection

On 5 April 2023, Turnitin launched a new tool that identifies material that has potentially been written by artificial intelligence (AI) software such as ChatGPT. The tool is in the early stages of development and is currently only available in staff view – this setting cannot be changed by the University.

How reliable is the tool?

This is an early release of the tool which the University has chosen to deploy so that we can thoroughly test it and actively provide input to Turnitin on its design.

The tool looks for English language patterns it scores as likely generated from an AI source and produces a conservative identification of AI-written content. The scores the tool assigns per sentence, and across groups of sentences, must collectively reach a high confidence threshold (98%) before text is flagged as likely having been written by AI.

This means that if the tool indicates that 40% of the overall text has been AI-generated, it is 98% confident that is the case. The University, along with others in the sector, is seeking more detail on the sensitivity and specificity of this model and on how confidence intervals are calculated, and is conducting its own tests of the tool's reliability. This information will be publicly shared as it becomes available.

What will the University do if the tool reports a high score for submitted work?

As with the similarity report generated by Turnitin, the result of the AI writing detector tool is a prompt for further investigation.

Should there be a suspicion that part or all of your submitted assessment has been produced using generative AI, you may be asked to explain your essay and argument (how you developed the argument, what sources you used, how you reached your conclusion), or to provide drafts or notes from earlier versions of the assessment.

The Turnitin AI writing detector is a new tool and has only been in use at the University since Semester 1, 2023. This means the tool may incorrectly identify some assessments as having been produced by AI when they have not. Should you be asked to discuss or explain components of your assessment task, understand that this alone is not an accusation of academic misconduct. The AI writing detector score would not normally be used as the only evidence to raise an allegation of academic misconduct – but it might be one of several indicators.

When is it OK to use AI tools?

The acceptable use of AI will vary across disciplines, subjects, and assessment tasks. Your subject coordinator will provide this information, but it is your responsibility to check the assessment guidelines and relevant policies, and to understand what is expected of you. Resources on academic integrity are available to you through your subject’s LMS site, Academic Skills, and the Library.

If an assessment task does permit the use of AI tools and technologies in the preparation of the submission, this usage must be appropriately acknowledged and cited in accordance with the Assessment and Results Policy (MPF1326).

If an assessment task does not permit the use of such tools and you use them anyway, or if you use such tools in the preparation of an assessment submission without acknowledgement, this is academic misconduct. In accordance with the Student Academic Integrity Policy (MPF1310), any student who commits academic misconduct is subject to the penalties outlined in the Schedule of Student Academic Misconduct Penalties.

As other tools to detect the use of AI become available, the University will consider adopting them. Work submitted for assessment may be checked with these tools at any stage, including in the years following graduation, and the University has the right to amend marks or rescind degrees should academic misconduct be found.