Students would be allowed to use generative AI tools like ChatGPT for assessments under a draft framework developed by a taskforce of experts to guide the safe and ethical use of the technology in Australian schools.
But the framework has also foreshadowed changes to assessments in order to preserve academic integrity and ensure evaluation of student performance, skills and knowledge remains “fair and unbiased”.
The draft framework, released for consultation on Friday, is the result of four months of work by the Education Ministers AI in Schools Taskforce, which was set up in late February in response to the arrival of ChatGPT.
The framework was presented to the Education Ministers Meeting for the first time earlier this month, with the final version expected to be rolled out in schools next year, according to federal Education Minister Jason Clare.
It comes amid a parliamentary inquiry into the risks and potential opportunities of generative AI tools in school and higher education, which has garnered submissions from a wide range of education institutions and industry bodies.
ChatGPT remains largely banned in public schools across the country, with South Australia the only state to have never blocked the chatbot. Western Australia, having initially banned the app, lifted its ban in May.
The draft Australian Framework for Generative AI in Schools, contained within the consultation paper, encompasses six core elements – teaching and learning, human and social wellbeing, transparency, fairness, accountability, and privacy and security – and 22 underlying principles.
It asks that schools use generative AI tools for “positive impact” on teaching and learning, and not to “restrict human thought and experience”.
“The use of generative AI in the classroom will need to be balanced and primarily used to enhance, augment, or complement human skills,” the consultation paper said, adding that schools might consider teaching students “metacognitive strategies” and how to evaluate information for credibility.
Schools will be expected to “engage students in learning about generative AI tools and how they work, including their potential limitations and biases, and deepening learning as student usage increases”.
At an academic integrity level, the framework indicates that “when used in assessments, generative AI tools provide a fair and unbiased evaluation of students’ performance, skills and knowledge”, noting that changes to assessment could be required.
“Generative AI can be used to support student learning and assessment. It can also be used by students to generate answers for exams and assignments. Consideration will need to be given to identifying and responding to inappropriate use of AI to generate content,” the paper states.
“Assessments may need to be modified to avoid or use generative AI tools, so that they continue to ensure outputs will be a fair and unbiased evaluation of students’ performance, skills, and knowledge.”
The draft framework also asks that “decisions remain in human control with clear human accountability”, and schools “regularly monitor” the impact of any generative AI tools on students and teachers.
Schools would also be expected to “proactively inform” students, parents and other stakeholders about how data is collected or used by tools, as well as use tools “in ways that protect inputs by or about students, such as typed prompts, uploading multimedia, or other data”.
A review of the framework, which is described as an “evolving document”, is expected within 12 months of publication and every 12 months thereafter to reflect the “fast-moving pace of technological development”.
The consultation will close on 16 August 2023.