
Video: Understanding the Australian Framework for GenAI in Schools


In this article I’m breaking from my usual format and including a video explaining the Australian Framework for GenAI in Schools. As I travel between schools and associations, I’m finding that while most people (in Australia) are familiar with the Framework, it remains a fairly hazy topic.

The Framework, though it is fairly sparse on details at the moment, offers the beginnings of our approach to GenAI in K-12 education. It should help schools to plan for professional development and the ethical, appropriate incorporation of AI into classrooms. It should also provide some guidance on how to discuss GenAI with the broader school community.

Many organisations, universities, and individuals will be working this year to bulk out the Framework and produce resources to support the principles. My new book, Practical AI Strategies, is aligned to the Framework in numerous places and focuses on practical advice to help educators navigate the ethical and practical concerns of GenAI.

This video should serve as a (re)introduction to the Framework, briefly explaining its purpose and scope, and what I hope will happen next. You’ll find an AI-generated transcript at the end of this article.

The first core principle of the Australian Framework is Teaching and Learning. If you’re interested in learning more about how I’ve been approaching AI and Assessment, make sure you sign up for the free eBook of the AI Assessment Scale:

I’ve collated all of the resources on the AI Assessment Scale into a free eBook: click here to join the mailing list and get a copy.

If you have questions about anything related to GenAI, including advisory services aligned to the Australian Framework, get in touch below:

Transcript: generated in Otter.ai

Hi, I’m Leon Furze, and in this video I’m going to be talking about the Australian Framework for Generative Artificial Intelligence in Schools. The Framework was published at the end of 2023 and intends to provide a responsible and ethical guide to using generative AI tools in ways that benefit students, schools, and society. All of the materials in these slides come directly from the Framework itself, and I’ll share a link to that in a moment.

First of all, there’s a definition of generative artificial intelligence in the Framework which I think is helpful: generative AI can generate new content such as text, images, audio and video that resembles what humans can produce, and it’s effective at recognising patterns and emulating them when tasked with producing something. For me, one of the most important parts of that definition is that it recognises that GenAI is multimodal: not just text generation, like ChatGPT, but all of those other modes and possibilities as well. The Framework has been designed around six core principles and 25 guiding statements within those principles. In this video I’ll go through each of those core principles in turn; the diagram gives a high-level illustration of the Framework.

On the website there is also a one-page poster which breaks down all of those core principles and guiding statements, so make sure you check out the PDF of the full Framework and the poster if you need a handy reference. The first of the six core principles, and probably the longest in terms of the number of guiding statements, is teaching and learning. I think it’s great, first of all, that teaching and learning is up there as a priority. Some of the ethical concerns that we get into a little bit later are perhaps a little harder to address, but teaching and learning should obviously be the priority of a framework like this.

The guiding statements within teaching and learning all focus on the idea that generative AI isn’t going to replace teachers, students, or critical and creative thinking. In terms of 1.1, impact, we need to be using these tools in ways that enhance and support teaching, administration and learning. In 1.2, instruction, schools need to engage students in learning about generative AI tools and how they work, including limitations and biases, and to deepen that learning as student usage increases. Now, originally, when this Framework was published, I was a little critical of how sparse it was. A lot of work went into it: I know some of the consulting schools and some of the academics who put a lot of effort into this Framework.

When it was released as a document of just a few pages, compared to some of the work that was coming out internationally, it seemed a little thin on the ground. But I do know that a lot of work is happening through various individuals and organisations this year to actually put some resources behind these guiding statements. Hopefully there are going to be some great resources to support all of these areas, including instructing students in that kind of gradual way, deepening their learning as they use AI more and more throughout their schooling.

1.3 is teacher expertise: generative AI tools are used in ways which support teacher expertise, and teachers are recognised and respected as subject matter experts in the classroom. I would also argue experts beyond subject matter: pedagogical experts, experts in negotiating the individual personalities in the classroom and working with groups of students, experts in all of those areas that generative AI, or AI more broadly, certainly isn’t going to replace. So 1.3 for me is actually one of the most important guiding statements in this whole document, and it speaks to the need for ongoing continuous professional development, and for implementing these tools in ways which don’t supplant or push aside teacher expertise.

1.4 is critical thinking: generative AI tools are used, again, to support and enhance critical thinking and creativity, and not to restrict human thought and experience. We’re not trying to replace the need for students to actually think for themselves; we’re just using technologies in a way that might support that. 1.5, learning design, means that work needs to clearly identify how generative AI tools can and can’t be used, and also to allow for unbiased evaluation of students’ ability. So if artificial intelligence is used in assessment practices, there needs to be an acknowledgement that these models can produce biased or discriminatory outputs, and we need to account for that in our use of the tools.

And finally 1.6, academic integrity, which has obviously been one of the most prevalent narratives around these tools in education since ChatGPT was released way back in November 2022. I think that there are much more pressing ethical issues with these technologies than academic integrity, but we do need to address it in a framework like this. So that core principle of teaching and learning is, for me, the most important in the Framework at the moment, and the one that I think we need to put the most effort into resourcing over the next 12 months whilst the Framework is under review.

The next core principle is human and social wellbeing, and this draws on similar areas to the UNESCO guidelines, which I’ve spoken about elsewhere: the idea of using these models in ways which support and promote, and don’t impinge upon, human rights and wellbeing. 2.1 is that generative AI tools are used in ways that don’t harm the wellbeing and safety of any member of the school community. And 2.2, diversity of perspectives, recognises that GenAI needs to be used in ways which expose users to diverse ideas and perspectives and avoid the reinforcement of biases.

Now, both of these I think are going to be really difficult for schools to grapple with. Wellbeing, because as we know from device use in general, social media, and other aspects of digital technologies, it’s really hard to understand how young people are using technologies, and hard to engage them in meaningful conversations around these technologies outside of the context of things like cyber safety and digital consent lessons. And we know that there are often wellbeing issues caused by digital technologies that happen outside of the school and the classroom, which are to an extent out of our control. So this is reinforcing that we need to be conscious of those issues.

And we need to think of ways that we can approach those issues in schools. Diversity of perspectives, again, speaks to that bias issue, which is a bit of a theme throughout pretty much any document that talks about generative AI. We need to look for ways to avoid the reinforcement of bias. Again, really difficult, because these technologies are inherently biased, and I think the best way to approach that for now is to understand where that bias comes from in the datasets and through the development of these models, and to be able to talk to students about that bias. And then finally, human rights, overlapping obviously with the UNESCO guidelines: generative AI tools are used in ways which respect human and worker rights, including individual autonomy and dignity. That speaks to some of the potentially near-future implications of these technologies.

The third core principle is transparency: school communities understand how generative AI tools work, how they can be used, and when and how these tools are impacting them. 3.1 is information and support: teachers, students, staff, parents and carers have access to clear and appropriate information and guidance about GenAI. Again, that’s going to be something we need to work on during this 12-month review period, and a lot of people, myself included, are putting together resources that go out into school communities. When I run parent information sessions for schools, for example, there’s often a lot of interest from parents and carers in exactly what these technologies do, how they work, and the implications for their children in schools.

Linked to that is 3.2, disclosure: school communities are appropriately informed when GenAI tools are used. I don’t think that we need to disclose every single use of GenAI; we don’t need the little tagline at the end of emails like we get when we send an email from an iPhone, for example. But there does need to be some documentation in the school somewhere which indicates whether generative AI is used in lesson planning, in reporting, and so on. That can go in as part of things like digital user agreements, or your general school policies and guidelines. And 3.3 is explainability: vendors ensure that users broadly understand the methods used by generative AI tools and their potential biases. This pushes some of the responsibility back onto the vendors, and I’d like to see a little more of that in the Framework: a little more responsibility from the developers and from the companies pushing these tools into schools.

Because I don’t think there’s a lot that schools can actually do to ensure that the models being used are transparent. If you’ve seen anything like the Stanford transparency index for generative AI, you’ll know that most of these models are very lacking in transparency, and it’s really difficult to avoid that black box issue of not really knowing what’s going on under the hood. So I don’t think it’s reasonable to assume that schools are going to be able to identify transparent models, or even to fully grapple with the potential for bias; the responsibility there definitely lies with the developers and the vendors. The next core principle is fairness: generative AI tools are used in ways that are accessible, fair and respectful.

Again, this overlaps with human and social wellbeing, but with a little more focus here on accessibility, inclusivity, and equity. So we’re looking at using these tools in ways which are inclusive, accessible and equitable, including for people with disability and from diverse backgrounds. We know that these technologies could potentially be used as accessibility tools, but I’m waiting to see more solid research around the efficacy of these kinds of applications, and we certainly don’t want people to struggle in education because these tools become ubiquitous and they don’t have access. Similarly, regional, rural and remote communities are considered when implementing GenAI.

This is incredibly important. I live and work in a regional area in southwest Victoria, and access to technologies is an issue. It was an issue all the way through COVID, and it continues to be an issue in terms of access to digital infrastructure, technologies, and so on. We don’t want to exacerbate those issues. 4.3 is non-discrimination, again just reinforcing that GenAI needs to be used in ways which support inclusivity and minimise opportunities for discrimination. This speaks to that bias issue, but it also means that when we are using these tools for assessment purposes, or if we use other forms of AI like predictive algorithms, we need to make sure that we know and understand the impact of those technologies. And 4.4 is cultural and intellectual property: GenAI tools are used in ways which respect the cultural rights of various groups, including Indigenous cultural and intellectual property rights.

There’s some great research happening in Australia and globally, with various Indigenous researchers and Indigenous groups, into the implications of these technologies. I would encourage you to go out and investigate some of that research, because there are some really complex issues surrounding Indigenous cultural and intellectual property rights, and surrounding Indigenous languages and their use in these kinds of models. It’s well worth looking into. Core principle five is accountability, and here we’re looking at using GenAI tools in ways that are open to challenge and retain human agency and accountability for decisions. The first guiding statement is human responsibility: teachers and school leaders retain control of decision making and remain accountable for decisions supported by the use of AI tools.

And when we’re using tools, they are tested before they are used and reliably operate in accordance with their intended purpose. Again, I think that one is out of the hands of schools to a certain extent. We will get chatbots and generative AI tools pushed into schools from other areas: they might come from EdTech, from developers or vendors, or from state departments of education, such as the recently released New South Wales EduChat chatbot, which is being trialled at the moment, or similar chatbots being trialled in other states like South Australia and Queensland. So the people responsible for developing these tools also have the responsibility for testing reliability and safety. The impact of generative AI tools on school communities is actively and regularly monitored, and emerging risks and opportunities are identified and managed. Again, this is something internal to schools in terms of monitoring whether these things actually have a positive impact on education.

There’s an academic or university research component to this; we will need to see more research coming out around the efficacy, or otherwise, of these programs, and again, government and developers are responsible for that. And 5.4 is contestability: members of school communities that are impacted by GenAI tools are informed about, and have opportunities to question, the outputs. So if we’re using generative AI for parts of assessment, reporting, or anything like that, students, parents and the community have an opportunity to contest that. The final core principle is privacy, security and safety, and a lot of this is already covered by existing regulations that apply to education. Many of the uses of these technologies already need to comply with Australian privacy and data rights law with regard to the unnecessary collection, retention and distribution of student data, so we’re already covered by existing laws. There are privacy disclosures, which again can go into school documentation, and protection of students’ inputs: taking appropriate care when entering information into GenAI tools. My advice is to just never put identifiable information into GenAI models.

Cybersecurity and resilience means implementing robust cybersecurity measures to protect the integrity and availability of school infrastructure. Again, that kind of penetration testing, red teaming and security testing should fall to government, if they are responsible for releasing chatbots, to the EdTech companies and the vendors, and also to internal processes in schools. And finally, copyright compliance: we need to be aware of applicable copyright rules. In particular, the Copyright Agency in Australia is working at the moment on ways of dealing with these technologies, as are copyright organisations across the world, so this one is definitely a moveable feast. So those are the six core principles and their guiding statements. What I would encourage you to do, if you are in a school leadership position in particular, is to spend some time just looking through the Framework as it stands. Again, there’s not a great deal of detail to it at the moment beyond these guiding statements, but over the course of the next 12 months I’m hopeful that people will start to put some resources behind all of this. Thank you very much.

