Self-regulation: It’s a myth. If the tobacco industry self-regulated, then Mr Squiggle would currently be a cigarette mascot. If our builders and tradies self-regulated, then they’d still be using asbestos. Therefore, it’s imperative that some AI regulations are established in Australia.
As it stands, AI chatbots have popped the heck off. In December of 2022, OpenAI’s ChatGPT had over one million users. In January of 2023, that number was probs higher than 100 million.
However, despite having millions of users, ChatGPT is basically regulation-less. ChatGPT doesn’t have to credit or compensate the writers it’s stolen research from. ChatGPT doesn’t have to cite its sources. And ChatGPT wields a wealth of power from the data users have willingly sent it.
Additionally, these concerns are just the tip of the iceberg. On May 15, Australia’s Human Rights Commissioner, Lorraine Finlay, wrote about how AI chatbots are already harming our society.
“Concerningly, both ChatGPT and Bard have been found to be able to write convincingly in favour of known conspiracy theories,” stated Finlay.
“Distinguishing between fact and fiction will become increasingly difficult as AI becomes commonplace in our daily lives. Even knowing whether we are interacting with a human or a machine may become challenging.”
So, with this in mind, is the Federal Government on top of this situation? Will AI regulations become the norm across Australia? Let’s dive into these virtual waters right now.
AI Regulations in Australia
In the 2023 Federal Budget, Labor committed $41.2 million to responsibly deploying AI technologies across the country. It has also set up a Responsible AI Network, which is meant to make sure Australian industries use these technologies responsibly rather than exploiting them.
The National AI Centre’s Director, Stela Solar, is stoked that this network has been established. She believes that it will act as a form of AI regulation.
As Solar said, “Australian businesses have told us that understanding ethics and governance in implementing AI is lacking across organisations globally.”
“The Responsible AI Network provides a unique offering: Practical guidance and coaching from experts on law, standards, principles, governance, leadership, and technology to ensure explainability, fairness, and accountability are built into Australian AI systems.”
What’s more, Australia is starting to plan how it will regulate AI technologies that fall outside this programme. This is thanks to Ed Husic, our Industry and Science Minister. Husic has been in talks with high-level industry folks about the risks and rewards of AI, and he’ll use what he learns from these talks to inform Australia’s AI regulations.
However, Husic isn’t stopping there. He’s also planning on getting the public involved. At some stage in the future, he’ll open the floor for public consultation, which means you might have a chance to tell the Australian government how you feel about this matter.
Meanwhile, some folks in the Labor Party are fighting for even more to be done. For example, the MP Julian Hill wants an Australian AI Commission to be established.
When explaining his reasoning, Hill said, “ChatGPT has fuelled public awareness, but large language models are just the canary in the coal mine. Despite great uncertainty in precisely how AI technology will develop, what is clear is that AI is set to transform human society, how we experience our lives and understand reality.”
As of May 30, an Australian AI Commission isn’t on the cards. But we’ll let you know if this changes.