Published: 1-Oct-24
AI Mandatory Guardrails in High-Risk Settings
The Albanese government has released a Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings. It calls for submissions on:
• A proposed definition of high-risk AI
• Ten proposed mandatory guardrails
• Three regulatory options for mandating the guardrails, including a standalone Australian AI Act (similar to regulation already introduced in the European Union).
Consistent with approaches in Canada and the EU, the proposed guardrails would require organisations developing or deploying high-risk AI systems to take steps to reduce the likelihood of harm from their products.
The risk-based approach emphasises testing, transparency and accountability, including the labelling of AI systems and testing of products before and after release.