Privacy + Cyber Year in Review and What to Expect in 2024

2023 has been a watershed year for privacy, cyber security and data regulation – with more to come in 2024.

We’ve seen an unprecedented political will to ensure the regulatory framework for privacy, data and cyber security remains “fit for purpose”. The Australian and New Zealand Governments have shown their commitment to strengthening privacy and cyber security preparedness and resilience through the appointment of the National Cyber Security Coordinator, the conduct of cyber security exercises for critical infrastructure and consultation on a wide range of regulatory approaches, strategies and issues.

This is expected to come to fruition in 2024, with anticipated reforms to Australia’s Privacy Act, introduction of a ransomware reporting scheme for Australian organisations, a NZ Biometrics Code and potential regulation of AI technologies.

We also expect to see more regulator enforcement action in Australia, particularly from the Australian Privacy Commissioner, which now has enhanced enforcement powers, is better resourced and is increasingly working with other Australian regulators and international privacy and data protection authorities. A recent example is the joint Australian and New Zealand investigation into the Latitude data breach.

In this inaugural annual review, we’ll cover key developments from 2023 and what to expect in 2024, including:

  1. Australian Government commitment to reforming the Privacy Act in 2024.
  2. Australia’s Cyber Security Strategy: “to be the most cyber secure nation by 2030”.
  3. Mandatory data breach schemes introduced for Queensland and NSW Governments.
  4. Australian privacy regulator enforcement activities in 2023 and priorities for 2024.
  5. NZ to consult on Biometrics Code in 2024.
  6. Critical infrastructure Risk Management Plan “switched on”.
  7. Australia’s eSafety Commissioner questions social media; industry codes now in effect.
  8. Progress for Australia’s digital ID program.
  9. Australian Consumer Data Right roll-out continues.
  10. AI regulation for Australia and New Zealand?

 

1. Australian Government commits to reforming the Privacy Act in 2024.

The Australian Privacy Act (which applies to the private sector and Commonwealth Government agencies, with some exceptions) is undergoing significant review and the Australian Government has committed to introducing a legislative proposal next year.

In response to the recent review of the Privacy Act, the Government agreed outright to only 38 of the 116 proposed reforms. The majority were “agreed in principle” and are subject to further targeted consultation as to “whether and how” they may be implemented. This includes the proposed removal of the small business and employee records exemptions and the introduction of an overarching “fair and reasonable” test.

Given the large number of reforms that are subject to further consultation, it seems unlikely the reform Bill next year will address all the “agreed in principle” proposals. However, we can safely assume that the privacy law reform package will include the “fair and reasonable” test, enhanced powers for the Privacy Commissioner and more stringent obligations with respect to data breaches.

While the detail of the reform package is yet to be determined, the Government’s response sets out a clear pathway for privacy law reform in Australia. The key message to regulated entities is that work needs to start now to ensure compliance with the current requirements of the Privacy Act (or, better yet, privacy best practice). This will make any uplift to address the reforms a relatively straightforward process.

For New Zealand regulated entities, while there is not currently a review process underway for the Privacy Act 2020 (NZ Privacy Act) as a whole, the NZ Privacy Commissioner has publicly stated that he will recommend amendments to the NZ Privacy Act to make it fit for the digital age. This includes increasing civil penalties for major non-compliance, introducing new data subject rights for individuals (such as a right of erasure) and imposing stronger requirements for automated decision making. Earlier this year, a Privacy Amendment Bill that would add a new privacy principle to the NZ Privacy Act was introduced to address the indirect collection of personal information (and requirements for agencies to notify individuals of the circumstances of the collection). It is currently awaiting its first reading and, if passed, would come into force on 1 June 2025.

2. Australia’s Cyber Security Strategy: “to be the most cyber secure nation by 2030”.

The Australian Government released its 2023-2030 Australian Cyber Security Strategy in November 2023, following extensive consultation earlier in the year. The strategy is built around six “cyber shields” and will be implemented in two-year increments. It comes in the same year as the establishment of the National Cyber Security Coordinator and a much more active Government role in the operational response to cyber attacks, largely driven and facilitated by the changes to the Security of Critical Infrastructure Act.

A key component of the Government’s Cyber Security Strategy will be the introduction of a mandatory ransomware incident reporting scheme, which will operate on a “no fault” basis. The key objective of the scheme is to encourage reporting and sharing of threat information. The details will be co-designed with industry, but the scheme will generally require organisations to report any ransomware incident, demand or payment to the Government soon after being subject to an attack. Failure to comply may result in civil penalties.

While the Australian Government maintains its view that the default position on ransom payments to cyber criminals should be “no pay”, it does not, at this stage, support legislating an outright ban on paying ransoms. The New Zealand Government issued similar guidance on cyber ransom payments in April 2023: it strongly discourages the payment of ransoms to cybercriminals and urges reporting to the relevant agency, whether or not a ransom is paid. The NZ Cabinet has also agreed that Government agencies will not pay cyber ransoms.

3. Mandatory data breach schemes introduced for Queensland and NSW Governments.

As part of the package of reforms to the Queensland Information Privacy Act 2009, Queensland agencies will be required to notify the Office of the Information Commissioner and affected individuals in the event of a data breach. As with the Notifiable Data Breaches scheme under the federal Privacy Act, notification is required where a data breach is likely to result in serious harm to affected individuals. The scheme also aligns with the mandatory data breach notification scheme for New South Wales agencies, which commenced in November 2023.

The Queensland reforms are expected to commence on 1 July 2025, with a 12-month transitional period for local government.

Currently, Queensland and New South Wales are the only States to implement mandatory data breach notification schemes (although voluntary notification is generally encouraged).

4. Australian privacy regulator enforcement activities in 2023 and priorities for 2024.

2023 saw increased enforcement activity by the Australian privacy regulator, the Office of the Australian Information Commissioner (OAIC).

This is perhaps not unexpected given the OAIC’s enforcement priorities for 2023, and given that the OAIC is better resourced than it has been historically and has enhanced regulatory powers following changes to the Privacy Act that came into effect shortly after the Optus data breach. Those changes include increasing the maximum civil penalty for serious or repeated interferences with privacy from $2.2 million to the greater of $50 million, three times the value of the benefit obtained from the contravention, or 30% of adjusted turnover.
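As a rough, purely illustrative sketch (simplified from the legislation and not legal advice), the new maximum penalty for a body corporate can be thought of as the greatest of three amounts. The helper function and figures below are hypothetical:

```python
def max_civil_penalty(benefit_obtained: float, adjusted_turnover: float) -> float:
    """Simplified illustration of the increased civil penalty cap for a body
    corporate: the greater of $50 million, three times the value of the benefit
    obtained from the contravention, or 30% of adjusted turnover.
    Hypothetical helper; the legislation contains further detail and qualifications."""
    return max(50_000_000, 3 * benefit_obtained, 0.30 * adjusted_turnover)

# Hypothetical example: a $10m benefit and $400m adjusted turnover
print(max_civil_penalty(10_000_000, 400_000_000))  # 120000000.0 (30% of turnover)
```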

In 2023, the OAIC launched a major investigation into the Optus data breach. The investigation is understood to be significantly progressed and a report is expected soon, as is the report following the OAIC’s investigation into the use of facial recognition technology by Kmart and Bunnings. The OAIC is also undertaking a joint investigation into the Latitude data breach with the NZ Privacy Commissioner.

The OAIC commenced Federal Court proceedings against Australian Clinical Labs in November 2023, following an investigation into its privacy practices arising from a data breach in 2022. The OAIC is seeking civil penalties (under the old civil penalty regime, which applied at the time of the breach).

The OAIC has stated that its priorities for 2024 will be the security of personal information, AI, the Consumer Data Right and privacy law reform, particularly assisting regulated entities to prepare for the reforms. It is also worth noting that the Federal Government this year reinstated the three-Commissioner model for the OAIC, which had not been in place since 2014. This is an important structural change that highlights the focus on privacy and the reforms ahead.

5. NZ to consult on Biometrics Code in 2024.

The New Zealand Office of the Privacy Commissioner will be progressing draft privacy rules for biometric information and will launch a consultation in 2024 on a biometrics privacy code. This is to address the key privacy risks associated with biometric information identified by the NZ Privacy Commissioner during public consultation in 2023: “unnecessary or high-risk collection and use, scope creep and a lack of control or knowledge about when and how biometrics are collected and used”.

The proposed code will cover the collection and use of biometric information to verify, identify or categorise individuals using automated processes; it will not apply to manual processes that use biometric information. The code will focus on proportionality, transparency and notification requirements, as well as purpose limitation, and is anticipated to provide exceptions for research purposes.

The code is intended to bring New Zealand into closer alignment with Australia and the EU by providing certainty as to when agencies can collect biometric information and by incorporating clear transparency and notification requirements. It also aims to address risks that biometrics pose for Māori, such as potential bias, discrimination and surveillance, recognising the tapu nature of biometric information in te ao Māori.

The Privacy Commissioner will also develop comprehensive guidance for agencies using biometrics, to support compliance with the proposed code and the NZ Privacy Act.

6. Critical infrastructure Risk Management Plan “switched on”.

Security obligations for Australia’s critical infrastructure assets have been progressively “switched on” since the reforms to the Security of Critical Infrastructure Act (SOCI Act) in 2021 and 2022.

The latest obligation to be “switched on” is the requirement for responsible entities to adopt, maintain, comply with and keep updated a critical infrastructure risk management plan (CIRMP) for specified critical infrastructure assets, and to comply with ongoing annual reporting obligations. Following a six-month grace period, responsible entities were required to have adopted a CIRMP by 17 August 2023 in order to maintain compliance with the SOCI Act. Failure to do so may result in a civil penalty of up to $62,600 (200 penalty units).

The requirements for a CIRMP applicable to each type of critical infrastructure asset vary. We prepared a summary of these in our article here.

7. Australia’s eSafety Commissioner questions social media; industry codes now in effect.

The Australian eSafety Commissioner took important steps this year to further enforce the “Basic Online Safety Expectations” under the Online Safety Act, as part of its duty to safeguard the online safety of Australians. In February 2023, the eSafety Commissioner served notices on Twitter, TikTok and Google requiring them to answer questions about how they are tackling online child sexual abuse, including the role their algorithms play in amplifying harmful content. Similar notices were sent to other tech giants in 2022, including Apple and Microsoft. Failure to comply with a notice within 35 days can attract civil penalties of up to $700,000 under the Online Safety Act.

In 2023, five industry codes addressing harmful online content were registered under the Online Safety Act, covering social media services, ISPs, equipment providers, app distribution services and hosting services. These codes come into effect on 16 December 2023. A further code for internet search engines comes into effect on 12 March 2024.

This comes within the broader context of the Australian Government agreeing to develop a Children’s Online Privacy Code (as an APP Code under the Privacy Act), once the proposed legislated protections for children are enacted. This includes defining a child under the Privacy Act as a person under the age of 18.

8. Progress for Australia’s digital ID program.

Australia’s digital ID program, now in its eighth year, took a significant step forward with the introduction of the Digital ID Bill 2023 (Cth) and the Digital ID (Transitional and Consequential Provisions) Bill 2023 (Cth) in November 2023. This builds on the current voluntary Digital ID accreditation scheme (the Trusted Digital Identity Framework, or TDIF) for digital ID services. Following significant data breaches in Australia over the past 12 to 18 months, which compromised identification documents for millions of Australians, the Digital ID scheme aims to reduce the privacy risk associated with providing identification documents to various governments and organisations.

The draft legislation outlines the Digital ID scheme, which would allow individuals to verify their identity online without having to provide copies of identification documents (e.g. passports, birth certificates and driver’s licences) each time verification is required. It sets out the requirements and processes for accreditation, establishes a more comprehensive legislative framework for additional entities to be accredited, and includes a number of privacy and security safeguards and a civil penalty regime. The scheme involves a four-phase expansion, with access initially for government services and for private sector services in the future. The key difference from the earlier proposed “Australia Card” and similar schemes is that the Digital ID will be voluntary for individuals. The Bill is currently before the Senate.

9. Australian Consumer Data Right roll-out continues.

This year, the roll-out of the Consumer Data Right (CDR) to the energy sector continued, with several obligations coming into effect for energy retailers and further obligations to come in 2024. The CDR is Australia’s version of “data portability”, with some notable differences: it enables individuals and businesses to have data holders share their data with accredited data recipients, with the consumer’s consent. The CDR has been rolled out to the banking and energy sectors and will continue to be rolled out across the economy, with non-bank lending and other sectors to follow. It is regulated by the ACCC and the OAIC.

In August 2023, Treasury consulted on draft CDR Rules for the expansion of the CDR to the non-bank lending sector, as well as a review of the current rules and standards applicable to CDR consents. The review of CDR consents considered the bundling of consents, withdrawal of consent, consents for de-identification and “dark patterns”.

10. AI regulation for Australia and New Zealand?

While AI is not new, in response to the rapid adoption of AI-driven technologies, particularly generative AI, the Australian and New Zealand Governments have sharpened their focus on the appropriate approach to ensuring the safe and responsible development and use of AI. While various existing laws and regulations apply to AI technologies in Australia and New Zealand, there are currently no AI-specific regulations or mandatory standards. This contrasts with other jurisdictions, such as the EU, that have taken a stronger regulatory approach (the EU AI Act has recently received political agreement).

In Australia, the Department of Industry, Science and Resources released its Safe and Responsible AI in Australia Discussion Paper for consultation in 2023 (consultation closed in August 2023). This follows the publication in 2019 of the Government’s voluntary Artificial Intelligence Ethics Framework. The Discussion Paper invited submissions on whether Australia should adopt a voluntary approach (tools, frameworks and principles), an enforceable regulatory approach (laws and mandatory standards) or a combination of both. The Government is in the process of reviewing submissions. The review of the Australian Privacy Act will also likely impact the use of AI technologies. While the details of the reforms to the Privacy Act are not yet known, they are likely to include increased transparency and accountability obligations for organisations that use AI technologies to handle personal information, as well as enhanced enforcement powers for Australia’s privacy regulator. These changes to the Privacy Act are expected to be implemented as part of the Government’s broader review of the regulation of AI and automated decision-making.

In New Zealand, the Office of the Privacy Commissioner (OPC) issued guidelines in September 2023 on the responsible use of AI tools and systems. In summary, the OPC expects agencies intending to use AI tools to secure senior leadership approval following a comprehensive risk assessment and mitigation analysis. Agencies should assess the necessity and proportionality of deploying generative AI tools, taking into account potential privacy impacts and exploring alternative methods. Further, before implementing AI tools, agencies must conduct a privacy impact assessment and prioritise transparency by informing individuals about the purpose, timing and rationale behind the AI tool’s use. Agencies are also required to engage with Māori communities to address potential risks and impacts on the taonga of their information.

For more information, please contact our Australian and New Zealand Privacy & Cyber team.
