
Security Breach Transparency

Another security breach has made the news. This time the folks at Ticketek are having a hard time explaining why "...customers' names, emails and dates of birth may have been accessed in cyber security breach".

What happened?

It's not really clear at this stage. Ticketek is claiming a "reputable 3rd party" was involved, yet reports have circulated claiming their Snowflake data warehouse was compromised. Snowflake issued a notice on their website, as did the Australian Cyber team.

Looking at all of these advisories, we can assume that Ticketek did not have MFA configured on their instances. That is purely speculative, which is exactly the reason for this post.

How did we get here?

As an Information Security professional, it is my job to advise customers on their security posture and help them implement security controls to best meet their security risk appetite. Be it Optus, Medibank, Uber, or Ticketek, something went wrong. We will probably never know what happened, since the post-incident reports are typically kept private.

Should they be?

Double standard for data privacy

With all of these data breaches, there does appear to be a double standard that I need to call out. Data processors are quick to apologise for breaching your data with the classic "we take security seriously" messaging, yet when it comes to why they messed up, and what lessons can be shared with the rest of the industry, they go very quiet. They claim that the information is confidential and do not want to share it.

So you're allowed to breach our data, but when it comes to understanding why you breached it, then you're all secretive about it? Sorry, that doesn't work in my book.

What do we need?

You don't start an arson investigation while the building is still on fire. When the issue is first detected, the cyber and legal teams need to be involved immediately to address it: stop the bleeding, put the fire out, make sure the leak is contained, and return operations to normal as soon as possible. Nothing changes on this side.

We do however need better transparency with the investigation that follows the incident. Data processors are typically not keen to share this information: it could highlight further deficiencies in their cyber security and invite more breaches, or it can simply be embarrassing to admit that they screwed up. Public admissions can go several ways, and in many cases organisations are concerned about the impact on public perception, reputation, or their share price.

Publish the incident report

Without any finger-pointing or blame-shifting, the breached organisation must publish the incident report. Here's an example based on a hypothetical incident to describe the different components of such a report.

What happened?

Describe the business view of the issue

On January 1st, 2010, Example Org suffered a data breach where the email addresses and dates of birth of all 10,000 customers were leaked on the internet.

Describe the technical view of what happened

On January 1st, 2010, a database backup from the e-commerce application was accessed by an unknown 3rd party. The database backup was stored in a public AWS S3 bucket hosted by a 3rd party data processing supplier.
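To illustrate what "public" means in practice, here is a minimal sketch of how anyone on the internet could reach such a bucket using boto3 with an unsigned client. The bucket and object names are hypothetical, and it assumes the bucket also allows anonymous listing, as many misconfigured buckets do.

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Unsigned client: no AWS account or credentials are needed to reach a public bucket.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# Enumerate the contents of the (hypothetical) public backup bucket...
for obj in s3.list_objects_v2(Bucket="example-org-backups").get("Contents", []):
    print(obj["Key"], obj["Size"])

# ...and download a backup file, again without any credentials.
s3.download_file("example-org-backups", "db-backup-2010-01-01.sql.gz", "backup.sql.gz")
```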

How did it happen?

Describe the chain of events that led to the incident

Example Org uses the services of Company Z to store offsite backups as part of their internal disaster recovery plan. Through a misconfiguration on the supplier side, Company Z allowed the data to be stored in a public S3 bucket.

Why did it happen?

Describe the underlying reasons the incident occurred

Example Org was facing a number of new business challenges, and decided to outsource the backup and recovery of their key system to a third party (Company Z), to allow them time to focus on their internal business growth.

Company Z has experienced unprecedented growth and hired a number of new personnel to help them scale their operation. An intern hired by the company was tasked with implementing the backup solution and had difficulty configuring the authentication between the storage bucket and the application server. To work around the problem, the intern inadvertently set the bucket to be publicly accessible. While this solved the immediate problem of communication between the application server and the storage bucket, it left the data publicly exposed.
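To show how small that workaround really is, here is a minimal sketch of the change in boto3 terms. The bucket name, account ID, and role name are hypothetical; the first part is the misconfiguration, the second is the private alternative the intern was struggling to set up.

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-org-backups"  # hypothetical bucket name

# The shortcut taken in this hypothetical incident: disable the public-access
# guard rails and attach a wide-open bucket policy. The authentication problem
# "goes away" - and so does any protection on the data.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)
public_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",                      # anyone on the internet
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(public_policy))

# The intended fix keeps the bucket private and grants access only to the
# application server's IAM role (account ID and role name are made up).
scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:role/app-server"},
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(scoped_policy))
```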

What did we learn from it?

After the investigation, what are the lessons you've learned from this? What can the rest of the industry take away from it?

  • No Supplier Assurance review was performed - Company Z was contracted to perform the data storage and recovery service, yet no formal review of their security practices was ever performed.
  • Architectural Review Practices - When new solutions are designed, an architectural review process needs to be performed to ensure customer data is kept secure at all times.
  • Implement Cloud Security Tooling - No CSPM or CNAPP tool was in place to identify the issue. Even though AWS Trusted Advisor did alert on the issue, no one was watching the alerts, and no operational tickets were created to trigger a correction of the misconfiguration. (A minimal sketch of such a check follows this list.)
  • Improve training and onboarding - When new team members join the team, they need to be onboarded with security best practices in mind. Education and support need to be available so they can ask questions when difficult situations occur.
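To make the tooling lesson concrete, here is a minimal sketch of the kind of check a CSPM tool, or even a small scheduled script, would run, assuming boto3 and read access to the AWS account. It flags any bucket whose policy is public or whose public access block is missing or incomplete.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]

    # Does the bucket policy make the bucket public?
    try:
        policy_public = s3.get_bucket_policy_status(Bucket=name)["PolicyStatus"]["IsPublic"]
    except ClientError:
        policy_public = False  # no bucket policy attached

    # Are all four public-access guard rails enabled?
    try:
        pab = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        guarded = all(pab.values())
    except ClientError:
        guarded = False  # no public access block configured at all

    if policy_public or not guarded:
        print(f"ALERT: {name} may be publicly accessible - raise an operational ticket")
```

The point is not the specific script; it is that the check runs on a schedule and its findings land somewhere that generates an operational ticket, rather than an alert no one is watching.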

What are we doing about it?

Describe what you're doing about it and what actions are being taken to ensure this never happens again.

  • Procure and implement a CNAPP tool - within the next 3 months
  • Develop the new onboarding security program - within the next month
  • Update the change management process to include an Architectural Review step

Improved risk assessments

At some point throughout the Cyber program, a risk assessment should have been performed to identify possible gaps in the way the systems are operated.

  • Did the Risk Assessment address any operational deficiencies?
  • Did it call out any tooling requirements?
  • Did it call out any impacts on customer perception or impacts on reputation?

Looking back at the Risk Assessment after the security incident - did you get it right? The security incident is the realisation of a risk you hoped would never occur.

Post Incident Audit

Within 12 months of the security incident, an external security audit will need to be performed to ensure that the issues identified during the investigation have been rectified. The audit must confirm not just that the issue has been resolved, but that it has been resolved in a way that prevents it from occurring again.

Fines

There is merit in introducing fines when organisations suffer a data breach. I would see something like a sliding scale, where the value of the fine is proportionate to the size of the breach. This is designed to push organisations to delete records once they're done with them, and to acknowledge that when they collect huge volumes of data to generate more revenue, the risk of managing that data should be weighed as part of the risk of doing business.

# of records breached    Cost per record    Total fine
10,000                   $10                $100,000
100,000                  $20                $2,000,000
1,000,000                $50                $50,000,000
10,000,000               $80                $800,000,000
100,000,000              $100               $10,000,000,000
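As a worked example of the sliding scale, here is a minimal sketch in Python. The tier boundaries and per-record costs simply mirror the table above and are purely illustrative, not based on any existing legislation.

```python
# Hypothetical sliding-scale fine: the cost per record rises with the size of the breach.
TIERS = [
    (10_000, 10),
    (100_000, 20),
    (1_000_000, 50),
    (10_000_000, 80),
    (100_000_000, 100),
]

def breach_fine(records_breached: int) -> int:
    """Return the fine in dollars for a breach of the given number of records."""
    cost_per_record = TIERS[-1][1]  # breaches beyond the last tier use the top rate
    for threshold, cost in TIERS:
        if records_breached <= threshold:
            cost_per_record = cost
            break
    return records_breached * cost_per_record

print(breach_fine(10_000))     # 100000
print(breach_fine(1_000_000))  # 50000000
```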