Tag Archives: MTTI

Spectre and Meltdown – No need to enter Panic Mode

7 January 2018

When I read about Meltdown and Spectre in the Reuters Technology News early on Wednesday morning, I immediately dug somewhat deeper to find details about the access vectors and severity. From a quick review of the published material I concluded that these vulnerabilities were only locally exploitable and would have medium to high impact. No need to panic.

Media coverage was very high the next morning. Even local German radio stations reported details about Spectre and Meltdown in the news, although there was no ground for public panic.

The following table shows the Meltdown and Spectre vulnerability details:

Table: Meltdown and Spectre Vulnerability Details, CVSS V3 Metrics

Sources: [1] NIST NVD, [2] RedHat Customer Portal, [3] NIST NVD
Abbreviation list: AV: Access Vector, AC: Access Complexity, PR: Privileges Required, UI: User Interaction, C: Confidentiality, I: Integrity, A: Availability

To exploit these vulnerabilities an attacker must have either local access to a system on your network (Access Vector Local) or access to your local network (Access Vector Adjacent Network).

But why should an attacker who has already gained access to a system on your network exploit, for example, Meltdown to extract passwords from the memory of a process? The access complexity is high; the attack takes time and effort, and thus the likelihood of early detection goes up.
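For reference, the metrics in the table above are usually published as a compact CVSS v3 vector string. The short Python sketch below splits such a vector into the abbreviations used in the table; the example vector is the one I recall NVD listing for Meltdown (CVE-2017-5754), quoted from memory, so please verify it against the NVD entry.

    # Illustrative only: split a CVSS v3 vector string into the metrics used in the table above.
    # The example vector is assumed to match NVD's rating for Meltdown (CVE-2017-5754).

    METRIC_NAMES = {
        "AV": "Access Vector", "AC": "Access Complexity", "PR": "Privileges Required",
        "UI": "User Interaction", "S": "Scope", "C": "Confidentiality",
        "I": "Integrity", "A": "Availability",
    }

    def parse_cvss_vector(vector: str) -> dict:
        """Return {metric abbreviation: value}, skipping the 'CVSS:3.0' version prefix."""
        return dict(part.split(":") for part in vector.split("/") if not part.startswith("CVSS"))

    meltdown = "CVSS:3.0/AV:L/AC:H/PR:L/UI:N/S:C/C:H/I:N/A:N"
    for key, value in parse_cvss_vector(meltdown).items():
        print(f"{METRIC_NAMES.get(key, key)}: {value}")

Running it prints one line per metric, e.g. 'Access Vector: L', which matches the 'only locally exploitable' conclusion above.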

We can expect that cyber criminals don’t behave irrationally: they choose the attack method with the lowest chance of detection. Recent publications support this:

According to the Ponemon 2017 Cost of Data Breach Study, the Mean Time to Identify (MTTI) a data breach in 2016 was 191 days, down from 201 days in 2015. If cyber criminals behaved irrationally, the MTTI would be much shorter.

Thus, there is no need for panic. Just apply the latest patches and check the performance of critical systems.

Have a great week.

Yahoo hacked in late 2014 – Breach detected in 2016

25 September 2016

In August 2016, a hacker offered 200 million Yahoo accounts for sale on the darknet. In an initial investigation, Yahoo found no evidence for this claim, but the investigation team did find indications of a data breach that had happened in 2014.

Last Thursday, Yahoo announced that account information of 500 million users was stolen in late 2014. The good news is that the company found no evidence that the attackers were still active in its network, and that only names, email addresses, phone numbers, birth dates, encrypted passwords, and, in some cases, security questions and answers were stolen.

That is bad enough, especially because reuse of account information like security questions and answers is a widespread bad habit. Yahoo users are well advised to change their security questions wherever they have reused them.

But what really worries me is that it took about 600 days before the breach was detected. That is far more than the MTTI (Mean Time to Identify) of 206 days the Ponemon Institute estimated in the ‘2015 Cost of Data Breach Study: Global Analysis’, and more than the study’s maximum value of 582 days.

One can only speculate whether indicators of compromise were non-existent, ignored, not recorded, or not regularly reviewed. Regular review of event and incident data is a really tough job, but it is essential when it comes to the assessment of indicators of compromise.

Have a good week.

Excellus BCBS Breached, 10 Million Customers’ Records Affected

12 September 2015

When I read the headlines of this LIFARS post, my first thought was: “2015 is going to be an annus horribilis for the US healthcare insurers”. Anthem, Premera, and now Excellus. Which organization will be next?

One paragraph in the Excellus announcement of the data breach is really interesting:

‘On August 5, 2015, Excellus BlueCross BlueShield learned that cyberattackers had executed a sophisticated attack to gain unauthorized access to our Information Technology (IT) systems.  Our investigation further revealed that the initial attack occurred on December 23, 2013.’

It took 590 days to identify the breach! That is 8 days more than the maximum Mean Time To Identify (MTTI) of 582 days the latest Ponemon Cost of Data Breach Study found for 2014.
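A quick sanity check of the 590-day figure, a minimal Python sketch using only the two dates quoted above:

    # Illustrative check of the 590-day figure from the dates in the Excellus announcement.
    from datetime import date

    initial_attack = date(2013, 12, 23)   # initial attack per the Excellus announcement
    breach_learned = date(2015, 8, 5)     # date Excellus learned of the attack

    print((breach_learned - initial_attack).days)  # prints 590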

This is really remarkable because it makes clear that a ‘very sophisticated’ cyber-attack is hard to identify, even with the latest security technology in place. And I bet Excellus has such technology installed. I am really curious about the details of the attack.

Take care! If you would like to do some further reading, please take a look at the latest issue of the Cyber Intelligencer, ‘You can’t detect what you can’t see’.

OPM May Have Exposed Security Clearance Data

7 June 2015

When I read David Sanger’s report ‘Hacking Linked to China Exposes Millions of U.S. Workers’ in the New York Times about the Office of Personnel Management (OPM) attack, I was shocked by both the large number of stolen records and the obviously inadequate protection measures and processes.

‘The intrusion came before the personnel office fully put into place a series of new security procedures that restricted remote access for administrators of the network and reviewed all connections to the outside world through the Internet’.

Are basic protection measures like Two Factor Authentication for all employees accessing federal computer networks from the internet really not in place, not even at the NSA?

‘In acting too late, the personnel agency was not alone: The N.S.A. was also beginning to put in place new network precautions after its most delicate information was taken by Edward J. Snowden.’

And why did it take such a long time until an investigation started? From a LIFARS blog post we learn:

‘The possibility of a data breach was first detected back in April, by the Department of Homeland Security. An internal investigation conducted in May, confirmed that the breach had indeed occurred.’

In the New York Times article we find the reason for this delay:

‘Administration officials said they made the breach public only after confirming last month that the data had been compromised and after taking additional steps to insulate other government agencies from the intrusion.’

Again, it seems to me that basic protection measures like proper network segmentation are not in place, nor are effective communication processes and business continuity management, which, according to the Ponemon 2015 Cost of Data Breach Study (page 24, figure 24), could cut the Mean Time To Identify (MTTI) a breach dramatically.

Take care!