Monthly Archives: August 2014

Greetings from Bentheim Castle

30 August 2014

Bentheim Castle shows the direction to application security.

A minimized attack surface …

Bad Bentheim Castle Wall

A single point of access …

Bad Bentheim Castle Main Door

A hidden jewel inside …

Bentheim Castle Court Yard

E-book review: Staying Ahead in the Cyber Security Game

28 August 2014

Some weeks ago I attended the webinar ‘Staying Ahead in the Cyber Security Game: What matters Now’ sponsored by IBM and Sogeti.
The webinar is a good introduction to the free e-book with the same title. And the e-book is absolutely worth reading.

Chapter 10 is entitled ‘The data scientist will be your next security superhero’. Wow! Superhero always reminds me of the Queen song ‘Flash Gordon’:

Flash a-ah
Savior of the universe

The verse ‘Seemingly there is no reason for these extraordinary intergalactical upsets’ describes the work of a big data analyst well. My favourite verse comes at the end of the song:

Flash Flash I love you
But we only have fourteen hours to save the Earth
Flash

I love this song, and I would really love to be a superhero … ;-). Back to the e-book!

‘We may have effective detection tools to reduce the impact of the attacks. But the real revolution will be with big data: We will be able to more finely analyze what is normal and what is not normal.’

This statement gives me pause. How long does it take to find a hint where there seemingly is none? Do we really have fourteen hours in the case of an unknown attack to save the company? Would big data analytics have prevented the eBay or Code Spaces disasters? Should we rely only on the good brains of a big data analyst?

My answer is: Don’t just rely on a single technology! And don’t believe that everything is as easy as it sounds.

Big data technology can support us in boosting IT security, but, of course, it will take some time before clear indications of data breaches can be generated.

First, you have to set up data sources such as firewall or Windows event logs. In parallel, your analysts and your systems must start learning what is normal in order to recognize what is abnormal, because abnormal events are a strong indicator of an advanced threat or breach. And finally, you should create an incident response plan to do the right things when your systems detect an incident.
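The learning step can be sketched as a simple statistical baseline. Here is a minimal Python sketch with hypothetical event counts; real analytics platforms use far more sophisticated models, but the idea is the same:

```python
from statistics import mean, stdev

def train_baseline(counts):
    """Learn what is 'normal' from a history of event counts."""
    return mean(counts), stdev(counts)

def is_abnormal(count, baseline, spread, threshold=3.0):
    """An event count far outside the learned baseline is a strong indicator."""
    return abs(count - baseline) > threshold * spread

# Hypothetical history: failed logins per day, taken from a firewall log
history = [12, 9, 11, 10, 13, 8, 11, 10, 12, 9]
baseline, spread = train_baseline(history)

print(is_abnormal(11, baseline, spread))   # False - business as usual
print(is_abnormal(250, baseline, spread))  # True - investigate!
```

The hard part in practice is not the arithmetic but collecting clean data sources and deciding what ‘normal’ means for each of them.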

Sounds like a plan, doesn’t it?

By the way: The first security superhero was David Levinson in ‘Independence Day’. In an ocean of electromagnetic signals he detected an alien signal and identified it as a countdown, all within a few minutes. A true superhero!

Review – US nuclear regulator hacked several times over three years

24 August 2014

In his post ‘US nuclear regulator hacked several times over three years’ from 19 August 2014, Warwick Ashford talks about attacks on the U.S. Nuclear Regulatory Commission (NRC).

The big question is: What makes the NRC so interesting for attackers? Reports of safety audits containing information that should not be made public? I really doubt it.

In Exclusive: Nuke Regulator Hacked by Suspected Foreign Powers you get an idea about the real reasons:

‘Federal systems are constantly probed by hackers, but those intrusions are not always successful.’

Thank goodness this is absolutely correct! Nuclear power plants use very old IT technology that cannot be attacked easily. But the detailed descriptions of vulnerabilities found in audit reports will make successful attacks more likely.

Perhaps you remember the film ‘War Games’? Although the Maximum Credible Accident in a nuclear power plant is not comparable to a nuclear world war, the impact on health and the environment is catastrophic. Therefore such events must be taken extremely seriously.

By the way, the statement above talks about the known attacks on federal systems. The total number of successful attacks may be much higher …

Don’t Panic!

Rule No. 5: Minimize the Attack Surface

21 August 2014

Complex applications are composed of many infrastructure layers, e.g. database and file services or web services. Services are provided by one or many systems through complex software packages. All systems communicate with each other and with infrastructure systems like directory, naming or backup services. In order to simplify matters we omit the users.

Every operating system, software package, infrastructure service, etc. has vulnerabilities which could be used to attack the application. For example, the U.S. National Vulnerability Database (NVD) lists 9 vulnerabilities for the widely used JBoss middleware, all published in the past 3 months. On top of that, we add some self-made vulnerabilities through our application design.

The set of all vulnerabilities is the known attack surface.

Please keep in mind:

[1] The whole is more than the sum of its parts!

[2] The unknown attack surface is greater than the known attack surface, and millions of hackers are working hard every day to detect new vulnerabilities.
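As a toy model, the known attack surface can be viewed as the union of all component vulnerability sets, which makes the effect of dropping a component directly visible. The components and CVE identifiers below are purely hypothetical:

```python
# Toy model: each component brings its own vulnerabilities; the known
# attack surface is the union over all installed components.
components = {
    "operating_system": {"CVE-1", "CVE-2"},
    "middleware":       {"CVE-3", "CVE-4", "CVE-5"},
    "reporting_addon":  {"CVE-6"},
}

def known_attack_surface(parts):
    surface = set()
    for vulns in parts.values():
        surface |= vulns
    return surface

print(len(known_attack_surface(components)))  # 6
# Dropping a non-essential package shrinks the surface:
slim = {k: v for k, v in components.items() if k != "reporting_addon"}
print(len(known_attack_surface(slim)))        # 5
```

Point [1] above is exactly what this model leaves out: interactions between components can create vulnerabilities that belong to no single set.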

Today’s standard answer to this challenge is patching, patching, … But from my point of view Security by Design shows a way out of the chaos. Application systems should be designed according to

Rule 5: Minimize the total attack surface!

What does this mean for the application/system design?

  • Decompose the application into separate functions, if possible provided by separate services
  • Minimize the number of interfaces between the application components
  • Minimize the number of 3rd party components
  • Relocate services onto separate encapsulated systems
  • Minimize the number of installed software packages per system
  • Minimize the dependencies on infrastructure services

The build and run effort will definitely be higher, but the known attack surface will be much smaller.

Keep it smart and simple!

The Minimalist Approach to IT Security

18 August 2014

When it comes to USB device security, everyone starts talking about tools immediately: a tool for locking or disabling USB devices, a tool for encrypting devices, etc. Small and smart tools, integrated into a big, smart management solution to simplify endpoint administration. And each tool installs at least one agent on the endpoint which ensures that the latest policy changes are downloaded in due time.

Today, tools are necessary for the efficient administration of the complex IT systems we run to support businesses in executing their strategies. Unfortunately, every smart tool adds complexity to these IT systems.

In addition, with every new tool the attack surface of our complex IT systems increases dramatically. Why?

  • Tools are not error free. Every tool comes with some unknown vulnerabilities that could be used by attackers to get unauthorized access to our systems.
  • Tools, in particular the agents, communicate with lots of other tools. In this highly connected tool universe it is very likely that new vulnerabilities emerge from combinations of the individual tools’ vulnerabilities.

This holds for every IT task we support by tools, and in particular for the security related tasks.

Therefore I am in favour of the minimalist approach:

(1) Use as few tools as possible

(2) Check first whether the problem can be solved by existing means

For USB devices: Try a group policy and awareness training before implementing a new tool.

Simplify your Life!

Poweliks is still stuck in my mind

17 August 2014

It may sound funny, but Poweliks is still stuck in my mind. The bad news for me is: Poweliks resides only in the Windows registry.

The good news is: To start at every login, the malware uses the Windows registry, namely the outdated method of using the [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run] key.

And this is exactly the weakness of Poweliks we can use to take countermeasures!

The Windows policy ‘Do not process the legacy run list’ can be used to block Poweliks. If enabled, this policy prevents the programs listed in the Run key from being executed during login. That’s it!

Do Not Process Legacy Run List Policy

To enable the ‘Do not process the legacy run list’ policy, start the local group policy editor gpedit.msc and navigate to section User Configuration\Administrative Templates\System\Logon. Double-click the policy, select the option ‘Enabled’, enter a comment and click ‘Apply’.

Use the policy ‘Run these programs at user logon’ to whitelist the programs which you want to start at login. To prevent unwanted programs from being started during system boot, enable ‘Do not process the legacy run list’ in Computer Configuration as well.
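Since the policy mechanics are Windows-only, here is a platform-neutral Python sketch of the effect, modeling the registry as a plain dictionary. All entries and paths are hypothetical:

```python
LEGACY_RUN_KEY = r"HKCU\Software\Microsoft\Windows\CurrentVersion\Run"

# Hypothetical registry content; Poweliks hides its startup entry here.
registry = {
    LEGACY_RUN_KEY: {
        "(default)": "rundll32.exe <obfuscated script>",  # Poweliks-style entry
        "Updater": r"C:\Tools\updater.exe",
    },
}

def programs_started_at_login(registry, legacy_run_disabled, whitelist=()):
    """With the policy enabled, the legacy Run key is skipped entirely;
    only explicitly whitelisted programs still start."""
    started = list(whitelist)
    if not legacy_run_disabled:
        started.extend(registry[LEGACY_RUN_KEY].values())
    return started

# Policy enabled: the Poweliks entry never runs, the approved program does.
print(programs_started_at_login(registry, legacy_run_disabled=True,
                                whitelist=[r"C:\Tools\approved.exe"]))
```

The whitelist plays the role of ‘Run these programs at user logon’: everything not explicitly approved simply does not start.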

Sounds somewhat strange, like fighting fire with fire. A much better solution would be to isolate all applications in AppContainers, as Internet Explorer does, and run them at integrity level “Low” whenever they are connected to any network.

Microsoft, please do us this favour in Windows 10 at the latest!

Security testing – The new magic trick?

14 August 2014

Security testing is one of the top issues in the media at the moment.

Security testing will definitely support companies in delivering less error-prone and less vulnerable software to their customers. It is an old truth that the cost of fixing an error after rollout is considerably higher than before. And when it comes to security-relevant vulnerabilities, errors can have catastrophic effects on a company.

In my opinion, standalone security testing will not lead to more secure software in the long term. Security should be built into the entire development process, from requirements specification to user acceptance test, with verification and validation in each step. And it is very important to make it crystal clear to the customer that security comes at a price.

Security by Design is the means by which less vulnerable software products can be delivered.

The coding phase in particular is critical for the vulnerability of a product. To create less vulnerable software, developers have to unlearn old programming habits and acquire the well-known best practices for developing secure products. To ensure success, this transformation should be embedded in a change process.

Drive the change!

Review – ‘Poweliks’ malware variant employs new antivirus evasion techniques

9 August 2014

On 4 August 2014, Brandan Blevins talks in his post ‘‘Poweliks’ malware variant employs new antivirus evasion techniques’ about a new malware which uses new infection routes.

My first thought was: Oh no, not another new malware that cannot be detected by state-of-the-art antivirus systems!

My second thought was: Hold on for a moment. The Poweliks malware appears to jump into our computers like a deus ex machina! Sounds like magic, doesn’t it?

If you dig somewhat deeper, you find that, to implant the malware, attackers must exploit a vulnerability of the system and the good faith of the users. In this case the medium was a Word attachment to an email and a flaw in MSCOMCTL.OCX described in CVE-2012-0158.

In section ‘What might an attacker use the vulnerability to do?’ Microsoft describes the impact:

Bacteriophage P2. Source: Mostafa Fatehi

‘An attacker who successfully exploited this vulnerability could gain the same user rights as the logged-on user. If a user is logged on with administrative user rights, an attacker who successfully exploited this vulnerability could take complete control of an affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights…’.

And this is exactly what the Poweliks malware does.

What countermeasures could we take?

(a) Do not open attachments and files from untrusted sources such as email. Common sense can prevent lots of malware attacks.

(b) Do not work with permanent administrative rights.

(c) Change the User Account Control (UAC) settings to the highest level, ‘Always notify’. The malware installs Windows PowerShell if it is not already installed; in this case UAC will notify you.

(d) Check whether the latest updates and patches are installed. CVE-2012-0158 was fixed in 2012 and cannot be used for an attack if Windows Update is configured to install updates automatically.

(e) Review the Trust Center Settings in Microsoft Office.

Activate ‘Disable all macros with notification’ in section ‘Macro Settings’.

Activate ‘Prompt me before enabling all controls with minimal restrictions’ in section ‘ActiveX Settings’.

In section ‘File Block Settings’, block all formats except Office 2007 and later.

(f) Check your AV provider’s homepage for the latest updates or utilities. I bet you will find some information or a tool which could support you in an emergency.

(g) Don’t Panic!

Have a good weekend!

My favourite tools – Remote Desktop Services

7 August 2014

Remote desktop or terminal services are my favourite tools. When I started with this technology in 1997, it was a bare necessity. We had to offer a 2-tier CAE application to about 500 engineers at 3 major sites in 30 buildings. The fat-client graphics application was installed on Windows NT workstations. Data was stored in about 120 Oracle V7 databases hosted on 3 Sun database servers. It was a really hard job to keep the client workstations up to date, in particular because everyone was working with permanent admin rights. The nightmare of all system administrators!

Terminal Services put an end to this nightmare. Since users no longer had privileges on the servers, the number of help desk calls declined dramatically. Release changes were implemented within an afternoon, and new users were authorized for the application within minutes.

We gained back control, and my kids had their father back.

Today, terminal services are my preferred method of controlling access to core business data. They are really low-hanging fruit! These are the major use cases:

(1) Block access to the data from all systems except a few terminal servers. This will reduce the attack surface dramatically. The terminal servers are the only trusted end-user devices in your network. Located in the data center, they are in the best case completely under your control. It’s easier to maintain the trust state of a few servers than that of hundreds of workstations located elsewhere inside or outside the company network.

(2) Grant access to the terminal servers to authorized users only, based on the Need-to-Know principle. Review authorizations on a regular basis and make sure that no user has the permissions to change his own privileges or the trust state of the terminal servers or any other infrastructure service.

Even if an application does not support user and role management, the combination of (1) and (2) will increase the security of your information assets dramatically.

(3) Terminal servers allow you to restrict users to well-defined applications and data sources with low effort. This can be implemented by configuring the firewalls on the terminal servers: just block all outgoing network connections except those to infrastructure services and the applications. Users are prevented from creating unauthorized copies of the data. In addition, the Need-to-Know principle is enforced because only the information essential to the user’s work is provided.

(4) It’s far easier to implement two-factor authentication for a limited number of terminal servers than for thousands of endpoints. Targeted phishing attacks will no longer work because the password is no longer the single means of identification.
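The outbound blocking in use case (3) boils down to a default-deny allowlist. A minimal Python sketch, with purely hypothetical host names and ports:

```python
# Default-deny model of the outbound firewall on a terminal server:
# everything is blocked except infrastructure services and the
# published applications.
ALLOWED_OUTBOUND = {
    ("dc01.corp.example", 389),    # directory service (LDAP)
    ("db01.corp.example", 1521),   # application database
}

def outbound_allowed(host, port):
    """A connection is permitted only if it is explicitly listed."""
    return (host, port) in ALLOWED_OUTBOUND

print(outbound_allowed("db01.corp.example", 1521))  # True - the application
print(outbound_allowed("fileshare.example", 445))   # False - no unauthorized copies
```

Maintaining such an allowlist for a handful of terminal servers is feasible; doing the same for thousands of workstations is not, which is the whole point of use case (1).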

The transformation to terminal-services-controlled computing is very easy because you can set up the systems and applications in parallel to the existing application infrastructure. The final switch will have nearly no impact on the users’ daily work if the entire process is governed by a change process.

Have you ever considered a BYOD strategy based on Remote Desktop Services? Terminal services are a perfect measure to raise the trust level of the entire network, in particular when combined with your employees’ own devices.

But this is another story…

If you have any questions, please feel free to leave a comment. And enjoy the free time with your family.

BadUSB – Don’t fall into a doomsday mood!

2 August 2014

When Karsten Nohl published his research on 21 July 2014, BadUSB spread throughout the media within hours. One had the feeling that the end of the world was at the door. Millions of potentially compromised USB sticks could take control of all other USB devices.

But the worst is yet to come: We are utterly powerless! Antivirus products from any vendor cannot block this kind of attack. As if we did not know that antivirus products are of limited value today.

My first reaction was: Keep cool! It’s just a proof of concept. It’s not in the wild! And best of all: It’s a very complex task, and therefore not lucrative for ordinary attackers.

Vulnerabilities in the handling of USB devices are not new. A search in the U.S. National Vulnerability Database (NVD) shows 4 high-severity flaws in the past 18 months. Moreover, it is well known that viruses are very often spread through USB devices. We all know the risk!

And even the vulnerabilities in onboard controllers are not new. Mathieu Stephan reports in his post ‘Hacking SD Card & Flash Memory Controllers’ from 29 December 2013 that the firmware of SD cards can be compromised. Take a look at the video in his post.

Marshall Honorof’s post ‘Don’t Panic Over the Latest USB Flaw’ from 1 August 2014 saved my day.

At the end of his post Marshall sums it up: ‘Make no mistake: BadUSB is a fantastic proof-of-concept, and lays bare some serious problems with USB stick security. But, like anything else in the world of computing, you can avoid trouble using a little common sense.’

To be honest, I expect a technical solution to the BadUSB trouble within the next few months. Otherwise the USB stick market will collapse.

But in the meantime: Don’t Panic!