Monthly Archives: November 2014

Why is Internet Explorer security such a challenge? More tips to minimize the risk

29 November 2014

In his report ‘Why is Internet Explorer security such a challenge?’ Stephen Bigelow talks about Internet Explorer (IE) security and attack trends. In the section ‘Tips to minimize the risk’ he introduces the standard mitigation measures.

In addition, IE 11 and Windows 8 provide security functions which can be activated or adjusted to make Internet use less risky:

1. Set User Account Control (UAC) to ‘Always notify me’

With UAC set to ‘Always notify me’ you will be notified if malicious code executed in Internet Explorer tries to install software or to make changes to your computer.
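
For scripted setups, the same level can be set via the registry. A minimal sketch, assuming the documented UAC policy values; verify against your Windows version before deploying:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
; 2 + secure desktop prompt = "Always notify me"
"ConsentPromptBehaviorAdmin"=dword:00000002
"PromptOnSecureDesktop"=dword:00000001
; UAC itself must be enabled
"EnableLUA"=dword:00000001
```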

2. Activate SmartScreen Filtering to reduce the risk of phishing attacks

SmartScreen Filtering was introduced with IE8 and was integrated into the OS with Windows 8. After you click a link, SmartScreen Filtering checks websites and files against a list of reported harmful sites and blocks downloads from these sites.

If the SmartScreen Filter blocks a malicious site you will get an error message like

SmartScreen Filter Error Message

To activate SmartScreen Filtering, check ‘Enable SmartScreen Filter’ in the IE Advanced security options.
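
For deployments, the checkbox maps to a registry value. A sketch, assuming the value name used by IE9 and later (EnabledV9); check it on your IE version:

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\PhishingFilter]
; 1 = SmartScreen Filter enabled
"EnabledV9"=dword:00000001
```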

3. Activate Enhanced Protection Mode (EPM) in the Internet Explorer Advanced Security Options

With EPM activated, IE runs in an AppContainer at a low integrity level. Write access to resources at medium or high integrity level, e.g. Windows system resources, is blocked.
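
EPM, too, can be switched on outside the dialog. A sketch of the registry values behind the checkbox (value names assumed from IE10/11; verify before rolling out):

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main]
; "PMEM" = Enhanced Protected Mode enabled
"Isolation"="PMEM"
; optional: run tab processes as 64-bit in EPM
"Isolation64Bit"=dword:00000001
```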

4. Try to work without administrative rights

From my point of view this is the most important advice of all. Without administrative privileges it is very unlikely that malicious code executed by Internet Explorer can attack the operating system, because privilege elevation is blocked by the User Account Control (UAC) in Windows.

Even if you activate only SmartScreen Filtering and EPM, Internet use will become less risky.

Moon over Wangalm, Austria. 47°22'54.1"N 11°06'35.4"E

Have a nice weekend.

Isolated security measures make no sense!

27 November 2014

In the past few days I have done a lot of application protection planning with Oracle databases as the backend.

Oracle SQL*Net encryption is an easy-to-implement measure to protect the network traffic against sniffing attacks in a standard client-server application with an Oracle database as the backend.

Why is encryption of the Oracle network traffic so important? Because everyone who can edit the sqlnet.ora configuration file is able to set up Oracle network tracing by adding just a few configuration parameters.
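
For illustration, these are the kind of parameters an attacker would add to the client’s sqlnet.ora; the trace directory below is an arbitrary example:

```
# Client-side sqlnet.ora -- enables full Oracle Net tracing
TRACE_LEVEL_CLIENT = SUPPORT       # maximum detail
TRACE_DIRECTORY_CLIENT = /tmp/trc  # where the trace files are written
TRACE_FILE_CLIENT = client
TRACE_UNIQUE_CLIENT = ON           # one trace file per connection
```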

With tracing enabled the entire session traffic, e.g. a change password command like

Alter user system identified by ",v$1ry c2mplex p$3ssword!";

is logged in plain text to the trace file. That’s extremely dangerous! Never change your password this way!

But the output of a standard SQL command like

Select employeeName, employeeSocialSecNumber from employees;

will be reported in plain text as well, even if the column ‘employeeSocialSecNumber’ is secured with the Oracle Transparent Data Encryption option. This works as designed: the encryption is transparent to the user. So even if the user takes care of the data, an attacker with admin privileges could easily create a copy from the trace file.

With SQL*Net encryption activated the trace is no longer readable.

SQL*Net encryption is set up by just adding a few configuration parameters to the sqlnet.ora file on the clients and the server. Thus everyone who is able to edit the sqlnet.ora file on the client could potentially disable encryption…

Fortunately the parameter SQLNET.ENCRYPTION_SERVER, set to REQUIRED on the database server, controls the session behaviour. A client connection attempt with session encryption disabled will be rejected with error message ORA-12260. Great!
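
A minimal server-side sqlnet.ora sketch; the algorithm names are examples, pick what your Oracle release supports:

```
# Server-side sqlnet.ora -- force native network encryption
SQLNET.ENCRYPTION_SERVER = REQUIRED
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
# Recommended in addition: integrity checking
SQLNET.CRYPTO_CHECKSUM_SERVER = REQUIRED
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA1)
```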

But all server admins are able to edit this configuration file…

What can we learn from this?

Applying isolated security measures will not raise the overall security level.

The House of Security

The combination of security measures makes the difference!

Happy Thanksgiving!

The new first line of defence?

22 November 2014

In his latest post at ComputerWeekly.com Warwick Ashford reviews the CyberArk report ‘Exploits of Privileged Accounts Shift the Front Lines of Security’. His post is absolutely worth reading.

‘“One of the reasons for this is smaller, less well-defended organisations have become a prime target for attackers who are ultimately aiming at larger partners in the supply chain,” said Mokady.’

That’s definitely true. Perhaps you remember the Home Depot data breach? This breach was caused by stolen credentials of a third-party vendor.

‘“Securing privileged accounts represents the new first line of defence in the ongoing cyber battle companies are fighting,” he added.’

Very well said. But what really confuses me is that Udi Mokady talks about the new first line of defense. 

The majority of the big data breaches have been caused by stolen credentials. With Two-Factor Authentication (TFA) most of these breaches could have been prevented.

It’s definitely very important to secure privileged accounts. With admin privileges it is very easy to change log settings or to tamper with audit records. But it is definitely not enough to secure only privileged accounts, because even with standard user privileges you may have access to business-critical data to do your job.

Let me give you an example. Oracle Transparent Data Encryption, SQL*Net encryption and integrity checking are easy-to-implement measures to secure an Oracle database. They prevent man-in-the-middle attacks, eavesdropping on the data traffic and direct access to the database files.

Sounds pretty secure, doesn’t it? Unfortunately it isn’t. Even an unprivileged user, and even more a malicious insider with stolen credentials, is able to install an Oracle Instant Client and use Excel and ODBC to create a copy of all data he can access with his standard user rights.

With TFA enabled, at least on all business-critical systems and for all users, the probability of such an event is dramatically reduced.

Securing accounts with TFA is the very first line of defense.
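
For the curious: the one-time passwords behind most TFA tokens are not magic. A minimal sketch of the HOTP algorithm from RFC 4226, for illustration only, not production code:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One-time password from a shared secret and a moving counter (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vector: secret "12345678901234567890", counter 0
print(hotp(b"12345678901234567890", 0))  # -> 755224
```

TOTP, as used by most smartphone authenticator apps, is the same construction with the counter derived from the current time.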

In addition you should decide about granting privileged access on a per-task basis. For business-critical infrastructure and applications an administrator should receive an authorization and a one-time password for just one task. At logoff the authorizations are dropped. In the best case the admin group of a Windows server is empty. Only the local admin can always log on, but his password is kept in a safe place.

The authorization process should be implemented with strict application of the separation-of-duties principle, and the permissions should be granted with strict application of the principle of least privilege. Important: the employees who grant authorizations and privileges should never be able to grant any privileges to themselves.

Moreover, the consistent application of the principle of least privilege, even for standard users and processes, will significantly reduce the attack surface of your company.

Nothing really new, just the same old story.

Glacier near Grächen, Switzerland

Have a good weekend.

Risk management keeps the attack surface on an acceptable level

20 November 2014

In her post ‘Experts: Cyber risk management requires teamwork, preparation’ Sharon Shea reports on the 2014 Advanced Cyber Security Center conference.

“‘You are not going to eliminate the risk of attacks, you are going to manage the risk’ said Michael Chertoff, former secretary of the U.S. Department of Homeland Security and executive chairman and co-founder of the Chertoff Group, during his keynote presentation at the 2014 Advanced Cyber Security Center conference.”

Well said, I fully agree. The four ways to treat risks are to transfer, eliminate, accept, or mitigate them.

Eliminating a risk is more of academic value. Eliminating the risk means eliminating the function and thus, in the worst case, the business.

The fifth option, to ignore it, is not acceptable for an enterprise, because the hours until you are out of business could be counted on the fingers of one hand.

Risk management always starts with identifying and evaluating the risk. This is the responsibility of the business groups, with support from IT. Once you have evaluated the risk you can start managing it. Managing the risk means bringing it to an acceptable level, e.g. by applying mitigation measures or by accepting it.
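
The evaluation step can be sketched as a simple expected-loss calculation. A toy model; all numbers are invented for illustration:

```python
# Toy risk evaluation: expected annual loss = probability * impact.
# All figures below are invented for illustration only.

def annual_risk(probability: float, impact: float) -> float:
    """Expected annual loss of a single risk scenario."""
    return probability * impact

ACCEPTABLE = 50_000  # hypothetical risk appetite per year

# An unhardened internal server, before and after mitigation measures.
before = annual_risk(0.25, 500_000)  # 125000.0 -> must be managed
after = annual_risk(0.05, 500_000)   # ~25000   -> acceptable

print(before > ACCEPTABLE, after <= ACCEPTABLE)  # -> True True
```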

For risk evaluation it’s very important to treat attacks by malicious insiders with the same probability as attacks on servers at the perimeter of your network. If this assumption is taken into account during risk evaluation you will come to a balanced approach.

The concept of the attack surface is perfectly suited to making this clear. Even a single unhardened server operated inside your network increases the attack surface of your IT system dramatically, because it could be used by an attacker as a gateway into your system.

Risk management should always keep the overall attack surface of a company on an acceptable level.

Minimize your attack surface, and have a good day.

Microsoft Publishes Critical Vulnerability MS14-066 in Windows SSL Library

15 November 2014

On November 11, 2014 Microsoft published in Security Bulletin MS14-066 a vulnerability in the Microsoft Secure Channel (Schannel) security package in Windows. The vulnerability is rated Critical; the CVSS base score is 10.0 (high).

The good news is: This vulnerability was discovered by Microsoft itself during a proactive security assessment.

The bad news is: since nearly all Microsoft products that use SSL use the Schannel package, the impact of this vulnerability might be greater than that of the Heartbleed SSL bug.

Although Microsoft published a patch last Tuesday, the November patch day, it will take a long time to patch the possibly thousands of systems in a company. But the guys on the dark side will not sleep. It is very likely that exploits will be available on the black market within the next few days.

Thus patching must be addressed strategically. Hopefully you have an up-to-date inventory of your systems. I would start with systems that are exposed to the internet, e.g. external mail servers or web servers. In parallel I would patch all laptops and tablet computers that leave the network. Although it’s not very likely that they listen for inbound SSL connections, you should check and patch them. In the next step I would patch all internal servers and the remaining internal clients.
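
The patch order described above can be expressed as a small triage sort. A sketch with an invented inventory and priority scheme:

```python
# Toy patch triage mirroring the strategy above: internet-facing systems
# first, mobile devices next, then internal servers, then internal clients.
# Hostnames, roles and priorities are invented for illustration.

PRIORITY = {
    "internet-facing": 0,
    "mobile": 1,
    "internal-server": 2,
    "internal-client": 3,
}

inventory = [
    {"host": "fileserver01", "role": "internal-server"},
    {"host": "mail-gw", "role": "internet-facing"},
    {"host": "laptop-042", "role": "mobile"},
    {"host": "desk-117", "role": "internal-client"},
    {"host": "www1", "role": "internet-facing"},
]

patch_order = [s["host"] for s in sorted(inventory, key=lambda s: PRIORITY[s["role"]])]
print(patch_order)  # -> ['mail-gw', 'www1', 'laptop-042', 'fileserver01', 'desk-117']
```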

Have a nice weekend!

Rion-Antirion Bridge, 38°19'11.0"N 21°46'25.2"E

The Home Depot Story

13 November 2014

After two months of investigation the reason for the Home Depot data breach appears to be clear: cyber criminals used stolen credentials from a third-party vendor to enter the Home Depot network. A report by Mike Davin from November 7, 2014 provides some more details: ‘The hackers then acquired elevated rights that allowed them to navigate portions of Home Depot’s network and to deploy “unique, custom-built malware” on its self-checkout systems in the U.S. and Canada.’

It’s a complete mystery to me why companies do not secure access to business-critical data with Two-Factor Authentication. TFA would severely hamper such data breaches. I am not overly surprised that the attackers could acquire elevated privileges.

But what really worries me is that the attackers were able to deploy software to the company’s point-of-sale devices. It is quite obvious that the software deployment process was not sufficiently secured and could easily be tampered with.

From my point of view, Home Depot’s IT should invest some time in threat modelling of the software deployment process to avoid such incidents in the future. In particular, strict enforcement of the separation-of-duties principle will prevent unplanned deployment of critical software.

Have a good day!

The average employee stores 2,037 files in the cloud, a new study says

8 November 2014

The report ‘Research shows enterprises leaking shadow data to the cloud’ by Rob Wright is absolutely worth reading:

‘A new study by cloud security startup Elastica shows that enterprise employees are unknowingly leaking sensitive data through cloud apps and services.’

The results from a review of about 100 million files from approximately 100 different companies are really alarming:

‘185 files on average are shadow data — data that is uploaded to cloud services such as Dropbox or Google Drive — which has been broadly shared without approval via cloud services with either the entire enterprise or people outside of the company. Worse, 20% of those broadly shared files contain compliance data, with 56% of that compliance data being personally identifiable information such as social security numbers, 29% being personal health information, and 15% being payment card information.’

But the assumption that employees share sensitive information unknowingly is, in my opinion, unrealistic. Employees use Dropbox or SkyDrive to simplify their daily work!

Although BYOD has been a hot topic for years, most businesses are not yet aware of the problem. Even if a company has not started a BYOD program, or has deliberately opted against one, the existing policies have to be updated and communicated to all employees. If the company has decided against a BYOD program it is very important to communicate the reasons for this decision to all employees.

IT groups must implement appropriate measures to support the business strategy regarding BYOD, e.g. block Dropbox or SkyDrive, and provide effective and easy-to-use means for communication with external partners.

Enjoy the colors …

Evening Colors, 49°35'48.1"N 6°37'05.8"E

to find some peace of mind for reading the white paper.