Tuesday, March 16, 2010

Imagine a World where passwords were useless

Recently, in the press:
March 12, The Register – (International) SSD tools crack passwords 100 times faster.
Password-cracking tools optimised to work with SSDs have achieved speeds up to 100 times quicker than previously possible. After optimising its rainbow tables of password hashes to make use of SSDs, Swiss security firm Objectif Securite was able to crack 14-character WinXP passwords with special characters in just 5.3 seconds. An Objectif Securite spokesman told Heise Security that the result was 100 times faster than was possible with their old 8GB rainbow tables for XP hashes. The exercise illustrated that the speed of hard discs, rather than processor speed, was the main bottleneck in password cracking based on password hash lookups. Objectif's test rig featured an ageing Athlon 64 X2 4400+ with an SSD and optimised tables containing 80GB of password hashes. The system supports a brute-force attack of 300 billion passwords per second and is claimed to be 500 times faster than a password cracker from Russian firm Elcomsoft that takes advantage of the number-crunching prowess of an NVIDIA GPU.
(By the way, SSD stands for Solid-State Drive -- a faster way to store data)

An SSD is much faster than a hard drive but orders of magnitude slower than fast RAM, so if these folks ran the same test with the Rainbow Tables in local RAM they'd be cracking the same passwords in 0.0053 seconds (unless this moved the performance bottleneck to the CPU).
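
A quick back-of-the-envelope check shows why these numbers are plausible. The 300 billion guesses per second figure comes from the article above; the rest relies on a well-known property of Windows LM hashing (passwords are split into two independent, case-insensitive 7-character halves), which collapses the search space dramatically compared to a true 14-character brute force:

```python
# Back-of-the-envelope: why XP passwords fall so fast.
# 300e9 guesses/sec is the figure from the article; the LM hash behavior
# (two independent 7-char halves, case-insensitive) is a well-known
# property of Windows LM hashing.

GUESSES_PER_SEC = 300e9
SECONDS_PER_YEAR = 365 * 24 * 3600

# A true brute force of 14 characters over ~95 printable ASCII characters:
full_space = 95 ** 14
print(f"full 14-char space: {full_space / GUESSES_PER_SEC / SECONDS_PER_YEAR:.1e} years")

# LM hashing splits the password into two 7-char halves and uppercases
# them (roughly 69 usable characters), and each half is attacked
# independently:
half_space = 69 ** 7
print(f"each LM half: {half_space / GUESSES_PER_SEC:.1f} seconds")
```

Half a billion years for the full space, tens of seconds per LM half: precomputed rainbow tables shrink the per-password lookup even further, which is why storage speed, not CPU, becomes the bottleneck.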

If you want a solution, I recommend something like this.

Thursday, February 25, 2010

Sometimes you're already in the cloud

Federal Trade Commission links wide data breach to file sharing

The Federal Trade Commission (FTC) said Monday that it has uncovered widespread data breaches at companies, schools and local governments whose employees are swapping music, software and movie files over the Internet.

http://www.washingtonpost.com/wp-dyn/content/article/2010/02/22/AR2010022204889.html?hpid=sec-tech


Peer-to-Peer (P2P) file sharing was perhaps the second killer app for the Internet (after Mosaic) because of its ease of use and utility for sharing free music and porn.

P2P is very easy to use: after installing the application, select the files you want to share, then start browsing and downloading files from other users. P2P networks comprise millions, and often tens of millions, of users -- making these applications the largest compute and storage networks in the world.

There are two big risks with P2P:
  1. Oversharing -- incorrectly configuring the P2P application to share all of your files
  2. Compromise -- P2P is often leveraged to download malware to unsuspecting users
The FTC warning described in the Post article arises from the problem of oversharing. For businesses, the problem is that the more P2P users you have, the more likely it is that one or more of them are sharing confidential information -- without realizing it.

Assuring the secure configuration of P2P file sharing across more than a handful of users is very, very difficult; for a large enterprise it is infeasible. In an enterprise of any size, security depends on the detection of P2P and either blocking all use or limiting use to selected systems that are subject to stringent access and configuration controls.

Don't be fooled into thinking that your firewalls protect you from this threat. Most P2P applications have been designed to bypass firewalls. P2P detection and control requires the deployment of effective Intrusion Detection (IDS) or Intrusion Protection (IPS) systems.

IPS systems will give you the capability of discriminating between types of P2P applications, selecting a response, and protecting your data.
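
To make the discrimination idea concrete, here is a toy illustration of the signature matching an IDS/IPS performs on packet payloads. The two signatures are real protocol preambles (the BitTorrent handshake and the Gnutella connect string); everything else is a deliberately simplified sketch -- production systems such as Snort use far richer rule sets and stateful analysis:

```python
# Toy sketch of signature-based P2P detection, the core technique an
# IDS/IPS applies. The byte signatures are real protocol preambles;
# the surrounding code is simplified for illustration.

SIGNATURES = {
    # BitTorrent handshake: length byte 19 followed by the protocol name
    b"\x13BitTorrent protocol": "BitTorrent",
    # Gnutella connection request
    b"GNUTELLA CONNECT/": "Gnutella",
}

def classify_payload(payload):
    """Return the P2P protocol name if a known signature matches, else None."""
    for sig, name in SIGNATURES.items():
        if payload.startswith(sig):
            return name
    return None

print(classify_payload(b"\x13BitTorrent protocol" + b"\x00" * 8))  # BitTorrent
print(classify_payload(b"GET / HTTP/1.1\r\n"))                     # None
```

Because the match is on payload content rather than port number, this approach still works when a P2P client hops to port 80 or 443 to slip past a firewall.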

Michael

Wednesday, February 24, 2010

You Should Use Profiling

Thanks to Headline T-shirts for this amusing image.

Torn from the headline "Chinese school linked to Google attacks also linked to '01 attacks on White House site" comes the thought that only idiots fail to profile threats.

For network security this is a simple matter:
  1. Know your services and
  2. Know your users

The first item requires that you self-check with port scans, vulnerability scans, and traffic analysis to understand your networked applications and your potential vulnerabilities. You should always assume there are defects you do not know about -- exploits against these are called zero-day attacks. Always patch everything you can; what you can't patch will require even more protection. Between zero-day worries and the things you can't patch, you'll need intrusion detection and prevention.
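
A minimal self-check port scan of the kind described above can be sketched in a few lines (real audits use dedicated tools like nmap plus vulnerability scanners; this only finds listening TCP services):

```python
# Minimal TCP connect scan for self-checking your own hosts.
# Only scan machines you are authorized to test.
import socket

def scan(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

# Check a few well-known service ports on the local machine:
print(scan("127.0.0.1", [22, 80, 443, 3306]))
```

Anything that shows up open and isn't a service you knowingly run is exactly the kind of unknown this self-check is meant to surface.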

The second item should be incorporated in your site user statistics and operations processes. This means understanding, on a statistical and individual basis, who your users are and where and how they access your network applications. Once you have a grasp of these behaviors it becomes very simple to develop two key profiles: one that describes how authorized users behave and, conversely, one that describes how unauthorized users behave. For example, an Austin, Texas-based music store will typically have many local customers and a few others from around Texas, or perhaps more remote places like Nashville, New York, or Los Angeles. Once you have the geographic profile of your customers it becomes very useful to think about the places you don't have customers: South Korea, China, Eastern Europe, Brazil -- by extension, everywhere except North America. Obviously, the same store in Shanghai will have a different customer profile.
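
Building the geographic profile can be as simple as counting countries in your access logs. The sketch below uses made-up log entries; a real deployment would resolve source IPs to countries with a GeoIP database and feed the result from your web server logs:

```python
# Sketch: derive a geographic customer profile from access logs.
# The (user, country) entries are hypothetical; real systems resolve
# IPs to countries with a GeoIP database.
from collections import Counter

access_log = [
    ("alice", "US"), ("bob", "US"), ("carol", "US"), ("dan", "US"),
    ("erin", "US"), ("frank", "US"), ("grace", "CA"), ("heidi", "CA"),
    ("ivan", "US"), ("judy", "US"), ("mallory", "CN"),
]

counts = Counter(country for _, country in access_log)
total = sum(counts.values())

# Countries contributing less than 10% of traffic fall outside the profile:
profile = {c for c, n in counts.items() if n / total >= 0.10}
outliers = [(user, c) for user, c in access_log if c not in profile]

print("profile:", sorted(profile))
print("outside profile:", outliers)
```

The 10% cutoff is arbitrary; pick a threshold that matches your traffic volume, and review the outlier list rather than acting on it blindly.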

Now comes the important part.

USE THE PROFILE.

If folks from Lilliput never visit your site, treat their traffic with care. Blocking it is best, but if you can't bring yourself to block them, then at least redirect Lilliputian visitors to an "interest" form, gather some marketing information, and put them on a whitelist. That covers people from Lilliput visiting you; even less likely is authorized traffic from your network going to Lilliput. (And really, Lilliput is just a placeholder for real threat countries: China, for example.) IDS and IPS exist for a reason, and so do firewalls -- make sure you are filtering, blocking, or at least detecting traffic to and from the specific countries and regions of the world you are not doing business with.
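
The allow / redirect / block decision described above can be sketched as a simple policy check. The country codes, IP addresses, and whitelist here are hypothetical placeholders; in practice this logic would live in your web tier or firewall rules:

```python
# Sketch: putting the geographic profile to work. All addresses and
# country codes below are hypothetical placeholders.

PROFILE = {"US", "CA"}          # countries your customers actually come from
WHITELIST = {"198.51.100.7"}    # visitors vetted via the "interest" form

def decide(ip, country):
    """Allow in-profile or whitelisted visitors; send the rest to the form."""
    if ip in WHITELIST or country in PROFILE:
        return "allow"
    return "redirect-to-interest-form"  # or "block", if you can bring yourself to

print(decide("203.0.113.5", "CN"))   # redirect-to-interest-form
print(decide("198.51.100.7", "CN"))  # allow (whitelisted)
print(decide("192.0.2.10", "US"))    # allow
```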

Friday, January 22, 2010

The Cloud is Attacking You

Collected from US-CERT and other sources:

Microsoft has released out-of-band Security Bulletin MS10-002
(http://www.microsoft.com/technet/security/bulletin/MS10-002.mspx) to resolve seven privately reported vulnerabilities and one publicly disclosed vulnerability. This update includes a resolution for a recently reported zero-day vulnerability in Internet Explorer (IE), which is detailed in Microsoft Security Advisory 979352. (http://www.microsoft.com/technet/security/advisory/979352.mspx)

This vulnerability may have been used in the recent attacks on Google and other organizations. Knowledge of this attack is now widely known and the broader criminal community is now leveraging this exploit.

Organizations should review Microsoft Security Bulletin MS10-002 and apply the patches as soon as possible. US-CERT recommends that the patches first be tested within your organization's enterprise and then deployed in an expedited manner. In addition to patching, the recommendations below may be leveraged to better position your organization to withstand future serious vulnerabilities.

Enable Data Execution Prevention (DEP) in both software and hardware if supported (see Microsoft KB 912923). This may provide resilience against future vulnerabilities. (http://support.microsoft.com/kb/912923)

Be proactive: identify internal servers that should generally be trusted and place them in Internet Explorer's "Trusted Sites" list. Doing so may ease the impact to your organization should a future reactive measure require setting the "Internet Zone" to a "High" security setting. (See Microsoft KB 174360 -- http://support.microsoft.com/kb/174360)

Monday, December 21, 2009

PCI compliance in the cloud (Part B)

First published here on 12/14/2009:

In Part A, I discussed the functional requirements for a virtual firewall. Now let's take a look at the technologies required to make this work.

Traffic segmentation

Firewalls segment traffic. That's obvious, but think about this in the cloud. For this to work, there must be a method to assure that all traffic to/from a tenant is available for inspection and the application of access controls by the firewall. This means the virtualization host must support at least one of the following:

  1. Routing traffic to/from a tenant system through the virtual firewall at the network layer; this is how "bump-in-the-wire" devices work. This is a poor solution in virtual environments.
  2. Routing traffic to/from a tenant system through the virtual firewall at the hypervisor layer. This is a more efficient technique because it reduces latency and the number of CPU cycles needed to inspect packets.
  3. Other novel techniques enabled by virtualization -- Magic. I call this "Magic" because it is now possible to create intelligence around which packets need to be inspected or filtered by the firewall.

Configuration management

Virtual firewalls must include configuration management capabilities. Why? Because it is much easier to reconfigure ports and networks in the virtual environment, or even configure a virtual machine to bridge networks. This is a tricky situation in the cloud because this capability requires visibility and integration into the cloud provider’s management framework.

Dynamic policy enforcement

Virtual machines migrate. This requires policy enforcement capabilities that are independent of location and layer 2 and 3 connectivity. Segmentation and access controls must transparently follow virtual machines as they migrate or are copied between virtualization hosts, data centers, or cloud providers.
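
The key idea above, keying policy to a virtual machine's identity rather than its network location, can be sketched as follows. The VM identifiers, zone names, and port sets are all hypothetical; real products attach policy through the hypervisor's management APIs:

```python
# Sketch: location-independent policy enforcement. Rules are keyed to a
# VM's identity (a UUID-like ID here) rather than its IP address or
# network segment, so policy follows the VM when it migrates.
# All IDs, zones, and ports below are hypothetical.

policies = {
    "vm-5f3a": {"zone": "cardholder-data", "allow_inbound": {443}},
    "vm-9c21": {"zone": "web-tier", "allow_inbound": {80, 443}},
}

def allowed(vm_id, port):
    """Check an inbound connection against the VM's policy, wherever it runs."""
    policy = policies.get(vm_id)
    return policy is not None and port in policy["allow_inbound"]

# The same answer before and after migration -- no IP address in the lookup:
print(allowed("vm-5f3a", 443))  # True
print(allowed("vm-5f3a", 80))   # False
```

Because the lookup never touches an IP address or VLAN, re-addressing the VM after a migration cannot silently change which rules apply to it.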


Cloud management

Unless cloud providers wish to assume all of the responsibility for the correct configuration of their customers' virtual firewalls, the provider must give customers control of their firewall policies while at the same time preventing one customer from inappropriately blocking traffic to another.

Can anyone name a cloud provider who makes this all possible?

Michael

Catbird

PCI compliance in the cloud (Part A)

First posted here on 12/07/2009:

The new cloud (or, if you prefer, hosted computing services, or IaaS) rests on top of virtualization. If we're going to take the cloud seriously, it will have to be compliant. One of the more stringent compliance frameworks is PCI DSS. Let's look at requirement one and start building a solution for the cloud.

PCI DSS 1.2.1, test procedure 1.1: Obtain and inspect the firewall and router configuration standards and other documentation specified below to verify that standards are complete.


Deploying virtual firewalls is insufficient, as the virtual firewall must share the support structure with the virtual machines, virtual switches, and hypervisor. Technical controls must also be deployed to validate the configuration of a virtual firewall and to detect and alert if tampering occurs.


Physical firewalls are insufficient unless every virtual machine is on a unique VLAN, VLAN hopping is mitigated, and all traffic flows through the physical firewall. Further, virtual machine mobility must be constrained, and virtual machines must be subjected to the same firewall policy regardless of physical location or layer 2 connectivity.


While sufficient, the physical solution may be impractical due to the constraints it places on deployment, consolidation, and high availability.


The optimal solution will be one that allows deployment of a best practice virtualization architecture for security, integrity, and availability, which also maximizes consolidation and the virtualization return on investment.
This requires a virtualized firewall deployment with the following characteristics:

  1. Assurance of integrity for the security management framework
  2. Enforcement of separation of duties for server, network, and security operations
  3. Enforcement of least privilege
  4. Dynamic network segmentation that is independent of location, IP address, or layer 2 connectivity
  5. Integrated auditing and configuration management for virtualization layers



If that sounds like more than a firewall, you’re right.

Michael

Thursday, August 13, 2009

Missing Russian Ship

Right out of a Tom Clancy novel, a 4,000-tonne cargo ship is missing. Reportedly, this ship had nothing worth hijacking. Not a lot of facts are available, but there are some interesting bits:
  1. 10 armed men boarded the ship about a week before it disappeared. They left 12 hours later.
  2. The ship spent two weeks in Kaliningrad before beginning its voyage.
  3. The Russians are searching for the ship with all available resources.
As reported here, the Russians have battlefield nuclear weapons in Kaliningrad.

I wish the Russians good luck in their search and I hope the NATO forces provide all available resources to assist.