
Almost a decade ago, the Computing Research Association published Four Grand Challenges in Trustworthy Computing. Working in a rapidly evolving digital field, it is easy to think everything we see is new, especially when it comes to digital crime. On a technological level this may be true, but if we look at the higher-level concepts, has anything changed in nine years, and what progress have we made?

In the introduction to the challenges, references are made to "increasingly portable systems in every aspect of life"; cyber defense and cyber war; denial-of-service (DoS) threats to power, transportation, and communications systems (critical infrastructure); insider attacks; and "loss of privacy; alteration of critical data; and new forms of theft and fraud on an unprecedented scale". Each of these technology trends and threat areas is still relevant today, and most are gaining increasing awareness from the general public.

The group's overall goal was "to create an alternative future in which spam, viruses and worms... have been eliminated. In this vision of the future individuals would control their own privacy and could count on the infrastructure to deliver uninterrupted services... In such a world, policy and technology fit together in a rational way, balancing human needs with regulation and law enforcement."
"In short, it would be a world in which information technology could be trusted."
To this end, the four identified grand challenges of trustworthy computing are:
  1. Develop new approaches for eradicating widespread, epidemic attacks in cyberspace.
  2. Ensure that new, critical systems currently on the drawing board are immune from destructive attack.
  3. Provide tools to decision-makers in government and industry to guide future investment in information security.
  4. Design new computing systems so that the security and privacy aspects of those systems are understandable and controllable by the average user.
This single will focus on Challenge 1; the other challenges will be covered in later singles.

Challenge 1: Eliminate Epidemic Attacks by 2014
Epidemic attacks, or "Cyber Epidemics", are in this case categorized as viruses and worms, spam, and Distributed Denial of Service (DDoS) attacks.

Suggested approaches to achieve this goal are summarized as follows:
  • Immune System for Networks - respond to and disable viruses and worms through dynamically managed connectivity (see the sketch after this list)
  • Composability - two systems operating together will not introduce vulnerabilities that neither has individually
  • Knowledge Confinement - partition information so an attacker never has enough knowledge to propagate through the network
  • Malice Tolerance - continue operating in spite of arbitrarily destructive behavior by a minority of system components
  • Trusted Hardware - tie software and service guarantees to the physical security of hardware devices
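To make the first idea more concrete, here is a minimal Python sketch of a network "immune system" (the class, threshold, and addresses are hypothetical illustrations, not a real product): a monitor counts how many distinct destinations each host contacts in a time window and dynamically cuts off hosts whose fan-out looks worm-like.

```python
from collections import defaultdict

FANOUT_THRESHOLD = 20  # distinct destinations per window; hypothetical tuning value

class NetworkImmuneSystem:
    """Toy 'immune system': quarantine hosts showing worm-like fan-out."""

    def __init__(self):
        self.destinations = defaultdict(set)  # host -> destinations seen this window
        self.quarantined = set()

    def observe(self, src, dst):
        """Record one connection attempt; return False if it should be blocked."""
        if src in self.quarantined:
            return False  # dynamically managed connectivity: host is cut off
        self.destinations[src].add(dst)
        if len(self.destinations[src]) > FANOUT_THRESHOLD:
            self.quarantined.add(src)  # worm-like scanning: disable connectivity
            return False
        return True

    def end_window(self):
        """Reset per-window counters."""
        self.destinations.clear()

# Usage: a scanning host is quarantined once it exceeds the fan-out threshold.
ims = NetworkImmuneSystem()
for target in range(30):
    ims.observe("10.0.0.5", f"10.0.1.{target}")
print("10.0.0.5 quarantined:", "10.0.0.5" in ims.quarantined)
```

Real implementations (network access control, intrusion prevention systems) are far more sophisticated, but the principle of dynamically managed connectivity is the same.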
With the 9th of July 2012 recently past, the first attack that comes to mind is DNSChanger, whose server shutdown on that date effectively denied service to still-infected machines. And with the rise of cyber activism and terrorism, DDoS is a relatively common issue for businesses and government entities. Looking at the Q1 2012 reports from McAfee and Trend Micro, viruses and malware are at all-time highs across all platforms. Spam is also still increasing, with Norton's Cyber Crime Index projecting spam at about 68% of all sent email traffic as total email traffic continues to grow.

Some advancements have been made in the suggested areas. Malice Tolerance, for example, has improved as a natural consequence of massively distributed computing. Trusted hardware became a big topic when Windows Vista used the Trusted Platform Module (TPM) with BitLocker for disk encryption. TPM is still used, mostly in businesses, but it never really caught on with the public, seemingly because of privacy concerns and a general lack of interest.
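The intuition behind Malice Tolerance is easy to demonstrate with a toy example. In the sketch below (hypothetical, not any specific system's protocol), a client accepts a computed value only when a strict majority of replicas agree, so an arbitrarily malicious minority cannot corrupt the result.

```python
from collections import Counter

def majority_result(replica_results):
    """Accept a value only if a strict majority of replicas agree on it.

    A minority of arbitrarily malicious replicas can return garbage,
    but they cannot change the accepted result.
    """
    value, votes = Counter(replica_results).most_common(1)[0]
    if votes > len(replica_results) // 2:
        return value
    raise RuntimeError("no majority: too many faulty replicas")

# Three honest replicas agree; one malicious replica lies.
print(majority_result([42, 42, 42, 9999]))  # -> 42
```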

Immune System for Networks has advanced, but there are limitations. Security management platforms have to account for activity from many different sources, where malicious actions may not be known, or may not even look suspicious. For businesses, targeted attacks using social engineering are increasing, and the weak link continues to be a lack of user awareness, which allows many attacks to succeed. Current security systems are not robust enough to detect and deal with all types of incidents introduced by users while still allowing the flexibility and access those users require. Further, many organizations have not given proper consideration, or devoted enough resources, to their cyber security, and are being targeted because of it [Irish Times][Infosec Island][CBS News].

As for two systems operating together without introducing vulnerabilities that neither has individually, I immediately think of application programming interfaces (APIs). APIs have become a common way for software components to communicate with each other, but their use introduces additional risks on both the client and server sides [Dark Reading]. The Cloud Security Alliance even listed "Insecure Interfaces and APIs" as one of the top threats to Cloud computing (v1.0).
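To illustrate the server-side risks, here is a minimal sketch using Flask; the endpoint, token store, and size limit are all hypothetical. It shows the authentication and input-validation checks that insecure APIs commonly omit, since a public API cannot assume it is only called by a well-behaved client.

```python
# Minimal sketch of defensive API checks using Flask; the endpoint,
# token store, and limits below are hypothetical illustrations.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
VALID_TOKENS = {"example-token"}  # stand-in for a real credential store

@app.route("/api/v1/notes", methods=["POST"])
def create_note():
    # Authenticate every call; never assume only your own client uses the API.
    auth = request.headers.get("Authorization", "")
    if not auth.startswith("Bearer ") or auth[len("Bearer "):] not in VALID_TOKENS:
        abort(401)

    # Validate input on the server; client-side checks can always be bypassed.
    body = request.get_json(silent=True)
    if not isinstance(body, dict) or not isinstance(body.get("text"), str):
        abort(400)
    if len(body["text"]) > 1000:  # reject oversized payloads
        abort(413)

    return jsonify({"status": "created"}), 201
```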

Finally, there is knowledge confinement: making sure an attacker never has enough information to propagate through the network. Unfortunately, this reminds me of Stuxnet. Stuxnet propagates across Windows machines via the network and USB drives, then checks for Siemens software. If the target software is known to run on, or interface with, Windows machines, the attacker just needs to exploit a vulnerability (or three) that is likely to exist in the system. Very little knowledge is needed to propagate through the network if connectivity exists and both systems are vulnerable to the same attack. Knowledge confinement could potentially be achieved with non-standard configurations, but, then again, Verizon claims that in 2011 "97% of breaches were avoidable through simple or intermediate controls".
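A toy simulation shows why knowledge confinement matters. In the sketch below (the hosts, segments, and credentials are all hypothetical), a worm can only cross into a network segment whose credentials it already knows; when every segment shares the same knowledge, one foothold compromises everything.

```python
# Toy worm-spread simulation: a host is compromised only if the worm's
# "knowledge" (here, a set of known segment credentials) covers that
# host's segment. All hosts, edges, and credentials are hypothetical.

def spread(hosts, edges, start, known_credentials):
    """Worm propagation over the network, constrained by credential knowledge."""
    infected = {start}
    frontier = [start]
    while frontier:
        current = frontier.pop()
        for neighbor in edges.get(current, []):
            # Knowledge confinement: segments use distinct credentials,
            # so the worm can only cross into segments it has keys for.
            if neighbor not in infected and hosts[neighbor] in known_credentials:
                infected.add(neighbor)
                frontier.append(neighbor)
    return infected

# hosts map host -> segment credential; edges give network reachability.
hosts = {"a": "office", "b": "office", "c": "plant", "d": "plant"}
edges = {"a": ["b", "c"], "b": ["a"], "c": ["a", "d"], "d": ["c"]}

# Monoculture: one shared credential everywhere -> full compromise.
print(spread(hosts, edges, "a", {"office", "plant"}))  # {'a','b','c','d'}
# Confinement: worm only knows the office credential -> plant stays safe.
print(spread(hosts, edges, "a", {"office"}))           # {'a','b'}
```

In Stuxnet's case, the shared Windows vulnerabilities effectively played the role of a credential common to every segment.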

So, looking at Challenge 1 as it was defined in 2003, eliminating cyber epidemics by 2014 seems unrealistic at this stage. While some of the suggested approaches have been developed, the application of these ideas in the practices of people, businesses, and even governments has not come to fruition on a large scale. This does not mean we are less secure. WhiteHat Security claims that in Q1 2012 there were fewer web vulnerabilities, and that those identified were fixed faster than in previous years. But, like Verizon, they also found that basic mitigation techniques, such as application firewalls, could have reduced the risk from 71% of all custom web application vulnerabilities.

Until everyone begins to understand cyber security and their role in it (and takes it seriously), the challenge will not be met. Will the Challenge 1 recommendations ever completely eliminate cyber epidemics? I don't think so. They can most definitely help, just like implementing basic security measures, but the Internet is not a closed system, and it only takes one weak link.
