Revisiting the Four Grand Challenges in Trustworthy Computing: Challenge 1

Almost a decade ago, the Computing Research Association published Four Grand Challenges in Trustworthy Computing. Working in a rapidly evolving digital field, it is easy to think everything we see is new, especially when it comes to digital crime. On a technological level this may be true, but if we look at higher-level concepts, has anything changed in nine years, and what progress have we made?

In the introduction to the challenges, references are made to “increasingly portable systems in every aspect of life”; cyber defense/war; denial-of-service (DoS) threats to power, transportation, and communications systems (critical infrastructure); insider attacks; “loss of privacy; alteration of critical data; and new forms of theft and fraud on an unprecedented scale”. Each of these technology trends and identified threat areas is still relevant today, and most are gaining increasing awareness from the general public.

The group’s overall goal was “to create an alternative future in which spam, viruses and worms… have been eliminated. In this vision of the future individuals would control their own privacy and could count on the infrastructure to deliver uninterrupted services… In such a world, policy and technology fit together in a rational way, balancing human needs with regulation and law enforcement.”

<blockquote>“In short, it would be a world in which information technology could be trusted.”</blockquote>

To this end, the four identified grand challenges of trustworthy computing are:
<ol><li>Develop new approaches for eradicating widespread, epidemic attacks in cyberspace.</li><li>Ensure that new, critical systems currently on the drawing board are immune from destructive attack.</li><li>Provide tools to decision-makers in government and industry to guide future investment in information security.</li><li>Design new computing systems so that the security and privacy aspects of those systems are understandable and controllable by the average user.</li></ol>
This post will focus on Challenge 1. The other challenges will be looked at in later posts.

Challenge 1: Eliminate Epidemic Attacks by 2014
Epidemic attacks, or “Cyber Epidemics”, are in this case broadly categorized as viruses and worms, spam, and Distributed Denial of Service (DDoS) attacks.

Suggested approaches to achieve this goal are summarized as follows:
<ul><li>Immune System for Networks - respond to and disable viruses and worms through dynamically managed connectivity</li><li>Composability - two systems operating together will not introduce vulnerabilities that neither has individually</li><li>Knowledge Confinement - partitioning information so an attacker never has enough knowledge to propagate through the network</li><li>Malice Tolerance - continue operating in spite of arbitrarily destructive behavior by a minority of system components (a toy sketch follows below)</li><li>Trusted Hardware - tie software and service guarantees to the physical security of hardware devices</li></ul>
With the 9th of July 2012 recently past, the first DoS-related incident that comes to mind is DNSChanger. And with the increase of cyber activism/terrorism, DDoS is a relatively common issue for businesses and government entities. Looking at the Q1 2012 reports from McAfee and Trend Micro, viruses and malware are at all-time highs for all platforms. Spam is also still increasing, with Norton’s Cyber Crime Index projecting spam to be about 68% of all sent email traffic as total email traffic continues to increase.
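Of the approaches above, Malice Tolerance is perhaps the easiest to illustrate, since it is essentially what Byzantine fault tolerance formalizes. A minimal Python sketch, with invented replica behaviors, showing how majority voting over replicated computation masks a destructive minority:

```python
# Toy illustration of Malice Tolerance: replicate a computation across nodes
# and take the majority answer, so a malicious minority cannot corrupt the
# result. Replica behaviors here are invented for the example.
from collections import Counter

def honest(x):
    return x * x

def malicious(x):
    return -1  # an arbitrarily wrong answer

replicas = [honest, honest, malicious, honest, malicious]

def tolerant_compute(x):
    """Return the majority result across replicas; works while a majority
    of replicas remain honest."""
    results = Counter(r(x) for r in replicas)
    value, votes = results.most_common(1)[0]
    if votes <= len(replicas) // 2:
        raise RuntimeError("no honest majority")
    return value

print(tolerant_compute(7))  # 49, despite two lying replicas
```

Real malice-tolerant systems (Byzantine fault tolerant replication, for example) need far more machinery than this, but the intuition is the same: no single compromised component decides the outcome.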

Some advancements have been made in the suggested areas. Malice Tolerance, for example, has improved as a natural consequence of massively distributed computing. Trusted hardware became a big topic when Windows Vista used the Trusted Platform Module (TPM) with BitLocker for disk encryption. TPM is still used, mostly in businesses, but did not really catch on with the public, seemingly because of privacy concerns and a general lack of interest.
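To make the Trusted Hardware idea concrete: TPM-backed schemes such as BitLocker “seal” a secret to measurements of the boot chain, so the secret is only released when the platform is in a known-good state. The sketch below mimics that idea purely in software; it is not a real TPM API, and the XOR-based sealing is a toy construction for illustration only:

```python
# A toy, software-only illustration of the *idea* behind TPM sealing: a secret
# is bound to measurements of the boot chain and only recoverable when those
# measurements match. A real TPM seals keys inside tamper-resistant hardware.
import hashlib

def measure(components):
    """Extend a running hash over each boot component, PCR-style."""
    pcr = b"\x00" * 32
    for c in components:
        pcr = hashlib.sha256(pcr + hashlib.sha256(c).digest()).digest()
    return pcr

def seal(secret, pcr):
    pad = hashlib.sha256(b"seal" + pcr).digest()
    return bytes(a ^ b for a, b in zip(secret.ljust(32, b"\x00"), pad))

def unseal(blob, pcr):
    pad = hashlib.sha256(b"seal" + pcr).digest()
    return bytes(a ^ b for a, b in zip(blob, pad)).rstrip(b"\x00")

boot = [b"firmware-v1", b"bootloader-v1", b"kernel-v1"]
blob = seal(b"disk-encryption-key", measure(boot))

print(unseal(blob, measure(boot)))      # b'disk-encryption-key'
tampered = [b"firmware-v1", b"evil-bootloader", b"kernel-v1"]
print(unseal(blob, measure(tampered)))  # unusable bytes: measurements differ
```

A real TPM simply refuses to unseal when the PCR values differ, rather than returning garbage, but the binding of software guarantees to hardware state is the same.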

Immune System for Networks has advanced, but there are limitations. Security management platforms have to account for activity from many different sources, where malicious actions may not be known, or may not even look suspicious. For businesses, targeted attacks using social engineering are increasing. The weak link continues to be a lack of awareness on the part of users, which allows many attacks to succeed. In these situations, current security systems are not robust enough to detect, and deal with, all types of incidents introduced by users while still allowing the flexibility and access that those users require. Further, many organizations have not given proper consideration, or devoted enough resources, to their cyber security, and are being targeted because of it [Irish Times][Infosec Island][CBS News].

As for Composability - two systems operating together without introducing vulnerabilities that neither has individually - I immediately think of application programming interfaces (APIs). APIs have become a common way for software components to communicate with each other, but many additional risks exist on both the client and server sides when APIs are used [Dark Reading]. The Cloud Security Alliance even listed “Insecure Interfaces and APIs” as one of the top threats to Cloud computing (v1.0).
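As one hedged illustration of the interface problem, consider a server that refuses to trust unauthenticated input. The endpoint shape, key handling, and message format below are hypothetical; the point is only that every call across a composed boundary gets verified before it is acted on:

```python
# A sketch of one basic API-hardening step: the server verifies an HMAC over
# the request body before acting on it, so composed systems do not blindly
# trust each other's input. Endpoint and key handling are hypothetical.
import hmac
import hashlib

SHARED_KEY = b"per-client-secret"  # provisioned out of band in practice

def sign(body: bytes) -> str:
    return hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()

def handle_request(body: bytes, signature: str) -> bytes:
    # compare_digest avoids leaking information through timing differences
    if not hmac.compare_digest(sign(body), signature):
        raise PermissionError("rejected: bad or missing signature")
    return b"ok: " + body

body = b'{"action": "get_report", "id": 42}'
print(handle_request(body, sign(body)))  # accepted
try:
    handle_request(body, "deadbeef")     # tampered or unsigned request
except PermissionError as err:
    print(err)
```

Signature checking alone does not fix insecure API design, of course, but it illustrates the mindset: interfaces between composed systems need their own defenses.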

Finally, Knowledge Confinement: making sure an attacker never has enough information to propagate through the network. Unfortunately, this reminds me of Stuxnet. In that case, the malware propagated across Windows machines via the network and USB, and checked for Siemens software. If the target software is known to run on, or interface with, Windows machines, then the attacker just needs to take advantage of a vulnerability (or three) that is likely to exist in the system. Not much information is needed to propagate through the network if network connectivity exists and both systems are vulnerable to the same attack. Knowledge confinement could potentially be achieved by using non-standard configurations, but, then again, Verizon claims that in 2011, “97% of breaches were avoidable through simple or intermediate controls”.
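The propagation point can be shown with a toy reachability model: if every host shares the same vulnerable configuration, compromising one host reaches everything connected to it, while partitioning configurations (one crude form of confinement) cuts the path. Hosts, links, and configuration labels here are invented for the example:

```python
# Toy illustration of why monocultures propagate: if every host shares the
# same vulnerable configuration, one exploit reaches everything connected.
from collections import deque

edges = {"A": ["B"], "B": ["C", "D"], "C": [], "D": ["E"], "E": []}

def reachable(start, config):
    """Hosts an attacker can reach from `start` via hosts sharing `config`."""
    seen, queue = {start}, deque([start])
    while queue:
        host = queue.popleft()
        for nxt in edges.get(host, []):
            if nxt not in seen and configs[nxt] == config:
                seen.add(nxt)
                queue.append(nxt)
    return seen

configs = dict.fromkeys(edges, "windows-siemens")  # monoculture
print(reachable("A", "windows-siemens"))           # all five hosts

configs["C"] = configs["D"] = "hardened-alt"       # partitioned configs
print(reachable("A", "windows-siemens"))           # stops at B
```

In Stuxnet’s terms, the monoculture case is every machine running the same vulnerable Windows/Siemens stack; the attacker needs almost no knowledge of the network to keep moving.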

So looking at Challenge 1 as it was defined in 2003, eliminating cyber epidemics by 2014 seems unrealistic at this stage. While some of the suggested approaches have been developed, the application of these ideas in the practices of people, businesses, and even governments has not come to fruition on a large scale. This does not mean we are less secure. WhiteHat Security claims that in Q1 2012 there were fewer (web) vulnerabilities, and those that were identified were being fixed faster than in previous years. But, like Verizon, they also found that basic mitigation techniques, such as application firewalls, could have reduced the risk of 71% of all custom Web application vulnerabilities.

Until everyone begins to understand cyber security and their role in it (and takes it seriously), the challenge will not be met. Will the Challenge 1 recommendations ever completely eliminate cyber epidemics? I don’t think so. They can most definitely help, just like implementing basic security measures, but the Internet is not a closed system, and it only takes one weak link.



CFP: IRISSCERT Cyber Crime Conference

The IRISSCERT Cyber Crime Conference will be held November 22, 2012 in Dublin, Ireland. More information can be found here.

They are currently running a call for papers on the topics below. The intended audience is the business community within Ireland.

Submission deadline: July 20, 2012 17:00 GMT.
<ul><li>Cyber Crime</li><li>Cyber Security</li><li>Cloud Security</li><li>Incident Response</li><li>Data Protection</li><li>Incident Investigation</li><li>Information Security</li><li>Threats Information</li><li>Security Trends</li><li>Securing the Critical Network Infrastructure</li></ul>

Technical Streams:
<ul><li>Security Tools</li><li>Application Security</li><li>Network Security</li><li>Cloud Security</li><li>Database Security</li><li>Electronic Device Security</li><li>Computer Forensics</li></ul>


Webinar: Finding Evidence in an Online World - Trends & Challenges in Digital Forensics

[Edit] A recording of the webinar can be found here: http://www.forensicfocus.com/DF_Multimedia/page=watch/id=79/d=1/

Reposted from: http://www.forensicfocus.com/News/article/sid=1898/

Learn about the methods and techniques used to recover Internet-related evidence left behind on hard drives and RAM by registering for a free Forensic Focus webinar delivered by Jad Saliba of JADsoftware, developers of Internet Evidence Finder (IEF). Jad will discuss a wide range of potential evidence sources including cloud, social networking, chat, web history, P2P, and webmail artifacts.

Date: Tuesday, July 17, 2012
Time: 11AM EDT US / 4PM BST UK / 15:00 GMT
Duration: 30 mins

All attendees will receive 10% off the purchase price of IEF until August 31st.

Register now at http://forensicfocus.enterthemeeting.com/m/4BGB7KYU


ICTTF - Cyber Threat Summit 2012

The ICTTF Cyber Threat Summit will be held in Dublin on September 20-21, 2012. Have a look at this year’s agenda. You can get a 10% registration discount if you use the code: SPGNSPXV.


For those of you who will not be able to attend, there is a free web broadcast that will be offered. Register for login details.

More information can be found at cyberthreatsummit.com.

From Cyber Threat Summit.com:
<blockquote>Following last years hugely successful event, ICTTF are proud to announce an enhanced event this year over two days.
The event will be a conference and exhibition. The syllabus will be delivered by over 20 of the world’s leading cyber security experts with a specific European perspective.
A cross industry master class developed for senior executives to understand the developments, strategies and best practice in cyber security. 
Learn to understand the types of threats, potential impacts, motivational factors and trends to observe in cyber security. 
Review best practice in protecting your organisation and develop appropriate cyber defence strategies. 
Network with your industry peers and discuss the latest cyber security technologies in the marketplace.</blockquote>


Predictive Policing and Online Crime

FutureCrimes.com just passed on the post Sci-fi policing: predicting crime before it occurs. Crime modeling used by the LAPD appears to have contributed to, or at least coincided with, a 13% decrease in crime in the area in which it was being tested.

The crime model is apparently based on models used to predict earthquake aftershocks. While I have not yet found any publications specific to the method, I am assuming the model predicts crime based on the likelihood of reoccurrence in a particular area.
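For what it is worth, aftershock models of this kind are usually formulated as self-exciting point processes (Hawkes processes), in which each event temporarily raises the expected rate of further events nearby, which maps naturally onto repeat victimization. A minimal sketch, with entirely arbitrary parameter values:

```python
# A minimal sketch of a self-exciting (Hawkes) intensity, the kind of model
# aftershock prediction is usually built on: each past event temporarily
# raises the expected rate of new events. All parameter values are arbitrary.
import math

MU = 0.5     # background crime rate in the area (events/day)
ALPHA = 0.8  # how much each event boosts the rate
BETA = 1.2   # how quickly that boost decays (1/days)

def intensity(t, past_events):
    """Expected event rate at time t (days), given past event times."""
    return MU + sum(ALPHA * math.exp(-BETA * (t - s))
                    for s in past_events if s < t)

burglaries = [0.0, 0.4, 0.9]       # a recent cluster of events
print(intensity(1.0, burglaries))  # elevated: repeat victimization likely
print(intensity(10.0, burglaries)) # decayed back toward the background rate
```

If the LAPD model works roughly this way, “hot-spots” are simply regions where the intensity is currently elevated.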

I have discussed this type of modeling with officers from Chile, who had been working on something similar before 2008. Crime types and the locations in which they were likely to happen could be accurately predicted, they said. The issue with disrupting crime hot-spots, however, is that the overall amount of crime is not actually reduced, but instead dispersed to other areas. For example, if a model predicts crime is likely to happen in a certain area, and officers begin patrolling that area, the crime is probably not tied specifically to the physical location. Yes, the patrol is a deterrent, and crime is reduced in the area, but what the Chilean officers found was that the crime moved to other areas of the city.

In the LAPD’s case, crime was modeled and measured within a specific area. The article submits that “[c]rimes were down in the area 13 percent following the rollout compared to a slight uptick across the rest of the city where the program wasn’t being used”. The question is, was the “slight uptick” in other areas in the city a result of naturally increasing crime, or is it the result of displacement of crime from identified hot-spots? Based on Chile’s experiences, I am guessing the latter.

Predictive policing is an interesting concept that seems to be a natural extension of data mining over crime data. However, I have not yet seen research dealing with predictive policing of online crime. Just as a victim of opportunistic crime in the physical world is likely to be re-victimized, would a victim of opportunistic crime online also be more likely to be re-victimized? If so, what methods do ‘cyber cops’ have to disrupt such crime? And finally, if disruption were possible online, would the crimes just be dispersed instead of reduced?


Related:

<ul><li>Stopping Crime Before it Starts</li></ul>

Google Scholar Search:

<ul><li>Predictive Policing</li><li>Modeling Crime</li></ul>



International Symposium on Cybercrime Response (ISCR) 2012

I’m just back from the 1st INTERPOL NCRP Cybercrime Training Workshop and International Symposium on Cybercrime Response 2012, held in Seoul, South Korea. The joint INTERPOL and Korea National Police (KNP) conference was hosted by the KNP Cyber Terror Response Center (CTRC).

ISCR 2012 Agenda

The first day was a look at Law Enforcement (LE) communication networks, including INTERPOL, the G8 24/7 High Tech Crime Network[1], and even more informal communication channels. The overall consensus seems to be that the more formal networks are too slow to deal with the requirements of international cybercrime investigation requests. This appears to be partly a limitation of the efficiency of the networks themselves, and partly of the ability of receiving countries to process requests, whether because of resource issues or because of laws (or the lack thereof) in the requested country for dealing with the investigation request.

It was determined that informal channels of LE communication are currently more effective since they bypass international bureaucracy. These channels appear to have been created mostly through networking (conferences, etc.) and luck.

There essentially seemed to be three camps: Formal communication networks like INTERPOL and G8 24/7, less formal networks created via bilateral agreements, and LE social networks (p2p). Each camp had success stories, and I know each has had failures.

The question is, how can the situation be improved? Criminal communication networks at an international level work much more efficiently than law enforcement networks. There are many reasons why, but what can be done?

The issue of trust in LE communication was brought up: if you are requesting information or cooperation, the person with whom you are communicating should be more than just a name on a list. This is an interesting point to me. If LE is given a list of contact points per country from a formal communication network, do they question the contact point? I think they would automatically trust the contact point via the reputation of the network referring them, even without meeting the contact personally. The issue comes when these contacts are slow, or fail, to respond to requests from the network. Trust, then, comes from showing you are reliable when something is requested, whether or not you physically meet the contact representative.

Another interesting point was the concept of “exercising” your team(s) in international request response; LE essentially creates an incident response (IR) plan for international requests. Incident response is a huge topic in network security. This article, for example, is geared (at a high level) towards setting up an incident response plan, and each of its tips could be directly transposed to international LE response. The discussed point of exercising your team would be the final testing requirement. Unfortunately, this is the phase that is most often neglected, usually due to time and resources. In the case of LE, especially at an international level, it would be difficult to coordinate, and perhaps even justify, the time needed to test communication channels when no real request is at stake.

The topic of international LE communication came down to looking at a few different questions (and I added a few): What exactly is the problem, and has a solution been identified? What type of information is needed? Who has legal authority? Have international procedures been established? Are all concerned bodies part of the procedure and willing to cooperate? How do we test the procedure? How do we measure success? Who is responsible for updates?

These questions are not exactly easy to answer, even within a single organization, and working with multiple organizations in multiple jurisdictions to find answers is even more difficult and time consuming. In my opinion, this is where the providers of formal networks should be filling in the gaps. I should not expect my local investigators to create their own international networks; unless this process is centralized, different procedures will be created, incomplete networks will be formed, and there will be much duplication of effort.

The rest of the conference further discussed communication and law, examined current threats, and presented case studies (success stories) involving communication and collaboration between international law enforcement, the private sector, and sometimes academia.

Overall, the conference was directed at practitioners. It did not get very technical or theoretical, and could probably be understood by anyone regardless of their familiarity with cybercrime. Some cybercrime damage estimates were given, although how to measure such damage accurately is a problem that was not addressed. The estimates looked impressively dramatic, but the stats from different presentations did not seem to relate to each other well.

Similarly, definitions used in each presentation were quite different for the same terminology. The group was composed of people from many different countries, all practitioners, but a lack of consistency in the use (and scope) of terms was an obvious communication problem, even for terms as general as “cybercrime”. Sometimes nonstandard term usage made it difficult for me to know exactly what the speaker really meant. This made me realize that even in the same area of cybercrime investigation, we are speaking different languages. How do we expect to be able to communicate at a practical level when it is so difficult to accurately communicate our needs in a way that can be understood by everyone in the area?

Many case studies dealing with international communication were given by law enforcement, but other than “we need more / better communication” I did not see any actionable solution proposed beyond ad-hoc cooperation. Even with these great case studies and information from the private sector, I was still left with a feeling of: where do we start?

Overall, I found the conference interesting. Topics were mostly about communication, but, unfortunately, few actionable items were discussed. Case studies are useful for understanding problems and potential solutions, and some slightly more technical presentations outlined how technology could potentially help law enforcement’s current situation when dealing with cybercrime. The (potentially) most useful benefit of the conference, however, was the contacts made. There was not enough time to talk to everyone as much as I would have liked, but there appears to be potential in the group to help drive effective law enforcement communication on a global scale.



1. The G8 24/7 High Tech Crime Network (HTCN) is an informal network that provides around-the-clock, high-tech expert contact points: IT Law Wiki 
