Webinar: Industrial Espionage, Weaponized Malware, and State-Sponsored Cyber Attacks: How to Identify, Counter, and React
Date: 31 July, 2012
Time: 15:00 PT (22:00 GMT, 23:00 IST)
Duration: 60 min.
From Info Security:
The information security industry has been issuing warnings of an increase in sophisticated state-sponsored cyber attacks in the wake of Flame.
Neither the U.S. nor Israel has denied its role in the use of recently discovered weaponized malware. Add to the mix the fact that India recently announced the empowerment of its government agencies to carry out state-sponsored cyber attacks, and suddenly political – and thus industrial – espionage has never been more of a threat. Join Infosecurity Magazine in a free one-hour webinar for a detailed look at how weaponized malware is getting onto crucial systems and how you can protect against it.
You will learn:
<ol><li>How state-sponsored cyber attacks are succeeding</li><li>How weaponized malware is getting onto crucial systems</li><li>How you can protect against these attacks</li><li>How to recognize and identify these targeted attacks</li><li>How to counter weaponized malware</li></ol>
Almost a decade ago, the Computing Research Association published Four Grand Challenges in Trustworthy Computing. Working in a rapidly evolving digital field, it is easy to think everything we see is new, especially when it comes to digital crime. On a technological level this may be true, but if we look at higher-level concepts, has anything changed in nine years, and what progress have we made?
In the introduction to the challenges, references are made to “increasingly portable systems in every aspect of life”; cyber defense/war; denial-of-service (DoS) threats to power, transportation, and communications systems (critical infrastructure); insider attacks; and “loss of privacy; alteration of critical data; and new forms of theft and fraud on an unprecedented scale”. Each of these technology trends and threat areas is still relevant today, and most are gaining increasing awareness from the general public.
<div>The group’s overall goal was “to create an alternative future in which spam, viruses and worms… have been eliminated. In this vision of the future individuals would control their own privacy and could count on the infrastructure to deliver uninterrupted services… In such a world, policy and technology fit together in a rational way, balancing human needs with regulation and law enforcement.”</div><blockquote class="tr_bq">“In short, it would be a world in which information technology could be trusted.”</blockquote>To this end, the four identified grand challenges of trustworthy computing are:
<ol><li>Develop new approaches for eradicating widespread, epidemic attacks in cyberspace.</li><li>Ensure that new, critical systems currently on the drawing board are immune from destructive attack.</li><li>Provide tools to decision-makers in government and industry to guide future investment in information security.</li><li>Design new computing systems so that the security and privacy aspects of those systems are understandable and controllable by the average user.</li></ol><div>This post will focus on Challenge 1. The other challenges will be looked at in later posts.</div><div>
</div>Challenge 1: Eliminate Epidemic Attacks by 2014
Epidemic attacks, or “Cyber Epidemics”, in this case are broadly categorized as viruses and worms, spam, and Distributed Denial of Service (DDoS) attacks.
Suggested approaches to achieve this goal are summarized as follows:
<ul><li>Immune System for Networks - respond to and disable viruses and worms by dynamically managed connectivity</li><li>Composability - two systems operating together will not introduce vulnerabilities that neither has individually</li><li>Knowledge Confinement - partitioning information so an attacker never has enough knowledge to propagate through the network</li><li>Malice Tolerance - continue operating in spite of arbitrarily destructive behavior of a minority of system components</li><li>Trusted Hardware - tie software and service guarantees to the physical security of hardware devices</li></ul>With the 9th of July 2012 recently past, the first DoS attack that comes to mind is DNSChanger. And with the increase of cyber activism / terrorism, DDoS is a relatively common issue for businesses and government entities. Looking at the Q1 2012 reports from McAfee and Trend Micro, viruses and malware are at all-time highs for all platforms. Spam is also still increasing, with Norton’s Cyber Crime Index projecting spam to be about 68% of all sent email traffic as total email traffic continues to increase.
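To make the first approach concrete, the “immune system for networks” idea amounts to dynamically severing a host’s connectivity once it shows signs of infection. Here is a minimal toy sketch of that behavior; the class, host names, and anomaly threshold are all hypothetical illustrations, not anything from the CRA report.

```python
# Toy sketch of an "immune system for networks": when a host's anomaly
# score crosses a threshold, its links are dynamically severed so a virus
# or worm cannot spread further. All names/thresholds are illustrative.

class Network:
    def __init__(self):
        self.links = {}          # host -> set of connected hosts
        self.quarantined = set()

    def connect(self, a, b):
        self.links.setdefault(a, set()).add(b)
        self.links.setdefault(b, set()).add(a)

    def report_anomaly(self, host, score, threshold=0.8):
        """Quarantine a host whose anomaly score crosses the threshold."""
        if score >= threshold and host not in self.quarantined:
            for peer in self.links.get(host, set()):
                self.links[peer].discard(host)   # sever links dynamically
            self.links[host] = set()
            self.quarantined.add(host)

net = Network()
net.connect("workstation-1", "fileserver")
net.connect("workstation-1", "workstation-2")
net.report_anomaly("workstation-1", score=0.95)
print(net.links["fileserver"])   # workstation-1 is no longer reachable
```

The hard part in practice, as noted below, is producing a reliable anomaly score in the first place; dynamically cutting links is the easy half of the design.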
Some advancements have been made in the suggested areas. For example, Malice Tolerance has improved as a natural consequence of massively distributed computing. Trusted hardware became a big topic when Windows Vista used trusted platform modules (TPM) with BitLocker for disk encryption. TPM is still used, mostly in businesses, but did not really catch on with the public, seemingly because of privacy concerns and a general lack of interest.
Immune System for Networks has advanced, but there are limitations. Security management platforms have to account for activities from many different sources, where malicious activities may not be known, or even suspicious. For businesses, targeted attacks using social engineering are increasing. The weak link continues to be a lack of awareness on the part of the user, which allows many attacks to succeed. In these situations, current security systems are not robust enough to detect, and deal with, all types of incidents introduced by users, while at the same time allowing the flexibility and access that users require. Further, many organizations have not given proper consideration, or devoted enough resources, to their cyber security, and are being targeted because of it [Irish Times][Infosec Island][CBS News].
As far as systems operating together that will not introduce vulnerabilities that neither have individually, I immediately think of application programming interfaces (API). APIs have become a common way for software components to communicate with each other, but many additional risks exist on both client and server sides when APIs are used [Dark Reading]. The Cloud Security Alliance even listed “Insecure Interfaces and APIs” as one of the top threats to Cloud computing (v1.0).
Finally, there is knowledge confinement: making sure an attacker never has enough information to propagate through the network. Unfortunately, this reminds me of Stuxnet. In the case of Stuxnet, the malware propagates across Windows machines via network/USB, and checks for Siemens software. If the software is known to run on, or interface with, Windows machines, then the attacker just needs to take advantage of a vulnerability (or three) that is likely to exist in the system. Not much information is needed to propagate through the network if network connectivity exists and both systems are vulnerable to the same attack. Knowledge confinement could potentially be achieved by using non-standard configurations, but, then again, Verizon claims that in 2011, “97% of breaches were avoidable through simple or intermediate controls”.
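The monoculture problem described above can be sketched as a toy propagation model: a worm spreads over network links, but only to hosts running the configuration it exploits. With identical hosts, one flaw reaches everything; with varied, partitioned configurations, the outbreak is confined. The graph, host names, and configuration labels are invented for illustration.

```python
# Toy worm-propagation sketch: the worm spreads from an initially infected
# host to connected peers that share the exploited configuration. This is
# an illustration of knowledge confinement via configuration diversity,
# not a model of Stuxnet itself.

from collections import deque

def infected_hosts(links, config, start, exploited_config):
    """BFS from the initial infection across vulnerable, connected hosts."""
    seen = {start}
    queue = deque([start])
    while queue:
        host = queue.popleft()
        for peer in links.get(host, []):
            if peer not in seen and config[peer] == exploited_config:
                seen.add(peer)
                queue.append(peer)
    return seen

links = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b"]}

# Monoculture: every host shares the exploited configuration.
monoculture = {h: "windows-default" for h in links}
print(len(infected_hosts(links, monoculture, "a", "windows-default")))  # 4

# One hardened host partitions the network and confines the outbreak.
varied = {"a": "windows-default", "b": "hardened",
          "c": "windows-default", "d": "windows-default"}
print(len(infected_hosts(links, varied, "a", "windows-default")))  # 2
```

Note that in the varied case, host "d" is vulnerable but unreachable: the hardened host in the middle acts as the partition, which is exactly the confinement idea.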
So looking at Challenge 1 as it was defined in 2003, eliminating cyber epidemics by 2014 seems unrealistic at this stage. While some of the suggested approaches have been developed, the application of these ideas into the practices of people, businesses, and even governments has not come to fruition on a large scale. This does not mean we are less secure. WhiteHat Security claims that in Q1 of 2012 there were fewer (web) vulnerabilities, and that those identified are being fixed faster than in previous years. But, like Verizon, they also found that basic mitigation techniques, such as application firewalls, could have reduced the risk of 71% of all custom Web application vulnerabilities.
Until everyone begins to understand cyber security and their role in it (and takes it seriously), the challenge will not be met. Will the Challenge 1 recommendations ever completely eliminate cyber epidemics? I don’t think so. They can most definitely help, just like implementing basic security measures, but the Internet is not a closed system, and it only takes one weak link.
The IRISSCERT Cyber Crime Conference will be held November 22, 2012 in Dublin, Ireland. More information can be found here.
They are currently running a call for papers on the topics below. The target audience is the business community within Ireland.
Submission deadline: July 20, 2012 17:00 GMT.
<ul><li>Cyber Crime</li><li>Cyber Security</li><li>Cloud Security</li><li>Incident Response</li><li>Data Protection</li><li>Incident Investigation</li><li>Information Security</li><li>Threats Information</li><li>Security Trends</li><li>Securing the Critical Network Infrastructure</li></ul><div>
Technical Streams:</div><div><ul><li>Security Tools</li><li>Application Security</li><li>Network Security</li><li>Cloud Security</li><li>Database Security</li><li>Electronic Device Security</li><li>Computer Forensics</li></ul></div>
[Edit] A recording of the webinar can be found here: http://www.forensicfocus.com/DF_Multimedia/page=watch/id=79/d=1/
Repost from: http://www.forensicfocus.com/News/article/sid=1898/
Learn about the methods and techniques used to recover Internet-related evidence left behind on hard drives and RAM by registering for a free Forensic Focus webinar delivered by Jad Saliba of JADsoftware, developers of Internet Evidence Finder (IEF). Jad will discuss a wide range of potential evidence sources including cloud, social networking, chat, web history, P2P, and webmail artifacts.
Date: Tuesday, July 17, 2012
Time: 11AM EDT US / 4PM BST UK / 15:00 GMT
Duration: 30 mins
All attendees will receive 10% off the purchase price of IEF until August 31st.
Register now at http://forensicfocus.enterthemeeting.com/m/4BGB7KYU
For those of you who will not be able to attend, there is a free web broadcast that will be offered. Register for login details.
More information can be found at cyberthreatsummit.com.
From Cyber Threat Summit.com:
<blockquote>Following last year’s hugely successful event, ICTTF are proud to announce an enhanced event this year over two days.
The event will be a conference and exhibition. The syllabus will be delivered by over 20 of the world’s leading cyber security experts with a specific European perspective.
A cross industry master class developed for senior executives to understand the developments, strategies and best practice in cyber security.
Learn to understand the types of threats, potential impacts, motivational factors and trends to observe in cyber security.
Review best practice in protecting your organisation and develop appropriate cyber defence strategies.
Network with your industry peers and discuss the latest cyber security technologies in the marketplace.</blockquote>
FutureCrimes.com just passed on the post Sci-fi policing: predicting crime before it occurs. Crime modeling used by the LAPD appears to have contributed to a 13% decrease in crime in the area in which it was being tested.
The crime model is apparently based on models used to predict earthquake aftershocks. While I’ve not yet found any publications specific to the method, I am assuming the model predicts crime based on the likelihood of recurrence in a particular area.
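Aftershock models are typically self-exciting point processes: each event temporarily raises the expected rate of further events nearby, with the boost decaying over time. Since I have not seen the LAPD’s actual method, the following is only a minimal sketch of that general idea, with invented parameter values.

```python
# Minimal self-exciting ("aftershock"-style) intensity sketch: each past
# crime adds an exponentially decaying boost to the expected crime rate
# in one area. Parameters (mu, alpha, decay) are illustrative, not
# calibrated against any real data.

import math

def intensity(t, past_events, mu=0.5, alpha=0.8, decay=0.1):
    """Expected crime rate at time t, given times of past crimes."""
    boost = sum(alpha * math.exp(-decay * (t - s))
                for s in past_events if s < t)
    return mu + boost  # baseline rate plus decaying aftershock terms

quiet = intensity(10.0, past_events=[])
after_burst = intensity(10.0, past_events=[8.0, 9.0, 9.5])
print(after_burst > quiet)  # a recent burst raises the predicted rate
```

Under this kind of model, patrols would be directed to whichever areas currently have the highest intensity, which matches the “likelihood of recurrence” interpretation above.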
I have discussed this type of modeling with officers from Chile, who had been working on something similar before 2008. Crime types and the locations in which they were likely to happen could be accurately predicted, they said. The issue with disrupting crime hot-spots, however, is that the overall amount of crime is not actually reduced, but instead dispersed to other areas. For example, if a model predicts crime is likely to happen in a certain area, and officers begin patrolling that area, the crime is probably not tied specifically to the physical location. Yes, the patrol is a deterrent, and crime is reduced in the area, but what the Chilean officers found was that the crime moved to other areas in the city.
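The displacement effect the Chilean officers described can be shown with a toy calculation: a patrol produces a large local drop in the hot-spot, but if most of the prevented crime relocates, the city-wide total barely moves. The deterrence and displacement rates here are invented purely for illustration.

```python
# Toy displacement sketch: patrolling a hot-spot deters a fraction of its
# crime, but most of that deterred crime is displaced to other areas.
# All numbers are invented for illustration.

def patrol_effect(areas, hotspot, deterrence=0.6, displacement=0.9):
    """Reduce crime in the patrolled hot-spot; displace most of it elsewhere."""
    prevented = areas[hotspot] * deterrence
    displaced = prevented * displacement
    result = dict(areas)
    result[hotspot] -= prevented
    others = [a for a in result if a != hotspot]
    for a in others:
        result[a] += displaced / len(others)  # spread evenly, for simplicity
    return result

before = {"hotspot": 100.0, "north": 50.0, "south": 50.0}
after = patrol_effect(before, "hotspot")
print(after["hotspot"])       # 40.0 -- a big local drop
print(sum(after.values()))    # 194.0 -- city-wide, little change from 200
```

Measured only inside the hot-spot, this looks like a 60% success; measured city-wide, it is a 3% change, which is the distinction the LAPD comparison below turns on.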
In the LAPD’s case, crime was modeled and measured within a specific area. The article submits that “[c]rimes were down in the area 13 percent following the rollout compared to a slight uptick across the rest of the city where the program wasn’t being used”. The question is, was the “slight uptick” in other areas in the city a result of naturally increasing crime, or is it the result of displacement of crime from identified hot-spots? Based on Chile’s experiences, I am guessing the latter.
Predictive policing is an interesting concept that seems to be a natural extension of data mining over crime data. However, I have not yet seen research dealing with predictive policing of online crime. Just as a victim of opportunistic crime is likely to be re-victimized in the physical world, is it also likely that a victim of opportunistic crime online would be re-victimized? If so, what methods do ‘cyber cops’ have to disrupt such crime? And finally, if disruption were possible online, would the crimes just be dispersed instead of reduced?
<ul><li>Stopping Crime Before it Starts</li></ul>
Google Scholar Search:
<ul><li>Predictive Policing</li><li>Modeling Crime</li></ul>