CFP: Africomm 2012

<div class="p1">Call for Papers</div><div class="p1">——————–</div><div class="p1">Fourth International IEEE EAI Conference on e-Infrastructure and e-Services for Developing Countries - Africomm 2012</div><div class="p1">12-14 November 2012</div><div class="p1">Yaoundé, Cameroon</div><div class="p2"></div><div class="p1">——————–</div><div class="p3">
</div><div class="p3"></div><div class="p1">[Important dates]</div><div class="p3">
</div><div class="p1">Submission: 15th August 2012</div><div class="p1">Notification of Acceptance: 15th September 2012</div><div class="p1">Camera Ready: 29th September 2012</div><div class="p1">Posters/Demos: 29th September 2012</div><div class="p1">Conference Date: 12th-14th November 2012 </div>
<div class="p1">
</div><div class="p1">[Scope]</div><div class="p3">
</div><div class="p1">Information and Communication Technologies (ICTs) have proven to offer immense potential for development. However, designing, developing, and deploying infrastructures and solutions that are affordable and efficient when only limited resources are available is a very challenging task. This holds developing countries back from reaping the benefits that ICTs offer. AFRICOMM 2012 aims to bring together researchers, practitioners, and policy makers in ICT to discuss issues and trends, recent research, innovation advances, and on-the-field experiences related to e-Infrastructure, e-Governance, e-Business, and enabling policy and regulations, with a deep focus on developing countries. </div><div class="p1">AFRICOMM invites submission of unpublished, quality research papers with novel contributions. All submitted contributions will undergo a double-blind peer review. Accepted papers will be presented at the conference and published in Springer’s LNICST series. </div><div class="p1">AFRICOMM 2012, the fourth conference of the series, includes workshops, tutorials, demos, and a student track. Research papers may have a conceptual as well as a practical focus. In particular, real-world experience reports are strongly encouraged.</div><div class="p3">
</div><div class="p1">[Topics]</div><div class="p3">
</div><div class="p1">Proposals are very welcome and should focus on some of the following aspects:</div><div class="p3">
</div><div class="p1">Communication Infrastructures in Developing Countries</div><div class="p1">-Design and analysis of protocols and architectures for developing countries</div><div class="p1">-Existing and emerging wireless broadband access technologies, such as WiMAX, LTE, etc.</div><div class="p1">-Innovations in femtocells, picocells, and relay network technologies</div><div class="p1">-Cognitive radio and advanced spectrum management methods</div><div class="p1">-Self-managed deployment, operation, and maintenance of IT infrastructures</div><div class="p1">-Machine-to-machine (M2M) communications in developing country environments</div><div class="p1">-Exploitation of delay-tolerant networking</div><div class="p1">-Cooperative communications and networking</div><div class="p1">-Energy-aware ICT infrastructure, e.g., energy-aware systems and networking</div><div class="p1">-ICT infrastructures based on alternative energies</div><div class="p1">-Network and IT security issues in developing countries</div><div class="p1">-Overlay networks (such as P2P, BitTorrent, etc.) in developing countries</div><div class="p1">-ICT infrastructures for critical environmental conditions</div><div class="p1">-Testbeds and reference implementations for validating infrastructure requirements and usage</div><div class="p1">-Critical information infrastructure protection (CIIP)</div><div class="p1">-Geographic information systems and applications</div><div class="p1">-ICT policy and regulations</div><div class="p3">
</div><div class="p1">Electronic Services for Developing Countries</div><div class="p1">Experiences and applications in areas such as:</div><div class="p1">-e-health, e-learning, e-agriculture, e-business, e-government, and e-participation</div><div class="p1">-Mobile health information systems, including voice-based interfaces</div><div class="p1">-ICT for development</div><div class="p1">-Mobile-based services and applications</div><div class="p1">-Mobile computing for next generation phones</div><div class="p1">-Cloud services (or specifically, mobile cloud services)</div><div class="p1">-Service creation environments, service delivery platforms</div><div class="p1">-Telco 2.0 vs over-the-top (OTT) services</div><div class="p1">-Emergency and disaster management</div><div class="p1">-Open source and open source models for e-services</div><div class="p1">-Participatory design and living labs</div><div class="p1">-E-services for environmental sustainability</div><div class="p3">
</div><div class="p1">ICT Policy and Regulatory Issues for Developing Countries</div><div class="p3">
</div><div class="p1">[Paper submission]</div><div class="p3">
</div><div class="p1">The page limit for research papers is ten pages; demo descriptions and posters are limited to two pages. All contributions should be formatted according to the LNICST Conference Publishing Services formatting instructions, including figures and references. All submissions must be in English.</div><div class="p1">Please visit the conference website for detailed submission instructions. One author of each accepted contribution will be required to register and present the work at the conference.</div><div class="p3">
</div><div class="p1">


ZFS and NFS for Forensic Storage Servers

We’ve been looking at different storage solutions to act as storage servers for forensic images and some extracted data. Essentially we have a server with eight 3TB drives and one 40GB flash drive. The RAID cards are a RAIDCore BC4000 series and a HighPoint RocketRAID 272x series. We have been unable to get the RAIDCore working in either FreeNAS 8 or Ubuntu 12.04; it does, however, give the OS direct access to the connected drives if they are not RAIDed. The HighPoint works with Ubuntu 10.04 (kernel 2.6) and with the newest version of FreeNAS (BSD driver), though with FreeNAS I have had drives somehow fall out of a ZFS pool and be impossible to re-add. The HighPoint also gives the OS direct access to the connected drives if they are not RAIDed.

So, since we were having trouble with the hardware RAID, we decided to look for alternatives. The ideal solution would aggregate all the storage across four servers into one volume. Initially we considered sharing each of the physical disks via iSCSI, then having a single-head initiator access them all. The initiator would then be responsible for some type of software RAID, LVM, and sharing the NFS mount point. Connectivity and data redundancy are big issues, and I was worried that with this method, if one of the servers drops out, a lot of data may be lost or corrupted; the head may also be a bottleneck.

Instead, we decided that each server would host its own NFS share, so we just needed a way to combine all the 3TB physical disks into one large volume. I first thought about software RAID+ext4, but instead decided to try ZFS. It seemed to have the redundancy we were looking for, plus some other interesting features, such as handling NFS sharing directly from ZFS.

For ZFS we decided to try both Ubuntu and FreeNAS (BSD) to figure out 1) which is faster and 2) which is easier to set up and administer.

After installing Ubuntu 12.04, and following these instructions with a bit of help from here, we had terabytes of aggregate storage accessible over NFS in about 15 minutes. Pretty easy.
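The setup described above boils down to only a few commands. This is a rough sketch, not the exact commands we ran: the pool name `tank`, the dataset name, the raidz2 level, and the device names are all placeholders, and the details vary with the ZFS-on-Linux version.

```shell
# Aggregate the eight 3TB drives into one pool with double-parity redundancy
# (pool/dataset/device names are examples)
sudo zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde \
                              /dev/sdf /dev/sdg /dev/sdh /dev/sdi

# Create a filesystem and let ZFS manage the NFS export itself
sudo zfs create tank/images
sudo zfs set sharenfs=on tank/images

# On the acquisition machine, mount the export
sudo mount -t nfs server:/tank/images /mnt/images
```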

Then we started trying to image to the share. Imaging from a USB device to the server using Guymager, we could get about 11-13MB/sec. After 30 to 40 seconds of imaging, all the drives in the zpool would start flashing like crazy, and imaging would drop to around 3MB/sec. Initial speeds were faster with eSATA/IDE drives on the acquisition machine, but similar degradation happened.

[edit] I believe part of the problem was NFS having the ‘sync’ flag set on the server side. I had less degradation and slightly faster performance with ‘async’.
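For reference, switching the export to ‘async’ looks roughly like this (a sketch; the dataset name `tank/images` and the other export options are placeholders, and option handling differs between ZFS releases):

```shell
# If ZFS manages the export, NFS options ride along in the sharenfs property
sudo zfs set sharenfs="rw,async,no_subtree_check" tank/images

# With a plain kernel NFS export, the same flag goes in /etc/exports instead:
#   /tank/images  *(rw,async,no_subtree_check)
# then re-export:
sudo exportfs -ra
```

Note that ‘async’ trades durability for speed: the server acknowledges writes before they hit disk, so a crash can lose data that the client believes was written.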

So, for large files ZFS was quite slow. This did NOT seem to be the case when writing from the server to the local zpool, which leads me to think it might be partially a problem with NFS (which ZFS handles itself via the ‘zfs set sharenfs=on …’ command). I think the initial high speed is because of caching. So now I am wondering if ZFS is meant for bursts of smaller reads and writes rather than writing large files.

Looking around for why ZFS seems to degrade so much, I found this article, which claims to improve ZFS performance. NOTE: if you make any of the changes from the article, make sure you have a backup of your data. I found that after making some of the tweaks, my pool had been corrupted and had to be re-created. The tweaks did seem to make the server more stable over time (less freezing), but write speeds still degraded.

FreeNAS took much less time than Ubuntu to install, and as long as it can detect your disks, creating a new pool and sharing it takes about 10 mouse clicks (from the web interface). ZFS on FreeNAS appears to be less stable than on Ubuntu (we had drives drop out, and more freezing), but we are getting write speeds of approximately 15MB/sec with less - but some - degradation over time.

To see if the slow write speed is ZFS-related, we are currently building a software array, which we will simply format with ext4 and share over standard NFS. Syncing the 8 disks in the array will take hours, but afterwards we will hopefully have much, much better transfer speeds. Ubuntu allows the creation of a software array at install time, which makes it very easy to set up.
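For comparison, the mdadm route sketched above looks something like this (device names, RAID level, and mount point are placeholders, not our exact configuration):

```shell
# Build a software RAID-6 array from the eight disks (names are examples)
sudo mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]

# Watch the initial sync progress - this is the part that takes hours
cat /proc/mdstat

# Format, mount, and export it the traditional way
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /srv/images
echo '/srv/images *(rw,async,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra
```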

If you have any experience with ZFS and know what some of the issues might be, or even just a better set-up for aggregating large disks and getting good performance writing large files, please leave a comment.



Future Crimes TED Talk

[Update] See Bruce Schneier’s response

Our friends at recently gave a good TED talk about technology, crime, and a potential way to fight crime in the future.

<div class="separator" style="clear: both; text-align: center;"></div>
From “Marc Goodman imagines the future crime and terrorism challenges we will all face as a result of advancing technologies. He thinks deeply about the disruptive security implications of robotics, artificial intelligence, social data, virtual reality and synthetic biology. Technology, he says, is affording exponentially growing power to non-state actors and rogue players, with significant consequences for our common global security. How to respond to these threats? The crime-fighting solution might just lie in crowdsourcing.”



Digital Forensic Investigation and Cloud Computing

We have a chapter in an upcoming book, Cybercrime and Cloud Forensics: Applications for Investigation Processes.

Our chapter aims to be a high-level introduction to fundamental concepts of both digital forensic investigations and cloud computing for non-experts in one or both areas. Once fundamental concepts are established, we examine cloud computing security-related questions; specifically, how past security challenges are inherited or solved by cloud computing models, as well as new security challenges that are unique to cloud environments. Next, we analyze the challenges and opportunities cloud computing brings to digital forensic investigations. Finally, the Integrated Digital Investigation Process model is used as a guide to illustrate considerations and challenges during investigations involving cloud environments.



Forensic Acquisition of a Virtual Machine with Access to the Host

Someone recently asked about an easy way to create a RAW image of virtual machine (VM) disks, so here is a quick how-to.

<div class="separator" style="clear: both; text-align: center;"></div>If you have access to the VM host, you could either copy and convert the virtual disks on the host using something like qemu-img, or if for some reason you cannot convert the virtual disks, you can image the VM from within the virtual environment. This how-to will go through relatively easy ways to image a live or offline virtual machine using the virtual environment with access to the host.
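The host-side conversion mentioned above is usually the simplest path. A rough sketch (the file names `suspect.qcow2`/`suspect.raw` are placeholders; qemu-img auto-detects most common virtual disk formats such as qcow2, VMDK, and VDI):

```shell
# Inspect the virtual disk first to confirm its format and size
qemu-img info suspect.qcow2

# Convert it to a RAW image that forensic tools can read directly
qemu-img convert -O raw suspect.qcow2 suspect.raw

# Hash the result so later verification is possible
sha256sum suspect.raw
```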

First, if the virtual machine cannot be shut down (a live environment), you will make changes to the ‘suspect’ environment. If it is a suspect device, make sure your tools are on write-protected media. Verification of disk images won’t work in a live environment, since the disk is changing while imaging is taking place. If you are doing an offline acquisition for forensic purposes, make sure you verify the images once you create them.
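Verification just means hashing the source and the image and comparing. Here is a minimal, self-contained demo of the idea, where a small file stands in for a suspect device (in a real offline acquisition the input would be something like /dev/sdb):

```shell
# Create a fake 4MB "suspect disk" so the demo is self-contained
dd if=/dev/urandom of=disk.img bs=1M count=4 2>/dev/null

# Acquire it: noerror keeps going past read errors, sync pads failed blocks
dd if=disk.img of=evidence.dd bs=1M conv=noerror,sync 2>/dev/null

# The two hashes must match for the acquisition to verify
sha256sum disk.img evidence.dd
```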

If the VM is live and cannot be shut down:
<ul><li>First check if the management interface allows devices to be attached to the VM, specifically USB/FireWire devices.</li><ul><li>If you cannot attach devices for whatever reason, then you will have to use a network share or a netcat tunnel to copy the image.</li><li>Ensure your storage media is larger than the virtual machine’s disk.</li></ul><li>If it is a Windows environment, copy your imaging program, like FTK Imager Lite or dd.exe from UnxUtils, to the network share/USB device. I also like chrysocome’s dd since it lets you list devices.</li><li>In the virtual machine, mount your share/USB device.</li><li>From the mounted device, you should be able to access and run the imaging tools you copied previously - ensure you output the image to your share/USB device.</li><ul><li>dd</li><li>FTK Imager Lite</li></ul></ul><div>
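The live steps above can be sketched as follows for a Unix-like guest (the device /dev/sda and the mount point /mnt/evidence are assumptions; check fdisk output to identify the actual suspect disk):

```shell
# Inside the live VM, with the share or USB device mounted at /mnt/evidence:
# list disks and partitions to identify the suspect device
fdisk -l

# image the whole physical disk; noerror,sync pads unreadable blocks
# instead of aborting the acquisition
dd if=/dev/sda of=/mnt/evidence/vm-disk.dd bs=4M conv=noerror,sync

# hash the image for the chain of custody
sha256sum /mnt/evidence/vm-disk.dd > /mnt/evidence/vm-disk.dd.sha256
```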
</div>If the VM is offline or can be shut down:
<ul><li>First check if you can boot the VM from a CD/USB device</li><ul><li>If yes, use a live CD like FCCU or Helix to boot the VM</li><ul><li>All we are really interested in is 1) an imaging program on the live CD, 2) USB or network support, and 3) netcat installed.</li></ul><li>If no:</li><ul><li>Can you add CD/USB support to the VM?</li><li>Can you copy the VM and convert it to a RAW image using qemu-img (part of QEMU)?</li><li>Boot the VM, and make an image in the live environment (go to the live imaging section)</li></ul></ul><li>After you have booted the VM from a live CD…</li><ul><li>Using external storage to store the image:</li><ul><li>List the current disks (fdisk -l) and take note of the current suspect disks to image</li><ul><li>/dev/hda would be a physical disk, while /dev/hda1 would be partition 1 on disk ‘a’</li></ul><li>Attach and mount an external storage disk</li><li>Make a copy of the physical disk (usually something like /dev/sda) using an imaging program like dd or guymager.</li><ul><li>Make sure you are copying the image to your external disk!</li></ul></ul><li>Using the network:</li><ul><li>Network share:</li><ul><li>List the current disks, and take note of the suspect disks to image</li><li>Mount the network share</li><li>Make a copy of the physical disk (usually something like /dev/sda) using an imaging program like dd or guymager.</li></ul><li>No network share:</li><ul><li>Set up a netcat tunnel between a server (any computer you control), and the suspect VM (client)</li><ul><li>Note: this connection is unencrypted!!!</li></ul><li>You can use ssh or cryptcat for an encrypted tunnel, and bzip2 for compression and faster transfer speeds.</li></ul></ul></ul></ul>
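A minimal sketch of the netcat option, assuming /dev/sda is the suspect disk, port 9000 is free, and `server_ip` is the collection machine (netcat's listen syntax varies between implementations, so check yours):

```shell
# On the collection server (any machine you control), listen and write the image:
nc -l -p 9000 > vm-disk.dd

# On the suspect VM, stream the raw disk into the tunnel.
# Remember: this connection is unencrypted - use cryptcat or an ssh pipe
# if the image must not cross the network in the clear.
dd if=/dev/sda bs=4M conv=noerror,sync | nc server_ip 9000
```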
That’s it. Pretty generic, but it should be enough to get anyone started. Please comment if you have any questions.

