Automated malware analysis setup
The following is a quick guide to setting up an automated processing loop for malware analysis. The idea is to receive malware samples in a target folder, run DRAKVUF on each sample and store the results in another folder. The samples are served to the DRAKVUF instance over the network using Apache. Three folders are used: /malware_incoming, /malware_processing and /malware_finished. The first folder is where incoming malware samples wait to be processed; the second holds the samples currently being processed; the last is where all the results are placed and where each sample is moved once its analysis completes.
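A minimal sketch of creating this folder layout up front (the paths are exactly the ones named above):

```
# Create the three working folders for the processing loop.
mkdir -p /malware_incoming /malware_processing /malware_finished
```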
- Install all DRAKVUF binaries by running `make install` after you have built DRAKVUF as described on http://drakvuf.com.
- Install additional packages: `apt-get install screen apache2 tcpdump vlan openvswitch-switch`
- Configure Apache2 by editing /etc/apache2/apache2.conf: change the default folder to /malware_processing and remove "Indexes" from the options. Once done editing, restart Apache with `/etc/init.d/apache2 restart`. A quick way to sanity-check the result is sketched below.
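Once Apache is back up, you can verify that files dropped into the serve folder are reachable over HTTP. A minimal sketch, using a hypothetical file name:

```
# sample.exe is a hypothetical placeholder; any file will do.
touch /malware_processing/sample.exe
# -f makes curl fail loudly on HTTP errors such as 403/404.
curl -f -o /dev/null http://127.0.0.1/sample.exe && echo "serving OK"
```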
- Add an OVS bridge to be used by the analysis clones: `ovs-vsctl add-br xenbr1`. One way to give the host an address on this bridge, so the clones can reach the Apache server, is sketched below.
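For the clones to fetch their samples from Apache over this bridge, the analysis host typically needs an IP on it. A minimal sketch, assuming the 10.0.0.0/24 range is unused in your environment (adjust to taste, and make the assignment persistent via your distribution's network configuration if needed):

```
# Assign a host address on the new bridge and bring it up.
# The 10.0.0.1/24 address is an assumption; pick a free range.
ip addr add 10.0.0.1/24 dev xenbr1
ip link set xenbr1 up
```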
- Edit tools/clone.pl and change the configuration options to match your setup, including the LVM volume group name and the bridge name (if you chose something other than xenbr1).
- Configure your VM as you see fit, determine which PID you want to use for hijacking, then save the domain using `xl save` and restore it with `xl restore -p -e`. An example invocation is sketched below.
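For illustration, with a hypothetical domain named windows7 whose Xen config lives at /etc/xen/windows7.cfg, this step could look roughly as follows (names and paths are placeholders):

```
# Save the configured domain's state to a snapshot file.
xl save windows7 /root/windows7.snapshot

# Restore it: -p leaves the domain paused after the restore,
# -e tells xl not to fork into the background to wait on the
# domain (see the xl manual for details).
xl restore -p -e /etc/xen/windows7.cfg /root/windows7.snapshot
```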
- Start a screen session with logging enabled to run dirwatch: `screen -L -d -m dirwatch [config options]` (`-d -m` starts the session detached; `-L` enables logging). The required config options are:

```
dirwatch <origin domain name> <domain config> <rekall_profile> <injection pid> <watch folder> <serve folder> <output folder> <max clones> <clone_script> <config_script> <drakvuf_script> <cleanup_script> <tcpdump_script>
```

A full example invocation is sketched below.
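Putting the pieces together, a complete start-up could look like the following. Every value here is a placeholder: the domain name, config, Rekall profile, injection PID and clone count come from your own setup, and apart from tools/clone.pl the script names are hypothetical stand-ins for the samples shipped in the tools folder:

```
# All arguments are examples; substitute your own. Script names
# other than tools/clone.pl are hypothetical placeholders.
screen -L -d -m dirwatch \
    windows7 /etc/xen/windows7.cfg /root/windows7.rekall.json 2048 \
    /malware_incoming /malware_processing /malware_finished 4 \
    tools/clone.pl tools/config.sh tools/drakvuf.sh \
    tools/cleanup.sh tools/tcpdump.sh
```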
There are sample scripts for each of these located in the `tools` folder in DRAKVUF. Feel free to read each script to see how the VM gets preconfigured before the analysis starts.