The purpose of the FaaS (Factoring as a Service) project is to demonstrate that 512-bit integers can be factored in only a few hours, for less than $100 of compute time in a public cloud environment. This illustrates the amazing progress in computing power over time, and the risk of continued use of 512-bit RSA keys.
Please do not use these scripts to attack any systems that you do not own.
Our scripts launch a compute cluster on Amazon EC2 and run the CADO-NFS and Msieve implementations of the number field sieve factoring algorithm, with some improvements to parallelization. For more information about the project, see the project webpage and our paper.
This section shows you how to quickly get set up to factor.
Set up and configure the AWS CLI using these instructions. Make sure that your ~/.aws/config looks like
[default]
region = <EC2-region>
output = json
and your ~/.aws/credentials looks like
[default]
aws_access_key_id = <key_id>
aws_secret_access_key = <access_key>
Install Ansible using these instructions and configure Boto (the Python interface to AWS) using these instructions. Make sure that your ~/.boto config looks like
[Credentials]
aws_access_key_id = <key_id>
aws_secret_access_key = <access_key>
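Before going further, you can sanity-check the three config files above. The following is a minimal sketch (not part of the FaaS scripts); the file paths, section names, and key names are the ones shown in the examples above:

```python
import os
import configparser

def missing_keys(path, section, keys):
    """Return the subset of `keys` not present in `section` of the INI file at `path`."""
    parser = configparser.ConfigParser()
    if not parser.read([os.path.expanduser(path)]):
        return list(keys)  # file missing or unreadable: everything is missing
    if not parser.has_section(section):
        return list(keys)  # section header missing or misspelled
    return [k for k in keys if not parser.has_option(section, k)]

if __name__ == "__main__":
    checks = [
        ("~/.aws/config", "default", ["region", "output"]),
        ("~/.aws/credentials", "default",
         ["aws_access_key_id", "aws_secret_access_key"]),
        ("~/.boto", "Credentials",
         ["aws_access_key_id", "aws_secret_access_key"]),
    ]
    for path, section, keys in checks:
        absent = missing_keys(path, section, keys)
        print("%s: %s" % (path, "ok" if not absent else "missing " + ", ".join(absent)))
```

Each file should report "ok" before you try to launch a cluster.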
Install GNU Parallel using these instructions.
The scripts are available here; you can download them with the following commands:
>$ cd ec2
>$ wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py
>$ chmod +x ec2.py
>$ wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini
Set the following values in ec2.ini.
regions = <EC2-region> # you can leave this as 'all', but the script runs more quickly with a single region
cache_max_age = 0 # we want to always see the most up-to-date info about running instances, so do not cache
rds = False # if your AWS user does not have rds permissions
elasticache = False # if your AWS user does not have elasticache permissions
The following script will create a new AWS VPC (Virtual Private Cloud) configured for FaaS.
>$ ./configure-aws.py
Set the parameters for your factoring job, such as the integer to factor and the email address that should receive the results, in vars/custom.yml:
>$ vim vars/custom.yml
We provide a public AMI for the region us-east-1 (ami-19642b7c), which is the result of running the following script. To use our public AMI, set the 'base_image' variable in 'vars/ec2.yml' to 'ami-19642b7c' and skip the script. Building your own AMI with the script can take up to an hour.
>$ ./build-base.sh
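If you opt for the public AMI rather than running build-base.sh, the one-line edit can be scripted. This is a minimal sketch (not part of the FaaS scripts) that assumes the vars file contains a YAML-style 'base_image: ...' line; the exact filename and key name may differ in your checkout:

```python
import re

def set_base_image(text, ami_id):
    """Rewrite the base_image line in a vars file to point at `ami_id`."""
    return re.sub(r"(?m)^(base_image\s*:\s*).*$", r"\g<1>" + ami_id, text)

# Example: switch to the public us-east-1 AMI from this README.
print(set_base_image("base_image: ami-00000000\n", "ami-19642b7c"))
```

Read the vars file, pass its contents through set_base_image, and write it back.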
To check that your AWS environment is correctly configured, we recommend that you run a small test factorization. Our test script will build a custom AMI for the test factorization, launch a cluster of four m4.large nodes, and factor a 100-digit number. The entire process should cost less than a dollar in EC2 credit, but will hopefully help you debug any issues with cluster setup. We recommend that you run the commands in the script one by one.
>$ ./test-factor.sh
The following script will build a new AMI, launch a cluster, and factor the 512-bit integer that you specified in vars/custom.yml. We recommend that you run the commands in the script one by one. Once you have built a custom AMI, it is not necessary to build a new one for each factorization, so you may wish to comment out the first few lines.
>$ ./factor.sh
If all goes to plan, you will receive an email with the results of the factorization. However, this is research code, and things do not always go as planned :).
By default, the master node will be stopped (not terminated) after the factorization has completed so that the relevant log files will not be deleted. This will continue to use resources on your Amazon account until the node is terminated.
The relevant log files are in the following locations on the master node:
/home/ubuntu/server.stderr # the supervisor output file. You can watch the factorization live with 'tail -f server.stderr'.
/workdir/<job_name>/<job_name>.log # the faas log file
/workdir/<job_name>/<job_name>.cmd # the commands executed by the factoring script
/var/log/slurm/slurmctld.log # the Slurm controller daemon log file
Good luck!