
Long and large cron jobs with travis #154

Open
calispac opened this issue Mar 7, 2018 · 3 comments

@calispac (Collaborator) commented Mar 7, 2018

Lately we have integrated the simtel_array data analysis into digicampipe, and in the near future we will also have files from the camera calibration.

So far we only run unit tests, but with cron jobs from TravisCI we can try to schedule more complete tests.
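For example (a minimal sketch, nothing we run today): Travis sets the `TRAVIS_EVENT_TYPE` environment variable to `cron` for scheduled builds, so the expensive tests could be marked to run only there. The test name below is made up.

```python
import os

import pytest

# TRAVIS_EVENT_TYPE is "push", "pull_request", "api" or "cron";
# skip the expensive tests unless this is a scheduled cron build.
cron_only = pytest.mark.skipif(
    os.environ.get('TRAVIS_EVENT_TYPE') != 'cron',
    reason='long-running test, only executed in Travis cron builds',
)


@cron_only
def test_full_simtel_array_analysis():
    pass  # placeholder for a complete analysis over the large files
```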

The issues right now are:

  1. The time limitation imposed by TravisCI.
  2. The limited space inside the repo.

Regarding those points:

  1. TravisCI enforces a hard per-job timeout, so long analyses get killed.
  2. The space limits are:
  • Total size of a GitHub repo: ~1 GB
  • File size limit: 100 MB
  • No idea what the limit for Travis is (assume the same as GitHub)

So running this kind of test on TravisCI does not seem to be a long-term solution. But I believe one can ask Travis to trigger jobs on a machine (or cluster of machines) that has access to the MC and data files.
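One possible mechanism (a sketch only; the script name and the trigger policy are assumptions, and the payload fields are from my reading of the Travis webhook docs, so adjust if they differ): Travis can POST a webhook notification at the end of a build, form-encoded with the JSON build report in a `payload` field, and a small listener on a machine next to the data could react by launching the heavy jobs.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json
import subprocess
import urllib.parse


class TravisHook(BaseHTTPRequestHandler):
    def do_POST(self):
        # Travis sends application/x-www-form-urlencoded data with the
        # JSON build report in a field called "payload".
        length = int(self.headers['Content-Length'])
        body = urllib.parse.parse_qs(self.rfile.read(length).decode())
        payload = json.loads(body['payload'][0])

        # Only react to passing cron builds (hypothetical policy).
        if payload.get('type') == 'cron' and payload.get('state') == 'passed':
            # launch_full_analysis.sh is a made-up wrapper around the
            # real analysis jobs that see the MC and data files.
            subprocess.Popen(['./launch_full_analysis.sh'])

        self.send_response(200)
        self.end_headers()


if __name__ == '__main__':
    HTTPServer(('', 8080), TravisHook).serve_forever()
```

On the Travis side, the listener's URL would go under `notifications: webhooks:` in `.travis.yml`.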

Any suggestions?

@dneise (Collaborator) commented Mar 7, 2018

I would like to propose a more "low tech" way than travis...

  • Find or buy a machine (~500,-)
  • Install a recent Ubuntu LTS
  • Download our crab-data-sample, other input data, slowdata, and so on to that machine.

Write a cron job which runs, dunno... once per hour or every 2 hours or so, and does the following (see the sketch after this list):

  • freshly install Anaconda and our software,
  • run as many tests and analyses as you want,
  • capture the entire output in a text file and send it around as an email and/or present it on a website on that machine.
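Roughly like this (a sketch assuming a recent conda and Python 3.7+ on that machine; all paths, the environment name and the addresses are made up, and it re-creates a conda environment rather than reinstalling Anaconda itself):

```python
#!/usr/bin/env python3
"""Periodic test driver, meant to be invoked from cron, e.g.

    0 */2 * * * /home/sst1m/run_tests.py
"""
import smtplib
import subprocess
from email.message import EmailMessage

COMMANDS = [
    # fresh environment and install of our software
    'conda env remove --yes --name digicampipe-ci',
    'conda create --yes --name digicampipe-ci python=3.6',
    'conda run --name digicampipe-ci pip install -e /home/sst1m/digicampipe',
    # run the tests and analyses against the locally stored data
    'conda run --name digicampipe-ci pytest /home/sst1m/digicampipe',
]


def main():
    report = []
    for cmd in COMMANDS:
        result = subprocess.run(cmd, shell=True,
                                capture_output=True, text=True)
        report.append('$ {}\n{}{}'.format(cmd, result.stdout, result.stderr))

    # capture the entire output and send it around as an email;
    # assumes a local mail server is listening on port 25
    msg = EmailMessage()
    msg['Subject'] = 'digicampipe nightly test report'
    msg['From'] = 'ci@example.org'
    msg['To'] = 'sst1m-dev@example.org'
    msg.set_content('\n'.join(report))
    with smtplib.SMTP('localhost') as smtp:
        smtp.send_message(msg)


if __name__ == '__main__':
    main()
```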

@dneise (Collaborator) commented Mar 7, 2018

I have a PC here under my desk which could do that, but I'd assume one would rather have this machine in Geneva or Krakow?

@calispac (Collaborator, Author) commented Mar 7, 2018

Yes, I think the plan is to use a machine that already has access to the "database". It would be even better if it were a machine we are already familiar with (one of the clusters we use, or a cluster that we will use).
