# 2016-ml-contest

THE CONTEST IS NOW CLOSED. THANK YOU TO EVERYONE WHO PARTICIPATED.

## Final standings: congratulations to LA_Team!

The top teams, based on the median F1-micro score from 100 realizations of their models, were:

| Position | Team | F1 | Algorithm | Language | Solution |
|---|---|---|---|---|---|
| 1 | LA_Team (Mosser, de la Fuente) | 0.6388 | Boosted trees | Python | Notebook |
| 2 | PA Team (PetroAnalytix) | 0.6250 | Boosted trees | Python | Notebook |
| 3 | ispl (Bestagini, Tuparo, Lipari) | 0.6231 | Boosted trees | Python | Notebook |
| 4 | esaTeam (Earth Analytics) | 0.6225 | Boosted trees | Python | Notebook |

I have stochastic scores for other teams and will continue to work through them, but it seems unlikely that the top standings will change at this point.


Welcome to the Geophysical Tutorial Machine Learning Contest 2016! Read all about the contest in the October 2016 issue of the magazine. Look for Brendon Hall's tutorial on lithology prediction with machine learning.

You can run the notebooks in this repo in the cloud; just click the badge below:

Binder

You can also clone or download this repo with the green button above, or just read the documents:

## Leaderboard

F1 scores of models against secret blind data in the STUART and CRAWFORD wells. The logs for those wells are available in the repo, but contestants do not have access to the facies.

**These are deterministic scores; the final standings depend on stochastic scores (see above).**

| Team | F1 | Algorithm | Language | Solution |
|---|---|---|---|---|
| LA_Team (Mosser, de la Fuente) | 0.641 | Boosted trees | Python | Notebook |
| ispl (Bestagini, Tuparo, Lipari) | 0.640 | Boosted trees | Python | Notebook |
| SHandPR | 0.631 | Boosted trees | Python | Notebook |
| HouMath | 0.630 | Boosted trees | Python | Notebook |
| esaTeam | 0.629 | Boosted trees | Python | Notebook |
| Pet_Stromatolite | 0.625 | Boosted trees | Python | Notebook |
| PA Team | 0.623 | Boosted trees | Python | Notebook |
| CC_ml | 0.619 | Boosted trees | Python | Notebook |
| geoLEARN | 0.613 | Random forest | Python | Notebook |
| ar4 | 0.606 | Random forest | Python | Notebook |
| Houston_J | 0.600 | Boosted trees | Python | Notebook |
| Bird Team | 0.598 | Random forest | Python | Notebook |
| gccrowther | 0.589 | Random forest | Python | Notebook |
| thanish | 0.580 | Random forest | R | Code |
| MandMs | 0.579 | Majority voting | Python | Notebook |
| evgenizer | 0.578 | Boosted trees | Python | Notebook |
| jpoirier | 0.574 | Random forest | Python | Notebook |
| kr1m | 0.570 | AdaBoosted trees | Python | Notebook |
| ShiangYong | 0.570 | ConvNet | Python | Notebook |
| CarlosFuerte | 0.570 | Multilayer perceptron | Python | Notebook |
| fvf1361 | 0.568 | Majority voting | Python | Notebook |
| CarthyCraft | 0.566 | Boosted trees | Python | Notebook |
| gganssle | 0.561 | Deep neural net | Lua | Notebook |
| StoDIG | 0.561 | ConvNet | Python | Notebook |
| wouterk1MSS | 0.559 | Random forest | Python | Notebook |
| Anjum48 | 0.559 | Majority voting | Python | Notebook |
| itwm | 0.557 | ConvNet | Python | Notebook |
| JJlowe | 0.556 | Deep neural network | Python | Notebook |
| adatum | 0.552 | Majority voting | R | Notebook |
| CEsprey | 0.550 | Majority voting | Python | Notebook |
| osorensen | 0.549 | Boosted trees | R | Notebook |
| rkappius | 0.534 | Neural network | Python | Notebook |
| JesperDramsch | 0.530 | Random forest | Python | Notebook |
| cako | 0.522 | Multilayer perceptron | Python | Notebook |
| BGC_Team | 0.519 | Deep neural network | Python | Notebook |
| CannedGeo | 0.512 | Support vector machine | Python | Notebook |
| ARANZGeo | 0.511 | Deep neural network | Python | Code |
| daghra | 0.506 | k-nearest neighbours | Python | Notebook |
| BrendonHall | 0.427 | Support vector machine | Python | Initial score in article |

## Getting started with Python

Please refer to the User guide to the geophysical tutorials for tips on getting started in Python and to find out more about Jupyter notebooks.
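
If you want a quick baseline to experiment with, the sketch below shows one way to load the training logs and fit a simple classifier with pandas and scikit-learn. It assumes the training data is in a file called `facies_vectors.csv` with a `Facies` label column and numeric log curves; treat the column names as assumptions and adjust them to the actual file. This is only an illustration, not a contest entry.

```python
# Minimal sketch: load the training logs and fit a baseline facies classifier.
# Assumes 'facies_vectors.csv' has a 'Facies' label column plus numeric
# log-curve columns; adjust the file and column names to the real data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

data = pd.read_csv('facies_vectors.csv')

# Keep only numeric feature columns and drop rows with missing values.
feature_cols = [c for c in data.columns
                if c not in ('Facies', 'Formation', 'Well Name')]
X = data[feature_cols].select_dtypes('number').dropna()
y = data.loc[X.index, 'Facies']

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print('F1-micro:', f1_score(y_test, clf.predict(X_test), average='micro'))
```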

## Find out more about the contest

If you intend to enter this contest, I suggest you check the open issues and read through the closed issues too. There's some good info in there.

To find out more, please read the article in the October issue or read the manuscript in the tutorials-2016 repo.

## Rules

We've never done anything like this before, so there's a good chance these rules will become clearer as we go. We aim to be fair at all times, and reserve the right to make judgment calls for dealing with unforeseen circumstances.

IMPORTANT: When this contest was first published, we asked you to hold the SHANKLE well blind. This is no longer necessary. You can use all the published wells in your training. Related: I am removing the file of predicted facies for the STUART and CRAWFORD wells, to reduce confusion — they are not actual facies, only those predicted by Brendon's first model.

- You must submit your result as code, and we must be able to run your code.
- Entries will be scored by comparison against the known facies in the STUART and CRAWFORD wells, which do not have labels in the contest dataset. We will use the F1 score; see issue #2 regarding this point. The scores in the leaderboard above reflect this.
- Where there is stochastic variance in the predictions, the median of 100 realizations will be used as the score (see the sketch after this list). See issue #114 regarding this point. The scores in the leaderboard do not currently reflect this. Probably only the top entries will be scored in this way. [updated 23 Jan]
- The result we get with your code is the one that counts as your result.
- To make it more likely that we can run it, your code must be written in Python, R, Julia, or Lua [updated 26 Oct].
- The contest is over at 23:59:59 UT (i.e. midnight in London, UK) on 31 January 2017. Pull requests made after that time won't be eligible for the contest.
- If you can do even better with code you don't wish to share fully, that's really cool, nice work! But you can't enter it for the contest. We invite you to share your result through your blog or other channels... maybe a paper in The Leading Edge.
- This document, and the documents it links to, will be the channel for communication of the leading solution and everything else about the contest.
- This document contains the rules. Our decision is final. No purchase necessary. Please exploit artificial intelligence responsibly.
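
For illustration only, here is a minimal sketch of the median-of-realizations idea, assuming a hypothetical `train_and_predict` callable that retrains a model with a different random seed each time and returns predicted facies for the blind wells. It is not the official scoring code, just an outline of the approach described above.

```python
# Sketch of stochastic scoring: run one model many times and take the
# median F1-micro. 'train_and_predict' is a hypothetical stand-in for a
# contestant's (non-deterministic) training-and-prediction routine.
import numpy as np
from sklearn.metrics import f1_score

def median_f1_micro(train_and_predict, X_blind, y_blind, n_realizations=100):
    """Run the model n_realizations times and return the median F1-micro."""
    scores = []
    for seed in range(n_realizations):
        y_pred = train_and_predict(X_blind, seed=seed)  # one realization
        scores.append(f1_score(y_blind, y_pred, average='micro'))
    return float(np.median(scores))
```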

## Licenses

Please note that the dataset is not openly licensed. We are working on this, but for now please treat it as proprietary. It is shared here exclusively for use on this problem, in this contest. We hope to have news about this in early 2017, if not before.

All code is the property of its author and subject to the terms of their choosing. If in doubt — ask them.

The information about the contest, the original article, and everything in this repo published under the auspices of SEG is licensed CC-BY and may be used with attribution.
