# hncynic

The best Hacker News comments are written with a complete disregard for the linked article. hncynic is an attempt at capturing this phenomenon by training a model to predict Hacker News comments just from the submission title. More specifically, I trained a Transformer encoder-decoder model on Hacker News data. In my second attempt, I also included data from Wikipedia.

The generated comments are fun to read, but often turn out to be meaningless or contradictory -- see here for some examples generated from recent HN titles.

There is a demo live at https://hncynic.leod.org/.

## Steps

### Hacker News

Train a model on Hacker News data only:

1. `data`: Prepare the data and extract title-comment pairs from the HN data dump (see the extraction sketch after this list).
2. `train`: Train a Transformer translation model on the title-comment pairs using TensorFlow and OpenNMT-tf.
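For illustration, here is a minimal sketch of what the pair extraction could look like. It assumes the dump is a file of JSON objects, one item per line, using the `id`, `type`, `title`, `parent`, and `text` fields from the HN API; the actual scripts in `data` may work differently.

```python
import json

def extract_pairs(dump_path):
    """Yield (title, comment) pairs for top-level comments only."""
    titles = {}    # story id -> title
    comments = []  # (parent id, comment text), buffered since stories may come later
    with open(dump_path, encoding="utf-8") as f:
        for line in f:
            item = json.loads(line)
            if item.get("type") == "story" and item.get("title"):
                titles[item["id"]] = item["title"]
            elif item.get("type") == "comment" and item.get("text"):
                comments.append((item.get("parent"), item["text"]))
    # A comment is top-level exactly when its parent is a story.
    for parent, text in comments:
        if parent in titles:
            yield titles[parent], text
```

Replies to other comments drop out naturally here, since their parent id points at another comment rather than a story; this matches the note under Future Work below.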

### Transfer Learning

Train a model on Wikipedia data, then switch to Hacker News data:

1. `data-wiki`: Prepare data from Wikipedia articles.
2. `train-wiki`: Train a model to predict Wikipedia section texts from titles.
3. `train-wiki-hn`: Continue training on HN data (see the sketch after this list).
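A minimal sketch of the continuation step, using OpenNMT-tf's Python API; the directory and file names below are placeholders, not the repo's actual paths. Pointing `model_dir` at the directory holding the Wikipedia checkpoints makes training resume from them on the HN data, provided both phases share the same vocabulary.

```python
import opennmt

# Keep the same model_dir as the Wikipedia run so training resumes from its
# checkpoints; only the training files change to the HN title-comment pairs.
config = {
    "model_dir": "run/wiki",  # placeholder: checkpoint dir from the Wikipedia run
    "data": {
        "source_vocabulary": "vocab.txt",  # must match the Wikipedia run
        "target_vocabulary": "vocab.txt",
        "train_features_file": "hn.titles.txt",  # placeholder file names
        "train_labels_file": "hn.comments.txt",
    },
}

model = opennmt.models.TransformerBase()
runner = opennmt.Runner(model, config, auto_config=True)
runner.train()
```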

### Hosting

1. `serve`: Serve the model with TensorFlow Serving (a query sketch follows below).
2. `ui`: Host a web interface for querying the model.
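Once the model is exported and running under TensorFlow Serving, it can be queried over the REST API. A sketch, assuming the model is served under the name `hncynic` on the default REST port 8501; the input keys must match the exported serving signature (`tokens`/`length` are what OpenNMT-tf exports typically use, but check with `saved_model_cli` first).

```python
import requests

# TensorFlow Serving's REST predict endpoint has the form /v1/models/<name>:predict.
URL = "http://localhost:8501/v1/models/hncynic:predict"  # assumed name and port

def generate_comment(title_tokens):
    payload = {
        "inputs": {
            "tokens": [title_tokens],        # one batch entry: the tokenized title
            "length": [len(title_tokens)],
        }
    }
    response = requests.post(URL, json=payload)
    response.raise_for_status()
    return response.json()["outputs"]

print(generate_comment(["Show", "HN", ":", "I", "built", "a", "thing"]))
```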

## Future Work

- Acquire GCP credits, train for more steps.
- Using encoder-decoder models is probably not ideal. In retrospect, I should have trained a language model instead, on data like `title <SEP> comment` (see the sketch below).
- I've completely excluded HN comments that are replies from the training data. It might be interesting to train on these as well.
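To illustrate the language-model alternative from the second bullet: each title-comment pair would be flattened into a single training sequence with a separator token, so the model learns to continue everything after `<SEP>` given a title. A minimal sketch:

```python
SEP = " <SEP> "

def to_lm_lines(pairs):
    """Flatten (title, comment) pairs into single language-model training lines."""
    for title, comment in pairs:
        yield title + SEP + comment

# Hypothetical pair, just to show the resulting line format.
for line in to_lm_lines([("My Title", "My comment.")]):
    print(line)  # -> My Title <SEP> My comment.
```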