
Do androids read about electric sheep? Machine reading comprehension algorithms

Presentation video

Recording from Devoxx Ukraine 2019

Research papers

Stanford Question Answering Dataset (SQuAD)

Papers: https://arxiv.org/pdf/1606.05250.pdf (2016), https://arxiv.org/pdf/1806.03822.pdf (2018)
Website: https://rajpurkar.github.io/SQuAD-explorer/
Leaderboard: see the main page of the website
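
To get a feel for the SQuAD task (extractive question answering: the answer is a span of a given context paragraph), here is a minimal sketch using the Hugging Face `transformers` library. The library and its default SQuAD-fine-tuned model are my assumptions for illustration, not something taken from the talk or the demo notebook:

```python
# Minimal SQuAD-style extractive QA sketch (assumes: pip install transformers).
# On first use, the pipeline downloads a default model fine-tuned on SQuAD.
from transformers import pipeline

qa = pipeline("question-answering")

context = (
    "Do Androids Dream of Electric Sheep? is a 1968 science fiction novel "
    "by Philip K. Dick. It served as the primary basis for the 1982 film "
    "Blade Runner."
)

result = qa(
    question="Who wrote Do Androids Dream of Electric Sheep?",
    context=context,
)

# The model returns a span copied from the context plus a confidence score.
print(result["answer"], result["score"])
```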

Reading Comprehension with Multiple Hops (QAngaroo)

Paper: https://arxiv.org/pdf/1710.06481.pdf (2018)
Website: https://qangaroo.cs.ucl.ac.uk/
Leaderboard: https://qangaroo.cs.ucl.ac.uk/leaderboard.html

NarrativeQA Reading Comprehension Challenge (NarrativeQA)

Paper: https://arxiv.org/pdf/1712.07040.pdf (2017)
Website: https://github.com/deepmind/narrativeqa/
Leaderboard: https://paperswithcode.com/sota/reading-comprehension-narrativeqa (unofficial)

General Language Understanding Evaluation (GLUE)

Paper: https://arxiv.org/pdf/1804.07461.pdf (2019)
Website: https://gluebenchmark.com/
Leaderboard: https://gluebenchmark.com/leaderboard/
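
GLUE is not a single dataset but a suite of nine sentence- and sentence-pair understanding tasks. As an illustration (assuming the Hugging Face `datasets` library, which mirrors the benchmark; this is my example, not part of the talk), the sketch below loads one task, MRPC (paraphrase detection), and prints a training example:

```python
# A minimal look at one GLUE task (assumes: pip install datasets).
from datasets import load_dataset

# MRPC: decide whether two sentences are paraphrases (label 1) or not (label 0).
mrpc = load_dataset("glue", "mrpc", split="train")

example = mrpc[0]
print(example["sentence1"])
print(example["sentence2"])
print(example["label"])  # 0 or 1
```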

Other sources

Recommended materials

NLP websites

The following websites aggregate the latest research papers and datasets, grouped by NLP task:

Videos

AI skepticism

Machines Beat Humans on a Reading Test. But Do They Understand?

But is AI actually starting to understand our language — or is it just getting better at gaming our systems? As BERT-based neural networks have taken benchmarks like GLUE by storm, new evaluation methods have emerged that seem to paint these powerful NLP systems as computational versions of Clever Hans, the early 20th-century horse who seemed smart enough to do arithmetic, but who was actually just following unconscious cues from his trainer.

Why A.I. is a big fat lie

(...) we have very little insight into how our brains pull off what they pull off. Replicating a brain neuron-by-neuron is a science fiction "what if" pipe dream. And introspection – when you think about how you think – is interesting, big time, but ultimately tells us precious little about what's going on in there.

Your common sense is more amazing – and unachievable – than your common sense can sense. You're amazing. Your ability to think abstractly and "understand" the world around you might feel simple in your moment-to-moment experience, but it's incredibly complex. That experience of simplicity is either a testament to how adept your uniquely human brain is or a great illusion that's intrinsic to the human condition – or probably both.

The trick is to take a moment to think about this difference. Our own personal experiences of being one of those smart creatures called a human is what catches us in a thought trap. Our very particular and very impressive capabilities are hidden from ourselves beneath a veil of a conscious experience that just kind of feels like "clarity." It feels simple, but under the surface, it's oh so complex. Replicating our "general common sense" is a fanciful notion that no technological advancements have ever moved us towards in any meaningful way.

DeepMind's Losses and the Future of Artificial Intelligence

Researchers in machine learning now often ask, “How can machines optimize complex problems using massive amounts of data?” We might also ask, “How do children acquire language and come to understand the world, using less power and data than current AI systems do?” If we spent more time, money, and energy on the latter question than the former, we might get to artificial general intelligence a lot sooner.

Deep Learning: A Critical Appraisal

What exactly is deep learning, and what has it shown about the nature of intelligence? What can we expect it to do, and where might we expect it to break down? How close or far are we from “artificial general intelligence”, and a point at which machines show a human-like flexibility in solving unfamiliar problems? The purpose of this paper is both to temper some irrational exuberance and also to consider what we as a field might need to move forward.

AI in healthcare

How IBM Watson Overpromised and Underdelivered on AI Health Care

In many attempted applications, Watson’s NLP struggled to make sense of medical text—as have many other AI systems. “We’re doing incredibly better with NLP than we were five years ago, yet we’re still incredibly worse than humans,” says Yoshua Bengio, a professor of computer science at the University of Montreal and a leading AI researcher. In medical text documents, Bengio says, AI systems can’t understand ambiguity and don’t pick up on subtle clues that a human doctor would notice. Bengio says current NLP technology can help the health care system: "It doesn’t have to have full understanding to do something incredibly useful," he says. But no AI built so far can match a human doctor’s comprehension and insight. "No, we’re not there," he says.

Watson learned fairly quickly how to scan articles about clinical studies and determine the basic outcomes. But it proved impossible to teach Watson to read the articles the way a doctor would. "The information that physicians extract from an article, that they use to change their care, may not be the major point of the study," Kris says. Watson’s thinking is based on statistics, so all it can do is gather statistics about main outcomes, explains Kris. "But doctors don’t work that way."

AI And Healthcare: Is The Bloom Finally Off The Rose?

Bender goes on to also point out that many of the so-called early successes around "AI in drug discovery" are important but also "a good number of steps away from the more difficult biological and in vivo stages, where efficacy and toxicity in living organisms decides the fate of drugs waiting to be discovered. Hence, there is still a gap that needs to be bridged…"

The key issue, he concludes, is the need for "sufficient and sufficiently relevant data in order to predict properties of potential therapies that are relevant for the in vivo situation, which are related to efficacy and toxicity-relevant endpoints."

Viewing the notebook online

GitHub can usually render Jupyter notebooks, so first just click the nlp_demo.ipynb file. If rendering fails (typically with a generic error like "Sorry, something went wrong. Reload?"), you have two options:

  1. view the notebook exported to other formats in the /exported directory,
  2. use an external nbviewer server, e.g. nlp_demo.ipynb @ nbviewer.jupyter.org.

Running the notebook

Check NOTES.md for hints on running the notebook locally, or use Binder to deploy it to the cloud.

Binder

It may take Binder 10-15 minutes to download all the dependencies and build the Docker image for the project (but it's amazing to watch it happen in a browser window).
