From e401b4f2b12b478fee27326288345ece0fda99e9 Mon Sep 17 00:00:00 2001
From: Cheng Ren <1428327+chengren311@users.noreply.github.com>
Date: Mon, 11 Jan 2021 11:24:49 -0800
Subject: [PATCH] add avro tutorial testing data (#1267)

Co-authored-by: Cheng Ren <1428327+chengren311@users.noreply.github.com>
---
 docs/tutorials/avro.ipynb     | 576 ++++++++++++++++++++++++++++++++++
 docs/tutorials/avro/test.avro | Bin 0 -> 369 bytes
 docs/tutorials/avro/test.avsc |   1 +
 3 files changed, 577 insertions(+)
 create mode 100644 docs/tutorials/avro.ipynb
 create mode 100644 docs/tutorials/avro/test.avro
 create mode 100644 docs/tutorials/avro/test.avsc

diff --git a/docs/tutorials/avro.ipynb b/docs/tutorials/avro.ipynb
new file mode 100644
index 0000000000..9bf0e52682
--- /dev/null
+++ b/docs/tutorials/avro.ipynb
@@ -0,0 +1,576 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Tce3stUlHN0L"
+ },
+ "source": [
+ "##### Copyright 2020 The TensorFlow IO Authors."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "tuOe1ymfHZPu"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
+ "# you may not use this file except in compliance with the License.\n",
+ "# You may obtain a copy of the License at\n",
+ "#\n",
+ "# https://www.apache.org/licenses/LICENSE-2.0\n",
+ "#\n",
+ "# Unless required by applicable law or agreed to in writing, software\n",
+ "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
+ "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
+ "# See the License for the specific language governing permissions and\n",
+ "# limitations under the License."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "qFdPvlXBOdUN"
+ },
+ "source": [
+ "# Avro Dataset API"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "MfBg1C5NB3X0"
+ },
+ "source": [
+ "<table class=\"tfo-notebook-buttons\" align=\"left\">\n",
\n", + " View on TensorFlow.org\n", + " \n", + " Run in Google Colab\n", + " \n", + " View source on GitHub\n", + " \n", + " Download notebook\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "xHxb-dlhMIzW" + }, + "source": [ + "## Overview\n", + "\n", + "The objective of Avro Dataset API is to load Avro formatted data natively into TensorFlow as TensorFlow dataset. Avro is a data serialization system similiar to Protocol Buffers. It's widely used in Apache Hadoop where it can provide both a serialization format for persistent data, and a wire format for communication between Hadoop nodes. Avro data is a row-oriented, compacted binary data format. It relies on schema which is stored as a separate JSON file. For the spec of Avro format and schema declaration, please refer to the official manual.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "MUXex9ctTuDB" + }, + "source": [ + "## Setup package\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "upgCc3gXybsA" + }, + "source": [ + "### Install the required tensorflow-io package" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "uUDYyMZRfkX4" + }, + "outputs": [], + "source": [ + "!pip install tensorflow-io" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "gjrZNJQRJP-U" + }, + "source": [ + "### Import packages" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": { + "id": "m6KXZuTBWgRm" + }, + "outputs": [], + "source": [ + "import tensorflow as tf\n", + "import tensorflow_io as tfio\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "eCgO11GTJaTj" + }, + "source": [ + "### Validate tf and tfio imports" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": { + "id": "dX74RKfZ_TdF" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "tensorflow-io version: 0.17.0\n", + "tensorflow version: 2.4.0\n" + ] + } + ], + "source": [ + "print(\"tensorflow-io version: {}\".format(tfio.__version__))\n", + "print(\"tensorflow version: {}\".format(tf.__version__))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "J0ZKhA6s0Pjp" + }, + "source": [ + "## Usage" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "4CfKVmCvwcL7" + }, + "source": [ + "### Explore the dataset\n", + "\n", + "For the purpose of this tutorial, let's download the sample Avro dataset. \n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "IGnbXuVnSo8T" + }, + "source": [ + "Download a sample Avro file:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "Tu01THzWcE-J" + }, + "outputs": [], + "source": [ + "!curl -OL https://github.com/tensorflow/io/raw/master/docs/tutorials/avro/test.avro\n", + "!ls -l test.avro" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "IGnbXuVnSo8T" + }, + "source": [ + "Download the corresponding schema file of the sample Avro file:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "Tu01THzWcE-J" + }, + "outputs": [], + "source": [ + "!curl -OL https://github.com/tensorflow/io/raw/master/docs/tutorials/avro/test.avsc\n", + "!ls -l test.avsc" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "z9GCyPWNuOm7" + }, + "source": [ + "In the above example, a testing Avro dataset were created based on mnist dataset. The original mnist dataset in TFRecord format is generated from TF named dataset. However, the mnist dataset is too large as a demo dataset. For simplicity purpose, most of it were trimmed and first few records only were kept. 
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "z9GCyPWNuOm7"
+ },
+ "source": [
+ "The schema of `test.avro`, represented by `test.avsc`, is a JSON-formatted file.\n",
+ "To view `test.avsc`:\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "nS3eTBvjt-O5"
+ },
+ "outputs": [],
+ "source": [
+ "def print_schema(avro_schema_file):\n",
+ "  with open(avro_schema_file, 'r') as handle:\n",
+ "    parsed = json.load(handle)\n",
+ "    print(json.dumps(parsed, indent=4, sort_keys=True))\n",
+ "\n",
+ "print_schema('test.avsc')\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "4CfKVmCvwcL7"
+ },
+ "source": [
+ "### Prepare the dataset\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "z9GCyPWNuOm7"
+ },
+ "source": [
+ "Load `test.avro` as a TensorFlow dataset with the Avro Dataset API:\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "nS3eTBvjt-O5"
+ },
+ "outputs": [],
+ "source": [
+ "features = {\n",
+ "  'features[*]': tfio.experimental.columnar.VarLenFeatureWithRank(dtype=tf.int32),\n",
+ "  'label': tf.io.FixedLenFeature(shape=[], dtype=tf.int32, default_value=-100),\n",
+ "  'dataType': tf.io.FixedLenFeature(shape=[], dtype=tf.string)\n",
+ "}\n",
+ "\n",
+ "schema = tf.io.gfile.GFile('test.avsc').read()\n",
+ "\n",
+ "dataset = tfio.experimental.columnar.make_avro_record_dataset(file_pattern=['test.avro'],\n",
+ "                                                              reader_schema=schema,\n",
+ "                                                              features=features,\n",
+ "                                                              shuffle=False,\n",
+ "                                                              batch_size=3,\n",
+ "                                                              num_epochs=1)\n",
+ "\n",
+ "for record in dataset:\n",
+ "  print(record['features[*]'])\n",
+ "  print(record['label'])\n",
+ "  print(record['dataType'])\n",
+ "  print(\"--------------------\")\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "IF_kYz_o2DH4"
+ },
+ "source": [
+ "The above example converts `test.avro` into a TensorFlow dataset. Each element of the dataset is a dictionary whose keys are the feature names and whose values are the converted sparse or dense tensors.\n",
+ "E.g., it converts the `features`, `label`, and `dataType` fields to a VarLenFeature (SparseTensor), FixedLenFeature (DenseTensor), and FixedLenFeature (DenseTensor), respectively. Since `batch_size` is 3, it coerces 3 records from `test.avro` into one element of the result dataset.\n",
+ "For the first record in `test.avro`, whose label is null, the Avro reader replaces it with the specified default value (-100).\n",
+ "In this example, there are 4 records in total in `test.avro`. Since the batch size is 3, the result dataset contains 2 elements, the last of which has a batch size of 1. However, the user can also drop the final batch if its size is smaller than the batch size by enabling `drop_final_batch`. E.g.:\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "nS3eTBvjt-O5"
+ },
+ "outputs": [],
+ "source": [
+ "dataset = tfio.experimental.columnar.make_avro_record_dataset(file_pattern=['test.avro'],\n",
+ "                                                              reader_schema=schema,\n",
+ "                                                              features=features,\n",
+ "                                                              shuffle=False,\n",
+ "                                                              batch_size=3,\n",
+ "                                                              drop_final_batch=True,\n",
+ "                                                              num_epochs=1)\n",
+ "\n",
+ "for record in dataset:\n",
+ "  print(record)\n"
+ ]
+ },
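+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "z9GCyPWNuOm7"
+ },
+ "source": [
+ "Note that `features[*]` is parsed into a sparse tensor. If downstream ops expect dense inputs, it can be converted with `tf.sparse.to_dense`. The cell below is a minimal sketch of that conversion; positions missing in a batch are filled with the default value 0:\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "nS3eTBvjt-O5"
+ },
+ "outputs": [],
+ "source": [
+ "for record in dataset:\n",
+ "  # Densify the batched sparse feature column; shorter feature arrays\n",
+ "  # are zero-padded up to the longest array in the batch.\n",
+ "  dense_features = tf.sparse.to_dense(record['features[*]'])\n",
+ "  print(dense_features)\n"
+ ]
+ },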
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "IF_kYz_o2DH4"
+ },
+ "source": [
+ "One can also increase `num_parallel_reads` to expedite Avro data processing by increasing Avro parse/read parallelism.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "nS3eTBvjt-O5"
+ },
+ "outputs": [],
+ "source": [
+ "dataset = tfio.experimental.columnar.make_avro_record_dataset(file_pattern=['test.avro'],\n",
+ "                                                              reader_schema=schema,\n",
+ "                                                              features=features,\n",
+ "                                                              shuffle=False,\n",
+ "                                                              num_parallel_reads=16,\n",
+ "                                                              batch_size=3,\n",
+ "                                                              drop_final_batch=True,\n",
+ "                                                              num_epochs=1)\n",
+ "\n",
+ "for record in dataset:\n",
+ "  print(record)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "IF_kYz_o2DH4"
+ },
+ "source": [
+ "For detailed usage of `make_avro_record_dataset`, please refer to the API doc.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "4CfKVmCvwcL7"
+ },
+ "source": [
+ "### Train tf.keras models with the Avro dataset\n",
+ "\n",
+ "Now let's walk through an end-to-end example of tf.keras model training with the Avro dataset based on the MNIST dataset.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "z9GCyPWNuOm7"
+ },
+ "source": [
+ "Load `test.avro` as a TensorFlow dataset with the Avro Dataset API:\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "nS3eTBvjt-O5"
+ },
+ "outputs": [],
+ "source": [
+ "features = {\n",
+ "  'features[*]': tfio.experimental.columnar.VarLenFeatureWithRank(dtype=tf.int32)\n",
+ "}\n",
+ "\n",
+ "schema = tf.io.gfile.GFile('test.avsc').read()\n",
+ "\n",
+ "dataset = tfio.experimental.columnar.make_avro_record_dataset(file_pattern=['test.avro'],\n",
+ "                                                              reader_schema=schema,\n",
+ "                                                              features=features,\n",
+ "                                                              shuffle=False,\n",
+ "                                                              batch_size=1,\n",
+ "                                                              num_epochs=1)\n"
+ ]
+ },
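+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "z9GCyPWNuOm7"
+ },
+ "source": [
+ "The result is a regular `tf.data.Dataset`, so standard transformations can be chained onto it. As a minimal sketch, prefetching lets Avro parsing overlap with model execution:\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "nS3eTBvjt-O5"
+ },
+ "outputs": [],
+ "source": [
+ "# Prefetch upcoming batches so Avro decoding overlaps with training.\n",
+ "# tf.data.AUTOTUNE requires TF >= 2.4; on older versions use tf.data.experimental.AUTOTUNE.\n",
+ "dataset = dataset.prefetch(tf.data.AUTOTUNE)\n"
+ ]
+ },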
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "z9GCyPWNuOm7"
+ },
+ "source": [
+ "Define a simple keras model:\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {
+ "id": "m6KXZuTBWgRm"
+ },
+ "outputs": [],
+ "source": [
+ "def build_and_compile_cnn_model():\n",
+ "  model = tf.keras.Sequential()\n",
+ "  model.compile(optimizer='sgd', loss='mse')\n",
+ "  return model\n",
+ "\n",
+ "model = build_and_compile_cnn_model()\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "4CfKVmCvwcL7"
+ },
+ "source": [
+ "### Train the keras model with the Avro dataset\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {
+ "id": "m6KXZuTBWgRm"
+ },
+ "outputs": [],
+ "source": [
+ "model.fit(x=dataset, epochs=1, steps_per_epoch=1, verbose=1)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "IF_kYz_o2DH4"
+ },
+ "source": [
+ "The Avro dataset can parse and coerce any Avro data into TensorFlow tensors, including records in records, maps, arrays, branches, and enumerations. The parsing information is passed into the Avro dataset implementation as a map where:\n",
+ "\n",
+ "- keys encode how to parse the data,\n",
+ "- values encode how to coerce the data into TensorFlow tensors – deciding the primitive type (e.g. bool, int, long, float, double, string) as well as the tensor type (e.g. sparse or dense).\n",
+ "\n",
+ "A listing of TensorFlow's parser types (see Table 1) and the coercion of primitive types (Table 2) is provided.\n",
+ "\n",
+ "Table 1: the supported TensorFlow parser types\n",
+ "\n",
+ "TensorFlow Parser Types|TensorFlow Tensors|Explanation\n",
+ "----|----|------\n",
+ "tf.io.FixedLenFeature([], tf.int32)|dense tensor|Parse a fixed-length feature; that is, all rows have the same constant number of elements, e.g. just one element or an array that always has the same number of elements for each row\n",
+ "tf.io.SparseFeature(index_key=['key_1st_index', 'key_2nd_index'], value_key='key_value', dtype=tf.int64, size=[20, 50])|sparse tensor|Parse a sparse feature where each row has a variable-length list of indices and values. The 'index_key' identifies the indices. The 'value_key' identifies the values. The 'dtype' is the data type. The 'size' is the expected maximum index value for each index entry\n",
+ "tfio.experimental.columnar.VarLenFeatureWithRank(dtype=tf.int64)|sparse tensor|Parse a variable-length feature; that means each data row can have a variable number of elements, e.g. the 1st row has 5 elements and the 2nd row has 7 elements\n",
+ "\n",
+ "Table 2: the supported conversions from Avro types to TensorFlow's types\n",
+ "\n",
+ "Avro Primitive Type|TensorFlow Primitive Type\n",
+ "----|----\n",
+ "boolean: a binary value|tf.bool\n",
+ "bytes: a sequence of 8-bit unsigned bytes|tf.string\n",
+ "double: double-precision 64-bit IEEE floating point number|tf.float64\n",
+ "enum: enumeration type|tf.string using the symbol name\n",
+ "float: single-precision 32-bit IEEE floating point number|tf.float32\n",
+ "int: 32-bit signed integer|tf.int32\n",
+ "long: 64-bit signed integer|tf.int64\n",
+ "null: no value|uses the default value\n",
+ "string: unicode character sequence|tf.string\n"
+ ]
+ },
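+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "IF_kYz_o2DH4"
+ },
+ "source": [
+ "As an illustration of Table 1, the sketch below constructs a feature map exercising all three parser types. All field names in it (`fixed_len_field`, `key_1st_index`, `key_2nd_index`, `key_value`, `var_len_field`) are hypothetical placeholders that would have to exist in your own Avro schema; they are not present in `test.avsc`:\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "nS3eTBvjt-O5"
+ },
+ "outputs": [],
+ "source": [
+ "# A hypothetical feature map illustrating the three parser types from Table 1.\n",
+ "# The field names below are placeholders and are NOT present in test.avsc.\n",
+ "example_features = {\n",
+ "  # Dense tensor: every record carries exactly one int32 value.\n",
+ "  'fixed_len_field': tf.io.FixedLenFeature(shape=[], dtype=tf.int32, default_value=0),\n",
+ "  # Sparse tensor: 2-D indices read from two index fields, values from a value field.\n",
+ "  'sparse_field': tf.io.SparseFeature(index_key=['key_1st_index', 'key_2nd_index'],\n",
+ "                                      value_key='key_value',\n",
+ "                                      dtype=tf.int64,\n",
+ "                                      size=[20, 50]),\n",
+ "  # Rank-1 sparse tensor: a variable number of elements per record.\n",
+ "  'var_len_field[*]': tfio.experimental.columnar.VarLenFeatureWithRank(dtype=tf.int64)\n",
+ "}\n",
+ "\n",
+ "print(example_features)\n"
+ ]
+ },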
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "IF_kYz_o2DH4"
+ },
+ "source": [
+ "A comprehensive set of examples of the Avro Dataset API is provided within the tests.\n"
+ ]
+ }
+ ],
+ "metadata": {
+ "accelerator": "GPU",
+ "colab": {
+ "collapsed_sections": [
+ "Tce3stUlHN0L"
+ ],
+ "name": "avro.ipynb",
+ "toc_visible": true
+ },
+ "kernelspec": {
+ "display_name": "Python 3",
+ "name": "python3"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
diff --git a/docs/tutorials/avro/test.avro b/docs/tutorials/avro/test.avro
new file mode 100644
index 0000000000000000000000000000000000000000..35f63a6b6239d4cd8c4e60a29ac226538128f5a8
GIT binary patch
literal 369
zcmeZI%3@>@Nh~YM*GtY%NloU+E6vFf1M`cMGg5OC=dn~Pl~fj_Dp@Hg6{RNU7o{la
zC@AG6=7L2$a}(23T@p(Yi&INL;%S+wIVr_Jwb5{0aE4N1QBh(gNL6M@YA#5TQf6L>
zQZ15kX{m`NrA4X5AVIjkXs|MnDxlMpVv&^RBqpWips0mwQcBG$%|&);3eb@uKz|g2
z1dA(klk#)G?o|xY
mkGp+k=KR+j7F-Mr3``shObjeQ29RI^l59*&K#GG2T{QsBCUl?x

literal 0
HcmV?d00001

diff --git a/docs/tutorials/avro/test.avsc b/docs/tutorials/avro/test.avsc
new file mode 100644
index 0000000000..904864b373
--- /dev/null
+++ b/docs/tutorials/avro/test.avsc
@@ -0,0 +1 @@
+{"name": "ImageDataset", "type": "record", "fields": [{"name": "features", "type": {"type": "array", "items": "int"}}, {"name": "label", "type": ["int", "null"]}, {"name": "dataType", "type": {"type": "enum", "name": "dataTypes", "symbols": ["TRAINING", "VALIDATION"]}}]}
\ No newline at end of file