diff --git a/notebooks/204-vision-facies-segmentation/204-vision-facies-segmentation.ipynb b/notebooks/204-vision-facies-segmentation/204-vision-facies-segmentation.ipynb
new file mode 100644
index 00000000000..f64c48e5332
--- /dev/null
+++ b/notebooks/204-vision-facies-segmentation/204-vision-facies-segmentation.ipynb
@@ -0,0 +1,775 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Facies Segmentation Python Demo\n",
+    "\n",
+    "This Jupyter notebook demonstrates how to run facies segmentation on seismic data using the OpenVINO™ Toolkit.\n",
+    "\n",
+    "The model is used for seismic interpretation tasks. A facies is the set of characteristics of a rock that reflects its origin and differentiates it from the units around it. Mineralogy, sedimentary source, fossil content, sedimentary structures, and texture distinguish one facies from another. The data are presented as 3D arrays.\n",
+    "\n",
+    "See the source repository to learn more about the model architecture and training method: https://github.com/yalaudah/facies_classification_benchmark\n",
+    "\n",
+    "The `IR` model was created with the OpenVINO™ [Model Optimizer](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html).\n",
+    "\n",
+    "To convert the model from ONNX to `IR`, we used the `mo_onnx.py` script with the `--extension` flag as shown below:\n",
+    "\n",
+    "```bash\n",
+    "python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_onnx.py \\\n",
+    "    --input_model model.onnx \\\n",
+    "    --extension openvino_pytorch_layers/mo_extensions\n",
+    "```\n",
+    "where `openvino_pytorch_layers/mo_extensions` is Python code from the [openvino_pytorch_layers](https://github.com/dkurt/openvino_pytorch_layers) repository."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "---\n",
+    "**NOTE**\n",
+    "\n",
+    "Currently, only 2D visualization is supported in `jupyter lab`; for 3D visualization, please use `jupyter notebook` instead. To switch from Jupyter Lab to the classic Jupyter Notebook, select `Help` from the menu bar and then `Launch Classic Notebook`.\n",
+    "\n",
+    "---"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Demo Output\n",
+    "\n",
+    "The application displays a 3D visualization of the resulting facies classification masks inside the Jupyter notebook, using itkwidgets."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## How It Works\n",
+    "At start-up, the demo application loads the network and a given dataset file into the Inference Engine plugin. When inference is complete, the application displays the 3D itkwidgets viewer with the facies interpretation.\n",
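+    "\n",
+    "In OpenVINO™ terms, the core of the pipeline boils down to a few calls. The snippet below is only a condensed sketch of what the cells in this notebook do step by step, using the model path from this demo (`seismic_slice` here stands for any 2D slice of the seismic cube):\n",
+    "\n",
+    "```python\n",
+    "ie = IECore()\n",
+    "ie.add_extension(get_extensions_path(), \"CPU\")  # custom CPU extension, see below\n",
+    "net = ie.read_network(\"model/facies-segmentation-deconvnet.xml\")\n",
+    "exec_net = ie.load_network(network=net, device_name=\"CPU\")\n",
+    "output = exec_net.infer(inputs={\"input\": seismic_slice})[\"output\"]\n",
+    "```"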
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Installation of dependencies"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Import dependencies\n",
+    "\n",
+    "Import the required packages:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import os\n",
+    "from collections import defaultdict\n",
+    "\n",
+    "import ipywidgets as widgets\n",
+    "import matplotlib as mpl\n",
+    "import matplotlib.pyplot as plt\n",
+    "import numpy as np\n",
+    "from openvino.inference_engine import IECore\n",
+    "from openvino_extensions import get_extensions_path\n",
+    "from tqdm import tqdm"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Download the model\n",
+    "\n",
+    "Create a model directory and download the model files:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "! mkdir -p model\n",
+    "! curl -L -o model/facies-segmentation-deconvnet.bin \"https://www.dropbox.com/s/x0c7ao8kebxykj1/facies-segmentation-deconvnet.bin?dl=1\"\n",
+    "! curl -L -o model/facies-segmentation-deconvnet.xml \"https://www.dropbox.com/s/g288xdcd7xumqm7/facies-segmentation-deconvnet.xml?dl=1\"\n",
+    "! curl -L -o model/facies-segmentation-deconvnet.mapping \"https://www.dropbox.com/s/a7kge25hfpjnhvf/facies-segmentation-deconvnet.mapping?dl=1\""
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Download the dataset"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The open-source dataset used in this demo is available on GitHub: https://github.com/yalaudah/facies_classification_benchmark"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Download the dataset\n",
+    "! mkdir -p data\n",
+    "! curl -L -o data/test2_seismic.npy \"https://www.dropbox.com/s/sbj2atyukpjgssx/test2_seismic.npy?dl=1\""
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Define functions"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def get_config():\n",
+    "    config = defaultdict(str)\n",
+    "    config.update(\n",
+    "        {\n",
+    "            \"model\": os.path.join(\"model\", \"facies-segmentation-deconvnet.xml\"),\n",
+    "            \"data_path\": os.path.join(\"data\", \"test2_seismic.npy\"),\n",
+    "            \"name_classes\": [\n",
+    "                \"upper_ns\",\n",
+    "                \"middle_ns\",\n",
+    "                \"lower_ns\",\n",
+    "                \"rijnland_chalk\",\n",
+    "                \"scruff\",\n",
+    "                \"zechstein\",\n",
+    "            ],\n",
+    "            \"slice_no\": 101,\n",
+    "            \"slice_axis\": 1,\n",
+    "            \"edge_one\": (0, 30, 0),\n",
+    "            \"edge_two\": (500, 199, 244),\n",
+    "        }\n",
+    "    )\n",
+    "\n",
+    "    return config"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def normalize(data, mu=0, std=1):\n",
+    "    \"\"\"Standardize the data, then rescale it to the given mean and std.\"\"\"\n",
+    "    if not isinstance(data, np.ndarray):\n",
+    "        data = np.array(data)\n",
+    "    data = (data - data.mean()) / data.std()\n",
+    "    return data * std + mu\n",
+    "\n",
+    "\n",
+    "def load_data(config):\n",
+    "    root, ext = os.path.splitext(config[\"data_path\"])\n",
+    "    data_format = ext[1:]\n",
+    "    assert root and data_format, f'Invalid path to data file: {config[\"data_path\"]}'\n",
+    "    if data_format == \"npy\":\n",
+    "        data = np.load(config[\"data_path\"])\n",
+    "    elif data_format == \"dat\":\n",
+    "        data = np.fromfile(config[\"data_path\"])\n",
+    "    elif data_format == \"segy\":\n",
+    "        import segyio\n",
+    "\n",
+    "        data = segyio.tools.cube(config[\"data_path\"])\n",
+    "        data = np.moveaxis(data, -1, 0)\n",
+    "        data = np.ascontiguousarray(data, \"float32\")\n",
+    "    else:\n",
+    "        assert False, f\"Unsupported data format: {data_format}\"\n",
+    "\n",
+    "    data = normalize(data, mu=1e-8, std=0.2097654)\n",
+    "    print(f\"[INFO] Dataset has been loaded, shape is {data.shape}\")\n",
+    "    print(f\"[INFO] Dataset mean is {data.mean():.5f}, std {data.std():.5f}\")\n",
+    "\n",
+    "    # Crop the cube to the region of interest defined by the two corner points\n",
+    "    x_min = min(config[\"edge_one\"][0], config[\"edge_two\"][0])\n",
+    "    x_max = max(config[\"edge_one\"][0], config[\"edge_two\"][0])\n",
+    "    y_min = min(config[\"edge_one\"][1], config[\"edge_two\"][1])\n",
+    "    y_max = max(config[\"edge_one\"][1], config[\"edge_two\"][1])\n",
+    "    z_min = min(config[\"edge_one\"][2], config[\"edge_two\"][2])\n",
+    "    z_max = max(config[\"edge_one\"][2], config[\"edge_two\"][2])\n",
+    "    x_lim, y_lim, z_lim = data.shape\n",
+    "    assert x_min >= 0 and y_min >= 0 and z_min >= 0, \"Invalid edges\"\n",
+    "    assert x_max < x_lim and y_max < y_lim and z_max < z_lim, \"Invalid edges\"\n",
+    "    sub_data = data[x_min:x_max, y_min:y_max, z_min:z_max]\n",
+    "    return sub_data"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def reshape_model(net, shape, axis=None):\n",
+    "    # By default, slice along the smallest dimension of the cube\n",
+    "    if axis is None:\n",
+    "        index_of_dim = np.argmin(shape)\n",
+    "    else:\n",
+    "        index_of_dim = axis\n",
+    "    input_data_shape = list(shape)\n",
+    "    del input_data_shape[index_of_dim]\n",
+    "\n",
+    "    input_net_info = net.input_info\n",
+    "    input_name = next(iter(input_net_info))\n",
+    "    input_net_shape = input_net_info[input_name].input_data.shape\n",
+    "\n",
+    "    print(f\"[INFO] Inference will run on slices of shape {input_data_shape}\")\n",
+    "    if input_data_shape != input_net_shape[-2:]:\n",
+    "        net.reshape({input_name: [1, 1, *input_data_shape]})\n",
+    "        print(f\"[INFO] Reshaping the model to fit the slice shape: {input_data_shape}\")\n",
+    "    else:\n",
+    "        print(\"[INFO] Using the model without reshaping\")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def infer_cube(exec_net, data, axis=None):\n",
+    "    if axis is None:\n",
+    "        index_of_dim = np.argmin(data.shape)\n",
+    "    else:\n",
+    "        index_of_dim = axis\n",
+    "    predicted_cube = np.empty(data.shape)\n",
+    "    size = data.shape[index_of_dim]\n",
+    "    for slice_index in tqdm(range(size)):\n",
+    "        # Select the 2D slice along the chosen axis\n",
+    "        slicer = [slice(None)] * 3\n",
+    "        slicer[index_of_dim] = slice_index\n",
+    "        inp = data[tuple(slicer)]\n",
+    "        out = exec_net.infer(inputs={\"input\": inp})[\"output\"]\n",
+    "        out = np.argmax(out, axis=1).squeeze()\n",
+    "        predicted_cube[tuple(slicer)] = out\n",
+    "    return predicted_cube"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def discrete_cmap(N, base_cmap=None):\n",
+    "    \"\"\"Create an N-bin discrete colormap from the specified input map\"\"\"\n",
+    "\n",
+    "    # Note that if base_cmap is a string or None, you can simply do\n",
+    "    #    return plt.cm.get_cmap(base_cmap, N)\n",
+    "    # The following works for a string, None, or a colormap instance:\n",
+    "\n",
+    "    base = plt.cm.get_cmap(base_cmap)\n",
+    "    color_list = base(np.linspace(0, 1, N))\n",
+    "    cmap_name = base.name + str(N)\n",
+    "    return base.from_list(cmap_name, color_list, N)\n",
+    "\n",
+    "\n",
+    "def show_legend(labels, cmap):\n",
+    "    N = len(labels)\n",
+    "    fig = plt.figure(figsize=(12, 6))\n",
+    "    ax1 = fig.add_axes([0.05, 0.80, 0.9, 0.15])\n",
+    "    cb1 = mpl.colorbar.ColorbarBase(\n",
+    "        ax1,\n",
+    "        cmap=cmap,\n",
+    "        ticks=np.arange(0, N, 1) / N + 1 / (2 * N),\n",
+    "        orientation=\"horizontal\",\n",
+    "    )\n",
+    "    cb1.ax.set_xticklabels(labels, fontsize=20)\n",
+    "    cb1.set_label(\"Legend\", fontsize=24)\n",
+    "    plt.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Get the config and load the dataset"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "config = get_config()\n",
+    "data = load_data(config)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Running"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Load the model"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "ie = IECore()\n",
+    "ie.add_extension(get_extensions_path(), \"CPU\")\n",
+    "net = ie.read_network(config[\"model\"])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We use the `get_extensions_path` function to set up a custom CPU extension for this model. This is necessary because the model uses the `MaxUnpool2D` layer, which is not yet supported by OpenVINO™.\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Run the model on a single slice"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's define a function for decoding class labels into colors:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "LABEL_COLOURS = np.asarray(\n",
+    "    [\n",
+    "        [69, 117, 180],\n",
+    "        [145, 191, 219],\n",
+    "        [224, 243, 248],\n",
+    "        [254, 224, 144],\n",
+    "        [252, 141, 89],\n",
+    "        [215, 48, 39],\n",
+    "    ]\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def decode_segmap(label_mask):\n",
+    "    \"\"\"Decode segmentation class labels into a color image.\n",
+    "\n",
+    "    Args:\n",
+    "        label_mask (np.ndarray): an (M,N) array of integer values denoting\n",
+    "            the class label at each spatial location.\n",
+    "\n",
+    "    Returns:\n",
+    "        np.ndarray: the resulting decoded color image.\n",
+    "    \"\"\"\n",
+    "    r = label_mask.copy()\n",
+    "    g = label_mask.copy()\n",
+    "    b = label_mask.copy()\n",
+    "    for ll in range(0, len(LABEL_COLOURS)):\n",
+    "        r[label_mask == ll] = LABEL_COLOURS[ll, 0]\n",
+    "        g[label_mask == ll] = LABEL_COLOURS[ll, 1]\n",
+    "        b[label_mask == ll] = LABEL_COLOURS[ll, 2]\n",
+    "    rgb = np.zeros((label_mask.shape[0], label_mask.shape[1], 3))\n",
+    "    rgb[:, :, 0] = r / 255.0\n",
+    "    rgb[:, :, 1] = g / 255.0\n",
+    "    rgb[:, :, 2] = b / 255.0\n",
+    "    return rgb"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "A function for plotting the output of the model:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def show_facies_interpretation(input_slice, output_labels_slice):\n",
+    "    from matplotlib.colors import LinearSegmentedColormap\n",
+    "\n",
+    "    res_image = decode_segmap(output_labels_slice.squeeze())\n",
+    "\n",
+    "    color_list = LABEL_COLOURS / 255\n",
+    "    cm = LinearSegmentedColormap.from_list(\"custom_cmap\", color_list, N=6)\n",
+    "\n",
+    "    fig, axs = plt.subplots(1, 2, figsize=(15, 12))\n",
+    "    fig.suptitle(\"Facies classification results\", fontsize=22)\n",
+    "\n",
+    "    axs[0].imshow(input_slice, cmap=\"Greys\")\n",
+    "    axs[0].set_title(\"Input data slice\", fontsize=15)\n",
+    "\n",
+    "    im = axs[1].imshow(res_image, cmap=cm)\n",
+    "    axs[1].set_title(\"Interpretation of the slice\", fontsize=15)\n",
+    "\n",
+    "    cbaxes = fig.add_axes([0.95, 0.15, 0.02, 0.65])\n",
+    "    cb = fig.colorbar(\n",
+    "        im, ax=axs[0], cax=cbaxes, ticks=[0.23, 0.36, 0.5, 0.65, 0.78, 0.93]\n",
+    "    )\n",
+    "    cb.ax.set_yticklabels(\n",
+    "        [\"upper_ns\", \"middle_ns\", \"lower_ns\", \"rijnland_chalk\", \"scruff\", \"zechstein\"],\n",
+    "        fontsize=12,\n",
+    "        ha=\"left\",\n",
+    "    )"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Prepare a slice for 2D visualization"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "seismic_data = np.load(config[\"data_path\"])\n",
+    "# Normalize before slicing so that the model receives input with the expected statistics\n",
+    "seismic_data = normalize(seismic_data, mu=1e-8, std=0.2097654)\n",
+    "if config[\"slice_axis\"] == 0:\n",
+    "    inp_slice = seismic_data[config[\"slice_no\"], :, :]\n",
+    "elif config[\"slice_axis\"] == 1:\n",
+    "    inp_slice = seismic_data[:, config[\"slice_no\"], :]\n",
+    "elif config[\"slice_axis\"] == 2:\n",
+    "    inp_slice = seismic_data[:, :, config[\"slice_no\"]]\n",
+    "else:\n",
+    "    assert False, \"Invalid slice_axis, it must be 0, 1 or 2\""
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Prepare the model"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "If you want to run inference on another device, such as a `GPU`, load the network with a different `device_name`, as in the example below:\n",
+    "* for `GPU`:\n",
+    "```\n",
+    "exec_net = ie.load_network(network=net, device_name=\"HETERO:GPU,CPU\")\n",
+    "```"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "reshape_model(net, seismic_data.shape, axis=config[\"slice_axis\"])\n",
+    "exec_net = ie.load_network(network=net, device_name=\"CPU\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Infer the model on a slice"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "out_slice_from_model = exec_net.infer(inputs={\"input\": inp_slice})[\"output\"]\n",
+    "out_labels_slice = np.argmax(out_slice_from_model, axis=1).squeeze()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Show the output"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "show_facies_interpretation(inp_slice, out_labels_slice)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "**Output explanation**:\n",
+    "\n",
+    "Each output class corresponds to a different facies, that is, a body of rock with specific characteristics that differentiate it from the others.\n",
+    "\n",
+    "More details on the dataset can be found in this paper: https://arxiv.org/abs/1901.07659\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Run the model on a cube (multiple slices) for 3D visualization"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "is_visualize_3d = False\n",
+    "try:\n",
+    "    from itkwidgets import view\n",
+    "\n",
+    "    is_visualize_3d = True\n",
+    "except ModuleNotFoundError:\n",
+    "    print(\n",
+    "        \"[WARNING] itkwidgets is not installed.\\nTo see 3D visualizations, install it by uncommenting the next cell.\"\n",
+    "    )"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# %pip install itkwidgets==0.23.1\n",
+    "# from itkwidgets import view\n",
+    "# is_visualize_3d = True"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Prepare the model"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "reshape_model(net, data.shape, axis=1)\n",
+    "exec_net = ie.load_network(network=net, device_name=\"CPU\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Run inference (multiple slices)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "predicted_data = infer_cube(exec_net, data, axis=1)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Inference runs slice by slice: slices along axis 1 are fed to the input, and the results are combined into an output cube (`predicted_data`)."
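+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a quick sanity check (an addition to this walkthrough that assumes only that `predicted_data` holds the integer class indices produced by `infer_cube` above), you can count how many voxels were assigned to each class:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Count voxels per predicted class; indices follow config[\"name_classes\"]\n",
+    "values, counts = np.unique(predicted_data, return_counts=True)\n",
+    "for value, count in zip(values, counts):\n",
+    "    name = config[\"name_classes\"][int(value)]\n",
+    "    print(f\"{name}: {count} voxels ({100 * count / predicted_data.size:.1f}%)\")"
+   ]
+  },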
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Visualize the original and predicted data"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "* Prepare the original data viewer"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "if is_visualize_3d:\n",
+    "    viewer_orig_data = view(data, shadow=False, annotations=False)\n",
+    "    # Greyscale colormap for the raw seismic data\n",
+    "    count_of_greys = 100\n",
+    "    viewer_orig_data.cmap = np.array(\n",
+    "        [\n",
+    "            [i / count_of_greys, i / count_of_greys, i / count_of_greys]\n",
+    "            for i in range(count_of_greys)\n",
+    "        ]\n",
+    "    )"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "* Prepare the predicted data viewer"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "if is_visualize_3d:\n",
+    "    cmap = discrete_cmap(len(config[\"name_classes\"]), \"jet\")\n",
+    "    show_legend(config[\"name_classes\"], cmap)\n",
+    "    viewer_interpret_data = view(predicted_data, shadow=False, annotations=False)\n",
+    "    viewer_interpret_data.cmap = cmap\n",
+    "    widgets.link(\n",
+    "        (viewer_interpret_data, \"camera\"), (viewer_orig_data, \"camera\")\n",
+    "    )  # link the widget cameras"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "* Run the render"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "viewer_interpret_data if is_visualize_3d else print(\"3D rendering disabled\")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "viewer_orig_data if is_visualize_3d else print(\"3D rendering disabled\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "You can now see a visualization of the interpreted and raw seismic data. You can also use your mouse to interactively rotate, zoom, and explore the interpreted data. If you do not see the rendering, restart this Jupyter notebook."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "\n",
+    "#### Thanks to the authors:\n",
+    "\n",
+    "Yazeed Alaudah, Patrycja Michałowicz, Motaz Alfarraj, Ghassan AlRegib.\\\n",
+    "A machine-learning benchmark for facies classification.\\\n",
+    "Interpretation, 2019, Vol. 7, No. 3, pp. SE175-SE187.\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "openvino_env",
+   "language": "python",
+   "name": "openvino_env"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.9"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/notebooks/204-vision-facies-segmentation/README.md b/notebooks/204-vision-facies-segmentation/README.md
new file mode 100644
index 00000000000..1043efb132a
--- /dev/null
+++ b/notebooks/204-vision-facies-segmentation/README.md
@@ -0,0 +1,16 @@
+# Facies Segmentation Python Demo
+
+This demo shows how to run facies segmentation using the OpenVINO™ toolkit.
+
+See the source repository to learn more about the model architecture and training method: https://github.com/yalaudah/facies_classification_benchmark
+
+---
+**NOTE**
+
+For now, 3D visualization does not work in `Jupyter Lab`; please use `jupyter notebook` instead.
+
+---
+
+![img](demo.png)
+
+See details in `204-vision-facies-segmentation.ipynb`
diff --git a/notebooks/204-vision-facies-segmentation/demo.png b/notebooks/204-vision-facies-segmentation/demo.png
new file mode 100644
index 00000000000..41067402dcb
Binary files /dev/null and b/notebooks/204-vision-facies-segmentation/demo.png differ
diff --git a/requirements.txt b/requirements.txt
index ffc966c9668..06ecb2784f8 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -2,6 +2,7 @@ openvino-dev[onnx,tensorflow2]==2021.4.*
 matplotlib<3.4
 gdown
 pytube
+tqdm
 
 # ONNX notebook requirements
 geffnet==0.9.8
@@ -21,6 +22,10 @@ jupyterlab
 ipython==7.10.*
 jedi==0.17.2
 setuptools>=56.0.0
+
+# openvino extensions
+openvino-extensions
+
 Pillow==8.2.*
 ipykernel==5.*
 pygments>=2.7.4 # not directly required, pinned by Snyk to avoid a vulnerability