Skip to content
New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

fix(route/duckdb): change blogs link and author #17856

Merged
merged 2 commits into from
Dec 10, 2024

Conversation

mocusez
Copy link
Contributor

@mocusez mocusez commented Dec 10, 2024

Involved Issue / 该 PR 相关 Issue

Close #

Example for the Proposed Route(s) / 路由地址示例

/duckdb/news

New RSS Route Checklist / 新 RSS 路由检查表

  • New Route / 新的路由
  • Anti-bot or rate limit / 反爬/频率限制
    • If yes, do your code reflect this sign? / 如果有, 是否有对应的措施?
  • Date and time / 日期和时间
    • Parsed / 可以解析
    • Correct time zone / 时区正确
  • New package added / 添加了新的包
  • Puppeteer

Note / 说明

Due to the update of the DuckDB Blog page layout, the old way of fetching links is not working

@github-actions github-actions bot added the Route label Dec 10, 2024
Copy link
Contributor

Successfully generated as following:

http://localhost:1200/duckdb/news - Success ✔️
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <title>DuckDB News</title>
    <link>https://duckdb.org/news/</link>
    <atom:link href="http://localhost:1200/duckdb/news" rel="self" type="application/rss+xml"></atom:link>
    <description>DuckDB News - Powered by RSSHub</description>
    <generator>RSSHub</generator>
    <webMaster>[email protected] (RSSHub)</webMaster>
    <language>en</language>
    <lastBuildDate>Tue, 10 Dec 2024 13:24:55 GMT</lastBuildDate>
    <ttl>5</ttl>
    <item>
      <title>The DuckDB Avro Extension</title>
      <description>&lt;div class=&quot;content&quot;&gt;
        &lt;div class=&quot;contentwidth&quot;&gt;
        &lt;h1&gt;The DuckDB Avro Extension&lt;/h1&gt;
        &lt;div class=&quot;infoline&quot;&gt;
        &lt;div class=&quot;icon&quot;&gt;
        &lt;img src=&quot;https://duckdb.org/images/blog/authors/hannes_muehleisen.jpg&quot; alt=&quot;Author Avatar&quot; referrerpolicy=&quot;no-referrer&quot;&gt;
        &lt;/div&gt;
        &lt;div&gt;
        &lt;span class=&quot;author&quot;&gt;Hannes Mühleisen&lt;/span&gt;
        &lt;div class=&quot;publishedinfo&quot;&gt;
        &lt;span&gt;Published on&lt;/span&gt;
        &lt;span class=&quot;date&quot;&gt;2024-12-09&lt;/span&gt;
        &lt;/div&gt;
        &lt;/div&gt;
        &lt;/div&gt;
        &lt;div class=&quot;excerpt&quot;&gt;
        &lt;p&gt;&lt;em&gt;TL;DR: DuckDB now supports reading Avro files through the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;avro&lt;/code&gt; Community Extension.&lt;/em&gt;&lt;/p&gt;
        &lt;/div&gt;
        &lt;h2 id=&quot;the-apache-avro-format&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#the-apache-avro-format&quot;&gt;The Apache™ Avro™ Format&lt;/a&gt;
        &lt;/h2&gt;
        &lt;p&gt;&lt;a href=&quot;https://avro.apache.org/&quot;&gt;Avro&lt;/a&gt; is a binary format for record data. Like many innovations in the data space, Avro was &lt;a href=&quot;https://vimeo.com/7362534&quot;&gt;developed&lt;/a&gt; by &lt;a href=&quot;https://en.wikipedia.org/wiki/Doug_Cutting&quot;&gt;Doug Cutting&lt;/a&gt; as part of the Apache Hadoop project &lt;a href=&quot;https://github.com/apache/hadoop/commit/8296413d4988c08343014c6808a30e9d5e441bfc&quot;&gt;in around 2009&lt;/a&gt;. Avro gets its name – somewhat obscurely – from a defunct &lt;a href=&quot;https://en.wikipedia.org/wiki/Avro&quot;&gt;British aircraft manufacturer&lt;/a&gt;. The company famously built over 7,000 &lt;a href=&quot;https://en.wikipedia.org/wiki/Avro_Lancaster&quot;&gt;Avro Lancaster heavy bombers&lt;/a&gt; under the challenging conditions of World War 2. But we digress.&lt;/p&gt;
        &lt;p&gt;The Avro format is yet another attempt to solve the dimensionality reduction problem that occurs when transforming a complex &lt;em&gt;multi-dimensional data structure&lt;/em&gt; like tables (possibly with nested types) to a &lt;em&gt;single-dimensional storage layout&lt;/em&gt; like a flat file, which is just a sequence of bytes. The most fundamental question that arises here is whether to use a columnar or a row-major layout. Avro uses a row-major layout, which differentiates it from its famous cousin, the &lt;a href=&quot;https://parquet.apache.org/&quot;&gt;Apache™ Parquet™&lt;/a&gt; format. There are valid use cases for a row-major format: for example, appending a few rows to a Parquet file is difficult and inefficient because of Parquet&#39;s columnar layout and due to the fact the Parquet metadata is stored &lt;em&gt;at the back&lt;/em&gt; of the file. In a row-major format like Avro with the metadata &lt;em&gt;up top&lt;/em&gt;, we can “just” add those rows to the end of the files and we&#39;re done. This enables Avro to handle appends of a few rows somewhat efficiently.&lt;/p&gt;
        &lt;p&gt;Avro-encoded data can appear in several ways, e.g., in &lt;a href=&quot;https://en.wikipedia.org/wiki/Remote_procedure_call&quot;&gt;RPC messages&lt;/a&gt; but also in files. In the following, we focus on files since those survive long-term.&lt;/p&gt;
        &lt;h3 id=&quot;header-block&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#header-block&quot;&gt;Header Block&lt;/a&gt;
        &lt;/h3&gt;
        &lt;p&gt;Avro “object container” files are encoded using a comparatively simple binary &lt;a href=&quot;https://avro.apache.org/docs/++version++/specification/#object-container-files&quot;&gt;format&lt;/a&gt;: each file starts with a &lt;strong&gt;header block&lt;/strong&gt; that first has the &lt;a href=&quot;https://en.wikipedia.org/wiki/List_of_file_signatures&quot;&gt;magic bytes&lt;/a&gt; &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Obj1&lt;/code&gt;. Then, a metadata “map” (a list of string-bytearray key-value pairs) follows. The map is only strictly required to contain a single entry for the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;avro.schema&lt;/code&gt; key. This key contains the Avro file schema encoded as JSON. Here is an example for such a schema:&lt;/p&gt;
        &lt;div class=&quot;language-json highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;namespace&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;example.avro&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;type&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;record&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;name&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;User&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;fields&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;name&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;name&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;type&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;string&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;},&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;name&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;favorite_number&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;type&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;int&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;null&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]},&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;name&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;favorite_color&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;type&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;string&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;null&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
        &lt;p&gt;The Avro schema defines a record structure. Records can contain scalar data fields (like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;int&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;double&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;string&lt;/code&gt;, etc.) but also more complex types like records (similar to &lt;a href=&quot;https://duckdb.org/docs/sql/data_types/struct.html&quot;&gt;DuckDB &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;STRUCT&lt;/code&gt;s&lt;/a&gt;), unions and lists. As a sidenote, it is quite strange that a data format for the definition of record structures would fall back to another format like JSON to describe itself, but such are the oddities of Avro.&lt;/p&gt;
        &lt;h3 id=&quot;data-blocks&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#data-blocks&quot;&gt;Data Blocks&lt;/a&gt;
        &lt;/h3&gt;
        &lt;p&gt;The header concludes with 16 randomly chosen bytes as a “sync marker”. The header is followed by an arbitrary amount of &lt;strong&gt;data blocks&lt;/strong&gt;: each data block starts with a record count, followed by a size and a byte array containing the actual records. Optionally, the bytes can be compressed with deflate (gzip), which will be known from the header metadata.&lt;/p&gt;
        &lt;p&gt;The data bytes can only be decoded using the schema. The &lt;a href=&quot;https://avro.apache.org/docs/++version++/specification/#object-container-files&quot;&gt;object file specification&lt;/a&gt; contains the details on how each type is encoded. For example, in the example schema we know each value is a record of three fields. The root-level record will encode its entries in the order they are declared. There are no actual bytes required for this. First we will be reading the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;name&lt;/code&gt; field. Strings consist of a length followed by the string bytes. Like other formats (e.g., Thrift), Avro uses &lt;a href=&quot;https://en.wikipedia.org/wiki/Variable-length_quantity#Zigzag_encoding&quot;&gt;variable-length integers with zigzag encoding&lt;/a&gt; to store lengths and counts and the like. After reading the string, we can proceed to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;favorite_number&lt;/code&gt;. This field is a union type (encoded with the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[]&lt;/code&gt; syntax). This union can have values of two types, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;int&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;null&lt;/code&gt;. The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;null&lt;/code&gt; type is a bit odd, it can only be used to encode the fact that a value is missing. To decode the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;favorite_number&lt;/code&gt; fields, we first read an &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;int&lt;/code&gt; that encodes which choice of the union was used. Afterward, we use the “normal” decoders to read the values (e.g., &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;int&lt;/code&gt; or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;null&lt;/code&gt;). The same can be done for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;favorite_color&lt;/code&gt;. Each data block again ends with the sync marker. The sync marker can be used to verify that the block was fully written and that there is no garbage in the file.&lt;/p&gt;
        &lt;h2 id=&quot;the-duckdb-avro-community-extension&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#the-duckdb-avro-community-extension&quot;&gt;The DuckDB &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;avro&lt;/code&gt; Community Extension&lt;/a&gt;
        &lt;/h2&gt;
        &lt;p&gt;We have developed a DuckDB Community Extension that enables DuckDB to &lt;em&gt;read&lt;/em&gt; &lt;a href=&quot;https://avro.apache.org/&quot;&gt;Apache Avro™&lt;/a&gt; files.&lt;/p&gt;
        &lt;p&gt;The extension does not contain Avro &lt;em&gt;write&lt;/em&gt; functionality. This is on purpose, by not providing a writer we hope to decrease the amount of Avro files in the world over time.&lt;/p&gt;
        &lt;h3 id=&quot;installation--loading&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#installation--loading&quot;&gt;Installation &amp;amp; Loading&lt;/a&gt;
        &lt;/h3&gt;
        &lt;p&gt;Installation is simple through the DuckDB Community Extension repository, just type&lt;/p&gt;
        &lt;div class=&quot;language-sql highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;INSTALL&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;avro&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;community&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;LOAD&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;avro&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
        &lt;p&gt;in a DuckDB instance near you. There is currently no build for Wasm because of dependencies (sigh).&lt;/p&gt;
        &lt;h3 id=&quot;the-read_avro-function&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#the-read_avro-function&quot;&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;read_avro&lt;/code&gt; Function&lt;/a&gt;
        &lt;/h3&gt;
        &lt;p&gt;The extension adds a single DuckDB function, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;read_avro&lt;/code&gt;. This function can be used like so:&lt;/p&gt;
        &lt;div class=&quot;language-sql highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;read_avro&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&#39;some_example_file.avro&#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
        &lt;p&gt;This function will expose the contents of the Avro file as a DuckDB table. You can then use any arbitrary SQL constructs to further transform this table.&lt;/p&gt;
        &lt;h3 id=&quot;file-io&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#file-io&quot;&gt;File IO&lt;/a&gt;
        &lt;/h3&gt;
        &lt;p&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;read_avro&lt;/code&gt; function is integrated into DuckDB&#39;s file system abstraction, meaning you can read Avro files directly from e.g., HTTP or S3 sources. For example:&lt;/p&gt;
        &lt;div class=&quot;language-sql highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;read_avro&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&#39;http://blobs.duckdb.org/data/userdata1.avro&#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;read_avro&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&#39;s3://my-example-bucket/some_example_file.avro&#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
        &lt;p&gt;should “just” work.&lt;/p&gt;
        &lt;p&gt;You can also &lt;a href=&quot;https://duckdb.org/docs/sql/functions/pattern_matching.html#globbing&quot;&gt;&lt;em&gt;glob&lt;/em&gt; multiple files&lt;/a&gt; in a single read call or pass a list of files to the functions:&lt;/p&gt;
        &lt;div class=&quot;language-sql highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;read_avro&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&#39;some_example_file_*.avro&#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;read_avro&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;([&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&#39;some_example_file_1.avro&#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&#39;some_example_file_2.avro&#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]);&lt;/span&gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
        &lt;p&gt;If the filenames somehow contain valuable information (as is unfortunately all-too-common), you can pass the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;filename&lt;/code&gt; argument to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;read_avro&lt;/code&gt;:&lt;/p&gt;
        &lt;div class=&quot;language-sql highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;read_avro&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&#39;some_example_file_*.avro&#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;filename&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;true&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
        &lt;p&gt;This will result in an additional column in the result set that contains the actual filename of the Avro file.&lt;/p&gt;
        &lt;h3 id=&quot;schema-conversion&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#schema-conversion&quot;&gt;Schema Conversion&lt;/a&gt;
        &lt;/h3&gt;
        &lt;p&gt;This extension automatically translates the Avro Schema to the DuckDB schema. &lt;em&gt;All&lt;/em&gt; Avro types can be translated, except for &lt;em&gt;recursive type definitions&lt;/em&gt;, which DuckDB does not support.&lt;/p&gt;
        &lt;p&gt;The type mapping is very straightforward except for Avro&#39;s “unique” way of handling &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;NULL&lt;/code&gt;. Unlike other systems, Avro does not treat &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;NULL&lt;/code&gt; as a possible value in a range of e.g., &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;INTEGER&lt;/code&gt; but instead represents &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;NULL&lt;/code&gt; as a union of the actual type with a special &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;NULL&lt;/code&gt; type. This is different to DuckDB, where any value can be &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;NULL&lt;/code&gt;. Of course DuckDB also supports &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;UNION&lt;/code&gt; types, but this would be quite cumbersome to work with.&lt;/p&gt;
        &lt;p&gt;This extension &lt;em&gt;simplifies&lt;/em&gt; the Avro schema where possible: an Avro union of any type and the special null type is simplified to just the non-null type. For example, an Avro record of the union type &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[&quot;int&quot;, &quot;null&quot;]&lt;/code&gt; (like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;favorite_number&lt;/code&gt; in the &lt;a href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#header-block&quot;&gt;example&lt;/a&gt;) becomes a DuckDB &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;INTEGER&lt;/code&gt;, which just happens to be &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;NULL&lt;/code&gt; sometimes. Similarly, an Avro union that contains only a single type is converted to the type it contains. For example, an Avro record of the union type &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[&quot;int&quot;]&lt;/code&gt; also becomes a DuckDB &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;INTEGER&lt;/code&gt;.&lt;/p&gt;
        &lt;p&gt;The extension also “flattens” the Avro schema. Avro defines tables as root-level “record” fields, which are the same as DuckDB &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;STRUCT&lt;/code&gt; fields. For more convenient handling, this extension turns the entries of a single top-level record into top-level columns.&lt;/p&gt;
        &lt;h3 id=&quot;implementation&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#implementation&quot;&gt;Implementation&lt;/a&gt;
        &lt;/h3&gt;
        &lt;p&gt;Internally, this extension uses the “official” &lt;a href=&quot;https://avro.apache.org/docs/++version++/api/c/&quot;&gt;Apache Avro C API&lt;/a&gt;, albeit with some minor patching to allow reading Avro files from memory.&lt;/p&gt;
        &lt;h3 id=&quot;limitations--next-steps&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#limitations--next-steps&quot;&gt;Limitations &amp;amp; Next Steps&lt;/a&gt;
        &lt;/h3&gt;
        &lt;p&gt;In the following, we disclose the limitations of the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;avro&lt;/code&gt; DuckDB extension along with our plans to mitigate them in the future:&lt;/p&gt;
        &lt;ul&gt;
        &lt;li&gt;
        &lt;p&gt;The extension currently does not make use of &lt;strong&gt;parallelism&lt;/strong&gt; when reading either a single (large) Avro file or when reading a list of files. Adding support for parallelism in the latter case is on the roadmap.&lt;/p&gt;
        &lt;/li&gt;
        &lt;li&gt;
        &lt;p&gt;There is currently no support for projection or filter &lt;strong&gt;pushdown&lt;/strong&gt;, but this is also planned at a later stage.&lt;/p&gt;
        &lt;/li&gt;
        &lt;li&gt;
        &lt;p&gt;There is currently no support for the Wasm or the Windows-MinGW builds of DuckDB due to issues with the Avro library dependency (sigh again). We plan to fix this eventually.&lt;/p&gt;
        &lt;/li&gt;
        &lt;li&gt;
        &lt;p&gt;As mentioned above, DuckDB cannot express recursive type definitions that Avro has. This is unlikely to ever change.&lt;/p&gt;
        &lt;/li&gt;
        &lt;li&gt;
        &lt;p&gt;There is no support to allow users to provide a separate Avro schema file. This is unlikely to change, all Avro files we have seen so far had their schema embedded.&lt;/p&gt;
        &lt;/li&gt;
        &lt;li&gt;
        &lt;p&gt;There is currently no support for the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;union_by_name&lt;/code&gt; flag that other readers in DuckDB support. This is planned for the future.&lt;/p&gt;
        &lt;/li&gt;
        &lt;/ul&gt;
        &lt;h2 id=&quot;conclusion&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#conclusion&quot;&gt;Conclusion&lt;/a&gt;
        &lt;/h2&gt;
        &lt;p&gt;The new &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;avro&lt;/code&gt; Community Extension for DuckDB enables DuckDB to read Avro files directly as if they were tables. If you have a bunch of Avro files, go ahead and try it out! We&#39;d love to &lt;a href=&quot;https://github.com/hannes/duckdb_avro/issues&quot;&gt;hear from you&lt;/a&gt; if you run into any issues.&lt;/p&gt;
        &lt;/div&gt;
        &lt;/div&gt;
        &lt;div class=&quot;toc_sidebar&quot;&gt;
        &lt;div class=&quot;toc_menu&quot;&gt;
        &lt;h5&gt;In this article&lt;/h5&gt;
        &lt;ul id=&quot;toc&quot; class=&quot;section-nav&quot;&gt;
        &lt;li class=&quot;toc-entry toc-h2&quot;&gt;&lt;a href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#the-apache-avro-format&quot;&gt;The Apache™ Avro™ Format&lt;/a&gt;
        &lt;ul&gt;
        &lt;li class=&quot;toc-entry toc-h3&quot;&gt;&lt;a href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#header-block&quot;&gt;Header Block&lt;/a&gt;&lt;/li&gt;
        &lt;li class=&quot;toc-entry toc-h3&quot;&gt;&lt;a href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#data-blocks&quot;&gt;Data Blocks&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
        &lt;/li&gt;
        &lt;li class=&quot;toc-entry toc-h2&quot;&gt;&lt;a href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#the-duckdb-avro-community-extension&quot;&gt;The DuckDB avro Community Extension&lt;/a&gt;
        &lt;ul&gt;
        &lt;li class=&quot;toc-entry toc-h3&quot;&gt;&lt;a href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#installation--loading&quot;&gt;Installation &amp;amp; Loading&lt;/a&gt;&lt;/li&gt;
        &lt;li class=&quot;toc-entry toc-h3&quot;&gt;&lt;a href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#the-read_avro-function&quot;&gt;The read_avro Function&lt;/a&gt;&lt;/li&gt;
        &lt;li class=&quot;toc-entry toc-h3&quot;&gt;&lt;a href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#file-io&quot;&gt;File IO&lt;/a&gt;&lt;/li&gt;
        &lt;li class=&quot;toc-entry toc-h3&quot;&gt;&lt;a href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#schema-conversion&quot;&gt;Schema Conversion&lt;/a&gt;&lt;/li&gt;
        &lt;li class=&quot;toc-entry toc-h3&quot;&gt;&lt;a href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#implementation&quot;&gt;Implementation&lt;/a&gt;&lt;/li&gt;
        &lt;li class=&quot;toc-entry toc-h3&quot;&gt;&lt;a href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#limitations--next-steps&quot;&gt;Limitations &amp;amp; Next Steps&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
        &lt;/li&gt;
        &lt;li class=&quot;toc-entry toc-h2&quot;&gt;&lt;a href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
        &lt;/div&gt;
        &lt;/div&gt;
      </description>
      <link>https://duckdb.org/2024/12/09/duckdb-avro-extension.html</link>
      <guid isPermaLink="false">https://duckdb.org/2024/12/09/duckdb-avro-extension.html</guid>
      <pubDate>Mon, 09 Dec 2024 00:00:00 GMT</pubDate>
      <author>Hannes Mühleisen</author>
    </item>
    <item>
      <title>DuckDB: Running TPC-H SF100 on Mobile Phones</title>
      <description>&lt;div class=&quot;content&quot;&gt;
        &lt;div class=&quot;contentwidth&quot;&gt;
        &lt;h1&gt;DuckDB: Running TPC-H SF100 on Mobile Phones&lt;/h1&gt;
        &lt;div class=&quot;infoline&quot;&gt;
        &lt;div class=&quot;icon&quot;&gt;
        &lt;/div&gt;
        &lt;div&gt;
        &lt;span class=&quot;author&quot;&gt;Gabor Szarnyas, Laurens Kuiper, Hannes Mühleisen&lt;/span&gt;
        &lt;div class=&quot;publishedinfo&quot;&gt;
        &lt;span&gt;Published on&lt;/span&gt;
        &lt;span class=&quot;date&quot;&gt;2024-12-06&lt;/span&gt;
        &lt;/div&gt;
        &lt;/div&gt;
        &lt;/div&gt;
        &lt;div class=&quot;excerpt&quot;&gt;
        &lt;p&gt;&lt;em&gt;TL;DR: DuckDB runs on mobile platforms such as iOS and Android, and completes the TPC-H benchmark faster than state-of-the-art research systems on big iron machines 20 years ago.&lt;/em&gt;&lt;/p&gt;
        &lt;/div&gt;
        &lt;p&gt;A few weeks ago, we set out to perform a series of experiments to answer two simple questions:&lt;/p&gt;
        &lt;ol&gt;
        &lt;li&gt;Can DuckDB complete the TPC-H queries on the SF100 data set when running on a new smartphone?&lt;/li&gt;
        &lt;li&gt;If so, can DuckDB complete a run in less than 400 seconds, i.e., faster than the system in the research paper that originally introduced vectorized query processing?&lt;/li&gt;
        &lt;/ol&gt;
        &lt;p&gt;These questions took us on an interesting quest.
        Along the way, we had a lot of fun and learned the difference between a cold run and a &lt;em&gt;really cold&lt;/em&gt; run.
        Read on to find out more.&lt;/p&gt;
        &lt;h2 id=&quot;a-song-of-dry-ice-and-fire&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/06/duckdb-tpch-sf100-on-mobile.html#a-song-of-dry-ice-and-fire&quot;&gt;A Song of Dry Ice and Fire&lt;/a&gt;
        &lt;/h2&gt;
        &lt;p&gt;Our first attempt was to use an iPhone, namely an &lt;a href=&quot;https://www.gsmarena.com/apple_iphone_16_pro-13315.php&quot;&gt;iPhone 16 Pro&lt;/a&gt;.
        This phone has 8 GB memory and a 6-core CPU with 2 performance cores (running at 4.05 GHz) and 4 efficiency cores (running at 2.42 GHz).&lt;/p&gt;
        &lt;p&gt;We implemented the application using the &lt;a href=&quot;https://duckdb.org/docs/api/swift.html&quot;&gt;DuckDB Swift client&lt;/a&gt; and loaded the benchmark on the phone, all 30 GB of it.
        We quickly found that the iPhone can indeed run the workload without any problems – except that it heated up during the workload. This prompted the phone to perform thermal throttling, slowing down the CPU to reduce heat production. Due to this, DuckDB took 615.1 seconds. Not bad but not enough to reach our goal.&lt;/p&gt;
        &lt;p&gt;The results got us thinking: what if we improve the cooling of the phone? To this end, we purchased a box of dry ice, which has a temperature below -50 degrees Celsius, and put the phone in the box for the duration of the experiments.&lt;/p&gt;
        &lt;div align=&quot;center&quot;&gt;
        &lt;img src=&quot;https://duckdb.org/images/blog/tpch-mobile/ice-cooled-iphone-1.jpg&quot; alt=&quot;iPhone in a box of dry ice, running TPC-H&quot; width=&quot;600px&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;
        &lt;div align=&quot;center&quot;&gt;iPhone in a box of dry ice, running TPC-H. Don&#39;t try this at home.&lt;/div&gt;
        &lt;p&gt;This helped a lot: DuckDB completed in 478.2 seconds. This is a more than 20% improvement – but we still didn&#39;t manage to be under 400 seconds.&lt;/p&gt;
        &lt;div align=&quot;center&quot;&gt;
        &lt;img src=&quot;https://duckdb.org/images/blog/tpch-mobile/ice-cooled-iphone-2.jpg&quot; alt=&quot;The phone with icing on it, a few minutes after finishing the benchmark&quot; width=&quot;300px&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;
        &lt;div align=&quot;center&quot;&gt;The phone a few minutes after finishing the benchmark. It no longer booted because the battery was too cold!&lt;/div&gt;
        &lt;h2 id=&quot;do-androids-dream-of-electric-ducks&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/06/duckdb-tpch-sf100-on-mobile.html#do-androids-dream-of-electric-ducks&quot;&gt;Do Androids Dream of Electric Ducks?&lt;/a&gt;
        &lt;/h2&gt;
        &lt;p&gt;In our next experiment, we picked up a &lt;a href=&quot;https://www.gsmarena.com/samsung_galaxy_s24_ultra-12771.php&quot;&gt;Samsung Galaxy S24 Ultra phone&lt;/a&gt;, which runs Android 14. This phone is full of interesting hardware. First, it has an 8-core CPU with 4 different core types (1×3.39 GHz, 3×3.10 GHz, 2×2.90 GHz and 2×2.20 GHz). Second, it has a huge amount of RAM – 12 GB to be precise. Finally, its cooling system includes a &lt;a href=&quot;https://www.sammobile.com/news/galaxy-s24-sustain-performance-bigger-vapor-chamber/&quot;&gt;vapor chamber&lt;/a&gt; for improved heat dissipation.&lt;/p&gt;
        &lt;p&gt;We ran DuckDB in the &lt;a href=&quot;https://termux.dev/en/&quot;&gt;Termux terminal emulator&lt;/a&gt;. We compiled DuckDB &lt;a href=&quot;https://duckdb.org/docs/api/cli/overview.html&quot;&gt;CLI client&lt;/a&gt; from source following the &lt;a href=&quot;https://duckdb.org/docs/dev/building/build_instructions.html#android&quot;&gt;Android build instructions&lt;/a&gt; and ran the experiments from the command line.&lt;/p&gt;
        &lt;div align=&quot;center&quot;&gt;
        &lt;img src=&quot;https://duckdb.org/images/blog/tpch-mobile/duckdb-termux-android-emulator.png&quot; alt=&quot;Screenshot of DuckDB in Termux, running in the Android emulator&quot; width=&quot;600px&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;
        &lt;div align=&quot;center&quot;&gt;DuckDB in Termux, running in the Android emulator&lt;/div&gt;
        &lt;p&gt;In the end, it wasn&#39;t even close. The Android phone completed the benchmark in 235.0 seconds, outperforming our baseline by around 40%.&lt;/p&gt;
        &lt;h2 id=&quot;never-was-a-cloudy-day&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/06/duckdb-tpch-sf100-on-mobile.html#never-was-a-cloudy-day&quot;&gt;Never Was a Cloudy Day&lt;/a&gt;
        &lt;/h2&gt;
        &lt;p&gt;The results got us thinking: how do the results stack up among cloud servers? We picked two x86-based cloud instances in AWS EC2 with instance-attached NVMe storage.&lt;/p&gt;
        &lt;p&gt;The details of these benchmarks are far less interesting than those of the previous ones. We booted up the instances with Ubuntu 24.04 and ran DuckDB in the command line. We found that an &lt;a href=&quot;https://instances.vantage.sh/aws/ec2/r6id.large&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;r6id.large&lt;/code&gt; instance&lt;/a&gt; (2 vCPUs with 16 GB RAM) completes the queries in 570.8 seconds, which is roughly on-par with an air-cooled iPhone. However, an &lt;a href=&quot;https://instances.vantage.sh/aws/ec2/r6id.xlarge&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;r6id.xlarge&lt;/code&gt;&lt;/a&gt; (4 vCPUs with 32 GB RAM) completes the benchmark in 166.2 seconds, faster than any result we achieved on phones.&lt;/p&gt;
        &lt;h2 id=&quot;summary-of-duckdb-results&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/06/duckdb-tpch-sf100-on-mobile.html#summary-of-duckdb-results&quot;&gt;Summary of DuckDB Results&lt;/a&gt;
        &lt;/h2&gt;
        &lt;p&gt;The table contains a summary of the DuckDB benchmark results.&lt;/p&gt;
        &lt;table&gt;
        &lt;thead&gt;
        &lt;tr&gt;
        &lt;th&gt;Setup&lt;/th&gt;
        &lt;th style=&quot;text-align: right&quot;&gt;CPU cores&lt;/th&gt;
        &lt;th style=&quot;text-align: right&quot;&gt;Memory&lt;/th&gt;
        &lt;th style=&quot;text-align: right&quot;&gt;Runtime&lt;/th&gt;
        &lt;/tr&gt;
        &lt;/thead&gt;
        &lt;tbody&gt;
        &lt;tr&gt;
        &lt;td&gt;iPhone 16 Pro (air-cooled)&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;6&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;8 GB&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;615.1 s&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
        &lt;td&gt;iPhone 16 Pro (dry ice-cooled)&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;6&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;8 GB&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;478.2 s&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
        &lt;td&gt;Samsung Galaxy S24 Ultra&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;8&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;12 GB&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;235.0 s&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
        &lt;td&gt;AWS EC2 &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;r6id.large&lt;/code&gt;&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;2&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;16 GB&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;570.8 s&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
        &lt;td&gt;AWS EC2 &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;r6id.xlarge&lt;/code&gt;&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;4&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;32 GB&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;166.2 s&lt;/td&gt;
        &lt;/tr&gt;
        &lt;/tbody&gt;
        &lt;/table&gt;
        &lt;h2 id=&quot;historical-context&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/06/duckdb-tpch-sf100-on-mobile.html#historical-context&quot;&gt;Historical Context&lt;/a&gt;
        &lt;/h2&gt;
        &lt;p&gt;So why did we set out to run these experiments in the first place?&lt;/p&gt;
        &lt;p&gt;Just a few weeks ago, &lt;a href=&quot;https://cwi.nl/&quot;&gt;CWI&lt;/a&gt;, the birthplace of DuckDB, held a ceremony for the &lt;a href=&quot;https://www.cwi.nl/en/events/dijkstra-awards/cwi-lectures-dijkstra-fellowship/&quot;&gt;Dijkstra Fellowship&lt;/a&gt;.
        The fellowship was awarded to Marcin Żukowski for his pioneering role in the development of database management systems and his successful entrepreneurial career that resulted in systems such as &lt;a href=&quot;https://en.wikipedia.org/wiki/Actian_Vector&quot;&gt;VectorWise&lt;/a&gt; and &lt;a href=&quot;https://en.wikipedia.org/wiki/Snowflake_Inc.&quot;&gt;Snowflake&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;A lot of ideas that originate in Marcin&#39;s research are used in DuckDB. Most importantly, &lt;em&gt;vectorized query processing&lt;/em&gt; allows DuckDB to be both fast and portable at the same time.
        With his co-authors Peter Boncz and Niels Nes, he first described this paradigm in the CIDR 2005 paper &lt;a href=&quot;https://www.cidrdb.org/cidr2005/papers/P19.pdf&quot;&gt;“MonetDB/X100: Hyper-Pipelining Query Execution”&lt;/a&gt;.&lt;/p&gt;
        &lt;blockquote&gt;
        &lt;p&gt;The terms &lt;em&gt;vectorization,&lt;/em&gt; &lt;em&gt;hyper-pipelining,&lt;/em&gt; and &lt;em&gt;superscalar&lt;/em&gt; refer to the same idea: processing data in slices, which turns out to be a good compromise between row-at-a-time or column-at-a-time. DuckDB&#39;s query engine uses the same principle.&lt;/p&gt;
        &lt;/blockquote&gt;
        &lt;p&gt;This paper was published in January 2005, so it&#39;s safe to assume that it was finalized in late 2004 – almost exactly 20 years ago!&lt;/p&gt;
        &lt;p&gt;If we read the paper, we learn that the experiments were carried out on an HP workstation equipped with 12 GB of memory (the same amount as the Samsung phone has today!).
        It also had an Itanium CPU and looked like this:&lt;/p&gt;
        &lt;div align=&quot;center&quot;&gt;
        &lt;img src=&quot;https://duckdb.org/images/blog/tpch-mobile/hp-itanium-workstation.jpg&quot; alt=&quot;The Itanium2 workstation used in original the experiments&quot; width=&quot;600px&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;
        &lt;div align=&quot;center&quot;&gt;The Itanium2 workstation used in original the experiments (source: &lt;a href=&quot;https://commons.wikimedia.org/wiki/File:HP-HP9000-ZX6000-Itanium2-Workstation_11.jpg&quot;&gt;Wikimedia&lt;/a&gt;)&lt;/div&gt;
        &lt;blockquote&gt;
        &lt;p&gt;Upon its release in 2001, the &lt;a href=&quot;https://en.wikipedia.org/wiki/Itanium&quot;&gt;Itanium&lt;/a&gt; was aimed at the high-end market with the goal of eventually replacing the then-dominant x86 architecture with a new instruction set that focused heavily on &lt;a href=&quot;https://en.wikipedia.org/wiki/Single_instruction,_multiple_data&quot;&gt;SIMD (single instruction, multiple data)&lt;/a&gt;. While this ambition did not work out, the Itanium was the state-of-the-art architecture of its day. Due to the focus on the server market, the Itanium CPUs had a large amount of cache: the &lt;a href=&quot;https://www.intel.com/content/www/us/en/products/sku/27982/intel-itanium-processor-1-30-ghz-3m-cache-400-mhz-fsb/specifications.html&quot;&gt;1.3 GHz Itanium2 model used in the experiments&lt;/a&gt; had 3 MB of L2 cache, while Pentium 4 CPUs released around that time only had 0.5–1 MB.&lt;/p&gt;
        &lt;/blockquote&gt;
        &lt;p&gt;The paper provides a detailed breakdown of the runtimes:&lt;/p&gt;
        &lt;div align=&quot;center&quot;&gt;
        &lt;img src=&quot;https://duckdb.org/images/blog/tpch-mobile/cidr2005-monetdb-x100-results.png&quot; alt=&quot;Benchmark results from the CIDR 2005 paper “MonetDB/X100: Hyper-Pipelining Query Execution”&quot; width=&quot;450px&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;
        &lt;div align=&quot;center&quot;&gt;Benchmark results from the paper “MonetDB/X100: Hyper-Pipelining Query Execution”&lt;/div&gt;
        &lt;p&gt;The total runtime of the TPC-H SF100 queries was 407.9 seconds – hence our baseline for the experiments.
        Here is a video of Hannes presenting the results at the event:&lt;/p&gt;
        &lt;div align=&quot;center&quot;&gt;
        &lt;iframe width=&quot;560&quot; height=&quot;315&quot; src=&quot;https://www.youtube.com/embed/H1N2Jr34jwU?si=7wYychjmxpRWPqcm&amp;amp;start=1617&quot; title=&quot;YouTube video player&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&quot; referrerpolicy=&quot;no-referrer&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;
        &lt;/div&gt;
        &lt;p&gt;And here are all results visualized on a plot:&lt;/p&gt;
        &lt;div align=&quot;center&quot;&gt;
        &lt;img src=&quot;https://duckdb.org/images/blog/tpch-mobile/tpch-mobile-experiment-runtimes.svg&quot; alt=&quot;Plot with the TPC-H SF100 experiment results for MonetDB/X100 and DuckDB&quot; width=&quot;750px&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;
        &lt;div align=&quot;center&quot;&gt;TPC-H SF100 total query runtimes for MonetDB/X100 and DuckDB&lt;/div&gt;
        &lt;h2 id=&quot;conclusion&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/06/duckdb-tpch-sf100-on-mobile.html#conclusion&quot;&gt;Conclusion&lt;/a&gt;
        &lt;/h2&gt;
        &lt;p&gt;It was a long journey from the original vectorized execution paper to running an analytical database on a phone.
        Many key innovations happened that allowed these results, and the big improvement in hardware is just one of them.
        Another crucial component is that compiler optimizations became a lot more sophisticated.
        Thanks to this, while the MonetDB/X100 system needed to use explicit SIMD, DuckDB can rely on the &lt;a href=&quot;https://en.wikipedia.org/wiki/Automatic_vectorization&quot;&gt;auto-vectorization&lt;/a&gt; of our (carefully constructed) loops.&lt;/p&gt;
        &lt;p&gt;All that&#39;s left is to answer questions that we posed at the beginning of our journey.
        Yes, DuckDB can run TPC-H SF100 on a mobile phone.
        And yes, in some cases it can even outperform a research prototype running on a high-end machine of 2004 – on a modern smartphone that fits in your pocket.&lt;/p&gt;
        &lt;p&gt;And with newer hardware, smarter compilers and yet-to-be-discovered database optimizations, future versions are only going to be faster.&lt;/p&gt;
        &lt;/div&gt;
        &lt;/div&gt;
        &lt;div class=&quot;toc_sidebar&quot;&gt;
        &lt;div class=&quot;toc_menu&quot;&gt;
        &lt;h5&gt;In this article&lt;/h5&gt;
        &lt;ul id=&quot;toc&quot; class=&quot;section-nav&quot;&gt;
        &lt;li class=&quot;toc-entry toc-h2&quot;&gt;&lt;a href=&quot;https://duckdb.org/2024/12/06/duckdb-tpch-sf100-on-mobile.html#a-song-of-dry-ice-and-fire&quot;&gt;A Song of Dry Ice and Fire&lt;/a&gt;&lt;/li&gt;
        &lt;li class=&quot;toc-entry toc-h2&quot;&gt;&lt;a href=&quot;https://duckdb.org/2024/12/06/duckdb-tpch-sf100-on-mobile.html#do-androids-dream-of-electric-ducks&quot;&gt;Do Androids Dream of Electric Ducks?&lt;/a&gt;&lt;/li&gt;
        &lt;li class=&quot;toc-entry toc-h2&quot;&gt;&lt;a href=&quot;https://duckdb.org/2024/12/06/duckdb-tpch-sf100-on-mobile.html#never-was-a-cloudy-day&quot;&gt;Never Was a Cloudy Day&lt;/a&gt;&lt;/li&gt;
        &lt;li class=&quot;toc-entry toc-h2&quot;&gt;&lt;a href=&quot;https://duckdb.org/2024/12/06/duckdb-tpch-sf100-on-mobile.html#summary-of-duckdb-results&quot;&gt;Summary of DuckDB Results&lt;/a&gt;&lt;/li&gt;
        &lt;li class=&quot;toc-entry toc-h2&quot;&gt;&lt;a href=&quot;https://duckdb.org/2024/12/06/duckdb-tpch-sf100-on-mobile.html#historical-context&quot;&gt;Historical Context&lt;/a&gt;&lt;/li&gt;
        &lt;li class=&quot;toc-entry toc-h2&quot;&gt;&lt;a href=&quot;https://duckdb.org/2024/12/06/duckdb-tpch-sf100-on-mobile.html#conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
        &lt;/div&gt;
        &lt;/div&gt;
      </description>
      <link>https://duckdb.org/2024/12/06/duckdb-tpch-sf100-on-mobile.html</link>
      <guid isPermaLink="false">https://duckdb.org/2024/12/06/duckdb-tpch-sf100-on-mobile.html</guid>
      <pubDate>Fri, 06 Dec 2024 00:00:00 GMT</pubDate>
      <author>Gabor Szarnyas, Laurens Kuiper, Hannes Mühleisen</author>
    </item>
    <item>
      <title>CSV Files: Dethroning Parquet as the Ultimate Storage File Format — or Not?</title>
      <description>&lt;div class=&quot;content&quot;&gt;
        &lt;div class=&quot;contentwidth&quot;&gt;
        &lt;h1&gt;CSV Files: Dethroning Parquet as the Ultimate Storage File Format — or Not?&lt;/h1&gt;
        &lt;div class=&quot;infoline&quot;&gt;
        &lt;div class=&quot;icon&quot;&gt;
        &lt;img src=&quot;https://duckdb.org/images/blog/authors/pedro_holanda.jpg&quot; alt=&quot;Author Avatar&quot; referrerpolicy=&quot;no-referrer&quot;&gt;
        &lt;/div&gt;
        &lt;div&gt;
        &lt;span class=&quot;author&quot;&gt;Pedro Holanda&lt;/span&gt;
        &lt;div class=&quot;publishedinfo&quot;&gt;
        &lt;span&gt;Published on&lt;/span&gt;
        &lt;span class=&quot;date&quot;&gt;2024-12-05&lt;/span&gt;
        &lt;/div&gt;
        &lt;/div&gt;
        &lt;/div&gt;
        &lt;div class=&quot;excerpt&quot;&gt;
        &lt;p&gt;&lt;em&gt;TL;DR: Data analytics primarily uses two types of storage format files: human-readable text files like CSV and performance-driven binary files like Parquet. This blog post compares these two formats in an ultimate showdown of performance and flexibility, where there can be only one winner.&lt;/em&gt;&lt;/p&gt;
        &lt;/div&gt;
        &lt;h2 id=&quot;file-formats&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/05/csv-files-dethroning-parquet-or-not.html#file-formats&quot;&gt;File Formats&lt;/a&gt;
        &lt;/h2&gt;
        &lt;h3 id=&quot;csv-files&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/05/csv-files-dethroning-parquet-or-not.html#csv-files&quot;&gt;CSV Files&lt;/a&gt;
        &lt;/h3&gt;
        &lt;p&gt;Data is most &lt;a href=&quot;https://www.vldb.org/pvldb/vol17/p3694-saxena.pdf&quot;&gt;commonly stored&lt;/a&gt; in human-readable file formats, like JSON or CSV files. These file formats are easy to operate on, since anyone with a text editor can simply open, alter, and understand them.&lt;/p&gt;
        &lt;p&gt;For many years, CSV files have had a bad reputation for being slow and cumbersome to work with. In practice, if you want to operate on a CSV file using your favorite database system, you must follow this recipe:&lt;/p&gt;
        &lt;ol&gt;
        &lt;li&gt;Manually discover its schema by opening the file in a text editor.&lt;/li&gt;
        &lt;li&gt;Create a table with the given schema.&lt;/li&gt;
        &lt;li&gt;Manually figure out the dialect of the file (e.g., which character is used for a quote?)&lt;/li&gt;
        &lt;li&gt;Load the file into the table using a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;COPY&lt;/code&gt; statement and with the dialect set.&lt;/li&gt;
        &lt;li&gt;Start querying it.&lt;/li&gt;
        &lt;/ol&gt;
        &lt;p&gt;Not only is this process tedious, but parallelizing a CSV file reader is &lt;a href=&quot;https://www.microsoft.com/en-us/research/uploads/prod/2019/04/chunker-sigmod19.pdf&quot;&gt;far from trivial&lt;/a&gt;. This means most systems either process it single-threaded or use a two-pass approach.&lt;/p&gt;
        &lt;p&gt;Additionally, &lt;a href=&quot;https://youtu.be/YrqSp8m7fmk?si=v5rmFWGJtpiU5_PX&amp;amp;t=624&quot;&gt;CSV files are wild&lt;/a&gt;: although &lt;a href=&quot;https://www.ietf.org/rfc/rfc4180.txt&quot;&gt;RFC-4180&lt;/a&gt; exists as a CSV standard, it is &lt;a href=&quot;https://aic.ai.wu.ac.at/~polleres/publications/mitl-etal-2016OBD.pdf&quot;&gt;commonly ignored&lt;/a&gt;. Systems must therefore be sufficiently robust to handle these files as if they come straight from the wild west.&lt;/p&gt;
        &lt;p&gt;Last but not least, CSV files are wasteful: data is always laid out as strings. For example, numeric values like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;1000000000&lt;/code&gt; take 10 bytes instead of 4 bytes if stored as an &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;int32&lt;/code&gt;. Additionally, since the data layout is row-wise, opportunities to apply &lt;a href=&quot;https://duckdb.org/2022/10/28/lightweight-compression.html&quot;&gt;lightweight columnar compression&lt;/a&gt; are lost.&lt;/p&gt;
        &lt;h3 id=&quot;parquet-files&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/05/csv-files-dethroning-parquet-or-not.html#parquet-files&quot;&gt;Parquet Files&lt;/a&gt;
        &lt;/h3&gt;
        &lt;p&gt;Due to these shortcomings, performance-driven file formats like Parquet have gained significant popularity in recent years. Parquet files cannot be opened by general text editors, cannot be easily edited, and have a rigid schema. However, they store data in columns, apply various compression techniques, partition the data into row groups, maintain statistics about these row groups, and define their schema directly in the file.&lt;/p&gt;
        &lt;p&gt;These features make Parquet a monolith of a file format — highly inflexible but efficient and fast. It is easy to read data from a Parquet file since the schema is well-defined. Parallelizing a scanner is straightforward, as each thread can independently process a row group. Filter pushdown is also simple to implement, as each row group contains statistical metadata, and the file sizes are very small.&lt;/p&gt;
        &lt;p&gt;The conclusion should be simple: if you have small files and need flexibility, CSV files are fine. However, for data analysis, one should pivot to Parquet files, right? Well, this pivot may not be a hard requirement anymore – read on to find out why!&lt;/p&gt;
        &lt;h2 id=&quot;reading-csv-files-in-duckdb&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/05/csv-files-dethroning-parquet-or-not.html#reading-csv-files-in-duckdb&quot;&gt;Reading CSV Files in DuckDB&lt;/a&gt;
        &lt;/h2&gt;
        &lt;p&gt;For the past few releases, DuckDB has doubled down on delivering not only an easy-to-use CSV scanner but also an extremely performant one. This scanner features its own custom &lt;a href=&quot;https://duckdb.org/2023/10/27/csv-sniffer.html&quot;&gt;CSV sniffer&lt;/a&gt;, parallelization algorithm, buffer manager, casting mechanisms, and state machine-based parser.&lt;/p&gt;
        &lt;p&gt;For usability, the previous paradigm of manual schema discovery and table creation has been changed. Instead, DuckDB now utilizes a CSV Sniffer, similar to those found in dataframe libraries like Pandas.
        This allows for querying CSV files as easily as:&lt;/p&gt;
        &lt;div class=&quot;language-sql highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&#39;path/to/file.csv&#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
        &lt;p&gt;Or tables to be created from CSV files, without any prior schema definition with:&lt;/p&gt;
        &lt;div class=&quot;language-sql highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;TABLE&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;t&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;AS&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&#39;path/to/file.csv&#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
        &lt;p&gt;Furthermore, the reader became one of the fastest CSV readers in analytical systems, as can be seen by the load times of the &lt;a href=&quot;https://github.com/ClickHouse/ClickBench/commit/0aba4247ce227b3058d22846ca39826d27262fe0&quot;&gt;latest iteration&lt;/a&gt; of &lt;a href=&quot;https://benchmark.clickhouse.com/#eyJzeXN0ZW0iOnsiQWxsb3lEQiI6ZmFsc2UsIkFsbG95REIgKHR1bmVkKSI6ZmFsc2UsIkF0aGVuYSAocGFydGl0aW9uZWQpIjpmYWxzZSwiQXRoZW5hIChzaW5nbGUpIjpmYWxzZSwiQXVyb3JhIGZvciBNeVNRTCI6ZmFsc2UsIkF1cm9yYSBmb3IgUG9zdGdyZVNRTCI6ZmFsc2UsIkJ5Q29uaXR5IjpmYWxzZSwiQnl0ZUhvdXNlIjpmYWxzZSwiY2hEQiAoRGF0YUZyYW1lKSI6ZmFsc2UsImNoREIgKFBhcnF1ZXQsIHBhcnRpdGlvbmVkKSI6ZmFsc2UsImNoREIiOmZhbHNlLCJDaXR1cyI6ZmFsc2UsIkNsaWNrSG91c2UgQ2xvdWQgKGF3cykiOmZhbHNlLCJDbGlja0hvdXNlIENsb3VkIChhenVyZSkiOmZhbHNlLCJDbGlja0hvdXNlIENsb3VkIChnY3ApIjpmYWxzZSwiQ2xpY2tIb3VzZSAoZGF0YSBsYWtlLCBwYXJ0aXRpb25lZCkiOmZhbHNlLCJDbGlja0hvdXNlIChkYXRhIGxha2UsIHNpbmdsZSkiOmZhbHNlLCJDbGlja0hvdXNlIChQYXJxdWV0LCBwYXJ0aXRpb25lZCkiOmZhbHNlLCJDbGlja0hvdXNlIChQYXJxdWV0LCBzaW5nbGUpIjpmYWxzZSwiQ2xpY2tIb3VzZSAod2ViKSI6ZmFsc2UsIkNsaWNrSG91c2UiOnRydWUsIkNsaWNrSG91c2UgKHR1bmVkKSI6dHJ1ZSwiQ2xpY2tIb3VzZSAodHVuZWQsIG1lbW9yeSkiOnRydWUsIkNsb3VkYmVycnkiOmZhbHNlLCJDcmF0ZURCIjpmYWxzZSwiQ3J1bmNoeSBCcmlkZ2UgZm9yIEFuYWx5dGljcyAoUGFycXVldCkiOmZhbHNlLCJEYXRhYmVuZCI6dHJ1ZSwiRGF0YUZ1c2lvbiAoUGFycXVldCwgcGFydGl0aW9uZWQpIjpmYWxzZSwiRGF0YUZ1c2lvbiAoUGFycXVldCwgc2luZ2xlKSI6ZmFsc2UsIkFwYWNoZSBEb3JpcyI6ZmFsc2UsIkRyaWxsIjpmYWxzZSwiRHJ1aWQiOmZhbHNlLCJEdWNrREIgKERhdGFGcmFtZSkiOmZhbHNlLCJEdWNrREIgKG1lbW9yeSkiOnRydWUsIkR1Y2tEQiAoUGFycXVldCwgcGFydGl0aW9uZWQpIjpmYWxzZSwiRHVja0RCIjpmYWxzZSwiRWxhc3RpY3NlYXJjaCI6ZmFsc2UsIkVsYXN0aWNzZWFyY2ggKHR1bmVkKSI6ZmFsc2UsIkdsYXJlREIiOmZhbHNlLCJHcmVlbnBsdW0iOmZhbHNlLCJIZWF2eUFJIjpmYWxzZSwiSHlkcmEiOmZhbHNlLCJJbmZvYnJpZ2h0IjpmYWxzZSwiS2luZXRpY2EiOmZhbHNlLCJNYXJpYURCIENvbHVtblN0b3JlIjpmYWxzZSwiTWFyaWFEQiI6ZmFsc2UsIk1vbmV0REIiOmZhbHNlLCJNb25nb0RCIjpmYWxzZSwiTW90aGVyRHVjayI6ZmFsc2UsIk15U1FMIChNeUlTQU0pIjpmYWxzZSwiTXlTUUwiOmZhbHNlLCJPY3RvU1FMIjpmYWxzZSwiT3hsYSI6ZmFsc2UsIlBhbmRhcyAoRGF0YUZyYW1lKSI6ZmFsc2UsIlBhcmFkZURCIChQYXJxdWV0LCBwYXJ0aXRpb25lZCkiOmZhbHNlLCJQYXJhZGVEQiAoUGFycXVldCwgc2luZ2xlKSI6ZmFsc2UsInBnX2R1Y2tkYiAoTW90aGVyRHVjayBlbmFibGVkKSI6ZmFsc2UsInBnX2R1Y2tkYiI6ZmFsc2UsIlBpbm90IjpmYWxzZSwiUG9sYXJzIChEYXRhRnJhbWUpIjpmYWxzZSwiUG9sYXJzIChQYXJxdWV0KSI6ZmFsc2UsIlBvc3RncmVTUUwgKHR1bmVkKSI6ZmFsc2UsIlBvc3RncmVTUUwiOmZhbHNlLCJRdWVzdERCIjp0cnVlLCJSZWRzaGlmdCI6ZmFsc2UsIlNlbGVjdERCIjpmYWxzZSwiU2luZ2xlU3RvcmUiOmZhbHNlLCJTbm93Zmxha2UiOmZhbHNlLCJTcGFyayI6ZmFsc2UsIlNRTGl0ZSI6ZmFsc2UsIlN0YXJSb2NrcyI6ZmFsc2UsIlRhYmxlc3BhY2UiOmZhbHNlLCJUZW1ibyBPTEFQIChjb2x1bW5hcikiOmZhbHNlLCJUaW1lc2NhbGUgQ2xvdWQiOmZhbHNlLCJUaW1lc2NhbGVEQiAobm8gY29sdW1uc3RvcmUpIjpmYWxzZSwiVGltZXNjYWxlREIiOmZhbHNlLCJUaW55YmlyZCAoRnJlZSBUcmlhbCkiOmZhbHNlLCJVbWJyYSI6dHJ1ZX0sInR5cGUiOnsiQyI6dHJ1ZSwiY29sdW1uLW9yaWVudGVkIjp0cnVlLCJQb3N0Z3JlU1FMIGNvbXBhdGlibGUiOnRydWUsIm1hbmFnZWQiOnRydWUsImdjcCI6dHJ1ZSwic3RhdGVsZXNzIjp0cnVlLCJKYXZhIjp0cnVlLCJDKysiOnRydWUsIk15U1FMIGNvbXBhdGlibGUiOnRydWUsInJvdy1vcmllbnRlZCI6dHJ1ZSwiQ2xpY2tIb3VzZSBkZXJpdmF0aXZlIjp0cnVlLCJlbWJlZGRlZCI6dHJ1ZSwic2VydmVybGVzcyI6dHJ1ZSwiZGF0YWZyYW1lIjp0cnVlLCJhd3MiOnRydWUsImF6dXJlIjp0cnVlLCJhbmFseXRpY2FsIjp0cnVlLCJSdXN0Ijp0cnVlLCJzZWFyY2giOnRydWUsImRvY3VtZW50Ijp0cnVlLCJHbyI6dHJ1ZSwic29tZXdoYXQgUG9zdGdyZVNRTCBjb21wYXRpYmxlIjp0cnVlLCJEYXRhRnJhbWUiOnRydWUsInBhcnF1ZXQiOnRydWUsInRpbWUtc2VyaWVzIjp0cnVlfSwibWFjaGluZSI6eyIxNiB2Q1BVIDEyOEdCIjpmYWxzZSwiOCB2Q1BVIDY0R0IiOmZhbHNlLCJzZXJ2ZXJsZXNzIjpmYWxzZSwiMTZhY3UiOmZhbHNlLCJjNmEuNHhsYXJnZSwgNTAwZ2IgZ3AyIjpmYWxzZSwiTCI6ZmFsc2UsIk0iOmZhbHNlLCJTIjpmYWxzZSwiWFMiOmZhbHNlLCJjNmEubWV0YWwsIDUwMGdiIGdwMiI6dHJ1ZSwiMTkyR0IiOmZhbHNlLCIyNEdCIjpmYWxzZSwiMzYwR0IiOmZhbHNlLCI0OEdCIjpmYWxzZSwiNzIwR0IiOmZhbHNlLCI5NkdCIjpmYWxzZSwiZGV2IjpmYWxzZSwiNzA4R0IiOmZhbHNlLCJjNW4uNHhsYXJnZSwgNTAwZ2IgZ3AyIjpmYWxzZSwiQW5hbHl0aWNzLTI1NkdCICg2NCB2Q29yZXMsIDI1NiBHQikiOmZhbHNlLCJjNS40eGxhcmdlLCA1MDBnYiBncDIiOmZhbHNlLCJjNmEuNHhsYXJnZSwgMTUwMGdiIGdwMiI6ZmFsc2UsImNsb3VkIjpmYWxzZSwiZGMyLjh4bGFyZ2UiOmZhbHNlLCJyYTMuMTZ4bGFyZ2UiOmZhbHNlLCJyYTMuNHhsYXJnZSI6ZmFsc2UsInJhMy54bHBsdXMiOmZhbHNlLCJTMiI6ZmFsc2UsIlMyNCI6ZmFsc2UsIjJYTCI6ZmFsc2UsIjNYTCI6ZmFsc2UsIjRYTCI6ZmFsc2UsIlhMIjpmYWxzZSwiTDEgLSAxNkNQVSAzMkdCIjpmYWxzZSwiYzZhLjR4bGFyZ2UsIDUwMGdiIGdwMyI6ZmFsc2UsIjE2IHZDUFUgNjRHQiI6ZmFsc2UsIjQgdkNQVSAxNkdCIjpmYWxzZSwiOCB2Q1BVIDMyR0IiOmZhbHNlfSwiY2x1c3Rlcl9zaXplIjp7IjEiOnRydWUsIjIiOmZhbHNlLCI0IjpmYWxzZSwiOCI6ZmFsc2UsIjE2IjpmYWxzZSwiMzIiOmZhbHNlLCI2NCI6ZmFsc2UsIjEyOCI6ZmFsc2UsInNlcnZlcmxlc3MiOmZhbHNlLCJ1bmRlZmluZWQiOmZhbHNlfSwibWV0cmljIjoibG9hZCIsInF1ZXJpZXMiOlt0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlXX0=&quot;&gt;ClickBench&lt;/a&gt;. In this benchmark, the data is loaded from an &lt;a href=&quot;https://datasets.clickhouse.com/hits_compatible/hits.csv.gz&quot;&gt;82 GB uncompressed CSV file&lt;/a&gt; into a database table.&lt;/p&gt;
        &lt;div align=&quot;center&quot;&gt;
        &lt;img src=&quot;https://duckdb.org/images/blog/csv-vs-parquet-clickbench.png&quot; alt=&quot;Image showing the ClickBench result 2024-12-05&quot; width=&quot;800px&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;
        &lt;div align=&quot;center&quot;&gt;ClickBench CSV loading times (2024-12-05)&lt;/div&gt;
        &lt;h2 id=&quot;comparing-csv-and-parquet&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/05/csv-files-dethroning-parquet-or-not.html#comparing-csv-and-parquet&quot;&gt;Comparing CSV and Parquet&lt;/a&gt;
        &lt;/h2&gt;
        &lt;p&gt;With the large boost in usability and performance for the CSV reader, one might ask: what is the actual difference in performance when loading a CSV file compared to a Parquet file into a table? Additionally, how do these formats differ when running queries directly on them?&lt;/p&gt;
        &lt;p&gt;To find out, we will run a few examples using both CSV and Parquet files containing TPC-H data to shed light on their differences. All scripts used to generate the benchmarks of this blogpost can be found in a &lt;a href=&quot;https://github.com/pdet/csv_vs_parquet&quot;&gt;repository&lt;/a&gt;.&lt;/p&gt;
        &lt;h3 id=&quot;usability&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/05/csv-files-dethroning-parquet-or-not.html#usability&quot;&gt;Usability&lt;/a&gt;
        &lt;/h3&gt;
        &lt;p&gt;In terms of usability, scanning CSV files and Parquet files can differ significantly.&lt;/p&gt;
        &lt;p&gt;In simple cases, where all options are correctly detected by DuckDB, running queries on either CSV or Parquet files can be done directly.&lt;/p&gt;
        &lt;div class=&quot;language-sql highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&#39;path/to/file.csv&#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&#39;path/to/file.parquet&#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
        &lt;p&gt;Things can differ drastically for wild, rule-breaking &lt;a href=&quot;https://reddead.fandom.com/wiki/Arthur_Morgan&quot;&gt;Arthur Morgan&lt;/a&gt;-like CSV files. This is evident from the number of parameters that can be set for each scanner. The &lt;a href=&quot;https://duckdb.org/docs/data/parquet/overview.html&quot;&gt;Parquet&lt;/a&gt; scanner has a total of six parameters that can alter how the file is read. For the majority of cases, the user will never need to manually adjust any of them.&lt;/p&gt;
        &lt;p&gt;The CSV reader, on the other hand, depends on the sniffer being able to automatically detect many different configuration options. For example: What is the delimiter? How many rows should it skip from the top of the file? Are there any comments? And so on. This results in over &lt;a href=&quot;https://duckdb.org/docs/data/csv/overview.html&quot;&gt;30 configuration options&lt;/a&gt; that the user might have to manually adjust to properly parse their CSV file. Again, this number of options is necessary due to the lack of a widely adopted standard. However, in most scenarios, users can rely on the sniffer or, at most, change one or two options.&lt;/p&gt;
        &lt;p&gt;The CSV reader also has an extensive error-handling system and will always provide suggestions for options to review if something goes wrong.&lt;/p&gt;
        &lt;p&gt;To give you an example of how the DuckDB error-reporting system works, consider the following CSV file:&lt;/p&gt;
        &lt;pre&gt;&lt;code class=&quot;language-csv&quot;&gt;Clint Eastwood;94
        Samuel L. Jackson
        &lt;/code&gt;&lt;/pre&gt;
        &lt;p&gt;In this file, the second line is missing the value for the second column.&lt;/p&gt;
        &lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;high

@github-actions github-actions bot added the Auto: Route Test Complete Auto route test has finished on given PR label Dec 10, 2024
Copy link
Contributor

Successfully generated as following:

http://localhost:1200/duckdb/news - Success ✔️
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <title>DuckDB News</title>
    <link>https://duckdb.org/news/</link>
    <atom:link href="http://localhost:1200/duckdb/news" rel="self" type="application/rss+xml"></atom:link>
    <description>DuckDB News - Powered by RSSHub</description>
    <generator>RSSHub</generator>
    <webMaster>[email protected] (RSSHub)</webMaster>
    <language>en</language>
    <lastBuildDate>Tue, 10 Dec 2024 17:30:49 GMT</lastBuildDate>
    <ttl>5</ttl>
    <item>
      <title>The DuckDB Avro Extension</title>
      <description>&lt;div class=&quot;excerpt&quot;&gt;
        &lt;p&gt;&lt;em&gt;TL;DR: DuckDB now supports reading Avro files through the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;avro&lt;/code&gt; Community Extension.&lt;/em&gt;&lt;/p&gt;
        &lt;/div&gt;
        &lt;h2 id=&quot;the-apache-avro-format&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#the-apache-avro-format&quot;&gt;The Apache™ Avro™ Format&lt;/a&gt;
        &lt;/h2&gt;
        &lt;p&gt;&lt;a href=&quot;https://avro.apache.org/&quot;&gt;Avro&lt;/a&gt; is a binary format for record data. Like many innovations in the data space, Avro was &lt;a href=&quot;https://vimeo.com/7362534&quot;&gt;developed&lt;/a&gt; by &lt;a href=&quot;https://en.wikipedia.org/wiki/Doug_Cutting&quot;&gt;Doug Cutting&lt;/a&gt; as part of the Apache Hadoop project &lt;a href=&quot;https://github.com/apache/hadoop/commit/8296413d4988c08343014c6808a30e9d5e441bfc&quot;&gt;in around 2009&lt;/a&gt;. Avro gets its name – somewhat obscurely – from a defunct &lt;a href=&quot;https://en.wikipedia.org/wiki/Avro&quot;&gt;British aircraft manufacturer&lt;/a&gt;. The company famously built over 7,000 &lt;a href=&quot;https://en.wikipedia.org/wiki/Avro_Lancaster&quot;&gt;Avro Lancaster heavy bombers&lt;/a&gt; under the challenging conditions of World War 2. But we digress.&lt;/p&gt;
        &lt;p&gt;The Avro format is yet another attempt to solve the dimensionality reduction problem that occurs when transforming a complex &lt;em&gt;multi-dimensional data structure&lt;/em&gt; like tables (possibly with nested types) to a &lt;em&gt;single-dimensional storage layout&lt;/em&gt; like a flat file, which is just a sequence of bytes. The most fundamental question that arises here is whether to use a columnar or a row-major layout. Avro uses a row-major layout, which differentiates it from its famous cousin, the &lt;a href=&quot;https://parquet.apache.org/&quot;&gt;Apache™ Parquet™&lt;/a&gt; format. There are valid use cases for a row-major format: for example, appending a few rows to a Parquet file is difficult and inefficient because of Parquet&#39;s columnar layout and due to the fact the Parquet metadata is stored &lt;em&gt;at the back&lt;/em&gt; of the file. In a row-major format like Avro with the metadata &lt;em&gt;up top&lt;/em&gt;, we can “just” add those rows to the end of the files and we&#39;re done. This enables Avro to handle appends of a few rows somewhat efficiently.&lt;/p&gt;
        &lt;p&gt;Avro-encoded data can appear in several ways, e.g., in &lt;a href=&quot;https://en.wikipedia.org/wiki/Remote_procedure_call&quot;&gt;RPC messages&lt;/a&gt; but also in files. In the following, we focus on files since those survive long-term.&lt;/p&gt;
        &lt;h3 id=&quot;header-block&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#header-block&quot;&gt;Header Block&lt;/a&gt;
        &lt;/h3&gt;
        &lt;p&gt;Avro “object container” files are encoded using a comparatively simple binary &lt;a href=&quot;https://avro.apache.org/docs/++version++/specification/#object-container-files&quot;&gt;format&lt;/a&gt;: each file starts with a &lt;strong&gt;header block&lt;/strong&gt; that first has the &lt;a href=&quot;https://en.wikipedia.org/wiki/List_of_file_signatures&quot;&gt;magic bytes&lt;/a&gt; &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Obj1&lt;/code&gt;. Then, a metadata “map” (a list of string-bytearray key-value pairs) follows. The map is only strictly required to contain a single entry for the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;avro.schema&lt;/code&gt; key. This key contains the Avro file schema encoded as JSON. Here is an example for such a schema:&lt;/p&gt;
        &lt;div class=&quot;language-json highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;namespace&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;example.avro&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;type&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;record&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;name&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;User&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;fields&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;name&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;name&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;type&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;string&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;},&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;name&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;favorite_number&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;type&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;int&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;null&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]},&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;name&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;favorite_color&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;type&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;string&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;null&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
        &lt;p&gt;The Avro schema defines a record structure. Records can contain scalar data fields (like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;int&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;double&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;string&lt;/code&gt;, etc.) but also more complex types like records (similar to &lt;a href=&quot;https://duckdb.org/docs/sql/data_types/struct.html&quot;&gt;DuckDB &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;STRUCT&lt;/code&gt;s&lt;/a&gt;), unions and lists. As a sidenote, it is quite strange that a data format for the definition of record structures would fall back to another format like JSON to describe itself, but such are the oddities of Avro.&lt;/p&gt;
        &lt;h3 id=&quot;data-blocks&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#data-blocks&quot;&gt;Data Blocks&lt;/a&gt;
        &lt;/h3&gt;
        &lt;p&gt;The header concludes with 16 randomly chosen bytes as a “sync marker”. The header is followed by an arbitrary amount of &lt;strong&gt;data blocks&lt;/strong&gt;: each data block starts with a record count, followed by a size and a byte array containing the actual records. Optionally, the bytes can be compressed with deflate (gzip), which will be known from the header metadata.&lt;/p&gt;
        &lt;p&gt;The data bytes can only be decoded using the schema. The &lt;a href=&quot;https://avro.apache.org/docs/++version++/specification/#object-container-files&quot;&gt;object file specification&lt;/a&gt; contains the details on how each type is encoded. For example, in the example schema we know each value is a record of three fields. The root-level record will encode its entries in the order they are declared. There are no actual bytes required for this. First we will be reading the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;name&lt;/code&gt; field. Strings consist of a length followed by the string bytes. Like other formats (e.g., Thrift), Avro uses &lt;a href=&quot;https://en.wikipedia.org/wiki/Variable-length_quantity#Zigzag_encoding&quot;&gt;variable-length integers with zigzag encoding&lt;/a&gt; to store lengths and counts and the like. After reading the string, we can proceed to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;favorite_number&lt;/code&gt;. This field is a union type (encoded with the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[]&lt;/code&gt; syntax). This union can have values of two types, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;int&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;null&lt;/code&gt;. The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;null&lt;/code&gt; type is a bit odd, it can only be used to encode the fact that a value is missing. To decode the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;favorite_number&lt;/code&gt; fields, we first read an &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;int&lt;/code&gt; that encodes which choice of the union was used. Afterward, we use the “normal” decoders to read the values (e.g., &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;int&lt;/code&gt; or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;null&lt;/code&gt;). The same can be done for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;favorite_color&lt;/code&gt;. Each data block again ends with the sync marker. The sync marker can be used to verify that the block was fully written and that there is no garbage in the file.&lt;/p&gt;
        &lt;h2 id=&quot;the-duckdb-avro-community-extension&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#the-duckdb-avro-community-extension&quot;&gt;The DuckDB &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;avro&lt;/code&gt; Community Extension&lt;/a&gt;
        &lt;/h2&gt;
        &lt;p&gt;We have developed a DuckDB Community Extension that enables DuckDB to &lt;em&gt;read&lt;/em&gt; &lt;a href=&quot;https://avro.apache.org/&quot;&gt;Apache Avro™&lt;/a&gt; files.&lt;/p&gt;
        &lt;p&gt;The extension does not contain Avro &lt;em&gt;write&lt;/em&gt; functionality. This is on purpose, by not providing a writer we hope to decrease the amount of Avro files in the world over time.&lt;/p&gt;
        &lt;h3 id=&quot;installation--loading&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#installation--loading&quot;&gt;Installation &amp;amp; Loading&lt;/a&gt;
        &lt;/h3&gt;
        &lt;p&gt;Installation is simple through the DuckDB Community Extension repository, just type&lt;/p&gt;
        &lt;div class=&quot;language-sql highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;INSTALL&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;avro&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;community&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;LOAD&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;avro&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
        &lt;p&gt;in a DuckDB instance near you. There is currently no build for Wasm because of dependencies (sigh).&lt;/p&gt;
        &lt;h3 id=&quot;the-read_avro-function&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#the-read_avro-function&quot;&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;read_avro&lt;/code&gt; Function&lt;/a&gt;
        &lt;/h3&gt;
        &lt;p&gt;The extension adds a single DuckDB function, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;read_avro&lt;/code&gt;. This function can be used like so:&lt;/p&gt;
        &lt;div class=&quot;language-sql highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;read_avro&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&#39;some_example_file.avro&#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
        &lt;p&gt;This function will expose the contents of the Avro file as a DuckDB table. You can then use any arbitrary SQL constructs to further transform this table.&lt;/p&gt;
        &lt;h3 id=&quot;file-io&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#file-io&quot;&gt;File IO&lt;/a&gt;
        &lt;/h3&gt;
        &lt;p&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;read_avro&lt;/code&gt; function is integrated into DuckDB&#39;s file system abstraction, meaning you can read Avro files directly from e.g., HTTP or S3 sources. For example:&lt;/p&gt;
        &lt;div class=&quot;language-sql highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;read_avro&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&#39;http://blobs.duckdb.org/data/userdata1.avro&#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;read_avro&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&#39;s3://my-example-bucket/some_example_file.avro&#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
        &lt;p&gt;should “just” work.&lt;/p&gt;
        &lt;p&gt;You can also &lt;a href=&quot;https://duckdb.org/docs/sql/functions/pattern_matching.html#globbing&quot;&gt;&lt;em&gt;glob&lt;/em&gt; multiple files&lt;/a&gt; in a single read call or pass a list of files to the functions:&lt;/p&gt;
        &lt;div class=&quot;language-sql highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;read_avro&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&#39;some_example_file_*.avro&#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;read_avro&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;([&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&#39;some_example_file_1.avro&#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&#39;some_example_file_2.avro&#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]);&lt;/span&gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
        &lt;p&gt;If the filenames somehow contain valuable information (as is unfortunately all-too-common), you can pass the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;filename&lt;/code&gt; argument to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;read_avro&lt;/code&gt;:&lt;/p&gt;
        &lt;div class=&quot;language-sql highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;read_avro&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&#39;some_example_file_*.avro&#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;filename&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;true&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
        &lt;p&gt;This will result in an additional column in the result set that contains the actual filename of the Avro file.&lt;/p&gt;
        &lt;h3 id=&quot;schema-conversion&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#schema-conversion&quot;&gt;Schema Conversion&lt;/a&gt;
        &lt;/h3&gt;
        &lt;p&gt;This extension automatically translates the Avro Schema to the DuckDB schema. &lt;em&gt;All&lt;/em&gt; Avro types can be translated, except for &lt;em&gt;recursive type definitions&lt;/em&gt;, which DuckDB does not support.&lt;/p&gt;
        &lt;p&gt;The type mapping is very straightforward except for Avro&#39;s “unique” way of handling &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;NULL&lt;/code&gt;. Unlike other systems, Avro does not treat &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;NULL&lt;/code&gt; as a possible value in a range of e.g., &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;INTEGER&lt;/code&gt; but instead represents &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;NULL&lt;/code&gt; as a union of the actual type with a special &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;NULL&lt;/code&gt; type. This is different to DuckDB, where any value can be &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;NULL&lt;/code&gt;. Of course DuckDB also supports &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;UNION&lt;/code&gt; types, but this would be quite cumbersome to work with.&lt;/p&gt;
        &lt;p&gt;This extension &lt;em&gt;simplifies&lt;/em&gt; the Avro schema where possible: an Avro union of any type and the special null type is simplified to just the non-null type. For example, an Avro record of the union type &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[&quot;int&quot;, &quot;null&quot;]&lt;/code&gt; (like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;favorite_number&lt;/code&gt; in the &lt;a href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#header-block&quot;&gt;example&lt;/a&gt;) becomes a DuckDB &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;INTEGER&lt;/code&gt;, which just happens to be &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;NULL&lt;/code&gt; sometimes. Similarly, an Avro union that contains only a single type is converted to the type it contains. For example, an Avro record of the union type &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[&quot;int&quot;]&lt;/code&gt; also becomes a DuckDB &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;INTEGER&lt;/code&gt;.&lt;/p&gt;
        &lt;p&gt;The extension also “flattens” the Avro schema. Avro defines tables as root-level “record” fields, which are the same as DuckDB &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;STRUCT&lt;/code&gt; fields. For more convenient handling, this extension turns the entries of a single top-level record into top-level columns.&lt;/p&gt;
        &lt;h3 id=&quot;implementation&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#implementation&quot;&gt;Implementation&lt;/a&gt;
        &lt;/h3&gt;
        &lt;p&gt;Internally, this extension uses the “official” &lt;a href=&quot;https://avro.apache.org/docs/++version++/api/c/&quot;&gt;Apache Avro C API&lt;/a&gt;, albeit with some minor patching to allow reading Avro files from memory.&lt;/p&gt;
        &lt;h3 id=&quot;limitations--next-steps&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#limitations--next-steps&quot;&gt;Limitations &amp;amp; Next Steps&lt;/a&gt;
        &lt;/h3&gt;
        &lt;p&gt;In the following, we disclose the limitations of the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;avro&lt;/code&gt; DuckDB extension along with our plans to mitigate them in the future:&lt;/p&gt;
        &lt;ul&gt;
        &lt;li&gt;
        &lt;p&gt;The extension currently does not make use of &lt;strong&gt;parallelism&lt;/strong&gt; when reading either a single (large) Avro file or when reading a list of files. Adding support for parallelism in the latter case is on the roadmap.&lt;/p&gt;
        &lt;/li&gt;
        &lt;li&gt;
        &lt;p&gt;There is currently no support for projection or filter &lt;strong&gt;pushdown&lt;/strong&gt;, but this is also planned at a later stage.&lt;/p&gt;
        &lt;/li&gt;
        &lt;li&gt;
        &lt;p&gt;There is currently no support for the Wasm or the Windows-MinGW builds of DuckDB due to issues with the Avro library dependency (sigh again). We plan to fix this eventually.&lt;/p&gt;
        &lt;/li&gt;
        &lt;li&gt;
        &lt;p&gt;As mentioned above, DuckDB cannot express recursive type definitions that Avro has. This is unlikely to ever change.&lt;/p&gt;
        &lt;/li&gt;
        &lt;li&gt;
        &lt;p&gt;There is no support to allow users to provide a separate Avro schema file. This is unlikely to change, all Avro files we have seen so far had their schema embedded.&lt;/p&gt;
        &lt;/li&gt;
        &lt;li&gt;
        &lt;p&gt;There is currently no support for the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;union_by_name&lt;/code&gt; flag that other readers in DuckDB support. This is planned for the future.&lt;/p&gt;
        &lt;/li&gt;
        &lt;/ul&gt;
        &lt;h2 id=&quot;conclusion&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/09/duckdb-avro-extension.html#conclusion&quot;&gt;Conclusion&lt;/a&gt;
        &lt;/h2&gt;
        &lt;p&gt;The new &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;avro&lt;/code&gt; Community Extension for DuckDB enables DuckDB to read Avro files directly as if they were tables. If you have a bunch of Avro files, go ahead and try it out! We&#39;d love to &lt;a href=&quot;https://github.com/hannes/duckdb_avro/issues&quot;&gt;hear from you&lt;/a&gt; if you run into any issues.&lt;/p&gt;
      </description>
      <link>https://duckdb.org/2024/12/09/duckdb-avro-extension.html</link>
      <guid isPermaLink="false">https://duckdb.org/2024/12/09/duckdb-avro-extension.html</guid>
      <pubDate>Mon, 09 Dec 2024 00:00:00 GMT</pubDate>
      <author>Hannes Mühleisen</author>
    </item>
    <item>
      <title>DuckDB: Running TPC-H SF100 on Mobile Phones</title>
      <description>&lt;div class=&quot;excerpt&quot;&gt;
        &lt;p&gt;&lt;em&gt;TL;DR: DuckDB runs on mobile platforms such as iOS and Android, and completes the TPC-H benchmark faster than state-of-the-art research systems on big iron machines 20 years ago.&lt;/em&gt;&lt;/p&gt;
        &lt;/div&gt;
        &lt;p&gt;A few weeks ago, we set out to perform a series of experiments to answer two simple questions:&lt;/p&gt;
        &lt;ol&gt;
        &lt;li&gt;Can DuckDB complete the TPC-H queries on the SF100 data set when running on a new smartphone?&lt;/li&gt;
        &lt;li&gt;If so, can DuckDB complete a run in less than 400 seconds, i.e., faster than the system in the research paper that originally introduced vectorized query processing?&lt;/li&gt;
        &lt;/ol&gt;
        &lt;p&gt;These questions took us on an interesting quest.
        Along the way, we had a lot of fun and learned the difference between a cold run and a &lt;em&gt;really cold&lt;/em&gt; run.
        Read on to find out more.&lt;/p&gt;
        &lt;h2 id=&quot;a-song-of-dry-ice-and-fire&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/06/duckdb-tpch-sf100-on-mobile.html#a-song-of-dry-ice-and-fire&quot;&gt;A Song of Dry Ice and Fire&lt;/a&gt;
        &lt;/h2&gt;
        &lt;p&gt;Our first attempt was to use an iPhone, namely an &lt;a href=&quot;https://www.gsmarena.com/apple_iphone_16_pro-13315.php&quot;&gt;iPhone 16 Pro&lt;/a&gt;.
        This phone has 8 GB memory and a 6-core CPU with 2 performance cores (running at 4.05 GHz) and 4 efficiency cores (running at 2.42 GHz).&lt;/p&gt;
        &lt;p&gt;We implemented the application using the &lt;a href=&quot;https://duckdb.org/docs/api/swift.html&quot;&gt;DuckDB Swift client&lt;/a&gt; and loaded the benchmark on the phone, all 30 GB of it.
        We quickly found that the iPhone can indeed run the workload without any problems – except that it heated up during the workload. This prompted the phone to perform thermal throttling, slowing down the CPU to reduce heat production. Due to this, DuckDB took 615.1 seconds. Not bad but not enough to reach our goal.&lt;/p&gt;
        &lt;p&gt;The results got us thinking: what if we improve the cooling of the phone? To this end, we purchased a box of dry ice, which has a temperature below -50 degrees Celsius, and put the phone in the box for the duration of the experiments.&lt;/p&gt;
        &lt;div align=&quot;center&quot;&gt;
        &lt;img src=&quot;https://duckdb.org/images/blog/tpch-mobile/ice-cooled-iphone-1.jpg&quot; alt=&quot;iPhone in a box of dry ice, running TPC-H&quot; width=&quot;600px&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;
        &lt;div align=&quot;center&quot;&gt;iPhone in a box of dry ice, running TPC-H. Don&#39;t try this at home.&lt;/div&gt;
        &lt;p&gt;This helped a lot: DuckDB completed in 478.2 seconds. This is a more than 20% improvement – but we still didn&#39;t manage to be under 400 seconds.&lt;/p&gt;
        &lt;div align=&quot;center&quot;&gt;
        &lt;img src=&quot;https://duckdb.org/images/blog/tpch-mobile/ice-cooled-iphone-2.jpg&quot; alt=&quot;The phone with icing on it, a few minutes after finishing the benchmark&quot; width=&quot;300px&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;
        &lt;div align=&quot;center&quot;&gt;The phone a few minutes after finishing the benchmark. It no longer booted because the battery was too cold!&lt;/div&gt;
        &lt;h2 id=&quot;do-androids-dream-of-electric-ducks&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/06/duckdb-tpch-sf100-on-mobile.html#do-androids-dream-of-electric-ducks&quot;&gt;Do Androids Dream of Electric Ducks?&lt;/a&gt;
        &lt;/h2&gt;
        &lt;p&gt;In our next experiment, we picked up a &lt;a href=&quot;https://www.gsmarena.com/samsung_galaxy_s24_ultra-12771.php&quot;&gt;Samsung Galaxy S24 Ultra phone&lt;/a&gt;, which runs Android 14. This phone is full of interesting hardware. First, it has an 8-core CPU with 4 different core types (1×3.39 GHz, 3×3.10 GHz, 2×2.90 GHz and 2×2.20 GHz). Second, it has a huge amount of RAM – 12 GB to be precise. Finally, its cooling system includes a &lt;a href=&quot;https://www.sammobile.com/news/galaxy-s24-sustain-performance-bigger-vapor-chamber/&quot;&gt;vapor chamber&lt;/a&gt; for improved heat dissipation.&lt;/p&gt;
        &lt;p&gt;We ran DuckDB in the &lt;a href=&quot;https://termux.dev/en/&quot;&gt;Termux terminal emulator&lt;/a&gt;. We compiled DuckDB &lt;a href=&quot;https://duckdb.org/docs/api/cli/overview.html&quot;&gt;CLI client&lt;/a&gt; from source following the &lt;a href=&quot;https://duckdb.org/docs/dev/building/build_instructions.html#android&quot;&gt;Android build instructions&lt;/a&gt; and ran the experiments from the command line.&lt;/p&gt;
        &lt;div align=&quot;center&quot;&gt;
        &lt;img src=&quot;https://duckdb.org/images/blog/tpch-mobile/duckdb-termux-android-emulator.png&quot; alt=&quot;Screenshot of DuckDB in Termux, running in the Android emulator&quot; width=&quot;600px&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;
        &lt;div align=&quot;center&quot;&gt;DuckDB in Termux, running in the Android emulator&lt;/div&gt;
        &lt;p&gt;In the end, it wasn&#39;t even close. The Android phone completed the benchmark in 235.0 seconds, outperforming our baseline by around 40%.&lt;/p&gt;
        &lt;h2 id=&quot;never-was-a-cloudy-day&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/06/duckdb-tpch-sf100-on-mobile.html#never-was-a-cloudy-day&quot;&gt;Never Was a Cloudy Day&lt;/a&gt;
        &lt;/h2&gt;
        &lt;p&gt;The results got us thinking: how do the results stack up among cloud servers? We picked two x86-based cloud instances in AWS EC2 with instance-attached NVMe storage.&lt;/p&gt;
        &lt;p&gt;The details of these benchmarks are far less interesting than those of the previous ones. We booted up the instances with Ubuntu 24.04 and ran DuckDB in the command line. We found that an &lt;a href=&quot;https://instances.vantage.sh/aws/ec2/r6id.large&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;r6id.large&lt;/code&gt; instance&lt;/a&gt; (2 vCPUs with 16 GB RAM) completes the queries in 570.8 seconds, which is roughly on-par with an air-cooled iPhone. However, an &lt;a href=&quot;https://instances.vantage.sh/aws/ec2/r6id.xlarge&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;r6id.xlarge&lt;/code&gt;&lt;/a&gt; (4 vCPUs with 32 GB RAM) completes the benchmark in 166.2 seconds, faster than any result we achieved on phones.&lt;/p&gt;
        &lt;h2 id=&quot;summary-of-duckdb-results&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/06/duckdb-tpch-sf100-on-mobile.html#summary-of-duckdb-results&quot;&gt;Summary of DuckDB Results&lt;/a&gt;
        &lt;/h2&gt;
        &lt;p&gt;The table contains a summary of the DuckDB benchmark results.&lt;/p&gt;
        &lt;table&gt;
        &lt;thead&gt;
        &lt;tr&gt;
        &lt;th&gt;Setup&lt;/th&gt;
        &lt;th style=&quot;text-align: right&quot;&gt;CPU cores&lt;/th&gt;
        &lt;th style=&quot;text-align: right&quot;&gt;Memory&lt;/th&gt;
        &lt;th style=&quot;text-align: right&quot;&gt;Runtime&lt;/th&gt;
        &lt;/tr&gt;
        &lt;/thead&gt;
        &lt;tbody&gt;
        &lt;tr&gt;
        &lt;td&gt;iPhone 16 Pro (air-cooled)&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;6&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;8 GB&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;615.1 s&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
        &lt;td&gt;iPhone 16 Pro (dry ice-cooled)&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;6&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;8 GB&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;478.2 s&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
        &lt;td&gt;Samsung Galaxy S24 Ultra&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;8&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;12 GB&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;235.0 s&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
        &lt;td&gt;AWS EC2 &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;r6id.large&lt;/code&gt;&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;2&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;16 GB&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;570.8 s&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
        &lt;td&gt;AWS EC2 &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;r6id.xlarge&lt;/code&gt;&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;4&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;32 GB&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;166.2 s&lt;/td&gt;
        &lt;/tr&gt;
        &lt;/tbody&gt;
        &lt;/table&gt;
        &lt;h2 id=&quot;historical-context&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/06/duckdb-tpch-sf100-on-mobile.html#historical-context&quot;&gt;Historical Context&lt;/a&gt;
        &lt;/h2&gt;
        &lt;p&gt;So why did we set out to run these experiments in the first place?&lt;/p&gt;
        &lt;p&gt;Just a few weeks ago, &lt;a href=&quot;https://cwi.nl/&quot;&gt;CWI&lt;/a&gt;, the birthplace of DuckDB, held a ceremony for the &lt;a href=&quot;https://www.cwi.nl/en/events/dijkstra-awards/cwi-lectures-dijkstra-fellowship/&quot;&gt;Dijkstra Fellowship&lt;/a&gt;.
        The fellowship was awarded to Marcin Żukowski for his pioneering role in the development of database management systems and his successful entrepreneurial career that resulted in systems such as &lt;a href=&quot;https://en.wikipedia.org/wiki/Actian_Vector&quot;&gt;VectorWise&lt;/a&gt; and &lt;a href=&quot;https://en.wikipedia.org/wiki/Snowflake_Inc.&quot;&gt;Snowflake&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;A lot of ideas that originate in Marcin&#39;s research are used in DuckDB. Most importantly, &lt;em&gt;vectorized query processing&lt;/em&gt; allows DuckDB to be both fast and portable at the same time.
        With his co-authors Peter Boncz and Niels Nes, he first described this paradigm in the CIDR 2005 paper &lt;a href=&quot;https://www.cidrdb.org/cidr2005/papers/P19.pdf&quot;&gt;“MonetDB/X100: Hyper-Pipelining Query Execution”&lt;/a&gt;.&lt;/p&gt;
        &lt;blockquote&gt;
        &lt;p&gt;The terms &lt;em&gt;vectorization,&lt;/em&gt; &lt;em&gt;hyper-pipelining,&lt;/em&gt; and &lt;em&gt;superscalar&lt;/em&gt; refer to the same idea: processing data in slices, which turns out to be a good compromise between row-at-a-time or column-at-a-time. DuckDB&#39;s query engine uses the same principle.&lt;/p&gt;
        &lt;/blockquote&gt;
        &lt;p&gt;This paper was published in January 2005, so it&#39;s safe to assume that it was finalized in late 2004 – almost exactly 20 years ago!&lt;/p&gt;
        &lt;p&gt;If we read the paper, we learn that the experiments were carried out on an HP workstation equipped with 12 GB of memory (the same amount as the Samsung phone has today!).
        It also had an Itanium CPU and looked like this:&lt;/p&gt;
        &lt;div align=&quot;center&quot;&gt;
        &lt;img src=&quot;https://duckdb.org/images/blog/tpch-mobile/hp-itanium-workstation.jpg&quot; alt=&quot;The Itanium2 workstation used in original the experiments&quot; width=&quot;600px&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;
        &lt;div align=&quot;center&quot;&gt;The Itanium2 workstation used in original the experiments (source: &lt;a href=&quot;https://commons.wikimedia.org/wiki/File:HP-HP9000-ZX6000-Itanium2-Workstation_11.jpg&quot;&gt;Wikimedia&lt;/a&gt;)&lt;/div&gt;
        &lt;blockquote&gt;
        &lt;p&gt;Upon its release in 2001, the &lt;a href=&quot;https://en.wikipedia.org/wiki/Itanium&quot;&gt;Itanium&lt;/a&gt; was aimed at the high-end market with the goal of eventually replacing the then-dominant x86 architecture with a new instruction set that focused heavily on &lt;a href=&quot;https://en.wikipedia.org/wiki/Single_instruction,_multiple_data&quot;&gt;SIMD (single instruction, multiple data)&lt;/a&gt;. While this ambition did not work out, the Itanium was the state-of-the-art architecture of its day. Due to the focus on the server market, the Itanium CPUs had a large amount of cache: the &lt;a href=&quot;https://www.intel.com/content/www/us/en/products/sku/27982/intel-itanium-processor-1-30-ghz-3m-cache-400-mhz-fsb/specifications.html&quot;&gt;1.3 GHz Itanium2 model used in the experiments&lt;/a&gt; had 3 MB of L2 cache, while Pentium 4 CPUs released around that time only had 0.5–1 MB.&lt;/p&gt;
        &lt;/blockquote&gt;
        &lt;p&gt;The paper provides a detailed breakdown of the runtimes:&lt;/p&gt;
        &lt;div align=&quot;center&quot;&gt;
        &lt;img src=&quot;https://duckdb.org/images/blog/tpch-mobile/cidr2005-monetdb-x100-results.png&quot; alt=&quot;Benchmark results from the CIDR 2005 paper “MonetDB/X100: Hyper-Pipelining Query Execution”&quot; width=&quot;450px&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;
        &lt;div align=&quot;center&quot;&gt;Benchmark results from the paper “MonetDB/X100: Hyper-Pipelining Query Execution”&lt;/div&gt;
        &lt;p&gt;The total runtime of the TPC-H SF100 queries was 407.9 seconds – hence our baseline for the experiments.
        Here is a video of Hannes presenting the results at the event:&lt;/p&gt;
        &lt;div align=&quot;center&quot;&gt;
        &lt;iframe width=&quot;560&quot; height=&quot;315&quot; src=&quot;https://www.youtube.com/embed/H1N2Jr34jwU?si=7wYychjmxpRWPqcm&amp;amp;start=1617&quot; title=&quot;YouTube video player&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&quot; referrerpolicy=&quot;no-referrer&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;
        &lt;/div&gt;
        &lt;p&gt;And here are all results visualized on a plot:&lt;/p&gt;
        &lt;div align=&quot;center&quot;&gt;
        &lt;img src=&quot;https://duckdb.org/images/blog/tpch-mobile/tpch-mobile-experiment-runtimes.svg&quot; alt=&quot;Plot with the TPC-H SF100 experiment results for MonetDB/X100 and DuckDB&quot; width=&quot;750px&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;
        &lt;div align=&quot;center&quot;&gt;TPC-H SF100 total query runtimes for MonetDB/X100 and DuckDB&lt;/div&gt;
        &lt;h2 id=&quot;conclusion&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/06/duckdb-tpch-sf100-on-mobile.html#conclusion&quot;&gt;Conclusion&lt;/a&gt;
        &lt;/h2&gt;
        &lt;p&gt;It was a long journey from the original vectorized execution paper to running an analytical database on a phone.
        Many key innovations happened that allowed these results, and the big improvement in hardware is just one of them.
        Another crucial component is that compiler optimizations became a lot more sophisticated.
        Thanks to this, while the MonetDB/X100 system needed to use explicit SIMD, DuckDB can rely on the &lt;a href=&quot;https://en.wikipedia.org/wiki/Automatic_vectorization&quot;&gt;auto-vectorization&lt;/a&gt; of our (carefully constructed) loops.&lt;/p&gt;
        &lt;p&gt;All that&#39;s left is to answer questions that we posed at the beginning of our journey.
        Yes, DuckDB can run TPC-H SF100 on a mobile phone.
        And yes, in some cases it can even outperform a research prototype running on a high-end machine of 2004 – on a modern smartphone that fits in your pocket.&lt;/p&gt;
        &lt;p&gt;And with newer hardware, smarter compilers and yet-to-be-discovered database optimizations, future versions are only going to be faster.&lt;/p&gt;
      </description>
      <link>https://duckdb.org/2024/12/06/duckdb-tpch-sf100-on-mobile.html</link>
      <guid isPermaLink="false">https://duckdb.org/2024/12/06/duckdb-tpch-sf100-on-mobile.html</guid>
      <pubDate>Fri, 06 Dec 2024 00:00:00 GMT</pubDate>
      <author>Gabor Szarnyas, Laurens Kuiper, Hannes Mühleisen</author>
    </item>
    <item>
      <title>CSV Files: Dethroning Parquet as the Ultimate Storage File Format — or Not?</title>
      <description>&lt;div class=&quot;excerpt&quot;&gt;
        &lt;p&gt;&lt;em&gt;TL;DR: Data analytics primarily uses two types of storage format files: human-readable text files like CSV and performance-driven binary files like Parquet. This blog post compares these two formats in an ultimate showdown of performance and flexibility, where there can be only one winner.&lt;/em&gt;&lt;/p&gt;
        &lt;/div&gt;
        &lt;h2 id=&quot;file-formats&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/05/csv-files-dethroning-parquet-or-not.html#file-formats&quot;&gt;File Formats&lt;/a&gt;
        &lt;/h2&gt;
        &lt;h3 id=&quot;csv-files&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/05/csv-files-dethroning-parquet-or-not.html#csv-files&quot;&gt;CSV Files&lt;/a&gt;
        &lt;/h3&gt;
        &lt;p&gt;Data is most &lt;a href=&quot;https://www.vldb.org/pvldb/vol17/p3694-saxena.pdf&quot;&gt;commonly stored&lt;/a&gt; in human-readable file formats, like JSON or CSV files. These file formats are easy to operate on, since anyone with a text editor can simply open, alter, and understand them.&lt;/p&gt;
        &lt;p&gt;For many years, CSV files have had a bad reputation for being slow and cumbersome to work with. In practice, if you want to operate on a CSV file using your favorite database system, you must follow this recipe:&lt;/p&gt;
        &lt;ol&gt;
        &lt;li&gt;Manually discover its schema by opening the file in a text editor.&lt;/li&gt;
        &lt;li&gt;Create a table with the given schema.&lt;/li&gt;
        &lt;li&gt;Manually figure out the dialect of the file (e.g., which character is used for a quote?)&lt;/li&gt;
        &lt;li&gt;Load the file into the table using a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;COPY&lt;/code&gt; statement and with the dialect set.&lt;/li&gt;
        &lt;li&gt;Start querying it.&lt;/li&gt;
        &lt;/ol&gt;
        &lt;p&gt;Not only is this process tedious, but parallelizing a CSV file reader is &lt;a href=&quot;https://www.microsoft.com/en-us/research/uploads/prod/2019/04/chunker-sigmod19.pdf&quot;&gt;far from trivial&lt;/a&gt;. This means most systems either process it single-threaded or use a two-pass approach.&lt;/p&gt;
        &lt;p&gt;Additionally, &lt;a href=&quot;https://youtu.be/YrqSp8m7fmk?si=v5rmFWGJtpiU5_PX&amp;amp;t=624&quot;&gt;CSV files are wild&lt;/a&gt;: although &lt;a href=&quot;https://www.ietf.org/rfc/rfc4180.txt&quot;&gt;RFC-4180&lt;/a&gt; exists as a CSV standard, it is &lt;a href=&quot;https://aic.ai.wu.ac.at/~polleres/publications/mitl-etal-2016OBD.pdf&quot;&gt;commonly ignored&lt;/a&gt;. Systems must therefore be sufficiently robust to handle these files as if they come straight from the wild west.&lt;/p&gt;
        &lt;p&gt;Last but not least, CSV files are wasteful: data is always laid out as strings. For example, numeric values like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;1000000000&lt;/code&gt; take 10 bytes instead of 4 bytes if stored as an &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;int32&lt;/code&gt;. Additionally, since the data layout is row-wise, opportunities to apply &lt;a href=&quot;https://duckdb.org/2022/10/28/lightweight-compression.html&quot;&gt;lightweight columnar compression&lt;/a&gt; are lost.&lt;/p&gt;
        &lt;h3 id=&quot;parquet-files&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/05/csv-files-dethroning-parquet-or-not.html#parquet-files&quot;&gt;Parquet Files&lt;/a&gt;
        &lt;/h3&gt;
        &lt;p&gt;Due to these shortcomings, performance-driven file formats like Parquet have gained significant popularity in recent years. Parquet files cannot be opened by general text editors, cannot be easily edited, and have a rigid schema. However, they store data in columns, apply various compression techniques, partition the data into row groups, maintain statistics about these row groups, and define their schema directly in the file.&lt;/p&gt;
        &lt;p&gt;These features make Parquet a monolith of a file format — highly inflexible but efficient and fast. It is easy to read data from a Parquet file since the schema is well-defined. Parallelizing a scanner is straightforward, as each thread can independently process a row group. Filter pushdown is also simple to implement, as each row group contains statistical metadata, and the file sizes are very small.&lt;/p&gt;
        &lt;p&gt;The conclusion should be simple: if you have small files and need flexibility, CSV files are fine. However, for data analysis, one should pivot to Parquet files, right? Well, this pivot may not be a hard requirement anymore – read on to find out why!&lt;/p&gt;
        &lt;h2 id=&quot;reading-csv-files-in-duckdb&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/05/csv-files-dethroning-parquet-or-not.html#reading-csv-files-in-duckdb&quot;&gt;Reading CSV Files in DuckDB&lt;/a&gt;
        &lt;/h2&gt;
        &lt;p&gt;For the past few releases, DuckDB has doubled down on delivering not only an easy-to-use CSV scanner but also an extremely performant one. This scanner features its own custom &lt;a href=&quot;https://duckdb.org/2023/10/27/csv-sniffer.html&quot;&gt;CSV sniffer&lt;/a&gt;, parallelization algorithm, buffer manager, casting mechanisms, and state machine-based parser.&lt;/p&gt;
        &lt;p&gt;For usability, the previous paradigm of manual schema discovery and table creation has been changed. Instead, DuckDB now utilizes a CSV Sniffer, similar to those found in dataframe libraries like Pandas.
        This allows for querying CSV files as easily as:&lt;/p&gt;
        &lt;div class=&quot;language-sql highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&#39;path/to/file.csv&#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
        &lt;p&gt;Or tables to be created from CSV files, without any prior schema definition with:&lt;/p&gt;
        &lt;div class=&quot;language-sql highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;TABLE&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;t&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;AS&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&#39;path/to/file.csv&#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
        &lt;p&gt;Furthermore, the reader became one of the fastest CSV readers in analytical systems, as can be seen by the load times of the &lt;a href=&quot;https://github.com/ClickHouse/ClickBench/commit/0aba4247ce227b3058d22846ca39826d27262fe0&quot;&gt;latest iteration&lt;/a&gt; of &lt;a href=&quot;https://benchmark.clickhouse.com/#eyJzeXN0ZW0iOnsiQWxsb3lEQiI6ZmFsc2UsIkFsbG95REIgKHR1bmVkKSI6ZmFsc2UsIkF0aGVuYSAocGFydGl0aW9uZWQpIjpmYWxzZSwiQXRoZW5hIChzaW5nbGUpIjpmYWxzZSwiQXVyb3JhIGZvciBNeVNRTCI6ZmFsc2UsIkF1cm9yYSBmb3IgUG9zdGdyZVNRTCI6ZmFsc2UsIkJ5Q29uaXR5IjpmYWxzZSwiQnl0ZUhvdXNlIjpmYWxzZSwiY2hEQiAoRGF0YUZyYW1lKSI6ZmFsc2UsImNoREIgKFBhcnF1ZXQsIHBhcnRpdGlvbmVkKSI6ZmFsc2UsImNoREIiOmZhbHNlLCJDaXR1cyI6ZmFsc2UsIkNsaWNrSG91c2UgQ2xvdWQgKGF3cykiOmZhbHNlLCJDbGlja0hvdXNlIENsb3VkIChhenVyZSkiOmZhbHNlLCJDbGlja0hvdXNlIENsb3VkIChnY3ApIjpmYWxzZSwiQ2xpY2tIb3VzZSAoZGF0YSBsYWtlLCBwYXJ0aXRpb25lZCkiOmZhbHNlLCJDbGlja0hvdXNlIChkYXRhIGxha2UsIHNpbmdsZSkiOmZhbHNlLCJDbGlja0hvdXNlIChQYXJxdWV0LCBwYXJ0aXRpb25lZCkiOmZhbHNlLCJDbGlja0hvdXNlIChQYXJxdWV0LCBzaW5nbGUpIjpmYWxzZSwiQ2xpY2tIb3VzZSAod2ViKSI6ZmFsc2UsIkNsaWNrSG91c2UiOnRydWUsIkNsaWNrSG91c2UgKHR1bmVkKSI6dHJ1ZSwiQ2xpY2tIb3VzZSAodHVuZWQsIG1lbW9yeSkiOnRydWUsIkNsb3VkYmVycnkiOmZhbHNlLCJDcmF0ZURCIjpmYWxzZSwiQ3J1bmNoeSBCcmlkZ2UgZm9yIEFuYWx5dGljcyAoUGFycXVldCkiOmZhbHNlLCJEYXRhYmVuZCI6dHJ1ZSwiRGF0YUZ1c2lvbiAoUGFycXVldCwgcGFydGl0aW9uZWQpIjpmYWxzZSwiRGF0YUZ1c2lvbiAoUGFycXVldCwgc2luZ2xlKSI6ZmFsc2UsIkFwYWNoZSBEb3JpcyI6ZmFsc2UsIkRyaWxsIjpmYWxzZSwiRHJ1aWQiOmZhbHNlLCJEdWNrREIgKERhdGFGcmFtZSkiOmZhbHNlLCJEdWNrREIgKG1lbW9yeSkiOnRydWUsIkR1Y2tEQiAoUGFycXVldCwgcGFydGl0aW9uZWQpIjpmYWxzZSwiRHVja0RCIjpmYWxzZSwiRWxhc3RpY3NlYXJjaCI6ZmFsc2UsIkVsYXN0aWNzZWFyY2ggKHR1bmVkKSI6ZmFsc2UsIkdsYXJlREIiOmZhbHNlLCJHcmVlbnBsdW0iOmZhbHNlLCJIZWF2eUFJIjpmYWxzZSwiSHlkcmEiOmZhbHNlLCJJbmZvYnJpZ2h0IjpmYWxzZSwiS2luZXRpY2EiOmZhbHNlLCJNYXJpYURCIENvbHVtblN0b3JlIjpmYWxzZSwiTWFyaWFEQiI6ZmFsc2UsIk1vbmV0REIiOmZhbHNlLCJNb25nb0RCIjpmYWxzZSwiTW90aGVyRHVjayI6ZmFsc2UsIk15U1FMIChNeUlTQU0pIjpmYWxzZSwiTXlTUUwiOmZhbHNlLCJPY3RvU1FMIjpmYWxzZSwiT3hsYSI6ZmFsc2UsIlBhbmRhcyAoRGF0YUZyYW1lKSI6ZmFsc2UsIlBhcmFkZURCIChQYXJxdWV0LCBwYXJ0aXRpb25lZCkiOmZhbHNlLCJQYXJhZGVEQiAoUGFycXVldCwgc2luZ2xlKSI6ZmFsc2UsInBnX2R1Y2tkYiAoTW90aGVyRHVjayBlbmFibGVkKSI6ZmFsc2UsInBnX2R1Y2tkYiI6ZmFsc2UsIlBpbm90IjpmYWxzZSwiUG9sYXJzIChEYXRhRnJhbWUpIjpmYWxzZSwiUG9sYXJzIChQYXJxdWV0KSI6ZmFsc2UsIlBvc3RncmVTUUwgKHR1bmVkKSI6ZmFsc2UsIlBvc3RncmVTUUwiOmZhbHNlLCJRdWVzdERCIjp0cnVlLCJSZWRzaGlmdCI6ZmFsc2UsIlNlbGVjdERCIjpmYWxzZSwiU2luZ2xlU3RvcmUiOmZhbHNlLCJTbm93Zmxha2UiOmZhbHNlLCJTcGFyayI6ZmFsc2UsIlNRTGl0ZSI6ZmFsc2UsIlN0YXJSb2NrcyI6ZmFsc2UsIlRhYmxlc3BhY2UiOmZhbHNlLCJUZW1ibyBPTEFQIChjb2x1bW5hcikiOmZhbHNlLCJUaW1lc2NhbGUgQ2xvdWQiOmZhbHNlLCJUaW1lc2NhbGVEQiAobm8gY29sdW1uc3RvcmUpIjpmYWxzZSwiVGltZXNjYWxlREIiOmZhbHNlLCJUaW55YmlyZCAoRnJlZSBUcmlhbCkiOmZhbHNlLCJVbWJyYSI6dHJ1ZX0sInR5cGUiOnsiQyI6dHJ1ZSwiY29sdW1uLW9yaWVudGVkIjp0cnVlLCJQb3N0Z3JlU1FMIGNvbXBhdGlibGUiOnRydWUsIm1hbmFnZWQiOnRydWUsImdjcCI6dHJ1ZSwic3RhdGVsZXNzIjp0cnVlLCJKYXZhIjp0cnVlLCJDKysiOnRydWUsIk15U1FMIGNvbXBhdGlibGUiOnRydWUsInJvdy1vcmllbnRlZCI6dHJ1ZSwiQ2xpY2tIb3VzZSBkZXJpdmF0aXZlIjp0cnVlLCJlbWJlZGRlZCI6dHJ1ZSwic2VydmVybGVzcyI6dHJ1ZSwiZGF0YWZyYW1lIjp0cnVlLCJhd3MiOnRydWUsImF6dXJlIjp0cnVlLCJhbmFseXRpY2FsIjp0cnVlLCJSdXN0Ijp0cnVlLCJzZWFyY2giOnRydWUsImRvY3VtZW50Ijp0cnVlLCJHbyI6dHJ1ZSwic29tZXdoYXQgUG9zdGdyZVNRTCBjb21wYXRpYmxlIjp0cnVlLCJEYXRhRnJhbWUiOnRydWUsInBhcnF1ZXQiOnRydWUsInRpbWUtc2VyaWVzIjp0cnVlfSwibWFjaGluZSI6eyIxNiB2Q1BVIDEyOEdCIjpmYWxzZSwiOCB2Q1BVIDY0R0IiOmZhbHNlLCJzZXJ2ZXJsZXNzIjpmYWxzZSwiMTZhY3UiOmZhbHNlLCJjNmEuNHhsYXJnZSwgNTAwZ2IgZ3AyIjpmYWxzZSwiTCI6ZmFsc2UsIk0iOmZhbHNlLCJTIjpmYWxzZSwiWFMiOmZhbHNlLCJjNmEubWV0YWwsIDUwMGdiIGdwMiI6dHJ1ZSwiMTkyR0IiOmZhbHNlLCIyNEdCIjpmYWxzZSwiMzYwR0IiOmZhbHNlLCI0OEdCIjpmYWxzZSwiNzIwR0IiOmZhbHNlLCI5NkdCIjpmYWxzZSwiZGV2IjpmYWxzZSwiNzA4R0IiOmZhbHNlLCJjNW4uNHhsYXJnZSwgNTAwZ2IgZ3AyIjpmYWxzZSwiQW5hbHl0aWNzLTI1NkdCICg2NCB2Q29yZXMsIDI1NiBHQikiOmZhbHNlLCJjNS40eGxhcmdlLCA1MDBnYiBncDIiOmZhbHNlLCJjNmEuNHhsYXJnZSwgMTUwMGdiIGdwMiI6ZmFsc2UsImNsb3VkIjpmYWxzZSwiZGMyLjh4bGFyZ2UiOmZhbHNlLCJyYTMuMTZ4bGFyZ2UiOmZhbHNlLCJyYTMuNHhsYXJnZSI6ZmFsc2UsInJhMy54bHBsdXMiOmZhbHNlLCJTMiI6ZmFsc2UsIlMyNCI6ZmFsc2UsIjJYTCI6ZmFsc2UsIjNYTCI6ZmFsc2UsIjRYTCI6ZmFsc2UsIlhMIjpmYWxzZSwiTDEgLSAxNkNQVSAzMkdCIjpmYWxzZSwiYzZhLjR4bGFyZ2UsIDUwMGdiIGdwMyI6ZmFsc2UsIjE2IHZDUFUgNjRHQiI6ZmFsc2UsIjQgdkNQVSAxNkdCIjpmYWxzZSwiOCB2Q1BVIDMyR0IiOmZhbHNlfSwiY2x1c3Rlcl9zaXplIjp7IjEiOnRydWUsIjIiOmZhbHNlLCI0IjpmYWxzZSwiOCI6ZmFsc2UsIjE2IjpmYWxzZSwiMzIiOmZhbHNlLCI2NCI6ZmFsc2UsIjEyOCI6ZmFsc2UsInNlcnZlcmxlc3MiOmZhbHNlLCJ1bmRlZmluZWQiOmZhbHNlfSwibWV0cmljIjoibG9hZCIsInF1ZXJpZXMiOlt0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlXX0=&quot;&gt;ClickBench&lt;/a&gt;. In this benchmark, the data is loaded from an &lt;a href=&quot;https://datasets.clickhouse.com/hits_compatible/hits.csv.gz&quot;&gt;82 GB uncompressed CSV file&lt;/a&gt; into a database table.&lt;/p&gt;
        &lt;div align=&quot;center&quot;&gt;
        &lt;img src=&quot;https://duckdb.org/images/blog/csv-vs-parquet-clickbench.png&quot; alt=&quot;Image showing the ClickBench result 2024-12-05&quot; width=&quot;800px&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;
        &lt;div align=&quot;center&quot;&gt;ClickBench CSV loading times (2024-12-05)&lt;/div&gt;
        &lt;h2 id=&quot;comparing-csv-and-parquet&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/05/csv-files-dethroning-parquet-or-not.html#comparing-csv-and-parquet&quot;&gt;Comparing CSV and Parquet&lt;/a&gt;
        &lt;/h2&gt;
        &lt;p&gt;With the large boost in usability and performance for the CSV reader, one might ask: what is the actual difference in performance when loading a CSV file compared to a Parquet file into a table? Additionally, how do these formats differ when running queries directly on them?&lt;/p&gt;
        &lt;p&gt;To find out, we will run a few examples using both CSV and Parquet files containing TPC-H data to shed light on their differences. All scripts used to generate the benchmarks of this blogpost can be found in a &lt;a href=&quot;https://github.com/pdet/csv_vs_parquet&quot;&gt;repository&lt;/a&gt;.&lt;/p&gt;
        &lt;h3 id=&quot;usability&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/05/csv-files-dethroning-parquet-or-not.html#usability&quot;&gt;Usability&lt;/a&gt;
        &lt;/h3&gt;
        &lt;p&gt;In terms of usability, scanning CSV files and Parquet files can differ significantly.&lt;/p&gt;
        &lt;p&gt;In simple cases, where all options are correctly detected by DuckDB, running queries on either CSV or Parquet files can be done directly.&lt;/p&gt;
        &lt;div class=&quot;language-sql highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&#39;path/to/file.csv&#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&#39;path/to/file.parquet&#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
        &lt;p&gt;Things can differ drastically for wild, rule-breaking &lt;a href=&quot;https://reddead.fandom.com/wiki/Arthur_Morgan&quot;&gt;Arthur Morgan&lt;/a&gt;-like CSV files. This is evident from the number of parameters that can be set for each scanner. The &lt;a href=&quot;https://duckdb.org/docs/data/parquet/overview.html&quot;&gt;Parquet&lt;/a&gt; scanner has a total of six parameters that can alter how the file is read. For the majority of cases, the user will never need to manually adjust any of them.&lt;/p&gt;
        &lt;p&gt;The CSV reader, on the other hand, depends on the sniffer being able to automatically detect many different configuration options. For example: What is the delimiter? How many rows should it skip from the top of the file? Are there any comments? And so on. This results in over &lt;a href=&quot;https://duckdb.org/docs/data/csv/overview.html&quot;&gt;30 configuration options&lt;/a&gt; that the user might have to manually adjust to properly parse their CSV file. Again, this number of options is necessary due to the lack of a widely adopted standard. However, in most scenarios, users can rely on the sniffer or, at most, change one or two options.&lt;/p&gt;
        &lt;p&gt;The CSV reader also has an extensive error-handling system and will always provide suggestions for options to review if something goes wrong.&lt;/p&gt;
        &lt;p&gt;To give you an example of how the DuckDB error-reporting system works, consider the following CSV file:&lt;/p&gt;
        &lt;pre&gt;&lt;code class=&quot;language-csv&quot;&gt;Clint Eastwood;94
        Samuel L. Jackson
        &lt;/code&gt;&lt;/pre&gt;
        &lt;p&gt;In this file, the second line is missing the value for the second column.&lt;/p&gt;
        &lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;Invalid Input Error: CSV Error on Line: 2
        Original Line: Samuel L. Jackson
        Expected Number of Columns: 2 Found: 1
        Possible fixes:
        * Enable null padding (null_padding=true) to replace missing values with NULL
        * Enable ignore errors (ignore_errors=true) to skip this row
        file = western_actors.csv
        delimiter = , (Auto-Detected)
        quote = &quot; (Auto-Detected)
        escape = &quot; (Auto-Detected)
        new_line = \n (Auto-Detected)
        header = false (Auto-Detected)
        skip_rows = 0 (Auto-Detected)
        comment = \0 (Auto-Detected)
        date_format = (Auto-Detected)
        timestamp_format = (Auto-Detected)
        null_padding = 0
        sample_size = 20480
        ignore_errors = false
        all_varchar = 0
        &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
        &lt;p&gt;DuckDB provides detailed information about any errors encountered. It highlights the line of the CSV file where the issue occurred, presents the original line, and suggests possible fixes for the error, such as ignoring the problematic line or filling missing values with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;NULL&lt;/code&gt;. It also displays the full configuration used to scan the file and indicates whether the options were auto-detected or manually set.&lt;/p&gt;
        &lt;p&gt;The bottom line here is that, even with the advancements in CSV usage, the strictness of Parquet files make them much easier to operate on.&lt;/p&gt;
        &lt;p&gt;Of course, if you need to open your file in a text editor or Excel, you will need to have your data in CSV format. Note that Parquet files do have some visualizers, like &lt;a href=&quot;https://www.tadviewer.com/&quot;&gt;TAD&lt;/a&gt;.&lt;/p&gt;
        &lt;h3 id=&quot;performance&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/05/csv-files-dethroning-parquet-or-not.html#performance&quot;&gt;Performance&lt;/a&gt;
        &lt;/h3&gt;
        &lt;p&gt;There are primarily two ways to operate on files using DuckDB:&lt;/p&gt;
        &lt;ol&gt;
        &lt;li&gt;
        &lt;p&gt;The user creates a DuckDB table from the file and uses the table in future queries. This is a loading process, commonly used if you want to store your data as DuckDB tables or if you will run many queries on them. Also, note that these are the only possible scenarios for most database systems (e.g., Oracle, SQL Server, PostgreSQL, SQLite, …).&lt;/p&gt;
        &lt;/li&gt;
        &lt;li&gt;
        &lt;p&gt;One might run a query directly on the file scanner without creating a table. This is useful for scenarios where the user has limitations on memory and disk space, or if queries on these files are only executed once. Note that this scenario is typically not supported by database systems but is common for dataframe libraries (e.g., Pandas).&lt;/p&gt;
        &lt;/li&gt;
        &lt;/ol&gt;
        &lt;p&gt;To fairly compare the scanners, we provide the table schemas upfront, ensuring that the scanners produce the exact same data types. We also set &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;preserve_insertion_order = false&lt;/code&gt;, as this can impact the parallelization of both scanners, and set &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;max_temp_directory_size = &#39;0GB&#39;&lt;/code&gt; to ensure no data is spilled to disk, with all experiments running fully in memory.&lt;/p&gt;
        &lt;p&gt;We use the default writers for both CSV files and Parquet (with the default Snappy compression), and also run a variation of Parquet with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;CODEC &#39;zstd&#39;, COMPRESSION_LEVEL 1&lt;/code&gt;, as this can speed up querying/loading times.&lt;/p&gt;
        &lt;p&gt;For all experiments, we use an Apple M1 Max, with 64 GB RAM. We use TPC-H scale factor 20 and report the median times from 5 runs.&lt;/p&gt;
        &lt;h4 id=&quot;creating-tables&quot;&gt;
        &lt;a style=&quot;text-decoration: none;&quot; href=&quot;https://duckdb.org/2024/12/05/csv-files-dethroning-parquet-or-not.html#creating-tables&quot;&gt;Creating Tables&lt;/a&gt;
        &lt;/h4&gt;
        &lt;p&gt;For creating the table, we focus on the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;lineitem&lt;/code&gt; table.&lt;/p&gt;
        &lt;p&gt;After defining the schema, both files can be loaded with a simple &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;COPY&lt;/code&gt; statement, with no additional parameters set. Note that even with the schema defined, the CSV sniffer will still be executed to determine the dialect (e.g., quote character, delimiter character, etc.) and match types and names.&lt;/p&gt;
        &lt;table&gt;
        &lt;thead&gt;
        &lt;tr&gt;
        &lt;th&gt;Name&lt;/th&gt;
        &lt;th style=&quot;text-align: right&quot;&gt;Time (s)&lt;/th&gt;
        &lt;th style=&quot;text-align: right&quot;&gt;Size (GB)&lt;/th&gt;
        &lt;/tr&gt;
        &lt;/thead&gt;
        &lt;tbody&gt;
        &lt;tr&gt;
        &lt;td&gt;CSV&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;11.76&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;15.95&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
        &lt;td&gt;Parquet Snappy&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;5.21&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;3.78&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
        &lt;td&gt;Parquet ZSTD&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;5.52&lt;/td&gt;
        &lt;td style=&quot;text-align: right&quot;&gt;3.22&lt;/td&gt;
        &lt;/tr&gt;
        &lt;/tbody&gt;
        &lt;/table&gt;
        &lt;p&gt;We can see that the Parquet files are definitely smaller. About 5× smaller than the CSV file, but the performance difference is not drastic.&lt;/p&gt;
        &lt;p&gt;The CSV scanner is only about 2× slower than the Parquet scanner. It&#39;s also important to note that some of the cost associated with these operations (~1-2 seconds) is related to the insertion into the DuckDB table, not the sc

@TonyRL TonyRL merged commit e764739 into DIYgod:master Dec 10, 2024
31 checks passed
@mocusez mocusez deleted the duckdb-new branch December 11, 2024 01:05
artefaritaKuniklo pushed a commit to artefaritaKuniklo/RSSHub that referenced this pull request Dec 13, 2024
* fix(route/duckdb): change blogs link and author

* fix(route/duckdb): update description selector

---------
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Labels
Auto: Route Test Complete Auto route test has finished on given PR Route
Projects
None yet
Development

Successfully merging this pull request may close these issues.

2 participants