Spec: Add v3 types and type promotion (apache#10955)

rdblue authored Oct 9, 2024
1 parent 208ab20 commit 67dc9e5

Showing 1 changed file with 42 additions and 7 deletions: format/spec.md
@@ -48,7 +48,7 @@ In addition to row-level deletes, version 2 makes some requirements stricter for

Version 3 of the Iceberg spec extends data types and existing metadata structures to add new capabilities:

-* New data types: nanosecond timestamp(tz)
+* New data types: nanosecond timestamp(tz), unknown
* Default value support for columns
* Multi-argument transforms for partitioning and sorting

@@ -184,6 +184,7 @@ Supported primitive types are defined in the table below. Primitive types added

| Added by version | Primitive type | Description | Requirements |
|------------------|--------------------|--------------------------------------------------------------------------|--------------------------------------------------|
| [v3](#version-3) | **`unknown`** | Default / null column type used when a more specific type is not known | Must be optional with `null` defaults; not stored in data files |
| | **`boolean`** | True or false | |
| | **`int`** | 32-bit signed integers | Can promote to `long` |
| | **`long`** | 64-bit signed integers | |
@@ -221,6 +222,8 @@ The `initial-default` is set only when a field is added to an existing schema. T

The `initial-default` and `write-default` produce SQL default value behavior without rewriting data files. When a field is added, SQL default value behavior treats all existing rows as though they were written with the new field's default value. Default value changes may only affect future records, and all known fields are written into data files; omitting a known field when writing a data file is never allowed. If a field is not supplied to a write, its write default must be written. If the write default for a required field is not set, the writer must fail.

All columns of `unknown` type must default to null. Non-null values for `initial-default` or `write-default` are invalid.

Default values are attributes of fields in schemas and serialized with fields in the JSON format. See [Appendix C](#appendix-c-json-serialization).
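
The defaulting rules above can be summarized in a short sketch. This is a minimal illustration with hypothetical names, not PyIceberg's or the reference implementation's API:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Field:
    name: str
    type: str                        # e.g. "int", "unknown"
    required: bool = False
    write_default: Optional[Any] = None

def value_to_write(field: Field, supplied: dict[str, Any]) -> Any:
    """Resolve the value a writer must store for a known field."""
    if field.name in supplied:
        return supplied[field.name]
    if field.type == "unknown":
        return None                  # non-null defaults are invalid for unknown
    if field.write_default is not None:
        return field.write_default   # the write default fills in omitted fields
    if field.required:
        # a required field with no value and no write-default must fail the write
        raise ValueError(f"missing required field: {field.name}")
    return None                      # optional field with no default
```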


@@ -230,11 +233,32 @@ Schemas may be evolved by type promotion or adding, deleting, renaming, or reord

Evolution applies changes to the table's current schema to produce a new schema that is identified by a unique schema ID, is added to the table's list of schemas, and is set as the table's current schema.

-Valid type promotions are:
-
-* `int` to `long`
-* `float` to `double`
-* `decimal(P, S)` to `decimal(P', S)` if `P' > P` -- widen the precision of decimal types.
Valid primitive type promotions are:

| Primitive type | v1, v2 valid type promotions | v3+ valid type promotions | Requirements |
|------------------|------------------------------|------------------------------|--------------|
| `unknown` | | _any type_ | |
| `int` | `long` | `long` | |
| `date` | | `timestamp`, `timestamp_ns` | Promotion to `timestamptz` or `timestamptz_ns` is **not** allowed; values outside the promoted type's range must result in a runtime failure |
| `float` | `double` | `double` | |
| `decimal(P, S)` | `decimal(P', S)` if `P' > P` | `decimal(P', S)` if `P' > P` | Widen precision only |
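
A sketch of this table as a version-aware check may help; type names follow this spec, and the helper below is illustrative rather than part of any implementation:

```python
# Promotions that are only valid in v3+ are filtered out for older versions.
V3_PROMOTIONS = {
    "int": {"long"},
    "date": {"timestamp", "timestamp_ns"},    # never timestamptz / timestamptz_ns
    "float": {"double"},
}

def is_valid_promotion(src: str, dst: str, format_version: int) -> bool:
    if src == dst:
        return True
    if format_version >= 3 and src == "unknown":
        return True                             # unknown promotes to any type
    if src.startswith("decimal(") and dst.startswith("decimal("):
        (p1, s1), (p2, s2) = _decimal_params(src), _decimal_params(dst)
        return s1 == s2 and p2 > p1             # widen precision only
    allowed = V3_PROMOTIONS.get(src, set())
    if format_version < 3:
        allowed = allowed & {"long", "double"}  # v1/v2: int->long, float->double
    return dst in allowed

def _decimal_params(t: str) -> tuple[int, int]:
    p, s = t[len("decimal("):-1].split(",")
    return int(p), int(s)
```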

Iceberg's Avro manifest format does not store the type of lower and upper bounds, and type promotion does not rewrite existing bounds. For example, when a `float` is promoted to `double`, existing data file bounds remain encoded as 4 little-endian bytes rather than the 8 little-endian bytes used for `double`. To decode a bound correctly, the type at the time the file was written must be inferred according to the following table:

| Current type | Length of bounds | Inferred type at write time |
|------------------|------------------|-----------------------------|
| `long` | 4 bytes | `int` |
| `long` | 8 bytes | `long` |
| `double` | 4 bytes | `float` |
| `double` | 8 bytes | `double` |
| `timestamp` | 4 bytes | `date` |
| `timestamp` | 8 bytes | `timestamp` |
| `timestamp_ns` | 4 bytes | `date` |
| `timestamp_ns` | 8 bytes | `timestamp_ns` |
| `decimal(P, S)` | _any_ | `decimal(P', S)`; `P' <= P` |
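
For illustration, a reader applying the inference table above might decode a stored bound as follows (binary forms per Appendix D, little-endian; this is a sketch, not the reference implementation):

```python
import struct

def decode_bound(current_type: str, buf: bytes):
    """Decode a lower/upper bound, inferring the write-time type from its length."""
    n = len(buf)
    if current_type == "long":
        return struct.unpack("<i" if n == 4 else "<q", buf)[0]  # 4 bytes: written as int
    if current_type == "double":
        return struct.unpack("<f" if n == 4 else "<d", buf)[0]  # 4 bytes: written as float
    if current_type in ("timestamp", "timestamp_ns"):
        if n == 4:
            return struct.unpack("<i", buf)[0]  # written as date: days from 1970-01-01
        return struct.unpack("<q", buf)[0]      # micros or nanos from 1970-01-01
    raise ValueError(f"no inference rule for {current_type} with {n}-byte bound")
```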

Type promotion is not allowed for a field that is referenced by `source-id` or `source-ids` of a partition field if the partition transform would produce a different value after promoting the type. For example, `bucket[N]` produces different hash values for `34` and `"34"` (2017239379 != -427558391) but the same value for `34` and `34L`; when an `int` field is the source for a bucket partition field, it may be promoted to `long` but not to `string`. This may happen for the following type promotion cases:
* `date` to `timestamp` or `timestamp_ns`
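
The hash values quoted above can be reproduced with murmur3_x86_32 (seed 0) as specified in Appendix B: int and long values hash identically as 8-byte little-endian longs, while strings hash their UTF-8 bytes, which is why `int` to `long` is safe for bucket partitioning but `int` to `string` would not be. A small sketch using the third-party `mmh3` package, with the spec's own test values:

```python
import struct
import mmh3  # pip install mmh3; murmur3_x86_32 with seed 0

def hash_long(v: int) -> int:
    return mmh3.hash(struct.pack("<q", v))   # int values are hashed as longs

def hash_string(s: str) -> int:
    return mmh3.hash(s.encode("utf-8"))

assert hash_long(34) == 2017239379           # same result for int 34 and long 34L
assert hash_string("34") == -427558391       # different: promotion would move rows
```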

Any struct, including a top-level schema, can evolve through deleting fields, adding new fields, renaming existing fields, reordering existing fields, or promoting a primitive using the valid type promotions. Adding a new field assigns a new ID for that field and for any nested fields. Renaming an existing field must change the name, but not the field ID. Deleting a field removes it from the current schema. Field deletion cannot be rolled back unless the field was nullable or the current snapshot has not changed.

@@ -949,6 +973,7 @@ Maps with non-string keys must use an array representation with the `map` logica

|Type|Avro type|Notes|
|--- |--- |--- |
|**`unknown`**|`null` or omitted||
|**`boolean`**|`boolean`||
|**`int`**|`int`||
|**`long`**|`long`||
@@ -1002,6 +1027,7 @@ Lists must use the [3-level representation](https://github.com/apache/parquet-fo

| Type | Parquet physical type | Logical type | Notes |
|--------------------|--------------------------------------------------------------------|---------------------------------------------|----------------------------------------------------------------|
| **`unknown`** | None | | Omit from data files |
| **`boolean`** | `boolean` | | |
| **`int`** | `int` | | |
| **`long`** | `long` | | |
@@ -1023,12 +1049,16 @@ Lists must use the [3-level representation](https://github.com/apache/parquet-fo
| **`map`** | `3-level map` | `MAP` | See Parquet docs for 3-level representation. |


When reading a column of `unknown` type, any corresponding column in the data file must be ignored and its values replaced with `null`.
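
As a sketch of this rule (illustrative names only, not a real reader API), a reader's projection step produces nulls for `unknown` columns regardless of file contents:

```python
def read_column(field_type: str, file_column: list, num_rows: int) -> list:
    if field_type == "unknown":
        return [None] * num_rows  # ignore whatever the file stores
    return file_column
```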


### ORC

**Data Type Mappings**

| Type | ORC type | ORC type attributes | Notes |
|--------------------|---------------------|------------------------------------------------------|-----------------------------------------------------------------------------------------|
| **`unknown`** | None | | Omit from data files |
| **`boolean`** | `boolean` | | |
| **`int`** | `int` | | ORC `tinyint` and `smallint` would also map to **`int`**. |
| **`long`** | `long` | | |
Expand Down Expand Up @@ -1089,6 +1119,7 @@ The types below are not currently valid for bucketing, and so are not hashed. Ho

| Primitive type | Hash specification | Test value |
|--------------------|-------------------------------------------|--------------------------------------------|
| **`unknown`** | always `null` | |
| **`boolean`**      | `false: hashInt(0)`, `true: hashInt(1)`   | `true` → `1392991556` |
| **`float`**        | `hashLong(doubleToLongBits(double(v)))` [5] | `1.0F` → `-142385009`, `0.0F` → `1669671676`, `-0.0F` → `1669671676` |
| **`double`**       | `hashLong(doubleToLongBits(v))` [5]       | `1.0D` → `-142385009`, `0.0D` → `1669671676`, `-0.0D` → `1669671676` |
@@ -1119,6 +1150,7 @@ Types are serialized according to this table:

|Type|JSON representation|Example|
|--- |--- |--- |
|**`unknown`**|`JSON string: "unknown"`|`"unknown"`|
|**`boolean`**|`JSON string: "boolean"`|`"boolean"`|
|**`int`**|`JSON string: "int"`|`"int"`|
|**`long`**|`JSON string: "long"`|`"long"`|
@@ -1267,6 +1299,7 @@ This serialization scheme is for storing single values as individual binary valu

| Type | Binary serialization |
|------------------------------|--------------------------------------------------------------------------------------------------------------|
| **`unknown`** | Not supported |
| **`boolean`** | `0x00` for false, non-zero byte for true |
| **`int`** | Stored as 4-byte little-endian |
| **`long`** | Stored as 8-byte little-endian |
@@ -1319,10 +1352,11 @@ This serialization scheme is for storing single values as individual binary valu
### Version 3

Default values are added to struct fields in v3.

* The `write-default` is a forward-compatible change because it is only used at write time. Old writers will fail because the field is missing.
* Tables with `initial-default` will be read correctly by older readers if `initial-default` is always null for optional fields. Otherwise, old readers will default optional columns with null. Old readers will fail to read required fields which are populated by `initial-default` because that default is not supported.

-Types `timestamp_ns` and `timestamptz_ns` are added in v3.
+Types `unknown`, `timestamp_ns`, and `timestamptz_ns` are added in v3.

All readers are required to read tables with unknown partition transforms, ignoring the unsupported partition fields when filtering.

@@ -1423,3 +1457,4 @@ Iceberg supports two types of histories for tables. A history of previous "curre
might indicate different snapshot IDs for a specific timestamp. The discrepancies can be caused by a variety of table operations (e.g. updating the `current-snapshot-id` can be used to set the snapshot of a table to any arbitrary snapshot, which might have a lineage derived from a table branch or no lineage at all).

When processing point-in-time queries, implementations should use "snapshot-log" metadata to look up the table state at the given point in time. This ensures time-travel queries reflect the state of the table at the provided timestamp. For example, a SQL query like `SELECT * FROM prod.db.table TIMESTAMP AS OF '1986-10-26 01:21:00Z';` would find the snapshot of the Iceberg table just prior to '1986-10-26 01:21:00 UTC' in the snapshot logs and use the metadata from that snapshot to perform the scan of the table. If no snapshot exists prior to the given timestamp, or "snapshot-log" is not populated (it is an optional field), then systems should raise an informative error message about the missing metadata.
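
A sketch of this lookup over "snapshot-log" entries (which carry `timestamp-ms` and `snapshot-id` fields); treating `AS OF` as inclusive of an exact-match timestamp is an assumption here:

```python
def snapshot_as_of(snapshot_log: list[dict], timestamp_ms: int) -> int:
    """Return the snapshot-id of the last snapshot at or before timestamp_ms."""
    candidates = [e for e in snapshot_log if e["timestamp-ms"] <= timestamp_ms]
    if not candidates:
        raise ValueError("no snapshot exists at or before the given timestamp "
                         "(or snapshot-log is not populated)")
    return max(candidates, key=lambda e: e["timestamp-ms"])["snapshot-id"]
```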
